Web Scraping Reference: A Simple Cheat Sheet for Web Scraping with Python
October 24, 2018

Web scraping (also known as data extraction, web harvesting, or screen scraping) is a way of extracting large amounts of data from one or more websites and saving it into a local file on your PC, in a database, or in a format like CSV, XML or JSON.

Although regular expressions are great for pattern matching in general, sometimes it's easier to use an HTML parser that's explicitly designed for parsing HTML pages. There are many Python tools written for this purpose, but the Beautiful Soup library is a good one to start with.
Once you’ve put together enough web scrapers, you start to feel like you can do it in your sleep. I’ve probably built hundreds of scrapers over the years for my own projects, as well as for clients and students in my web scraping course.
Occasionally though, I find myself referencing documentation or re-reading old code looking for snippets I can reuse. One of the students in my course suggested I put together a “cheat sheet” of commonly used code snippets and patterns for easy reference.
I decided to publish it publicly as well – as an organized set of easy-to-reference notes – in case they’re helpful to others.
While it’s written primarily for people who are new to programming, I also hope that it’ll be helpful to those who already have a background in software or python, but who are looking to learn some web scraping fundamentals and concepts.
Table of Contents:
- Extracting Content from HTML
- Storing Your Data
- More Advanced Topics
Useful Libraries
For the most part, a scraping program deals with making HTTP requests and parsing HTML responses.
I always make sure I have `requests` and `BeautifulSoup` installed before I begin a new scraping project. From the command line:
Then, at the top of your `.py` file, make sure you've imported these libraries correctly.
Making Simple Requests
Make a simple GET request (just fetching a page)
Make a POST request (usually used when sending information to the server, like submitting a form)
Pass query arguments aka URL parameters (usually used when making a search query or paging through results)
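These three request types can be sketched with the `requests` library; the httpbin.org URLs below are a public request-testing service standing in for a real target site:

```python
import requests

# Simple GET request (just fetching a page)
r = requests.get("https://httpbin.org/get")

# POST request (e.g. submitting a form)
r = requests.post("https://httpbin.org/post",
                  data={"username": "me", "password": "secret"})

# GET request with URL parameters, i.e. https://httpbin.org/get?q=web+scraping&page=2
r = requests.get("https://httpbin.org/get",
                 params={"q": "web scraping", "page": 2})
```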
Inspecting the Response
See what response code the server sent back (useful for detecting 4XX or 5XX errors)
Access the full response as text (get the HTML of the page in a big string)
Look for a specific substring of text within the response
Check the response’s Content Type (see if you got back HTML, JSON, XML, etc)
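A sketch of these four checks, using example.com as a stand-in page:

```python
import requests

r = requests.get("https://example.com/")

print(r.status_code)               # 200 on success; 4XX/5XX indicate errors
html = r.text                      # the full response body as one big string
print("Example Domain" in r.text)  # look for a specific substring
print(r.headers["Content-Type"])   # tells you if you got HTML, JSON, XML, etc
```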
Extracting Content from HTML
Now that you’ve made your HTTP request and gotten some HTML content, it’s time to parse it so that you can extract the values you’re looking for.
Using Regular Expressions
Using Regular Expressions to look for HTML patterns is famously NOT recommended at all.
However, regular expressions are still useful for finding specific string patterns like prices, email addresses or phone numbers.
Run a regular expression on the response text to look for specific string patterns:
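For instance, a sketch of pulling phone numbers, email addresses and prices out of a snippet of HTML (these patterns are deliberately simple and won't cover every real-world format):

```python
import re

html = "<p>Call 555-123-4567 or email sales@example.com for prices from $19.99</p>"

phones = re.findall(r"\d{3}-\d{3}-\d{4}", html)       # US-style phone numbers
emails = re.findall(r"[\w.+-]+@[\w-]+\.\w+", html)    # simple email pattern
prices = re.findall(r"\$\d+(?:\.\d{2})?", html)       # dollar amounts
```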
Using BeautifulSoup
BeautifulSoup is widely used due to its simple API and its powerful extraction capabilities. It has many different parser options that allow it to understand even the most poorly written HTML pages – and the default one works great.
Compared to libraries that offer similar functionality, it's a pleasure to use. To get started, you'll have to turn the HTML text that you got in the response into a nested, DOM-like structure that you can traverse and search.
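A minimal sketch, using the built-in `html.parser` backend:

```python
from bs4 import BeautifulSoup

html = "<html><body><h1>Hello, World!</h1></body></html>"
soup = BeautifulSoup(html, "html.parser")  # the default parser works great
```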
Look for all anchor tags on the page (useful if you’re building a crawler and need to find the next pages to visit)
Look for all tags with a specific class attribute (e.g. `<li class="...">...</li>`)
Look for the tag with a specific ID attribute (e.g. `<div id="...">...</div>`)
Look for nested patterns of tags (useful for finding generic elements, but only within a specific section of the page)
Look for all tags matching CSS selectors (similar query to the last one, but might be easier to write for someone who knows CSS)
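The five lookups above can be sketched against a toy page (the class names and IDs here are illustrative):

```python
from bs4 import BeautifulSoup

html = """
<div id="content">
  <ul class="results">
    <li class="result">First</li>
    <li class="result">Second</li>
  </ul>
  <a href="/page/2">Next page</a>
</div>
"""
soup = BeautifulSoup(html, "html.parser")

links = soup.find_all("a")                      # all anchor tags on the page
items = soup.find_all("li", class_="result")    # all tags with a class
content = soup.find(id="content")               # the tag with an ID
nested = content.find("ul", class_="results").find_all("li")  # nested pattern
selected = soup.select("div#content li.result")  # CSS selector query
```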
Get a list of strings representing the inner contents of a tag (this includes both the text nodes as well as the text representation of any other nested HTML tags within)
Return only the text contents within this tag, but ignore the text representation of other HTML tags (useful for stripping out pesky `<span>`, `<strong>`, `<i>`, or other inline tags that might show up sometimes)
Convert the text that you're extracting from Unicode to ASCII if you're having issues printing it to the console or writing it to files
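A sketch of these three operations on a small tag:

```python
from bs4 import BeautifulSoup

html = "<p>Price: <strong>$20</strong> <span>only</span></p>"
tag = BeautifulSoup(html, "html.parser").p

pieces = tag.contents    # list of the inner contents: text nodes and nested tags
text = tag.get_text()    # text only, with the nested tags stripped out
ascii_text = text.encode("ascii", "ignore").decode()  # drop non-ASCII characters
```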
Get the attribute of a tag (useful for grabbing the `src` attribute of an `<img>` tag or the `href` attribute of an `<a>` tag)
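A short sketch (the URLs are illustrative):

```python
from bs4 import BeautifulSoup

html = '<a href="/about"><img src="/logo.png" alt="Logo"></a>'
soup = BeautifulSoup(html, "html.parser")

href = soup.find("a")["href"]   # attribute access works like a dict lookup
src = soup.find("img")["src"]
```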
Putting several of these concepts together, here's a common idiom: iterating over a bunch of container tags and pulling out content from each of them.
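A sketch of that idiom, using hypothetical product markup:

```python
from bs4 import BeautifulSoup

html = """
<div class="product"><h2>Widget</h2><span class="price">$10</span></div>
<div class="product"><h2>Gadget</h2><span class="price">$20</span></div>
"""
soup = BeautifulSoup(html, "html.parser")

items = []
for product in soup.find_all("div", class_="product"):
    # pull the interesting values out of each container tag
    items.append({
        "name": product.find("h2").get_text(),
        "price": product.find("span", class_="price").get_text(),
    })
```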
Using XPath Selectors
BeautifulSoup doesn’t currently support XPath selectors, and I’ve found them to be really terse and more of a pain than they’re worth. I haven’t found a pattern I couldn’t parse using the above methods.
If you’re really dedicated to using them for some reason, you can use the lxml library instead of BeautifulSoup, as described here.
Storing Your Data
Now that you’ve extracted your data from the page, it’s time to save it somewhere.
Note: The implication in these examples is that the scraper went out and collected all of the items, and then waited until the very end to iterate over all of them and write them to a spreadsheet or database.
I did this to simplify the code examples. In practice, you’d want to store the values you extract from each page as you go, so that you don’t lose all of your progress if you hit an exception towards the end of your scrape and have to go back and re-scrape every page.
Writing to a CSV
Probably the most basic thing you can do is write your extracted items to a CSV file. By default, each row that is passed to the `csv.writer` object to be written has to be a Python `list`.
In order for the spreadsheet to make sense and have consistent columns, you need to make sure all of the items that you’ve extracted have their properties in the same order. This isn’t usually a problem if the lists are created consistently.
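A minimal sketch writing list-shaped rows with `csv.writer` (the file name and columns are illustrative):

```python
import csv

rows = [
    ["Widget", "$10"],  # each row is a plain list, columns in a fixed order
    ["Gadget", "$20"],
]

with open("products.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["name", "price"])  # header row
    writer.writerows(rows)
```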
If you're extracting lots of properties about each item, sometimes it's more useful to store the item as a Python `dict` instead of having to remember the order of columns within a row. The `csv` module has a handy `DictWriter` that keeps track of which column is for writing which dict key.
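A sketch using `DictWriter` with the same hypothetical items:

```python
import csv

items = [
    {"name": "Widget", "price": "$10"},
    {"name": "Gadget", "price": "$20"},
]

with open("products_dict.csv", "w", newline="") as f:
    # fieldnames maps each dict key to its column
    writer = csv.DictWriter(f, fieldnames=["name", "price"])
    writer.writeheader()
    writer.writerows(items)
```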
Writing to a SQLite Database
You can also use a simple SQL insert if you’d prefer to store your data in a database for later querying and retrieval.
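A sketch using Python's built-in `sqlite3` module (the table and file names are illustrative):

```python
import sqlite3

conn = sqlite3.connect("scraped.db")
conn.execute("CREATE TABLE IF NOT EXISTS products (name TEXT, price TEXT)")

# a simple parameterized insert for each scraped item
items = [("Widget", "$10"), ("Gadget", "$20")]
conn.executemany("INSERT INTO products (name, price) VALUES (?, ?)", items)
conn.commit()

# later querying and retrieval
names = [row[0] for row in conn.execute("SELECT name FROM products")]
conn.close()
```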
More Advanced Topics
These aren’t really things you’ll need if you’re building a simple, small scale scraper for 90% of websites. But they’re useful tricks to keep up your sleeve.
Javascript Heavy Websites
Contrary to popular belief, you do not need any special tools to scrape websites that load their content via Javascript. In order for the information to get from their server and show up on a page in your browser, that information had to have been returned in an HTTP response somewhere.
It usually means that you won’t be making an HTTP request to the page’s URL that you see at the top of your browser window, but instead you’ll need to find the URL of the AJAX request that’s going on in the background to fetch the data from the server and load it into the page.
There's not really an easy code snippet I can show here, but if you open the Chrome or Firefox Developer Tools, you can load the page, go to the "Network" tab and then look through all of the requests that are being sent in the background to find the one that's returning the data you're looking for. Start by filtering the requests to only `XHR` or `JS` to make this easier.
Once you find the AJAX request that returns the data you're hoping to scrape, then you can make your scraper send requests to this URL, instead of to the parent page's URL. If you're lucky, the response will be encoded with JSON, which is even easier to parse than HTML.
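Once you have that background URL, the request itself is ordinary; here httpbin.org/json stands in for the hypothetical AJAX endpoint you found in the Network tab:

```python
import requests

# hypothetical: the URL you found in the Network tab, not the page's own URL
ajax_url = "https://httpbin.org/json"

data = requests.get(ajax_url).json()  # parsed straight into Python dicts/lists
```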
Content Inside Iframes
This is another topic that causes a lot of hand wringing for no reason. Sometimes the page you’re trying to scrape doesn’t actually contain the data in its HTML, but instead it loads the data inside an iframe.
Again, it's just a matter of making the request to the right URL to get the data back that you want. Make a request to the outer page, find the iframe, and then make another HTTP request to the iframe's `src` attribute.
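A sketch of that two-step process (the outer page here is a toy HTML string, with example.com standing in for the iframe's document):

```python
import requests
from bs4 import BeautifulSoup

# a toy outer page; in practice this HTML comes from your first request
outer_html = '<html><body><iframe src="https://example.com/"></iframe></body></html>'

iframe = BeautifulSoup(outer_html, "html.parser").find("iframe")
inner = requests.get(iframe["src"])  # second request, to the iframe's document
```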
Sessions and Cookies
While HTTP is stateless, sometimes you want to use cookies to identify yourself consistently across requests to the site you’re scraping.
The most common example of this is needing to login to a site in order to access protected pages. Without the correct cookies sent, a request to the URL will likely be redirected to a login form or presented with an error response.
However, once you successfully login, a session cookie is set that identifies who you are to the website. As long as future requests send this cookie along, the site knows who you are and what you have access to.
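`requests` handles this with a `Session` object, which stores any cookies the server sets and sends them back automatically on later requests. The httpbin.org endpoints below simply simulate a server setting a session cookie:

```python
import requests

session = requests.Session()

# httpbin.org is a request-testing service; this endpoint sets a cookie
session.get("https://httpbin.org/cookies/set/sessionid/abc123")

# the session automatically sends the stored cookie on future requests
r = session.get("https://httpbin.org/cookies")
```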
Delays and Backing Off
If you want to be polite and not overwhelm the target site you're scraping, you can introduce an intentional delay or lag in your scraper to slow it down.
Some also recommend adding a backoff that’s proportional to how long the site took to respond to your request. That way if the site gets overwhelmed and starts to slow down, your code will automatically back off.
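A sketch combining a fixed delay with a proportional backoff (the half-second base and 2x multiplier are arbitrary choices, not recommendations from any standard):

```python
import time
import requests

urls = ["https://example.com/", "https://example.com/"]

for url in urls:
    start = time.time()
    r = requests.get(url)
    elapsed = time.time() - start
    # fixed politeness delay, plus a backoff proportional to response time:
    # if the site slows down, the scraper automatically backs off
    time.sleep(0.5 + 2 * elapsed)
```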
Spoofing the User Agent
By default, the `requests` library sets the `User-Agent` header on each request to something like "python-requests/2.12.4". You might want to change it to identify your web scraper, perhaps providing a contact email address so that an admin from the target website can reach out if they see you in their logs.
More commonly, this is used to make it appear that the request is coming from a normal web browser, and not a web scraping program.
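For example, passing a custom header and verifying it against httpbin.org, which echoes back the headers it receives (the scraper name and email are placeholders):

```python
import requests

# identify your scraper honestly, with a way to contact you
headers = {"User-Agent": "my-scraper/1.0 (contact: admin@example.com)"}

r = requests.get("https://httpbin.org/headers", headers=headers)
```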
Using Proxy Servers
Even if you spoof your User Agent, the site you are scraping can still see your IP address, since they have to know where to send the response.
If you’d like to obfuscate where the request is coming from, you can use a proxy server in between you and the target site. The scraped site will see the request coming from that server instead of your actual scraping machine.
If you'd like to make your requests appear to be spread out across many IP addresses, then you'll need access to many different proxy servers. You can keep track of them in a `list` and then have your scraping program simply go down the list, picking off the next one for each new request, so that the proxy servers get even rotation.
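A sketch of even rotation with `itertools.cycle` (the proxy addresses are placeholders, not working proxies):

```python
import itertools
import requests

# hypothetical proxy addresses -- substitute proxies you actually have access to
proxy_list = ["http://10.0.0.1:8080", "http://10.0.0.2:8080", "http://10.0.0.3:8080"]
rotation = itertools.cycle(proxy_list)

def fetch(url):
    proxy = next(rotation)  # each request gets the next proxy in the cycle
    return requests.get(url, proxies={"http": proxy, "https": proxy})
```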
Setting Timeouts
If you’re experiencing slow connections and would prefer that your scraper moved on to something else, you can specify a timeout on your requests.
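A sketch using the `timeout` parameter (in seconds):

```python
import requests

try:
    # give up if the server takes more than 5 seconds to respond
    r = requests.get("https://example.com/", timeout=5)
except requests.exceptions.Timeout:
    print("Request timed out, moving on")
```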
Handling Network Errors
Just as you should never trust user input in web applications, you shouldn’t trust the network to behave well on large web scraping projects. Eventually you’ll hit closed connections, SSL errors or other intermittent failures.
Learn More
If you’d like to learn more about web scraping, I currently have an ebook and online course that I offer, as well as a free sandbox website that’s designed to be easy for beginners to scrape.
You can also subscribe to my blog to get emailed when I release new articles.
In the last tutorial we learned how to leverage the Scrapy framework to solve common web scraping problems. Today we are going to take a look at Selenium (with Python ❤️) in a step-by-step tutorial.
Selenium refers to a number of different open-source projects used for browser automation. It supports bindings for all major programming languages, including our favorite language: Python.
The Selenium API uses the WebDriver protocol to control a web browser, like Chrome, Firefox or Safari. The browser can run either locally or remotely.
At the beginning of the project (almost 20 years ago!) it was mostly used for cross-browser, end-to-end testing (acceptance tests).
Now it is still used for testing, but it is also used as a general browser automation platform. And of course, it is used for web scraping!
Selenium is useful when you have to perform an action on a website such as:
- Clicking on buttons
- Filling forms
- Scrolling
- Taking a screenshot
It is also useful for executing Javascript code. Let's say that you want to scrape a Single Page Application, and you haven't found an easy way to directly call the underlying APIs. In this case, Selenium might be what you need.
Installation
We will use Chrome in our example, so make sure you have it installed on your local machine:
Installing the selenium package
To install the Selenium package, as always, I recommend that you create a virtual environment (for example using virtualenv) and then:
Quickstart
Once you have downloaded both Chrome and Chromedriver and installed the Selenium package, you should be ready to start the browser:
This will launch Chrome in headful mode (like regular Chrome, which is controlled by your Python code). You should see a message stating that the browser is controlled by automated software.
To run Chrome in headless mode (without any graphical user interface), as you typically would on a server, see the following example:
The `driver.page_source` property will return the full page HTML code.
Here are two other interesting WebDriver properties:
- `driver.title` gets the page's title
- `driver.current_url` gets the current URL (this can be useful when there are redirections on the website and you need the final URL)
Locating Elements
Locating data on a website is one of the main use cases for Selenium, either for a test suite (making sure that a specific element is present/absent on the page) or to extract data and save it for further analysis (web scraping).
There are many methods available in the Selenium API to select elements on the page. You can use:
- Tag name
- Class name
- IDs
- XPath
- CSS selectors
We recently published an article explaining XPath. Don't hesitate to take a look if you aren't familiar with XPath.
As usual, the easiest way to locate an element is to open your Chrome dev tools and inspect the element that you need. A cool shortcut for this is to highlight the element you want with your mouse and then press Ctrl + Shift + C, or on macOS Cmd + Shift + C, instead of having to right click + inspect each time.
find_element
There are many ways to locate an element in Selenium. Let's say that we want to locate an `h1` tag on the page.
All these methods also have a `find_elements` counterpart (note the plural) that returns a list of matching elements.
For example, to get all anchors on a page, use the following:
Some elements aren't easily accessible with an ID or a simple class, and that's when you need an XPath expression. You also might have multiple elements with the same class (the ID is supposed to be unique).
XPath is my favorite way of locating elements on a web page. It's a powerful way to extract any element on a page, based on its absolute position in the DOM, or relative to another element.
WebElement
A `WebElement` is a Selenium object representing an HTML element.
There are many actions that you can perform on those HTML elements, here are the most useful:
- Accessing the text of the element with the `element.text` property
- Clicking on the element with `element.click()`
- Accessing an attribute with `element.get_attribute('class')`
- Sending text to an input with `element.send_keys('mypassword')`
There are some other interesting methods, like `is_displayed()`, which returns True if an element is visible to the user. This can be useful for avoiding honeypots (like filling hidden inputs).
Honeypots are mechanisms used by website owners to detect bots. For example, if an HTML input has the attribute `type=hidden`, like this:
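The original markup sample was lost in extraction; a representative fragment (the input's name is illustrative) looks like:

```html
<input type="hidden" name="email_confirm" value="">
```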
This input value is supposed to be blank. If a bot is visiting a page and fills all of the inputs on a form with random values, it will also fill the hidden input. A legitimate user would never fill the hidden input value, because it is not rendered by the browser.
That's a classic honeypot.
Full example
Here is a full example using Selenium API methods we just covered.
We are going to log into Hacker News:
In our example, authenticating to Hacker News is not really useful on its own. However, you could imagine creating a bot to automatically post a link to your latest blog post.
In order to authenticate we need to:
- Go to the login page using `driver.get()`
- Select the username input using `driver.find_element_by_*` and then `element.send_keys()` to send text to the input
- Follow the same process with the password input
- Click on the login button using `element.click()`
Should be easy right? Let's see the code:
Easy, right? Now there is one important thing that is missing here. How do we know if we are logged in?
We could try a couple of things:
- Check for an error message (like “Wrong password”)
- Check for one element on the page that is only displayed once logged in.
So, we're going to check for the logout button. The logout button has the ID “logout” (easy)!
We can't just check if the element is `None`, because all of the `find_element_by_*` methods raise an exception if the element is not found in the DOM. So we have to use a try/except block and catch the `NoSuchElementException` exception:
Taking a screenshot
We could easily take a screenshot using:
Note that a lot of things can go wrong when you take a screenshot with Selenium. First, you have to make sure that the window size is set correctly. Then, you need to make sure that every asynchronous HTTP call made by the frontend Javascript code has finished, and that the page is fully rendered.
In our Hacker News case it's simple and we don't have to worry about these issues.
Waiting for an element to be present
Dealing with a website that uses lots of Javascript to render its content can be tricky. These days, more and more sites are using frameworks like Angular, React and Vue.js for their front-end. These front-end frameworks are complicated to deal with because they fire a lot of AJAX calls.
If we had to worry about an asynchronous HTTP call (or many) to an API, there are two ways to solve this:
- Use a `time.sleep(ARBITRARY_TIME)` before taking the screenshot.
- Use a `WebDriverWait` object.
If you use a `time.sleep()`, you will probably use an arbitrary value. The problem is that you're either waiting too long or not long enough. Also, the website can load slowly on your local wifi connection, but will be 10 times faster on your cloud server. With the `WebDriverWait` method, you will wait the exact amount of time necessary for your element/data to be loaded.
This will wait five seconds for an element located by the ID "mySuperId" to be loaded. There are many other interesting expected conditions like:
- `element_to_be_clickable`
- `text_to_be_present_in_element`
- `visibility_of_element_located`
You can find more information about this in the Selenium documentation.
Executing Javascript
Sometimes, you may need to execute some Javascript on the page. For example, let's say you want to take a screenshot of some information, but you first need to scroll a bit to see it. You can easily do this with Selenium:
Conclusion
I hope you enjoyed this blog post! You should now have a good understanding of how the Selenium API works in Python. If you want to know more about how to scrape the web with Python don't hesitate to take a look at our general Python web scraping guide.
Selenium is often necessary to extract data from websites using lots of Javascript. The problem is that running lots of Selenium/Headless Chrome instances at scale is hard. This is one of the things we solve with ScrapingBee, our web scraping API.
Selenium is also an excellent tool to automate almost anything on the web.
If you perform repetitive tasks, like filling forms or checking information behind a login form where the website doesn't have an API, it may be a good idea to automate them with Selenium. Just don't forget this xkcd: