Throughout the module, you have been able to analyze each of these elements to improve on-page SEO. Since doing this individually can be quite time consuming, I'll show you how to analyze a website all at once. I'll teach you how to use a crawler, or spider, specifically one called Screaming Frog, and how to download that data to an Excel file. After this lesson, you'll be able to understand all of the elements of an on-page SEO strategy and how you can use them in your website analysis.

A good way to view information across an entire site is to use what is referred to as a crawler or spider tool. A crawler is a tool that crawls, or spiders, your website just like a search engine would, and then shows you what data it has extracted from the pages of your site. This allows you to get a large-scale view of the information we discuss in these lessons.

To crawl a website, I recommend a program called Screaming Frog. Screaming Frog is free to use if you crawl up to 500 pages on a site, but you will have to purchase it if you want to crawl more than that. For learning purposes, the free version is fine, though for working on larger sites you will eventually want to upgrade to the paid version.

Now that we've discussed how to locate title tags, meta descriptions, and heading tags within the pages themselves, let's go over how each of these elements can be viewed on a larger scale. We can do this with a tool that crawls the site just like a search engine robot would and then shows us site-wide information about these important elements. Screaming Frog will give us a lot of useful data for our SEO analysis.

To begin crawling a site, add the URL here in the search bar and then hit Start. For this example, I will use UC Davis Extension. As the tool begins crawling the site, you can see the progress over here on the right. Note that this progress number may change depending on how many more pages the tool is finding. The pages the tool has found and crawled will be displayed below.

You will be able to view the type of content the tool is discovering, such as whether a given URL is an HTML page, an image, a JavaScript file, or something else, like a PDF. You can also view any relevant status codes and what those status codes mean. We will discuss this in more depth later on, but take note now that this is where you can find important status code information.

If you scroll to the right, you can see title tags and the length of each title tag, as well as meta descriptions and the length of each meta description. If you continue to scroll, you will also be able to see heading tags, but you can also use the tabs up here at the top to view this information. If we click on the H1 tab, we can see that each of these pages has the site name, UC Davis Extension, as its main H1, followed by a second H1, and the tab also shows the length of each H1. You can view H1 tags, H2 tags, and more in this manner.
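If you're curious about what a crawler is actually collecting as it builds these columns, the sketch below shows the kind of per-page extraction a tool like Screaming Frog automates at scale. To be clear, this is an illustration, not how Screaming Frog itself is implemented: the inspect_page helper and the example URL are my own, and it assumes the requests and beautifulsoup4 Python packages are installed.

```python
# A minimal sketch of the per-page data a site crawler collects.
# Hypothetical helper for illustration; not Screaming Frog's own code.
import requests
from bs4 import BeautifulSoup

def inspect_page(url):
    response = requests.get(url, timeout=10)
    soup = BeautifulSoup(response.text, "html.parser")

    # Title tag and its length.
    title = soup.title.string.strip() if soup.title and soup.title.string else ""

    # Meta description and its length.
    meta = soup.find("meta", attrs={"name": "description"})
    description = meta.get("content", "").strip() if meta else ""

    # All H1 headings on the page.
    h1s = [h1.get_text(strip=True) for h1 in soup.find_all("h1")]

    return {
        "url": url,
        "status_code": response.status_code,  # e.g. 200, 301, 404
        "content_type": response.headers.get("Content-Type", ""),
        "title": title,
        "title_length": len(title),
        "meta_description": description,
        "meta_description_length": len(description),
        "h1_tags": h1s,
    }

if __name__ == "__main__":
    # Example URL only; point this at a page you are allowed to crawl.
    print(inspect_page("https://example.com/"))
```

A full crawler simply repeats this kind of extraction for every URL it discovers by following the links on each page, which is why a tool like Screaming Frog can show you these elements site-wide.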
To analyze these elements more easily, I prefer downloading the crawled pages to an Excel file. You can filter the pages to just the HTML versions so you don't have to worry about sorting through images or JavaScript later on. To do this, select HTML and then Export. Note that you can't export while the tool is still crawling the site, so for right now we'll just stop the crawl and export what we have.

Next, name the file after your site. Let's name this one ucdavisextension, and add a reminder that this is a site crawl. We can then save the file; I'll go ahead and save it to the desktop for now. Once the file is saved, you can open it in Excel and view a list of all of the information the tool found. This allows you to more easily filter the data and analyze it based on your specific needs.

You should now know how to crawl a website, view on-page elements across a number of pages all at once, and identify important information within that crawl. You should also understand how to download that data to an Excel file and filter it for your own analysis. That completes the video portion of this lesson.
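As a quick supplement to the video: once you have the exported file, a short script can handle the same kind of filtering you would otherwise do by hand in Excel. The sketch below is one way to do it with the pandas library; the filename ucdavisextension-sitecrawl.csv is a hypothetical stand-in for whatever you named your export (use pd.read_excel instead if you saved an .xlsx file), and the column headers are assumptions based on a typical Screaming Frog export, so check them against your own file before running.

```python
# A sketch of filtering an exported site crawl with pandas.
# Filename and column names are assumptions; match them to your own export.
import pandas as pd

crawl = pd.read_csv("ucdavisextension-sitecrawl.csv")

# Pages that are missing a meta description entirely.
missing_descriptions = crawl[crawl["Meta Description 1"].isna()]

# Title tags long enough that search engines may truncate them
# (roughly 60 characters is a common rule of thumb).
long_titles = crawl[crawl["Title 1 Length"] > 60]

print(f"{len(missing_descriptions)} pages are missing a meta description")
print(f"{len(long_titles)} pages have title tags over 60 characters")
```

From here you could sort, group, or export these subsets just as you would with Excel filters, which becomes especially handy once a crawl grows past a few hundred pages.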