Screaming Frog Download (2021 Latest version) for Windows 10, 8, 7

Screaming Frog SEO Spider is a website crawler that allows you to crawl the URLs of websites and get key elements to analyze and audit technical and on-site SEO. Download for free or purchase a license for additional advanced features.

The SEO Spider app is a powerful and flexible site crawler, capable of crawling both small and very large websites efficiently while letting you analyze the results in real time. It gathers key on-site data so SEOs can make informed decisions.

What can you do with SEO Spider software?

Find broken links
Instantly crawl a website and find broken links (404s) and server errors. Bulk export the errors and source URLs to fix them or send them to a developer.
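At its core, broken-link checking is just extracting anchor targets from each fetched page and flagging 4XX/5XX responses. A minimal sketch of that idea using Python's standard library (not Screaming Frog's own implementation; the class and function names are illustrative, and the actual HTTP requests are omitted):

```python
from html.parser import HTMLParser
from urllib.parse import urljoin

class LinkExtractor(HTMLParser):
    """Collect href targets from <a> tags, resolved against a base URL."""
    def __init__(self, base_url):
        super().__init__()
        self.base_url = base_url
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(urljoin(self.base_url, value))

def is_broken(status_code):
    """Treat client errors (4XX) and server errors (5XX) as broken."""
    return status_code >= 400

# Extract links from one page's HTML; each URL would then be requested
# and its status code passed to is_broken().
extractor = LinkExtractor("https://example.com/")
extractor.feed('<a href="/about">About</a><a href="https://other.site/x">X</a>')
```

Relative links such as `/about` are resolved to absolute URLs before checking, so the same URL is never fetched twice under different spellings.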

Audit redirects
Find temporary and permanent redirects, identify redirect chains and loops, or upload a list of URLs to audit in a site migration.
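A redirect-chain audit boils down to following each 3XX `Location` target and flagging chains that loop or run too long. A rough sketch of that logic, using a plain dict to stand in for live 3XX responses (the function name and hop limit are illustrative, not the tool's own behavior):

```python
def follow_redirects(start_url, redirect_map, max_hops=10):
    """Follow a chain of redirects and classify it.

    redirect_map simulates 3XX responses as {url: Location-target};
    a URL absent from the map is a final destination. Returns
    (chain_of_urls, status) where status is "ok", "loop", or
    "too many hops".
    """
    chain = [start_url]
    seen = {start_url}
    url = start_url
    while url in redirect_map:
        url = redirect_map[url]
        if url in seen:
            # Revisiting a URL means the redirects cycle forever.
            return chain + [url], "loop"
        chain.append(url)
        seen.add(url)
        if len(chain) > max_hops:
            return chain, "too many hops"
    return chain, "ok"
```

Chained redirects like `{"/old": "/mid", "/mid": "/new"}` yield a three-hop chain with status `"ok"`, while `{"/a": "/b", "/b": "/a"}` is reported as a loop.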

Analyze page titles and metadata
Analyze page titles and meta descriptions during a crawl and identify those that are too long, short, missing, or duplicate on your site.
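The character-count side of such a title audit is easy to sketch. The thresholds below (30 and 65 characters) are common rules of thumb rather than Screaming Frog's exact limits, and real crawlers also measure rendered pixel width, which character counts only approximate:

```python
def audit_title(title, min_len=30, max_len=65):
    """Flag common page-title issues by character count.

    Returns a list of issue labels; an empty list means no issues
    were found at these (illustrative) thresholds.
    """
    issues = []
    if not title or not title.strip():
        issues.append("missing")
        return issues
    n = len(title.strip())
    if n > max_len:
        issues.append("too long")
    elif n < min_len:
        issues.append("too short")
    return issues
```

Duplicate detection would be layered on top by grouping URLs that share the same title string across the crawl.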

Discover duplicate content
Discover exact duplicate URLs with an md5 algorithmic check, partially duplicate items such as page titles, descriptions or headings, and search for low-content pages.
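The exact-duplicate check described here can be illustrated directly: hash each page body with MD5 and group URLs whose checksums match. A simplified sketch (the function name and input shape are illustrative):

```python
import hashlib
from collections import defaultdict

def find_exact_duplicates(pages):
    """Group URLs whose bodies share an MD5 checksum.

    pages: {url: html_body}. Returns a list of URL groups whose
    bodies are byte-for-byte identical; singletons are dropped.
    """
    by_hash = defaultdict(list)
    for url, body in pages.items():
        digest = hashlib.md5(body.encode("utf-8")).hexdigest()
        by_hash[digest].append(url)
    return [urls for urls in by_hash.values() if len(urls) > 1]
```

MD5 only catches byte-identical pages; near-duplicates (same template, slightly different text) need the separate title/description/heading comparisons mentioned above.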

Extract data with XPath
Collect all the HTML data from a web page using CSS Path, XPath, or regex. This could include social meta tags, additional headings, prices, SKUs, or more!
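As a rough illustration of XPath-style extraction, Python's standard-library ElementTree supports a limited XPath subset; tools like Screaming Frog use fuller engines, and the snippet and selectors below are purely illustrative. Note that ElementTree requires well-formed XML, so this works on the XHTML-like snippet here but not on arbitrary real-world HTML:

```python
import xml.etree.ElementTree as ET

# A well-formed XHTML-like snippet standing in for a fetched page.
snippet = (
    '<html><body>'
    '<meta name="og:title" content="Blue Widget"/>'
    '<span class="price">19.99</span>'
    '<span class="price">24.99</span>'
    '</body></html>'
)

root = ET.fromstring(snippet)
# XPath-style selection: every <span class="price"> anywhere in the page.
prices = [el.text for el in root.findall(".//span[@class='price']")]
# A single social meta tag, read via its attribute.
og_title = root.find(".//meta[@name='og:title']").get("content")
```

The same attribute-predicate pattern extends to SKUs, extra headings, or any other element you can address by tag and attribute.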

Review robots and directives
View URLs blocked by robots.txt, meta robots, or X-Robots-Tag directives such as “noindex” or “nofollow”, as well as canonicals and rel=“next” and rel=“prev”.
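The combined effect of robots.txt rules and noindex directives can be sketched with Python's standard `urllib.robotparser`. The rules and helper below are illustrative, not the tool's own evaluation logic; strictly speaking a crawl-blocked URL can still end up indexed from external links, but for audit purposes the sketch treats either signal as non-indexable:

```python
from urllib import robotparser

# Illustrative robots.txt rules, parsed without any network access.
rules = robotparser.RobotFileParser()
rules.parse([
    "User-agent: *",
    "Disallow: /private/",
])

def is_indexable(url, meta_robots=""):
    """True only if robots.txt allows the URL *and* no noindex
    directive is present (from a meta robots tag or an
    X-Robots-Tag response header)."""
    if not rules.can_fetch("*", url):
        return False
    directives = {d.strip().lower()
                  for d in meta_robots.split(",") if d.strip()}
    return "noindex" not in directives
```

A crawler applies a check like this to every URL and reports the blocked or noindexed ones in bulk.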

Generate XML Sitemaps
With Screaming Frog you can quickly create XML Sitemaps and Image XML Sitemaps, with advanced URL settings to include, last modified, priority and change frequency.
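A minimal XML sitemap generator is straightforward with ElementTree; the sketch below supports the optional lastmod, changefreq, and priority fields mentioned above (the function name and input shape are assumptions for illustration):

```python
import xml.etree.ElementTree as ET

def build_sitemap(entries):
    """Serialize a minimal XML sitemap.

    entries: list of dicts with a required "loc" key and optional
    "lastmod", "changefreq", and "priority" keys.
    """
    urlset = ET.Element(
        "urlset", xmlns="http://www.sitemaps.org/schemas/sitemap/0.9")
    for entry in entries:
        url = ET.SubElement(urlset, "url")
        ET.SubElement(url, "loc").text = entry["loc"]
        # Emit the optional per-URL fields only when supplied.
        for field in ("lastmod", "changefreq", "priority"):
            if field in entry:
                ET.SubElement(url, field).text = str(entry[field])
    return ET.tostring(urlset, encoding="unicode")
```

Feeding it one entry per crawled, indexable URL yields a file ready to submit via Search Console or reference from robots.txt.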

Integrate with Google Analytics
Connect to the Google Analytics API and pull in user data such as sessions, bounce rate, conversions, goals, transactions, and revenue for landing pages alongside the crawl data.

Crawl JavaScript websites
Render web pages using the built-in Chromium WRS to crawl dynamic, JavaScript-rich websites and frameworks such as Angular, React, and Vue.js.

Visualize the site architecture
Evaluate internal linking and URL structure using interactive crawl and directory force-directed diagrams, and tree-graph site visualizations.

Features and highlights

  • Find broken links, errors, and redirects
  • Analyze page titles and metadata
  • Meta Robots and Directives Review
  • Audit hreflang Attributes
  • Discover duplicate pages
  • Generate XML Sitemaps
  • Site visualizations
  • Crawl limit
  • Scheduling
  • Crawl configuration
  • Save crawls and re-upload
  • Custom source code search
  • Custom extraction
  • Google Analytics integration
  • Search console integration
  • Link metrics integration
  • Rendering (JavaScript)
  • Custom robots.txt
  • AMP crawling and validation
  • Structured data and validation
  • Store and view raw and rendered HTML

The Screaming Frog is an SEO audit tool, built by real SEOs with thousands of users around the world. A quick summary of some of the data collected in a crawl includes:

  • Errors – client and server errors (no responses, 4XX, 5XX) such as broken links.
  • Redirects: permanent and temporary redirects (3XX responses) and JS redirects.
  • Blocked URLs: view and audit URLs not allowed by the robots.txt protocol.
  • Blocked resources – view and audit blocked resources in rendering mode.
  • External links: all external links and their status codes.
  • Protocol: whether the URLs are secure (HTTPS) or insecure (HTTP).
  • URI issues: non-ASCII characters, underscores, uppercase characters, parameters, or long URLs.
  • Duplicate pages – an algorithmic check based on MD5 hash values for exact duplicate pages.
  • Page titles: missing, duplicate, over 65 characters, short, pixel-width truncation, same as H1, or multiple.
  • Meta descriptions: missing, duplicate, over 156 characters, short, pixel-width truncation, or multiple.
  • Meta Keywords: Mainly for reference, since they are not used by Google, Bing or Yahoo.
  • File size: size of URLs and images.
  • Response time.
  • Last modified header.
  • Page depth (crawl depth within the site structure).
  • The word count.
  • H1 – Missing, duplicate, more than 70 characters, multiple.
  • H2 – Missing, duplicate, more than 70 characters, multiple.
  • Meta Robots: index, noindex, follow, nofollow, noarchive, nosnippet, noodp, noydir, etc.
  • Meta Refresh – Including the landing page and time delay.
  • Canonical link element and canonical HTTP headers.
  • X-Robots-Tag.
  • Pagination – rel=“next” and rel=“prev”.
  • Follow and nofollow – at page and link level (true/false).
  • Redirect Chains – Discover redirect chains and loops.
  • Hreflang attributes – audit missing confirmation links, inconsistent and incorrect language codes, non-canonical hreflang, and more.
  • AJAX – Select this option to obey Google’s currently deprecated AJAX crawling scheme.
