User Overview

Followers and Following



Total Followers - Last Year
Daily Follower Change - Last Year
Daily Tweets - Last Year

Tweet Stats

Analysed 15,392 tweets from the last 282 weeks.
Tweets Day of Week (UTC)
Tweets Hour of Day (UTC)
Tweets Day and Hour Heatmap (UTC)


Last 50 tweets from @screamingfrog
Hi @screamingfrog ! Is it possible or would it be possible soon to weight the Link Score by giving an arbitrary value to the menu / footer / aside ?
Replying to @adrienrusso
Hi Adrien, you can’t atm, but do have it on the ‘todo’ list :-)
In reply to @JoelMesherghi
I'm having a mare today! I can find the complete data set in the non-summary report! Cheers for the patience.
Replying to @JoelMesherghi
Yeah, you've got it! The summary is just that, so we only show a sample URL for each unique error or warning. No worries, just shout if you have any more queries. 🐸👌
In reply to @screamingfrog
Ahhh! Yes! I see now, the summary report. I got confused between that report and the validation error report you can export on the right hand pane
Replying to @JoelMesherghi
Awesome! Yeah, there's two reports there. A summary report, and then the URL level report too with all the details. There's also some more goodies under bulk export, but the 'reports' variants should work for this scenario.
In reply to @screamingfrog
Thanks! However, I still have the same problem. I can't analyse errors at scale. I have to click the individual URLs in the interface to see more data and when I export the Validation Error report I just get URLs that have errors or warnings and all the property types a page has.
Replying to @JoelMesherghi
That screenshot is not the report I just said to export though?
Hi @screamingfrog is there a way to identify the specific structured data property that is causing a validation error? The export just shows what URL has an error and then all the different types of markup, not what type is causing an error or warning
Replying to @JoelMesherghi
Hi Joel, yep absolutely! Use the 'Reports > Structured Data > Validation Errors & Warnings' export to get all the detail as well. Little tutorial over here -…
Hey @screamingfrog - playing with your Data Studio Crawl Overview template and am wondering how I can export data from older crawls to work with the DS template? Can this only be done with scheduled crawls? I have a lot of older crawls I'd love to visualize with your template.
Replying to @ramirez_robert
Hey Robert, I was about to say no, unfortunately not. But thinking about it, you could use the CLI with --load-crawl and --export-custom-summary (…). Might be a bit of a pain though. Will have a think if this is something we could help with more!
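The reply above names two CLI flags, --load-crawl and --export-custom-summary. A rough sketch of how they might be combined is below; note that the CLI name, the --headless flag, and all paths are assumptions, and the arguments to --export-custom-summary are elided ("…") in the tweet, so check the CLI help for the real syntax.

```shell
#!/bin/sh
# Sketch only: '--load-crawl' and '--export-custom-summary' come from the tweet;
# the binary name, '--headless', and the paths below are assumptions.
CRAWL_FILE="$HOME/crawls/old-site-crawl.seospider"   # hypothetical saved crawl
CMD="screamingfrogseospider --headless --load-crawl $CRAWL_FILE --export-custom-summary"
# The arguments to --export-custom-summary are elided in the tweet, so this
# string is incomplete on purpose. Only invoke it if the CLI is installed:
if command -v screamingfrogseospider >/dev/null 2>&1; then
  echo "would run: $CMD ..."
fi
echo "$CMD"
```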
In reply to @screamingfrog
Screaming frog doesn't support GA4 integration as of now?
Replying to @_shri_vidhya_
Yep, correct! It’s on the todo.
In reply to @screamingfrog
Will you see a "canonical mismatch" when the canonical tag is not in line with the HREFLANG tag?
Replying to @YBM_NL
You'd see a 'Not Using Canonical' under the hreflang tab if it doesn't match up.
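The check described in that reply amounts to comparing the canonical URL with the hreflang target URL. A minimal sketch of such a comparison is below; the normalisation choices (lowercasing scheme/host, trimming a trailing slash) are mine for illustration, not how Screaming Frog necessarily does it.

```python
from urllib.parse import urlsplit, urlunsplit

def normalise(url: str) -> str:
    """Lowercase scheme/host and drop any trailing slash on the path."""
    parts = urlsplit(url)
    path = parts.path.rstrip("/") or "/"
    return urlunsplit((parts.scheme.lower(), parts.netloc.lower(), path,
                       parts.query, parts.fragment))

def canonical_matches_hreflang(canonical: str, hreflang_url: str) -> bool:
    """True when the canonical lines up with the hreflang target URL."""
    return normalise(canonical) == normalise(hreflang_url)
```

For example, `https://Example.com/en/` and `https://example.com/en` would be treated as a match, while different paths would be flagged.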
Google Explains How To Inject Canonical Tags Using JavaScript In Updated Doc -…. Nice to see this officially documented.

To find conflicting canonicals, use the 'Canonical Mismatch' filter in the JS tab ->
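For context, a JavaScript-injected canonical looks roughly like the sketch below. This is an illustrative example, not the snippet from Google's doc: the helper strips the query string and fragment to build a simple self-referencing canonical, which is an assumption about what you'd want.

```javascript
// Illustrative sketch of injecting a canonical tag with JavaScript.
// Search engines only see this after rendering, which is why comparing
// raw vs rendered canonicals (a "canonical mismatch" check) matters.
function buildCanonicalHref(pageUrl) {
  // Drop query string and fragment for a self-referencing canonical.
  const u = new URL(pageUrl);
  return u.origin + u.pathname;
}

// Browser-only part; guarded so the helper also runs outside a browser.
if (typeof document !== "undefined") {
  const link = document.createElement("link");
  link.rel = "canonical";
  link.href = buildCanonicalHref(window.location.href);
  document.head.appendChild(link);
}
```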
Replying to @tentaclequing
Ah yes, the Privi-Pee! Forgot about that, a career high for all involved I think cc @DanielCartland @OliverBrett 😂
In reply to @screamingfrog
Is there already a way to find a crawl's (crawl) duration? Perhaps in the Crawls Window? So the next time someone asks "well, how long is it gonna take?" I can say, well the last one took XX".
Replying to @EdACaraballo
Cheers, Ed. No, but do have on the 'todo', would be useful. Will add in a vote for you for the feature (if you're not already in there)!
There hasn't been a new version for a while. It used to be updated almost every week. Is everything ok? @screamingfrog :)
Replying to @erhankolci
You must have missed the LFA - . Poor log files, always neglected! (I am well thank you :-))
Just released a small update to version 5.1 of the @screamingfrog Log File Analyser -…. Includes small improvements and bug fixes.
Screaming Frog Retweeted

🤏small and irritating
⛳️avoid some of my mistakes
😑I'd rather not think about this any more.

🖤Like for more of this sort of thing.
🔃Quote Retweet 'STOP' to stop this sort of thing.

Zero Cumulative Layout Shift, Please Improve Cumulative Layout Shift | ohgm
In reply to @screamingfrog
@screamingfrog Where's ye olde suggestion box.
Replying to @EdACaraballo, or just me via twitter? :-) All suggestions gratefully received!
Screaming Frog Retweeted
What kinda music does @screamingfrog listen to?

Black Eyed Trees
In reply to @screamingfrog
What's the best way to overcome the orphan pages, like removing them or making them discoverable? Sometimes data is huge!
If you want users to find them, want them indexed and to perform organically, then link to them internally. If you don’t, delete or noindex etc depending on what they are.
Hello @screamingfrog i've found orphan URLs in the sitemap tab but some of them have been also discovered during the crawl.. what did I miss ?
Replying to @RemiBrandini
Difficult to say without specifics. Has it crawled the XML Sitemap, how are you determining some have been discovered in the crawl as well etc. I’d recommend sending all the detail to us via and we can help!
Screaming Frog Retweeted
Managed to get round to writing.

The SEO Sprint newsletter: Lean Thinking in SEO

How focusing on value can be used to reduce waste in SEO initiatives…
Lean Thinking in SEO
In reply to @screamingfrog
I've just sent the email. Thank you! ;)
Replying to @serulux
Perfect, thank you!
In reply to @screamingfrog
thx, here another feat. req. ☺ more grouped-by-folder data as in the Site Structure tab: GA traffic, Clicks-Impressions-CTR, Inlinks, word counts, PSI, title/description lengths
Replying to @javigarcia
Cheers, Javi. We nearly did inlinks etc when we released it, but kept it fairly small to start with. Good ideas, will add to the list as well!
Hi @screamingfrog friends, any support for XPATH 2.0 on the roadmap? 🤓🙃
Replying to @ifelsejosh
Hey Josh, yeah we’ve got it on the roadmap. Cheers for the suggestion, will add a vote against that feature for you!
In reply to @RoxanaStingu
This happens when sites use HSTS. The browser will remember this and automatically use local 307 redirects for any HTTP request. The server itself could be giving a 301/302 etc, which Screaming Frog would detect.
Replying to @thetafferboy @RoxanaStingu and 1 other
Yeah, Mark is spot on. You can replicate what you see in a browser by enabling 'Respect HSTS' under 'Config > Spider > Advanced', but by default we'll show you the underlying redirects and status codes. Little guide on it here if helpful -…
An SEOs Guide To Crawling HSTS
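The HSTS behaviour described above is driven by the Strict-Transport-Security response header: once a browser has seen it, it rewrites future http:// requests to https:// internally (the local 307). A small sketch of parsing that header's directives (names per RFC 6797) is below; it's illustrative, not Screaming Frog's actual handling.

```python
def parse_hsts(header_value: str):
    """Parse a Strict-Transport-Security header value, e.g.
    'max-age=31536000; includeSubDomains', into
    (max_age_seconds, include_subdomains)."""
    max_age, include_subdomains = None, False
    for directive in header_value.split(";"):
        name, _, value = directive.strip().partition("=")
        name = name.strip().lower()
        if name == "max-age":
            # Seconds the browser should keep forcing HTTPS for this host.
            max_age = int(value.strip().strip('"'))
        elif name == "includesubdomains":
            include_subdomains = True
    return max_age, include_subdomains
```

A `max-age=0` header tells the browser to forget the HSTS policy, which is why the underlying server-side 301/302 can differ from what you see in a browser.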
In reply to @screamingfrog
Just sent an email to follow up on this. Talk soon
Replying to @andrea_moro
Thanks, Andrea! Will keep an eye out for it.
In reply to @serulux
Sorry for the extra question. But I'm trying to debug this error: "TRUNCATION! MemorySpiderURL mEncodedUrl has had its content truncated to [2,000] characters. The input string had [2,006] characters" How can I know which URL is this coming from?
Replying to @serulux
Sorry, miles away today. Do you mind popping through to us ( and then we can come back properly with more info? Easier one for email if poss! Cheers
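The error quoted above says a URL over 2,000 characters was truncated. One quick way to find candidate URLs is to scan an exported URL list for anything over that length; a minimal sketch follows (the "Address" column name is an assumption about the export format).

```python
import csv

URL_LENGTH_LIMIT = 2000  # the truncation threshold quoted in the error message

def overlong_urls(urls, limit=URL_LENGTH_LIMIT):
    """Return the URLs longer than the limit, longest first."""
    return sorted((u for u in urls if len(u) > limit), key=len, reverse=True)

def overlong_from_export(csv_path, column="Address", limit=URL_LENGTH_LIMIT):
    """Scan a crawl-export CSV; the 'Address' column name is hypothetical."""
    with open(csv_path, newline="", encoding="utf-8") as fh:
        return overlong_urls((row[column] for row in csv.DictReader(fh)), limit)
```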
In reply to @screamingfrog
@screamingfrog UX Request

highlight selected/current rows

sometimes tables in tabs are too wide and you need to scroll left to see urls

Replying to @javigarcia
Sorry, missed your tweet! Makes sense, and do agree. Got a couple of bits on the ‘todo’ that should help! Thanks for the suggestion as always.
In reply to @screamingfrog
@screamingfrog a CyberSec professional is asking me how many web requests SF will send in total and per hour when I crawl with SF. How do I calculate this? Thank you so much!
Replying to @jcia2006
Depends, but you can set the crawl speed under ‘Config > Speed’. :-)
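From that speed cap, the arithmetic for the CyberSec question is straightforward: roughly one request per URL in total, and at most (max URLs per second × 3600) requests per hour. A back-of-envelope sketch, with the cap value as an example, not a recommendation:

```python
def requests_per_hour(max_urls_per_second):
    """Worst-case requests per hour at a given 'Config > Speed' cap."""
    return int(max_urls_per_second * 3600)

def crawl_estimate(total_urls, max_urls_per_second):
    """Total requests are roughly one per URL; duration is hours at the capped rate."""
    hours = total_urls / requests_per_hour(max_urls_per_second)
    return total_urls, hours
```

For example, at 5 URLs/second the crawler sends at most 18,000 requests per hour, so a 90,000-URL site takes around 5 hours.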
In reply to @screamingfrog
I'll dig deeper (again) in the suggested tab too. If not I'll try to make some mock-up from existing screenshot
Replying to @andrea_moro
That would be super useful, thank you 👍🐸
In reply to @andrea_moro
Today this can be done, but URLs have to be noted down because they can't be filtered, unless I'm missing something
Replying to @andrea_moro
That's one way. You'd like a column of some kind potentially as well? Hard to understand completely what you mean, so if you want to pop through any example screenshots that would be useful. There's a few ways, but depends where you are exactly!
In reply to @andrea_moro
Today this can be done, but URLs have to be noted down because they can't be filtered, unless I'm missing something
Replying to @andrea_moro
If I understand correctly, you mean generally rather than using the added,new,removed, missing filters. So if you're just on say 'HTML' URLs in the 'current' crawl, how do you know if it's in the previous? The lower URL Details tab shows both current and previous values. So
In reply to @screamingfrog
Solved!!! I had to restart the PC. Thank you very much for the quick answer
Replying to @agalanseo
Ah, awesome! 🐸👍
In reply to @screamingfrog
@screamingfrog Hi - is it possible to view only 'contextual' links and not navigation links to a page/url?
Hello, is there any way to change the default browser? I would like to use Google Chrome.

Thanks a lot @screamingfrog
Replying to @agalanseo
👋 It uses your default web browser of the system. So double check your settings there! Pop us an email ( if you still can't get it to change and we can advise further.
Screaming Frog Retweeted
I'm starting out on YouTube with a FREE Screaming Frog course (from SCRATCH). I hope it's useful to everyone interested in one of the tools most used by the community.

On the channel I'll also talk about other tools, SEO analysis, interviews and more
Okay last question — not getting a lot of bites…

@screamingfrog is there any common issues with crawling Shopify sites?
Replying to @morgoozie
See a lot of dup content with the collections product URLs (tho they get canonicalised by default). Covered over here -… . Couple of other common bits, but nothing major.
In reply to @screamingfrog
15MB!?! Do such pages exist? Are they created on purpose or the result of bad coding?
Replying to @sorinadascalu
Would be taking longform content to new levels :-)
In reply to @jlmosebach
I see the same thing as well.
From the data I can see, I'd say it started this morning at some stage. Looked fine yesterday. Has happened previously and seemed to resolve itself fairly quickly.
In reply to @screamingfrog
@screamingfrog do you know if there's something going on with the GSC API integration? I'm getting a lot of false-positives for "URL is Not on Google" and "URL is on Google But Has Issues".
Replying to @jlmosebach
Seeing the same here, it's the API rather than us :-)
Replying to @diije @stonetatara
Yeah, use 'text only' which is without JS. But you can also use JS rendering, and then use the JavaScript tab and filters to see the differences -… :-)
In reply to @screamingfrog
Thank you, I did check the user guides. 😊 One thing whilst I have you Frog Person, if there are two H1s on a page (H1-1 and H1-2) is it ok if one of them is duplicate across the site or should they always differ? This was my understanding but throwing two H1s in has confused me!
Replying to @alexjsryder
Multiple h1s are not strictly an issue, but we generally recommend the classic approach of one unique h1 per page. Check out the 'multiple' filter description here -…
A question for my SEO expert friends… what’s the difference between H1-1 and H1-2 when it comes to header titles? I’m not talking about H1, H2, H3 etc.

It’s something I’ve noticed on a @screamingfrog search under the H1 section (different text on each) & google isn’t helping.
Replying to @alexjsryder
Hi Alex, I'll update the user guide to make this clearer. But h1-1 is the first h1 found on the page. h1-2, is the second h1 found on the page (if there is one!) :-)
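The h1-1/h1-2 numbering above is just document order. As a sketch of that idea (using Python's stdlib parser for illustration, not Screaming Frog's actual extraction code):

```python
from html.parser import HTMLParser

class H1Collector(HTMLParser):
    """Collect the text of every <h1> in document order."""
    def __init__(self):
        super().__init__()
        self.in_h1 = False
        self.h1s = []

    def handle_starttag(self, tag, attrs):
        if tag == "h1":
            self.in_h1 = True
            self.h1s.append("")

    def handle_endtag(self, tag):
        if tag == "h1":
            self.in_h1 = False

    def handle_data(self, data):
        if self.in_h1:
            self.h1s[-1] += data

def numbered_h1s(html):
    parser = H1Collector()
    parser.feed(html)
    # h1-1 is the first <h1> found, h1-2 the second, and so on.
    return {f"h1-{i}": text.strip() for i, text in enumerate(parser.h1s, start=1)}
```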
Googlebot will crawl and index the first 15MB of content per page -…
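A quick way to sanity-check a page against that limit is to compare its fetched byte size with the 15 MB cap; a minimal sketch follows (assuming, as the linked docs describe, that the cap applies to the fetched bytes of the resource).

```python
GOOGLEBOT_FETCH_CAP = 15 * 1024 * 1024  # 15 MB, per Google's documentation

def within_googlebot_cap(body: bytes) -> bool:
    """True if the fetched bytes fit inside the 15 MB Googlebot crawl cap."""
    return len(body) <= GOOGLEBOT_FETCH_CAP
```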
In reply to @screamingfrog
Config > add timestamp to exports? If true, just add yyyy-mm-dd_hh-mm-ss to the export. 😍😍🥰
Replying to @Kevin_Ellen_
'Config > Spider > Advanced', where we hide random stuff. Perfect!
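Until a feature like that exists, the requested yyyy-mm-dd_hh-mm-ss suffix is easy to bolt on after export with a rename step. A minimal sketch (the filename is hypothetical):

```python
from datetime import datetime

def timestamped(filename, when=None):
    """Append a yyyy-mm-dd_hh-mm-ss stamp before the file extension."""
    stamp = (when or datetime.now()).strftime("%Y-%m-%d_%H-%M-%S")
    stem, dot, ext = filename.rpartition(".")
    return f"{stem}_{stamp}.{ext}" if dot else f"{filename}_{stamp}"
```

So an export saved as `internal_all.csv` could be renamed with `os.rename(path, timestamped(path))` straight after the export finishes.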
In reply to @screamingfrog
Automated too ;) And apologies for the passive aggressive tweet. Was a tad busy.
Replying to @Kevin_Ellen_
Would be cool mate, no apologies necessary 💚
Made my MacBook do a @screamingfrog crawl. Will it survive? 🥲

Already hearing noises of dissent 😂
#SEO #Crawling
Replying to @siddhesh_asawa
Free heater with every crawl 🔥
Imagine @screamingfrog having the option of date-timestamp in exports <3
Replying to @Kevin_Ellen_
😮 Sounds like a good idea 😉
Replying to @ericksantamaria
Not sure what for, but cheers 🐸👍
In reply to @screamingfrog
@screamingfrog hi 🐸 ! Any bug detected in version 16.7 around Sitemaps analysis? I'm pretty sure it doesn't work as usual. There are lots of URLs included in the "All" filter, but not listed under either "in Sitemap" or "not in Sitemap". Not a single project but a common issue
Replying to @SorroGuarnio
Hello 👋 No, no bugs. So if it's a common issue for you, please double check -…. If that doesn't help, share some specific examples with us via :-)