Blocking spiders knocks your site off Google's SafeSearch

A Harvard report on Google's optional SafeSearch (censorship) facility found that many innocuous sites were blocked for no apparent reason. It turns out that if webmasters use a robots.txt file to stop Google's spiders from crawling parts of a site, Google excludes the site from SafeSearch results on the very reasonable grounds that if it can't crawl a site, it can't tell whether the content is safe or not. Read more at CNet.
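For context, a robots.txt file tells compliant crawlers which paths they may fetch. A minimal sketch of a rule that keeps Google's crawler out of one section of a site (the /private/ path here is purely hypothetical) might look like this:

    User-agent: Googlebot
    Disallow: /private/

Going by the report's finding, a directive like this can apparently be enough for the affected site to be excluded from SafeSearch results.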