You should put these entries into your robots.txt file.
To block the Google search crawler for all of your site, use:
User-agent: Googlebot
Disallow: /
To block the Google AI crawler, use:
User-agent: Google-Extended
Disallow: /
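If you want to sanity-check those rules, here is a small Python sketch using the standard urllib.robotparser module; example.com stands in for your own site.

from urllib.robotparser import RobotFileParser

rules = """
User-agent: Googlebot
Disallow: /

User-agent: Google-Extended
Disallow: /
""".splitlines()

rp = RobotFileParser()
rp.parse(rules)

# Both Google crawlers should now be refused everywhere...
print(rp.can_fetch("Googlebot", "https://example.com/any/page"))        # False
print(rp.can_fetch("Google-Extended", "https://example.com/any/page"))  # False
# ...while crawlers you haven't listed are still allowed.
print(rp.can_fetch("MojeekBot", "https://example.com/any/page"))        # True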
CEO @ Mojeek | No-tracking, independent search engine
We’d love to build a distributed search engine, but I think it would be too slow. When you send us a query we go and search 8 billion+ pages, and bring back the top 10, 20… up to 1,000 results. For a good service we need to do that in 200ms, and thus one needs to centralise the index. It took years, several iterations and our carefully designed algos & architecture to make something so fast. No doubt Google, Bing, Yandex & Baidu jumped through similar hoops. Maybe I’m wrong, and/or someone can make it work with our API.
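To put rough numbers on that 200ms budget, here is an illustrative Python sketch; every figure in it is an assumption for the sake of argument, not a measurement from our systems.

BUDGET_MS = 200            # target time to answer a query

# Centralised index: one hop to the data centre, then fan-out over a fast internal network.
user_to_dc_rtt_ms = 40     # assumed user <-> data centre round trip
internal_rtt_ms = 1        # assumed shard-to-shard round trip inside the data centre
ranking_ms = 100           # assumed time to search and rank the shards in parallel
centralised_ms = user_to_dc_rtt_ms + internal_rtt_ms + ranking_ms

# Distributed index: the fan-out crosses the public internet, and you wait for the
# slowest of many peers (tail latency) before you can merge the top results.
peer_rtt_ms = 120          # assumed round trip to a typical remote node
straggler_factor = 2.5     # assumed penalty for the slowest responder in the fan-out
distributed_ms = user_to_dc_rtt_ms + peer_rtt_ms * straggler_factor + ranking_ms

print(f"centralised: ~{centralised_ms:.0f} ms vs budget {BUDGET_MS} ms")   # ~141 ms
print(f"distributed: ~{distributed_ms:.0f} ms vs budget {BUDGET_MS} ms")   # ~440 ms

Under those assumed numbers the centralised design fits the budget and the distributed one blows through it, which is the intuition behind keeping the index in one place.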
Excellent reporting on the trials: https://www.bigtechontrial.com/