<h1>Kagi Search KagiBot</h1>
<p>Kagibot is the web crawler for the Kagi search engine.</p>
<p>Requests from Kagibot set the User-Agent to:<br/>
<code>Mozilla/5.0 (compatible; Kagibot/1.0; +https://kagi.com/bot)</code>
</p>
<p>Requests will originate from the following IP addresses:<br/>
<code>
216.18.205.234 amd.kagibot.org <br/>
35.212.27.76 donna.kagibot.org<br/>
104.254.65.50 knet.kagibot.org<br/>
209.151.156.194 ogi.kagibot.org<br/>
</code>
</p>
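<p>If a client in your server logs claims to be Kagibot, one quick sanity check is to compare its source IP against the published list above. A minimal Python sketch (the IP set is copied from this page and may change over time; confirming the reverse DNS resolves to a <code>kagibot.org</code> hostname is a more durable check):</p>

```python
# Published Kagibot source IPs (copied from the list above; may change)
KAGIBOT_IPS = {
    "216.18.205.234",
    "35.212.27.76",
    "104.254.65.50",
    "209.151.156.194",
}

def is_kagibot_ip(remote_addr: str) -> bool:
    """Return True if a request's source IP is a published Kagibot address."""
    return remote_addr in KAGIBOT_IPS

print(is_kagibot_ip("216.18.205.234"))  # True
print(is_kagibot_ip("203.0.113.7"))     # False
```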
<p>Kagibot respects standard robots.txt directives that target it. For example, the following rules allow Kagibot to crawl all pages except those under /private/:<br/>
<code>
User-Agent: Kagibot<br/>
Allow: /<br/>
Disallow: /private/<br/>
</code>
</p>
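<p>The allow/disallow behavior above follows longest-match precedence: the rule with the most specific matching path prefix wins, as in the standard Robots Exclusion Protocol (RFC 9309). A small Python sketch of that evaluation, with the example's directives hard-coded (this models the standard semantics, not Kagibot's actual implementation):</p>

```python
def allowed(path, rules):
    """Evaluate robots.txt rules with longest-match precedence.

    rules is a list of ("allow" | "disallow", prefix) pairs; the rule
    whose prefix is the longest match for the path wins, and a path
    matched by no rule is allowed.
    """
    best_len, best_allow = -1, True
    for kind, prefix in rules:
        if path.startswith(prefix) and len(prefix) > best_len:
            best_len, best_allow = len(prefix), (kind == "allow")
    return best_allow

# The example above: allow everything except /private/
rules = [("allow", "/"), ("disallow", "/private/")]

print(allowed("/about", rules))      # True: "/" is the longest match
print(allowed("/private/x", rules))  # False: "/private/" is more specific
```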
<p>If there is no rule targeting Kagibot but there is one targeting Googlebot, Kagibot follows the Googlebot directives. For example, with the following robots.txt, Kagibot will fetch all pages except those under /private/:<br/>
<code>
User-Agent: *<br/>
Disallow: /<br/>
User-Agent: Googlebot<br/>
Allow: /<br/>
Disallow: /private/<br/>
</code>
</p>
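<p>The fallback order described above (Kagibot's own group, else Googlebot's, else the wildcard group) can be sketched as a simple group lookup. The <code>groups</code> dict below is a hypothetical pre-parsed form of the robots.txt example; Kagibot's real parser is not published:</p>

```python
def pick_group(groups, bot="kagibot"):
    """Choose which robots.txt group applies to Kagibot.

    groups maps a lowercased User-Agent token to that group's rules.
    Kagibot uses its own group if present, otherwise falls back to
    Googlebot's group, and finally to the wildcard '*' group.
    """
    for agent in (bot, "googlebot", "*"):
        if agent in groups:
            return groups[agent]
    return []  # no applicable rules: everything is allowed

# Hypothetical pre-parsed form of the robots.txt example above
groups = {
    "*": [("disallow", "/")],
    "googlebot": [("allow", "/"), ("disallow", "/private/")],
}

print(pick_group(groups))  # Kagibot inherits the Googlebot rules
```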
<p>If you have any questions, or if you think Kagibot is misbehaving on your site, please do not hesitate to contact us at vlad@kagi.com.</p>