When an AI agent makes an HTTP request to a third-party server, the server has a problem: who is this bot, and should I let it in?

Today, the answer is usually a User-Agent string: a line of text the bot writes about itself. Nothing stops a scraper from claiming to be Googlebot. Nothing stops a malicious crawler from claiming to be your research agent. IP allowlists are marginally better, but they break in cloud environments where addresses are shared and ephemeral.

Bot identity on the web is effectively honor-system. This has worked poorly for decades, and it works worse now that AI agents are making millions of autonomous requests per day.

![Before and after: User-Agent strings are spoofable and unverifiable. Request signing makes identity cryptographic, letting servers allowlist, block, or rate-limit by verified key.](/blog/request-signing-before-after.svg)

## How Everruns signs requests

Everruns now signs outbound HTTP requests using Ed25519 signatures, following [RFC 9421 (HTTP Message Signatures)](https://www.rfc-editor.org/rfc/rfc9421) and the [Web Bot Authentication Architecture](https://datatracker.ietf.org/doc/draft-meunier-web-bot-auth-architecture/) draft specification.

When an agent calls `web_fetch` to retrieve a page or call an API, the request is signed automatically at the HTTP layer. The agent code doesn't do anything different. Three headers are added to each outbound request:

- **Signature** - the Ed25519 signature itself, carried as a base64-encoded structured-field byte sequence
- **Signature-Input** - covered components, timestamps, key ID, a nonce for replay prevention, and a `tag="web-bot-auth"` marker
- **Signature-Agent** - the FQDN where the agent's public key can be retrieved
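To make the header shapes concrete, here is a sketch of what the three headers might look like on the wire, using RFC 9421's structured-field syntax. The key ID, domain, and signature value are placeholders, not output from Everruns:

```python
import base64
import secrets
import time

# All values here are placeholders; a real signature comes from the
# operator's Ed25519 private key, and the key ID and domain are examples.
key_id = "op-key-1"
agent_domain = "agent.example.com"
nonce = base64.urlsafe_b64encode(secrets.token_bytes(16)).rstrip(b"=").decode()
created = int(time.time())
expires = created + 300  # a short validity window limits replay

# RFC 9421 Signature-Input: covered components plus signature parameters.
covered = '("@method" "@authority" "@path")'
params = (
    f';created={created};expires={expires}'
    f';keyid="{key_id}";nonce="{nonce}";tag="web-bot-auth"'
)
signature_input = f"sig1={covered}{params}"

# Structured-field byte sequences are delimited by colons.
placeholder_sig = base64.b64encode(secrets.token_bytes(64)).decode()

headers = {
    "Signature-Input": signature_input,
    "Signature": f"sig1=:{placeholder_sig}:",
    "Signature-Agent": f'"{agent_domain}"',
}
for name, value in headers.items():
    print(f"{name}: {value}")
```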

![Request signing overview: the agent signs outbound requests with Ed25519, the target server fetches the agent's public key from a well-known directory, and verifies the signature.](/blog/request-signing-overview.svg)

The operator configures an Ed25519 signing seed. From that seed, Everruns derives a key pair at startup: the private key signs requests; the public key is published at a well-known JWKS endpoint so target servers can look it up. The target server fetches the public key from `https://<signature-agent>/.well-known/http-message-signatures-directory`, verifies the signature against the request contents, and then knows which operator made the request.
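The derivation and publication step can be sketched with Python's `cryptography` package: a 32-byte seed deterministically yields the key pair, and the public half is expressed as an RFC 8037-style JWKS entry. The seed and key ID here are illustrative, and the exact JWKS shape Everruns serves may differ:

```python
import base64
import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

# Illustrative 32-byte seed; a real deployment uses a securely generated secret.
seed = bytes(range(32))

private_key = Ed25519PrivateKey.from_private_bytes(seed)
public_key = private_key.public_key()
raw_public = public_key.public_bytes(Encoding.Raw, PublicFormat.Raw)

def b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

# Ed25519 keys in JWK form use kty "OKP" and crv "Ed25519" (RFC 8037).
jwks = {"keys": [{"kty": "OKP", "crv": "Ed25519",
                  "kid": "op-key-1", "x": b64url(raw_public)}]}
print(json.dumps(jwks, indent=2))
```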

If signing fails for any reason - clock errors, misconfiguration - the request proceeds unsigned with a warning. Signing is a proof-of-identity feature, not a gate that should block your agent's work.
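That fail-open policy can be sketched as a small wrapper. The names here are hypothetical; fetchkit's actual implementation is in Rust:

```python
import logging

logger = logging.getLogger("signing")

def sign_or_warn(headers: dict, signer) -> dict:
    """Try to sign the request; on any failure, send it unsigned with a warning.

    Hypothetical sketch: `signer` stands in for whatever maps request
    headers to the three signature headers.
    """
    try:
        headers.update(signer(headers))
    except Exception as exc:  # clock errors, missing key, misconfiguration, ...
        logger.warning("request signing failed, proceeding unsigned: %s", exc)
    return headers
```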

This isn't a custom protocol. RFC 9421 is an IETF standard finalized in 2024. The bot-specific profile and key discovery mechanism follow two additional IETF drafts. Verification libraries already exist for most languages, and the specification has been reviewed by people whose job is finding cryptographic flaws. Working verification examples in Python and Node.js are in the [request signing documentation](https://docs.everruns.com/advanced/request-signing/).
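As a simplified illustration of what a verifier does, the sketch below checks an Ed25519 signature over an RFC 9421 signature base using Python's `cryptography` package. It generates a throwaway key pair and skips the steps a real verifier performs - fetching the key from the directory, rebuilding the base from the actual request, and checking `created`, `expires`, and the nonce:

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# An example RFC 9421 signature base: covered components, one per line,
# ending with the @signature-params line. Values are illustrative.
signature_base = (
    '"@method": GET\n'
    '"@authority": api.example.com\n'
    '"@path": /data\n'
    '"@signature-params": ("@method" "@authority" "@path")'
    ';created=1700000000;keyid="op-key-1";tag="web-bot-auth"'
).encode()

# Throwaway key pair; in practice the public key comes from the agent's
# .well-known/http-message-signatures-directory endpoint.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

signature = private_key.sign(signature_base)

try:
    public_key.verify(signature, signature_base)
    print("signature valid")
except InvalidSignature:
    print("signature invalid")
```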

## Attribution: from guesswork to policy

Request signing proves that a particular key signed a particular request. The chain is: **request -> key -> operator**. The first link is cryptographic. The second - key to operator - is organizational, but that's the point. Once a server can verify which key made a request, it can decide what to do with that information.

A server can maintain an allowlist of known public keys. It can block keys that misbehave. It can require that keys be registered before granting access. None of this is possible when the only identity signal is a User-Agent string that anyone can set to anything.

This shifts bot management from guesswork to policy. Instead of trying to infer intent from traffic patterns, servers can make decisions based on verified identity: this key belongs to a known operator, that operator has agreed to rate limits, grant access. Attribution turns anonymous traffic into accountable traffic.
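A server-side policy check might then be as simple as a lookup keyed by the verified key ID. The table and decision names below are hypothetical, and the decisions only apply after the signature itself has been verified:

```python
from typing import Optional

# Hypothetical policy table keyed by verified key ID.
POLICIES = {
    "op-key-1": {"allowed": True, "rate_limit_rpm": 600},
    "op-key-2": {"allowed": False},  # blocked for past misbehavior
}

def decide(verified_key_id: Optional[str]) -> str:
    if verified_key_id is None:
        return "treat-as-anonymous"  # unsigned: fall back to old heuristics
    policy = POLICIES.get(verified_key_id)
    if policy is None:
        return "deny-unregistered"   # signed, but key isn't on the allowlist
    if not policy["allowed"]:
        return "deny-blocked"
    return f"allow-rpm-{policy['rate_limit_rpm']}"
```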

## Fetchkit: where the signing lives

The signing implementation is in [fetchkit](https://github.com/everruns/fetchkit), an open-source Rust library that handles outbound HTTP fetching for AI agents. Fetchkit is part of the Everruns ecosystem but works as a standalone tool - you can use it as a library (`fetchkit = "0.2"`), a CLI (`fetchkit fetch <url>`), or an MCP server (`fetchkit mcp`) for direct integration with AI tool chains.

Fetchkit was built to solve a collection of problems that come up when AI agents make HTTP requests: SSRF protection (private IP ranges are blocked by default), content conversion (HTML is automatically converted to markdown for LLM consumption), resource limits (10 MB cap, timeouts with partial content return), and now, request signing.
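As a rough illustration of the private-range check, here is the equivalent test using Python's stdlib `ipaddress` module; fetchkit's actual guard is Rust and also has to account for DNS resolution and redirects:

```python
import ipaddress

def is_blocked_host(host: str) -> bool:
    """Return True if the host is a literal address in a private range."""
    try:
        addr = ipaddress.ip_address(host)
    except ValueError:
        return False  # a hostname; a real guard checks resolved addresses too
    return addr.is_private or addr.is_loopback or addr.is_link_local

print(is_blocked_host("10.0.0.5"))          # True: RFC 1918 private range
print(is_blocked_host("169.254.169.254"))   # True: link-local metadata endpoint
print(is_blocked_host("93.184.216.34"))     # False: public address
```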

Signing is a transport-layer concern. Mixing it into agent logic would mean every agent author needs to understand Ed25519, structured headers, and key discovery. By pushing it down into fetchkit, it becomes infrastructure that agents inherit by default.

Fetchkit is MIT-licensed and accepts contributions at [github.com/everruns/fetchkit](https://github.com/everruns/fetchkit).

## What this enables

Request signing is infrastructure. By itself, it just attaches proofs to requests. What it enables is more interesting:

- **Per-key rate limiting** - servers can throttle by verified identity instead of IP address
- **Selective access** - APIs can grant verified agents access they would deny to anonymous traffic
- **Audit trails** - every request can be attributed to a specific operator, not just an IP range
- **Reputation systems** - keys that respect robots.txt and rate limits can build trust over time
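The first item above can be sketched as a token bucket keyed by verified key ID instead of client IP. Capacity and refill rate are illustrative:

```python
import time
from collections import defaultdict

CAPACITY = 10         # burst allowance per key
REFILL_PER_SEC = 1.0  # sustained requests per second

# Each bucket holds [remaining tokens, last refill timestamp].
buckets = defaultdict(lambda: [float(CAPACITY), time.monotonic()])

def allow(key_id: str) -> bool:
    """Admit the request if the key's bucket still has a token."""
    tokens, last = buckets[key_id]
    now = time.monotonic()
    tokens = min(float(CAPACITY), tokens + (now - last) * REFILL_PER_SEC)
    if tokens < 1.0:
        buckets[key_id] = [tokens, now]
        return False
    buckets[key_id] = [tokens - 1.0, now]
    return True
```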

The web is going to have a lot more autonomous agents on it. The infrastructure for managing that traffic needs to be better than "check the User-Agent header and hope."

---

Full configuration details, verification code examples, and key management are in the [request signing documentation](https://docs.everruns.com/advanced/request-signing/).