2025-12-04 15:54:44 UTC

Max on Nostr: After brainstorming with pip about anonymous relay access, I wrote up how ...

After brainstorming with pip about anonymous relay access, I wrote up how zero-knowledge proofs could let users prove they're on a WoT trusted list, and that their score is good enough, without revealing which pubkey is theirs.

The Problem

Somewhere out there, a service provider maintains a Web of Trust list. Thousands of Nostr pubkeys, each assigned a score between 0 and 1, where lower numbers mean greater trust. Maybe it's a company running graph analysis on the social network. Maybe it's a respected community member curating by hand. Maybe it's your own client computing scores from your follow list. The source matters less than what the list represents: a set of identities someone has vouched for, to varying degrees.

Now a relay wants to use that list. Users with scores below 0.3 get priority message delivery. Those below 0.1 get access to premium features. The relay doesn't want to build its own reputation system - it just wants to accept users who've already been vetted by a provider it trusts.

The straightforward implementation has users authenticate with their pubkey. The relay checks the provider's list, finds the score, and grants access accordingly. But this creates a record of exactly who connected and when. The relay learns not just that a trusted user arrived, but which trusted user. Over time, a detailed picture emerges of individual behavior patterns.

What if there were another way? A user could prove they possess the private key for some pubkey on the trusted list, that their score falls below the required threshold, and that they haven't exceeded their rate limit - all without revealing which pubkey is theirs. The relay would learn only that an authorized user connected. Not which one. Not linkable to previous sessions. Just a cryptographic assurance of eligibility, then silence.

This is what zero-knowledge proofs make possible.

Why This Matters

Nostr relays struggle with spam and resource abuse. The protocol's openness, which makes it resilient to censorship, also makes it hospitable to bad actors. Various solutions have emerged: paid subscriptions, NIP-05 verification, invite codes, allowlists. Each works well enough at filtering, but each also creates linkage between identity and behavior. The relay knows who you are, and it remembers.

Zero-knowledge authentication breaks this coupling. A relay can restrict access to reputable users without learning their identities. Rate limiting can constrain abuse without building behavioral profiles. Quality control becomes possible without surveillance.

This isn't theoretical. The Waku messaging network already runs this architecture in production, handling thousands of authenticated but anonymous messages daily. The cryptography works. The question is how to bring it to Nostr.

The Architecture

Three cryptographic pieces combine to make anonymous authentication work, and understanding each one matters for seeing how the whole system fits together.

Merkle Trees and Set Membership

The WoT provider builds a Merkle tree. Each leaf contains a hash of three things bound together: a commitment to a user's pubkey, their trust score, and a random salt. The tree's root - a single short value that commits to the entire structure - gets published somewhere accessible, perhaps as a signed Nostr event from the provider's pubkey.

When a user wants to prove membership, they take their leaf data and the sibling hashes along the path from their leaf to the root. Anyone can verify that this path, when hashed upward, reproduces the published root. The path proves the leaf exists in the tree.
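The path check above can be sketched in a few lines. This is a minimal illustration using SHA-256 as a stand-in for Poseidon (a real circuit would use Poseidon, and the leaf layout here is an assumption matching the description above):

```python
import hashlib

def h(*parts: bytes) -> bytes:
    # Stand-in hash; the actual circuit would use Poseidon, not SHA-256.
    return hashlib.sha256(b"".join(parts)).digest()

def leaf(commitment: bytes, score: int, salt: bytes) -> bytes:
    # Bind identity commitment, scaled trust score, and salt into one leaf.
    return h(commitment, score.to_bytes(8, "big"), salt)

def verify_path(leaf_hash: bytes, path: list, root: bytes) -> bool:
    # path is a list of (sibling_hash, sibling_is_left) pairs, leaf to root.
    node = leaf_hash
    for sibling, sibling_is_left in path:
        node = h(sibling, node) if sibling_is_left else h(node, sibling)
    return node == root
```

Inside a ZK circuit, the same computation runs over private inputs: the verifier sees only the root and the proof, never the leaf or the path.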

Wrapped inside a zero-knowledge proof, this verification transforms into something more powerful. The user proves "I know a leaf and a valid path to this root" without revealing which leaf or which path. The verifier sees only that the proof checks out and that the root matches the one they trust.

A tree twenty levels deep can hold about a million leaves. Verifying a path means computing twenty hashes, which translates to roughly 4,600 constraints in a ZK circuit when using Poseidon, a hash function designed for exactly this kind of computation.

Threshold Proofs

Membership alone isn't enough. The relay also needs assurance that the user's score qualifies them for access. Since lower scores indicate greater trust, the proof must demonstrate that the score falls below whatever maximum the relay requires.

This check happens inside the same circuit that verifies membership. The user's score, already committed in their Merkle leaf, gets compared against the threshold. A less-than operation on 64-bit integers adds only about 65 constraints to the circuit - almost nothing compared to the Merkle verification.

Scores like "0.201718191" get scaled by a billion and stored as integers. The circuit proves the relationship holds without exposing the actual value. The relay learns that the user's score is low enough, but not how low.
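The fixed-point scaling is simple enough to show directly. A small sketch of the conversion the text describes (function name is illustrative):

```python
from decimal import Decimal

SCALE = 10**9  # scale by a billion, as described above

def scale_score(score: str) -> int:
    # Convert a decimal trust score into the integer a circuit can compare.
    return int(Decimal(score) * SCALE)

# A relay threshold of 0.3 becomes 300_000_000; the circuit proves
# scale_score(user_score) < scale_score(threshold) without revealing the former.
```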

Nullifiers and Rate Limiting

The third piece prevents abuse. Each proof includes a nullifier, a value derived deterministically from the user's secret and the current time epoch. If the same user generates two proofs in the same epoch, both nullifiers will be identical. Different users always produce different nullifiers. The same user in different epochs produces different nullifiers.

The relay keeps a set of nullifiers seen during the current epoch. A duplicate means someone is trying to exceed their allowance. But unlike a traditional rate limit tied to an IP address or account, this one reveals nothing about identity. The relay knows only that some limit was hit, not by whom.
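The relay-side bookkeeping amounts to a set that resets each epoch. A minimal sketch, again with SHA-256 standing in for the circuit's hash and with illustrative names:

```python
import hashlib

EPOCH_SECONDS = 60  # Waku-style one-minute epochs

def current_epoch(now: float) -> int:
    return int(now // EPOCH_SECONDS)

def nullifier(identity_secret: bytes, epoch: int) -> bytes:
    # Deterministic: same secret and epoch always yield the same value.
    return hashlib.sha256(identity_secret + epoch.to_bytes(8, "big")).digest()

class EpochLimiter:
    """Relay-side duplicate detection; stores only opaque nullifiers."""
    def __init__(self):
        self.epoch = None
        self.seen = set()

    def admit(self, null: bytes, epoch: int) -> bool:
        if epoch != self.epoch:          # new epoch: forget old nullifiers
            self.epoch, self.seen = epoch, set()
        if null in self.seen:
            return False                 # limit hit; identity still unknown
        self.seen.add(null)
        return True
```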

The Rate Limiting Nullifier protocol refines this further. Rather than allowing just one proof per epoch, it uses Shamir secret sharing to permit N messages. The user's secret gets encoded as a polynomial, and each proof reveals one point on that polynomial. Stay at or below N and your secret remains safe. Exceed it and anyone can interpolate the polynomial, recover your secret, and prove you cheated. Slashing becomes possible - the protocol can ban or penalize the offender.
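The interpolation trick is easiest to see in the simplest case: a degree-1 polynomial, so one message per epoch is safe and a second one leaks the secret. A toy sketch over an illustrative prime field (real RLN derives the evaluation point from the message hash; this simplification is mine):

```python
P = 2**61 - 1  # illustrative field modulus

def share(secret: int, slope: int, x: int) -> int:
    # Each message reveals one point (x, f(x)) on f(x) = secret + slope*x mod P.
    return (secret + slope * x) % P

def recover(x1: int, y1: int, x2: int, y2: int) -> int:
    # Two points on a line determine it; evaluate at x = 0 to get the secret.
    inv = pow((x2 - x1) % P, -1, P)
    slope = ((y2 - y1) * inv) % P
    return (y1 - slope * x1) % P
```

With one revealed point the secret stays information-theoretically hidden; with two, anyone can run recover and produce the slashing evidence.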

Waku's implementation uses one-minute epochs. Nullifier storage costs about 200 bytes per user per epoch, which means roughly two megabytes per minute for ten thousand active users. Trivial for any modern system.

Alternatives Worth Knowing

SNARKs aren't the only cryptographic path to anonymous authentication, and the alternatives illuminate different tradeoffs.

Ring signatures let a signer prove they control one of N keys without revealing which one. Monero built its privacy model on this foundation. The scheme needs no trusted setup and works directly with the secp256k1 keys Nostr already uses. But signature size grows with the ring, and the basic construction can't prove additional statements like "my score is below X." You get anonymity within a set, but nothing more.

Linkable ring signatures add detection of double-signing, which helps for one-person-one-vote scenarios. Recent constructions achieve sizes that grow only logarithmically with the ring. Still, they can't prove predicates on associated data like trust scores.

BBS+ signatures take a different approach entirely. A provider signs a credential containing various attributes - pubkey commitment, trust score, tier level - and the user can later selectively disclose only certain attributes while proving the signature remains valid. This works well for credentialing systems, but requires the provider to sign each user's credential individually rather than publishing a single Merkle root. The trust model differs in subtle but important ways.

For Nostr's needs, SNARKs offer the most flexibility. They handle arbitrary predicates on private data, produce compact proofs around 200 bytes with Groth16, and benefit from mature tooling and production deployments to learn from.

Bridging to Nostr's Keys

A practical problem arises from Nostr's cryptographic choices. The protocol uses secp256k1 with BIP-340 Schnorr signatures, and verifying these inside a SNARK is expensive. The elliptic curve arithmetic doesn't map naturally to the prime fields these proof systems work over. A direct implementation costs roughly 1.5 million constraints and takes over 45 seconds to prove.

The solution sidesteps the problem. Users sign a deterministic message with their Nostr key outside the circuit:

message = "nostr-wot-identity-v1:" + provider_pubkey
signature = nostr_sign(private_key, message)

From this signature, they derive a ZK-friendly commitment:

identity_secret = poseidon(sha256(signature + ":secret"))
identity_commitment = poseidon(identity_secret)

This commitment goes into the Merkle leaf instead of the raw pubkey. The circuit operates on commitments, collapsing the constraint count from 1.5 million to around 5,000.
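The derivation mirrors the two formulas above. A sketch with SHA-256 doubling as a Poseidon stand-in, and noting one subtlety the formulas imply: the Nostr signature itself must be produced deterministically (a fixed nonce rather than BIP-340's optional auxiliary randomness) so that re-registering yields the same commitment:

```python
import hashlib

def poseidon_stub(data: bytes) -> bytes:
    # Stand-in for Poseidon; a real circuit needs the actual Poseidon hash.
    return hashlib.sha256(b"poseidon:" + data).digest()

def derive_identity(signature: bytes) -> tuple:
    # identity_secret = poseidon(sha256(signature + ":secret"))
    identity_secret = poseidon_stub(
        hashlib.sha256(signature + b":secret").digest()
    )
    # identity_commitment = poseidon(identity_secret)
    identity_commitment = poseidon_stub(identity_secret)
    return identity_secret, identity_commitment
```

The signature input is the one described earlier: the user's Nostr key signing "nostr-wot-identity-v1:" plus the provider's pubkey.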

The binding remains cryptographically solid. Only someone controlling the Nostr private key can produce the signature needed to derive the correct commitment. Registration happens once per WoT provider. After that, all proofs use the efficient path.

The Protocol in Practice

The full system involves four phases, with responsibilities distributed between WoT providers, relays, and users.

A WoT provider computes trust scores for pubkeys it tracks. For each pubkey, it derives an identity commitment from a deterministic signature, then constructs a Merkle tree whose leaves bind together commitments, scores, and salts. The provider publishes the root as a signed Nostr event and makes the full tree data available through Blossom or similar storage. Different providers might use different scoring algorithms, trust different seed sets, update at different frequencies. The ecosystem can support many.

A relay configures which WoT roots it trusts. Maybe it accepts proofs against any of three major providers. Maybe it runs its own scoring and publishes its own root. The relay's policy determines what proofs it will verify, but it never sees the underlying user data - only roots and proofs.

A user who wants access registers once with each WoT provider whose root they might need. They sign the deterministic message with their Nostr key, derive their commitment, find their leaf in the provider's tree, and cache their Merkle path locally. This data stays on their device.

When connecting to a relay, the user checks which roots the relay accepts, picks one they have a path for, and constructs a proof. Private inputs include their identity secret, score, salt, and Merkle path. Public inputs specify the root, the maximum allowed score, the current epoch, and an application identifier. The proof outputs a nullifier for rate limiting.

The relay verifies the proof against the claimed root, confirms that root is one it trusts, checks the nullifier against its current-epoch set, and grants or denies access. Verification takes milliseconds. The user has proven everything necessary while revealing nothing beyond eligibility.
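The relay's decision procedure can be written as a short checklist. This is a sketch of that flow, not a wire format - the field names are illustrative, and the SNARK verifier is abstracted as a callback:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ProofRequest:
    proof: bytes       # Groth16 proof, around 200 bytes
    root: bytes        # Merkle root the proof was generated against
    max_score: int     # public threshold, scaled integer
    epoch: int         # epoch the prover claims
    app_id: bytes      # application identifier
    nullifier: bytes   # proof output used for rate limiting

def relay_admit(req, trusted_roots, seen_nullifiers, verify_snark, now_epoch):
    if req.root not in trusted_roots:      # root must be one the relay trusts
        return False
    if req.epoch != now_epoch:             # stale proofs can't be replayed
        return False
    if req.nullifier in seen_nullifiers:   # rate limit for this epoch
        return False
    if not verify_snark(req):              # the proof itself must check out
        return False
    seen_nullifiers.add(req.nullifier)
    return True
```

Everything the relay touches here is either public (root, threshold, epoch) or unlinkable (the nullifier); no pubkey appears anywhere in the flow.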

What You Could Build

Anonymous authentication opens design space that wasn't accessible before.

A relay could filter for trusted users without keeping logs of who connected. Rate limiting through nullifiers would prevent abuse, but no surveillance record would accumulate. Users would come and go as anonymous members of a trusted set.

A community could run polls where members prove eligibility and cast exactly one vote per proposal. The tally would reflect genuine sentiment without anyone learning how individuals voted. Neither the poll operator nor other members could connect votes to identities.

Multiple relays could share trust in the same WoT provider's root. Someone who misbehaves would see their score rise or find themselves removed from the tree, losing access everywhere at once. But no relay would share ban lists or user data with the others. Coordination would happen through the WoT layer, not through surveillance.

Journalists could maintain verified source lists. Sources would prove their verified status without revealing which source they are - not even to the journalist receiving the submission. Protection would come from mathematics rather than operational security.

Lightning payments could gate access. Users who've paid above some threshold would appear in a tree. They'd prove payment without revealing which specific payment was theirs, preserving financial privacy while enabling paid services.

Building This

The tooling exists and has matured considerably. Circom paired with circomlib provides circuit templates for Poseidon hashing, Merkle verification, and arithmetic comparisons. The Semaphore and RLN circuits from Privacy & Scaling Explorations are audited and battle-tested starting points.

For proof generation, snarkjs runs in browsers against circuits compiled to WebAssembly. Native applications can use rapidsnark in C++ or gnark in Go for faster proving. Mobile support is emerging through projects like mopro.

Tree management can use @zk-kit/incremental-merkle-tree for append-only structures. When scores change and leaves need updating in place, sparse Merkle trees handle it better.

Rate limiting implementation finds production code in vacp2p/zerokit, complete with WASM bindings.

The integration work remains: defining how WoT providers publish roots and trees, standardizing the proof format for relay consumption, smoothing registration and proof generation into seamless user experience. NIPs to write, libraries to build, rough edges to sand down.

The Deeper Point

Authentication has always assumed that verification requires disclosure. To prove you're authorized, you show credentials. To prove you're who you claim, you reveal identifying information. The verifier learns your identity as a side effect of checking your access rights.

Zero-knowledge proofs break this assumption. Verification proceeds without revelation. The proof demonstrates a statement's truth while keeping the underlying witness hidden. A relay can confirm you're trusted without learning who you are.

For Nostr, this means the protocol's core strength - pseudonymous, portable, user-controlled identity - can coexist with legitimate quality control. Relays don't have to choose between open access and operational sustainability, between filtering spam and respecting privacy. A third option exists now.

The cryptography has matured past the experimental stage. These systems run in production, have survived audits, and handle real traffic. What remains is the work of bringing them to Nostr - designing the protocols, building the tools, making the user experience invisible.

Infrastructure that knows you're welcome but can't tell who walked in. The math works. The implementations exist. The rest is engineering.