One crypto library. Seven products. Three markets.

7 products that make
the internet actually private.

Every product runs on the same cryptographic core: some are end-to-end encrypted, others cryptographically signed. Built once, packaged seven ways. Formal protocol models ship today for Auth, Vault, and AI Agent; full protocol details are available to qualified evaluators under NDA. Targeting consumers, developers, enterprises, and the entire country of India.

One crypto library powers everything.

Every product below uses the same battle-tested cryptographic primitives. We never rebuild the crypto — we repackage it for different use cases.

End-to-end
Server is blind
Device-bound
Key in Secure Enclave / Keystore
Forward secret
Past messages stay private
Formally verified
Auth, Vault, AI Agent shipped
Zero-retention
AI Agent relay, by protocol
Append-only audit
Warrant-canary-backed

Specific primitives, handshake sequences, and formal-model files are released to qualified evaluators under NDA via security@zoza.world. This page describes what our cryptography guarantees — not how to rebuild it.

01 — Messenger

The messenger where not even we can read your messages.

Full Signal Protocol E2E encryption. 1:1, groups, channels. Web + Android + Desktop. Every message carries a cryptographic proof users can verify themselves.

Alice
encrypts on her device
Zoza Server
sees only ciphertext blobs
Bob
decrypts on his device
Per-message forward secrecy
Every message gets a unique encryption key. Forward secrecy means past messages stay safe even if a key leaks. Break-in recovery auto-heals future messages.
Group broadcast (efficient)
One encrypt, fan-out to all members. The sender distributes a sender key to the group once, then encrypts each message a single time — O(1) encryption per message instead of O(n). Scales to thousands.
Per-message verification
Tap any message to see the cipher, key hash, ratchet step, IV, and ciphertext. Users can decrypt with their key and prove it. No other messenger offers this.
Multi-device E2E sync
Messages encrypted per-device with unique ratchet sessions. Self-sync echoes sent messages to all your devices using separate encrypted payloads.
City & World Spaces
Hyperlocal social feed (City) + global voice rooms (World). Twitter Spaces-like experience with persistent voice sessions that survive navigation.
No phone number required
Identity is key-pair based. No phone, no email required. Claim a username, generate keys, start chatting. Optional phone login for convenience.

Every message carries its own cryptographic proof.

Tap the lock icon on any chat and see the exact encryption state — cipher algorithm, current ratchet step, derived key hash, initialization vector, and full ciphertext. Paste the ciphertext into any authenticated encryption tool, decrypt with your private key, and prove the server cannot read it. No other messenger exposes this.

  • Lock banner shown at the start of every new conversation — users learn what E2E actually means
  • Per-message verify: see ratchet step, IV, ciphertext for any single message
  • Safety numbers for identity verification (like Signal's safety number)
  • Break-in recovery: if a key ever leaks, the next message auto-heals the session
  • Multi-device sync: each of your devices has its own ratchet — server never sees plaintext
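The per-message forward secrecy described above can be sketched as a symmetric hash ratchet. This is an illustrative toy in Python, not Zoza's actual (NDA'd) protocol: each message key is derived from a chain key through a one-way function, then the chain advances, so a leaked current key cannot recover past keys.

```python
import hashlib

def kdf(chain_key: bytes, label: bytes) -> bytes:
    """One-way step: SHA-256 over the chain key plus a domain label."""
    return hashlib.sha256(chain_key + label).digest()

def ratchet_step(chain_key: bytes) -> tuple:
    """Derive a one-time message key, then advance the chain."""
    message_key = kdf(chain_key, b"msg")
    next_chain_key = kdf(chain_key, b"chain")
    return message_key, next_chain_key

# Start from a shared secret (established by the handshake).
ck = hashlib.sha256(b"demo-shared-secret").digest()
keys = []
for _ in range(3):
    mk, ck = ratchet_step(ck)
    keys.append(mk)

# Every message gets a distinct key...
assert len(set(keys)) == 3
# ...and because kdf is one-way, leaking the current chain key
# reveals nothing about keys already used (forward secrecy).
```

Break-in recovery adds fresh Diffie-Hellman material on top of this chain so a compromised session heals on the next exchange; that half is omitted here.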
9:41 · 87%
R
Ravi Kumar
End-to-end encrypted · online
Messages & calls are end-to-end encrypted with per-message forward secrecy.
Tap to verify.
Got your message boss. Closing the design today.
10:32 AM
Perfect. Send the Figma link when done.
10:33 AM · 🔒
Will do. Btw the USDC safe setup — all ratchet steps verified?
10:34 AM
Ratchet step 147. Safety number matched. All green.
10:34 AM · 🔒✓✓
Message

Why existing messengers leak — study the threat model

"End-to-end encrypted" is marketing in most apps. Read the 6 concrete ways the platforms you use today expose your messages, and exactly how Zoza Messenger closes each one.

SMS / carrier-routed messaging

iMessage SMS fallback · RCS · Google Messages legacy
Attack surface SMS has zero encryption. Carriers log every message. The SS7 protocol lets anyone with access to telecom infrastructure intercept live. Blue bubble → green bubble on iMessage means your message just travelled in plaintext across the entire cellular network.
Zoza's fix Messages are never handed to carriers. Transport is WebSocket over TLS. The payload is encrypted with authenticated encryption under a unique per-message key before it leaves the device. The carrier never sees the sender, recipient, or content it would normally read.

Cloud backup in plaintext

WhatsApp ←→ Google Drive · iCloud Messages
Attack surface WhatsApp chats backed up to Google Drive were stored in plaintext for years (2018-2021). "Encrypted backup" is now opt-in, off by default. Google had full read access. Subpoena → every "E2E encrypted" conversation handed over.
Zoza's fix Cloud sync stores only ciphertext blobs. Backup is the exact same encrypted bytes that travel between devices. No "backup key" exists server-side — only the user's device holds the private key. Server dump = useless.

Server-side contact discovery

Every major messenger uploads your phone book
Attack surface WhatsApp, Telegram, Signal all upload hashed (sometimes plaintext) contacts to match friends. A server breach reveals the entire social graph — who knows whom, when they joined, calling patterns. The Cambridge Analytica of messaging.
Zoza's fix No phone number required. Identity is a key pair, not a phone number. Contact discovery is username-based with user consent per-contact. Bulk upload mode (if opted in) hashes contacts using a device-local salt; server sees only device-salted blobs, not the original number.
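The device-local salting described in the fix above can be sketched in a few lines. A hypothetical illustration, assuming an HMAC construction (the page does not specify the exact primitive): the salt never leaves the device, so the server cannot brute-force the small phone-number space back to real numbers, and different devices produce unlinkable blobs.

```python
import hashlib
import hmac
import secrets

# Device-local salt: generated once on-device, never uploaded.
device_salt = secrets.token_bytes(32)

def blind_contact(phone_number: str) -> str:
    """Server-visible blob: HMAC of the number under the device salt.
    Without the salt, the server cannot reverse the digest, even
    though the phone-number space is small."""
    return hmac.new(device_salt, phone_number.encode(),
                    hashlib.sha256).hexdigest()

upload = [blind_contact(n) for n in ["+911234567890", "+919876543210"]]

# Another device has a different salt, so its blob for the same
# number differs: the server cannot build a cross-user social graph.
other_salt = secrets.token_bytes(32)
other = hmac.new(other_salt, b"+911234567890", hashlib.sha256).hexdigest()
assert other != upload[0]
```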

Fake "E2E" (Telegram cloud chats)

Default Telegram chats = NOT encrypted end-to-end
Attack surface Telegram's default chat is client-server-client, not end-to-end. The server reads every message in plaintext. Secret Chats exist but are off by default, 1:1 only, no groups, no channels, no multi-device. 99% of Telegram users have zero E2E.
Zoza's fix There is no non-E2E mode. 1:1, groups, channels, voice spaces — all E2E by default, no opt-in. Every message everywhere carries a cryptographic proof. The server never has a plaintext access mode to toggle off.

Key-exchange MITM by the server

Dishonest server swaps keys during handshake
Attack surface If the server controls key distribution, it can hand Alice a fake "Bob public key" (actually the server's key), read everything, re-encrypt with Bob's real key, and forward. Undetectable unless users verify keys out-of-band. Most messengers don't surface this.
Zoza's fix The E2E handshake uses signed prekeys + identity key pinning (TOFU after first contact, verified on every session). Safety number changes trigger a visible warning. Users can verify the safety number in person or over a second channel. MITM by server = visible alert.
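The safety-number comparison in the fix above can be sketched as a hash over both identity public keys. An illustrative toy, assuming SHA-256 and a five-group rendering (Zoza's real encoding is not public); the key property is that both devices compute the identical code, and a server-swapped key changes it visibly.

```python
import hashlib

def safety_number(key_a: bytes, key_b: bytes) -> str:
    """Derive a short, human-comparable code from both identity keys.
    Sorting the keys makes the code identical on both devices,
    regardless of who computes it."""
    material = b"".join(sorted([key_a, key_b]))
    digest = hashlib.sha256(material).digest()
    # Render the first 10 bytes as five 4-digit groups.
    groups = [f"{int.from_bytes(digest[i:i+2], 'big') % 10000:04d}"
              for i in range(0, 10, 2)]
    return " ".join(groups)

alice_pub = b"\x01" * 32   # placeholder identity keys
bob_pub = b"\x02" * 32
mallory_pub = b"\x03" * 32  # the server's MITM key

# Both sides compute the same number, argument order irrelevant.
assert safety_number(alice_pub, bob_pub) == safety_number(bob_pub, alice_pub)
# A swapped key changes the number, so out-of-band comparison
# (in person or over a second channel) exposes the MITM.
assert safety_number(alice_pub, mallory_pub) != safety_number(alice_pub, bob_pub)
```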

Device theft → message history exposed

Stolen unlocked phone or laptop
Attack surface A lost phone with an unlocked screen exposes every past conversation. Most messengers store chat history in a plaintext SQLite file inside the app sandbox. Anyone with root / ADB / backup extraction tools can dump everything.
Zoza's fix Local database is SQLCipher-encrypted. The database key is held in the Android Keystore / iOS Keychain / OS keyring, unlocked only during an active session. Auto-lock after N minutes idle. Root / backup dump of app storage returns useless encrypted bytes.
LIVE
Status
Web + Android + Desktop
Platforms
Freemium
Model
2B+
TAM (users)
02 — Vault

Encrypt user-submitted data so only you can read it.

An SDK that encrypts form data on the user's browser before it leaves their device. CDNs, WAFs, proxies, and even your own infrastructure see only ciphertext. Only your server's private key decrypts.

User's browser
encrypts form data on-device
CDN / WAF
sees only ciphertext
Your Server
sees only ciphertext
Decrypt endpoint
only here, with your key
The Problem

Every web form leaks data at 5+ points

User submits sensitive data (SSN, medical, financial). It travels through CDN, load balancer, WAF, app server, database — all in plaintext after TLS terminates.

  • CDN (Cloudflare) sees plaintext after TLS
  • WAF rules read request body
  • App server logs contain form data
  • Database stores plaintext or "encrypted at rest"
  • Any breach at any layer = full exposure
Vault's Fix

Encrypt before it leaves the browser

Vault SDK encrypts each form field on the user's device using your service's public key. Only your private key (on your isolated decrypt endpoint) can read it.

  • CDN sees ciphertext only (padded to fixed blocks)
  • WAF sees opaque blobs, no PII
  • App server stores ciphertext, never plaintext
  • Database breach = useless encrypted blobs
  • Only the decrypt endpoint (your key) reads data
Healthcare Banks & Fintech Legal HR / Recruitment AI Companies Government
🔒 https://your-app.com/signup · Vault SDK active
Patient intake form
What the CDN, WAF, and your server ACTUALLY see ↓
Full name 🔒 encrypted client-side
8f2a9e4b7c1d5f06…dbfe3c7a91 (authenticated encryption, 384 bytes)
Aadhaar / SSN 🔒 encrypted client-side
c3b8d1f9a2e7406c…19cd76a08b (authenticated encryption, 384 bytes)
Diagnosis history 🔒 encrypted client-side
9a4f28d6c0b3e512…4e7bfac891 (authenticated encryption, 2,048 bytes fixed-block)
Phone 🔒 encrypted client-side
7b1e3f8a2c9d4065…ff2a9c81b6 (authenticated encryption, 384 bytes)
Submit → only YOUR decrypt endpoint can read this
Every field padded to a fixed block length. CDN + WAF + app server see only these ciphertexts.

Your forms encrypt BEFORE leaving the browser.

Drop <script src="vault.js"> into your form page. Add data-vault attributes to sensitive fields. Every value is encrypted on-device with authenticated encryption using your public key before the browser submits. Cloudflare, your WAF, your load balancer, your application logs, and your database all see opaque ciphertext. Only your isolated decrypt endpoint can read it.

  • Fixed-block padding defeats length-leak attacks (every ciphertext is the same size regardless of plaintext)
  • Constant-time response timing — attackers can't probe whether a field was "short" or "long"
  • Key rotation: new public key per tenant, rotated every 90 days, old keys kept for decryption
  • Your server stores ciphertext. Breach = attacker gets encrypted blobs that are useless without your private key
  • Compliance-ready: HIPAA, PCI-DSS, GDPR data-minimization by construction
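The fixed-block padding bullet above is the part most teams get wrong, so here is a minimal sketch. The 384-byte block size is taken from the mock form; the length-prefix scheme is an assumption for illustration, not Vault's documented wire format. Padding happens before encryption, so every ciphertext comes out the same size regardless of what the user typed.

```python
import secrets

BLOCK = 384  # illustrative fixed size, matching the mock form above

def pad_fixed(plaintext: bytes, block: int = BLOCK) -> bytes:
    """Length-prefix the plaintext, then fill with random bytes to a
    fixed block, so ciphertext length leaks nothing about content."""
    if len(plaintext) + 2 > block:
        raise ValueError("field too long for one block")
    header = len(plaintext).to_bytes(2, "big")
    filler = secrets.token_bytes(block - 2 - len(plaintext))
    return header + plaintext + filler

def unpad_fixed(padded: bytes) -> bytes:
    n = int.from_bytes(padded[:2], "big")
    return padded[2:2 + n]

short = pad_fixed(b"Jo")
long_ = pad_fixed(b"a much longer diagnosis history field")

# A two-character name and a long diagnosis pad to identical sizes,
# so a CDN observing ciphertext lengths learns nothing.
assert len(short) == len(long_) == BLOCK
assert unpad_fixed(long_) == b"a much longer diagnosis history field"
```

In the real flow this padded buffer, not the raw value, is what gets encrypted under the tenant public key.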

Where every web form leaks today — study the 6 failure points

"HTTPS is enough" is the most dangerous myth in web security. TLS terminates at the edge. After that, your sensitive data is plaintext in 5+ places before it reaches your database. Read each one. Decide whether your current stack defends against it.

TLS terminates at the CDN

Cloudflare · Fastly · Akamai
Attack surface HTTPS protects the wire, not the edge. Cloudflare decrypts every request, reads the full body (for WAF rules, caching, DDoS detection), then re-encrypts to your origin. A Cloudflare breach = plaintext of every POSTed form.
Vault's fix Form fields are encrypted inside the browser before the request is ever sent. Cloudflare sees only ciphertext inside the HTTPS tunnel. WAF rules can still inspect non-sensitive fields (flagged with data-vault="false"). The PII blob is opaque to the edge.

Application logs capture everything

Datadog · Sentry · nginx access logs
Attack surface Every error tracker captures the full request body by default. A validation exception stack trace includes the POST body. An ops engineer with Datadog access can grep for Aadhaar / SSN / medical codes in live logs. This has caused 100+ breaches.
Vault's fix App server literally cannot log plaintext — it never has it. Every field logged by Datadog / Sentry / nginx is already encrypted. PII redaction becomes a non-problem because the ciphertext is the redaction.

Database "encryption at rest" is theatrical

AWS RDS · Google Cloud SQL · on-prem PostgreSQL
Attack surface "Encrypted at rest" means the disk is encrypted. Any query the application runs returns plaintext. A SQL injection, a rogue DBA, a leaked read-only replica credential = full PII dump. RDS snapshots exported to S3 are commonly set to public by accident.
Vault's fix Data stays encrypted at the column level. A rogue DBA, a SQL injection, a leaked replica — all return ciphertext blobs. Decryption only happens at one audited endpoint that requires mTLS + signed intent from the front-end action.

Browser-side form-grab malware

MageCart · Magento skimmers · browser extensions
Attack surface Malicious JS injected into the page via compromised third-party scripts (analytics, chat widgets) reads form values on submit. British Airways lost 380K credit cards this way (2018, MageCart). Injected before Vault SDK activates = attacker wins.
Vault's fix Vault SDK uses an isolated iframe with strict CSP — user input never enters the parent page's DOM. Parent page's scripts (including compromised analytics) cannot read the iframe's field values. Submission goes direct from iframe to encrypt pipeline.

Subpoena / "lawful access" exposure

Government requests · internal investigations
Attack surface A subpoena to your cloud provider returns everything they hold. If your DB holds plaintext PII, the provider must comply. A "lawful intercept" doesn't need a breach — it just needs a court order and your cloud provider as the hand-over target.
Vault's fix Cloud provider holds ciphertext. Subpoena to the provider returns useless blobs. Subpoena must be served on YOU for your decrypt key, which is in your own KMS / HSM with access logging — you retain the audit trail and approval authority.

Insider threat — DBA with read access

Rogue employee · compromised admin account
Attack surface Capital One breach (2019, 100M records) was an insider with legitimate cloud access. Uber 2016 (57M riders) was a contractor. Insider threat is the #1 unsolved PII leak. Your database admins can read everything right now.
Vault's fix Decrypt key lives in a separate isolated service, never on the DBA's machine. Accessing the decrypt endpoint requires signed requests from the front-end. Bulk decrypt triggers alerts. A rogue DBA with DB access gets ciphertext; to decrypt, they must compromise a second, audited system.
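The "signed requests from the front-end" gate above can be sketched with a time-bound signed intent. A hypothetical illustration using HMAC as a stand-in for whatever signature scheme Vault actually uses; the key name and fields are invented for the example. The point is that a DBA with only database access holds neither the signing key nor the decrypt key.

```python
import hashlib
import hmac
import json
import time

FRONTEND_KEY = b"demo-frontend-signing-key"  # placeholder; real key in a KMS

def sign_intent(record_id: str, action: str) -> dict:
    """A front-end action produces a signed, time-bound decrypt intent."""
    payload = {"record": record_id, "action": action, "ts": int(time.time())}
    body = json.dumps(payload, sort_keys=True).encode()
    sig = hmac.new(FRONTEND_KEY, body, hashlib.sha256).hexdigest()
    return {"payload": payload, "sig": sig}

def verify_intent(req: dict, max_age_s: int = 60) -> bool:
    """The decrypt endpoint checks signature and freshness before
    releasing any plaintext."""
    body = json.dumps(req["payload"], sort_keys=True).encode()
    expected = hmac.new(FRONTEND_KEY, body, hashlib.sha256).hexdigest()
    good = hmac.compare_digest(expected, req["sig"])
    fresh = time.time() - req["payload"]["ts"] <= max_age_s
    return good and fresh

req = sign_intent("patient-8812", "view-diagnosis")
assert verify_intent(req)

# Tampering with the request (e.g. swapping the record ID to bulk-read
# someone else's data) breaks the signature.
req["payload"]["record"] = "patient-9999"
assert not verify_intent(req)
```

Bulk-decrypt alerting then becomes a rate check on verified intents at that single audited endpoint.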
TAM

$500M+

Pricing

Free during the pilot window · paid tiers TBD after trust is earned

Build estimate

~2,400 lines / 6 weeks

03 — Verify

Businesses sign every notification. Users verify instantly.

"DKIM for humans." Banks, e-commerce, and government cryptographically sign every SMS, email, and push. The Zoza app verifies the signature — green badge = real, red = scam. Kills SMS phishing.

SBI Bank
cryptographically signs notification
SMS arrives
"Rs 15,000 debited"
Zoza push
signed payload arrives
Verified
green badge = real SBI
Cryptographic signatures
Every notification signed with the business's registered private key. Signature is 64 bytes — compact, fast, unforgeable without the key.
Parallel push verification
SMS can't carry signatures (160 chars). Instead, when SBI sends an SMS, they also push a signed payload to Zoza. User sees both side-by-side. No Zoza push = likely scam.
Versioned registry with delta sync
Signed registry of verified entities. Delta sync means only changes download daily (~5KB). Full registry only on first install. Push revocation in seconds, not hours.

Users see a green badge on real bank messages. Red on scams.

Every SMS that arrives on the phone is paired with a parallel push to Zoza from the sender's registered server. That push carries the cryptographic signature of the message plus the sender's verified identity. If the signature verifies against the bank's registered public key → green badge. If the SMS arrived with no matching push → red "UNVERIFIED" banner. Users learn in one day: green = real, red = delete.

  • SMS alone can't carry signatures (160-char limit). The parallel-push trick solves it.
  • Signed registry of verified entities downloads via delta sync — ~5KB/day updates
  • Revocation in seconds: a compromised bank key is rotated; every user's registry updates within a minute
  • "DKIM for humans" — the email-signing trick, now for SMS, push, and WhatsApp Business
  • Banks register once, sign every notification, pay per 1000 verifications. Zero UX change for their customers.
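The ~5KB/day delta sync from the bullets above is plain data diffing; the crypto sits on top. A minimal sketch with invented registry entries: compute the change set between two registry versions, ship only that, and let the client verify a signature over the resulting state (signature verification omitted here since the primitives are NDA'd).

```python
def registry_delta(old: dict, new: dict) -> dict:
    """Minimal change set between two registry versions: entries to
    add or update, plus IDs to revoke."""
    upsert = {k: v for k, v in new.items() if old.get(k) != v}
    revoke = [k for k in old if k not in new]
    return {"upsert": upsert, "revoke": revoke}

# Illustrative entries; key labels echo the mock UI (sbi.ed25519.v3).
v1 = {"sbi":      {"key": "sbi.ed25519.v3", "domain": "onlinesbi.com"},
      "flipkart": {"key": "flipkart.ed25519.v1", "domain": "flipkart.com"}}
v2 = {"sbi":      {"key": "sbi.ed25519.v4", "domain": "onlinesbi.com"},
      "flipkart": {"key": "flipkart.ed25519.v1", "domain": "flipkart.com"}}

delta = registry_delta(v1, v2)

# Only the rotated SBI key ships; the unchanged Flipkart entry does not.
assert list(delta["upsert"]) == ["sbi"]
assert delta["revoke"] == []
```

Revocation is the same mechanism in reverse: a compromised key lands in `revoke`, and every client that pulls the next delta drops it.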
9:41 · 87%
Recent notifications
S
SBI Bank ✓ VERIFIED
just now · cryptographically signed
Rs 15,000 debited from A/c XX4521 at Flipkart. Balance: Rs 48,320. Not you? Tap to dispute.
sig: a3f2…b7c1 · key: sbi.ed25519.v3 · counter-signed by Zoza root
S
SBI-Alert UNVERIFIED
2m ago · SMS only, no signature
Dear customer, your account is suspended. Update KYC now: http://sbi-kyc.in/verify
✗ No Zoza signature. Not in verified registry. Likely phishing.
F
Flipkart ✓ VERIFIED
5m ago · cryptographically signed
Order #OD123 delivered. Rate your experience in the app.

SMS phishing stole Rs 7,100 cr from Indians in 2023 — here's why

SMS was designed in the 1980s, long before modern security. It has zero authentication, zero encryption, zero origin verification. Yet every bank in India sends OTPs, balance alerts, and transaction confirmations over it. Verify fixes SMS by adding a parallel cryptographic proof channel. Study each attack.

Sender-ID spoofing

$0.001 per fake SMS via SS7 gateways
Attack surface SMS sender IDs (HDFC-BK, SBI-UPI) are plaintext text fields with no authentication. Any SMS gateway in any country can claim any sender ID. Telecom regulators block obvious abuse but novel spoofs slip through daily. Attackers rent SS7 access for cents per message.
Verify's fix Sender identity is proved by a cryptographic signature on every notification, not by a text label. The SMS shows the text; the Zoza app checks the parallel signed push. If the signature verifies against the registered bank key, green badge. No signature = no badge = the user is warned.

URL phishing in SMS body

"Your KYC is suspended, update at sbi-verify.xyz"
Attack surface A scam SMS with a link like sbi-kyc.in looks official. Victim clicks, lands on a pixel-perfect clone of SBI's login. Enters credentials + OTP. Attacker drains the account via UPI in under 90 seconds. Indian fraud helpline (1930) registers 10,000+ of these daily.
Verify's fix Verify shows the official registered domain from the signed registry alongside the SMS. Real SBI message → "tap to open official SBI app or onlinesbi.com." Any URL in the SMS that doesn't match the registered domain is highlighted in red.

SIM swap OTP interception

Jack Dorsey (Twitter CEO) · Vitalik Buterin (both SIM-swapped)
Attack surface Attacker bribes a telecom employee or social-engineers a store to port the victim's number to a new SIM. Every subsequent OTP lands on the attacker's phone. Password reset + OTP = full account takeover in 30 minutes.
Verify's fix Verify ties the notification to the user's Zoza identity key, not their phone number. A SIM swap takes over the SIM; it doesn't take over the Zoza identity key (which lives in the user's device keystore). Banks stop trusting the SIM and start trusting the Zoza identity.

Social engineering the OTP

"Sir, please share the OTP for verification"
Attack surface Attacker calls pretending to be from the bank. Creates urgency. User reads out OTP. Attacker completes the fraud transaction. RBI data: >60% of UPI fraud in 2023 used social engineering, not technical hacking.
Verify's fix Real bank pushes arrive via Zoza with a cryptographic proof. Verify's onboarding teaches: "If a call asks you to read an OTP, it's never the bank." The bank's own app uses Verify-based challenge-response (see Auth section) — OTPs go away entirely, replaced with a signed approval inside the verified Zoza notification.

Bulk SMS impersonation at scale

Sent from rogue telco routes, 10K+ messages/minute
Attack surface Organized fraud rings rent SMS blasters from grey-market telecom routes. Send millions of fake "your card was charged, call this number" messages. ~0.1% respond; of those, ~5% get defrauded. Low conversion + massive volume = lucrative.
Verify's fix Bulk fakes carry no valid signature. Zoza app silently marks all of them UNVERIFIED. Users learn to ignore anything without a green badge. Scam ROI collapses because the 0.1% who used to respond now see red banners instead.

Fake UPI mandate / collect request

Attacker sends a "collect" instead of a "pay" request
Attack surface UPI "collect" requests look like payment confirmations to confused users. Attacker requests Rs 10,000 from victim via a fake "refund" pretext. Victim taps "approve" thinking they're receiving — instead they're sending. Rs 2,600 cr lost to this pattern in 2024 per NPCI.
Verify's fix Every UPI collect request gets a Verify wrapper that shows the verified identity of the requester and the direction of funds in big, unambiguous type: "YOU WILL PAY Rs 10,000 to [merchant]." Unknown merchant → red. Known scammer (reported + registry-updated) → block.
$200M–2B
TAM
~1,500 lines
Build size
4 weeks
Build time
Banks, govt, e-comm
Buyers
04 — Shield

Stop wallet drains. Verify everything cryptographically.

A browser extension + mobile app that checks every dApp, every signature request, and every address against a cryptographically signed registry. No more guessing if "Uniswap Support" on Telegram is real.

dApp loads
uniswap-airdrop.xyz
Shield checks
vs signed dApp registry
BLOCKED
not in verified registry

Before you integrate Shield — study the attack surface.

Shield is not a marketing product. It is a defensive stack built to stop specific, well-documented wallet-drain attack classes. If you are a wallet, exchange, dApp, or crypto app considering integration, do not integrate until you understand every attack below. Each one has real victims, real ledger losses, and real mechanics, and Shield's design choices only make sense against them. Read every case. Walk your security team through the attack steps. Then decide.

$2.2B
Stolen in 2024
Chainalysis Crypto Crime Report
$494M
Approval drain losses 2024
Scam Sniffer annual report
~332K
Wallets drained 2024
Scam Sniffer telemetry
$1.46B
Bybit single-TX loss
Feb 2025, blind signing exploit

The 8 attacks Shield defends against — studied in full detail

Each card below shows the exact attack mechanics on the left and Shield's exact defense on the right, step-by-step. This is the material your security team should read before any integration call.

01
Permit / Permit2 signature phishing Defense shipped
Class: Off-chain signature drain Loss in 2024: $150M+ Hardest to detect Victim wallets: MetaMask, Rabby, Rainbow
✗ How the attack works

The victim signs a message. No transaction is ever visible.

  1. Attacker buys a Google Ad for a fake Uniswap / 1inch / PancakeSwap domain (e.g. uniswap.claim-app.io) or posts a fake "airdrop eligible" X reply.
  2. Victim connects wallet. Site looks identical to the real dApp — same CSS, same logos, same RPC methods.
  3. Site calls eth_signTypedData_v4 with a Permit2 struct for USDC / USDT / WETH / stable LP tokens. This is an off-chain signature, not a transaction.
  4. MetaMask shows a confusing "Signature request" popup. No gas, no transaction, no clear consequence. Users click Sign thinking it's "just signing in."
  5. The signature grants the attacker's contract unlimited approval valid for 30 days, plus transferFrom authority.
  6. Attacker bundles the signature with a transferFrom call via a drainer bot — the victim's entire USDC / USDT balance moves to the attacker in one block.
  7. Victim had no visible transaction, no gas deducted, no pending TX in activity. They only notice when the balance is gone.
✓ How Shield stops it

Decode the signature. Match it to the domain. Block if mismatched.

  1. Shield browser extension hooks every eth_signTypedData and eth_signTypedData_v4 call before MetaMask sees it.
  2. Shield decodes the EIP-712 domain separator and the typed data struct. It identifies Permit, Permit2, and PermitForAll patterns.
  3. Shield checks the signed domain against the page origin. uniswap.claim-app.io asking for a Uniswap Permit = mismatch. Blocked.
  4. Shield verifies the target contract against the signed dApp registry (root signing key compiled into extension). Unknown contract = warning banner.
  5. Shield translates the signature into plain English: "This signature gives 0x742...4a8 permission to move your entire USDC balance for 30 days." No crypto jargon.
  6. Red full-screen modal requires double-confirm with a typed acknowledgment. Dark pattern dismissal is impossible.
  7. If user still signs, Shield pushes a 2-hour revocation window: one-tap "un-approve" before the drainer bot executes.
Real case Monkey Drainer ($24M across 2022-2023), Inferno Drainer ($87M in 2023-2024), Pink Drainer ($80M+) — all used the Permit2 pattern as their primary drain vector. Chainalysis attributes the rise of "drainer-as-a-service" kits almost entirely to this attack class. MetaMask added a warning in late 2024, but only for known-malicious contracts — novel attacker contracts bypass it.
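Defense steps 2 and 3 above, decode the EIP-712 domain and match it to the page origin, can be sketched in a few lines. The registry entries here are invented examples, not Shield's real signed registry, and a production check would also verify `verifyingContract` and `chainId`.

```python
import json

# Illustrative allow-list: which EIP-712 signing domains each page
# origin may legitimately request. (Example data, not the real registry.)
REGISTRY = {
    "app.uniswap.org": {"Permit2", "Uniswap V3"},
}

def check_sign_request(page_origin: str, typed_data_json: str) -> str:
    """Decode the typed-data payload and match its signing domain
    against the origin's registry entry."""
    typed = json.loads(typed_data_json)
    domain_name = typed.get("domain", {}).get("name", "<none>")
    allowed = REGISTRY.get(page_origin, set())
    if domain_name in allowed:
        return "ALLOW"
    return (f"BLOCK: origin {page_origin!r} is not registered to request "
            f"{domain_name!r} signatures")

permit2 = json.dumps({"domain": {"name": "Permit2", "chainId": 1},
                      "primaryType": "PermitSingle"})

# The real dApp passes; the phishing origin from the walkthrough fails.
assert check_sign_request("app.uniswap.org", permit2) == "ALLOW"
assert check_sign_request("uniswap.claim-app.io", permit2).startswith("BLOCK")
```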
02
setApprovalForAll — unlimited approval "ice phishing" Defense shipped
Class: ERC-20 / ERC-721 allowance abuse Most famous victim: BAYC holders, Premint users Single-wallet losses: $2M+ common
✗ How the attack works

A transaction that looks harmless grants total control.

  1. Attacker runs a fake mint page — "Free BAYC companion NFT, one-click claim."
  2. The "claim" button triggers a real transaction: setApprovalForAll(operator, true) on the victim's NFT collection contract.
  3. MetaMask shows a transaction popup with no dollar value displayed — because the function doesn't transfer anything directly. Users think "this is just the approval step."
  4. Victim signs. They now see a successful transaction on Etherscan. They move on.
  5. Days or weeks later, the attacker calls transferFrom on each NFT in the collection — the granted operator can move every NFT the victim owns in that contract, past, present, and future.
  6. For ERC-20 tokens, the equivalent is approve(operator, MAX_UINT256) — unlimited token allowance.
  7. Why "ice phishing": the threat freezes in place, dormant, until the attacker executes weeks later. The victim has already forgotten the transaction.
✓ How Shield stops it

Decode the calldata. Show the real consequence. Suggest minimum approval.

  1. Shield intercepts eth_sendTransaction and decodes the 4-byte function selector against known allowance-granting signatures.
  2. Detects setApprovalForAll, approve(MAX_UINT256), and Permit2 allowance patterns including newer variants.
  3. Shows a plain-English preview: "This grants 0x742...4a8 the power to take ALL 14 of your BAYC NFTs, now and forever, until revoked."
  4. Offers one-click "reduce to minimum" — replaces the calldata with an approval for only the specific NFT or token amount needed for the operation.
  5. Tags the operator address as Verified (OpenSea, Blur) or Unknown (uncategorized contract = red warning).
After any approval is granted, Shield adds it to a local "active approvals" dashboard with one-tap revoke via setApprovalForAll(op, false).
  7. Weekly email / push: "You have 7 unlimited approvals active. Review?" — even if the user forgets, Shield remembers.
Real case Premint collector drain (July 2022, $375K across 314 NFTs) — attackers compromised the Premint website and injected a malicious contract into the "collect to claim allowlist spot" flow. Every user who interacted that day granted setApprovalForAll to the attacker. Kevin Rose drain (Jan 2023, $1.1M in BAYC/Autoglyphs) — phishing site pretending to be SuperRare. One signature, entire collection gone.
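Defense steps 1 and 2 above, matching the 4-byte selector and flagging unlimited grants, can be sketched with the two well-known selectors involved (`approve` = 0x095ea7b3, `setApprovalForAll` = 0xa22cb465). This is an illustrative decoder, not Shield's implementation; real calldata decoding would use a full ABI library.

```python
MAX_UINT256 = 2**256 - 1

# Well-known 4-byte selectors for allowance-granting functions.
SELECTORS = {
    "095ea7b3": "approve(address,uint256)",
    "a22cb465": "setApprovalForAll(address,bool)",
}

def classify_calldata(calldata_hex: str) -> str:
    """Decode the selector and flag unlimited or blanket approvals."""
    data = calldata_hex.removeprefix("0x")
    sig = SELECTORS.get(data[:8])
    if sig is None:
        return "not an approval call"
    operator = "0x" + data[8 + 24:8 + 64]   # word 1: address, last 20 bytes
    arg2 = int(data[8 + 64:8 + 128], 16)    # word 2: amount or bool
    if sig.startswith("approve") and arg2 == MAX_UINT256:
        return f"DANGER: unlimited token approval to {operator}"
    if sig.startswith("setApprovalForAll") and arg2 == 1:
        return f"DANGER: blanket NFT approval to {operator}"
    return f"approval to {operator} (bounded)"

# approve(attacker, MAX_UINT256), as a drainer would submit it.
calldata = ("0x095ea7b3"
            + "742d35cc6634c0532925a3b844bc454e4438f44e".rjust(64, "0")
            + "f" * 64)
assert classify_calldata(calldata).startswith("DANGER: unlimited")
```

The "reduce to minimum" step then rewrites word 2 to the exact amount the operation needs before the transaction is re-submitted.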
03
Address poisoning — history pollution drain Clipboard guard shipped · contact book planned
Class: Wallet-history exploitation Biggest loss: $68M (single whale wallet, May 2024) Attacker cost: less than $100 in gas
✗ How the attack works

The attacker never steals your key. You send to them voluntarily.

  1. Attacker monitors on-chain transfers involving the victim. Identifies recipients the victim regularly pays (exchange deposit, personal cold wallet).
Uses a vanity-address generator (Profanity-style GPU brute force) to create an address whose first 4 and last 4 characters exactly match the target — both abbreviate to 0xa4b1…7f3e; only the hidden middle bytes differ.
  3. Attacker sends a tiny "poison" transaction (1 wei, or zero-value token) from the lookalike address to the victim. This plants the attacker's address in the victim's transaction history.
  4. Next time the victim wants to send to their exchange, they open MetaMask, copy the address from their own recent TX history, and paste.
  5. Most wallet UIs abbreviate addresses as 0xa4b1…7f3e. Victim visually verifies first/last 4 chars. Match. Sends.
  6. Funds go to the attacker. The address looked correct. History is compromised because on-chain data is permanent and anyone can push to it.
  7. Attacker drains within seconds via MEV routing to avoid any chargeback.
✓ How Shield stops it

Treat wallet history as untrusted. Use a personal signed contact book.

  1. Shield maintains an on-device personal contact book, signed with the user's Zoza identity key. Only the user can add a contact.
  2. When a send transaction is intercepted, Shield compares the recipient against the signed contact book — exact byte match, not prefix/suffix.
  3. If recipient is in the contact book: green "Verified recipient — Binance deposit" badge.
  4. If recipient is NOT in the contact book but appears in recent on-chain history: yellow warning "Never explicitly added. Are you sure?"
  5. Shield flags any zero-value or dust incoming transactions from new addresses as possible poison attempt and quarantines them from the "recent recipients" suggestion list.
  6. For exchange deposits, Shield integrates with a verified exchange-address registry (signed by exchange signing keys) — pasting a Binance address shows which exchange it belongs to.
  7. Copy-paste interception: Shield hashes the clipboard content and warns if the address on clipboard differs from what the user visually sees in the wallet UI.
Real case Whale wallet, May 3, 2024 — $68M in wrapped ETH lost to a single address-poisoning transfer. The victim's transaction history contained a recent entry from a poisoned lookalike address; the operator copy-pasted from history and sent 1,155 WETH (~$68M) to the attacker in one transaction. The attacker returned the funds after negotiation, but the incident became the canonical address-poisoning case study. Every major wallet added "dust warning" features after this.
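Defense step 2 above, exact byte match against an explicit contact book instead of prefix/suffix eyeballing, is small enough to sketch directly. The addresses and contact names are invented for the example; the "naive check" models what a human does with an abbreviated address.

```python
def abbreviate(addr: str) -> str:
    """Wallet-UI style abbreviation: 0xa4b1…7f3e."""
    return addr[:6] + "…" + addr[-4:]

def naive_check(shown: str, candidate: str) -> bool:
    """What a human does: compare only the visible first/last chars."""
    return abbreviate(shown) == abbreviate(candidate)

def contact_book_check(book: dict, candidate: str) -> str:
    """Shield-style check: exact byte match against contacts the user
    explicitly added; everything else is flagged."""
    for name, addr in book.items():
        if addr.lower() == candidate.lower():
            return f"VERIFIED: {name}"
    return "WARNING: not in your contact book"

real = "0xa4b1" + "11" * 16 + "7f3e"   # illustrative deposit address
fake = "0xa4b1" + "22" * 16 + "7f3e"   # vanity lookalike, same ends
book = {"Binance deposit": real}

assert naive_check(real, fake)                            # human is fooled
assert contact_book_check(book, fake).startswith("WARNING")
assert contact_book_check(book, real).startswith("VERIFIED")
```

This is why the contact book must be user-populated and signed: on-chain history is attacker-writable, the contact book is not.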
04
Fake airdrop claim pages — drainer kits Defense shipped
Class: Social + UX deception Drainer kits: Inferno, Monkey, Pink, Angel Kit rental: ~25% of drained funds to kit author
✗ How the attack works

Real-looking claim page. Malicious JS. Automated drain pipeline.

  1. Attacker rents a drainer kit (e.g. Inferno Drainer) for 25-30% of proceeds. Kit provides the backend drain logic, token-prioritization algorithm, and MEV submission.
  2. Attacker deploys a lookalike domain: arbitrum-airdrop.foundation, layerzero.claim, connext-rewards.xyz, etc. Often typosquats like lnch.io vs lens.xyz.
  3. Attacker buys reply guys on X, promotes the link, or compromises a trusted account (Vitalik's old account was hacked to promote a drainer).
  4. Victim visits, connects wallet. Drainer kit enumerates every ERC-20 balance, every NFT, every staked position in the victim's wallet via RPC calls.
  5. Drainer ranks assets by USD value and picks the optimal drain order (highest value first, most gas-efficient drain path).
  6. Victim clicks "Claim airdrop." Kit presents a chain of signature requests + approvals + transferFroms, each disguised as "verifying eligibility" / "accepting terms."
Each signed message drains one asset. The drain happens in real time — by the time the victim realizes, the wallet is zeroed.
✓ How Shield stops it

Signed dApp registry. Domain-cryptographic binding. Any unknown = block.

  1. Shield maintains a signed registry of verified dApp domains. Every entry signed with the project's signing key, counter-signed by Zoza's root key.
  2. Root signing public key compiled into the browser extension binary. No network fetch = no MITM.
  3. Shield hooks window.ethereum.request before the page's script can access it. Every wallet-connection attempt is gated by registry lookup first.
  4. If the page's origin is NOT in the signed registry, Shield shows a red full-screen block: "Unverified dApp. No project has claimed this domain. Proceed only if you are certain."
  5. If the domain is a typosquat of a known registered project (Levenshtein distance ≤ 2), Shield shows extra-red warning: "This looks like 'Uniswap' but is NOT the verified uniswap.org. Phishing likely."
  6. Registry updates via Chrome auto-update (signed manifest). Revocation in hours, not days.
  7. If a real airdrop is happening, the real project has a green-badge entry in the registry 48 hours before launch — users learn to trust green, refuse red.
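The registry gate in steps 3-5 can be sketched in a few lines. This is an illustrative TypeScript toy, not Shield's extension code — `gateOrigin`, the plain `Set` registry, and the edit-distance threshold are stand-ins, and a real lookup would verify the registry's signature against the compiled-in root key first:

```typescript
type Verdict = "verified" | "typosquat" | "unknown";

// Classic dynamic-programming edit distance (Levenshtein).
function levenshtein(a: string, b: string): number {
  const dp = Array.from({ length: a.length + 1 }, (_, i) => [i, ...Array(b.length).fill(0)]);
  for (let j = 0; j <= b.length; j++) dp[0][j] = j;
  for (let i = 1; i <= a.length; i++)
    for (let j = 1; j <= b.length; j++)
      dp[i][j] = Math.min(
        dp[i - 1][j] + 1,                                      // deletion
        dp[i][j - 1] + 1,                                      // insertion
        dp[i - 1][j - 1] + (a[i - 1] === b[j - 1] ? 0 : 1),    // substitution
      );
  return dp[a.length][b.length];
}

// Gate a wallet-connection attempt: exact registry hit = verified,
// near-miss (edit distance <= 2) = likely typosquat, anything else = unknown.
function gateOrigin(origin: string, registry: Set<string>): Verdict {
  if (registry.has(origin)) return "verified";
  for (const known of registry) {
    if (levenshtein(origin, known) <= 2) return "typosquat";
  }
  return "unknown";
}
```

The "typosquat" verdict maps to the extra-red warning in step 5; "unknown" maps to the full-screen block in step 4.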
Real case Inferno Drainer — $87M stolen, 137K+ victims before shutdown (Nov 2023). Operated as SaaS. Author took 20-30%. Dozens of sub-attackers deployed hundreds of lookalike domains. Connext fake-claim drainer (2023) — drained $1.1M from users trying to claim the real NEXT airdrop. LayerZero fake-claim drainer (June 2024) — during the real ZRO airdrop, dozens of typosquat domains captured ~$4M from users who couldn't tell the real claim page from the fake.
05
Discord / Telegram "support" impersonation Planned
Class: Social engineering + OOB deception Target: Confused or new users Conversion rate: 2-5% of DMs = drain
✗ How the attack works

The conversation happens off-chain. The drain happens on-chain.

  1. User posts "my swap failed on Uniswap" in the real Uniswap Discord.
  2. Attacker bot scrapes the message, DMs the user within seconds using a handle like Uniswap Support, Uniswap | Helpdesk. Avatar is stolen from the real team.
  3. Attacker acts helpful, builds rapport: "Sorry about that, happens with certain routers. Can you share the TX hash?"
  4. Escalates: "I need to run a diagnostic on your wallet. Please visit our support portal: uniswap.support-help.io."
  5. Support portal is actually a drainer page. Or attacker asks victim to "validate your wallet" by going to validate.walletconnect.io — fake domain, triggers Permit2 drain.
  6. Alternatively: attacker tricks victim into revealing the seed phrase "for our KYC recovery protocol."
  7. Real projects never DM first. But confused / panicked users override this rule.
✓ How Shield stops it

Move support into Zoza's E2E-encrypted channels. No plaintext DMs exist.

  1. Real projects deploy an official Zoza channel, identity-bound to their signed signing key.
  2. Shield's browser extension detects when the user is on Discord / Telegram and there's a known verified Zoza channel for that project. Shows a persistent banner: "Real Uniswap support → open Zoza channel (verified)."
  3. In Zoza's chat, every Uniswap team member has a green-verified badge backed by the project's signed team roster. Impostors cannot forge the badge — they'd need the project's private key.
  4. Channel messages are E2E encrypted under the project's Channel Key — only real subscribers can participate, and every message carries a cryptographic proof.
  5. Shield blocks any attempted wallet-connect request from Discord DM links to unverified domains.
  6. If a user tries to interact with a lookalike uniswap.support-help.io, the registry check (Attack 04 defense) fires — red block.
  7. Education layer: when Shield first installs, an onboarding card teaches the rule "No real project DMs first. Ever."
Real case NFT Discord mass compromises (2022-2024) — Yuga Labs, OtherDeed, Orangie, Premint, Azuki Discord servers hijacked via compromised mod/admin accounts. Fake announcements pushed via webhook. Users lost an aggregate $200M+ across these incidents. Every case: plaintext communication channel, no cryptographic identity binding, no way for users to tell real from fake.
06
Clipboard hijacker malware Defense shipped
Class: Local-machine malware Delivery: Cracked software, fake installers Detection rate: very low (<10%)
✗ How the attack works

Malware watches your clipboard. Rewrites addresses invisibly.

  1. User installs cracked Photoshop, pirated software, a fake MetaMask desktop app, or clicks a Discord "game beta test" link.
  2. Installer drops a background process (ClipBanker, Laplas Clipper variants). Often signed with stolen code-signing certs to avoid Windows Defender.
  3. Malware monitors the clipboard via AddClipboardFormatListener (Windows) or equivalent on macOS / Linux.
  4. When user copies any text, malware checks: does it match a wallet address regex? 0x[a-fA-F0-9]{40} for EVM, similar for BTC/SOL/LTC/TRX.
  5. If yes, malware replaces the clipboard content with an attacker-owned address chosen to match the first and last 4 chars of the original (uses a pre-generated pool).
  6. User pastes into MetaMask / exchange / wallet. Abbreviated display shows 0xa4b1…7f3e — matches what they copied. They don't notice.
  7. Send completes. Funds go to attacker. User has no idea malware was involved — they never visited a phishing site.
✓ How Shield stops it

Verify what's sent matches what was intended. Never trust the clipboard.

  1. Shield watches every eth_sendTransaction the wallet is about to submit and captures the destination address.
  2. Shield reads the clipboard at the time of paste (with user permission) and hashes the content.
  3. When the transaction is about to go out, Shield compares the destination byte-for-byte against the last clipboard hash — mismatch = red warning: "Address changed between copy and paste. Clipboard malware possible."
  4. For large transfers (> $1000 equivalent), Shield requires an out-of-band verification: user must manually type the last 6 chars of the address, not copy-paste.
  5. Shield integrates with the signed exchange-address registry (Attack 03 defense) — pasting a Binance / Coinbase address shows the owning exchange + a verified badge. Any tampered char = badge disappears.
  6. Contact-book comparison: if the pasted address doesn't match the signed contact-book entry recorded for this recipient in a previous session, a warning fires.
  7. Hardware wallet users: Shield supports signing the destination on-device via the HW display, a secondary channel the clipboard malware cannot touch.
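Steps 1-3 boil down to comparing a digest of what the user copied against what the wallet is about to send. A minimal TypeScript sketch — the function names are invented, and the tiny FNV-1a hash stands in for whatever digest the real extension uses:

```typescript
// FNV-1a 32-bit hash, an illustrative stand-in for the extension's digest.
function fnv1a(s: string): number {
  let h = 0x811c9dc5;
  for (let i = 0; i < s.length; i++) {
    h ^= s.charCodeAt(i);
    h = Math.imul(h, 0x01000193) >>> 0;
  }
  return h;
}

let lastClipboardHash: number | null = null;

// Called when the user copies an address (with clipboard permission).
function onCopy(text: string): void {
  lastClipboardHash = fnv1a(text.trim().toLowerCase());
}

// Called just before eth_sendTransaction goes out: a mismatch between the
// destination and what was last copied suggests clipboard malware rewrote it.
function checkDestination(to: string): "ok" | "clipboard-mismatch" {
  if (lastClipboardHash === null) return "ok"; // nothing copied this session
  return fnv1a(to.trim().toLowerCase()) === lastClipboardHash
    ? "ok"
    : "clipboard-mismatch";
}
```

Because the malware's replacement address preserves only the first and last 4 characters, a full-content hash comparison catches the swap even when the abbreviated display looks identical.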
Real case Laplas Clipper (2022-ongoing) — sold on Russian forums for $29/month. Hundreds of variants. Estimated $20M+ stolen in aggregate. ClipBanker / ClipperBot campaigns deliver via YouTube "how to mine free crypto" tutorial videos that link to cracked mining software. Victims install voluntarily. No phishing site required.
07
Blind signing on hardware wallets Plain-English preview shipped · Safe Guard planned
Class: HW wallet UX failure Canonical case: Bybit, $1.46B (Feb 2025) Affected: Multisig exchanges, DAO treasuries
✗ How the attack works

The Ledger/Trezor screen shows hex. Humans can't verify hex.

  1. Exchange / DAO uses a Gnosis Safe multisig. Signer uses Ledger / Trezor + Safe UI in browser.
  2. Attacker compromises the Safe UI (via supply chain — see Attack 08), a browser extension, or the signer's development environment.
  3. Compromised UI constructs a malicious transaction that upgrades the Safe's implementation to an attacker-controlled contract OR transfers all assets.
  4. UI displays what LOOKS like a benign transaction (e.g. "approve 100 USDC to exchange operator"). Signer reviews on the laptop screen, sees the fake.
  5. UI sends the ACTUAL malicious calldata to the hardware wallet for signing.
  6. Ledger screen displays raw calldata hex: 0xa4 61 77 be 00 00 00 00… plus a "Blind signing required — enable in settings" warning.
  7. Signer clicks through. Signs. Attacker's malicious upgrade / drain is authorized.
✓ How Shield stops it

Decode calldata outside the compromised UI. Cross-verify on a second device.

  1. Shield provides a standalone calldata decoder (web + mobile) at zoza.world/decode. Paste the raw TX hex, see human-readable output.
  2. Calldata is decoded against a locally-bundled ABI database — no network lookup, no MITM risk.
  3. Decoded view shows: "This transaction REPLACES the Safe's implementation contract with 0x742… and transfers all 401,347 ETH to 0x9bf…" — in plain English.
  4. For multisig treasuries, Shield's Safe Guard Module is a pre-deployed smart contract that vetoes any transaction whose calldata doesn't match a pre-registered signed intent.
  5. Signer submits intent (amount + destination) via Zoza mobile app, cryptographically signed. Intent is posted on-chain to the Safe Guard contract.
  6. When Safe executes, the Guard Module checks: does the calldata match the signed intent? If no, revert.
  7. Bybit would have been stopped at step 6 — the attacker's implementation-swap calldata would not match any signed intent.
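The veto logic in step 6 can be illustrated off-chain. In production this is an on-chain Safe Guard contract; the sketch below is hedged TypeScript with invented field names, showing only the decision rule — calldata must match a pre-registered signed intent, and implementation-swapping delegatecalls can never match one:

```typescript
interface SignedIntent {
  to: string;       // pre-registered destination
  valueWei: bigint; // pre-registered amount
}

// Veto rule: the transaction about to execute must match a signed intent
// posted ahead of time; anything else reverts.
function guardCheck(
  tx: { to: string; valueWei: bigint; isDelegatecall: boolean },
  intents: SignedIntent[],
): "execute" | "revert" {
  if (tx.isDelegatecall) return "revert"; // implementation swaps never match an intent
  const match = intents.some(
    (i) => i.to.toLowerCase() === tx.to.toLowerCase() && i.valueWei === tx.valueWei,
  );
  return match ? "execute" : "revert";
}
```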
Real case Bybit hack, February 21, 2025 — $1.46B in ETH lost in a single transaction. Lazarus Group (North Korea) compromised the Safe UI used by Bybit's cold-wallet multisig signers. The UI displayed a routine transaction; the hardware wallets received malicious implementation-swap calldata. All 3 signers blind-signed. Funds drained instantly. Radiant Capital, Oct 2024, $50M — same pattern. WazirX, July 2024, $230M — same pattern. Blind signing is the single highest-leverage failure mode in institutional crypto.
08
Supply chain injection into wallet extensions / SDKs Planned
Class: Code-supply-chain compromise Canonical case: Ledger Connect Kit, Dec 2023 Blast radius: every dApp using the library
✗ How the attack works

One compromised library. Every dApp poisoned at once.

  1. Attacker compromises the NPM / CDN publish credentials of a widely-used wallet connector library (Ledger Connect Kit, Web3Modal, RainbowKit, etc.).
  2. Pushes a minor version bump with a drainer payload embedded in the minified bundle.
  3. Every dApp that loads the library via CDN (e.g. unpkg.com/@ledgerhq/connect-kit) immediately serves the malicious version to all its users.
  4. The drainer overrides the wallet-connect flow: instead of a normal "Connect" call, it injects a drainer modal with Permit2 signature phishing.
  5. Users trust the dApp's domain (Zapper, SushiSwap, Balancer, Revoke.cash — all real verified sites). They see a familiar wallet-connect UI. They sign.
  6. Drain happens inside trusted territory. Shield's domain registry (Attack 04 defense) can't help — the domain IS verified.
  7. Only defense: detect the malicious JS payload itself, or verify the integrity of the loaded library.
✓ How Shield stops it

Subresource integrity + per-library signing + in-browser JS analysis.

  1. Shield maintains a registry of known wallet-connector libraries with their expected SHA-384 hashes per version.
  2. When a registered dApp loads a wallet connector, Shield verifies the loaded script's hash against the expected value. Mismatch = block.
  3. Shield detects drainer patterns in loaded JS: dynamic Permit2 struct construction, mass RPC enumeration of token balances, signature request storms.
  4. For registered dApps, Shield enforces a policy manifest signed by the dApp team's signing key: "this app will only ever request X, Y, Z — anything else is an exploit."
  5. Any deviation from the manifest (e.g. a setApprovalForAll when the manifest only permits swap) triggers immediate block + alert to the dApp team.
  6. When Ledger Connect Kit was compromised (Dec 2023), a Shield-protected user would have seen: hash mismatch → blocked load → banner "Library integrity check failed — likely supply chain attack. Do not connect."
  7. Zoza pushes signed revocation to Shield extensions in seconds via Chrome-update-pipeline + in-extension push.
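The integrity check in steps 1-2 is ordinary subresource-integrity hashing. A sketch using Node's `crypto` module — the registry shape and function names are assumptions for illustration; a real deployment would ship signed, pinned digests per library version:

```typescript
import { createHash } from "crypto";

// SHA-384 digest, base64-encoded — the same digest family SRI uses.
function sha384(bundle: string): string {
  return createHash("sha384").update(bundle).digest("base64");
}

// Verify the script actually served against the pinned expected digest.
// Any injected payload, however small, changes the hash and blocks the load.
function verifyBundle(servedSource: string, pinnedHash: string): "load" | "block" {
  return sha384(servedSource) === pinnedHash ? "load" : "block";
}
```

This is why a compromised CDN build is detectable even on a verified domain: the domain registry vouches for the origin, but only the hash vouches for the bytes.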
Real case Ledger Connect Kit compromise, December 14, 2023, ~$610K drained. A former Ledger employee was phished, NPM credentials stolen. Attacker published a malicious v1.1.8 of @ledgerhq/connect-kit-loader. Within minutes, every dApp using the library (Zapper, SushiSwap, Revoke.cash, Phantom, Hey, Kyber, Lido, etc.) served the drainer to every user. Lido's allowlist caught some of it. The rest — drained. Estimated blast radius: millions of wallets exposed over 5 hours.

Chain coverage — what Shield watches right now

Shield is not Ethereum-only. Drain attacks happen on every smart-contract chain. Here's exactly which provider interfaces the current code hooks, which are partial, and which are planned.

Chain Drain share (2024) What we hook Status
EVM
ETH, Polygon, Arbitrum, Optimism, Base, BSC, Avalanche, zkSync
~78% — the dominant drain surface. Permit2 alone = $150M+ in 2024. window.ethereum + EIP-6963 providers. eth_sendTransaction, eth_signTypedData_v4, personal_sign, eth_sign. Full EIP-712 Permit/Permit2/PermitForAll detection + 4-byte ABI decoding. Defense shipped
Solana
Phantom, Solflare, Backpack, Glow
~15% — fastest growing. Rainbow Drainer, Drainer.so. window.solana + Wallet Standard. signTransaction, signAllTransactions, signAndSendTransaction, signMessage. Flags the batch-signing drainer pattern (≥5 txs in one confirm). Batch flag shipped · full instruction decoder planned
Tron
TronLink
~4% — TRC-20 USDT approval drains dominate. tronWeb.trx.sign. Reuses the EVM 4-byte decoder for TRC-20 calldata — flags approve / transferFrom / unlimited allowance. Defense shipped
Bitcoin
Xverse, Unisat, OKX, Leather
~2% — clipboard hijack + address poison (no smart-contract approval class). Chain-agnostic clipboard-hash compare covers the main BTC threat (address swapping). No unified provider standard — per-wallet adapters required. Clipboard guard shipped · provider hooks planned
Sui / Aptos / Near / Cosmos / TON
~1% combined — long tail, low current volume. No provider hook. Clipboard guard + phishing-URL registry still work. Planned (demand-driven)

Security of Zoza itself — honest threat model

Every security tool is a target. If you're integrating Shield, you should ask us directly: "how does Zoza itself get hacked, and what happens when it does?" Here is the honest answer we give customers. No marketing, no "military grade" language, no hand-waving.

Attack surface Severity What defends against it today Gap · timeline to close
Root signing key stolen
Attacker signs a fake registry → every Shield user trusts a drainer site.
Critical Extension verifies every registry payload against the hardcoded root public key. Without the specific private key, no fake registry can be pushed. Key must move to offline HSM + 2-of-3 multisig. ~2 weeks (Yubikey / AWS CloudHSM).
Chrome Web Store extension hijack
Malicious update shipped to every Shield user (Ledger Connect Kit pattern).
Critical 2FA + hardware key on the publisher account. No single signer can ship. Reproducible builds + signed release bundles not yet in CI. ~1 week.
Fly.io backend server compromise
Messenger / Auth / Verify relay breached.
Medium Messenger: Signal Protocol E2E — server sees ciphertext + metadata only, never plaintext. Auth: double-sealed payload, relay is blind. Shield API: cannot forge registry without root key. Metadata retention policy not yet public. ~1 day to write + publish.
Endpoint / device malware
Attacker has root on user's phone or laptop.
High SQLCipher-encrypted local DB, SeedVault / Keychain key storage, auto-lock, clipboard guard. Not defensible by any E2E app. If malware runs as you, it sees what you see. This is the OS's job, not Zoza's.
Novel Permit variant Shield hasn't seen
Custom off-chain-approval struct not in our decoder.
Medium Decoder falls through to "typed signature" generic warning. User still sees a Shield modal, just with less detail. Shield client SDK is open-source (@zoza/shield on npm); server-side decoder is source-available under NDA for auditors. New variants added by the Shield team. Bug bounty on undetected variants planned once Immunefi program is live.
Nested multicall / delegatecall calldata
Outer call is multicall(bytes[]); real action hidden inside.
Medium Current decoder shows the outer multicall but not the inner calls. User still sees the modal. Nested ABI walker planned. ~1 week (~600 lines).
Malicious / coerced Zoza employee
Insider approves a phishing site as "verified" or delays flagging a drainer.
High Today: only Zoza team can push to the registry. This is a single trust point — uncomfortable, and we say so. Public append-only audit log + 24h challenge window + registry multi-sig. ~1 week.
User socially engineered into disabling Shield
"Sir, please whitelist this site to complete your transaction."
High In-extension onboarding card teaches the rule: "No real support ever asks you to disable security." Cannot be fixed by software. This is a cultural/education problem.
Government subpoena / lawful access request Medium E2E messages: unreadable even with server seized. Shield registry: public by design — nothing to hand over. Metadata: some is retained. Warrant canary + transparency report planned. Quarterly.
No external bug bounty program Medium Internal review + source-available code (open-sourcing planned). Researchers email findings today; public GitHub disclosure channel opens with the open-source release. Immunefi or HackerOne program. 1-2 weeks setup.
“Zoza defends against server-side attackers, network attackers, and on-chain scammers. It does NOT defend against: (a) malware already running on your device; (b) a compromised root signing key — which is why our rotation and multi-sig policy is public; (c) novel attacks our decoder hasn't seen yet — the client SDK is already open-source (@zoza/shield, MIT); the server decoder is source-available under NDA with broader release sequenced post-pilot; (d) you being socially engineered into disabling Shield. Every other attack surface has a defined defense. Shipped ones are marked shipped. Designed-but-not-built ones are marked planned. No marketing claims. No ‘military-grade’ language.”
What we tell a prospective customer honest

"Yes, Zoza is hackable. Here's the list. Here's what's defended in code, here's what's operational-risk, here's the timeline to close each gap. Every gap is visible because hiding them makes us less secure, not more. You are welcome to audit the registry, request source access to audit the extension, and run a fuzzer. Public open-source release and bug bounty both planned. We will never tell you we're unhackable."

What we will not say red flag

"Military-grade encryption." "Unbreachable." "Quantum-resistant" (without specifying the KEM). "We cannot read your messages even if we wanted to" (without showing you the code that proves it). Any vendor that uses these phrases is selling you a feeling, not a threat model.

Want to verify any of this? Client SDKs are open-source MIT on npm (@zoza/shield, @zoza/vault, @zoza/auth, @zoza/sign, @zoza/verify, @zoza/ai) and GitHub (CoreCogitAI/*-js-sdk). Backend server code (Shield decoder, phishing registry, Messenger E2E stack with 158 tests) is source-available under NDA for customers and auditors; sequenced for broader release after first production pilots. Email security@zoza.world for backend source access, to submit a decoder pattern, or to join the private disclosure list.

After you study — how you integrate Shield

Integration is the last step, not the first. Your security team should have reviewed every attack above before you reach this section. These are the 6 concrete steps to onboard.

1
Register your dApp domain with Zoza

Prove domain ownership via DNS TXT record. Register your project signing key. Counter-signed by Zoza root. Takes 10 minutes. Puts you in the signed registry.

2
Publish your policy manifest

Declare which operations your app will ever request: swap, add liquidity, mint. Anything else will be blocked as an exploit even if your frontend gets compromised.

3
Pin SHA-384 hashes for wallet-connect libraries

List every JS library your app loads with its expected integrity hash. Shield verifies loaded bundles against this manifest at runtime.

4
Deploy your team roster with Zoza badges

Sign a list of your team's official social handles + Zoza usernames. Shield shows green-verified badges in Discord, Telegram, X. Kills impersonation.

5
(Institutional) Deploy Safe Guard Module

On-chain guard contract that vetoes multisig TXs unless signed intent matches calldata. Prevents blind-signing exploits. Bybit would have been stopped here.

6
Open your official Zoza support channel

Migrate support from Discord DMs to an E2E-encrypted Zoza channel. Team members have signed badges. Users cannot be DM'd by impostors.

Shield's core design primitives

Phishing dApp detection
Every domain checked against a signed registry of verified dApps. Fake Uniswap, OpenSea, LayerZero claim pages — all blocked with red banner. Real dApps get green badge.
Transaction intent verification
Real dApps attach a cryptographically signed intent to every signature request. Shield decodes it and shows human-readable version. Unsigned or mismatched = warning.
E2E encrypted support chat
Talk to the REAL Uniswap support team over Signal Protocol. No more Discord DM scammers. Scammers can't impersonate the channel — they don't have the private key.
Address poisoning protection
Scammers send tiny TXs from lookalike addresses to pollute your wallet history. Shield verifies addresses against your personal signed contact book.
Unlimited approval warnings
Any "approve unlimited" transaction gets a red warning with the drain risk spelled out. Option to auto-reduce to minimum necessary amount.
Hardcoded root key
Root signing public key compiled into the extension binary. No network fetch = no MITM. Key rotation happens via Chrome auto-update (24h). Architecturally critical.
TAM

50-100M crypto users

Pricing

Free through late 2026 · Pro tier TBD after trust is earned

Scam prevention

~40% of consumer crypto incidents

05 — Sign v0.3 LIVE

Decode what your wallet is really signing.

The hardware-wallet pivot taught us the real problem: users blind-sign because UIs lie to them. Sign v0.3 is a multi-chain transaction decoder + on-chain Safe Guard module + signed receipt webhook — the verification layer that should have caught Bybit's $1.46B before the signer tapped approve.

What's live as of Apr 2026
  • Live API at sign-api.zoza.world — decode + verify + receipt-anchor endpoints
  • 91 Go tests passing, including a 60K-iteration property fuzz on the calldata parser
  • Full RLP decoder — EIP-2930 access lists, EIP-1559 fee market, EIP-4844 blobs, EIP-7702 set-code
  • Solana + Tron parsers alongside EVM — multi-chain from day one
  • Signed webhooks so your security team gets a verifiable receipt of every decoded TX
  • OTS Bitcoin anchor on the receipt log — your audit trail is timestamped to a permissionless chain
  • Benchmark corpus of 5 historic heists; current build detected 5/5 ($3.025B in detected fraud potential)
Why exchanges still get drained

Blind signing, lying UIs, deserializer gaps

  • Bybit (Feb 2025, $1.46B): Safe{Wallet} UI showed normal transfer; signers approved a delegatecall that swapped the multisig implementation
  • WazirX (Jul 2024, $230M): Liminal Custody UI obscured a payload swap; signers approved a hostile contract
  • DMM Bitcoin (May 2024, $305M): calldata signed by an offline signer didn't match the on-chain submission
  • Atomic Wallet (Jun 2023, $100M): blind-signed approve() with infinite allowance to a malicious contract
  • Common thread: the signer's UI is the attack surface. Hardware wallets sign whatever bytes they are sent — no matter what the laptop screen showed.
What Sign actually does

Independent decode + on-chain veto

  • Independent decoder: raw bytes → human-readable intent ("delegatecall to 0x… changes implementation") — runs in your security team's terminal, not inside the UI you already distrust
  • Receipt API: POST raw bytes, get signed JSON with the decoded intent + Bitcoin-anchored timestamp
  • Safe Guard module: on-chain Safe plugin that requires a Sign receipt before execTransaction succeeds — no UI can bypass it, and it requires no change to the signing UX
  • Webhook attestation: every decoded request signed with our signing key; you can verify our receipt didn't change between API and storage
  • Bybit-class detection: would have flagged the delegatecall + implementation change in the heist transaction

Walkthrough — what Sign would have shown the Bybit signer

The actual transaction the Bybit signer approved on 21 Feb 2025. Their Safe{Wallet} UI rendered it as a routine internal movement. Here's what the same bytes look like through Sign's decoder:

Sign decoder output

$ curl -X POST sign-api.zoza.world/v1/decode \
    -d '{"chain":"ethereum","raw":"0x6a76190200000000000000…"}'

{
  "summary": "⚠️ HIGH RISK — implementation contract change via delegatecall",
  "to": "0x1Db92e2EeBC8E0c075a02BeA49a2935BcD2dFCF4",   // Safe singleton
  "selector": "0x6a761902",                             // execTransaction
  "nested": {
    "selector": "0xa9059cbb",                           // transfer (claimed)
    "actual_call_kind": "DELEGATECALL",                 // ⚠️ implementation swap
    "to": "0x96221423681A6d52E184D440a8eFCEbB105C7242", // attacker contract
    "will_change": "Safe.implementation slot"
  },
  "receipt_id": "rcpt_b9c2…",
  "bitcoin_anchor": "pending — anchored hourly"
}

Same payload. The Safe UI rendered it as a token transfer. Sign rendered it as a delegatecall to an attacker-controlled implementation contract. Three lines of CLI output would have been enough for the signer to refuse the transaction.
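The webhook-attestation idea — every decoded receipt signed, so you can prove it didn't change between API and storage — can be sketched with Node's Ed25519 support. The key handling below is illustrative only; in production the verification public key would be pinned out-of-band, not generated alongside the signer:

```typescript
import { generateKeyPairSync, sign, verify } from "crypto";

// Throwaway Ed25519 pair just to demonstrate the check.
const { publicKey, privateKey } = generateKeyPairSync("ed25519");

// Server side: sign the canonical JSON of the decoded receipt.
function signReceipt(receiptJson: string): Buffer {
  return sign(null, Buffer.from(receiptJson), privateKey);
}

// Client side: reject any receipt whose bytes changed in transit or at rest.
function verifyReceipt(receiptJson: string, sig: Buffer): boolean {
  return verify(null, Buffer.from(receiptJson), publicKey, sig);
}
```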

Sign vs the alternatives

Vendor · What it solves · Bybit-class?
Tenderly Simulate · Pre-execution simulation (state changes) · Maybe — UI must surface the simulation
Forta agents · Real-time anomaly detection (post-mempool) · Maybe — only after broadcast
Fireblocks Policy Engine · Co-signer rules (whitelisted addresses) · Maybe — only if rules anticipated delegatecall
Hardware wallet (Ledger Stax) · Native EIP-712 + clear-signing · No — Ledger UI couldn't render Safe-nested delegatecall
Zoza Sign v0.3 · Independent decoder + on-chain veto + signed receipt · Yes — detected in benchmark corpus

What's NOT built (honest gaps)

Status

Live API · 91 tests · 5/5 heist detection

Build size

~3,200 LOC Go + Safe Guard contract

TAM

$100M (~500 institutional signers)

Pricing

Free pilot for institutions · free decoder for individuals (forever)

06 — Auth THE $1B PATH

Kill SMS OTP. Replace it with crypto challenge-response.

India's broken OTP system: SMS interception, SIM swaps, social engineering. 200B+ UPI transactions/year all depend on 6-digit SMS codes. Zoza Auth replaces OTP with cryptographic device-bound authentication. Essentially FIDO2/Passkeys packaged as an API for Indian banks.

Bank
seals challenge to user's key
Zoza Relay
sees only opaque blob
User's phone
unseals, verifies bank identity
Biometric approve
signs response with private key
SMS OTP is broken

6-digit codes sent over the worst channel

  • SIM swap — attacker ports your number in 30 minutes
  • SMS interception — SS7 protocol has no encryption
  • Social engineering — "sir, please share OTP for verification"
  • Phishing — fake bank site captures OTP in real-time
  • Delayed delivery — OTPs expire before arriving
Zoza Auth replaces all of it

Cryptographic challenge-response

  • Bank issues a sealed challenge only the user's device can open
  • User's device independently verifies the bank's identity
  • User approves with biometric (fingerprint / face)
  • Device-bound key signs the approval — private key never leaves the Secure Enclave / Keystore
  • Zoza relay sees only opaque bytes — in both directions
What a single auth event looks like end-to-end

1. Bank → user's device. Your backend issues a challenge bound to the specific action ("Approve ₹15,000 to Flipkart?"). The challenge is encrypted so only the user's device key can read it. Zoza relay sees encrypted bytes + routing metadata — nothing else.

2. User's device verifies the bank. Before showing any prompt, the device cryptographically verifies the message came from the bank's registered identity — not a lookalike, not a proxy. Failed verification = no prompt shown to the user.

3. Biometric unlock + device-bound signature. User approves with Face ID / fingerprint. The approval is cryptographically signed by a key that lives inside the device's hardware Secure Enclave (iOS) or StrongBox Keystore (Android). The private key cannot be extracted — not even by a fully rooted device.

4. Bank verifies the signature. Your backend cryptographically verifies the signature against the device's registered public key. Fail = denied. Pass = authenticated. The audit chain records both directions with append-only integrity.

Full protocol specification, formal model, and handshake detail are available to qualified evaluators under NDA via security@zoza.world.

$500M–1B
TAM (India OTP)
200B+
UPI TXs/year
~2,800 lines
Build size
7-8 weeks + regulatory
Timeline
Indian banks UPI apps Government services E-commerce Fintech Telecom Crypto exchanges
What's live as of Apr 2026
  • Live API at auth-api.zoza.world — apps, devices, challenges, respond endpoints. curl auth-api.zoza.world/health right now.
  • 45 Go tests passing — handshake, replay protection, double-seal, audit chain, rate limit, expiry
  • ProVerif formal model — auth.pv proves secrecy + authenticity of the challenge-response under a Dolev-Yao adversary
  • 3 SDKs — Go (server), Swift (iOS Secure Enclave), Kotlin (Android Keystore via StrongBox where available)
  • Double-seal protocol — both server→device challenges AND device→server responses carry their own encryption layer, so even a leaked TLS session exposes neither direction
  • Append-only audit chain — every challenge issuance + response logged; export endpoint for compliance
  • Self-serve apply form — developers/auth.html, 24-hour key-issue SLA
  • Transparency suite — audit log · warrant canary · data policy · bounty
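An append-only audit chain like the one listed above is typically a hash chain: each entry commits to the previous head, so rewriting any entry breaks every later link. A minimal sketch — the field names are illustrative, not the actual export format:

```typescript
import { createHash } from "crypto";

interface AuditEntry { event: string; prevHash: string; hash: string; }

function sha256(s: string): string {
  return createHash("sha256").update(s).digest("hex");
}

// Append an event, linking it to the current chain head.
function append(chain: AuditEntry[], event: string): AuditEntry[] {
  const prevHash = chain.length ? chain[chain.length - 1].hash : "genesis";
  return [...chain, { event, prevHash, hash: sha256(prevHash + event) }];
}

// Recompute every link; a single tampered entry invalidates the chain.
function verifyChain(chain: AuditEntry[]): boolean {
  let prev = "genesis";
  for (const e of chain) {
    if (e.prevHash !== prev || e.hash !== sha256(prev + e.event)) return false;
    prev = e.hash;
  }
  return true;
}
```

Auditors who hold yesterday's chain head can detect any retroactive edit or deletion, which is the property a compliance export needs.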

The 8 ways SMS OTP gets bypassed today — full attack class breakdown

SMS OTP wasn't designed for adversarial use. It was conceived in the mid-1980s and rides on SS7 — a protocol built to deliver short text messages between trusted carriers, not to resist attackers. Every line of defence below has a documented incident with a name and a dollar amount attached.

Critical1. SIM swap (port-out fraud)

Attacker convinces the carrier (or a bribed insider) to port the victim's number to a SIM the attacker controls. All inbound SMS — including your bank OTP — arrives on the attacker's device. Attack window: 30 minutes from port to drain.

Real incident: Mumbai retail investor lost ₹78L from a Demat account in Mar 2024 after a Vi-to-Airtel port he never requested. Maharashtra Cyber logged 1,400+ SIM-swap FIRs in H1 2024 alone.

Zoza Auth: device key lives in Secure Enclave / Android Keystore — not in the SIM. Port-out doesn't move the key.

Critical2. SS7 / Diameter interception

SS7 (used between carriers globally) has no authentication or encryption between providers. A telecom-grade attacker — or anyone who buys access from a sanctioned carrier — can issue a "Send Routing Information" query and silently re-route SMS to themselves.

Real incident: The 2017 O2 Telefónica Germany incident drained victim bank accounts via SS7 interception. Indian carriers operate the same protocol. CERT-In has issued multiple advisories on SS7 vulnerabilities.

Zoza Auth: nothing transits a carrier — challenges go over your existing HTTPS to the user's app.

Critical3. Voice-channel social engineering ("OTP frauds")

Attacker calls posing as bank/UPI support, tells the victim "to verify your account, please share the OTP we just sent." Most defended-against attack in India and still the largest by volume.

Real incident: RBI's annual report 2023-24 recorded ₹13,930 crore (~$1.7B) in unauthorized digital banking transactions in FY24; per NPCI breakdowns, the majority trace back to OTP-sharing vectors.

Zoza Auth: there is no number to share. Approval is a biometric tap on a device-bound key. The user has nothing the attacker can phish over a call.

High · 4. Adversary-in-the-middle (AITM) phishing

Attacker proxies your real bank login page (Modlishka / Evilginx). User enters credentials on the lookalike, the OTP arrives, and the attacker forwards everything in real time. Even users with "good" OTP hygiene fall to this.

Real incident: Microsoft DART tracked >10,000 organizations hit by AITM kits in 2023-24, including bypasses of TOTP and SMS-OTP MFA.

Zoza Auth: challenge is bound to the TLS channel via a channel-binding nonce; the proxy can't replay it because the device key signs the actual TLS exporter, not just a 6-digit code.

High · 5. Malicious Android accessibility service

Side-loaded "Update Required" APKs request Accessibility permission on Android, then read OTP SMS as it arrives in the notification shade and ship it to a Telegram bot. SOVA, BRATA, GoldDigger families are still active across India.

Real incident: SOVA Trojan campaign 2023 specifically targeted Indian banking apps; CERT-In issued CIAD-2023-0036 listing 32 affected banks.

Zoza Auth: even with full read of all SMS, there's no OTP to capture. The signature happens inside the Keystore where Accessibility cannot reach.

Medium · 6. SMS aggregator insider leak

Your "encrypted" OTP traverses 2-4 vendors (your app → your aggregator → carrier gateway → SMSC → handset). Insiders at any layer can read the codes. Indian aggregator breaches have been disclosed (2022 — large telecom OEM accidentally exposed millions of OTPs in a misconfigured S3).

Real incident: Resecurity's 2022 disclosure of an exposed aggregator log containing OTPs from major Indian banks routed via a popular bulk SMS provider.

Zoza Auth: challenge bytes are sealed against the user's device public key before leaving your server — even Zoza relay sees opaque ciphertext.

Medium · 7. OTP delivery delay → user fatigue → bypass

Carriers throttle in flash events; 8-12% of OTPs in India fail to deliver inside the validity window. Users develop "request again" muscle memory, and attacker-in-call workflows exploit it: keep the victim on a "verification call" and re-trigger OTPs until one lands and the victim reads it back.

Real incident: NPCI flash-event SMS failures during IPL final 2024 caused widespread retry storms; some users disclosed OTPs to "support callers" amid the confusion.

Zoza Auth: latency is bound by your own HTTPS round-trip + biometric unlock — typically 6-12 seconds vs SMS p95 of 9-22 seconds.

Medium · 8. eSIM transfer fraud

As eSIMs replace physical SIMs, the "swap" attack becomes faster — attacker only needs to convince the carrier to push a new eSIM provisioning profile. No physical SIM card needed; takes minutes from social engineering call to OTPs landing on attacker hardware.

Real incident: USA T-Mobile eSIM swap incidents 2024 drained crypto exchange accounts in under 7 minutes; same attack pattern emerging in India as Jio/Airtel push eSIM activations.

Zoza Auth: device-bound key survives any SIM/eSIM movement because it never lived there.

Zoza Auth vs every auth vendor you might already use

| Vendor | Sells | SIM-swap-proof | AITM-proof | Marginal cost / 1M auths |
|---|---|---|---|---|
| Twilio Verify | SMS OTP API | No | No | ~$5,000 (₹4.2L) |
| MSG91 | Indian SMS aggregator | No | No | ~₹1.8L |
| AWS Cognito | SMS + TOTP fallback | Partial (TOTP enrol rate <15%) | Partial | ~$3,500 (₹2.9L) |
| Auth0 / Okta | Hosted IDaaS, Passkey support | Yes (with Passkey) | Yes (with Passkey) | ~$23,000 ($23/MAU) |
| Raw WebAuthn / Passkey | DIY browser-native | Yes | Yes | $0 (you self-host) |
| Zoza Auth | Device-bound Secure Enclave / Keystore challenges + double-seal + audit chain + 3 SDKs | Yes | Yes (channel-bound) | ~₹3,000 (₹0.003/auth) |

Why not just use raw Passkeys?

Passkeys/WebAuthn are excellent, and many exchanges and banks should use them directly. Teams come to Zoza Auth instead when they want the layers raw WebAuthn doesn't ship: double-sealed challenges, the append-only audit chain, and ready-made server and mobile SDKs.

The math at India scale — why this is the $1B path

India processes ~200B UPI transactions per year. Net-banking + e-commerce + crypto on-ramps add ~50B more authentication events. At even ₹0.05 per cryptographic auth (a fifth of an SMS), that's a ₹1,250 crore (~$150M) addressable market in India alone for the auth-layer. Add the global crypto-exchange and fintech market and the ceiling is multi-billion.
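The arithmetic behind that addressable-market figure checks out (all inputs taken from the paragraph above):

```javascript
// Back-of-envelope check of the India auth-layer TAM claim.
const authEventsPerYear = 250e9;   // ~200B UPI + ~50B net-banking/e-commerce/crypto auth events
const pricePerAuthINR = 0.05;      // ₹0.05 per cryptographic auth (a fifth of an SMS)
const revenueINR = authEventsPerYear * pricePerAuthINR; // ₹12.5 billion
const revenueCrore = revenueINR / 1e7;                  // 1 crore = ₹10,000,000
console.log(revenueCrore);         // 1250 — matches the ₹1,250 crore (~$150M) figure
```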

The unit economics: Zoza's marginal cost per auth is a Postgres row write — measured in microseconds and tenths of a paisa. The pricing power comes from being categorically more secure than SMS (eliminating an entire fraud loss line) while being cheaper than carrier SMS (eliminating a carrier invoice line). That's a rare combination — it's why Plaid hit $13B and Auth0 hit $6.5B.

What's NOT built (honest gaps)

Security of Zoza Auth itself — honest threat model

07 — AI Agent

E2E encrypted channel between users and AI agents.

AI agent gets its own Zoza identity key. User's prompts are ratchet-encrypted to the agent's key. The relay sees nothing. Your users' prompts never touch any infrastructure in plaintext.

User
"What's my diagnosis?"
Zoza Relay
ciphertext only
AI Agent
decrypts with agent key
Healthcare AI
Patient talks to AI diagnosis assistant. Sensitive medical history encrypted end-to-end. Hospital's infrastructure never sees plaintext prompts or responses.
Financial advisors
AI-powered wealth management assistant. Portfolio data, tax info, transaction history — all E2E encrypted between user and the AI agent.
Mental health apps
Therapy chatbot conversations are the most sensitive data possible. Forward secrecy ensures even a future breach can't expose past sessions.
Two modes

Vault mode vs Ratchet mode

Vault mode (simple, ~400 lines): Stateless one-shot encryption using sealed-box. Best for most AI use cases — form submission, single queries. No sidecar needed, no state to manage.

Ratchet mode (advanced, ~1,200 lines): Full per-message forward-secret protocol for long-running conversations where past messages must stay private even after a future key leak. Therapy, advisory, ongoing medical consultation. Requires sidecar with TEE attestation.

Security architecture

TEE + parent entity vetting

  • Sidecar runs as signed binary with minimal permissions
  • Binary hash registered + runtime attestation to Zoza
  • Enterprise: AWS Nitro Enclave / Azure Confidential
  • AI agent must be registered under Tier-2 verified parent
  • Dual-write ratchet state: Redis (fast) + Postgres (durable)
Status: v0.1 LIVE — protocol + zero-retention shipped

TAM: $50–200M

Buyers: Healthcare AI, fintech, legal, mental health

Build size: ~2,400 LOC + 4 SDKs + Tamarin model

What's live as of Apr 2026
  • Live API at ai-api.zoza.world — sessions, encrypt, decrypt, ratchet-step endpoints
  • Vault mode (~400 LOC) + Ratchet mode (~1,200 LOC) — both protocols implemented and tested
  • 4 SDKs — Go (server), JavaScript (web), Swift (iOS), Kotlin (Android). Drop-in replacement for an OpenAI/Anthropic client.
  • Tamarin formal model — secrecy property secrecy_SK verified under a Dolev-Yao adversary
  • Zero-retention guarantee — no plaintext logged, no plaintext cached, ratchet keys deleted server-side after step
  • Audit / pilot press-send packs — for healthcare AI procurement teams that need security questionnaires answered before signing
  • Whitepaper — full protocol description, threat model, deployment guide
Roadmap (NOT shipped)
  • TEE-sealed inference — running the model itself inside AWS Nitro / Azure Confidential / Intel TDX so even the AI host can't see plaintext. Targeted Q3 2026.
  • Cross-model session migration — switch a user mid-conversation from GPT-5 to Claude without re-keying. Q4 2026.
  • Multi-party agent groups — multiple AI agents in one E2E group (e.g. "doctor agent" + "pharmacist agent" + "patient" with sealed group key). 2027.

What goes wrong when "secure AI" isn't actually secure — 7 real incidents

Every AI vendor claims "enterprise-grade security." The actual incidents below show where those claims have already failed. Each one is publicly reported with a name attached.

Critical · 1. Samsung Semiconductor source-code leak (May 2023)

Samsung engineers pasted proprietary semiconductor source into ChatGPT for debugging, making those conversations eligible for OpenAI's training corpus. Samsung subsequently banned all generative AI tools company-wide.

Zoza AI Agent: ratchet-mode session means the prompt is decrypted only inside the agent's signed runtime; relay sees ciphertext, training corpus sees nothing because the ciphertext is unusable without the device key.

Critical · 2. ChatGPT Redis bug — session bleed (Mar 2023)

A bug in OpenAI's Redis client briefly exposed users' chat history titles + the first message of recently-active sessions to other users. ~1.2% of ChatGPT Plus subscribers had partial billing data exposed.

Zoza AI Agent: dual-write Redis+Postgres ratchet state with per-session encryption. Even if the cache layer leaks, an attacker reading another user's bytes gets ratchet ciphertext that's unusable without the per-session key.

Critical · 3. Air Canada chatbot binding hallucination (Feb 2024)

Air Canada's customer-service chatbot promised a refund policy that didn't exist. Tribunal ruled the airline liable for the bot's representations. Issue wasn't security — it was identity: there was no audit trail proving what the bot had actually said vs what the user claimed.

Zoza AI Agent: every message is signed by the agent's identity key. The user's device receives a cryptographic receipt of what the agent actually said. Disputes are resolved with signed evidence, not screenshots.

High · 4. Cursor / Codeium / Copilot prompt-leak class

Multiple AI-coding-assistant audits in 2024 found that "context window" payloads were retained in vendor server logs longer than disclosed. Customer code that included secrets was indexable in vendor support-debug systems.

Zoza AI Agent: zero-retention enforced cryptographically — the relay literally cannot retain the plaintext because it never has it. Audit log records that a session existed (timestamp, byte count) but never the content.

High · 5. Healthcare AI vendor sub-processor breach (2023, multiple)

Healthcare AI vendors typically run inference on AWS/Azure with their own log + monitoring stack on top. A 2023 breach of a popular HIPAA-claiming sub-processor exposed therapy session transcripts that the patient had been told were "encrypted in transit and at rest" — true at every individual layer, false in aggregate because every layer terminated TLS independently.

Zoza AI Agent: end-to-end means user device → agent runtime, with no plaintext intermediate. Even if AWS, Azure, your monitoring vendor, and your sub-processor all get breached on the same day, the prompts and responses are unreadable.

Medium · 6. Identity spoofing — fake "official" support agents (ongoing)

Scammers deploy lookalike "Zerodha Support AI" or "HDFC Wealth Advisor AI" via Telegram bots that scrape real chatbot UIs. Users disclose KYC documents, OTPs, account numbers. The chatbot is convincingly real-feeling because users have no way to verify which AI is actually their bank's.

Zoza AI Agent: every agent's identity key is registered under a Tier-2 verified parent entity. Your app verifies the agent's signature on every message — a fake bot has no valid key, can't sign, can't pose as your bank's agent.

Medium7. "Forget I said that" — no real deletion in vector stores

Most "memory" features in AI assistants store embeddings in a vector DB. Once a sensitive item is embedded, "delete" usually means soft-delete; the vector remains in cached query indices for weeks. GDPR right-to-erasure compliance is widely overstated by AI vendors.

Zoza AI Agent: per-message forward secrecy means past messages cannot be decrypted even with the current session key. Deletion is cryptographic, not operational — there's no plaintext or embedding to retain.

Zoza AI Agent vs the AI privacy options you're already evaluating

| Vendor | What they sell | Vendor sees plaintext? | Cryptographic guarantee? |
|---|---|---|---|
| OpenAI Enterprise | "No training on your data" SLA | Yes (legal promise, not technical) | No |
| Anthropic Claude API | "30-day retention, then delete" | Yes (during 30d window + log replication) | No |
| Azure OpenAI Service | "Customer-managed keys" for storage | Yes, during inference | Storage-only (CMK) |
| AWS Bedrock | "Your VPC, no logs" claim | AWS infra layer yes; AWS staff no, per SLA | VPC isolation |
| Self-host Llama 3 / GPT-OSS | Full ownership (your GPUs) | No (you do) | Operational |
| Zoza AI Agent (today) | E2E ratchet over HTTPS to your inference host | Inference host yes; relay no | Yes (Tamarin-verified) |
| Zoza AI Agent (Q3 2026 + TEE) | E2E ratchet + sealed-enclave inference | No one | Yes + remote attestation |

Vault mode vs Ratchet mode — which to pick

Vault mode

Stateless one-shot encryption

Best for: form submissions, single questions, RAG-style "ingest this document and answer", any flow where the client and agent don't maintain a long conversation. ~400 LOC integration. No sidecar, no ratchet state.

Examples:

  • Patient submits intake form → AI summarises for clinician
  • Lawyer pastes contract clause → AI flags risks
  • Trader uploads tax document → AI fills tax filing form
Ratchet mode

Long-running conversation w/ forward secrecy

Best for: therapy sessions, ongoing medical advisory, multi-turn financial planning, any flow where each message must be unreadable even if a future key leaks. ~1,200 LOC + sidecar w/ TEE attestation.

Examples:

  • CBT chatbot — patient + AI therapist over weeks
  • Wealth advisor agent — ongoing portfolio + tax conversations
  • Sensitive support: HR grievance bot, harassment-report intake AI

Three-tier agent identity vetting

Without entity vetting, "Zoza AI Agent" would be the world's best phishing infrastructure, so every agent identity is vetted across three tiers.

What's NOT built (honest gaps)

Security of Zoza AI Agent itself — honest threat model

One crypto library. Three billion-dollar markets.

The same cryptographic primitives are packaged into products for consumers, crypto users, global enterprises, and the entire Indian auth market.

Normal consumers

2B+ users. Privacy-conscious people who want real E2E messaging, verifiable notifications, and protection from scams.

Messenger · Shield (free) · Verify

Crypto users & institutions

50-100M users + ~500 institutions. The most paranoid, high-value users on the internet. They get drained daily.

Shield · Sign · Messenger

Global enterprises

200K+ companies handling sensitive data. Healthcare, banking, legal, HR — every form, every message, every notification.

Vault · Verify · AI Agent

India (1.4B people)

The OTP killer. 200B+ UPI transactions/year. ₹0.10/auth = ₹2,000 crore (~$240M) at full penetration. RBI pushing device-binding. NPCI UPI 3.0 wants this.

Auth · Verify

AI companies

Fastest-growing segment. Every AI app handling medical, financial, or personal data needs E2E between user and model.

Vault (AI mode) · AI Agent

Fundraising comparables

Auth0 exited at $6.5B. Plaid at $13B. The security/identity layer wins. Zoza needs one paying customer + one proof point to raise.

Auth = $1B path · Vault = fastest revenue

All seven products are live.

Each one is running on its own *-api.zoza.world subdomain right now. Free through late 2026, apply-flow open, formal models shipped for Auth, Vault, and AI Agent.

Live since early 2026
Messenger
Web, Android, Desktop. Full end-to-end encrypted messenger — 1:1 and group chat, voice and video, Stakes (prediction markets), Hangout (voice spaces), Forum (anonymous threads), Status. Cloud sync across devices; server holds only ciphertext.
Live
Vault
Pre-TLS browser SDK, iframe isolation, zero-knowledge mode, HSM-ready keystore, ProVerif formal model, competitive benchmark harness vs Basis Theory / Skyflow / VGS / Piiano, transparency suite (audit, canary, retention, bounty). Live at vault-api.zoza.world.
Live
Verify
DKIM-for-humans protocol. Businesses cryptographically sign every SMS, email, push; consumer app or extension verifies. Live at verify-api.zoza.world with apply-flow, admin UI, and transparency suite.
Live
Shield
Browser extension, TypeScript SDK, on-chain Safe Guard, signed dApp registry, threat registry, Drainer Live Map, self-serve apply flow. Live at shield-api.zoza.world.
Live
Sign
Multi-chain transaction decoder (EVM + Solana + Tron), signed receipts, Bitcoin-anchored audit log, 91 Go tests, 60K-iter property fuzz, benchmark corpus that flagged 5 of 5 historic heists ($3.025B detected). Live at sign-api.zoza.world.
Live
AI Agent
End-to-end encrypted user↔model channel. Vault mode (stateless one-shot) and Ratchet mode (long conversation with forward secrecy). Tamarin formal model, four SDKs (Go, JS, Swift, Kotlin), audit + pilot press-packs. Live at ai-api.zoza.world.
Live · RBI audit scheduled Q3 2026
Auth
Double-sealed cryptographic challenge-response, 45 Go tests, ProVerif formal model, three SDKs (Go server + Swift iOS + Kotlin Android), admin UI, full transparency suite. Live at auth-api.zoza.world. RBI / CERT-In empanelled audit scheduled with EY/KPMG/Lucideus for Q3 2026 — required before regulated Indian banks deploy. Crypto-exchanges vertical (no RBI mandate) can pilot today.

Install in 30 seconds

All six SDKs are published on npm. Pick the one you need — or install the whole suite. Each @zoza/* package has zero peer dependencies and ships ESM + CJS + TypeScript types.

```bash
# Install the full suite
npm install @zoza/auth @zoza/vault @zoza/verify @zoza/sign @zoza/shield @zoza/ai
```

Auth

Replace SMS OTP with cryptographic challenge-response.

```js
// npm install @zoza/auth
import { ZozaAuth } from '@zoza/auth'

const auth = new ZozaAuth({ apiKey })
await auth.verifyLogin(challenge, sig)
```
Auth quickstart →

Vault

Encrypt form fields in the browser before HTTPS.

```js
// npm install @zoza/vault
import { ZozaVault } from '@zoza/vault'

const vault = new ZozaVault({ publicKey })
const ct = await vault.encrypt(panInput.value)
```
Vault quickstart →

Verify

Sign outbound notifications so phishing can't impersonate you.

```js
// npm install @zoza/verify
import { ZozaVerify } from '@zoza/verify'

const v = new ZozaVerify({ apiKey })
await v.send({ to, body, sign: true })
```
Verify quickstart →

Sign

Decode raw transaction bytes into plain English before approval.

```js
// npm install @zoza/sign
import { ZozaSign } from '@zoza/sign'

const s = new ZozaSign({ chainId: 1 })
const intent = await s.decode(rawTx)
```
Sign quickstart →

Shield

Block drainer sites and malicious dApps before wallet connect.

```js
// npm install @zoza/shield
import { ZozaShield } from '@zoza/shield'

const sh = new ZozaShield()
const v = await sh.checkSite(url)
```
Shield quickstart →

AI Agent

Send prompts to an AI through an end-to-end encrypted channel.

```js
// npm install @zoza/ai
import { ZozaAI } from '@zoza/ai'

const ai = new ZozaAI({ apiKey })
const reply = await ai.ask(prompt, doc)
```
AI Agent quickstart →

Need Go, Swift, or Kotlin? — see developer docs.

Which product do you need?

Your choice sets our build priority. Every design gap has been found and solved. Tell us what you need.

No spam. One email when your product ships.