We adversarially audited all 7 products against real attack vectors, privacy leaks, race conditions, and architectural impossibilities. One product was downgraded entirely. Here's what broke and how we fixed it.
Affects Verify, Auth, Shield, and AI Agent. Without solving this, all four products are vulnerable to impersonation at the registry level.
No vetting process exists. Current services table has no domain verification, no manual review, no DNS challenge. First attacker to register "SBI Support" gets a legitimate signing key. Every downstream product that relies on the registry inherits this vulnerability.
Tier 1 — Automated (minutes): DNS TXT record challenge. Service claims "sbi.co.in" → Zoza generates a random token → service adds TXT record _zoza-verify=token123 → Zoza queries DNS to confirm. Proves domain ownership. Same pattern as Let's Encrypt.
Tier 2 — Enhanced (24-48h): For banks, government, healthcare — manual review. Requires: (a) signed letter on company letterhead, (b) WHOIS cross-check, (c) verification call to the entity's published phone number. Approved entities get a "Verified" badge distinct from Tier 1's "Domain-confirmed" badge.
Tier 3 — Transparency log: Every registration (approved or rejected) is published to an append-only log at transparency.zoza.world. Security researchers can audit who registered when. If "SBI Support" registers and passes Tier 1 with domain "sbi-support.xyz" (not "sbi.co.in"), the transparency log exposes it. Community flagging → manual review → revocation if fraudulent.
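The Tier 1 check reduces to a few lines. A sketch in TypeScript — the lookup itself would come from Node's `dns.promises.resolveTxt` (which returns `string[][]`, each TXT record split into chunks); the token match against that result is pure:

```typescript
import { randomBytes } from "node:crypto";

// Token handed to the service when it claims a domain.
function issueChallenge(): string {
  return randomBytes(16).toString("hex");
}

// dns.promises.resolveTxt() returns string[][] — each TXT record arrives as
// chunks that must be joined before comparing against the expected record.
function txtRecordsContainToken(records: string[][], token: string): boolean {
  return records.some((chunks) => chunks.join("") === `_zoza-verify=${token}`);
}
```

Verification would call `await dns.promises.resolveTxt(claimedDomain)` and retry for a while, since TXT propagation can take minutes; the token would be stored with an expiry.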
Emergency revocation: When a key is compromised, push a registry_revoke event via FCM/WebSocket to all connected devices. Cached entry invalidated within seconds, not 24h.
pubspec.yaml uses sqflite: ^2.3.2 (unencrypted). Messages, ratchet states, e2eInfo all stored in plaintext SQLite. Phone theft or forensic extraction = full message history + crypto metadata exposed.
Replace sqflite with sqflite_sqlcipher (drop-in compatible). Derive DB encryption key from identity secret via HKDF (info="Zoza_DB_v1"). Key stored in Android Keystore / iOS Keychain. Migration: one-time re-encrypt existing DB on app update. ~150 lines, transparent to user.
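The app side is Dart, but the derivation is language-neutral. A sketch of the HKDF step in TypeScript using Node's `hkdfSync` — the `info="Zoza_DB_v1"` string is from the design above; the per-install salt is an assumption (generated once on first launch, stored next to the DB, not secret):

```typescript
import { hkdfSync } from "node:crypto";

// Derive the 32-byte SQLCipher key from the identity secret. The result goes
// into Android Keystore / iOS Keychain, never to disk in plaintext.
function deriveDbKey(identitySecret: Buffer, salt: Buffer): Buffer {
  return Buffer.from(hkdfSync("sha256", identitySecret, salt, "Zoza_DB_v1", 32));
}
```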
AuthPage.tsx stores the cryptographic seed as CSV-joined bytes in localStorage. Same-origin XSS or malicious browser extension = attacker gets seed = can derive identity key = can decrypt all messages.
Use Web Crypto API to generate a non-extractable AES key (stored in browser's secure key store). Encrypt the seed with this key before storing in IndexedDB. Non-extractable = JavaScript cannot read the raw key bytes, only use it for encrypt/decrypt operations. XSS can still CALL decrypt, but cannot exfiltrate the key itself. Combined with Content-Security-Policy headers, this raises the bar significantly. ~200 lines.
Two browser tabs open. Both receive a message simultaneously. Both read ratchet counter=5, both advance to counter=6, both write back. One tab's state overwrites the other's. Chain keys corrupted — subsequent messages fail to decrypt.
Use BroadcastChannel to elect one tab as the "ratchet leader." Only the leader decrypts incoming messages. Other tabs receive decrypted content via BroadcastChannel. If leader tab closes, another tab takes over. Same pattern used by Google Docs for multi-tab editing. ~100 lines.
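Election can be kept deterministic so no handshake protocol is needed. A sketch — the BroadcastChannel wiring is shown only as comments, and all names are illustrative:

```typescript
// Deterministic rule: among currently-alive tab IDs, the lexicographically
// smallest one leads. Every tab computes the same answer independently.
function electLeader(aliveTabIds: string[]): string | undefined {
  return [...aliveTabIds].sort()[0];
}

// Browser wiring (sketch only):
//   const bus = new BroadcastChannel("zoza-ratchet");
//   bus.postMessage({ type: "hello", tabId });          // announce on load
//   bus.onmessage = (e) => { /* track alive tabs, re-run electLeader */ };
//   addEventListener("beforeunload", () => bus.postMessage({ type: "bye", tabId }));
// Only the elected leader advances the ratchet; it broadcasts decrypted
// plaintext to the other tabs over the same channel.
```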
CDN sees ciphertext but also sees request size. 256-byte ciphertext = short text field. 8KB = document upload. 50KB = medical image. Attacker monitoring traffic can infer WHAT was submitted even without reading it. No padding in current design.
SDK pads all ciphertext to the next power-of-2 boundary: 256B, 512B, 1KB, 2KB, 4KB, 8KB, 16KB, 32KB, 64KB. A 300-byte form submission looks identical to a 500-byte one (both pad to 512B). Adds ~5 lines to the SDK encrypt function. Documented in security whitepaper as a known tradeoff: larger payloads still distinguishable at the 64KB+ tier.
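A sketch of the bucketing in TypeScript. The 4-byte length prefix is an assumed framing for illustration — the design above doesn't specify how the receiver strips the padding:

```typescript
const MIN_BUCKET = 256;

// Pad to the next power-of-2 bucket before encryption. A big-endian length
// prefix lets the receiver strip the zero padding; the SDK's bucket list
// stops at 64KB (the tradeoff documented above).
function padToBucket(plain: Uint8Array): Uint8Array {
  const needed = plain.length + 4; // payload + length prefix
  let bucket = MIN_BUCKET;
  while (bucket < needed) bucket *= 2;
  const out = new Uint8Array(bucket);
  new DataView(out.buffer).setUint32(0, plain.length);
  out.set(plain, 4);
  return out;
}

function unpad(padded: Uint8Array): Uint8Array {
  const len = new DataView(padded.buffer, padded.byteOffset).getUint32(0);
  return padded.slice(4, 4 + len);
}
```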
GET /api/v1/services/{id}/bundle is public (needs to be, for browser SDK to fetch without API key). But: attacker can enumerate all registered service IDs by brute-forcing UUIDs and checking for 200 vs 404. Successful enumeration reveals which companies use Zoza.
Return HTTP 200 for ALL requests, including non-existent IDs (return a dummy bundle for unknown IDs, with a key derived deterministically from the requested ID — a freshly random key would differ between two queries of the same ID and give the fake away). Attacker can't distinguish real from fake. Rate limit: 60 req/min per IP, CAPTCHA after 100. Service IDs use UUIDv4 (2^122 possible) — brute-force is infeasible at 60 req/min. ~50 lines middleware.
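A sketch of the dummy-key derivation — `SERVER_SECRET` is a hypothetical server-side value; HMAC over the requested ID keeps the fake bundle stable across requests:

```typescript
import { createHmac } from "node:crypto";

// Hypothetical server-side secret. With it, the dummy key for an unknown ID
// is the same on every request, so repeated queries look like a real bundle.
const SERVER_SECRET = Buffer.from("replace-with-a-real-secret");

function dummyBundleKey(serviceId: string): string {
  return createHmac("sha256", SERVER_SECRET).update(serviceId).digest("hex");
}
```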
Developer stores their own private key. If laptop is stolen, git repo deleted, or env file lost — all future ciphertext is undecryptable. No escrow, no recovery, no backup verification during onboarding.
During service creation: (1) show private key, (2) require developer to paste back the last 8 chars to confirm they saved it, (3) offer optional encrypted escrow: developer enters a recovery passphrase, SDK encrypts private key with PBKDF2(passphrase, 100K iterations) + AES-256-GCM, uploads encrypted blob to Zoza. Zoza cannot decrypt (no passphrase). Developer can recover by re-entering passphrase. ~200 lines.
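The escrow seal/open pair maps directly onto Node's stdlib. A sketch — salt and IV layout are assumptions; the design above fixes only PBKDF2 at 100K iterations plus AES-256-GCM:

```typescript
import { pbkdf2Sync, randomBytes, createCipheriv, createDecipheriv } from "node:crypto";

// Zoza stores only { salt, iv, tag, blob } — without the passphrase it
// cannot decrypt the developer's private key.
function escrowSeal(privateKey: Buffer, passphrase: string) {
  const salt = randomBytes(16);
  const key = pbkdf2Sync(passphrase, salt, 100_000, 32, "sha256");
  const iv = randomBytes(12);
  const cipher = createCipheriv("aes-256-gcm", key, iv);
  const blob = Buffer.concat([cipher.update(privateKey), cipher.final()]);
  return { salt, iv, tag: cipher.getAuthTag(), blob };
}

function escrowOpen(
  sealed: { salt: Buffer; iv: Buffer; tag: Buffer; blob: Buffer },
  passphrase: string,
): Buffer {
  const key = pbkdf2Sync(passphrase, sealed.salt, 100_000, 32, "sha256");
  const decipher = createDecipheriv("aes-256-gcm", key, sealed.iv);
  decipher.setAuthTag(sealed.tag); // GCM tag check: wrong passphrase throws
  return Buffer.concat([decipher.update(sealed.blob), decipher.final()]);
}
```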
If registry is a monolithic signed JSON blob, 100K entities = ~50MB download on every device startup. Current design has no delta sync.
Registry is versioned (incrementing integer). Client stores last_version. API: GET /registry?since=42 returns only entries changed since version 42 (additions + revocations). Full download only on first install. Daily delta at 10K entities: ~5-50KB. At 100K: still ~5-50KB (only changes). ~200 lines backend + ~50 lines SDK.
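The client-side merge is a few lines. A sketch with an assumed entry shape — the design above fixes only the `since` parameter and the additions-plus-revocations payload:

```typescript
type RegistryEntry = { id: string; publicKey: string; revoked: boolean; version: number };
type Delta = { latestVersion: number; changes: RegistryEntry[] };

// Apply GET /registry?since=N to the local cache; returns the version to
// persist as last_version for the next sync.
function applyDelta(local: Map<string, RegistryEntry>, delta: Delta): number {
  for (const entry of delta.changes) {
    if (entry.revoked) local.delete(entry.id);
    else local.set(entry.id, entry);
  }
  return delta.latestVersion;
}
```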
If SBI's signing key is compromised, users who cached the registry could see fraudulent "Verified SBI" notifications for up to 24 hours (the registry sync interval).
When a key is revoked: (1) push a registry_revoke event via existing FCM infrastructure + WebSocket to all connected clients, (2) client immediately marks that entity's cached key as invalid, (3) any message signed by the revoked key shows "KEY REVOKED — do not trust" instead of green badge. Propagation time: seconds, not hours. Reuses existing FCM infra (push/fcm.go). ~100 lines.
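The client-side handler is minimal. A sketch with an assumed cache shape — the entry is marked revoked rather than deleted, so the UI can render the "KEY REVOKED" state instead of treating the sender as unknown:

```typescript
type CachedEntity = { publicKey: string; revoked: boolean };

// Handle a registry_revoke push: invalidate the cached key immediately
// instead of waiting for the next 24h registry sync.
function onRegistryRevoke(cache: Map<string, CachedEntity>, entityId: string): void {
  const entry = cache.get(entityId);
  if (entry) entry.revoked = true; // badge layer shows "KEY REVOKED — do not trust"
}
```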
SMS is 160 chars plain text. Ed25519 signature is 64 bytes (128 hex chars). Cannot fit payload + signature in one SMS. India's most important notification channel is unusable for Verify in its current form.
Don't try to embed signatures in SMS. Instead: when SBI sends an SMS, SBI ALSO sends a signed payload to Zoza relay. Zoza pushes to user's device: "SBI sent you a message. Verified ✅ — ₹15,000 debited for Flipkart." User sees the SMS AND the Zoza verification push side by side. If they receive an SMS with NO corresponding Zoza push → it's likely a scam. Requires: Zoza app installed + SBI integration. ~150 lines.
Manifest V3 service workers are killed by Chrome after 30s of inactivity. If the worker dies between reading ratchet state (counter=5) and writing updated state (counter=6), the stored state is stale. Next message decrypts with wrong key. Ratchet chain is permanently corrupted.
Before decrypting: snapshot current state to IndexedDB with dirty=true. After successful decrypt: write new state with dirty=false. On service worker wake: if dirty=true, discard the dirty state and restore from the pre-decrypt snapshot. Same pattern databases use for crash recovery. Adds ~80 lines to the ratchet persistence layer.
Content scripts inject into pages. A sophisticated phishing site can detect the content script (via DOM timing, injected element detection, or API interception) and show legit-looking content when Shield is active, switching to the scam interface when Shield is disabled.
Use Chrome's declarativeNetRequest API for domain-level matching instead of content scripts. This runs at the network level — the page cannot detect it. Content script is used only for the optional in-page badge overlay and E2E chat panel, not for the primary domain check. Attacker can detect the badge but cannot detect the domain verification itself. ~100 lines refactor.
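A sketch of a dynamic rule, assuming the extension holds the `declarativeNetRequest` permission; the rule ID and domain are illustrative:

```typescript
// Runs in the extension service worker. The block decision is enforced in
// the network stack, so the page cannot observe whether the check exists.
chrome.declarativeNetRequest.updateDynamicRules({
  removeRuleIds: [1001], // replace any previous version of this rule
  addRules: [{
    id: 1001,
    priority: 1,
    action: { type: "block" },
    condition: {
      urlFilter: "||sbi-support.xyz^", // domain flagged by the registry
      resourceTypes: ["main_frame"],
    },
  }],
});
```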
If the root key (used to sign the registry) is fetched from a URL, attacker who MITMs the URL can inject a fraudulent registry. The claim "signed by root key" only works if the root key itself is trustworthy.
Root Ed25519 public key is compiled into the extension source code, reviewed in Chrome Web Store submission, and versioned with the extension release. Key rotation happens via extension update (Chrome auto-updates within 24h). Root key never fetched from network. ~5 lines (just a constant), but architecturally critical.
Ledger and Trezor deliberately prevent third-party apps from accessing raw transaction bytes before signing. This is a core security feature of hardware wallets — the device is a secure enclave. Sign's claimed flow ("companion app reads raw bytes from hardware wallet and decodes them independently") contradicts the hardware wallet security model. The Ledger API does NOT expose transaction bytes to external applications.
If hardware wallet won't share bytes, only option is intercepting at the Safe{Wallet} level. But Safe IS the compromised component in the Bybit scenario. Trusting Safe's output = trusting the attacker's UI. The entire point of Sign was to be independent of the signing UI.
Option A — Manual Transaction Decoder (ship this): A web tool at zoza.world/decode where any operator can paste raw transaction calldata and see a human-readable decode. Operator manually compares what the tool shows vs what Safe/hardware wallet shows. Not automated — but works with ANY wallet, ANY signing platform, no integration needed. Free. Builds credibility with the crypto security community.
Option B — Safe Guard Module (future): Safe{Wallet} supports "Guard" modules — smart contracts that can veto transactions before execution. Build a Zoza Guard that checks transaction calldata against a signed intent registry on-chain. If the intent doesn't match, the Guard rejects the transaction even if enough signers approved. This bypasses the UI entirely — it's an on-chain check. Requires Solidity development (~500 lines). Doesn't depend on hardware wallets at all.
Option C — Hardware partnership (long-term): Work with Ledger to build verification INTO their firmware. This is the ideal solution but requires months of partnership negotiation. Not feasible for a solo dev in 2026.
Bank sends {amount: 15000, merchant: "Flipkart"} through Zoza relay unencrypted. Zoza employee or compromised relay = sees every transaction detail for every user.
Challenge (bank → user): Bank fetches user's Curve25519 pub from GET /users/{id}/auth-bundle. Bank seals (payload + bank_sig) using Vault's sealed-box to user's key. Zoza relay sees only an opaque blob.
Response (user → bank): User seals {approved, challenge_id, timestamp, user_sig} using sealed-box to bank's Curve25519 pub. Zoza relay sees only an opaque blob.
Zoza relay sees: "bank_service_id X communicated with user_id Y." Routing metadata only. Same as what a telecom sees when you call someone. Cannot see amount, merchant, or approval status. ~150 lines total (both directions).
Attacker who knows a user's Zoza ID can spam challenge requests. 1000 challenges in 10 seconds = phone buzzes nonstop = denial of service on the user's device.
Max 3 challenges per user per minute, 10 per hour. Per-bank limit: 1000/min total (protects against compromised bank flooding). Excess returns 429 to bank. User can configure "quiet hours" (no challenges 11pm-7am). If user rejects 5+ from same bank in a row, auto-mute that bank's challenges for 1 hour. ~80 lines middleware.
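A sketch of the per-user limiter with in-memory state (production would back this with Redis so the limits hold across API instances); quiet hours and auto-mute would layer on top of the same `allow()` check:

```typescript
// Sliding-window counters: max 3 challenges/min and 10/hour per user,
// per the policy above.
class ChallengeLimiter {
  private sent = new Map<string, number[]>(); // userId → challenge timestamps

  // Returns false when the bank should receive a 429.
  allow(userId: string, now: number = Date.now()): boolean {
    const withinHour = (this.sent.get(userId) ?? []).filter((t) => t > now - 3_600_000);
    const withinMinute = withinHour.filter((t) => t > now - 60_000).length;
    if (withinMinute >= 3 || withinHour.length >= 10) {
      this.sent.set(userId, withinHour);
      return false;
    }
    withinHour.push(now);
    this.sent.set(userId, withinHour);
    return true;
  }
}
```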
User approves transaction. Zoza tries to forward approval to bank's webhook. Webhook is down (maintenance, network issue). Approval is lost. User approved but transaction never processes.
Retry schedule: 1s, 2s, 4s, 8s, 16s (5 attempts over ~31 seconds). If all fail: store approval in PostgreSQL auth_pending_deliveries table with TTL matching challenge TTL (5 minutes). Background worker retries every 30s until delivered or expired. Bank can also poll GET /auth/responses/{challenge_id} as a pull-based fallback. ~120 lines.
The ratchet sidecar holds decrypted plaintext of every user message. If the sidecar container is compromised (code injection, stolen image, insider access), all active conversations are exposed. The sidecar IS the security boundary.
Phase 1 (MVP): Sidecar runs as a signed binary with minimal permissions — no network except Zoza relay + AI inference endpoint, no disk except ratchet state store. Binary hash is registered with Zoza. Sidecar periodically sends attestation (signed hash of own binary + runtime state) to Zoza. If hash mismatches registered value → Zoza stops delivering messages to that sidecar.
Phase 2 (enterprise): Run sidecar inside AWS Nitro Enclave or Azure Confidential Container. Hardware attestation proves to the user's device that the sidecar is running trusted code. User can verify before sending sensitive data. ~300 lines for Phase 1 attestation.
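The Phase 1 check on Zoza's side is just hash comparison — worth stating plainly, because a self-reported hash deters tampering but is not hardware-backed the way Nitro attestation is:

```typescript
import { createHash } from "node:crypto";

// Registered at deploy time; the sidecar periodically reports the hash of
// its own binary, and delivery stops on mismatch.
function hashBinary(bytes: Buffer): string {
  return createHash("sha256").update(bytes).digest("hex");
}

function attestationOk(reportedHash: string, registeredHash: string): boolean {
  return reportedHash === registeredHash;
}
```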
"MediCare AI" registers. How does Zoza verify it's really MediCare? Without vetting, impersonator AI collects medical data. Exact same gap as Verify entity registration.
"MediCare AI" can only register if "MediCare Inc." has already passed Tier 2 enhanced vetting (domain + manual review). AI agent inherits parent entity's verification status. Agent is registered under the parent's service account, not independently. Prevents orphan AI agents with no verified parent. ~50 lines policy enforcement in registration handler.
Ratchet states cached in Redis. Redis restart (OOM, deploy, failover) flushes all states. Every active conversation shows "decryption failed" on the next message. No recovery mechanism.
Every ratchet state write goes to both Redis (fast, primary) AND PostgreSQL (durable, backup). On Redis miss (after flush): load from PostgreSQL, populate Redis cache, continue normally. First message after flush has ~100ms extra latency (Postgres read). If BOTH are lost: send session_reset to user (same recovery flow as messenger's existing mechanism, already battle-tested). ~200 lines dual-write layer.
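A sketch of the dual-write, read-through layer, with Maps standing in for Redis and PostgreSQL:

```typescript
class RatchetStore {
  private cache = new Map<string, string>();   // Redis stand-in
  private durable = new Map<string, string>(); // PostgreSQL stand-in

  write(conversationId: string, state: string): void {
    this.durable.set(conversationId, state); // durable first: a crash between
    this.cache.set(conversationId, state);   // the two writes must not lose state
  }

  read(conversationId: string): string | undefined {
    const hit = this.cache.get(conversationId);
    if (hit !== undefined) return hit;
    const fallback = this.durable.get(conversationId); // the ~100ms Postgres read
    if (fallback !== undefined) this.cache.set(conversationId, fallback);
    return fallback; // undefined in both stores → trigger session_reset
  }

  flushCache(): void {
    this.cache.clear(); // simulates a Redis restart
  }
}
```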
| Product | Previous | Revised (with fixes) | Time | Status change |
|---|---|---|---|---|
| Prerequisite: Developer identity + Entity vetting | 600 lines | ~1200 lines | 2-3 weeks | Vetting added (DNS + transparency log) |
| Messenger fixes | 200 lines | ~450 lines | 1 week | +SQLCipher + IndexedDB + BroadcastChannel |
| Vault | 2000 lines | ~2400 lines | 6 weeks | +Padding + rate limiting + key escrow |
| Verify | 1100 lines | ~1500 lines | 4 weeks | +Delta sync + push revocation + parallel push |
| Shield | 1300 lines | ~1500 lines | 4-5 weeks | +WAL ratchet + declarativeNetRequest + key pinning |
| Sign | 4500 lines | ~800 lines (tool) + ~800 (guard) | 2 + 4 weeks | DOWNGRADED from product to tool |
| Auth (OTP killer) | 2400 lines | ~2800 lines | 7-8 weeks code | +Double-seal + rate limit + webhook retry |
| AI Agent | 900 lines | ~1200 lines (ratchet) or ~400 (vault mode) | 4-5 weeks | +Attestation + dual-write + parent vetting |
Your choice sets our build priority. Every gap on this page will be fixed before your product ships.
No spam. One email when your product ships.