Thanks, Victor. I’m sorry you’re seeing this for one specific email, and I hear the urgency. I’ve pulled the key details from your message so the support team can triage quickly, and I’ve listed targeted next steps and exactly what to attach so we don’t lose time.
Summary (what you’ve already provided)
- SDK: @web3auth/modal
- SDK version (client): ^9.5.0
- Platform / Framework: React + Vite (web)
- Network: SAPPHIRE_MAINNET
- Auth method: Email (passwordless)
- Mode: Redirect
- Symptom: Single email fails everywhere while other emails and other users work → strongly indicates server-side tkey / key-store corruption
Likely cause
- Server-side corrupted/invalid tkey (key metadata or shares missing/invalid) for that email, or a serialization/version mismatch of stored key data for that account.
What the support team will need (please attach these if possible)
- Server-side SDK versions and dependencies
- web3auth server packages (e.g. web3auth-core, tkey packages) and their versions on the server where keys are stored
- Exact getKeyDetails error message + full server stack trace (copy/paste) — this is critical
- Server logs around the failing login attempts (timestamps) for the failing email (redact unrelated secrets)
- The server-side stored key record for the failing email (JSON dump of the tkey/keyDetails blob) and one example of a healthy user’s corresponding blob for comparison (redact secrets)
- Server-side code snippets for tkey/web3auth initialization and the login flow (server code that looks up/retrieves the tkey), and any DB retrieval code
- Indicate DB type and whether any DB restore/migration occurred recently (Postgres, Mongo, Redis, etc.)
- A browser console screenshot is optional here (client-side evidence is still useful), but the server artifacts above are the priority
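To pull the server-side package versions quickly, here is a minimal sketch that reads them straight out of package.json. It assumes a Node server with a standard package.json; the `@web3auth/` and `@tkey/` scope prefixes are my guess at your dependency names, so adjust them to match what your server actually installs.

```javascript
// Return the Web3Auth/tkey-related dependencies and their declared version
// ranges from a parsed package.json object. The scope prefixes are
// assumptions; change them to match your server's real dependencies.
function listKeyPackages(pkgJson, prefixes = ["@web3auth/", "@tkey/"]) {
  const deps = { ...pkgJson.dependencies, ...pkgJson.devDependencies };
  const result = {};
  for (const [name, range] of Object.entries(deps)) {
    if (prefixes.some((p) => name.startsWith(p))) result[name] = range;
  }
  return result;
}

// Usage on the server (hypothetical path):
// const pkg = require("./package.json");
// console.log(listKeyPackages(pkg));
```

Run this in each environment (dev, staging, prod) and paste all outputs into the ticket; version drift between environments is exactly the mismatch worth spotting.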
Quick checks you can run now (safe, low-risk)
- Confirm server-side package versions (npm list or package.json) and share them — look for mismatches between environments
- Compare the stored tkey JSON for the failing email against a working user’s tkey JSON to spot missing fields, different versions, or truncated data
- Search server logs for any write/DB errors when that user’s key was created/updated (partial writes or failed migrations often cause corruption)
- Confirm there was no recent DB restore, migration, or code deploy that changed serialization format for stored keys
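For the blob comparison step above, a small structural diff is often faster than eyeballing two JSON dumps. This is a generic sketch, not a Web3Auth API: it walks a healthy user's blob and reports fields that are missing or type-mismatched in the failing user's blob (the example field names are made up).

```javascript
// Compare a healthy stored key blob against the failing one and report
// missing fields, type mismatches, and unexpected nulls. Generic JSON
// structure diff; field names in any example input are hypothetical.
function diffKeyBlobs(healthy, failing, path = "") {
  const problems = [];
  for (const key of Object.keys(healthy)) {
    const p = path ? `${path}.${key}` : key;
    if (!(key in failing)) {
      problems.push(`missing: ${p}`);
    } else if (typeof healthy[key] !== typeof failing[key]) {
      problems.push(
        `type mismatch: ${p} (${typeof healthy[key]} vs ${typeof failing[key]})`
      );
    } else if (healthy[key] !== null && failing[key] === null) {
      problems.push(`null value: ${p}`);
    } else if (
      typeof healthy[key] === "object" &&
      healthy[key] !== null &&
      !Array.isArray(healthy[key])
    ) {
      // Recurse into nested objects (e.g. share maps, key metadata).
      problems.push(...diffKeyBlobs(healthy[key], failing[key], p));
    }
  }
  return problems;
}
```

An empty result means the two blobs share the same shape, which would point toward value-level corruption (truncation, encoding) rather than missing structure.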
Potential remediation paths (things to consider; be careful with production data)
- If the stored key blob is corrupted and you have no safe backup, you may need to create a new tkey record for that user and have them re-enroll. This WILL lose access to any assets controlled by the old key, so do not proceed until you’ve extracted and backed up the corrupted blob for forensic comparison
- If corruption is small (e.g., encoding/character corruption), a programmatic fix might be possible after comparing a healthy blob
- If the issue is a version/format mismatch, you may need to normalize old blobs (or run a migration) — share package versions so we can advise further
Helpful links
- MetaMask docs: https://docs.metamask.io
- Web3Auth docs (general): see the “Introduction” page of the Web3Auth documentation
How I’ll escalate this for you
- If you attach the items above (especially the getKeyDetails error + server key blob + server package versions + a healthy blob for comparison), I’ll flag this to the support team as high priority and include a short diagnostic summary so they can jump straight to investigating the server-side tkey data.
Would you like me to (pick one):
A) Provide a checklist template you can paste into the ticket with the exact files/logs to upload? (fast)
B) Immediately escalate with the info you’ve posted so far and request the missing server artifacts from you in follow-up? (I’ll mark priority)
Again, I know this is urgent. If you can attach the getKeyDetails error stack trace plus the server-side key blob for the failing email and a healthy example, that will get this triaged fastest.
Note: This response was generated by an AI bot and may require further verification. The MetaMask Community Team has tracked this issue and will be responding as soon as possible.
Additionally, we recommend joining our biweekly Office Hours to get this resolved even faster: MetaMask Developer Office Hours