Feb 16, 2026

Don’t Leak API Keys: Where They Slip and How to Prevent It

A founder-friendly guide to keeping secrets out of frontends, GitHub, logs, and screenshots—and what to do fast if a key gets exposed.


API keys are the fastest way to turn a working MVP into a surprise bill, a breached account, or a weekend-long fire drill. If you built quickly (especially with AI-assisted coding), it’s easy to accidentally paste a “secret” into the wrong place and ship it to the world.

Definition (plain English): An API key is a credential that lets software call a service on your behalf. If someone else gets it, they may be able to use your account—often with your permissions and on your dime.

The one rule that prevents most leaks

If a key can perform actions you’d be upset about (charge money, read customer data, delete records), treat it like a password:

  • Never put it in the browser.
  • Never commit it to Git.
  • Assume anything logged can be copied.

That’s it. Everything else is implementation detail.

Where API keys leak (the common founder mistakes)

Most leaks are not “hacks.” They’re regular product work under time pressure.

1) Frontend code (the browser is a glass box)

Anything shipped to the browser can be viewed. That includes:

  • JavaScript bundles (React, Next.js, Vue, etc.)
  • “Hidden” config in window.__ENV__
  • Mobile apps (API keys can be extracted from binaries, too)

Common scenario: a developer adds OPENAI_API_KEY (or similar) to a frontend .env file, then the build tool exposes it to client code. The app works. The key is now public.

Rule of thumb: If the user’s device can run it, the user can read it.

2) GitHub (commits, PRs, forks, and “just this once”)

Git is forever unless you do painful cleanup. Keys leak through:

  • Committed .env files
  • Copy-pasted keys in README or docs
  • PR descriptions or comments
  • “Temporary debug” commits pushed to a public repo
  • Accidental pushes to the wrong remote

Even in private repos, keys can still leak to contractors, screenshots, copied snippets, and future repo exposure.

3) Logs and monitoring (debugging that becomes data)

Keys show up in logs when you:

  • Log full request headers
  • Log full request/response bodies
  • Print environment variables on startup
  • Dump exception objects that include config
  • Forward logs to third parties (monitoring, error tracking, analytics)

This gets worse in AI-built codebases because generated code often logs “everything” when something fails.
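One cheap guardrail is to redact likely secrets before a log line is ever written. Here is a minimal sketch in Python's standard logging module; the regex patterns are illustrative assumptions and should be tuned to the key formats your stack actually uses:

```python
import logging
import re

# Patterns that commonly match secret material; tune these for your own keys.
SECRET_PATTERNS = [
    re.compile(r"(?i)(api[_-]?key|authorization|token|secret)\s*[=:]\s*\S+"),
    re.compile(r"\bsk-[A-Za-z0-9]{16,}\b"),  # e.g. OpenAI-style key shapes
]

class RedactSecrets(logging.Filter):
    """Scrub likely secrets from log messages before they are emitted."""

    def filter(self, record: logging.LogRecord) -> bool:
        message = record.getMessage()
        for pattern in SECRET_PATTERNS:
            message = pattern.sub("[REDACTED]", message)
        record.msg, record.args = message, None
        return True  # keep the record, just with secrets scrubbed

logger = logging.getLogger("app")
logger.addHandler(logging.StreamHandler())
logger.addFilter(RedactSecrets())
```

The same idea applies to whatever logging library you use: redact at the point of emission, before anything is forwarded to a third-party monitoring tool.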

4) Support tickets, chat, and internal tools (humans paste secrets)

Under pressure, someone will paste a key into:

  • A support ticket (“Can you test this?”)
  • Slack/Discord/Teams
  • A CRM note
  • A shared Google Doc / Notion page
  • A screen recording

Even if you trust your team, you can’t control where that text gets mirrored, indexed, or retained.

5) Screenshots and screen shares (the accidental broadcast)

Keys leak via:

  • Terminal screenshots
  • CI/CD output on screen
  • Dashboard screenshots (vendor UIs often display keys once)
  • Investor demo screen shares with “notes” visible

If you’ve ever shared your screen while debugging prod, assume you’ve been one alt-tab away from exposing something.

Environment variables: good habit, not a magic shield

Environment variables (“env vars”) are a common way to pass secrets to your app without hardcoding them. They help, but only if you handle them carefully.

Env vars go wrong when:

  • You store them in a .env file and commit it
  • Your CI prints them (even partially) during builds
  • Your app logs them on boot
  • You inject them into the frontend build output

Use env vars to feed secrets into server-side code at runtime, not to “hide” secrets in the client.
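In practice that looks like reading the secret server-side at startup and failing loudly if it's missing, rather than silently shipping a broken (or client-exposed) build. A minimal Python sketch; the function name and error wording are just illustrations:

```python
import os

def require_secret(name: str) -> str:
    """Fetch a secret from the server environment; fail fast if it's absent."""
    value = os.environ.get(name)
    if not value:
        raise RuntimeError(
            f"Missing required secret {name!r}. "
            "Set it in the server environment or secret manager, never in client code."
        )
    return value

# Loaded once at startup, server-side only, and never echoed back to the client.
# OPENAI_API_KEY is simply the example name used earlier in this article.
```

A crash at boot with a clear message is far cheaper than discovering in production that the key was never set, or worse, that it was set in the wrong place.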

Secret managers: the boring solution that works

A secret manager is where you store secrets centrally and deliver them to servers safely. Examples include your cloud provider’s secret store or tools like 1Password/Bitwarden for team-managed secrets (the right choice depends on your stack and stage).

What you get:

  • Access control (who can see what)
  • Audit trails (who accessed it)
  • Rotation support (swap keys without chaos)
  • Fewer “where did we put that key?” moments

If you’re past “solo founder hacking,” a secret manager pays for itself the first time someone leaves the team, a laptop is lost, or a key leaks.

Client-side vs server-side: when a key can be public

Not all “keys” are equal. Some are meant to be public identifiers; some are true secrets.

Safe to expose (usually)

  • “Publishable” keys designed for the browser (example: some payment providers have a public key)
  • Public analytics IDs
  • Public map tokens with strict domain restrictions and low-risk permissions

Should never be in the client

  • Keys that can read/write data
  • Keys that can create charges, issue refunds, or manage billing
  • Keys that can call LLM APIs or expensive compute APIs directly
  • Admin tokens, service accounts, database URLs with passwords

If you’re unsure, assume it must stay server-side. The safe pattern is: browser calls your backend → backend calls the third-party API.
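That pattern can be sketched in a few lines. Everything here is illustrative: the vendor URL, the payload shape, and the `send` function (which stands in for whatever HTTP client you actually use) are assumptions, not any specific vendor's API:

```python
import os
from typing import Callable

# Sketch of the "browser -> your backend -> third-party API" pattern.
VENDOR_URL = "https://api.example-llm.com/v1/chat"  # hypothetical vendor endpoint

def handle_chat_request(client_payload: dict, send: Callable[[str, dict, dict], dict]) -> dict:
    """Backend handler: the secret key is attached server-side and never reaches the browser."""
    api_key = os.environ["LLM_API_KEY"]  # lives only on the server
    headers = {"Authorization": f"Bearer {api_key}"}
    # Forward only the fields the client is allowed to control.
    upstream_payload = {"prompt": str(client_payload.get("prompt", ""))[:4000]}
    return send(VENDOR_URL, headers, upstream_payload)
```

Note the second benefit of the proxy: you decide which client fields get forwarded, so a malicious browser can't smuggle extra parameters to the vendor on your account.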

Scopes and permissions: limit the blast radius

A leaked key is less damaging when it can do less.

When you create keys, look for:

  • Scopes/permissions: only enable what the app needs
  • Environment separation: different keys for dev/staging/prod
  • Project separation: avoid one “god key” used everywhere
  • IP/domain restrictions: where supported (helpful, not perfect)

Founders often skip this because it feels like “enterprise process.” In reality, it’s the simplest way to prevent one mistake from becoming a full incident.

Rotation: make it routine, not a panic move

Key rotation means issuing a new key and retiring the old one. Do it:

  • On a schedule for sensitive services (monthly/quarterly)
  • When someone with access leaves
  • After any suspected exposure
  • After copying a key into a place you don’t fully control

Rotation fails when the key is scattered across laptops, random .env files, and hardcoded strings. It succeeds when you have one source of truth (secret manager) and a predictable deploy process.

Practical tip: for services that support multiple active keys, overlap them briefly—deploy the new key, verify, then disable the old key.
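A minimal sketch of that overlap, assuming your config simply prefers the new key while the old one stays valid; the environment variable names are made up for illustration:

```python
import os

# During rotation, deploy with both keys set, verify traffic on the new one,
# then remove the old variable. Names are illustrative, not vendor-specific.

def active_api_key() -> str:
    """Prefer the freshly issued key; fall back to the old one mid-rotation."""
    for name in ("SERVICE_API_KEY_NEW", "SERVICE_API_KEY_OLD"):
        value = os.environ.get(name)
        if value:
            return value
    raise RuntimeError("No API key configured for 'service'")
```

Once the new key is verified in production, delete the old variable and disable the old key in the vendor dashboard.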

Webhooks: the “reverse” secret people forget

Webhooks are incoming requests from a vendor to your app (payments, forms, email providers, etc.). The secret here is not an API key you send out—it’s a signing secret you use to verify the webhook is real.

Common mistake: accepting webhook requests without verification (“it worked in dev”), or logging the full webhook payload + headers (which may include signatures and identifiers).

Minimum safe webhook setup:

  • Verify signature on the server
  • Reject requests that fail verification
  • Avoid logging raw headers/body unless redacted
  • Store the webhook secret server-side like any other secret
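A minimal verification sketch in Python. It assumes a vendor that signs the raw request body with HMAC-SHA256 and sends a hex digest in a header; real vendors differ in header names, encodings, and timestamp schemes, so follow their docs exactly:

```python
import hashlib
import hmac

def verify_webhook(raw_body: bytes, signature_header: str, secret: str) -> bool:
    """Recompute the signature over the raw body and compare it to the header."""
    expected = hmac.new(secret.encode(), raw_body, hashlib.sha256).hexdigest()
    # compare_digest avoids leaking information through timing differences
    return hmac.compare_digest(expected, signature_header)
```

Two details matter in practice: verify against the raw bytes of the body (parsing and re-serializing JSON can change it), and use a constant-time comparison rather than `==`.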

Quick prevention checklist (10 minutes that saves days)

  • Keep all secret keys server-side; never in the browser bundle.
  • Add .env and secret files to .gitignore; verify they’re not tracked.
  • Turn on secret scanning where possible (repo hosting + CI checks).
  • Redact secrets in logs; don’t log headers or full request bodies by default.
  • Use a secret manager (or at least a team vault) as the source of truth.
  • Split dev/staging/prod keys; don’t reuse production credentials locally.
  • Use scoped keys with minimal permissions.
  • Document rotation steps once; practice rotating a non-critical key.
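To make the secret-scanning item concrete, here is a toy scanner you could wire into a pre-commit hook. The patterns are illustrative; hosted scanners (GitHub's secret scanning, dedicated tools) cover far more key formats than any hand-rolled list:

```python
import re

# Obvious key shapes worth refusing to commit. Illustrative, not exhaustive.
KEY_SHAPES = [
    re.compile(r"\bsk-[A-Za-z0-9]{16,}\b"),           # OpenAI-style keys
    re.compile(r"\bAKIA[0-9A-Z]{16}\b"),              # AWS access key IDs
    re.compile(r"(?i)\b(api[_-]?key|secret)\s*[=:]\s*['\"]?\w{16,}"),
]

def find_suspect_secrets(text: str) -> list[str]:
    """Return any substrings that look like hardcoded keys."""
    hits = []
    for pattern in KEY_SHAPES:
        hits.extend(match.group(0) for match in pattern.finditer(text))
    return hits
```

Run it over staged files in a pre-commit hook and block the commit on any hit; a false positive costs seconds, a leaked key costs days.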

What to do if a key leaked (fast incident response)

This is the “don’t panic, do the basics” plan. Speed matters more than elegance.

Step 1: Contain (first 15 minutes)

  • Revoke or rotate the key immediately. Assume it’s already copied.
  • If the service supports it, disable the key and create a replacement.
  • If revocation breaks production, roll a hotfix to switch to the new key, then revoke the old one.

Step 2: Remove exposure (same hour)

  • If it’s in GitHub: remove it from the repo and history if needed, but do not treat deletion as containment. Rotation is containment.
  • If it’s in logs or tickets: delete/redact where you can, and restrict access.
  • If it’s in screenshots/videos: remove them from shared folders and chats (and assume copies may exist).

Step 3: Check impact (same day)

  • Review vendor dashboards for unusual usage, charges, or access patterns.
  • Look for calls from unknown IPs, odd user agents, or spikes in volume.
  • Identify what the key could do (scopes) and what data/actions are at risk.

Step 4: Close the loop (within 48 hours)

  • Write a short internal note: what leaked, where, how, and what changed.
  • Add a guardrail that would have prevented it (CI check, log redaction, secret manager, key scoping).
  • Rotate related credentials if there’s any chance they were exposed together.

If customer data may be involved, bring in someone experienced with incidents and follow your legal/contract obligations. “We think it’s fine” is not a plan.

When it’s worth bringing in a steady hand

If your product is moving fast, you want prevention that doesn’t slow shipping: clean separation of client/server, sane secret handling, safer logging, and repeatable rotation. That’s the kind of stabilization work Spin by Fryga typically helps with—fixing the risky parts without turning your roadmap into a rewrite project.