The Day America’s Cybersecurity Chief Treated ChatGPT Like a Scratchpad
Published: February 18, 2026
The most dangerous security failures I’ve seen didn’t start with hackers.

They started with convenience.
A tired expert. A deadline. A quiet decision that felt harmless in the moment.
This one ended with sensitive U.S. government files sitting inside a public AI system.
I’ve spent years building and operating real systems — the kind that page you at 3 a.m. when something leaks. Anyone who’s worked close to security knows this truth: breaches rarely come from brilliance. They come from shortcuts.
That’s why this story matters more than the headlines suggest.

Last summer, the acting head of America’s cybersecurity agency uploaded sensitive government documents into the public version of ChatGPT.
Let that land.

The incident everyone is treating as a footnote

Dr. Madhu Gottumukkala, then acting director of the Cybersecurity and Infrastructure Security Agency (CISA), uploaded internal documents marked For Official Use Only into the public version of ChatGPT.
Not classified.
Not trivial either.
These are the exact materials that U.S. agencies explicitly train employees not to place into consumer software.
CISA’s own monitoring systems flagged the activity. Alerts fired. An internal review followed. A spokesperson later said there was a “temporary authorized exception.”
If you’ve ever run security for a real organization, you already know how thin that explanation is.

Here’s what most people get wrong about ChatGPT data security
This isn’t about intent.
It’s about where data goes once you lose custody of it.

Public ChatGPT is not a private assistant. It is a shared system operated by OpenAI.
When you upload data:
- It is transmitted to external servers
- It is logged and retained under platform policy
- It may be reviewed or used to improve systems
- You cannot meaningfully revoke it
In my experience, people conflate “not instantly visible” with “secure.” That’s a catastrophic misunderstanding.
Security isn’t about whether someone is staring at your data right now.
It’s about whether you still control it tomorrow.

Public AI vs. enterprise AI is not a branding difference
Anyone who’s deployed AI inside a company knows this distinction is everything.
Enterprise AI tools:
- Explicit data isolation
- No training on customer inputs
- Contractual guarantees
- Audit trails (sketched below)
- Encryption controls
Public AI tools:
- Shared infrastructure
- Retention by default
- No practical containment
- No enforceable deletion
Treating them as interchangeable is like emailing payroll spreadsheets because Slack was down.
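
To make “audit trails” and “data isolation” concrete, here is a minimal sketch of an approved internal gateway in Python. Everything specific in it (the endpoint, the payload shape, the log path) is a hypothetical stand-in, not any real vendor’s API. The shape is what matters: record custody before the data leaves it.

```python
# internal_ai_gateway.py: a minimal sketch of an "approved internal
# alternative" with an audit trail. The upstream URL, payload shape, and
# log path are illustrative assumptions, not any specific vendor's API.
import json
import time
import urllib.request

AUDIT_LOG = "ai_gateway_audit.jsonl"                   # assumed log location
UPSTREAM = "https://llm.internal.example/v1/complete"  # assumed internal endpoint

def ask(user_id: str, prompt: str) -> str:
    # Write the audit record first, before the prompt leaves our custody.
    record = {"ts": time.time(), "user": user_id, "prompt_chars": len(prompt)}
    with open(AUDIT_LOG, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

    # Forward only to the (assumed) contractually isolated internal endpoint.
    req = urllib.request.Request(
        UPSTREAM,
        data=json.dumps({"prompt": prompt}).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["text"]
```

One deliberate choice worth copying: the trail logs prompt length, not prompt content. An audit log that stores full prompts is just a second copy of the data you were trying to contain.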

I’ve watched this exact mistake happen before
Years ago, I watched a senior engineer paste proprietary logic into an external tool “just to clean it up.” No malice. Just speed.

Weeks later, fragments of that logic appeared in places they had no business being.
That’s the uncomfortable truth: once data enters a learning system, it stops being inert.
You don’t need a nation-state adversary.
You just need probability and time.

Why leadership mistakes hit harder than employee mistakes
When a junior analyst slips, you retrain.
When leadership slips, you normalize the behavior.
CISA exists to define cyber hygiene for the entire federal government. Its director doesn’t just follow policy — he models it.
That’s why this incident matters even if nothing catastrophic happened. Because it teaches the wrong lesson:
“If it’s useful enough, security rules are flexible.”
They aren’t.
They’re brittle by design.

This isn’t an isolated failure — it’s a cultural one
Around the same time, multiple U.S. officials were caught sharing sensitive information through unsecured messaging platforms. Different tools. Same pattern.
Speed over rigor.
Convenience over containment.
Confidence over caution.
If you’ve built production systems, you know how this ends.
Not with one big explosion — but with a slow erosion of trust.

What organizations should actually take from this
If the head of federal cybersecurity can make this mistake, your company already has.
Here’s what works in practice:
- Block public AI tools at the network level for sensitive roles (sketched below)
- Provide approved internal alternatives
- Train explicitly on AI data boundaries
- Assume employees will choose convenience unless prevented
- Design systems that make the secure path the easy path
Policy without tooling is theater.
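
As a concrete example of policy backed by tooling, here is a minimal sketch of that first item, the network-level block, written as a mitmproxy addon. The blocked-domain list is illustrative and deliberately incomplete; treat this as a starting point for your own egress setup, not a finished control.

```python
# block_public_ai.py: a mitmproxy addon sketch that refuses traffic to
# known public AI endpoints at the egress proxy. Domain list is illustrative.
from mitmproxy import http

BLOCKED_DOMAINS = {
    "chatgpt.com",
    "chat.openai.com",
    "claude.ai",
    "gemini.google.com",
}

def _is_blocked(host: str) -> bool:
    # Match each listed domain and any of its subdomains.
    return any(host == d or host.endswith("." + d) for d in BLOCKED_DOMAINS)

def request(flow: http.HTTPFlow) -> None:
    # mitmproxy calls this hook for every client request it proxies.
    if _is_blocked(flow.request.pretty_host):
        # Refuse at the proxy, and point people toward the approved path.
        flow.response = http.Response.make(
            403,
            b"Blocked by policy. Use the approved internal AI gateway.",
            {"Content-Type": "text/plain"},
        )
```

Run it with mitmdump -s block_public_ai.py on the egress path. The catch: intercepting HTTPS this way only works on managed devices that trust the proxy’s CA, which is why DNS-level blocking is the usual coarser fallback.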

The uncomfortable conclusion
This wasn’t incompetence.
It wasn’t betrayal.
It was something more dangerous.
It was normalization.
And anyone who’s been inside a breach postmortem knows that’s how the worst ones start — not with villains, but with smart people quietly bending rules because the tools made it easy.
The question isn’t whether this should have happened.
It’s how many times it already has — and we just didn’t catch it.
If you’ve worked in security, AI, or government systems, you already know the answer.
That’s why this story deserves more than a shrug.