
Shadow AI and the CISO's Blind Spot: Everyone's an Engineer Now

I am going to be at SecureWorld Boston in April, presenting on Day 1 with a talk titled “Shadow AI and the CISO’s Blind Spot: Everyone’s an Engineer Now.”

This is a topic I have been living with daily in my role as CISO, and one that I think every security leader needs to confront head-on. So let me lay out the problem — and what I plan to cover in the session.

The AI Revolution That Nobody Approved

The AI revolution did not knock on the CISO’s door. It walked right past it.

ChatGPT, Copilot, Claude, Gemini, and a thousand niche AI tools have turned every employee in your organization into a self-proclaimed engineer. Marketing is building automated workflows. Sales is feeding CRM data into LLMs for forecasting. Finance is using AI to parse contracts. Legal is running document review through AI assistants. And none of them filed a ticket with security.

Welcome to the era of Shadow AI — where the barrier to “engineering” a solution dropped to zero and your threat surface exploded overnight.

Compare this to previous shadow IT waves. Dropbox, Slack, unauthorized SaaS tools — those took years to proliferate. AI tools went from zero to everywhere in weeks. ChatGPT hit 100 million users in two months. Over 75% of knowledge workers are now using AI tools, most without IT’s knowledge. The scale and speed are fundamentally different from anything we have dealt with before.

And here is what makes this harder than traditional shadow IT: the people using these tools are not malicious. They are trying to be productive. They are trying to do their jobs better. That does not make the risk any less real — it makes it harder to address.

The CISO’s Nightmare: What You Cannot See

For cybersecurity engineering teams, this is an existential shift. The traditional model — where security reviews happened before deployment, where IT controlled the toolchain, where data flows were visible and governed — is broken.

Three problems keep me up at night:

Invisible data flows. Traditional DLP sees network traffic, endpoints, and file transfers. It does not see an employee pasting sensitive data into a chat window. Your SOC 2 controls, your CCPA compliance, your regulatory posture — all of it assumes you have visibility into where data goes. When sensitive data leaves your environment through a prompt, you have a compliance violation you do not know about.

The governance gap. Business units adopt AI tooling faster than security teams can assess it. By the time you learn about a tool, it has been in use for weeks. There are workflows built on it. There are dependencies. The cost of removing it just went up by an order of magnitude.

AI-generated code in production. Copilot, ChatGPT, and Claude are writing production code right now. Developers ship AI-generated code 3-5x faster than they write it manually. AppSec review cycles have not changed. The backlog is growing exponentially. And the real problem is that AI-generated code looks correct — it passes basic review because it reads like something a senior engineer wrote. But it might be reproducing vulnerability patterns from training data, suggesting outdated dependencies, or introducing subtle logic flaws. Worst of all, nobody is tracking which code in the codebase was written by AI. You cannot audit what you cannot identify.

The SDLC Under Siege

In a healthy SDLC, security review, infrastructure planning, and budget approval happen before a line of code is written. Shadow AI skips all of it.

Every unsanctioned AI tool is an unvetted vendor, an unreviewed data flow, and an unassessed risk — discovered after it is already in production. AI-generated services spin up without capacity planning, without architecture review, without knowing who owns them when they break at 2 AM. SaaS subscriptions, API keys, and token costs appear on expense reports, not in the technology budget.

Bypassing these checks does not eliminate the work. It shifts it. Security, infrastructure, and compliance teams inherit the cleanup at 10x the effort.

From Gatekeeper to Enabler

Here is the hard truth I will be delivering at SecureWorld: if your answer to AI adoption is “block it,” you have already lost. Your people will use personal devices, personal accounts, personal email. You cannot firewall your way out of this.

The organizations winning this fight are not the ones with the strictest policies. They are the ones who gave their people secure alternatives faster than the shadow could spread.

The mindset shift looks like this:

  • “Block AI tools at the firewall” becomes “provide approved AI tools with guardrails”
  • “Security reviews before any adoption” becomes “risk-tiered governance that matches adoption speed”
  • “File a ticket” becomes “embedded security in AI workflows”
  • “Compliance as a blocker” becomes “compliance as a feature”
  • “Saying no” becomes “saying yes, and here is how”

If security is slower than shadow AI, shadow AI wins. Every time.

What Actually Works: The Playbook

In the talk, I will walk through a four-part playbook that we have been developing and refining in practice:

1. Discover — AI Asset Inventory and Telemetry. You cannot secure what you cannot see. Deploy AI usage discovery at the network and endpoint level. Monitor DNS, proxy logs, and API calls for known AI service domains. Build a living inventory of AI tools in use — both sanctioned and unsanctioned.
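A minimal sketch of the discovery step: checking proxy or DNS logs against a watchlist of known AI service domains. The domain list and log layout here are illustrative assumptions, not a complete catalog — a real deployment would pull domains from a maintained threat-intel or CASB feed.

```python
# Sketch: flag AI-service traffic in proxy/DNS logs against a watchlist.
# AI_DOMAINS and the log format below are illustrative assumptions.
from collections import Counter

AI_DOMAINS = {
    "chat.openai.com", "api.openai.com",
    "claude.ai", "api.anthropic.com",
    "gemini.google.com", "copilot.microsoft.com",
}

def discover_ai_usage(log_lines):
    """Return per-domain hit counts for known AI services.

    Assumes whitespace-delimited log lines with the destination
    host in the third field (a common proxy-log layout).
    """
    hits = Counter()
    for line in log_lines:
        fields = line.split()
        if len(fields) >= 3 and fields[2] in AI_DOMAINS:
            hits[fields[2]] += 1
    return hits

logs = [
    "2025-04-01T09:12:03 10.0.4.17 chat.openai.com 443",
    "2025-04-01T09:12:09 10.0.4.17 claude.ai 443",
    "2025-04-01T09:13:44 10.0.7.2 intranet.corp.local 443",
    "2025-04-01T09:15:51 10.0.4.22 chat.openai.com 443",
]
print(dict(discover_ai_usage(logs)))  # {'chat.openai.com': 2, 'claude.ai': 1}
```

The output of a pass like this feeds the living inventory: every newly seen domain is either matched to a sanctioned tool or opened as a shadow-AI finding.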

2. Classify — Risk-Tiered Governance. Not every AI use case needs a full security review. If marketing wants to brainstorm taglines with ChatGPT using no customer data, let them. Tier your governance: low-risk use cases get self-service with guidelines, medium-risk gets a lightweight 48-hour security review, and high-risk involving PII, regulated data, or confidential information gets a full security assessment. The 48-hour SLA is the key — if you cannot review it in 48 hours, people will go around you.
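The tiering logic can be as simple as a lookup on the data categories a use case touches. This sketch uses hypothetical category names and tier labels to show the shape of the decision, not a formal taxonomy:

```python
# Sketch of risk-tiered governance: map an AI use case to a review
# path based on the data it touches. Category and tier names are
# illustrative assumptions.
HIGH_RISK_DATA = {"pii", "phi", "regulated", "confidential", "credentials"}
MEDIUM_RISK_DATA = {"internal", "customer_metadata"}

def governance_tier(data_categories):
    """Return the review path for a proposed AI use case."""
    cats = {c.lower() for c in data_categories}
    if cats & HIGH_RISK_DATA:
        return "high: full security assessment"
    if cats & MEDIUM_RISK_DATA:
        return "medium: 48-hour lightweight review"
    return "low: self-service with published guidelines"

print(governance_tier(["public"]))           # low: self-service
print(governance_tier(["internal"]))         # medium: 48-hour review
print(governance_tier(["PII", "internal"]))  # highest category wins
```

The point of encoding it this explicitly is speed: a request form can compute the tier instantly, so only the genuinely high-risk cases consume analyst time.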

3. Enforce — API-Layer and DLP Controls. Route AI traffic through managed proxies or API gateways. Implement AI-aware DLP that scans prompts, not just file transfers. Build token-level monitoring for sensitive data patterns — SSNs, API keys, credentials. Enforcement without enablement is just a fancier way of saying no. You need both.
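Prompt-level scanning at the gateway can look like the sketch below. The regexes are deliberately simplified examples of SSN, AWS-key, and card-number patterns; production scanners use validated detectors with checksum and context checks to cut false positives.

```python
# Sketch of AI-aware DLP: scan outbound prompt text for sensitive
# patterns before it leaves a managed gateway. Patterns are
# simplified illustrations, not production-grade detectors.
import re

PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){15}\d\b"),
}

def scan_prompt(prompt):
    """Return the names of sensitive patterns found in a prompt."""
    return [name for name, rx in PATTERNS.items() if rx.search(prompt)]

def gateway_decision(prompt):
    """Allow the request, or block it with the list of findings."""
    findings = scan_prompt(prompt)
    return ("block", findings) if findings else ("allow", [])

print(gateway_decision("Summarize this contract for me"))
print(gateway_decision("Customer SSN is 123-45-6789, draft a letter"))
```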

4. Embed — Security Engineers in AI Workflows. Assign security engineers to business unit AI initiatives. Create AI Security Champions in each department. Build pre-approved tool catalogs with security configurations baked in. Shift security left into the AI adoption lifecycle. If security is not in the room when the business unit decides to adopt an AI tool, you are going to find out about it in an incident report.

The Security Engineer’s New Role

The cybersecurity engineer of 2026 is not the same role as 2023. The new skill set includes:

  • Understanding LLM architectures and data flows
  • API security and gateway enforcement patterns
  • AI-specific threat modeling (prompt injection, data poisoning, model extraction)
  • Building internal AI platforms with security baked in
  • Cross-functional collaboration — because you are in every department now

And there is a new metric I want every security team to adopt: Mean Time to Govern (MTTG). How fast can you go from discovering an unsanctioned AI tool to providing a secure alternative? That number is your competitive advantage.
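Computing MTTG is straightforward once you log both ends of the process. A minimal sketch, using hypothetical event records with a discovery timestamp and the timestamp when a secure alternative was offered:

```python
# Sketch of Mean Time to Govern (MTTG): average time from discovering
# an unsanctioned AI tool to providing a secure alternative.
# The event records below are hypothetical.
from datetime import datetime

def mttg_days(events):
    """Mean days between 'discovered' and 'governed' timestamps."""
    deltas = [
        (e["governed"] - e["discovered"]).total_seconds() / 86400
        for e in events
    ]
    return sum(deltas) / len(deltas)

events = [
    {"discovered": datetime(2025, 3, 1), "governed": datetime(2025, 3, 5)},
    {"discovered": datetime(2025, 3, 10), "governed": datetime(2025, 3, 12)},
]
print(f"MTTG: {mttg_days(events):.1f} days")  # MTTG: 3.0 days
```

Track the trend, not just the snapshot: a falling MTTG means your governance is catching up to adoption speed.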

Come to the Talk

I will be covering all of this and more at SecureWorld Boston in April, Day 1. The session is designed for CISOs, security engineers, and anyone responsible for keeping their organization secure while AI adoption accelerates around them.

Three takeaways for the CISO:

  • Shadow AI is your number one unmanaged risk and it is growing daily.
  • Blocking does not work — enable with guardrails or lose visibility entirely.
  • Risk-tier your governance because not every AI use case needs the same scrutiny.

Three takeaways for the security engineer:

  • Learn AI fundamentals because you cannot secure what you do not understand.
  • Build discovery and telemetry first — visibility before control.
  • Embed, do not gate — be in the room where AI decisions happen.

Everyone is an engineer now. The question is whether your security team is engineered to handle it.

If you are attending SecureWorld Boston, come find me after the talk. I will be around for hallway conversations and would love to hear how your organization is tackling this.

Moose is a Chief Information Security Officer specializing in cloud security, infrastructure automation, and regulatory compliance. With 15+ years in cybersecurity and 25+ years in hacking and signal intelligence, he leads cloud migration initiatives and DevSecOps for fintech platforms.