An AI agent hacked McKinsey's internal AI platform in two hours. Through SQL injection.
McKinsey built an internal AI platform called Lilli. Three-quarters of its 40,000 employees use it for strategy work, client research, and document analysis. AI advisory is now 40% of the firm's revenue. They practice what they sell.
A cybersecurity startup’s AI agent broke into it in two hours. No credentials. No insider knowledge. Through a SQL injection flaw. One of the oldest vulnerabilities in the book. Lilli had been running in production for two years.
In two hours the agent accessed 46.5 million internal chat messages, 728,000 files, and 57,000 user accounts. McKinsey disputed the full scope of the breach but confirmed the vulnerability was real and patched it within 24 hours.
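For readers who have not seen the vulnerability class up close: SQL injection happens when user input is pasted directly into a query string, letting an attacker rewrite the query itself. The sketch below is illustrative only, not McKinsey's code; the table, column names, and payload are hypothetical, and the fix shown (parameterized queries) is the standard guardrail.

```python
import sqlite3

# Hypothetical data, for illustration only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT, role TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?, ?)",
                 [(1, "alice", "admin"), (2, "bob", "analyst")])

def find_user_vulnerable(name):
    # String interpolation: attacker-controlled input becomes part of the SQL.
    query = f"SELECT id, name FROM users WHERE name = '{name}'"
    return conn.execute(query).fetchall()

def find_user_safe(name):
    # Parameterized query: the driver treats the input as data, never as SQL.
    return conn.execute(
        "SELECT id, name FROM users WHERE name = ?", (name,)
    ).fetchall()

payload = "' OR '1'='1"
print(find_user_vulnerable(payload))  # every row leaks: the WHERE clause was rewritten
print(find_user_safe(payload))        # empty result: the payload stayed a literal string
```

The fix is a one-line change, which is exactly why a two-year-old injection flaw in a production system is a process failure, not a hard engineering problem.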
Here is the lesson. Not that AI is dangerous. Not that you should slow down. But that every new technology comes with edge cases and misuse risks that are easy to miss when you are focused on the upside.
The value AI brings to your business is real. So is the downside risk. Understand both before you deploy. Move forward with guardrails, not blinders.
Want longer reads on these topics?
Insights covers the same topics in depth: research-backed analysis on AI, value creation, and building companies.
Read Zaruko Insights