Vibe coding works. That does not mean it is secure.
Vibe coding is a genuine enterprise opportunity. Non-technical employees building functional internal tools in hours instead of weeks is a real force multiplier, and enterprise adoption is accelerating.
But the security data is a warning sign that cannot be ignored.
Multiple studies suggest that a significant share of AI-generated code contains security vulnerabilities, with estimates ranging from 40 to over 60 percent. According to a Q1 2026 security assessment cited by The Next Web, over 90% of vibe-coded applications contained at least one security flaw. Wiz Research found that 1 in 5 organizations using these platforms inadvertently exposed themselves to risk. A single scan of 1,645 Lovable-built apps found 170 that allowed anyone to access user data, including names, emails, financial records, and API keys.
The problem is not the technology. The problem is that “works” and “secure” are different words. And most vibe-coded apps confuse the two.
The fix is not to ban vibe coding. It is to treat it like real software.
Build a release checklist. Run it with an independent review team, internal or external, before any vibe-coded app touches production data or goes live.
Security review. Authentication check. Data access policies. Exposed secrets scan. Standard stuff. Not optional.
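As a minimal sketch of one checklist item, the exposed secrets scan, here is what an automated pre-release check could look like. The pattern names and regexes below are illustrative assumptions, not a production rule set; real scanners such as gitleaks or trufflehog ship far larger and better-tested rules.

```python
import re

# Illustrative detection rules (assumptions, not an exhaustive rule set).
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "generic_api_key": re.compile(
        r"(?i)api[_-]?key['\"]?\s*[:=]\s*['\"][A-Za-z0-9_\-]{16,}['\"]"
    ),
    "private_key": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
}

def scan_text(text: str) -> list[tuple[str, int]]:
    """Return (rule_name, line_number) for every suspected secret in the text."""
    findings = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        for name, pattern in SECRET_PATTERNS.items():
            if pattern.search(line):
                findings.append((name, lineno))
    return findings

# Example: a hardcoded key in app config should be flagged before release.
sample = 'config = {"api_key": "sk_live_abcdefgh12345678"}'
print(scan_text(sample))  # flags generic_api_key on line 1
```

The point is not this specific script; it is that every checklist item should be cheap to automate, so the one-day review turnaround is realistic.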
And mandate that production approval checks be completed within one business day. That removes the excuse that security is a blocker. It makes security a fast lane, not a wall.
The barrier to building software has never been lower. The barrier to securing it should not be either.
Want longer reads on these topics?
Insights covers the same topics in depth: research-backed analysis on AI, value creation, and building companies.
Read Zaruko Insights