If Mythos Can Break Code, Can It Break Patents?
Stefanos Damianakis, President, Zaruko
Anthropic's Mythos model is dominating the news cycle. The company says its new AI can autonomously find and exploit software vulnerabilities at a speed and scale no human team can match, including bugs that have hidden in critical infrastructure for over two decades. Anthropic is not releasing Mythos to the public. Instead, it has assembled a coalition of major technology partners including Amazon, Apple, Google, Microsoft, CrowdStrike, and JPMorgan Chase, and extended access to roughly 40 additional organizations through a controlled program called Project Glasswing [1].
The cybersecurity implications are real and well-covered elsewhere. This post asks a different question.
Mythos: Genuine Threat or Brilliant Marketing?
There are two ways to read what Anthropic has done, and both are worth considering.
If the capabilities are real, Anthropic is acting as a responsible corporate citizen. By giving defenders a head start, by letting infrastructure companies scan their own code before attackers gain access to comparable tools, Anthropic is doing something unprecedented in the AI industry. The UK's AI Security Institute tested Mythos and confirmed a significant step up over previous models: on expert-level cybersecurity tasks that no model could complete before April 2025, Mythos succeeds 73 percent of the time [2]. Bruce Schneier, one of the most respected voices in security, called it a development with "major security implications" [3].
If this is marketing, it is brilliant. By withholding the model and framing it as too dangerous for public release, Anthropic has generated more attention for a single product launch than most companies achieve in a decade. David Sacks, the former White House AI and crypto czar, said it bluntly: "Anytime Anthropic is scaring people, you have to ask, 'Is this a tactic? Is this part of their Chicken Little routine? Or is it real?'" He later conceded the threat should be taken seriously [4]. Snehal Antani, CEO of offensive AI hacking firm Horizon3.ai, called the entire Mythos narrative "a nothingburger," arguing that "the adversary doesn't need Mythos to hack you" [5]. Independent researchers at AISLE separately found that smaller, open-weights models could reproduce much of the same analysis on Mythos's flagship vulnerabilities when given the relevant code [10].
The truth probably lies in between. The vulnerabilities Mythos found are real, and the patches are being applied. Whether Mythos represents a qualitative leap or an incremental improvement wrapped in extraordinary marketing is a question that will take months to settle.
But that is not today's topic.
The Parallel Nobody Is Talking About
Today's question is about patents.
If an AI model can systematically scan millions of lines of code, identify structural weaknesses, and generate working exploits, can the same capability be applied to patent claims? Can an AI model read a patent, understand the boundaries of what it protects, and generate a design that achieves the same functional outcome while avoiding every claim?
The structural parallel is closer than it might seem.
Software vulnerability scanning and patent claim analysis share a common structure. Both involve parsing large bodies of structured text. Both require identifying boundaries: in code, the boundary is between secure and exploitable; in patents, the boundary is between protected and unprotected. Both ask the same fundamental question: is there a way around this?
The verification side is where the two domains differ. A software exploit either works or it does not. A patent workaround can be evaluated against the claims, but the final answer often depends on claim construction and litigation risk. The verifier is partly automated and partly human.
That asymmetry is not a flaw in the analogy. It is the same structure that makes AI useful in mathematics. In a recent series post, I covered the generator-verifier gap: AI proposes candidate solutions, a verifier checks them, and the loop iterates [11]. The verifier can be a Python script, a human expert, or both working together. The point is not that the AI reasons. It is that the gap between cheap generation and tractable verification creates value.
Patent design-arounds fit this structure exactly. The AI generates candidate designs, fast and cheap. A patent attorney evaluates each one against the claims, slower and more expensive, but still far faster than designing alternatives from scratch. Where claim construction is ambiguous, courts ultimately arbitrate. The verifier is human and the answer is sometimes probabilistic, but the workflow is the same one Fields Medalist Terence Tao described in a recent Quanta profile: AI is very good at scouring large lists of candidates for low-hanging fruit while a human plays the verifier role [11].
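The loop itself is easy to state concretely. Below is a minimal sketch of a generator-verifier loop in Python; the `generate` and `verify` callables are illustrative stand-ins (here, a toy numeric search), not any real patent or security tooling.

```python
from typing import Callable, Iterable, Optional, TypeVar

T = TypeVar("T")

def generator_verifier_loop(
    generate: Callable[[int], Iterable[T]],  # cheap: proposes a batch of candidates
    verify: Callable[[T], bool],             # tractable check: accepts or rejects each one
    max_rounds: int = 5,
    batch_size: int = 10,
) -> Optional[T]:
    """Propose candidates in batches; return the first one the verifier accepts."""
    for _ in range(max_rounds):
        for candidate in generate(batch_size):
            if verify(candidate):
                return candidate
    return None  # generation budget exhausted without a verified hit

# Toy stand-ins: search for the first number whose square ends in 76.
counter = iter(range(1_000))
propose = lambda n: [next(counter) for _ in range(n)]
check = lambda x: (x * x) % 100 == 76

result = generator_verifier_loop(propose, check)
# 24 is the first hit (24**2 == 576)
```

In the patent setting, `verify` is a human attorney, and ultimately a court, so the loop creates value only because generation is orders of magnitude cheaper than verification.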
How Patent Workarounds Work Today
Designing around a patent is one of the most expensive and specialized tasks in intellectual property law. The process typically works like this:
A patent attorney reads the claims of an issued patent. The claims define, in precise legal language, the boundaries of what the patent protects. The attorney then identifies the essential elements: which parts of the claim, if removed or modified, would take a product outside the scope of protection. Finally, the attorney designs an alternative implementation that achieves a similar result while avoiding every element of every independent claim.
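A toy model makes the claim-boundary logic explicit. Under the all-elements rule, a product literally infringes a claim only if it practices every element of that claim. The sketch below treats claims and designs as simple feature sets, which deliberately ignores claim construction and the doctrine of equivalents; every claim and feature name is hypothetical.

```python
# Toy all-elements-rule check: a design literally infringes a claim only if it
# contains every element of that claim. Real analysis also involves claim
# construction and the doctrine of equivalents, which this sketch omits.
def infringes(claim_elements: set[str], design_features: set[str]) -> bool:
    return claim_elements.issubset(design_features)

def avoids_all_claims(independent_claims: list[set[str]], design: set[str]) -> bool:
    return not any(infringes(claim, design) for claim in independent_claims)

# Hypothetical independent claim and two candidate designs.
claim_1 = {"touchscreen", "pinch gesture", "continuous zoom"}
design_a = {"touchscreen", "pinch gesture", "continuous zoom", "haptics"}  # practices every element
design_b = {"touchscreen", "slider control", "continuous zoom"}            # drops one element
```

Here `design_a` infringes because it practices all three elements, while `design_b` avoids the claim by replacing a single element, which is exactly the lever a design-around targets.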
This process is manual, slow, and expensive. A single design-around analysis for a complex software patent can cost tens of thousands of dollars and take weeks. The attorneys who do this work are among the most highly paid specialists in law, because the task requires deep technical understanding combined with precise legal reasoning.
The economics of this process mean that design-arounds are only performed when the stakes justify the cost: when a company faces a patent infringement lawsuit, when a competitor's patent blocks a product launch, or when a licensing negotiation requires leverage.
What AI Can Already Do With Patents
AI-assisted patent analysis is not hypothetical. It is already in production. The U.S. Patent and Trademark Office launched the Artificial Intelligence Search Automated Pilot (ASAP!) in late 2025, which provides inventors with AI-generated prior art search reports before formal examination begins [6]. AI tools can already draft patent claims from invention descriptions [7], evaluate novelty by comparing claims against prior art [8], and generate claim charts for litigation [9].
The step from prior art search to design-around analysis is not trivial, but both tasks ask the model to do the same thing: map claim language to technical alternatives. Prior art search asks: has someone already done this? Design-around analysis asks: can someone do this differently? Both tasks require the same foundational capability: understanding what a patent claim covers and what it does not.
Researchers at Carnegie Mellon have already demonstrated that large language models can assess patent novelty by comparing claims against cited prior art documents, following a process similar to what patent examiners do [8]. The models produced explanations accurate enough to clarify the relationship between a target patent and the prior art. If an LLM can determine that a claim is not novel because prior art covers it, the inverse question (what modification would make a design novel relative to the claim) is a natural extension.
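That inverse question can be illustrated with a deliberately crude sketch: flag which claim elements a prior-art reference does not disclose, since those are both what makes the claim novel and what a design-around would target. Substring matching stands in here for the semantic comparison an LLM or examiner would actually perform, and all strings are hypothetical.

```python
def undisclosed_elements(claim_elements: list[str], prior_art_text: str) -> list[str]:
    """Return claim elements not found in a prior-art reference.

    Case-insensitive substring search is a crude stand-in for the semantic
    matching a real examiner, LLM, or embedding model would do.
    """
    text = prior_art_text.lower()
    return [e for e in claim_elements if e.lower() not in text]

# Hypothetical claim and reference.
claim = ["touchscreen", "pinch gesture", "continuous zoom"]
reference = "A device with a touchscreen and a slider control for continuous zoom."

novelty_levers = undisclosed_elements(claim, reference)
# ["pinch gesture"]: the claim is novel over this reference only because of that
# element, so it is also the element a design-around would modify or drop.
```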
Two Possible Outcomes
If AI-powered patent design-arounds become practical, two outcomes are possible, and they lead in opposite directions.
Outcome 1: More patents, not fewer. Some AI-generated workarounds become new patent filings. If a model can produce ten different ways to achieve the same functional outcome while avoiding an existing patent's claims, several of those alternatives are potentially patentable. The result would be an explosion of patent filings, a thickening of the patent thicket that already makes software innovation expensive and legally hazardous. Companies with resources to run AI-assisted patent generation at scale would accumulate larger portfolios. The patent system, already criticized for rewarding volume over novelty, would become even more congested.
Outcome 2: Patents become less defensible. If anyone with access to an AI model can generate a design-around for any patent, the economic value of holding a patent decreases. Why pay a licensing fee if a model can produce an alternative implementation in hours? Why file a patent if a competitor can design around it before the ink dries? In this scenario, the practical enforceability of software patents erodes, and the innovations that would have been patented instead flow into the public domain, either as open implementations or as trade secrets that companies choose to protect through secrecy rather than filing.
Which outcome prevails depends on how patent law adapts, how courts interpret AI-generated workarounds, and whether the patent office can keep pace with AI-assisted filings. But either way, the economics of software patents change fundamentally.
The Deeper Pattern
Step back from both cybersecurity and patents, and the underlying pattern is the same.
Large language models, and the systems built on top of them, are exceptionally good at a specific class of tasks: scanning large bodies of structured text, identifying patterns and boundaries, and generating alternatives that satisfy or violate specific constraints. Code is structured text. Patent claims are structured text. Regulatory filings are structured text. Contracts are structured text.
Any domain where the task is "read a large body of precise language, understand what it permits and prohibits, and find a path through or around it" is a domain where AI capabilities are advancing rapidly. This is not reasoning in the human sense. It is pattern matching at scale, applied to text whose structure the model has learned from billions of examples. I covered why this distinction matters in the reasoning series on Zaruko Insights. Cybersecurity is the first domain where this capability has become headline news. Patents may be the next. Regulatory compliance, contract negotiation, and legal discovery are not far behind.
The question for enterprise leaders is not whether AI will be able to do this. It is whether your organization is prepared for a world where both your defenses and your competitors' defenses can be systematically analyzed at machine speed.
References

1. Anthropic, "Project Glasswing," April 2026.
2. UK AI Security Institute (AISI), "Our evaluation of Claude Mythos Preview's cyber capabilities," April 2026.
3. Bruce Schneier, "What Anthropic's Mythos Means for the Future of Cybersecurity," Schneier on Security, April 2026.
4. The Hill, "Anthropic's Mythos puts DC, Wall Street on high alert," April 2026.
5. The Register, "Anthropic Mythos shaping up as nothingburger," April 2026.
6. U.S. Patent and Trademark Office, "USPTO launches new AI Pilot for pre-examination utility application search," October 8, 2025.
7. Lexology, "The Future of Patent Drafting: AI, LLMs, and the Evolution of IP Management," February 2025.
8. Hayato Ikoma and Teruko Mitamura, "Can AI Examine Novelty of Patents?" Carnegie Mellon University, 2025.
9. IP Copilot, "The End of Patent Drafting: How AI Is Changing IP Workflows in 2026," January 2026.
10. AISLE, "AI Cybersecurity After Mythos: The Jagged Frontier," April 2026.
11. Stefanos Damianakis, "Your AI Model Isn't Reasoning. It's Searching.," Zaruko Insights, April 2026.
Frequently Asked Questions
Can AI design around patent claims?
AI-assisted patent analysis is already in production. The U.S. Patent and Trademark Office's ASAP! pilot uses AI to generate prior art search reports, and commercial tools draft claims, evaluate novelty, and produce claim charts. The step from prior art search to design-around analysis is not trivial, but both tasks require the same foundational capability: mapping claim language to technical alternatives. Carnegie Mellon researchers have demonstrated that large language models can assess patent novelty by comparing claims against cited prior art. The inverse question, what modification would make a design novel relative to a claim, is a natural extension of the same workflow.
What is Anthropic's Mythos model and Project Glasswing?
Mythos is Anthropic's AI system for autonomously finding and exploiting software vulnerabilities at machine speed, including bugs hidden in critical infrastructure for decades. Anthropic has not released Mythos publicly. Instead it assembled Project Glasswing, a coalition of major technology partners including Amazon, Apple, Google, Microsoft, CrowdStrike, and JPMorgan Chase, plus roughly 40 additional organizations with controlled access. The UK AI Security Institute confirmed Mythos succeeds on 73 percent of expert-level cybersecurity tasks that no prior model could complete before April 2025.
Will AI weaken patent protection?
Two outcomes are possible and they lead in opposite directions. If AI-generated workarounds get filed as new patents, the patent thicket thickens and large companies with resources to run AI-assisted generation at scale accumulate even bigger portfolios. If anyone with model access can generate a design-around for any patent, the economic value of holding a patent decreases, licensing revenue erodes, and innovations flow into trade secrets or the public domain instead of patent filings. Which path prevails depends on how patent law adapts, how courts interpret AI-generated workarounds, and whether the patent office keeps pace with AI-assisted filings.
Continue Reading
Your AI Model Isn't Reasoning. It's Searching.
Generator-verifier loops explain why AI looks like reasoning when it isn't, and where the gap between cheap generation and tractable verification creates real value.
Your AI Vendor Claims Their LLM Can Reason. Here's What's Actually Happening.
Every AI vendor claims their LLM can reason. They all run next-token prediction underneath. Here's what that means for the capability claims you trust.
Your AI Can't Reason. But You Can Still Get Reliable Results.
AI doesn't need to reason to be reliable. It needs problems with verifiable answers. A four-question framework for where AI works in the enterprise.
Wondering how AI capabilities map to your industry's structured-text problems?
We help mid-market companies identify where AI delivers measurable value today and where capabilities are headed next. Let's talk.