Project Glasswing and the Next Phase of AI Adoption
A new Anthropic initiative points to a future where the biggest AI gains come from stronger systems paired with stronger human stewardship
When I saw Anthropic’s Project Glasswing announcement this week, one of my first thoughts was that if it were not about cybersecurity, it would sound like the setup for a great marketing campaign: a secretive new AI system, a handpicked coalition of elite partners, and a promise to secure the future! That is obviously not a serious suggestion, but it does say something about how intentionally this was framed.
Still, the more I read, the less I saw this as just a cybersecurity story and the more I saw it as a signal about where frontier AI is heading.
Anthropic says its Claude Mythos Preview model has already found thousands of high-severity vulnerabilities, including in major operating systems and web browsers, and it is not planning to release the model broadly. Instead, it is making it available through a controlled initiative with partners including AWS, Apple, Cisco, CrowdStrike, Google, JPMorganChase, the Linux Foundation, Microsoft, NVIDIA, and Palo Alto Networks, along with a wider set of organizations working on critical software and infrastructure. Anthropic is also putting real resources behind it, with a large pool of usage credits and funding for open-source security work.
That has understandably gotten a strong reaction.
Some of the partner commentary treats this as a threshold moment, arguing that AI has crossed into a new level of usefulness for defensive security and that the pace of cyber offense and defense is going to change. Other coverage has been more measured, and I think that is healthy. Wired, for example, noted that some experts see this less as a sudden break from the past and more as a serious acceleration of trends that were already underway in AI-assisted security research. That distinction matters, but I am not sure it changes the bigger takeaway all that much.
I do not want to pretend I know more about this than the people who work in cybersecurity every day, because I do not. What I do think is clear is that announcements like this reveal something important about how the next phase of AI adoption is likely to unfold. The biggest advances are not always going to arrive first as mass-market product launches. In some cases, they are going to show up in gated, high-trust, workflow-specific deployments where the upside is enormous, the risks are real, and the organizations involved want to move carefully. (Think of the CIA's physics-defying Ghost Murmur, which must rely on even more advanced AI and was deployed to find a missing airman 40 miles away by his heartbeat.)
That part is what caught my attention most. Anthropic is not just saying, “Look what our model can do.” It is also saying, in effect, “This capability is consequential enough that we are going to roll it out in a controlled way, with safeguards, partners, and a lot of oversight.” Whether you think Mythos is a dramatic leap or a fast acceleration, that feels like an important glimpse of what is coming.
From a FOMO.ai perspective, I do not read that as a story about humans becoming less important. I read it as the opposite. The more capable these systems become, the more valuable the human-and-technology layer becomes. Stronger tools do not remove the need for judgment, prioritization, review, accountability, and strategy. They make those things more important because someone still has to decide where the system should be pointed, what matters most, what to trust, and how the output connects back to real-world goals.
That has always been our view. Our value does not come from software alone, nor from humans trying to compete with machines. It comes from combining the two in a way that actually moves the business forward. If the technology takes another meaningful step forward, that is good news for our clients and for us: it expands what is possible while keeping humans in the role of good stewards.
So even if you never touch a cybersecurity workflow, I think Project Glasswing is worth paying attention to. It looks like an early example of a broader pattern: more powerful AI systems, introduced first in narrow, high-stakes settings, with real advantages going to the teams that know how to direct them effectively.
To me, that is the real story. AI is getting more operationally capable, and the organizations that benefit most will not just be the ones with access to the best models; they will be the ones that know how to shepherd those models toward useful outcomes with the right mix of human judgment and technology.
Dax Hamman is the CEO & Co-Founder at FOMO.ai and the author of 84Futures and dax.fyi. If you’re ready to generate sales & leads without paid ads, schedule a free call and we will make you a custom $5,000 Playbook.