🤖 OpenAI Launches GPT-5.5 With Agentic Coding Upgrades, Token Efficiency Gains, and Two Distinct Model Tiers
- OpenAI officially announced GPT-5.5 on Friday, April 24, describing it as a significant upgrade to the model family powering both ChatGPT and its Codex coding agent - the company's direct competitor to Anthropic's Claude Code
- The release ships as two distinct variants: GPT-5.5 Thinking, which OpenAI positions as offering "faster help for harder problems," and GPT-5.5 Pro, which is pitched as a research partner for tasks where accuracy matters more than speed
- OpenAI claims the new model is meaningfully better at multi-step planning, tool use, and self-verification - meaning it can catch and correct its own errors during long agentic tasks without requiring constant human intervention
- The company is also claiming improved token efficiency specifically for Codex tasks, which is a pointed claim: if true, it means developers running long coding sessions will burn through fewer tokens to complete the same work, which translates directly to lower costs at scale
- Access tiers are tiered by subscription: GPT-5.5 Thinking is available to ChatGPT Plus, Pro, Business, and Enterprise subscribers, while the more powerful GPT-5.5 Pro is limited to Pro, Business, and Enterprise plans; in Codex, GPT-5.5 spans Plus, Pro, Business, Enterprise, Edu, and Go plans - and API access is described as coming "very soon"
- This release follows OpenAI's April 9 launch of a new $100/month ChatGPT subscription tier designed specifically to support heavier Codex use, offering 5x more Codex usage than the standard $20/month Plus plan - a clear signal that the company is betting heavily on agentic coding as a primary revenue driver
Why it matters: Agentic coding - where an AI model doesn't just suggest code but plans, executes, and verifies multi-step programming tasks autonomously - is the current central battleground between OpenAI and Anthropic. GPT-5.5 is OpenAI's most direct answer yet to Claude Code, and the token-efficiency claim is the one developers should scrutinize most carefully: if it holds up in production, it changes the cost calculus for teams running Codex at scale. API access arriving "very soon" means this isn't just a consumer product update - it's a signal to enterprise developers to start evaluating now, before the next model cycle makes this one obsolete. (source)
🕵️ Anthropic's Unreleased "Mythos" Model Accessed by Unauthorized Users Through Third-Party Contractor
- A Bloomberg report revealed this week that a handful of users in a private online forum gained unauthorized access to Anthropic's Mythos model - a frontier AI system that has not yet been publicly released - through a third-party contractor working with Anthropic
- The users reportedly made an "educated guess" about the model's online location based on the URL formats used for previously released Anthropic models, which is a remarkably low-tech entry point for what is potentially a very high-stakes breach
- Anthropic confirmed it is investigating the incident and told Bloomberg it has found no evidence that the breach extended beyond the third-party contractor's environment - but that qualification is doing a lot of work in that sentence
- The stakes of this particular breach are unusually high: Anthropic has stated that Mythos is capable of identifying vulnerabilities in every major operating system in the world, which puts it in a category of AI systems that security researchers and government officials have been warning about for months
- White House Office of Science and Technology Policy Director Michael Kratsios issued a memo this week stating that the administration would work closely with private AI companies to combat industrial-scale espionage, with particular alarm directed at "AI distillation" techniques - a method by which smaller models are trained on outputs from larger frontier models and then used to extract proprietary capabilities through jailbreaking
- Marcus Fowler, CEO of Darktrace Federal - a cyber solutions provider for defense and intelligence agencies - put it plainly: "You also need to be securing these things as much as possible, because they are targets for industrial espionage, whether it's hobbyists that are just dying to play with it to nation-states that will want to weaponize it or leverage it for their own gains"
Why it matters: The Mythos breach is not just an embarrassing security incident for Anthropic - it is precisely the scenario that the White House OSTP memo was written to address. A frontier model capable of finding zero-day vulnerabilities across every major operating system is, by definition, a dual-use technology with serious national security implications. The fact that access was gained through a third-party contractor via a predictable URL pattern - not through a sophisticated cyberattack - underscores a point that security professionals have been making for years: the weakest link in AI security is almost never the model itself. It is the surrounding infrastructure, the contractor relationships, and the operational security practices that most organizations treat as an afterthought. The AI distillation threat flagged by Kratsios adds another layer: even partial access to a model like Mythos could theoretically be used to train smaller, more accessible models that inherit some of its capabilities.
🏢 Meta Laying Off 8,000 Employees in May While Deploying Keystroke-Tracking Tool to Train AI Models
- Meta is planning to cut 10% of its global workforce in May, affecting approximately 8,000 employees, according to a report by the Wall Street Journal - the company declined to comment on the layoffs directly
- The cuts come alongside the cancellation of plans to hire for 6,000 open roles, meaning the total headcount impact is closer to 14,000 positions when unfilled roles are factored in
- Meta's stated rationale is offsetting the enormous cost of its AI infrastructure investment - the company has committed to spending aggressively on data centers, compute, and model development, and the workforce reduction is framed internally as a rebalancing toward that priority
- In the same week as the layoff announcement, Meta deployed a tool to track employee keystrokes and click locations to generate training data for its AI models - specifically to teach those models how humans interact with computers in real-world workflows; a Meta spokesperson said "there are safeguards in place to protect sensitive content, and the data is not used for any other purpose"
- Meta is not alone: the layoffs follow similar workforce reductions at Oracle, Amazon, and fintech company Block, suggesting a broader pattern of large tech companies using AI investment as justification for headcount reduction across the industry
- Matt Calkins, founder, chair, and CEO of software and cloud company Appian, offered context when speaking to the Washington Post: "Their motto is move fast and break things. So, this is the kind of firm that they're going to be. I think the long-term picture is a positive one, but, in the meantime, there will be some turbulence" - though public skepticism about that long-term picture is at an all-time high, with polls showing voters are increasingly unconvinced that AI will produce tangible benefits for average workers
Why it matters: This is the clearest real-world illustration yet of what the AI-labor tradeoff actually looks like inside a major technology company. Meta is simultaneously reducing its human workforce and using that same workforce's behavioral data - their keystrokes, their mouse movements, their navigation patterns - to train the AI systems that may eventually automate significant portions of that work. The irony is not subtle, and it is not lost on the public. House Democratic candidate Alex Bores articulated the anxiety that many workers feel when he said on "The Ezra Klein Show" this week: "I would love to understand why you think we're not headed to a world of full automation, because it's tough for me to know where that stops once we start on it." The Meta story is not just a business story - it is a political and cultural flashpoint that will continue to shape the public debate around AI and labor displacement for months to come. (source)
⚖️ DOJ Intervenes in xAI's Lawsuit Against Colorado's AI Anti-Discrimination Law - A Federal First
- The Department of Justice moved on Friday to formally intervene in xAI's lawsuit challenging Colorado's AI anti-discrimination law, marking the first time in U.S. history that the DOJ has intervened in a case challenging state-level AI regulation
- Colorado's law - which is set to take effect on June 30, 2026 - requires AI developers and deployers to disclose specific information when creating algorithms designed to make or influence decisions in sensitive domains, including mortgage lending, employment, and hiring
- xAI sued to block the law earlier in April, alleging it is unconstitutional; the DOJ's complaint takes particular issue with the law's "explicit carveout for discriminatory algorithms designed to advance 'diversity' or 'redress historic discrimination'"
- Assistant Attorney General Harmeet K. Dhillon issued a statement that left little ambiguity about the administration's position: "Laws that require AI companies to infect their products with woke DEI ideology are illegal. The Justice Department will not stand on the sidelines while states such as Colorado coerce our nation's technological innovators into producing harmful products that advance a radical, far left worldview at odds with the Constitution"
- David Sacks, President Trump's AI adviser, reinforced the position on X: "AI models should not be required to alter truthful output to comply with DEI"
- The Colorado Attorney General's office declined to comment on the DOJ's intervention
Why it matters: The significance of this intervention extends well beyond Colorado's specific law. This is the first time the federal government has used the Department of Justice as an active instrument to challenge state AI regulation - and it sets a precedent that every state legislature currently drafting AI bills needs to understand. The Trump administration has made it clear that it views state-level AI regulation as a potential obstacle to federal AI policy goals, and it is now willing to litigate that position in federal court. If Colorado's law is struck down, it creates a legal template that could be used to challenge similar legislation in other states. The era of states independently setting AI rules without federal legal pushback is, effectively, over. (source)
🌐 The DOJ's Colorado Intervention Is One Piece of a Coordinated Federal AI Regulatory Strategy
- Trump's December 2025 executive order on AI specifically named Colorado's AI anti-discrimination law as a target - making it the only state AI law explicitly called out by name in a presidential executive order
- The same executive order directed the Commerce Department to review state AI laws, identify those deemed "onerous" or in conflict with federal policy, and flag them to the DOJ's AI Litigation Task Force by March 11, 2026 - a deadline the Commerce Department missed
- The DOJ's AI Litigation Task Force was established specifically to coordinate federal legal strategy around AI regulation, and Friday's intervention in the Colorado case appears to be its first major public action, though the DOJ did not immediately confirm whether the Colorado case is formally part of that task force's work
- David Sacks, serving as Trump's AI and crypto adviser, has been a consistent public voice for the administration's position that AI models should not be required to modify their outputs to comply with what the administration characterizes as DEI mandates - his X post on the Colorado case is consistent with a broader communications strategy around AI deregulation
- The administration's approach reflects a deliberate tension between federal AI policy - which under Trump has emphasized speed, deregulation, and competitive advantage over China - and state-level efforts to impose transparency and anti-discrimination requirements on AI systems used in consequential decisions
- Legal experts and civil rights advocates are likely to challenge the DOJ's constitutional arguments, particularly around whether federal AI policy goals can preempt state anti-discrimination laws - a question that may ultimately require Supreme Court resolution
Why it matters: What is emerging is not a series of isolated legal skirmishes - it is a coordinated federal strategy to establish the boundaries of permissible state AI regulation before those boundaries are set by the courts or by Congress. The missed Commerce Department deadline is notable: it suggests the administration's coordination mechanisms are still being built in real time, even as the legal interventions are already underway. For businesses operating across multiple states, this creates genuine compliance uncertainty - state AI laws that exist today may not exist in six months, and the federal framework that would replace them is not yet defined. The outcome of the Colorado case will be one of the most consequential AI policy decisions of 2026, regardless of which side prevails. (source)
🔐 AI Supply Chain Security Has a Serious Problem - Here Is the 10-Step Framework to Fix It
- According to IBM's 2025 Cost of a Data Breach Report, 13% of organizations reported AI-related security breaches in the past year - and of those, 97% lacked proper AI access controls; the average cost of a data breach in the United States hit $10.22 million, a figure that makes the cost of implementing proper security controls look modest by comparison
- The AI supply chain is broader and more complex than traditional software supply chains: it includes data collection and cleaning, model training, weight generation, model registries, deployment pipelines, and third-party integrations - each of which represents a distinct attack surface that most security teams are not yet auditing systematically
- Real-world incidents demonstrate that these are not theoretical risks: in 2023, researchers discovered a backdoored open-source model that had been uploaded to a public model hub, where it appeared legitimate and passed standard tests - but triggered harmful behavior only on specific prompts, a classic supply chain backdoor; in 2022, PyTorch nightly builds were compromised by a dependency confusion attack that left systems vulnerable to full compromise
- NIST has formally identified data poisoning as a key AI supply chain risk, warning that adversaries can introduce corrupted training data to cause AI systems to make harmful or erroneous decisions in ways that are extremely difficult to detect after the fact - a risk that is compounded by the fact that AI models "hold onto what they learn," meaning mistakes can propagate across an organization rapidly
- Security engineers Anoop Nadig and Snahil Singh, writing in Infosecurity Magazine, recommend a 10-step layered defense framework drawing on the SLSA (Supply-chain Levels for Software Artifacts) open-source industry standard: the steps include generating SBOMs (Software Bills of Materials) for all models and datasets, enforcing provenance verification and cryptographic signing of model artifacts, pulling models only from trusted sources with pinned versions, implementing Zero Trust access controls, logging data origins with tamper-evident trails, reducing blast radius through minimal base images and strict network policies, testing models in isolated environments with canary prompts to detect backdoors, implementing continuous behavioral monitoring for output anomalies, maintaining incident response playbooks with model rollback capabilities, and tracking model and data licenses for compliance
- The authors use the "Swiss cheese model" of layered security to frame the approach: each individual control has gaps, but when SBOMs, provenance checks, signed artifacts, data lineage tracking, pinned digests, admission controls, network policies, and runtime verification are stacked together, the gaps rarely align - making a successful attack require navigating every layer simultaneously
Why it matters: Most security teams are still auditing AI model outputs rather than the pipelines that produced those models - and that is precisely where the most dangerous attacks are happening. The Anthropic Mythos breach covered earlier in today's newsletter is a perfect illustration: the model itself was not compromised, but the surrounding infrastructure and contractor relationships were. Data poisoning, unsigned artifacts, over-permissioned integrations, and dependency confusion attacks are live, documented threats - not future concerns. The SLSA framework and the layered defense approach described here are not aspirational; they are the practical starting point for any organization that is deploying AI models in production and has not yet built a formal AI supply chain security program. Given that the average breach cost in the U.S. now exceeds $10 million, the ROI on getting this right is not a difficult calculation. (source)
Ready to try every AI model in one place? One subscription gives you access to ChatGPT, Claude, Flux, Sora, ElevenLabs, and 20+ more.
Like what you read?
Subscribe to get Oakgen AI Daily delivered to your inbox every morning — free.
Subscribe for free