Google's Full-Stack Offensive and the Pentagon's Wartime Purge
6 stories · ~9 min read
The One Thing: Google just revealed it's the only company on earth that builds its own AI chips, trains its own frontier models, runs its own cloud, ships its own agentic IDE, and now operates a marketplace where third-party agents run on its protocol. That's not a product launch. That's an operating system play for the entire AI economy.
If You Only Read One Thing
VentureBeat's analysis of why Google doesn't pay the "NVIDIA tax" is the best single piece contextualizing Tuesday's TPU 8 launch. It explains how Google's vertical integration insulates it from the compute cost pressures crushing every other AI lab.
TL;DR
Google Cloud Next 2026 wasn't a product launch; it was Google revealing the most vertically integrated AI stack in the industry, from custom silicon to agent marketplace, challenging both NVIDIA's compute dominance and the hyperscaler-lab partnership model that defines AI's current era. Meanwhile, the Pentagon's wartime leadership crisis deepened as Defense Secretary Hegseth fired Navy Secretary Phelan during an active naval blockade of Iran, leaving the service branch that operates the blockade under its third civilian leader in fourteen months.
Google Cloud Next: The Full-Stack Offensive No One Else Can Run
Here is an incomplete list of what Google announced at Cloud Next in Las Vegas on Tuesday: two new custom AI chips, an agentic development IDE, an enterprise agent management platform, an agent-to-agent communication protocol now in production at 150 organizations, a marketplace where third parties sell AI agents, a security platform built with its new $32 billion acquisition Wiz, and a megascale data center networking fabric called Virgo. Oh, and the fact that 75% of all new code at Google is now AI-generated, up from 50% last fall.
Any one of these would be a headline. Together, they reveal something more important: Google is building an operating system for the AI economy, and it's the only company that can.
Why it matters (Value Chain Analysis): The AI infrastructure market has consolidated around a simple pattern. Labs build models. Hyperscalers provide compute. NVIDIA supplies the chips. Everyone rents from everyone else. Amazon locked up Anthropic with $25 billion and a 5 GW compute commitment. Microsoft bolted OpenAI to Azure. These are bilateral deals where value leaks at every interface.
Google's Cloud Next announcements reveal a different architecture entirely. The TPU 8t (training) scales to 9,600 chips with 2 petabytes of shared high-bandwidth memory in a single superpod, claiming 3x the processing power of last-generation Ironwood at 2x the performance per watt. The TPU 8i (inference) connects 1,152 chips per pod with 3x more on-chip SRAM for the low-latency demands of running millions of agents concurrently. This is the first time Google has split its TPU line into dedicated training and inference variants, a tacit acknowledgment that inference compute is becoming a fundamentally different workload from training.
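The vendor-reported superpod numbers above imply a per-chip memory figure worth sanity-checking. A back-of-envelope calculation (using decimal units and assuming the 2 PB pool is evenly distributed across chips, which Google has not specified):

```python
# Back-of-envelope check on the reported TPU 8t superpod figures.
# Assumes the 2 PB shared HBM pool is evenly spread across all chips.
chips = 9_600                 # chips per superpod (reported)
shared_hbm_pb = 2             # shared high-bandwidth memory, petabytes (reported)

hbm_per_chip_gb = shared_hbm_pb * 1_000_000 / chips  # 1 PB = 1,000,000 GB (decimal)
print(f"{hbm_per_chip_gb:.0f} GB of HBM per chip")   # roughly 208 GB per chip
```

Roughly 208 GB of HBM per chip, if evenly distributed, which would be in the same class as current top-end accelerator memory configurations.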
But the chips are table stakes. What matters is everything above them. Google's Agent2Agent (A2A) protocol launched with 50 partners and now has 150 organizations routing real production tasks between agents built on different platforms. It's integrated into LangGraph, CrewAI, LlamaIndex, Semantic Kernel, and AutoGen. The new Agent Marketplace lets ISVs sell A2A-compatible agents directly to enterprise customers. Agent Gateway inspects and secures every agent interaction, understanding both MCP and A2A protocols. And Antigravity, Google's agentic IDE, lets developers spawn, orchestrate, and observe multiple autonomous agents across workspaces. One internal team built a native macOS Swift app prototype in days. A complex code migration ran 6x faster with agent-engineer collaboration than with engineers alone.
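Antigravity's actual API isn't public in this briefing, but the spawn-orchestrate-observe pattern it describes is a familiar one. A minimal, hypothetical sketch of that pattern (the `agent` function is a stand-in for a real LLM-backed worker, not Google's interface):

```python
import asyncio

async def agent(name: str, task: str) -> str:
    # Stand-in for a real agent call (e.g. an LLM-backed worker over a network).
    await asyncio.sleep(0)  # yield control, as a real network call would
    return f"{name} finished: {task}"

async def orchestrate(tasks: dict[str, str]) -> list[str]:
    # Spawn one agent per task concurrently and collect results in order.
    coros = [agent(name, task) for name, task in tasks.items()]
    return await asyncio.gather(*coros)

results = asyncio.run(orchestrate({
    "migrator": "port module A to the new API",
    "tester": "write regression tests for module A",
}))
```

The 6x migration speedup Google cites presumably comes from exactly this kind of fan-out: many agents working in parallel while an engineer reviews and redirects.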
The strategic logic is simple. Amazon controls the compute relationship with Anthropic. Microsoft controls it with OpenAI. Google controls the entire stack. It makes the chips. It trains Gemini on those chips. It runs the cloud those chips live in. It sets the protocol agents use to communicate. It operates the marketplace where agents are bought. And it secures the traffic with Wiz. No interface means no value leakage. Google Cloud revenue grew 48% year-over-year last quarter and its market share climbed from 12% to 14%, the largest gain among the Big Three.
Room for disagreement: NVIDIA's CUDA moat is a decade deep. Every frontier lab's training stack is built on PyTorch and Triton, which are optimized for GPUs. Google's TPU performance claims are vendor-reported and have not been independently audited at Anthropic or Meta scale. NVIDIA's upcoming Rubin architecture claims 35 petaFLOPS of FP4 training with 288 GB of HBM4. And vertical integration cuts both ways: customers who adopt A2A, Agent Marketplace, and TPU-optimized workloads are locked into Google's ecosystem in a way that multi-cloud GPU deployments avoid. The software ecosystem gap is real.
What to watch: The tell is whether OpenAI expands its TPU usage beyond the initial capacity deal. If the company that defined the NVIDIA-first training paradigm starts running meaningful inference on TPU 8i, the competitive moat narrative shifts permanently. Also watch Google Cloud's Q1 2026 earnings for whether 48% growth accelerates or decelerates under the weight of these infrastructure commitments.
Pentagon in Freefall: Navy Secretary Fired During Active Blockade
Defense Secretary Pete Hegseth fired Navy Secretary John Phelan on Tuesday, effective immediately, with no public explanation. Phelan had addressed a crowd of sailors and defense industry professionals at the Navy's annual conference in Washington just hours earlier. Hung Cao, a 25-year Navy SEAL veteran who lost Virginia's 2024 Senate race to Tim Kaine, stepped into the acting role.
The firing happened during an active U.S. naval blockade of Iranian ports, with three carrier strike groups deployed to the Middle East. The Navy is the service branch responsible for every ship enforcing that blockade.
Why it matters (Incentive Mapping): Strip away the personalities and what you see is a structural pattern. Hegseth has now replaced most of the Joint Chiefs of Staff. Only two original members remain: Gen. Eric Smith of the Marine Corps and Gen. Chance Saltzman of the Space Force. Gone are the former Chairman, Gen. C.Q. Brown; the Chief of Naval Operations, Adm. Lisa Franchetti; the Vice Chief of Staff of the Air Force; and the Army Chief of Staff, fired in a phone call lasting less than a minute after 42 years of service. And now the Navy's top civilian, fired during active naval combat operations.
Multiple sources point to tensions between Hegseth and Phelan over shipbuilding speed, with Stephen Feinberg, the Pentagon's number two, aligned with Hegseth against Phelan's approach. But a deeper dynamic is at work. Hegseth called Trump, got approval, then informed Phelan he could resign or be fired. The pattern is consistent: loyalty to the secretary's agenda, tested and enforced during wartime, with institutional expertise treated as an obstacle rather than an asset.
Cao's appointment underscores the point. He's a decorated combat veteran with genuine operational credentials. But he's also a political figure who ran for Senate on a platform of military culture reform, and he has no prior civilian defense management experience at the Pentagon. The Navy is currently running the most significant naval operation since the Gulf War, maintaining a blockade that requires coordinating carrier groups, logistics chains, and allied naval forces across thousands of miles of ocean. Leadership continuity is not an abstraction here. It is operational capacity.
Room for disagreement: Civilian control of the military is a constitutional feature, not a bug. Phelan was a hedge fund executive with no prior military or government experience before his appointment. If Hegseth and Trump believe shipbuilding modernization requires more aggressive leadership, they have the legal authority to make that change. The military adapts to leadership transitions routinely. Career officers below the political appointee level provide continuity that doesn't depend on any single secretary.
What to watch: Whether the Iran blockade operations show any degradation in coordination or tempo over the next two weeks. The Navy's annual conference was supposed to announce a shipbuilding modernization plan. That plan is now in limbo. Also watch whether Cao's appointment becomes permanent. A Senate confirmation fight would force a public accounting of the Pentagon's leadership churn during active combat.
The Contrarian Take
Everyone says: Google's Cloud Next announcements are impressive but won't dent NVIDIA's dominance. CUDA's software moat is unbreachable, and no hyperscaler's custom silicon has ever displaced general-purpose GPUs for frontier AI training.
Here's why that's wrong (or at least incomplete): The frame is backwards. Google isn't trying to replace NVIDIA for external customers. It's making NVIDIA irrelevant for its own operations, and then making its own operations the platform everyone else runs on. Google Cloud grew 48% last quarter while running Gemini entirely on TPUs. Anthropic signed a 3.5 GW TPU deal and now has $30B+ ARR. Even OpenAI is taking TPU capacity. The question isn't whether CUDA's moat holds. The question is whether it matters when the three largest AI model providers are all running meaningful workloads on Google's chips, and Google is building the agent infrastructure layer above them. NVIDIA won the training era. Google is positioning to own the agentic era.
Under the Radar
- The 75% number deserves more scrutiny. Google says three-quarters of new code is AI-generated and "approved by engineers." But approved by engineers is doing a lot of work in that sentence. What's the rejection rate? What's the rework rate? What's the defect rate of AI-generated code in production vs. human-written code? Google cited a 6x speedup on one complex migration, but a single case study isn't a productivity metric. Until we see systematic data, 75% is a marketing number dressed as an engineering milestone.
- Consulting is now an AI delivery business. BCG reported (first reported by Bloomberg [paywalled]) that AI services generated 25% of its 2025 revenue, and AI-plus-tech work accounted for over 40% of the firm's $14.4 billion total. BCG grew its workforce to 33,500, heavily weighted toward AI engineers and data scientists rather than traditional consultants. The consulting industry has quietly become the largest distribution channel for enterprise AI adoption, and nobody is tracking the conflict of interest: the firms advising companies on AI strategy are the same firms selling AI implementation services.
- IBM's AI anxiety is real. IBM posted (first reported by Bloomberg [paywalled]) in-line Q1 results with software revenue up 11% to $7.05 billion, but the stock didn't move because investors can't figure out whether AI helps or threatens IBM's consulting-plus-middleware business. When your customers can use Claude to do what they used to hire your consultants for, "in-line results" is the best case.
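The skepticism about the 75% figure above can be made concrete with a toy calculation. The acceptance and rework rates below are hypothetical placeholders (Google has reported no such figures); the point is only that "75% generated" is compatible with a much smaller effective contribution:

```python
# Illustrative only: why "75% of code is AI-generated" doesn't pin down productivity.
# acceptance and rework are hypothetical placeholders, not Google-reported figures.
ai_share = 0.75     # fraction of new code that is AI-generated (reported)
acceptance = 0.80   # hypothetical: fraction of AI suggestions approved as-is
rework = 0.30       # hypothetical: fraction of accepted AI code later reworked

# Effective fraction of shipped code that is AI-written and survives review/rework
effective = ai_share * acceptance * (1 - rework)
print(f"{effective:.0%}")  # 42% under these assumptions, far from the headline 75%
```

Different assumed rates give different answers, which is exactly the problem: without the rejection, rework, and defect data, the headline number is unfalsifiable.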
Quick Takes
BCG: AI is now a quarter of consulting revenue. Boston Consulting Group's AI services work generated 25% of its $14.4 billion in 2025 revenue, with the broader AI-and-tech practice accounting for over 40%. The firm added AI engineers and data scientists at scale, growing to 33,500 employees. This confirms that the real AI adoption bottleneck isn't technology, it's implementation. And consulting firms have positioned themselves as the toll booth. (BCG press release)
Virginia redistricting: judge blocks certification hours after voters approve. A Tazewell County judge blocked certification of Tuesday's redistricting referendum, calling the ballot language "flagrantly misleading" and the enabling legislation unconstitutional. Virginia AG Jay Jones promised an immediate appeal. Democrats need those four seats. Republicans may have found their firewall in the courts rather than at the ballot box. (CNBC)
Microsoft explored buying Cursor before SpaceX's $60B deal. CNBC reports that Microsoft evaluated a Cursor acquisition and passed. This means the company that owns GitHub, runs Copilot, and has the deepest developer ecosystem in the world looked at the hottest AI coding tool and decided the price wasn't worth it. SpaceX valued it at $60 billion. Microsoft, which actually competes in the space, didn't. That gap in valuation conviction tells you everything about how different buyers assess the AI dev tools market. (CNBC)
Intel Q1 earnings land after close today. Intel reports after market close, with the stock up 74% in 2026 and near all-time highs. The binary question: does Intel Foundry Services revenue clear $500M for Q1? Analysts expect foundry revenue up double digits quarter-over-quarter from the EUV wafer mix shift, but the business still runs roughly $10 billion in annualized losses. A beat validates the $100 billion rally. A miss could unwind it fast. We predicted in our April 15 briefing that foundry revenue below $500M would trigger a selloff. We'll score that prediction tomorrow. (Yahoo Finance)
Stories We're Watching
- Iran Blockade: Extended Ceasefire vs. Continued Naval Operations (Day 55) — Trump extended the ceasefire indefinitely, but the blockade continues and three carrier groups remain deployed. Now the Navy is under new civilian leadership mid-operation. Iran says the extension "means nothing" and is reportedly returning to talks. The gap between diplomatic language and operational reality keeps widening.
- The AI Dev Tools Valuation Crisis (Week 2) — SpaceX's $60B Cursor option. Microsoft explored and passed. GitHub paused new Copilot signups. Anthropic pulled Claude Code from Pro. The entire category is growing revenue while destroying margin. Now Google enters with Antigravity, a free agentic IDE. If the best AI coding tool is free, what's the $60 billion for?
- OpenAI vs. Musk Trial (4 days away) — The trial begins April 27. With OpenAI in "focus era" mode (Sora killed, triple exec departure, side quest purge), a loss could force governance concessions that complicate the Q4 2026 IPO timeline.
The Thread
The connecting thread across today's stories is institutional capacity under stress. Google's Cloud Next keynote was an exercise in institutional strength: a single company that can design chips, train models, build developer tools, set industry protocols, and secure enterprise infrastructure simultaneously. The Pentagon presented the inverse: an institution systematically stripping itself of experienced leadership during the most complex naval operation in a generation.
The BCG numbers sit between these poles. Consulting firms are growing because enterprises lack the institutional capacity to implement AI themselves. They're renting competence. The 25% AI revenue figure at BCG isn't a sign that AI adoption is working. It's a sign that most organizations can't make it work alone. Google can build the full stack. The Pentagon can't maintain leadership continuity. Most companies fall somewhere in between, writing checks to BCG to figure it out.
Predictions
New predictions:
- I predict: Google Cloud's market share will reach 16%+ by Q4 2026, driven by TPU 8 capacity and A2A adoption, narrowing the gap with Azure's current 24% share. (Confidence: medium; Check by: 2027-02-28)
- I predict: Hung Cao will not receive a Senate confirmation vote for permanent Navy Secretary before the midterm elections in November 2026, leaving the Navy under acting leadership through the Iran blockade's likely duration. (Confidence: medium-high; Check by: 2026-11-03)
Generated: April 23, 2026, 5:45 AM ET
Tomorrow morning in your inbox.
Subscribe for free. 10-minute read, every weekday.