News

Google's Pentagon Pivot and GitHub's Agent Tax

7 stories · ~7 min read

If You Only Read One Thing

The important AI platform fight is moving away from chatbots and into operational control. Google's Pentagon deal shows government buyers turning safety boundaries into procurement power; GitHub's reliability crisis shows AI agents turning developer infrastructure into production infrastructure. In both cases, the actor with the workflow sets the terms, while everyone else negotiates policy after the fact.

The Pentagon Turns AI Safety Into Procurement Power

The fastest way to weaken an AI lab's safety policy is to make it look like a procurement risk. That is the lesson from the Pentagon's widening set of AI deals.

TechCrunch reported that Google has given the Department of Defense access to its AI on classified networks under terms that broadly allow lawful government use. The deal follows Anthropic's refusal to grant the Pentagon the same latitude; Anthropic insisted on keeping guardrails against domestic mass surveillance and autonomous weapons. The Pentagon then labeled Anthropic a supply-chain risk, a designation now tied up in court after Anthropic won an injunction last month. OpenAI and xAI have also signed Pentagon deals, according to TechCrunch, while Google faced an employee letter with 950 signatures asking it not to proceed without similar limits.

Why it matters: This is not just another version of the 2018 Project Maven fight, when Google declined to renew a Pentagon AI contract after employee protests. The center of gravity has moved from internal labor pressure to government vendor management. In 2018, a hyperscaler could treat defense AI as reputationally optional; in 2026, frontier AI is being folded into classified networks, national-security procurement, and Washington's broader competition with China. The procurement mechanism matters because it converts a moral disagreement into a reliability question: if one lab refuses a use case, the government can favor a rival that accepts "all lawful" language and punish the holdout through risk designations. That shifts bargaining power from model makers to the state, and it makes AI governance less like public rulemaking and more like terms negotiated inside classified contracts.

Room for disagreement: Anthropic's position is not merely brand theater. If a lab believes its systems should not support domestic surveillance or autonomous targeting, a refusal is a real constraint and may matter legally if courts reject the Pentagon's supply-chain-risk move. Google's deal also appears to include language saying it does not intend its AI to be used for those purposes, though that language's enforceability is unclear.

What to watch: The immediate test is Anthropic's lawsuit. If the court leaves the Pentagon's supply-chain-risk designation intact, safety-policy refusals become a commercial liability; if the designation is narrowed or struck, labs keep some leverage to make excluded-use policies more than marketing copy.

GitHub Finds the Agent Tax on Code Hosting

GitHub is discovering that AI agents do not just write more code. They turn every quiet corner of the developer platform into live infrastructure.

In an availability update, GitHub said it began executing a 10X capacity plan in October 2025, then realized by February 2026 that it needed to design for 30X today's scale. The company pointed to agentic development workflows driving repository creation, pull requests, API usage, automation, and large-repository workloads. It also detailed two recent incidents: an April 23 merge queue regression affecting 658 repositories and 2,092 pull requests, and an April 27 Elasticsearch incident that disrupted search-backed parts of pull requests, issues, and projects. Separately, Ghostty creator Mitchell Hashimoto said the project is leaving GitHub after months of reliability frustration.

Why it matters: GitHub's old economic role was to be the coordination layer for human-paced software work. AI agents change the load model. A human developer opens a pull request, waits for review, and comes back later; an agent can create, test, retry, query APIs, trigger webhooks, and push work through GitHub Actions with machine patience and human impatience. GitHub's own description makes the mechanism clear: a single pull request can hit Git storage, mergeability checks, branch protection, Actions, search, notifications, permissions, webhooks, background jobs, caches, APIs, and databases. This is why last week's move to put Copilot on usage-based billing was only half the story. The other half is availability: if GitHub becomes the default control plane for agentic coding, Microsoft has to price the workload and harden the platform at the same time. The strategic asset is no longer just source hosting; it is privileged placement inside the software production loop.

Room for disagreement: It is too neat to blame every GitHub outage on AI agents. The April 23 incident was a merge queue regression, and GitHub said the April 27 search outage was likely related to a botnet attack. GitHub also has the balance sheet, Azure migration path, and engineering talent to absorb a capacity reset that smaller developer platforms could not finance.

What to watch: The reliability signal is where GitHub's next incidents land. If outages cluster around Actions, mergeability checks, API quotas, and search indexes rather than ordinary Git storage, the platform's constraint has shifted from hosting repositories to arbitrating automated work.

The Contrarian Take

Everyone says: AI governance is moving toward explicit rules, meaning lab safety policies, government frameworks, and enterprise usage controls. Developer productivity is moving toward abstraction, with agents hiding more of the software workflow from humans.

Here's why that's wrong (or at least incomplete): The more important shift is operational capture. The Pentagon does not need to win an ethics debate with Anthropic if it can make refusal look like a supply-chain problem and buy from Google, OpenAI, or xAI instead. GitHub does not need to persuade developers that agents are inevitable; it can see the traffic pattern in repositories, pull requests, APIs, and Actions. In both stories, the institution that controls the operating environment learns faster than the institution writing policy from outside it. That is why governance and pricing are following the workflow, not leading it.

Under the Radar

  • European cloud sovereignty is becoming an audit fight. CISPE's new framework lets European firms certify whether services are sovereign or merely resilient, with more than 40 services already declared against it. The useful signal is that "sovereign cloud" is moving from political slogan to procurement checklist: who owns, governs, operates, encrypts, backs up, and can be switched away from under stress. (Source)
  • Amazon is turning the desktop into an AWS surface. Amazon Quick is now in preview as a native macOS and Windows app, with local file access, OS notifications, desktop automation, a personal knowledge graph, and local MCP connections to coding agents. The move matters because enterprise assistants are migrating from browser tabs into the operating-system layer, where distribution and context are scarcer assets. (Source)

Quick Takes

  • OpenAI landed on Bedrock faster than expected. AWS says OpenAI models, Codex, and Managed Agents are now available in limited preview on Amazon Bedrock, with IAM, PrivateLink, guardrails, encryption, CloudTrail logging, and usage that can count toward AWS cloud commitments. Yesterday's prediction that OpenAI would become directly purchasable through AWS by June 30 has effectively resolved in one day. (Source)
  • The market found an OpenAI weak link. Oracle and CoreWeave fell after the Wall Street Journal reported OpenAI missed recent user and revenue targets; Reuters, via Investing.com, framed the move as concern over OpenAI-linked growth expectations. The deeper signal is that AI capex proxies now trade on OpenAI demand quality, not just Nvidia supply scarcity. (Source)
  • Australia found a harder Big Tech news tax. Draft legislation would charge Meta, Google, and TikTok a 2.25% levy on Australian revenue unless they strike publisher deals, with the effective rate dropping to 1.5% if enough agreements are signed. The key change from the 2021 code is that platforms cannot avoid the charge simply by pulling news. (Source)

The Thread

Today's stories are all about where policy becomes real. AI safety becomes real when a Pentagon contract decides which uses are acceptable and which vendor is considered reliable. Agentic coding becomes real when GitHub has to redesign capacity, failure isolation, and eventually pricing for machine-generated work. Cloud sovereignty becomes real when buyers ask who controls the infrastructure under legal pressure. The common pattern is that abstract principles are losing to operational chokepoints: procurement, platform traffic, desktop placement, and billing.

Predictions

New predictions:

  • I predict: By 2026-06-30, at least one major frontier AI provider will publish or sign a government-use framework that centers on "lawful government purpose" language rather than a detailed public excluded-use list. (Confidence: medium; Check by: 2026-06-30)
  • I predict: GitHub will announce a distinct AI-agent traffic tier, automation quota, or priority-control product for enterprise customers by 2026-07-31. (Confidence: medium; Check by: 2026-07-31)

Generated 2026-04-29, 03:22 ET.

Tomorrow morning in your inbox.

Subscribe for free. 10-minute read, every weekday.