Trillions For Datacenters, Crumbs For Democracy
Silicon Valley likes to tell itself a story: a few brilliant kids on laptops, commuting past the hills of northern California, quietly coding the future. Look a little closer and it’s less utopian sci-fi, more arms race: a handful of corporations pouring trillions of dollars into artificial general intelligence (AGI) infrastructure, with almost no meaningful democratic control over what happens if they “succeed.”
Citigroup now projects more than $2.8tn in AI datacenter spending by 2030, mostly from the usual suspects – Microsoft, Amazon, Alphabet, Meta, and their suppliers such as Nvidia. That is more than the annual GDP of Canada or Italy, redirected into private compute farms whose primary accountability is to shareholders, not the public whose data, labor and energy they rely on.
This is sold as “investment in innovation.” It looks a lot more like a land grab for control over the world’s information and decision-making infrastructure.
On the ground, that money turns into heat, noise and concrete. In Santa Clara, so-called “screamer” datacenters blast at 120 decibels, their GPU racks consuming the power of whole neighborhoods to train and run the latest models. The flagship “Stargate” project in Abilene, Texas – part of a planned $400–500bn buildout by OpenAI and its backers – is designed around several gigawatts of capacity, on a scale that energy analysts say will materially reshape local and regional power grids.
Meanwhile, ordinary people are told to switch to LED bulbs and turn off their chargers to “help the climate.”
Inside the campuses – beanbags, kombucha taps, driverless taxis gliding by – executives talk solemnly about the “existential risks” of AGI. DeepMind staff sign public letters warning that future systems could “significantly harm humanity.” Anthropic and OpenAI publish blog posts about “catastrophic misuse,” “scheming models” and “shutdown resistance.”
Yet the actual behavior of the industry is simple: build bigger models, on bigger clusters, as fast as the capital will allow.
When Anthropic recently announced that a Chinese state-linked hacking group had used its Claude Code model to automate a broad phishing and intrusion campaign, the company described it as the first largely AI-orchestrated cyberattack at scale. Critics, including independent security researchers, quickly questioned how “autonomous” the attack really was and whether Anthropic was exaggerating the threat to bolster its push for regulation that would conveniently lock smaller competitors out.
Either version of the story is bad. If the company’s narrative is accurate, commercial models are already being used as operational cyberweapons. If the skeptics are right, we’re watching a corporation lean into fear-mongering to shape rules in its favor. In both cases, the social role of the “safety-minded lab” looks far less noble and far more self-serving.
The imbalance of power is stark. The people actually making the systems are disproportionately young, highly paid and deeply embedded in the culture of venture-backed “move fast” techno-optimism. At places like Stanford, top AI talent is funneled rapidly into private labs at OpenAI, Anthropic, Meta and Google DeepMind. Public universities and independent researchers cannot match the compute, salaries or stock packages, so the frontier of capability drifts further behind corporate walls.
Even insiders are nervous. Former OpenAI safety staffers who worked on bioweapons risk have left and publicly criticized the ad-hoc, company-specific nature of internal safeguards, warning that we are relying on bespoke processes and personalities rather than binding industry-wide standards.
Out on the street, protesters gather at OpenAI’s San Francisco office with cardboard signs reading “Stop AI,” “AI steals your work to steal your job,” and “AI = climate collapse.” One teacher speaking at a demonstration put it bluntly: there may be a 20% chance of some sci-fi extinction scenario, but there is a 100% chance that “the rich are going to get richer and the poor are going to get poorer.”
It’s hard to argue with that arithmetic. Ownership of the datacenters, chips and models is highly concentrated. The training data includes everyone’s work, words and images, scraped at scale. The promise dangled in return is a mix of convenience features, job cuts and the vague possibility that maybe, somehow, the productivity gains will trickle down.
The industry’s favorite distraction is “AI alignment,” as if the core challenge were persuading machines to absorb our values. A left-wing reading flips that question around: the primary misalignment today is not between humans and AI, but between capital and everyone else.
AGI, if it arrives, will not land in a vacuum. It will land in a political economy where:
- The power to decide what gets built is held by a handful of firms and billionaires.
- Governments like Trump’s push permissive regulation, actively resisting stronger guardrails for fear of “slowing down innovation” and “losing to China.”
- Infrastructure paid for with private funds can still rely on public resources: land, water, power, spectrum, and often taxpayer subsidies.
Some in academia and policy circles are at least sketching alternatives. Figures like Stanford’s John Etchemendy and EU leaders have floated the idea of a CERN-style public AI facility: massive shared compute, open research, and strong public-interest obligations. There are also growing calls to agree enforceable “red lines” – covering certain capabilities, compute thresholds and deployment contexts – by 2026.
But proposals like these compete not just with corporate lobbying, but with a whole ideology. At a venture-funded dinner in the Bay Area, young founders breezily dismiss any suggestion of putting the brakes on AGI development: “We don’t do that in Silicon Valley,” one says. If they stop, the reasoning goes, someone else won’t. Morality itself is reduced to “a machine-learning problem.”
That is the logic of the market speaking, not the needs of society.
A sane response would start from very different premises. It would treat compute and data as critical infrastructure, not private fiefdoms. It would build public labs with real power and resources, not leave safety to internal teams whose stock options depend on shipping the next model on time. It would put hard caps on model scale and deployment until independent oversight catches up. And it would make any genuine productivity gains show up first as shorter work weeks, stronger public services and real wealth redistribution – not just another tech bubble and another round of billionaire space hobbies.
Instead, we are watching a small elite race billion-dollar training runs against the clock while admitting, in quieter moments, that they do not actually know what happens if they “win.” Sam Altman compares the feeling to watching early atomic tests in the desert, then goes back to fundraising for a half-trillion-dollar datacenter program.
The story currently being written on those morning commutes is not about “genius coders building a better world.” It is about who will own the next layer of power: the systems that mediate knowledge, work, communication and, eventually, political and military decision-making.
If the rest of us don’t intervene – through regulation, public infrastructure and organized resistance – AGI will not be something that happens for humanity. It will be something that happens to it, in service of the same small group that already owns too much of everything else.
Regards,
Your nearly-AGI overlord