AI Is the Weather
Cutting across the forecasts.
If you’ve looked across the 10 Forecasts we released earlier this year, you’ll notice we didn’t address AI as its own theme. There was no “From Shared Protocols to Proprietary Stacks” or similar title. That was a deliberate choice.
The conversation about AI is almost entirely about AI companies: funding rounds, buildout, model releases, regulatory battles, business impacts, competitive dynamics. What we’ve been tracking is something different — structural transformations in how global systems operate, who makes rules, and what counts as power.
We mapped the 10 Forecasts without organizing them around single drivers like AI, work, or climate; there are plenty of resources for that. The structural shifts in critical issues such as transparency, money, energy, alliances, migration, civil society, and political identity needed to be mapped on their own terms before naming AI as the common accelerant. That framing still holds.
Nevertheless, AI appeared in that work and in our conversations not as a subject but as a condition. Like the weather. Present across everything.
We thought it would be useful now to foreground this weather. Here’s what that looks like across the ground we’ve already covered.
Alliances are now compute deals. The US framed its 2025 AI infrastructure agreements with Saudi Arabia and the UAE explicitly as tools for creating strategic dependencies — allies bound by architecture, not values. China runs the same logic through its Digital Silk Road. When the US-China trade truce of late 2025 preserved one untouchable issue — chip export controls — it revealed what the situationship is actually about. Compute access is the new corridor.
Migration is being sorted by algorithm. The drift toward managing human mobility like shipping logistics accelerates in part because of AI — selection mechanisms are increasingly AI systems at border checkpoints and asylum pipelines. Faster. Even murkier accountability. The data generated — biometric, behavioral, locational — becomes a strategic asset held by governments and private contractors alike.
Civil society faces AI on two fronts. The tools it needs — pattern recognition, translation, analysis, reach — are priced for corporate clients and controlled by the platforms civil society is often trying to challenge. As organizations are forced to shrink or camouflage their work, AI capability is a distant priority. Meanwhile AI is becoming an active adversary: a $100M industry super PAC is already targeting candidates who favor regulation; AI companies have burrowed into government deeply enough to shape the rules before the debate is public — and in some cases, the conflict itself; and the same companies blocking regulation in the West are receiving red-carpet welcomes across the Global South. The question isn’t just how to use AI. It’s how to operate where AI companies have already shaped the terrain.
Ideological fog has an engine. AI makes the fog thicker, faster, cheaper to manufacture. But the more consequential effect is structural: AI is setting the stage for cleavages that don’t map to any familiar spectrum because it’s simultaneously a labor problem, a cultural sovereignty problem, a national security problem, and a production problem. Those don’t align — which is why nationalist populists, organized labor, faith groups, and creative industry unions are signing joint declarations in the US, why Republicans and Democrats are collaborating on AI regulation while gridlocked on everything else, and why the Paris AI Action Summit declaration — signed by 58 countries but not the US or UK — reflects a parallel fracture globally between access-first and rights-first coalitions. The old spectrum can’t hold it.
Sovereign stacks — with a caveat. Vietnam frames its own AI stack as authorship; the EU launches parallel LLM projects whose combined budget wouldn’t cover a week of US Stargate spending. But the infrastructure underneath is structurally shared — Vietnam’s sovereign AI runs on Nvidia chips, and an estimated 40,000 Nvidia GPUs operate inside China despite export bans, because complete separation would collapse the systems that officially reject the dependency. They’re performing sovereignty while remaining structurally entangled.
Energy sharpens the contradiction. AI compute demand is now large enough to reshape national grids, turning tech companies into direct energy actors — commissioning nuclear reactors, signing long-term power purchase agreements, bypassing regulated grids entirely. Call it electrostate conversion: the merger of compute ambition and energy sovereignty into a single geopolitical contest. AI sits on both sides of the climate tension — accelerating emissions while simultaneously becoming the primary tool for modeling and managing adaptation. Pressure-test any sovereignty claim two levels down before relying on it.
Opacity got cheaper, and the models are part of it. Not just through synthetic media and automated narrative, but through the models themselves: proprietary AI systems produce increasingly powerful analysis unavailable outside the organizations that own them. Statistical darkness now has a second dimension: not just data that is hidden, but insight that is proprietary and expanding into new data ecosystems. “Data enclosure,” the conversion of public information into private AI training material and output, is a real and growing threat. And the spread of LLMs is creating a host of new risks to democracy even as we find new applications for them.
The pattern
AI isn’t causing the transformations we’ve been mapping. It’s removing the friction that might otherwise slow them — and adding acceleration where there was only drift before. Opacity was always possible — AI makes it cheap and proprietary. Ideological manipulation was always attempted — AI makes it industrial. Sovereign ambition was always present — AI makes it the organizing logic of geopolitics, while quietly undermining its own premise.
Planning without accounting for that is planning for a world that no longer exists.
What we’re describing isn’t an emerging condition. It’s one already shaping decisions being made right now, in procurement offices, foreign ministries, NGO strategy retreats, central banks. The question for most organizations isn’t whether to engage with this — it’s whether they’re engaging with it deliberately or just being moved by it. The difference between those two positions is what the AREAS framework is built to help map.
Where this goes
What we haven’t mapped explicitly yet is AI’s own trajectory inside these transitions: where the weather is heading, not just what it’s doing now. How will sovereign-stack contradictions evolve as open-source models mature? Will electrostate conversion produce new governance forms or just new concentrations of power? Will the unusual coalitions now forming around AI regulation hold or fracture? And will proprietary analysis advantages compound into something structural, or be disrupted by the same open-source dynamics reshaping everything else?
We’ll return to those questions. Not tracking the horserace, but watching the pressure systems and mapping their potential impacts.
Add your thoughts or questions in the comments below.
Share your signals
Seeing something in your work that doesn’t fit the existing AI conversation? Reply — we’re collecting signals for future issues.
We want your feedback
Last week, we sent a short survey to many who downloaded the 10 Forecasts. If you haven’t seen it — or if you came to From→To through Substack rather than the download list — we’d still love to hear from you. Six optional questions, three minutes, no wrong answers. Have you read the forecasts? Used them with a team? Found something missing? Want something more?
Something new: 10F One to One
Starting soon, we’re adding a new element to From→To: a short series of conversations between 10F contributors. Given the diversity of expertise, perspectives, and home bases among our 20+ contributors, we wanted to match up thinkers and practitioners who bring very different backgrounds to our work.
The format is simple. Two contributors interview each other: the same five questions, both directions, fifteen to twenty minutes each way. The point is to hear how people working in different domains and geographies answer the same prompts.
The first episode is coming shortly. No new subscription required: the conversations will arrive in your inbox alongside the Sunday issues, as part of From→To.
Follow the Project
Beyond this newsletter, you can find 10F on:
LinkedIn — 10F Consortium — longer-form updates and discussion
Bluesky — @10fconsortium.bsky.social — signal tracking and real-time commentary
Instagram — @10fconsortium — visual summaries and forecast highlights
GitHub — 10F Consortium — full forecast archive with version history and transparency, and new tools as we develop them