How Fast Is AI Advancing?

Key points
AI advancement is accelerating exponentially, disrupting white-collar jobs and software engineering. This poses major economic and geopolitical risks, including autonomous weapons, creating a global prisoner's dilemma that demands urgent cooperation for safe integration.
Key takeaway
The rapid acceleration of AI capabilities, particularly in software engineering and autonomous systems, presents a profound and imminent disruption to the global workforce and geopolitical stability. While the technology promises significant benefits like curing diseases and boosting economic growth, it also introduces severe risks, including mass job displacement, the potential for autonomous weaponry, and a destabilizing global arms race. The development trajectory mirrors a high-stakes prisoner's dilemma, where competitive pressures may force actors to prioritize acceleration over safety. Navigating this future requires urgent, cooperative global governance frameworks to harness AI's potential for liberty and human flourishing, rather than allowing it to descend into conflict or oppressive control.
The Unbeatable Army
A swarm of billions of fully automated armed drones, each controlled locally by powerful AI and coordinated strategically at the global level by even more powerful AI, could be an unbeatable army. I don't know that the technology inherently favors liberty, and such swarms would be hard to restrain.
The Current Landscape
The biggest story right now is the absolute crazy stuff going on with artificial intelligence. Last week, I demoed an app I built with Claude, and the response ranged from skepticism to deep concern. That skepticism is worth examining, because significant developments are happening. For instance, Claude recently "one-shotted" a C compiler, producing a working compiler in a single attempt, a deeply technical and error-sensitive form of software. This suggests concerns about AI code quality may be misplaced.
The Scale of Change
A viral article titled "Something big is happening," with over 76 million views, frames the scale. The author, Matt Schumer, compares this moment to February 2020, just before COVID transformed the world, and argues this is way bigger. He notes the future is being shaped by a remarkably small number of people—a few hundred researchers at companies like OpenAI, Anthropic, and Google DeepMind. A single training run by a small team can shift the entire trajectory of the technology.
A Phase Shift
On February 5th, 2026, two major AI labs released new models: GPT-5.3 Codex from OpenAI and Opus 4.6 from Anthropic. This marked a phase shift. Developers report that AI now produces finished work from plain English descriptions, better than they could do themselves, requiring no corrections. This is the step-function change everyone in the space is discussing.
Researchers Sounding Alarms
Concurrently, AI researchers are sounding alarms on their way out the door. Former heads of safety teams warn the world is in peril, citing the technology's potential for manipulation. One departing Anthropic researcher stated he would "become invisible," adding to the unsettling tone.
The Pace of Improvement
The pace of improvement is concrete. In 2022, AI couldn't do basic arithmetic reliably. By 2023, it could pass the bar exam. By 2024, it could write working software and explain graduate-level science. By late 2025, top engineers handed over most coding work to AI. The February 5th, 2026 models made everything before feel like a different era. The meme of a slowdown is false; acceleration is increasing.
Predictions for the Near Future
Microsoft AI CEO Mustafa Suleyman predicts human-level performance on most, if not all, professional white-collar tasks within the next 12 to 18 months. Software engineers are already using AI for the vast majority of code production, shifting their role to debugging and strategy.
The "Centaur" Phase and Disruption
Anthropic CEO Dario Amodei discusses this shift. He notes software disruption might happen fastest because developers adopt new tools quickly. We are in a "centaur" phase, where humans and AI collaborate, akin to the human-computer teams in chess after Deep Blue beat Kasparov. However, that chess era lasted only 15-20 years before AI surpassed the need for human input entirely. Amodei worries the centaur phase for software may be very brief, leading to major disruption for entry-level white-collar and software engineering work. As a loose physics metaphor, F = ma (force equals mass times acceleration) applies: the acceleration (a) of AI capability is enormous, and the mass (m) of the affected economy is huge, so the force (F) of the impact will be correspondingly large.
Potential Upsides
The potential upsides are significant. Amodei moved from computational neuroscience and cancer research to AI with the hope of accelerating progress on complex problems like curing diseases. AI could raise developed world GDP growth to 10-15%, an unprecedented increase that could solve fiscal challenges like national debt.
Technology and Liberty
However, the technology does not inherently favor liberty. Historical technological shifts, like radio, were used by totalitarians to centralize power. The question is whether we can make AI favor liberty. This leads to discussions about using AI for "information wars" or defending democracies with swarms of AI-powered drones—a scary prospect.
The Geopolitical Dilemma
The geopolitical dimension has a precedent: nuclear proliferation. Unlike nuclear weapons, which led to a Mutually Assured Destruction equilibrium, AI presents a different game theory challenge. The prisoner's dilemma framework is crucial here. In this dilemma, two separated prisoners must each choose to cooperate (stay silent) or defect (betray the other). Whatever the other prisoner does, defecting yields a better individual outcome, making it the dominant strategy; yet if both defect, both end up worse off than if both had cooperated.
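The dominant-strategy logic above can be made concrete with a small sketch. The payoff values below are hypothetical, chosen only to satisfy the standard prisoner's dilemma ordering (measured here as years of sentence, so lower is better):

```python
# Illustrative one-shot prisoner's dilemma payoff matrix.
# Values are years of prison (lower is better); the specific numbers
# are assumptions, picked only to produce the classic dilemma structure.
PAYOFFS = {
    #  (my move, their move): (my sentence, their sentence)
    ("cooperate", "cooperate"): (1, 1),   # both stay silent: light sentences
    ("cooperate", "defect"):    (5, 0),   # I stay silent, they betray me
    ("defect",    "cooperate"): (0, 5),   # I betray them, they stay silent
    ("defect",    "defect"):    (3, 3),   # mutual betrayal: both worse off
}

def best_response(their_move):
    """Return the move that minimizes my own sentence, given theirs."""
    return min(["cooperate", "defect"],
               key=lambda mine: PAYOFFS[(mine, their_move)][0])

# Defecting is the best individual reply to either choice...
assert best_response("cooperate") == "defect"
assert best_response("defect") == "defect"
# ...yet mutual defection (3, 3) is worse for both than cooperation (1, 1).
```

The point the matrix makes is exactly the one in the text: individual incentives push each player toward defection, even though the collective outcome of mutual defection is worse for everyone.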
AI as a Global Prisoner's Dilemma
AI development is a global prisoner's dilemma. Companies and nations face the choice to cooperate on safety restraints or defect and race ahead. Defecting promises short-term advantage, but mutual defection risks catastrophic outcomes for all. The history of nuclear proliferation shows that even successful restraint came close to failure, as during the Cuban Missile Crisis, and may have encouraged other nations to seek weapons for protection.
Strategies for Cooperation
Research by Robert Axelrod on repeated prisoner's dilemma games shows the most successful strategy is "tit for tat": cooperate on the first move, then mirror whatever the opponent did last. Axelrod distilled four properties from the winning strategies: be nice (never defect first), be retaliatory (strike back immediately if defected against), be forgiving (return to cooperation once the other side does, rather than holding a grudge), and be clear (a strategy others can read builds trust). This highlights the importance of transparency and trust in navigating the AI race.
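A minimal simulation of the iterated game illustrates why the repeated setting changes the incentives. The strategy functions and the standard tournament payoffs (reward 3, temptation 5, punishment 1, sucker 0) follow Axelrod's setup; the 100-round length is an assumption for illustration:

```python
def tit_for_tat(opponent_history):
    """Cooperate first, then mirror the opponent's previous move."""
    return opponent_history[-1] if opponent_history else "C"

def always_defect(opponent_history):
    return "D"

# Standard iterated prisoner's dilemma payoffs (points, higher is better).
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def play(strategy_a, strategy_b, rounds=100):
    """Play both strategies against each other; return total scores."""
    hist_a, hist_b = [], []          # each side sees the other's history
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(hist_b)  # A reacts to B's past moves
        move_b = strategy_b(hist_a)
        points_a, points_b = PAYOFF[(move_a, move_b)]
        score_a += points_a
        score_b += points_b
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))    # (300, 300): sustained cooperation
print(play(tit_for_tat, always_defect))  # (99, 104): both far below 300
```

Two tit-for-tat players lock into cooperation and each earn 300 points, while a defector "beats" tit for tat head-to-head by only a few points and leaves both sides far worse off, which is the paper's core lesson for the AI race: repeated interaction with clear, retaliatory-but-forgiving rules can make cooperation stable.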
The Builders' Perspective
The people building this technology are both the most excited and the most frightened. They believe it's too powerful to stop and too important to abandon. The next two to five years will be disorienting. Engaging now with curiosity and urgency is critical.
The Human Challenge
In the best-case scenario, where AI brings abundance and ends disease, the fundamental human challenge remains: no pain, no gain. If all challenge is outsourced, life risks becoming monotonous and devoid of meaning. The ultimate danger is a world where pain is wiped away, leading to cognitive decline and a lack of purpose.
A Final Human Note
A final, human note comes from actor James Van Der Beek's message during his cancer treatment. He realized that when stripped of all his roles—actor, husband, father, provider—his core worth was not in what he did, but simply in being worthy of love. This is a profound reminder of our humanity as we face a technologically transformative future.
Navigating the Future
We are heading into a brave new world. The challenge is to navigate it in a way that preserves what makes us human.
Frequently Asked Questions
Q: Any questions?
A: Please read the article carefully. If you still have questions, contact [email protected].
Audio synthesized by Entity-Echo AI Agent