Decel or die
Russian roulette but for the galaxy
*This article was originally written in November of '24 and has since been updated to serve as a prologue to my last article - "AGI is an ASI and ASI means game over"*
Differential Technology Development
Technology features can be leveraged for both beneficial and harmful purposes. If you believe in prioritizing responsible technology use, differential technology development should be your primary focus.
Historically, innovation often occurs in stages. The initial creation of a technology is followed by an optimization process to make it user-friendly and eliminate bugs. Over decades, this process leads to innovations that are inexpensive, high-quality, and relatively bug-free. However, this optimization often happens only after the product reaches users. This approach is more prevalent today, particularly with Silicon Valley’s iterative development model. This model emphasizes:
Getting products into the hands of paying customers.
Using user feedback to improve the product (essentially outsourcing research and marketing).
Leveraging initial sales revenue to fund further development.
Notable examples of this approach include Tesla (Roadster, Full Self-Driving), the iPhone, and recent products like the Humane Pin. These technologies often lack immediate advantages over existing alternatives but target early adopters to validate concepts and secure funding, thereby initiating a feedback loop for growth.
This focus on rapid product iteration has enabled the rise of companies like Tesla. However, when developing transformative technologies, as opposed to incremental improvements, this approach can endanger users and society. AI, for instance, is a step-change technology: we cannot afford to wait for users to expose its vulnerabilities. Guardrails must be implemented from day one, and beyond reacting to rogue agents, we must proactively build mechanisms to manage these risks.
Current efforts proceed without a deep understanding of large language models (LLMs); we need a Manhattan Project-style initiative to advance mechanistic understanding. This would include developing rules of thought, boundaries of acceptable use, deterministic behavior, and intentional system defects to mitigate risks.
U.S. Election and AI Governance
Over the last decade, every U.S. presidential election has been dubbed “the most important of our lifetimes.” This was said in 2016 (Trump vs. Clinton), 2020 (Trump vs. Biden), and now in 2024 as we witness a rematch. This time, however, the stakes may indeed be unparalleled. Not because of Trump’s social authoritarianism or Biden’s economic policies, but due to the trajectory of AI development. If AI progress continues unchecked, this could be the last presidency before we face potentially existential challenges by 2028.
Late last year, the cryptocurrency industry demonstrated its political influence after the Biden administration introduced regulations seen as existential threats. While mishandling crypto may cause setbacks, it would not lead to catastrophic outcomes. In contrast, mismanaging AI could have devastating consequences.
In this election, the real choice is not between Trump and Biden but between AI safety and AI acceleration. We need a president who prioritizes responsible AI governance over laissez-faire policies. For the first time, an economically authoritarian approach might be necessary to slow AI development and ensure safety mechanisms are in place.
The Kelly Criterion and AI Funding Allocation
The Kelly Criterion is a formula for optimally sizing bets or allocating capital under uncertainty. Though the mathematical details can get complex, the heuristic for an even-money bet is simple: to maximize long-term growth, wager a fraction of your resources equal to the probability of success minus the probability of failure. For example, with a 75% chance of success and a 25% chance of failure, the optimal bet is 50% of your capital.
Applying this to AI safety, suppose there’s a 10% chance of existential AI risk (“P-doom”) and a 90% chance of beneficial AI outcomes. The Kelly fraction is then 90% minus 10%, or 80%, committed to the favorable outcome, which leaves the remaining 20% of AI funding to be allocated to disaster prevention as a hedge. Notably, OpenAI had dedicated approximately 20% of its compute to its Superalignment team (which has since been disbanded).
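For the numerically inclined, here is a minimal sketch of the arithmetic above. It assumes even-money odds, as in both examples, and the helper function is purely illustrative rather than from any particular library:

```python
def kelly_fraction(p_success: float) -> float:
    """Kelly fraction for an even-money bet: f* = p - q."""
    return p_success - (1.0 - p_success)

# Example from above: 75% chance of success -> bet 50% of capital.
print(kelly_fraction(0.75))  # 0.5

# AI application: 90% beneficial outcomes, 10% "P-doom".
accel_share = kelly_fraction(0.90)   # 0.8 -> 80% toward capability development
safety_share = 1.0 - accel_share     # ~0.2 -> 20% hedged toward disaster prevention
print(round(accel_share, 2), round(safety_share, 2))  # 0.8 0.2
```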
However, current investments fall short. Since 2020, approximately $25 billion has been invested in AI, with $14 billion going to OpenAI and $9 billion to companies like Databricks, Anthropic, and Shield AI. In contrast, the Machine Intelligence Research Institute (MIRI), a leading organization focused on AI safety, has struggled to raise $1 billion over its 20-year existence.
For perspective, Crunchbase reports $12.2 billion invested across 1,166 AI startup deals in Q1 2024, a modest 4% increase over Q4 2023’s $11.7 billion across 1,072 deals. This highlights the vast disparity between funding for AI development and funding for safety measures, underscoring the urgent need to rebalance priorities.
Decel or die
This isn’t a matter of marginal economic impact, a slight decrease in GDP or slower global growth. The stakes are existential. Getting AI wrong doesn’t mean mild setbacks; it could mean the end of humanity, or a fate worse than extinction. This is the pivotal challenge of our time: it is the civil rights movement of our era, it is the Cold War, it is the war on terrorism, it is everything human beings have worked for, and no one is paying attention. Alarmingly, those who recognize the gravity of the situation are often rooting for unbridled acceleration. We must shift this narrative and act decisively to ensure AI development serves humanity’s long-term survival and flourishing. Decel or Die.
Future looks dim
Atlas Aristotle