Humans use a deeply ingrained cognitive shortcut when they are faced with wild ideas: the gut check. For a claim to pass, it has to feel realistic. It needs to align with our prior experience. And any claim that does not pass our gut check faces a higher burden of proof: logical evidence has to overwhelm and recalibrate the emotional firewall.
The emotional firewall usually serves us well. It keeps most conspiracy theories at the fringe and most contrarians siloed. But sometimes the truth lies outside our comfort zone, beyond our emotional firewall. The ferocity with which Donald Trump has refactored our government has proven that many of our firewalls, including my own, were placed quite badly. It was very hard, in December, to believe that a president would jail innocent political opponents or ship innocent people to foreign prisons. Even though I understood such a thing was logically possible, even plausible, I couldn't feel it. I couldn't see it. The thing had to happen before I believed it was possible. I had to shift my emotional firewall. I don't think I was alone.
This year has also forced me to recalibrate which outcomes of the invention of AI I believe are plausible. Here, the breadth of possible futures is so wide, the uncertainty so deep, that properly calibrating our expectations about the future might be impossible. But let's try.
With AI, it's important to step back and ask what is really being aimed at. Fundamentally, AI companies want to create a capable, intelligent, and autonomous actor that can do any task a human might do. Such a thing will be able to produce and pursue goals, make value judgements, and possess some degree of power over the world so that it may accomplish its goals. Attributes like these may or may not come "for free" with intelligence itself, but that question may not matter. Goals, values, and power will be given to AI agents, because good employees need these attributes to be economically valuable. And good employees are what AI companies are building.
History shows that, given time and compute, AI becomes better than humans at each task it is trained to perform. This began with very narrow "tasks" such as chess and Go, and as time has progressed, AI systems have become able to perform a wider and wider variety of tasks. As both their skill and generality increase, possibilities will emerge that are hard to fathom from our current vantage point.
Whenever different types of actors with differing levels of capability interact, there are profound consequences for the less capable. A pet is entirely at the mercy of its owner. A single person can't wage a war against a state.[1]
If we're able to control our AIs, then we will have a far greater lever on our world than ever before. If they go rogue, we will be up against a far more capable species. These possibilities are likely beyond everyone's firewall. Even separating sci-fi from real possibility on an intellectual level is very difficult, let alone believing that these possibilities are genuinely possible.
I won't try to convince you that AI is an existential threat. But I do believe that writing off the possibility is intellectually dishonest. There are too many credible people making intellectually rigorous arguments. Of course, they might be wrong, but we simply don't know enough about the future to be certain. They might be right.
My point is that we all need to understand that the future, even the next few years, has a reasonably high chance of being wild. As in, "a completely new phase of life on Earth" wild. Everything from human extinction to a fully equal, prosperous, utopian society is within the realm of possibility. We need to take this potential seriously because we do have influence over the future. If we write off strange futures because they "seem crazy", we leave it up to chance whether they occur. Steering the future requires collective action.
Much of the AI conversation is rightly centered on making sure that a dictator does not control a future superintelligence. It is not difficult to imagine a dictator with a superintelligent robot army possessing far more control than any dictator of the past or present. This is why many have argued that superintelligence must be built in America.
But I'm not sure most people are feeling the political possibilities either. As I see it, Trump is making almost every move you'd expect of an aspiring autocrat, and there is a very real possibility that American democracy is ending. Even if you disagree, consider that, regardless of the end result or motive, the power structure of the American government is changing rapidly. If you think Trump's power is acceptable, consider your least favorite candidate in the 2028 election gaining such power. Either way, it is a fundamental change to our governance.
There are plenty of people sounding the alarm that America is sliding into autocracy. There are also plenty of people sounding the alarm that AI will be transformative. But I'm not sure they are the same people. They need to be, because each trend makes the other far more important. The collapse of American democracy would be alarming even without the prospect of superintelligence, but with superintelligence comes power, and that should make the prospect of autocracy much more alarming. Likewise, the invention of superintelligence would be a pivotal moment in any decade; it is even more so given that it's occurring in America[2] at a time when the American president seems focused on accumulating power and nothing else.
I don't know what the solutions are. But the intersection of American politics and the invention of AI must become a subject of broad public discourse. Democracy is the best mechanism humans have invented for making collective decisions, and we should use it. For this to happen, we must all consider that the seemingly impossible might be, in fact, probable.
If you read only one thing about AI, it should be AI 2027. Its authors are, almost explicitly, trying to break down the emotional firewall. Although it was extensively researched, it is not a forecast. It is a story that conveys the flavor of what is likely. Their scenario will not happen, but something like it probably will.
This one-sentence open letter, signed by hundreds of top researchers, reads:
Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.
I also recommend podcasts from the Future of Life Institute and Dwarkesh, such as this.
Here is a good overview of Trump's assault on longstanding norms of our government (from the NY Times; it should be free as a gift article). Sam Harris speaks with clarity about the threat of Trump, for example here, although that episode predates the election. More recent episodes are good but paywalled.
[1] I've argued that states and other human collectives should themselves be considered superintelligences. Importantly, though, they are a very different type of superintelligence than future AI.
[2] I don't mean to write off Chinese AI companies, but since AI progress is so compute-driven, and America has most of the compute, it is worth focusing on America.