The 'dangerous' promise of a techno-utopian future
Don't dismiss the commitment tech billionaires have to 'longtermism,' warns critic
On his first full day as the 47th U.S. president, Donald Trump announced what he called the "largest AI infrastructure project in history," with the leaders of three top tech firms alongside him — OpenAI's CEO Sam Altman, SoftBank's CEO Masayoshi Son, and Oracle's chairman Larry Ellison.
The joint venture is a new company called Stargate, with an initial investment of $100 billion. In the coming years, the partners plan to invest up to $500 billion and, according to Trump, create 100,000 jobs. The venture is a partnership between OpenAI, Oracle, Japan's SoftBank — led by Masayoshi Son — and MGX, a tech investment arm of the United Arab Emirates government.
Stargate's mission is to build "the physical and virtual infrastructure to power the next generation of AI." That includes the construction of data centres across the U.S., with the first one — a one-million-square-foot project — already underway in Texas.
But that growth carries an environmental cost. Data centres consume vast amounts of electricity and water. According to the International Energy Agency, "large hyperscale data centres, which are increasingly common, have power demands of 100 MW or more, with an annual electricity consumption equivalent to the electricity demand from around 350,000 to 400,000 electric cars." That consumption is only expected to grow over time.
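As a rough illustration of how that comparison works out, here is a back-of-envelope sanity check sketched in Python. The per-car figures (about 15,000 km driven per year at roughly 0.17 kWh per kilometre) are illustrative assumptions, not numbers from the article or the IEA.

```python
# Back-of-envelope check of the IEA comparison quoted above.
# Assumptions (illustrative, not from the article or the IEA):
# a hyperscale data centre drawing a constant 100 MW, and a typical
# electric car driving ~15,000 km a year at ~0.17 kWh per km.

DATA_CENTRE_POWER_MW = 100
HOURS_PER_YEAR = 24 * 365  # 8,760 hours

# Annual data centre consumption, in kilowatt-hours.
data_centre_kwh = DATA_CENTRE_POWER_MW * 1_000 * HOURS_PER_YEAR

# Assumed annual consumption of one electric car, in kilowatt-hours.
ev_kwh_per_year = 15_000 * 0.17

equivalent_cars = data_centre_kwh / ev_kwh_per_year

print(f"Data centre: ~{data_centre_kwh / 1e6:.0f} GWh per year")
print(f"Equivalent to roughly {equivalent_cars:,.0f} electric cars")
```

Under those assumptions, a single 100 MW facility running around the clock uses on the order of 876 GWh a year, which works out to roughly 340,000 electric cars' worth of electricity — close to the lower end of the IEA's range.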
With so much at stake, the hyper-push toward AI development has plenty of critics. They're concerned about the impact on workers, on education, and on the corruption of publicly accessible knowledge. Another worry is that policy attention and money could be redirected to a small number of actors in the tech sector rather than earmarked for programs that would benefit the broader population.
Philosopher and historian Emile P. Torres is a vocal critic of the ideologies espoused by tech billionaires like Elon Musk, Peter Thiel, and Sam Altman. Torres's work in recent years has focused on threats to humanity and civilization, including climate change, nuclear war, bioengineering, and artificial intelligence.
Torres sees this promise of a techno-utopia as "dangerous" and described how that future is envisioned within the ideology.
"We become post-human. Then we spread into space, colonize the universe, and create this sprawling, multi galactic civilization where there are trillions and trillions of super happy people who basically live forever."
Enter artificial superintelligence
The disparate but overlapping ideologies that guide this futuristic vision are bundled under the term TESCREAL — an acronym coined by Torres and computer scientist Timnit Gebru. Torres says the most salient parts are represented by the "T" and the "L" — transhumanism and longtermism, the realization of which depends on the creation of artificial superintelligence.
"It's long been thought that the outcome of superintelligence will either be total annihilation or that superintelligence will usher in Utopia," said Torres.
There's also something distinctively religious about TESCREALism, they argue, from its narrative of transcending our base selves, to redemption and resurrection, and, finally, the promise of immortality in Paradise.
"If you don't live long enough to live forever, as the transhumanist Ray Kurzweil says, you could always opt to have your body or just your head and neck cryogenically frozen so that at some point in the future you can be reanimated, for example, with digital being," said Torres.
"One way to think about the race to build artificial general intelligence with respect to TESCREALism as a religion is this: if God doesn't exist, then why not just create Him? A lot of people in the community refer to AGI or Superintelligence as God-like A.I."
The 'galaxy brain'
Torres focuses on existential risk, a term that was coined within the transhumanist movement.
"A key part of the transhumanist vision is that we're going to develop advanced technology and then use this technology to radically re-engineer the human organism," said Torres, who is also a research fellow at Case Western Reserve University.
Imagine if we could conquer the need for sleep; reverse our chronological age; connect our brain to a computer and acquire superhuman cognition; upload our mind to a server and live forever. Torres says the assumed result of this — if AI doesn't kill us first — will be the creation of a new superior post-human species that will usher in a kind of post-human utopia.
"So the idea of existential risk is supposed to refer to any event that would prevent us from realizing this techno-utopian future."
Transhumanism and longtermism are the backbone of the TESCREAL bundle of ideologies, and they fit together like puzzle pieces. Transhumanism is a philosophical and intellectual movement whose adherents believe in using technology to create better humans.
Torres says longtermism is the "galaxy brain" of TESCREALism.
"It says in order to fulfill our long term potential in the universe over the coming millions, billions and literally trillions of years, we will need to become post-human. And ultimately, we will very likely have to not just merge with technology, but completely replace our biological substrate with something technological, maybe computer hardware, or we could live as digital beings in a simulated world."
Torres explains that, according to longtermist thinking, once we become post-human, there's a moral obligation to colonize as much of the universe as possible and create a larger future population.
"Then we will be able to maximize the total amount of value that could exist in the universe. So the more people there are living in these vast computer simulations running on planet-sized computers, the more possibility there is for happiness or pleasure or whatever else one takes value to be," said Torres.
Believe this is real?
To the average person, this all sounds like the plot of a sci-fi novel. There might be a tendency to dismiss a techno-utopia as "billionaire boys and their toys" but Torres says the danger comes from the amount of power and influence these billionaires have to make policy and spend money.
"These ideologies are increasingly infiltrating governments around the world. So, for example, there was a UN dispatch article published towards the end of 2022 that said, and I'm quoting the article, 'The Foreign policy community in general and the United Nations, in particular, are beginning to embrace longtermism,'" said Torres.
Musk has been dubbed co-president with Donald Trump by Democrats, who claim the tech billionaire calls the shots. Peter Thiel has a long relationship with U.S. Vice-President J.D. Vance, who once referred to his sometime mentor as "possibly the smartest person" he'd ever met. Sam Altman is helping to build Stargate.
These ideas seem "implausibly fantastical," says Torres, but they add that TESCREALists see building this future utopia not only as important but as a priority.
"These views are accepted by super-powerful individuals. I mean people who are shaping the world right now and who will continue to shape the world that we live in in the coming decades, many centuries."
Torres says they are not hopeful that TESCREAList influence will come to an end on its own. Given how influential TESCREALists are, Torres thinks the best chance of stopping them is if they damage their own cause.
They say maybe some kind of "own goal" or "disaster" brought about by tech companies — like mass unemployment in a short time — could interrupt the narrative.
"Maybe that's something that could actually galvanize enough people to push back and say, you know, what is this worldview that is driving these individuals and causing harm to us?"
Download the IDEAS podcast to listen to this episode.
*This episode was produced by Naheed Mustafa.