Salon
Politics
Émile P. Torres

Elon Musk's bizarre vision of the future

Elon Musk (Dimitrios Kambouris/Getty Images for The Met Museum/Vogue)

The world has gone mad. — Elon Musk

What does Elon Musk want? What is his vision of the future? These questions are of enormous importance because the decisions that Elon Musk makes — unilaterally, undemocratically, inside the relatively small bubble of out-of-touch tech billionaires — will very likely have a profound impact on the world that you and I, our children and grandchildren, end up living in. Musk is currently the richest person on the planet and, if only by virtue of this fact, one of the most powerful human beings in all of history. What he wants the future to look like is, very likely, what the future for all of humanity will end up becoming.

This is why it's crucial to unravel the underlying normative worldview that has shaped his actions and public statements, from founding SpaceX and Neuralink to complaining that we are in the midst of "a demographic disaster" because of under-population, to trying — but, alas, failing — to purchase Twitter, the world's most influential social media platform.

Musk has given us some hints about what he wants. For example, he says he hopes to "preserve the light of consciousness by becoming a spacefaring civilization & extending life to other planets," although there are good reasons for believing that Martian colonies could result in catastrophic interplanetary wars that will probably destroy humanity, as the political theorist Daniel Deudney has convincingly argued in his book "Dark Skies." Musk further states in a recent TED interview that his "worldview or motivating philosophy" aims

to understand what questions to ask about the answer that is the universe, and to the degree that we expand the scope and scale of consciousness, biological and digital, we would be better able to ask these questions, to frame these questions, and to understand why we're here, how we got here, what the heck is going on. And so, that is my driving philosophy, to expand the scope and scale of consciousness that we may better understand the nature of the universe.

But more to the point, Elon Musk's futurological vision has also been crucially influenced, it seems, by an ideology called "longtermism," as I argued last April in an article for Salon. Although "longtermism" can take many forms, the version that Elon Musk appears most enamored with comes from Swedish philosopher Nick Bostrom, who runs the grandiosely named "Future of Humanity Institute," which describes itself on its website as having a "multidisciplinary research team [that] includes several of the world's most brilliant and famous minds working in this area."

Musk appears concerned about under-population: He's worried there won't be enough people to colonize Mars, and that wealthy people aren't procreating enough.

For example, consider again Elon Musk's recent tweets about under-population. Not only is he worried about there not being enough people to colonize Mars — "If there aren't enough people for Earth," he writes, "then there definitely won't be enough for Mars" — he's also apparently concerned that wealthy people aren't procreating enough. As he wrote in a May 24 tweet: "Contrary to what many think, the richer someone is, the fewer kids they have." Musk himself has eight children, and thus proudly declared, "I'm doing my part haha."

Although the fear that "less desirable people" might outbreed "more desirable people" (phrases that Musk himself has not used) can be traced back to the late 19th century, when Charles Darwin's cousin Francis Galton published the first book on eugenics, the idea has more recently been foregrounded by people like Bostrom.

For example, in Bostrom's 2002 paper "Existential Risks: Analyzing Human Extinction Scenarios and Related Hazards," which is one of the founding papers of longtermism, he identified "dysgenic pressures" as one of many "existential risks" facing humanity, along with nuclear war, runaway climate change and our universe being a huge computer simulation that gets shut down — a possibility that Elon Musk seems to take very seriously. As Bostrom wrote:

It is possible that advanced civilized society is dependent on there being a sufficiently large fraction of intellectually talented individuals. Currently it seems that there is a negative correlation in some places between intellectual achievement and fertility. If such selection were to operate over a long period of time, we might evolve into a less brainy but more fertile species, homo philoprogenitus (lover of many offspring).

In other words, yes, we should worry about nuclear war and runaway climate change, but we should worry just as much about, to put it bluntly, less intelligent or less capable people outbreeding the smartest people. Fortunately, Bostrom continued, "genetic engineering is rapidly approaching the point where it will become possible to give parents the choice of endowing their offspring with genes that correlate with intellectual capacity, physical health, longevity, and other desirable traits."

Hence, even if less intelligent people keep having more children than smart people, advanced genetic engineering technologies could rectify the problem by enabling future generations to create super-smart designer babies that are, as such, superior even to the greatest geniuses among us. This neo-eugenic idea is known as "transhumanism," and Bostrom is probably the most prominent transhumanist of the 21st century thus far. Given that Musk hopes to "jump-start the next stage of human evolution" by, for example, putting electrodes in our brains, one is justified in concluding that Musk, too, is a transhumanist. (See Neuralink!)

More recently, on May 24 of this year, Elon Musk retweeted a link to another Bostrom paper that is also foundational to longtermism, perhaps even more so: "Astronomical Waste." The original tweet described the paper as "Likely the most important paper ever written," which is about the highest praise possible.

Given Musk's singular and profound influence on the shape of things to come, it behooves us all — the public, government officials and journalists alike — to understand exactly what the grandiose cosmic vision of Bostromian longtermism, as we might call it, actually is. My aim in the rest of this article is to explain this cosmic vision in all its bizarre and technocratic detail; I have written about the topic many times before, and once considered myself a convert to the quasi-religious worldview to which it corresponds.

The main thesis of "Astronomical Waste" draws its force from an ethical theory that philosophers call "total utilitarianism," which I will refer to in abbreviated form as "utilitarianism" below.

Utilitarianism states that our sole moral obligation — the goal we should aim for whenever presented with a moral choice — is to maximize the total amount of value in the universe, where "value" is often identified as something like "pleasurable experiences."

So, whenever you enjoy a good TV show, have a fun night out with friends, gobble down a good meal or have sex, you are introducing value into the universe. When it's all said and done, when the universe has finally sunk into a frozen pond of maximal entropy according to the second law of thermodynamics, the more value that has existed, the better our universe will have been. As moral beings — creatures capable of moral action, unlike chimpanzees, worms, and rocks — we are obliged to ensure that as much of this "value" exists in the universe as possible.

This leads to a question: How exactly can we maximize value? As intimated above, one way is to increase the total quantity of pleasurable experiences that each of us has. But utilitarianism points to another possibility: we could also increase the total number of people in the universe who have lives that, on the whole, create net-positive amounts of value. In other words, the greater the absolute number of people who experience pleasure, the better our universe will be, morally speaking. We should therefore create as many of these "happy people" as we possibly can.
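To make that accounting concrete, here is a toy sketch of how total utilitarianism tallies up a world's "value." It is my own illustration, with invented welfare numbers, not anything drawn from Bostrom's papers.

```python
# Toy model of total utilitarianism: the moral "score" of a world is simply the
# sum of each person's net welfare (pleasure minus suffering). The numbers below
# are invented purely for illustration.

def total_value(welfare_per_person: list[float]) -> float:
    """Return the total value of a world: the sum of everyone's net welfare."""
    return sum(welfare_per_person)

# A small world of 1,000 people, each with a very good life...
small_happy_world = [10.0] * 1_000
# ...versus a vast world of 1,000,000 people whose lives are only barely net-positive.
huge_modest_world = [0.5] * 1_000_000

print(total_value(small_happy_world))   # 10000.0
print(total_value(huge_modest_world))   # 500000.0
```

On this accounting, the second world comes out fifty times "better," simply because it contains more value in total; that is the logic that pushes the theory toward creating as many net-positive lives as possible.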

Right now these people don't exist. Our ultimate moral task is to bring them into existence.


Underlying this idea is a very weird account of what people — you or me — actually are. For standard utilitarians, people are nothing more than the "containers" or "vessels" of value. We matter only as means to an end, as the objects that enable "value" to exist in the universe. People are value-containers and that's it, as Bostrom himself suggests in several papers he's written.

For example, he describes people in his "Astronomical Waste" paper as mere "value-structures," where "structures" can be understood as "containers." In another paper titled "Letter From Utopia," Bostrom writes that by modifying our bodies and brains with technology, we can create a techno-utopian world full of endless pleasures, populated by superintelligent versions of ourselves that live forever in a paradise of our own making (no supernatural religion necessary!). Pretending to be a superintelligent, immortal "posthuman" writing to contemporary human beings, Bostrom proclaims that "if only I could share one second of my conscious life with you! But that is impossible. Your container could not hold even a small splash of my joy, it is that great" (emphasis added).

If you want to object at this point that you are not just a "container for value," you wouldn't be alone. Many philosophers find this account of what people are alienating, impoverished and untenable. People, as I would argue along with many others, should be seen as ends in themselves, valuable for their own sake. We do not matter simply because we are the substrates capable of realizing "value," understood as some impersonal property that must be maximized in the universe to the absolute physical limits. We are all unique, and we matter as ends rather than mere means, in contrast to the utilitarian picture of fungible containers whose merely instrumental value is entirely derivative. (On that picture, value cannot be maximized without us, and so for that reason alone it is important that we not only continue to exist but "be fruitful and multiply," to quote the Bible.)

The central argument of "Astronomical Waste" adopts this strange view of people and why they matter. Since, on the utilitarian view, the universe becomes "morally better" the more value-containers (i.e., people like you and me) exist with net-positive amounts of value, Bostrom sets out to calculate how many future people there could be if current or future generations were to colonize a part of the universe called the Virgo Supercluster. The Virgo Supercluster contains some 1,500 individual galaxies, including our own Milky Way, in which our solar system is just one of a huge number of star systems; we don't know the exact figure because we haven't yet counted them all.

On Bostrom's count, the Virgo Supercluster could contain 10^23 biological humans per century — a "1" followed by 23 zeros. Now think about that: if these biological humans — the containers of value — were to bring, on average, a net-positive amount of value into the universe, then the total amount of value that could exist in the future if we were to colonize this supercluster would be absolutely enormous. It would both literally and figuratively be "astronomical." And from the utilitarian perspective, that would be extremely good, morally speaking.

But that's only the tip of the iceberg. What if we were to simulate sentient beings in the future: digital consciousnesses living in simulated worlds of their own, running on computers made out of entire exoplanets and powered by the suns around which they revolve? If this were possible, if we could create digital beings capable of having pleasurable experiences in virtual reality worlds, there could potentially be far more value-containers (i.e., people) living in the Virgo Supercluster.

How many? According to Bostrom, the lower-bound number would rise to 10^38 per century, or a 1 followed by 38 zeros. Let me write that out to underline just how huge a number it is: 100,000,000,000,000,000,000,000,000,000,000,000,000. By comparison, fewer than 8 billion people currently live on Earth, and an estimated 117 billion members of Homo sapiens have existed since our species emerged in the African savanna some 200,000 years ago. Written out, 117 billion is 117,000,000,000. Ten to the power of 38 is way, way bigger than that.

What does this all mean? Bostrom draws two conclusions: first, since entropy is increasing in accordance with the second law of thermodynamics, resources that we could use to simulate all of these future people (i.e., value-containers) are being wasted every second of the day. "As I write these words," he says at the beginning of "Astronomical Waste," "suns are illuminating and heating empty rooms, unused energy is being flushed down black holes, and our great common endowment of negentropy [or negative entropy, the stuff we can use to simulate people] is being irreversibly degraded into entropy on a cosmic scale."

This means that we should try to colonize space as soon as possible. On his calculation, if 10^38 value-containers (i.e., people) could exist in huge, solar-powered computer simulations within the Virgo Supercluster, then "about 10^29 potential human lives" are lost every single second that we delay colonizing space. Since our sole moral obligation is to create as many of these people as possible, according to utilitarianism, it follows that we have a moral obligation to colonize space as soon as possible.
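For readers who want to sanity-check those figures, here is a minimal back-of-the-envelope calculation. It uses only the two numbers quoted above (10^38 simulated lives per century and roughly 117 billion humans ever born) and is my own illustration, not code from Bostrom's paper.

```python
# Rough order-of-magnitude check of the figures quoted from "Astronomical Waste".

lives_per_century = 10**38                            # Bostrom's lower bound for simulated lives
seconds_per_century = 100 * 365.25 * 24 * 60 * 60     # roughly 3.16 billion seconds

# Potential lives "lost" for every second that space colonization is delayed.
lives_lost_per_second = lives_per_century / seconds_per_century
print(f"{lives_lost_per_second:.1e}")                 # ~3.2e+28, i.e. on the order of 10^29

# For scale: how many times the estimated total of all humans ever born?
humans_ever_born = 117e9
print(f"{lives_per_century / humans_ever_born:.1e}")  # ~8.5e+26
```

The point of the exercise is simply that once numbers this large enter the calculus, any present-day consideration gets swamped, which is exactly the worry the rest of this article develops.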

This of course fits with Elon Musk's rush to build colonies on Mars, which is seen as the stepping stone to our descendants spreading to other regions of the Milky Way galaxy beyond our humble little solar system. As Musk recently tweeted, "Humanity will reach Mars in your lifetime." In an interview from June of this year, he reiterated his aim of putting 1 million people on Mars by 2050.

The importance of this is that, as the longtermist Toby Ord — a colleague of Bostrom's at the Future of Humanity Institute — implies in his recent book on the topic, flooding the universe with simulated people "requires only that [we] eventually travel to a nearby star and establish enough of a foothold to create a new flourishing society from which we could venture further." Thus, by spreading "just six light years at a time," our posthuman descendants could make "almost all the stars of our galaxy … reachable," given that "each star system, including our own, would need to settle just the few nearest stars [for] the entire galaxy [to] eventually fill with life."

In other words, the process could be exponential, resulting in more and more people (again, value-containers) in the Virgo Supercluster — and once more, from the utilitarian point of view, the more the better, so long as these people bring net-positive, rather than net-negative, amounts of "value" into the universe.

But the even more important conclusion that Bostrom draws from his calculations is that we must reduce "existential risks," a term that refers, basically, to any event that would prevent us from maximizing the total amount of value in the universe.

It's for this reason that "dysgenic pressures" is an existential risk: If individuals who are less "intellectually talented," in Bostrom's words, outbreed smarter people, then we might not be able to create the advanced technologies needed to colonize space and create unfathomably large populations of "happy" individuals in massive computer simulations.

That's also why nuclear war and runaway climate change are existential risks: If we cause our own extinction, then of course there will be no one left to fulfill our moral obligation of maximizing value from now until the universe becomes uninhabitable in the very distant future. As Bostrom concludes, "for standard utilitarians, priority number one, two, three and four should consequently be to reduce existential risk. The utilitarian imperative 'Maximize expected [value]!' can be simplified to the maxim 'Minimize existential risk!'"

Consistent with this, Musk has on numerous occasions mentioned the importance of avoiding an "existential risk," often in connection with speculations about the creation of superintelligent machines. Indeed, the existential risk of superintelligent machines was discussed in detail by Bostrom in his 2014 bestseller "Superintelligence," although most of the ideas in that book — along with Bostrom's elitist attitude toward the problem — have come from other theorists. "Worth reading Superintelligence by Bostrom," Musk tweeted shortly after it was published, a tweet Bostrom has since used as a blurb to promote the book, as seen on his website.

In this worldview, nuclear war and climate catastrophe are "existential risks," but poverty, racism and genocide are essentially no big deal.

While not all retweets should be seen as endorsements, Elon Musk's retweet of Bostrom's "Astronomical Waste" paper sure looks like just that. Not only does the original tweet claim that it might be the "most important" article ever published, but we know that Musk has read and been greatly influenced by at least some of Bostrom's key contributions to the rapidly growing longtermist literature.

Musk wants to colonize space as quickly as we can, just like Bostrom. Musk wants to create brain implants to enhance our intelligence, just like Bostrom. Musk seems to be concerned about less "intellectually gifted" people having too many children, just like Bostrom. And Musk is worried about existential risks from superintelligent machines, just like Bostrom. As I previously argued, the decisions and actions of Elon Musk over the years make the most sense if one takes him to be a Bostromian longtermist. Outside of this fanatical, technocratic framework, they make much less sense.

All of this is worrisome for many reasons. As I argued last year, longtermism is "quite possibly the most dangerous secular belief system in the world today." Why? Because if avoiding existential risk should, for supposedly "moral" reasons, occupy our top four global priorities as a species, with colonizing space ASAP as the fifth, then all other problems facing humanity end up being demoted, minimized, pushed into the background.

By "all other problems," I mean all problems that are "non-existential," i.e., those that would not prevent us from, in the long run, spreading into the cosmos, simulating huge numbers of digital people and maximizing total value.

Racism? Sure, it's bad, but it's not an existential risk, and therefore fighting racism should not be one of our top global priorities. Climate change? Well, as long as it doesn't directly cause an existential catastrophe, or indirectly increase the probability of other existential risks that much, we shouldn't be all that concerned with it. Genocide? Terrible, but the erasure of an entire ethnic, racial, religious, etc., group almost certainly won't threaten our long-term prospects in the universe over the coming trillion years.

To quote Bostrom's own words, a genocide like the one unfolding in Ukraine right now might constitute "a giant massacre for man," but from the longtermist perspective it is little more than "a small misstep for mankind." Elsewhere he described things like World War I, World War II (which of course includes the Holocaust), the AIDS pandemic that has killed more than 36 million people, and the Chernobyl accident of 1986 like this: "Tragic as such events are to the people immediately affected, in the big picture of things … even the worst of these catastrophes are mere ripples on the surface of the great sea of life." Mere ripples.

This is the ethical framework that Elon Musk seems to have endorsed in tweeting out Bostrom's "Astronomical Waste" paper. The future could be so big — it could contain so many people — that nothing much matters right now, in the 21st century, other than avoiding existential risks and spreading into space as soon as we can.

Given that Elon Musk is one of the most powerful individuals in all of human history, we should be very concerned.

Not only do these considerations provide strong reason to take immediate steps that would make Musk less powerful — for example, by demanding that, at minimum, he pay his fair share in taxes — but they also offer a more general argument against wealth inequality: No one should be in a position to unilaterally and undemocratically control, in some significant way, the future course of human development. Such control should be left to the demos, the people — we should be able to decide our own future for ourselves.

Right now, the future is controlled by a small group of extremely wealthy individuals who are almost entirely unaccountable. And some of those individuals espouse normative worldviews that should make us all very nervous indeed.

 
