Escape Velocity
The Race to Find a New Cosmic Rock to Call Home
Technology gives us access to the universe while at the same time creating risks to our continued existence. Does humankind’s best hope for long-term survival come down to leaving Earth behind?
Existential threats seem to be creeping up around every corner. Wherever you look, there’s another reason to worry, another menace to fear.
Are we doomed?
If we’re anything like other life forms, yes, we probably are. An estimated 99.9 percent of all species ever to inhabit our planet faced their existential test, failed and disappeared—for some, into the fossil record. When we consider that almost 9 million species are thought to share the planet with us today (making up that 0.1 percent), the loss is staggering.
But we’re still here. Whew.
The question of our time, though, is for how long? And how can we make it longer rather than shorter?
The End Is Near?
Most living things have limits to their environmental tolerances. The earth has been around a long time and has endured many changes, cycles, fluctuations and insults. Whether pummeled by asteroids or shattered by supervolcanoes, the earth abides. Rocks and water are almost indestructible, but living things disappear when too much change occurs. In the end it doesn’t really matter whether the fatal change was self-inflicted (ate all the food) or just circumstantial (pH shifted too much).
There’s growing concern about both types of risk to our survival. Over the last few millennia, and especially the most recent 200 years, we have become the greatest creators and drivers of change on the planet. For the most part these changes have improved our prospects for survival. Yet now, as the natural world has given way to our world, the Anthropocene, we wonder about their impact on long-term survival.
At the same time our knowledge of the hazards of life on a small planet in a large universe has also grown. Some risks we cannot control—pulses of cosmic radiation, or the paths of other stars, for example. Roughly a million years from now the star Gliese 710 is predicted to make a flyby through the outer edge of our solar system. Its gravity will whip distant comets and asteroids into new orbits, which may put them on a collision course with Earth.
By then will it matter? Will we already have self-destructed? Or will we have moved on?
A Mind for Eternity
Our dilemma is how to combine scientific and technological know-how in a way that advances our species without triggering self-destruction. As King Solomon wrote three thousand years ago, we have been endowed with a sense of eternity (Ecclesiastes 3:11). We intend to go on, so what will we need to do?
Russian rocket scientist Konstantin Tsiolkovsky (1857–1935) had a remarkably prescient view: “Men are weak now, and yet they transform the Earth’s surface. In millions of years their might will increase to the extent that they will change the surface of the Earth, its oceans, the atmosphere, and themselves.”
The easy part of Tsiolkovsky’s prophecy has been changing Earth’s environment. Its surface is very malleable. But to change ourselves—to alter how we think, how we’re motivated—is much harder ground to break up and reconfigure. The heart of man may seek eternity, but it carries within itself a stiff core of selfish desires; like launchpad bolts, these anchor us down, never allowing us to move to where we’d really rather be.
Hollywood does a good job of illustrating this fatal flaw. In many space-based films, the all-too-human tension between collective success and individual power and ego sits at the center of the story. Interstellar (2014), for instance, concerns the collective effort to move humanity from a dying Earth to a new planet. A crisis arises when one astronaut reneges on his mission promises. His betrayal creates chaos, resulting in tragic loss, destruction and heartache.
The tagline for Outland (1981) sums up the enduring problem: “The ultimate enemy is still man.”
Small-screen programs such as Lost in Space and The Expanse are compelling because they examine the internal struggles and questions we all share: What made me do that? Why don’t I choose cooperation over competition? Why don’t I accept loss rather than cheat for gain? As the pioneering space colonists in The Expanse consider impending war, one muses on the missing ingredient: “We make it all this way, so far out into the darkness; why couldn’t we have brought more light?”
“The real problem with your colony [at Alpha Centauri] is the people. You travel across millions of miles of space and everybody thinks it’s gonna be so different. Whatever people think that they’re running away from on Earth, they’re just bringing it all with them.”
Imagining a Better Future
As he imagined the future, Tsiolkovsky created a story that would take us on a leap forward. Both in attitude and scientific capacity, he foresaw a positive progression that could be all but infinite, expanding out into the stars: “[Humans] will control the climate and the Solar System just as they control the Earth. They will travel beyond the limits of our planetary system; they will reach other Suns, and use their fresh energy instead of the energy of their dying luminary.”
It’s all eerily similar to another biblical statement, this one from Solomon’s father, King David: “You have given him dominion over the works of your hands; you have put all things under his feet” (Psalm 8:6).
Tsiolkovsky’s 1920 fiction Beyond the Planet Earth outlined his view of humankind’s interstellar potential. It’s not dissimilar to the way Russian-American writer Ayn Rand used her novels to play out and justify her own worldview. In fact, Rand’s Atlas Shrugged (1957) has compelling parallels to Tsiolkovsky’s story. Both follow a group of elite scientists and entrepreneurs who live in isolated enclaves; his live in a castle in the Himalayas, while Rand’s settle in what she calls Galt’s Gulch in Colorado.
Tsiolkovsky’s characters invent a spacecraft, build and launch space greenhouses, and travel through the solar system. But unlike in Rand’s story, where the inventor class builds and retreats to its own closed community, Tsiolkovsky’s protagonists share their inventions with an ecstatic world.
It’s an interesting contrast, and it illuminates the type of motivational change Tsiolkovsky knew was necessary. Where Rand’s idea of a better world was one in which self-centered, me-first motivations prevail, Tsiolkovsky imagined a future of service and giving, in which inventors shared with all for the benefit of the global community. Such a shift in motivation would indeed be a light to guide us forward.
The time frame for Beyond the Planet Earth was 97 years into the future, or 2017. He imagined that by then the language barrier would be gone. Along with each person’s cultural language, he predicted, each would speak a universal language inspiring unity in a world of diversity. He wrote, concerning the new spacefaring inventions and explorations, “News about world events was circulated without hindrance.” As the space project grew and the world became involved in building and traveling, “people became as excited as though the imminent end of the world had been announced. But their excitement was a joyful one; what prospects had opened up for mankind!”
What about us? Where are we on this journey as a global civilization? Will this be our future? Or is our understanding of world conditions coming too little, too late? Are we even looking in the right place for answers?
Existential?
“An existential risk is one where humankind as a whole is imperiled,” wrote Oxford philosopher Nick Bostrom when he first addressed the subject in 2002. “Existential disasters have major adverse consequences for the course of human civilization for all time to come.”
Bostrom’s expertise is in artificial intelligence (AI) and its possible downsides. In a 2018 paper he suggests a new term, vulnerable world, and argues that what he calls “turnkey totalitarianism” may be the only way to govern and stabilize threats, even those that fall short of species extinction. A deep surveillance network is the core of his plan. But many factors and players bear watching, and even now keeping track of them all is difficult, to say the least.
“In practice, the control problem—the problem of how to control what the superintelligence would do—looks quite difficult. It also looks like we will only get one chance.”
Unfortunately the list of vulnerabilities is growing, a metastasizing work in progress. Today’s short list includes such deadly risks as an alien invasion from space; AI run amok; an impact from a stray asteroid or comet; bioterror and natural pandemics; climate change and agricultural collapse; DNA-wrecking cosmic rays and gamma-ray bursts; global ecophagy (runaway nanobots chewing everything into a gray goo); nuclear and conventional wars; terrorist attacks; supernovae; supervolcanoes; physics disasters (a supercollider creates a black hole, and down we go).
Some of these are science fiction at best, and indeed they have been or are being explored through imaginative literature and films as noted earlier. Others are essentially beyond our control. But all are important pieces in the sustainability puzzle.
That’s why, as Phil Torres points out in The End: What Science and Religion Tell Us About the Apocalypse, awareness is important. These possible catastrophes “are completely unique among all other risk types.” He emphasizes that these threats are “historically singular events for a given species. . . . If even a single existential risk were to occur, the game would be over, and we’d have lost.”
Two-Faced Technology
Torres frames our dilemma as the two faces of technology. “Most or perhaps all advanced technologies are dual-use in nature,” he writes. “This means that the very same artifacts, techniques, theories, research, information, knowledge, and so on can be used for both morally good and morally bad ends.” In other words, there is a possible use and an abuse in everything we do and invent.
The conundrum of technology’s double-edged sword is not new. In an essay titled “Interplanetary Man?” (1948), science-fiction writer and philosopher Olaf Stapledon noted that “it is a platitude that man has gained power without wisdom.”
“This possibility of affording to all men full opportunity is now no merely Utopian dream. . . . Nothing now stands in the way but the ignorance, the stupidity and the evil will of men.”
From his vantage point, just after the annihilation of Hiroshima and Nagasaki at the end of World War II, Stapledon warned that we would have to choose what kind of future we wanted: self-destruction or totalitarian suppression of human freedom on one end of the spectrum, or on the hopeful but conditional other end, reconstructing “a new kind of human world, in which the Aladdin’s lamp of science will be used wisely, instead of being abandoned to that blend of short-sighted stupidity and downright power-lust that has played so tragic a part in the application of science thus far.”
The latter option seems obvious enough, yet we’re seven decades down the road and have not come to a decision. Are we still here through wisdom—or through dumb luck?
Either way, the clock keeps ticking toward a day of reckoning. Bostrom’s vulnerable-world hypothesis seems spot on, and according to Torres, it’s becoming ever easier to slip from the wise, constructive uses of technology to the destructive, abusive side; DIY destruction is not as difficult as it once was.
“The intellectual capacity needed to use certain technologies to radically modify the world is quickly decreasing,” Torres says. “One need not be an ‘evil genius’ to bring about large-scale catastrophes these days. . . . This is worrisome because there are many people in the world who harbor a death wish for our species, who fantasize about the collapse of civilization.”
It may take a village, but as Martin Rees likes to say, “the global village will have its village idiots.”
The human proclivity is to opt for the abuse path; that’s the heart of the problem. “There seems to be no scientific impediment to achieving a sustainable and secure world,” writes Rees, Astronomer Royal and cofounder of the Centre for the Study of Existential Risk at Cambridge. “We can be technological optimists,” he says. In fact, “coping with global threats requires more technology—but guided by social science and ethics.” But, he continues, “the intractable geopolitics and sociology—the gap between potentialities and what actually happens—engenders pessimism.”
“We need to think globally, we need to think rationally, we need to think long-term—empowered by twenty-first-century technology but guided by values that science alone can’t provide.”
Things Are Looking Up
What Rees, Torres and Bostrom all recognize is the need to control the abuse of technology, whether that abuse comes by design or by accident. Technology is not going away. We invent; we don’t uninvent. We have never unrung a bell. For Tsiolkovsky, as for many of his contemporaries and many of us today, space travel’s potential is a source of great excitement. The technology offers us access to worlds beyond Earth, whether other planets or free-floating space settlements. That’s the use that we believe would set us free. But as things stand with human nature, such freedom entails risk.
This was the ironic twist Stanley Kubrick embedded in his 2001: A Space Odyssey. Sputnik opened space to us, but soon enough Atlas missiles provided a way to deliver nuclear bombs to the far side of the globe. Mutually Assured Destruction (MAD) ensured a stalemate. In Kubrick’s film, the Discovery spacecraft travels “beyond the infinite.” Through astronaut Bowman’s odyssey, humankind discovers its destiny in the stars. Meanwhile, back at home, space-based weapons orbit on patrol, projecting geopolitical power across the globe—no different, really, than earlier in the story, when man-apes projected power across a waterhole by waving a bone as a deterrent.
Just as Kubrick’s bone/weapon and spacecraft/bomb are symbolically linked, the good and the bad seem to track together in technology. But German-born rocket scientist Krafft Ehricke (1917–1984) believed that extending our presence out into the solar system would actually serve as a safety valve. He suggested that moving beyond the earth would meet our species’ need to grow and expand while keeping our home planet safe from exploitation and irreparable damage; colonization of space would save us from spoiling our own cradle, and from fighting among ourselves down here.
In his essay “Extraterrestrial Imperative,” Ehricke argued that we need to feed our eternal orientation, our pull toward the infinite: “Man seems to be locked into a cosmic reservation that, for all its wealth, threatens to be a scanty Eden for his numbers and aspirations in the future. . . . Confidence in a soaring future—spiritually as well as materially—is the essence of our techno-scientific civilization and Western Man’s greatest message to mankind.”
He insisted that “we must give Man of tomorrow a world that is bigger than a single planet.”
Space Colonies
On Ehricke’s heels, Princeton physicist Gerard O’Neill picked up the theme in The High Frontier: Human Colonies in Space (1976).
In a presentation to the US Congress’s Committee on Science and Technology in 1975, O’Neill had outlined his plan: “The basic ideas of a space colony are the construction of a pressure vessel in space, which would contain an ordinary atmosphere, and that everything within it would run on solar energy. . . . The residents of such a colony could use energy freely at a high rate with no guilt, because of the fact that they would be using a source which is not being pumped out of the ground.”
The High Frontier is considered a classic work, outlining how to mine the moon for materials and then construct, live on, and make money from a free-floating, gravity-creating, solar-powered satellite “town” orbiting between Earth and the moon.
It may come as a surprise, he told the congressional committee, “that I’m talking in practical engineering terms, of things that we could do within the technology of the 1970’s.”
A Letter From Space
Gerard O’Neill began The High Frontier with an imaginary future letter from a couple already living in space to a couple on Earth deciding whether to join them. The following excerpts give a sense of the normality O’Neill tried to convey; they also bring to mind ideas put forward in Tsiolkovsky’s writings and in movies such as 2001 and Outland.
Dear Brian and Nancy:
. . . Though I never was in the Peace Corps, I understand that the selection methods are similar to theirs. . . .
Then there’s the big step of the first space flight. . . .
In the six months’ training period you’ll have had cram courses in foreign languages. . . .
. . . We live in Bernal Alpha, a sphere about 500 meters in diameter, with a circumference inside, at its ‘equator’ of nearly a mile. . . .
Alpha has a Hawaiian climate, so we lead an indoor-outdoor life all year. . . . For a town of 10,000 people we’re in rather good shape for entertainment. . . . Ballet in one-tenth gravity is beautiful to watch. . . .
You asked about our government, and that varies a great deal from one community to another. Legally, all communities are under the jurisdiction of the Energy Satellites Corporation (ENSAT) which was set up as a multinational profit making consortium under U.N. treaties. ENSAT keeps us on a fairly loose rein as long as productivity and profits remain high. . . .
. . . Some of us do get ‘island fever’ to some degree, probably because we’re really first generation immigrants—it never seems to bother the kids who were born here. . . .
With our best wishes for good luck on the tests,
Cordially,
Edward and Jenny
Echoing Ehricke, O’Neill also described to Congress how investing in a space-colony program would reinvigorate the American dream of conquest and frontiersmanship: “I believe that from the vantage point of several decades in the future, our children will judge the most important benefits of space colonization to have been not physical or economic, but the opening of new human options, the possibility of a new degree of freedom, not only for the human body, but much more important, for the human spirit and sense of aspiration.”
Wishin’ Upon a Star
O’Neill did not get his funding. But today’s space dreamers are multibillionaires who don’t need public funding to pursue cosmic horizons. Among them are Elon Musk, with SpaceX and its Falcon rockets, and Jeff Bezos, with Blue Origin and its New Shepard and New Glenn rockets; both entrepreneurs are advancing toward manned missions, with development of reusable rockets as their prime focus. This would make space flight more akin to air travel today. The ability to reuse the equipment over and over would bring down the cost, and with lower costs would come more payloads and thus greater access.
Musk says, “You want to wake up in the morning and think the future is going to be great—and that’s what being a spacefaring civilization is all about. It’s about believing in the future and thinking that the future will be better than the past. And I can’t think of anything more exciting than going out there and being among the stars.” O’Neill and Ehricke would have agreed.
Bezos recently received the Gerard K. O’Neill Memorial Award for Space Settlement Advocacy. In a May 2018 interview he noted, “We will have to leave this planet, and we’re going to leave it, and it’s going to make this planet better.”
That sounds a lot like O’Neill. Bezos continued, “Professor O’Neill was very formative for me. I read The High Frontier in high school. I read it multiple times, and I was already primed. As soon as I read it, it made sense to me. It seemed very clear that planetary surfaces were not the right place for an expanding civilization inside our solar system. For one, they’re just not that big. There’s another argument that I like to make, too, which is that they’re hard to get to.”
Finding a New Earth?
Back in 2001 cosmologist Stephen Hawking said, “I don’t think the human race will survive the next thousand years, unless we spread into space. There are too many accidents that can befall life on a single planet. But I’m an optimist. We will reach out to the stars.”
Hawking finished the thought in Brief Answers to the Big Questions, posthumously published in 2018: “I was quoted at the time as saying that I feared that the human race is not going to have a future if we don’t go into space. I believed it then, and I believe it still.”
He remained hopeful that in the near term, as more influential people gained the opportunity to see Earth from the edge of space—for example, through the efforts of Richard Branson’s Virgin Galactic or the late Paul Allen’s Stratolaunch—our sense of connectedness and care for our terrestrial environment would blossom: “It is my hope that spaceflight will become within the reach of far more of the Earth’s population. Taking more and more passengers into space will bring new meaning to our place on Earth and to our responsibilities as stewards.”
Beyond this, Hawking hoped, our expanding view would “help us to recognise our place and future in the cosmos—which is where I believe our ultimate destiny lies.”
The Breakthrough Starshot initiative (of which he was a board member along with entrepreneurs Yuri Milner and Mark Zuckerberg) is just one organization combing the heavens for a habitable planet. Rather than sending astronauts, its plan is to create tiny light-sail craft that will ride a laser beam to probe planets beyond our solar system. The beam would push each craft to one fifth the speed of light, meaning it could reach Alpha Centauri (our nearest neighboring star system) in about 20 years.
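The 20-year figure is simple arithmetic, and a rough check bears it out. The short Python sketch below assumes the published distance to the Alpha Centauri system of about 4.37 light-years and a steady cruise at one fifth of light speed, ignoring the brief laser-boost phase; the numbers and variable names are illustrative only.

```python
# Back-of-the-envelope check of the "20 years to Alpha Centauri" claim.
# Assumes a steady cruise at one fifth of light speed and ignores the
# (very short) acceleration phase and any relativistic corrections.

DISTANCE_LY = 4.37            # distance to the Alpha Centauri system, in light-years
CRUISE_FRACTION_OF_C = 0.2    # one fifth of the speed of light

travel_time_years = DISTANCE_LY / CRUISE_FRACTION_OF_C
signal_return_years = DISTANCE_LY  # the radio signal home travels at light speed

print(f"Cruise time: about {travel_time_years:.0f} years")          # roughly 22 years
print(f"Earliest data back on Earth: about {travel_time_years + signal_return_years:.0f} years")
```

Even at a fifth of light speed, then, the stretch from launch to first images arriving back on Earth would span roughly a quarter century.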
Could this really work? Hawking says that’s merely an engineering problem, “and engineers’ challenges tend, eventually, to be solved.” When they are, he continues, “it would be the moment when human culture goes interstellar, when we finally reach out into the galaxy. And if Breakthrough Starshot should send back images of a habitable planet orbiting our closest neighbour, it could be of immense importance to the future of humanity.”
He adds, “We are standing at the threshold of a new era. Human colonisation on other planets is no longer science fiction. It can be science fact.”
One may argue about how close we really are to that threshold (or is it a ledge?), but there is certainly no end to the optimistic paths one can imagine. Already the unmanned Voyager 1 and Voyager 2 spacecraft have crossed into interstellar space, and New Horizons is well on its way. Both Musk and Bezos (in concert with various space agencies) are committed to developing heavy-lift rockets that will support even greater efforts to move away from Earth. But just achieving escape velocity for larger payloads will not be enough.
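For a sense of scale on what “escape velocity” actually demands, here is a minimal sketch using the textbook formula v = sqrt(2GM/R) with standard published values for Earth’s mass and radius; it is illustrative only and not tied to any particular rocket.

```python
import math

# Minimal sketch: the speed a payload needs to escape Earth's gravity,
# using the textbook formula v_esc = sqrt(2 * G * M / R).
# Constants are standard published values; air drag and staging are ignored.

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_EARTH = 5.972e24   # mass of Earth, kg
R_EARTH = 6.371e6    # mean radius of Earth, m

v_escape = math.sqrt(2 * G * M_EARTH / R_EARTH)   # ~11.2 km/s
v_low_orbit = math.sqrt(G * M_EARTH / R_EARTH)    # ~7.9 km/s, circular orbit near the surface

print(f"Escape velocity:    {v_escape / 1000:.1f} km/s")
print(f"Low-orbit velocity: {v_low_orbit / 1000:.1f} km/s")
```

Rockets have cleared that roughly 11-kilometer-per-second bar routinely since the late 1950s; as noted above, clearing it for larger payloads will still not be enough.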
“I offer no Utopia. Man changes only on a time scale of millennia, and he has always within him the capacity for evil as well as for good.”
Escaping the Final Risk
In 1926, Tsiolkovsky, the pioneering Russian rocket scientist, proposed a 16-point plan for space exploration. Several of its points have to date been met, including designing a pure rocket without wings, reaching escape velocity to fly into Earth orbit, and using pressurized space suits for activity outside a spacecraft. Others, such as colonizing the asteroid belt, have not.
Most important is Point 14: “Achievement of individual and social perfection.” To reach that stage of space development will require a change of human character. Without that change, Tsiolkovsky understood, our efforts would cut a path to nothing new or helpful. Whatever new world we found would be no different than the old one we thought we’d escaped.
It’s not enough just to be in space. If we take our same human nature with us, we’ll only be transferring our problems to another world. It will never be “a new heaven and a new earth”; our collective wish for no more tears or sorrows will never be fulfilled. The new will just be the old all over again unless we first figure out how to transform ourselves into new men and women. Beyond being in another physical place—another planet or a space colony—we need to be in another spiritual place.
“If the world’s ethical standards and moral laws fail to rise and be adhered to with the advances of our technological revolution, we run the distinct risk that we shall all perish.”
German-American rocket scientist Wernher von Braun summed up the challenge most accurately: we are the biggest existential risk. This will need to change if we expect the future, wherever that may be, to be different than the past: “In this age of space flight, when we use the modern tools of science to advance into new regions of human activity, the Bible—this grandiose, stirring history of the gradual revelation and unfolding of the moral law—remains in every way an up-to-date book. Our knowledge and use of the laws of nature that enable us to fly to the Moon also enable us to destroy our home planet with the atom bomb. Science itself does not address the question whether we should use the power at our disposal for good or for evil. The guidelines of what we ought to do are furnished in the moral law of God. It is no longer enough that we pray that God may be with us on our side. We must learn again to pray that we may be on God’s side.”