[MUSIC PLAYING] LISA KALTENEGGER: My name's Lisa Kaltenegger. I'm the director of the Carl Sagan Institute here. And I'll tell you a little bit more about it after I introduce our first welcome speaker.
It is my pleasure. She's one of the most amazing people I've ever met. She's also an Emmy-award-winning writer, Peabody-award-winning producer, and New York Times best-selling author. In addition-- or even closer to my heart-- she's a visionary who brought us her inspired vision for humankind of the [INAUDIBLE], the earlier one and the one that we're seeing today.
And she also shaped the message that humankind sent out into the universe that's just left our solar system. And I think if that's not enough, she is the inspiration that made this Carl Sagan Institute here at Cornell possible. And so without any further ado, I give you the amazing Ann Druyan.
ANN DRUYAN: I'm deeply humbled by that introduction, especially from the likes of Lisa Kaltenegger, who manages to maintain a sparkling scientific research career on identifying the signatures of life on distant worlds at the same time that she's provided the leadership for the Carl Sagan Institute. And she's an inspiration to me, and she couldn't have chosen a greater inaugural speaker for tonight's Carl Sagan lecture.
Those of you who've heard me speak about Carl, forgive me for repeating myself. But there is a Hebrew prayer that we say at Passover, which has the refrain, in Hebrew, Dayenu-- it would have been sufficient. Now, traditionally we say this to God, and I'm not making any odious comparisons here, but the way I feel about Carl is that if he had just been a poet, we would know who he was.
If he had just been a scientific researcher, an author, or a co-author of some 600 peer-reviewed scientific papers, if he had just been a co-founder of the field of astrobiology, if he had just been one of the first generation of planetary scientists that made not only planetary astronomy, but the search for extraterrestrial life and intelligence a respectable field, Dayenu, we would have known about him. If he had just been an absolutely peerless public speaker and public educator who believed that science belongs to every one of us and that you can't have a democratic society if it only belongs to the few. And for those of you who read the New York Times today and saw that the scientists were purged from the Environmental Protection Agency of the United States, you know how important it is that every one of us have not only an understanding, a working understanding, of our true circumstances in nature, but also a voice in policy.
If he had just been those things, then I think we still would have heard him, but he was also a citizen scientist who was so conscientious that he mounted an independent campaign, without any help from anyone, to fight for the future of his planet. He was arrested at the Nevada nuclear test site when the Reagan administration continued underground nuclear testing in the face of what was then the Soviet Union's moratorium.
As a young postdoc, he went to all-black colleges in the south in 1961 and '62 to teach a course called the search for intelligent life on Earth, and how that talk must have resonated with those audiences. He was simply the most alive, well-developed, conscientious person I ever knew. And how brilliant of Lisa to ask Martin Rees to give this lecture tonight. How lucky we are that he accepted.
I first met Martin in a Chinese restaurant in London, or in Cambridge, perhaps, almost as long ago as that Grateful Dead concert that took place just a few hundred feet from here in Bailey Hall. It's a good 39 years ago and change. And I felt a deep affection for him that night, and enormous respect, and that's continued to this day.
Aside from his scientific brilliance and the contributions that he's made, he is a great citizen scientist who understands that someone as learned and respected as he is, with as much knowledge as he has, that it's incumbent upon such a person to share that knowledge, and when necessary, to sound the alarm. I can't wait to hear his talk, and I thank you all for coming tonight and for remembering Carl in the absolutely most appropriate way. Thank you so much.
LISA KALTENEGGER: So that's going to be really easy to follow.
And what I wanted to say is just a few words about the Carl Sagan Institute here at Cornell. So we have 26 faculty from 11 different departments looking into the question of how you could find life within the solar system and beyond. And it's completely open. So if you're interested in this question, just send me an email.
We have brainstorming meetings, and we meet and talk about things that are of common interest. And we're bridging the divide between departments to come up with novel ideas of how we could actually find signs of life within our solar system and beyond. And we are privileged to actually really stand here on the shoulders of giants, as Ann was just describing, because Carl Sagan started this idea here a long time ago.
So the Department of Astronomy and Planetary Science is a natural breeding ground for such a revolutionary idea, if you'd like. And adding scientists from different walks of life and from different departments into the mix of the Carl Sagan Institute makes it even more lively. And to me, a lot of interesting discussions make it actually worth coming to the office every day, trying to figure out the new things that have to do with trying to find life in the universe.
But this is not about our research here, this is actually about Sir Martin Rees, Lord Martin Rees, who has graciously accepted our invitation to be the first distinguished Carl Sagan lecturer. And what I wanted to say about him are just a couple of words.
The internet, and especially, for example, [? Tet, ?] calls him the world's most eminent-- one of the world's most eminent-- I would say the world's-- OK, one of the world's most eminent astronomers. He is an emeritus professor of Cosmology and Astrophysics from the University of Cambridge. And he has been called, by many people, our key thinker on the future of humanity in the Cosmos.
And that is what he will talk about today. And he phrased it, I think, kindly for us, because surviving the century has a lot of hope in it. And so I'm very much looking forward to your talk, Martin. Welcome to the stage.
LORD MARTIN REES: Thank you very much, Lisa. It's a great honor, of course, to be here tonight to give the first Carl Sagan lecture. The ideas that Carl stood for need proclaiming louder than ever today. We need his optimistic vision of life's destiny in this world and far beyond this world.
We need to think globally, we need to think rationally, we need to think long-term. [INAUDIBLE] was, indeed, a mess [INAUDIBLE] science and, most of all, as Ann has said, through his eloquence and his global outreach. In this talk, I'll try to address some themes that would, I think, have engaged him.
You've been familiar with this image for about 50 years. It's iconic for environmentalists. But suppose some hypothetical aliens had been watching the earth for its entire history, what would they have seen? Over nearly [INAUDIBLE] time, 4 and 1/2 billion years, things would have changed very gradually.
Continents drifted, the ice cover waxed and waned, successive species emerged, evolved, or became extinct. But in just a tiny sliver of earth's history, the last one billionth part, a few thousand years, the patterns of vegetation altered much faster than before. This signaled the start of agriculture. And changes in land use accelerated as human populations rose.
Then came even faster changes. The carbon dioxide in the atmosphere began to rise enormously fast, the planet became an intense emitter of radio waves. And something else unprecedented happened. Small projectiles [INAUDIBLE] the planet's surface, escaped the biosphere completely. Some were propelled into orbit around the earth, some journeyed to the moon and planets.
Knowing astrophysics, these hypothetical aliens could confidently predict that our biosphere would face doom in a few billion years when the sun flares up and dies. But could they have predicted this unprecedented runaway fever less than halfway through the earth's life? And what might they see if they watched for another 100 years? Will this [? spasm ?] be followed by silence, or will stability ensue? And will some projectiles leave the earth and, perhaps, establish oases of life elsewhere? These are some questions I'll speculate about this evening.
Some years ago, I wrote a book, which I had titled, Our Final Century?, with a question mark. The publisher deleted the question mark.
And the American publishers changed it to Our Final Hour.
You Americans like instant gratification [INAUDIBLE]. But the theme was that this century is special. It's the first where one species-- ours-- has the planet's future in its hands. We're deep in an era that's called the Anthropocene.
We have huge powers for good. We could trigger the transition from biological to electronic intelligence. But on the other hand, we can irreversibly degrade our biosphere. And advanced technology, if misdirected, could cause a devastating setback to our civilization.
We've had one lucky escape already-- at any time in the Cold War era, when armament levels escalated beyond all reason, the superpowers could have stumbled towards Armageddon through muddle and miscalculation. And that threat is only in abeyance-- it still looms over us. Nuclear weapons are based on 20th century science. I'll focus later in my talk on 21st century science-- bio, cyber, and AI-- which offer huge potential benefits, but also expose us to novel [INAUDIBLE].
Astronomers often have to remind people that they're not astrologers. Like all scientists, they have, really, a rotten record as forecasters, almost as bad as economists. But even with a very clouded crystal ball, there are some things we can predict about how our whole planet is going to change.
For instance, humanity's collective footprint is getting even heavier. 50 years ago, world population was about 3 billion. It's now 7.3 billion. The growth has been mainly in Asia and Africa, shown in this distorted map, where areas are scaled in proportion to the growth in the last 30 years.
But the growth is leveling off a bit. The number of births per year worldwide peaked a few years ago and is going down. But the world population is forecast to rise to about 9 billion by 2050. That's partly because most people in the developing world are young, they are yet to have children, and they will live longer.
And the age histogram in the developing world, for which Africa is shown there, will become more like it is in Europe, where most people live out a full lifespan, and the birth rate is low. Population growth seems currently under-discussed. This is maybe because doom-laden forecasts made in the 1970s by the [INAUDIBLE] and others proved off the mark.
Up till now, food production's more than kept pace. So famines now stem from wars or maldistribution, not overall shortages. And it's a taboo subject, tainted by eugenics in the '20s and '30s, by Indian policies under Indira Gandhi, and more recently by China's hardline one-child policy.
So can the world carry 9 billion people? I think there's no need for global panic on this front. Improved agriculture-- low-till, water-conserving, and perhaps involving GM crops-- could feed that number by mid-century.
The buzz phrase is sustainable intensification. But lifestyle changes will be needed. The world couldn't sustain even its present population if everyone lived like Americans do today, using as much energy and eating as much beef.
Population trends beyond 2050 are harder to predict. They will depend on the choices made by people as yet unborn about the number and spacing of their children. Enhanced education and empowerment of women-- surely a benign priority in themselves-- would reduce fertility rates.
But the demographic transition hasn't reached parts of India, nor Sub-Saharan Africa. And if families in Africa remain large, then according to the UN projections, that continent's population could double again between 2050 and 2100, reaching 4 billion and raising the world's population to 11 billion. Nigeria alone would then have a population equal to Europe and North America combined, and half the world's children would be in Africa.
Well, optimists remind us that each extra mouth brings also two hands and a brain. Nonetheless, the higher the population becomes, the greater the pressure will be on resources, especially if the developing world narrows its gap with the developed world in per capita consumption. So we must surely hope that the global figure declines rather than rises after 2050-- the lower of those three graphs there.
Moreover, if humanity's collective impact on nature pushes too hard against what the Swedish scientist Johan Rockstrom calls planetary boundaries, then the resultant ecological shock could irreversibly impoverish our biosphere. Extinction rates are rising. We're destroying the book of life before we've read it.
Biodiversity is a crucial component for long-term human well-being. We're clearly harmed if fish stocks dwindle to extinction. There are plants in the rain forest whose gene pool might be useful to us.
But for many environmentalists, preserving the richness of our biosphere has value in its own right over and above what it means to us humans. And to quote the great ecologist EO Wilson, "mass extinction is the sin that future generations will least forgive us for." So the world is getting more crowded.
And there's a second firm prediction-- it will be getting warmer. In contrast to population issues, climate change is certainly not under-discussed, though it is, I'm afraid, under-acted-upon at the moment. The famous Keeling Curve shows how the concentration of CO2 in the air has risen over the last 50 years. The oscillations are due to the falling of leaves in the northern hemisphere, which releases excess CO2 that is soaked up again the next spring. But the overall rise is clear. And the fifth IPCC report presented temperature projections for different assumptions about future rates of fossil fuel use.
Now, for all these projections, there's a spread shown by the vertical bars at the right, which is the scientific uncertainty, the uncertainty because we don't know the feedback effects between rising CO2 and clouds and water vapor. So that's the uncertainty, but the four trajectories there are different assumptions about future use of fossil fuels.
But the main point is that despite these uncertainties, the science tells us that under business-as-usual scenarios-- that's the upper two of those curves-- we can't rule out, by 2100, really catastrophic warming and tipping points, triggering long-term trends, like the melting of Greenland's ice cap. Sadly, many deny all this. But even among those who accept that this threat is real, there's a range of views. And these stem from differences not in the science, but in the economics and the ethics-- in particular, in how much obligation we should feel towards future generations.
Some of you may have heard of Bjorn Lomborg and his Copenhagen Consensus of economists. They apply standard commercial discounting, and in effect, therefore, write off everything that happens beyond about 2050. So unsurprisingly, Lomborg downplays the priority of addressing climate change in comparison with shorter-term efforts to help the world's poor.
But if you care about those who will live into the 22nd century and beyond, then, as other economists like Stern in the UK and Weitzman at Harvard argue, you deem it worth paying an insurance premium now to protect those generations against the worst-case scenarios. So above all, the policy you favor depends on an ethical issue. In optimizing people's life chances, should we, as it were, discriminate on grounds of date of birth?
As a parenthesis, I'd note there's one policy context where an essentially zero discount rate is applied, and that's radioactive waste disposal, where depositories such as Yucca Mountain, in this country, are required to prevent leakage for 10,000 years. And that's somewhat ironic when countries can't plan the rest of their energy policy even 30 years ahead.
Consider this analogy-- suppose astronomers had tracked an asteroid and calculated that it would hit the earth in, say, 2080, 63 years from now, not with certainty, but with, say, a 10% probability. Would we relax, saying it's a problem that can be set on one side for 50 years, because people will then be richer, and it may turn out it's going to miss the earth anyway? I don't think we would. I think there'd be a consensus that we should start straight away and do our damnedest to find ways to deflect it or mitigate its effects. And that's the way I feel about climate change.
Many still hope that our civilization can segue smoothly towards a low-carbon future. The pledges made at the Paris Conference about 15 months ago are a positive step. But even if they are honored, this may not happen fast enough to prevent CO2 concentrations rising to a dangerous level.
Politicians seldom take a long-term view, and won't gain much resonance by advocating unwelcome lifestyle changes now when the benefits mainly accrue to distant parts of the world and decades into the future. But there is one measure to mitigate climate change which genuinely seems a win-win situation. And this is the proposal that all nations should accelerate their R&D into all forms of low-carbon energy generation-- renewables, fourth-generation nuclear, fusion, and the rest-- and into other technologies where parallel progress is crucial, especially storage-- batteries, compressed air, pumped storage, flywheels, et cetera-- and smart grids.
And that's why an encouraging outcome of the Paris Conference was an initiative called Mission Innovation, which was taken up by more than 20 countries. It's a campaign to double publicly-funded R&D into clean energy by 2020. And there was a parallel pledge by Bill Gates and other private philanthropists.
This target is a modest one-- presently, only 2% of publicly-funded R&D is on energy. Why shouldn't the percentage be comparable to what's spent on defense research or, indeed, medical research? Because the faster these clean technologies advance, the sooner will their prices fall so that they become affordable to developing countries, where more generating capacity will be needed, and where, at the moment, the health of the poorest billions is jeopardized by smoky stoves burning wood or dung, and where there would otherwise be pressure to build coal-fired power stations.
And it would be hard to think of a more inspiring challenge to young engineers than devising clean, economical energy systems for the entire world. But if this fails, and if it's clear 20 years from now that the climate sensitivity is high and our climate seems to be heading irreversibly into dangerous territory, there may be pressure for, as it were, panic measures. This would mean being fatalistic about continuing dependence on fossil fuels, combating its effects by geoengineering-- plan B.
It's feasible, for instance, to inject enough aerosols into the stratosphere to cool the world's climate. Indeed, what's scary is that this might be within the resources of a single nation, even a single corporation. And there would be unintended side effects. Moreover, the warming would return with a vengeance if these countermeasures were discontinued, and other consequences of rising CO2, like acidification of the oceans, would be unchecked.
Geoengineering would be a political nightmare. Not all nations would want to adjust the thermostat the same way. We'd need very elaborate climatic models to predict what would happen.
And I think the only beneficiaries would be the lawyers. They'd have a bonanza if nations could litigate over bad weather. So I think it might be prudent to explore geoengineering techniques a bit to see which options make sense, and perhaps damp down undue optimism about a technical quick fix.
But I think we should be evangelists for new technology, because without it, the world can't provide food and sustainable energy for an expanding and more demanding population. But we need wisely-directed technology: advanced renewables are wise goals; geoengineering techniques probably aren't. But what now about other technologies that pervade our lives? Can we cope with their headlong advance?
We're getting more vulnerable. Our increasingly interconnected world depends on elaborate networks-- electric power grids, air traffic control, international finance, globally-dispersed manufacturing, and so forth. And unless these networks are highly resilient, their benefits could be outweighed by catastrophic, albeit rare, breakdowns. Cities will be paralyzed without electricity, air travel can spread a pandemic worldwide within days, and social media can spread panic and rumor and economic contagion literally at the speed of light.
Advances in microbiology are especially exciting at the moment. And they are, of course, potentially going to give prospects for containing pandemics. So they're very good news.
But the same research does have controversial aspects. For instance, in 2012, researchers in Wisconsin and in Holland showed it was surprisingly easy to make the influenza virus both more virulent and more transmissible. To some, this was a scary portent of things to come. And in 2014, the US federal government decided to cease funding these so-called gain-of-function experiments.
And the new CRISPR-Cas9 technique for gene editing is hugely promising, also, but there are ethical concerns already raised by Chinese experiments on human embryos. And gene drive programs that remove a species could be deployed to wipe out, for instance, the mosquitoes that carry the Zika virus.
And in England, incidentally, some lovers of brown squirrels would like to eliminate the gray ones, which are the more successful ones in competition. But disturbing natural ecologies, even for good reason, does risk unintended consequences. So we should surely be careful.
Back in the early days of recombinant DNA research, a group of biologists met at a famous conference in Asilomar, California, and they agreed guidelines on what experiments should and shouldn't be done. This seemingly encouraging precedent has triggered several recent meetings to discuss these new developments in the same spirit. But today, 40 years after Asilomar, the research community is far more broadly international and far more influenced by commercial pressures. So I'd worry that whatever regulations are imposed on prudential or ethical grounds, they can't be enforced worldwide any more than the drug laws can or the tax laws can.
I worry that whatever can be done will be done by someone somewhere. And that's a nightmare. Whereas an atomic bomb can't be built without large-scale special purpose facilities, biotech involves small-scale [INAUDIBLE] equipment. Indeed, biohacking is burgeoning, even as a hobby and competitive game.
We know all too well that technical expertise doesn't guarantee balanced rationality. The global village will have its village idiots, but they'll now have global range. The rising empowerment of tech-savvy groups, or even individuals, by bio as well as cyber technology will, I think, pose an intractable challenge to governments and aggravate the tension between freedom, privacy, and security.
Concerns about bioerror and bioterror are relatively near term. I think they're looming within the next 10 or 15 years. But what about looking further ahead, 2050 and beyond?
Well, the smartphone, the web, and their ancillaries, already crucial to our networked world, would have seemed magic even 25 years ago. So looking several decades ahead, we must keep our minds open, or at least ajar, to transformative advances that now seem science fiction. And predictions, of course, are extremely uncertain.
But just a word about them-- on the bio front, we can expect huge developments. We don't know what, but the great physicist Freeman Dyson conjectures a time when children will be able to design and create new organisms just as routinely as his generation played with chemistry sets. Well, let's hope this stays science fiction, because if it becomes possible to, as it were, play God on the kitchen table, our ecology, and even our species, may not last long unscathed.
And what about another transformative technology, robotics and artificial intelligence, AI? There have been exciting developments in what's called generalized machine learning. You've probably read about these.
DeepMind, a small London company now taken over by Google, last year, achieved a remarkable feat. Its computer beat the world champion in the Chinese game of Go. And Carnegie Mellon University developed a machine that can bluff and calculate as well as the best human poker player.
Now, the first feat may not seem a big deal, because, you remember, it was 20 years ago that IBM's Deep Blue beat Kasparov, the world chess champion. But Deep Blue was programmed in detail by expert players. In contrast, the machines that play Go and poker gained expertise by absorbing huge numbers of games and playing games themselves. Their designers don't, themselves, know how the machines make seemingly insightful decisions.
The speed of computers allows them to succeed in these endeavors by brute force methods. They learn to identify dogs, cats, and human faces by crunching through millions of images, not the way babies learn. They learn to translate by reading millions of pages of, for example, multilingual European Union documents. They never get bored.
But advances are patchy. Robots can do all of these things, but they're still clumsier than a child in moving pieces on a real chessboard. They can't tie your shoelaces or cut other people's toenails.
But sensor technology, speech recognition, information searches, and so forth are advancing at pace. And these developments won't just take over manual work-- indeed, some blue-collar jobs, like plumbing and gardening, will be among the hardest to automate. But they will take over routine legal work, medical diagnostics, and even surgery.
And the big and much-addressed social and economic question is, will this new machine age be like earlier disruptive technologies-- the car, for instance-- and create as many jobs as it destroys, or is it really different this time? The money earned by the robots could generate huge wealth for an elite, but I think that to preserve a healthy society, we require massive redistribution to ensure that everyone has at least a living wage-- not as a handout, but I think by creating and upgrading public service jobs where the human element is crucial and demand is huge, especially, for instance, carers for young and old, but also jobs like custodians, gardeners in public parks, and so on. So we need massive socialist redistribution to achieve this.
But let's look further ahead. If robots could observe and interpret their environments as adeptly as we do, they would truly, then, be perceived as intelligent beings to which, or to whom, we can relate. And such machines, of course, pervade popular culture in various recent movies, like Her, Transcendence, and Ex Machina.
So would we have obligations towards them? We worry about fellow humans, and even about whether some animals can fulfill their natural potential. So should we feel guilty if our robots are underemployed or bored?
And what if the machine developed a mind of its own? Would it stay docile or go rogue? If it could infiltrate the internet and the internet of things, it could manipulate the rest of the world. It may have goals utterly orthogonal to human wishes, or even treat humans as an encumbrance. Some AI pundits take such scenarios seriously and think this field already needs guidelines, just as biotech plainly does. But I should say that others regard these concerns about a robot takeover as premature, and worry less about artificial intelligence than about real stupidity.
But be that as it may, it's likely that society will be transformed by autonomous robots, even though the jury's out on whether they will be idiot savants or display superhuman capabilities. There's disagreement, incidentally, about the route towards human-level intelligence. Some think we should emulate nature and reverse engineer the human brain. Others say that's as misguided as designing flying machines by copying how birds flap their wings. And of course, philosophers debate whether consciousness is special to the wet, organic brains of humans, apes, and dogs, so that robots, even if their intellect seems to be human, will still lack self-awareness [INAUDIBLE], whereas others say that consciousness is emergent, and these [? entities ?] would have it, too.
The futurologist Ray Kurzweil, now working for Google, argues that once machines have surpassed human capabilities, they could themselves design and assemble a new generation of even more powerful ones-- an intelligence explosion, the so-called singularity. He thinks that humans could transcend biology by merging with computers. In old-style spiritualist parlance, they would go over to the other side.
But Kurzweil is worried this may not happen in his lifetime. He's in his 60s. So he wants his body frozen until his nirvana is reached. And Alcor in Arizona will do this, so that when immortality's on offer, you could be resurrected or your brain downloaded.
I was once interviewed by a group of these cryonic enthusiasts based in California, a group called the Society for the Abolition of Involuntary Death. And I told them I'd rather end my days in an English churchyard than in a Californian refrigerator.
And they derided me as a deathist. They thought I was really old-fashioned.
But I was surprised, recently, to find that three academics in England had gone in for cryonics. Two had paid the full whack; the third had taken the cut-price option of just having his head frozen.
And I'm glad they were from Oxford, not from my university.
But of course, despite all of that, research on aging is being seriously prioritized. But will the benefits be incremental and no more, or is aging a disease that can be cured? Dramatic life extension would be a real wild card in population projections, with huge social ramifications. But it may happen, along with human enhancement in other forms.
But it's in deep space, Carl Sagan's special arena, that robots will surely be transformative. During this century, the whole solar system will be explored by flotillas of miniaturized probes, far more advanced than, for instance, the robot that ESA's Rosetta landed on a comet or NASA's New Horizons probe that transmitted amazing pictures from Pluto, 10,000 times further away than the moon. These two instruments took 10 years on their journeys. And the amazing Cassini probe of Saturn is even more of an antique. It was launched 20 years ago.
It's done amazing things. Think how much better we could do today when you think how much smartphones have changed in 20 years. And we could do far better, too, than the Curiosity Rover on Mars.
Later this century, I think [INAUDIBLE] fabricators may be able to assemble vast lightweight structures in space-- gossamer-thin radio reflectors or solar energy collectors, for instance-- maybe using raw materials mined from the moon or from asteroids. And these robotic and AI advances, both for technology and for exploration, are eroding the practical case for human space flight. Nonetheless, I hope people will follow the robots, though they will go as risk-seeking adventurers rather than for any practical goals.
The most promising developments are spearheaded by private companies. Elon Musk's SpaceX has launched unmanned payloads and docked with the space station, and has successfully recovered and reused the launch rocket's first stage, promising real cost savings. And he hopes, soon, to offer orbital flights to paying customers, as does the Blue Origin company and others. And wealthy adventurers have already signed up for a week-long trip around the far side of the moon, voyaging further from Earth than anyone has been before. I'm told they sold a ticket for the second flight, but not the first flight.
If that's true, it has a message. Well, we should surely acclaim these private enterprise efforts in space, because they can tolerate far higher risks than a Western government can impose on publicly-funded civilian astronauts. They can thereby cut costs compared to NASA.
But these exploits should be promoted as adventures or extreme sports. The phrase space tourism should be avoided, because that lulls people into unrealistic confidence, to the extent that the first accident will then be a trauma, like the shuttle accidents. But I think that by 2100, courageous pioneers, in the mold of, to take an example, Felix Baumgartner, who broke the sound barrier in a free fall from a high-altitude balloon, may have established bases independent from the earth, on Mars, or maybe on asteroids. And Musk himself, who is aged 45, says he wants to die on Mars, but not on impact.
But don't ever expect mass emigration from Earth. Nowhere in our solar system offers an environment even as clement as the Antarctic or the top of Everest. It's a dangerous delusion to think that space offers an escape from the earth's problems. There's no planet B.
Indeed, space is an inherently hostile environment for humans. For that reason, even though, as I've said, we may wish to regulate genetic and cyber technology on earth, we should surely wish these crazy space pioneers good luck in using all such techniques to adapt to alien conditions. They'll be free from terrestrial regulations, and they'll have maximal incentive to do so because they're in such hostile conditions.
Indeed, the space farers may spearhead the post-human era, evolving within a few centuries into a new species. Just a few words to put this in context-- the stupendous time spans of the evolutionary past are now common culture, at least outside some fundamentalist circles. But most people, though happy with what's depicted in this rather crude time chart, tend to somehow think we humans are the culmination of it all, the end of the evolutionary tree.
But that hardly seems credible to an astronomer, because our sun formed 4 and 1/2 billion years ago, but it's got 6 billion to go before the fuel runs out. And the expanding universe will continue perhaps forever. To quote Woody Allen, "eternity is very long, especially towards the end."
So we may not be even at the halfway stage of evolution. Few doubt that machines will gradually surpass more and more of our distinctively human capabilities, or enhance them via cyborg technology. Disagreements are basically about the time scale-- the rate of travel, not the direction of travel. The cautious amongst us imagine time scales of centuries rather than decades before humans are overtaken or transcended by electronic intelligence, far transcending the chemical and metabolic limits of wet organic brains.
But these entities will then persist, continuing to evolve for billions of years. And moreover, the time scale for technological advances is but an instant compared to the slow time scale, millions of years, of Darwinian selection, the process that led, stage by stage, to humanity's emergence. And more relevantly, it is less than a millionth of the vast expanses of cosmic time lying ahead.
So post-human evolution will be far more wonderful than what's happened up till now. But we humans shouldn't feel too humbled, because even though we are surely not the terminal branch of evolution, we could be of special cosmic significance for jump-starting the transition to inorganic, potentially immortal entities, spreading our influence far beyond the Earth and far transcending our limitations. Moreover, planetary environments may suit us organics, but interplanetary and interstellar space may be the preferred arena, where these robotic fabricators will have the grander scope for large-scale construction and where non-biological brains may develop powers that humans can't even imagine.
And then they could spread through the cosmos. Interstellar travel isn't daunting to near-immortal beings. So any creatures witnessing the death of the sun won't be human. They'll be more distant from us than we are from a bug.
Well, that's the future for us, even if it's nothing else out there. But then the other key question is, is there life out there already or is the galaxy waiting for our progeny? We know there's nowhere in our solar system which harbors advanced life. However, there could be some sort of freeze-dried life on Mars. There may be creatures swimming under the ice on Saturn's moon, Enceladus, or indeed in other places in the solar system.
But let's now widen our horizons to the realm of the stars, to what is the prime subject matter of the Carl Sagan Institute, topics that would have really enthralled Carl. We've learned in the last 20 years that most stars in the sky are orbited by retinues of planets, just as the sun is.
We know that most stars have planets around them. And roughly speaking, one in every six has an Earth-like planet, so there are literally billions of Earth-like planets in our Milky Way galaxy. And we are especially interested in possible twins of our Earth, planets the same size as ours on orbits with temperatures such that water neither boils nor stays frozen.
And some of these have been found. There are surely millions in our galaxy. And there's one orbiting the nearest star, Proxima Centauri. And just recently, we found that another nearby faint star has seven Earth-like planets orbiting around it. Will there be life on any of those, even intelligent life?
The outer three of these seven planets are thought to be in the so-called habitable zone. They'd be spectacular places to live, because this is a miniature solar system. The years of these planets are measured in just a few days. And viewed from the surface of one of the planets, the others would loom as large as the moon does for us in the sky, zooming past.
But they're very unearthly. This is just an artist's impression, of course. They're probably tidally locked, so that they present the same face to their star, one hemisphere in perpetual light, the other in darkness-- perhaps with astronomers on the dark side, everyone else on the light side.
And these planets are discovered, mainly indirectly, by looking for the small wobble they induce in the star they're orbiting, or by looking for the slight dimming of a star when a planet moves across in front of it. So that's the only evidence we have for most of them. But we'd like, really, to see them directly, not just to infer them from their shadows, as it were. But that's hard.
To realize how hard, let's suppose that the alien astronomers who I invoked earlier were viewing the Earth with a powerful telescope from, say, 30 light years away, the distance of a nearby star. Our planet would seem, in Carl Sagan's famous phrase, a pale blue dot, very close in the sky to its star, our sun, which outshines it by many billions-- a firefly next to a searchlight.
But the aliens could learn quite a bit, nonetheless, by looking at this pale blue dot. The shade of blue will be slightly different, depending on whether the Pacific Ocean or the landmass of Eurasia was facing them. So they could infer the length of our day, the seasons, the gross topography, and the climate. And by analyzing the faint light, they could infer that it had a biosphere. And how to do this best is one of the main programs of the Carl Sagan Institute here.
The James Webb telescope may offer some clues. You'll need to collect lots of light to be able to isolate the spectrum of a planet from that of a much brighter star. And here in Europe, we're building what's called the ELT, the Extremely Large Telescope. We're not very imaginative in our nomenclature for these telescopes.
But this will have a mirror 39 meters across. That's probably at least 1 and 1/2 times the width of this lecture room, and it's a mosaic of 800 sheets of glass. And this instrument, when it's finished in about 10 years, will be drawing inferences about extrasolar planets the size of the earth, rather like those the imaginary aliens were drawing about the earth. So we will learn a great deal about the nearby Earth-like planets.
Well, habitable doesn't mean inhabited. And for most of us, of course, the number one question is, are these planets inhabited? We still don't know the likelihood. Indeed, we don't know how life began on earth.
We know too little about it to lay confident odds. We don't know what triggered, here on earth, the transition from complex molecules to entities that can metabolize and reproduce. It could have involved a fluke so rare that it happened only once in our galaxy. That's logically possible.
On the other hand, this crucial transition might have happened almost inevitably, given the right environment. We just don't know, nor do we know if the DNA, RNA chemistry of terrestrial life is the only possibility or just one chemical basis among many options that could be realized elsewhere. Moreover, even if simple life is widespread, we can't assess the odds that it will evolve, as it has on earth, into a complex biosphere. And even if it did, it might be unrecognizably different.
It makes sense, first, to look at earth-like planets, but we shouldn't limit ourselves to that, because I recall that Carl Sagan and his Cornell colleague Ed Salpeter envisaged balloon-like creatures that could float in the atmosphere of Jupiter. It could be things like that. We have no idea what we're looking for.
We should also be mindful that seemingly artificial emissions could come from something that's not organic. If the emergence of technology on a planet which had evolved rather like the earth was lagging behind what happened here on earth, if it had a slower start, then that planet, now, will, of course, be in the pre-human stage and would reveal no evidence for ET. But life on an older planet around an older star could have had a head start of a billion years or more, and thus, it may already have spawned the futuristic scenario, transitioning from organic to inorganic, which I envisage as our earth's post-human future.
So even if SETI searches reveal some artificial emission, we'd be most unlikely to catch alien intelligence in the brief sliver of time when it's in organic form. So I therefore think that any artificial transmission is less likely to be a decodable message than to be a byproduct, or even a malfunction, of some super complex interstellar technology which could trace its lineage back to alien organic beings who might still exist on the home planet, or might long have died out. I won't hold my breath, but SETI programs are a worthwhile gamble, because success in the search would carry the momentous message that concepts of logic and physics aren't limited to the hardware in human skulls or to the things that we can create.
And even if intelligence were widespread in the cosmos, we may only ever recognize a small and atypical fraction of it. Moreover, the habit of referring to ET as an alien civilization may be too restrictive, because a civilization connotes a society of separate individuals, whereas ET might be a single, integrated intelligence, maybe not on a planet. So perhaps the cosmos teems with life, even complex life.
On the other hand, our earth could be unique among the billions of planets that surely exist. This would be depressing for the searchers, but it would allow us to be less cosmically modest, because Earth, though tiny, could be the most complex and interesting entity in the entire galaxy, and its fate would be of cosmic, and not merely terrestrial, significance. Of course, there are some people who think they know the answer to this question. They're the people who write letters saying they've been abducted or they've been visited. I get some of these letters. Carl must have had huge numbers of them. I tell these people two things. The first is, do they really think that if the aliens had made the huge effort to traverse interstellar space, they'd just make a corn circle, meet one or two well-known cranks, and go away again? I don't. And secondly, I tell these people to write to each other, and not to write to me.
Well, I've got no time, fortunately, to speculate further beyond this flaky fringe, which is perhaps a good thing. So let me conclude by focusing back closer to the here and now. My theme has been that even in the concertinaed time scale that astronomers envisage, extending billions of years into the future, as well as into the past, this century may be a defining era, the century when humans jump-start the transition to electronic and potentially immortal entities, or, to take a dark view, the century when our follies could foreclose the immense future potential.
We fret unduly, most of us, about small risks-- air crashes, carcinogens in food, low radiation doses, et cetera. But I'm afraid we're in denial about these newly-emergent threats, which may seem improbable, but whose consequences could be globally devastating. Some of these are environmental, others are the potential downsides of novel, powerful technologies. I'm thinking of events where even one occurrence is too many.
So how can those of us concerned about these issues and inspired by Carl Sagan's long-term vision influence policymakers? The trouble is that even the best politicians tend to focus on the urgent and the parochial, and on getting re-elected. And this is an endemic frustration for the people who've tried to be official scientific advisors to politicians. I know many in the UK and in the US.
To attract politicians' attention, you must get headlines in the press and fill their inboxes. So scientists can have more leverage indirectly by campaigning so that the public and the media amplify their voice. Carl was, of course, the preeminent exemplar of the concerned scientist, and he had huge influence through his writings, his broadcasts, his lectures, and campaigns, and this was even before the age of social media and tweets. He would have been a leader of the recent March for Science, electrifying crowds through his passion and his eloquence.
And of course, the challenges I've addressed are global. Coping with potential shortages of resources and transitioning to low-carbon energy can't be done by each nation separately. And here, scientists have an advantage.
Science is a universal culture, spanning all nations and faiths. So scientists confront fewer impediments in straddling political divides. Carl was himself close to the leaders of the Soviet space program. I think of the SETI initiative with Shklovsky, and joint projects with [INAUDIBLE], and also the campaign to raise concern about nuclear winter.
We need, all of us, to focus on projects which are long-term in a political perspective, even if a mere instant in the history of our planet. That's something, incidentally, that universities can do quite well. They are international and they're full of young people who will live to the end of the century. My own university is setting up a center to address the extreme low-probability, high-consequence threats that I've mentioned, to assess which can be dismissed firmly as science fiction, and to consider how to enhance resilience against the more credible ones.
But though we live under the shadow of these threats and may be political pessimists, we must remain techno optimists. Advances in AI, biotech, nanotech, and space can boost a developing, as well as a developed world. Indeed, if we don't responsibly progress these new technologies, we won't achieve the kind of vision which Carl and all the rest of us would like to see.
We're all on this crowded world together. Spaceship Earth is hurtling through the void. Its passengers are anxious and fractious. Their life support system is vulnerable to disruption and breakdowns, but there is too little planning, too little horizon-scanning, too little awareness of these long-term risks. It's a wise mantra that the unfamiliar is not the same as the improbable.
And I want to conclude with brief words from two scientific sages from the past. First, HG Wells-- back in 1902, he was already alert to the risk of global disaster. And I quote, "it is impossible to show why certain things should not utterly destroy and end the human story and make all our efforts vain-- something coming out of space, or pestilence, or some drug, or a wrecking madness in the mind of man."
But nonetheless, Wells retained a vision. "Humanity," he proclaimed, "has come some way, and the distance we have traveled gives us some earnest of the way we have to go. All the past is but the beginning of the beginning. All the human mind has accomplished is but the dream before the awakening."
His rather purple prose still resonates more than 100 years later. Were he writing today, he would have been elated by our expanded vision of life in the cosmos, but he'd have been even more anxious about the perils we might face. It reflects the mix of optimism and anxiety, and of speculation and science, which I've tried to offer in this lecture.
So we mustn't leap from denial to despair, and I give the very last word to another sage, the eloquent biologist Peter Medawar. I quote, "the bells that toll for mankind are like the bells of alpine cattle. They're attached to our own necks, and it must be our fault if they don't make a tuneful and melodious sound." That's a message which would have resonated with Carl, and it's so sad he's not here to help to move it forward. Thank you very much.
LISA KALTENEGGER: Does anybody have any question they want to ask? If they do-- if you don't and you're considering other things-- do you mind coming up here? Sorry.
SPEAKER 1: Yes.
LISA KALTENEGGER: I'll just do this.
SPEAKER 1: Well, I'll take the bait on your comment about wet brain self-awareness versus intelligence in general. So at some point, if you follow the curve of the increased capability of computers to whatever point in the coming decades, one gets to the place where the complexity of the machine equals that of the human brain by some measure. And then, in principle, this machine can be as intelligent as a human being.
LORD MARTIN REES: Yes.
SPEAKER 1: But what we don't know is whether that machine is self-aware or not. And of course, there are lots of entertaining books that people have written-- Roger Penrose, and so on-- on the quantum nature of self-awareness. The question, of course, is, how do we know if that intelligent machine is also self-aware? We know that each of us is self-aware simply by analogy to our own experience. How do we determine that before we take the step of somehow giving the world over to these machines?
LORD MARTIN REES: Yes. No, I agree with that. They could be zombies, and this is relevant in two contexts. First, if it becomes possible to download your brain into a machine, you wouldn't want to do that if you thought you'd lose self-awareness.
And it also, I think, affects people's attitude to this post-human future I alluded to, because if these brains are conscious, they would have deeper thoughts than we do, and that seems a bright future, whereas if they are zombies, then people will think it's a bleak future, because some would say that the only thing that gives value to the beauties of the world is being able to appreciate them. And so they would say it's a bleak vision. And in fact, when I wrote an op-ed in the newspaper about this, I got these two conflicting views from people, depending on whether they thought that the machines would be conscious or whether they would be just zombies.
LISA KALTENEGGER: Do we have any other questions? Do you mind stepping forward? I'll try this.
SPEAKER 2: Thank you. Thank you for your talk. For a while now, for many years, I feel like I've been in mourning. I'm in mourning for the planet. I feel very sad.
I wonder-- what you're discussing, and what a lot of people-- some people or their vision of intelligence, artificial intelligence, and where that could take us, my concern is how the pace of technology is moving. It's moving much faster than we could debate it. It seems, to me, like I don't have a choice in the matter.
I mean, the technology is movi-- I mean, cell phones-- recently, there was a man, his name is Harris. It was on the news where he worked for Google, I think, and he's now outspoken in talking about how the company is making the cell phone so we're addicted to them. So it goes into the ethics of using technology in a bad way.
Anyway, so my question here is that I want to be the way I am now. I don't want to be half-computer and half-human, or all alien or all computer, and this technology that we're doing is moving very fast. And it's moving faster than we can debate it, whether it's biotech, whether it's what we hold in our hands.
So what I'm asking is, can we do this as a world? Because I don't see that happening. And the other thing I'm also extremely concerned about is population growth. I don't know if we could all have-- Helen Caldicott, actually, who came here, talked about having a one child-- not in a way that you bop somebody over the head if you have another kid.
But how do we deal with that? Because population growth is scaring me. I don't also see technology necessarily solving that issue, also. So I'm sorry I went on.
LORD MARTIN REES: There are two things. Population growth-- I agree with you, but I think it may sort itself out. It depends on the choices people make. Of course, in many countries, the birth rate's below the replacement level. So the question is whether that happens in Africa too; if so, it may sort itself out. But I agree, we don't want it to keep on growing, because that has negative [INAUDIBLE] for the rest of us.
But as regard to your general points about technology running away faster than we can absorb it or adapt to it, I completely agree with you. And I would just say two things. First, more people need to bang on about these concerns so that the public is aware of them. And also, we want to make sure that too much power does not accrue to these huge companies.
I think those are two things we've got to do. And the more people who are aware and join in these campaigns, the better.
SPEAKER 3: It's an old saying that a machine should work and people should think. And now that the machines are probably going to think, too, what can people do? I mean, what is left for humanity to do for itself?
LORD MARTIN REES: Well, I think they can do artistic things, even if the machine could do them. But I think, for a long time, I think, if you are old and wealthy, you would want to pay real human beings to look after you. You won't be happy with a machine.
And so I think one thing we could do is provide enough well-paid, dignified jobs to ensure that every old person has a real carer. And that would be millions of jobs, but I think, preferably, in the public sector, and that would provide dignified employment in a human way. And so carers of young and old, and, I would say, gardeners and artists, would be able to contribute.
But I think carers could be employed in huge numbers, and there's a desperate shortage of them. And I think that's better than just giving people a living wage for doing nothing. So that'd be my answer.
LISA KALTENEGGER: I think, if I can add an answer to that, is, you were saying machines should work and people should think, I think there are so many problems that if we would actually add our collective minds to trying to solve, we would achieve incredible advances, whether it's medicine, biology, environment. And if we could outsource, in a way, the mundane tasks, as you wish, or the calculating of some answers to machines, I actually take a more positive take on that. What an amazing accomplishment the human race could make if we actually managed that the right way.
But I agree with Martin. The problem is, we don't have a structure in place that makes sure that that is what we're going to do and that we don't go the wrong way, where it actually takes over jobs and people don't know what to do, and so on. But if you would just think-- let's take your example of art.
If people had the leisure to actually think about what they would produce, whether it's in art or songs or music, if every one of us had the leisure to actually think what we could contribute to a human endeavor in that, I think that would be fascinating, just that small aspect. But we need to make sure that we're going the right way. I completely agree with you.
LORD MARTIN REES: Yes, people have to feel they're making a contribution. And I think caring, as well, there's unlimited demand.
LISA KALTENEGGER: Other questions?
SPEAKER 4: There was an interesting article in the New York Times, about a year or so ago, by an astronomer who claimed that any advanced civilization, wherever in the universe, will be self-destructive in the sense that advancing civilizations use more and more energy, and then the second law crops up, and they must, therefore, generate more heat. I'd like to hear your thoughts about that.
LORD MARTIN REES: Yes. Well, that could happen, but it's certainly not inevitable, because we can reduce our energy consumption in lots of ways. So I think that it could happen, but I don't see any reason why that's inevitable.
LISA KALTENEGGER: Are there any last questions?
SPEAKER 5: First, just a quick comment about the first question, because I also go to various AI talks, and they point out that, on that chart, the computer that beat Kasparov was equivalent to a mouse. And then they project, about a decade from now, you will be able to buy something comparable to the human brain and storage and bandwidth for about $1,000, but it will be connected to the network, so it'll be thousands of times more powerful. It doesn't answer the question of whether it'll be a zombie or not.
The question I have is, in some sense, your talk bypassed some of the most pressing issues. I mean, you talked about all of the pressing issues we're facing and you emphasized how poor we are at projecting for the future when we can't even decide things on the half-year to one-year time frame. But we're at a very disheartening point right now, in both of our countries, especially with the marginalization of science in this country where one has the feeling you're shouted down if you are speaking the truth. And you can't fight that. And so my question is, given the pressure of the problem, what, practically, would you encourage us to do?
LORD MARTIN REES: Well, I did say I was a political pessimist, though a technical optimist. And there's plenty of reason to be pessimistic. And I think all that we can do is bang on as much as we can in the spirit of Carl Sagan. We may not win, but I think we've got to try.
But I certainly have no quick fixes to suggest for any of these problems. I think we're going to have a bumpy ride through this century, particularly because of the greater empowerment of individuals and small groups. It'll be very, very hard to prevent massive disorder or disruption. So I'm very worried about that, and I don't see any solution.
LISA KALTENEGGER: However, I would say, on the last point in this talk, having an audience from many different departments and walks of life actually come to a talk like this and care about surviving the century is a very positive sign of who cares, and of how many people we'll have to help get that message across.
And so with that, thank you all for coming. And thanks to Martin Rees.
After 4.5 billion years of existence, Earth’s fate may be determined this century by one species alone – ours. The unintended consequences of powerful technologies like nuclear, biotech and artificial intelligence have created high cosmic stakes for our world.
The United Kingdom’s Astronomer Royal, Lord Martin Rees, explored our vulnerabilities and possibilities in the first Carl Sagan Distinguished Lecture, May 8, 2017. Rees was introduced by Ann Druyan, Emmy and Peabody award-winning writer/producer of the PBS documentary series "Cosmos" and board member of Cornell’s Carl Sagan Institute, sponsor of the lecture.