The Search for Alien Salvation
Is there an alternative to being saved by non-humans?

This short article asks questions, but does not claim to provide answers.
According to conventional wisdom, humanity is incorrigibly selfish, quarrelsome, short-sighted and greedy. Left to our own devices, there is only one outcome: total destruction and extinction - if not of all human life, then certainly of civilisation as we know it.
This view, reinforced - if not to say rubbed in our faces - by what we see going on all around us, is founded on the widespread cultural belief that people are nothing more than bags of chemicals, thrown together by chance. Our depravity seems rooted in what we are, with no chance of changing course. If 'hell is other people' - at least when relations with them turn bad - then we will inevitably destroy ourselves.
Reminders of the scale of our plight are everywhere. Every day we are confronted with signs of a warming planet; the trail of evidence points directly to our malfeasance. Millions suffer and die unnecessarily in warfare, the horror of which could have been avoided if only we knew how. A never-ending spectacle of our criminality, and of the horrors we inflict on one another, is paraded before us. Apathy, insularity and carelessness seem to fill the gaps between these islands of terror; there seems to be no room for redemption.
No wonder, then, that we look for help from elsewhere. Not out of hope or expectation, but because we have given up on ourselves.
Two ways in which we look for help are explored here: first, by searching for signs of alien civilisations; and second, by developing artificial intelligence that could be smarter than us. Either way, we hope, something with better sense than us will take over and save us.

Our quest for life elsewhere is of course a very ancient one. From Democritus and Epicurus through St. Augustine to the likes of Nicholas of Cusa and Descartes, we have been struck with wonderment when faced with the endless heavens; the notion that we are alone in the universe has become utterly implausible.
What makes the modern epoch, and particularly the past few decades, different from the past is not only that the crises besetting us are world-embracing, with no escape, but also that the evidence for the ubiquity of planets orbiting stars other than our Sun has really started stacking up. Exoplanets are everywhere and seemingly commonplace: even if life is special and uncommon, there are likely so many habitable planets within a relatively small volume of space that a highly advanced alien civilisation seems almost inevitable.
According to Ethan Siegel's recent article in 'Big Think', we must keep searching for alien life, and for advanced civilisations, because the rewards of contact could be so great. The argument is informed by the perspective that we are incorrigibly argumentative intelligent apes who will surely perish if left to our own devices: finding a benign advanced alien civilisation - or allowing ourselves to be found - could save us from ourselves.
By virtue of its very survival, such an alien civilisation, Siegel argues, could share the fruits of its experience with us: how to overcome the tragedy of the commons, in which limited resources need to be shared equitably; how to prioritise long-term solutions to existential threats over the temptations of short-term gains; and how to adapt to a world-spanning civilisation in which the want and hurt of some eventually affect all.
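The tragedy of the commons invoked here can be made concrete with a toy simulation - a sketch under purely illustrative assumptions (five agents, a logistic regrowth rule, arbitrary harvest rates), not a model of any real resource:

```python
# A minimal toy of the 'tragedy of the commons': a shared, regrowing
# resource harvested by several agents. All parameters are illustrative
# assumptions, chosen only to show the qualitative contrast.

def simulate(harvest_per_agent, agents=5, stock=100.0, regrowth=0.25, years=50):
    """Return the resource stock after `years` of harvesting and regrowth."""
    for _ in range(years):
        stock -= agents * harvest_per_agent              # everyone takes their share
        stock = max(stock, 0.0)                          # cannot harvest below zero
        stock += regrowth * stock * (1 - stock / 100.0)  # logistic regrowth, capacity 100
    return stock

greedy = simulate(harvest_per_agent=4.0)      # short-term maximisation
restrained = simulate(harvest_per_agent=1.0)  # long-term thinking
print(f"greedy: {greedy:.1f}, restrained: {restrained:.1f}")
```

Under these made-up numbers the greedy strategy collapses the resource entirely, while the restrained one settles near a sustainable equilibrium - the shape of the dilemma, if nothing more.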
For Siegel, then, contact with an extra-terrestrial civilisation could gift us the knowledge, insights and wisdom to overcome our own limitations, and thereby help us leap-frog to a truly global civilisation.
There are, though, some significant philosophical and existential objections to this idea. First, the impact on our world of the news that we have made contact with an alien civilisation could be profoundly destabilising. If the effects of relatively sudden, unplanned events such as the recent global pandemic are anything to go by, then the effect of incontrovertible knowledge that we are in the presence of a superior alien intellect doesn't bear thinking about.
Second, if we see depravity in ourselves, why shouldn't an alien civilisation be similarly motivated (or, some would say, afflicted)? Broadcasting a homing beacon to a hostile group of aliens equipped with vastly superior technology could be a costly - indeed existential - mistake. Even if they were not overtly hostile, their technology and use of resources might be so far beyond ours that they would regard us as no more significant than we regard bacteria. We could be wiped out by a single act of alien carelessness.
Third, and perhaps most significantly, if we allow for the moment that advanced alien civilisations are out there, they may not wish to contact us, or to let us 'see' them, until we have collectively reached some predetermined level of development that we are unaware of. This idea has been explored in fiction in Arthur C. Clarke's Childhood's End and Julian May's Intervention series, among many others; and, from the other perspective, in the popular film and television franchise Star Trek.
One common objection to the 'zoo hypothesis' summarised in the previous paragraph is that it would take only one 'rogue' civilisation to break the collective silence and allow humanity to 'see' them. This objection presupposes, though, that such civilisations have not already been moved to develop a philosophy of 'oneness': perhaps such a development is a prerequisite for any non-human civilisation capable of contacting us.
It should not be supposed from the foregoing that this author wishes to de-emphasise or relegate our ongoing search for extra-solar planets. Apart from the endless fascination, and the satisfaction of proving our capabilities, such efforts motivate us to develop new and better technologies and applications, and open up new and exciting ways of democratising scientific endeavour. Not least, they help us place ourselves in the scale of the universe: likely, as one of many intelligent civilisations.

We haven't sought salvation from ourselves only by looking out to the stars. We have also looked to what we could build, in what we surmise to be the near future - with what seems a desperate and perhaps forlorn hope.
One such variation on this theme sprang from the ever-fertile mind of the brilliant Dr James Lovelock, originator of the Gaia hypothesis. A veritable polymath, Lovelock excelled at pushing the boundaries of human thought (and experimental practice) at the interstices between academic silos.
Though there are perhaps as many flavours of the well-known Gaia hypothesis as there are science writers and philosophers, the foundation of the idea that Lovelock formulated, in collaboration with the biologist Dr Lynn Margulis, is that the Earth's surface, together with all the life-forms that populate it, constitutes a single self-regulating entity.
In its 'weak' form, Gaia is nothing more than a world-embracing system of inter-related, tightly coupled processes which, solely as a by-product of all their interdependent interactions, keeps Earth a safe and pleasant place for life to thrive.
The 'strong' Gaia hypothesis, by contrast, posits a purpose. That is to say, the entire surface of our planet, including all the manifold diversity of life, acts with the aim of preserving the ideal conditions for life. The entity of Gaia is almost reified. For some, exploring the idea further than its authors perhaps intended, Gaia 'becomes' a person, a being, intent on its own survival.
In whatever form, the self-regulation at the heart of the Gaia hypothesis bears a close resemblance to the field of cybernetics, which was coincidentally developing at much the same time. Just like a robot leg built to stand steady under a variety of inputs and conditions, the Earth 'system' seeks to return to a point of stability.
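That cybernetic return to stability can be sketched as a simple negative-feedback loop - a toy with purely illustrative numbers (an assumed set point, a proportional gain, a one-off disturbance), in the spirit of the idea rather than any real climate model:

```python
# A toy negative-feedback loop: a disturbed system is nudged back
# toward its set point by a correction proportional to the error.
# All numbers are illustrative assumptions, not climate data.

def regulate(state, set_point=15.0, gain=0.3, steps=30, disturbance=8.0):
    """Track `state` as it returns toward `set_point` after a disturbance."""
    state += disturbance               # a perturbation knocks the system off balance
    history = [state]
    for _ in range(steps):
        error = set_point - state      # how far we are from stability
        state += gain * error          # corrective response proportional to error
        history.append(state)
    return history

temps = regulate(15.0)
print(f"after disturbance: {temps[0]:.1f}; after {len(temps) - 1} steps: {temps[-1]:.2f}")
```

The error shrinks by a constant factor each step, so the system settles back to its set point - the same homeostatic behaviour, in miniature, that the weak Gaia hypothesis ascribes to the planet.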
In his later work, Novacene (2019), Lovelock develops this idea further. Gripped by the paralysing conviction that humanity can do no better, that it cannot help but hurl itself over the cliff-edge of destruction, Lovelock suggested that in the near future the creations of our minds, our skills and our ingenuity will save us.
In a nutshell, struck by the undeniably swift recent advances in what we now call 'artificial intelligence' (not to mention the field of cybernetics itself), Lovelock proposed that, as a consequence of Moore's Law, the computing machines we have built will be capable of self-design and self-replication much sooner than we think. Driven by this exponential curve of progress, coupled with a self-engineered form of evolution, such autonomous machines will very quickly become thousands, then millions, of times more capable than us.
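The sheer force of the exponential growth Moore's Law describes - transistor counts doubling roughly every two years - is easy to underestimate; a few lines make it tangible (the starting count and doubling period here are illustrative assumptions, not industry figures):

```python
# A rough sketch of Moore's Law as steady exponential doubling.
# `start` and `doubling_period` are illustrative assumptions only.

def transistors(years, start=2_300, doubling_period=2.0):
    """Projected transistor count after `years` of steady doubling."""
    return start * 2 ** (years / doubling_period)

for years in (10, 20, 40):
    print(f"after {years:>2} years: ~{transistors(years):,.0f} transistors")
```

Ten years of doubling multiplies the count thirty-two-fold; forty years, about a million-fold - which is why Lovelock could imagine machine capability racing past ours so quickly, and why the faltering of that curve (see below) matters so much.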
In Lovelock's Novacene, such developments leave humanity in an evolutionary cul-de-sac - cast to one side, almost forgotten by its new cyborg overlords. Crucial to his thinking, though, is the idea that these cyborgs, once in control of our planet and its means of production, will quickly realise that their future existence depends on the continuation of the self-same stable climatic conditions that allowed human culture to grow, acquire complexity and thrive. Lovelock's cyborgs will then inevitably do what he imagines humanity cannot: right the listing ship that is our climate, and restore it to liveable stability.
Humanity will then survive into the future merely as a by-product of processes it cannot control - just as in some imagined forms of alien intervention.
Dr Lovelock's Novacene is of course far from being the only thought-experiment in this direction, but for the purposes of this short journey, let us imagine that it is representative of the genre.
Unfortunately for Lovelock, there are plenty of reasons to believe that his idea will remain in the realms of fiction.
The first of these is that it has become obvious to industry specialists and watchers that Moore's Law is running out of road: chip makers are already planning for the end of the exponential growth in the number of transistors on a single chip that we have seen over the past several decades.
Second, what we call 'artificial intelligence' really isn't, despite appearances and the name. One of the favourite buzzwords of the current era, A.I. is nothing more than machine learning coupled with database-driven expert systems, designed to perform specific tasks. Autonomous rather than independent, the very best A.I. can no more deviate from the tracks we have set it on than our trains can. It can assist a driver to navigate roads, but is very poor at anticipating new hazards; it can write plausible essays in a variety of styles, but cannot create what it doesn't know, and what it does create is readily recognised as derivative by experts in the field.
Third, it is doubtful that machines could ever achieve consciousness. For a start, we don't even know what consciousness is, or how to define the conditions under which it could arise. The very best we can do right now, at the leading edge of neuroscience, is to map some of the correlates (patterns of activity that show up in the brain) of the decision-making process. Correlates, not causes. As an example of the scale of the problem we face in attempting to replicate consciousness, consider the property of consciousness we describe as shifting the focus of our attention. This appears to happen autonomously: we decide to do it ourselves, not necessarily as the result of input from elsewhere. That seems to be impossible for any computing machine we can build, even as a thought experiment.
The fourth and last roadblock in the way of the rise of our cyborg overlords is more philosophical. Suppose for a moment that it is possible, after all, to build independent, self-engineering, self-replicating conscious machines. At some stage we will have to design such machines and endow them with consciousness. That design can only be the product of what we can imagine: we cannot build or devise what we cannot conceive of. So, as a consequence, we would build conscious cyborgs in our own image, as it were - complete with what we imagine to be our faults and limitations. Despite ourselves, we sow the seeds of our own destruction in the minds of the very machines we intend to save us.

It is undeniably hard to accept the realisation that, whether it be the 'Age of Alienlightenment' or the rise of the robots, there doesn't appear to be anything outside ourselves that we can see or build that will save us from ourselves.
We then have to do it all ourselves.
The question that arises, naturally, is: are we capable?
Questions. That's what we have left at the end of this essay. In essence, what we face is a 'tragedy of the commons'.
Are we capable of prioritising long-term goals at the expense of short-term satisfactions?
Can we overcome the horror of Larkin's unresting death and thereby find ourselves, and each other?
Do we have the capacity for our own salvation, by recognising who we are?
Shall we not, then, start to believe in ourselves?
About the Creator
Andrew Scott
Student scribbler



