Alien Intelligence (AI): Is it time to panic?


What are we to make of AI? Is it an amazing tool that everyone should use, as Patrik Schumacher has argued? Or is it terrifying, as others fear? And does it even constitute a bigger threat to humanity than climate change, as Mo Gawdat, former Chief Business Officer of Google X, has claimed?

For me, it is both. It is an extraordinarily powerful tool, but – precisely because of that fact – it is also a threat. Let us be clear: there is nothing inherently evil about AI. As far as we are aware, it has no intentionality, and without intentionality, it cannot be evil. It is just a tool. But – like any other tool – in the wrong hands, it could be lethal. After all, a person can use a kitchen knife either to cut up vegetables or potentially to kill someone. But don’t blame the kitchen knife; as yet, no tool has ever been convicted of a crime. However, AI has surprised many experts because it has proved to be far more capable than anyone had ever imagined. No one had predicted that chatbots like ChatGPT would know 10,000 times more than a human being, and no one had predicted that image-generating diffusion models like MidJourney would be able to design better than any architect.


No one is more alarmed than Geoffrey Hinton, often referred to as the ‘Godfather of AI.’ Hinton is a remarkable figure. He comes from a distinguished family of scientists in the UK. His great-great-grandfather was George Boole, of Boolean logic fame. Hinton was actually admitted to the University of Cambridge to study architecture, but quit after two days, once he realized that architecture was not for him. He studied science instead and eventually moved into AI. Hinton proved to be the hero of the AI story in that he remained convinced that the best way to make AI work would be to model it on the brain. In an era in which neural networks had been dismissed because they had failed to deliver, Hinton stubbornly persisted. Eventually, once Graphics Processing Units (GPUs) had been introduced at the turn of the millennium and computers had become much faster and more powerful, neural networks started to deliver on their promise, and Hinton was vindicated. These developments led to the Deep Learning revolution that is powering AI today. These days, neural networks – Deep Learning – are almost synonymous with AI.

AI, it would seem, is now working amazingly well. This, of course, is a huge success, and AI is now incredibly useful. So why are people so terrified of its capabilities? The problem, it seems, is that AI is working too well and has started to develop capabilities that most had thought would take decades to emerge – if they emerged at all. For example, it had been generally assumed that AI has no more capacity to think than a pocket calculator and would be unlikely ever to think in human terms. But according to some experts, AI is now capable of actually thinking in exactly the same way as humans do. Likewise, when Google engineer Blake Lemoine claimed that AI might be aware and have feelings, he was laughed at and eventually lost his job at Google. But now experts are not so sure.

For Hinton, the first inkling that AI might be more capable than anyone thought came when he discovered that PaLM – Google’s version of GPT – was able to explain a joke. Now, if it was able to explain a joke, it must have been able to ‘understand’ that joke. Another concern arose when Hinton began to wonder whether AI could ‘think,’ and, if it could ‘think,’ whether that was a metaphorical use of the word ‘think’ or whether it was exactly the same kind of ‘thinking’ that humans engage in. Eventually, he became convinced it was the latter. As he put it, “I strongly believe that use of the word ‘think’... was exactly the same way of using ‘think’ as we do with people.” This is compounded by the fact that he also began to realize that AI has a better way of learning and a more efficient way of sharing its knowledge than humans have. This is because many copies of the same AI model can run on different hardware but do exactly the same thing. “Whenever one [model] learns anything, all the others know it,” Hinton noted. “People can’t do that. If I learn a whole lot of stuff about quantum mechanics, and I want you to know all that stuff about quantum mechanics, it’s a long, painful process of getting you to understand it.”
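Hinton’s point about sharing knowledge comes down to the fact that everything a neural network ‘knows’ is stored in its weights, and weights can simply be copied. A minimal sketch in Python (a toy one-parameter model, nothing like a real LLM) makes the contrast concrete:

```python
# A toy illustration (not any real LLM) of Hinton's point: a model's
# "knowledge" lives entirely in its weights, so identical copies can
# share what one of them learns by copying those weights directly.

class TinyModel:
    """A one-parameter linear model: y = w * x."""

    def __init__(self, w=0.0):
        self.w = w

    def predict(self, x):
        return self.w * x

    def train_step(self, x, y, lr=0.1):
        # One step of gradient descent on the squared error (w*x - y)^2.
        grad = 2 * (self.predict(x) - y) * x
        self.w -= lr * grad

model_a = TinyModel()
model_b = TinyModel()

# Only model_a is trained, towards the target function y = 3 * x.
for _ in range(200):
    model_a.train_step(2.0, 6.0)

# Transferring the learned knowledge to model_b is a single copy,
# not a "long, painful process" of explanation.
model_b.w = model_a.w

print(model_b.predict(2.0))  # model_b now predicts like model_a
```

Training model_a takes two hundred gradient steps; transferring everything it learned to model_b takes a single assignment. Scaled up to billions of weights, this is why identical copies of a model can, in effect, learn as one.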

And so, could we now claim that AI is genuinely intelligent? Much depends, of course, on how we understand ‘intelligence’. There are, surely, different forms of intelligence, and it would be wrong to limit ourselves to a definition tied to human intelligence. For Hinton, the two main types of intelligence are animal brains and neural networks, and the intelligence of neural networks is superior: “It’s a completely different form of intelligence – a new and better form of intelligence.” Personally, I like to call it ‘alien intelligence,’ a term that has already been used by Philip Rosedale.


My point here is that there are many different forms of ‘intelligence,’ just as there are many different forms of ‘thinking’, ‘understanding’, and ‘learning’. We need to use inverted commas when referring to these terms because otherwise there is a risk of anthropomorphizing AI. The problem is that we human beings tend to adopt an anthropocentric outlook. We tend to judge the world on our terms and consider ourselves to be the center of intelligent life in the universe. But is this not a mistake? Instead of judging AI on our terms, what if we were to judge ourselves on AI’s terms? Would we not appear to be vastly inferior? This, at any rate, is why some call for a ‘Second Copernican Revolution.’ We need to correct this anthropocentric outlook and recognize that we human beings are no longer the center of intelligent life in the universe.

How has AI achieved these mysterious powers? This is where the story gets intriguing. After all, the neural networks in these LLMs are not so complicated. The algorithm used consists of barely 2,000 lines of code. But it is the sheer size of these LLMs that has proved to be so significant. Strangely, they are exhibiting what are referred to as ‘emergent capabilities’ precisely because of their size. The term ‘emergence’ refers to a principle that we have known about for some time but have so far been unable to explain in a convincing way. Emergence can be found in natural systems, such as the aerial acrobatics of a flock of starlings – referred to as ‘murmuration’ – when they come in to roost in the evening; it can be found in the stigmergic behavior of ants laying pheromone trails; and it can be found in the collective behavior of a slime mold, where thousands of individual cells come together to form a single entity when foraging for food.

The principle here is that in any multi-agent system, there tends to be a form of bottom-up ‘emergent’ behavior that is unpredictable, whereby the whole is greater than the sum of the parts. Furthermore, the larger the multi-agent system, the more extraordinary the emergence that appears. Scientists have long been aware of this principle but have struggled to explain it. Strong forms of emergence have even been compared to magic. Of course, magic does not technically exist. A magician does not perform magic. Rather, a magician performs a trick whereby the actual operations are concealed so as to fool the audience into thinking it is magic. Magic, however – if we are to believe a statement commonly attributed to Arthur C. Clarke – is merely a phenomenon that science has yet to explain.
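The principle can be illustrated with a deterministic toy version of the classic ‘double bridge’ ant experiment (the numbers below are illustrative, not a scientific model): each simulated ant follows one trivial local rule – prefer the path with more pheromone – yet the colony as a whole settles on the shorter path, something no individual ant ever computes or compares.

```python
# A deterministic, mean-field sketch of the 'double bridge' ant
# experiment (illustrative parameters, not a scientific model).
# Each round, 100 ants split between two paths in proportion to the
# pheromone on each; pheromone deposited per trip is inversely
# proportional to path length, because shorter trips finish sooner.
# No ant ever compares the two paths, yet the colony converges on
# the shorter one: the whole exceeds the sum of the parts.

pheromone = {"short": 1.0, "long": 1.0}  # both paths start out equal
length = {"short": 1.0, "long": 2.0}     # the long path is twice as long

for _ in range(100):
    total = pheromone["short"] + pheromone["long"]
    p_short = pheromone["short"] / total  # share of ants taking the short path
    pheromone["short"] += 100 * p_short / length["short"]
    pheromone["long"] += 100 * (1 - p_short) / length["long"]

share_short = pheromone["short"] / (pheromone["short"] + pheromone["long"])
print(f"pheromone share on the short path: {share_short:.2f}")
```

The positive feedback loop – more pheromone attracts more ants, which deposit more pheromone – is exactly the kind of simple local rule from which surprising collective behavior emerges, and nothing in any single ant’s rule mentions ‘choosing the shorter path.’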

It is these ‘emergent capabilities’ that have allowed AI to be ‘creative.’ It is these that have allowed AI to learn to write, translate language, and generate code. Indeed, as Yuval Harari has claimed, AI has now hacked our human operating system – language. This is potentially terrifying because words are key to everything. Here, however, I want to claim that AI has also hacked our visual system – design. Indeed, take a look at any of the MidJourney-generated illustrations that accompany this article, and you will notice that AI is quite capable of composing a design. These images are generated from ‘prompts’, or verbal descriptions, that MidJourney translates into images. But these prompts – detailed as they are when it comes to describing particular attributes of the generated image, such as lighting conditions, hyperrealistic detailing, rendering, and so on – contain very few words to describe the building and its landscape.

In fact, the only words used to describe the design are limited to expressions such as ‘ultra-contemporary futuristic house high on a mountain in the Austrian Alps.’ Nothing more. MidJourney does the rest. It generates images so convincing that they offer a strong sense of materiality; it adds reflections; it adds trees, rocks, and mountains in the background; it adds all the details. In short, MidJourney generates the entire design. Furthermore, we can use exactly the same prompt but change the reference from ‘building’ to ‘jewelry’ or ‘fashion item’, and it will generate stunning outcomes. This is both amazing and somewhat terrifying.

Most terrifying of all, however, is the question of what else AI has learned to do, of which we are unaware. My point here is that when any intelligent entity is operating at a level way beyond human understanding, we human beings simply cannot grasp what it is thinking, just as we cannot detect the smells and sounds that a dog can detect. The dumb, they say, do not know how dumb they are.

So AI has hacked both language and design. But might it not also have hacked the very ‘genome’ of human culture itself? Those familiar with the cult book series and movie The Hitchhiker’s Guide to the Galaxy will recall that a supercomputer – named ‘Deep Thought’ – provided the answer to ‘life, the universe and everything.’ Somewhat disconcertingly, as we know, the answer was ‘42’. But could AI actually now do the same? Could AI explain our whole existence? The only difference – if we base our experience on ChatGPT or MidJourney – is that whereas Deep Thought took seven and a half million years to come up with the answer, AI would be able to answer in 3 seconds.

On the back cover of the book The Hitchhiker’s Guide to the Galaxy are written the words ‘DON’T PANIC!’

But is it not time to panic?

*Images generated by Neil Leach, using MidJourney V5.2.

