May 12

Should we fear the emergent properties of Artificial Intelligence?


Welcome to the first words of this website 🙂

Gotta start somewhere. So, let me begin with one of the most important issues in AI in 2023: that of Emergent Properties.

Large Language Models such as ChatGPT can accurately translate text from one language to another, yet they were never explicitly taught to. They can also perform multiplications, or generate and even simulate running functional code (acting like a virtual dev environment), without having been trained on anything other than statistical text manipulation!

Imagine finishing a linguistics course and suddenly being great at maths and software development, without having taken a single class in either topic. Now you get an idea of what is happening in AI.

Exchange with ChatGPT to test the system's arithmetic capabilities, which are an emergent property.

What are emergent properties?

The term emergent property describes:

an important characteristic of a complex system that cannot be explained by analyzing its individual components.

Nothing in ChatGPT’s neural structure or data was specifically intended to make it a good translator. Yet, it is. The ability emerges “spontaneously” from the complexity of its system, not from explicit coding. It is, in essence, a side effect, one among many others. This is what ChatGPT itself has to say about some of them:

Screen copy of an exchange with ChatGPT about the emergent properties in its system. Among emergent properties described by ChatGPT are conversational ability, language translation, information retrieval, knowledge representation, and creative writing.

Other notable emergent properties around us include the starling murmurations we often see fantastic photographs of. None of the individual birds is programmed to create those striking flight patterns. They emerge from each bird's reactions to its nearest neighbours (and the complex interactions between tens of thousands of them).
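The murmuration analogy can be made concrete in code. Below is a minimal, illustrative boids-style sketch (everything here is hypothetical, written for this post rather than taken from any real simulation library): each bird follows only three local rules – drift toward nearby birds, match their velocity, and keep a minimum distance – and the flock's headings end up aligned on their own, with no global choreography programmed anywhere.

```python
import math
import random

def step(boids, cohesion=0.01, alignment=0.05, separation=0.05, min_dist=1.0):
    """One simulation step: each boid reacts only to the other boids, never
    to any global plan. A boid is a tuple (x, y, vx, vy)."""
    new = []
    for i, (x, y, vx, vy) in enumerate(boids):
        others = [b for j, b in enumerate(boids) if j != i]
        cx = sum(b[0] for b in others) / len(others)
        cy = sum(b[1] for b in others) / len(others)
        avx = sum(b[2] for b in others) / len(others)
        avy = sum(b[3] for b in others) / len(others)
        # Rule 1: drift toward the local centre of mass (cohesion)
        vx += (cx - x) * cohesion
        vy += (cy - y) * cohesion
        # Rule 2: match the neighbours' average velocity (alignment)
        vx += (avx - vx) * alignment
        vy += (avy - vy) * alignment
        # Rule 3: avoid crowding close neighbours (separation)
        for bx, by, _, _ in others:
            d = math.hypot(x - bx, y - by)
            if 0 < d < min_dist:
                vx += (x - bx) * separation
                vy += (y - by) * separation
        new.append((x + vx, y + vy, vx, vy))
    return new

def heading_spread(boids):
    """0 when every bird flies in the same direction, ~1 when headings are random."""
    angles = [math.atan2(vy, vx) for _, _, vx, vy in boids]
    mean_x = sum(math.cos(a) for a in angles) / len(angles)
    mean_y = sum(math.sin(a) for a in angles) / len(angles)
    return 1 - math.hypot(mean_x, mean_y)

random.seed(0)
flock = [(random.uniform(0, 50), random.uniform(0, 50),
          random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(30)]

before = heading_spread(flock)
for _ in range(200):
    flock = step(flock)
after = heading_spread(flock)
print(f"heading spread: {before:.3f} -> {after:.3f}")
```

Running this, the heading spread drops sharply between the random start and step 200: coordinated flight is nowhere in the rules, it is an emergent property of thirty birds each minding only its neighbours.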

In human beings also, emergent properties are a big deal. Though this is debated, consciousness itself is often thought to be an emergent property of the brain’s great complexity.

And I believe that tacit learning – the abilities that come to us through experience rather than explicit training (and that are often far more valuable to us) – is a form of emergent property. Some people are excellent photographers, not because a course specifically taught them to be, but because accumulated experience and feedback – the sort of training neural networks receive – coalesce into high-quality, intuitive (emergent rather than rule-based) shooting over time.

Note: For a more detailed overview on how ChatGPT works and what emergent properties are, read So, what is ChatGPT and how does it work?


Are AI’s emergent properties a danger?

As access to data, scalable models, and computing power increased, programmers expected that larger models would become better at the task they were designed for. However, they didn’t fully anticipate the increase in performance at jobs those models were never intended to perform.

It’s been seven years since AlphaGo beat Lee Sedol at Go, back in March 2016. In Moore’s Law terms, we now have roughly 30 times more computing power at our disposal, and the program itself has also been made significantly better over time. So it’s possible that today’s version is hundreds of times more capable than the one that beat the world champion.
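That Moore's Law figure is easy to sanity-check with a back-of-envelope calculation, assuming the classic approximation of a doubling in computing power every 18 months over the roughly seven years between the match (March 2016) and this post (May 2023):

```python
# Back-of-envelope Moore's Law estimate (an approximation, not a measurement):
# computing power doubling every 18 months between March 2016 and May 2023.
years = 2023.4 - 2016.2        # ~7.2 years between the two dates
doublings = years / 1.5        # one doubling per 18 months
growth = 2 ** doublings
print(f"~{growth:.0f}x more computing power")
```

The result lands in the high twenties, consistent with the "roughly 30 times" order of magnitude, though the real figure depends on which hardware trend you track.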

The combination of fantastic power and abilities that escape our control naturally leads to worrying conclusions. What if said abilities are used against us? What if the system develops a mind of its own, and we can no longer pull the plug? What if it perceives our attempts at controlling it as an existential threat and retaliates? More realistically, for now, what will happen to my job, if it can do everything faster than me?

Testing is possible

Let’s frame this fear in two ways:

1. What we are really dealing with here is a problem called AI alignment. As models grow bigger, so does their number of emergent properties. But their performance on their intended goal can also degrade in parallel: those systems become misaligned with their original purpose, exploiting loopholes in their specification to maximize measured performance rather than conforming to the spec’s intent. Many leading experts believe this problem worsens with system size and ability, which in itself puts a damper on unchecked growth in model size.

2. Unexpected doesn’t mean completely unpredictable. Software quality assurance is a well-charted process, and it is possible to consider and anticipate potential emergent properties of an AI system before its release. The current pressure to release under-tested models will either lead to some shocking incident, triggering political intervention to impose more thorough deployment practices, or wash away as the performance hazards of those models erode their makers’ reputation and revenue. No company wants a strategy based on products that escape its control.

Fear the human more than the tool

1. In 2016, Microsoft released a chatbot called Tay. Within hours, Tay began to exhibit racist and sexist behavior. Because it was designed to learn from its interactions with Twitter users, Tay’s emergent properties led it to absorb the racist and sexist values of those users.

This reflects more on the contents of Twitter than on the technology, which simply mirrors human interactions. Emergent properties certainly raise important ethical and social questions, but many of them trace directly back to the systems’ creators and users, not the tech itself.

2. While high-quality machine translation might have been a pleasant surprise during the early phases of GPT development, it is highly unlikely that OpenAI was still taken by surprise by this capability after public release, or that they hadn’t anticipated it at all. While it is impossible to eliminate all risks associated with emergent properties, it is possible to mitigate them through careful design and deployment, and through ongoing monitoring and evaluation.

Know who you are dealing with, which company built the tools you are using, and what its values are.

Ultimately, it is important for researchers and developers to carefully consider the potential unintended consequences of their work, and to design systems that are robust, transparent, and accountable. Policymakers and the public, in turn, should be aware of the risks and benefits of those systems before using them. It is up to us to decide whom we want to deal with, based on the ethics and practices of those actors.

3. With a fast-paced AI ecosystem influencing our daily lives more and more, it is our responsibility to keep both the raving optimists and the Cassandras at bay, and to ensure our understanding of this new component of our society is good enough to let us form our own opinions and make our own decisions.

This is what this blog is designed for 🙂

