Artificial Intelligence#

Artificial intelligence (AI) is a branch of computer science concerned with building machines capable of performing tasks that are typically associated with intelligence when performed by humans. When introducing this topic, it is important to make a preliminary distinction between Artificial General Intelligence (AGI) and narrow AI, because these two types of AI systems have very different capabilities and potential impacts. Making this distinction helps to clarify the public debate around AI and to avoid misjudgments about what AI is and is not capable of.

Artificial General Intelligence (AGI) is a type of AI that is able to perform any intellectual task that a human being can. It is sometimes referred to as “strong AI” or “human-level AI.” AGI would have the ability to learn and adapt to new situations, just like a human, and could potentially surpass human intelligence in a variety of domains.

Narrow AI, on the other hand, is a type of AI that is designed to perform a specific task or set of tasks. It is often referred to as “weak AI” or “specialized AI.” Narrow AI systems are not capable of adapting to new situations or learning new tasks. Instead, they are designed to perform one particular task, or a narrow range of tasks, very well.

One way to think about the difference between AGI and narrow AI is to consider a human being versus a calculator. A human being has the ability to perform a wide range of intellectual tasks, from math to language to problem-solving. A calculator, on the other hand, is a narrow AI system that is designed to perform one specific task – in this case, arithmetic calculations – very well. However, a calculator is not capable of adapting to new situations or learning new tasks, whereas a human being is.

In the public discourse around AI, there is often a tendency to conflate AGI with narrow AI, leading to misunderstandings and exaggerations about the capabilities of AI systems. This can lead to misjudgments about the potential risks and benefits of AI, as well as unrealistic expectations about what AI can and cannot do.

Making a clear distinction between AGI and narrow AI can also help to contextualize the real-world applications of AI and separate them from science-fiction scenarios. It allows us to have a more grounded and realistic conversation about the current and future potential of AI, and to better understand the limitations and opportunities of different types of AI systems.

Artificial General Intelligence (AGI) is a topic of scientific, philosophical, and cultural discussion, but it does not currently have any practical applications in our daily lives because it has not yet been invented, and many argue that it cannot be invented at all. On the other hand, narrow AI, which is designed to perform specific tasks or a narrow range of tasks, is becoming increasingly common in our society. In recent years, narrow AI has been used in a wide range of applications, including medical diagnosis, film recommendation algorithms, and text generation. These systems can be used both explicitly, when the user is aware of their use, and implicitly, when they are integrated into more complex applications such as voice assistants and holiday booking platforms. Narrow AI is now present in many aspects of our lives, both in obvious and subtle ways.

Recent advances in AI (i.e. narrow AI) have been made possible by developments in machine learning and deep learning, fields of computer science that aim to teach machines how to learn and perform tasks without being explicitly programmed to do so. Machine learning is an approach that involves building models, that is, mathematical representations of reality, by exposing the machine to data and letting it “learn” from it.
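
As a minimal sketch of what “learning from data” means (the data, the model choice, and every number below are invented for illustration), one can fit a very simple model, a straight line, to noisy observations and then use the learned parameters to make predictions:

```python
# A minimal sketch of "learning from data" instead of hard-coding a rule:
# fit a simple model (a straight line) to noisy observations and use it
# to predict new values. The data and numbers are purely illustrative.
import numpy as np

rng = np.random.default_rng(42)
x = np.linspace(0, 10, 50)
y = 2.0 * x + 1.0 + rng.normal(scale=1.0, size=x.size)   # noisy "reality"

# The "model" here is just two numbers (slope and intercept) estimated from data.
slope, intercept = np.polyfit(x, y, deg=1)
print(f"learned model: y = {slope:.2f} * x + {intercept:.2f}")
print("prediction for x = 12:", slope * 12 + intercept)
```

The machine is never told the rule “multiply by 2 and add 1”; it recovers an approximation of that rule from the examples it is exposed to.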

In particular, a model tries to make accurate predictions given a specific input. As an example, consider sentiment analysis in its simplest form: exposing the machine to a large number of movie reviews paired with their polarity (positive or negative) allows the algorithm to predict the most probable polarity of a previously unseen review. Given the right data and a clearly defined predictive task, most activities that require some form of analysis and decision making can be performed, with varying levels of quality, by AI. In this respect, machine learning emulates to some extent a very human principle, namely a learning process driven by experience.
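
To make this concrete, here is a minimal sketch of such a sentiment classifier, assuming scikit-learn as the library and a handful of invented reviews as training data; it illustrates the principle rather than a production system.

```python
# A toy sentiment-analysis model in the spirit of the example above.
# Library choice (scikit-learn) and the tiny hand-written "reviews"
# are assumptions made purely for illustration.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

reviews = [
    "a wonderful, moving film with great acting",
    "brilliant story, I loved every minute",
    "dull, predictable and far too long",
    "terrible script and wooden performances",
]
labels = ["positive", "positive", "negative", "negative"]

# Turn each review into a bag-of-words vector, then fit a classifier on it.
model = make_pipeline(CountVectorizer(), LogisticRegression())
model.fit(reviews, labels)

# Predict the most probable polarity of previously unseen reviews.
print(model.predict(["a moving story with brilliant acting"]))  # likely 'positive'
print(model.predict(["predictable and terrible"]))              # likely 'negative'
```

With more reviews and richer features, the same pipeline scales to realistic datasets; the underlying idea of learning the input-to-prediction mapping from labeled examples stays the same.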

It is important to point out that to perform well, and in many cases to outperform humans, machines do not have to imitate human intelligence but can follow completely different approaches [Floridi, 2014]. This is similar to mechanical engineering: to build an airplane it is not necessary to follow the example of nature and imitate birds (as Leonardo da Vinci attempted in the fifteenth century), but a deep knowledge of the laws of nature, in particular aerodynamics, is needed. This idea detaches the creation of AI-based applications from the need to achieve AGI: you do not need to recreate the complexity of the human mind to perform some human-like tasks.

Going back to AGI, Stephen Hawking famously said, “Intelligence is the ability to adapt to change.” Because current AI systems are not able to adapt to new tasks, they could be better described as smart rather than intelligent. Today’s AI systems can achieve impressive performance on specific tasks, from accurate image recognition to usable machine translation, but they still cannot easily adapt to a wide range of new tasks and environments. In other words, they are specialized experts in their domain, defined by the data they have been given to learn their task, and they still struggle with transfer of learning, something that comes naturally to humans and animals, who apply their knowledge to new contexts continuously throughout their lifetime. While this limitation still holds, it must be said that recent Large Language Models (for example GPT-3) show so-called emergent behaviors, i.e. unexpected and sometimes undesirable behaviors that are not explicitly programmed into the model but emerge as a result of the model’s training and processing of data.

Why do we need AI to solve complex problems? Automatic Speech Recognition (ASR) is a perfect example of such a problem. Imagine we are developing a speech recognition tool. The input is not text but a signal, an audio waveform to be precise. Each instance of sound is unique: even the same person repeating the same word does not produce a perfectly identical waveform. As a consequence, it is impossible to write clear rules following the classical “if/then” pattern (if the wave is “X”, then the word to transcribe is “Y”) and transcribe a spoken utterance by applying such rules. To solve this challenge, a more sophisticated way to map inputs to outputs is required, in which the algorithm learns features (key attributes) of the spoken language and matches a signal pattern to a given word. Using machine learning, it is possible to teach a system to recognize given signal-to-word matches by building a model of how they relate.
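
The following toy sketch illustrates this idea under strong simplifications: two invented “words” are simulated as noisy tones, so no two recordings are identical, and a classifier learns to map crude spectral features to word labels. The libraries (NumPy and scikit-learn) and every number are assumptions made for illustration; real ASR systems are vastly more complex.

```python
# Toy sketch of a signal-to-word mapping. Two synthetic "words" are noisy
# tones at different pitches, so every recording differs, yet a classifier
# learns to map spectral features to the right label. All values are illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
SAMPLE_RATE, DURATION = 8000, 0.2          # 0.2-second "utterances"
t = np.linspace(0, DURATION, int(SAMPLE_RATE * DURATION), endpoint=False)

def utter(freq):
    """One noisy 'recording' of a word: same pitch, never the same waveform."""
    return np.sin(2 * np.pi * freq * t) + 0.3 * rng.standard_normal(t.size)

def features(signal):
    """Crude spectral features: magnitude of the Fourier transform."""
    return np.abs(np.fft.rfft(signal))

words = {"yes": 300.0, "no": 600.0}        # each invented word gets its own pitch
X = [features(utter(f)) for w, f in words.items() for _ in range(20)]
y = [w for w in words for _ in range(20)]

clf = LogisticRegression(max_iter=1000).fit(X, y)

# A brand-new "recording" of 'yes' is still recognized, even though its
# waveform differs from every training example.
print(clf.predict([features(utter(300.0))]))   # expected: ['yes']
```

No “if/then” rule about the raw waveform appears anywhere: the mapping from signal to word is learned from examples, which is exactly what a rule-based approach cannot provide.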

Bibliography#

Luciano Floridi. The Fourth Revolution: How the Infosphere is Reshaping Human Reality. Oxford University Press, 2014.

Further reading#