Machine learning is at the core of artificial intelligence. It forms the basis for the development of intelligent systems that can analyze and understand large amounts of data in order to make informed decisions and predictions.
Machine learning (ML) is an area of artificial intelligence that focuses on using data and algorithms to allow a machine to perform a specific task without explicitly giving it instructions on how to do it. While a traditional program generally contains a set of rules handcrafted by the programmer, an ML system learns to achieve the same goal by looking at a great quantity of data and predicting the results of the task on the basis of a model of reality created during training. This is somewhat similar to how the human brain gains knowledge and understanding, i.e. by exposure to facts and information through experience or observation. Typical examples of ML in the area of natural language processing are email spam filters, auto-correct tools, machine translation and speech recognition.
ML relies on large quantities of training data (in the case of speech recognition, audio files and their transcriptions) to learn the relation between a stimulus (audio) and the desired result (transcription). The fact that this relation between input and output is established without the need for explicit rules is key to ML: not only are rules very complex and time-consuming to define, but many tasks are simply too complex to be encapsulated in a set of rules. Think about how we humans process language: while we know that there are rules of grammar, syntax, etc., we do not learn language by applying those rules on a minute-by-minute basis. As children, we learn to speak by being exposed to language use and by interacting with the people and the world around us through language.
Leaving aside, at least for the moment, some very important differences between the way humans and machines learn languages, it is important to understand that neural networks can make sense of unstructured data and draw general conclusions from it without being given explicit rules. For instance, using so-called sentence embeddings they are able to recognize that two different input sentences have a similar meaning:
Can you tell me how to go to the train station?
How do I reach the station?

This ability to generalize is key to many modern NLP applications, such as text rephrasing, terminology extraction, machine translation and many others.
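In practice, the similarity between two embeddings is usually measured with cosine similarity: vectors pointing in nearly the same direction score close to 1. The following sketch uses tiny made-up 4-dimensional vectors (real embedding models produce hundreds of dimensions); the numbers are purely illustrative assumptions:

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: 1.0 means same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy "embeddings" (illustrative values, not from a real model):
emb_question_1 = [0.9, 0.8, 0.1, 0.0]   # "Can you tell me how to go to the train station?"
emb_question_2 = [0.8, 0.9, 0.2, 0.1]   # "How do I reach the station?"
emb_unrelated  = [0.0, 0.1, 0.9, 0.8]   # an unrelated sentence

print(cosine_similarity(emb_question_1, emb_question_2))  # close to 1 (paraphrases)
print(cosine_similarity(emb_question_1, emb_unrelated))   # much lower
```

A real system would obtain the vectors from a trained embedding model and apply the same comparison.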
A very popular type of machine learning process, called deep learning, uses neural networks in a way that is inspired by the human brain. With their layered structure of interconnected nodes (neurons), neural networks create an adaptive system that computers use to learn from their mistakes and improve continuously. To understand the learning process of a neural network and its similarity to the human brain, let’s consider Ivan Pavlov’s classic experiments, in which he found that dogs could learn to salivate at the sound of a bell. In 1949, the psychologist Donald Hebb applied what is known as Pavlov’s “associative learning rule” to explain how brain cells might acquire knowledge. Hebb advanced the hypothesis that when two neurons fire together, i.e. send off impulses simultaneously, the connections between them (the synapses) grow stronger. This is the moment when learning has taken place. In Pavlov’s experiment, it would mean that the brain now knows that the sound of a bell is immediately followed by the presence of food. This is what neural networks try to reproduce: knowledge gets encoded in the so-called weights, i.e. the parameters within a neural network that transform input data into the output.
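Hebb's idea ("neurons that fire together wire together") can be sketched in a few lines: a connection weight grows a little every time the two units are active at the same time. The learning rate and activity patterns below are illustrative assumptions, not a model of real neurons:

```python
# Minimal sketch of Hebb's rule: the weight between two units grows
# whenever both are active simultaneously.

learning_rate = 0.1
weight = 0.0  # initial synaptic strength between a "bell" unit and a "food" unit

# Each pair is (bell_active, food_active); 1 = firing, 0 = silent.
trials = [(1, 1), (1, 1), (0, 1), (1, 1), (1, 0)]

for bell, food in trials:
    weight += learning_rate * bell * food  # strengthen only on co-activation

print(round(weight, 2))  # 0.3: three co-activations, each adding 0.1
```

After enough co-activations the weight is large, i.e. the association between bell and food has been "learned".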
A good way to break down the learning process of an ML algorithm is offered by UC Berkeley, which divides it into three main parts:
A Decision Process: In general, machine learning algorithms are used to make a prediction or classification. The machine translation process, for example, can be seen as the task of predicting a sentence in the target language (the output) given a sentence in the source language (the input).
An Error Function: An error function evaluates the prediction of the model. If known examples are available, the error function can compare them with the model’s predictions to assess the model’s accuracy.
A Model Optimization Process: To improve the quality of the prediction, the parameters of the model are adjusted to reduce the discrepancy between the known example and the model’s estimate. This cycle of evaluation and optimization is repeated until a threshold of accuracy has been met.
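The three parts above can be sketched as a tiny learning loop. This toy example fits a single weight w in the model y = w * x by gradient descent; the data, learning rate and number of steps are illustrative assumptions:

```python
# Known (input, output) examples; the underlying relation is y = 2 * x.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]
w = 0.0              # the model's single parameter (weight), initially a guess
learning_rate = 0.05

for step in range(200):
    for x, y_true in data:
        y_pred = w * x                  # 1. decision process: make a prediction
        error = y_pred - y_true         # 2. error function: compare with the known example
        w -= learning_rate * error * x  # 3. optimization: nudge w to shrink the error

print(round(w, 3))  # converges to roughly 2.0
```

A real neural network repeats exactly this cycle, only with millions of weights instead of one.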
Machine learning approaches are commonly grouped into three main categories (supervised, unsupervised and reinforcement learning), with semi-supervised learning sitting between the first two:
## Supervised machine learning
Supervised machine learning uses labeled datasets to train algorithms that classify data or predict outcomes. Labeled data is a designation for datasets that have been tagged with one or more labels identifying certain properties, classifications, or contained objects. Machine translation is a typical example of supervised learning. Training an MT model typically requires labeled data in the form of texts and their translations aligned at sentence level. As this bitext is fed into the model, it adjusts its parameters (weights) until it reaches the desired translation quality. Neural networks are typical supervised learning algorithms.
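As a minimal sketch of the supervised pattern (labeled data in, adjusted weights out), here is a tiny perceptron trained on hand-made examples. The features and labels (1 = spam, 0 = not spam) are toy assumptions, not a real spam filter:

```python
# Each example: ([contains "free", contains "meeting"], label)
training_data = [
    ([1, 0], 1),  # word "free" present    -> labeled spam
    ([1, 0], 1),
    ([0, 1], 0),  # word "meeting" present -> labeled not spam
    ([0, 1], 0),
]

weights = [0.0, 0.0]
bias = 0.0

def predict(features):
    score = bias + sum(w * f for w, f in zip(weights, features))
    return 1 if score > 0 else 0

for _ in range(10):                              # a few passes over the labeled data
    for features, label in training_data:
        error = label - predict(features)        # compare prediction with the known label
        for i, f in enumerate(features):
            weights[i] += error * f              # adjust weights toward the label
        bias += error

print(predict([1, 0]), predict([0, 1]))  # both patterns now classified correctly
```

The crucial point is that every training example carries a label, and the weight updates are driven by the difference between prediction and label.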
## Unsupervised machine learning
Unsupervised machine learning uses algorithms to analyze unlabeled datasets and discover hidden patterns without the need for human intervention. The model works on its own to uncover patterns and information that were previously undetected: it is forced to build a compact internal representation of the data and, so to speak, to self-organize its knowledge about the data.
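A classic unsupervised example is clustering. The sketch below runs one-dimensional k-means on unlabeled numbers: no labels are ever provided, yet the algorithm discovers the two hidden groups on its own. The data points and initial centroids are illustrative assumptions:

```python
data = [1.0, 1.2, 0.8, 8.0, 8.3, 7.9]   # unlabeled data with two hidden groups
centroids = [0.0, 10.0]                 # initial guesses for the group centers

for _ in range(10):
    clusters = {0: [], 1: []}
    for x in data:
        # assign each point to the nearest centroid
        nearest = min((0, 1), key=lambda i: abs(x - centroids[i]))
        clusters[nearest].append(x)
    # move each centroid to the mean of its assigned points
    for i, points in clusters.items():
        if points:
            centroids[i] = sum(points) / len(points)

print([round(c, 2) for c in centroids])  # roughly [1.0, 8.07]
```

The resulting centroids are the model's self-organized summary of the structure it found in the data.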
## Semi-supervised learning

Semi-supervised learning is an approach that lies between supervised and unsupervised learning. During training, it uses a smaller labeled dataset to guide the learning process on a larger, unlabeled dataset. Semi-supervised learning is useful when labeled data is scarce.
## Reinforcement learning

Reinforcement learning is a method in which an agent interacts with its environment by producing actions and discovering errors or rewards. Its most relevant characteristics are trial-and-error search and delayed reward. This method allows machines and software agents to automatically determine the ideal behavior within a specific context in order to maximize their performance. Simple reward feedback, known as the reinforcement signal, is required for the agent to learn which action is best.
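Trial-and-error learning from a reward signal can be sketched with a simple epsilon-greedy agent choosing between two actions. The hidden reward values, noise level and exploration rate below are illustrative assumptions:

```python
import random

random.seed(0)
true_rewards = [0.2, 0.8]          # hidden average reward of each action (unknown to the agent)
estimates = [0.0, 0.0]             # the agent's current value estimates
counts = [0, 0]
epsilon = 0.1                      # probability of exploring a random action

for _ in range(1000):
    if random.random() < epsilon:
        action = random.randrange(2)              # explore: try a random action
    else:
        action = estimates.index(max(estimates))  # exploit: pick the best-known action
    reward = true_rewards[action] + random.gauss(0, 0.1)  # noisy reinforcement signal
    counts[action] += 1
    # incremental average: move the estimate toward the observed reward
    estimates[action] += (reward - estimates[action]) / counts[action]

print(estimates.index(max(estimates)))  # the agent settles on the higher-reward action
```

No one ever tells the agent which action is correct; the reward feedback alone steers its behavior, which is exactly what distinguishes reinforcement learning from supervised learning.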