AI Glossary

On this page you will find short explanations of the most important terms you need in order to understand discussions of artificial intelligence.

Machine Learning (ML) is a subset of artificial intelligence that involves the creation of algorithms that allow computers to learn from and make decisions or predictions based on data. In other words, the machine learns through training and doesn’t need explicit programming to make decisions. Instead, it uses statistical methods to improve its predictions or decisions over time. Examples of machine learning techniques include linear regression, decision trees, and k-nearest neighbors.
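
To make this concrete, here is a minimal sketch of one of the techniques mentioned above, linear regression, using the scikit-learn library. The data values are made up purely for illustration.

```python
# Minimal sketch: fitting a linear regression with scikit-learn.
# The numbers below are hypothetical toy data, not real measurements.
from sklearn.linear_model import LinearRegression

sizes = [[50], [80], [120], [160]]   # e.g. apartment sizes in square meters
prices = [150, 240, 360, 480]        # e.g. prices in thousands

model = LinearRegression()
model.fit(sizes, prices)             # the model "learns" from the data

print(model.predict([[100]]))        # predict the price for a 100 m2 apartment
```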

Deep Learning (DL) is a subset of machine learning that’s based on artificial neural networks with representation learning. It models high-level abstractions in data through the use of multiple layers. In essence, deep learning is a technique for implementing machine learning. It is called “deep” learning because it makes use of deep neural networks, where “deep” refers to the number of layers in the network. Examples of deep learning applications include image recognition, natural language processing, and voice recognition.
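
The sketch below shows what "multiple layers" looks like in practice: a small stack of layers built with PyTorch. The layer sizes are arbitrary and chosen only for illustration; a real application would also need training data and a training loop.

```python
# Minimal sketch of a "deep" neural network: several stacked layers.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(784, 256),   # input layer (e.g. a 28x28 image flattened to 784 values)
    nn.ReLU(),
    nn.Linear(256, 64),    # hidden layer -- the "depth" comes from stacking layers
    nn.ReLU(),
    nn.Linear(64, 10),     # output layer (e.g. 10 digit classes)
)

x = torch.randn(1, 784)    # random input standing in for a real image
print(model(x).shape)      # torch.Size([1, 10])
```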

Generative AI (GenAI) refers to a type of artificial intelligence that can create new content. It is often associated with generative adversarial networks (GANs), a type of deep learning method. Generative AI can generate images, music, speech, or text that is similar to something it has been trained on. For instance, given a set of paintings, a generative AI model could create a new painting that is similar in style to the ones it has seen.
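
As a rough illustration of the GAN idea, the untrained sketch below defines the two networks involved: a generator that turns random noise into a fake sample, and a discriminator that judges whether a sample looks real. Sizes are arbitrary, and a working GAN would additionally need real data and an adversarial training loop.

```python
# Minimal, untrained sketch of the two networks in a GAN.
import torch
import torch.nn as nn

generator = nn.Sequential(       # maps random noise to a fake "sample"
    nn.Linear(64, 128),
    nn.ReLU(),
    nn.Linear(128, 784),         # e.g. a flattened 28x28 image
    nn.Tanh(),
)

discriminator = nn.Sequential(   # judges whether a sample looks real or generated
    nn.Linear(784, 128),
    nn.ReLU(),
    nn.Linear(128, 1),
    nn.Sigmoid(),
)

noise = torch.randn(1, 64)
fake_sample = generator(noise)
print(discriminator(fake_sample))  # estimated probability that the sample is "real"
```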

In summary, deep learning is a subset of machine learning, and generative AI is a type of AI (often employing deep learning techniques) that is capable of creating new content.

A Large Language Model (LLM) is a type of AI model that is trained on large amounts of text and can generate natural language. LLMs use deep neural networks (see Deep Learning above) to learn from billions or trillions of words and can produce text on virtually any topic or domain. They can perform a wide variety of natural language tasks, including classification, summarization, translation, generation, and dialogue.
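
The sketch below shows how text generation with a pretrained language model can look in code, using the Hugging Face transformers library. It assumes the library is installed and the small, freely available "gpt2" checkpoint can be downloaded; ChatGPT itself is not accessible this way.

```python
# Minimal sketch: text generation with a small pretrained language model.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
result = generator("Machine learning is", max_new_tokens=20)
print(result[0]["generated_text"])
```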

The term “Large” in LLM refers to the massive scale of these models, which often involve millions or billions of parameters. “Language” signifies that these models are built on the building blocks of language (words, sentences, paragraphs), and “Model” refers to a high-dimensional mathematical representation learned from a large amount of written text.
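
To give a sense of what "parameters" means, the snippet below counts the learnable weights in a tiny PyTorch network like the one sketched under Deep Learning; a modern LLM has billions of such values.

```python
# Counting the learnable parameters of a (tiny) model.
import torch.nn as nn

model = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))
num_params = sum(p.numel() for p in model.parameters())
print(num_params)  # 203,530 -- many orders of magnitude smaller than an LLM
```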

A well-known example of an application powered by an LLM is ChatGPT.