EMPOWERING EDUCATORS AT
COPENHAGEN BUSINESS SCHOOL
On this page you will find short explanations of the most important terms you need in order to understand information about artificial intelligence.
Machine Learning (ML) is a subset of artificial intelligence that involves the creation of algorithms that allow computers to learn from and make decisions or predictions based on data. In other words, the machine learns through training and doesn’t need explicit programming to make decisions. Instead, it uses statistical methods to improve its predictions or decisions over time. Examples of machine learning techniques include linear regression, decision trees, and k-nearest neighbors.
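To make the idea of "learning from data instead of being explicitly programmed" concrete, here is a minimal sketch using the open-source scikit-learn library and one of the techniques mentioned above, k-nearest neighbors. The dataset and settings are illustrative assumptions only, not part of this page's material.

```python
# A minimal sketch: training a k-nearest neighbors classifier on a small
# example dataset. No rules are written by hand; the model learns patterns
# from the training examples and is then evaluated on unseen data.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)                 # measurements and their labels
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = KNeighborsClassifier(n_neighbors=3)       # the learning algorithm
model.fit(X_train, y_train)                       # "training" on the data
print(model.score(X_test, y_test))                # accuracy on data it has not seen
```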
Deep Learning (DL) is a subset of machine learning that’s based on artificial neural networks with representation learning. It models high-level abstractions in data through the use of multiple layers. In essence, deep learning is a technique for implementing machine learning. It is called “deep” learning because it makes use of deep neural networks, where “deep” refers to the number of layers in the network. Examples of deep learning applications include image recognition, natural language processing, and voice recognition.
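To show what "layers" means in practice, here is a minimal sketch, assuming the PyTorch library, of a deep neural network with several stacked layers. The layer sizes and the ten-class output are arbitrary, illustrative choices.

```python
# A minimal sketch of a "deep" network: several layers stacked on top of
# each other, where each layer builds a more abstract representation of
# the input than the one before it.
import torch.nn as nn

deep_network = nn.Sequential(
    nn.Linear(784, 256), nn.ReLU(),   # layer 1: raw input (e.g. image pixels)
    nn.Linear(256, 128), nn.ReLU(),   # layer 2: intermediate representation
    nn.Linear(128, 64),  nn.ReLU(),   # layer 3: higher-level abstraction
    nn.Linear(64, 10),                # output layer: e.g. 10 possible classes
)
print(deep_network)
```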
Generative AI (GenAI) refers to a type of artificial intelligence that can create new content. It is often associated with generative adversarial networks (GANs), a type of deep learning method. Generative AI can generate images, music, speech, or text that is similar to what it has been trained on. For instance, given a set of paintings, a generative AI model could create a new painting that is similar in style to the ones it has seen.
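As a rough illustration of the GAN idea mentioned above, the sketch below, assuming PyTorch and with purely illustrative sizes, defines a generator that turns random noise into a new sample and a discriminator that judges how realistic the sample looks. During training the two networks compete, which is what pushes the generator to produce convincing new content; the training loop itself is omitted here.

```python
# A minimal sketch of the two networks in a generative adversarial network (GAN).
import torch
import torch.nn as nn

generator = nn.Sequential(          # turns random noise into a fake sample
    nn.Linear(100, 256), nn.ReLU(),
    nn.Linear(256, 784), nn.Tanh(),
)

discriminator = nn.Sequential(      # judges whether a sample looks real or fake
    nn.Linear(784, 256), nn.ReLU(),
    nn.Linear(256, 1), nn.Sigmoid(),
)

noise = torch.randn(1, 100)         # a random starting point
fake_sample = generator(noise)      # the "new content" the generator creates
realism_score = discriminator(fake_sample)
print(realism_score)                # during training, the two networks compete
```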
In summary, deep learning is a subset of machine learning, and generative AI is a type of AI (often employing deep learning techniques) that is capable of creating new content.
Large Language Model (LLM) is a type of AI model that generates natural language text and is trained on large amounts of data. LLMs use deep neural networks (cf. Deep Learning above) to learn from billions or trillions of words and are capable of producing text on almost any topic or domain. They can perform a wide variety of natural language tasks, including classification, summarization, translation, generation, and dialogue.
The term “Large” in LLM refers to the massive scale of these models, which often involve millions or billions of parameters. The “Language” component signifies that these models are fundamentally based on the building blocks of language (words, sentences, paragraphs), and “Models” are high-dimensional mathematical representations of a large amount of written information.
A well-known example of an application powered by an LLM is ChatGPT.
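To show what "generating text" looks like in code, here is a minimal sketch assuming the open-source Hugging Face transformers library, with the small GPT-2 model standing in for a much larger LLM. The prompt and settings are illustrative assumptions.

```python
# A minimal sketch: asking a (small) language model to continue a prompt.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")    # load a language model
result = generator("Artificial intelligence in education is",
                   max_new_tokens=30)                     # continue the prompt
print(result[0]["generated_text"])                        # the generated text
```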
Multimodal model is an AI model that can understand, process and generate different kinds of data, such as video, audio, speech, images and text, at the same time. This allows the model to build a more comprehensive understanding and make more accurate predictions. An example of a multimodal model is Copilot, formerly known as Bing Chat Enterprise (which is built on GPT-4), since it can take input as text, speech and images, and its output can be text or images.
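One simple multimodal combination, image in and text out, can be sketched as follows, assuming the Hugging Face transformers library and the BLIP image-captioning model. The image file name is a hypothetical placeholder.

```python
# A minimal sketch of a multimodal task: the model receives an image and
# produces a text description of it.
from transformers import pipeline

captioner = pipeline("image-to-text", model="Salesforce/blip-image-captioning-base")
caption = captioner("lecture_slide.png")        # image input (placeholder file name)
print(caption[0]["generated_text"])             # text output describing the image
```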
Text-to-video and text-to-image refer to AI models that can transform a text input into a video or an image. This means you can describe what you want in text, for example a script or a description of a scene, and the model will return a video or an image that matches the description.
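Here is a minimal sketch of text-to-image generation, assuming the open-source diffusers library and a Stable Diffusion model; the exact model identifier, prompt and file name are illustrative assumptions and may differ in practice.

```python
# A minimal sketch: turning a text description into an image.
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1")
image = pipe("a watercolor painting of the Copenhagen harbour").images[0]
image.save("harbour.png")                       # the generated image
```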