THE GLOSSARY

The definitive 2026 dictionary for AI agents, prompt engineering, and the future of autonomous systems.

A

Agentic Workflow

A design pattern where an AI decomposes a complex goal into smaller sub-tasks, iterates recursively, and uses feedback loops to improve its own output.


AI Agent

An autonomous software entity that can perceive its environment, reason about tasks, and take actions towards a specific goal without constant human intervention.


AI Alignment

The sub-field of AI safety focused on ensuring that the goals and behaviors of an artificial intelligence system match human values and safety standards.


AutoGPT

An open-source autonomous AI agent framework that uses LLMs (like GPT-5.4) to achieve multi-step goals by browsing the web, accessing files, and using external tools.


Autonomous Agent

An AI system capable of planning its own tasks and executing them independently to reach a long-term objective without human prompting for every step.


B

BabyAGI

A classic task-driven autonomous agent framework that focuses on recursive task planning, prioritization, and execution loops.


Backpropagation

The fundamental algorithm used to train neural networks by calculating the 'error' of an output and sending it backward through the layers to adjust weights.

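The idea can be sketched in a few lines of pure Python for a single weight; the numbers below (input 2.0, target 6.0, learning rate 0.1) are illustrative, not from any real model.

```python
# Minimal backpropagation sketch: one weight, one training example.
# Forward pass: prediction = w * x; loss = (prediction - target)**2.
# Backward pass: d(loss)/dw = 2 * (prediction - target) * x,
# then the weight is nudged opposite to the gradient.

def train_step(w: float, x: float, target: float, lr: float = 0.1) -> float:
    pred = w * x           # forward pass
    error = pred - target  # how wrong the output is
    grad = 2 * error * x   # gradient of squared loss w.r.t. w
    return w - lr * grad   # gradient-descent weight update

w = 0.0
for _ in range(50):
    w = train_step(w, x=2.0, target=6.0)
# w converges toward 3.0, since 3.0 * 2.0 == 6.0
```

Real networks repeat this same error-backward, adjust-weights loop across millions or billions of weights at once.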

C

Chain of Thought (CoT)

A prompting technique where the AI is instructed to 'think step-by-step', which significantly improves logical reasoning and mathematical accuracy.


Context Window

The total volume of tokens an AI model can 'remember' at once during a single session. Exceeding this limit causes the model to 'forget' earlier parts of the conversation.


D

Deterministic

A model setting (usually Temperature 0) where the AI produces the exact same output for the same input every time, eliminating randomness.


Diffusion Model

The core architecture behind modern image generators like Midjourney and Stable Diffusion, which creates images by 'denoising' a field of random static.


E

Embeddings

Numerical vector representations of words, images, or other data in a multi-dimensional space, produced so that semantically similar items end up with similar vectors, allowing AI to measure meaning mathematically.

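A toy sketch of how similarity between embeddings is measured, using three-dimensional vectors (real models use hundreds or thousands of dimensions) and made-up values:

```python
import math

# Toy "embeddings"; the vectors and words are illustrative only.
EMBEDDINGS = {
    "cat": [0.9, 0.8, 0.1],
    "kitten": [0.85, 0.75, 0.2],
    "car": [0.1, 0.2, 0.95],
}

def cosine_similarity(a, b):
    """Angle-based similarity: 1.0 = same direction, near 0 = unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Semantically close words end up with close vectors:
cat_kitten = cosine_similarity(EMBEDDINGS["cat"], EMBEDDINGS["kitten"])
cat_car = cosine_similarity(EMBEDDINGS["cat"], EMBEDDINGS["car"])
```

Here `cat_kitten` comes out far higher than `cat_car`, which is exactly the property semantic search relies on.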

F

Few-Shot Prompting

The practice of providing 2-5 examples of a desired output within a prompt to 'prime' the AI's understanding of the task's rules and format.

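A minimal sketch of assembling a few-shot prompt; the sentiment task, example reviews, and labels are invented for illustration:

```python
# Two worked examples prime the model on the task's rules and format
# before it sees the real input.
EXAMPLES = [
    ("The delivery was fast and the staff were lovely.", "positive"),
    ("My order arrived broken and support ignored me.", "negative"),
]

def build_few_shot_prompt(new_input: str) -> str:
    lines = ["Classify the sentiment of each review as positive or negative.", ""]
    for text, label in EXAMPLES:
        lines.append(f"Review: {text}")
        lines.append(f"Sentiment: {label}")
        lines.append("")
    lines.append(f"Review: {new_input}")
    lines.append("Sentiment:")  # the model completes from here
    return "\n".join(lines)

prompt = build_few_shot_prompt("Great value, would buy again.")
```

Ending the prompt mid-pattern ("Sentiment:") nudges the model to continue in the same format as the examples.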

Fine-Tuning

Additional training of a pre-trained model on a smaller, specialized dataset to optimize it for a specific industry or writing style.


Function Calling

The ability for an LLM to generate structured JSON data that can be used to trigger real-world API calls, database queries, or local code execution.

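A sketch of the receiving side of function calling. The JSON string stands in for real model output, and `get_weather` is a hypothetical local stub, not a real API:

```python
import json

# Stand-in for the structured tool call an LLM would generate:
model_output = '{"name": "get_weather", "arguments": {"city": "Paris"}}'

def get_weather(city: str) -> str:
    # Stub: a real implementation would call an actual weather service.
    return f"Sunny in {city}"

TOOLS = {"get_weather": get_weather}

call = json.loads(model_output)                    # parse the JSON
result = TOOLS[call["name"]](**call["arguments"])  # dispatch to local code
# `result` would then be sent back to the model as the tool's response
```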

H

Hallucination

When an AI model generates factually incorrect, nonsensical, or made-up information while presenting it as truth with high confidence.


HITL (Human-in-the-Loop)

A design pattern where an AI agent proposes high-stakes actions, but a human must manually approve or edit them before execution.

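The pattern reduces to a gate between proposal and execution. In this sketch the `send_refund` action and the approval policy are hypothetical; in production the `approve` callback would be a person clicking a button in a review UI:

```python
# Human-in-the-loop gate: nothing executes without sign-off.
def run_with_approval(action: dict, approve) -> str:
    if approve(action):
        return f"EXECUTED: {action['name']}"
    return f"BLOCKED: {action['name']} (awaiting human sign-off)"

proposed = {"name": "send_refund", "amount": 5000}

# A policy stands in for the human reviewer here:
def reject_large(action):
    return action.get("amount", 0) < 100

status = run_with_approval(proposed, reject_large)
```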

I

Inference

The live process of an AI model generating a response to a prompt after its training phase is complete (the 'thinking' phase).


K

Knowledge Cutoff

The date when a model's training data ends. The model has no built-in knowledge of events after this point, unless it is given external tools such as web browsing or retrieval.


L

LLM (Large Language Model)

A deep learning model trained on massive text corpora that can recognize, summarize, translate, and generate human-like content.


LMM (Large Multimodal Model)

An advanced LLM (like GPT-4o) that can process multiple types of data—text, images, and audio—natively within the same conversational context.


LoRA (Low-Rank Adaptation)

A popular technique for fine-tuning massive AI models by training only small, low-rank 'adapter' matrices instead of every weight, cutting compute and memory costs enough to make customization accessible to individual developers.


N

Negative Prompt

Instructions that explicitly tell an AI what not to do or include (common in image generation to remove unwanted artifacts like 'extra fingers').


Nucleus Sampling (Top-P)

A randomness control that restricts sampling to the smallest set of most-likely tokens whose combined probability reaches a threshold (e.g., the top 90%), discarding the long tail of unlikely options.

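The filtering step can be sketched in pure Python; the toy probability distribution below is invented for illustration:

```python
# Top-p (nucleus) filtering: keep the smallest set of tokens whose
# cumulative probability reaches the threshold p, then renormalize.
def top_p_filter(probs: dict, p: float = 0.9) -> dict:
    kept, total = {}, 0.0
    for token, prob in sorted(probs.items(), key=lambda kv: -kv[1]):
        kept[token] = prob
        total += prob
        if total >= p:
            break
    return {t: pr / total for t, pr in kept.items()}  # renormalize

probs = {"the": 0.5, "a": 0.3, "cat": 0.15, "zebra": 0.05}
nucleus = top_p_filter(probs, p=0.9)
# "zebra" falls outside the 90% nucleus and can never be sampled
```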

P

Parameter

The internal 'knobs' or variables learned during training. Generally, the more parameters a model has, the greater its capacity for complex reasoning.


Prompt Engineering

The technical discipline of structuring, refining, and optimizing inputs to large language models to get the highest quality and most reliable outputs.


Prompt HUD

A specialized UI layer that allows users to interact with prompt variables (like {{Topic}}) without seeing the underlying code or system prompt.


Prompt Injection

A security vulnerability where a user's input tricks an AI into ignoring its safety instructions or leaking its internal system prompts.


Q

Quantization

Compressing an AI model by reducing the numerical precision of its weights (e.g., from 16-bit to 4-bit) so it can run efficiently on consumer-grade hardware like laptops or smartphones.

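A minimal sketch of symmetric 8-bit quantization (real schemes add per-channel scales, zero points, and 4-bit packing); the weight values are made up:

```python
# Map floats to integers in [-127, 127] with a single scale factor,
# then dequantize and measure how much precision was lost.
def quantize(values):
    scale = max(abs(v) for v in values) / 127
    q = [round(v / scale) for v in values]
    return q, scale

def dequantize(q, scale):
    return [i * scale for i in q]

weights = [0.42, -1.3, 0.07, 0.99]
q, scale = quantize(weights)
restored = dequantize(q, scale)

# Every restored weight is within half a quantization step of the original
max_err = max(abs(a - b) for a, b in zip(weights, restored))
```

Each weight now needs 8 bits instead of 16 or 32, at the cost of a small, bounded rounding error.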

R

RAG (Retrieval-Augmented Generation)

A system that retrieves relevant documents from a private database or document store at answer time and feeds them to the LLM, letting it ground and cite its responses for much higher factual accuracy.

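A toy end-to-end sketch: documents are scored by keyword overlap with the query (real systems use embeddings and a vector database), and the best match is injected into the prompt. The documents are invented:

```python
DOCS = [
    "Refunds are processed within 14 days of the return request.",
    "Our headquarters are located in Berlin, Germany.",
    "Support is available 24/7 via live chat.",
]

def retrieve(query: str) -> str:
    # Naive retrieval: pick the document sharing the most words with the query.
    q_words = set(query.lower().split())
    return max(DOCS, key=lambda d: len(q_words & set(d.lower().split())))

def build_prompt(query: str) -> str:
    context = retrieve(query)
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

prompt = build_prompt("How many days do refunds take?")
```

The model then answers from the retrieved text rather than from memory alone, which is what curbs hallucination.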

Reasoning Tokens

Hidden internal tokens used by specialized models (like OpenAI's o1 series) to 'think' through complex logic before generating a final answer.


Recursive Prompting

A process where an AI takes its own output and puts it back into the prompt to refine, expand, or check its work in a continuous loop.

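The loop itself is simple; in this sketch `fake_model` is a stub standing in for a real LLM call, and it just appends detail each round:

```python
# Recursive prompting: feed the model's own draft back in for refinement.
def fake_model(prompt: str) -> str:
    # Stand-in for an LLM call; echoes the draft with "improvement".
    return prompt.split("Draft:")[-1].strip() + " [more detail]"

def refine(draft: str, rounds: int = 3) -> str:
    for _ in range(rounds):
        draft = fake_model(f"Improve this text. Draft: {draft}")
    return draft

result = refine("AI agents plan tasks.")
```

Production versions replace the fixed round count with a quality check that decides when to stop looping.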

RLHF

Reinforcement Learning from Human Feedback; the process of using human rankings to 'reward' an AI for being more helpful, safe, and conversational.


S

SLM (Small Language Model)

Compact AI models (under 10B parameters) designed to be extremely fast, cheaper to run, and highly efficient for simple, repetitive tasks.


System Prompt

The core, 'God-mode' instructions that set the AI's identity, tone, safety boundaries, and primary mission before the user ever speaks.


T

Temperature

A slider for randomness. 0.0 makes the AI a predictable robot, while 1.0 (or higher) makes it a creative, unpredictable poet.

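Under the hood, temperature divides the model's logits before the softmax step; a sketch with made-up logits shows the effect:

```python
import math

# Low temperature sharpens the distribution (predictable);
# high temperature flattens it (creative).
def softmax_with_temperature(logits, temperature):
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]
cold = softmax_with_temperature(logits, 0.2)  # near-deterministic
hot = softmax_with_temperature(logits, 2.0)   # closer to uniform
# max(cold) > max(hot): low temperature concentrates probability
```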

Token

The fundamental unit of AI text. 1 token is roughly 4 characters or 0.75 words. Models are billed and constrained by 'Input' and 'Output' tokens.

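The rule of thumb above gives a quick cost estimate without a real tokenizer. Actual tokenizers (byte-pair encoding and similar) vary by model and language, so treat this as an approximation:

```python
# Back-of-the-envelope token estimate: 1 token is roughly 4 characters
# of English text.
def estimate_tokens(text: str) -> int:
    return max(1, round(len(text) / 4))

prompt = "Summarize the quarterly report in three bullet points."
tokens = estimate_tokens(prompt)
```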

Token Limit

The hard ceiling of a model's memory. Once reached, the AI must 'eject' the oldest information to make room for new inputs.


Transformer

The 2017 architecture that revolutionized AI by allowing models to process all words in a sentence at once, rather than one by one.


V

Vector Database

Specialized storage (like Pinecone or Chroma) for 'Embeddings' that allows AI agents to search through millions of documents in milliseconds.

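Conceptually, a vector database answers "which stored vector is closest to this query vector?". A brute-force sketch with invented document vectors (products like Pinecone or Chroma add indexing so this stays fast at millions of rows):

```python
import math

# Toy vector store: document IDs mapped to embedding vectors.
STORE = {
    "doc_refunds": [0.9, 0.1, 0.0],
    "doc_shipping": [0.2, 0.9, 0.1],
    "doc_careers": [0.0, 0.2, 0.9],
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def nearest(query):
    # Brute force: compare the query against every stored vector.
    return max(STORE, key=lambda doc_id: cosine(query, STORE[doc_id]))

match = nearest([0.85, 0.15, 0.05])  # query vector close to doc_refunds
```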

W

Weights

The numerical values that represent the strength of connections in a neural network; they are essentially the 'knowledge' the model has learned.


Z

Zero-Shot CoT

The simple but powerful technique of adding 'Let's think step by step' to an instruction to drastically increase an AI's accuracy instantly.


Zero-Shot Prompting

Asking an AI to perform a completely new task without giving it any examples, relying purely on its pre-existing training and logic.


Mastered the lingo?

Put these concepts into action with our verified expert agents.