AI Glossary

The ultimate 2026 dictionary for AI agents, prompt engineering, and the future of automation.

AI Agent

An autonomous software entity that can perceive its environment, reason about tasks, and take actions towards a specific goal without human intervention.

AutoGPT

An open-source autonomous AI agent framework that uses LLMs (like GPT-5.4) to achieve goals by browsing the web, accessing local files, and using external tools.

BabyAGI

A task-driven autonomous agent framework that focuses on planning, prioritizing, and executing a recursive loop of tasks to solve complex problems.

Chain of Thought (CoT)

A prompting technique where the AI is instructed to 'think step-by-step', significantly improving logical reasoning and accuracy.
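In practice this just means wrapping the question in a step-by-step instruction before sending it to the model. A minimal sketch (the `build_cot_prompt` helper here is illustrative, not part of any library):

```python
def build_cot_prompt(question: str) -> str:
    """Wrap a question in a step-by-step instruction (Chain of Thought)."""
    return (
        f"Question: {question}\n"
        "Let's think step by step, then state the final answer."
    )

# The resulting string is what you would send to your LLM of choice.
prompt = build_cot_prompt(
    "A book costs $12 and a pen costs $3. What do 2 books and 1 pen cost?"
)
print(prompt)
```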

Context Window

The maximum amount of information (tokens) an AI model can 'remember' at any one time during a conversation.
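Because the window is finite, long conversations must be trimmed to fit. One common strategy is dropping the oldest messages first, sketched below with a toy one-token-per-word counter (a real application would use the model's actual tokenizer):

```python
def fit_to_context(messages, max_tokens, count_tokens):
    """Drop the oldest messages until the history fits the context window."""
    kept = list(messages)
    while kept and sum(count_tokens(m) for m in kept) > max_tokens:
        kept.pop(0)  # discard the oldest message first
    return kept

# Toy token counter for illustration: one token per word.
toy_count = lambda m: len(m.split())

history = ["first old message here", "second message", "latest user question"]
trimmed = fit_to_context(history, max_tokens=6, count_tokens=toy_count)
# → ["second message", "latest user question"]
```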

Few-Shot Prompting

Providing 2-3 examples of the desired output within the prompt to guide the AI's response style and accuracy.
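A few-shot prompt is simply the examples and the new query laid out in a consistent pattern. A minimal sketch (the helper name and Input/Output format are illustrative choices):

```python
def build_few_shot_prompt(examples, query):
    """Prepend input/output examples so the model imitates the pattern."""
    blocks = [f"Input: {inp}\nOutput: {out}" for inp, out in examples]
    blocks.append(f"Input: {query}\nOutput:")  # model completes this line
    return "\n\n".join(blocks)

examples = [("happy", "positive"), ("awful", "negative")]
print(build_few_shot_prompt(examples, "delightful"))
```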

Hallucination

When an AI model generates factually incorrect or nonsensical information with high confidence.

HITL (Human-in-the-Loop)

A design pattern where an AI agent proposes an action, but a human must approve it before it is executed, ensuring safety and compliance.
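The pattern boils down to an approval gate between "propose" and "execute". A minimal sketch with stub callables standing in for the agent's action and the human reviewer (all names here are hypothetical):

```python
def execute_with_approval(action_description, run_action, approve):
    """Run the proposed action only if the human approver signs off."""
    if approve(action_description):
        return run_action()
    return "rejected"

# Toy usage: a stub approver that rejects anything involving deletion.
result = execute_with_approval(
    "delete all files",
    run_action=lambda: "done",
    approve=lambda desc: "delete" not in desc,
)
# → "rejected"
```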

Inference

The process of an AI model generating an output based on a given input (the 'thinking' phase after training).

LLM (Large Language Model)

A deep learning model trained on massive text datasets that can recognize, summarize, translate, and generate content.

Negative Prompt

Instructions telling the AI what **not** to do or include in the output (e.g., 'Do not use metaphors').

Parameter

A variable within an AI model, learned during training, that determines how it processes data. (Image tools like Midjourney also use 'parameters' for prompt flags that give manual artistic control; these are prompt settings, not the model's learned parameters.)

RAG (Retrieval-Augmented Generation)

A technique that allows an AI model to access external data sources (like PDFs or websites) in real-time to provide up-to-date and factually accurate answers.
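The core loop is: retrieve relevant documents, then stuff them into the prompt as context. A toy sketch with naive keyword retrieval (a real system would use embeddings and a vector store; all helper names here are illustrative):

```python
def retrieve(query, documents, k=1):
    """Naive retrieval: rank documents by words shared with the query."""
    q_words = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_rag_prompt(query, documents):
    """Assemble retrieved context plus the question into one prompt."""
    context = "\n".join(retrieve(query, documents))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer using only the context."

docs = ["The refund window is 30 days.", "Shipping takes 5 business days."]
print(build_rag_prompt("How long is the refund window?", docs))
```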

Recursive Prompting

A multi-step process where the output of one prompt is used as the input for the next to build complex results.
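The chaining itself can be sketched as a simple loop, with a stub standing in for the real model call (the stub just upper-cases its input so the flow is visible):

```python
def recursive_prompt(seed, step_templates, call_llm):
    """Feed each step's output back in as the next step's input."""
    text = seed
    for template in step_templates:
        text = call_llm(template.format(prev=text))
    return text

# Stub "model" for illustration: echoes its prompt in upper case.
stub = lambda prompt: prompt.upper()
out = recursive_prompt("idea", ["Expand: {prev}", "Summarize: {prev}"], stub)
# → "SUMMARIZE: EXPAND: IDEA"
```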

SLM (Small Language Model)

A compact AI model (usually under 20B parameters) optimized for speed, privacy, and local deployment.

Temperature

A parameter that controls the 'creativity' or randomness of an AI's response; 0.0 is near-deterministic, while higher values (around 1.0 and above) produce more varied, creative output.
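Under the hood, temperature divides the model's logits before the softmax: a low temperature sharpens the probability distribution toward the top choice, a high one flattens it. A minimal sketch of that mechanism:

```python
import math

def softmax_with_temperature(logits, temperature):
    """Scale logits by 1/T before softmax; lower T sharpens the distribution."""
    t = max(temperature, 1e-6)              # guard against division by zero at T = 0
    scaled = [x / t for x in logits]
    m = max(scaled)                         # subtract max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

cold = softmax_with_temperature([2.0, 1.0, 0.5], 0.1)   # near-deterministic
hot = softmax_with_temperature([2.0, 1.0, 0.5], 1.5)    # flatter, more random
```

With the cold setting the top token gets almost all the probability mass; with the hot setting the alternatives stay live.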

Token

The basic unit of text that an AI model processes (roughly 0.75 words per token).
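That rule of thumb gives a quick back-of-envelope token estimate from a word count (real tokenizers like those used by production models will give slightly different numbers):

```python
def estimate_tokens(text, words_per_token=0.75):
    """Rough token estimate from the ~0.75-words-per-token rule of thumb."""
    word_count = len(text.split())
    return round(word_count / words_per_token)

estimate_tokens("The quick brown fox jumps over the lazy dog")  # 9 words → 12
```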

Transformer

The core architecture behind modern LLMs that allows the model to process sequences of data in parallel, leading to massive performance gains.

Weights

The learned numerical values inside a model that determine how strongly inputs influence its outputs; attention weights specifically score the importance of different tokens in a sequence.

Zero-Shot Prompting

Asking an AI to perform a task without providing any examples, relying purely on its pre-trained knowledge.

Mastered the lingo?

Now it's time to put these concepts into action. Explore our verified blueprints for maximum efficiency.