The definitive 2026 dictionary for AI agents, prompt engineering, and the future of autonomous systems.

Agentic Workflow
A design pattern where an AI decomposes a complex goal into smaller sub-tasks, iterates recursively, and uses feedback loops to improve its own output.
AI Agent
An autonomous software entity that can perceive its environment, reason about tasks, and take actions towards a specific goal without constant human intervention.
AI Alignment
The sub-field of AI safety that ensures the goals and behaviors of an artificial intelligence system match human values and safety standards.
AutoGPT
An open-source autonomous AI agent framework that uses LLMs (like GPT-5.4) to achieve multi-step goals by browsing the web, accessing files, and using external tools.
Autonomous AI
An AI system capable of planning its own tasks and executing them independently to reach a long-term objective without human prompting for every step.
BabyAGI
A classic task-driven autonomous agent framework that focuses on recursive task planning, prioritization, and execution loops.
Backpropagation
The fundamental algorithm used to train neural networks by calculating the 'error' of an output and sending it backward through the layers to adjust weights.
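The weight-adjustment loop can be sketched for a single linear neuron with a squared-error loss; the values and learning rate below are invented for illustration.

```python
# Toy sketch of backpropagation on one linear neuron: y = w * x,
# with squared-error loss. Input, target, and learning rate are illustrative.

def backprop_step(w, x, target, lr=0.1):
    y = w * x                # forward pass
    error = y - target       # the 'error' of the output
    grad_w = error * x       # gradient of 0.5 * error**2 with respect to w
    return w - lr * grad_w   # adjust the weight against the gradient

w = 0.0
for _ in range(50):
    w = backprop_step(w, x=2.0, target=4.0)
# w converges toward 2.0, since 2.0 * 2.0 hits the target of 4.0
```

Repeating the forward pass, error calculation, and weight update is the whole training loop in miniature; real networks do the same thing across millions of weights at once.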
Chain-of-Thought (CoT) Prompting
A prompting technique where the AI is instructed to 'think step-by-step', which significantly improves logical reasoning and mathematical accuracy.
Context Window
The total volume of tokens an AI model can 'remember' at once during a single session. Exceeding this limit causes the model to 'forget' earlier parts of the conversation.
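A minimal sketch of how an application might cope with this limit, keeping only the newest messages that fit a token budget; the token counts are illustrative stand-ins for a real tokenizer.

```python
# Sketch: a chat application keeping its history inside a fixed token
# budget by 'forgetting' the oldest messages first.

def fit_context(messages, max_tokens):
    """messages: oldest-first list of (text, token_count) pairs."""
    kept, used = [], 0
    for text, n in reversed(messages):      # walk newest-first
        if used + n > max_tokens:
            break                           # budget exceeded: drop older turns
        kept.append((text, n))
        used += n
    return list(reversed(kept))             # restore oldest-first order

history = [("greeting", 5), ("long backstory", 900), ("latest question", 20)]
trimmed = fit_context(history, max_tokens=100)
# only ("latest question", 20) survives; the earlier turns are forgotten
```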
Deterministic Output
A model setting (usually Temperature 0) where the AI produces the exact same output for the same input every time, eliminating randomness.
Diffusion Model
The core architecture behind modern image generators like Midjourney and Stable Diffusion, which creates images by 'denoising' a field of random static.
Embedding
The process of converting words or images into numerical vectors in a multi-dimensional space, allowing AI to understand semantic similarity.
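Semantic similarity between embeddings is usually measured with cosine similarity. The 3-dimensional vectors below are invented for illustration; real embeddings have hundreds or thousands of dimensions.

```python
import math

# Sketch: cosine similarity between embedding vectors. Closely related
# concepts point in nearly the same direction in the vector space.

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

cat    = [0.9, 0.1, 0.0]
kitten = [0.85, 0.15, 0.05]
car    = [0.0, 0.2, 0.9]
# 'cat' sits far closer to 'kitten' than to 'car' in this space
```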
Few-Shot Prompting
The practice of providing 2-5 examples of a desired output within a prompt to 'prime' the AI's understanding of the task's rules and format.
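Assembling such a prompt is simple string construction; the sentiment task and the Input/Output format below are illustrative choices, not a required convention.

```python
# Sketch: building a few-shot prompt from labeled examples so the model
# infers the task's rules and answer format from the pattern.

def few_shot_prompt(examples, query):
    shots = "\n".join(f"Input: {i}\nOutput: {o}" for i, o in examples)
    return f"{shots}\nInput: {query}\nOutput:"

prompt = few_shot_prompt(
    [("This movie was wonderful", "positive"),
     ("A dreadful waste of time", "negative")],
    "I could not stop smiling",
)
# the two examples 'prime' the model to reply with a sentiment label
```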
Fine-Tuning
Additional training of a pre-trained model on a smaller, specialized dataset to optimize it for a specific industry or writing style.
Function Calling
The ability for an LLM to generate structured JSON data that can be used to trigger real-world API calls, database queries, or local code execution.
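The application side of this pattern can be sketched as parse-and-dispatch; the tool name and its JSON schema here are hypothetical examples, not any vendor's API.

```python
import json

# Sketch: the model emits structured JSON naming a tool and its
# arguments, and local code dispatches the call.

def get_weather(city):
    return f"Sunny in {city}"          # stand-in for a real API request

TOOLS = {"get_weather": get_weather}

model_output = '{"name": "get_weather", "arguments": {"city": "Oslo"}}'

call = json.loads(model_output)        # parse the structured JSON
result = TOOLS[call["name"]](**call["arguments"])
# result == "Sunny in Oslo"
```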
Hallucination
When an AI model generates factually incorrect, nonsensical, or made-up information while presenting it as truth with high confidence.
Human-in-the-Loop (HITL)
A design pattern where an AI agent proposes high-stakes actions, but a human must manually approve or edit them before execution.
Inference
The live process of an AI model generating a response to a prompt after its training phase is complete (the 'thinking' phase).
Knowledge Cutoff
The date when a model's training data ends; the model has no built-in knowledge of events occurring after this specific point in time.
Large Language Model (LLM)
A deep learning model trained on massive text corpora that can recognize, summarize, translate, and generate human-like content.
Multimodal AI
An advanced LLM (like GPT-4o) that can process multiple types of data—text, images, and audio—natively within the same conversational context.
LoRA (Low-Rank Adaptation)
A popular technique for fine-tuning massive AI models using very little compute power, making customization accessible to individual developers.
Negative Prompt
Instructions that explicitly tell an AI what **not** to do or include (common in image generation to remove unwanted artifacts like 'extra fingers').
Nucleus Sampling (Top-p)
A randomness control that tells the AI to consider only the smallest set of most likely tokens whose cumulative probability reaches a set threshold (e.g., the top 90%).
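The filtering step can be sketched directly; the token probabilities below are invented for illustration.

```python
# Sketch of nucleus (top-p) filtering: keep the smallest set of
# most-likely tokens whose cumulative probability reaches p, then
# renormalize so the kept probabilities sum to 1.

def top_p_filter(probs, p=0.9):
    ranked = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)
    kept, total = {}, 0.0
    for token, prob in ranked:
        kept[token] = prob
        total += prob
        if total >= p:
            break
    return {t: pr / total for t, pr in kept.items()}

probs = {"the": 0.5, "a": 0.3, "dog": 0.15, "xylophone": 0.05}
nucleus = top_p_filter(probs, p=0.9)
# 'xylophone' falls outside the top 90% and can never be sampled
```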
Parameters
The internal 'knobs' or variables learned during training. Generally, the more parameters a model has, the more complex its reasoning capabilities.
Prompt Engineering
The technical discipline of structuring, refining, and optimizing inputs to large language models to get the highest quality and most reliable outputs.
Prompt Interface
A specialized UI layer that allows users to interact with prompt variables (like {{Topic}}) without seeing the underlying code or system prompt.
Prompt Injection
A security vulnerability where a user's input tricks an AI into ignoring its safety instructions or leaking its internal system prompts.
Quantization
Compressing an AI model (e.g., from 16-bit to 4-bit) so it can run efficiently on consumer-grade hardware like laptops or smartphones.
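The core idea can be sketched as simple linear quantization: map float weights onto a small integer grid and reconstruct them with bounded error. The weight values are invented for illustration.

```python
# Sketch of linear quantization: floats mapped onto 4-bit integers
# (16 levels) and reconstructed. The reconstruction is lossy but close.

def quantize(weights, bits=4):
    levels = 2 ** bits - 1                           # 15 steps for 4 bits
    lo, hi = min(weights), max(weights)
    scale = (hi - lo) / levels
    q = [round((w - lo) / scale) for w in weights]   # integers in 0..15
    restored = [lo + qi * scale for qi in q]         # lossy reconstruction
    return q, restored

weights = [-0.51, -0.2, 0.0, 0.33, 0.49]
q, restored = quantize(weights)
# every reconstructed weight lands within half a quantization step
```

Storing 4-bit integers instead of 16-bit floats is what shrinks the model roughly fourfold; real schemes add per-block scales and smarter rounding, but the trade-off is the same.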
Retrieval-Augmented Generation (RAG)
A system that connects an LLM to an external knowledge base, retrieving relevant documents at query time so answers can cite real sources with much higher factual accuracy.
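The retrieval step can be sketched with word overlap as a crude stand-in for embedding search; the documents and question are invented.

```python
# Toy sketch of the retrieval step in RAG: pick the document most
# relevant to the question and prepend it to the prompt as context.

def words(text):
    return {w.strip(".,?!").lower() for w in text.split()}

def retrieve(question, documents):
    q = words(question)
    return max(documents, key=lambda d: len(q & words(d)))

docs = [
    "Customers have 30 days to return an item for a full refund.",
    "Shipping to Norway takes 5 to 7 business days.",
]
question = "How many days do I have to return an item?"
context = retrieve(question, docs)
prompt = f"Context: {context}\n\nQuestion: {question}"
# the refund document is retrieved and grounds the model's answer
```

Production systems replace the overlap score with embedding similarity over a vector database, but the augment-then-generate flow is identical.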
Reasoning Tokens
Hidden internal tokens used by specialized models (like OpenAI's o1 series) to 'think' through complex logic before generating a final answer.
Recursive Prompting
A process where an AI takes its own output and puts it back into the prompt to refine, expand, or check its work in a continuous loop.
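The looping structure can be sketched with a stub in place of the model; `call_model` below is hypothetical and merely tags the draft so the feed-output-back-in pattern is visible.

```python
# Sketch of a recursive prompting loop. call_model is a hypothetical
# stand-in for a real LLM call; a real model would actually rewrite
# the draft instead of tagging it.

def call_model(prompt):
    draft = prompt.split("Draft:")[-1].strip()
    return draft + " [refined]"

def refine(draft, rounds=3):
    for _ in range(rounds):
        # the model's own output goes back into the next prompt
        draft = call_model(f"Improve this text.\nDraft: {draft}")
    return draft

result = refine("Solar power is good.")
```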
RLHF (Reinforcement Learning from Human Feedback)
The process of using human rankings to 'reward' an AI for being more helpful, safe, and conversational.
Small Language Model (SLM)
Compact AI models (typically under 10B parameters) designed to be extremely fast, cheap to run, and highly efficient for simple, repetitive tasks.
System Prompt
The core, 'God-mode' instructions that set the AI's identity, tone, safety boundaries, and primary mission before the user ever speaks.
Temperature
A slider for randomness. 0.0 makes the AI a predictable robot, while 1.0 (or higher) makes it a creative, unpredictable poet.
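Under the hood, temperature rescales the model's logits before they become probabilities. The logits below are illustrative; note that temperature 0 is special-cased in real systems (pure argmax), since dividing by zero is undefined.

```python
import math

# Sketch: how temperature reshapes next-token probabilities.
# Low temperature sharpens the distribution; high temperature flattens it.

def apply_temperature(logits, temperature):
    scaled = [l / temperature for l in logits]
    m = max(scaled)                          # subtract max for stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.1]
cold = apply_temperature(logits, 0.2)   # top token dominates: predictable
hot = apply_temperature(logits, 2.0)    # flatter spread: surprising picks
```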
Token
The fundamental unit of AI text. 1 token is roughly 4 characters or 0.75 words. Models are billed and constrained by 'Input' and 'Output' tokens.
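The 4-characters rule of thumb gives a quick budgeting estimate; only a real tokenizer gives exact counts.

```python
# Sketch of the rule of thumb above: roughly 4 characters per token.
# Useful for estimating costs and context usage, not for exact billing.

def estimate_tokens(text):
    return max(1, round(len(text) / 4))

text = "Large language models bill by the token."
estimate = estimate_tokens(text)   # the string is 40 characters: ~10 tokens
```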
Token Limit
The hard ceiling of a model's memory. Once reached, the AI must 'eject' the oldest information to make room for new inputs.
Transformer
The 2017 architecture that revolutionized AI by allowing models to process all words in a sentence at once, rather than one by one.
Vector Database
Specialized storage (like Pinecone or Chroma) for 'Embeddings' that allows AI agents to search through millions of documents in milliseconds.
Weights
The numerical values that represent the strength of connections in a neural network; they are essentially the 'knowledge' the model has learned.
Zero-Shot Chain-of-Thought
The simple but powerful technique of adding 'Let's think step by step' to an instruction to drastically increase an AI's accuracy instantly.
Zero-Shot Prompting
Asking an AI to perform a completely new task without giving it any examples, relying purely on its pre-existing training and logic.