AI Glossary
Essential AI terms explained in plain language. No jargon, just clear definitions for founders and business leaders.
AI Agent
Applications: An AI system that can autonomously perform tasks, make decisions, and take actions to achieve specified goals. Can use tools, access APIs, and operate with minimal human intervention.
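For readers who like to see the mechanics, here is a minimal, framework-agnostic sketch of an agent loop. `call_model` and `run_tool` are stubbed placeholders, not any specific product's API.

```python
# Illustrative agent loop. call_model() and run_tool() are stubs; a real
# agent would call an LLM API and real tools (search, database, calculator).

def call_model(history):
    # Stub: a real implementation sends `history` to an LLM and parses
    # the action the model chooses to take next.
    return {"action": "finish", "answer": "Done (stub answer)."}

def run_tool(name, tool_input):
    # Stub: a real implementation executes the named tool and returns its output.
    return f"result of {name}({tool_input})"

def run_agent(goal, max_steps=5):
    history = [f"Goal: {goal}"]
    for _ in range(max_steps):
        decision = call_model(history)              # the model picks the next action
        if decision["action"] == "finish":
            return decision["answer"]               # goal reached, stop acting
        result = run_tool(decision["action"], decision.get("input"))
        history.append(f"Tool result: {result}")    # feed the result back to the model
    return "Stopped after reaching the step limit."

print(run_agent("Find the cheapest flight to Lisbon"))
```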
API (in AI context)
Infrastructure: Application Programming Interface that allows developers to access AI models and services. OpenAI, Anthropic, and others provide APIs to integrate their models into applications.
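As an illustration, a call to a hosted model through the OpenAI Python SDK looks roughly like this; model names and SDK details change over time, so treat it as a sketch rather than copy-paste code.

```python
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[{"role": "user", "content": "Summarise our Q3 sales numbers in one paragraph."}],
)
print(response.choices[0].message.content)
```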
Artificial Intelligence (AI)
Fundamentals: The simulation of human intelligence by machines, enabling them to perform tasks that typically require human cognition such as learning, reasoning, problem-solving, and decision-making.
Attention Mechanism
Technical: A technique that allows models to focus on different parts of the input when producing outputs. Enables models to capture long-range dependencies in data.
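For the mathematically curious, the core operation (scaled dot-product attention) fits in a few lines of NumPy; this is a toy version of the idea, not production code.

```python
import numpy as np

def attention(Q, K, V):
    # Scores measure how relevant each position is to every other position.
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    # Softmax turns scores into weights that sum to 1 for each position.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    # The output is a weighted mix of the value vectors.
    return weights @ V

rng = np.random.default_rng(0)
Q = K = V = rng.random((3, 4))   # 3 positions, 4-dimensional vectors
print(attention(Q, K, V).shape)  # (3, 4)
```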
Batch Processing
Techniques: Processing multiple AI requests together rather than one at a time. Can improve efficiency and reduce costs for non-time-sensitive workloads.
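A sketch of the idea, with a stubbed `process_batch` standing in for whatever batched or asynchronous endpoint your provider offers:

```python
# Process 1,000 support tickets in 10 batched calls instead of 1,000 single calls.
# process_batch() is a stub; in practice it would hit a provider's batch API.

def process_batch(tickets):
    return [f"summary of: {t}" for t in tickets]  # stubbed results

tickets = [f"ticket #{i}" for i in range(1000)]
batch_size = 100

results = []
for start in range(0, len(tickets), batch_size):
    results.extend(process_batch(tickets[start:start + batch_size]))

print(len(results))  # 1000 results from 10 calls
```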
Computer Vision
Applications: AI field focused on enabling machines to interpret and understand visual information from the world, including images and videos.
Context Window
Technical: The maximum amount of text (measured in tokens) that a model can process at once. Larger context windows allow for more information to be considered when generating responses.
Deep Learning
Fundamentals: A subset of machine learning using neural networks with many layers (hence 'deep'). Particularly effective for complex tasks like image recognition and natural language processing.
Edge AI
Infrastructure: Running AI models locally on devices (phones, IoT devices) rather than in the cloud. Offers benefits in latency, privacy, and offline operation.
Embedding
Technical: A numerical representation of data (text, images, etc.) as vectors in a high-dimensional space. Similar items have similar embeddings, enabling semantic search and comparison.
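A toy example of why this is useful: similarity between vectors stands in for similarity in meaning. Real embeddings come from a model and have hundreds or thousands of dimensions; the three-dimensional vectors below are made up for illustration.

```python
import numpy as np

def cosine_similarity(a, b):
    # 1.0 means pointing the same way (similar meaning); values near 0 mean unrelated.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

invoice  = np.array([0.90, 0.10, 0.30])  # made-up embedding for "invoice"
bill     = np.array([0.85, 0.15, 0.35])  # made-up embedding for "bill"
vacation = np.array([0.10, 0.90, 0.70])  # made-up embedding for "vacation"

print(cosine_similarity(invoice, bill))      # high: similar meaning
print(cosine_similarity(invoice, vacation))  # lower: unrelated meaning
```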
Few-shot Learning
Techniques: Providing a model with a few examples of a task within the prompt to help it understand the desired output format and behavior.
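A few-shot prompt is just text with worked examples in it. The categories below are invented for illustration:

```python
# Two worked examples show the model the task and the answer format
# before it sees the real question.
prompt = """Classify the customer message as BILLING, TECHNICAL, or OTHER.

Message: "I was charged twice this month."
Category: BILLING

Message: "The app crashes when I upload a photo."
Category: TECHNICAL

Message: "Can I get a copy of last year's invoices?"
Category:"""

# Sending `prompt` to an LLM should produce "BILLING" as the completion.
print(prompt)
```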
Fine-tuning
Techniques: The process of taking a pre-trained model and further training it on a specific dataset to customize it for particular tasks or domains.
Generative AI
Applications: AI systems that can create new content, such as text, images, code, music, and video. Includes LLMs, image generators like DALL-E and Midjourney, and more.
Hallucination
Challenges: When an AI model generates information that is factually incorrect or made up, but presents it as truth. A key challenge in deploying LLMs for factual applications.
Inference
Technical: The process of using a trained AI model to make predictions or generate outputs on new data. Distinct from training, which is the process of creating the model.
Large Language Model (LLM)
Models: AI models trained on massive amounts of text data that can understand and generate human-like text. Examples include GPT-4, Claude, and Llama.
Latency
Technical: The time delay between sending a request to an AI model and receiving a response. A critical factor in user experience for real-time applications.
Machine Learning (ML)
Fundamentals: A subset of AI where systems learn and improve from experience without being explicitly programmed. ML algorithms use data to identify patterns and make predictions.
Model Drift
Challenges: When an AI model's performance degrades over time as real-world data changes from what the model was trained on. Requires monitoring and periodic retraining.
Multimodal AI
Models: AI systems that can process and generate multiple types of data, including text, images, audio, and video. GPT-4V and Gemini are examples of multimodal models.
Natural Language Processing (NLP)
Applications: The branch of AI focused on enabling computers to understand, interpret, and generate human language. Powers chatbots, translation, and text analysis.
Neural Network
Fundamentals: A computing system inspired by biological neural networks in the brain. Consists of interconnected nodes (neurons) that process information in layers.
Prompt Engineering
Techniques: The practice of crafting effective inputs (prompts) for AI models to achieve desired outputs. A critical skill for working with LLMs.
RAG (Retrieval Augmented Generation)
Techniques: A technique that combines LLMs with external knowledge retrieval. The model first retrieves relevant information from a knowledge base, then uses it to generate more accurate responses.
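A minimal sketch of the retrieval half, with a stubbed embedding function and an in-memory "knowledge base" of two documents; a real system would use an embedding model, a vector database, and an LLM call at the end.

```python
import numpy as np

documents = {
    "refund-policy.md":  np.array([0.9, 0.1]),  # made-up document embeddings
    "shipping-times.md": np.array([0.2, 0.8]),
}

def embed(text):
    # Stub: a real system would call an embedding model here.
    return np.array([0.85, 0.15])

def retrieve(question, top_k=1):
    q = embed(question)
    # Rank documents by similarity to the question and keep the best matches.
    ranked = sorted(documents, key=lambda name: float(np.dot(q, documents[name])), reverse=True)
    return ranked[:top_k]

question = "How do I get my money back?"
context = retrieve(question)               # -> ['refund-policy.md']
prompt = f"Answer using only this context: {context}\n\nQuestion: {question}"
print(prompt)                              # this prompt would then go to the LLM
```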
Reinforcement Learning from Human Feedback (RLHF)
Techniques: A training technique where models are refined based on human preferences and feedback. Used to make LLMs more helpful, harmless, and honest.
Temperature (in AI)
Technical: A parameter that controls randomness in model outputs. Higher temperature means more creative, varied responses; lower temperature means more focused, deterministic responses.
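Temperature is usually a single parameter on the API call. A sketch using the OpenAI Python SDK (the model name is illustrative, and an API key is required to run it):

```python
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

def ask(prompt, temperature):
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[{"role": "user", "content": prompt}],
        temperature=temperature,
    )
    return response.choices[0].message.content

prompt = "Suggest a name for a neighbourhood bakery."
print(ask(prompt, temperature=0.0))  # focused: much the same answer every run
print(ask(prompt, temperature=1.2))  # creative: answers vary more between runs
```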
Token
Technical: The basic unit of text that language models process. Can be a word, part of a word, or punctuation. Model pricing and limits are often measured in tokens.
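You can see tokenization directly with OpenAI's open-source tiktoken library (the encoding named below is one of several it ships):

```python
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # tokenizer used by several GPT models
text = "Tokenization splits text into pieces."
tokens = enc.encode(text)

print(len(tokens))                          # how many tokens you'd be billed for
print([enc.decode([t]) for t in tokens])    # the text fragment behind each token ID
```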
Transformer
Fundamentals: A neural network architecture that revolutionized NLP. Uses attention mechanisms to process sequential data. The foundation of modern LLMs such as GPT, as well as earlier models like BERT.
Vector Database
Infrastructure: A database optimized for storing and querying embeddings (vectors). Essential for semantic search, recommendations, and RAG systems. Examples: Pinecone, Weaviate.
Zero-shot Learning
Techniques: The ability of a model to perform tasks it wasn't explicitly trained for, using only natural language instructions. A key capability of modern LLMs.
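Contrast with the few-shot prompt earlier in this glossary: a zero-shot prompt gives the instruction only, with no worked examples.

```python
# The same classification task as the few-shot example, but with
# no examples at all: the instruction alone carries the task.
prompt = """Classify the customer message as BILLING, TECHNICAL, or OTHER.

Message: "I was charged twice this month."
Category:"""

print(prompt)  # a capable LLM should still answer "BILLING"
```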
Ready to put this knowledge to work?
Book a Discovery Call and let's discuss how AI can help your business.