This cheat sheet provides quick definitions of common AI terms, organized by category for easy reference. Perfect for beginners, students, and professionals looking to refresh their knowledge.
| Category | Term | Definition |
|---|---|---|
| ⚙️ Core Concepts | Artificial Intelligence (AI) | Broad field of making machines perform tasks that normally require human intelligence. |
| | Machine Learning (ML) | Subset of AI where systems learn from data. |
| | Deep Learning (DL) | Subset of ML using multi-layered neural networks. |
| | Neural Network | Computational model inspired by the human brain, made of interconnected “neurons.” |
| | Generative AI | AI that creates new content (text, images, code, audio). |
| 📚 Learning Paradigms | Supervised Learning | Training on labeled data (input + known output). |
| | Unsupervised Learning | Training on unlabeled data, finding patterns or clusters. |
| | Reinforcement Learning (RL) | Model learns by interacting with an environment and receiving rewards/penalties. |
| | Zero-Shot Learning | Model solves tasks without seeing any task-specific examples. |
| | One-Shot Learning | Model solves tasks after seeing a single example. |
| | Few-Shot Learning | Model solves tasks after seeing a handful of examples. |
| | Transfer Learning | Reusing a pre-trained model for a related task. |
| 💬 NLP (Natural Language Processing) | Token | Smallest unit of text an AI model processes (roughly a word or word fragment). |
| | Embedding | Numeric vector representation of words/sentences that captures meaning, enabling similarity comparisons. |
| | Large Language Model (LLM) | AI model trained on massive text corpora (e.g., GPT, LLaMA). |
| | Prompt | Input text/instructions given to an AI model. |
| | Prompt Engineering | Crafting effective prompts for better AI output. |
| | Context Window | Maximum number of tokens an LLM can process at once. |
| | Hallucination | Confident but incorrect answer generated by AI. |
| | Grounding | Linking AI answers to trusted data/sources. |
| | RAG (Retrieval-Augmented Generation) | AI retrieves relevant external knowledge before generating an answer. |
| 🧮 Model Types | CNN (Convolutional Neural Network) | Neural network for image processing. |
| | RNN (Recurrent Neural Network) | Processes sequential data (text, time series). |
| | Transformer | Deep learning architecture powering LLMs (uses attention). |
| | Diffusion Models | Generative models for images/audio that work by iterative denoising. |
| 🛠️ Training & Deployment | Epoch | One full pass through the training dataset. |
| | Overfitting | Model memorizes training data but fails on unseen data. |
| | Underfitting | Model is too simple and misses underlying patterns. |
| | Fine-Tuning | Further training a pre-trained model on task-specific data. |
| | LoRA (Low-Rank Adaptation) | Lightweight fine-tuning method for LLMs. |
| | Inference | Using a trained model to make predictions. |
| | Latency | Time taken for a model to return results. |
| ⚖️ Ethics & Governance | Bias | Systematic unfairness in AI outputs due to skewed data. |
| | Explainability (XAI) | Techniques to understand AI decisions. |
| | Responsible AI | Ensuring AI is fair, accountable, and transparent. |
| | AI Safety | Practices ensuring AI doesn’t cause harm. |
| 💬 Conversational AI | Agent | AI that can act on a user’s behalf (fetch data, perform actions). |
| | Autonomous Workflow | AI completes tasks end-to-end without human input. |
| | Conversational Workflow | AI interacts in multiple steps, waiting for user responses. |
| | Chain-of-Thought | Intermediate reasoning steps produced by a model. |
| | Tool/Plugin | External capability an LLM can call (API, database). |
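To make the few-shot idea concrete, here is a minimal sketch of how a few-shot prompt is assembled: the prompt itself carries a handful of labeled examples, and the model is expected to continue the pattern. The reviews, labels, and prompt wording are made-up illustrations, not from any real dataset.

```python
# Hypothetical labeled examples embedded directly in the prompt (few-shot).
examples = [
    ("I loved this movie!", "positive"),
    ("Terrible service, never again.", "negative"),
]
query = "The food was amazing."

# Build the prompt: instruction, then each example, then the unanswered query.
prompt = "Classify the sentiment of each review.\n\n"
for text, label in examples:
    prompt += f"Review: {text}\nSentiment: {label}\n\n"
prompt += f"Review: {query}\nSentiment:"

print(prompt)
```

With zero examples in the list this becomes a zero-shot prompt, and with exactly one it becomes one-shot; only the number of in-prompt examples changes.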
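Embeddings and the retrieval step of RAG can also be sketched in a few lines: documents and a query are compared as vectors, and the closest document is the one retrieved. The 3-dimensional vectors below are toy values for illustration; real embedding models produce vectors with hundreds or thousands of dimensions.

```python
import math

def cosine_similarity(a, b):
    # How closely two embedding vectors point in the same direction:
    # 1.0 = same direction, 0.0 = unrelated, -1.0 = opposite.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy 3-dimensional "embeddings" (invented values for this sketch).
docs = {
    "cat": [0.9, 0.1, 0.0],
    "kitten": [0.85, 0.15, 0.05],
    "car": [0.1, 0.9, 0.3],
}
query = [0.88, 0.12, 0.02]  # pretend embedding of the user's question

# Retrieval step of RAG: rank documents by similarity to the query,
# then feed the top match to the LLM as context.
ranked = sorted(docs, key=lambda d: cosine_similarity(query, docs[d]), reverse=True)
print(ranked[0])  # most semantically similar document
```

In a full RAG pipeline, the top-ranked document text would be prepended to the prompt so the model's answer is grounded in retrieved sources rather than its parameters alone.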