Nutshell Series

🧠 AI Terminology Cheat Sheet

This cheat sheet provides quick definitions of common AI terms, organized by category for easy reference. Perfect for beginners, students, and professionals looking to refresh their knowledge.

| Category | Term | Definition |
|---|---|---|
| ⚙️ Core Concepts | Artificial Intelligence (AI) | Broad field of making machines perform tasks that normally require human intelligence. |
| | Machine Learning (ML) | Subset of AI where systems learn from data. |
| | Deep Learning (DL) | Subset of ML using multi-layered neural networks. |
| | Neural Network | Computational model inspired by the human brain, made of interconnected "neurons." |
| | Generative AI | AI that creates new content (text, images, code, audio). |
| 📚 Learning Paradigms | Supervised Learning | Training on labeled data (input + known output). |
| | Unsupervised Learning | Training on unlabeled data to find patterns or clusters. |
| | Reinforcement Learning (RL) | Model learns by interacting with an environment and receiving rewards/penalties. |
| | Zero-Shot Learning | Model solves a task with no examples provided, relying on general knowledge. |
| | One-Shot Learning | Model solves a task after seeing one example. |
| | Few-Shot Learning | Model solves a task after seeing a handful of examples. |
| | Transfer Learning | Reusing a pre-trained model as the starting point for a related task. |
| 💬 NLP (Natural Language Processing) | Token | Smallest unit of text an AI model processes (see the token-counting sketch after this table). |
| | Embedding | Numeric vector representation of words/sentences that captures meaning. |
| | Large Language Model (LLM) | AI model trained on massive text corpora (e.g., GPT, LLaMA). |
| | Prompt | Input text/instructions given to an AI model. |
| | Prompt Engineering | Crafting effective prompts for better AI output. |
| | Context Window | Maximum number of tokens an LLM can handle at once. |
| | Hallucination | Confident but incorrect answer generated by AI. |
| | Grounding | Linking AI answers to trusted data/sources. |
| | RAG (Retrieval-Augmented Generation) | AI retrieves external knowledge before generating answers. |
| 🧮 Model Types | CNN (Convolutional Neural Network) | Neural network specialized for image processing. |
| | RNN (Recurrent Neural Network) | Processes sequential data (text, time series). |
| | Transformer | Deep learning architecture powering LLMs (uses attention). |
| | Diffusion Models | Generative models for images/audio that work by iteratively denoising. |
| 🛠️ Training & Deployment | Epoch | One full pass through the training dataset. |
| | Overfitting | Model memorizes training data but fails on unseen data. |
| | Underfitting | Model is too simple and misses patterns. |
| | Fine-Tuning | Further training a pre-trained model on task-specific data. |
| | LoRA (Low-Rank Adaptation) | Lightweight fine-tuning method for LLMs. |
| | Inference | Using a trained model to make predictions. |
| | Latency | Time taken for a model to return results. |
| ⚖️ Ethics & Governance | Bias | Systematic unfairness in AI outputs due to skewed data. |
| | Explainability (XAI) | Techniques to understand AI decisions. |
| | Responsible AI | Ensuring AI is fair, accountable, and transparent. |
| | AI Safety | Practices ensuring AI doesn't cause harm. |
| 💬 Conversational AI | Agent | AI that can act on a user's behalf (fetch data, perform actions). |
| | Autonomous Workflow | AI completes tasks end-to-end without human input. |
| | Conversational Workflow | AI interacts in multiple steps, waiting for user responses. |
| | Chain-of-Thought | Intermediate reasoning steps taken by a model. |
| | Tool/Plugin | External capability an LLM can call (API, database). |
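
To make two of the NLP terms above concrete, namely Token and Context Window, here is a minimal Python sketch that counts the tokens in a prompt using the open-source tiktoken tokenizer. The 8,000-token limit and the reply_budget value are arbitrary stand-ins chosen for illustration, not the numbers of any particular model.

```python
# Minimal sketch: count the tokens in a prompt and check whether it fits an
# assumed context window. Requires `pip install tiktoken`.
import tiktoken

CONTEXT_WINDOW = 8_000  # assumed limit for illustration only


def fits_in_context(prompt: str, reply_budget: int = 500) -> bool:
    """Return True if the prompt plus a reply budget fits in the window."""
    encoder = tiktoken.get_encoding("cl100k_base")  # tokenizer used by several OpenAI models
    n_tokens = len(encoder.encode(prompt))
    print(f"Prompt uses {n_tokens} tokens")
    return n_tokens + reply_budget <= CONTEXT_WINDOW


print(fits_in_context("Summarize the following meeting notes: ..."))
```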
Nutshell Series

Zero-Shot, One-Shot, and Few-Shot Learning: Explained with Examples

Artificial Intelligence (AI) models—especially Large Language Models (LLMs) like GPT—are powerful because they can solve problems even when they haven’t been directly trained on them. This ability is often described in terms of zero-shot, one-shot, and few-shot learning. Let’s break these concepts down with examples you can relate to.


🔹 Zero-Shot Learning

What it is:
Zero-shot learning means the model is given no examples of a task but is still expected to perform it using general knowledge and instructions.

Analogy:
Imagine being asked to play a new board game just by reading the rulebook, without watching anyone else play.

Example:

Prompt: Translate the sentence “Je suis étudiant” into English.
Answer: “I am a student.”
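
In code, a zero-shot request contains only the instruction itself, with no worked examples. The sketch below assumes the OpenAI Python SDK and an API key in the OPENAI_API_KEY environment variable; the model name is a placeholder, and any chat-completion API would take the same shape.

```python
# Zero-shot: the request carries only the task instruction, no examples.
# Assumes `pip install openai` and an API key in OPENAI_API_KEY.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; use whichever model you have access to
    messages=[
        {"role": "user",
         "content": 'Translate the sentence "Je suis étudiant" into English.'},
    ],
)
print(response.choices[0].message.content)  # expected: "I am a student."
```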


🔹 One-Shot Learning

What it is:
In one-shot learning, the model is shown one example of how a task is done before being asked to solve a new but similar problem.

Analogy:
Like being shown how to solve one type of math problem and then solving the next one on your own.

Example:

Example: "Translate 'Hola' → 'Hello'"
Now, translate "Adiós".

Answer: “Goodbye.”
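
In code, one-shot prompting simply places a single demonstration, a user turn plus the assistant's ideal reply, ahead of the real query. The sketch below only builds the message list; it would be sent exactly like the zero-shot request above.

```python
# One-shot: one worked example (user request + assistant reply) is prepended
# to the conversation before the real query, so the model can copy the pattern.
one_shot_messages = [
    {"role": "user", "content": "Translate 'Hola' into English."},
    {"role": "assistant", "content": "Hello"},  # the single worked example
    {"role": "user", "content": "Translate 'Adiós' into English."},  # the real query
]
print(one_shot_messages)  # send to any chat-completion API; expected reply: "Goodbye"
```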


🔹 Few-Shot Learning

What it is:
Few-shot learning gives the model several examples (usually 2–10+) so it can learn the task pattern more reliably before attempting a new query.

Analogy:
Like practicing a handful of past exam questions before taking the real test.

Example:

Example 1: "Translate 'Bonjour' → 'Hello'"
Example 2: "Translate 'Merci' → 'Thank you'"
Example 3: "Translate 'Chat' → 'Cat'"
Now, translate "Chien".

Answer: “Dog.”
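
The same pattern scales to few-shot prompting: a small helper can format any number of demonstration pairs ahead of the new query. The helper below is purely illustrative, not a library function.

```python
# Few-shot: several demonstrations are placed in the conversation before the
# new query. `build_few_shot_messages` is an illustrative helper, not an API.
def build_few_shot_messages(examples, query):
    """examples: list of (source, translation) pairs; query: the new source text."""
    messages = []
    for source, translation in examples:
        messages.append({"role": "user", "content": f"Translate '{source}' into English."})
        messages.append({"role": "assistant", "content": translation})
    messages.append({"role": "user", "content": f"Translate '{query}' into English."})
    return messages


demos = [("Bonjour", "Hello"), ("Merci", "Thank you"), ("Chat", "Cat")]
print(build_few_shot_messages(demos, "Chien"))  # expected model reply: "Dog"
```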


✅ Summary

| Learning Type | Examples Provided | Strength | Use Case |
|---|---|---|---|
| Zero-Shot | None | Most flexible; relies on general knowledge | Text classification, reasoning |
| One-Shot | 1 | Learns simple patterns quickly | Simple translation, formatting |
| Few-Shot | Few (2–10+) | Captures complex patterns better | Summarization, style imitation |

🌟 Why This Matters

These learning modes are central to how modern AI systems adapt to new tasks. Instead of retraining models for every use case, we can simply provide instructions (zero-shot) or a few examples (one/few-shot). This makes LLMs powerful tools for translation, summarization, customer support, coding help, and much more.

👉 Whether you’re experimenting with AI prompts or building production-ready applications, understanding zero-shot, one-shot, and few-shot learning will help you design smarter and more effective solutions.