AI Primer: Infographic Essential Terms
An infographic visualizing the core concepts, learning methods, and key applications defining the field of Artificial Intelligence.
The AI Lexicon at a Glance
The field of AI is vast, but its foundational vocabulary can be organized into 7 key categories. "Learning Methods & Training" forms the largest group, representing the technical engine that powers AI.
This infographic visualizes the relationships between these categories and provides a complete glossary for all 53 essential terms from the AI Primer.
Deep Dive: Key Applications
Not all AI applications are equal. This chart plots key tools by their relative business adoption and disruptive impact, showing how foundational technologies like NLP enable user-facing tools like Generative AI.
Visualization Legend (Bubble Chart)
Deep Dive: The Learning Process
AI models aren't magic. They are the result of a clear process that turns raw data into a predictive tool. This flow shows the typical journey from a simple dataset to a deployed model making live inferences.
The process begins with a large Dataset, which is then split into Training Data (to teach the model) and Testing Data (to validate it).
A Machine Learning Algorithm (such as a Neural Network) processes the training data, adjusting its internal weights to learn patterns, while Hyperparameters (settings chosen before training) control how that learning proceeds. Learning can be Supervised (with labeled data) or Unsupervised (without labels).
A large pre-trained Foundation Model can be refined using Fine-Tuning and RLHF to align it with specific tasks or human values, with care taken to avoid Overfitting.
The final model is deployed, often via an API. When it receives a Prompt, it runs Inference to generate a new, unseen prediction or response.
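The four steps above can be sketched in miniature. This is a hedged, illustrative example only: the toy dataset and the least-squares line fit standing in for a "Machine Learning Algorithm" are invented for the sketch, and a real pipeline would use a library such as scikit-learn or PyTorch.

```python
# Minimal sketch of the Dataset -> Training -> Testing -> Inference flow.
# The data and the least-squares "model" are illustrative stand-ins.

dataset = [(x, 2.0 * x + 1.0) for x in range(10)]  # toy (input, label) pairs

# 1. Split the Dataset into Training Data and Testing Data (80/20).
split = int(len(dataset) * 0.8)
training_data, testing_data = dataset[:split], dataset[split:]

# 2. "Train": fit a line y = a*x + b by ordinary least squares.
n = len(training_data)
sx = sum(x for x, _ in training_data)
sy = sum(y for _, y in training_data)
sxx = sum(x * x for x, _ in training_data)
sxy = sum(x * y for x, y in training_data)
a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
b = (sy - a * sx) / n

# 3. Validate: check predictions against the held-out Testing Data.
test_error = max(abs((a * x + b) - y) for x, y in testing_data)

# 4. Inference: the deployed model predicts on a new, unseen input.
prediction = a * 100 + b
```

Because the toy data is exactly linear, the fit recovers the underlying rule; with real data the test error is what tells you whether the model generalizes.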
Data (4 Terms)
- Big Data: Extremely large datasets analyzed to reveal patterns.
- Dataset: A collection of related data points for training.
- Synthetic Data: Artificially generated data for training.
- Training Data: The subset of data used to teach the model.
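The Synthetic Data entry above can be made concrete with a short sketch: artificial training examples generated from a known rule plus random noise. The rule (y = 3x - 2) and noise level are arbitrary choices for this illustration.

```python
import random

def make_synthetic_data(n, seed=0):
    """Generate n artificial (feature, label) pairs around y = 3x - 2."""
    rng = random.Random(seed)
    data = []
    for _ in range(n):
        x = rng.uniform(0.0, 10.0)
        noise = rng.gauss(0.0, 0.1)  # small Gaussian noise for realism
        data.append((x, 3.0 * x - 2.0 + noise))
    return data

synthetic = make_synthetic_data(100)  # a Synthetic Dataset, ready for training
```

Seeding the generator makes the synthetic dataset reproducible, which matters when comparing training runs.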
Infrastructure (5 Terms)
- API: Rules allowing software applications to communicate.
- Model Context Protocol: An open standard for connecting models to external tools and data sources.
- Model Deployment: Making a trained model available for use.
- OpenAI: A prominent AI research lab and company.
- Token: The unit for measuring computational cost and context.
Ethics & Metrics (5 Terms)
- Bias in AI: Systemic errors in decisions from skewed data.
- Ethical AI: Developing AI that is fair, transparent, and respects privacy.
- Explainable AI (XAI): Methods to understand *why* an AI made a decision.
- Confidence Score: The model's internal certainty about its prediction.
- Inference: Using a trained model to make a prediction on new data.
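The Confidence Score and Inference entries above can be sketched together: a softmax turns a model's raw output scores (logits) into probabilities, and the highest probability serves as the model's confidence in its prediction. The logits and class names here are invented for the example.

```python
import math

def softmax(logits):
    """Convert raw scores into probabilities that sum to 1."""
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(v - m) for v in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical logits for classes ["cat", "dog", "bird"].
logits = [2.0, 1.0, 0.1]
probs = softmax(logits)
confidence = max(probs)               # the model's Confidence Score
prediction = probs.index(confidence)  # index of the predicted class
```

A well-calibrated model is right about as often as its confidence suggests; Explainable AI methods go further and ask *why* that class scored highest.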

