Dictionary of AI terms
A
-
Activation Function: A mathematical function within a neuron that determines its output, crucial for enabling networks to learn complex patterns.
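A minimal sketch of this idea in Python, using two common activation functions (the input, weight, and bias values are made up for illustration):

```python
import math

def relu(x):
    # ReLU passes positive values through and zeroes out negatives.
    return max(0.0, x)

def sigmoid(x):
    # Sigmoid squashes any real input into the range (0, 1).
    return 1.0 / (1.0 + math.exp(-x))

# A neuron's output: an activation function applied to the
# weighted sum of its inputs plus a bias term.
inputs = [0.5, -1.2, 3.0]
weights = [0.4, 0.6, -0.1]
bias = 0.2
z = sum(w * x for w, x in zip(weights, inputs)) + bias  # about -0.62
output = relu(z)  # 0.0, since z is negative
```

Without a non-linear activation like these, stacked layers would collapse into a single linear transformation, which is why activations are crucial for learning complex patterns.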
-
Algorithm: A set of instructions that a computer follows to perform a task.
-
Artificial General Intelligence (AGI): A hypothetical type of AI that possesses human-level intelligence and adaptability across various domains and tasks.
-
Artificial Intelligence (AI): The broad field encompassing the simulation of intelligent behavior in computers, with the aim of creating systems that can learn, reason, and act autonomously.
B
-
Backpropagation: The core method for calculating errors and adjusting weights within neural networks during the training process.
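The core idea can be sketched for a single linear neuron, where the chain rule gives the gradient directly (the weight, input, and target values are toy numbers chosen for illustration):

```python
# One training step for a single linear neuron: y_hat = w * x.
# Loss = (y_hat - y)^2, so by the chain rule dLoss/dw = 2 * (y_hat - y) * x.
w, x, y = 0.5, 2.0, 3.0        # weight, input, target (toy values)
learning_rate = 0.1

y_hat = w * x                  # forward pass: prediction = 1.0
error = y_hat - y              # -2.0
grad_w = 2 * error * x         # backward pass: gradient = -8.0
w -= learning_rate * grad_w    # update: w moves from 0.5 to 1.3
```

After the update the neuron predicts 2.6 instead of 1.0, closer to the target of 3.0. Real backpropagation applies this same chain-rule bookkeeping layer by layer through the whole network.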
-
Bias: The tendency of a model to favor certain outcomes over others, potentially leading to unfair or discriminatory results.
-
Big Data: Datasets that are exceptionally large and complex, requiring specialized techniques for processing and analysis.
C
-
Chatbot: A computer program designed to engage in conversations with human users, often through text or voice interactions.
-
Classification: A machine learning task where a model learns to assign categories to data points (e.g., classifying an email as spam or not spam).
-
Clustering: A machine learning task focused on grouping similar data points without pre-defined labels.
-
Cloud Computing: The on-demand delivery of computing resources, including AI tools and platforms, over the internet.
-
Computer Vision (CV): The field of AI that enables computers to extract meaningful information from images and videos.
D
-
Data: The raw information used to train and evaluate machine learning models.
-
Data Preprocessing: The essential process of cleaning, transforming, and preparing data for use in machine learning models.
-
Dataset: A structured collection of data used for training and testing machine learning models.
-
Deep Learning (DL): A subset of machine learning that uses multi-layered artificial neural networks to learn complex representations from data.
-
Deepfake: Manipulated media (images, videos, audio) created using AI techniques, often with the intent to deceive.
E
-
Embeddings: Mathematical representations of words or other data points that capture their semantic meaning and relationships.
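A common way to compare embeddings is cosine similarity. The sketch below uses made-up 3-dimensional vectors; real embeddings are learned from data and typically have hundreds of dimensions:

```python
import math

def cosine_similarity(a, b):
    # Cosine similarity: 1.0 means same direction, near 0.0 means unrelated.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy embeddings (values invented for illustration only).
king  = [0.9, 0.8, 0.1]
queen = [0.8, 0.9, 0.1]
apple = [0.1, 0.0, 0.9]

# Semantically related words end up with higher similarity.
related = cosine_similarity(king, queen)    # close to 1.0
unrelated = cosine_similarity(king, apple)  # much lower
```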
-
Explainable AI (XAI): Techniques and methods aimed at understanding and interpreting the decision-making processes of AI models.
F
-
Feature: A measurable characteristic or property of a data point used as an input to a machine learning model.
G
-
Generalization: The ability of a machine learning model to perform accurately on new, unseen data.
-
Generative Adversarial Network (GAN): A deep learning architecture in which two neural networks compete: a generator creates synthetic samples, while a discriminator tries to distinguish them from real data.
-
Gradient Descent: An iterative optimization algorithm commonly used to minimize errors and find the best parameters for a machine learning model.
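A minimal one-dimensional sketch: minimizing f(x) = (x - 3)^2, whose gradient is known in closed form (the starting point and learning rate are arbitrary choices for illustration):

```python
# Minimize f(x) = (x - 3)^2, which has its minimum at x = 3.
def grad(x):
    # Derivative of f: f'(x) = 2 * (x - 3).
    return 2 * (x - 3)

x = 0.0              # arbitrary starting guess
learning_rate = 0.1
for _ in range(100):
    x -= learning_rate * grad(x)  # step in the direction opposite the gradient

# x has converged very close to the minimum at x = 3.
```

Each step shrinks the distance to the minimum by a constant factor here; in real models the same update rule is applied to millions of parameters at once, with gradients supplied by backpropagation.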
H
-
Hyperparameter: A configuration setting for a machine learning model that is set before the training process begins (e.g., learning rate, number of layers in a neural network).
I
-
Inference: The process of using a trained machine learning model to make predictions or decisions on new data.
-
Internet of Things (IoT): A network of interconnected devices with sensors that can collect and exchange data.
L
-
Label: In supervised learning, the target output or correct answer associated with a data point, used to guide the model's learning.
M
-
Machine Learning (ML): A subset of AI focused on algorithms and techniques that enable computers to learn from data without being explicitly programmed.
-
Model: A mathematical representation of patterns learned from data, used for making predictions or decisions.
N
-
Natural Language Processing (NLP): The field of AI concerned with the interaction between computers and human language, including understanding and generation.
-
Neural Network: A type of machine learning algorithm inspired by the structure of the biological brain, composed of interconnected nodes (neurons).
-
Neuron: The basic computational unit within an artificial neural network.
O
-
Overfitting: A situation where a model learns the training data too well, including noise and anomalies, leading to poor performance on new data.
P
-
Precision: A performance metric for classification tasks that measures the proportion of the model's positive predictions that are actually correct.
R
-
Recall: A performance metric for classification tasks that measures the proportion of actual positive cases the model correctly identifies.
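Both metrics can be computed directly from counts of true positives, false positives, and false negatives. A small sketch using a made-up spam-detection example:

```python
def precision_recall(y_true, y_pred):
    # Count prediction outcomes for the positive class (label 1).
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp)  # of everything flagged positive, how much was right?
    recall = tp / (tp + fn)     # of all actual positives, how many were found?
    return precision, recall

# Toy labels: 1 = spam, 0 = not spam.
y_true = [1, 1, 1, 0, 0, 0]
y_pred = [1, 1, 0, 1, 0, 0]
p, r = precision_recall(y_true, y_pred)
# tp = 2, fp = 1, fn = 1  ->  precision = 2/3, recall = 2/3
```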
-
Regression: A machine learning task where the model predicts a continuous numerical value (e.g., house price prediction).
-
Reinforcement Learning (RL): A type of machine learning where an agent learns through trial and error by interacting with an environment and receiving rewards or punishments.
S
-
Supervised Learning: A machine learning paradigm where models are trained on labeled datasets (input-output pairs are provided).
T
-
Transfer Learning: A technique where a model pre-trained on one task is re-purposed for a related task, improving efficiency and performance.
U
-
Unsupervised Learning: A machine learning paradigm where models discover patterns in data without explicit labels.
V
-
Validation Set: A portion of the dataset used to tune model hyperparameters and help prevent overfitting.
W
-
Weights: The adjustable parameters within a neural network that determine the strength of connections between neurons. Learning involves updating these weights.
X
-
XAI (Explainable AI): See Explainable AI.
Less Common (But Important!) Terms
-
Adversarial Examples: Inputs to machine learning models intentionally crafted to cause misclassification, exposing vulnerabilities.
-
Autoencoder: A type of neural network used for unsupervised learning, often for dimensionality reduction or feature representation.
-
Backtracking: A search algorithm that systematically explores potential solutions, reversing direction when a dead-end is reached.
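A classic illustration is subset-sum: extend a partial solution one element at a time and undo the last choice when it cannot lead to the target (this sketch assumes all numbers are positive):

```python
def subset_sum(nums, target, chosen=None, start=0):
    # Backtracking: extend a partial solution, and undo (backtrack)
    # when it cannot possibly reach the target. Assumes positive nums.
    if chosen is None:
        chosen = []
    if target == 0:
        return list(chosen)          # found a complete solution
    for i in range(start, len(nums)):
        if nums[i] <= target:
            chosen.append(nums[i])   # tentatively extend the partial solution
            result = subset_sum(nums, target - nums[i], chosen, i + 1)
            if result is not None:
                return result
            chosen.pop()             # dead end: backtrack and try the next option
    return None                      # no subset sums to the target
```

For example, `subset_sum([3, 34, 4, 12, 5, 2], 9)` finds a subset summing to 9, while `subset_sum([5, 7], 3)` exhausts all options and returns `None`.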
-
Bayesian Inference: A statistical approach to updating beliefs about a hypothesis as new data becomes available.
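Bayes' rule makes this concrete. The numbers below are invented for illustration: suppose 1% of emails are spam, and the word "free" appears in 60% of spam but only 5% of legitimate mail:

```python
# Prior belief and likelihoods (toy numbers for illustration).
p_spam = 0.01
p_free_given_spam = 0.60
p_free_given_ham = 0.05

# Bayes' rule: P(spam | "free") = P("free" | spam) * P(spam) / P("free")
p_free = p_free_given_spam * p_spam + p_free_given_ham * (1 - p_spam)
p_spam_given_free = p_free_given_spam * p_spam / p_free
# Observing "free" raises the belief from 1% to roughly 11%.
```

The posterior is still well below 50% because the prior is so low; this interplay between prior and evidence is the heart of Bayesian inference.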
-
Capsule Networks: A type of neural network architecture designed to better handle hierarchical spatial relationships and changes in viewpoint.
-
Dimensionality Reduction: Techniques for transforming data into a lower-dimensional representation that preserves essential information.
-
Domain Adaptation: The ability to adapt a model trained in one context (domain) to perform well in a different but related domain.
-
Ensemble Learning: The process of combining multiple machine learning models to improve overall predictive performance.
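The simplest combination rule is majority voting. A sketch with three hypothetical models classifying the same input:

```python
from collections import Counter

def majority_vote(predictions):
    # Combine per-model predictions by taking the most common label.
    return Counter(predictions).most_common(1)[0][0]

# Outputs from three hypothetical models for one email.
model_outputs = ["spam", "not spam", "spam"]
verdict = majority_vote(model_outputs)  # "spam"
```

Voting works because independent models tend to make different mistakes; more sophisticated ensembles (bagging, boosting, stacking) weight or train the members more carefully.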
-
Evolutionary Algorithms: Optimization methods inspired by biological evolution, used for finding solutions to complex problems.
-
Fuzzy Logic: A type of logic that deals with degrees of truth rather than simply true or false, useful for handling uncertainty.
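A minimal sketch of a fuzzy membership function (the temperature thresholds are arbitrary values chosen for illustration):

```python
def warm_membership(temp_c):
    # Degree (0.0 to 1.0) to which a temperature counts as "warm".
    # Below 15 C: not warm at all; above 25 C: fully warm;
    # in between: partially warm, rising linearly.
    if temp_c <= 15:
        return 0.0
    if temp_c >= 25:
        return 1.0
    return (temp_c - 15) / 10.0

# 20 C is "warm" to degree 0.5 -- neither fully true nor fully false.
```

Classical logic would force a yes/no answer at some cutoff; fuzzy logic instead assigns graded truth values, which downstream rules can combine.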