Glossary of Key Terms
Adversarial Examples: Inputs slightly perturbed from the original data to mislead generative models into producing unexpected outputs.
Bias: Unwanted and unfair preferences or prejudices that generative AI models learn from their training data, leading to biased content generation.
Conditional Generation: Generating content under specific constraints or conditions, such as producing images of cats conditioned on breed.
Cross-Validation: A technique for evaluating the performance of a machine learning model. The data is split into a training set and a validation set, typically across several rotating folds; the model is trained on the training set and evaluated on the validation set, which helps ensure it is not overfitting the training data.
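A minimal k-fold split can be sketched in plain Python (the fold count and sample count below are illustrative, not from the source):

```python
# Minimal k-fold cross-validation sketch: partition the sample indices
# into k folds; each fold in turn serves as the validation set while the
# remaining folds form the training set.
def k_fold_splits(n_samples, k):
    """Yield (train_indices, val_indices) pairs for k-fold cross-validation."""
    fold_size = n_samples // k
    indices = list(range(n_samples))
    for i in range(k):
        val = indices[i * fold_size:(i + 1) * fold_size]
        train = indices[:i * fold_size] + indices[(i + 1) * fold_size:]
        yield train, val

# Example: 10 samples, 5 folds -> each validation fold holds 2 samples.
splits = list(k_fold_splits(10, 5))
```

Averaging the model's score across all folds gives a more stable estimate than a single train/validation split.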
Data Augmentation: Techniques used to increase the diversity and size of the training dataset, improving the generalization capability of generative models.
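One common augmentation for image data is a horizontal flip; a toy sketch (the "images" here are simple 2D lists, an assumption for illustration):

```python
# Data augmentation sketch: create extra training samples by horizontally
# flipping each image, doubling the dataset without collecting new data.
def horizontal_flip(image):
    """Reverse each row of a 2D list of pixel values."""
    return [row[::-1] for row in image]

def augment(images):
    # Keep the originals and append a flipped copy of each.
    return images + [horizontal_flip(img) for img in images]

dataset = [[[1, 2], [3, 4]]]
augmented = augment(dataset)
```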
Deep Learning: A subset of machine learning that uses artificial neural networks with multiple layers to process and learn from data, enabling complex pattern recognition and generation tasks.
Early Stopping: A technique for preventing overfitting. Training is halted when the validation loss stops decreasing, which keeps the model from learning the noise in the training data.
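The stopping rule can be sketched as a patience counter over per-epoch validation losses (the loss values and patience of 2 below are made up for illustration):

```python
# Early stopping sketch: halt training once the validation loss has not
# improved for `patience` consecutive epochs.
def train_with_early_stopping(val_losses, patience=2):
    """Return the epoch index at which training stops."""
    best = float("inf")
    epochs_without_improvement = 0
    for epoch, loss in enumerate(val_losses):
        if loss < best:
            best = loss
            epochs_without_improvement = 0
        else:
            epochs_without_improvement += 1
            if epochs_without_improvement >= patience:
                return epoch  # stop: no improvement for `patience` epochs
    return len(val_losses) - 1

# Validation loss improves until epoch 2, then stalls; training stops at epoch 4.
stopped_at = train_with_early_stopping([0.9, 0.7, 0.6, 0.65, 0.64, 0.66])
```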
Ethical Considerations: The ethical implications and potential consequences of using generative AI in various applications, including privacy, security, and fairness.
Evaluation Metrics: Metrics used to measure the performance of a machine learning model, whether to compare different models or to track one model's performance over time.
Fine-Tuning: The process of improving a model's performance on a specific task by retraining it on a smaller dataset, typically labeled data similar to the data the model was originally trained on.
Generative Adversarial Networks (GANs): A type of generative model consisting of two neural networks, a generator and a discriminator, that play an adversarial game to improve the quality of the generated content.
Generative AI: A class of artificial intelligence systems that can generate new and original content, such as images, text, music, or videos, based on patterns learned from existing data.
Generative Models: Models that learn the underlying data distribution in order to generate new data samples that resemble the training data.
Hardware Acceleration: The use of specialized hardware to speed up the training and inference of machine learning models, particularly on large datasets.
Hyperparameters: Model parameters that are not learned during training. They are typically set by the user and can have a significant impact on the model's performance.
Hyperparameter Tuning: The process of finding the best values for a model's hyperparameters. Tuning can be time-consuming, but the performance gains often make it worthwhile.
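A simple tuning strategy is exhaustive grid search; a toy sketch where the scoring function is a stand-in for actually training and validating a model (the grid values and scoring formula are illustrative assumptions):

```python
# Hyperparameter tuning sketch: grid search over a tiny hyperparameter grid,
# keeping the combination with the best validation score.
from itertools import product

def validation_score(lr, batch_size):
    # Stand-in for "train a model and measure validation accuracy";
    # this toy formula simply prefers lr=0.1 and batch_size=32.
    return -abs(lr - 0.1) - abs(batch_size - 32) / 100

grid = {"lr": [0.01, 0.1, 1.0], "batch_size": [16, 32, 64]}
best = max(
    (dict(zip(grid, values)) for values in product(*grid.values())),
    key=lambda params: validation_score(**params),
)
```

In practice each scoring call involves a full training run, which is why tuning is expensive and smarter strategies (random search, Bayesian optimization) are often preferred.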
Image-to-Image Translation: A generative AI task in which the model transforms an input image into an output image, such as converting a sketch into a realistic image.
Inference: The process of using a trained generative model to generate new content from unseen data or user inputs.
Inference Time: The time it takes a model to make a prediction on new data; important for applications that require real-time predictions.
Latent Space: The lower-dimensional space into which data is encoded by generative models, allowing manipulation and interpolation of data representations.
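The interpolation mentioned above is usually a linear blend between two latent vectors; a toy sketch (the 2-dimensional vectors and names are invented for illustration, and a real model would decode each blend back into an image or text):

```python
# Latent-space interpolation sketch: blend two latent vectors linearly.
# Decoding each intermediate vector would give a smooth transition between
# the two generated outputs.
def lerp(z1, z2, t):
    """Linearly interpolate between latent vectors z1 and z2, with 0 <= t <= 1."""
    return [(1 - t) * a + t * b for a, b in zip(z1, z2)]

z_cat, z_dog = [0.0, 1.0], [1.0, 0.0]  # hypothetical latent codes
midpoint = lerp(z_cat, z_dog, 0.5)
```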
Learning Rate: A hyperparameter that controls how quickly a machine learning model learns. A higher learning rate makes training faster, but it can also make training unstable or cause it to overshoot good solutions.
Loss Function: A function that measures the error between a model's predicted output and the actual output; training adjusts the model to minimize this error.
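The two entries above fit together in gradient descent: the loss function defines the error, and the learning rate scales each corrective step. A minimal sketch fitting y = w * x by minimizing a mean-squared-error loss (the data and step count are toy values):

```python
# Gradient descent sketch: minimize a mean-squared-error loss, with the
# learning rate controlling the size of each update to the weight w.
def mse(pred, target):
    """Mean squared error between predictions and targets."""
    return sum((p - t) ** 2 for p, t in zip(pred, target)) / len(pred)

xs, ys = [1.0, 2.0, 3.0], [2.0, 4.0, 6.0]  # true relationship: y = 2x
w, learning_rate = 0.0, 0.1
for _ in range(100):
    # Gradient of the MSE with respect to w.
    grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
    w -= learning_rate * grad  # step size scaled by the learning rate

final_loss = mse([w * x for x in xs], ys)
```

With this learning rate w converges to 2.0; a much larger one would make the updates oscillate or diverge.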
Mode Collapse: A phenomenon in GANs where the generator produces limited variations of the same content, reducing the diversity of generated samples.
Model Architecture: The design of a machine learning model, which determines the number of layers, the types of layers, and the connections between them.
Model Zoo: A collection of pre-trained machine learning models for a variety of tasks. Model zoos offer a convenient way to get started with machine learning, since pre-trained models can be used without training from scratch.
Natural Language Processing (NLP): A field of AI focused on understanding and processing human language, often used in text generation tasks.
Overfitting: A common limitation in which a model becomes too specialized to the training data and performs poorly on unseen data.
Pre-processing: The process of preparing data for machine learning, involving tasks such as cleaning the data, removing outliers, and transforming it into a format the model can use.
Semi-Supervised Learning: A type of machine learning that combines supervised and unsupervised learning. The model receives some labeled data but also learns from unlabeled data, which can be more efficient than purely supervised learning because a larger dataset is available.
Text Generation: The use of generative models to create coherent and contextually relevant text from a given prompt or input.
Text-to-Image Generation: The task of generating images from textual descriptions using generative models.
Transfer Learning: A technique in which pre-trained models are used as a starting point for new tasks, saving time and computational resources.
Unsupervised Learning: A type of machine learning in which the model receives no labeled data and instead learns to identify patterns on its own; often used for tasks such as clustering and dimensionality reduction.
Variational Autoencoders (VAEs): A type of generative model that learns to encode data into a lower-dimensional representation and then decode it back into the original data space.