Generative Pre-Trained Transformer (GPT) is a family of large language models developed by OpenAI that use the transformer architecture to generate human-like text from input prompts. The "pre-trained" aspect means the model is first trained on massive amounts of text from the internet to learn language patterns, grammar, facts, and reasoning; it can then be fine-tuned for specific tasks or used directly for conversation, writing, translation, or code generation without additional task-specific training.
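To make the "pre-trained, then used directly" idea concrete, here is a minimal sketch that prompts an openly available pre-trained GPT-family checkpoint with no task-specific training at all. The choice of the Hugging Face transformers library and the public gpt2 checkpoint are illustrative assumptions, not anything prescribed by this entry.

```python
# Minimal sketch: steering a pre-trained GPT-family model with a prompt,
# with no fine-tuning or task-specific training.
# Assumes the Hugging Face `transformers` library and the public "gpt2"
# checkpoint -- illustrative choices, not the only option.
from transformers import pipeline

# Load a small, openly available pre-trained GPT-family model.
generator = pipeline("text-generation", model="gpt2")

# The same pre-trained model is pointed at a task purely via the prompt.
prompt = "Translate to French: 'Good morning, how are you?'\nFrench:"
result = generator(prompt, max_new_tokens=30, do_sample=False)

print(result[0]["generated_text"])
```

GPT-2 is far smaller than later GPT models, so the output quality will be modest; the point is only that a single pre-trained model can be directed toward different tasks through the prompt alone.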
Stanford HAI Associate Director Rob Reich details how the use of AI could play out for society, and what it means for generations who will grow up with it.

More people feel comfortable outsourcing important projects to AI; new research shows why we shouldn’t.

The AI Teacher Test: Measuring the Pedagogical Ability of Blender and GPT-3 in Educational Dialogues

A new 2.7B-parameter language model trained on biomedical literature delivers an improved state of the art for medical question answering.

Will ChatGPT make the already troubling income and wealth inequality in the U.S. and many other countries even worse? Digital Economy Lab Director Erik Brynjolfsson and other experts share their views.

A new study shows systemic issues in some of the most popular models.
