The transformer & GPT timeline

Posted on 2023-9-7 03:08:52


Timeline of the transformer, GPT, and ChatGPT. The timeline shows that GPT evolved from the original transformer architecture and gained its capabilities through many iterations.
2015. OpenAI was founded by Sam Altman, Elon Musk, Greg Brockman, Peter Thiel, and others. OpenAI develops many different AI models other than GPT.
2017. Google published the paper Attention Is All You Need, which introduced the transformer architecture [Vaswani, Ashish, et al. “Attention is all you need.” Advances in neural information processing systems 30 (2017)]. The transformer is a neural network architecture that lays the foundation for many state-of-the-art (SOTA) large language models (LLMs) such as GPT (a short attention sketch follows the timeline below).
2018. GPT is introduced in Improving Language Understanding by Generative Pre-Training [Radford, Alec, et al. “Improving language understanding by generative pre-training.” (2018)]. It is based on a modified transformer architecture and pre-trained on a large corpus.
2019. GPT-2 is introduced in Language Models are Unsupervised Multitask Learners [Radford, Alec, et al. “Language models are unsupervised multitask learners.” OpenAI blog 1.8 (2019): 9]. It can perform a range of tasks without explicit supervision during training.
2020. GPT-3 is introduced in Language Models are Few-Shot Learners [Brown, Tom, et al. “Language models are few-shot learners.” Advances in neural information processing systems 33 (2020): 1877–1901]. It can perform well with only a few examples in the prompt, without fine-tuning.
2022. InstructGPT is introduced in Training language models to follow instructions with human feedback [Ouyang, Long, et al. “Training language models to follow instructions with human feedback.” arXiv preprint arXiv:2203.02155 (2022)]. It follows user instructions better because it is fine-tuned with human feedback.
2022. ChatGPT, a sibling of InstructGPT, is introduced in ChatGPT: Optimizing Language Models for Dialogue. It can interact with humans in conversation, thanks to fine-tuning on human-written examples and reinforcement learning from human feedback (RLHF).
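For anyone curious what the 2017 paper actually introduced, below is a minimal sketch of scaled dot-product attention, the operation at the heart of the transformer. The NumPy implementation and the toy shapes are illustrative assumptions of mine, not code or settings from the paper.

import numpy as np

def scaled_dot_product_attention(Q, K, V):
    # Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V, as in Vaswani et al. (2017)
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                 # how strongly each query matches each key
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax
    return weights @ V                              # weighted average of the value vectors

# Toy example (made-up sizes): 3 query positions, 4 key/value positions, dimension 8
rng = np.random.default_rng(0)
Q = rng.standard_normal((3, 8))
K = rng.standard_normal((4, 8))
V = rng.standard_normal((4, 8))
print(scaled_dot_product_attention(Q, K, V).shape)  # -> (3, 8)

A full transformer stacks this operation across multiple heads and layers, together with feed-forward blocks, residual connections, and positional encodings; the sketch only shows the single-head core.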

