So, you’ve seen some amazing GPT-3 demos on Twitter (machine-made Op-Eds, poems, articles, even working code). But what’s going on under the hood of this incredible model? Here’s a (brief!) look inside.
GPT-3 is a neural-network-powered language model. A language model is a model that predicts how likely a sentence is to exist in the world. For example, a language model can label the sentence “I take my dog for a walk” as more probable to exist (i.e. on the Internet) than the sentence “I take my banana for a walk.” This is true for sentences as well as phrases and, more generally, any sequence of characters.
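To make that concrete, here’s a minimal sketch of what a language model computes. It uses a toy bigram model (counts of adjacent word pairs) as a stand-in for GPT-3’s giant neural network, and the tiny corpus is made up purely for illustration:

```python
# Toy language model: bigram counts stand in for GPT-3's neural net.
# The corpus below is invented just to illustrate the idea.
from collections import Counter

corpus = (
    "i take my dog for a walk . "
    "i take my dog to the park . "
    "i eat my banana . "
    "she eats her banana ."
).split()

# Count word pairs and single words to estimate P(next word | current word).
bigrams = Counter(zip(corpus, corpus[1:]))
unigrams = Counter(corpus)

def sentence_probability(sentence, smoothing=1e-6):
    """Chain rule: P(w1..wn) is roughly the product of P(w_i | w_{i-1})."""
    words = sentence.lower().split()
    prob = 1.0
    for prev, word in zip(words, words[1:]):
        prob *= (bigrams[(prev, word)] + smoothing) / (unigrams[prev] + smoothing)
    return prob

print(sentence_probability("i take my dog for a walk"))     # higher
print(sentence_probability("i take my banana for a walk"))  # much lower
```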
Like most language models, GPT-3 is elegantly trained on an unlabeled text dataset (in this case, largely Common Crawl). The model is shown enormous amounts of raw text and must learn to predict the next word using only the preceding words as context. It’s a simple training task that results in a powerful and generalizable model.
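In code, that training objective looks roughly like the sketch below. This is a tiny PyTorch stand-in, not GPT-3’s actual architecture, tokenizer, or hyperparameters:

```python
# A minimal sketch of the next-word-prediction objective.
# The model here is a deliberately tiny stand-in for GPT-3.
import torch
import torch.nn as nn

vocab_size, embed_dim = 100, 32

# Stand-in model: embedding layer -> linear layer scoring the next token.
model = nn.Sequential(nn.Embedding(vocab_size, embed_dim),
                      nn.Linear(embed_dim, vocab_size))
optimizer = torch.optim.Adam(model.parameters())
loss_fn = nn.CrossEntropyLoss()

# A "sentence" of token ids; inputs are all tokens but the last,
# and targets are the same tokens shifted one position left.
tokens = torch.randint(0, vocab_size, (1, 16))
inputs, targets = tokens[:, :-1], tokens[:, 1:]

logits = model(inputs)                           # (1, 15, vocab_size)
loss = loss_fn(logits.reshape(-1, vocab_size),   # predict each next token
               targets.reshape(-1))
loss.backward()
optimizer.step()
print(f"cross-entropy loss: {loss.item():.3f}")
```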
The GPT-3 model architecture itself is a transformer-based neural network. This architecture became popular about 2–3 years ago, and is the basis for the popular NLP model BERT. From an architecture perspective, GPT-3 is not actually very novel! So what makes it so special and magical?
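For the curious: the core operation inside a transformer layer is self-attention, where every token mixes in information from every other token. Here’s a heavily simplified NumPy sketch; real transformers add learned query/key/value projections, multiple attention heads, and dozens of stacked layers:

```python
# A stripped-down sketch of scaled dot-product self-attention.
# Real models learn separate query/key/value weights; here Q = K = V = x.
import numpy as np

def self_attention(x):
    """x: (sequence_length, model_dim) array of token embeddings."""
    d = x.shape[-1]
    scores = x @ x.T / np.sqrt(d)                    # pairwise similarity
    scores -= scores.max(axis=-1, keepdims=True)     # for numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax per row
    return weights @ x                               # weighted mix of tokens

x = np.random.randn(5, 8)        # 5 tokens, 8-dimensional embeddings
print(self_attention(x).shape)   # (5, 8)
```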
IT’S REALLY BIG. I mean really big. With 175 billion parameters, it’s the largest language model ever created (GPT-2 had only 1.5 billion parameters!), and it was trained on the largest dataset of any language model. This, it appears, is the main reason GPT-3 is so impressive.
And here’s the magical part. As a result, GPT-3 can do what no other model can do (well): perform *specific* tasks without any special tuning. You can ask GPT-3 to be a translator, a programmer, a poet, or a famous author, and it can do it with fewer than 10 training examples. Damn.
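Here’s roughly what that looks like in practice: the task description and a handful of examples live entirely in the prompt. Since GPT-3 is in private beta, the completion call below is hypothetical pseudocode, not a confirmed public API:

```python
# "Few-shot" prompting: the task is specified entirely in the prompt,
# with a few examples and no fine-tuning at all.
prompt = """Translate English to French:

sea otter => loutre de mer
peppermint => menthe poivrée
cheese => fromage
I take my dog for a walk =>"""

# completion = gpt3.complete(prompt)  # hypothetical API call
# print(completion)                   # expected: "Je promène mon chien"
```

That’s the whole “training” step: edit the prompt, and the model’s behavior changes.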
Most other models (like BERT) require an elaborate fine-tuning step, where you gather thousands of examples of (say) French-English sentence pairs to teach the model how to translate. With GPT-3, you don’t need to do that fine-tuning step. This is the heart of it. This is what gets people excited about GPT-3: custom language tasks without training data.
Today, GPT-3 is in private beta, but boy, I can’t wait to get my hands on it.