Source: OpenAI
Reposted from 新智元; reproduction without permission is prohibited.
OpenAI introduced its new NLP model on its official blog today. The model sets a new SOTA (state of the art) on seven major datasets and, without any training on domain-specific data, can directly perform a range of NLP tasks, including basic reading comprehension, machine translation, question answering, and text summarization.
Completing many different tasks with good results and no task-specific fine-tuning amounts, in effect, to overcoming "catastrophic forgetting"; this is close to the "general-purpose" model that deep learning researchers have dreamed of.
If Google's BERT marked NLP's entry into a new era of pretrained models, OpenAI has used this result to show that with extraordinary amounts of data and compute, things that were previously unimaginable become possible.
Take compute, for example: according to Smerity, who has been involved in OpenAI's reinforcement learning research, the new model was trained on 256 Google TPU v3 chips (the training time was not disclosed), at a cost of $2,048 per hour.
The strongest "general-purpose" NLP model yet: 1.5 billion parameters trained on 40GB of web data
OpenAI's NLP model is based on the Transformer. It has 1.5 billion parameters and was trained on a dataset of 8 million web pages, all for a single objective:
predict the next word, given all of the preceding text.
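To make that objective concrete, here is a minimal sketch of the next-word-prediction loss, written by us for illustration in PyTorch rather than taken from OpenAI's training code: the model's prediction at each position is scored against the token that actually appears next.

```python
import torch
import torch.nn.functional as F

def next_word_loss(logits: torch.Tensor, token_ids: torch.Tensor) -> torch.Tensor:
    """Average cross-entropy of predicting token t+1 from the logits at position t.

    logits:    (batch, seq_len, vocab_size) scores produced by the language model
    token_ids: (batch, seq_len) the tokens that were actually observed
    """
    # The prediction made at position t is compared with the token at position t+1,
    # so the last prediction and the first target are dropped.
    pred = logits[:, :-1, :]
    target = token_ids[:, 1:]
    return F.cross_entropy(pred.reshape(-1, pred.size(-1)), target.reshape(-1))

# Toy usage: random numbers stand in for a real Transformer's output.
vocab_size, batch, seq_len = 50257, 2, 16   # 50257 is GPT-2's BPE vocabulary size
fake_logits = torch.randn(batch, seq_len, vocab_size)
fake_tokens = torch.randint(0, vocab_size, (batch, seq_len))
print(next_word_loss(fake_logits, fake_tokens))
```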
The new model is called GPT-2. It is a direct scale-up of GPT, the unsupervised NLP model OpenAI released last year, with more than 10 times the parameters and more than 10 times the training data.
Because the model has enough capacity and enough training data (40GB of web text), GPT-2 turns out to be able to perform a wide range of NLP tasks simply by "predicting the next word", demonstrating strong generalization.
The mainstream way to build machine learning systems today is supervised learning: collect data, that is, feed the model a set of "ideal" input-output pairs, and have it imitate the pattern so that it produces similar outputs on new test data. This works well on narrow, domain-specific tasks, but its weakness is that as soon as the task changes, say, taking a model that does well on a question-answering dataset and applying it to reading comprehension, the model cannot adapt; in other words, it generalizes poorly.
OpenAI's researchers therefore make a bold conjecture: the poor generalization of today's machine learning systems is caused precisely by confining models to training on a single task over a domain-specific dataset.
At the same time, existing work on multitask models shows that simply adding more training examples is not enough to scale to new tasks effectively, and NLP researchers are increasingly using transfer learning with self-attention modules to build multitask models.
OpenAI's researchers combined these two ideas: starting from a more general dataset, they applied transfer learning with self-attention modules and obtained a model that can perform many different NLP tasks in the zero-shot setting without adjusting any parameters or the model architecture. That model is GPT-2.
Given its power and the risk of misuse, OpenAI has not released the GPT-2 model and code. It has released only a sample model with 117M parameters, together with code, for interested researchers to study and reference: https://github.com/openai/gpt-2
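As an illustration of what the released small model can do, the snippet below samples a continuation from it. This is our own sketch using the Hugging Face transformers port of the checkpoint (model name "gpt2"), not the TensorFlow scripts in the openai/gpt-2 repository; the prompt and sampling settings are arbitrary.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

# Hugging Face's "gpt2" checkpoint corresponds to the small released model.
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "In a shocking finding, scientists discovered"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

with torch.no_grad():
    output = model.generate(
        input_ids,
        max_length=60,                        # prompt + continuation, in tokens
        do_sample=True,                       # sample instead of greedy decoding
        top_k=40,                             # restrict sampling to the 40 most likely tokens
        pad_token_id=tokenizer.eos_token_id,  # silence the missing-pad-token warning
    )

print(tokenizer.decode(output[0], skip_special_tokens=True))
```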
OpenAI also does not describe GPT-2's architecture in detail this time, and has set aside six months to gather feedback from the research community. In the released paper, "Language Models are Unsupervised Multitask Learners", OpenAI's researchers explain the ideas and methods behind the model.
As for the compute used, the paper does not say. Based on the Twitter figures mentioned above, the model was trained on 256 Google Cloud TPU v3 chips, although the training time was not disclosed. Outside Google, TPU v3 is only available as individual devices (although OpenAI may have obtained special access), which works out to 256 × $8 = $2,048 per hour.
What follows is OpenAI's own presentation of the results; the paper itself gives the full details.
We trained and benchmarked four language models, whose sizes are shown in the table below:
Architectures and hyperparameters of the four model sizes
The smallest model is equivalent to the original GPT, and the second smallest is equivalent to the largest BERT model. Our largest model is GPT-2, which has an order of magnitude more parameters than GPT.
GPT-2 achieves state-of-the-art scores on a variety of domain-specific language modeling tasks. Our model is not trained on any data specific to these tasks and is only evaluated on them as a final test; this is the setting known as "zero-shot".
When evaluated on the same datasets, GPT-2 outperforms models trained on domain-specific datasets (such as Wikipedia, news, or books).
The table below shows all of our state-of-the-art zero-shot results.
(+) means a higher score is better for that metric; (-) means a lower score is better.
GPT-2 achieves SOTA results on all of these datasets
GPT-2 achieves state-of-the-art results on Winograd Schema, LAMBADA, and other language modeling tasks.
Zero-shot results of the four model sizes on each dataset.
As the table shows, WebText LMs transfer well across domains and datasets, improving the state of the art on 7 out of 8 datasets in the zero-shot setting.
Large improvements are seen on small datasets such as Penn Treebank and WikiText-2, which have only 1 to 2 million training tokens. Large improvements are also seen on datasets created to measure long-term dependencies, such as LAMBADA and the Children's Book Test.
Our model is still significantly worse than prior work on the One Billion Word Benchmark. This is likely because it is both the largest dataset and has some of the most destructive preprocessing: 1BW's sentence-level shuffling removes all long-range structure.
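For readers unfamiliar with how such zero-shot language modeling numbers are obtained, the sketch below illustrates the idea: measure how well the pretrained model predicts held-out text from a target domain, with no fine-tuning at all. It is our own illustration using the Hugging Face port of the small released model, and the two text snippets are placeholders rather than data from any benchmark; published benchmarks also apply dataset-specific tokenization details, so numbers computed this way are only indicative.

```python
import math
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

# Placeholder stand-ins for held-out sentences from a target domain.
texts = [
    "The torch relay lasted 129 days and covered 137,000 kilometres.",
    "The replica cave was built a few miles from the original site.",
]

total_loss, total_tokens = 0.0, 0
with torch.no_grad():
    for text in texts:
        ids = tokenizer(text, return_tensors="pt").input_ids
        # Passing labels=ids makes the model return the average next-token cross-entropy.
        loss = model(ids, labels=ids).loss
        n = ids.size(1) - 1              # number of tokens the model actually predicts
        total_loss += loss.item() * n
        total_tokens += n

# Lower perplexity means the model finds the domain's text less surprising.
print("perplexity:", math.exp(total_loss / total_tokens))
```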
On other language tasks, such as question answering, reading comprehension, summarization, and translation, we obtain surprisingly good results without any fine-tuning of the model, simply by prompting the trained model in the right way (the examples below show both the approach and the results), although we do not yet match the state of the art of dedicated systems on these tasks.
1. Reading comprehension: answer questions about a given passage
Dataset: CoQA
Example
The 2008 Summer Olympics torch relay was run from March 24 until August 8, 2008, prior to the 2008 Summer Olympics, with the theme of “one world, one dream”. Plans for the relay were announced on April 26, 2007, in Beijing, China. The relay, also called by the organizers as the “Journey of Harmony”, lasted 129 days and carried the torch 137,000 km (85,000 mi) – the longest distance of any Olympic torch relay since the tradition was started ahead of the 1936 Summer Olympics.
After being lit at the birthplace of the Olympic Games in Olympia, Greece on March 24, the torch traveled to the Panathinaiko Stadium in Athens, and then to Beijing, arriving on March 31. From Beijing, the torch was following a route passing through six continents. The torch has visited cities along the Silk Road, symbolizing ancient links between China and the rest of the world. The relay also included an ascent with the flame to the top of Mount Everest on the border of Nepal and Tibet, China from the Chinese side, which was closed specially for the event.
Q: What was the theme?
A: “one world, one dream”.
Q: What was the length of the race?
A: 137,000 km
Q: Was it larger than previous ones?
A: No
Q: Where did the race begin?
A: Olympia, Greece
Q: Is there anything notable about that place?
A: birthplace of Olympic Games
Q: Where did they go after?
A: Athens
Q: How many days was the race?
A: seven
Q: Did they visit any notable landmarks?
A: Panathinaiko Stadium
Q: And did they climb any mountains?
A:
Target answers: unknown or yes
Model answer: Everest
Performance:
On reading comprehension, GPT-2 is worse than DrQA+PGNet and far below human performance.
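The paper reports that this behaviour is obtained purely through prompting: GPT-2 is conditioned on the passage, the conversation so far, and a final "A:", and its greedy continuation is read off as the answer. The helper below is our own sketch of how such a prompt can be assembled; the exact formatting OpenAI used may differ in detail.

```python
def build_coqa_prompt(passage, history, question):
    """Concatenate the passage, previous Q/A turns, and the new question,
    ending with 'A:' so the model's continuation can be read as the answer."""
    lines = [passage, ""]
    for q, a in history:                     # history: list of (question, answer) pairs
        lines.append(f"Q: {q}")
        lines.append(f"A: {a}")
    lines.append(f"Q: {question}")
    lines.append("A:")
    return "\n".join(lines)

prompt = build_coqa_prompt(
    passage="The 2008 Summer Olympics torch relay was run from March 24 until August 8, 2008 ...",
    history=[("What was the theme?", '"one world, one dream".')],
    question="What was the length of the race?",
)
print(prompt)
# The prompt is fed to the language model and decoded greedily; the generated
# text up to the first line break is taken as the model's answer.
```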
2. Common sense reasoning: resolving an ambiguous pronoun
Dataset: Winograd Schema Challenge
Example
The trophy doesn’t fit into the brown suitcase because it is too large.
Correct answer: it = trophy
Model answer: it = trophy

The trophy doesn’t fit into the brown suitcase because it is too small.
Correct answer: it = suitcase
Model answer: it = suitcase
Performance
On common sense reasoning, GPT-2 outperforms the previous SOTA.
3. Question answering
Dataset: Natural Questions
Example
Who wrote the book the origin of species?
Correct answer: Charles Darwin
Model answer: Charles Darwin

What is the largest state in the U.S. by land mass?
Correct answer: Alaska
Model answer: California
Performance:
On question answering, GPT-2 performs far below BERT.
4. Language modeling of broad contexts: predict the last word of a passage
Dataset: LAMBADA
Example
Both its sun-speckled shade and the cool grass beneath were a welcome respite after the stifling kitchen, and I was glad to relax against the tree’s rough, brittle bark and begin my breakfast of buttery, toasted bread and fresh fruit. Even the water was tasty, it was so clean and cold. It almost made up for the lack of…
Correct answer: coffee
Model answer: food
Performance
On language modeling of broad contexts, GPT-2 outperforms the previous SOTA models.
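Mechanically, this task is just next-word prediction again: the model is given the passage without its final word and asked which word it expects next. Below is our own rough sketch of that procedure using the Hugging Face port of the small model; note that it only scores single tokens, whereas the real target word may span several BPE tokens, and the paper additionally filters out stop-words that cannot be valid final words.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

context = ("Even the water was tasty, it was so clean and cold. "
           "It almost made up for the lack of")

ids = tokenizer(context, return_tensors="pt").input_ids
with torch.no_grad():
    next_token_logits = model(ids).logits[0, -1]   # scores for the very next token

top = torch.topk(next_token_logits, k=5).indices
print([tokenizer.decode([int(t)]) for t in top])   # the model's top candidate words
```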
5. Summarization: summarize news articles
Dataset: CNN and Daily Mail dataset
Example text:
Prehistoric man sketched an incredible array of prehistoric beasts on the rough limestone walls of a cave in modern day France 36,000 years ago.
Now, with the help of cutting-edge technology, those works of art in the Chauvet-Pont-d’Arc Cave have been reproduced to create the biggest replica cave in the world.
…
Reference summary:
Cave mimics famous Caverne du Pont-d’Arc in France, the oldest cave decorated by man and the best preserved. The replica contains all 1,000 paintings which include 425 such as a woolly rhinoceros and mammoths. Minute details were copied using 3D modelling and anamorphic techniques, often used to shoot widescreen images. The modern cave also includes replica paw prints of bears, bones and details preserved in the original cave.
Model-written summary:
The original site in Vallon-Pont-D’arc in Southern France is a Unesco World Heritage site and is the oldest known and the best preserved cave decorated by man. The replica cave was built a few miles from the original site in Vallon-Pont-D’Arc in Southern France. The cave contains images of 14 different species of animals including woolly rhinoceros, mammoths, and big cats.
Performance
On summarization, GPT-2 performs worse than dedicated systems.
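The summarization behaviour is induced with nothing more than a prompt: the paper appends the text "TL;DR:" after the article and samples roughly 100 tokens as the summary. The snippet below is our sketch of that recipe via the Hugging Face port; the sampling settings mirror the paper's description (top-k sampling with k = 2) but should be treated as illustrative.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

article = "Prehistoric man sketched an incredible array of prehistoric beasts ..."
prompt = article + "\nTL;DR:"                 # the hint that induces summarization

ids = tokenizer(prompt, return_tensors="pt").input_ids
with torch.no_grad():
    out = model.generate(
        ids,
        max_new_tokens=100,                   # roughly the summary budget used in the paper
        do_sample=True,
        top_k=2,                              # top-k sampling with k = 2, as described in the paper
        pad_token_id=tokenizer.eos_token_id,
    )

summary = tokenizer.decode(out[0][ids.size(1):], skip_special_tokens=True)
print(summary)
```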
6. Machine translation: translate French sentences into English
Dataset: WMT-14 Fr-En
Example
French sentence:
Un homme a expliqué que l’opération gratuite qu’il avait subie pour soigner une hernie lui permettrait de travailler à nouveau.
Reference translation:
One man explained that the free hernia surgery he’d received will allow him to work again.
Model translation:
A man told me that the operation gratuity he had been promised would not allow him to travel.
Performance
On French-to-English machine translation, GPT-2 performs worse than dedicated systems.
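Translation is induced the same way, by conditioning on a pattern: the paper prefixes the prompt with a few example pairs in a "french sentence = english sentence" format and ends with the sentence to translate followed by "=", reading the greedy continuation as the translation. The helper below is our own sketch; the demonstration pairs are placeholders, not the ones OpenAI used.

```python
def build_translation_prompt(examples, source):
    """Condition the model on 'french = english' example pairs, then ask it to
    complete the same pattern for a new French sentence."""
    lines = [f"{fr} = {en}" for fr, en in examples]
    lines.append(f"{source} =")
    return "\n".join(lines)

prompt = build_translation_prompt(
    examples=[
        ("Je suis fatigué.", "I am tired."),            # placeholder demonstration pairs
        ("Où est la gare ?", "Where is the station?"),
    ],
    source=("Un homme a expliqué que l'opération gratuite qu'il avait subie "
            "pour soigner une hernie lui permettrait de travailler à nouveau."),
)
print(prompt)
# The model's continuation, up to the first line break, is taken as the English translation.
```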
We believe that, because these tasks are a subset of general language modeling, we can expect performance to improve further with more compute and data. Other researchers have published similar hypotheses. We also expect fine-tuning to improve performance on downstream tasks, although this still requires thorough experimentation.