Grad students replicated ‘risky’ OpenAI software

A duo of grad students has recreated artificial intelligence software that OpenAI refused to release to the public because it could potentially be used by malicious individuals and organizations to generate fake news. The researchers did it anyway to take a stand against huge tech labs.

Partners Aaron Gokaslan, 23, and Vanya Cohen, 24, who graduated with master's degrees in computer science, have released open-source code for an algorithm first developed by OpenAI, the artificial intelligence laboratory co-founded by Elon Musk. The AI was developed to model language so fluently that it can auto-generate text.

“Recently, large language models like BERT¹, XLNet², GPT-2³, and Grover⁴ have demonstrated impressive results in generating text and on multiple NLP tasks. Since OpenAI has not released their largest model at this time (but has released their 774M param model), we seek to replicate their 1.5B model to allow others to build on our pre-trained model and further improve it,” the researchers said in a blog post.

OpenAI, one of the first tech companies to achieve such a breakthrough, said that it would not release the full version of the artificial intelligence it had developed because doing so was “too risky.” The lab said the language AI generates text so fluently that malicious actors could use it to produce fake news and carry out misinformation campaigns.

“Research from AI2/UW has shown that news written by a system called ‘GROVER’ can be more plausible than human-written propaganda. These research results make us generally more cautious about releasing language models,” the company said in a report.

However, the duo believes that the technology poses no real-world risk, at least not yet. They recreated the withheld AI to send the message that developers do not need an elaborate, elitist lab to develop and train high-functioning artificial intelligence software.

The open-source version of the artificial intelligence that the two researchers developed and pre-trained is now posted online for anyone interested to download and use.
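To give a sense of what “download and use” means in practice, here is a minimal sketch using the Hugging Face transformers library to load a GPT-2-style checkpoint and generate text. The `gpt2` model name below is an illustrative stand-in referring to OpenAI's publicly released weights, not necessarily the identifier under which the pair published their replication:

```python
# Minimal sketch: load a GPT-2-style pre-trained model and generate text.
# "gpt2" is an illustrative checkpoint name (OpenAI's public release),
# not necessarily the identifier for the duo's replicated 1.5B model.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = "Researchers announced today that"
input_ids = tokenizer.encode(prompt, return_tensors="pt")

# Sample a continuation; top-k/top-p sampling keeps the output coherent.
output = model.generate(
    input_ids,
    max_length=60,
    do_sample=True,
    top_k=50,
    top_p=0.95,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```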

Cohen and Gokaslan wanted to prove that an elite AI laboratory like OpenAI's is not necessary for experts to create, develop, and train useful AI software. The pair leveraged only about $50,000 worth of free cloud computing from Google, which subsidizes academic institutions, along with their own expertise to recreate what Elon Musk's company had built.

They believe that releasing the code for the breakthrough for free can advance knowledge and innovation, since other researchers can use it as a baseline for their own work. The broader tech community can also use it to prepare for the technology's future effects.

“This allows everyone to have an important conversation about security and researchers to help secure against potential future abuses,” says Cohen, who notes that language software also has many positive uses. “I’ve gotten scores of messages, and most of them have been like, ‘Way to go.’”

The researchers said that OpenAI’s decision to delay the release of its language software rested heavily on the assumption that it could not be easily replicated, and that only those with a “high degree of specialized domain knowledge” could do so. Their work, however, showed that the software could be replicated by two grad students without specialized backgrounds in language modeling.

“We base our implementation off of the Grover model⁴ and modify their codebase to match the language modeling training objective of GPT-2. Since their model was trained on a similarly large corpus, much of the code and hyper-parameters proved readily reusable. We did not substantially change the hyper-parameters from Grover,” the researchers explained.
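For readers unfamiliar with that objective: GPT-2 is trained simply to predict each next token from the ones before it. Below is a simplified PyTorch sketch of that next-token cross-entropy loss; it illustrates the objective itself, not the pair's actual Grover-derived codebase:

```python
import torch
import torch.nn.functional as F

def causal_lm_loss(logits: torch.Tensor, input_ids: torch.Tensor) -> torch.Tensor:
    """GPT-2-style language modeling objective: predict token t+1 from tokens <= t.

    logits:    (batch, seq_len, vocab_size) output of a left-to-right transformer
    input_ids: (batch, seq_len) token ids the model was fed
    """
    # Position t's prediction is scored against the token at position t+1,
    # so drop the last prediction and the first label before comparing.
    shift_logits = logits[:, :-1, :].contiguous()
    shift_labels = input_ids[:, 1:].contiguous()
    return F.cross_entropy(
        shift_logits.view(-1, shift_logits.size(-1)),
        shift_labels.view(-1),
    )
```

Because Grover's training loss has essentially this same form, much of its code and hyper-parameters could, as the duo notes, be reused with little change.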

In their testing, the duo found that their replication yielded only an insignificant difference from the model OpenAI developed.

OpenAI released a report on Tuesday saying it was aware of more than five other groups that had replicated its work at full scale, but that none had released the software. The duo took the report as validation that what they released to the public could not be more dangerous than what the company had already released, if it was indeed dangerous at all.
