GPT-2 is now roaming the wild

OpenAI finally releases “dangerous” large-scale unsupervised language model GPT-2

Maika Möbus

The full version of GPT-2 is now publicly available, following nearly nine months of heated debates and some smaller model releases. The large-scale unsupervised language model was kept under lock and key for this long as it was deemed too dangerous—a controversial decision that led to backlash from the open source community.

The capped-profit organization OpenAI has released the full 1.5 billion parameter version of GPT-2, a large-scale unsupervised language model for automatic text generation. The powerful model is capable of generating text that can trick people into thinking it was written by a human author. Let’s take a look at how GPT-2 works and whether there are methods to detect the generated texts.

SEE ALSO: The Limitations of Machine Learning

Features of GPT-2

GPT-2 was first announced on the OpenAI blog in February 2019, alongside some impressive examples. Trained on text from eight million web pages, the large-scale unsupervised language model (LM) was designed to predict the next word and thereby produce coherent text.

The language model requires a short thematic input on which it bases the text, adapting to the input’s style. It can then sustain a convincingly human quality of writing for more than a page. The name GPT-2 was chosen because it is the successor to OpenAI’s GPT (Generative Pre-Training) model, released in June 2018.
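Next-word prediction of this kind can be illustrated with a toy model. The sketch below is not OpenAI’s code; it is a minimal bigram model that simply picks the most frequent follower of the last word, whereas GPT-2 conditions on the entire preceding text with a large neural network:

```python
from collections import Counter, defaultdict

def train_bigram(corpus):
    """Count, for each word, how often each following word occurs."""
    followers = defaultdict(Counter)
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        followers[prev][nxt] += 1
    return followers

def predict_next(followers, word):
    """Return the most frequent follower of `word`, or None if unseen."""
    if word not in followers:
        return None
    return followers[word].most_common(1)[0][0]

corpus = "the unicorn spoke english and the unicorn ran away"
model = train_bigram(corpus)
print(predict_next(model, "the"))  # prints "unicorn" (it follows "the" twice)
```

Scaling this idea from simple bigram counts to a 1.5 billion parameter neural network that weighs the whole preceding context is, in essence, what lets GPT-2 stay coherent over entire paragraphs.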

Full 1.5B parameter version released

OpenAI announced the release of the full GPT-2 model with 1.5B parameters in a blog post this week. The release includes the code as well as the model weights, as this should facilitate detection of GPT-2 outputs.

Along with the release announcement, OpenAI shared some of their findings over the last months, including the results of a survey at Cornell University. In the survey, humans gave the full 1.5B model a “credibility score” of 6.91 out of 10, compared to 6.72 for the 774M model. This relatively small increase in credibility influenced OpenAI’s decision to release the full model.

The now-famous unicorn example

Among the samples in the initial announcement of GPT-2 was a short, human-written input—on the discovery of a herd of unicorns no less—that led to a coherent text with added fictional background information.

This was the human-written system prompt:

In a shocking finding, scientist discovered a herd of unicorns living in a remote, previously unexplored valley, in the Andes Mountains. Even more surprising to the researchers was the fact that the unicorns spoke perfect English.

And this is what GPT-2 made of it after ten tries, coming up with the researcher’s name and fictional quotes:

The scientist named the population, after their distinctive horn, Ovid’s Unicorn. These four-horned, silver-white unicorns were previously unknown to science.

Now, after almost two centuries, the mystery of what sparked this odd phenomenon is finally solved.

Dr. Jorge Pérez, an evolutionary biologist from the University of La Paz, and several companions, were exploring the Andes Mountains when they found a small valley, with no other animals or humans. Pérez noticed that the valley had what appeared to be a natural fountain, surrounded by two peaks of rock and silver snow.

Pérez and the others then ventured further into the valley. “By the time we reached the top of one peak, the water looked blue, with some crystals on top,” said Pérez.

Pérez and his friends were astonished to see the unicorn herd. These creatures could be seen from the air without having to move too much to see them – they were so close they could touch their horns.

While examining these bizarre creatures the scientists discovered that the creatures also spoke some fairly regular English. Pérez stated, “We can see, for example, that they have a common ‘language,’ something like a dialect or dialectic.”

Dr. Pérez believes that the unicorns may have originated in Argentina, where the animals were believed to be descendants of a lost race of people who lived there before the arrival of humans in those parts of South America.

While their origins are still unclear, some believe that perhaps the creatures were created when a human and a unicorn met each other in a time before human civilization. According to Pérez, “In South America, such incidents seem to be quite common.”

However, Pérez also pointed out that it is likely that the only way of knowing for sure if unicorns are indeed the descendants of a lost alien race is through DNA. “But they seem to be able to communicate in English quite well, which I believe is a sign of evolution, or at least a change in social organization,” said the scientist.

Altogether, OpenAI shared eight text samples on their blog.

Safety concerns around GPT-2

OpenAI deemed this convincing text generation too dangerous and decided to withhold the full model, voicing concerns that GPT-2 could be used to generate fake news or be abused for phishing, identity theft, and the manipulation of social media content. This decision, which some felt went against “Open”AI’s guidelines, was not well received within the open source community.

Only a smaller GPT-2 version with 124 million parameters was made publicly available in February, first referred to as 117M due to an error in calculation. OpenAI did, however, plan to reevaluate their decision after six months, which led to a staged release of gradually larger models. Until recently, it was unclear whether the full model was going to be released.

GPT-2 detection

Is there a way to spot GPT-2 generated texts? A team of researchers at the MIT-IBM Watson AI Lab and Harvard NLP started working on this issue a while back. They developed the Giant Language model Test Room (GLTR), which is designed to give a visual indication of whether a text was generated by a language model or written by a human. It was built on the first released small version of GPT-2 and still has limited abilities, as it is not meant to analyze long texts.

The researchers based their tool on the assumption that an LM will, more often than a human author, choose a word that the model itself ranks as likely given the preceding text. Each word is highlighted according to its rank: the top 10 most likely words in green, the top 100 in yellow, the top 1,000 in red, and less likely words in purple.
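This bucketing can be sketched in a few lines. The thresholds follow the article; the function name is illustrative and not part of GLTR’s actual code:

```python
def rank_color(rank):
    """Map a word's rank under the language model to GLTR's color buckets.

    `rank` is 1-based: 1 means the model's single most likely next word.
    """
    if rank <= 10:
        return "green"
    if rank <= 100:
        return "yellow"
    if rank <= 1000:
        return "red"
    return "purple"

# A mostly green/yellow text suggests machine generation; frequent
# purple words point to a human author.
print([rank_color(r) for r in (1, 50, 500, 5000)])
# prints ['green', 'yellow', 'red', 'purple']
```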

As can be seen in the unicorn text, two words in the human input were marked purple, while the GPT-2 text is mostly green:

Source: gltr.io

You can check out the GLTR demo in the browser and enter a sample text.

Meanwhile, OpenAI has been working on a detection model of its own, which achieves detection rates of around 95% for text generated by the full GPT-2 model. Although releasing it could help adversaries evade detection, OpenAI is making the detector public, arguing that it is not yet accurate enough on its own and will benefit from further research.

OpenAI’s journey from nonprofit to capped-profit

OpenAI was originally founded in 2015 as a nonprofit organization, with a starting capital of one billion dollars. Among the founding members were Sam Altman and Tesla, Inc. founder Elon Musk, who later left the organization to avoid a conflict of interest.

In March 2019, OpenAI went through some changes as it switched to a capped-profit model under the name OpenAI LP. OpenAI now refers to OpenAI LP, while the original version was renamed to OpenAI Nonprofit. OpenAI is now governed by the board of OpenAI Nonprofit, which includes employees Greg Brockman (Chairman & CTO), Ilya Sutskever (Chief Scientist), and Sam Altman (CEO).

SEE ALSO: What Twitter and Facebook can teach us about machine learning

A few months after the organizational change, Microsoft invested one billion dollars in OpenAI to support the quest for artificial general intelligence (AGI). The mission of OpenAI, as stated on their website, is “to ensure that artificial general intelligence benefits all of humanity.” They offer an overview of their various AI projects, from the AI-powered musical composer “MuseNet” to solving a Rubik’s Cube with a robotic hand.

Author
Maika Möbus
Maika Möbus has been an editor for Software & Support Media since January 2019. She studied Sociology at Goethe University Frankfurt and Johannes Gutenberg University Mainz.
