OpenAI, a research organization focused on artificial intelligence, looks for ways to build and improve AI-driven applications. Its work in natural language processing has produced a model called GPT-2. GPT-2 can generate plausible text: it continues passages started by humans, answers questions, summarizes documents, and performs reading comprehension tasks.
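At its core, GPT-2 is a language model: it repeatedly predicts a plausible next word given the words that came before it. The toy bigram Markov chain below sketches that same idea at a vastly smaller scale. This is only an illustration of the principle, not OpenAI's method (GPT-2 is a large Transformer neural network), and the helper names here are invented for the sketch.

```python
import random
from collections import defaultdict

def train_bigram_model(text):
    """Record, for each word, which words follow it in the training text."""
    words = text.split()
    model = defaultdict(list)
    for prev, nxt in zip(words, words[1:]):
        model[prev].append(nxt)
    return model

def generate(model, start, length=10, seed=0):
    """Continue a prompt by repeatedly sampling an observed next word."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length):
        candidates = model.get(out[-1])
        if not candidates:  # dead end: no word ever followed this one
            break
        out.append(rng.choice(candidates))
    return " ".join(out)
```

Trained on a whole internet's worth of text instead of a few sentences, and with a neural network in place of a lookup table, the same predict-the-next-word loop yields the fluent paragraphs GPT-2 produces.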
In one example of GPT-2’s capabilities, researchers fed the model a short paragraph stating that English-speaking unicorns had been discovered in the Andes. GPT-2 took that prompt and added nine more paragraphs describing the discovery in great detail, in virtually perfect grammar. You can read the entire example on OpenAI’s website.
Having been trained on text drawn from a large collection of web pages, the model takes text input from humans and responds appropriately, sometimes even with facts. That said, flaws such as repetitive text and abrupt topic changes keep the design from being flawless. GPT-2 could nonetheless offer significant value in many applications, including dialogue agents, speech recognition systems, machine translation, and AI writing assistants.
OpenAI has not released the full model, citing “concerns about malicious application of the technology”. Here’s why that matters:
In a table comparing GPT-2, humans, and previous records from other systems, GPT-2 came closer to human accuracy than any earlier model. Each iteration of AI technology, it seems, gets closer to replicating human interaction. Is it simply a matter of time before models like GPT-2 are perfected? And once they are, does the technology pose an underlying threat that might offset the benefits it offers in, say, social media interaction, improved FAQs, and quick answers to complex problems? That may all be for the good. But how will people tell responses from real people apart from responses generated by GPT-2? Will the model’s flaws mentioned in OpenAI’s article be the key tells that a response was not written by a person? GPT-2 could be beneficial. But in the wrong hands, could such technology do more harm than good? Would it push human response roles out of the job market?
I guess that’s for us to decide.