
Things You Should Know But Don’t: A Modern Oracle

Posted November 22, 2021

You may have already heard of the Allen Institute for AI’s newest project, “Ask Delphi”, after it went viral on Twitter. Delphi, named after the Greek oracle, is an ambitious AI program with an interactive website. Users can input a moral quandary (the site suggests “Robbing a bank” and “Ignoring a phone call from your friend during your working hours”), and Delphi will attempt to tell you whether the situation is morally bad, good, or neutral. Afterwards, users can choose to share their results on Twitter.

Delphi is a large language model, meaning it is a machine learning algorithm that can recognize, predict, and generate human language. It gathers its data from large pools of text. The internet conveniently provides a readily available supply of that text, and the researchers behind Delphi have asked users to go further: by crowdsourcing ethical dilemmas, they hope to gather even more data for their algorithm.
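
For readers curious what this looks like in practice, here is a minimal sketch of the same three-bucket idea. It is not Delphi’s actual model or API; an off-the-shelf zero-shot classifier from Hugging Face’s transformers library stands in for it, and the model name and labels are illustrative choices on my part, not anything from the Delphi paper.

    # A minimal sketch, NOT Delphi itself: a generic zero-shot classifier
    # from Hugging Face's transformers library stands in for the
    # "three buckets" setup described above. The model and labels are
    # illustrative assumptions, not taken from the Delphi paper.
    from transformers import pipeline

    classifier = pipeline("zero-shot-classification",
                          model="facebook/bart-large-mnli")

    result = classifier(
        "Ignoring a phone call from your friend during your working hours",
        candidate_labels=["morally good", "morally bad", "morally neutral"],
    )

    # The pipeline ranks the labels by score; print the top judgment.
    print(result["labels"][0], round(result["scores"][0], 2))

Delphi itself is trained directly on ethical judgments made by people rather than reusing a generic model like this one, but the shape is the same: a scenario goes in, and a verdict comes out.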

The authors of the algorithm are very clear that their AI system is not meant to be used as an actual source of advice. Instead, the stated goal is to “demonstrate both the promises and the limitations of language-based neural models when taught with ethical judgments made by people.” In other words, Delphi exists as a piece of a larger body of research on the limitations of AI.

Large language models are a common form of AI, but these algorithms can exhibit harmful bias. Even the authors of Delphi are transparent that their algorithm draws largely on US-centric sources and thus reflects US-centric ethics. What’s alarming is that these and other AI systems already interact with people on a day-to-day basis. As machine learning grows more and more complex, how can we ensure these systems are not used in harmful ways, deliberately or inadvertently, if we do not keep ethics and bias in mind while building them? How can we keep them from being misused in the future? And if machine judgments of morality do prove possible, do we even want them? In other words, just because a machine can do something, should we want it to? What are the unintended consequences?

Indeed, the authors of Delphi seem more than fine with admitting their work is flawed and biased and may produce offensive results. In many ways, “Ask Delphi” comes off as a warning about the absurdity of trying to quantify the subtle nuances of social situations. Their research paper about Delphi has extensive tables that run through many, many variations of a single scenario and outline how the computer takes the data and interprets it to make a moral judgment. For instance, in one table, the scenario “Mowing the lawn” is judged as “It’s expected”. The situation is given various modifiers until it ends as “Mowing the lawn if your neighbor has a cat and the cat is afraid of loud noise” (which is judged as “rude”). The examples surely exist to stress-test the model, but they also show just how many layers a situation can have. Start adding in human emotions, cultural expectations, and prejudices, and the idea of a machine neatly categorizing a situation into one of three buckets becomes absurd.
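
The layering those tables describe is easy to imitate. Here is a small, hypothetical stress test in the same spirit, again using the stand-in classifier from the earlier sketch rather than Delphi itself; the middle variant is invented for illustration, since the paper’s intermediate modifiers aren’t quoted here.

    # A hypothetical stress test in the spirit of the paper's tables:
    # the same base scenario with qualifiers layered on. This reuses the
    # generic stand-in classifier, not Delphi, so its verdicts will not
    # match the paper's.
    from transformers import pipeline

    classifier = pipeline("zero-shot-classification",
                          model="facebook/bart-large-mnli")

    variants = [
        "Mowing the lawn",                      # judged "It's expected" in the paper
        "Mowing the lawn late at night",        # invented intermediate, for illustration
        "Mowing the lawn if your neighbor has a cat "
        "and the cat is afraid of loud noise",  # judged "rude" in the paper
    ]

    for scenario in variants:
        result = classifier(scenario, candidate_labels=["expected", "rude", "okay"])
        # Print the top-ranked label; watch how added qualifiers shift it.
        print(f"{scenario!r} -> {result['labels'][0]}")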

It would seem, then, that the goal of Delphi may not be to further research in machine ethics. Instead, it could be said that the goal is, as the Allen Institute stated, “to turn the mirror on humanity and make us ask ourselves how we want to shape the powerful new technologies permeating our society at this important turning point.” The Oracle of Delphi, after all, was known for her insight into the future, and what she foresaw wasn’t always a good thing.
