Image of a red radar screen with a futuristic HUD and a digital world map.

Things You Should Know But Don’t: The Future of Killer Robots

Posted April 24, 2023

Can laws keep up with the rapid advances in technology and artificial intelligence (AI)? Most nations have struggled with this question since the turn of the century ushered in the era of personal computers and the internet. Today, powerful technology is readily available to virtually everyone, and the phones in our pockets have more computing capacity than the most advanced PCs of the 80s and 90s. The concept of net neutrality, the principle that internet service providers must not discriminate between different types of data they deliver to users, wasn't even articulated until 2003, well after many homes had internet access, and it remains a point of contention between service providers and end users today.

In the past, I’ve written about how laws around biometric data collection vary from country to country and aren’t consistent among the states within the U.S. More recently, I wrote about the use of AI in warfare, such as with unmanned drones. While there have been conversations about how to handle AI-controlled weapons, Human Rights Watch says that as of February 2023, “there is still no U.S. government-wide policy on autonomous weapons systems or governing their use in law enforcement, border control, or other circumstances outside of armed conflict.”

So, what is being done about the obvious risks around the increased use of AI, specifically AI weapons, while laws are continuously slowed by the bureaucratic process? The good news is that many people recognize that the use of robotics against people can result in grievous harm, and that now includes the companies that make them. AI and robotics are increasingly accessible to consumers, so their purpose and use are becoming a growing concern for the private companies that manufacture them. Boston Dynamics, for example, builds highly advanced robots for many different purposes. In 2022, it issued a statement, signed by six of the world’s leading robotics manufacturers, asserting that robots should not be weaponized and acknowledging that the release of autonomously operated robots increases the risk of dangerous applications. The signers also pledge that they will “carefully review [their] customers’ intended applications to avoid potential weaponization.” One important caveat: these self-imposed limitations do not cover robots that, according to the statement’s signers, are used by government agencies to defend themselves and uphold the law.

By issuing statements like these, companies are trying to maintain trust among their customer base, but they are limited in their ability to enforce such commitments. They can vet their everyday consumers, but that vetting has its own limits, and it’s unlikely they will investigate how customers ultimately use their products. They also make it very clear that they will continue selling robotics to the government for policing and warfare purposes. And, of course, there are many other companies that make robots, none of which have signed a similar pledge. Without public response or government oversight, society is depending on self-regulation among manufacturers to ensure safety.

So should laws be passed to address these fears? Are existing laws sufficient? Unfortunately, enacting legislation is slow-moving and often impractical. Worse, in a legislative rush to solve a problem, the laws that get passed can produce unintended consequences that harm the rights of consumers. With government and law enforcement so far behind, and a public that seems fascinated by the advances in AI, what is the best way to keep the field of robotics and AI in check? Luckily, there are private organizations working to increase awareness and urging push-back against widespread use, particularly with respect to AI weaponry.

For instance, Stop Killer Robots is an international campaign that “works to ensure human control in the use of force.” It urges politicians to propose and enact laws quickly and to monitor the state of autonomous decision-making. By demanding that humans remain in control of the decision to use force in policing or warfare, the campaign hopes to ensure that humans also remain responsible and accountable for their actions. The Electronic Frontier Foundation, a nonprofit that advocates for digital rights, has likewise objected to companies and laws that apply AI to law enforcement in ways that can do more harm than good, particularly in terms of personal online privacy and freedom.

The Future of Life Institute has released a petition with more than 27,000 signatures from tech leaders and concerned consumers, including Elon Musk, Steve Wozniak, Andrew Yang, and other pioneers in AI applications, calling on “all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4. This pause should be public and verifiable, and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium.”

It is undoubtedly positive that tech companies recognize the possible harm that can come with their advancements. But it is also important that the public and government do the hard work of keeping AI developers in check. Educating yourself about local laws around digital rights and reaching out to your representatives can make a big difference in the progress of legal restraints, and in protecting against the unfettered competition to build the smartest AI-driven robot or mechanical application. As always, make sure to stay informed and speak up.
