Does Artificial Intelligence Have Ethics?


In 2016, Microsoft released a chatbot designed to interact with people over Twitter. Enabled with an AI routine that analyzed speech, the bot was supposed to show how machines could mimic human communication. Unfortunately, Microsoft had to remove the bot when it began tweeting racist and sexist comments; its AI engine had been flooded with hate speech from pranksters and other bad actors online. Now, the AI routine itself was certainly not sexist or racist; it was merely imitating speech based on the data it received. I'm sure this incident led to a lot of jokes about how AI-enabled machines will become evil geniuses bent on subjugating humanity. But I think what it really proves is that the real threat in our current generation of AI isn't the AI; it's ourselves.

I’ve previously written about how the fear that AI-enabled machines will make human work irrelevant is unfounded. In short, while humans aren’t as fast or accurate as computers when it comes to analyzing data, computers are no match for us when it comes to the informed application of that analysis, creativity, lateral thinking and emotional intelligence. We possess critical thinking and experience that today’s computers simply can’t replicate. Likewise, computers don’t have the capacity for ethical thinking that humans do. Despite its sophistication, AI at the end of the day is just another tool. And like any tool, it has no ethics of its own, yet it can be used for unethical ends when wielded by an unethical, unscrupulous or simply ignorant person. More often than not, however, AI behaves unethically because it hasn’t been trained properly.

Take AI-powered vision systems, for example. Recent research revealed that a popular facial recognition software platform had “much higher error rates in classifying the gender of darker-skinned women than for lighter-skinned men.” Is the AI racist? No, but it does replicate the institutional biases inherent in society. In this case, the training data used to teach the AI algorithm to identify faces consisted mostly of white male faces. Accordingly, the algorithm performed better on the faces with which it had more experience. Now imagine law enforcement using this AI-enabled vision technology to scan crowds in busy public spaces for the face of a wanted criminal. If that criminal were a person of color, the chances of the AI incorrectly identifying an innocent person of color as the criminal would be higher than they would be for a white criminal.
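To make that gap concrete, here is a minimal sketch, in Python, of the kind of audit a data science team might run before trusting such a system: it simply tallies error rates per demographic group on a labeled evaluation set. The group names, the record format and the sample numbers are illustrative assumptions, not the methodology or data of the research cited above.

```python
from collections import defaultdict

def error_rate_by_group(records):
    """records: iterable of (group, predicted_label, true_label) tuples."""
    errors = defaultdict(int)
    totals = defaultdict(int)
    for group, predicted, actual in records:
        totals[group] += 1
        if predicted != actual:
            errors[group] += 1
    # Error rate per group: share of examples the model got wrong.
    return {group: errors[group] / totals[group] for group in totals}

# Made-up evaluation results mirroring the kind of disparity described above.
evaluation = [
    ("lighter_skinned_men", "male", "male"),
    ("lighter_skinned_men", "male", "male"),
    ("darker_skinned_women", "male", "female"),   # misclassified
    ("darker_skinned_women", "female", "female"),
]
print(error_rate_by_group(evaluation))
# {'lighter_skinned_men': 0.0, 'darker_skinned_women': 0.5}
```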

The science fiction author Isaac Asimov famously devised the Three Laws of Robotics. The laws set forth a simple ethical framework for how robots should interact with humans; they are fixed immutably in the robot’s positronic brain (its AI, in effect) and cannot be bypassed. In their entirety, the laws are:

  • A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  • A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  • A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

With those rules in place, humans could rest assured that their interactions with robots would be safe. Even if the robot had been ordered by a human to cause harm to another human, its “digital conscience” would stop it from carrying out the order. I believe a similar code of ethics should be applied when developing AI. Data scientists and IT professionals will need to review the results of their AI algorithm’s analysis not only for accuracy, but also for the fair and ethical application of that analysis.
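As a rough illustration of what such a “digital conscience” might look like in a data science workflow, the sketch below adds a release gate that refuses to deploy a model when the gap between per-group error rates exceeds an agreed-upon limit. The threshold value and the gate itself are assumptions for the sake of example, not an industry standard.

```python
def fairness_gate(error_rates, max_gap=0.02):
    """error_rates: dict mapping group name -> error rate on an evaluation set."""
    gap = max(error_rates.values()) - min(error_rates.values())
    if gap > max_gap:
        # Block the release rather than ship a model with a known disparity.
        raise RuntimeError(
            f"Deployment blocked: error-rate gap {gap:.3f} exceeds limit {max_gap}"
        )
    return True

# Usage: run after the usual accuracy checks, before the model ships.
# fairness_gate({"group_a": 0.03, "group_b": 0.12})  # raises, blocking release
```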

While this will take a great deal of work and collaboration between businesses, academia and end users, it is critical that we adopt ethical fail-safes when applying AI technology in our daily lives. AI is developing rapidly, and the risk that it could enable unethical behavior by its users, even unintentionally, is real. These fail-safes need to be defined now so they can be used to manage the performance of AI applications in the future.

This article originally appeared on MoneyInc.com on May 20, 2019.