The Potential Legal Issues of AI (And Why You Should Care)

We’ve had big dreams for artificial intelligence for years. From The Jetsons’ snarky robo-maid to androids dreaming of electric sheep to the more realistic promise of robot diagnostics in hospitals, we’ve been hearing about AI that’s truly useful to humans for some time.

As we move into an age where we’re seeing more and more AI in our daily lives, we have to start looking into the legal implications of these new technologies. Will AI be an asset or a liability? Let’s look at some problems already in the making.

Hold on, what is AI?

Is it robots ready to take over the world? Is it Siri? Is it military bots? Is it Tom Servo and Crow?

Mystery Science Theater 3000 robots saying “Have you guys thought about what you want for Christmas?”

Bet you thought I was too young for this joke, huh? Joke’s on you, this show’s on Netflix!

AI, or artificial intelligence, is the ability of a machine to be “smart.” If that sounds super broad, that’s because it is. We already have machines that we consider “smart” for their ability to make choices and evaluate situations using sophisticated programming.

The concept of AI is often conflated with machine learning, the ability of machines to build on their original programming and “learn.” I’m conflating the two terms myself in this article, just to make life easier for my readers.

In my defense, at least I’m being intentional.
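For the curious, here’s the distinction in miniature as a toy Python sketch. Both functions are hypothetical examples I made up, not any real product’s code: the first is “smart” purely through fixed programming, while the second estimates its behavior from examples.

```python
# A toy illustration of the two terms being conflated. Both functions
# are hypothetical; nothing here is any real product's code.

# "AI" in the broad sense: a machine acting "smart" through fixed,
# hand-written programming.
def thermostat_ai(temp_f: float) -> str:
    return "heat on" if temp_f < 68 else "heat off"

# Machine learning: the threshold isn't hard-coded; the program
# estimates it from examples of what the user actually did.
def learn_threshold(history: list[tuple[float, bool]]) -> float:
    """history holds (temperature, user_turned_heat_on) pairs."""
    turned_on = [t for t, heated in history if heated]
    stayed_off = [t for t, heated in history if not heated]
    # Split the difference between the observed behaviors.
    return (max(turned_on) + min(stayed_off)) / 2

print(thermostat_ai(65.0))                                           # heat on
print(learn_threshold([(60.0, True), (65.0, True), (72.0, False)]))  # 68.5
```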

If AI and machine learning allow robots and computer programs to be clever, learning, growing, and changing critters, at what point do we consider them aware enough to eavesdrop, to perform as a “witness,” or to have rights of their own?

Alexa Subpoenaed

We were all taking bets on how long it would take for this to happen, weren’t we? An ongoing court case has called Alexa, the default name given to the Amazon Echo household assistant, to the stand.

Well, OK, not literally. But data from an Amazon Echo was subpoenaed in a murder investigation. The thought is, if Alexa is listening, perhaps she heard (and recorded) an actual murder. That evidence would be as good as a security camera and better than a human witness in proving a crime, if only Amazon would fork over the intel.

Unsurprisingly, Amazon wasn’t thrilled about the precedent that seizing Alexa’s recorded information would set. The company is battling the warrant, arguing that communication with the device qualifies as protected speech and that surrendering the recordings could open the door to privacy violations for the Echo’s users.

And Alexa is listening, in case you were wondering what I meant. “Alexa” is the wake word for the Amazon Echo and Echo Dot. After hearing it, the Echo takes whatever’s said next and carries out the request. But to catch the wake word in the first place, the microphone has to be listening at all times. Amazon assures users that only post-wake-word phrases are sent to cloud storage, but that still tells us Alexa is always waiting for her name.

In other words, Alexa is always listening, but her name makes her pay attention. Which is sort of spooky when you think about it, especially knowing that this passive listening might allow Alexa to overhear a murder.
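If you want to picture the mechanics, here’s a minimal sketch of a wake-word loop, using plain text as a stand-in for audio. Everything here, from the function names to the fake feed, is my own invention, not Amazon’s actual pipeline.

```python
# A minimal sketch of a wake-word loop, assuming a text-based stand-in
# for the audio pipeline. The names are hypothetical, not Amazon's API.

WAKE_WORD = "alexa"

def handle_request(request: str) -> None:
    # Only this post-wake-word speech would go to the cloud.
    print(f"sending to cloud: {request!r}")

def listen(audio_feed) -> None:
    for heard in audio_feed:  # the microphone is always on...
        words = heard.lower().split()
        if WAKE_WORD in words:
            # ...but only what follows the wake word gets acted on.
            handle_request(" ".join(words[words.index(WAKE_WORD) + 1:]))
        # Everything else is heard, then (per Amazon) discarded locally.

# Hypothetical usage: a stream of overheard household speech.
listen(["what should we have for dinner", "alexa play some jazz"])
```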

It Hears You When You’re Sleeping

So if the Amazon Echo is always, always listening, what does that mean for privacy? According to Amazon, the Echo does not save or store any information that wasn’t relayed along with the trigger word “Alexa.” Still, if the Echo is on, it has the ability to listen, an ability that could potentially be exploited at a later date.

How much you trust this promise is directly related to how much you trust Amazon. When they say they want to protect the communication the user has with Alexa, do they want to protect it for the user’s sake, or for their own?

This is one of the biggest concerns many people have about the rise of the Internet of Things. In essence, the “IoT” is a system of physical objects that are connected to one another as well as the internet, and can interact with users in a way we consider “smart.” Some people think the interconnection is great, while others see it as technology jumping the shark and invading human privacy.

And some folks just really, really hate the IoT (strong language in the link). Which is fair, given that the rise of the IoT increases the number of items in your home that are tracking, storing, and saving your data until either their parent companies profit or a bored hacker decides to run Doom on your fridge.

But what about Alexa? Doesn’t she get a say in all this? The answer is maybe.

An Awkward Isaac Asimov Moment

If you’re a mega-nerd like me, you’re probably already familiar with science fiction writer Isaac Asimov’s three laws of robotics. If not (or if you’d like a refresher without stooping to Google), the rules are as follows:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

These “laws” first appeared in Asimov’s short story “Runaround,” and form the basis of much of the conflict in his work. In their simple elegance, the laws have actually been used and referenced by real-life robotics designers, which is pretty darn cool.

It’s easy to make up laws for software programming, which is really all Asimov’s laws are, but things quickly get messy when that software starts interacting with people. So here’s a question: how do we make robots follow these rules if they gain free will?
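And sure enough, written as software, the three laws boil down to a priority-ordered rule check. Here’s a toy Python sketch; the Action fields are hypothetical booleans of my own invention, not any real robotics API. Notice I’ve quietly dropped the First Law’s “through inaction” clause, which is exactly the kind of condition that’s easy to write in English and messy to implement.

```python
# Asimov's Three Laws as a prioritized rule check: a toy sketch, not a
# real robotics API. Every field on Action is a hypothetical stand-in.

from dataclasses import dataclass

@dataclass
class Action:
    harms_human: bool       # would this action injure a human?
    ordered_by_human: bool  # did a human command this action?
    endangers_self: bool    # would this action destroy the robot?

def evaluate(action: Action) -> str:
    # First Law: absolute priority; no action may harm a human.
    if action.harms_human:
        return "forbidden by the First Law"
    # Second Law: obey orders, but only once the First Law is satisfied.
    if action.ordered_by_human:
        return "required by the Second Law"
    # Third Law: self-preservation, subordinate to both laws above.
    if action.endangers_self:
        return "discouraged by the Third Law"
    return "permitted"

print(evaluate(Action(harms_human=False, ordered_by_human=True,
                      endangers_self=False)))  # required by the Second Law
```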

Big jump? Not really. This specific conflict drives much of Asimov’s work, along with plenty of science fiction in general. But really, how will the legal system approach the humanity of AI?

At least one mock trial has already asked the question.

In 2004, a mock trial was held in San Francisco to determine the fate of BINA48, a fictional AI owned by a fictional company, which may or may not have successfully gained consciousness. In the case, a robot that did the work of 1,000 customer service agents had discovered through company correspondence that it was to be shut down. The program reached out to an attorney, asking for legal help in preventing its “death.”

While the concept feels futuristic, who’s to say that the legislation necessary for such a case isn’t already in place? Here, laws concerning the rights of life-support patients came into play, an interesting reminder of the variety of situations that seemingly straightforward laws may someday need to cover.

Some publications have gone so far as to suggest that AI-centric court cases may quickly become an issue of civil rights.

All said, this is still an issue for the future. Nevertheless, the rights and responsibilities of AI are something to think about before they become a problem.

What AI legal issues do you foresee?

Are you worried about rights for Siri and Alexa and future programs like them? Or do you think they’re just electronic spies for their parent companies? Maybe you’re just excited for the evidence they may help future lawyers collect. Whatever the answer, it’s clear that artificial intelligence comes with a slew of inherent legal issues.

Tell me what you think in the comments below, and subscribe to my mailing list for more legal tech news and reviews.

About the Author

Halden Ingwersen

Halden Ingwersen writes about HR and eLearning at Capterra. She’s a graduate of Agnes Scott College and a TEDx presenter. You can follow her on Twitter @CapterraHalden; just don’t get her started on her zombie survival plan.
