Rights of robots need discussion

David J. Gunkel, professor of communication studies and author of “Robot Rights,” holds a book talk Wednesday in the Founders Memorial Library, discussing whether robots should have moral or legal standing.

By Noah Thornburgh

Yelling at Siri, kicking a Roomba, throwing a Jibo: what it means for these acts to be right or wrong was the topic of a book talk by professor of media studies David J. Gunkel from 5 to 7 p.m. Wednesday at the 71 North space in Founders Memorial Library. The talk, titled “The Right(s) Question,” was based on Gunkel’s book “Robot Rights,” published by MIT Press in November 2018. The book, and the talk, explored whether robots are capable of having rights, whether they should have rights and whether these are questions worth asking right now.

The first two questions have no easy answer, but the last earns a resounding yes.

Gunkel began the talk by laying out the four possible positions on the first two questions: the combinations of robots can or can’t have rights and robots should or shouldn’t have rights. All four positions are plausible, but none is without flaws. A common thread runs through the arguments against them: humans cannot help but grow attached to some artifacts, like a cherished guitar or a Stradivarius violin. In the case of robots, people often project sentience onto the machines and act as if they owe the machines moral consideration, even knowing full well the robots aren’t sentient.

This is heartwarming, in a way. Humanity shows some humanity — people consistently feel the need to show respect to what is, with current technology, a thing. People don’t like to hurt dogs and people don’t like to hurt robots. Intuition as wholesome as that shouldn’t be resisted.

Working at the forefront of human-machine communication is Professor Andrea L. Guzman.

“What I focus on is how people conceptualize artificial intelligence as a communication partner,” Guzman said. “Previously we only spoke with humans.”

Now communication is moving from human-human to human-machine as products like Alexa and Siri become popular.


Guzman said she sees the rights question as eminently important as human-machine communication continues to advance.

“When we create technologies that are stepping into former human positions, what does that mean for the systems of ethics and systems of rights we currently have in place?” Guzman said.

The technology will change the world. People are already afraid they’ll lose their jobs to a robot; asking the rights question now helps head off ethical chaos as that future arrives. When the day comes that something nonhuman demands consideration as a person, humanity should not be caught unsuspecting.

What Gunkel calls for is a conceptual reboot — a fundamental challenge to the current theories of the world, humanity, robots and ethics.

This reboot is unavoidable, whether in the next few years or at the moment robots achieve sentience, possibly decades from now. Robots will have to be considered for personhood — even if for nothing other than to fit into the legal system, just as corporations were granted legal personhood so courts knew what to do with them.

A crucial first step of this reboot will be to upend the common, incorrect view of the robot.

“Most people think about science fiction,” Guzman said. “Where [artificial intelligence] is either seen as something that’s dark and is going to rule the world, or something that’s great and is going to save the world.”

Speculative views like these confuse people more often than they enlighten them. Reactions to robots should not be shaped by unrealistic media portrayals. The rights question will go nowhere if it’s built on wild expectations.

“When we continuously talk about [robots] in the context of science fiction, we’re dealing with a myth and not the reality of how it functions,” Guzman said.

This is not to say science fiction is bad; nothing could be further from the truth. Science fiction prototyped much of modern technology, after all. Only the old, mythic view of robots needs to go away, while sci-fi should stick around to flesh out ethical questions through speculation and to inspire new generations of scientists.

“Fiction is the realm where we discuss the ethics, where we make connections between ourselves and machines, but it’s also where we spark that interest in kids who might design more and more interesting and imaginative robots for us,” Gillian King-Cargile, director of NIU’s STEM Read program, said.

Programs like STEM Read, which encourages students to explore topics like robot rights, are a step toward demythologizing the robot and establishing a realistic framework for young scientists and ethicists to work with.

Robots aren’t quite there yet, but the time will come when their resemblance to humanity is too striking to ignore and kicking a Roomba may be more than just kicking a vacuum. It’s better to ask the hard questions now than to leave them for an unsuspecting future.