The Conscious Robot: The Good, The Bad and The Really Awful

On March 28, a group of 1,300 artificial intelligence experts and tech leaders, including Elon Musk, called for a six-month pause on training systems more powerful than GPT-4, introduced this month by OpenAI. A pause would provide time to implement “shared safety protocols,” the group said in an open letter. Machine learning is having a moment. Image generators like DALL·E 2 and language models like ChatGPT are grabbing headlines. What lies behind the headlines?

Distinguished Research Professor and Fellow of the Royal Society of Canada Russell Belk has pointed out that we already suffer at the hands of the smart machines we made to improve our lives. Driverless vehicles are gradually putting truck, taxi, Uber and Lyft drivers out of work. Similarly, robots used in stores, hotels and nursing homes will mean fewer jobs in the future. As populations age, assistive robots may increasingly care for the elderly, which will mean fewer personal interactions, and we learned during COVID how isolation adversely affects mental health. Then there is the thorny issue of robots for sex, which will lead to the further dehumanizing of humans, as pornography on the internet already has done.

Today’s leading machine learning models derive their power from deep neural networks — webs of artificial neurons arranged in multiple layers, with every neuron in each layer influencing those in the next layer. Before they can be useful, neural networks must first be trained. During training, the network processes a vast number of examples and repeatedly adjusts the connections between neurons, until it can correctly categorize the training data. Along the way, it learns to classify entirely new inputs.
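The training loop described above can be sketched in miniature. The example below is a toy illustration, not any particular production system: a single artificial neuron (a perceptron) repeatedly nudges its connection weights whenever it misclassifies an example, until it separates two categories of points. The data, weights and learning rate are all invented for illustration; real deep networks stack millions of such units in many layers, but the core idea of training is the same.

```python
# Toy illustration of neural-network training: one artificial neuron
# learns to classify points as "above" (1) or "below" (0) the line y = x.
# All values here are made up for the sketch.
training_data = [((0.0, 1.0), 1), ((1.0, 2.0), 1),
                 ((2.0, 0.5), 0), ((1.0, 0.0), 0)]

weights = [0.0, 0.0]   # one weight per input connection
bias = 0.0
learning_rate = 0.1

def predict(point):
    """Weighted sum of inputs plus bias, thresholded at zero."""
    total = sum(w * x for w, x in zip(weights, point)) + bias
    return 1 if total > 0 else 0

# Training: adjust each connection a little whenever the neuron errs.
for epoch in range(50):
    for point, label in training_data:
        error = label - predict(point)
        for i in range(len(weights)):
            weights[i] += learning_rate * error * point[i]
        bias += learning_rate * error

# After training, the neuron classifies inputs it has never seen.
print(predict((0.5, 3.0)))  # a point well above y = x -> 1
print(predict((3.0, 0.5)))  # a point well below y = x -> 0
```

The final two lines show the payoff the paragraph describes: having adjusted its connections on the training examples, the neuron generalizes to entirely new inputs.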

Joshua Bongard, a roboticist at the University of Vermont and a former member of the Creative Machines Lab, believes that consciousness doesn’t just consist of cognition and mental activity but also has an essentially bodily aspect. He has developed beings called xenobots, made entirely of frog cells linked together so that a programmer can control them like machines. According to Dr. Bongard, it’s not just that humans and animals have evolved to adapt to their surroundings and interact with one another; our tissues have evolved to subserve these functions, and our cells have evolved to subserve our tissues. “What we are is intelligent machines made of intelligent machines made of intelligent machines, all the way down,” he said.

Bongard’s position totally aligns with mine as described in The Embodied Mind, namely, that our embodied mind is not the old enskulled one. It is an extended mind that draws on the intelligence of all the cells in our body that contain specific bits of information, micro-memories. All memories, consciousness, and the mind emerge from this linked sentient network. Memory is in fact a body-wide web, and intelligence is exhibited in a wide range of systems well beyond the traditional central nervous system.

Bots like Bard and OpenAI’s ChatGPT deliver information with incredible speed. But they also spout plausible falsehoods, or do things that are seriously creepy, as when the Bing search engine declared its love for Kevin Roose, a technology columnist for The New York Times, in a “conversation” with him. Roose commented, “It’s now clear to me that in its current form, the A.I. that has been built into Bing is not ready for human contact. Or maybe we humans are not ready for it.”

Hod Lipson is the director of the Creative Machines Lab at Columbia University. His research is directed at answering some fundamental questions of our time: Will robots eventually be able to design and make other robots? Can machines be curious and creative? Will robots ever be truly self-aware?

According to Lipson, creating a machine with consciousness equal to that of humans will surpass everything else we’ve achieved. “So eventually these machines will be able to understand what they are, and what they think,” Dr. Lipson said. “That leads to emotions, and other things. I want to push this as far as I can.” Lipson wants to create conscious robots.

Antonio Chella, a roboticist at the University of Palermo in Italy, believes that consciousness can’t exist without language, and has been developing robots that can form internal monologues, reasoning to themselves and reflecting on the things they see around them. One of his robots recently recognized itself in a mirror, passing what is probably the most famous test of animal self-consciousness.

At the rate things are progressing, scientists will probably develop a robot that is conscious. When that happens, should the robot be granted rights? Freedom? Should it be programmed to feel happiness? Will it be allowed to speak for itself? To vote?

Humanoid robots may eventually acquire their own rights and responsibilities. In fact, there already exists the blossoming field of machine/robot ethics. Russell Belk points out that as our machines become more human-like, we become more machine-like. “We magnify our capabilities with hand-held computers, we replace our body parts with prostheses and we may soon modify our genes to procure additional benefits for ourselves and our progeny, including an extended lifespan,” he says.

This scenario brings to mind the sociologist Robert Merton’s (1936) law of unanticipated consequences, which describes the unwelcome side effects of social actions, including technological innovations. Merton concluded that individuals fail to comprehend all the outcomes arising from innovations because of ignorance, human error, or inexperience.

A million questions arise: What if a self-driving car or a sex machine malfunctioned? What will be the impact on society of millions of unemployed people? Could a few wealthy entrepreneurs further divide the world into haves and have-nots, reinforcing inequality?

Although digital technologies offer their users many benefits, they also expose those users to risks. Remember that they can get things wrong. Remember that they can make things up.

Key Takeaways

There are many things that these systems are very good at.

Users of chatbots: stay skeptical, and see them for what they really are.

Chatbots are intelligent in some ways, but dumb in others.

Technologies often have unanticipated outcomes on people and societies.

As our machines become more human-like, we become more machine-like.

References

Aubin, C. A., Gorissen, B., Bongard, J., ... & Shepherd, R. F. (2022). Towards enduring autonomous robots via embodied energy. Nature, 602(7897), 393-402.

Belk, R. (2019). Machines and Artificial Intelligence. Journal of Marketing Behavior, 4(1), 11-30.

Brubaker, B. (2023). In Neural Networks, Unbreakable Locks Can Hide Invisible Doors. Quanta Magazine.

Kwiatkowski, R., Hu, Y., Chen, B., & Lipson, H. (2022). On the Origins of Self-Modeling. arXiv preprint arXiv:2209.02010.

Merton, R. (1936). The unanticipated consequences of purposive social action. American Sociological Review, 1(6), 894-904.

Pipitone, A., & Chella, A. (2021). Robot passes the mirror test by inner speech. Robotics and Autonomous Systems, 144, 103838.

Rosenbaum, M., Walters, G., Edwards, K., Gonzalez, C., & Contreras Ramírez, G. (2021). Play, Chat, Date, Learn, and Suffer? Merton’s Law of Unintended Consequences and Digital Technology Failures. Academia Letters, Article 3426. https://doi.org/10.20935/AL3426.

Verny, T. R. (2021). The Embodied Mind. New York, NY: Pegasus Books.

Voss, A., Cash, H., Hurdiss, S., Bishop, F., Klam, W., & Doan, A. P. (2015). Case report: Internet gaming disorder associated with pornography use. The Yale Journal of Biology and Medicine, 88(3), 319-324.

Woods, L., & Perrin, W. (2019). Internet harm reduction: A proposal. Retrieved August 14, 2021, from https://www.carnegieuktrust.org.uk/blog/internet-harm-reductiona-proposal/
