Rose E. Guingrich, PhD Candidate



ETHICOM Founder

Princeton University Psychology & Social Policy

Psychology & Ethics of Human-AI Interaction

Want to know more about the psychology and ethics of human-AI interaction? Watch or listen to Our Lives With Bots, the podcast.

Click here for a new longitudinal study on the social impacts of companion chatbots.

Why do we perceive human likeness & mind in artificial intelligence agents?

What are the social & ethical implications of this phenomenon?

People are constantly making inferences about others’ minds during social interactions, and there are good evolutionary explanations for why we humans do this. How can I predict how someone else might act in the next few moments without having some concept of how their mind works? How can I connect with another person without having some sense of their thoughts, reactions, and desires? Our innate tendency to evaluate others’ minds fulfills fundamental human needs. The psychological motivation I’m particularly interested in is the desire to foster connection, or the need to belong.

We use our own mind as a lens into others’ because our mind is the one most familiar and accessible to us. We have insight into others’ minds only through our concept of our own. It makes sense for us to have evolved to perceive other people as having a mind like ours, but why do we perceive non-human agents as having a mind like ours, too? This is my big question, and I ask it specifically with reference to social AI agents like chatbots, robots, and voice assistants. The natural follow-up to that question, especially in the wake of humanlike AI agents that can talk like us and even look like us, is an even bigger one: what are the social and ethical consequences of perceiving AI agents as having characteristics of a humanlike mind? This question becomes ever more important as AI grows more advanced, more humanlike, and more integrated into our daily lives.

You may be thinking, I’ve never fallen into that trap. I know AI is mindless – it’s just a machine! I hate to break it to you, but there are data to suggest that while AI may be “mindless,” as you say, the process by which you evaluate AI agents is also mindless. This means that when you interact with an agent that looks like you and talks like you, you tend to automatically perceive it as having a mind like yours, whether for the sake of predicting its behavior, understanding it, or even connecting socially with it. Want to see some of that data? Look no further than below.

Recent Publications: (see pubs & press for more)

Recent Posts:

Want to schedule a consultation for your AI project? Please use the contact menu button to send me a request.

all art presented on this page and others is my own work