This essay began as a reflection on a shift occurring in our thinking about technology: from machines that do what we ask of them to partners in a "conversation." The conversation metaphor is helpful because it places the responsibility on designers to understand users and to foresee, as much as possible, their desires and intentions.
But then I realized that humans aren't always very good at this in everyday conversation. At the time, I was also reading about mirror neurons and the theory, held by some, that they are partly responsible for our feelings of empathy. I think the best conversations are with people who can feel empathy for us, so I wanted to consider how empathy might work as a metaphor for interaction design.
Here is a small taste of the essay:
Systems should anticipate our intentions in the same way a good conversationalist would. But this is not to say they need to get it right every time. A good conversationalist will sometimes ask, “When you say that, do you mean a or b?” When I tap a text message that contains a URL on my phone, the system asks me, “Do you want to open/view this text message, or do you want to visit the URL with the phone’s Web browser?” When I want to share a photo, the system asks me whether I want to share using email, text message, Twitter, a photo-sharing site, or some other means. When I tap a tweet in my Twitter application to indicate that I want to interact with the tweet in some way, the system asks me how I want to interact: Direct Reply, Retweet, Mark as Favorite, etc.

It is fine for the system not to know exactly what I want to do, but it should take the clues I’ve provided about my intentions and build on them, without introducing non sequiturs or, worse, making false assumptions about my intentions. It is okay for a computer to say, “I don’t know.” Also: “Did you really mean that?”

I once accidentally separated the keyboard on my iPad into two pieces without knowing how I had done it, or how to undo it (pinch out to split the keyboard; pinch in to rejoin it). It would have been okay, that one (first) time, for the system to interrupt our conversation to make sure I understood why it was responding to my input in that way.

Think about it: Computers are most annoying when they come across as bullies, assuming they know what it is that you want to say or do, not caring enough to make sure they have it right. This behavior is difficult to suffer in humans; we should not tolerate it in conversation with our computer systems.
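To make that disambiguation pattern concrete, here is a minimal sketch in Swift, chosen only because the examples above come from iOS. Every name in it, from Intent to IntentResolver to respond, is hypothetical rather than any platform API. The idea is simply that the system collects every intent consistent with the clues the user has given, acts directly only when exactly one remains, and otherwise asks.

```swift
// A minimal sketch of intent disambiguation, in the spirit of the
// excerpt above. All types and functions here are hypothetical.

// The possible things a user might mean by a single tap.
enum Intent {
    case openMessage
    case visitURL(String)
    case shareVia(String)   // e.g. email, text message, Twitter, ...
}

struct IntentResolver {
    // Return every intent consistent with the clues the user has
    // provided: tapping a message is one clue, and a URL inside the
    // message is another.
    func candidates(forTapOn message: String) -> [Intent] {
        var result: [Intent] = [.openMessage]
        if let url = firstURL(in: message) {
            result.append(.visitURL(url))
        }
        return result
    }

    // Naive URL detection, standing in for a real data detector.
    private func firstURL(in text: String) -> String? {
        text.split(separator: " ")
            .first { $0.hasPrefix("http") }
            .map(String.init)
    }
}

// The conversational move: act when the intent is unambiguous;
// otherwise ask: "When you say that, do you mean a or b?"
func respond(toTapOn message: String,
             using resolver: IntentResolver,
             ask: ([Intent]) -> Intent) {
    let options = resolver.candidates(forTapOn: message)
    if options.count == 1 {
        perform(options[0])
    } else {
        perform(ask(options))
    }
}

func perform(_ intent: Intent) {
    print("Performing: \(intent)")
}

// A tapped message containing a URL yields two candidates, so the
// system asks instead of guessing.
respond(toTapOn: "lunch? http://example.com/menu",
        using: IntentResolver()) { options in
    // In a real UI this would be an action sheet; here we just
    // pick the first option to keep the sketch self-contained.
    options[0]
}
```

A real system would derive its candidates from far richer context, but the conversational shape is the same: surface the ambiguity and ask, instead of guessing.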
The header image is from the original publication. You can read the essay online at interactions magazine or download the official PDF using the ACM Author-Izer service below.