I’ve been asking Alexa a lot of strange questions lately as I think about forms of bias in machine learning algorithms and whether these will surface in the responses that our voice assistants give to our questions. The short answer is: Yes, yes they will. The difficulty is knowing exactly where the biases are coming from, and how best to avoid them. This post gathers in one place my thoughts on this topic from the past 3-4 years.
Here is an essay I published in 2016 on the topic of gender and status in VUIs.
Here is a ten-minute talk on this topic at the Interaction '16 conference in Helsinki, Finland:
More recently, I’ve been thinking about the sources of information that Alexa uses in her responses, and also, how she puts together these responses, down to the level of word choice and grammatical construction. I blogged about this in 2018, highlighting some of Alexa's problematic responses to questions about Eric Garner.
Here is an essay on this topic that I published in 2018:
Many companies are investing heavily in voice today, in new and unexpected ways. We can expect to speak to our appliances and other everyday objects more and more in the near future. It's difficult to predict how bias issues will arise when I am asking my toaster to brown my toast "a little bit more." But I have no doubt that bias issues *will* arise.
The header image is a screen capture from the Vimeo video of my talk at the Midwest UX conference in Grand Rapids, MI, in October 2019.