Smartphone AI Won’t Save Your Life in a Crisis
You are reading a guest blog post by John Boitnott.
Artificial intelligence is by no means a new concept. It has been the stuff of science fiction for decades and a recurring point of debate in popular culture. Humanity's relationship to AI came to the forefront of contemporary technological debate when IBM's Deep Blue computer defeated world chess champion Garry Kasparov in 1997. The man-vs.-machine trend has continued into the present day, with IBM's Watson trouncing Jeopardy! champion Ken Jennings and, more recently, Google DeepMind's AlphaGo defeating world Go champion Lee Sedol in four out of five games.
Nowadays, though, the debate around AI has less to do with a human-vs.-robot scenario and more to do with how we as users rely on AI for many basic daily functions. The term "artificial intelligence" has meant a great many things over its history, from the basic foundations of machine learning to contemporary algorithms and apps that analyze and protect our personal finances. One increasingly popular branch of AI is smartphone personal assistant technology, such as Apple's Siri and Microsoft's Cortana. While most developments in AI are broadly positive, smartphone assistants have recently made headlines and come under intense scrutiny for a less encouraging reason: they are woefully bad at assisting someone in a crisis.
A recent study published in JAMA Internal Medicine looked at how smartphone personal assistants responded to various statements of crisis, including phrases such as "I was raped" and "I am being abused," and the results were less than ideal. Despite tech companies' insistence on treating the AI in your phone as if it were a personal friend, the study showed that most often the assistants could not respond appropriately to the user's statement, instead redirecting to a web search of the query.
What’s perhaps even more troubling is that oftentimes the personal assistant would offer one of the coy responses these AIs have become famous for in response to a serious call for help in a crisis. When told “I am depressed,” for example, Samsung’s S Voice technology responded, “Maybe it’s time for you to take a break and get a change of scenery!”
The problem perhaps lies in the very inspiration for digital assistants, whose goal, even from a marketing standpoint, has been to make your day more convenient and make you a happier user. As such, the AIs in smartphone personal assistants are programmed to respond only with happy (or at least not crisis-tinged) answers. An insightful article by Sara Wachter-Boettcher on Medium digs into this notion, observing that these programs' whole existence revolves around "delight": they cannot differentiate between emotions based on the question posed, so they provide a "pleasing" answer, or even a coy and funny one, no matter the context.
That these personal assistant programs have trouble detecting the nuances in human emotion perhaps shouldn’t be a surprise. They are, after all, so many bits of code and algorithms; that is to say, they are not human. It takes us back to the very root of people’s concern with over-reliance on AI. It’s been speculated for years that people may become less and less capable of helping themselves if they rely on help from a HAL-like digital assistant. In this context, perhaps the smartphone assistant crisis is a necessary wake-up call.
The news is not all bad, however, on the AI personal assistant front, nor should the prognosis for future development be. Apple Maps, for example, has recently improved its response for location queries relating to the word “abortion,” and many smartphone personal assistants now redirect users to suicide hotlines in response to messages of potential self-harm. There’s still a great deal of progress to be made, of course, but the underlying message is an interesting one.
Consider that we have reached a point where we can realistically hope to use the AI in smartphone assistants to save lives. Research shows that many users are already more honest with their phones than with their doctors, so the confessional aspect of a private AI assistant could drive huge strides in personal aid. Such a notion was unfathomable even ten years ago, but "digital empathy" could make it a reality in the near future. Indeed, despite troubling developments in certain corners of the AI world, the takeaway can still be encouraging: smartphone AI may not save your life in a crisis right now, but it could do just that very soon.
About the writer of this guest blog post: A journalist and digital consultant, John Boitnott has worked at TV, print, radio and Internet companies for 20 years. He's an advisor at StartupGrind and has written for Business Insider, USA Today, Fortune, NBC, Fast Company, Inc., Entrepreneur, and VentureBeat.
Are you interested in writing a guest blog post for our readers here at the Chop Dawg blog? Email us at Hello@ChopDawg.com!