How to Build Human Trust in AI
To read the popular press, AI can outdo humans at anything, but the truth is far more complex. AI applications are typically designed to do just one thing well, such as when Watson took on all comers on the Jeopardy game show. But while many of us are fine with letting computers play games, polls show that many of us are distrustful of self-driving cars. And that trust is a key issue, because otherwise valuable applications will be slowed down or even stopped if people don't trust the technology. Research also shows that when AI systems give incorrect answers too frequently, or give answers that make no sense to humans, people's trust in those systems drops.
So, how do you build human trust in AI?
Explain the system’s decisions. There are calls for explainable AI, where the system must provide an explanation of how it came to its decision. This approach is still in the research stage, because today’s most accurate techniques are notorious for being black boxes. The problem is that when you restrict systems to only those techniques that are explainable, they inevitably perform worse. Someday, this might be the answer, but not today.
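To make the idea concrete, here is a minimal sketch of what an inherently explainable technique can offer: a simple linear scorer that reports each feature’s contribution alongside its decision. The loan-approval setting, feature names, weights, and threshold are all hypothetical, invented for illustration.

```python
# A minimal sketch of an inherently explainable model: a linear scorer
# that can report each feature's contribution alongside its decision.
# Feature names, weights, and the threshold are hypothetical.

WEIGHTS = {"income": 0.5, "debt": -0.8, "years_employed": 0.3}
THRESHOLD = 1.0

def decide_with_explanation(applicant):
    # Per-feature contributions are the whole explanation; a black-box
    # neural network has no comparably direct equivalent.
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    approved = sum(contributions.values()) >= THRESHOLD
    explanation = sorted(contributions.items(), key=lambda kv: -abs(kv[1]))
    return approved, explanation

approved, why = decide_with_explanation(
    {"income": 2.0, "debt": 0.5, "years_employed": 1.0})
print("approved" if approved else "denied")
for feature, contribution in why:
    print(f"  {feature}: {contribution:+.2f}")
```

The tradeoff described above is visible even in this toy: a model this transparent is easy to explain, but it is usually far less accurate than an opaque one.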
Improve the system’s accuracy. The reason you want AI to explain itself is so you understand how mistakes happen. If you can make it work well enough, maybe no one needs an explanation. After all, most of us don’t know how our cars work, but we trust that when we apply the brakes, the car will stop. Hardly anyone knows how Google’s AI works, but we trust that our searches will get us good results, so we keep using it.
Reduce the really big mistakes. Watson once gave a really bad answer to a Final Jeopardy question in the category U.S. Cities, responding with “Toronto.” We call that a “howler”: an answer so bad that even if you don’t know the correct answer, you still know that response is wrong. You can actually tune the system to reduce howlers by penalizing them more heavily than wrong answers that are at least “close.”
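Here is a minimal sketch of that kind of tuning, assuming hypothetical candidate answers and a simple category-membership check standing in for a real plausibility test. Candidates that fail the check are demoted so heavily that a plausible-but-wrong answer always outranks a howler.

```python
# A minimal sketch of howler reduction via asymmetric scoring: candidates
# that violate an obvious constraint (here, membership in the question's
# category) are penalized far more heavily than answers that are merely
# wrong. The candidates, scores, and category list are hypothetical.

US_CITIES = {"chicago", "new york", "boston", "houston"}
HOWLER_PENALTY = 10.0

def rescore(candidates, category_members):
    """Demote candidates that fail the category check so a plausible
    wrong answer always outranks a howler."""
    rescored = []
    for answer, confidence in candidates:
        penalty = 0.0 if answer.lower() in category_members else HOWLER_PENALTY
        rescored.append((answer, confidence - penalty))
    return sorted(rescored, key=lambda pair: pair[1], reverse=True)

# "Toronto" gets the higher model score here, but it fails the U.S. Cities
# check, so the plausible answer "Chicago" is returned instead.
candidates = [("Toronto", 0.62), ("Chicago", 0.55)]
print(rescore(candidates, US_CITIES)[0][0])  # -> Chicago
```

The key design choice is the asymmetry: being wrong costs a little, being absurdly wrong costs a lot, so the system prefers near misses over howlers.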
Put humans in the loop. This might be the simplest approach of all. Instead of treating every AI system as a replacement for humans, it may be easier, safer, and more trustworthy to set up the AI to help humans do their jobs. Watson is being used to diagnose diseases, but rather than replacing doctors, it shows them possible diagnoses based on the symptoms, with the doctor making the final decision. When the stakes are that high, this might be the most prudent approach.
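A minimal sketch of that pattern follows, with a toy stand-in for the diagnostic model: the system only ever returns a ranked list of suggestions for a person to review, and it flags low-confidence cases rather than acting on them. The model, candidate format, and threshold are hypothetical.

```python
# A minimal sketch of a human-in-the-loop pattern: the model proposes a
# ranked list of candidates, and a person always makes the final call.
# The toy model, candidate format, and threshold are hypothetical.

REVIEW_THRESHOLD = 0.90

def suggest(symptoms, model, top_k=3):
    """Return ranked candidate diagnoses for a clinician to review;
    the system never acts on them autonomously."""
    candidates = model(symptoms)  # list of (diagnosis, confidence) pairs
    ranked = sorted(candidates, key=lambda c: c[1], reverse=True)[:top_k]
    low_confidence = ranked[0][1] < REVIEW_THRESHOLD  # flag for extra scrutiny
    return ranked, low_confidence

def toy_model(symptoms):
    # Stand-in for a real diagnostic model.
    return [("condition A", 0.71), ("condition B", 0.55), ("condition C", 0.12)]

ranked, flag = suggest(["fever", "cough"], toy_model)
for diagnosis, confidence in ranked:
    print(f"{diagnosis}: {confidence:.0%}")
if flag:
    print("Low confidence: recommend additional tests before deciding.")
```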
AI is no longer science fiction, but people are understandably nervous about a powerful force that works in mysterious ways. We need to pay close attention to building human trust in these systems if we want to see AI used in the safest and most valuable ways possible.