Growing concerns about how artificial intelligence (AI) makes decisions have inspired U.S. researchers to make computers explain their “thinking.”
“Computers are going to become increasingly important parts of our lives, if they aren’t already, and the automation is just going to improve over time, so it’s increasingly important to know why these complicated systems are making the decisions that they are,” Sameer Singh, assistant professor of computer science at the University of California, Irvine, told CTV’s Your Morning on Tuesday.
Singh explained that, in almost every application of machine learning and AI, there are cases where the computers do something completely unexpected.
“Sometimes it’s a good thing, it’s doing something much smarter than we realize,” he said. “But sometimes it’s picking up on things that it shouldn’t.”
Such was the case with the Microsoft AI chatbot, Tay, which became racist in less than a day. Another high-profile incident occurred in 2015, when Google’s photo app mistakenly labelled a black couple as gorillas.
Singh says incidents like that can happen because the data AI learns from comes from humans: either decisions humans made in the past, or socio-economic structures that appear in the data.
“When machine learning models use that data, they tend to inherit those biases,” said Singh.
“In fact, it can get much worse: if the AI agents are part of a loop where they’re making decisions that shape even the future data, the biases get reinforced,” he added.
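The feedback loop Singh describes can be illustrated with a toy simulation (a hypothetical sketch, not anything from the broadcast): a model that simply reproduces the approval rates in its training data, where each round's decisions become the next round's data, will amplify a small initial gap between two groups.

```python
import random

random.seed(0)

# Two groups start with a small inherited bias: a 0.05 gap in
# historical approval rates.
approval_rate = {"group_a": 0.55, "group_b": 0.50}

def retrain_and_decide(rates, n_applicants=1000):
    """A stand-in 'model' that reproduces the approval rate seen in
    its training data; its decisions then become the next round's
    training data, with a mild amplification toward the majority
    outcome it observes."""
    new_rates = {}
    for group, rate in rates.items():
        approvals = sum(random.random() < rate for _ in range(n_applicants))
        observed = approvals / n_applicants
        new_rates[group] = min(1.0, max(0.0, rate + 0.05 * (observed - 0.5)))
    return new_rates

gap_start = approval_rate["group_a"] - approval_rate["group_b"]
for _ in range(20):
    approval_rate = retrain_and_decide(approval_rate)
gap_end = approval_rate["group_a"] - approval_rate["group_b"]
print(f"initial gap: {gap_start:.3f}, gap after 20 feedback rounds: {gap_end:.3f}")
```

After 20 retraining rounds the gap between the groups has grown, even though no new bias was ever injected; the loop alone did the work.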
Researchers hope that, by seeing the thought process of the computers, they can make sure AI doesnât pick up any gender or racial biases that humans have.
However, Google’s research director Peter Norvig cast doubt on the concept of explainable AI.
“You can ask a human, but, you know, what cognitive psychologists have discovered is that when you ask a human you’re not really getting at the decision process. They make a decision first, and then you ask, and then they generate an explanation and that may not be the true explanation,” he said at an event in June in Sydney, Australia.
“So we might end up being in the same place with machine learning where we train one system to get an answer and then we train another system to say: given the input of this first system, now it’s your job to generate an explanation.”
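The two-system setup Norvig describes resembles what interpretability researchers call a surrogate model: a simple, inspectable rule is trained to imitate the black box's outputs rather than the real outcome. A minimal sketch (purely illustrative; the black-box function and feature names are invented):

```python
import random

random.seed(2)

def black_box(x):
    """Stand-in for an opaque model: approves when a nonlinear
    combination of two features clears a threshold."""
    income, debt = x
    return income * (1 - debt) > 0.3

# Sample the black box's behaviour on random inputs.
data = [(random.random(), random.random()) for _ in range(2000)]
labels = [black_box(x) for x in data]

# 'Explainer' system: a one-feature threshold rule (decision stump),
# trained to match the black box's decisions, not the ground truth.
best = None
for feat in (0, 1):
    for t in [i / 20 for i in range(1, 20)]:
        for direction in (True, False):
            preds = [(x[feat] > t) == direction for x in data]
            acc = sum(p == l for p, l in zip(preds, labels)) / len(labels)
            if best is None or acc > best[0]:
                best = (acc, feat, t, direction)

acc, feat, t, direction = best
name = ["income", "debt"][feat]
op = ">" if direction else "<="
print(f"surrogate rule: approve if {name} {op} {t:.2f} (fidelity {acc:.2f})")
```

The output rule is readable, but its fidelity is well below 100 per cent, which is exactly Norvig's worry: the generated explanation is a plausible story about the decision, not the decision process itself.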
Norvig suggests looking for patterns in the decisions themselves, rather than the inner workings behind them.
But Singh says understanding the decision process is critical for future use, particularly in cases where AI is making decisions such as approving loan applications.
“It’s important to know what details they’re using. Not just whether they’re using your race column or your gender column, but whether they’re using proxy signals like your location, which we know can be an indicator of race or other problematic attributes,” explained Singh.
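Singh's proxy-signal point can be demonstrated on synthetic data (a hypothetical toy; the groups, locations, and rates are all invented): a model that never sees the race column can still produce racially skewed decisions if location is correlated with race and the historical outcomes were biased by location.

```python
import random

random.seed(1)

def make_applicant():
    """Synthetic applicant: race correlates with location (a stand-in
    for residential segregation), and historical approvals were
    biased by location."""
    race = random.choice(["x", "y"])
    same = random.random() < 0.8
    location = (0 if same else 1) if race == "x" else (1 if same else 0)
    historically_approved = random.random() < (0.7 if location == 0 else 0.4)
    return race, location, historically_approved

history = [make_applicant() for _ in range(5000)]

# 'Train' a model that never sees the race column: it only learns
# the historical approval rate for each location.
rate_by_location = {}
for loc in (0, 1):
    outcomes = [approved for _, l, approved in history if l == loc]
    rate_by_location[loc] = sum(outcomes) / len(outcomes)

def model_approves(location):
    # The decision uses location alone; race was 'removed'.
    return rate_by_location[location] > 0.5

applicants = [make_applicant() for _ in range(5000)]
rate_by_race = {}
for race in ("x", "y"):
    decisions = [model_approves(l) for r, l, _ in applicants if r == race]
    rate_by_race[race] = sum(decisions) / len(decisions)

print("approval rate by race (race column never used):", rate_by_race)
```

Dropping the sensitive column does not drop the bias; an explanation method would need to surface that the model is leaning on location as a stand-in.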
Over the last year, there have been multiple efforts to find out how to better explain the rationale of AI.
Currently, the Defense Advanced Research Projects Agency (DARPA) is funding 13 different research groups, which are pursuing a range of approaches to making AI more explainable.