Machines are learning faster than ever. Can we trust them?
Artificial intelligence systems process data and make decisions far faster than humans. They excel at their assigned tasks, but they cannot tell users why one decision is better than another, which makes some of their recommendations seem arbitrary or unreliable.
That opacity poses a risk when AI is used in military operations. Operators demand accountability from their machines, especially when those machines are used to interpret massive volumes of imagery, far more than any human could analyze.
The U.S. Defense Advanced Research Projects Agency turned to Raytheon Technologies to work on a first-of-its-kind neural network, a brain-like AI system, that explains itself. Raytheon Technologies' Explainable Question Answering System, or EQUAS, will show users which data mattered most in the artificial intelligence decision-making process. Users can ask the system questions about chosen recommendations and discover why it rejected others.
"We know why humans may mess up a decision. There's no intuitive way to know when machines are wrong," said Bill Ferguson, lead scientist and EQUAS principal investigator at Raytheon Technologies. "To build trust, we have to give the user enough information about how the recommendation was made so they feel comfortable acting on the system's recommendation."
Built on hundreds of millions of data sets, EQUAS uses pictures to explain itself. A heat map overlaid on an image highlights which characteristics influenced its decision. If the user asks why the system came to the conclusion it did, it explains its logic in words.
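The article does not say how EQUAS builds its heat maps, but one common technique for this kind of visual explanation is occlusion saliency: mask each region of the image, re-score it, and treat the score drop as that region's importance. The sketch below illustrates the idea with a toy stand-in for a classifier (the `model_score` function is an assumption, not Raytheon's model).

```python
import numpy as np

# Hypothetical stand-in for a trained classifier: it scores an image by
# the brightness of its center region. EQUAS's real model is not public.
def model_score(image):
    return image[2:6, 2:6].mean()

def occlusion_heatmap(image, patch=2):
    """Occlusion saliency: black out each patch in turn and record how much
    the model's score drops. Bigger drops mean the region mattered more."""
    base = model_score(image)
    h, w = image.shape
    heat = np.zeros_like(image, dtype=float)
    for y in range(0, h, patch):
        for x in range(0, w, patch):
            occluded = image.copy()
            occluded[y:y + patch, x:x + patch] = 0.0  # mask this patch
            heat[y:y + patch, x:x + patch] = base - model_score(occluded)
    return heat

image = np.zeros((8, 8))
image[2:6, 2:6] = 1.0  # a bright center the toy model keys on
heat = occlusion_heatmap(image)
# Patches covering the bright center show larger score drops than corners.
print(heat[3, 3] > heat[0, 0])
```

Occluding a corner leaves the score unchanged, so those cells stay at zero, while center cells light up; rendered as colors over the original image, that grid is the heat map the article describes.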
The system will also monitor itself and share conditions that limit its ability to make reliable recommendations. For example, if EQUAS can’t recognize people when they wear bulky winter clothing, it will signal the limitation to human operators. Self-monitoring helps developers identify which parts of the program need improvement.
Like humans, EQUAS learns from a reward system. "Get the answer right and developers strengthen the connections within the neural system that produced it. Get the answer wrong and we weaken that connection," said Ferguson. "We are developing more and more capability with every iteration we make."
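Ferguson's description of strengthening and weakening connections can be sketched as a reward-modulated weight update: nudge the weights of the connections that fired up after a correct answer, and down after a wrong one. This is a simplified illustration of that reward scheme, not Raytheon's actual training procedure, which is more likely gradient-based.

```python
import numpy as np

def reward_update(weights, active, correct, lr=0.1):
    """Strengthen the connections listed in `active` after a correct
    answer; weaken them after a wrong one (a toy reward rule)."""
    sign = 1.0 if correct else -1.0
    new = weights.copy()
    new[active] += sign * lr
    return new

w = np.array([0.5, 0.5, 0.5])
w = reward_update(w, active=[0, 2], correct=True)   # units 0 and 2 fired, answer right
w = reward_update(w, active=[1], correct=False)     # unit 1 fired, answer wrong
print(w)  # units 0 and 2 strengthened, unit 1 weakened
```

Repeated over many examples, updates like this shift credit toward the connections that consistently produce right answers, which is the iterative capability growth Ferguson describes.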
Raytheon Technologies is developing the system as part of DARPA's Explainable Artificial Intelligence program. The technology is still in its early phases of development, but could potentially be used in other operations.
"A fully developed system like EQUAS could help with decision-making not only in DoD operations, but in a range of other applications, like campus security, industrial operations and the medical field," said Ferguson. "Say a doctor has an ultrasound image of a lung and wants to determine whether there are shadows on the image, which may indicate cancer. The system could come back and say, yes, there are shadows, and based on these other examples, we should investigate this diagnosis further."