A neural network learns when it should not be trusted: A faster way to estimate uncertainty in AI-assisted decision-making could lead to safer outcomes.



Increasingly, artificial intelligence systems known as deep learning neural networks are used to inform decisions vital to human health and safety, such as in autonomous driving or medical diagnosis. These networks are good at recognizing patterns in large, complex datasets to aid in decision-making. But how do we know they’re correct? Alexander Amini and his colleagues at MIT and Harvard University wanted to find out.
They’ve developed a quick way for a neural network to crunch data and output not just a prediction but also the model’s confidence level, based on the quality of the available data. The advance might save lives, as deep learning is already being deployed in the real world today. A network’s level of certainty can be the difference between an autonomous vehicle determining that “it’s all clear to proceed through the intersection” and “it’s probably clear, so stop just in case.”
Current methods of uncertainty estimation for neural networks tend to be computationally expensive and relatively slow for split-second decisions. But Amini’s approach, dubbed “deep evidential regression,” accelerates the process and could lead to safer outcomes. “We need the ability to not only have high-performance models, but also to understand when we cannot trust those …
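For readers who want a concrete picture, the sketch below shows one way an evidential regression output layer can be written. It is not the authors’ code; the parameterization (a Normal-Inverse-Gamma with parameters gamma, nu, alpha, beta) and the uncertainty formulas follow the published “Deep Evidential Regression” paper by Amini and colleagues, while the layer size, class names, and backbone features are illustrative assumptions. The key point it illustrates is that prediction and uncertainty come out of a single forward pass, rather than many repeated runs.

```python
# Minimal sketch (not the authors' code) of an evidential regression head.
# Instead of a single point estimate, the network outputs the parameters of a
# Normal-Inverse-Gamma distribution (gamma, nu, alpha, beta), from which a
# prediction and uncertainty estimates can be read off in one forward pass.
# Formulas follow Amini et al. (NeurIPS 2020); names and sizes are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class EvidentialRegressionHead(nn.Module):
    """Maps backbone features to the four Normal-Inverse-Gamma parameters."""

    def __init__(self, in_features: int):
        super().__init__()
        self.fc = nn.Linear(in_features, 4)  # gamma, nu, alpha, beta

    def forward(self, x: torch.Tensor):
        gamma, log_nu, log_alpha, log_beta = self.fc(x).chunk(4, dim=-1)
        nu = F.softplus(log_nu)             # enforce nu > 0
        alpha = F.softplus(log_alpha) + 1   # enforce alpha > 1
        beta = F.softplus(log_beta)         # enforce beta > 0
        return gamma, nu, alpha, beta


def predict_with_uncertainty(head: EvidentialRegressionHead, features: torch.Tensor):
    """Single forward pass: point prediction plus aleatoric/epistemic uncertainty."""
    gamma, nu, alpha, beta = head(features)
    prediction = gamma                       # predictive mean
    aleatoric = beta / (alpha - 1)           # expected noise in the data itself
    epistemic = beta / (nu * (alpha - 1))    # the model's own (evidential) uncertainty
    return prediction, aleatoric, epistemic


# Usage with dummy features standing in for some upstream backbone network:
head = EvidentialRegressionHead(in_features=64)
features = torch.randn(8, 64)
pred, aleatoric, epistemic = predict_with_uncertainty(head, features)
```

Because the uncertainty is computed from the same outputs as the prediction, there is no need to run the network many times or train an ensemble, which is what makes this style of estimate fast enough for split-second decisions.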
