Simeon Adebola on September 14, 2019
Black Box AI technologies like Deep Learning have seen great success in domains like ad delivery, speech recognition, and image classification, and have even defeated the world's best human players in Go, StarCraft, and Dota. As a result, adoption of these technologies has skyrocketed. But as employment of Black Box AI increases in safety-intensive and scientific domains, we are learning hard lessons about their limitations: they go wrong unexpectedly and are difficult to diagnose.
This talk is about safe AI.
AI has become ubiquitous in our lives.
It takes on tasks such as ads, recommendations, image processing, and games. Common features of these tasks are that data are nearly free and unlimited, action takes priority over learning, and there is little or no cost of failure. The AI used for these tasks is inefficient (it requires incredible amounts of data), opaque (it hides its knowledge away), and brittle (it can fail unexpectedly and catastrophically).
For tasks in biotech, public health, agriculture, and defense/intelligence, the common features are that data are scarce and expensive, action is dependent on learning, and failure costs lives.
However, AI meant for benign tasks is being adapted for high-risk tasks.
Baxter gave recent examples from Uber's self-driving car and IBM Watson.
How do we make AI safe when safety matters?
One possible way is Explainable AI (XAI).
There are questions we need to ask the AI.
So we need an explanation interface.
Explainable AI has been around since long before it became a buzzword. Baxter showed a nomogram image from many years before. Another example is expert systems, which became popular starting in the 1980s.
Recent examples of explainable AI: most of the work has been in deep learning, focused on image classification. Baxter discussed examples from recent papers. There are also approaches using heat maps, which highlight the regions of an image that drove a model's prediction; a sketch of one such approach appears below. The same can be done with text.
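For illustration only, here is a minimal sketch of one common heat-map technique, vanilla gradient saliency, assuming a pretrained torchvision classifier. The talk did not name the specific methods used in the papers (Grad-CAM, LIME, and occlusion maps are other popular choices), so treat this as an assumed stand-in rather than the speaker's method.

```python
# A minimal sketch of a gradient-based saliency ("heat map") explanation,
# assuming a pretrained torchvision classifier. This is one common way to
# produce such a map, not necessarily the method from the talk's papers.
import torch
from torchvision import models, transforms
from PIL import Image

model = models.resnet18(pretrained=True).eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def saliency_map(image_path):
    """Return a 224x224 map of how strongly each pixel influenced
    the model's top predicted class."""
    x = preprocess(Image.open(image_path).convert("RGB")).unsqueeze(0)
    x.requires_grad_(True)
    scores = model(x)              # (1, 1000) class scores
    top = scores[0].argmax()
    scores[0, top].backward()      # gradient of top score w.r.t. pixels
    # Collapse color channels by taking the max absolute gradient.
    return x.grad.abs().max(dim=1).values[0]
```

Pixels with large values in the returned map are the ones the model's prediction was most sensitive to, which is what the heat-map visualizations overlay on the input image.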
XAI work is mostly focused on explanation, but explanation itself has problems.
How do we fix these problems? Baxter said he does not know how to solve all of them, but we can solve most by embedding machine knowledge into the human mind.
How do we do this? Teach!
Baxter talked about the Rectangle Game and another experiment on teaching image categories; a sketch of the idea is shown below.
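To make the teaching idea concrete, here is a minimal sketch in the spirit of machine teaching. The exact rules of the Rectangle Game in the talk were not spelled out, so the axis-aligned-rectangle setup and the function below are assumptions for illustration.

```python
# A minimal sketch of the machine-teaching idea plausibly behind the
# "Rectangle Game": teach an axis-aligned rectangle concept with the
# smallest possible set of labeled examples. The exact rules of the
# game in the talk are assumptions here.

def teaching_set(x_min, x_max, y_min, y_max):
    """Return labeled points that uniquely identify the rectangle
    [x_min, x_max] x [y_min, y_max] on an integer grid.

    Two positive examples pin down opposite corners; four negative
    examples just outside each side rule out any larger rectangle.
    """
    positives = [(x_min, y_min), (x_max, y_max)]
    negatives = [(x_min - 1, y_min), (x_max + 1, y_max),
                 (x_min, y_min - 1), (x_max, y_max + 1)]
    return [(p, True) for p in positives] + [(n, False) for n in negatives]

# A learner who assumes the concept is an axis-aligned rectangle can
# recover it exactly from these six examples.
print(teaching_set(2, 5, 1, 4))
```

The point is that a teacher who knows the concept can pick far fewer, far more informative examples than random sampling would provide, which is what transferring machine knowledge to a human requires.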
We did have some success in transferring machine knowledge into humans.
The problem with this approach is that it is recursive math (the teacher must model the learner, who in turn models the teacher), and teaching is harder than learning.