To understand this answer, you need to understand that there are many AI techniques and technologies. A popular family today is deep learning, particularly convolutional neural networks; often, when people speak of AI today, this is what they mean. The approach was popularized in part by Google open-sourcing its TensorFlow library. These are considered “black box” systems, meaning that no one can say with certainty how the system arrives at its answers.

This problem is not a matter of whether the user understands how the solution works. It is really two questions: “does the developer know how the solution works?” and, more importantly, “can the solution explain why it gave a particular answer rather than a different one?”

For these black-box solutions, the answer to both questions is no. But they are only one class of AI system.
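
To make the point concrete, here is a minimal sketch, assuming TensorFlow is installed (the data and model are toy placeholders, not any particular system): a small convolutional network will readily return an answer, but nothing in it states the reason for that answer.

```python
import numpy as np
import tensorflow as tf

# Toy data: 100 random 28x28 grayscale "images" with binary labels.
x = np.random.rand(100, 28, 28, 1).astype("float32")
y = np.random.randint(0, 2, size=(100,))

# A small convolutional network, the kind of model usually meant
# when people talk about deep learning today.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(28, 28, 1)),
    tf.keras.layers.Conv2D(8, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")
model.fit(x, y, epochs=1, verbose=0)

# The model returns a probability for an input image...
print(model.predict(x[:1], verbose=0))
# ...but nothing in its learned weights states *why* this image
# scored the way it did. That opacity is the "black box" problem.
```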

There are other classes of AI systems, including “explainable AI” systems. These are often analytical rather than statistical solutions, for example rule-based reasoners and decision trees whose internal logic can be inspected. Because they are built analytically, the developer usually knows exactly how the solution works, and the solution itself can give a logical reason why it produced a particular answer in a given case.
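
By contrast, here is a minimal sketch of an analytical, rule-based decision (the rules, thresholds, and field names are hypothetical, chosen only for illustration). Because the logic is explicit, the system can hand back the exact reason along with its answer.

```python
# A hypothetical loan-screening rule base; every rule carries its own
# justification, so each decision comes with the reason it was made.
RULES = [
    (lambda a: a["credit_score"] < 580,         "deny",    "credit score below 580"),
    (lambda a: a["debt_to_income"] > 0.45,      "deny",    "debt-to-income ratio above 45%"),
    (lambda a: a["income"] >= 3 * a["payment"], "approve", "income covers 3x the monthly payment"),
]

def decide(applicant):
    """Return (decision, reason) for the first rule that matches."""
    for condition, decision, reason in RULES:
        if condition(applicant):
            return decision, reason
    return "refer", "no rule matched; route to a human reviewer"

decision, reason = decide({"credit_score": 700, "debt_to_income": 0.30,
                           "income": 5000, "payment": 1200})
print(decision, "-", reason)   # approve - income covers 3x the monthly payment
```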

We cannot be confident using black-box AI solutions in the many use cases where a reason for an answer is required. Regulated industries like banking and healthcare demand exactly this kind of explainability. “Explainable AI” solutions do provide that capability, and we can be confident using them.