Technology

Artificial Intelligence Has Some Explaining to Do

Software makers offer more transparent machine-learning tools—but there’s a trade-off.

Artificial intelligence software can recognize faces, translate between Mandarin and Swahili, and beat the world’s best human players at such games as Go, chess, and poker. What it can’t always do is explain itself.

AI is software that can learn from data or experiences to make predictions. A computer programmer specifies the data from which the software should learn and writes a set of instructions, known as an algorithm, about how the software should do that—but doesn’t dictate exactly what it should learn. This is what gives AI much of its power: It can discover connections in the data that are more complicated or nuanced than any a human would find. But this complexity also means that the reason the software reaches any particular conclusion is often largely opaque, even to its own creators.
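The division of labor described above—programmer supplies the data and the learning rule, software finds the pattern—can be illustrated with a toy sketch. This hypothetical example uses a single-parameter model trained by gradient descent; nothing in the code states the underlying relationship, yet the program recovers it. (With one learnable weight the result is still easy to inspect; the opacity the article describes emerges when models have millions of such weights.)

```python
# Hypothetical illustration: the programmer supplies data and a
# learning rule, but never writes the answer itself.

# Training data: the programmer chooses what to learn FROM ...
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.0, 4.0, 6.0, 8.0]   # hidden pattern: y = 2x (never stated in code)

# ... and the algorithm: a simple gradient-descent update rule.
w = 0.0                      # the model's single learned parameter
lr = 0.01                    # learning rate
for _ in range(1000):        # repeat over the data many times
    for x, y in zip(xs, ys):
        err = w * x - y      # prediction error on this example
        w -= lr * err * x    # nudge w to reduce squared error

print(round(w, 2))           # the software has learned w ≈ 2.0 on its own
```

The programmer wrote the update rule, not the value of `w`—that came from the data, which is the sense in which the algorithm doesn’t dictate exactly what is learned.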