When Is It Important for an Algorithm to Explain Itself?

Posted by @audreylaine, Jul 6, 2018

Many efforts to apply machine learning stall over concerns about the “black box” — that is, the lack of transparency around why a system does what it does. Sometimes this is because people want to understand why a prediction was made before taking life-altering action, as when a computer-vision system indicates a 95% likelihood of cancer from an x-ray of a patient’s lung. Sometimes it’s because technical teams need to identify and resolve bugs without disrupting the entire system. And now that the General Data Protection Regulation (GDPR) is in effect, businesses that handle consumer data are required to explain how automated systems make decisions, especially decisions that significantly affect individuals’ lives, such as allocating credit or hiring a candidate for a job. While the GDPR applies only in Europe, businesses around the world anticipate that similar rules are coming and are revisiting their governance efforts accordingly.

This article examines why a seemingly straightforward idea like a “right to an explanation” is hard to pin down and harder to implement in practice.


© Mayo Clinic Social Media Network. All Rights Reserved.