Call it gallows humor, but I found this particular taxonomy ironic. It came to us from Boing Boing in their article, “A taxonomy of algorithmic accountability.”
Computer scientist Ed Felten shared a short taxonomy of four ways that an algorithm can fail to be accountable to the people whose lives it affects. It can be protected….
- by claims of confidentiality (“how it works is a trade secret”);
- by complexity (“you wouldn’t understand how it works”);
- by unreasonableness (“we consider factors supported by data, even when there’s no obvious correlation”);
- and by injustice (“it seems impossible to explain how the algorithm is consistent with law or ethics”).
In the long run, all of these types of complaints are addressable, so perhaps explainability is not a unique problem for algorithms but rather a set of common-sense principles that any system must abide by.
Melody K. Smith
Sponsored by Access Innovations, the world leader in taxonomies, metadata, and semantic enrichment to make your content findable.