Canadian Association for the Club of Rome
January 28, 2018

Optimization over Explanation

Maximizing the benefits of machine learning without sacrificing its intelligence

Imagine your Aunt Ida is in an autonomous vehicle (AV) — a self-driving car — on a city street closed to human-driven vehicles. Imagine a swarm of puppies drops from an overpass, a sinkhole opens up beneath a bus full of mathematical geniuses, or Beethoven (or Tupac) jumps into the street from the left as Mozart (or Biggie) jumps in from the right. Whatever the dilemma, imagine that the least worst option for the network of AVs is to drive the car containing your Aunt Ida into a concrete abutment. Even if the system made the right choice — all other options would have resulted in more deaths — you’d probably want an explanation.

Or consider the cases where machine-learning-based AI has gone wrong. It was bad when Google Photos identified black men as gorillas. It can be devastating when AI recommends that black men be kept in jail longer than white men for no reason other than their race. Not to mention autonomous military weapon systems that could deliver racism in airborne explosives.

To help ameliorate such injustices, the European Parliament has issued the General Data Protection Regulation (GDPR), which is often taken to stipulate a “right to explanation” for algorithms that “significantly affect” users. This sounds sensible. In fact, why not simply require that all AI systems be able to explain how they came to their conclusions?

The answer is not only that this can be a significant technical challenge, but also that keeping AI simple enough to be explicable can prevent us from realizing the full value of unhobbled AI. Still, one way or another, we’re going to have to make policy decisions governing the use of AI — particularly machine learning — when it affects us in ways that matter.

One approach is to force AI to be artificially stupid enough that we can understand how it comes up with its conclusion. But here’s another: Accept that we’re not always going to be able to understand our machine’s “thinking.” Instead, use our existing policy-making processes — regulators, legislators, judicial systems, irate citizens, squabbling politicians — to decide what we want these systems optimized for. Measure the results. Fix the systems when they don’t hit their marks. Celebrate and improve them when they do.
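The "optimize, measure, fix" loop described above can be sketched in a few lines of code. Everything here — the metric names, the thresholds, the `audit` function — is an illustrative assumption invented for this sketch, not part of any real regulatory framework or AI system; the point is only that governing by measured outcomes does not require inspecting the model's internals.

```python
# A minimal sketch of outcome-based governance for an opaque AI system.
# All metric names and thresholds are hypothetical, chosen for illustration.

def audit(outcomes, targets):
    """Compare measured outcomes against publicly agreed targets.

    outcomes: dict mapping metric name -> measured value
    targets:  dict mapping metric name -> (comparator, threshold)
    Returns the list of metrics the system failed to meet.
    """
    failures = []
    for metric, (comparator, threshold) in targets.items():
        value = outcomes.get(metric)
        if value is None or not comparator(value, threshold):
            failures.append(metric)
    return failures

# Targets are set by policy, not by the model: e.g. an AV fleet must keep
# fatalities below an agreed rate, and its error rate must not differ
# across demographic groups by more than an agreed gap.
targets = {
    "fatalities_per_1e9_km": (lambda v, t: v <= t, 1.0),
    "max_group_error_gap":   (lambda v, t: v <= t, 0.02),
}

measured = {"fatalities_per_1e9_km": 0.7, "max_group_error_gap": 0.05}

# This system hits its safety mark but misses its fairness mark,
# so it gets flagged for fixing -- no explanation of its internals needed.
print(audit(measured, targets))
```

The design choice worth noticing: the audit never asks *why* the system behaved as it did, only *whether* its measured behaviour satisfies the values we decided on through ordinary policy-making.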

We should be able to ask for explanations when we can. But when we can’t, we should keep using the systems so long as they are doing what we want from them.

For, alas, there’s no such thing as a free explanation.

Read the entire article here

 

Article posted by John Verdon.

