“One shortcoming of current machine-learning programs is that they fail in surprising and decidedly non-human ways. A team of Massachusetts Institute of Technology students recently demonstrated, for instance, how one of Google’s advanced image classifiers could be easily duped into mistaking an obvious image of a turtle for a rifle, and a cat for some guacamole.” — Jerry Kaplan, The Wall Street Journal, June 2, 2018
We have, in recent years, been bombarded with stories about AI (Artificial Intelligence, for those of you who live in farm country and think it means something else), and about how our meager human brains will soon not be able to keep up with those super-smart machines. Self-driving cars. Computers that accurately diagnose, and even treat, medical conditions. Robots that perform surgery, manage eldercare, and take care of our sexual needs. Autonomous military drones. Siri. Predictive applications to “enhance” our Internet experience (Amazon, Pandora, etc.). Chatbots. Legal assistants. And, of course, the omnipresent Google.
So I was strangely reassured by the above passage, which I came across some time ago in a Wall Street Journal article on efforts to make self-driving cars fail in predictable ways (so that, for example, they don’t mistake light reflected back from their own camera lenses for truck headlights rushing at them from the other direction and run off the road as a result, and so they don’t perform like the self-driving Uber test vehicle in Tempe, AZ, which killed a pedestrian walking her bike across the road because its algorithms, though they did register her presence, mistook her for “ghosting” in the poorly lit night).
Elon Musk, the CEO of Tesla, isn’t happy about the bad rap self-driving cars are getting, believing that the press is out to get Tesla, and that the “holier-than-thou hypocrisy of the big media companies [lays] claim to the truth but [publishes] only enough to sugarcoat the lie.” In his view, this is “why the public no longer respects them.” Because they are out to get Tesla.
In his view, maybe. I’m thinking that his essentially accurate, but self-interested, view of press shortcomings, and of why the public disrespects big media, might be suffering from tunnel vision. Which, as I understand it, is another thing self-driving cars aren’t so good at.
Research shows we’d be much more accepting of self-driving cars if they failed in the same sorts of ways human drivers do (misjudging a curve and taking it at too high a speed, failing to notice the car in the “blind spot,” going the wrong way up a one-way street, driving distracted by texting or phone calls) rather than in the spectacularly unpredictable ways they sometimes do. Even if there are ultimately far fewer accidents, the sheer unpredictability of today’s AI “fails” makes us queasy.
No doubt many of these concerns will be addressed with time, money, and more research, and as the Wall Street Journal article makes clear, the future probably belongs, at least in large part, to AI and its many applications.
But, fellow humans, all is not yet lost, and with luck (our own unpredictability, a wild card the machines haven’t sussed out yet), it never will be.** Gird your loins, get behind the wheel, put your foot on the gas pedal, and press onward!
Oh, and hand over the guacamole before you leave.
(I’ve always loved this ad, which seems largely representative of much of my life. And appropriate for this post. )
**I can’t help thinking that it’ll be a while before a self-driving car on a major highway gets pulled over by the cops, who discover that one of the reasons the robot in charge is driving so erratically is that he’s making a cup of tea, complete with kettle and milk, on the floor in front of the passenger seat. Only in England.