
Lynch: Automation Designed to Keep People Safe Can Produce Opposite Result

By Tom Lynch

Tuesday, September 19, 2017

A fascinating article in Friday's Daily Alert from the Harvard Business Review describes how our dependence on automation can erode cognitive ability to respond to emergencies.

In “The Tragic Crash of Flight AF447 Shows the Unlikely but Catastrophic Consequences of Automation,” authors Nick Oliver, Thomas Calvard and Kristina Potocnik, professors and researchers at the University of Edinburgh Business School, report on their analysis of the horrific crash of Air France flight 447 in 2009. Their research, recently published in Organization Science, describes in riveting detail the series of preventable cascading events that led to the deaths of all 228 passengers and crew.

Although the crash of AF447 is a transportation tragedy, it also serves as a stark reminder that employees who depend on technology, especially technology that controls dangerous work — say, self-driving 18-wheel trucks — need extensive training to take the right steps when that technology behaves unexpectedly in an emergency. Without such training, the authors contend, the cognitive ability to take manual control and successfully deal with the emergency is problematic at best.

The authors provide an example:

"Imagine having to do some moderately complex arithmetic. Most of us could do this in our heads if we had to, but because we typically rely on technology like calculators and spreadsheets to do this, it might take us a while to call up the relevant mental processes and do it on our own. What if you were asked, without warning, to do this under stressful and time-critical conditions? The risk of error would be considerable.

"This was the challenge that the crew of AF447 faced. But they also had to deal with certain “automation surprises,” such as technology behaving in ways that they did not understand or expect."

The point here is that the technology offering up the “automation surprises” was doing exactly what it was programmed to do. The technology did not fail; the pilots, all three of them, failed in their response to the “surprises.”

We are now at the beginning of a monumental shift in the way work (and play) is done. The natural gravitational movement of artificial intelligence assuming more and more control in our daily lives is unstoppable. Think of how it has brought tremendous improvements in air safety.

To illustrate, consider this astounding statistic: In 2016 the accident rate for major jets was just one major accident for every 2.56 million flights. But this bubble of safety can breed terrible complacency.

How humanity deals with and prepares for the rude “automation surprises” that will surely come along on the way to the future should be a critical component in the thinking of organizational leaders and safety professionals.

Tom Lynch is a principal with Lynch Ryan & Associates, a Massachusetts-based employer consulting firm. This column was reprinted with his permission from his Workers' Comp Insider blog.
