Photo: Anatoly Stojko | Dreamstime

Is our faith in algorithms misplaced?

April 20, 2022
AI has great benefits, but blindly trusting machines and sensors can lead to bad decision-making.

Recently, my doctor gave me a stress test, which basically amounts to running on a treadmill while attached to an electrocardiogram (EKG) machine. I noticed that the pulse rate on my Fitbit watch differed from the EKG reading. The difference changed over time but did not seem to follow any pattern. No big deal (I'm fine), but I was fascinated by how this could be.

The answer may lie in a flaw in how software engineers think. They like to measure things that are easy to measure. When direct measurement is difficult or expensive, such as measuring the heart's electrical activity, they find a proxy. Here, my watch was measuring light reflected from the blood vessels in my wrist. The engineer makes some assumptions, tests them, and develops an algorithm to turn that optical data into the pulse rate on the display.
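To make the proxy idea concrete, here is a minimal sketch of that kind of calculation: counting pulse waves in a noisy optical signal to produce a bpm number. This is illustrative only, not Fitbit's actual algorithm; the sampling rate, smoothing windows, and simulated signal are all assumptions.

```python
import numpy as np

def estimate_bpm(ppg: np.ndarray, fs: float) -> float:
    """Estimate pulse rate from a raw optical (PPG) signal sampled at fs Hz."""
    # Remove the slow baseline (ambient light, skin tone) with a 1-second moving average.
    window = int(fs)
    baseline = np.convolve(ppg, np.ones(window) / window, mode="same")
    detrended = ppg - baseline
    # Smooth out sample-to-sample sensor noise before counting beats.
    smooth = np.convolve(detrended, np.ones(5) / 5, mode="same")
    # Each heartbeat produces one pulse wave, i.e. one upward zero crossing.
    beats = np.count_nonzero((smooth[:-1] < 0) & (smooth[1:] >= 0))
    return 60.0 * beats / (len(ppg) / fs)

# Simulated 30-second recording at 25 Hz: a 75 bpm pulse plus sensor noise.
rng = np.random.default_rng(42)
fs = 25.0
t = np.arange(0, 30, 1 / fs)
ppg = np.sin(2 * np.pi * (75 / 60) * t) + 0.2 * rng.standard_normal(t.size)
print(f"Estimated pulse: {estimate_bpm(ppg, fs):.0f} bpm")
```

Every step bakes in an assumption: the smoothing windows, the crossing rule, the idea that one pulse wave equals one beat. Add motion, a loose band, or an unusual waveform, and the estimate drifts away from what an EKG would show.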

The problem is that the watch's reading was wrong. I had been living as if my pulse rate were 75 when, in fact, it wasn't. Again, no big deal, but what if it were a big deal? We are so used to relying on algorithms that we risk making the wrong decision every day of the week.

This is increasingly relevant as commercial vehicles take on higher levels of automation. Someday these vehicles will likely be driving themselves, and we need a healthy fear of where that might lead, and maybe a hesitation about algorithms in general.

Algorithms are concise instructions, recipes that computers can follow to do their assigned work. Who writes these algorithms? Programmers do. They may be experts in the systems they write for, or have no expertise in them at all; they might be experts in a completely different field. That disconnect has existed in the software field from the beginning. Today, data scientists do much of this programming, and these subject-matter experts (SMEs) do hold expertise in the application and in what the readings from this sophisticated technology actually mean.

But it can get complicated. The Cat 797 mining truck has about 150 sensors generating data every second, transmitted locally or even to the cloud. The sensors measure things such as loading, weather, fuel, and GPS position, and together the streams add up to big data. Hundreds or even thousands of algorithms might then use those fields to figure out what is happening inside the asset, what to do about it, and how to improve performance.
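As a sketch of one small slice of such a pipeline, here is what a telemetry record and a first-pass screen on it might look like. The schema, channel names, and threshold are hypothetical, not Caterpillar's; the screen uses a robust median-based score so that a single wild value cannot hide itself by inflating the average.

```python
from dataclasses import dataclass
from statistics import median

@dataclass
class SensorReading:
    """One telemetry sample from one channel (hypothetical schema)."""
    truck_id: str
    channel: str      # e.g. "payload_tons", "fuel_rate_lph", "coolant_temp_c"
    timestamp: float  # epoch seconds
    value: float

def flag_suspect(readings: list[SensorReading], limit: float = 3.5) -> list[SensorReading]:
    """Flag readings with a modified z-score above limit.

    Uses the median and median absolute deviation (MAD) instead of mean
    and standard deviation, so one bad sensor value can't mask itself.
    """
    values = [r.value for r in readings]
    center = median(values)
    mad = median(abs(v - center) for v in values)
    if mad == 0:  # all values identical: nothing stands out
        return []
    return [r for r in readings if 0.6745 * abs(r.value - center) / mad > limit]

# Ten seconds of one channel from one truck; the 240-degree spike gets flagged.
samples = [SensorReading("T-101", "coolant_temp_c", 1_700_000_000 + i, v)
           for i, v in enumerate([88, 89, 87, 90, 240, 88, 89, 91, 90, 88])]
print([r.value for r in flag_suspect(samples)])  # [240]
```

Even this toy example carries judgment calls: the 3.5 cutoff, the size of the window, the decision to flag rather than act. Each one is an assumption someone coded in.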

The goal of the SME may be to predict component failure, testing hundreds of candidate algorithms to find a few that work. What happens if an algorithm doesn't work? How do we act when one gives us bad results? Mistakes and biases in artificial intelligence rank among the greatest fears of AI scientists, technologists, business leaders, and policymakers alike. I think that blindly following the computer's results without thought should be our fear as well.
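Here is what "testing hundreds of algorithms" looks like in miniature, using synthetic data and generic scikit-learn models rather than any real fleet dataset. The key design choice is the metric: with failures rare, raw accuracy would reward a model that never predicts failure at all, which is precisely the kind of quietly bad result that should worry us.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Stand-in data: rows are machine-days of sensor-derived features;
# label 1 means the component failed soon after (about 5% of rows).
X, y = make_classification(n_samples=2000, n_features=20,
                           weights=[0.95], random_state=0)

candidates = {
    "logistic regression": LogisticRegression(max_iter=1000),
    "random forest": RandomForestClassifier(n_estimators=100, random_state=0),
    "gradient boosting": GradientBoostingClassifier(random_state=0),
}

# Score every candidate the same way, on data it wasn't trained on,
# and judge it by how many real failures it catches (recall).
for name, model in candidates.items():
    recall = cross_val_score(model, X, y, cv=5, scoring="recall").mean()
    print(f"{name}: mean recall on failures = {recall:.2f}")
```

The code will happily rank whatever it is given; deciding whether any of the winners is actually good enough to act on is still the SME's job.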

In some cases, the company will lose money; in others, quality might suffer a bit. But in a few instances, lives are in the balance.

This worst-case scenario happened with Boeing's 737 Max, where bad software caused two fatal passenger jet crashes. Here's the short version: Boeing installed new flight control software called the Maneuvering Characteristics Augmentation System (MCAS) to compensate for handling changes that came up when the Max got bigger engines. Pilots were told that flying this new 737 was just like flying the classic version, except that this system could take control and push the nose down if an externally mounted angle-of-attack sensor indicated the nose was pitched so high that the plane might stall. The pilot loses authority, and if the sensor and the software are wrong, there is no recourse, because the programmers didn't code for that case. As a result, 346 people died.

In IEEE Spectrum, pilot and engineer Gregory Travis wrote: “It is astounding that no one who wrote the MCAS software for the 737 Max seems even to have raised the possibility of using multiple inputs, including the opposite angle-of-attack sensor, in the computer’s determination of an impending stall. As a lifetime member of the software development fraternity, I don’t know what toxic combination of inexperience, hubris, or lack of cultural understanding led to this mistake.”
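Travis's point translates almost directly into code. Below is a hedged sketch, with invented threshold numbers, of what a cross-checked version of that decision could look like: read both angle-of-attack vanes, refuse to act automatically when they disagree, and only command nose-down when independent sensors concur.

```python
def stall_protection_command(aoa_left_deg: float, aoa_right_deg: float,
                             stall_threshold_deg: float = 14.0,
                             max_disagreement_deg: float = 5.0) -> str:
    """Decide on a nose-down command using BOTH angle-of-attack sensors.

    Illustrative logic and thresholds only; the MCAS that shipped read a
    single sensor, which is the flaw Travis describes.
    """
    # If the two vanes disagree badly, at least one is probably broken:
    # disengage and alert the crew instead of acting on bad data.
    if abs(aoa_left_deg - aoa_right_deg) > max_disagreement_deg:
        return "DISENGAGE: AoA disagree, alert crew, no automatic action"
    # Only push the nose down when both independent sensors agree on stall risk.
    if min(aoa_left_deg, aoa_right_deg) > stall_threshold_deg:
        return "NOSE DOWN: both sensors indicate an impending stall"
    return "NO ACTION"

# A failed vane pegged at 74 degrees while the healthy one reads 6:
# a single-sensor design trims nose down; the cross-check refuses to.
print(stall_protection_command(74.0, 6.0))
```

The disagreement check is only a dozen lines; the point is that the skepticism has to be designed in.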

This is one of many reasons why I fear that we will defer to the machine and not think for ourselves. Likely, this is already the norm.

About the Author

Joel Levitt | President, Springfield Resources

Joel Levitt has trained more than 17,000 maintenance leaders from more than 3,000 organizations in 24 countries. He is the president of Springfield Resources, a management consulting firm that serves a variety of clients on a wide range of maintenance issues (www.maintenancetraining.com). He is also the designer of Laser-Focused Training, a flexible program that provides specific, targeted training on your schedule, online, to groups of one to 250 people in maintenance management, asset management, and reliability.
