Why You Can’t Just Let AI Do All the Work


There’s a certain thrill that comes from giving up control. Stepping out of the airplane with a parachute strapped to your back, buckling into the seat of the roller coaster car, or even just saying to the chef “surprise me” all share the same qualities. You are surrendering your fate to someone or something else, trusting in their technology or expertise, in exchange for something you can’t do on your own. 

In business, the latest trend is our increasing reliance on AI (artificial intelligence) software. There is no doubt that the advanced learning models and neural networks available today can find and optimize relationships in big data in a way that humans never could. From music recommendations on iHeartRadio to surge pricing on Uber to Teslas driving themselves down the highway, AI and machine learning keep creeping farther and farther into our lives and culture.

As the technology evolves, it becomes more accessible to companies beyond just “big tech”. This is where the temptation comes in. The potential benefit of applying AI to your business challenges is just too big to pass up. More and more companies are incorporating AI into their everyday operations, but the nature of AI introduces new challenges that you can’t just ignore. 

The very complexity and self-guidance that makes AI so powerful also obscures its inner workings. Unlike traditional software algorithms and programs, AI implementations become black boxes. You can see the inputs going in, the outputs coming out, and a set of parameters used to “tune” the system, but you can’t know how those factors combine to lead to an output. In most cases, you also can’t expect that the same set of inputs will consistently lead to the same output. 

Essentially, you are ceding control to this black box and trusting that it will deliver on a set of goals at least as well as a traditional “full control” model would. This is where things get sticky, because real-world results are never as simple as the one or two KPIs the AI model is optimizing for. To see this in action, look no further than the controversy over Facebook’s newsfeed system, where a model designed to optimize for user engagement often emphasizes controversial and polarizing content. Undoubtedly the system generated much more engagement than a traditional “editorially managed” feed would, but that enhanced performance came with negative impacts for users as well as for the company itself.

For a slightly less sinister example, think of the living brooms from the Sorcerer’s Apprentice. They were tasked with hauling water from the well. They were “intelligent” enough to continue that task even when Mickey tried to stop them, but not truly intelligent enough to realize that there was already far too much water.

AI will continue to optimize to a goal, not realizing when the goal should change.

If you tell an AI to optimize for number of orders generated, it’s not going to say “Hey boss, most of these orders are sale items with less margin. Are you sure that’s what you want me to do?” If you tell it to optimize delivery routes to reduce fuel costs, it’s not going to say “You know, that might mean our product isn’t as fresh when it gets to the customer.” And anyone who has watched enough sci-fi movies knows what happens if you ask an AI to create world peace (spoiler alert: it usually decides the best way is to kill all humans). 
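The orders-versus-margin problem above can be shown in a few lines. This is a deliberately toy sketch (the campaigns and numbers are made up, not real data): an optimizer told only to maximize order count will favor low-margin sale items, because margin simply isn’t in its objective.

```python
# Hypothetical illustration: two campaigns, scored two ways.
campaigns = [
    {"name": "clearance_sale", "orders": 500, "margin_per_order": 1.0},
    {"name": "full_price",     "orders": 200, "margin_per_order": 8.0},
]

# The objective the AI was given: raw order volume.
by_orders = max(campaigns, key=lambda c: c["orders"])

# The metric the business actually cares about: total margin.
by_margin = max(campaigns, key=lambda c: c["orders"] * c["margin_per_order"])

print(by_orders["name"])  # clearance_sale -- wins on order count
print(by_margin["name"])  # full_price -- wins on total margin
```

Both answers are “correct” for the objective each was given; the model will never volunteer that you asked the wrong question.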

The lesson here isn’t to avoid AI and its many benefits. The lesson is that, as we put more and more robots to work for us, we must become vigilant human overlords. As the AI models take over the grunt work and apply their magic, the humans responsible for the business have to keep a keen eye on ALL of the data and results to watch for unintended consequences. To get the best value out of AI and reduce the risk, you need a three-stage cycle:

  1. The humans “train” the AI and set the parameters for success
  2. The AI does the heavy lifting and optimizes based on its settings
  3. The humans explore the data and find actionable insights that can be used to adjust the success parameters and repeat the cycle
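The three stages above can be sketched as a simple loop. Everything here is illustrative, not a real library: `train`, `optimize`, and `review` stand in for whatever your actual pipeline does, and the review step models a human noticing poor margins and adjusting the success parameters.

```python
# Stage 1: humans set the parameters for success.
def train(params):
    return {"margin_weight": params.get("margin_weight", 0.0)}

# Stage 2: the AI does the heavy lifting against its settings.
def optimize(model, data):
    return max(data, key=lambda d: d["orders"] + model["margin_weight"] * d["margin"])

# Stage 3: humans explore the results and adjust the parameters.
def review(pick, data):
    avg_margin = sum(d["margin"] for d in data) / len(data)
    if pick["margin"] < avg_margin:          # unintended consequence spotted
        return {"margin_weight": 1.0}        # adjusted success parameters
    return None

data = [
    {"name": "sale_blitz", "orders": 900, "margin": 100},
    {"name": "steady",     "orders": 400, "margin": 1200},
]

model = train({})                 # first pass: margin ignored
pick = optimize(model, data)      # picks the high-order, low-margin option
adjustment = review(pick, data)
if adjustment:                    # humans repeat the cycle with new settings
    model = train(adjustment)
    pick = optimize(model, data)
print(pick["name"])
```

The point of the sketch is the shape of the loop, not the math: the model never questions its objective, so the correction has to come from the human review stage.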

Someday the “I” in AI might be powerful enough to understand all of the subtle details, but until then, we can let AI drive as long as we are awake at the wheel and ready to steer. 

If you want to see how to give your teams the knowledge and skills to find actionable insights, contact us