The psychologist Abraham Maslow famously said that if the only tool you have is a hammer, you tend to see every problem as a nail. And everywhere you look these days, companies are wielding AI like a hammer, hoping it can solve all their pressing business problems: “How can we use AI? How can we sell AI? How can we make money on AI?”
But here’s the thing – if you’re starting with those questions, you’re starting with a broken assumption. The right question isn’t “How do we use/sell/make a fortune with AI?” The right question is “What problems are we trying to solve?”
What Problem Are You Solving?
Technology should make us more capable of doing uniquely human work, not replace our capacity to think, create, and connect with each other. Starting with the tool instead of the problem is why so many AI implementations stall: companies are throwing technology at problems they haven’t properly defined or understood.
The approach I recommend to any leader considering AI: Start by listing your actual problems. Not theoretical problems, not problems you think you should have, but the real pain points keeping your teams from working fast. Then ask: What tools do I have that can help solve these problems? AI might be one of those tools. It makes sense to explore AI solutions for tedious, repeatable, manual tasks, and you can likely identify those opportunities in your organization easily.
But let’s say your employee engagement is suffering, and people have expressed needing better support during difficult times. You want human connection and emotional intelligence here, not algorithmic responses. Starting with the problem helps reveal the appropriate solution.
Should We Engineer Out the Human Element?
Today, AI excels at automating repetitive tasks – the digital equivalent of assembly line work. If your backup administrators are turning the same widgets over and over, or your data entry teams are mired in purely laborious spreadsheet work, AI absolutely can help. But I believe relationship building, creative problem-solving, and complex decision-making require human judgment, intuition, and contextual understanding that no algorithm can yet replicate.
At Commvault, we’re committed to the ethical development and deployment of AI. We’ve employed it for tasks like turning complex regulatory documents into succinct summaries or helping create targeted versions of content, saving time so employees can focus on more value-added activities. None of this work happens without careful human oversight.
The companies I see succeeding understand this distinction. They use AI to eliminate tedious tasks so workers can focus on what humans do best: nuanced decision-making, building trust, navigating complex stakeholder relationships, and thinking through problems that don’t have clear precedents.
A Framework for Smart AI Adoption
Before implementing any AI solution, leadership teams should ask themselves these questions:
- What specific problem are we solving? Be concrete. “We want to be more efficient” isn’t specific enough. Try: “We want to automate X.”
- Why is this problem worth solving? What’s the real business impact?
- Where do we need human judgment to remain in the loop? Identify the decision points, beyond just high-risk scenarios, that demand wisdom, not just intelligence.
- How will we measure success? Not just adoption rates, but actual problem resolution.
- Revisit the conversation. Successful AI adoption is not a point-in-time measurement.
Read more in Guiding Principles for Responsible AI.
The Path Forward
Make time to establish procedures for data handling, privacy protection, and decision-making authority. Who controls what information gets fed into AI systems? What data absolutely cannot be uploaded to external AI platforms? How do you prevent customer data from being used to train models?
When you put information into an AI system, you may be sharing it not only with that vendor, but also with its cloud providers, sub-processors, and others in its data supply chain. A secret known by more than three people isn’t a secret anymore – so be careful before you hand yours to dozens of entities.
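One way to make the “what cannot be uploaded” rule operational is a screening step that runs before any prompt leaves your environment. Below is a minimal sketch in Python; the pattern names and categories are illustrative assumptions, not a complete policy, and in practice this would complement contractual controls (such as vendor opt-outs from model training) rather than replace them.

```python
import re

# Hypothetical denylist: categories of content that must never be sent
# to an external AI platform. A real policy would be far more complete.
DENYLIST_PATTERNS = {
    "customer_email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "api_key_hint": re.compile(r"\b(sk|key|token)[-_][A-Za-z0-9]{16,}\b"),
}

def screen_prompt(prompt: str) -> tuple[bool, list[str]]:
    """Return (allowed, violations) for a prompt bound for an external AI service."""
    violations = [name for name, pattern in DENYLIST_PATTERNS.items()
                  if pattern.search(prompt)]
    return (not violations, violations)

# Example: a prompt containing a customer email address is blocked
# before it ever reaches the vendor's data supply chain.
allowed, violations = screen_prompt(
    "Summarize this ticket from jane.doe@example.com about a billing error."
)
if not allowed:
    # Route to a human reviewer or an internally hosted model instead.
    print(f"Blocked: prompt matched {violations}")
```

Pattern matching only catches the obvious cases; the governance questions above still determine what the policy should be and who owns it.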
Learn more about Commvault’s approach at Principles for Responsible Artificial Intelligence.
Danielle Sheer is Chief Trust Officer at Commvault.