Ethical AI: No Longer Optional
Business users increasingly rely on AI systems to help their organizations work more productively. Yet if business leaders don’t understand how these systems work – especially the role data plays in how AI makes decisions – how can they know whether the systems they rely on are ethical?
I don’t think that’s a controversial statement. Yet as I train corporate auditors and business users to understand, communicate, and apply data analytics to make better assurance decisions, it’s clear we have a major gap to close – in the ethics of artificial intelligence and in basic data skills more generally.
Finance on the fly
I compare the critical nature of AI and data skills to corporate finance. How many senior leaders have zero understanding of budget management, or are comfortable not knowing how the company controls its spending? It’s hard to imagine a meeting where a CFO shrugs and says, “I just create the budget. I don’t have time to follow where the money goes.” But this is exactly what’s happening with AI in many companies.
If your company is using AI, simply assuming someone on the IT team understands it is a high-risk strategy. Finance knowledge is a basic necessity for most leaders, but what about data skills and knowledge of AI technologies? If business leaders don’t have the skills to challenge an AI system – Is our recruiting tool sexist? Is our chatbot racist? – they are more likely to let questions about it wash over them. Yet a lack of business knowledge around the use of AI is a serious corporate risk: it can lead to enormous reputational damage and financial impact for the whole organization.
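To make that kind of challenge concrete, here is a minimal sketch (in Python, using invented numbers rather than any real recruiting data) of one simple question an auditor could put to a hiring tool: do its recommendation rates differ sharply between demographic groups? The group labels, figures, and threshold below are illustrative assumptions, not a prescribed method.

```python
# Illustrative only: a simple demographic-parity check on hypothetical
# recruiting-tool decisions. Every figure below is invented for the example.
from collections import defaultdict

# Hypothetical (applicant_group, recommended_for_interview) records.
decisions = [
    ("group_a", True), ("group_a", False), ("group_a", True), ("group_a", True),
    ("group_b", False), ("group_b", False), ("group_b", True), ("group_b", False),
]

totals = defaultdict(int)    # applicants per group
selected = defaultdict(int)  # positive recommendations per group
for group, recommended in decisions:
    totals[group] += 1
    if recommended:
        selected[group] += 1

# Selection rate per group, and the ratio of the lowest to the highest rate.
rates = {group: selected[group] / totals[group] for group in totals}
ratio = min(rates.values()) / max(rates.values())

for group, rate in sorted(rates.items()):
    print(f"{group}: {rate:.0%} recommended for interview")
print(f"Lowest/highest selection-rate ratio: {ratio:.2f}")

# The 'four-fifths' rule of thumb flags ratios below 0.8 for closer review.
# It is a prompt for questions to the data science team, not a verdict.
if ratio < 0.8:
    print("Disparity exceeds the four-fifths threshold; ask why.")
```

The point isn’t the arithmetic. It’s that a leader who can frame the question this precisely is far harder to brush off.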
It’s unrealistic to expect auditors or business leaders to become AI experts overnight. What matters is that people accept they have a knowledge or skill gap. Part of the challenge for data leaders is that the technology wave is rising and AI systems are becoming easier to obtain; the speed of change is so fast that business leaders may not grasp the implications. As public awareness of ethical AI rises, AI will no longer be accepted simply as a given. Its algorithms and output will increasingly be interrogated.
The AI Industrial Revolution
I like to link AI’s evolution to the Industrial Revolution that came two centuries before it. Nineteenth-century companies didn’t invest in flying shuttles and sewing machines so their workers could stare at them while continuing to do all their work by hand. Mill owners didn’t realize leaps in productivity until they showed their workers how to use the new machines to produce more output, faster.
I fear we’re still in the same boat when it comes to most data and technology. We all carry around tiny computers called mobile phones with mind-blowing processing speed, but using them to watch funny videos or post pictures from our weekends isn’t making us more productive – and productivity is one of the key drivers of economic growth.
Five ways to move toward ethical AI
Whether you’re a data leader new to AI or advising someone in the business, here are a few rules of thumb for engaging with AI deeply enough to identify ethical shortcomings in your company’s algorithms and AI plans.
- Be brave. Auditing AI for ethical shortcomings is a tectonic shift from the way most audit groups operate, but we all need to build up our courage and grapple with AI, even if we don’t initially feel comfortable with it. If your businesspeople don’t understand AI and your auditors aren’t looking at it, you’re contributing to a growing governance gap that someone needs to challenge before ethical issues cause real trouble.
- Be nosy. Read blogs, find books, sit in on meetings, go to online conferences about the ethics of artificial intelligence. Make going to school on ethical AI your educational priority for the next year. This will help you build up a vocabulary in AI that will feel natural rather than forced.
- Be forgiving. I see it in the audit space quite a lot: even when business teams receive AI training, an organizational intolerance of mistakes stops some people from even trying. But failure is a key part of how we learn, so as data leaders it’s up to us to mentor people and move them forward without intimidating them. When you’re first starting down a new learning path, there are no dumb questions.
- Surround yourself with the right people. The point isn’t that you, as a data leader, must be the expert; it’s that you gather knowledge and surround yourself with people who can translate AI concepts into plain English. That gives you the confidence to challenge, rather than simply accept, the word of the data scientist in the room. Find colleagues or peers who understand AI ethics and are happy to explain it to you.
- Think of the competitive advantage. Ethical AI can be presented as a defensive measure against bias or harm, but it may be more appealing to business users to consider the potential upside with customers. Customers ultimately are the reason we’re being transparent. They’re relying on us to treat them fairly and engage with them as equals. Considering the middling state of ethical AI today, I believe you’ll be seen as a market leader if you engage with these concepts now.
Closing the theory-reality gap
Ethical AI is impossible unless business leaders come on board; without them it remains a purely theoretical exercise. That’s why the bravery I mentioned may be the most important factor of all. Too many companies identify skill gaps in areas like AI but are reluctant to invest in training for fear that newly skilled employees will leave for better jobs as soon as they can. That’s not a foregone conclusion. And if you aren’t training them, you have few options other than recruiting – which everyone else is already doing, shrinking the pool of talent and driving up costs.
So accept the AI skills gap for what it is, but act on it. Get people trained; they’ll appreciate it more than you might think. The cost of inaction in our new AI world is only rising, and the benefits you can realize make training an excellent strategy for the coming year.