Harnessing AI Effectively & Ethically by Understanding ‘Why’ and ‘How’


Katie King, CEO of AI in Business

Wednesday, May 4, 2022

Artificial intelligence (AI) is changing the face of the business landscape. Here's how understanding its use cases can help you implement it effectively.


The wheel, the printing press, the internet, the smartphone...throughout history, several technological changes have come along that completely reshape our way of life.

Artificial intelligence (AI) is the latest in this series of world-shaking developments, bringing with it some serious shifts in the ways we consume products and content, interact with the world around us, conduct business and more. For several years now, AI has been working quietly in the background making our software more intuitive, our smartphones smarter, our shopping recommendations more tailored and so much more that we don’t even realize.

For a while, fears born from popular fiction about robots taking over the world put many off adopting this technology in their organizations. It’s human nature to fear or mistrust what we don’t yet understand, and despite versions of it existing since the 1950s, AI only truly came into its own in recent years. While some businesses were ahead of the curve and adopted early, it wasn’t until the pandemic deepened our reliance on technology that many hold-outs began to open their minds and pay more attention to the potential this technology can bring.

But now that the challenge is no longer to raise awareness about the benefits and impacts of this technology, the focus must shift to the ‘why’ and the ‘how’ to ensure it’s adopted both effectively and ethically.

The ‘Why’ of AI

Every AI journey should start by identifying the problem at hand. If you’re adopting AI just for the sake of it, or introducing new tools simply because that is what everyone else seems to be doing, your project is doomed before it can even begin. AI is at its most effective when there is a problem to solve or a clear issue to address. For example, perhaps you need to improve your customers’ perception of your brand, bring in more sales leads, streamline your processes to waste less time, or minimize the risks associated with some of the more dangerous work your company might do. All of these problems can be solved with AI, but you need to be certain that this technology is the right course of action for your organization and what you are looking to achieve.

This is the part that most businesses struggle with. Yes, they have problems to solve, and yes, they are aware that AI has all this potential they should be tapping into. The trouble often lies in connecting the two and knowing where to even begin. In my recently published second book, AI Strategy for Sales and Marketing, I outline what I have called the STRATEGISE Framework, which gives a step-by-step outline for exploring and adopting AI. The framework is meant to guide decision-makers through the process, taking into consideration things like resources, time, data, augmentation, and more.

For the adoption of AI to be effective, it needs to be strategic rather than tactical. Only by focusing on the outcomes (the ‘why’) can businesses truly succeed in achieving their goals. You cannot consider a project effective at achieving your goals if you don’t know what they are. Embarking on vanity projects simply won’t cut it. AI needs purpose, otherwise, how can it be measured?

The ‘How’ of AI

Next comes the question of ‘how,’ but not necessarily in the logistical sense. When going through the STRATEGISE process, the logistics will have been sorted along the way. The ‘how’ to consider here is how this technology may impact your audiences and stakeholders. When adopting AI into a business, it can be easy to get wrapped up in all of the amazing benefits that the technology will bring to the organization, typically in the form of some valuable KPIs. These might be things like increased sales, attention from investors, higher productivity and so on. While all of these are great for the company, they may come with consequences.

For example, when automating your hiring process, are you thinking about how potential biases might creep in and corrupt the process, or are you simply focused on how much time it will save your HR team and how much likelier you are to find great talent? When you introduce AI to help personalize your customers’ experiences, are you considering how you are collecting, storing and using the data required to deliver these experiences and how your customers may feel about it? It’s a bit of a double-edged sword, with every action having potential negative reactions, but overall the benefits of AI do often far outweigh the risks involved. Not only that, but these risks are solvable so long as they are considered in the first place.

‘Standards’ and ‘Ethics’ make up the final S and E of the STRATEGISE Framework, baking these considerations into the adoption process itself. It should be noted that there are several different codes of ethics related to AI that have been published by trade bodies, government organizations and other AI-focused groups, but very few laws and regulations governing the technology itself. While the existing frameworks serve as a guide for conduct, much of the governance of this technology is happening at the organizational level. Therefore, it falls on the company, its people, and its leadership to take things like privacy, ethics, bias and transparency into consideration.

It should also be noted that AI is not inherently unethical or biased. Rather, it is the way we humans use it and program it that can allow it to be so. As with anything, there is going to be risk. AI is not fundamentally bad, but you will have a few players who use it that way. It’s the same with the internet, social media, and so on. AI learns from the data it is fed. If we put garbage in, we get garbage out.

Let’s return to the hiring example. In order to train that system to automate the process for your HR team, you would likely use data from your existing staff, past hires and so on to teach the algorithm what a ‘good fit’ looks like for your specific organization. If left unchecked, the system will pluck out candidates who are very similar to those you have trained it to believe are successful in your organization. And while that might be true and you may end up with successful candidates, this could have serious implications for diversity and could entrench bias. If your current workforce is made up of people who share similar backgrounds, education, genders or even ethnicities, you will be training the algorithm to find more of the same and could be missing out on the opportunity to introduce fresh ideas and perspectives.
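The mechanism described above can be made concrete with a toy sketch. This is purely illustrative and not from the article: the candidate attributes, the ‘past hires’ data and the similarity-based scorer are all invented to show how a system trained only on a homogeneous workforce will rate near-clones of that workforce above equally qualified candidates from different backgrounds.

```python
# Toy 'good fit' scorer trained only on past hires.
# All data and attribute names are hypothetical.

# Every past hire shares the same background: the "garbage in".
past_hires = [
    {"university": "Oxbridge", "degree": "CS", "gender": "M"},
    {"university": "Oxbridge", "degree": "CS", "gender": "M"},
    {"university": "Oxbridge", "degree": "Maths", "gender": "M"},
]

def fit_score(candidate):
    """Average number of attributes a candidate shares with past hires."""
    matches = sum(
        sum(candidate[key] == hire[key] for key in hire)
        for hire in past_hires
    )
    return matches / len(past_hires)

# A near-clone of the existing workforce vs. an equally capable
# candidate from a different background: the "garbage out".
clone = {"university": "Oxbridge", "degree": "CS", "gender": "M"}
diverse = {"university": "State Uni", "degree": "CS", "gender": "F"}

print(fit_score(clone))    # high similarity to past hires
print(fit_score(diverse))  # low similarity, despite the same degree
```

Nothing in the scorer mentions gender or background explicitly; the bias emerges entirely from the training data, which is exactly why unchecked systems of this kind need human oversight.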

It’s a similar issue with data. Today’s customers are informed enough to know that the companies they do business with are collecting their data, but very few understand why this is needed or how it is used. This lack of transparency creates mistrust in the organization. Yes, AI is creating all of these fantastic outcomes for the customer and is able to construct really tailored, really personalized experiences over time. But if the customer doesn’t trust you enough to do business with you long term, what was it all for?

So how do you overcome this? By understanding the risks at hand and taking action to overcome them. Humans should never be written out of the equation when it comes to governing AI systems. For this technology to create its desired effects in a way that is not harmful, we need to ensure that human intelligence is providing oversight. Moreover, AI should not be treated as a ‘black box’ that no one understands or trusts. Being open and transparent with all of your stakeholders can help to dispel some of the mistrust that taints perceptions of this technology. Helping customers understand exactly how their data is used and how it benefits them helps them see the bigger picture. But at the same time, you need to ensure that you are not overstepping or misusing the data they are willing to provide.

Measurement is essential for determining whether or not you have truly been effective at reaching your goal, and just how much impact you have made. This can be challenging to do, but there are several different models for assessment you can follow. One of these is the ‘Scorecard for Success’ featured in my first book, Using Artificial Intelligence in Marketing. The Scorecard is exactly what it sounds like, giving criteria and benchmarks for businesses to reference when assessing their AI journey. 

Success will vary on a case-by-case basis, and only you can truly determine whether or not your efforts were effective. But by taking the time to be more aware, to assess the ‘why’ and the ‘how’ of your journey, and to reflect on what worked and what didn’t, you are putting yourself in a much better position to achieve the desired outcomes with as few drawbacks as possible.

Using Artificial Intelligence in Marketing and AI Strategy for Sales and Marketing are both available now from Kogan Page.

Katie King

Katie is CEO of AI in Business, a firm that specializes in AI consultancy and training. With over 30 years' experience, she has advised many of the world's leading brands and business leaders, including Richard Branson/Virgin, O2, Orange and Accenture. Based in Kent, UK, she is a member of the UK Government All-Party Parliamentary Group (APPG) task force for the enterprise adoption of AI and an Editorial Board Member of the journal AI and Ethics. A regular international keynote speaker, she has also delivered TEDx talks and is a frequent commentator on BBC TV and radio. 


Join the conversation...