Akil Benjamin, Co-Founder and Head of Research at COMUZI, was a keynote speaker at AMBA & BGA’s Business School Professionals Conference 2019 in Vienna, Austria. Here, he delves into the ethical implications of AI and offers tips for leaders working with rapidly emerging technology
Could you introduce yourself and the main topics covered in your presentation at AMBA & BGA’s Business School Professionals Conference?
I’m Head of Research at COMUZI, an innovation studio. We help people think about the future and build next-generation products, services and experiences, with a focus always on the people they are trying to serve.
The main topic of my presentation was consequence scanning; it was about asking AI the right questions. How can we demystify this technology? How can we make sure it gives us positive experiences? How do we amplify the positives while monitoring and mitigating the negatives?
These days, AI can decide whether or not you get a mortgage; whether or not a judge should give you jail time. It can also help you out when it comes to your healthcare needs, and predict the sentence you are typing on your phone. It is becoming ubiquitous.
Do the positives of AI outweigh the negatives?
Yes and no. It depends which side of the coin you look at. AI is only as smart as the person or team that programmed, developed and designed it. So we have to start by thinking about how we design these things and then identify the positives and negatives from that.
How can humans best work with this technology?
Technology has allowed us to do things much more rapidly. We can scan 300 million sources or candidates in a couple of minutes. It’s mind-boggling. But obviously, doing things this fast isn’t part of the human skill set. I believe that AI will show us the value of humanity: the things AI cannot do, such as making considered decisions, thinking about outcomes and taking time over things, are innately human skills.
What questions should we be asking AI?
How does AI impact the people it has been programmed to serve? Does it achieve your number-one goal? In achieving this goal, does it serve to marginalise, hurt, or have unintended consequences for a secondary group of people? If so, who are these people?
Play devil’s advocate and start thinking about what the impact could be. Decisions are not irreversible but, with AI, they are usually longstanding and hard to unpick. Let’s do our best to limit the number of poor decisions that are made in the first place.
Where do you think AI does not or cannot add value?
There will always be a need for human connections. I believe that AI is a tool, not a person; it facilitates doing something. As long as we stay connected and remember that AI is a tool rather than a proxy for human beings, I think we will be fine.
What are the ethical implications we need to consider?
Is AI reinforcing negative stereotypes or decisions? Is it reinforcing institutional divides or inequalities? Is it perpetuating injustices that people have been fighting against for the past 20 to 100 years?
If so, AI isn’t being used for the right thing. But, if we’re using AI to deconstruct and reimagine a carefully thought-out future, we are taking progressive steps.
How can AI inform an authentic marketing strategy?
Let’s demystify AI. It’s not magic in a box; it’s a programme. Let’s start telling people the truth and educating them through our marketing messages so they can participate in the conversation and come along with us. I think the better people understand the technology, the more they will be able to engage with it.
In terms of how AI can influence marketing strategy, I believe it’s going to be another tool that we can leverage to gain a deeper insight into the messages we are putting out into the world. We might even get to the stage where AI is crafting that message. I should add that, to be ethical, we must be explicit about when we use AI technology in this way. People should know when they are talking to a computer and when they’re talking to a human.
What tips would you give leaders hoping to work with AI, to help them get started?
Don’t be too slow, but don’t rush into anything either: AI isn’t a band-aid and it won’t fix all your problems. Take the time to work out where it will be most beneficial before implementing it in your business. It’s a tool, and a narrow one; it can’t do everything, but it can do specific things well. So take your time to define the specific things you want to do well and what these look like.
Second, talk to people in the organisation, especially those who are directly involved in the problem you are trying to address.
Third, ask yourself about the intended consequences for the people you are looking to serve, and keep reminding yourself that this is a tool and should not replace human relationships. Gather together people who can provide honest, unbiased perspectives. Having such people in the mix is important, especially when implementing powerful technology. ‘Yes men’ can be very detrimental to any project.
Is education evolving fast enough to deal with the effects of AI on the future of work?
No. The style of education, in my opinion, must change. Currently, education is set up as a long-term investment: you set aside three years for a lifetime’s worth of learning. But with the world’s current rate of change, education needs to become more transactional. If I need to learn a specific core skill right now, I will need a 12-week course. In six months’ time, the technology may have changed. If I do enough of those courses, I might gain a qualification; or perhaps I’ll have a series of qualifications in specific things. I think that’s how education needs to direct itself.
I would have had multiple degrees by now had I had the time to take formal qualifications in everything. I couldn’t afford to invest that time; I just had to learn what I had to learn and implement it. I think education has to evolve, recognising skills learned on the job as well as bringing in people who have acquired their skills in this way.