Is AI making dangerous decisions without us?

Artificial intelligence (AI) is set to take control of many aspects of our lives, but not enough is being done to establish accountability for its consequences.

The increasing application of AI across all aspects of business has given many firms a competitive advantage. Unfortunately, its meteoric rise also paves the way for ethical dilemmas and high-risk situations. New technology means new risks, and governments, firms, coders and philosophers have their work cut out for them.

If we launch self-driving cars and autonomous drones, we are involving AI in life-or-death scenarios and in the day-to-day risks people face. Healthcare is no different: we are giving AI the power of decision making along with the power of analysis, and at some point it will inevitably be involved in a person’s death. Who would be responsible?

Doctors take the Hippocratic oath knowing that they could be involved in a patient’s death, whether through a mistaken diagnosis, exhaustion, or simply a missed symptom. This naturally prompts research into how many of these mistakes could be avoided.

The limits of data and the lack of governance

Thankfully, AI is taking up this challenge. However, it is important to remember that current attempts to automate and reproduce intelligence are based on the data used to train algorithms. The computer science adage ‘garbage in, garbage out’ (the idea that flawed input data can only produce flawed outputs) is particularly relevant in an AI-driven world, where biased and incomplete input data could lead to prejudiced results and dire consequences.
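
As a hypothetical illustration of ‘garbage in, garbage out’, the short Python sketch below (assuming scikit-learn and NumPy are available; the data, groups and numbers are entirely invented) trains a simple model on historical decisions that were skewed against one group. The model dutifully learns to repeat that skew.

    # Minimal sketch with invented data: a model trained on biased
    # historical decisions simply learns to reproduce the bias.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 2000
    group = rng.integers(0, 2, n)        # 0 = group A, 1 = group B
    merit = rng.normal(0.0, 1.0, n)      # an underlying "merit" score

    # Biased historical labels: group B was approved less often,
    # even at the same merit score.
    approved = (merit - 0.8 * group + rng.normal(0, 0.3, n) > 0).astype(int)

    X = np.column_stack([group, merit])
    model = LogisticRegression().fit(X, approved)

    # Same merit, different group: noticeably different predicted approval.
    print(model.predict_proba([[0, 0.2], [1, 0.2]])[:, 1])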

Another issue with data is that it only covers a limited range of situations, and inevitably, most AI systems will be confronted with situations they have not encountered before. For instance, if you train a car to drive itself using past data, can you comfortably say it will be prepared for all eventualities? Probably not, given how unique each accident can be. Hence, the issue lies not simply in the algorithm, but in the choices we make about which datasets to use, how the algorithm is designed, and what role that AI is intended to play in decision making.

Data is not the only issue. Our research has found that governments have no records of which companies and institutions use AI. This is not surprising, as even the US – one of the world’s largest economies and one with a focus on developing and deploying AI – does not have any policy on the subject. Governance, surveillance and control are all left to developers. This means that, often, no one really knows how the algorithms work aside from the developers themselves.

When 99% isn’t good enough

In many cases, a machine that can produce the desired results with 99% accuracy is a triumph. Just imagine how convenient it would be if your smartphone could complete a message exactly as you intended before you had even finished typing it.

However, even a 99% level of precision is not good enough in other circumstances, such as health diagnostics, image recognition for food safety, or text analysis for legal documents and financial contracts. A 99% accuracy rate still means one error in every hundred decisions, which adds up quickly once a system is making millions of them. Company executives and policymakers will need a more nuanced account of what is involved, and the difficulty is that understanding those risks is not straightforward.

Let’s take a simple example. If AI is used in a hospital to assess a patient’s chances of having a heart attack, it does so by detecting variations in eating habits, exercise, and other factors identified as important for making an effective prediction. This should place a clear burden of responsibility on the designer of the technology and on the hospital.

However, the usefulness of that prediction depends on the patient (or their doctor) understanding how the decision was reached, so it must be explained to them. If it is not, and a patient who was given a low chance of having a heart attack then has one without changing their behaviour, they will be left confused, wondering what triggered it. Essentially, we are using technical solutions to deal with problems that are not always technical but personal, and if people do not understand how decisions about their health are being made, we are looking at a recipe for disaster.
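
As a purely hypothetical sketch of what such an explanation could look like, an interpretable risk model can at least report which factors pushed an individual patient’s score up. The risk factors and weights below are invented for illustration, not taken from any real diagnostic system.

    # Minimal sketch with invented risk factors and weights: an
    # interpretable model can say which inputs drove a patient's score.
    factors = {"poor diet": 0.8, "low exercise": 0.6, "age over 60": 1.1}
    patient = {"poor diet": 1, "low exercise": 0, "age over 60": 1}

    contributions = {name: weight * patient[name]
                     for name, weight in factors.items()}
    risk_score = sum(contributions.values())

    print(f"risk score: {risk_score:.1f}")
    for name, value in sorted(contributions.items(), key=lambda kv: -kv[1]):
        if value:
            print(f"  {name} contributed {value:.1f}")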

Decision making, freedom of choice and AI

To make matters worse, AI often operates like a ‘black box’. Today’s machine learning techniques can lead a computer to keep improving its ability to guess the right answer or identify the right result, but often we have no idea how the machine actually achieves this improvement, or ‘learns’. If that is the case, how can we change the learning process when necessary? Put differently, sometimes not even the developers know how the algorithms work.
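
To make the ‘black box’ point concrete, here is a minimal, hypothetical sketch (assuming scikit-learn is available): a tiny neural network can learn a toy task, yet its learned parameters are just arrays of numbers that offer no human-readable reason for any individual answer.

    # Minimal sketch: a small neural network trained on the XOR toy task.
    # Its predictions are usually correct, but its learned weights tell a
    # human reader nothing about why any single answer was given.
    from sklearn.neural_network import MLPClassifier

    X = [[0, 0], [0, 1], [1, 0], [1, 1]]
    y = [0, 1, 1, 0]

    model = MLPClassifier(hidden_layer_sizes=(8,), solver="lbfgs",
                          max_iter=2000, random_state=0).fit(X, y)

    print(model.predict(X))   # the answers
    print(model.coefs_[0])    # ...an opaque matrix of learned weights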

Consumers need to be made more aware of which decisions concerning their lives are being made by AI, and in order to govern the use of AI effectively, governments need to give citizens the choice of opting out of AI-driven decision making altogether if they want to. In some ways, we might be seeing the start of such measures with the introduction of GDPR in Europe last year. However, it is evident that we still have a long way to go.

If we are taking the responsibility for decision making away from people, do we really know what we are giving it to? And what will be the consequences of the inevitable mistakes? Although we can train AI to make better decisions, as it begins to shape our entire society we all need to become ethically literate and aware of the decisions that machines are making for us.

Terence Tse is an Associate Professor of Finance at ESCP Europe Business School and a Co-Founder of Nexus FrontierTech, which provides AI solutions to clients across industries, sectors, and regions globally. His latest book, The AI Republic: Building the Nexus Between Humans and Intelligent Automation is due for release in June 2019.
