AI Is the Latest Tech Revolution with a Crucial Flaw: The People Using It

May technology’s blessings be upon you. A few years ago, we were blessed with blockchain technology, which allowed millions of people around the world to invest in the miracle of cryptocurrency.

It also had the side effect of allowing North Korea to fund its nuclear program with stolen cryptocurrency.

Blockchain is not the only blessing. Technology has given us the joys of social media platforms. They allow us to stay in touch with friends and strangers, but they can also be used to promote unrealistic body image standards, undermine confidence, and trigger suicidal thoughts.

For the price of every last morsel of our privacy, social media has given us a better marketing experience.

Now the latest tech revolution is AI (artificial intelligence). Its advocates say it can be used for everything from content creation, with the likes of ChatGPT and a host of other predictive language models, to cybersecurity and even road safety and cancer diagnoses.

Really.

All previous interactions with technology have shown a basic flaw: us. The problem is that we actually believe in technology. We assume that AI systems work, and work well in all settings. We also assume that AI will work well with us, but none of these things are necessarily true.

Not all AI is created equal

Before we reopen Pandora’s box, we should take a look at how we can determine the quality of AI.

AI model trustworthiness is based on three factors:

1. The provenance and quality of the training data.
2. The quality of the model itself: its predictions are a matter of probability.
3. The circumstances in which the model can be used, which are very limited.

The first issue is likely to cause the most problems. The training data set for an AI model must be immense. This has led to training data being taken from copyrighted material, private images, or simply scraped from the open Internet.

Copyrighted material and private images involve legal issues; there are already lawsuits over both copyright and privacy. Training data scraped indiscriminately from the Internet all but assures that some of the AI’s answers will simply be wrong, because much of what is posted there is wrong.

The second issue – model quality: whatever the choice of algorithm, predictions are a matter of probability – again has to do with how people view AI. We see it as providing accurate, objective answers. It doesn’t. It provides probabilities.

For example, in 2016 people were shocked that Donald Trump won the U.S. presidential election. He was trailing in almost every poll and was given only a 35% probability of winning. But 35% was not 0%.

Trump won the Electoral College, though not the popular vote. The forecasts were not wrong; an unlikely outcome simply occurred. AI works the same way: it provides quantitative answers, not qualitative ones. It is incapable of critical analysis.

For example, an image recognition program gave a 99% probability that a series of horizontal black and yellow lines was a school bus. Its training had taught it to associate that stripe pattern with a bus; it did not require wheels, doors, or a windshield. An AI solution is just that, a solution. It is not necessarily a correct, accurate answer.
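To make that distinction concrete, here is a minimal sketch of how a classifier’s “answer” is really just the largest value in a probability distribution. The labels and scores below are invented for illustration, not taken from any real system:

```python
import numpy as np

def softmax(logits):
    """Convert raw model scores into a probability distribution."""
    exps = np.exp(logits - np.max(logits))
    return exps / exps.sum()

# Hypothetical raw scores from an image classifier for three labels.
labels = ["school bus", "zebra crossing", "caution sign"]
logits = np.array([9.2, 3.1, 2.7])

probs = softmax(logits)
best = int(np.argmax(probs))

# The model reports ~99% confidence, but confidence is not correctness:
# it only means the input resembles the "school bus" patterns it was
# trained on (e.g., black-and-yellow stripes), not that a bus is present.
print(f"prediction: {labels[best]} ({probs[best]:.1%} confidence)")
```

The model has no way to say “I don’t know what this is.” It can only report which of its known labels the input most resembles, and how strongly.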

Victim of circumstance

The third issue – a model can only be used in a very limited set of circumstances – is situational. AI is designed to work on very specific problems in very specific environments.

It can be very good at identifying images produced by one specific imaging machine at one hospital but fail with another machine at a different hospital. A model trained to detect tumors in the prostate gland cannot be used to detect tumors in a lung.

The real question for the implementation of AI is whether it will work in your organization. Even if it does, it may not be popular with employees or customers.

For example, a recent study of an AI-based clinical decision support program for the treatment of diabetes found that only 14% of clinicians would recommend it. They gave it a score of 11 out of 100.

Implementation of AI is not like implementing other software. It is not a set-it-and-forget-it process. Trained AI systems need corrective maintenance due to data, model, or concept drift.
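As a sketch of what that maintenance can involve, the snippet below uses a two-sample Kolmogorov-Smirnov test to flag when a feature’s production data has drifted away from its training distribution. The data and the significance threshold are invented for illustration; a flagged feature is a prompt to investigate and possibly retrain, not a fix in itself:

```python
import numpy as np
from scipy.stats import ks_2samp

def check_feature_drift(train_values, live_values, alpha=0.01):
    """Flag a feature whose live distribution has drifted from training.

    Uses a two-sample Kolmogorov-Smirnov test: a small p-value means the
    two samples are unlikely to come from the same distribution.
    """
    res = ks_2samp(train_values, live_values)
    return res.pvalue < alpha, res.statistic, res.pvalue

rng = np.random.default_rng(0)
train = rng.normal(loc=0.0, scale=1.0, size=5_000)  # data seen at training time
live = rng.normal(loc=0.4, scale=1.0, size=5_000)   # production data has shifted

drifted, stat, p = check_feature_drift(train, live)
print(f"drift detected: {drifted} (KS statistic={stat:.3f}, p={p:.2e})")
```

Monitoring of this kind has to run for as long as the model is in production, which is exactly why AI is not a set-it-and-forget-it purchase.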

AI is not something that organizations can simply buy, put in place, and expect to see returns from. It may not even be necessary.

What is necessary is that before an organization considers AI, it should have a plan. A good place to start is with ISO 27001, which is easily adaptable to AI implementation.

Another good resource is the Artificial Intelligence Risk Management Framework (NIST AI 100-1). The NIST Govern, Map, Measure, and Manage functions fit neatly into the ISO 27001 framework.

Both require planning, risk assessment, and documentation, and both require that risk management be continuous, timely, and performed throughout the AI system lifecycle.

However you use AI in your organization, you must acknowledge that it provides incremental improvements but is not transformational.

The technology can provide enormous value when used correctly, but that will take time, the right personnel, training, and a careful determination of its proper use.

Contrary to the hype, AI adds an extra dimension to the human decision-making process; it does not replace it. The major threats to AI are not legal challenges or criminal hackers, but flawed implementation itself.