In the world of computing, you can’t move these days without seeing the acronym “AI” popping up. To some it promises to revolutionise the world and bring untold benefits; to others it threatens to condemn us to Terminator-style robotic control.
For the uninitiated, the acronym stands for “Artificial Intelligence”. It’s certainly artificial, driven as it is by computer software and hardware, plus the input of programmers. AI systems search massive amounts of data to respond in the most appropriate way, either to answer a direct question or to suggest responses to certain conditions. A simple question like “What is the capital of France?” is easy to respond to, as there are a multitude of reliable data sources that can confirm the answer is “Paris”. But, as we all know, Google can answer that. No intelligence there. So how about a question like “What were the impacts of the Industrial Revolution on farmers?” Once again, a search through the data sets at the AI’s disposal can turn up essays and other papers written on the subject and, by ascribing some form of reliability score to each result, can provide you with a reasonable answer to the question. So far, still no intelligence needed.
And then we come to more serious questions: “How do you stop a baby crying?”, for instance. Are you really going to trust a machine to tell you the answer to that? Out there beyond your front door are websites and data sources aplenty containing wacky theories that have no place in a search for the answer to that question. In Victorian times, babies were given alcohol or laudanum, and I think we all know that those solutions are not acceptable today. Even so, the answer you get to the question is still the result of algorithms searching data and deciding which of that data is most reliable. Still no intelligence.
The AI community will be quick to tell you that AI systems “learn” what is best from the reactions to their answers. Therein lies the potential for sabotage. If enough people react with, say, the view that racism is not wrong, the AI “learns” that. If enough people say that Putin is a genius and the saviour of all mankind, the AI “learns” that too. Putting your faith in AI is putting your faith in the masses, in a world where opinion, however distasteful or absurd, can influence the results the AI returns. The AI can reply with facts only if its “facts” are from a reliable source. And its “opinions” will be the sum of all the opinions it has found within its data. You only have to look at Brexit, or Covid vaccinations, or whether Trump won the US election, to see that there are large numbers of people whose opinions are just too ludicrous to include in an AI’s decision-making.
Using AI to buck the system is already rife in academia (indeed, I personally know someone who used an AI programme to cheat on a school assignment), and centres of learning are forced to catch up with the technology in order to detect it (and yes, the person I refer to did get found out). I recently read comments from university students who said they wouldn’t use ChatGPT (apparently a tool much used by students) in case they got caught. Note here that they weren’t saying it was wrong to use it, just that they didn’t want to be caught cheating, which doesn’t fill me with confidence about their moral choices.
AI is touted as being able to replace millions of people in certain jobs as it matures, and as more of us interact online, it may soon become impossible to be sure whether the entity you are communicating with at the bank, the library or the doctor’s surgery is a person or an AI construct. Why bother with learning at all if the AI can teach you how to do something (“DocBot: how do you do a heart bypass operation?”)?
Once again, science has created something that has to be treated with caution. Agencies and governments are scrambling to develop rules and safeguards around when, where and how AI can be used, while industry is falling over itself to adopt it in order to reduce resource costs, gain market insights that increase profits and automate decision-making. The most important safeguard is to ensure that people, real people, are made aware when any decision that affects them is being made by an AI programme.
As always, this is just my opinion!