Articles about artificial intelligence abound in the media. Depending on the commentator’s point of view, AI is either going to save the world or destroy it and take over all our jobs.
Wherever you sit on that spectrum, artificial intelligence is here and – whether you know it or not – has already impacted your life in some way.
But just how intelligent is artificial intelligence?
Let’s start with an understanding of what it is. The capacity of the human mind to remember and recall information, and to perform repetitive tasks consistently over long periods, is limited. Machines, on the other hand, have no such limitations. So, in theory, a machine with sufficient memory and the speed to assimilate large amounts of data quickly can learn everything there is to know about a topic, analyse the data and recall it almost instantly.
So at one level AI is just vast stores of analysed data: very useful if put to work answering lots of questions about a particular topic and coming up with logical answers faster and more accurately than a human being.
Except that in certain sectors its results are causing quite a stir, to the extent that AI is seen as part of the problem rather than the solution.
Take consumer lending, insurance and recruitment as examples. Machines have been programmed with algorithms to research past data on delinquency, claims and employee longevity respectively to ‘improve’ the decision making in these sectors.
The results, however, have caused a furore because people of colour and women have repeatedly faced more expensive loans and insurance policies and fewer job opportunities than white males.
To be clear, these machines haven’t been programmed to discriminate; the results are based on their analysis of the data, but such has been the outcry that in most instances the assessments have been handed back to humans.
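To see how a machine can discriminate without being programmed to, consider a deliberately simplified sketch. The data, postcodes and repayment outcomes below are all hypothetical, and real lending models are far more complex; the point is only that a model never told an applicant’s race or sex can still learn bias through a proxy feature (here, a postcode) that happens to correlate with historical outcomes.

```python
# Toy illustration (hypothetical data, not a real lending model) of how
# bias can enter a model through a proxy feature rather than explicit
# programming. The "model" is simply the historical repayment rate per
# postcode -- yet if postcode correlates with a protected characteristic,
# its decisions will reproduce the historical pattern.

from collections import defaultdict

# Hypothetical past lending records: (postcode, loan_repaid)
history = [
    ("A1", True), ("A1", True), ("A1", True), ("A1", False),
    ("B2", True), ("B2", False), ("B2", False), ("B2", False),
]

# "Train" by grouping outcomes per postcode.
outcomes_by_postcode = defaultdict(list)
for postcode, repaid in history:
    outcomes_by_postcode[postcode].append(repaid)

def approve(postcode, threshold=0.5):
    """Approve if the learned repayment rate for this postcode meets the threshold."""
    outcomes = outcomes_by_postcode[postcode]
    return sum(outcomes) / len(outcomes) >= threshold

print(approve("A1"))  # True  -- 75% historical repayment rate
print(approve("B2"))  # False -- 25% historical repayment rate
```

Nothing in the code mentions race or sex, yet applicants from postcode B2 are systematically refused because of what previous applicants from that postcode did. This is essentially what the outcry is about: the bias lives in the historical data, not in the instructions.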
Which raises a very interesting question.
Are the AI machines doing a better assessment than before (i.e. they are more intelligent) or are they less culturally sensitive and therefore performing less well?
And if their assessments are correct – a person of colour is a greater risk to a lender – shouldn’t they, logically, suffer a premium? And if they don’t, are they being subsidised by white males who are in turn, therefore, the subject of discrimination?
Some people insist that AI is biased but admit they don’t know why. Trevor Phillips, the founding chairman of the Equality and Human Rights Commission, said recently: “it is now commonplace in artificial intelligence and machine learning that algorithms that govern mortgage lending, insurance quotes and job applications are biased against women and even more so against people of colour … but no one quite knows how and why the machines learn to discriminate, much less how to stop them.”
What do you think?
Noel Guilford