
Relax, AI is Not as Smart as You Think

Artificial Intelligence (AI) is a term used broadly to describe almost any innovation that is trying to solve yet another problem. A 2019 UK government survey of over 2,400 people reported that a good majority know what AI is, but only 12% know a lot about it. This is no surprise, and the media does little to clarify the confusion.

An early definition of AI was proposed by Alan Turing 70 years ago. He described a method for determining whether a machine exhibits intelligent behaviour, known as the 'Imitation Game': an interrogator judges whether the responses to their questions come from a human or a machine. Unsurprisingly, it is a great definition. However, as technology advances, problems once thought solvable only by humans become 'easily' solvable by machines and are no longer classified as AI problems. This paradox is known as the 'AI Effect'.

Before breaking down some AI that you might have heard of, I want to be clear about something that is very often misunderstood. AI can be grouped into two categories: Logic Programming (rule-based intelligence) and Machine Learning (data-driven intelligence). In the former, the intelligence is developed by researchers: they carefully study the problem domain and write a set of rules and instructions that produce the best solution they can describe. In the latter, the intelligence is learned by the AI: the researcher typically takes an existing learning technique, a model with tweakable settings, and historical data from the problem domain, and the settings that produce the best possible solutions are found from the data.
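If you are curious what that difference looks like in practice, here is a tiny sketch in Python. The task (deciding whether to water a plant), the data and the threshold are all invented for illustration; the point is only the contrast between a rule written by a person and a setting tuned from historical data.

```python
# Rule-based (Logic Programming) vs data-driven (Machine Learning), in miniature.
# The task, the data and the numbers below are made up purely for illustration.

# --- Rule-based: a researcher encodes the rule directly ---
def should_water_rule(soil_moisture):
    """Water the plant when the soil is drier than an expert-chosen threshold."""
    return soil_moisture < 0.30

# --- Data-driven: the 'rule' (a single tweakable setting) is learned from history ---
history = [(0.10, True), (0.20, True), (0.35, False), (0.50, False), (0.25, True)]

def learn_threshold(data):
    """Pick the threshold that makes the fewest mistakes on the historical data."""
    candidates = [moisture for moisture, _ in data]
    def mistakes(t):
        return sum((moisture < t) != watered for moisture, watered in data)
    return min(candidates, key=mistakes)

threshold = learn_threshold(history)

def should_water_learned(soil_moisture):
    return soil_moisture < threshold

print(should_water_rule(0.22), should_water_learned(0.22))
```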

AI in both of these categories has garnered a great deal of positive media attention, and rightfully so, but it has also contributed to the widespread misunderstanding of AI in society. Two board game matches from the past 25 years appear to exhibit great strides in AI, but are mostly a testament to computing power rather than to intelligent behaviour.

A Logic Programming AI made history in 1997 when IBM’s supercomputer Deep Blue beat the then reigning world champion in a match of chess under standard tournament conditions. Deep Blue follows a set of logical rules to search for the best sequence of moves needed to win, with its prowess coming from its computational speed. At the time, it could evaluate about 200 million chess positions per second (looking 7-12 turns ahead), and its strategy could be completely analysed and interpreted by the researchers, i.e. the reason why it made a particular move was fully understood. Although Deep Blue had no practical application other than playing chess, it proved that AI could outperform human intelligence at a game often associated with high intelligence.
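Deep Blue's actual search (a heavily optimised look-ahead with hand-crafted chess knowledge) is far more sophisticated than anything that fits here, but the core idea of searching several turns ahead can be sketched with a textbook minimax routine. The toy game below, where players alternately take 1 or 2 sticks and whoever takes the last stick wins, is just a stand-in for chess, and the look-ahead depth is arbitrary.

```python
# A textbook minimax search: look a fixed number of turns ahead and pick the move
# that leads to the best guaranteed outcome. Deep Blue's strength came from doing
# this, with many refinements, at roughly 200 million positions per second.
# Toy game: players alternately take 1 or 2 sticks; whoever takes the last stick wins.

def minimax(sticks, depth, maximising):
    if sticks == 0:
        # The other player just took the last stick, so the side to move has lost.
        return -1 if maximising else 1
    if depth == 0:
        return 0  # Out of look-ahead: call the position even.
    moves = [m for m in (1, 2) if m <= sticks]
    scores = [minimax(sticks - m, depth - 1, not maximising) for m in moves]
    return max(scores) if maximising else min(scores)

def best_move(sticks, depth=6):
    moves = [m for m in (1, 2) if m <= sticks]
    return max(moves, key=lambda m: minimax(sticks - m, depth - 1, False))

print(best_move(7))  # Taking 1 stick leaves the opponent on a losing count of 6.
```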

On a similar problem, a Machine Learning AI made history in 2016 when Google’s AlphaGo beat the 18-time world champion in a match of Go. Go is far more complicated than chess and is arguably the most complex game ever devised. The number of possible moves from each position in chess is about 20, whereas in Go it is about 200. A Logic Programming AI would need to evaluate about 60 trillion moves to think just 6 turns ahead, so searching for the best sequence of moves to win the game is infeasible. Google’s researchers instead used Deep Reinforcement Learning, a learning technique developed in the 1980s that stemmed from research into animal psychology. The AI was initially presented with 100,000 games of Go and, given rewards when it won, learned how to play the game. It then played against itself millions of times, iteratively improving until it was better than a world champion Go player. Unfortunately, this intelligent behaviour comes at a cost: it took 3 days to train AlphaGo, consuming what would usually be $2 million of Google's computational power.
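AlphaGo itself combines deep neural networks with a sophisticated tree search, which is well beyond a short snippet, but the underlying idea of reinforcement learning, improving purely from a win/lose reward through self-play, can be sketched on the same toy stick game as above. Every number here (learning rate, exploration rate, number of games) is illustrative, not taken from AlphaGo.

```python
import random

# Reinforcement learning by self-play, in miniature: the only feedback is a
# win/lose reward at the end of each game, and the table Q[(sticks, move)]
# is nudged towards the moves that led to wins.
Q = {}           # estimated value of taking `move` sticks when `sticks` remain
ALPHA = 0.1      # learning rate
EPSILON = 0.2    # exploration rate: sometimes try a random move

def choose(sticks):
    moves = [m for m in (1, 2) if m <= sticks]
    if random.random() < EPSILON:
        return random.choice(moves)
    return max(moves, key=lambda m: Q.get((sticks, m), 0.0))

for _ in range(50_000):                      # the AI plays against itself
    moves_made = {0: [], 1: []}
    sticks, player = random.randint(1, 12), 0
    while sticks > 0:
        move = choose(sticks)
        moves_made[player].append((sticks, move))
        sticks -= move
        winner = player if sticks == 0 else None
        player = 1 - player
    for p in (0, 1):                         # reward the winner, punish the loser
        reward = 1.0 if p == winner else -1.0
        for state_action in moves_made[p]:
            old = Q.get(state_action, 0.0)
            Q[state_action] = old + ALPHA * (reward - old)

# After training, the greedy policy should recover the known winning strategy:
# leave the opponent a multiple of 3 sticks whenever possible.
policy = {s: max([m for m in (1, 2) if m <= s], key=lambda m: Q.get((s, m), 0.0))
          for s in range(1, 8)}
print(policy)
```

Notice that nothing in this loop explains *why* a move is good; the program only ends up with a table of numbers that happen to favour winning moves, which foreshadows the interpretability problem discussed below.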

In contrast to Deep Blue, the researchers developing AlphaGo didn’t need to encode any strategy for playing the game: the AI learned how to play, and how to win, largely by itself. This was a remarkable feat, but this kind of AI has some shortcomings. What exactly has AlphaGo learned, and was it worth the cost?

AlphaGo belongs to a subset of Machine Learning known as Deep Learning (the naming of Deep Blue is a coincidence; it is not Deep Learning), which is notorious for requiring a substantial amount of computational power. Deep Learning AIs spend days churning through data to learn how to solve a problem. A study of the energy consumption of hardware built for Deep Learning found that the learning process can emit as much carbon as five midsize sedans over their lifetimes. In addition, the behaviour learned is uninterpretable, even by the developers themselves, i.e. we will never know why AlphaGo makes a particular move. Maybe this isn’t such a big problem in a game of Go, where the results are phenomenal.

The use of AI has, amazingly, made its way into governments and judicial systems, but it has also been at the centre of criticism from the media and public alike. In 2016, the Wisconsin Supreme Court ruled that judges could use COMPAS, a proprietary risk-assessment tool, as an aid in assessing the risk of recidivism. Little is understood about COMPAS because its inner workings are a trade secret. This prompted two independent researchers to compare it to one of the most basic Machine Learning AIs (a simple linear classifier), and they found no significant difference: COMPAS does nothing extraordinarily ‘intelligent’. To add insult to injury, an investigation into its risk assessments revealed a stark racial bias. This is a big concern for Machine Learning, which is driven by historical data: if the data itself is inherently biased, then the AI learning from it will be biased as well.
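To give a sense of how simple such a 'basic' model is, here is a sketch of a two-feature linear risk score of the kind compared against COMPAS. The choice of features echoes that comparison, but the weights and example inputs are entirely invented; a real model would fit them to historical data, inheriting whatever bias that data carries.

```python
import math

# A toy two-feature linear risk score. Weights and inputs are invented for
# illustration only; fitting them to biased historical data would bake that
# bias straight into the scores.
WEIGHTS = {"age": -0.05, "prior_convictions": 0.35}
BIAS = -0.5

def risk_score(age, prior_convictions):
    """Return a probability-like score between 0 and 1 (higher = 'higher risk')."""
    z = BIAS + WEIGHTS["age"] * age + WEIGHTS["prior_convictions"] * prior_convictions
    return 1.0 / (1.0 + math.exp(-z))   # logistic squashing into [0, 1]

print(round(risk_score(age=22, prior_convictions=3), 2))   # ~0.37
print(round(risk_score(age=45, prior_convictions=0), 2))   # ~0.06
```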

The cherry on top of how AI is misunderstood was seen in Australia: the RoboDebt scheme. In 2016, the Australian government decided to enlist the help of AI to crack down on abuse of its welfare system and claw back A$1.5 billion over three years. It implemented an automated debt recovery system (a seemingly Logic Programming AI) that was meant to accurately calculate how much a person owed, or was owed by, the government. Ironically, the scheme has since cost the Australian government A$1.2 billion in refunds and payouts.

There were many things wrong with the implementation of the RoboDebt scheme, but let’s look at it from a purely technical standpoint. To this day, the AI itself has attracted little scrutiny: put simply, it did exactly what it was meant to do and nothing more. The human checks and balances that were in place before the AI, however, were completely removed and never replaced, so fundamental errors in the system went uncaught. In fact, the biggest technical criticism was of the way the Australian government averaged income, a practice that was in place well before RoboDebt and that its introduction only perpetuated at a rapid rate.
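The averaging problem is easiest to see with a worked example. The sketch below uses invented figures and a much-simplified version of the real calculation, but it shows how smearing an annual income evenly across fortnights can manufacture a 'debt' for someone whose income was lumpy.

```python
# RoboDebt-style income averaging, heavily simplified with invented numbers.
# Someone works only half the year: 13 fortnights earning $2,000, then 13 earning $0,
# during which they legitimately receive a $500-per-fortnight welfare payment.
actual_fortnightly_income = [2000] * 13 + [0] * 13
benefit_per_zero_income_fortnight = 500
income_cutoff = 1000     # assume no benefit is payable above this fortnightly income

annual_income = sum(actual_fortnightly_income)   # $26,000, as reported to the tax office
averaged = annual_income / 26                    # $1,000 per fortnight, every fortnight

# The averaged figure sits at the cutoff in every fortnight, so the automated check
# concludes no benefit was ever payable and raises a 'debt' for all of it.
alleged_debt = sum(
    benefit_per_zero_income_fortnight
    for income in actual_fortnightly_income
    if income == 0 and averaged >= income_cutoff
)
print(f"Averaged fortnightly income: ${averaged:.0f}")
print(f"Debt alleged by averaging:   ${alleged_debt}")
print("Debt actually owed:          $0 (income really was $0 when benefits were paid)")
```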

Most of the AI stories that make the headlines greatly tip the scales of perception and propagate misunderstanding in society. No single story should shape our view of AI as a whole; there are many different types of AI, and this article has only scratched the surface. Most AI research never makes the headlines. Yet there are many great applications and use cases of AI that will shape our future, and we will only hear about them once they arrive.


Kheeran Naidu

An avid learner who is currently pursuing a PhD in Computer Science at the University of Bristol.

The Pangean does not condemn or condone any of the views of its contributors. It only gives them the space to think and write without hindrance.