Algorithms Are No Better At Telling The Future Than Tarot Cards Or A Crystal Ball
An iconoclast's view of Artificial Intelligence and 'Data Science.'
written by Ian R Thorpe
In a medium that rewards those who specialise by writing on a particular theme, I remain stubbornly a generalist. One recurring theme in my work, however, is challenging those gods of modern life it has become heretical to question. Perhaps my role is that of specialist iconoclast. Here I take on those who evangelise the creed of ‘Data Science’, the buzz phrase that has replaced the now somewhat tarnished ‘Big Data’.
According to a new report “An increasing number of businesses are investing in advanced technologies that can help them forecast the future of their workforce and gain a competitive advantage”. It’s true, almost every day we see more and more bollocks being written by supposedly intelligent people who believe that by using things called ‘Data Science’, ‘Artificial Intelligence,’ and ‘Big Data’, machines can already be relied on to make better decisions than humans, and that soon computers will equal or even surpass us in actual intelligence.
Such people are to be pitied rather than despised. Obsessed with ‘science’ and besotted with the idea that change equals progress, they are simply not intelligent enough to distinguish factual information from the far-fetched fantasies of science fiction writers. Having put a very successful career in Information Technology behind me (I had to retire early due to health problems), I have always maintained that machines will only be capable of behaving intelligently if we radically redefine what we mean by ‘intelligence’.
Personally, I am quite sure there is a little more to our thought processes than the ability to parse vast amounts of data extremely quickly and filter or match certain keywords. Language is how we communicate not only information but ideas, emotions and stories. Even leaving aside ideas such as morphic resonance and morphic fields, which sit at the bleeding edge of neurological research, I think most people would agree that the human decision-making process is beyond our current level of understanding.
Our principal means of communicating ideas is language, the spoken and written word. Machines have no ability to infer meaning from words. You can feed a million words into a computer, along with their definitions, and when you enter any one of those words and ask for its definition, a very simple program will display the answer almost instantly, without the machine having the slightest idea what any of it means.
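To illustrate the point, here is a minimal sketch of the kind of lookup program described above (the word list and definitions are hypothetical, purely for illustration). The machine retrieves stored text by matching a key; at no point does it grasp what either the word or the definition means.

```python
# A toy 'dictionary' program: it stores words against definitions and
# returns the matching text, but it understands neither.
definitions = {
    "intelligence": "the ability to acquire and apply knowledge and skills",
    "algorithm": "a process or set of rules followed in calculations",
}

def define(word):
    # Pure keyword matching; no meaning is inferred from the word itself.
    return definitions.get(word.lower(), "No definition stored.")

print(define("Intelligence"))  # prints the stored text, nothing more
```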
Many analysts, business consultants and hi-tech corporations, however, continue to believe that, with enough data, algorithms embedded in currently fashionable People Analytics (PA) applications can predict all aspects of employee behaviour: from productivity to engagement to interactions and emotional states. Predictive analytics powered by algorithms are designed to help managers make decisions that favourably impact the bottom line. The global market for this technology is expected to grow from US$3.9 billion in 2016 to US$14.9 billion by 2023.
But this is all little more than wishful thinking.
Despite all the usual promises and all the geek mythology, predictive algorithms are as mystical as the oracles and auguries of the ancient world. One of the fatal flaws of predictive algorithms, the one that has made such nonsense of the predictions of climate change soothsayers, is their reliance on “inductive reasoning”. This is when we draw conclusions based on our knowledge of a small sample, and assume that those conclusions apply across the board. It is the methodology that predicted the Remain campaign would win Britain’s EU referendum and that Hillary Clinton would annihilate Trump in the US Presidential election.
Inductive reasoning is based on prescribed logic (the program), which is not a normal human being’s approach. Artificial intelligence is just that: artificial. Let’s take a very low-level look at what happens inside a computer when it is processing data through its algorithms.
The problem with machines is that they can only do as instructed. Thus they apply linear projections to random, flexible, non-linear systems. Logic gates, arrays of which do all the business in a computer, are set to detect whether certain conditions are “true”. They do this by working with three states: high (+), low (−) and the one most people overlook, nul, the state of nothingness. Nul is kept free of any charge by being clamped to earth and is necessary so that the gate has a reference against which to compare the other two. The processor then shifts data or makes decisions by testing whether a number of conditions are true, using the following gate types:
NOT, AND, NAND, OR, NOR, EX-OR and EX-NOR.
Avoiding technical jargon: if we assume A and B are inputs and O is the output, the symbol <> (greater than or less than) means not equal to, and n is used to indicate a binary digit. O=true indicates the logic gate should let the data bit pass through. What follows is a very simple explanation of binary logic:
(NOT gate) If A=n is NOT true then O=true
(AND gate) If A=n true AND B=n true then O=true
(NAND gate) If NOT (A=n true AND B=n true) then O=true; O is true unless both inputs are true
(OR gate) If either A=n true OR B=n true then O=true
(NOR gate) If NOT (A=n true OR B=n true) then O=true; O is true only when neither input is true
(Ex-OR gate) If A<>B (the inputs differ) then O=true; if A and B are equal then O is not true
(Ex-NOR gate) If A=B (the inputs are equal) then O=true; if they differ then O is not true
In modern systems it is highly unlikely the latter three would be used.
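To show how unmystical this all is, here is a rough software sketch of the gate logic just described, with ordinary boolean functions standing in for the physical gates. This is purely illustrative, not a claim about how a real processor is built.

```python
# Each 'gate' reduced to a boolean function on its inputs A and B.
def NOT(a):     return not a
def AND(a, b):  return a and b
def NAND(a, b): return not (a and b)
def OR(a, b):   return a or b
def NOR(a, b):  return not (a or b)
def XOR(a, b):  return a != b   # Ex-OR: true when the inputs differ
def XNOR(a, b): return a == b   # Ex-NOR: true when the inputs match

# Print a truth table: every 'decision' is just these fixed tests.
for a in (False, True):
    for b in (False, True):
        print(a, b, AND(a, b), NAND(a, b), OR(a, b), NOR(a, b), XOR(a, b))
```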
Those tests are performed by setting switches. Long ago, when I was a trainee programmer and Methuselah was just a lad, we set them by physically flipping rocker (bootstrap) switches; now the settings are loaded from software. So, armed with that knowledge, can you really foresee a computer being able to decide whether to have chocolate cake or apple pie, buy a BMW or a Lexus, watch the news or Game of Thrones?
When we start to believe that inanimate machines will ever be able to think as humans do, we are on the high road to buffoonery and bluff. It does sell well, though: there are enough tech fans out there to make products based on the idea commercially viable. Conviction of purpose provides the saleable outcomes, irrespective of truth, hope, charity, faith or desperation.
Where inductive reasoning falls down is that it ‘thinks’ like a machine. To put it in human terms, a manager might observe that every employee he has worked with who holds an MBA is highly motivated. According to inductive reasoning it therefore follows that all workers with an MBA are highly motivated. The conclusion is flawed because it assumes a consistent pattern where there are many unpredictable factors in play.
Experience to date informs us the pattern exists, so there can be no reason to suspect it will be broken. In other words, inductive reasoning can only be inductively justified: it works because it has worked before. Therefore, there is no logical reason to consider that the next person our company hires who has an MBA degree will not be highly motivated. That is how machines think. A human manager, looking for a highly motivated candidate to fill a position, would not make assumptions based on the kind of qualification candidates hold, but would frame certain questions in the interview to explore that aspect of a candidate’s suitability.
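Expressed as a crude sketch (the sample of hires below is entirely hypothetical, invented only to illustrate the fallacy), a program that generalises from a handful of observations will happily learn a rule that the very next hire can break.

```python
# Hypothetical sample of past hires as (has_mba, highly_motivated) pairs.
observed = [(True, True), (True, True), (True, True), (False, False)]

# Naive inductive 'rule': every MBA holder seen so far was motivated,
# therefore (the machine concludes) every MBA holder is motivated.
rule_holds = all(motivated for has_mba, motivated in observed if has_mba)
print("Rule learned: MBA implies motivated?", rule_holds)  # True

# The next hire has an MBA but is not motivated; the induction breaks.
next_hire = (True, False)
prediction = next_hire[0] and rule_holds  # the machine predicts motivated
print("Prediction:", prediction, "Reality:", next_hire[1])
```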
And until machines can handle unpredictability we should stop indulging fantasists by talking about Artificial Intelligence and refer more realistically to data processing.
It often seems the advocates of Artificial Intelligence, Deep Learning and the rest are seeking to justify their denial of the human capacity to access the infinite intelligence of nature that structures and essentially resides in human consciousness.
In my view the clearly observable difference between humans and machines with respect to intelligence is based on that more spiritual perspective, at least in the broader, philosophical sense. We don’t have to invoke any specific religious belief system to talk about spirituality. I am certainly not a follower of any religion, though I do not deny the possibility of some higher intelligence along the lines of C. G. Jung’s collective unconscious (or maybe The Force of the Jedi Knights).
Fundamentalists within every religion insist on the existence of a supreme being that is present and active in our physical world, but ignore the deeper spiritual truths embedded in the ancient myths. These genuinely fundamental spiritual values are too abstract for most people to reflect on in depth, yet, while present even in animals, they are among the qualities that will always separate us from machines.