Artificial Intelligence promises to change the world – yet the current implementations are substantially constrained and still a long way from replicating human intelligence. How can we make the best use of the technology?
AI is good at some things – but currently very poor at others. I characterise current AI as being good at making constrained atomic decisions based on extended learning. In practice this means AI is good at tasks like language translation – it can take a sentence and generally convert it with reasonable accuracy. It is extremely poor, though, at writing books or poetry. Similarly, it can make a medical diagnosis from a collection of evidence, yet I would not be able to talk to the AI about treatment plans or managing my disease.
The nature of AI means it will continue to dominate in the space of constrained atomic decisions – decisions that are atomic (short term and relatively straightforward), constrained (within a specific domain) and learned (where we have volumes of both input and output data to train the AI).
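To make the idea concrete, here is a minimal sketch of a constrained atomic decision: a nearest-neighbour classifier trained on labelled examples, making one short, self-contained decision per input. The feature values and labels are invented purely for illustration.

```python
import math

# Hypothetical training data: (feature vector, label).
# Each decision is atomic (one input, one answer), constrained (a fixed
# label set), and learned (derived entirely from labelled examples).
training = [
    ((1.0, 1.0), "benign"),
    ((1.2, 0.9), "benign"),
    ((4.0, 4.2), "malignant"),
    ((3.8, 4.5), "malignant"),
]

def classify(point):
    """Return the label of the nearest training example (1-NN)."""
    nearest = min(training, key=lambda ex: math.dist(point, ex[0]))
    return nearest[1]

print(classify((1.1, 1.0)))  # near the "benign" cluster
print(classify((4.1, 4.0)))  # near the "malignant" cluster
```

Note the model answers only this one narrow question – it has no notion of treatment plans or anything outside its training domain.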
At this point AI can’t explain its own decisions. Training AI is an art: the analyst needs to define the right abstractions at each layer of the deep learning model, and also prune out anomalies where the AI is over-trained (overfitted) to the input data and fails to work well with new inputs. The analyst can’t always explain why the AI made a particular decision – the training is far too complex and specifically avoids logical induction in favour of data-driven decisions. In most cases we as humans expect logical induction or deduction to explain something.
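Over-training can be sketched in miniature, using invented toy data: a "model" that simply memorises its training set scores perfectly on inputs it has seen but fails on new ones, while a simpler rule induced from the same data generalises.

```python
# Toy data: numbers labelled by parity (invented for illustration).
train = [(1, "odd"), (2, "even"), (3, "odd"), (4, "even")]
test = [(5, "odd"), (6, "even")]

# "Over-trained" model: a pure lookup table of the training inputs.
memorised = dict(train)

def overfit_predict(x):
    return memorised.get(x, "unknown")  # no answer for unseen inputs

# Simpler rule induced from the data: the parity of the input.
def general_predict(x):
    return "even" if x % 2 == 0 else "odd"

def accuracy(predict, data):
    return sum(predict(x) == y for x, y in data) / len(data)

print(accuracy(overfit_predict, train))  # perfect on seen data
print(accuracy(overfit_predict, test))   # fails on new data
print(accuracy(general_predict, test))   # the simpler rule generalises
```

Pruning out this kind of memorisation – without being able to articulate a logical rule the model follows – is part of what makes training an art rather than a science.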
AI is an attempt to replicate human thought – and there are some specific considerations that make this much more challenging. Firstly, our brains are far more complex than any deep learning model – we have many more neurons, and it will be at least 20 years before we could get close to matching the brain’s neuron capacity. Secondly, the structure of our brains is still poorly understood – we have only recently discovered extended neural links connecting various parts of the brain in ways we did not know about a few years ago. Thirdly, it is hypothesised that our brains may actually work at a quantum level – meaning that any neural simulation model will always come up short.
Interestingly, our own brains work to ‘logically justify’ decisions even when those decisions are clearly influenced by factors other than inductive or deductive logic. This self-justification may be unique to humans and our self-awareness and consciousness, but it appears that the explainability of our decisions is foundational to our psyche.
AI is set to change our world enormously – yet we are not ready to embrace and govern this change in society. AI is open to abuse and bias that we may not understand until it’s too late and significant damage has been done. We clearly need to embrace this innovation – but we must also put in place the controls to protect us.
See this post on LinkedIn: Do we really understand artificial intelligence?