There is much debate about the intrinsic goodness or badness of AI – are we destined for a dystopian future where AI decides we are no longer good enough? Yet perhaps the bigger risk in the short to medium term is our human tendency toward malicious intent – are we good enough for AI? Either way, we must develop the governance that allows us to have confidence in the safety of AI.
At this point AI is the product of data-driven learning – it has no conscience and cannot explain its reasoning. There is no implicit good or bad in AI; it will simply respond with results derived entirely from its training. The goodness or badness of AI will thus depend on how well we train it, and perhaps most importantly, how well we test it.
There have been several failures of AI that raise concern: the self-learning chatbot that developed racist and sexist traits after only a few days of exposure to the public, and the resume-screening AI that filtered out younger women on the basis that they had gaps in their employment history (typically for raising children).
Many implementations of new technology have been tempered by establishing controls and practices that make the technology safe. Historically this has come both from thoughtful design and from responses to disasters. We can learn from the way that technology domains such as aircraft, nuclear power and medical devices have evolved.
We have yet to create the frameworks and practices that let us test and govern AI implementations – particularly where our own safety is involved. There is no doubt that current engineering efforts consider safety, but we simply don't know enough about how the risks will emerge. Even where AI has no safety impact, it may still create an ethical and moral dilemma by perpetuating unsustainable behaviours and points of view.
Human behaviour is extremely varied – even among those who don't break the law, there is a broad range of behaviour and attitudes that society tolerates. One of the biggest challenges we face with AI is choosing what counts as acceptable AI behaviour. In most cases an AI system will exhibit a single behaviour, so we need to determine how we agree, from an ethical perspective, on what that behaviour should be. Some autonomous cars allow drivers to select a driving profile based on their own preferences (essentially balancing self-preservation against harm to others) – should we even allow such choices to be made?
Humans are also inherently challenging – graffiti, although generally harmless and an act of self-expression, is still illegal. The same intent can be seen in some hacking activities where no malicious damage is done, yet they clearly advance a particular point of view. Criminal activities range from intentional damage (with potential safety impacts) to pursuits of greed and power (sometimes even wielded by nation states). When we consider this in the context of an AI-enabled future, we have to expect that human maliciousness and ingenuity will find ways to subvert the intent of AI systems, potentially resulting in societal or individual harm.
I am a strong advocate of embracing technology – it is in our nature to explore and innovate – yet we need to avoid adopting technology 'at any cost' if we are to protect our people, our society, and our world.
What is your view? Are we good enough for AI? Please leave a comment below.