Is superintelligence dangerous? This question has been in our imagination for decades, inextricably sustained and distorted by science fiction. Now, things are different. The question is dominating headlines and entering the agendas of governments, philanthropists, philosophers and AI researchers.
This issue certainly deserves public attention and research effort. Still, I’ve been resisting the trend. What really fascinates me is untangling what intelligence is, and how it can be measured, in all its forms and degrees. Ultimately, as I argue in my recent book, The Measure of All Minds, we cannot get very far in assessing the perils of superintelligence if we cannot measure intelligence effectively. We need to understand how technology can extrapolate it beyond the kinds of intelligence we know.
At the Beneficial AI conference in Asilomar in January this year, I had the opportunity to listen to the most prominent voices on the safety and impact of AI. I got an update on some of the concerns about AI, especially those about AI control, which I can briefly summarise as follows:

- Can an autonomous system anticipate the consequences of its actions?
- Does it understand what is and is not allowed?
- Can it recognise and use resources in an appropriate way?
- Can it tell when it is being wireheaded or manipulated?
- Can it cope with unpredictable situations?
- Can it learn the values of those around it?
It’s enlightening to compare this list with the problems parents face when raising their children or leaving them alone at home. Unsurprisingly, the modern view of AI safety no longer aims at imbuing AI systems with all the right skills and values, but at ensuring that they develop them. In the same vein, the International Joint Conference on Artificial Intelligence has chosen autonomy as its key theme this year. Are AI systems prepared to be autonomous? Do we want them to be?
We can look at the previous bullet list more carefully. For an autonomous system to know the consequences of its actions, to understand what is or is not allowed according to law, to recognise and use resources in an appropriate way, to know when wireheading or manipulation is happening, and to cope with unpredictable situations, it needs intelligence. Ultimately, learning others’ values requires intelligence too. It is then the infraintelligence of autonomous systems that is really dangerous. Accordingly, I’m tempted to rephrase Stuart Russell’s long-term question of “should we fear supersmart robots?” into a more short-term concern: “should we fear supersilly robots?”
Of course, in the longer term, if truly intelligent AI systems are granted autonomy, we will have to be wary of them. We know the history of intelligence and domination well, as Stephen Cave, executive director of the Centre for the Future of Intelligence in Cambridge, UK, has recently pointed out. Making, or faking, a difference in cognitive power has increasingly pervaded natural evolution, especially for social species, and human civilisations. Indeed, the crux of the issue lies not in the quantity of intelligence but in its variance. So again, I would ask a different question: “should we fear an unequal distribution of intelligence?”
This brings us back to the measurement problem. Intelligence is not a monolithic concept but a conglomerate of behavioural features. The animal kingdom also reminds us that there is no such thing as a gold standard of intelligence. Indeed, the problems with the very concept of “human-level (machine) intelligence” are becoming more conspicuous in light of the diversity of technology-enhanced (or atrophied) humans, AI systems and hybrid collectives. What is the direction of this new distribution of cognitive power? Do we have accurate instruments to evaluate this increasing diversity?
The evaluation and comparison of behavioural features (including cognitive abilities and personality traits) of humans, non-human animals and AI systems is not only a major scientific inquiry but also an urgent research area with enormous implications for safety. And it could not be otherwise: measurement is crucial for all branches of science and engineering, as well as for governance.
For more information, check out The Measure of All Minds