Judea Pearl, the recently announced winner of the 2011 Association for Computing Machinery (ACM) A.M. Turing Award for contributions that transformed artificial intelligence, has been at the forefront of the development of computational intelligence for more than two decades. Read Part One for Judea Pearl's thoughts on the challenges he has faced in his research and on his inspiration.
Your work was instrumental in changing what it meant for computers to be intelligent. What do you think could be the next big developments in computational intelligence?
It is hard to tell. The field of AI is so diverse, and its applications penetrate such a vast range of human activities, that it is impossible to tell whether vision, natural language, or planning will see the next big development. I would rather speculate about areas where I have more intimate knowledge and more direct influence.
Where do you see machine learning going in the future?
I have seen much progress in causal discovery, an area where I did some early work in the late 1980s and which has developed significantly since, to the point where annual competitions are held for programs that discover the correct cause-effect relationships from a given stream of passive observations. It would be exciting if machines could discover that the rooster's crow does not cause the sun to rise and, more ambitiously, that malaria is not caused by mal-air.
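To give a concrete flavor of what such programs do, here is a minimal sketch of the conditional-independence testing that constraint-based discovery algorithms (such as the PC algorithm) are built on. The simulated sunrise scenario, the variable names, and the coefficients are all invented for illustration; this is not code from any competition entry.

```python
# A minimal sketch of the conditional-independence primitive underlying
# constraint-based causal discovery. Scenario and numbers are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Ground truth (a chain): daylight -> sunrise -> rooster_crow.
daylight = rng.normal(size=n)
sunrise = 2.0 * daylight + rng.normal(size=n)
crow = 1.5 * sunrise + rng.normal(size=n)

def partial_corr(x, y, z):
    """Correlation of x and y after linearly regressing out z (a Gaussian CI test)."""
    rx = x - np.polyval(np.polyfit(z, x, 1), z)
    ry = y - np.polyval(np.polyfit(z, y, 1), z)
    return np.corrcoef(rx, ry)[0, 1]

# Marginally, daylight and the crow are strongly dependent ...
print(np.corrcoef(daylight, crow)[0, 1])      # ~0.86
# ... but conditioning on sunrise screens them off; this independence
# pattern is what a discovery algorithm uses to rule out causal structures.
print(partial_corr(daylight, crow, sunrise))  # ~0.0
```

The discovery programs Pearl refers to run many such tests and use the resulting pattern of independencies to eliminate candidate cause-effect structures from observational data alone.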
What areas do you think Bayesian networks, causality, and inference have most promise for in the future?
I see untapped opportunities in aggregating data from a huge number of disparate sources, say, patient data from hospitals, and coming up with coherent answers to queries about a yet-unseen environment or sub-population. We have begun to look at this challenge through the theory of “transportability”, but we need to go all the way from meta-analysis to meta-synthesis. Currently, meta-analysis does little more than averaging apples and oranges to estimate properties of bananas. We need a principled methodology for analyzing differences and commonalities among studies, experimental as well as observational, and for pooling the relevant information so as to synthesize a combined estimator for a given research question in a given target sub-population. Our team is currently working on the theoretical aspects of this challenge, and I am sure practitioners will be thrilled with the results.
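For a concrete sense of what principled pooling involves, below is a toy rendition of one simple case of the transport formula from the transportability theory mentioned above: an effect estimated in a source study is re-weighted by the target sub-population's covariate distribution, P*(y | do(x)) = Σ_z P(y | do(x), z) P*(z). All numbers here are invented for illustration.

```python
# Toy transport formula: re-weight stratum-specific effects from a source
# study by the target sub-population's covariate distribution P*(z).
# All numbers are invented for illustration.

effect_in_source = {"young": 0.30, "old": 0.10}  # P(y | do(x), z) from the study
p_z_source = {"young": 0.7, "old": 0.3}          # P(z) in the source population
p_z_target = {"young": 0.2, "old": 0.8}          # P*(z) in the target sub-population

naive = sum(effect_in_source[z] * p_z_source[z] for z in p_z_source)        # 0.24
transported = sum(effect_in_source[z] * p_z_target[z] for z in p_z_target)  # 0.14

print(f"naive source average: {naive:.2f}, transported to target: {transported:.2f}")
```

The contrast is the point: the naive average answers a question about the source population, while the transported estimate answers the query about the target sub-population, which is what meta-synthesis is after.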
In recent years, artificial intelligence has matured from logic-based methods to probabilistic and data-based methods. Do you think this trend is irreversible, or will the latter run their course and give way to a new era for logic?
Logic can play a major role in scaling up reasoning tasks, for example, in going from propositional to predicate logic and relational databases. But it is hard for me to envision how a purely logical system would cope with the uncertainty in the world and, more importantly, how it could learn through the gradual accumulation of (noisy) observations.
Are there any new hot topics or new debates in your subject area which we are yet to hear about?
The temperature of a topic is a function of the thermometer, which in our case is the skill set available to the investigator you ask. Speaking from my perspective, I see the issue of “free will” becoming a topic of lively discussion as robots acquire greater autonomy and become more proficient in counterfactual and introspective reasoning, which in turn are necessary for learning from regret.
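To unpack the connection between counterfactuals and regret, here is a minimal sketch of the standard abduction-action-prediction recipe for computing a counterfactual in a structural causal model. The one-equation model and its numbers are invented for illustration.

```python
# Counterfactual "regret" via abduction-action-prediction in a toy
# structural causal model. Model and numbers are invented.

def reward(action, u):
    """Structural equation: reward = 2 * action + u, with unobserved noise u."""
    return 2 * action + u

# The agent's actual episode.
observed_action, observed_reward = 0, -0.5

# 1. Abduction: infer the unobserved noise consistent with what was seen.
u = observed_reward - 2 * observed_action              # u = -0.5

# 2. Action: hypothetically switch to the road not taken.
alternative_action = 1

# 3. Prediction: replay the *same* world under the alternative action.
counterfactual_reward = reward(alternative_action, u)  # 1.5

regret = counterfactual_reward - observed_reward       # 2.0
print(f"regret: {regret}")  # positive: the other action would have paid more
```

An agent that can run this replay can compare the action it took against the action it could have taken in the same circumstances, which is exactly what learning from regret requires.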
On the practical side, I believe that “meta-synthesis,” as I described above, will become an arena for productive research.
Most current debates are inconsequential. There is, for example, a philosophical debate on whether counterfactuals are necessary, useful, or dangerous for causal inference. There is also an ideological-methodological debate on whether graphs are necessary, useful, or dangerous for causal inference. These, I believe, are passing debates that will fade away as soon as the cultural transition from statistical to causal thinking completes its course.
A more important debate rages over how scientists should think about science. In particular, should they ask “How does Nature work?” or “How do we test what Nature does?” My mantra is: Think Nature, not experiment. I have seen too many good ideas stifled by thinking about experiments rather than about Nature. Had Newton worried about experiments, he would not have theorized that the tides are caused by the moon. The requirement of manipulability, even in theory, has led to some truly weird results in causal inference.
Are there any neglected/new areas in the field that you feel offer the potential for more attention/research?
The most neglected area I know of is causal inference in statistics education, and I have donated part of the Turing Award money to the American Statistical Association to establish a prize for a person or team who does the most to introduce causal inference into education. What is missing is a 100-page booklet that would convince every statistics instructor that causation is easy (it is!) and that he or she too can teach it for fun and profit.
Do you have any recommendations for further reading and/or specialized libraries/collections?
It is always a pleasure to recommend my own books and articles, but if you are aiming for a different source, all standard AI textbooks nowadays discuss Bayesian networks, graphical models, and causal reasoning.