This article presents some thoughts on the current status of AI and the next years of its evolution. Center stage are two dichotomies: (1) real AI applications vs. the public imagination, and (2) commercial AI systems vs. academic research ambitions. The current state of AI is a bag of disparate, specialized models, and the future may require a fundamental redesign that combines the popular connectionist approach of neural networks with the logical approach of reasoning systems to achieve a higher degree of generality and fault tolerance.
Reality vs. Imagination
When we look across the plethora of AI applications in productive use today, one feature jumps out immediately: successful systems are specialized in a single activity. They have been carefully trained for that activity, and they do it well. Present them with even a slightly altered situation and they do not fail gracefully – they fail catastrophically.
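As a toy illustration of this brittleness, consider the following sketch (the data and numbers are entirely made up to demonstrate the point): a classifier trained on one data distribution can collapse to near-random performance when its inputs are merely shifted.

```python
# Illustrative only: a model trained on one distribution fails when
# the inputs shift slightly (here, by a constant offset).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Training data: two well-separated clusters (classes 0 and 1).
X_train = np.vstack([rng.normal(0, 1, (500, 2)),
                     rng.normal(4, 1, (500, 2))])
y_train = np.array([0] * 500 + [1] * 500)

clf = LogisticRegression().fit(X_train, y_train)
print("in-distribution accuracy:", clf.score(X_train, y_train))  # ~1.00

# A "slightly altered situation": the same points, shifted by a constant.
X_shifted = X_train + 3.0
print("shifted-input accuracy:", clf.score(X_shifted, y_train))  # ~0.50
```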
The general public imagines AI to be like a superhuman: able to do everything we can, but without rest and at lightning speed. Many, including some eminent scientists like Stephen Hawking, have called for the regulation of AI in order to prevent an AI apocalypse in which robots take over the world.
These two viewpoints are diametrically opposed. The current state of AI is extremely far removed from a state in which it could take control of the world. Indeed, we discover every day, when we visit some retail website, that the chatbot that cheerfully greets us is wholly incapable of answering even targeted questions, let alone carrying a sensible conversation. This leads to the second dichotomy.
Commercial vs. Academic
Commercial AI systems are supposed to be great at exactly one specific task, like recognizing which traffic sign is on the road for an autonomous car, or ranking web pages in response to a search request. They are supposed to do this because they enable and automate a specific service and business model for the company that deploys them. Compare this to the assembly line worker who is not expected to be creative. Some applications, such as autonomous vehicles, require systems in which multiple AI models work in concert. Even here, each model performs one special task and then hands its result over to the next model.
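A toy sketch of such a hand-off might look as follows (the model names and logic are hypothetical stand-ins, not any production system): one narrow model recognizes the sign, another turns that recognition into a driving decision.

```python
# Hypothetical sketch: two narrow models composed into a pipeline,
# each doing one task and handing its output to the next.
from dataclasses import dataclass
from typing import Optional

@dataclass
class SignDetection:
    kind: str                   # e.g. "stop" or "speed_limit"
    value: Optional[int] = None

def detect_sign(image) -> SignDetection:
    """Specialized model #1: recognize the traffic sign in a camera frame."""
    # In practice, a trained vision model; here, a fixed stand-in result.
    return SignDetection(kind="speed_limit", value=50)

def plan_speed(detection: SignDetection, current_speed: float) -> float:
    """Specialized model #2: turn the detection into a target speed."""
    if detection.kind == "stop":
        return 0.0
    if detection.kind == "speed_limit" and detection.value is not None:
        return min(current_speed, float(detection.value))
    return current_speed

# Each component solves only its own narrow task; the hand-off is explicit.
print(plan_speed(detect_sign(image=None), current_speed=60.0))  # 50.0
```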
Academic AI has long dreamed of “general AI,” or its somewhat reduced version, “robust AI.” The concept is that AI should be capable of more general and fault-tolerant behavior. Discussions of such systems often focus on natural language processing and propose that the AI must possess an underlying model of the world to be able to reason about it. Only then, goes the idea, will the AI be able to articulate in language what makes sense in the world. One objection is that this somewhat ignores the reality that most AI applications do not require any language capability.
After decades of modeling systems with purely data-driven approaches, and experiencing great successes, it seems strange to say that the future lies in injecting AI with real-world information in the form of structure that allows logical reasoning. In my personal opinion, however, it is true. Having spent many years building industrial AI systems, I have learned the critical importance of incorporating human domain knowledge into models. That has been, and remains, in the context of AI systems built for a specific use case, though.
Books like “The Master Algorithm” by Pedro Domingos and articles like “The Next Decade in AI” by Gary Marcus emphasize the need for a unification of the connectionist and logical/symbolist approaches. The connectionists currently rule AI with deep and convolutional neural networks. The symbolists, who use logical deduction, have had to take a step back, but they may represent the next step-change if we can figure out how to combine both approaches in real working applications. At present, it feels a little like asking for the unification of the fundamental theories of physics, but this is where the challenge lies.
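To make the hybrid idea concrete, here is a deliberately simple sketch (entirely my own illustration, not a method from either book): a “connectionist” component produces soft scores for symbols, and a “symbolist” component applies explicit logical rules on top of them.

```python
# Toy neuro-symbolic sketch: a learned perception step feeds a rule-based
# reasoning step. Both components here are hypothetical stand-ins.

def perceive(image) -> dict:
    """Stand-in for a neural network: returns P(symbol | image)."""
    return {"stop_sign": 0.92, "green_light": 0.03}

# World knowledge as explicit rules: (premise symbol, threshold, conclusion).
RULES = [
    ("stop_sign", 0.5, "must_halt"),
    ("green_light", 0.5, "may_proceed"),
]

def reason(beliefs: dict) -> set:
    """Stand-in for a logical reasoner: forward-chain over the rules."""
    return {concl for sym, thr, concl in RULES if beliefs.get(sym, 0.0) > thr}

print(reason(perceive(image=None)))  # {'must_halt'}
```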
On top of such questions of generality comes the question of bias or ethics. A system that tells a stop sign from a speed limit sign is not going to cause ethical problems. A broader system that communicates with people may do so. Such systems may have to be supplied with a moral code, partly implanted directly in the logic, and partly taught via an appropriate dataset. Nature vs. nurture, but for the digital age.
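One could imagine implementing that split in miniature like this (a hypothetical sketch, not a proposal for a real safety system): a hard-coded rule set that can never be learned away, combined with a learned judgment trained on a curated dataset.

```python
# Hypothetical sketch of a "moral code": hard rules implanted in the logic
# ("nature") plus a learned score trained on curated data ("nurture").

FORBIDDEN_ACTIONS = {"reveal_private_data", "encourage_harm"}  # hard rules

def learned_appropriateness(response: str) -> float:
    """Stand-in for a model trained on a curated dataset."""
    return 0.9  # placeholder score

def is_allowed(action: str, response: str) -> bool:
    if action in FORBIDDEN_ACTIONS:  # logic: absolute, never overridden
        return False
    return learned_appropriateness(response) > 0.5  # data: learned judgment

print(is_allowed("answer_question", "The store opens at 9 am."))  # True
print(is_allowed("reveal_private_data", ""))                      # False
```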
Chasing Performance
Most AI professionals, in commercial companies and academic establishments alike, work on improving the accuracy of models by tweaking architectures and algorithms and by collecting ever more data. An improvement in accuracy by a small fraction is hailed as an accomplishment, even if no one can explain how the latest tweak brought it about. With commercial hype and publish-or-perish pressure raising the stakes, people have little choice but to shout about the latest step forward, regardless of how much of a baby step it is.
One way out of the corner we have worked ourselves into is to start combining several (existing) models into a more holistic system that can do more than a single specific thing. While this may not bring us closer to the dream of full intelligence, it would provide a practical way forward in the short term while the larger issues are tackled. Autonomous vehicles are a prime example of this, because they require many models to work together in a single system. One might imagine a similar approach in other applications like retail, finance, health care, logistics, and industrial processes.
Conclusion
All this is overshadowed by the sheer commercial success of the specialized connectionist approach, which will continue to steal the limelight, and the funding, away from more future-oriented approaches, as nicely presented in “Prediction Machines” by Ajay Agrawal, Joshua Gans, and Avi Goldfarb.
In conclusion, do not worry about AI taking over the world, because it already has – albeit not in a controlling, super-villainous way. AI is nowhere close to acting the way it does in Hollywood movies like The Terminator. Current systems do very well in their domain and fail as soon as the situation strays slightly outside of it. To overcome this problem, we may have to change the fabric of AI and combine neural networks with logical reasoning based on a world model, which will be challenging. The question before us is whether this is interesting enough to pursue, given the commercial interests, the short time-scales common in the software industry, and the sheer pace of the field.
This article was originally published on the author's blog.
Get Started With Engati
As Patrick says, there's no need to fear automation – it has already become part of our everyday lives, especially in customer service. Explore Engati's conversational automation solutions today; no credit card needed. Sign up here.