Deep Learning Makes Conventional Machine Learning Look Dumb

By Janani Gopalakrishnan Vikram

Another of Google’s pets in this space is DeepMind, a British company that it acquired in 2014. DeepMind made big news in 2016, when its AlphaGo program beat world champion Lee Sedol at Go, an ancient Chinese board game believed to be far more complex than chess.

Usually, AI systems try to master a game by constructing a search tree that covers all possible moves. This is infeasible in Go, a game believed to have more possible board positions than there are atoms in the universe.
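
To see why, here is a minimal sketch of that brute-force idea: a recursive search that scores a position by exploring every legal continuation. The Game interface (legal_moves, play, is_over, score) is a hypothetical abstraction used only for illustration.

```python
# Minimal sketch of the exhaustive game-tree search described above.
# The Game interface (legal_moves, play, is_over, score) is hypothetical.
def search(game, maximizing=True):
    """Return the best achievable score by exploring every continuation."""
    if game.is_over():
        return game.score()  # e.g. +1 win, -1 loss, 0 draw
    scores = [search(game.play(move), not maximizing)
              for move in game.legal_moves()]
    return max(scores) if maximizing else min(scores)

# In Go, the branching factor is so large that this kind of exhaustive
# search never finishes in any practical amount of time.
```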

AlphaGo combines an advanced tree search with DNNs. These neural networks take a description of the Go board as input and process it through 12 different network layers containing millions of neuron-like connections. One network, called the policy network, selects the next move, while another, called the value network, predicts the winner of the game.
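
As a rough illustration of this two-headed design, here is a minimal sketch of a network with a shared convolutional trunk, a policy head and a value head, written in PyTorch. The layer counts, channel sizes and board encoding are assumptions made for the example, not AlphaGo’s actual architecture.

```python
# Minimal sketch of a policy/value two-headed network in PyTorch.
# Illustrative only: layer counts, channel sizes and the board
# encoding are assumptions, not AlphaGo's real design.
import torch
import torch.nn as nn

BOARD_SIZE = 19  # standard Go board

class PolicyValueNet(nn.Module):
    def __init__(self, in_planes=4, channels=64, num_layers=12):
        super().__init__()
        layers = [nn.Conv2d(in_planes, channels, 3, padding=1), nn.ReLU()]
        for _ in range(num_layers - 1):
            layers += [nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU()]
        self.trunk = nn.Sequential(*layers)
        # Policy head: a score for every point on the board.
        self.policy_head = nn.Sequential(
            nn.Conv2d(channels, 2, 1), nn.ReLU(), nn.Flatten(),
            nn.Linear(2 * BOARD_SIZE * BOARD_SIZE, BOARD_SIZE * BOARD_SIZE),
        )
        # Value head: a single number estimating who will win.
        self.value_head = nn.Sequential(
            nn.Conv2d(channels, 1, 1), nn.ReLU(), nn.Flatten(),
            nn.Linear(BOARD_SIZE * BOARD_SIZE, 64), nn.ReLU(),
            nn.Linear(64, 1), nn.Tanh(),
        )

    def forward(self, board_planes):
        x = self.trunk(board_planes)
        move_logits = self.policy_head(x)   # which move to play next
        win_estimate = self.value_head(x)   # expected winner, in [-1, 1]
        return move_logits, win_estimate
```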

After training on over 30 million moves from games played by human experts, the system could predict the human player’s move around 57 per cent of the time. AlphaGo then learnt to improve on these human moves by discovering new strategies using a method called reinforcement learning: it played innumerable games between its neural networks, adjusting the connections through a trial-and-error process. Google Cloud provided the extensive computing power needed to achieve this.
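
The following is a highly simplified sketch of such a self-play, trial-and-error update, using a basic policy-gradient rule and assuming a two-headed network like the one sketched above. The play_one_game helper and the reward scheme are hypothetical placeholders, not DeepMind’s actual training pipeline.

```python
# Highly simplified self-play reinforcement-learning update.
# play_one_game and the reward scheme are hypothetical placeholders.
import torch

def self_play_update(net, optimizer, play_one_game):
    """play_one_game(net) is assumed to return a list of
    (board_tensor, chosen_move_index, outcome) triples, where outcome
    is +1 if the player who made that move eventually won, else -1."""
    history = play_one_game(net)
    loss = torch.tensor(0.0)
    for board, move, outcome in history:
        move_logits, _ = net(board.unsqueeze(0))
        log_probs = torch.log_softmax(move_logits, dim=-1)
        # Make moves from winning games more likely, losing ones less likely.
        loss = loss - outcome * log_probs[0, move]
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```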

What enabled AlphaGo to win at a game that had baffled computers till then was its ability to figure out moves and winning strategies by itself, instead of relying on handcrafted rules. This makes it an ideal example of deep learning.

The AI world has always used games to prove its mettle, but the same talent can be put to better use. DeepMind is working on systems to tackle problems ranging from climate modelling to disease analysis. Google itself uses a lot of deep learning: according to Mustafa Suleyman, co-founder of DeepMind, deep learning networks have now replaced 60 handcrafted rule-based systems at Google.

Learning from Jeopardy

Some trend-watchers claim that it is Watson, IBM’s AI brainchild, that transformed IBM from a hardware company into a business analytics major. Watson was a path-breaking natural language processing (NLP) computer that could answer questions asked conversationally. In 2011, it made headlines by beating two champions at the quiz show Jeopardy! It was soon signed on by Cleveland Clinic to synthesise humongous amounts of medical data and generate evidence-based hypotheses, helping clinicians and students diagnose diseases more accurately and plan treatment better.

Watson is powered by DeepQA, a software architecture for deep content analysis and evidence-based reasoning. In 2015, IBM strengthened Watson’s deep learning abilities by acquiring AlchemyAPI, whose deep learning engines specialise in digging into Big Data to discover important relationships.

In a media report, Steve Gold, vice president of the IBM Watson group, said, “AlchemyAPI’s technology will be used to help augment Watson’s ability to identify information hierarchies and understand relationships between people, places and things living in that data. This is particularly useful across long-tail domains or other ontologies that are constantly evolving. The technology will also help give Watson more visual features such as the ability to detect, label and extract details from image data.”

IBM is constantly expanding its line of products for deep learning. Using IBM Watson Developer Cloud on Bluemix, anybody can embed Watson’s cognitive technologies into their own apps and products. There are APIs for NLP, machine learning and deep learning, which can be used for purposes ranging from medical diagnosis to marketing analysis.
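
As a rough sketch of what embedding such a service in an app might look like, here is a minimal Python example that posts text to a cloud classification endpoint over REST. The URL, credentials and JSON shape are hypothetical placeholders, not the documented Watson API.

```python
# Illustrative sketch of calling a cloud NLP classification service.
# The endpoint, credentials and response format are hypothetical.
import requests

SERVICE_URL = "https://example-watson-host/api/v1/classify"  # hypothetical
API_KEY = "your-api-key"                                     # placeholder

def classify_text(text):
    response = requests.post(
        SERVICE_URL,
        auth=("apikey", API_KEY),
        json={"text": text},
        timeout=10,
    )
    response.raise_for_status()
    return response.json()  # e.g. predicted classes with confidences

if __name__ == "__main__":
    print(classify_text("My package arrived damaged, I want a refund."))
```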

APIs like Natural Language Classifier, Personality Insights and Tradeoff Analytics can help marketers. Data First’s influencer technology platform, Influential, used Watson’s Personality Insights API to scan and sift through social media and identify influencers for its client, Kia Motors. The system looked for influencers with traits like openness to change, artistic interest and achievement-striving. The resulting campaign was a great success.

Quite recently, IBM and the Massachusetts Institute of Technology entered a multi-year partnership to improve AI’s ability to interpret sight and sound as well as humans do. Watson is expected to be a key part of this research. In September, IBM also launched a couple of Power8 Linux servers whose unique selling proposition is their ability to accelerate AI, deep learning and advanced analytics applications. Using NVIDIA’s NVLink high-speed interconnect technology, the servers reportedly move data five times faster than competing platforms.

IBM is also trying to reduce the amount of computing power and time that deep learning requires. Its Watson Research Center believes it can do so using theoretical chips called resistive processing units (RPUs), which combine a central processing unit (CPU) with non-volatile memory. The team claims that such chips could accelerate deep learning computations by orders of magnitude, resulting in systems capable of tasks like natural speech recognition and translation between all world languages.

2 COMMENTS

  1. The title of the post should be how and what is deep learning. Saying that deep learning methods make traditional ML look dumb just brings the author’s ignorance into the limelight. ML and DL are both statistical machine learning techniques.

  2. Deep learning is also a statistical machine learning technique, albeit a more radical/new one. By saying conventional, we were only referring to older ML techniques. There was no intention to put down any technology.
