
Artificial Intelligence Getting Real, Local


Artificial intelligence (AI) is undoubtedly one of the hottest verticals to watch in 2017. Long an integral part of gaming and robotics, it has now also deeply penetrated aspects of the real world that influence everybody. As the core driving force of analytics, AI is at the heart of every vertical from industrial operations and banking to security and the Internet of Things. It has also become part and parcel of consumer electronics, as was evident at this year’s CES, and an inseparable part of social media, as we can see from the new features being launched by Facebook, Google and other Internet majors. The increased usage of AI technologies fuels further growth in the space—not just in the form of research but also new products such as dedicated AI chips. The arena is also teeming with start-ups bringing fresh new ideas to life.

Some see AI as a threat to their jobs, while others see it as an opportunity to do better. Either way, it is not something you can ignore. So, well, we try to give you a glimpse of what is happening here…

AI shakes up the chip market

The word ‘smart’ has become an almost compulsory prefix for object names, from pens to phones and musical instruments. The moment the word ‘smart’ is used, it implies some amount of intelligence (artificial, obviously) is involved. This intelligence is usually implemented using large servers running AI models based on neural networks.


In general, the operation of neural networks can be understood in two stages. The first stage involves training the AI system to perform a given task. The second stage is the inference stage, which involves execution of AI models against live data.

The training stage involves crunching immensely large data sets to help the system identify and learn from patterns in the data. Data collected by the device flies across the Internet to data centres packed with servers, where this crunching is done. On these servers, central processing units (CPUs) are usually supplemented with other types of processors, such as graphics processing units (GPUs).
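In code, the two stages look something like the following minimal sketch: a toy one-layer model trained with gradient descent, then run against new data. This is purely illustrative; real systems train deep networks on vastly larger data sets.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Stage 1: training ---
# Learn weights for a tiny one-layer network, y = x @ W, by gradient
# descent on a synthetic dataset. Real systems do this in the data centre.
X = rng.normal(size=(1000, 4))           # the "immensely large data set", in miniature
true_W = np.array([[1.0], [-2.0], [0.5], [3.0]])
y = X @ true_W

W = np.zeros((4, 1))
for _ in range(500):
    grad = 2 * X.T @ (X @ W - y) / len(X)   # gradient of the mean squared error
    W -= 0.1 * grad                         # gradient-descent update

# --- Stage 2: inference ---
# Execute the trained model against live data. No learning happens here,
# so this stage is far cheaper and is a candidate for running on the device.
live_sample = np.array([[1.0, 1.0, 1.0, 1.0]])
prediction = live_sample @ W
print(prediction)   # ≈ 2.5, the sum of the true weights
```

The asymmetry is the point: training loops over the whole data set many times, while inference is a single forward pass, which is why the two stages can live on very different hardware.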

Nvidia, one of the best-known GPU makers, has been powering data centres for years. Last year, it launched the Tesla P100—a GPU with an architecture focused on accelerating AI and deep learning. Nvidia reportedly spent more than $2 billion on R&D to produce the new chip, which packs a total of 15 billion transistors—around thrice as many as the company’s previous chips. According to a press release, an artificial neural network powered by this chip can learn from incoming data twelve times faster than one powered by Nvidia’s previous best chip.

Convinced that machine learning demands a special architecture, Google developed its own processor, the tensor processing unit (TPU). Microsoft, on the other hand, went with field-programmable gate arrays (FPGAs) for its AI needs. Not to be left out of the race, Intel acquired Altera—a company that sells FPGAs to Microsoft. Together with Nervana Systems, a company it acquired last year, Intel is also building Lake Crest (also called the Nervana Engine), a full-stack solution for deep learning, optimised at every stage to make training ten times faster.

The second stage, inference, is undergoing a major shakeup. In the past, most of this work was also done on the cloud using GPUs, TPUs and FPGAs. However, smart device makers are beginning to feel that this is not ‘real-time’ enough. So the trend is turning in favour of implementing the inference or execution stage at the edge (that is, in the device itself) instead of relying on the cloud. Since existing devices do not have enough processing power, memory or bandwidth for this, the semiconductor industry is facing heavy demand for high-performance, low-power inference engines for deep neural networks that can be built into devices.

AI goes local

In a press event held in San Francisco in March this year, Deepu Talla, vice president and general manager of Nvidia’s Tegra business unit, cited four reasons in favour of bringing AI technology to the edge (on board the device): bandwidth, latency, privacy and availability. As the number of devices communicating with the cloud rises by the day, there could be a dire shortage of bandwidth in the near future. Latency is also an issue in applications like self-driving cars, where a split second’s delay can have serious implications. Privacy, of course, has always been a serious concern; when data is processed within the device itself, there is no lingering doubt about misuse on the cloud. Availability of the cloud is also questionable in rural areas where connectivity is unreliable. Implementing AI at the edge sorts out all these issues.

“We will see AI transferring to the edge,” Talla said to the press, with future intelligent applications using a combination of edge and cloud processing.

This requirement to build intelligence into the device is causing a major bustle in the semiconductor industry, not to mention a lot of hardware innovation. In last month’s story on smart robotics, we read about a micromote (a chip measuring just one cubic millimetre) developed at the University of Michigan, which incorporates a deep-learning processor capable of operating a neural network using just 288 microwatts.

Nvidia Jetson TX2 credit-card sized platform for intelligent edge devices like robots, drones, cameras and portable medical devices (Courtesy: Nvidia)

Last year, Nvidia launched Drive PX2—a palm-sized platform to implement auto cruise capabilities in automobiles. This open AI car platform features a unified architecture that allows deep neural networks to be trained on a system in the data centre, and then deployed in the car. This year, Nvidia launched Jetson TX2—a credit-card sized, plug-in edge-processing platform designed for embedded computing. Teal Drones has used the Jetson module to develop a smart drone that can understand and react to what its cameras are seeing. Since this drone does not rely on the cloud, it can be used in remote farms or even by children playing hide-and-seek! EnRoute, another drone maker, has used on-board AI to help its drones navigate and fly faster, avoiding objects in their path.

Cisco has developed a collaboration device that uses AI to recognise people in a room and automatically pick a field-of-view (FOV) with people in it instead of empty chairs. The FOV is spontaneously adjusted as people walk in and out, or move around. The system also zooms in on people who are speaking.

Live Planet’s new 360-degree 3D camera for live streaming of video uses on-board AI to encode 3D videos in real time. Live Planet’s chief strategy officer Khayyam Wakil explains, “The camera produces a stream of 65 gigabytes, which is too much data to transmit to a cloud server. On-board processing has made the live streaming possible.”

Sniffing the trend, Intel acquired Movidius in 2016. Movidius produces specialised low-power processor chips for computer vision and deep learning. Its button-sized Myriad 2 platform has many features that support implementation of deep learning at the network edge. Myriad’s SHAVE processor engines deliver the hundreds of gigaflops needed for the matrix multiplications that are fundamental to deep learning networks. The on-chip RAM keeps huge volumes of intermediate data on the chip itself to avoid bandwidth bottlenecks. The platform comes with native support for mixed precision and hardware flexibility—both 16-bit and 32-bit floating-point data types, as well as u8 and unorm8 types, are supported.
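The memory-versus-accuracy trade-off behind mixed precision is easy to see in a quick sketch: casting values to 16-bit floats halves their storage and bandwidth, at the cost of a small numerical error that inference workloads usually tolerate. This is a generic NumPy illustration, not Myriad-specific code:

```python
import numpy as np

rng = np.random.default_rng(2)
a = rng.normal(size=(128, 128)).astype(np.float32)
b = rng.normal(size=(128, 128)).astype(np.float32)

full = a @ b                                         # 32-bit reference result
half = (a.astype(np.float16) @ b.astype(np.float16)).astype(np.float32)

# fp16 halves the memory and bandwidth per value...
print(a.astype(np.float16).nbytes, a.nbytes)         # 32768 65536
# ...at the cost of a small relative error in the result
rel_err = np.abs(half - full).max() / np.abs(full).max()
print(rel_err)
```

For training, this rounding error compounds across millions of updates, which is why training hardware leans on 32-bit floats while inference engines can get away with 16-bit (or even 8-bit integer) types.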

The company literature explains that existing hardware accelerators can be easily repurposed to provide the flexibility needed to achieve high performance for convolution computation. Myriad also comes with a development kit that includes dedicated software libraries to support sustained performance on matrix multiplication and multidimensional convolution.

Start-up Graphcore proposes to handle deep learning with a so-called intelligent processing unit (IPU)—a graph processor that can manage both training and inference on the same architecture, and eventually across multiple form factors (server and device) too. The chip is expected to be ready for early use by year-end.

According to the company, “This same architecture can be designed to suit both training and inference. In some cases, you can design a piece of hardware that can be used for training, then segment that up or virtualise it in a way to support many different users for inference or even different machine learning model deployments. There will be cases when everything is embedded, for instance, and you need a slightly different implementation, but it’s the same hardware architecture. That’s our thesis—one architecture, the IPU, for training, inference, and different implementations of that machine that can be used in servers, cloud, or at the edge of the network.” That would be the ultimate thing to wish for!

AI seems so real

At one time, the term ‘AI’ was associated only with robots, but now it is everywhere—from security cameras to cars and enterprise applications.

Surtrac is an intelligent approach to traffic management, implemented in Pittsburgh, USA

In Pittsburgh, USA, AI is helping solve traffic woes. Speaking at a White House Frontiers Conference, Carnegie Mellon University professor of robotics Stephen Smith said that traffic congestion costs the U.S. economy $121 billion a year, mostly due to lost productivity, and produces about 25 billion kilograms of carbon dioxide emissions. The AI-based smart traffic management system piloted in the city has reduced travel time by 25 per cent, idling time by over 40 per cent and emissions by 21 per cent. Unlike conventional traffic lights that have pre-programmed timings, the Surtrac system applies AI algorithms to data collected by the radar sensors and cameras of computerised traffic lights to dynamically build a timing plan. The system is decentralised and each signal makes its own timing decision. It also sends the data to traffic intersections downstream so they can plan ahead.
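Surtrac’s real scheduler builds its plans from predicted platoons of arriving vehicles, so the toy function below is only a much simpler stand-in. It illustrates the core idea, though: each signal computes its own timing plan from local sensor data, here by splitting one cycle in proportion to the queues its sensors report.

```python
def plan_green_times(queues: dict, cycle_s: float = 60.0,
                     min_green_s: float = 5.0) -> dict:
    """Toy per-intersection timing plan: split one signal cycle among
    approaches in proportion to the queue each sensor reports.

    Illustrative stand-in only -- NOT Surtrac's actual algorithm, which
    schedules predicted platoons of vehicles rather than static queues."""
    n = len(queues)
    spare = cycle_s - min_green_s * n      # time left after guaranteed minimum greens
    total = sum(queues.values())
    if total == 0:                         # no traffic sensed: split the cycle evenly
        return {a: cycle_s / n for a in queues}
    return {a: min_green_s + spare * q / total for a, q in queues.items()}

# Radar/camera sensors report 12 cars waiting north-south, 3 east-west
plan = plan_green_times({"north-south": 12, "east-west": 3})
print(plan)   # north-south gets the lion's share of the 60 s cycle
```

Because each intersection runs this computation locally and only shares its output with neighbours downstream, there is no central controller to become a bottleneck or single point of failure.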

There are 50 such smart intersections now, with plans for citywide expansion. Following that, Smith’s group wants to improve the system to enable signals to talk to cars! According to an IEEE news report, they have already installed short-range radios at 24 intersections, and such radios are expected to start appearing in some cars this year. Traffic signals could then let drivers know of upcoming traffic conditions or changes in lights, increasing safety and relieving congestion. The vehicle-to-infrastructure communication system could also prioritise certain vehicles, like public transport buses.

AI is helpful on social media too. Facebook, for instance, uses AI to spot and remove offensive content. It is also planning to integrate AI-based suicide prevention tools into Facebook Live and Messenger, in order to recognise and help people with suicidal tendencies.

In April this year, Mark Zuckerberg unveiled a platform that transforms users’ smartphone cameras into an engine for augmented reality (AR). The solution relies on implementing AI at the network edge. The platform lets users layer digital effects atop images and videos captured by the camera. One of the fun demos showed digital sharks swimming around a bowl of cereal.

Facebook has more real-world plans for the future. For example, you can pin a virtual note on your fridge and your roommates will be able to see it when they view the fridge through their cameras.

Bosch-backed robot Kuri (Courtesy: Mayfield Robotics)

Neural networks help identify people and track their movement and activities within the camera’s FOV, in order to apply appropriate digital effects. Facebook’s deep neural networks run on the phone itself, because a round trip across the Internet would be too slow to apply such effects convincingly.

While Facebook has optimised its deep learning technology to run on today’s mobile phones, it feels things are bound to get harder as digital effects grow more complex. Still, Facebook expects future hardware enhancements to boost its machine learning models.

CES 2017 was full of AI-powered consumer products. Apart from the expected dominance of AI-enabled smartphones, wearable devices, home appliances and cars, two interesting developments relate to operating systems and home assistants. According to industry experts, more than 40 million homes will have a home assistant by 2021. In November last year, Google launched Google Home for this segment. Amazon’s Alexa is too well-known to be introduced again. Samsung has come up with Otto, and Bosch is backing Kuri. Facebook too demonstrated a personal assistant, though there is no information about its availability yet. Apple is also supposedly working on a Siri-powered home assistant.

With so many connected and intelligent devices all around us, people are getting very worried about privacy and data safety. So, companies like Google and Norton have also come up with solutions to secure your devices. Google is offering Android Things—an operating system that powers smart devices and the Internet of Things (IoT) in a secure way. Norton Core is a mobile-enabled Wi-Fi router equipped with machine learning and Symantec’s threat intelligence techniques to defend your home network from potential threats.

Lots more on the anvil

Researchers all over the world are still exploring the possibilities of AI. What was fiction a decade ago has become real now, and fiction today is being chiselled into reality at labs across the world—and by start-ups too.

So far, AI has been achieved mainly using complex algorithms. Now, imagine a chip that by itself works like a synapse of the human brain—wouldn’t it make AI more real and human-like than ever before? Researchers at CNRS and Thales have managed to create, directly on a chip, an artificial synapse that is capable of learning. Such chips could be used to create intelligent systems comprising networks of synapses, requiring much less time and energy.

Twenty Two Motors’ smart scooter for Indian roads (Courtesy: Twenty Two Motors)

Another system developed at the Sandia National Laboratories aims to improve the accuracy with which cybersecurity threats (or bad apples) are detected. The brain-inspired Neuromorphic Cyber Microscope designed by the lab can look for complex patterns that indicate specific bad apples, consuming less power than a standard 60-watt light bulb. This small processor was found to be more than a hundred times faster and a thousand times more energy-efficient than racks of conventional cybersecurity systems.

Lots of interesting AI research is happening at MIT too. Last month, MIT researchers presented a paper proposing a fast and inexpensive way to achieve speech recognition. Current speech recognition systems require a computer to analyse innumerable audio files and their transcriptions, to understand which acoustic features correspond to which typed words. However, providing these transcripts to the machine learning system is a costly and time-consuming affair, which limits speech recognition to a small number of languages. The new approach proposed by the researchers does not rely on transcripts. Instead, the system analyses the correlation between images and spoken descriptions of those images, as captured in a large collection of audio recordings. It eventually learns which acoustic features of the recordings correlate with which image characteristics. According to the scientists, this is more natural—more like the way humans learn. Plus, it is less expensive and less time-consuming, opening up the possibility of extending speech recognition to a larger number of languages.

Closer home, Twenty Two Motors is building a smart scooter for Indian roads. The company raised ₹100 million in April this year and plans to launch the scooter at next year’s Auto Expo. Not a self-driving scooter, but definitely smart enough to begin with!

There is a lot of software-based AI innovation happening too, like that by Mumbai-based Arya.ai, which offers deep learning algorithms for developers to build intelligent systems that can learn, adapt and do things with minimal inputs from humans. Its DL Studio platform can be used to incorporate intelligence into e-commerce platforms, diagnostic assistants, image processors for drones, security, device management and maintenance, and more. The platform’s deep neural networks are also scalable.

Another interesting trend is the availability of different AI capabilities as services that can be quickly deployed in applications. Clarifai’s powerful visual recognition application programming interface (API) is one example. It uses machine learning to automatically tag, organise and search visual content. Similarly, Datalog.ai offers conversational intelligence as a service for virtual assistants, bots, devices and corporate applications. For developers, it is as easy as plug-and-play. No complex infrastructure or development is needed to put AI to work.
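The plug-and-play pattern such services offer typically boils down to a single authenticated HTTP call. The sketch below uses a hypothetical endpoint, key and response shape (not Clarifai’s or Datalog.ai’s actual API) to show the general idea:

```python
import json
from urllib import request

API_URL = "https://api.example.com/v1/tag"   # hypothetical endpoint, for illustration
API_KEY = "your-api-key"                     # placeholder credential

def build_tag_request(image_url: str) -> request.Request:
    """Build the HTTP request for a hosted image-tagging service.
    The endpoint and payload shape are illustrative, not any vendor's real API."""
    payload = json.dumps({"url": image_url}).encode()
    return request.Request(
        API_URL,
        data=payload,
        headers={"Authorization": f"Bearer {API_KEY}",
                 "Content-Type": "application/json"},
        method="POST",
    )

def tag_image(image_url: str) -> list:
    """POST the image URL and return the predicted tags from the JSON reply."""
    with request.urlopen(build_tag_request(image_url)) as resp:
        return json.load(resp)["tags"]

req = build_tag_request("https://example.com/photo.jpg")
print(req.get_method(), req.full_url)     # POST https://api.example.com/v1/tag
```

All of the machine learning lives behind the endpoint; the developer ships a few lines of glue code and never touches a model, which is exactly what makes this delivery model so attractive.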

Building trust

AI is indeed at an inflection point. What has made it so hot today? While some would credit the availability of powerful computers or advances in statistical machine learning and deep learning techniques, others say that AI has attained this level of focus and investment mainly because of the sheer amount of data that the IoT is churning up. With sensors all around us, networks are dizzy with data flying all around. Somebody sitting on data obviously wants to make sense of it. So, there is a greater demand for AI, and demand always drives supply—that is the underlying principle of commerce.

The industry is bustling to meet this demand and the air is rife with partnerships and acquisitions. Still, there is one big challenge in the way of deploying AI in real-world scenarios: You need to win the trust of people before they accept AI as a way of life. There is a lot of doubt about security and privacy. The industry is coming together to solve this bottleneck. Last year, Amazon, DeepMind/Google, Facebook, IBM and Microsoft announced the formation of a non-profit organisation called Partnership on AI to improve public understanding of AI technologies and formulate best practices for development and deployment of AI. The IEEE Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems is also an effort towards aligning the development of AI and autonomous systems with the values of its users and society.

As long as such basic ethical requirements are met and we are assured that intelligent devices will not overthrow us, artificial intelligence is definitely hard to resist!

