
Artificial Intelligence Getting Real, Local

Janani Gopalakrishnan Vikram is a technically-qualified freelance writer, editor and hands-on mom based in Chennai


Artificial intelligence (AI) is undoubtedly one of the hottest verticals to watch in 2017. Long an integral part of gaming and robotics, it has now also penetrated deep into aspects of the real world that influence everybody. As the core driving force of analytics, AI is at the heart of every vertical, from industrial operations and banking to security and the Internet of Things. It has also become part and parcel of consumer electronics, as was evident at CES 2017, and an inseparable part of social media, as we can see from the new features being launched by Facebook, Google and other Internet majors. The increased usage of AI technologies fuels further growth in the space, not just in the form of research but also in new products such as chips designed specifically for AI. The arena is also teeming with start-ups that are bringing fresh new ideas to life.

Some see AI as a threat to their jobs, while others see it as an opportunity to do better. Either way, it is not something you can ignore. So here is a glimpse of what is happening in this space.

AI shakes up the chip market

The word ‘smart’ has become an almost compulsory prefix for product names, from pens and phones to musical instruments. The moment the word ‘smart’ is used, it implies that some amount of (obviously artificial) intelligence is involved. This intelligence is usually implemented on large servers running AI models based on neural networks.


In general, the operation of neural networks can be understood in two stages. The first stage involves training the AI system to perform a given task. The second stage is the inference stage, which involves execution of AI models against live data.
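As a rough illustration of these two stages, here is a minimal Python sketch (using NumPy and a made-up toy data set, not anything from a real product): a single artificial neuron is first trained on examples of the logical AND function, and the trained model is then used for inference on a new input.

```python
# Minimal sketch of the two stages: training, then inference.
import numpy as np

# Toy training data: inputs and expected outputs for a logical AND
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 0, 0, 1], dtype=float)

rng = np.random.default_rng(0)
w = rng.normal(size=2)   # weights of the single neuron
b = 0.0                  # bias

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Stage 1: training -- repeatedly adjust weights to reduce prediction error
for _ in range(5000):
    pred = sigmoid(X @ w + b)
    error = pred - y
    w -= 0.1 * (X.T @ error) / len(y)   # gradient step on weights
    b -= 0.1 * error.mean()             # gradient step on bias

# Stage 2: inference -- apply the trained model to new ("live") data
sample = np.array([1.0, 1.0])
print(sigmoid(sample @ w + b))   # close to 1.0, i.e. AND is true
```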

The training stage involves crunching immensely large data sets to help the system identify and learn from patterns in the data. Data collected by devices travels across the Internet to data centres packed with servers, where this crunching is done. On these servers, central processing units (CPUs) are usually supplemented with other types of processors such as graphics processing units (GPUs).
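The sketch below is a hedged illustration of that division of labour, written with PyTorch; the tiny model and the synthetic batch of data are made up for illustration, but the training loop runs on a GPU when one is available and falls back to the CPU otherwise.

```python
# Hedged sketch: a training step offloaded to a GPU if one is present.
import torch
import torch.nn as nn

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Made-up model standing in for a real neural network
model = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 1)).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.MSELoss()

# Synthetic stand-in for a batch of data arriving at the data centre
inputs = torch.randn(128, 64, device=device)
targets = torch.randn(128, 1, device=device)

for _ in range(100):
    optimizer.zero_grad()
    loss = loss_fn(model(inputs), targets)
    loss.backward()      # gradients computed on the accelerator
    optimizer.step()     # weights updated on the accelerator
```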

Nvidia is one of the best-known GPU makers and has been powering data centres for years. Last year, it launched the Tesla P100, a GPU with an architecture focused on accelerating AI and deep learning. Nvidia reportedly spent more than US$2 billion on R&D to produce the new chip, which has a total of 15 billion transistors, around three times as many as the company’s previous chips. According to a press release, an artificial neural network powered by this chip can learn from incoming data twelve times faster than with Nvidia’s previous best chip.

Convinced that machine learning calls for a special architecture, Google developed its own processor, the tensor processing unit (TPU). Microsoft, on the other hand, went with field-programmable gate arrays (FPGAs) for its AI needs. Not to be left out of the race, Intel acquired Altera, a company that sells FPGAs to Microsoft. Together with Nervana Systems, a company it acquired last year, Intel is also building Lake Crest (also known as the Nervana Engine), a full-stack solution for deep learning, optimised at every stage to make training ten times faster.

The second stage, inference, is undergoing a major shakeup. In the past, most of this work was also done in the cloud using GPUs, TPUs and FPGAs. However, smart device makers are beginning to feel that this is not ‘real-time’ enough. So the trend is turning in favour of implementing the inference or execution stage at the edge (that is, in the device itself) instead of relying on the cloud. Since existing devices do not have the processing power, memory and bandwidth needed for this, the semiconductor industry is seeing heavy demand for high-performance, low-power inference engines for deep neural networks that can be built into devices.
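As a hedged illustration of what inference at the edge can look like in practice (not a workflow described in this article), the sketch below converts a small, made-up Keras model into a compact TensorFlow Lite model and runs it with the lightweight interpreter that would normally execute on the device itself.

```python
# Hedged sketch: shrink a trained model and run inference locally on-device.
import numpy as np
import tensorflow as tf

# Assume this small (made-up) model was already trained in the data centre
model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu", input_shape=(8,)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])

# Convert it for on-device inference, with default size/latency optimisations
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()

# On the device, the lightweight interpreter executes the model locally
interpreter = tf.lite.Interpreter(model_content=tflite_model)
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

interpreter.set_tensor(inp["index"], np.random.rand(1, 8).astype(np.float32))
interpreter.invoke()
print(interpreter.get_tensor(out["index"]))   # prediction computed on-device
```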

AI goes local

At a press event held in San Francisco in March this year, Deepu Talla, vice president and general manager of Nvidia’s Tegra business unit, cited four reasons in favour of bringing AI technology to the edge (on board the device): bandwidth, latency, privacy and availability. As the number of devices communicating with the cloud rises by the day, there will be a dire shortage of bandwidth in the near future. Latency is also an issue in applications like self-driving cars, where even a split second’s delay can have serious implications. Privacy, of course, has always been a serious concern; when data is processed within the device itself, there is no lingering doubt about misuse in the cloud. Availability of the cloud is also questionable in rural areas where connectivity is unreliable. Implementing AI at the edge addresses all of these issues.
