Google has long been a household name, and nothing seems to slow its growth. Already dominant in search, Google has now stepped into developing its own processing hardware as well.
The Tensor Processing Unit (TPU) is a custom ASIC (application-specific integrated circuit) developed by Google specifically for machine learning, tailored for TensorFlow, its open-source software library for numerical computation using data-flow graphs.
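To make "numerical computation using data-flow graphs" concrete, here is a toy sketch of the idea in plain Python. This is an illustration only, not TensorFlow's actual API: a graph of operation nodes is built first, and evaluated separately, which is what lets a runtime schedule the work onto hardware such as a TPU.

```python
# A toy data-flow graph: nodes are operations, edges carry values.
# TensorFlow builds graphs like this at scale; this sketch only shows the idea.

class Node:
    def __init__(self, op, *inputs):
        self.op, self.inputs = op, inputs

    def eval(self):
        # Evaluate input nodes first, then apply this node's operation.
        return self.op(*(n.eval() for n in self.inputs))

def const(v):
    # A leaf node that simply produces a constant value.
    return Node(lambda: v)

# Graph for (a + b) * c — note that building the graph is separate
# from executing it.
a, b, c = const(2.0), const(3.0), const(4.0)
graph = Node(lambda x, y: x * y, Node(lambda x, y: x + y, a, b), c)
print(graph.eval())  # 20.0
```

The key design point is the separation of graph construction from execution: the same graph description could be run on a CPU, GPU, or dedicated accelerator.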
This could be the next step toward Google using its vast array of servers to build dedicated AI. Machine learning is already helping developers design smart systems, and "TPUs deliver an order of magnitude higher performance per watt than all commercially available GPUs and FPGA," said Google CEO Sundar Pichai during the company's I/O developer conference on Wednesday.
In other words, it is an "order of magnitude" faster at AI workloads than conventional processors at similar energy levels.
Google revealed that these TPUs enabled the AlphaGo computer to "look ahead" at moves and "think" faster, helping it beat Lee Sedol, the world champion at Go.
According to the Google blog, this tiny processing unit can fit into a hard-drive slot within a data-centre rack, and what makes it interesting is that we didn't even know it was already being used to power RankBrain and Street View.
The blog reads, "TPU is tailored to machine learning applications, allowing the chip to be tolerant of reduced computational precision, which means it requires fewer transistors per operation. Because of this, we can squeeze more operations per second into the silicon, use more sophisticated and powerful machine learning models and apply these models quickly, so users get more intelligent results rapidly." The Google team has been running TPUs inside its data centres for more than a year and found that they deliver better performance per watt for machine learning.
This new development is expected to “fast-forward technology about seven years into the future (three generations of Moore’s Law).”