A field-programmable gate array (FPGA) offers programmability of hardware, software, input-output (I/O) and often analogue components too. With better design and programming tools, and improved performance and energy efficiency, FPGA usage is proliferating.
Though communications remains the biggest market, FPGAs are also increasingly being used for controlling displays in consumer devices, digital signal processing, software-defined radios, radar systems, situation-aware devices, IoT products and even supercomputing. As a result, companies and universities are pooling their brains and brawn to come up with improved FPGAs, related IP, software and tools. Here is a sample of what is brewing.
Fretting over FinFET. From design companies to fabs, everybody is sweating over the FinFET transistor structure, believed to be the biggest development in semiconductor technology in the past four decades! FinFETs are basically 3D structures that rise above the planar substrate, giving more transistor volume without increasing the planar area. According to a technology report by Synopsys, FinFETs are estimated to be up to 37 per cent faster, use less than half the dynamic power and cut static leakage current by as much as 90 per cent. They also help overcome the traditional performance-versus-power tradeoff. Researchers have been working to reduce current leakage through the body when the device is in the off state, in order to allow lower threshold voltages.
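The dynamic-power saving follows from the standard switching-power relation P = aCV²f: because supply voltage enters as a square, a modest voltage reduction cuts power sharply. The sketch below illustrates this with invented component values, not measurements of any particular FinFET process.

```python
# Why a lower supply voltage cuts dynamic power so sharply:
# switching power scales as P = a * C * V^2 * f.
# All numbers below are illustrative assumptions.

def dynamic_power(activity, capacitance_f, vdd_v, freq_hz):
    """Dynamic switching power in watts."""
    return activity * capacitance_f * vdd_v ** 2 * freq_hz

planar = dynamic_power(0.1, 1e-9, 1.0, 1e9)  # hypothetical 1.0 V planar baseline
finfet = dynamic_power(0.1, 1e-9, 0.7, 1e9)  # hypothetical 0.7 V FinFET point

print(f"FinFET dynamic power: {finfet / planar:.0%} of planar")  # ~49%
```

Dropping Vdd from 1.0V to 0.7V alone roughly halves dynamic power, which is why the lower operating voltages that FinFETs permit matter so much.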
Some research groups have also shown that FinFETs can be scaled further by thinning the channel. The Korea Advanced Institute of Science and Technology, for example, has demonstrated a 3nm FinFET. It is clear that FinFET is the way ahead for FPGAs too. Xilinx, for example, has demonstrated its UltraScale range of all-programmable chips, which takes FPGA technology from 20nm planar monolithic devices to 16nm FinFET 3D ICs.
Fuelling FPGA innovation. Accelerated processing units (APUs) are often behind the success of FPGA designs. In this respect, AMD’s upcoming Kaveri might be a notable development in the FPGA space. Kaveri pools four CPU cores and eight GPU cores and enables them to work so seamlessly that it has the power of 12 compute cores. It is AMD’s first APU based on the Heterogeneous System Architecture (HSA). The company claims that adopting HSA, the new processor design standard, will enable engineers to make better use of APUs for general computing.
Work on on-chip interfaces. It is expected that synthesisable implementations of DDR3, RDRAM, Interlaken and other high-speed interfaces that operate intra-chip and inter-chip will be important for the future of FPGAs. The Interlaken interface developed by Cisco and other original equipment manufacturers (OEMs) is popular as a server and switch interconnect for high-speed Ethernet. However, recently Cavium used Altera’s Interlaken IP core to link Altera FPGAs with its own Octeon processors, demonstrating a new way in which FPGAs with high-speed transceivers can use multi-gigabit interfaces such as Interlaken.
EDA industry leaders are all focussing on developing optimised synthesisable interfaces for systems-on-chips (SoCs). At the recently held MemCon too, it was reiterated that, although FPGA vendors would continue to emphasise efficient IP cores for 40- and 100-Gbit Ethernet, Fibre Channel Gen 5 and PCI Express Gen 3, they will definitely be working with EDA specialists such as Cadence, Synopsys and Mentor to offer power-optimised on-chip buses, both for memory interconnect and for links between multi-processor cores.
Embedded networks-on-chips for FPGAs. In February this year, Mohamed Abdelfattah and others at the University of Toronto proposed that integrating networks-on-chips (NoCs) on FPGAs can improve device scalability and facilitate design by abstracting communication and simplifying timing closure, not only between modules in the FPGA fabric but also with large hard blocks such as high-speed I/O interfaces. Their design features mixed and hard NoCs, which add less than one per cent to the area of the FPGA yet run five to six times faster than their soft equivalents.
Their research showed that hard NoCs consume 4.5-10.4mJ of energy per gigabyte of data transferred, comparable to the energy efficiency of simple soft point-to-point links in FPGAs, while offering far richer system-wide connectivity. The design is also more energy-efficient and space-saving than custom buses configured into the fabric by commercial system-integration tools.
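To put the reported 4.5-10.4mJ/GB range in perspective, a quick back-of-the-envelope calculation converts it into interconnect power for a given traffic rate. The figures come from the Toronto work quoted above; the 64GB/s traffic load is a hypothetical example.

```python
# Convert the reported NoC energy-per-gigabyte range into
# interconnect power for a hypothetical traffic load.

def transfer_energy_mj(gigabytes, mj_per_gb):
    """Energy in millijoules to move `gigabytes` of data."""
    return gigabytes * mj_per_gb

# A hypothetical accelerator streaming 64 GB every second:
low_mj = transfer_energy_mj(64, 4.5)    # best-case figure
high_mj = transfer_energy_mj(64, 10.4)  # worst-case figure

# mJ consumed per second is simply mW of power.
print(f"{low_mj / 1000:.2f} W to {high_mj / 1000:.2f} W of interconnect power")
```

Even at the top of the range, sustained 64GB/s traffic costs well under a watt of NoC power, which shows why the hard NoC compares well with soft point-to-point wiring.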
3PLDs from Tabula. FPGA player Tabula has come up with its future-ready Spacetime technology, which apparently overcomes the limitations of FPGAs’ 2D topologies by using a third dimension to create a new class of 3D programmable logic devices (3PLDs). Spacetime uses hardware capable of dynamically reconfiguring at multi-gigahertz rates, plus the Stylus compiler, which transparently manages this ultra-rapid reconfiguration of logic, memory and interconnect resources throughout the chip. The third dimension of space is actually created using time, and hence the name Spacetime!
According to Tabula, “A Spacetime device reconfigures on the fly at multi-gigahertz rates, executing each portion of a design in an automatically defined sequence of steps. Manufactured using a standard CMOS process, Spacetime uses this ultra-rapid reconfiguration to make time a third dimension. This results in a 3D device with multiple layers or folds in which computation and signal transmission can occur. Each fold performs a portion of the desired function and stores the result in place. When some or all of a fold is reconfigured, it uses the locally stored data to perform the next portion of the function.
By rapidly reconfiguring to execute different portions of each function, a 3D Spacetime device can implement a complex design using only a small fraction of the resources that would be required by an FPGA, with its inherently 2D architecture.” The big plus is the manageable price-point of Spacetime devices.
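The folding idea described in the quote can be pictured in software as time-multiplexing: one small physical fabric is reconfigured for each stage of a larger design, with intermediate results stored in place between steps. The toy sketch below is an illustrative analogy only; the names and stage functions are invented, and a real Spacetime device does this in hardware at multi-gigahertz rates.

```python
# Toy analogy for "folded" (time-multiplexed) execution: a design with
# several logic stages runs on hardware that only ever holds one stage
# at a time, reconfiguring between steps and keeping data in place.

def run_folded(stages, data):
    """Execute `stages` sequentially on one reused fabric."""
    local = data              # locally stored intermediate result
    for stage in stages:
        local = stage(local)  # fabric now "reconfigured" as this stage
    return local

# Four hypothetical logic stages, executed as four folds:
stages = [lambda x: x + 1, lambda x: x * 2, lambda x: x - 3, lambda x: x ** 2]
print(run_folded(stages, 5))  # ((5 + 1) * 2 - 3) ** 2 = 81
```

The same result as a flat, fully spatial implementation, using one quarter of the "area", which is the trade Spacetime makes: fewer physical resources in exchange for very fast reconfiguration.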
Beyond programmability: self-learning processors. Programmable hardware is the future, but cognitive computing is even more exciting. You will be surprised to know that, sometime this year, computers that learn from their own mistakes will be commercially available!
Stanford University, IBM, Google and several others are working on this. Such self-learning computers will enable extreme automation, fault tolerance and adaptability, making computer crashes a thing of the past. They will use a new class of processors, which comprise electronic components connected by wires that mimic biological synapses. Because of the neuron-like elements in these processors, they are known as ‘neuromorphic processors.’ These processors need not be programmed; instead, the connections in the circuits are formed based on correlations in the data that the processor has encountered earlier. As data flows, the connections may be further altered, leading to changes in behaviour, much as in the human brain.
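The correlation-driven wiring described above is usually modelled with a Hebbian rule: a connection is strengthened whenever the units on both ends are active together, with no explicit programming. The sketch below shows that rule in its simplest form; the learning rate and activity values are illustrative assumptions, not any vendor's design.

```python
# Minimal Hebbian-style update: the synapse strengthens in proportion
# to correlated activity of the units it connects. All values are
# illustrative, not taken from any real neuromorphic chip.

def hebbian_update(weight, pre, post, rate=0.1):
    """Return the new weight after one (pre, post) activity pair."""
    return weight + rate * pre * post

w = 0.0
# Three events: two with correlated activity, one without.
activity = [(1.0, 1.0), (1.0, 1.0), (1.0, 0.0)]
for pre, post in activity:
    w = hebbian_update(w, pre, post)

print(w)  # only the two correlated events strengthen the connection
```

Notice that no rule ever says "set this weight to 0.2": the structure emerges from the data stream itself, which is the sense in which such processors "need not be programmed."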
According to a comment by IBM scientist Dharmendra Modha, this methodology brings computation to data instead of the other way round. “Sensors become the computer, and it opens up a new way to use computer chips that can be everywhere,” he said in an earlier report. When they first arrive, these computers will work as co-processors in devices such as smartphones rather than as standalone processors. Qualcomm, which has developed such a processor together with IBM, plans to release it commercially sometime this year. Stanford’s Brains in Silicon project is another major contender in this space.
The author is a technically-qualified freelance writer, editor and hands-on mom based in Chennai