Right from the smartphone in your hand to the dashboard of your car, the embedded media processor is working continuously to deliver what you see and what you hear. It needs to concurrently perform a number of tasks: collecting inputs, processing audio and video, synchronising the two, handling graphics, performing a lot of format conversions and cognitive analysis, and finally outputting to the panel or display.
While all you probably need is a Universal Serial Bus (USB) 3.0 port to reap its benefits, the content that goes into these media processors is, to say the least, vast. “As the need for bandwidth, coupled with data-sharing and ease-of-access requirements, increases, interface architectures have evolved substantially over the years to create solutions for the growing Internet of Things (IoT) segment,” says Nate Srinath, founder-director, Inxee Systems Pvt Ltd. This article tries to understand what goes into the making of these processors and how that translates to what we hear or see on the screen.
More functionality at lower cost
“In today’s media processors, we have about 250 different intellectual property blocks, and this number can go up to 400 or 500 blocks soon,” says Manuel Rei, industry director, Dassault Systemes. “Media processors should be able to manage different kinds of signals from the same platform, as the cost would skyrocket otherwise,” he adds.
Ashok Chandak, senior director, global sales and marketing, NXP Semiconductors India Pvt Ltd, predicts that the popular ones will be those that provide the most capabilities inside a single device. “To keep up with this, the chips that go into media processors have different dies stacked one on top of the other. Each has a specific function, thus enabling different capabilities and functionalities within a single package,” he says.
Three-dimensional integrated chips that incorporate system on chip and system in package will be the key enablers for performance and small form factor products, feels Srinath.
Bill Giovino, embedded systems specialist at Mouser Electronics, elaborates, “The movement has been towards higher integration, putting features like video, audio and touch-screen control on one chip. Video processors are supporting higher pixel resolutions, with speed improvements that eliminate the need for the cost and board space taken up by frame buffer dynamic random-access memory (DRAM).”
Support for higher resolution. Vijay Bharat S., associate vice president – hardware design, Mistral Solutions Pvt Ltd, chips in, “Built-in radios for communication, high frame-rate video capture and display, greater-than-4K resolution videos and a number of processing cores are on everyone’s mind. High-definition video resolutions start from 1080p, with interfaces going up to 3Gbps high-definition serial digital interface (HD-SDI).”
He suggests that various techniques are available to improve audio-visual quality. Once audio-visual signals are converted from analogue to digital form, quality improves and noise on the signal can be removed more easily. “For audio signals, in particular, more types of digital filtering and sound synthesis are being implemented on-chip,” according to Giovino.
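The idea behind such on-chip filtering can be sketched in a few lines. The snippet below is a minimal illustration, not any vendor’s implementation: a moving-average low-pass filter that smooths spiky noise out of digitised audio samples. The window size and sample values are illustrative assumptions.

```python
def moving_average(samples, window=4):
    """Smooth a digitised signal: each output sample is the mean of
    the last `window` input samples (a simple FIR low-pass filter)."""
    out = []
    for i in range(len(samples)):
        start = max(0, i - window + 1)
        out.append(sum(samples[start:i + 1]) / (i - start + 1))
    return out

# A steady 100-unit signal with a noise spike at sample 4
noisy = [100, 104, 96, 100, 120, 100, 98, 102]
smooth = moving_average(noisy)
print(smooth)  # the 120 spike is averaged down to around 105
```

Real media processors do this in dedicated hardware with far more sophisticated filter designs, but the principle is the same: once the signal is digital, noise removal becomes arithmetic.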
Real-time processing. “Time-critical events call for efficiently managing time synchronisation, latency and throughput,” says Bharat.
“To overcome this issue, we need high-speed processing and communication infrastructure. We also need to have very good audio-visual file-compression algorithms to reduce file size,” he adds.
Their storage space, the memory. “The desired features in a media processor are fast cores that can handle math-intensive graphical processing, along with plenty of random access memory (RAM),” explains Giovino.
Explaining the concept of memory, Bharat says, “There are two types of memories used in various processor-based solutions. The current trend is permanent memory like embedded multimedia card (eMMC), serial advanced technology attachment (SATA) and solid-state memory, which is small in physical size but offers large storage capacities. Multi-chip package and package-on-package memories are also popular in mobile and consumer electronics applications.”
Adding to this, Giovino goes into more detail, “Larger amounts of on-chip memory, or fast external memory interfaces, are necessary to efficiently handle memory-intensive tasks like buffering images, streaming high-definition (HD) video, or dealing with compression.
Compression also helps with reducing the bandwidth required for wired or wireless data transmission; media processors must rapidly decompress or even decrypt multimedia data, especially if there is any digital rights management (DRM) encryption present.”
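The compress-then-transmit pipeline Giovino describes can be demonstrated with a stand-in codec. The sketch below uses zlib (DEFLATE), which is a general-purpose compressor rather than a media codec, purely to show the round trip a media processor runs: shrink the data before it hits the wire, and reconstruct it losslessly on receipt. The flat 4KB “frame” is an illustrative assumption; real video frames compress far less neatly.

```python
import zlib

# A flat 4KB "frame" of identical pixel values compresses extremely well
frame = bytes([128] * 4096)

packed = zlib.compress(frame, level=6)   # what goes over the wire
unpacked = zlib.decompress(frame and packed)

assert unpacked == frame                 # lossless round trip
print(f"{len(frame)} bytes -> {len(packed)} bytes transmitted")
```

Lossy codecs such as H.264 push the ratio much further by discarding detail the eye will not miss, which is why decompression (and any DRM decryption) must happen fast enough to keep up with the display.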
The smaller the node…
The technology node is at 14nm, even 10nm, now. Rei feels that, as we go further down, the challenge will be to design transistors of such small dimensions. “More transistors within a given area lead to more capabilities, and it is a matter of choosing the right design process,” he adds.
Newer materials used in manufacturing are also influencing changes for more suitable semiconductor designs, feels Srinath.
Enhancing sensing
A media processor processes the inputs it gets, and hence sensors become vital. A few qualities expected in such sensors, from Bharat’s experience, are support for night vision, optical/digital zoom, area of interest, ambient light sensing, auto-detection of objects, filter option for video based noise and real-time data collection.
“Built-in analogue-to-digital converters, digital signal processors, digital interfaces instead of analogue interfaces, and digital outputs are just as important,” chips in Srinath.
Software is king
Most of the latest processing techniques are possible thanks to enhancements in software content and programs. “One of the most popular video-processing techniques, namely, video analytics, is implemented via software,” says Bharat. “Most media-processing engines are supported with video accelerators, encode and decode engines,” he adds.
Code, and decode the code. Incremental innovation is what drives codecs, says Chandak.
Bharat adds, “Video encode and decode are implemented using H.264, Moving Picture Experts Group 2 and 4 standards. Audio encode and decode are implemented using MP3, Windows Media Audio, Advanced Audio Coding standard techniques. H.265 and lossy video compression are some of the new optimisation techniques we have seen in the past year.”
Bharat goes on to add, “There is also a demand for very low latency with multiple parallel instances for encoding and decoding. Current media processors are able to receive or transmit multiple video inputs/outputs in different interface formats such as camera serial interface, display serial interface (DSI) and 8/16/24/32-bit parallel video.”
More power, less consumption
All devices are expected to work for at least a day without needing a charge. No wonder, then, that media processors are being designed to be as power-efficient as possible. There is a steady shift to battery-operated devices, thanks to reduced power consumption and ease of use. But with these, the demand on a media processor’s performance only increases, opines Chandak.
Also, with miniature displays coming up, power and thermal dissipation is a challenge in itself, feels Bharat.
“There is a clear intent to simplify the design to reduce part-count and lower power consumption,” adds Giovino.
Connectivity, security and scalability
Connectivity, intelligence and smartness are the three key elements, according to Chandak. Everything is getting smart and connected, and everything needs to be secure. There is a clear intent to introduce data security at the hardware level, and more stringent checks for every line of software that goes into the processor.
Srinath says, “Every device needs to have a layer of authentication built-in to prevent rogue access. With proper authentication, a server can easily perform remote software upgrades over any network, on its target devices.”
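One common way to build the authentication layer Srinath describes is a keyed hash over the firmware image: the server signs the image with a secret shared with the device, and the device recomputes the signature before accepting the upgrade. The sketch below is a minimal illustration of that idea; the key and image bytes are invented for the example, and real deployments typically use per-device keys or public-key signatures instead of one shared secret.

```python
import hashlib
import hmac

SHARED_KEY = b"device-provisioned-secret"   # illustrative only

def sign(image: bytes) -> str:
    """Server side: produce an HMAC-SHA256 tag for a firmware image."""
    return hmac.new(SHARED_KEY, image, hashlib.sha256).hexdigest()

def verify(image: bytes, signature: str) -> bool:
    """Device side: recompute the tag; compare_digest resists
    timing side channels."""
    return hmac.compare_digest(sign(image), signature)

firmware = b"new-media-firmware-v2"
tag = sign(firmware)
assert verify(firmware, tag)             # genuine image accepted
assert not verify(firmware + b"x", tag)  # tampered image rejected
```

With a check like this in place, a rogue image fails verification before it is ever flashed, which is what makes unattended over-the-network upgrades safe to automate.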
“There are three to four levels of security layers on each process, like read-only memory (ROM) code security, data passing through the encryption engine, software-layer security and pass-key based security,” pitches in Bharat.
Scalability is on every designer’s mind. With technology progressing at a rapid pace, it is very important to plan for the future so as not to be left behind. A media processor must have the hardware logic to offload processing to the outside world. Today’s hardware must be able to support software upgrades for at least five years. Chandak adds, “Scalability of processors would also lead to a greener environment and a lower footprint.”
The IoT influence. In Rei’s opinion, influence is indirect with the impact being in the amount of data to be dealt with.
Chandak says, “A media processor must have the required hardware logic to handle the load, while these can offload processing to the outside world.”
For this, they would need to support IoT communication media, which may not always be viable. In such cases, designers may have to work around this with external controllers, feels Bharat.
Cloud-based processing, it is important to remember, is good for latency-insensitive tasks but not for real-time purposes, cautions Srinath.
What you can expect five years from now
The technology we have today is fantastic, feels Rei. The functions and support these offer would have been unimaginable five years ago.
“A design technology to look forward to in new media processors is a built-in hardware engine with basic video analytics, like front-object detection, danger-zone detection, road-crossing and road-divider detection, plus a built-in audio encode/decode engine,” chips in Bharat.
Rei continues, “For the moment, what we see in prototype is the ability to see data in different ways. It is a little too early to reasonably talk about what we can expect in the future. Certainly, the way we visualise things is going to change. But, what we do know is that we already have the technology and capability.” It is about putting things together and innovating.
Priya Ravindran was working as a technical journalist at EFY till recently