You would have heard this several times already – “We have just one earth; we had better take good care of her!” This slogan is becoming louder as the use of information technology and its paraphernalia of gadgets and machines increases. Of the many kinds of environmental damage that our tech toys cause, the power consumed by these devices is a constant worry for the environmentalist and the common man alike. While the former worries about the resources depleted to produce this power, the carbon footprint of the devices, the heat they release, etc, the latter worries about sky-rocketing energy bills. Adoption of alternative power sources, such as solar power, can be a welcome relief to all concerned.
Fortunately, researchers and device manufacturers are making reasonable headway in the field of ‘micro energy harvesting’. Macro-level energy harvesting in the form of windmills, hydro-electric turbines, etc, has been around for a long time. Now, the trend is towards micro-level harvesting of energy from body heat, vibrations, sunlight, wind, and so on, to scavenge micro-watts of power to run ultra-low-power devices.
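The arithmetic behind such micro-harvesting is simple: the node must, on average, spend less than it scavenges. The sketch below illustrates this energy budget; all figures and the function name are illustrative assumptions, not measurements from any real harvester or sensor node.

```python
# Back-of-the-envelope energy budget for a harvested sensor node.
# All numbers below are illustrative assumptions, not measured values.

HARVEST_UW = 100.0  # assumed average harvested power, microwatts
ACTIVE_MW = 15.0    # assumed draw while sensing/transmitting, milliwatts
ACTIVE_MS = 5.0     # assumed length of one active burst, milliseconds
SLEEP_UW = 2.0      # assumed sleep-mode draw, microwatts

def max_wakeups_per_second(harvest_uw, active_mw, active_ms, sleep_uw):
    """How often the node can wake before spending more than it harvests.

    Ignores the (brief) time spent out of sleep, so it is only a rough bound.
    """
    surplus_uw = harvest_uw - sleep_uw                    # power left after sleep overhead
    burst_uj = active_mw * 1000.0 * (active_ms / 1000.0)  # energy per burst, microjoules
    return surplus_uw / burst_uj                          # wake-ups per second (may be < 1)

rate = max_wakeups_per_second(HARVEST_UW, ACTIVE_MW, ACTIVE_MS, SLEEP_UW)
print(f"Sustainable rate: {rate:.2f} wake-ups/s (one every {1/rate:.1f} s)")
```

With these assumed figures the node can afford roughly one wake-up per second – which is why duty-cycling, not raw power, decides whether a harvested design is viable.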
This positive trend can be attributed to two reasons. One, there has been considerable progress in the field of alternate energy – the constantly increasing efficiency of solar cells is a typical example. Two, thanks to superior design and engineering, current-generation electronics consume much less power and are amenable to being powered by alternate sources.
Of late, several interesting gadgets are being featured in the media – solar cells to supplement mobile phone batteries, vibration-powered sensors (that are ideal for use in automobiles and other environments where there is a lot of jolting), solar-powered wireless sensor networks, wireless and batteryless switches for use in building automation, heat- or vibration-powered medical implants such as hearing aids and pacemakers that harvest bodily energy using micro-generators manufactured as micro-electro-mechanical-systems (MEMS), and so on.
In fact, an in-body micro-generator that converts energy from the heartbeat into power for implanted medical devices recently won the Emerging Technology Award at the Institution of Engineering and Technology’s (IET) Innovation Awards 2009, held in London.
Improved interconnect technologies for 3D ICs and 3D packages
The electronics industry continues to strive towards a common goal – to pack more functionality into smaller form factors. For this, the industry depends on constant reductions in the size of chip packages. Maintaining the tempo predicted by the famous Moore’s Law, integrated circuit (IC) makers are constantly improving their fabrication processes and designs to fit more components into smaller footprints. However, the electronics industry is not satisfied even with that – it wants more!
This has resulted in three-dimensional (3D) ICs and packages. A 3D IC is basically a stack of multiple silicon wafers and dies connected through vertical electrical connections, such as through-silicon vias (TSVs), in a way that they behave like a single device. Wikipedia defines a TSV as a “vertical electrical connection (via) passing completely through a silicon wafer or die.”
A step above, we have 3D packages, that is, interconnected stacks of two or more chips. The stacked chips may be wired together along their edges, but vertical connection using TSVs is now emerging as the more popular option since it does not add to the dimensions of the stack. There are several TSV designs and production technologies in use.
Of course, engineers are never satisfied – even as TSV is gaining popularity, next-generation technologies are already being demonstrated. One such wireless interconnect technology was described by Yasufumi Sugimori at the International Solid-State Circuits Conference 2009. The technology used coupled inductors to send signals between stacked dies, across a distance of 120 µm. This coupling avoids the need for TSVs, saving the cost of that wafer-processing step. According to Sugimori’s estimate, the power spent communicating through the stacked chips would be only half of what is spent in today’s stacked-chip packages, while the area spent on communication circuits could be reduced by a factor of 40.
This is the age of technological convergence – most significant breakthroughs happen at the interface of two or more fields of science. Such a conjunction of electrical and electronics engineering with biology, physics, chemistry and material sciences is going to be the future of healthcare. We are talking about technology so advanced that it will make current equipment like cardiac pacemakers and blood glucose meters seem very simple!
‘Wetware’ comprising bioelectronics and biosensors will make it possible to measure and understand biological systems in greater detail and with unprecedented accuracy. This will result in advanced lab-on-a-chip biomedical diagnosis, implantable neural interfaces, the ability to restore a patient’s lost vision or reverse the effects of spinal cord injuries, etc. An EETimes.com report notes: “Lab-on-a-chip is one manifestation of the technology…but it is also possible to grow biological cells on electronically addressable substrates. The opportunities for in-vitro diagnostics are clear. Information about the electrical behaviour of individual cells and their reactions to drugs is a major focus for research in cardiac and neural ailments such as Alzheimer’s or Parkinson’s disease.”
Bioelectronics shows a lot of promise now, thanks to the rapid increase in our understanding of biological systems and processes at the nano-scale; advances in semiconductor technology and in the surface chemistry at the interface of biology and man-made devices; the availability of micro energy sources, biosensors and other such miniature measurement systems; and the materials and fabrication techniques for bio-electronic devices, medical implants, 3D assembly, self-assembly, nano-particles, nano-tubes, nano-wires, etc.
A report by the US National Institute of Standards and Technology (NIST) beautifully describes how it is not just a case of miniature electronics and MEMS enhancing medical treatments but also a case wherein the electronics industry learns from the ways and means of biological systems to enhance its own capabilities at the nano-scale. The 2009 report titled A Framework for Bioelectronics: Discovery and Innovation states: “understanding biology may provide powerful insights into efficient assembly processes, devices, and architectures for nano-electronics technology, as physical limits of existing technologies are approached… Advances in bioelectronics can offer new and improved methods and tools while simultaneously reducing their costs, due to the continuing exponential gains in functionality-per-unit-cost in nanoelectronics (aka Moore’s Law). These gains drove the cost per transistor down by a factor of one million between 1970 and 2008 (for comparison, over the same period, the average cost of a new car rose from $3,900 to $26,000) and enabled unprecedented increases in productivity.”
This win-win outlook will contribute further to the growth of bioelectronics. Several industry majors including IBM, Intel, Texas Instruments and Freescale are involved in the research and development of wetware. If there is as much development in the field as we expect to see, it will—over time, and as prices fall—lead to revolutionary changes in national security, healthcare and economic development too.
New battery technologies
The electronics industry has been working towards the twin goals of reducing the size of devices and making them work for a longer time. Both goals are being achieved today mainly because of advances in IC technology – today’s chips are not only very small but are also designed to consume very little power, by reducing energy leakage, etc. As a result, the devices they are embedded in also have smaller size and a longer battery life.
Battery technology, on the other hand, has not kept pace with this development – batteries have neither become significantly smaller nor has there been any significant improvement in their capacity. This has been a much-discussed topic over the past few years.
The trend seems to be shifting, and we now hear of a lot of research and advancements in battery technologies. Perhaps over the next year, some of these might get incorporated into products, to result in cellphones, laptops and other devices with a much longer battery life and smaller footprint than the current ones.
One notable development is the use of thin-film technology to make power sources, especially suitable for applications such as smart cards and sensors. These batteries can be flexible if fabricated on thin plastics. They charge fast and offer good energy and power densities in a small form factor. They also demonstrate very low equivalent series resistance (ESR) and tolerate a wide range of temperatures. Since their self-discharge is negligible, they can remain usable for around a decade. Companies like Oak Ridge Micro Energy and Front Edge Technology have commercialised thin-film power sources.
Then there is a Swiss company called ReVolt that is trying to adapt the button-type zinc-air battery for use with larger devices like laptops and mobile phones. Since zinc-air batteries have a much higher energy density than the lithium-ion batteries currently used in laptops, ReVolt’s venture could result in a tripling of the battery life of such devices.
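Why a higher energy density translates directly into longer runtime for the same battery mass can be shown with a line of arithmetic. In the sketch below, the densities, pack mass and load are purely illustrative assumptions (chosen to reflect the roughly threefold advantage claimed for zinc-air), and the function name is hypothetical.

```python
# Rough illustration of energy density vs runtime for a fixed battery mass.
# The densities, mass and load below are illustrative assumptions only.

LI_ION_WH_PER_KG = 150.0    # assumed typical lithium-ion energy density
ZINC_AIR_WH_PER_KG = 450.0  # assumed zinc-air energy density (~3x li-ion)

def runtime_hours(density_wh_per_kg, mass_kg, load_w):
    """Runtime of a battery of the given mass under a constant load."""
    return density_wh_per_kg * mass_kg / load_w

mass, load = 0.3, 15.0      # a hypothetical 300 g laptop pack under a 15 W load
print(runtime_hours(LI_ION_WH_PER_KG, mass, load))    # ~3 hours
print(runtime_hours(ZINC_AIR_WH_PER_KG, mass, load))  # ~9 hours
```

The threefold jump in runtime falls straight out of the threefold density ratio – which is exactly the kind of gain ReVolt is chasing.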
Plus, a look at current research hints at the possible replacement of the good old alkaline manganese dioxide-based batteries with newer nickel- and lithium-based chemistries such as nickel oxyhydroxide, olivine-type lithium iron phosphate and nanowire electrodes.
Battery technology is clearly a happening area of research, and any significant breakthrough will be a boon for the electronics industry.
Tomorrow’s connected home
Bill Gates envisioned and implemented a smart, connected home many years ago; in the next few years, even our homes might become connected to a reasonable extent. The term ‘connect’ is now being used beyond the context of just mobile phones and computers. Numerous devices have the capability to connect to home wireless networks and the Internet – home theatre systems, gaming consoles, digital photo frames, cameras, printers, and more.
Several efforts are underway to take home networking to the next level.
In January 2009, Cisco announced the Cisco Device Connections Program (Cisco DCP), to enable manufacturers to use the Home Network Administration Protocol (HNAP) in a variety of network-connected devices. The company claims that HNAP enables a simple, secure and open approach for devices to be set up and configured with other HNAP-enabled devices on a home network.
On the consumer electronics front, news reports say that companies like Samsung, Sony and Panasonic are relying on the connected home concept for their next major surge up the ladder, and they already have numerous network-enabled products.
HP offers a range of MediaSmart products including receivers, servers, software and services that help you consolidate all your media files and share them across a range of connected devices at home. The HP MediaSmart Connect, for example, is an advanced digital media receiver that turns any high-definition television (HDTV) into a next-generation TV. You can view all your photos, music, movies and videos on your television by connecting it to multiple computers around the home.
You may dream of being able to open all your media on all your devices. But would this be possible? Would all the formats be compatible with all the devices? To overcome this issue, the Digital Living Network Alliance (DLNA) is working to provide a standard. DLNA-certified products ensure that digital content (photos, video and music) is playable across products sharing a wired or wireless network connection. Over the next few years, we can expect to see a range of DLNA-certified products in the market.
The other context of home networking is a little more sophisticated: the computerisation of various household features such as temperature control, lighting, etc, and their centralised control through a panel located inside or outside the house, or through a mobile device. In India, this will probably be more popular in hotels and offices than in homes – at least in the near future.
Imagine being able to make electrically-functional components by ‘printing’ them on materials such as plastic, silicon and ceramics, using normal printing techniques like screen printing, offset lithography and inkjet! Electronics manufactured using such techniques would be very cheap, and especially suitable for low-performance items such as smart tags and flexible displays.
Printed electronics are typically produced by rapidly printing multiple conductive, insulating and semi-conductive layers to build active or passive components such as thin-film transistors or resistors, which in turn form electronic circuits.
As mentioned earlier, regular printing processes in slightly modified forms can be used for this; the special aspect is the inks used for printing. The inks are usually electrically-functional materials composed of carbon-based compounds. Organic as well as inorganic materials can be used, the key criterion being that the material should be available in liquid form. Several such organic materials are available for use as conductors, semiconductors or insulators. The inorganic materials used are normally dispersions of metallic micro- and nano-particles. A huge break for printed electronics came with the discovery of conjugated polymers (work that won the Nobel Prize in Chemistry in 2000) and their subsequent development into soluble materials.
There are companies (such as Kovio Inc and Merck) that have been making printed semiconductors since 2001, and of late they seem to be venturing into mass production as well. Printing equipment is available from companies such as IBM and Xerox, while the substrates and inks can be had from 3M, Tetra Pak, Mitsubishi Chemicals, etc.
As an interesting aside, printed electronics has led to the invention of ‘plastic memory’, which is expected to be extremely low in cost, although its performance might not match that of traditional technologies. Rewritable, non-volatile memories with data retention of around 10 years or one million cycles can be printed using polythiophenes, a family of polymers that display ferroelectric properties. A German company called PolyIC GmbH & Co KG has used the technology to make a 20-bit memory using polyethylene terephthalate (PET) as the substrate. A Norwegian company called Thin Film Electronics ASA is also working along these lines. It might be some time before we can buy plastic memory off the shelf, but experts predict there will be a lot of progress in this space, soon.
Four men working in parallel can definitely get more work done than one, but too many cooks can spoil the broth if not managed properly. This is the new twist to the multi-core tale that is emerging now. Multi-core processors are not new anymore – they have emerged victorious even in the desktop and embedded segments, with the industry offering dual-cores, quad-cores and more. The question now is whether developers have the know-how, techniques and most importantly, the right tools required to program the multiple cores and tap their full parallel processing capacity.
Primarily, there needs to be a revolution in the developers’ thought process, so that they start thinking and designing software in ‘parallel’ mode. While some call programming for multi-core an emerging technology, others argue that it is merely common sense! It does not make sense to develop an application in the traditional, sequential way, then thrust it on a multi-core processor and expect it to complete the work fast. In fact, doing this might wreak havoc amongst the shared resources.
Developers need to split their programs into parallel tasks intelligently, ensure proper communication and synchronisation between the various sub-tasks, and avoid software bugs such as race conditions. Plus, developers need to understand the hardware architecture at least to a minimal extent – how the resources, especially memory, are shared between the cores – and write the application such that the resources are shared optimally. At present, the general sentiment is that shared memory becomes a bottleneck when there are more than four cores. Such bottlenecks can be avoided if developers sequence and coordinate the sub-tasks neatly.
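A minimal sketch of these ideas – splitting work into sub-tasks, computing privately, and protecting the one shared update with a lock to avoid a race condition – might look like the following. Threads are used here purely for brevity; CPU-bound work that must actually occupy multiple cores would typically use processes (e.g. Python’s multiprocessing) instead, and the function and variable names are our own.

```python
# Sketch: split a job into parallel sub-tasks and guard shared state with a lock.
import threading

def parallel_sum(numbers, workers=4):
    total = 0
    lock = threading.Lock()
    chunk = (len(numbers) + workers - 1) // workers  # ceil-divide input into chunks

    def work(start):
        nonlocal total
        partial = sum(numbers[start:start + chunk])  # compute privately first...
        with lock:                                   # ...then update shared state atomically
            total += partial

    threads = [threading.Thread(target=work, args=(i * chunk,))
               for i in range(workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()  # wait for every sub-task before reading the result
    return total

print(parallel_sum(list(range(1000))))  # 499500
```

Note the pattern: each worker touches shared memory exactly once, under the lock. Updating `total` inside the inner loop instead would serialise the workers on the lock – a small-scale version of the shared-memory bottleneck described above.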
All this is not easy to do, and undeniably, programming for multi-cores and other parallel computing systems is a difficult task. But the challenge is eased to an extent if the developers choose the right concurrent programming languages, libraries, application programming interfaces and parallel programming models.
It is interesting to note that the makers of multi-core and multi-processor platforms, such as Intel, are also making efforts to educate developers on how to write parallel programs and multi-threaded applications. The Parallel Programming resources on Intel’s website (http://software.intel.com/en-us/parallel/) include not just learning materials but also discussion boards and forums where developers can get their doubts cleared. Initiatives such as OpenCL (an open standard for parallel programming of heterogeneous systems) and CUDA (a parallel computing architecture from NVIDIA) are also worth considering.
TV gets a facelift
Television is one space that has made a lot of progress in the past year and promises to be hot in the coming year too. Exciting features are being added to existing models, while new technologies are invading the flat-panel and high-definition television (HDTV) space. The market for televisions, as always, is very large. All in all, a high-growth area.
Last year’s forecasts spoke about the emergence of HDTV as a key development in the consumer electronics space. This year too, HDTV is a hot topic. Almost all brands now have HDTVs on sale. Service providers such as Tata Sky and Reliance Big TV are offering HDTV content. Increase in paid, interactive services and more channels along with a fall in prices will lead to a greater demand for HDTVs in the coming year.
Plus, the other hot feature cropping up in the HDTV space is Internet convergence. Several models available today, from brands including Sony, Samsung, LG, etc, can be connected to the Internet – enabling viewers to connect to photo sites like Flickr or Picasa and view their pictures on a flat screen, stream video from sites such as YouTube, listen to music from the Web, etc.
As we mentioned in our November 2009 article on flat-screens, Internet connectivity could be used by manufacturers for remote diagnosis of problems in your television, delivery of software or driver updates, etc. Television manufacturer Sharp has already started offering such services in the international market.
Another emerging technology that promises greater power for the television is Diiva – the digital interactive interface for video and audio – developed by a consortium of consumer electronics makers in China. The Diiva 1.0 specification is now available. Diiva refers to an all-in-one cable that combines a high-speed, bi-directional data channel; uncompressed high-definition video; and multichannel audio capability to allow users to connect, configure and control various consumer electronics (CE) devices from their digital TVs, while enjoying an enhanced multimedia experience at the same time.
A recent report by the Consumer Electronics Association, USA, predicts that 2010 might see the rise of 3D television and content, although the market might not grow as fast as HDTV’s did. The association based this forecast on surveys in which customers who had seen 3D movies in theatres expressed a keen interest in 3D TV sets and 3D channels. While manufacturers have already started offering 3D-capable models, the demand and the number of models available might grow once 3D channels become available. Satellite television providers such as Sky TV in the UK and DirecTV in the US are hinting at the launch of 3D movie channels in the coming year.
Man likes to control everything, and the first step to controlling everything is to know more about whatever he wishes to control. Perhaps it is this basic psychology that has led to the notable growth in the field of sensors – both in terms of technology and the market.
The demand for sensors is riding a high tide now. The process industry is faced with an imperative need for greater quality control, which cannot be managed without monitoring every tiny detail. Sensors are embedded almost everywhere – to measure and monitor the speed, whereabouts and general health of automobiles; to survey environmental conditions; to monitor safety along industrial perimeters; to measure the temperature and other parameters of chemical processes; and much more.
Several reasons can be attributed to the growing popularity of sensors: improvements in chip technology have resulted in smaller, cheaper, yet more accurate sensors that can be embedded anywhere; today’s sensors consume less power, so even traditional power sources last a long time; some new-age sensors can even sustain themselves on energy drawn from wireless signals; developments in alternate energy have led to sensors that can be fuelled by ambient sources such as sunlight; wireless technologies have made the installation and use of sensors less cumbersome; and advancements in software technology have ensured that the information collected by sensors is put to the best use, whether for monitoring processes or for further trend analysis.
According to the Freedonia Research Group, the demand for MEMS-based and imaging sensors will grow the fastest while process variable sensors to monitor pressure, temperature, flow and level will continue to be the largest segment. The automotive industry will also remain a huge market.
According to the group, imaging and proximity or position sensors are expected to record the fastest growth till 2012. Imaging sensors – including charge-coupled device (CCD), complementary metal-oxide semiconductor (CMOS) and thermal (infrared) sensors – will register the fastest gains of all sensor types. Of these, CMOS-based imaging sensors will be more popular because of their lower costs, lower power requirements and higher speeds. Thermal imaging sensors will see greater use in military applications and by police departments, fire departments, structural testing companies, original equipment manufacturers (OEMs) and private consumers.
The popularity of proximity and positioning sensors will grow fast, owing to increasing applications in the automobile and industrial sectors. While the market for camshaft and crankshaft positioning sensors will continue to grow, there will also be a spurt in newer applications such as collision-avoidance systems.
Automobiles as a key consumer of electronics
The automobile industry is adopting electronics at a furious pace. By the estimates of some popular bloggers on the subject, a high-end car might use around 70-100 intelligent chips! In fact, even 8-bit and 16-bit microcontrollers no longer suffice for the needs of today’s vehicles, and the industry is moving to 32-bit ones.
Electronics plays a huge role in automobiles – to monitor emissions and other factors subject to environmental regulations, for safety control, and for infotainment.
High-end control systems are used to boost performance and the energy-regeneration efficiency of batteries; sensors monitor emissions; radio frequency identification (RFID) tags identify spare parts and track their age; sensors keep tabs on the temperature of vital parts such as the engine and tyres; the convergence of video, audio and data entertains the user as well as provides information updates; mobile technologies are playing a huge role in ‘connecting’ the vehicle with the outside world; navigation systems ensure the automobile is on the right track at all times; global positioning systems (GPS) help keep track of the position and speed of vehicles – a boon for logistics firms; and other electronic systems are used for remote diagnostics, vehicle immobilisation, collision detection, etc.
The in-car display that will soon be a part of the dashboard of most cars will be the symbol of the ‘connected’ car. It will be the interface through which the driver interacts with all the electronic systems mentioned above. Experts say it will also play a key role in connecting the car with the Internet and other consumer electronics used at home, at work, and on-the-go.
The author is a freelance writer based in Bengaluru. She writes on a variety of topics, her favourites being technology, cuisine, and life