Have you ever wished you had four hands so you could work faster? A similar sentiment amongst computer engineers was what led to the development of multi-core processors more than a decade ago.
A multi-core processor is essentially a chip that has more than one processor (or core) attached to it. By engaging these multiple cores using the right software, a computer can do more in less time.
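The idea of "engaging these multiple cores using the right software" can be sketched in a few lines. Here is a minimal illustration (assuming Python and its standard library; the function names are our own, not any vendor's API) of splitting a CPU-bound job into one chunk per core so it finishes sooner:

```python
import os
from concurrent.futures import ProcessPoolExecutor

def sum_of_squares(bounds):
    """CPU-bound work for one core: sum n*n over [start, stop)."""
    start, stop = bounds
    return sum(n * n for n in range(start, stop))

def parallel_sum_of_squares(limit, workers=None):
    """Split the range into one chunk per core and combine the partial sums."""
    workers = workers or os.cpu_count() or 1
    step = limit // workers + 1
    chunks = [(i, min(i + step, limit)) for i in range(0, limit, step)]
    # Each chunk runs in a separate process, so each can occupy its own core.
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(sum_of_squares, chunks))

if __name__ == "__main__":
    print(parallel_sum_of_squares(1_000_000))
```

On a quad-core machine this kind of split can approach a four-fold speed-up for work that is genuinely CPU-bound, which is exactly the "more in less time" promise of multi-core hardware.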
There is a great demand for multi-core processors across various application domains ranging from general-purpose and embedded computing to networking, digital signal processing (DSP) and graphics. Over the past few years, multi-cores have become the norm, with dual- and quad-core processors being taken for granted even in the desktop and mobile computing spaces.
Of course, we all know that in the computing world, users are never satisfied. However fast their computers are, they want more speed. When four cores begin to feel mundane, six-core processors will arrive (Intel has already demonstrated one such chip), but soon even those will not be enough. The industry will meet every need with more research, more innovation and more products. It is an unending cycle, and no technology is permanent.
Let us therefore stop at this moment in time and capture the current state of affairs in multi-core processors. Well, the situation here is quite similar to the Cola Wars, with Intel and AMD trying to outdo each other with more cores.
A six-mare chariot to pull your computer
Six- and even eight-core processors per se are not news any more, with server-focused products such as Intel’s Xeon and AMD’s Opteron being quite popular. However, it is interesting to note that Intel recently demonstrated a six-core processor for the desktop segment that is currently being ruled by dual- and quad-core processors. This processor might be launched sometime in March this year.
The processor, codenamed Gulftown, will use Intel’s 32-nanometre (nm) fabrication technology and Westmere architecture, which means it will also feature some of the interesting power-saving techniques used in Intel’s 45nm products. The power management functions of this architecture save power by throttling cores that sit idle while the applications running at any point of time use only the others.
Apart from that, the chip has a larger data cache—12 MB of L3 compared to the 4 MB present in its dual-core counterparts. As John Morris and Sean Portnoy observe on their popular blog on laptops and desktops, this data cache, combined with the extra cores, results in a larger chip containing 1.17 billion transistors. Intel says that it uses some of those extra transistors to speed up tasks like data encryption and decryption.
The six-core processor will also be available in a dual-socket server version.
AMD too plans to launch its six-core desktop central processing unit (CPU) this year. Codenamed Thuban, this processor will pack six cores and a DDR3 memory controller onto a 45nm die. It is believed that Thuban will be backwards-compatible with existing AM3 and AM2+ motherboards, which gives a flexible upgrade path to existing AMD users.
Six more for the servers
If six cores is the current high score for desktop CPUs, server-focused processors can boast of six more.
AMD’s new server processor, codenamed Magny Cours, is designed for two- and four-socket servers and uses faster DDR3 memory. This processor will be offered in the market as the Opteron 6100, in both eight- and 12-core versions. Both chips will be manufactured using a 45nm process. The company has also hinted that by 2011, Magny Cours will be replaced with Interlagos—a more powerful 32nm chip that will come with 12 and 16 cores based on AMD’s Bulldozer microarchitecture.
The company has also revealed a new Opteron chip (codenamed Lisbon) for servers with one or two processor sockets. This will be available in four- and six-core versions as the Opteron 4100 series. By 2011, Lisbon will be succeeded by Valencia—a 32nm chip offering six and eight cores based on Bulldozer.
A team with multiple capabilities
“Last year, we launched the six-core Istanbul processor; we are currently working on the eight-core and 12-core processors, and plan to launch them by the first half of 2010. Apart from this, we are also planning a six-core desktop CPU. While these homogeneous multi-core processors continue to evolve over the next couple of years, you will see a movement towards heterogeneous multi-core processors in the future,” comments Vamsi Krishna, senior manager-technical, AMD. “AMD has plans to release its first heterogeneous multi-core processor (codenamed Fusion), where we integrate multiple x86 cores and graphical processing unit (GPU) cores on a single die. This will be the beginning of a new multi-core revolution in the future.”
Deep down, the idea behind Fusion is to enable a mix-and-match strategy at the silicon level, through a broad range of design initiatives. AMD will provide a range of application-specific cores that can easily be combined and heaped onto a processor die and fabricated at a low cost. A quad-core processor might contain different combinations of cores, say, two general-purpose cores and two specialised processor cores, or one general-purpose core and three specialised cores, and so on. That is, a processor can be put together from heterogeneous cores, based on the end-use and workload.
Krishna adds, “The Fusion project completely leverages the multi-core concept and Direct Connect architecture, enables a homogeneous programming model for all AMD products, and standardises the coprocessor interface for on-die and platform connectivity. Fusion-based processors, with the CPU and GPU integrated in a single architecture, should make the life of software programmers and application developers much easier.”
Cloud on a chip
System-on-chip is old news. Here is a whole cloud on a chip—Intel’s brainchild with 48 cores!
“Most recently we announced the prototype, single-chip cloud computer (SCC). This research chip contains the most Intel architecture cores ever integrated on a silicon CPU chip—48 cores. It incorporates technologies intended to scale multi-core processors to 100 cores and beyond while consuming less electricity than two standard household light bulbs,” says Vasantha Erraguntla, engineering manager, Intel India.
“Architecturally, the chip resembles a small cluster or ‘cloud’ of computers on a chip. It was designed as a concept vehicle for parallel software research. We will be working with industry research and academia partners to further parallel computing research using this vehicle.”
You might recollect that the same team also successfully tested an 80-core teraflop processor a few years ago. Following that, the team was confident of the success of the SCC. However, this chip seems more complex, with a much larger die size, system-level complexities and the challenges of 45nm physical design to boot. The success of this chip will make processors more scalable than you ever thought possible, and also accelerate multiple-core software research and advanced development.
According to the company, future laptops with processing capability of this magnitude could have ‘vision’ similar to that of humans, seeing objects and motion as they happen, with high accuracy. Imagine, for example, a laptop with a 3-D camera and display giving you virtual dance lessons or showing you a ‘mirror’ of yourself wearing the clothes you want to buy online. Twirl and turn to watch how the fabric drapes and how its colour complements your skin tone. This kind of interaction could eliminate the need for keyboards, remote controls or joysticks for gaming. Some researchers believe computers may even be able to read brain waves, so simply thinking about a command, such as dictating words, would execute it without speaking.
We can expect more breakthroughs from Intel’s stable considering the amount of research being pumped into this space. “On the hardware side, we explore scalable multi-core architectures that integrate streamlined processor cores and accelerators using a fast, energy-efficient, modular core-to-core infrastructure; memory sharing and stacking to provide a high-bandwidth, flexible, cache and memory hierarchy that supports many simultaneous threads fairly and efficiently; and high-bandwidth input/output (I/O) and communications that balance the compute demands with the I/O and network demands within the platform power and cost budgets,” lists Erraguntla.
The soft side of success
As far as multi-core computing goes, success depends in large part on the ability of software to efficiently harness the immense parallel computing power of the processor. “To take advantage of the increasing number of cores, efficient load balancing of software is required. In addition, we need to identify and come up with programs and applications for these systems, for performance improvement. It is also important for OEMs to strike a balance between performance and power consumption. We at AMD understand this and are working with our partners to address these issues,” says Vamsi Krishna.
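To make the load-balancing point concrete, here is one common sketch (Python standard library; the function names are illustrative, not AMD's tooling): submitting each task to a process pool individually lets idle cores pull the next task the moment they finish, instead of each core being handed a fixed, possibly lopsided, share of uneven work.

```python
from concurrent.futures import ProcessPoolExecutor, as_completed

def simulate_work(cost):
    """Stand-in for a CPU-bound task whose runtime grows with `cost`."""
    total = 0
    for n in range(cost * 10_000):
        total += n
    return total

def run_balanced(costs, workers=2):
    """Submit one future per task; a free core picks up the next pending
    task as soon as it finishes, so uneven tasks still keep all cores busy."""
    with ProcessPoolExecutor(max_workers=workers) as pool:
        futures = [pool.submit(simulate_work, c) for c in costs]
        return [f.result() for f in as_completed(futures)]
```

The alternative—statically assigning half the tasks to each of two cores—can leave one core idle while the other grinds through the expensive tasks, which is exactly the imbalance Krishna warns about.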
Multi-core technology is no longer confined to high-performance computing. It has entered the devices that we use every day, in the form of multi-core microcontrollers that are being extensively used in embedded systems.
Several companies including Renesas Technologies, LSI Corporation, STMicroelectronics and Freescale Semiconductor have introduced multi-core microcontrollers in the recent past. While the multi-core chips initially introduced for embedded systems were aimed at image processing and other multimedia products with heavy processing loads, recent products are targeted at a broad range of general-purpose devices. These deliver the true real-time response required by real-time control systems, safety-critical applications and the like, thanks to their ability to process instructions in parallel.
Intel too has taken several initiatives to educate developers on the specific challenges and techniques of parallel computing, in order to enable them to make good use of multi-core systems. Plus, Intel and Microsoft have partnered with academia to create two Universal Parallel Computing Research Centres (UPCRC) aimed at accelerating developments in mainstream parallel computing for consumers and businesses in desktop and mobile computing.
The multi-core related software research vision at Intel includes “model based applications that use tera-scale capabilities to comprehend data, make smarter decisions, and make visual experiences look, act and feel real; parallel programming tools that empower the ordinary programmer to develop applications that use parallelism with scalability and performance, safety and reliability; and thread-aware execution environments that provide real-time performance and power management across cores and scale with increasing thread and core counts,” informs Erraguntla. You can find details at ‘http://techresearch.intel.com/articles/Tera-Scale/1421.htm’.
More than desktops and supercomputers
Multi-core processors are becoming an indispensable aspect of virtualisation too, especially in data centres.
Vamsi Krishna explains, “The near future of multi-core processors will see the technology enabling data centres to accomplish tasks more quickly and with greater energy efficiency. Areas such as virtualisation are also primed for a boost by multi-core processors. As AMD moves to 8- and 12-core platforms, far more virtual machines can be packed onto each physical server. By focusing heavily on power efficiency and virtualisation capabilities, businesses can now add performance and efficiency to their business without a significant cost.”
1. Six-core desktop processors
2. Server processors with eight, twelve and even more cores
3. Heterogeneous multi-core processors
4. Smaller, more powerful multi-core chips based on 32nm logic technology
5. Single-chip cloud computer
6. Scalable multi-core architectures
7. Higher-bandwidth I/O and communications, for improved performance of multi-core chips
8. Better parallel programming tools, model-based tera-scale applications and thread-aware execution environments, to make better use of multi-core hardware
R. Ravichandran, director-sales, Intel South Asia, says, “Industry reports point out that there will be approximately 10-15 billion devices in the next 4-5 years on the Internet, and most devices like TVs, embedded devices and other consumer electronic devices (beyond traditional desktops, notebooks and servers) will have a touch to the Internet. Given that there will be a proliferation of devices in the computing continuum, with rich media and video as killer applications, some of the handhelds and smart phones will need great computing capabilities that are energy-efficient… and multi-core will also pervade these segments.”
Ravichandran cites the following examples to prove his point.
Safer and smarter roads. A number of traffic accidents are caused by worn-down car tyres. Intel, along with industry players Kontron and ProContour, has developed an embedded tyre-tread-monitoring technology that is making roads smarter. Kontron, a member of the Intel Embedded and Communications Alliance, has developed a camera built around an Intel Core 2 Duo processor that captures tyre-tread depth as a tyre passes over a specialised grate. This technology can alert drivers when their tyres need replacement, helping them avoid potentially dangerous blow-outs.
Medical imaging with multiple cores. Physicians today collect more complex imagery of their patients than ever before. In order to accurately diagnose diseases and develop treatment strategies in a minimally-invasive manner, new imaging modes, methods and hardware are needed.
In collaboration with the Mayo Clinic, Intel has presented a paper titled Mapping High-Fidelity Volume Rendering for Medical Imaging to CPU, GPU and Many-Core Architectures, outlining how medical imaging benefits from parallel processing in the Intel microarchitecture codenamed Nehalem. Medical volumetric imaging requires high-fidelity, high-performance rendering algorithms, and the team has achieved more than an order of magnitude performance improvement on a number of large 3-D medical datasets.
What to expect?
Look into any device a few years down the line and you are sure to find a multi-core processor in it. Specialists in the embedded arena, including ARM and RIM, are launching multi-core models of their processors too.
Multi-core processors have already widened their ambit from supercomputers to desktops, mobiles and data centres. However, the real success and sustainability of the multi-core concept depends on whether it will be ably supported on the software front too, with a proper understanding and execution of true parallel programming principles. Considering the efforts being made by industry leaders to train developers on parallel techniques, this hurdle will be overcome soon, and multi-cores will be adopted even more rapidly.
The author is a freelance writer based in Bengaluru. She writes on a variety of topics, her favourites being technology, cuisine, and life.