Samsung 3D TV

JUNE 2012: A standard TV set displays images and video in two dimensions (2D). It lacks the depth that forms the third dimension of viewing.

In real life, when you view an object, each eye sees a slightly different picture due to the small difference in viewing angle. This difference between the two views is greatest for objects close up and tapers off for those farther away. Using this difference, your brain estimates the distance between you and the object, helping you perceive depth and see in three dimensions, or 3D.
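This geometric relationship can be put into numbers. The short sketch below uses the standard stereo formula, depth = focal length × eye separation ÷ disparity; the 65mm eye separation and the focal-length value are illustrative assumptions, not figures from the article.

```python
# Illustrative stereo-depth calculation: nearer objects produce a larger
# disparity (difference between the left- and right-eye views), so depth
# can be recovered as  depth = focal_length * eye_separation / disparity.

def depth_from_disparity(disparity_px, eye_separation_m=0.065,
                         focal_length_px=800.0):
    """Estimate the distance (in metres) of a point seen with a given
    pixel disparity between the two views."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_length_px * eye_separation_m / disparity_px

# A point shifted 40 pixels between the views is nearer than one shifted 10.
for d in (40, 20, 10):
    print(f"disparity {d:2d} px  ->  depth {depth_from_disparity(d):.2f} m")
```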

To create a sense of depth on the screen, 3D videos are shot from two slightly different viewpoints, separated by roughly the average distance between human eyes. In animated films, these ‘shots’ come from computer models that generate the two views. In live action, two cameras are used to record stereo video.

At the movie theatre: polarisation

Earlier, the 3D effect was created by using colour to separate the images intended for each eye. This is the anaglyph method, which encodes a 3D image in a single picture by superimposing a colour-filtered pair of pictures.
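As a rough illustration of the anaglyph idea, the sketch below packs a stereo pair into one picture using the common red-cyan convention; the tiny synthetic arrays stand in for real photographs, and the exact channel assignment is an assumption of that convention, not a description of any particular product.

```python
import numpy as np

def make_anaglyph(left_rgb: np.ndarray, right_rgb: np.ndarray) -> np.ndarray:
    """Encode a stereo pair as a single red-cyan anaglyph image.

    The red channel carries the left-eye view; the green and blue channels
    carry the right-eye view, so red/cyan glasses separate them again."""
    anaglyph = right_rgb.copy()
    anaglyph[..., 0] = left_rgb[..., 0]   # red channel <- left-eye view
    # green and blue channels already hold the right-eye view
    return anaglyph

# Tiny synthetic example: two 4x4 'images' with different content.
left = np.zeros((4, 4, 3), dtype=np.uint8);  left[..., 0] = 200
right = np.zeros((4, 4, 3), dtype=np.uint8); right[..., 1:] = 150
print(make_anaglyph(left, right)[0, 0])   # -> [200 150 150]
```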

Current technology utilises a property of light known as ‘polarisation.’ In a cinema hall, the 3D projector uses polarising filters to project two images: a right-eye perspective carried by clockwise-polarised light and a left-eye perspective carried by counterclockwise-polarised light. The audience wears special polarised glasses that allow only light of the matching polarisation to reach each eye. The brain receives the two different perspectives and combines them into an image that has depth.

If the polarisation is linear, the audience have to sit with their heads level with the screen; tilting the head lets light meant for one eye leak into the other. With circular polarisation, the audience are free to sit in a more comfortable position.
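The head-tilt problem with linear polarisation follows from Malus's law: the intensity an ideal polariser transmits falls off as the cosine squared of the angle between the light's polarisation and the filter axis. The sketch below estimates, for a few tilt angles, how much light still reaches the intended eye and how much leaks into the wrong one; the angles are purely illustrative.

```python
import math

def linear_crosstalk(head_tilt_deg: float) -> tuple[float, float]:
    """Fractions of one eye's light reaching the intended and the wrong eye
    through linear polarising glasses when the viewer tilts their head
    (Malus's law: transmission ~ cos^2 of the angle between polariser axes)."""
    t = math.radians(head_tilt_deg)
    intended = math.cos(t) ** 2   # filter aligned with 'its' image
    leakage = math.sin(t) ** 2    # same image leaking into the other eye
    return intended, leakage

for tilt in (0, 10, 30, 45):
    good, bad = linear_crosstalk(tilt)
    print(f"head tilt {tilt:2d} deg: intended {good:.2f}, leakage {bad:.2f}")
# Circularly polarised light is unchanged by a rotation about the viewing
# axis, so the leakage stays near zero however the head is tilted.
```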

At home: active and passive 3D

Active 3D. 3D-enabled LCD or plasma TV sets work quite differently from the technology used in cinema halls. A 3D TV rapidly flashes alternating left-eye and right-eye video frames, and the glasses worn by the viewer are not polarised but active shutter glasses, with a liquid-crystal shutter over each eye.

As a right-eye video frame flashes on the screen, the LCD over the right eye switches from an opaque to a transparent state. When the left-eye video frame appears, the right-eye LCD turns opaque again and the left-eye LCD becomes transparent. At any moment, you see only one perspective, through one eye. But the left and right frames alternate so quickly, at 120 hertz (times per second), that you perceive a full 3D view. This illusion is possible due to persistence of vision.
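The alternation can be pictured as a simple schedule in which the screen and the glasses stay in lock-step, each eye effectively receiving 60 of the 120 frames shown every second. The sketch below prints a few frame slots at a 120Hz refresh; the timing loop is purely illustrative and ignores the sync signalling a real TV sends to the glasses.

```python
import itertools

REFRESH_HZ = 120                 # screen alternates left/right frames at 120 Hz
FRAME_TIME_MS = 1000 / REFRESH_HZ

def shutter_schedule(n_frames=6):
    """Print which eye's shutter is open for each displayed frame.

    Each eye sees half of the 120 frames shown every second; the alternation
    is fast enough that persistence of vision merges the two streams into a
    single 3D view."""
    for i, eye in zip(range(n_frames), itertools.cycle(("LEFT", "RIGHT"))):
        t_ms = i * FRAME_TIME_MS
        print(f"t={t_ms:5.2f} ms  screen shows {eye:5s} frame  "
              f"-> {eye} shutter open, other shutter opaque")

shutter_schedule()
```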

What is not so great about this technology is that you have to wear glasses.

Passive 3D. Here the TV divides the left- and right-eye perspectives into alternating vertical columns. Microscopic lenses over the screen ‘bend’ the light so that slices of the right-eye perspective reach the viewer’s right eye and slices of the left-eye perspective reach the left eye.
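A column-interleaved frame of the kind described above can be sketched in a few lines; the toy arrays below stand in for the two rendered views, and the even/odd column assignment is an assumption for illustration (real panels may interleave rows or columns depending on the design).

```python
import numpy as np

def interleave_columns(left: np.ndarray, right: np.ndarray) -> np.ndarray:
    """Build the column-interleaved frame a passive 3D panel displays:
    even pixel columns carry the left-eye view, odd columns the right-eye
    view; the lens layer then steers each set of columns to the matching eye."""
    assert left.shape == right.shape
    frame = left.copy()
    frame[:, 1::2] = right[:, 1::2]   # odd columns from the right-eye image
    return frame

left = np.full((4, 8), 1)    # toy 'images': 1 = left view, 2 = right view
right = np.full((4, 8), 2)
print(interleave_columns(left, right)[0])   # -> [1 2 1 2 1 2 1 2]
```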

The benefit of this technology is that it does not require the viewer to sit directly in front of the screen. New TV screens use several sets of lenses to create multiple left and right image pairs for viewers sitting at different angles from the TV. One set of images is for viewers seated directly in front of the screen. Another is for viewers to the side of the screen. Additional pairs take care of all the viewers in between. If you move sideways, you simply transition from one pair of right-left images to the next.

This technology does not require you to wear glasses but the 3D experience is nowhere close to that of active 3D technology.

In smartphones: autostereoscopy

Nintendo 3DS portable gaming device

Autostereoscopy is the process of displaying 3D or stereo images without the viewer wearing specialised headgear. Also known as glasses-free 3D, it is the main technology used in 3D smartphones such as the HTC EVO 3D and LG Optimus 3D, and in portable gaming devices such as the Nintendo 3DS.

This technology uses a parallax barrier to deliver a different image to each eye. In the latest and most convenient design, the barrier sits not in front of the pixels but behind them, between the pixels and the backlight. Rather than blocking parts of the displayed image, it steers different light through the pixels towards each eye, so light intended for one eye does not spill glare onto the pixels meant for the other, giving the best image quality.
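The barrier layout itself comes from similar triangles: the gap between barrier and pixels, and the slit pitch, are chosen so that alternate pixel columns line up with the left and right eyes at the intended viewing distance. The sketch below works through those two numbers for the classic arrangement with the barrier in front of the pixels (the rear-barrier design described above follows the same reasoning with the roles of slits and pixels swapped); the pixel pitch, 65mm eye separation and 35cm viewing distance are illustrative assumptions.

```python
def parallax_barrier_geometry(pixel_pitch_mm: float,
                              eye_separation_mm: float = 65.0,
                              viewing_distance_mm: float = 350.0):
    """Rough parallax-barrier design numbers from similar triangles:
    the barrier-to-pixel gap and the slit pitch that send alternate pixel
    columns to the left and right eyes at the chosen viewing distance."""
    gap = pixel_pitch_mm * viewing_distance_mm / eye_separation_mm
    slit_pitch = (2 * pixel_pitch_mm * viewing_distance_mm
                  / (viewing_distance_mm + gap))
    return gap, slit_pitch

# e.g. a small handheld panel with 0.1 mm pixels viewed at 35 cm
gap, pitch = parallax_barrier_geometry(0.1)
print(f"barrier gap ~{gap:.2f} mm, slit pitch ~{pitch:.4f} mm")
# The slit pitch comes out slightly less than twice the pixel pitch, which
# is what lets both eyes see their own set of columns from one position.
```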

What’s next?

A different TV programme for everyone. 3D technology is being tested as a way to build devices that show different images or videos to different people looking at the same screen. Currently, 3D devices send different images to the two eyes. With a fair amount of tweaking, these devices could also send different images to different pairs of eyes. For instance, in passive 3D TV, the angle at which the viewer sits relative to the TV set decides which pair of right-left images he sees. If different pairs of right-left images carry different programmes, we have the ultimate solution to the age-old family problem of fighting over the remote control.

3D conferencing. 3D video conferencing transmits 3D video seamlessly across the network, bringing a sense of depth and realism that traditional video conferencing lacks.

The key advantage of 3D video conferencing over a traditional voice-over-Internet-protocol (VoIP) solution is the superior visuals and vivid perception of facial expressions and other body language, resulting in better involvement of users and enhanced overall communication. Best of all, this technology allows for eye contact.

3D processing hardware. High-end enthusiast laptops and consumer laptops with 3D capability have been available for the past year. While mainstream 3D laptops use the Nvidia GT 555M GPU, enthusiast devices such as the Alienware M17xR3 use higher-end cards like the Nvidia GTX 580M. In fact, most current GTX GPUs support 3D gaming.

A 120Hz monitor and a 3D Vision Kit combined with a compatible Nvidia graphics card transform the mundane 2D images on the PC into a realistic 3D experience. 3D Vision requires proper configuration before the user can really enjoy the effect. The technology degrades the performance in most games by 50 per cent. Also, the shutter glasses can be annoying to wear for a long time and block a lot of the display brightness. Nevertheless, Nvidia is still in the lead compared to AMD.

AMD uses a technology called ‘HD3D’ to enable stereoscopic 3D support in games, movies and/or photos. Additional hardware (such as a 3D-enabled panel, 3D-enabled glasses/emitter and a Blu-ray 3D drive) and/or software (such as Blu-ray 3D discs, 3D middleware and games) is required to enable stereoscopic 3D.

Radeon HD cards such as the AMD Radeon HD 5450 and HD 6970 also support 3D gaming. Companies that use these cards include MSI, Lenovo and Hewlett-Packard (HP). The HP Envy 17 3D is one of the most popular PCs to come with the Radeon 3D solution. Although AMD has a higher-end card, the Radeon HD 6990, the incompatibility of AMD CrossFire technology with AMD's quad-buffer support means it does not support stereoscopic 3D gaming on frame-sequential displays that use active shutter glasses.

Ultimately, most high-end GPUs can play 3D games, but since the GPU must render almost double the frames per second, gaming performance will probably halve. Moreover, due to software incompatibility, certain performance-boosting setups such as AMD CrossFire cannot be used. This puts a cap on the maximum gaming performance that can be derived from those cards.

Smartphones with 3D. Smartphones with 3D not only output movies and games in 3D using an autostereoscopic panel but can also record video and capture images in 3D. This technology uses stereo image capture through two cameras placed a certain distance apart.

The different angles of the cameras with respect to the object being photographed let the smartphone create a 3D view. As always, processing images in 3D doubles the load on the system. Hence an SoC that can handle a 20MP standard image might be able to handle only a 12MP stereo image.
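A quick back-of-the-envelope count shows why the stereo ceiling sits so much lower: every shot carries two full views, before any of the stereo-specific alignment and depth processing is added. The sketch below simply restates the article's 20MP/12MP example as raw data sizes; the numbers are the article's, the helper function is illustrative.

```python
def capture_load_mp(megapixels_per_view: float, views: int = 1) -> float:
    """Total megapixels of raw image data the SoC must process for one shot."""
    return megapixels_per_view * views

print(f"20MP single image : {capture_load_mp(20):.0f} MP of data")
print(f"12MP stereo pair  : {capture_load_mp(12, views=2):.0f} MP of data")
# The 12MP stereo pair is already more raw data than the 20MP single shot,
# before any stereo alignment or depth work is counted.
```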

LG’s Optimus 3D utilises TI’s OMAP 4430. The OMAP 4430 is an ARM Cortex-A9 based dual-core SoC clocked at 1 GHz. It can handle 720p stereoscopic 3D video and capture 5MP stereo images.

An advancement over the OMAP 4430 is the OMAP 4460 featured in the Galaxy Nexus, the current ‘Google Phone.’ This SoC outperforms the OMAP 4430, with full HD (1080p) 3D playback and enough power to capture 12MP stereo images. It is also clocked 500MHz higher, at 1.5GHz against the OMAP 4430's 1GHz. The Galaxy Nexus, however, does not feature stereo image sensors. Both SoCs use the PowerVR SGX540 graphics core.

LG Optimus 3D

A smartphone that uses stereo image sensors to capture 3D images is the HTC Evo 3D. This device runs on Qualcomm's first dual-core SoC, the MSM8660. Clocked at 1.5GHz and featuring an Adreno 220 GPU, this SoC is similar to the MSM8260; the only difference is that the MSM8260 supports only HSPA+ wireless communication, while the MSM8660 supports multi-mode HSPA+/CDMA2000 1xEV-DO Rev. B. The Adreno 220 has a theoretically slower fill rate than the SGX540, but the real-life performance difference is not known.
