A lot has happened since 2009—the year when Pranav Mistry, the Indian technology wizard who is currently working as a researcher at the MIT Media Lab, US, unveiled the ‘SixthSense’ technology.

‘SixthSense’ is a wearable gestural interface that allows users to project the digital information existing on the World Wide Web onto any surface around them and use natural hand gestures to interact with that information.

Vandana Sharma of EFY Bureau caught up with Mistry to find out where the future of computing is headed, what India needs to do to come up with path-breaking, life-transforming innovations, and a lot more…


PRANAV MISTRY, RESEARCHER, MIT MEDIA LAB, USA

FEBRUARY 2012

Q. What led you to work on the SixthSense?
A. From the very beginning, I wondered what the future of computing would be like and how we would interact with the digital information space, which has hitherto remained confined to the rectangular screens of our mobile phones, laptops and tablets. I always wondered why this model couldn’t be broken.

More than this, I thought it would be interesting to use the real world as the space for interacting with the digital world. Before the SixthSense project, I undertook many other projects to achieve this end, but probably my approach was not right. Gradually, however, things began to fall into place.

Q. SixthSense looks no less than magic. Could you demystify it in simple words?
A. To develop the SixthSense interface, I used a combination of very simple hardware components comprising a camera, sensors, an Internet-enabled mobile device and a projector.


In the latest version of the interface, I am using a laser projector with a laser diode inside, which can project on any surface. Technically, one interesting thing about a laser projector is that it never goes out of focus. Since the application requires the user to wear the projector on the body, a laser projector is advantageous because the user doesn’t have to adjust the focus.

So, hardware-wise, it is very simple. The plus point of these hardware components is that they are cheap and getting smaller every month, let alone every year.

If you view the video presentation I made at TED (http://bit.ly/2GDYFj), you will observe that I just make the gesture of taking a picture and a picture is actually taken. To do this, the system needs to understand the gesture the user is making. So the key intelligence in SixthSense lies in understanding the scene and deciding what to project, where to project it, what is in front of the user, what kind of gestures the user is making, and so on.


All this intelligence comes from the computer-vision software and machine-learning technology. The camera sees what the user sees. It captures not only the gestures but also the scene and the objects around them. For example, if the user holds a book in his hands, the camera matches its cover against the covers of thousands of books available online. Once a match is found, it can tell you the price, user reviews and also whether your friends already have a copy of the book.
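
The interview does not go into implementation detail, but the kind of fingertip tracking this vision layer needs can be sketched in a few lines. Below is a minimal, hypothetical Python/OpenCV example that tracks coloured fingertip markers in the camera feed and flags a possible "take a picture" framing gesture; the colour range and the gesture rule are illustrative assumptions, not the actual SixthSense implementation.

```python
# Hypothetical sketch: track coloured fingertip markers with OpenCV and flag
# a candidate "framing" gesture. Colour thresholds and the four-marker rule
# are assumptions for illustration only.
import cv2
import numpy as np

# HSV range for a red fingertip marker (assumed values; tune for real markers)
LOWER_RED = np.array([0, 120, 80])
UPPER_RED = np.array([10, 255, 255])

def find_marker_centres(frame_bgr, max_markers=4):
    """Return image coordinates of the largest coloured-marker blobs."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, LOWER_RED, UPPER_RED)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    contours = sorted(contours, key=cv2.contourArea, reverse=True)[:max_markers]
    centres = []
    for c in contours:
        m = cv2.moments(c)
        if m["m00"] > 0:
            centres.append((int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])))
    return centres

cap = cv2.VideoCapture(0)  # the wearable camera
while True:
    ok, frame = cap.read()
    if not ok:
        break
    tips = find_marker_centres(frame)
    # Crude gesture rule: four visible fingertips could be the photo-framing gesture.
    if len(tips) == 4:
        print("framing gesture candidate at", tips)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
```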

Q. How does the device search over the Internet?
A. The device is connected to the cloud. It uses a number of search-engine application programming interfaces (APIs), such as Amazon’s APIs. As it connects you to the Internet, it gives you access to all the dynamic information and data while you continue to be in the physical world.

Of course, the device doesn’t always need the Internet. It uses much of the software available on the device itself and on the mobile phone. For example, it can take pictures without going to the Internet. It can save and modify pictures, zoom in, zoom out and do a lot more.
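
The cloud-lookup pattern Mistry mentions, where a recognised object such as a book cover becomes a query against a product API, can be illustrated with a small sketch. The endpoint, parameters and response fields below are invented placeholders; a real system would use an authenticated service such as Amazon’s product APIs.

```python
# Hypothetical sketch of the cloud lookup: once the vision layer recognises a
# book cover, ask a product API for price and reviews. Endpoint and response
# shape are placeholders, not a real service.
import requests

PRODUCT_API = "https://example.com/api/products"  # placeholder endpoint

def look_up_book(title):
    """Fetch price and review summary for a recognised book title."""
    resp = requests.get(PRODUCT_API, params={"q": title, "type": "book"}, timeout=5)
    resp.raise_for_status()
    item = resp.json()["items"][0]  # assumed response shape
    return {"price": item["price"], "rating": item["average_rating"]}

# The result could then be handed to the projector layer and overlaid
# directly on the physical book cover.
print(look_up_book("Programming Pearls"))
```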

Q. How has the landscape evolved since you unveiled the device in 2009?
A. Since 2009, the industry has become interested in two technologies: augmented reality and gesture interaction. Microsoft is working on gesture-based gaming with Kinect, and many gesture-based input devices are being introduced. Some of the big corporations working on these two technologies are MIT Media Lab sponsors, and we are helping them in this work.

Other advancements are happening at the hardware level. You must have noticed that when I made the SixthSense presentation in 2009, the hardware was the size of a helmet. Now I am using a device that can fit into a matchbox. This rarely happens. While computing devices keep getting smaller, in my fifteen years in this industry I have never seen any device become two to three times smaller, as the SixthSense device has in the last two years.

Q. Do you see this trend being replicated in other devices like mobile phones?
A. Mobile phones have now reached the limit of how small they can go. This is not a technical problem; the reason is that we are bound by the screen sizes of these devices. While the components inside a mobile phone can be made smaller and smaller, and a phone could technically be made the size of a coin, there would be a usability limitation. A mobile phone can only shrink to the size of the interaction you wish to give the user via its screen.


The advantage of using projection technology is that the projector can be reduced to the size of a button and any surface can be used as the output medium. You can project on a table, a wall, a newspaper or your palm, so it eliminates the limitation of the screen. That’s an interesting shift which will impact the way we interact with the digital world.

Q. You have made the SixthSense technology Open Source. What inspired you to share the technology with the community?
A. A technology that gets locked down by corporate policies and the confines of intellectual property can soon be forgotten.

I come from India, from a place where, until a few years ago, technological advances were always associated with the Western world, with advances aimed at making life in the Western world better and better. But life in the Western world is already good, and we need to break this model.

It is the other two-thirds of the world that needs these technological advances, so that the lives of people in those countries become better. While I could have made more money by selling the technology to a big company, I will get more blessings by sharing it openly for the benefit of the masses.

Q. Will the SixthSense technology replace mobile phones and laptops some day?
A. SixthSense is not an alternative to these devices; it only adds an option to the existing computing world. We are going to become human again by making computing more human. That is, access to the digital world will no longer remain confined to the rectangular screens of devices. People will be able to interact with and access the digital world while continuing to be in the real world, and not necessarily via conventional devices.


Q. Which of your other research projects are special to you?
A. Ghost in the Machine, a project I worked on while at IIT Bombay, is special to me. It is likely to have long-term implications. It explored how machines can be made more creative. Big organisations like IBM and DARPA are working on future technologies such as artificial intelligence, with the aim of making machines more intelligent and turning them into interesting collaborators for humans, capable of better serving humanity and the Earth.

In 2008, I explored the future of augmented reality and used it in my projects as an input medium to access information from the digital world. Now, as part of my current project, TeleTouch, I am exploring the inverse of augmented reality. So far it has remained an input medium, where information is accessed via devices. In this project I am trying to use augmented reality as an output medium, to touch and control things that are far away in the real world. I am trying to explore how I can touch something far away, for instance, my door 20 metres from me.

The technology will enable users to point their smartphone’s camera at a scene and control everything they see on the screen by touching it. Users can interact with their appliances from afar and perform tasks like opening the door or switching a light on or off, just by touching the objects on the phone’s screen.
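
The interview does not describe how TeleTouch works internally, but the idea of mapping a touch on the camera preview to a real appliance and sending it a command can be sketched roughly as follows. The object positions, device addresses and command format here are all assumptions made for illustration, not Mistry’s actual design.

```python
# Speculative sketch: a touch on the phone's camera preview is mapped to the
# appliance occupying that part of the image, and a toggle command is sent.
# Bounding boxes and device URLs are invented placeholders.
import requests

# Bounding boxes (x1, y1, x2, y2) in preview coordinates, assumed to come
# from an object detector running on the live camera frame.
APPLIANCES = {
    "door_lock": {"box": (40, 200, 180, 460), "url": "http://192.168.1.20/toggle"},
    "desk_lamp": {"box": (300, 120, 420, 300), "url": "http://192.168.1.21/toggle"},
}

def handle_touch(x, y):
    """Find the appliance under the touch point and toggle it."""
    for name, info in APPLIANCES.items():
        x1, y1, x2, y2 = info["box"]
        if x1 <= x <= x2 and y1 <= y <= y2:
            requests.post(info["url"], timeout=2)  # hypothetical device endpoint
            return name
    return None

# Touching the lamp on screen would switch the real lamp on or off.
print(handle_touch(350, 200))
```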

Q. Last but not least, tell us about your association with EFY.
A. I grew up reading EFY in India, and it was such a pleasure and surprise to be approached by you to be featured in it. I am sure my dad will feel great; he used to teach me and make all sorts of stuff from EFY guides.

