SixthSense


            integrating information with the real world


ABOUT                                                                                                               

'SixthSense' is a wearable gestural interface that augments the physical world around us with digital information and lets us use natural hand gestures to interact with that information.

We've evolved over millions of years to sense the world around us. When we encounter something, someone or some place, we use our five natural senses to perceive information about it; that information helps us make decisions and choose the right actions to take. But arguably the most useful information for making the right decision is not naturally perceivable with our five senses: the data, information and knowledge that humanity has accumulated about everything, which is increasingly available online. Although the miniaturization of computing devices allows us to carry computers in our pockets, keeping us continually connected to the digital world, there is no link between our digital devices and our interactions with the physical world. Information is traditionally confined to paper or to a screen. SixthSense bridges this gap, bringing intangible, digital information out into the tangible world and allowing us to interact with this information via natural hand gestures. ‘SixthSense’ frees information from its confines by seamlessly integrating it with reality, and thus making the entire world your computer.

The SixthSense prototype comprises a pocket projector, a mirror and a camera, coupled in a pendant-like mobile wearable device. Both the projector and the camera are connected to the mobile computing device in the user’s pocket. The projector projects visual information, enabling surfaces, walls and physical objects around us to be used as interfaces, while the camera recognizes and tracks the user’s hand gestures and physical objects using computer-vision techniques. The software processes the video stream captured by the camera and tracks the locations of the colored markers (visual tracking fiducials) on the tips of the user’s fingers using simple computer-vision techniques. The movements and arrangements of these fiducials are interpreted as gestures that act as interaction instructions for the projected application interfaces. The maximum number of tracked fingers is constrained only by the number of unique fiducials, so SixthSense also supports multi-touch and multi-user interaction.
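The fiducial tracking described above can be sketched in a few lines: threshold each marker's color range in an RGB frame and take the centroid of the matching pixels. This is only a minimal illustration of "simple computer-vision techniques"; the marker names, colors and thresholds below are assumptions, as the actual prototype's values are not specified here.

```python
import numpy as np

# Hypothetical RGB ranges for the colored fingertip markers
# (illustrative values; the real prototype's colors are unknown).
MARKER_RANGES = {
    "index_red":  ((150, 0, 0), (255, 80, 80)),
    "thumb_blue": ((0, 0, 150), (80, 80, 255)),
}

def track_fiducials(frame, ranges=MARKER_RANGES):
    """Return the (row, col) centroid of each colored marker visible
    in an RGB frame of shape (H, W, 3)."""
    positions = {}
    for name, (lo, hi) in ranges.items():
        lo, hi = np.array(lo), np.array(hi)
        # Pixels whose R, G and B all fall inside the marker's range
        mask = np.all((frame >= lo) & (frame <= hi), axis=-1)
        ys, xs = np.nonzero(mask)
        if len(ys) > 0:  # marker visible in this frame
            positions[name] = (ys.mean(), xs.mean())
    return positions

# Synthetic 100x100 frame with a red marker patch at rows/cols 20..29
frame = np.zeros((100, 100, 3), dtype=np.uint8)
frame[20:30, 20:30] = (200, 30, 30)
print(track_fiducials(frame))  # one centroid for the red marker
```

A real implementation would run this per video frame and feed the per-marker positions to a gesture interpreter.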

The SixthSense prototype implements several applications that demonstrate the usefulness, viability and flexibility of the system. The map application lets the user navigate a map displayed on a nearby surface using hand gestures, similar to gestures supported by multi-touch systems, letting the user zoom in, zoom out or pan using intuitive hand movements. The drawing application lets the user draw on any surface by tracking the movements of the user’s index fingertip. SixthSense also recognizes the user’s freehand gestures (postures). For example, the system implements a gestural camera that takes photos of the scene the user is looking at by detecting the ‘framing’ gesture. The user can stop by any surface or wall and flick through the photos they have taken. SixthSense also lets the user draw icons or symbols in the air using the movement of the index finger, and recognizes those symbols as interaction instructions. For example, drawing a magnifying glass symbol takes the user to the map application, while drawing an ‘@’ symbol lets the user check email. The SixthSense system also augments physical objects the user is interacting with by projecting more information about those objects onto them. For example, a newspaper can show live video news, or dynamic information can be provided on a regular piece of paper. The gesture of drawing a circle on the user’s wrist projects an analog watch.
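The zoom and pan gestures in the map application could be derived from how the distance between two tracked fingertips changes between frames. The sketch below is one plausible mapping, not the prototype's actual logic; the function name and threshold are illustrative assumptions.

```python
def classify_pinch(prev_pts, curr_pts, threshold=5.0):
    """Map the change in spread between two fingertip fiducials across
    two consecutive frames to a map gesture.

    prev_pts / curr_pts: ((x1, y1), (x2, y2)) marker positions.
    The threshold (in pixels) is an illustrative assumption.
    """
    def spread(pts):
        (x1, y1), (x2, y2) = pts
        return ((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5

    delta = spread(curr_pts) - spread(prev_pts)
    if delta > threshold:
        return "zoom_in"    # fingers moving apart
    if delta < -threshold:
        return "zoom_out"   # fingers moving together
    return "pan"            # roughly constant spread: treat as a pan

print(classify_pinch(((0, 0), (10, 0)), ((0, 0), (30, 0))))  # zoom_in
```

A fuller interpreter would also smooth positions over several frames and use the midpoint of the two markers as the pan offset.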

The current prototype system costs approximately $350 to build. Instructions on how to make your own prototype device can be found here

Who Is Pranav Mistry?
Pranav Mistry (b. 1981 in Palanpur, India) is one of the inventors of SixthSense.[1] He is a research assistant and PhD candidate at the MIT Media Lab. Before joining MIT he worked as a UX Researcher with Microsoft. He holds a Master’s in Media Arts and Sciences from MIT, a Master of Design from the Industrial Design Center, IIT Bombay, and a bachelor’s degree in Computer Engineering from Nirma Institute of Technology, Ahmedabad.[2] He is from Palanpur, in northern Gujarat, India. SixthSense has recently attracted global attention.[3][4] Among his previous work, Pranav has invented Mouseless, an invisible computer mouse; intelligent sticky notes that can be searched and located, and that can send reminders and messages; a pen that can draw in 3D; and a public map that can act as a Google of the physical world. Pranav’s research interests include ubiquitous computing, gestural and tangible interaction, AI, augmented reality, machine vision, collective intelligence and robotics.
SixthSense was awarded the 2009 Invention Award by Popular Science.[1] Mistry was also named to the MIT Technology Review TR35 as one of the top 35 innovators in the world under the age of 35.[5] In 2010, he was named to Creativity Magazine's Creativity 50.[6] Mistry has been called "one of the ten best inventors in the world right now"[7] by Chris Anderson, and was listed as one of the 15 Asian Scientists To Watch by Asian Scientist Magazine on 15 May 2011.[8]



About
                                                                                                        

Nothing can both be and not be at one and the same time. And I am, I am Pranav Mistry.

Currently, I am a Research Assistant and PhD candidate at the MIT Media Lab. Before joining MIT I worked as a UX Researcher with Microsoft. I received my Master’s in Media Arts and Sciences from MIT and my Master of Design from IIT Bombay, and completed my bachelor’s degree in Computer Science and Engineering. Palanpur, my hometown, is situated in northern Gujarat in India.

Exposure to fields ranging from Design to Technology and from Art to Psychology gave me an interesting viewpoint on the world. I love to see technology from a design perspective and vice versa. This vision is reflected in almost all of my projects and research work. In short, I do what I love and I love what I do. I am a 'Desigineer' :)



Awards and Achievements

  • Winner of ‘TR35 2009’ award, Technology Review.[9]
  • Winner of ‘INVENTION OF THE YEAR 2009’ award, Popular Science.
  • Winner of ‘Young Indian Innovator 2009’ award, Digit Magazine.
  • Speaker for TED 2009 talk on ‘SixthSense’, TED 2009, Long Beach, CA.
  • 2nd in SPACE competition at SIGGRAPH 2004.
  • 1st in Innovation Fair at India level, for project MARBO.
  • All India 3rd in Open Hardware Contest in Techfest @ IIT Bombay for DATAG2.02.
  • 3rd in Model Presentation at INGENIUM 2002.
  • 3rd in Creative art competition organized by ISRO.
  • 1st in Design competition organized by IEEE, India chapter.
  • 2nd in website designing organized by ACES.
  • Selected for the Dhirubhai Ambani Foundation Award for securing the 1st rank in the district.
  • 2nd in on the spot Model Making contest in Techfest @ IIT Bombay.
