Multitouch Computing Surface

The multitouch computing surface is a highly ambitious keystone project which has already generated a great deal of interest amongst students, faculty and other community members. It was constructed over the course of two semesters by several students in the Department of Computer Science: Jason Webb researched and coordinated the project and helped Jess Bazer with the actual construction of the system. Also involved in the project were Philip Lempke and Paden Hogeland.


Overview

Built mainly from plywood and 2×2″ lumber, the table's operation centers on four key components: an acrylic surface, infrared lighting, a projector and a vision system. Infrared light is flooded into the edges of the acrylic surface, which is specially designed to diffuse light uniformly. When a user touches the table or places a compliant marker or object on it, the infrared light illuminates only the points of contact, and a webcam continuously analyzes the surface for this information.
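The analysis step above boils down to finding the bright spots a touch creates in the camera frame. Here is a minimal, self-contained sketch of that idea (not the actual vision code used on the table): threshold a grayscale frame, then group bright pixels into blobs with a flood fill, reporting each blob's centroid as a touch point.

```python
def find_blobs(frame, threshold=128):
    """Group bright pixels (IR reflections from touches) into blobs.

    frame: a 2D list of brightness values (0-255), as from a
    grayscale webcam image. Returns a list of blobs, each a set of
    (row, col) pixel coordinates, found via 4-connected flood fill.
    """
    rows, cols = len(frame), len(frame[0])
    visited = [[False] * cols for _ in range(rows)]
    blobs = []
    for r in range(rows):
        for c in range(cols):
            if frame[r][c] >= threshold and not visited[r][c]:
                blob, stack = set(), [(r, c)]
                visited[r][c] = True
                while stack:
                    y, x = stack.pop()
                    blob.add((y, x))
                    for ny, nx in ((y - 1, x), (y + 1, x),
                                   (y, x - 1), (y, x + 1)):
                        if (0 <= ny < rows and 0 <= nx < cols
                                and frame[ny][nx] >= threshold
                                and not visited[ny][nx]):
                            visited[ny][nx] = True
                            stack.append((ny, nx))
                blobs.append(blob)
    return blobs

def centroid(blob):
    """Average pixel position of a blob -- the reported touch point."""
    ys = sum(y for y, _ in blob) / len(blob)
    xs = sum(x for _, x in blob) / len(blob)
    return (ys, xs)
```

Two fingers on the surface would show up as two separate bright regions, so `find_blobs` returns two blobs and `centroid` gives each finger's position.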

Surface technology

Infrared light is flooded into the edges of the acrylic in order to create an invisible means of identifying user input. This is achieved through the process of Frustrated Total Internal Reflection (FTIR), implemented in practice with a strip of infrared SMD LEDs mounted inside an aluminum U-bracket frame surrounding the acrylic surface. For our table we chose a special type of acrylic called EndLighten, which contains a high density of translucent particles that uniformly diffuse any light passed into the acrylic throughout the entire surface.
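A practical detail when wiring LED strips like these is sizing the series resistor for each chain of LEDs. The numbers below are purely hypothetical (substitute your own supply voltage and the forward voltage and current from your LED datasheet), but the Ohm's law arithmetic is the same for any build:

```python
def series_resistor(supply_v, led_forward_v, leds_per_chain, current_a):
    """Resistance needed to drop the leftover voltage across a chain
    of LEDs in series: R = (Vs - n * Vf) / I.
    """
    drop = supply_v - leds_per_chain * led_forward_v
    if drop <= 0:
        raise ValueError("supply voltage too low for this many LEDs")
    return drop / current_a

# Hypothetical example: a 12 V supply driving chains of 6 IR LEDs,
# each with a 1.5 V forward drop, at 100 mA:
#   R = (12 - 6 * 1.5) / 0.1 = 30 ohms
```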

Vision system and projector

Characteristic of many tangible computing systems, our table uses a closed-loop feedback system consisting of a projector for output and a vision system (a modified PS3Eye webcam) for input. Both are connected to a computer which performs real-time image analysis and feature extraction on the webcam's image stream, acts upon that input and displays output through the projector. The result is a seamless user experience: anyone can walk up to the table and begin interacting with information in an intuitive and natural way. Currently we use the open-source, community-driven image analysis program Community Core Vision (CCV), which handles the common task of blob tracking and identification, on top of which developers can create multitouch applications.
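The "tracking" part of blob tracking means assigning each touch a persistent ID across frames, so an application can tell a dragged finger from a new one. CCV does this for us; the toy sketch below (not CCV's actual algorithm) shows the idea with simple nearest-neighbour matching:

```python
import math

class BlobTracker:
    """Toy nearest-neighbour tracker: gives each touch point a
    persistent ID across frames. Points are (x, y) pairs in
    normalised surface coordinates; max_dist is how far a touch may
    move between frames and still count as the same finger.
    """
    def __init__(self, max_dist=0.1):
        self.max_dist = max_dist
        self.next_id = 0
        self.tracked = {}   # id -> (x, y)

    def update(self, points):
        matched = {}
        unclaimed = dict(self.tracked)
        for p in points:
            # Match each new point to the closest unclaimed old point.
            best = min(unclaimed,
                       key=lambda i: math.dist(unclaimed[i], p),
                       default=None)
            if best is not None and math.dist(unclaimed[best], p) <= self.max_dist:
                matched[best] = p           # same finger, moved a little
                del unclaimed[best]
            else:
                matched[self.next_id] = p   # a new touch appeared
                self.next_id += 1
        self.tracked = matched              # unmatched old IDs = lifted fingers
        return matched
```

In each frame, a point near a previously tracked touch keeps that touch's ID, while a point far from every known touch gets a fresh one; IDs that go unmatched are treated as lifted fingers and dropped.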

Want to work on this project?

Become a member of ACM@UNK and get involved. Talk to faculty and other students; it's a very good idea to come into the project with an idea of what you'd like to use the table for, then learn whatever you need to achieve it. This project currently needs your help!