- The main idea is that you threshold the image into three areas: background (light blue), fingers (dark blue; these are shown as an overlay on-screen) and pressure points (not blue).
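The three-way threshold might be sketched like this; the channel comparisons and cutoff values here are invented for illustration (the real ones would depend on the dye, lighting, and camera):

```python
import numpy as np

# Pixel classes, matching the three regions described above.
BACKGROUND, FINGER, PRESSURE = 0, 1, 2

def classify(frame):
    """frame: HxWx3 uint8 array in B, G, R channel order.
    Returns an HxW array of class labels."""
    b = frame[..., 0].astype(np.int16)
    g = frame[..., 1].astype(np.int16)
    r = frame[..., 2].astype(np.int16)
    # A pixel is "blue" when the blue channel clearly dominates.
    # The +20 margin and the 140 brightness cutoff are guesses.
    is_blue = b > np.maximum(g, r) + 20
    classes = np.full(frame.shape[:2], PRESSURE, dtype=np.uint8)
    classes[is_blue & (b > 140)] = BACKGROUND   # light blue: undisturbed dye
    classes[is_blue & (b <= 140)] = FINGER      # dark blue: finger overhead
    return classes                              # everything else: pressure point
```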
- I used a bag of dye for now, since that was easy to make. It might be feasible to tape LEDs to the edges of the table, and use FTIR-like scattering; I’d like to try that later. Actually, if you have one of those cheesy engraved-perspex-plate-with-blue-LEDs-in-the-base things lying around, you might be able to use that.
- Large areas of non-blue are interpreted as fingers. There is a mouse mode, where every touch immediately moves the mouse to that point, and a multi-touch mode which sends an NSNotification with a list of points for each frame. These will of course only be understood by programs that speak this protocol—of which there is currently only one (the rotozoomer at the end).
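The grouping of non-blue areas into per-frame touch points could look something like the sketch below: connected-component labeling over the mask, keeping only blobs above a (made-up) noise threshold and reporting each blob's centroid. In the real app these points would go out in the NSNotification; here it's just a plain Python function.

```python
from collections import deque

def touch_points(mask, min_area=4):
    """mask: 2D list of bools, True where a pixel is non-blue.
    Returns the centroid (row, col) of each blob of at least
    min_area pixels; min_area is an arbitrary noise cutoff."""
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    points = []
    for y in range(h):
        for x in range(w):
            if mask[y][x] and not seen[y][x]:
                # Breadth-first flood fill over the 4-connected blob.
                queue = deque([(y, x)])
                seen[y][x] = True
                pixels = []
                while queue:
                    cy, cx = queue.popleft()
                    pixels.append((cy, cx))
                    for ny, nx in ((cy + 1, cx), (cy - 1, cx),
                                   (cy, cx + 1), (cy, cx - 1)):
                        if 0 <= ny < h and 0 <= nx < w \
                                and mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                if len(pixels) >= min_area:
                    points.append((sum(p[0] for p in pixels) / len(pixels),
                                   sum(p[1] for p in pixels) / len(pixels)))
    return points
```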
- The on-screen display is just a regular transparent OSX window. The background pixels are 100% transparent (alpha=0), and the hands show up as black with alpha 0.1 or so.
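Building the overlay image from the finger mask is just per-pixel alpha assignment. A minimal sketch (in Python rather than the Cocoa drawing code the app presumably uses), with the alpha values taken from the description above:

```python
import numpy as np

FINGER_ALPHA = 26  # roughly 0.1 on a 0-255 scale, per "alpha 0.1 or so"

def overlay_rgba(finger_mask):
    """finger_mask: HxW bool array, True where a hand was detected.
    Returns an HxWx4 RGBA image: fully transparent background,
    translucent black over the hands."""
    h, w = finger_mask.shape
    img = np.zeros((h, w, 4), dtype=np.uint8)  # black, alpha=0 everywhere
    img[finger_mask, 3] = FINGER_ALPHA         # hands: faint black shadow
    return img
```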
So cool to see this sort of innovative thinking and embrace of constraints.