1. Throwable Displays using the Wii remote
This I actually built and demoed in my lab at CMU. But, it only existed for about two days before I had to break it down to move, and I didn’t get a chance to document it. Several months ago, a patent filed by Philips about throwable displays in games made some of the tech news sites. But it was a concept patent, pretty far from a working demo. However, it turns out it’s pretty easy to implement using a projector, a wiimote, an IR emitter, and some of our trusty retro-reflective tape. It essentially combines the techniques from the finger tracking and the wiimote whiteboard projects. You put a little bit of reflective tape on each corner of a square piece of foam core, turn on the IR emitter so the Wiimote can see the four corners, align the camera tracking data with the projector using the 4-point calibration, and then the projector can display images perfectly aligned to the edges of a moving piece of foam core. The process of using a projector to augment the appearance of objects is called “Spatially Augmented Reality”.
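If you want to try re-creating this, the last step is just a perspective warp. Here’s a rough Python/OpenCV sketch (my choice of library, not part of the original demo) assuming you’ve already pulled the four corner dots from the Wiimote and run them through the 4-point calibration so they’re in projector coordinates. The corner ordering here only fixes the winding, not which corner is "top-left", so a real version would also track corner identity between frames.

```python
# Minimal sketch: warp a source image onto the four tracked corners of the
# foam core and render it into a full projector-resolution frame.
import cv2
import numpy as np

def order_corners(pts):
    """Sort the 4 tracked (x, y) dots by angle around their centroid so the
    homography always uses a consistent winding (rotation is still ambiguous)."""
    pts = np.asarray(pts, dtype=np.float32)
    c = pts.mean(axis=0)
    angles = np.arctan2(pts[:, 1] - c[1], pts[:, 0] - c[0])
    return pts[np.argsort(angles)]

def render_frame(image, corners, proj_w=1024, proj_h=768):
    """Map `image` so its corners land on the tracked corners (already in
    projector coordinates), black everywhere else."""
    h, w = image.shape[:2]
    src = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
    dst = order_corners(corners)
    H = cv2.getPerspectiveTransform(src, dst)
    return cv2.warpPerspective(image, H, (proj_w, proj_h))
```

Run `render_frame` once per 100Hz camera report and push the result to the projector; the perceived latency then comes mostly from the projector refresh, as described above.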
Research colleagues of mine made a really fun demo where they tracked an air hockey puck from above and projected down on the air hockey table to display all sorts of visual effects that responded to the location/motion of the puck. They were demonstrating a fancy new type of high-speed tracking system. But, the Wiimote works quite well at 100Hz. I wish I had documented the throwable display on video, because it worked quite well. You really could pick it up and throw it around and the video image stayed fairly locked onto the surface. There was a small latency, primarily due to the 60Hz refresh of the projector. I even made a rough demo of the air hockey table, but it was VERY rough - it just drew a line tail behind the puck. Again, a little patch of reflective tape on the puck and an IR-ring-illuminated Wiimote above. However, the throwable display concept is actually a simpler implementation of a project I did earlier on “Foldable Displays” (tracked using a Wii remote), which I did make a video of, but not in tutorial format like my other Wii videos:
2. 3D tracking using two (or more) Wii remotes
Since the tracking in the Wiimote is done with a camera, if you have two cameras you can do a simple stereo vision triangulation to get full 3D motion capture for about $100. This was actually already done by some people at the University of Cambridge:
This is a textbook computer vision algorithm, but I haven’t gotten around to making a C# implementation. Obviously, you can use more than two Wii remotes to increase tracking stability as well as increase occlusion tolerance. This would be a VERY useful and popular utility if anyone out there wants to make a nice software tool that transforms multiple Wiimotes into a cheap mocap system.
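If someone wants a starting point, the core math is just intersecting the two rays, one from each Wiimote. Here’s a rough Python sketch assuming both cameras have already been calibrated for position and orientation; the 1024x768 reporting grid and the roughly 41 x 31 degree field of view are approximate figures for the Wiimote camera, not exact specs.

```python
import numpy as np

# Approximate Wiimote IR camera parameters (dots reported on a 1024x768 grid,
# field of view roughly 41 x 31 degrees).
RES = np.array([1024.0, 768.0])
FOV = np.radians([41.0, 31.0])

def pixel_to_ray(px, R):
    """Turn a reported dot position into a unit ray direction in world
    coordinates, given the camera's world rotation matrix R (from calibration)."""
    n = (np.asarray(px, float) / RES) * 2.0 - 1.0        # normalize to [-1, 1]
    d_cam = np.array([n[0] * np.tan(FOV[0] / 2),
                      n[1] * np.tan(FOV[1] / 2),
                      1.0])                               # simple pinhole model
    d = R @ d_cam
    return d / np.linalg.norm(d)

def triangulate(p1, d1, p2, d2):
    """Midpoint of closest approach between the two rays p_i + t_i * d_i,
    where p_i is each camera's position and d_i its ray direction."""
    w0 = p1 - p2
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ w0, d2 @ w0
    denom = a * c - b * b            # near zero when the rays are nearly parallel
    t1 = (b * e - c * d) / denom
    t2 = (a * e - b * d) / denom
    return ((p1 + t1 * d1) + (p2 + t2 * d2)) / 2.0
```

With more than two remotes you’d just triangulate every pair that sees the dot and average (or least-squares fit) the results, which is where the extra stability and occlusion tolerance come from.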
3. Universal Pointer using the Wii remote
The nice thing about the camera is that it can detect multiple points in different configurations. The four dots could be used to create a set of barcode-like or glyph-like identifiers above each screen in a multi-display environment. This would not only provide pointing functionality on each screen, but also provide a screen ID, which means you could interact with any cooperating computer simply by pointing at its screen. No fumbling for the mouse and keyboard; just walk around the room, or office building, or campus, and point at a screen. If all the computers were networked, you could carry files with your Wiimote virtually (using the controller ID), letting you copy/paste or otherwise manipulate documents across arbitrary screens regardless of what computer is driving the display or what input device is attached to it. You just carry your universal pointer that works on any screen, anywhere, automatically. This makes a big infrastructure assumption, but it really alters the way one could interact with computational environments. The computers disappear and it becomes just a bunch of screens and your universal pointer.
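One simple way to turn four dots into a glyph is to lay them roughly in a line above each screen and use their cross-ratio, which is invariant under perspective, so the same bar reads the same from any viewing angle. Here’s a rough Python sketch; the screen table and matching tolerance are made up for illustration, not part of any real system.

```python
import numpy as np

def cross_ratio(points):
    """Cross-ratio of four roughly collinear dots; unchanged by perspective,
    so it can serve as a viewpoint-independent screen ID."""
    p = np.asarray(points, dtype=float)
    centered = p - p.mean(axis=0)
    direction = np.linalg.svd(centered)[2][0]     # best-fit line direction
    t = np.sort(centered @ direction)             # 1D positions a < b < c < d
    a, b, c, d = t
    return ((c - a) * (d - b)) / ((c - b) * (d - a))

# Hypothetical table of known bar layouts -> screens, built once per room.
KNOWN_SCREENS = {1.35: "office-pc", 1.60: "lab-display", 2.10: "projector"}

def identify_screen(points, tol=0.05):
    """Match the observed cross-ratio to the nearest registered screen."""
    cr = cross_ratio(points)
    ratio, screen = min(KNOWN_SCREENS.items(), key=lambda kv: abs(kv[0] - cr))
    return screen if abs(ratio - cr) < tol else None
```

The same four dots still give you the pointing cursor via the usual homography, so the ID comes essentially for free.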
Similarly, arbitrary objects could have unique IR identifiers. For example, if each lamp in your house had a uniquely shaped Wii sensor bar on it (and they were computer-controlled lamps, of course), you could turn on a specific lamp simply by pointing at it and pressing a button, or dim it by rotating the Wiimote. If it were an RGB LED lamp, you could specify brightness, hue, and saturation with a quick gesture.
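Just to show how little code the dimming gesture needs, here’s a sketch: roll estimated from the accelerometer is mapped straight to a brightness level. The lamp object and the exact axis convention are stand-ins, not a real API.

```python
import math

def roll_degrees(ax, ay, az):
    """Approximate roll angle from the Wiimote accelerometer, using gravity
    only (assumes the remote is held fairly still while dimming; the axis
    choice depends on the remote's convention)."""
    return math.degrees(math.atan2(ax, az))

def dim_lamp(lamp, accel, full_turn=180.0):
    """Map wrist roll to lamp brightness while pointing at the lamp.
    `lamp.set_brightness` is a placeholder for whatever drives the real lamp."""
    roll = roll_degrees(*accel)
    level = min(1.0, max(0.0, (roll + full_turn / 2) / full_turn))
    return lamp.set_brightness(level)
```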
4. Laser Tag using Wii remotes
If you put IR LEDs on each of the Wii remotes, they can see each other. So, you can have a laser-tag-like interaction using just Wii remotes, with no display except perhaps a big score board if you wanted one. You’d have to validate which Wii remote you were shooting at, which you could do using some kind of IR LED blink sequence for confirmation. Just wire the IR LEDs up to the LEDs built into the Wii remote, so their illumination can be computer controlled.
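A rough sketch of how that confirmation could work, with the method names as stand-ins for whatever Wiimote interface you’re using: the host blinks the suspected target’s IR LEDs in a known pattern and checks that the blob the shooter’s camera sees follows it.

```python
import time

def confirm_hit(shooter, target, frames=8, period=0.05):
    """Confirm that the blob the shooter sees really is `target` by blinking
    the target's IR LEDs (wired to the computer-controllable player LEDs) in a
    known on/off pattern and checking the shooter's camera agrees.
    `set_ir_led` and `sees_blob` are hypothetical wrappers, and the timing is
    only approximate for the 100Hz camera."""
    pattern = [(i % 2 == 0) for i in range(frames)]   # simple alternating code
    observed = []
    for state in pattern:
        target.set_ir_led(state)        # host toggles the target's LEDs
        time.sleep(period)              # give the camera a few frames to catch up
        observed.append(shooter.sees_blob())
    target.set_ir_led(True)             # leave the LED on for normal play
    return observed == pattern
```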
5. IR tracking with ID using the Wii remote
This is more technical (and related to the above idea), but it addresses an important issue that I have yet to see done in either commercial or research systems. The problem with IR blob tracking using cameras is that you can’t tell which blob is which. You could blink the LEDs to broadcast their IDs. But this 1) would be slow, because the ID data rate is limited by the frame rate of the camera, and 2) really hurts your tracking rate/reliability, because you don’t know where the dot is when the LED is off. Now, the Wii remote’s camera chip gives a 100Hz update, which might be tolerable for a small number of IDs. But this approach doesn’t really work well when you want fast tracking with lots of unique IDs. One solution is to attach a high-speed IR receiver to the side of the Wii remote for data transmission and simply use the camera for location tracking. The IR receivers used in your TV probably support data rates of around 4000 bps - much higher than the 50 bps sampling limit you could squeeze out of the Wii remote. So, as the LEDs furiously blink their IDs at 4Kbps, they look like they are constantly on to the camera. This yields good tracking as well as many IDs. Now, when you have multiple LEDs transmitting simultaneously, you’ll get packet collisions. So, some type of collision avoidance scheme would be needed, of which there are many to choose from. It will also be necessary to re-associate each data packet with a visible dot, so not all the LEDs can be visible all the time. But you only have to sacrifice a small number of camera frames to support a large number of IDs. You can also probably boost performance if you are willing to accept short-term probabilistic ID association.
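To make the re-association step concrete, here’s a rough Python sketch. It assumes each beacon goes dark for a frame or two right after it transmits its ID packet; the packet source and the tracker object are placeholders rather than real APIs.

```python
def associate_ids(packets, tracker, settle_frames=2):
    """Sketch of re-associating decoded ID packets with visible dots, assuming
    each beacon blanks its LED for a couple of 100Hz camera frames immediately
    after it transmits. `packets` yields decoded IDs from the high-speed IR
    receiver; `tracker` is a hypothetical wrapper around the Wiimote's blob
    reports."""
    for beacon_id in packets:
        before = set(tracker.visible_blobs())
        tracker.wait_frames(settle_frames)        # the beacon's scheduled dark slot
        gone = before - set(tracker.visible_blobs())
        if len(gone) == 1:
            tracker.tag(gone.pop(), beacon_id)    # that dot now carries its ID
        # if several beacons blanked at once the result is ambiguous; the
        # collision avoidance scheme (or a simple retry) keeps this rare
```

Once a dot has been tagged, ordinary frame-to-frame blob tracking carries the ID forward, so only the occasional dark slot costs you camera frames.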