This year, I'm helping the User Interface Software and Technology (UIST) conference put their proceedings videos online so that more people can access them. So far, I've gotten most of the videos from this year uploaded to the conference YouTube account. If you've closely followed the tech media coverage, you might recognize projects like the pressure sensitive keyboard and Mouse 2.0.
However, one of my favorite projects this year was a muscle sensing system that (among other things) allows you to play Guitar Hero without a guitar. It directly senses the electrical signals in your arms and maps them to the appropriate button presses. This was done by Scott Saponas, a PhD student at the University of Washington exploring a variety of biometric sensing techniques for input.
I also really like this project, which combines a large touch table with other physical input devices such as multiple mice and keyboards, all working together nicely. This prototype was done by Bjoern Hartmann, who recently joined UC Berkeley's faculty.
I like systems that combine many modes of input so that you can dynamically choose the right device for the job and can gracefully scale to multiple people working simultaneously. We are pretty far past having a 1:1 ratio between people and computers. Yet, most systems today are still designed with one device and one user in mind.
Monday, October 26, 2009
Saturday, August 8, 2009
Pressure Sensitive Keyboard
Some of my colleagues in the Applied Sciences group in Microsoft Hardware have gotten some media exposure for one of their recent projects: the pressure sensitive keyboard.
Congrats! It's a very nice prototype, and I look forward to seeing what the students at UIST cook up in the innovation contest.
If you aren't familiar with UIST (User Interface Software and Technology), it is a conference dedicated to new interface research, and one of my favorite conferences to attend. It's where I demonstrated past work like Automatic Projector Calibration and Foldable Interactive Displays, where Chris Harrison (recently known for Physically Changing Displays) showed off Scratch Input, Andy Wilson showed early pre-Surface touch tables, Hrvoje Benko showed Spherical Surface, MSR-Cambridge presented Second Light, and where Jeff Han first demonstrated FTIR. ...and that's just a small sample from the last 2-3 years.
So if you are interested in new interface technology (or you are part of the tech media), I would encourage you to attend UIST in October this year. You'll get to see the latest interface research from around the world and get to talk to the people behind the projects you have read about.
Thursday, July 23, 2009
Rhonda - 3D drawing
It's always good to give people reminders of what is possible when you don't stick with just a mouse and keyboard. This is a very nice piece of interface work for 3D drawing. The system is called Rhonda. The drawing is a bit on the abstract art side, but it's easy to see the level of control he has.
The great thing about 3D drawing is that the current tools are awful, so new ways of doing it are always interesting. Unfortunately, the bad thing about 3D drawing is that relatively few people on the planet really want to do it. So, it's unlikely these interfaces will become widespread outside that domain.
Monday, June 1, 2009
Project Natal
If you've been wondering why my project blog has been pretty quiet, I can finally say it is because I have been helping Xbox with Project Natal. If you haven't seen the vision video, it is definitely worth checking out:
Now, I should preface by saying I don't deserve credit for anything that you saw at E3. A large team of very smart, very hard working people were involved in building the demos you saw on stage. The part I am working on has much more to do with making sure this can transition from the E3 stage to your living room - for which there is an even larger team of very smart, very hard working people involved. The other thing I should say is that I can't really reveal any details that haven't already been made public. Unfortunately.
Speaking as someone who has been working in interface and sensing technology for nearly 10 years, this is an astonishing combination of hardware and software. The few times I’ve been able to show researchers the underlying components, their jaws drop with amazement... and with good reason.
The 3D sensor itself is a pretty incredible piece of equipment, providing detailed 3D information about the environment similar to very expensive laser range-finding systems, but at a tiny fraction of the cost. Depth cameras give you a point cloud of the surface of objects that is fairly insensitive to lighting conditions, allowing you to do things that are simply impossible with a normal camera.
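For anyone who hasn't played with a depth camera before, here is roughly what turning a depth image into that kind of point cloud looks like. This is just a generic pinhole-camera sketch for illustration, not anything from Natal, and the intrinsics in it are made-up placeholders:

// Illustrative only: back-projects a depth image (millimeters per pixel) into a 3D
// point cloud using a generic pinhole camera model. The focal lengths and optical
// center are placeholders, not values from any real sensor.
class DepthSketch
{
    public struct Point3 { public float X, Y, Z; }

    public static Point3[] DepthToPointCloud(ushort[] depthMm, int width, int height,
                                             float fx, float fy, float cx, float cy)
    {
        var points = new Point3[width * height];
        for (int v = 0; v < height; v++)
        {
            for (int u = 0; u < width; u++)
            {
                float z = depthMm[v * width + u] * 0.001f;   // depth in meters
                points[v * width + u] = new Point3
                {
                    X = (u - cx) * z / fx,                   // back-project through the pinhole model
                    Y = (v - cy) * z / fy,
                    Z = z
                };
            }
        }
        return points;
    }

    // Example call with placeholder intrinsics:
    // var cloud = DepthToPointCloud(frame, 320, 240, fx: 285f, fy: 285f, cx: 160f, cy: 120f);
}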
But once you have the 3D information, you then have to interpret that cloud of points as "people". This is where the researcher jaws stay dropped. The human tracking algorithms that the teams have developed are well ahead of the state of the art in computer vision in this domain. The sophistication and performance of the algorithms rival or exceed anything that I've seen in academic research, never mind a consumer product. At times, working on this project has felt like a miniature “Manhattan project” with developers and researchers from around the world coming together to make this happen.
We would all love to one day have our own personal holodeck. This is a pretty measurable step in that direction.
Xbox and Microsoft deserve an enormous amount of credit for taking on such an ambitious project. It's one thing to say "Wouldn't it be cool if…", but it's another thing entirely to say, "Let's dedicate the resources to really make it happen, inventing whatever needs to be invented along the way." I have to say it's pretty neat building the future.
Monday, April 20, 2009
Inspiring robots set to nice music
Since I'm on a bit of a mini-robot kick at the moment, I thought I would share some of the videos I've seen lately that helped inspire it. The elegance of some of these movements and the music remind me of the child-like imagination that we perhaps once had before becoming more jaded with age. At least, the dreams of a little engineer.
This last one is just fun.
Adventures with Bioloid
A couple of weeks ago, I was participating in the Siggraph jury review process, looking at some of the projects submitted this year. There were a couple of submissions using humanoid servo-motor robots. Since I have always had an itch to play with robots that I've never had a chance to scratch, I decided to look into buying one. One of the best selections of these robots I found online was at Trossen Robotics. After a lot of reading and video watching, the current highest-rated robots appear to be the Robonova, Kondo KHR-2HV, Futaba RBT-1, and the edutainment Robotis Bioloid. These are all very impressive robots, and all of them (with the exception of the Bioloid) are used in RoboCup Soccer competitions. Combined with the $900-$1500 price tags, these are definitely not your typical kid's toys.
After much deliberation, I ended up going with the Bioloid. It's one of the better-documented robots with a healthy developer community, and it's highly reconfigurable. It comes with an "Erector Set"-like kit which allows you to build a variety of robots, not just humanoids. However, this modularity comes at the cost of extra weight. So, while the power-to-weight ratio of the servo motors may be comparable to the higher-end robots, the overall performance of the robot is noticeably slower and clunkier. It also happens to be one of the cheaper robots at $900. I really liked the reconfigurability (for future robot projects), the number of degrees of freedom (particularly in the hip), and the amount of community support and English documentation.
When I first got it, I was a little intimidated by the number of pieces in the box. Since it's an educational robot, I was hoping setup was going to be quick and simple. While the instructions are fairly easy to follow, it did take me about 5 hours from opening the box to a completed robot. Assembly requires handling many similar-looking parts and lots of tiny screws. However, it is very satisfying to see the robot slowly take form as you assemble the components.
Once it is done, you do get an urge to say out loud "IT'S ALIVE!" with a grin on your face.
The included CD does have software to program and control the robot, but as I expected, it is somewhat limited to keyframe pose playback and simplified visual programming. My original intent was to run the robot using my own C/C++ or C# program, so I didn't spend much time with the included software other than to verify the robot worked and to get an understanding of the control flow. The C development tools described by the documentation are for writing programs that run on the ATmega128 chip inside the robot. What I wanted was to run the control logic on my PC. However, getting my own software to control the robot ended up being quite a bit more challenging than I had expected.
The first major hurdle was the physical connection. The kit comes with a serial cable for communication with the robot, but it uses a DB-9 connector that is only found on desktops these days, and my main machine is a laptop. The Bioloid has an expansion slot on its control board, the CM-5, for a wireless Zigbee connection. There are a few resources online explaining how to use a Bluetooth module instead of a Zigbee module, so I had ordered a BlueSMIRF module (WRL-08332) from Sparkfun in anticipation of doing this.
The Bioloid controller requires 57600 baud serial communication, but the Bluetooth modules typically come set to 9600 baud. To my frustration, the information on Sparkfun's website on exactly how to re-configure the baud rate is a little obtuse. They have different chipset versions with different command sets. Something I burned about 2 hours learning is that the newer modules, with the BGB203 chip, CANNOT be configured wirelessly over the Bluetooth connection. They have to be configured via the wired TTL TX/RX connections. Moreover, changing the baud rate and saving it to memory requires a TTL connection that can dynamically change its own baud rate, so it can issue the "save to memory" command at the new baud rate. My short-lived attempt at using a second Bluetooth module failed because, while it could issue the "change baud rate" command, it could not issue the "save to memory" command. =oP Anyway, once I got my hands on a USB TTL-232 cable, things went smoothly. One other important thing to check is the Bluetooth passkey of the module (using the configuration commands). In Vista, to make the Bluetooth serial port binding behave nicely, I had to configure the Bluetooth connection to use the passkey. It happened to be set to "0000" on my module despite the documentation from Sparkfun indicating it would be "default".
The second problem I ran into was that once I connected the Bluetooth module to the Zigbee communication pins, I discovered that it is NOT A REPLACEMENT for the PC LINK serial programming cable port at the top of the CM-5. The data from the Zigbee unit is only meant to provide command bytes that trigger behaviors in a program running natively on the CM-5. What I wanted was raw access to the servos so I could run control logic on the PC, and this can only be done via the PC LINK; the data from the Zigbee module never makes it to the servo motor bus. So after some digging, I found a schematic for the CM-5 and found where to piggyback data onto the main PC link. The image below shows where I connected my wires. The TX from the Bluetooth module is attached to the logic-level side of the RS232 level converter. The other wires are connected to the Zigbee pins as described by the reference above.
This is definitely at your own risk and may behave badly if you try to connect the wired PC link cable at the same time. But since I intend to only use the Bluetooth serial connection, this was not a concern for me.
Now, I can run the included software, such as Motion Builder, using the Bluetooth connection as if I had the wired PC Link cable attached. Great! The CM-5 provides some commands such that if you open up a terminal window to the serial port, you can get/set the data for each servo manually. However, the human-readable commands use A LOT of bandwidth overhead. Given that the 57600 baud connection is already running much slower than the 1000000 baud native rate of the Dynamixel AX-12 servo motors, trying to control the robot via these commands was unbearably slow even when executed programmatically, and I kept running into buffer limits on more complex commands.
A not-very-well-documented mode of the CM-5 is "Toss Mode", which appears to be a pass-through mode to the servo motor bus. Put the CM-5 in Manage Mode and hit the Start button. In your PC's terminal window, type "t" then hit enter. It should respond with "Toss Mode". At this point, any bytes sent via the serial connection are pushed directly onto the servo motor bus and vice versa. Finally! Exactly what I wanted. After slowly making my way through the Dynamixel AX-12 user's guide, I now have a small C# library that provides direct control of and communication with the servos via the serial port. It's still pretty rough, but once I clean it up a bit more, I'll probably make it available for download. It is a fairly straightforward implementation of key commands from the Dynamixel user's manual; the hard part was getting the hardware into the right configuration to allow direct communication.
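To give a sense of what those key commands look like on the wire, here is a minimal C# sketch (not my library, just an illustration of the packet format from the AX-12 manual) that sends a single "write goal position" instruction over the Toss Mode serial link. The COM port name and servo ID are placeholders:

// Minimal sketch: builds and sends one Dynamixel AX-12 WRITE_DATA instruction packet
// over the CM-5's Toss Mode serial link. "COM5" and servo ID 3 are placeholders.
using System;
using System.IO.Ports;

class Ax12Sketch
{
    static void Main()
    {
        using (var port = new SerialPort("COM5", 57600, Parity.None, 8, StopBits.One))
        {
            port.Open();
            WriteGoalPosition(port, id: 3, position: 512);   // 512 is roughly mid-range (0-1023)
        }
    }

    static void WriteGoalPosition(SerialPort port, byte id, int position)
    {
        byte lo = (byte)(position & 0xFF);           // goal position is sent low byte first
        byte hi = (byte)((position >> 8) & 0xFF);
        // Packet layout: 0xFF 0xFF ID LENGTH INSTRUCTION PARAMS... CHECKSUM
        // 0x03 = WRITE_DATA, 0x1E = Goal Position register, LENGTH = number of params + 2
        byte[] packet = { 0xFF, 0xFF, id, 0x05, 0x03, 0x1E, lo, hi, 0x00 };
        int sum = 0;
        for (int i = 2; i < packet.Length - 1; i++) sum += packet[i];
        packet[packet.Length - 1] = (byte)(~sum & 0xFF);     // checksum skips the two 0xFF header bytes
        port.Write(packet, 0, packet.Length);
    }
}

Reading data back (present position, temperature, etc.) works the same way with the READ_DATA instruction (0x02), plus parsing the status packet the servo returns.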
The next step is to write my own control and logic software to see if I can make it do more interesting things than simply recall preset poses. There's also a mild annoyance in that the 57600 baud serial link is about 17x slower than the 1000000 baud servo bus speed. If this becomes an issue, I might explore making an alternative controller board that would provide 1000000 baud pass-through, or even put each limb on a separate bus to parallelize I/O, making it even faster. This could result in roughly a 70x speed bump in servo communication (17x from the faster baud rate, times four limbs running in parallel), which would be helpful for real-time control logic.
(update 4-22-09) It looks like Scott Ferguson has C# libraries for controlling a Dynamixel directly via a serial port. He was using a USB2Dynamixel adapter. The bad thing is that it doesn't provide power to the servos, only control. So, using the CM-5 as a wireless control/power brick is still fairly attractive.
Saturday, March 21, 2009
Magnetic Ink
In a bit of procrastineering research, I started looking into making my own ferrofluid. Apparently the best stuff to use these days is Magnetic Ink Character Recognition (MICR) toner. But, it's a little hard to find in bulk. The most amazing work I've seen done with ferrofluid is by Sachiko Kodama:
It's difficult and messy stuff to work with. Not to mention you need to know how to generate custom magnetic fields to move it. So, it's always been a little low on my project list.
Though, in my brief search for materials, I came across this wonderful artwork by flight404. It is done with an application called Processing, which is a programming environment designed for computational art that grew out of the MIT Media Lab. It has definitely evolved quite a bit since the last time I looked at it if you can create these kinds of visuals with it. It's beautiful and all free (as in open source). =o) Maybe I'll try my hand at it again (in what little free time I have). Simply trying to recreate something that approximates this visual style would be a satisfying exercise.
Thursday, March 19, 2009
What would you do with a thousand sheep?
This is absolutely astonishing! What are YOU doing with your sheep? eh? Whatever it is, it's probably not as good as this:
Sunday, March 15, 2009
"Birth of a new art form..."
It's not often you hear that phrase. I don't think I've ever used it myself, but a few people have been tossing it around when talking about the work of Kutiman at Thru-You.com, who remixes videos of musicians from around the world to create amazing new musical/video pieces. The one that is getting the most blog exposure is called The Mother of All Funk Chords. However, the one I think more clearly demonstrates the subtlety and intricacy of this artistic contribution is below, entitled "I am new":
There have certainly been many video remixes before, but this steps it up a few notches in several directions - in no small part facilitated by the abundance and wealth of YouTube performances. One of the earlier examples of musical/video editing that I really enjoyed is the work of Lasse Gjertsen. He started with "human beat boxing", but really stepped it up in the following video. It's worth remembering that he doesn't know how to play these instruments:
On the topic of great examples of musical mixes created with video, this is another wonderful example of what one individual with a camera, a video editor, a few instruments, and some determination can create:
Trying your hand at making something like this would be an excellent procrastineering project.
Monday, February 16, 2009
Wonderful slow-motion stabilized video montage of New York
I'm not sure who Vicente Sahuc is, but he's certainly got a career in cinematography if he wants one. This makes me want to get back into creative film work rather than technical videos. He mentions this was captured with a Casio EX-F1 at 300 fps and edited at 24 fps. A skateboard and a Steadicam Merlin help with the smooth traveling shots. Of course, you could buy one of my Poor Man's Steadycams at 5% of the cost of a Merlin =o). Colorization was done in Premiere and Photoshop.
New York 2008 from Vicente Sahuc on Vimeo.
Monday, January 26, 2009
Impressive and Frightening
This is one of the most impressive (and frightening on several levels) pieces of engineering I've ever seen. When there exists a device that can turn a living tree into logs in under 15 seconds, it is no surprise that deforestation can be a problem. Fortunately, this appears to be a tree farm... for IKEA?
The unbridled and unapologetic efficiency with which this machine performs its function leaves a visceral sensation of both awe and horror. It is disturbingly animal-like. The fact that the tree is mostly debarked by the time it hits the ground makes my jaw drop.
Friday, January 9, 2009
Sensitive Object - make any surface touch sensitive
This is not extremely new technology, but not a lot of people know about it, and it has evolved quite nicely in recent years. Sensitive Object is a French company that specializes in the use of microphones to detect touches anywhere on arbitrary objects. With multiple mics, they can determine the location of touches or even dragging (as well as multiple dragging touches - not shown in the video below).
I've had a chance to play with it in person, and it's pretty impressive stuff. It works a lot better than I would have expected. It doesn't detect touch locations using a triangulation technique (i.e., seeing how long it takes the sound to reach each microphone) because that would vary greatly depending on the material the object is made of (plastic, metal, wood, etc.) and on the shape of the object. Sensitive Object can handle any shape, like a vase or a statue.
They accomplish this using a pattern matching technique. Each touch sound gets compared to a known table of sound-to-location mappings, which means you have to build this mapping during a calibration step (i.e., give the system a couple of examples of touching each location that you want to recognize: touching here sounds like this, touching there sounds like that). This upfront calibration step is somewhat heavy, but once you are done it's quite powerful. You could turn a cardboard box, a basketball, your car, or even your friend's head into a touch sensitive surface (if it's hard enough). Though, larger objects may only be "bang sensitive" surfaces.
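Just to make the pattern matching idea concrete, here is a toy sketch of the general flavor: store a feature vector per calibrated location, then classify a new tap by nearest neighbor. To be clear, this is my guess at the kind of approach involved, not Sensitive Object's actual algorithm:

// Toy sketch: classify a tap by comparing its acoustic feature vector (e.g., band
// energies from each microphone) against templates recorded during calibration.
// Assumes all feature vectors have the same length.
using System;
using System.Collections.Generic;

class TouchTemplate
{
    public string Location;     // e.g., "top-left corner of the cardboard box"
    public float[] Features;    // features extracted from the calibration taps
}

static class TouchMatcher
{
    public static string Classify(float[] tapFeatures, List<TouchTemplate> templates)
    {
        string best = null;
        float bestDist = float.MaxValue;
        foreach (var t in templates)
        {
            float dist = 0;
            for (int i = 0; i < tapFeatures.Length; i++)
            {
                float d = tapFeatures[i] - t.Features[i];
                dist += d * d;                        // squared Euclidean distance
            }
            if (dist < bestDist) { bestDist = dist; best = t.Location; }
        }
        return best;                                  // nearest calibrated location wins
    }
}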