
The Patent

http://www.freepatentsonline.com/20100199232.pdf

I was somewhat surprised to come across a patent for Sixth Sense, given the initial declaration that the code would be open sourced. I was even more surprised, reading the contents of the patent, at how general it is... but let's not go there, other than to say I'm not a fan of broad patents. I wanted to bring it up because 1) it is the only "detailed" source of information on the implementation of Sixth Sense, and 2) it's worth acknowledging its existence, since I don't particularly want to be "trolled" in this research.

So yes, it's out there and it's worth a quick skim (or not, since a "clean room" implementation might be advisable, although it's too late for me!), because it tells us that the source for Sixth Sense is largely based on several open source projects (see paragraph 0120). Touchless is used for fiducial recognition, the $1 Unistroke Recogniser algorithm for gesture commands, and ARToolKit for the output. OpenCV is mentioned and possibly does some of the heavy lifting for object recognition (HMMs, perhaps?). I also just realised that the microphone works in tandem with the camera when Sixth Sense is used on paper, probably using the sound of contact with the paper to signal when the destination surface has been touched, since a single camera by itself is insufficient to determine when contact occurs.
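The patent doesn't spell out how the audio is actually used, but a crude amplitude threshold on the microphone stream is one plausible way of flagging contact. A rough sketch of that idea follows; the sounddevice library, the 50 ms block size and the threshold value are all my assumptions, not anything taken from the patent.

```python
import numpy as np
import sounddevice as sd

# Assumed RMS threshold; would need calibrating against the actual microphone
# and the sound a finger or pen makes on paper.
TAP_THRESHOLD = 0.05

def on_audio(indata, frames, time, status):
    """Called by sounddevice for each ~50 ms block of samples."""
    rms = float(np.sqrt(np.mean(indata ** 2)))
    if rms > TAP_THRESHOLD:
        print("possible surface contact (rms=%.3f)" % rms)

# Listen on the default input device in 50 ms blocks at 44.1 kHz.
with sd.InputStream(callback=on_audio, channels=1, samplerate=44100,
                    blocksize=2205):
    sd.sleep(5000)  # listen for 5 seconds
```

In the real system the audio cue would presumably be fused with the camera's estimate of where the fingers are, so a spike in level only counts as a "touch" when a fiducial is already over the target region.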

So what do we do with this knowledge?

I've played with OpenCV and HandVu in the past and found them (for hand tracking at least) not that great, since neither really solves the problem of reliable background segmentation in complex environments. Hence I can see the logic in using fiducials, although a brief play with Touchless suggests that even a fiducial-based recognition system is unlikely to be perfect (at least with a single unmodified webcam). This leads to an important set of requirements for me (a baseline detection sketch follows the list):

CVFB-R1: The computer vision system must be able to reliably determine fiducial positions in complex background images.
CVFB-R2: The computer vision system must be able to reliably determine fiducial positions in varied background images.
CVFB-R3: The computer vision system must be able to reliably determine fiducial positions with varying lighting conditions.
CVFB-R4: CVFB-R1 - CVFB-R3 must be met for 4 fiducial markers, each of a distinct colour.
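To make CVFB-R1 to CVFB-R4 concrete, the baseline I have in mind for testing is the simplest thing that could work: an HSV threshold plus largest-blob centroid per colour in OpenCV. This is a sketch of the test harness rather than a claim that it meets the requirements, and the colour ranges are placeholder assumptions that would need calibrating per camera and per lighting condition (it also assumes a reasonably current OpenCV build).

```python
import cv2
import numpy as np

# Placeholder HSV ranges for four distinctly coloured markers (CVFB-R4).
# These are assumptions and would need calibrating per camera and per
# lighting condition (CVFB-R3).
MARKER_RANGES = {
    "red":    ((0, 120, 70),   (10, 255, 255)),
    "yellow": ((20, 120, 70),  (35, 255, 255)),
    "green":  ((40, 80, 70),   (80, 255, 255)),
    "blue":   ((100, 120, 70), (130, 255, 255)),
}

def find_fiducials(frame_bgr):
    """Return {colour: (x, y)} centroid of the largest blob per colour."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    positions = {}
    for name, (lo, hi) in MARKER_RANGES.items():
        mask = cv2.inRange(hsv, np.array(lo), np.array(hi))
        # Remove small speckles so background noise doesn't win.
        mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
        # OpenCV 4.x returns (contours, hierarchy).
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        if contours:
            m = cv2.moments(max(contours, key=cv2.contourArea))
            if m["m00"] > 0:
                positions[name] = (int(m["m10"] / m["m00"]),
                                   int(m["m01"] / m["m00"]))
    return positions
```

Running this over each frame of the captured videos and logging how often all four markers are found would give a crude pass/fail metric for CVFB-R1 to CVFB-R3.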

Should it prove possible to work without fiducial markers at all, a second set of requirements applies (again, with a baseline sketch after the list):

CVSB-R1: The computer vision system must be able to reliably determine hand shape in complex background images.
CVSB-R2: The computer vision system must be able to reliably determine hand shape in varied background images.
CVSB-R3: The computer vision system must be able to reliably determine hand shape with varying lighting conditions.
CVSB-R4: The computer vision system must be able to reliably discriminate between left and right hands.
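For the marker-less case, the obvious first baseline is skin-colour segmentation, and a crude fixed threshold like the sketch below illustrates exactly why CVSB-R1 to CVSB-R3 are hard: the bounds are commonly quoted textbook values rather than anything calibrated, and they fall apart as lighting and background change.

```python
import cv2
import numpy as np

def hand_mask(frame_bgr):
    """Very crude skin segmentation in YCrCb space.

    The Cr/Cb bounds below are commonly quoted values, not calibrated ones,
    and this is exactly the kind of fixed threshold that breaks down under
    varied backgrounds and lighting (CVSB-R1 to CVSB-R3).
    """
    ycrcb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2YCrCb)
    mask = cv2.inRange(ycrcb, np.array((0, 133, 77)), np.array((255, 173, 127)))
    # Close small holes so the hand comes out as a single blob.
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, np.ones((7, 7), np.uint8))
    return mask
```

Anything more robust (adaptive skin models, background subtraction, contour analysis for left/right discrimination) would be built on top of, or instead of, something like this.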


I rather suspect I'm going to have to be flexible with the tests/thresholds used to determine whether these requirements are met. It should also be noted that no single computer vision technique has been found to work for all applications or environments (Wachs et al., 2011, p. 60), so there may be some opportunity to improve on the generic libraries/algorithms it would seem natural to apply (e.g. Touchless, cvBlob).

Moving on: for those who haven't played with the $1 Unistroke Recogniser (Wobbrock et al., 2007), it's impressive. Based on the published test results for this algorithm, I'd be reasonably confident in its reliability and robustness, IF the above requirements can be met.
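For anyone wondering why it performs so well with so little code: the core of $1 is simply to resample each stroke to a fixed number of points, normalise for rotation, scale and translation, and then score candidates by average point-to-point distance against stored templates. The sketch below covers only the resample-and-compare steps (the rotation and scale normalisation are omitted), so treat it as an illustration of the idea rather than a faithful reimplementation.

```python
import math

def resample(points, n=64):
    """Resample a stroke to n roughly equally spaced points (step 1 of $1)."""
    def dist(p, q):
        return math.hypot(q[0] - p[0], q[1] - p[1])

    pts = list(points)
    interval = sum(dist(pts[i - 1], pts[i]) for i in range(1, len(pts))) / (n - 1)
    resampled, acc, i = [pts[0]], 0.0, 1
    while i < len(pts):
        d = dist(pts[i - 1], pts[i])
        if d > 0 and acc + d >= interval:
            t = (interval - acc) / d
            q = (pts[i - 1][0] + t * (pts[i][0] - pts[i - 1][0]),
                 pts[i - 1][1] + t * (pts[i][1] - pts[i - 1][1]))
            resampled.append(q)
            pts.insert(i, q)  # the new point starts the next segment
            acc = 0.0
        else:
            acc += d
        i += 1
    while len(resampled) < n:  # guard against rounding leaving us one short
        resampled.append(pts[-1])
    return resampled

def path_distance(a, b):
    """Average point-to-point distance between two resampled strokes."""
    return sum(math.hypot(p[0] - q[0], p[1] - q[1]) for p, q in zip(a, b)) / len(a)

def recognise(stroke, templates):
    """Return the name of the template closest to the candidate stroke."""
    candidate = resample(stroke)
    return min(templates,
               key=lambda name: path_distance(candidate, resample(templates[name])))
```

The full algorithm also rotates each stroke to its "indicative angle" and scales/translates it into a reference square before matching, which is what buys its rotation and scale invariance.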

Keeping to the KISS principle, I'm going to use this as the basis of my first experiments (and code, woo-hoo!), which are going to be:

1) Capture short (<5 minute) segments of video with a worn webcam (in my case I have a Logitech C910 handy; not the most discreet of cameras, but sadly my Microsoft LifeCam Show broke, grrrrr) in a variety of environments while wearing fiducial markers on 4 fingers. (A capture script sketch follows this list.)

2) Capture short (<5 minute) segments of video with a worn webcam in a variety of environments without markers.


3) Using these sample videos, test various recognition techniques from OpenCV to determine which technique best meets the above requirements.

4) Apply and test sample gestures against the $1 Unistroke Recogniser (Python implementation).
4.1) (Optional) Determine whether there are any differences in the performance/reliability of the Python versions.
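Steps 1 and 2 don't need anything clever; something along the following lines should do for scripting the capture sessions. The device index, codec, frame rate and file names are assumptions to be adjusted for the actual camera.

```python
import cv2

def record_clip(path, seconds=300, device=0, fps=20.0):
    """Record a short test clip from the worn webcam (experiments 1 and 2).

    The device index, codec and frame rate are assumptions; adjust them
    for whatever the C910 actually delivers.
    """
    cap = cv2.VideoCapture(device)
    if not cap.isOpened():
        raise RuntimeError("could not open webcam %d" % device)
    size = (int(cap.get(cv2.CAP_PROP_FRAME_WIDTH)),
            int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT)))
    writer = cv2.VideoWriter(path, cv2.VideoWriter_fourcc(*"XVID"), fps, size)
    for _ in range(int(seconds * fps)):
        ok, frame = cap.read()
        if not ok:
            break
        writer.write(frame)
    cap.release()
    writer.release()

# e.g. one clip per environment, with and without markers
record_clip("office_markers.avi", seconds=60)
```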

Okay, that's my week planned then. Comments?

REFERENCES


Wachs, J., Kölsch, M., Stern, H. & Edan, Y. 2011, 'Vision-based hand-gesture applications', Communications of the ACM, vol. 54, no. 2, pp. 60-71.

Wobbrock, J.O., Wilson, A.D. & Li, Y. 2007, 'Gestures without libraries, toolkits or training: a $1 recognizer for user interface prototypes', http://depts.washington.edu/aimgroup/proj/dollar/
