
Do we have a video?

Yes, we have a video!

http://www.youtube.com/watch?v=GXlmus93o68

I wasn't intending to work on any code this weekend, but I felt compelled to try out the recognition server and run another set of tests, this time with the Logitech C900 in place. Results were an improvement over the PS3 Eye, partly due to the better low-light capability, partly due to the camera placement, and partly due to the wider field of view.

Some anecdotal notes:

The recognition server provided seems to perform better than the unistroke implementation - I still need to sit down and do the numbers, but I wouldn't be surprised if it were significantly better.
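When I do sit down and do the numbers, it shouldn't take much - something along the lines of the sketch below would do (the trial records and recognizer names are made up for illustration, not output from my setup):

# Minimal sketch for tallying recognition accuracy per recognizer.
# Trial records are hypothetical: (recognizer_name, expected_symbol, recognised_symbol).
from collections import defaultdict

def accuracy_by_recognizer(trials):
    """Return {recognizer: fraction of trials where the result matched}."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for recognizer, expected, recognised in trials:
        total[recognizer] += 1
        if recognised == expected:
            correct[recognizer] += 1
    return {name: correct[name] / total[name] for name in total}

if __name__ == "__main__":
    sample = [
        ("server", "a", "a"), ("server", "b", "b"), ("server", "c", "e"),
        ("unistroke", "a", "a"), ("unistroke", "b", "d"), ("unistroke", "c", "e"),
    ]
    for name, acc in accuracy_by_recognizer(sample).items():
        print(f"{name}: {acc:.0%}")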

I suspect user recall for all but the most basic figures/shapes provided by the default unistroke implementation will be poor. On the flip side, most of us already know the alphabet!

Big problem with the use of fiducials on the ends of the fingers - they become obscured during natural hand movements! I ended up cupping the marker in my hand and squeezing it to cover it, so that I had control of the marker's visibility. Keeping the fiducial visible otherwise requires holding the hand in a position that is simply not ergonomic.
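One thought from the cupping trick: rather than fighting occlusion, marker visibility could be used deliberately as a pen-up/pen-down clutch. A rough sketch of what I mean (the tracker interface, names and timings here are assumptions, not code from my tracker):

# Treat deliberate covering/uncovering of the fiducial as a pen clutch.
# Assumes a per-frame tracker loop that reports whether the marker was found.
import time

class VisibilityClutch:
    def __init__(self, hold_time=0.15):
        self.hold_time = hold_time   # seconds the new state must persist before switching
        self.pen_down = False
        self._candidate_since = None

    def update(self, marker_visible):
        """Call once per frame; returns 'down', 'up' or None on a state change.
        Short occlusions from natural hand movement are filtered out by hold_time."""
        if marker_visible == self.pen_down:
            self._candidate_since = None
            return None
        if self._candidate_since is None:
            self._candidate_since = time.time()
            return None
        if time.time() - self._candidate_since >= self.hold_time:
            self.pen_down = marker_visible
            self._candidate_since = None
            return "down" if self.pen_down else "up"
        return None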

After a few hours' use my wrist aches (but then I do suffer from PA).

I had the advantage of visual and audible feedback during this test - I suspect the performance will deteriorate with that removed. 

Another big problem is drawing letters that require multiple strokes - i, k, f, t, x, 4 etc. all cause problems - I have yet to test capitals.
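To illustrate the problem: a unistroke recognizer only sees one continuous point sequence per query, so the dot of an 'i' or the cross of a 't' arrives as a separate, meaningless stroke. One possible workaround (just a sketch of the idea, not something I've implemented yet) would be to merge strokes separated by a short pen-up gap into a single sequence before recognition:

def merge_strokes(strokes, max_gap=0.6):
    """strokes: list of (start_time, end_time, points) in drawing order.
    Strokes separated by a pen-up gap shorter than max_gap seconds are
    concatenated into one point sequence for the recognizer."""
    merged = []
    for start, end, points in strokes:
        if merged and start - merged[-1][1] <= max_gap:
            prev_start, _, prev_points = merged[-1]
            merged[-1] = (prev_start, end, prev_points + list(points))
        else:
            merged.append((start, end, list(points)))
    return [pts for _, _, pts in merged]

if __name__ == "__main__":
    # Two strokes of a 't': the vertical bar, then the cross 0.3s later.
    t = [
        (0.0, 0.5, [(0, 0), (0, 10)]),
        (0.8, 1.0, [(-3, 7), (3, 7)]),
    ]
    print(merge_strokes(t))  # one merged sequence instead of two separate queries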

Obviously there is no support for correction or refinement - while this could be added, I can't see it being possible without visual feedback, which reduces the system's impact on improved situational awareness.

Ramifications - the original sixth sense system had very poor ergonomics as well as suffering from a range of technical issues. The choice of the unistroke recognition engine is likely non-optimal (though this may be implementation dependent), so I will need to revisit it.

Where's the code then, you ask? I may just throw stuff up over the next few days - my god is it tatty, but I'm not going to let code shame stop me. I'd like to have something that performs somewhat better than the current version in terms of interaction support before I do so, though...
