Sunday, 25 September 2011

Do we have a video?

Yes, we have a video!

http://www.youtube.com/watch?v=GXlmus93o68

I wasn't intending to work on any code this weekend, but I felt compelled to try out the recognition server and run another set of tests, this time with the Logitech C900 in place. Results were an improvement on the PS3 Eye, in part due to the better low-light capability, in part due to the camera placement, and in part due to the wider field of view.

Some anecdotal notes:

The recognition server provided seems to perform better than the unistroke implementation - I still need to sit down and do the numbers, but I wouldn't be surprised if it turns out to be significantly better.

I suspect users' recall of all but the most basic figures/shapes provided by the default unistroke implementation will be poor. On the flip side, most of us know the alphabet!

A big problem with the use of fiducials on the ends of the fingers is that they become obscured during natural hand movements! I ended up cupping the marker in my hand and squeezing it to cover it, so that I had control of the marker's visibility. Keeping the fiducial visible requires holding the hand in a position that is simply not ergonomic.

After a few hours' usage my wrist aches (but then I do suffer from PA).

I had the advantage of visual and audible feedback during this test - I suspect the performance will deteriorate with that removed. 

Another big problem is drawing letters that require multiple strokes - i, k, f, t, x, 4 etc. all cause problems (a possible workaround is sketched after these notes) - I have yet to test capitals.

Obviously there is no support for correction or refinement - while this could be supported, I can't see it being possible without visual feedback... which reduces the impact of the system on improved situational awareness.
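
On the multi-stroke problem: one possible workaround, which I haven't implemented yet, would be to group strokes by a short timeout and concatenate them into a single point sequence before handing them to the unistroke recognizer (at the cost of needing a template per stroke order). A rough C++ sketch, with entirely hypothetical Point/Stroke types and a hypothetical mergeStrokes helper:

    // Hypothetical sketch (not part of the current code): merge the strokes of a
    // multi-stroke character (e.g. 't' or 'x') into one point sequence before it
    // is passed to a unistroke recognizer. Strokes are grouped by a simple
    // inter-stroke timeout.
    #include <chrono>
    #include <vector>

    struct Point { float x, y; };

    struct Stroke {
        std::vector<Point> points;
        std::chrono::steady_clock::time_point finished;  // when the stroke ended
    };

    // Concatenate consecutive strokes whose gaps are shorter than 'gapMs'.
    // The merged sequence is then treated as a single "unistroke".
    std::vector<Point> mergeStrokes(const std::vector<Stroke>& strokes, int gapMs)
    {
        std::vector<Point> merged;
        for (size_t i = 0; i < strokes.size(); ++i) {
            if (i > 0) {
                auto gap = std::chrono::duration_cast<std::chrono::milliseconds>(
                    strokes[i].finished - strokes[i - 1].finished).count();
                if (gap > gapMs) break;  // long pause: treat the rest as a new character
            }
            merged.insert(merged.end(),
                          strokes[i].points.begin(), strokes[i].points.end());
        }
        return merged;
    }

Whether the concatenated shapes would be distinct enough to recognise reliably is another question entirely.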

Ramifications - the original sixth sense system had very poor ergonomics as well as suffering from a range of technical issues. The choice of the unistroke recognition engine was likely non-optimal (though this may be implementation dependent) and will need revisiting.

Where's the code then, you ask? I may just throw stuff up over the next few days - my god it is tatty, but I'm not going to allow code shame to stop me. I'd like to have something which performs somewhat better than the current version in terms of interaction support before I do so, though.....

Thursday, 22 September 2011

Update.

Just an update: while I had originally approached this with the intention of releasing the code as open source, my findings regarding... well, various aspects of this project, but in particular the code itself, mean that I'm putting any software development on the back burner for the next few weeks while I perform a study into how people naturally perform gestures. I'm also looking at some options for fixing certain show-stopping issues with the system (primarily the limited FOV of the webcam).

Any code that does emerge for the project, at least for version 0.1, is unlikely to be very robust, but I think that can be overcome: I'm currently thinking that broad colour segmentation followed by some form of object matching technique (e.g. SIFT/SURF) should make for a fairly robust and reasonably fast marker detection algorithm (a rough sketch of what I mean is at the end of this post). However, if the FOV problem can't be solved, I actually think that ANY vision-based system is inappropriate for this sort of interaction style.

Yes, that's a bit damning; however, I am doing HCI research here, not computer vision... and that doesn't mean that I don't have other tricks (literally) up my sleeve :)
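
For the curious, the sort of pipeline I have in mind looks roughly like the sketch below, assuming OpenCV with the xfeatures2d contrib module for SURF; the HSV bounds, the distance threshold and the acceptance test are all placeholders rather than tuned values:

    // Rough sketch of the proposed pipeline, assuming OpenCV with the xfeatures2d
    // contrib module for SURF. Broad HSV colour segmentation picks out a candidate
    // marker region; SURF keypoints matched against a stored marker template then
    // confirm or reject it. Thresholds are placeholders.
    #include <opencv2/opencv.hpp>
    #include <opencv2/xfeatures2d.hpp>
    #include <vector>

    bool markerPresent(const cv::Mat& frameBGR, const cv::Mat& markerTemplateGray,
                       const cv::Scalar& hsvLow, const cv::Scalar& hsvHigh)
    {
        // 1. Broad colour segmentation.
        cv::Mat hsv, mask;
        cv::cvtColor(frameBGR, hsv, cv::COLOR_BGR2HSV);
        cv::inRange(hsv, hsvLow, hsvHigh, mask);

        // 2. Bounding box of the segmented pixels is the candidate region.
        std::vector<cv::Point> nonZero;
        cv::findNonZero(mask, nonZero);
        if (nonZero.empty()) return false;
        cv::Rect roi = cv::boundingRect(nonZero);

        // 3. SURF keypoint matching between the candidate region and the template.
        cv::Mat roiGray;
        cv::cvtColor(frameBGR(roi), roiGray, cv::COLOR_BGR2GRAY);
        auto surf = cv::xfeatures2d::SURF::create();
        std::vector<cv::KeyPoint> kpTemplate, kpRoi;
        cv::Mat descTemplate, descRoi;
        surf->detectAndCompute(markerTemplateGray, cv::noArray(), kpTemplate, descTemplate);
        surf->detectAndCompute(roiGray, cv::noArray(), kpRoi, descRoi);
        if (descTemplate.empty() || descRoi.empty()) return false;

        cv::BFMatcher matcher(cv::NORM_L2);
        std::vector<cv::DMatch> matches;
        matcher.match(descTemplate, descRoi, matches);

        // Crude acceptance test: enough reasonably close matches.
        int good = 0;
        for (const auto& m : matches)
            if (m.distance < 0.25f) ++good;  // threshold chosen arbitrarily
        return good > 10;
    }

The appeal is that the cheap colour segmentation confines the more expensive keypoint matching to a small candidate region, which is what should keep it reasonably fast.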

Saturday, 10 September 2011

Finally...

The children are back at school and I'm back off my hols (a rather interesting time in Estonia, if you're interested).

I've spent most of the last week becoming increasingly frustrated with my attempts at image segmentation. I've moved to a C++ implementation for speed and, while the VERY simplistic HSV segmentation technique I am using works, the problem is that I cannot get it to work robustly, and I doubt it ever will.
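
For context, the segmentation is essentially of the following form - a minimal OpenCV sketch with placeholder threshold values, not the exact code I'm running:

    // A minimal sketch of the kind of HSV segmentation described above, assuming
    // OpenCV: threshold in HSV space, take the largest blob and report its
    // centroid. The bounds below are placeholders for a light green marker and,
    // as noted, are exactly what makes the approach fragile under changing light.
    #include <opencv2/opencv.hpp>
    #include <algorithm>
    #include <vector>

    // Returns true and fills 'centroid' if a sufficiently large blob is found.
    bool trackMarker(const cv::Mat& frameBGR, cv::Point2f& centroid)
    {
        cv::Mat hsv, mask;
        cv::cvtColor(frameBGR, hsv, cv::COLOR_BGR2HSV);
        cv::inRange(hsv, cv::Scalar(40, 80, 80), cv::Scalar(80, 255, 255), mask);

        std::vector<std::vector<cv::Point>> contours;
        cv::findContours(mask, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);
        if (contours.empty()) return false;

        // Largest contour is assumed to be the marker.
        auto largest = std::max_element(contours.begin(), contours.end(),
            [](const std::vector<cv::Point>& a, const std::vector<cv::Point>& b) {
                return cv::contourArea(a) < cv::contourArea(b);
            });
        cv::Moments m = cv::moments(*largest);
        if (m.m00 < 50.0) return false;  // reject tiny blobs (noise)
        centroid = cv::Point2f(float(m.m10 / m.m00), float(m.m01 / m.m00));
        return true;
    }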

I've now covered the range of available techniques, and have even tried to plumb the depths of just-emerging ones, and it seems that every computer-vision-based object tracking implementation or algorithm suffers from the same issues with robustness (OpenTLD, CamShift, Touchless, HSV segmentation, cvBlob, etc.). YES, it can be made to work, but the issues include (depending on the algorithm):

- Object drift: over time the target marker will cease to be recognised and other objects will become the target focus.
- Multiple objects: during segments where the camera is moving, new objects will appear, some of which cannot be differentiated from the target.
- Target object loss: due to changes in size, lighting, speed etc. the target will be totally lost.
- Target jitter: the centroid of the target cannot be accurately determined (a simple smoothing sketch follows below).

I'll expand that list as I think of more.
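
On the jitter point specifically, the obvious band-aid is to smooth the reported centroid from frame to frame; something like the sketch below (the alpha value is an arbitrary choice), though of course it does nothing for drift or outright target loss:

    // Hypothetical sketch: exponential smoothing of the reported centroid to damp
    // frame-to-frame jitter. 'alpha' trades responsiveness for stability and the
    // default here is arbitrary. This does nothing for drift or target loss.
    #include <opencv2/opencv.hpp>

    class CentroidSmoother {
    public:
        explicit CentroidSmoother(float alpha = 0.3f) : alpha_(alpha), first_(true) {}

        cv::Point2f update(const cv::Point2f& raw)
        {
            if (first_) { smoothed_ = raw; first_ = false; }
            else        smoothed_ = alpha_ * raw + (1.0f - alpha_) * smoothed_;
            return smoothed_;
        }

    private:
        float alpha_;
        bool first_;
        cv::Point2f smoothed_;
    };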

So basically, given a semi-static camera, a semi-uniform "background" and uniform lighting, an object can be tracked with some degree of reliability.

It's also worth noting that two variables, fiducial colour and lighting uniformity, have the largest impact on the reliability of tracking. I was incredibly optimistic this week when I tried to segment a light green pen top and found it to be tracked highly accurately during one experiment; but then I returned to the same code and object later in the day, under different lighting conditions, and reliability fell massively until I recalibrated.
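
One thing I may try is deriving the HSV bounds from a sampled patch (e.g. holding the marker in a box drawn on screen at startup) rather than hard-coding them, so that recalibration at least becomes a one-key affair. A rough sketch of the idea, assuming OpenCV; the 2.5-sigma widening factor is an arbitrary choice:

    // A possible recalibration helper, assuming OpenCV: sample a small patch known
    // to contain the marker and derive fresh HSV bounds from the patch statistics
    // instead of hard-coding them.
    #include <opencv2/opencv.hpp>
    #include <algorithm>

    void calibrateHSV(const cv::Mat& frameBGR, const cv::Rect& markerPatch,
                      cv::Scalar& lower, cv::Scalar& upper)
    {
        cv::Mat hsvPatch;
        cv::cvtColor(frameBGR(markerPatch), hsvPatch, cv::COLOR_BGR2HSV);

        cv::Scalar mean, stddev;
        cv::meanStdDev(hsvPatch, mean, stddev);

        const double k = 2.5;  // widen per-channel bounds by k standard deviations
        for (int c = 0; c < 3; ++c) {
            lower[c] = std::max(0.0,   mean[c] - k * stddev[c]);
            upper[c] = std::min(255.0, mean[c] + k * stddev[c]);
        }
    }

It won't cope with lighting that changes mid-session, but it would make the recalibration I keep doing by hand less painful.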

I am unsure of how to proceed next, I must admit; while I didn't expect things to be 100% reliable, I had expected one of the available techniques to produce better results than I have had so far. If I had more raw power to throw at things (and more raw time) I'd return to looking at some of the AI techniques (deep convolutional networks) as well as somewhat simpler SIFT/SURF implementations, but sadly I am out of time for this portion of my research (and in many ways it's THE most crucial aspect)....