More Observations

After this post I AM going to make videos ;)

I spent some time last night running some basic tests under non-optimal (but good) conditions:

1) Double click/single click/long tap/short tap
These can all be supported using in-air interactions and pinch gestures. I'd estimate over 90% detection accuracy for everything apart from single click. Single click is harder since it can only be flagged after the delay for detecting a double click has expired, and this leads to some lag in the responsiveness of the application.
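To make that single-click lag concrete, here's a minimal sketch of the usual disambiguation pattern (not my actual prototype code; the 350 ms window and the names are placeholders): a first pinch "click" is held as pending, and only promoted to a single click once the double-click window expires.

```python
import time

DOUBLE_CLICK_WINDOW = 0.35  # seconds; an illustrative value, not measured

class ClickDetector:
    """Disambiguates single vs double clicks from pinch events."""

    def __init__(self, on_single, on_double):
        self.on_single = on_single
        self.on_double = on_double
        self.pending_since = None  # time of an unconfirmed first click

    def pinch_click(self, now=None):
        """Call on each detected pinch 'click'."""
        now = time.monotonic() if now is None else now
        if self.pending_since is not None and now - self.pending_since <= DOUBLE_CLICK_WINDOW:
            self.pending_since = None
            self.on_double()
        else:
            self.pending_since = now

    def poll(self, now=None):
        """Call every frame; fires the single click only after the window expires."""
        now = time.monotonic() if now is None else now
        if self.pending_since is not None and now - self.pending_since > DOUBLE_CLICK_WINDOW:
            self.pending_since = None
            self.on_single()
```

The unavoidable consequence is that every single click arrives at least one full double-click window late, which is exactly the lag I'm seeing.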

2) The predator/planetary cursor design
In order to increase the stability of my primary marker when only looking at a single point, e.g. when air drawing, I decided to modify my cursor design. I feel that both fiducial points should be visible to the user, but it didn't quite "feel" right using either the upper or lower fiducial when concentrating on a single point, hence I've introduced a mid-point cursor that always sits halfway between the two fiducials. The "feel" when interacting is now much better, since the "pinch point" is where we would naturally expect a pen to be.
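For concreteness, the mid-point cursor is just the average of the two fiducial positions; a trivial sketch, assuming each fiducial arrives as an (x, y) pixel coordinate:

```python
def midpoint_cursor(upper, lower):
    """Return the cursor position halfway between the two fiducials.

    upper, lower: (x, y) pixel coordinates of the tracked fiducials.
    """
    return ((upper[0] + lower[0]) / 2.0, (upper[1] + lower[1]) / 2.0)
```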

3) Pinch movement
In relation to the above, though, the fact that pinching/unpinching moves the points is causing me some issues with accuracy and with extraneous points being added to any drawing. I'm hoping to overcome this through better accuracy in pinch/unpinch events, however THAT is tied back to the accuracy of the fiducial positioning/area detection.
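One standard mitigation, sketched below (this isn't in the prototype, and the pixel thresholds are invented), is hysteresis on the inter-fiducial distance, so that jitter around a single threshold can't rapidly toggle pinch/unpinch and spray stray points into a drawing:

```python
PINCH_ON = 20.0   # px: assert a pinch below this distance (illustrative)
PINCH_OFF = 35.0  # px: release the pinch above this distance (illustrative)

class PinchState:
    """Debounces pinch/unpinch using two thresholds (hysteresis)."""

    def __init__(self):
        self.pinched = False

    def update(self, distance):
        """distance: current gap between the two fiducials, in pixels."""
        if not self.pinched and distance < PINCH_ON:
            self.pinched = True
        elif self.pinched and distance > PINCH_OFF:
            self.pinched = False
        return self.pinched
```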

4) Kalman filtering
I'm not too sure how happy I am with the Kalman filtering on the input. While it increases stability, it creates a more "fluid" movement of the marker, which isn't good for tight changes in direction. That said, it makes the air-writing feel very smooth. I wish I could increase the FPS... which I may attempt by making the markermonitor use sockets rather than pipes. However, I feel I've spent enough time on the technical details of the prototype and am loath to spend more at this point.
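For reference, a filter along these lines is what I mean (a generic constant-velocity Kalman sketch, not my actual code; dt, q and r are illustrative). The smooth-but-laggy trade-off lives in the process noise q: raising it lets the estimate follow tight direction changes, at the cost of the fluid feel.

```python
import numpy as np

class CursorKalman:
    """Constant-velocity Kalman filter over a 2D cursor position."""

    def __init__(self, dt=1 / 30.0, q=1e-2, r=4.0):
        self.x = np.zeros(4)              # state: [px, py, vx, vy]
        self.P = np.eye(4) * 500.0        # initial state uncertainty
        self.F = np.eye(4)                # transition: position += velocity*dt
        self.F[0, 2] = self.F[1, 3] = dt
        self.H = np.zeros((2, 4))         # we only observe position
        self.H[0, 0] = self.H[1, 1] = 1.0
        self.Q = np.eye(4) * q            # process noise: raise to track turns
        self.R = np.eye(2) * r            # measurement noise (pixels^2)

    def step(self, zx, zy):
        """Feed one raw fiducial measurement; returns the filtered position."""
        # Predict
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        # Update
        y = np.array([zx, zy]) - self.H @ self.x
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P
        return self.x[0], self.x[1]
```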

5) Breathing
I was surprised at how much impact breathing makes when sitting down. Depending on the distance between the fiducials and the camera, the displacement during a deep breath can be massive and is enough to cause gestures to be poorly recognised. A more advanced system would have to compensate for this, as well as for any movement involved in walking, so gyros/accelerometers are a must in the longer term. This was already a known requirement for any projection system and has been looked at in some papers (see Murata & Fujinami 2011), hence I expect that any "real world" system would have access to this data. Not too much I can do about this at this point.
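Purely as an illustration of what compensation might look like without an IMU (again, not in the prototype): breathing appears as a slow, roughly 0.2-0.4 Hz sway of both fiducials together, so a one-pole high-pass filter per axis would attenuate it, though it would also suppress intentional slow movements:

```python
class HighPass:
    """One-pole high-pass filter; removes slow drift such as breathing sway."""

    def __init__(self, alpha=0.95):  # alpha closer to 1 => lower cutoff
        self.alpha = alpha
        self.prev_raw = None
        self.prev_out = 0.0

    def step(self, raw):
        """Feed one raw coordinate; returns the drift-suppressed value."""
        if self.prev_raw is None:
            self.prev_raw = raw  # initialise so the first output is zero
        out = self.alpha * (self.prev_out + raw - self.prev_raw)
        self.prev_raw, self.prev_out = raw, out
        return out
```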

6) Right arm/lower right quadrant block when sitting down
This one surprised me. When sitting down and using the right arm for movement, the lower right-hand quadrant nearest the body is essentially "blocked" for use in my system, since it's difficult to move the arm back to this position. It isn't an issue when standing.

I plan on making some tweaks to the pinch/unpinch detection to see if I can improve its accuracy, along with some UI changes to support it, but the next step with the prototype is to take some empirical measurements of the system's performance.

Right, time to make some videos.

Murata, S. & Fujinami, K. (2011). Stabilization of Projected Image for Wearable Walking Support System Using Pico-projector.
