http://www.youtube.com/watch?v=v_cb4PQ6oRs
Took me forever to get around to that one, but I've been trying to solve lots of little problems. There's no sound, so please read my comments on the video for an explanation of what you're seeing.
The main issue I'm now having is with the fiducial tracking. The distance between the centroids of the two fiducials is what the system uses to recognise a pinch gesture, but two things make that distance unreliable: the apparent area of each fiducial varies with its distance from the camera, and the often poor quality of the bounding area for each fiducial makes the area noisy on top of that. As a result I can't get the pinch point to the level where it provides "natural feedback" to the user, i.e. the obvious feedback point where the system's perception and the user's perception should agree: the moment the user can feel that they have touched their fingers together.
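One idea I want to try is taking the scale factor out of the equation by normalising the centroid distance by the apparent fiducial size, so the pinch threshold means roughly the same thing at any distance from the camera. A minimal sketch in Python/NumPy; the ratio threshold and the sqrt-of-area size estimate are just guesses, not tuned values:

```python
import numpy as np

# Illustrative threshold only -- it would need tuning against the
# point where the user actually feels their fingers touch
PINCH_RATIO = 1.5

def pinch_metric(c1, c2, area1, area2):
    """Centroid distance divided by the apparent fiducial size
    (sqrt of the mean blob area). Distance and size scale together
    as the hand moves toward or away from the camera, so the ratio
    stays roughly constant for the same physical finger separation."""
    dist = np.linalg.norm(np.asarray(c1, dtype=float) - np.asarray(c2, dtype=float))
    size = np.sqrt((area1 + area2) / 2.0)  # rough linear scale of one fiducial
    return dist / max(size, 1e-6)

def is_pinched(c1, c2, area1, area2):
    return pinch_metric(c1, c2, area1, area2) < PINCH_RATIO
```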
As it stands, due to the computer vision problems, my system can be off by as much as 1cm :(
I should actually say that it IS possible to reduce this error, but then tracking suffers and the system's state (which is really limited to engaged/unengaged) varies wildly, meaning that dynamic gestures are poorly recognised.
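Something I haven't tried yet that might help with the state flapping: smooth the metric a little and add hysteresis, i.e. separate engage/release thresholds, so a single noisy frame can't flip the state. A rough sketch; the threshold and smoothing values below are made up:

```python
class PinchState:
    """Debounced engaged/unengaged state. An exponential moving
    average absorbs single-frame noise, and hysteresis (separate
    engage/release thresholds) stops jitter near the boundary from
    flipping the state back and forth."""

    def __init__(self, engage=1.3, release=1.8, alpha=0.4):
        self.engage, self.release, self.alpha = engage, release, alpha
        self.smoothed = None
        self.engaged = False

    def update(self, metric):
        # Exponential moving average of the incoming pinch metric
        if self.smoothed is None:
            self.smoothed = metric
        else:
            self.smoothed = self.alpha * metric + (1 - self.alpha) * self.smoothed
        # The state only changes when the smoothed value crosses the
        # threshold on the *far* side of its current state
        if self.engaged and self.smoothed > self.release:
            self.engaged = False
        elif not self.engaged and self.smoothed < self.engage:
            self.engaged = True
        return self.engaged
```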
*sigh*
I could go back to the beginning and do another iteration of the basic marker tracking code. I've mentioned one option (a Laplacian) that I think would enhance performance with my current hardware (and allow me to get rid of the markers!), and I could also do some basic contour detection within the current code, which might enhance things... but this is NOT a computer science thesis I'm working on, and I feel I've trekked further along that road than I had intended already.
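For the record, the sort of thing I have in mind for that next iteration, were I ever to do it, looks roughly like this (an untested OpenCV sketch using the modern cv2 API; the kernel sizes and thresholds are guesses):

```python
import cv2

def find_marker_contours(frame, min_area=50):
    """Laplacian edge enhancement followed by contour extraction.
    All of the magic numbers here are guesses, not tuned values."""
    grey = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    grey = cv2.GaussianBlur(grey, (5, 5), 0)          # suppress sensor noise first
    edges = cv2.Laplacian(grey, cv2.CV_16S, ksize=3)  # second-derivative edge response
    edges = cv2.convertScaleAbs(edges)
    _, mask = cv2.threshold(edges, 30, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    # Keep only blobs big enough to plausibly be a fiducial
    return [c for c in contours if cv2.contourArea(c) > min_area]
```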
Hence any additional code is going to focus specifically on making the interaction with the air-writing interface as fluid as possible. Before that though - SURVEY time!