Hardware Design

I finally took the plunge and did the epoxy welds on the prototype rig using some Bondloc titanium epoxy, and it works really well, setting hard in just a few minutes and rock solid after 15. While the rig is incredibly primitive, it does allow me to shoulder mount both the projector and camera so that they are stable. I just need to make a final decision regarding placement (left vs right for camera/projector) ~ my initial take was to place the projector on the right so as to be in line with my dominant eye, but I now think it is more important (and has more potential) to have a better correlation between the forward-facing camera and the dominant hand for pointing...which should also reduce occlusion of the projection.

I'm slowly uncovering papers in this area and found another one today, Designing a Miniature Wearable Visual Robot, which details the design rationale behind a robotised wearable camera. Mayol et al. (2002) use a 3D human model to examine different frames of reference and requirements for the device, identifying 3 frames (the wearer's body and active task; alignment to static surroundings; the wearer's position relative to independent objects). They also identify 2 requirements: decoupling of the wearer's motion from the motion of the sensor, and the provision of a wide field of view. Since we are dealing with a static rather than a motorised sensor, only the first frame is of particular relevance; however, it is interesting to note how a robotised system would enable these different frames.

They also note that, given the proximity of the device to other humans:

"a sensor able to indicate where it is looking (and hence where it is not looking) is more socially acceptable than using or wearing wholly passive sensors" (P1)

This is a very interesting point, since social acceptance is a major factor influencing the usability of "always on" wearable systems.

They go on to examine the 3 factors used in their analysis of the optimal location to wear the robot: FOV, user motion, and view of the "handling space", which they define and stress the importance of in the following statement:

"The area immediately in front of the chest is the region in which the majority of manipulation occurs, based on data from biomechanical analysis" (P2 cites [2])


Of final relevance to us is their discussion of the results of fusing these criteria. The forehead is identified as the optimal position but is discounted due to the "importance of decoupling the sensor's attention from the user's attention", and alternative positions are considered. Their analysis concludes that if maximal FOV and minimal motion are the most important factors, the shoulder is the optimal alternative.
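To make the idea of "fusing" placement criteria concrete, here is a minimal sketch of a weighted-sum fusion over candidate mount positions. The positions and criteria follow the paper's analysis, but the numeric scores and weights below are invented placeholders for illustration, not data from Mayol et al.:

```python
# Illustrative criteria fusion for wearable sensor placement.
# The candidate positions and criteria mirror Mayol et al.'s analysis;
# the scores and weights are hypothetical, NOT the paper's measurements.
positions = {
    "forehead": {"fov": 0.9, "low_motion": 0.8, "handling_view": 0.7},
    "shoulder": {"fov": 0.8, "low_motion": 0.7, "handling_view": 0.8},
    "chest":    {"fov": 0.5, "low_motion": 0.9, "handling_view": 0.9},
}

def fuse(scores, weights):
    """Weighted sum of normalised (0..1) criterion scores."""
    return sum(weights[c] * scores[c] for c in weights)

# Weighting FOV and motion stability most heavily, as the paper suggests.
weights = {"fov": 0.4, "low_motion": 0.4, "handling_view": 0.2}
ranked = sorted(positions, key=lambda p: fuse(positions[p], weights),
                reverse=True)
print(ranked)  # forehead ranks first, shoulder second
```

With these (made-up) numbers the forehead comes out on top and the shoulder second, which matches the shape of the paper's conclusion: once the forehead is ruled out on attention-decoupling grounds, the shoulder is the best remaining spot.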

Phew. And I want one.

Mayol's Robot [1]

Along with the papers I've read on projector positioning, it seems that shoulder mounting wins for both projector and camera ~ happy happy joy joy!



[1] W. Mayol, B. Tordoff, and D. Murray. Designing a miniature wearable visual robot. In IEEE Int. Conf. on Robotics and Automation, Washington DC, USA, 2002.

[2] W. S. Marras. Biomechanics of the human body. In G. Salvendy (ed.), Handbook of Human Factors and Ergonomics, 2nd ed., John Wiley, 1997.



