
FOV - camera options

So, continuing to look at cameras: firstly, let me be clear that I have a VERY limited budget for this project, having already pushed the boat out to buy an Optoma PK301 (I'll cover pico-projectors at a later date), so commercial options such as this HQ lens and pre-modded IR cameras are simply out of my price bracket. That makes the PS3 Eye look very tempting, given that they can be picked up on eBay for less than £15 and a large range of hacks have already been done with them.
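Most of those hacks boil down to treating the Eye as a standard webcam once the right driver is loaded. As a minimal sketch (assuming the gspca/ov534 driver on Linux exposes the 320x240 @ 120fps mode and that the Eye is video device 0 - both assumptions on my part), grabbing frames with OpenCV looks something like this:

```python
# Minimal sketch: grab frames from the PS3 Eye via OpenCV. Assumes the Eye
# shows up as video device 0 and that the driver (gspca/ov534 on Linux)
# exposes the 320x240 @ 120fps mode; otherwise it will fall back to some
# other mode the driver supports.
import cv2

cap = cv2.VideoCapture(0)                 # device index 0 is an assumption
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 320)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 240)
cap.set(cv2.CAP_PROP_FPS, 120)
print("FPS reported by driver:", cap.get(cv2.CAP_PROP_FPS))

while True:
    ok, frame = cap.read()
    if not ok:
        break
    cv2.imshow("PS3 Eye", frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()
```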

I wanted to document my comparison of the various options I have considered, though:

Name          FOV (degrees)        fps @320   fps @640   fps @1280   Cost
PS3 Eye       75/56                120        60         NA          £15
C910          83H                  160        60         30          £70
Kinect        58H (IR)/63H (RGB)   30         30         NA          £100
Xtion         58H                  ?          ?          ?           £120
Samsung SII   75?                  ?          ?          30?         NA

The above table is obviously incomplete - I've thrown in the SII since I have one available, but I can't find any specifications for its camera, even in the datasheet, so the numbers are a guesstimate based on a comparison with the C910 [1].

Doing the above research confirmed that I will have to rule out depth-based systems such as the Kinect and Asus's Xtion, since the minimum operating distance of the IR camera is 0.5m in the case of the Kinect and 0.8m for the Xtion. I believe the Kinect's FOV can be improved to 90 degrees via an add-on lens, but that obviously increases the expense. A pity, but the conscious design decision I am now making is to focus on the "natural gesture position" that I illustrated earlier, based on the advantage of it being eyes-free. I am aiming to incorporate a forward-aiming camera as well though, so yes, we're talking about a two-camera system now (possibly with very simple homebrew IR modifications).
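To put some rough numbers on this (purely a back-of-envelope sketch; the 0.25m camera-to-hand distance and 0.4m interaction width below are assumptions for illustration, not measurements of my rig), the FOV needed to cover a strip of width w at distance d is 2*atan(w/2d), and conversely the coverage at a distance is 2*d*tan(FOV/2):

```python
# Back-of-envelope FOV arithmetic. All distances/widths below are assumptions
# chosen for illustration, not measurements of the actual setup.
import math

def required_fov_deg(width_m, distance_m):
    """FOV (degrees) needed to see a strip of the given width at a distance."""
    return math.degrees(2.0 * math.atan(width_m / (2.0 * distance_m)))

def coverage_width_m(fov_deg, distance_m):
    """Width (metres) of the strip visible at a distance for a given FOV."""
    return 2.0 * distance_m * math.tan(math.radians(fov_deg) / 2.0)

# A hand roughly 0.25m from a body-worn camera (assumed) sits well inside the
# Kinect's 0.5m / Xtion's 0.8m minimum range.
print("FOV to cover 0.4m at 0.25m: %.0f deg" % required_fov_deg(0.4, 0.25))      # ~77
print("PS3 Eye (75 deg) coverage at 0.25m: %.2f m" % coverage_width_m(75, 0.25))  # ~0.38
print("Kinect IR (58 deg) coverage at 0.5m: %.2f m" % coverage_width_m(58, 0.5))  # ~0.55
```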

I think the main modification that is going to be needed is to increase the FOV of the camera, and to do so cheaply - some interesting ideas I uncovered for this:

Commercial camera wide angle lens
CCTV wide angle lens
Adapt a lens from a door peep hole

I like the idea of the door peep hole - a nice hack and within my budget.
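As a rough way of estimating what any of these add-on lenses would buy, a wide-angle converter of magnification m effectively divides tan(FOV/2) by m (a thin-lens approximation that ignores the heavy barrel distortion a peep-hole lens would add). The sketch below uses a hypothetical 0.5x converter on the PS3 Eye's 75-degree lens:

```python
# Sketch: approximate effective FOV with a wide-angle converter attached.
# new_fov = 2 * atan(tan(fov/2) / m) for a converter of magnification m.
# The 0.5x magnification is hypothetical, and this ignores the distortion
# a cheap peep-hole lens would introduce.
import math

def converted_fov_deg(base_fov_deg, magnification):
    half = math.radians(base_fov_deg) / 2.0
    return math.degrees(2.0 * math.atan(math.tan(half) / magnification))

print("PS3 Eye 75 deg + 0.5x converter: %.0f deg" % converted_fov_deg(75, 0.5))  # ~114
```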

[1] http://forums.logitech.com/t5/Webcams/Question-on-HD-Pro-c910/td-p/568268
