Monday, 2 July 2012

I know I should move on and start a new blog, but I'm keeping this as my temporary home.

New project: massive overkill in website creation. I have a simple project to put up a four-page website, which was already somewhat over-specified in being hosted on AWS with S3. That isn't quite ridiculous enough though, so I am using Puppet to manage an EC2 instance (it will eventually need some server-side work) and making the site available in multiple regions. That would almost have been enough, but I'm currently working on being able to provision an instance in either AWS or Rackspace because...well...Amazon might totally go down one day! Yes, it's over-the-top, but I needed something simple to help me climb the DevOps and cloud learning curve.
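
To make the goal concrete, here's roughly what "one instance, either provider" boils down to - a minimal Python sketch using boto, which is emphatically not what Puppet's cloud provisioner does internally; the AMI id, key pair and region are placeholders, and the Rackspace branch is left as a stub:

# Hypothetical provider-dispatch sketch (boto 2.x era API).
import boto.ec2

def provision(provider, region="eu-west-1"):
    if provider == "aws":
        # Credentials are picked up from the environment or ~/.boto.
        conn = boto.ec2.connect_to_region(region)
        reservation = conn.run_instances(
            "ami-00000000",          # placeholder AMI id
            key_name="my-key",       # placeholder key pair
            instance_type="t1.micro",
        )
        return reservation.instances[0]
    if provider == "rackspace":
        # Would go through Rackspace's own API (python-cloudservers
        # at the time); omitted in this sketch.
        raise NotImplementedError("rackspace branch not sketched")
    raise ValueError("unknown provider: %s" % provider)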

So, off the bat - Puppet installation. I've an older Ubuntu 10.04 virtual server which has been somewhat under-taxed, so I've set that up as a puppet master. First lesson: always use the latest version from a tarball unless you have kept the OS upgraded. Getting Puppet Dashboard working alongside this isn't too much of a strain; however, it would help if everyone's instructions included the detail that you need to cd into the install directory to run rake, plus a rough note of where that directory is likely to be. I've linked Vide's post since it's one of the few with the instruction to cd, and on Ubuntu for me it's /usr/share/puppet-dashboard. Of course, this just illustrates that I am getting old, since I should have realised that rake was looking for a local file because it's just Ruby's version of make....doh!

Next up comes provisioning - this is why you need the latest stable version. Follow the instructions over at Puppet Labs on getting started with cloud provisioning...you may find that you also need to update RubyGems from source:

( URL="http://production.cf.rubygems.org/rubygems/rubygems-1.3.7.tgz"
  PACKAGE=$(echo $URL | sed "s/\.[^\.]*$//; s/^.*\///")
  cd $(mktemp -d /tmp/install_rubygems.XXXXXXXXXX) && \
  wget -c -t10 -T20 -q $URL && \
  tar xfz $PACKAGE.tgz && \
  cd $PACKAGE && \
  sudo ruby setup.rb )


Sorry, I can't attribute the above. So, I'm about to spin up some instances....

Saturday, 31 March 2012

Back to life! Oh, and a copy of my final dissertation.

I've been avoiding anything to do with my dissertation since submitting back in January. I must admit the entire experience was rather stressful: my hand-in date was the sixth of January, which made for a rather dire Christmas and New Year, and everything was made much worse by an acute, painful, and prolonged illness which I've only just managed to shake (or rather, I'm temporarily asymptomatic).

However, I just received my results (for my course, not my illness) and it's good news: a pass with distinction, and it seems my markers think my dissertation work isn't quite as insane as I had come to suspect it had become! Woo-pee doo!

Wearable Gestural Interfaces: A Viable Interaction Paradigm?
An Exploratory Study.


I had promised to publish it here, and I'm going to put up a copy exactly as submitted - typos, terrible grammar, and all - in the hope that someone finds something of use in there. I know that I found several excellent master's papers which were of great use while doing mine, and I suspect this may help at least with a few pointers to other work in this area.

Be gentle if you read and comment please :)

Monday, 14 November 2011

Getting the design right vs getting the right design

So today I've been looking at ways to make the text input interaction more fluid, and investigating whether I can add some level of error correction (i.e. a spell checker).
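
By error correction I mean something like the classic single-edit-distance lookup against a word list. A minimal Python sketch of the idea (just an illustration, not what the prototype currently does - the tiny lexicon is made up):

# Generate every string one edit away from `word` and keep the ones
# that appear in the lexicon.
def edits1(word):
    letters = "abcdefghijklmnopqrstuvwxyz"
    splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
    deletes = [a + b[1:] for a, b in splits if b]
    transposes = [a + b[1] + b[0] + b[2:] for a, b in splits if len(b) > 1]
    replaces = [a + c + b[1:] for a, b in splits if b for c in letters]
    inserts = [a + c + b for a, b in splits for c in letters]
    return set(deletes + transposes + replaces + inserts)

def candidates(word, lexicon):
    if word in lexicon:
        return [word]
    return sorted(w for w in edits1(word) if w in lexicon) or [word]

lexicon = {"hello", "world", "gesture"}
print(candidates("helol", lexicon))  # -> ['hello'] via one transposition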

1) Two hours of use, off and on, and my arm aches, my back aches, my wrist aches. Even with the minimal arm movement that is required, my forearm still needs supporting, and to spell a five-letter word I am still moving the focal point about...6 inches across, which is enough to induce fatigue over time. If I had more time I would do an about-turn and look at just using a single finger (although that reintroduces the segmentation issue which the pinch technique solves)....

2) Writing without visual feedback is more taxing than I had thought. I originally did an experiment which involved people walking and writing at the same time as a "proof of concept" exploration. I suspect I had the task wrong though - I should have had them do it blindfolded! Luckily I've still time to repeat that experiment.

3) The limitations of the hacked-together prototype are rapidly becoming apparent. Converting points to words introduces a significant lag into the "glue" application (os6sense.py), and the poor performance of the fiducial tracker - both the "jitter" and the frequent loss of acquisition - is making this painful (see the smoothing sketch after this list).

4) On the positive side, I am both impressed and disappointed by just how well the air-writing can work. It's best with visual feedback, and on single letters performance is excellent. For cursive handwriting performance is....variable. With visual feedback I'd estimate 80% of words appear in the alternate spelling list; without feedback that drops to maybe 50%. I have only a small word list though, so obviously I need to benchmark.

5) Mobility. The need to compensate for body movement is very apparent. Discreet gestures are simply not registered, the recognition rate suffers immensely, and reliability is terrible - very, very unimpressive performance.

6) I had intended to explore the pico-projector more over the next 2 weeks. Sadly the MHL link between my Samsung SII and the projector is unstable, meaning that if I do anything I need to hook up the laptop...except the laptop has heat-related issues which cause it to crash if I put it in my bag for more than 5 minutes :/ Added to this, battery life on the projector is about 20 minutes on a good day.
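
On the jitter in point 3, the obvious first aid is to smooth the tracked point before it ever reaches the point-to-word conversion. A minimal sketch of the kind of filter I mean - an exponentially weighted average with a hold on dropout; an illustration only, not what os6sense.py currently does:

# Smooth (x, y) samples; on loss of acquisition, hold the last fix.
class PointSmoother:
    def __init__(self, alpha=0.25):
        self.alpha = alpha          # lower alpha = smoother but laggier
        self.x = self.y = None

    def update(self, point):
        if point is None:           # tracker lost acquisition
            return (self.x, self.y) if self.x is not None else None
        px, py = point
        if self.x is None:
            self.x, self.y = float(px), float(py)
        else:
            self.x += self.alpha * (px - self.x)
            self.y += self.alpha * (py - self.y)
        return (self.x, self.y)

smoother = PointSmoother()
print(smoother.update((100, 100)))
print(smoother.update(None))        # dropout: holds the last estimate
print(smoother.update((110, 98)))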

All in all, I know this is a prototype, but it's far less impressive than I had hoped for unless I manipulate conditions extensively. Still, that's about par for the course for a v0.1. A good learning experience so far; probably not shaping up to be the greatest Masters thesis ever...but then I knew prototyping was risky.

In other words, I have to wonder if this is the right design for this type of interaction. It's hard to tell how many of the issues I'm experiencing are down to the technology versus issues with the approach I've taken overall. That's a different kettle of fish though!

Wednesday, 9 November 2011

Recording/Playback

I've been busy whipping my literature review and methodology sections together for the last few weeks (with the occasional diversion to tout the surveys - still a very low response so far *sadface*), and I'm heading towards crunch time now, where I'm going to have to bring everything together for a draft version early next month.

Since I'm now more in a documenting than a development phase, I've done little work on the prototype apart from adding recording/playback capabilities, so that a session can be "recorded" and I can then explore whether changes to the gestural interface improve recognition (although that isn't a major aim at this point).
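
The idea is nothing fancy - timestamped samples out to a file, then fed back at the original pace. A minimal sketch of the record/replay loop (the field names here are illustrative, not os6sense.py's actual format):

# Record (timestamp, x, y) samples as JSON lines; replay preserves timing.
import json, time

def record(samples, path):
    with open(path, "w") as f:
        for t, x, y in samples:
            f.write(json.dumps({"t": t, "x": x, "y": y}) + "\n")

def replay(path, handler):
    last_t = None
    with open(path) as f:
        for line in f:
            s = json.loads(line)
            if last_t is not None:
                time.sleep(max(0.0, s["t"] - last_t))  # original pacing
            last_t = s["t"]
            handler(s["x"], s["y"])

record([(0.00, 10, 10), (0.05, 12, 11)], "session.log")
replay("session.log", lambda x, y: print(x, y))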

Again, a quick plea to anyone reading: just a few more responses to the gesture and display surveys and I'll be able to start my analysis of that data, so if you have 5 minutes it would be greatly appreciated.

Monday, 24 October 2011

Final survey now up

Now the hard part: I have to find some people to participate! I've vastly overstated the amount of time needed to take the surveys, since I know everyone is different in how they approach these. If there's any incentive, you get to see me performing gestures...more likely a deterrent, given I didn't even shave beforehand!

If you wander across this blog before December 1st, please take 20 minutes to participate in one of the surveys - it would be a huge help.

Added some annotations to the video...

that is all :)

Camshift Tracker v0.1 up

https://code.google.com/p/os6sense/downloads/list

I thought I'd upload my tracker; watch the video from yesterday for an example of the sort of performance to expect under optimal conditions! Optimal conditions means stable lighting and removing elements of a similar colour to the one you wish to track. Performance is probably a little worse than (and at best similar to) the touchless SDK.

Under suboptimal conditions...well, it's useless, but then so are most trackers, which is a real source of complaint I have about most of the computer vision research out there.....not that the trackers perform poorly, but rather that there is far too little honesty about just how poorly various algorithms perform under non-laboratory conditions.

I've a few revisions to make to improve performance and stability, and I'm not proud of the code. It's been...8 years since I last did anything with C++, and to be frank I'd describe this more as a hack. Once this Masters is out of the way I plan to look at this again and try out some of my ideas, but I really see any approach which relies on colour-based segmentation and standard webcams as having limited applicability.
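
For anyone unfamiliar with the technique, the heart of a colour-based CamShift tracker boils down to something like the following minimal OpenCV sketch (Python here for brevity - the uploaded tracker is C++ and rather more involved; the initial window coordinates are placeholders):

# Back-project a hue histogram of the target region, then let CamShift
# follow the histogram peak from frame to frame.
import cv2
import numpy as np

cap = cv2.VideoCapture(0)
ok, frame = cap.read()

x, y, w, h = 300, 200, 100, 100        # hand-picked initial window
track_window = (x, y, w, h)

hsv_roi = cv2.cvtColor(frame[y:y+h, x:x+w], cv2.COLOR_BGR2HSV)
mask = cv2.inRange(hsv_roi, np.array((0., 60., 32.)), np.array((180., 255., 255.)))
roi_hist = cv2.calcHist([hsv_roi], [0], mask, [180], [0, 180])
cv2.normalize(roi_hist, roi_hist, 0, 255, cv2.NORM_MINMAX)

term_crit = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    back_proj = cv2.calcBackProject([hsv], [0], roi_hist, [0, 180], 1)
    rot_rect, track_window = cv2.CamShift(back_proj, track_window, term_crit)
    (cx, cy), _, _ = rot_rect          # tracked centre; jitters under poor lighting
    print(cx, cy)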

So it's taken a while, but I hope this proves of use to someone some day.