The study is delayed, since I think I can make more progress with the prototype first and answer some of my questions while opening up new ones :/ I'm glad I know this sort of last-minute upheaval is quite common in research, or I might be panicking (3 months to go, omg!).
I'm still having problems with marker tracking due to varying lighting conditions. At home, my "reliable" green marker doesn't like my bedroom but is great downstairs and in my office. Blue, red and yellow all tend to suffer from background noise. I may have to try pink! Basically, I know that colour-based segmentation and blob tracking is a quick and easy way of prototyping this, but in the real world? Terrible!
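For the record, here's roughly the pipeline I mean: a minimal sketch of colour-based segmentation plus largest-blob tracking (in Python with OpenCV, which is an assumption about the setup, and the HSV bounds for "green" are placeholders). The threshold values are exactly the part that breaks when the lighting changes:

```python
import cv2
import numpy as np

# Placeholder HSV range for a green marker -- these bounds are the fragile
# part: a range tuned for one room's lighting fails in another.
LOWER_GREEN = np.array([40, 70, 70])
UPPER_GREEN = np.array([80, 255, 255])

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, LOWER_GREEN, UPPER_GREEN)
    # Morphological open/close to suppress background noise speckle.
    kernel = np.ones((5, 5), np.uint8)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
    # Treat the largest connected component as the marker blob.
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if contours:
        blob = max(contours, key=cv2.contourArea)
        x, y, w, h = cv2.boundingRect(blob)
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("tracking", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```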
If using dynamic gestures, what are the best symbols to use? In fact, is any semiotic system useful for gesture interaction? One could also ask whether symbolic gestures are really that useful for a wearable system at all...
Where should the camera point, i.e. where should its focus be? I've found myself starting gestures slightly left of any central line, so primarily using the right side of my body. Does this hold true for other people? Is there a specific camera angle that is useful? These are questions for the study.
I was trying out a mockup of the interface on my palm, and while I know the "natural" interaction style would be to make any projected UI elements into touch elements, my tracking just can't support it (in fact, I have to wonder how anyone has done that, since the projection sends histogram-based trackers, e.g. CamShift, insane). Hence there is a gulf between the location of the input modality and that of the visual output modality, which I don't believe makes for an effective interaction paradigm. I've not tried it on other surfaces yet; I want to get a little further with the UI first.
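For anyone wondering why projection and CamShift don't mix, here is a minimal sketch of the standard OpenCV CamShift loop (Python again, with a hypothetical hand-picked initial window rather than my actual pipeline). CamShift back-projects a hue histogram sampled from the initial region, so the moment a bright projected UI lands on the palm, it rewrites the very colour distribution the tracker is following:

```python
import cv2
import numpy as np

cap = cv2.VideoCapture(0)
ok, frame = cap.read()

# Hypothetical initial window over the palm; in practice this would come
# from a detector or a manual selection.
x, y, w, h = 200, 150, 100, 100
track_window = (x, y, w, h)
roi = frame[y:y + h, x:x + w]
hsv_roi = cv2.cvtColor(roi, cv2.COLOR_BGR2HSV)
# Hue histogram of the palm region -- CamShift follows whatever matches this.
roi_hist = cv2.calcHist([hsv_roi], [0], None, [180], [0, 180])
cv2.normalize(roi_hist, roi_hist, 0, 255, cv2.NORM_MINMAX)

term_crit = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    # Back-projection: per-pixel probability under the stored histogram.
    # A bright projected UI on the palm shifts these hues, so the
    # back-projection (and hence the tracker) falls apart.
    back_proj = cv2.calcBackProject([hsv], [0], roi_hist, [0, 180], 1)
    rot_rect, track_window = cv2.CamShift(back_proj, track_window, term_crit)
    pts = cv2.boxPoints(rot_rect).astype(np.int32)
    cv2.polylines(frame, [pts], True, (255, 0, 0), 2)
    cv2.imshow("camshift", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```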
Still hopeful of putting together another demo video by Monday.