Gmote on the MyTouch 3G and an Ubuntu 9.10 HTPC

Finally got a chance to play around with Gmote on my MyTouch 3G running Android 1.6. I installed the Android app a couple of months ago and then never set up the server side on any of my desktops. This afternoon I installed the Gmote server on my custom HTPC hooked up to my Samsung HDTV. It’s running Ubuntu 9.10. I’m writing this post on the beautiful 1080p 42″ screen, as a matter of fact.

I downloaded the server tarball from the Gmote website and ran the shell script to set up and start the server. It didn’t run the first time. Ended up having to install the latest JRE – no big deal. Ran the script again and the server started up, prompting me to create a password and point it at my media files. The server uses VLC to access and play media files on the host machine. I’m less interested in that functionality. The thing I was interested in was the remote mouse access: essentially you turn the phone’s touchscreen into a remote touchpad for the server. Sweet.

I turned on wifi on the MyTouch and fired up the Gmote client software. If you’re on the same network, the client will go out and find the server on the default port (8889). If you need to access it across the 3G network you can forward that port from your router. I’ve already got another PC set up as a DMZ and I’m not doing any other port forwarding. The only downside to wifi for me is the battery life the radio eats on the phone. Pretty cool little piece of code. Now I can sit in my recliner and control the machine from 10 feet away, which is good, ’cause that’s where I left my Bushmills…
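
For what it’s worth, here’s a quick Python sketch of the kind of check I use to see whether anything is listening on the Gmote port from another box on the LAN. The hostname is just a placeholder, and this isn’t Gmote’s own discovery mechanism – just a plain TCP poke at port 8889:

```python
import socket

GMOTE_HOST = "htpc.local"   # placeholder hostname for the HTPC running the Gmote server
GMOTE_PORT = 8889           # Gmote's default port

def gmote_server_reachable(host=GMOTE_HOST, port=GMOTE_PORT, timeout=2.0):
    """Return True if something accepts a TCP connection on the Gmote port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    if gmote_server_reachable():
        print("Gmote server is answering on port 8889")
    else:
        print("Nothing listening on port 8889")
```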

Frankencamera or Camera 2.0

Marc Levoy and his graduate students at Stanford are creating an open source camera platform that researchers in digital photography and computational photography can write code on top of. Proprietary cameras make it difficult or impossible to write custom software that takes advantage of new advances in fields like computational photography.


Dan Goldman’s Interactive Video Object Manipulation Project

Just ran across some amazing work being done by Dan Goldman, who did his doctoral work at the University of Washington and is currently a Senior Research Scientist at Adobe in Seattle.

The research focuses on interacting with video, and with objects in video, and relies on current work in computer vision. The technology lets users interact with video in some really amazing ways for annotation and motion analysis. The process uses a storyboard metaphor to visualize a short video clip as a single static image. The user can manipulate spatial relationships in the storyboard image in a natural way to interact with the video stream. Some details and references are available on the Adobe Technology Labs site. Check out the technology in action in this amazing video clip on Vimeo:

Dan Goldman – Interactive Video Object Manipulation Project

Deep Linking in YouTube Videos

Saw an interesting post on TechCrunch yesterday. YouTube recently rolled out a welcome addition to their toolbox: deep linking to a point inside a video stream. It’s a very easy implementation, too. All you need to do is append a ‘#’ to the end of the YouTube URL, followed by the time code. For example:

http://www.youtube.com/watch?v=1flVlL4Mf8k#t=0m20s
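
If you ever want to build these links programmatically, it’s trivial. A little Python sketch that just glues the #t=XmYs fragment onto a video URL:

```python
def youtube_deep_link(video_url, seconds):
    """Append a #t=XmYs fragment so the video starts at the given offset."""
    minutes, secs = divmod(int(seconds), 60)
    return f"{video_url}#t={minutes}m{secs}s"

# e.g. jump to the 20-second mark
print(youtube_deep_link("http://www.youtube.com/watch?v=1flVlL4Mf8k", 20))
# -> http://www.youtube.com/watch?v=1flVlL4Mf8k#t=0m20s
```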

Very useful!

Microsoft Image Composite Editor

Microsoft Research recently released a great little panorama image stitching utility. You can check it out at the Microsoft ICE project site. The utility is a free download.

One of the really nice features of this tool is that it can export to many different image formats. Once exported, one could bring the image into, for example, a video editing package to do pan and zoom effects for video. In addition, there is an export option for a Deep Zoom Tileset that creates a series of stitched images and some XML data that allows the image to play back on the web inside Microsoft’s Silverlight 2 browser plugin. The result is a nice pan and zoom image similar to what one gets with a QuickTime VR movie. You might have seen this in Microsoft’s Photosynth tool. And this is all free. Grab the software and have some fun!
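
Under the hood, a Deep Zoom tileset is just an image pyramid: the full-resolution panorama at the top level, each lower level half the size, and everything cut into fixed-size tiles. Here’s a rough Python sketch of the level and tile counts you’d get for a stitched panorama, assuming the usual 256-pixel tiles; the actual numbers will depend on ICE’s export settings:

```python
import math

def deep_zoom_levels(width, height, tile_size=256):
    """Yield (level, width, height, tile_count) from full resolution down to 1x1.

    Deep Zoom numbers levels so the highest level is the full-resolution image
    and each lower level halves both dimensions (rounding up).
    """
    max_level = math.ceil(math.log2(max(width, height)))
    w, h = width, height
    for level in range(max_level, -1, -1):
        cols = math.ceil(w / tile_size)
        rows = math.ceil(h / tile_size)
        yield level, w, h, cols * rows
        w, h = max(1, math.ceil(w / 2)), max(1, math.ceil(h / 2))

# e.g. a 12000 x 4000 pixel stitched panorama
for level, w, h, tiles in deep_zoom_levels(12000, 4000):
    print(f"level {level:2d}: {w:6d} x {h:5d} -> {tiles} tile(s)")
```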

I’m hoping to get a couple of experiments up soon, but I’m waiting on a server configuration change for the Silverlight files to run correctly in the browser. I’ll post them when that happens.

Speed, Flash, and Traffic: SIGGRAPH 2008 Wrap-Up

Well, another SIGGRAPH is history. It’s been a terrific creative battery recharge. Thursday and Friday highlights included a really cool Production Session on how the various visual effects companies behind Speed Racer went about replicating the look and feel of anime in a live action motion picture, a very entertaining and interesting overview of the use of Adobe Flash for animation, and an absolutely fascinating class on transportation visualization.

Digital Projection, Spatial Augmented Reality, and Shape Grammar – SIGGRAPH 2008

It’s been an inspiring conference so far. The classes I’ve attended have been excellent. On Monday I attended the half-day course on projectors and spatial augmented reality for (I think) the 4th year running. Ramesh Raskar and Oliver Bimber were fantastic as usual. They were joined this year by Aditi Majumder, who spoke about large-format displays, and Hendrik Lensch, who spoke on computational illumination for 3D scene modeling. One of the things I really get excited about in this class is what Raskar calls RFIG. In essence, this entails adding a photosensor to an RFID tag and then projecting structured light from a handheld projector onto the photosensor in order to acquire a relative position for the tagged item. With the unique identifier and the relative position, we can query a database and then project useful information about the identified items directly on the items themselves using the handheld projector. All this is made possible by very small and relatively inexpensive handheld computers with wireless network access and attached projectors. You can check out their work, including the full text of their book, Spatial Augmented Reality, on the supporting website: SpatialAR.com. Great stuff.
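
To make the structured light part a bit more concrete, here’s a toy Python sketch of the kind of decoding involved: the projector flashes a sequence of Gray-coded binary patterns, the photosensor on the tag records one bit per frame (lit or dark), and decoding that bit string recovers the projector column the tag sits under. This is just my illustration of the principle, not Raskar and Bimber’s actual implementation:

```python
def gray_to_binary(gray_bits):
    """Convert a Gray-code bit sequence (MSB first) back to a plain integer."""
    value = gray_bits[0]
    for bit in gray_bits[1:]:
        # next binary bit = current gray bit XOR previous binary bit
        value = (value << 1) | (bit ^ (value & 1))
    return value

def gray_patterns_for_column(column, num_bits):
    """Bits (MSB first) the projector would flash at a given projector column."""
    gray = column ^ (column >> 1)
    return [(gray >> i) & 1 for i in range(num_bits - 1, -1, -1)]

# Simulate a tag sitting under projector column 613 of a 1024-column projector.
NUM_BITS = 10                                   # 2**10 = 1024 addressable columns
observed = gray_patterns_for_column(613, NUM_BITS)  # what the photosensor would record
print(gray_to_binary(observed))                 # -> 613, the recovered column
```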

SIGGRAPH 2008 in Los Angeles

I arrived at the Los Angeles Convention Center today and picked up my credentials for this year’s SIGGRAPH Conference. If you’re unfamiliar with the organization or the conference, it’s a part of the Association for Computing Machinery; SIGGRAPH is the largest SIG (Special Interest Group) in the ACM. The full name is the Association for Computing Machinery Special Interest Group on Graphics and Interactive Techniques, which is more than a mouthful.

This year’s conference has added a couple of new features. The fantastic Animation Festival has been expanded and more deeply integrated with the rest of the conference. Screenings are being paired with artist and tech talks and will run throughout the day, with special big-studio presentations in the evenings, including Pixar and DreamWorks.

In addition, conference organizers are implementing a new RFID tracking component to gather demographic data about attendance at conference sessions and events. The RFID tags are embedded in attendee ID cards (attendees can opt out by recycling the card at the conference). This year is a pilot of the technology, with expanded use next year if all goes well.

I’m especially interested in how this works out. The ILC is partnering with Tulane’s Middle American Research Institute on some interesting RFID projects as MARI moves to new and renovated storage and display space over the next 18 months. On a related note, the half-day class on computer graphics projector technology, including Spatial Augmented Reality, is in the morning at 8:30. I’m hoping to come away with some good ideas for implementing this technology in our MARI-related projects in the near future. I have a meeting set up on Thursday afternoon with engineers at Motorola on current RFID technology, thanks to the gracious assistance of Ian Thomas, Vice-President of Business Development at O’Neill Software, Inc.

I don’t think I could imagine a better week of total geek heaven.

Digital Storytelling at Tulane

At the beginning of last week, I had the pleasure of participating in a workshop on Digital Storytelling hosted by the Innovative Learning Center. The workshop was led by two great facilitators from the Center for Digital Storytelling based in Berkeley, California. Daniel Weinshenker, the Director of the Denver Office, and Jessica McCoy, an instructor based at the Berkeley office, did a wonderful job leading the workshop. If you’re unfamiliar with Digital Storytelling, take a look at the Center’s website. Jessica is also involved with an organization called Stories for Change. Both websites host several amazing examples of digital stories.

What I found most valuable about the workshop was the chance to see a mature process for helping participants formulate and incrementally improve their stories, one in which simply engaging in good faith led to a product that was drastically better for having been through it. Participants were asked to come with a written script in the range of 350 words, or at least a set of notes that could be turned into a script. We were asked to gather as much media as we could, such as pictures, video, sound recordings, and music, that would help to tell the story in more than words.

The LucidTouch’s Novel Approach to Multi-touch Interfaces

Just read a nice article in New Scientist about work by Microsoft and Mitsubishi on a novel approach to handling the occlusion and “fat finger” problems of current multi-touch interfaces. There’s also a video of a prototype of the LucidTouch device.

The current prototype uses a camera on a boom focused on the hands behind the device. A shadow overlay is superimposed on the display, showing the location of the hands without occluding the interface. Active finger touch points are highlighted, and there’s a very intuitive method for showing the hand-off of selected items between fingers. It’s a nice glimpse of what’s ahead in the multi-touch arena.
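
That overlay shadow is essentially alpha compositing: blend a semi-transparent silhouette of the hands, pulled from the rear-facing camera, over the UI so you can see both at once. A minimal numpy sketch of that blend, with made-up arrays standing in for the real camera and display feeds:

```python
import numpy as np

def composite_hand_shadow(display_rgb, hand_mask, shadow_alpha=0.4):
    """Darken the display wherever the rear camera sees a hand.

    display_rgb : float array (H, W, 3) in [0, 1], the UI being shown
    hand_mask   : float array (H, W) in [0, 1], 1 where a hand/finger is detected
    """
    alpha = (hand_mask * shadow_alpha)[..., None]        # per-pixel blend weight
    shadow = np.zeros_like(display_rgb)                  # black silhouette layer
    return display_rgb * (1.0 - alpha) + shadow * alpha  # standard "over" blend

# Toy example: a flat gray screen with a circular "fingertip" in the middle.
h, w = 120, 160
screen = np.full((h, w, 3), 0.8)
yy, xx = np.mgrid[0:h, 0:w]
finger = ((yy - h // 2) ** 2 + (xx - w // 2) ** 2 < 15 ** 2).astype(float)
out = composite_hand_shadow(screen, finger)
print(out.shape, out.min(), out.max())
```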
