Archive for December, 2008

Antikythera Model Completed

A couple of years ago, I wrote about the world’s earliest known computer, the Antikythera mechanism, finally being decoded with the help of X-ray tomography. Now, two years later, Michael Wright, a former curator at the Science Museum in London, has built a working model of the device. I absolutely love that the Antikythera mechanism was finally cracked by an interested “amateur,” though that is probably a misnomer; in any case, Wright is clearly an extraordinary hardware hacker rather than a research scientist. And I really love that he was able to do what many others have not. Here’s the article on Wired.com, which includes a video interview with Michael Wright from New Scientist.

Adobe Advanced Technologies Lab: Zoetrope

Technology Review has an article about a new tool called Zoetrope developed at Adobe’s Advanced Technologies Lab by Mira Dontcheva. The article includes a video of Zoetrope in action.

Zoetrope gives the user a set of data-interaction tools that harness snapshots of a web page over time. Through the DOM, the user can work with individual components of a page, especially data-driven components. “Lenses” can be placed over these data-driven areas, and the underlying data can be viewed across time. Those changes can be graphed or visualized within Zoetrope and linked with other lenses on the same page, or even on other sites. This is truly incredible software. The web is becoming a giant database, and tools are popping up all over the place to harness that data and visualize it so that it becomes meaningful in everyday contexts.
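To make the lens idea concrete, here is a minimal sketch of what a temporal “lens” amounts to, assuming you already have timestamped HTML snapshots of a page on disk. The directory layout, element id, and regex are hypothetical; this is not Zoetrope’s actual API, just the underlying notion of binding a region of the DOM to a time series.

```python
# Minimal sketch of a temporal "lens": given timestamped HTML snapshots of a
# page, extract one data-driven value from each snapshot and line the results
# up over time. File layout, element id, and pattern are made up for illustration.
import re
from pathlib import Path
from datetime import datetime

SNAPSHOT_DIR = Path("snapshots")   # e.g. snapshots/2008-12-01T12-00.html
PRICE_PATTERN = re.compile(r'<span id="price">\$([\d.]+)</span>')

def lens_over_time():
    """Return (timestamp, value) pairs for the region the lens covers."""
    series = []
    for snapshot in sorted(SNAPSHOT_DIR.glob("*.html")):
        stamp = datetime.strptime(snapshot.stem, "%Y-%m-%dT%H-%M")
        match = PRICE_PATTERN.search(snapshot.read_text())
        if match:  # skip snapshots where the element is missing
            series.append((stamp, float(match.group(1))))
    return series

if __name__ == "__main__":
    for stamp, value in lens_over_time():
        print(f"{stamp:%Y-%m-%d %H:%M}  {value:.2f}")
```

Zoetrope, of course, does all of this visually and interactively; the point is just that a lens is a mapping from a piece of a page to the history of its data.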

Dan Goldman’s Interactive Video Object Manipulation Project

Just ran across some amazing work being done by Dan Goldman, who did his doctoral work at the University of Washington and is currently working as a Senior Research Scientist at Adobe in Seattle.

The research focuses on interacting with video, and with the objects in it, and builds on current work in computer vision. The technology allows users to interact with video in some really amazing ways for annotation and motion analysis. The process uses a storyboard metaphor to visualize a short video clip as a static image, and the user can manipulate spatial relationships in that storyboard image in a natural way to interact with the video stream. Some details and references are available on the Adobe Technology Labs site. Check out the technology in action in this amazing video clip on Vimeo:

Dan Goldman – Interactive Video Object Manipulation Project
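For a rough feel of the storyboard metaphor, here is a small sketch that samples a handful of frames from a clip and tiles them into one static summary image. It assumes OpenCV and NumPy are available, the file names are made up, and it leaves out the object tracking that makes Goldman’s storyboards, and the direct manipulation on top of them, actually work.

```python
# Rough sketch of the storyboard idea only: sample a few frames from a video
# and tile them into one static image. The real project derives its storyboard
# from tracked object motion; this just illustrates summarizing a clip as a still.
import cv2
import numpy as np

def make_storyboard(video_path: str, panels: int = 6) -> np.ndarray:
    cap = cv2.VideoCapture(video_path)
    total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    frames = []
    for i in range(panels):
        cap.set(cv2.CAP_PROP_POS_FRAMES, i * total // panels)  # jump to sample point
        ok, frame = cap.read()
        if ok:
            frames.append(cv2.resize(frame, (320, 180)))        # uniform panel size
    cap.release()
    return np.hstack(frames)                                     # one wide storyboard strip

if __name__ == "__main__":
    cv2.imwrite("storyboard.png", make_storyboard("clip.mp4"))
```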