Wired’s Gadget Lab has a story today about new specs that are trickling out about the upcoming release of Asus’ color e-reader. It’s an unbelievable set of technical specifications with an even more unbelievable price tag. According to rumors, the Eee E-Reader will have two color touchscreens and will be about the size of a hardcover book. In addition, it will have a webcam and microphone for accessing Skype. According to the source of the leak, all this will go for approximately $165! I ain’t holding my breath on that one… Even at a higher price the device looks to be a fantastic offering. I might have to retire my aging Sony PRS-500. 2010 is going to be a good year to buy an e-reader.
There have been a couple of tech news reports in the last week that focus on new plastic technologies, especially for eBook/Reader applications.
An article in the IEEE Spectrum, “Inside the Plastic Electronics Revolution”, outlines the work that Plastic Logic has done in developing plastic-based electronics. These cheap and low-power polymer-based transistors are perfect for applications like eBook reader devices and interactive signage.
Arizona State University has recently shown prototypes of flexible active-matrix displays. The technology was funded by military grant programs, and early devices will be used by the military first. A representative from ASU’s Flexible Display Center believes consumer applications may be available in as soon as 18 months. According to the press release, the “electrophoretic” screens are lightweight and consume only a fraction of the power of a typical LCD.
Very cool stuff just over the horizon.
If you’re like me, you do a lot of reading online. Unfortunately, much of the page is often taken up by superfluous and sometimes distracting clutter. Here’s a simple little tool that works on some (but not all) pages to help make it a bit easier to read:
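Tools like this generally work by stripping a page down to its main text. As a toy illustration (not how any particular declutterer is actually implemented), here’s a sketch in Python that keeps only paragraph text, assuming the article body lives in plain `<p>` tags:

```python
from html.parser import HTMLParser

class ParagraphExtractor(HTMLParser):
    """Collect the text inside <p> tags, ignoring everything else."""
    def __init__(self):
        super().__init__()
        self.in_p = False
        self.paragraphs = []
        self.buf = []

    def handle_starttag(self, tag, attrs):
        if tag == "p":
            self.in_p = True
            self.buf = []

    def handle_endtag(self, tag):
        if tag == "p" and self.in_p:
            self.in_p = False
            text = "".join(self.buf).strip()
            if text:
                self.paragraphs.append(text)

    def handle_data(self, data):
        if self.in_p:
            self.buf.append(data)

# A made-up page: an ad wrapper around one real paragraph.
page = "<div class='ad'>BUY NOW</div><p>The actual article text.</p>"
extractor = ParagraphExtractor()
extractor.feed(page)
print("\n\n".join(extractor.paragraphs))
```

Real tools use much smarter heuristics (text density, link ratios, and so on), but the basic move is the same: keep the prose, drop the chrome.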
Since it has become a popular destination for students, Wikipedia has had a special place in the hearts of faculty, and by that I don’t mean a nice and sunny place. It’s often argued that because it is freely editable by anyone and everyone, the overall quality of the articles is suspect. Perhaps so, but that assertion has never been demonstrated conclusively, and there are those who argue the exact opposite is in fact the case.
Researchers at PARC have given us all a tool that might help us come to a better conclusion by providing what they call “social transparency” with respect to Wikipedia articles and their editors. Check out their really interesting work on the issue:
They’ve also included a quick start if you’re not exactly sure how it works.
Michael Arrington over at TechCrunch just demoed the second prototype of their custom touchscreen tablet. After reading the article and checking out the videos of the prototype I found myself using Will Smith’s words from Independence Day: “I gotta get me one of these!” The tablet runs Ubuntu, and the user interface is essentially a browser OS (it runs a custom version of WebKit). I love that you get a full 1024×768 resolution display, which is well-suited to web browsing. You can access all of your favorite sites, including the Google suite of tools, Wikipedia, Hulu.com, and YouTube. The designers think they could produce this thing for about $300. Sign me up! I hope they move forward with production.
Check out the details and demo videos at TechCrunch.
Zoetrope gives the user a range of data-interaction tools that harness snapshots of a web page over time. Through the DOM, the user can interact with individual components of a page, especially data-driven components. “Lenses” can be placed over these data-driven areas to expose how their values change over time. These changes can be graphed or visualized within Zoetrope and can be linked with other lenses on the same page or even on other sites. This is truly incredible software. The web is really becoming a giant database, and we are reaching a point where tools are popping up all over the place to harness this data and visualize it so that it becomes meaningful in everyday contexts.
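The lens idea is easy to picture with a toy sketch. Assuming we have saved timestamped snapshots of a page (the snapshots, pattern, and `lens` function below are all hypothetical, purely to illustrate the concept), a lens reduces each snapshot to one value, producing a time series ready for graphing:

```python
import re

def lens(snapshots, pattern):
    """A toy 'lens': pull one value out of each timestamped page
    snapshot, yielding a time series that can then be graphed."""
    series = []
    for timestamp, html in snapshots:
        match = re.search(pattern, html)
        if match:
            series.append((timestamp, match.group(1)))
    return series

# Hypothetical snapshots of the same page captured a week apart.
snapshots = [
    ("2008-11-01", '<span id="price">$2.19</span>'),
    ("2008-11-08", '<span id="price">$2.05</span>'),
]
print(lens(snapshots, r'id="price">\$([\d.]+)'))
# → [('2008-11-01', '2.19'), ('2008-11-08', '2.05')]
```

Zoetrope itself works on the live DOM rather than regular expressions, but the shape of the result is the same: page-over-time becomes data-over-time.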
Just ran across some amazing work being done by Dan Goldman, who did his doctoral work at the University of Washington and is currently working as a Senior Research Scientist at Adobe in Seattle.
The research focuses on interacting with video and with objects in video, building on current work in computer vision. The technology lets users annotate video and analyze motion in some remarkable ways. The process uses a storyboard metaphor to visualize a short video clip as a static image; the user can manipulate spatial relationships in the storyboard image in a natural way to interact with the video stream. Some details and references are available on the Adobe Technology Labs site. Check out the technology in action in this amazing video clip on Vimeo:
Sony just announced the latest version of the Portable Reader System, the PRS-700. Hop on over to Gearlog for a quick show and tell.
The new version is $400 and includes some nice features. Sony has added a touchscreen for page turns, including turning multiple pages quickly by swiping and holding. A welcome addition is a set of side LED lights for reading in the dark. Sony also announced changes to the online bookstore, which currently truly sucks. I doubt any changes could make it worse.
For complete specifications and pics, check out Sony’s site.
TeleRead has some additional information from the announcement press conference. One thing Paul Biba mentions in the TeleRead post is that this new version is a great deal faster than previous versions and faster than any current competitor. This is apparently due to Sony’s expertise in writing custom drivers and designing the display processor. A faster eReader. Now I really want one of these….
Microsoft Research recently released a great little panorama image stitching utility. You can check it out at the Microsoft ICE project site. The utility is a free download.
One of the really nice features of this tool is that it can export to many different image formats. Once exported, one could bring the image into, for example, a video editing package to do pan and zoom effects for video. In addition, there is a Deep Zoom Tileset export option that creates a series of stitched image tiles plus some XML data, allowing the image to play back on the web inside Microsoft’s Silverlight 2 browser plugin. The result is a nice pan-and-zoom image similar to what one gets with a QuickTime VR movie. You might have seen this in Microsoft’s Photosynth tool. And this is all free. Grab the software and have some fun!
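The Deep Zoom format behind that export is essentially an image pyramid: the top level is the full image, each level below it halves the resolution down to a single pixel, and each level is cut into tiles. A quick sketch of the level math (a simplification — the real format also specifies tile size, overlap, and an XML descriptor):

```python
import math

def deep_zoom_levels(width, height):
    """Compute (level, width, height) for each level of a Deep Zoom
    pyramid. The highest level is the full image; each lower level
    halves the dimensions (rounding up) until it reaches 1x1."""
    max_level = math.ceil(math.log2(max(width, height)))
    levels = []
    for level in range(max_level + 1):
        scale = 2 ** (max_level - level)
        levels.append((level,
                       math.ceil(width / scale),
                       math.ceil(height / scale)))
    return levels

# A 1024x768 panorama yields an 11-level pyramid (levels 0 through 10).
for entry in deep_zoom_levels(1024, 768):
    print(entry)
```

Because the viewer only fetches the tiles covering the visible region at the current zoom level, even a gigapixel panorama stays responsive in the browser.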
I’m hoping to get a couple of experiments up soon, but I’m waiting on a server configuration change for the Silverlight files to run correctly in the browser. I’ll post them when that happens.
For a fantastic overview of this year’s SIGGRAPH, head on over to Hack a Day and read Eliot Phillips’ post there. Waaaay better than my weak efforts…heheh.
As Eliot points out in his post, most of the papers are online at various locations on the Interwebs, and the links are all aggregated at Ke-Sen Huang’s site. Make sure you check out “Finding Paths through the World’s Photos”, especially if you’ve been following Photosynth, Microsoft Labs’ Seadragon implementation. Extremely cool.