Category Archives: Websites

Retrievr

Flickr is probably the greatest online photo management and sharing application to hit the internet. You can spend hours browsing through millions of beautiful images from around the world, all categorised and tagged for your convenience. Recently, Christian Langreiter released a new experiment named Retrievr. Retrievr lets you sketch something in your browser and will then search the Flickr photo database for images which it thinks match your sketch!

Retrievr is based on research conducted at the University of Washington on a topic called Fast Multiresolution Image Querying. To describe the process simply: you take an image and apply a wavelet transform to it. A wavelet transform can generate many different representations of the same base data, a lot like varying the compression level when saving an image. From those wavelet coefficients, a signature of the image is formed from the most significant coefficients, while everything non-significant is discarded. Once a signature has been generated for each image, it is stored in the database for fast retrieval later. As a user of Retrievr, you simply create a sketch; it computes the signature for your sketch, compares it against the signatures already stored in the database and returns a set of best matches. At this stage, only a small subset of the images on Flickr have been analysed for use in Retrievr; however, Chris says to email him if you’d like to see another group or set of images included on the site.
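The process above can be sketched in a few lines of code. This is only a toy illustration of the signature idea from the Fast Multiresolution Image Querying paper, with invented helper names; a real system like Retrievr works on larger rescaled images, keeps on the order of 40–60 coefficients per colour channel, and weights them by frequency band, none of which is shown here.

```python
def haar_1d(row):
    """One full 1D Haar wavelet transform: repeatedly replace pairs with
    their average, accumulating the pairwise differences (details)."""
    row = list(row)
    details = []
    while len(row) > 1:
        avgs = [(row[i] + row[i + 1]) / 2 for i in range(0, len(row), 2)]
        diffs = [(row[i] - row[i + 1]) / 2 for i in range(0, len(row), 2)]
        details = diffs + details
        row = avgs
    return row + details

def haar_2d(img):
    """Standard 2D Haar transform: transform every row, then every column."""
    rows = [haar_1d(r) for r in img]
    cols = [haar_1d(col) for col in zip(*rows)]
    return [list(r) for r in zip(*cols)]

def signature(img, m=3):
    """Keep only the m largest-magnitude coefficients (ignoring the overall
    average at position (0, 0)) as (position, sign) pairs; discard the rest."""
    coeffs = haar_2d(img)
    flat = [(abs(v), (y, x), v > 0)
            for y, r in enumerate(coeffs) for x, v in enumerate(r)
            if (y, x) != (0, 0)]
    flat.sort(reverse=True)
    return {(pos, sign) for _, pos, sign in flat[:m]}

def score(sig_a, sig_b):
    """Crude similarity: count coefficients shared at the same position
    with the same sign."""
    return len(sig_a & sig_b)
```

To answer a query, the user's sketch would be rasterised, run through `signature`, and scored against every stored signature, with the highest scores returned as the best matches.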

Ultimately, I think it is an awesome experiment which proves what is possible through utilising Fast Multiresolution Image Querying; however, at this stage I can’t see a real world practical use for it (correction: I can; see comments below).

Slashdot CSS Makeover

Slashdot, one of the most widely known technology sites in the world, has finally had a CSS makeover. The idea of retooling Slashdot was bandied about back in 2003, but the likelihood of anything happening seemed slim. At the time there didn’t seem to be any serious enthusiasm to rebuild the backend, Slashcode – the general sentiment being “if you feel like hacking up Slashcode, then we’d consider using it”. Slashdot had been running on essentially the same HTML 3.2 base for some eight years, simply because “it worked”, and that was really good enough at the time.

It wasn’t long after the idea was mentioned that someone took it upon themselves to get the ball rolling. One of the first widely publicised steps was taken by Daniel M. Frommelt, whose two articles published on A List Apart, Retooling Slashdot with Web Standards and Retooling Slashdot with Web Standards Part II, garnered a lot of support. The articles demonstrated how a huge site like Slashdot could be broken down and reworked into valid, semantic HTML.

The end product for Slashdot is a mostly valid HTML 4.01 document – which is a huge step forward. It would have been awesome if the XHTML utopia were realistic; however, as with many sites, the advertisements and user-contributed content prohibit it. The single largest point of interest about the retooling is the paradigm shift in authoring the HTML. The existing site was built using tables for structure and inline font tags for styling. This has now been replaced with divs for layout and structure, semantic markup, and CSS for the presentation.
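The shift described above is the classic tables-to-CSS move. A minimal illustration of the difference (the markup below is invented for this example, not taken from Slashdot’s actual templates):

```html
<!-- Old style, HTML 3.2 era: a table for layout, font tags for styling -->
<table width="100%"><tr>
  <td bgcolor="#006666"><font color="#ffffff" size="4"><b>Story title</b></font></td>
</tr></table>

<!-- Retooled: semantic markup, with all presentation moved into CSS -->
<div class="article">
  <h2 class="story-title">Story title</h2>
</div>
<style>
  .story-title { background: #066; color: #fff; font-size: 1.2em; }
</style>
```

The semantic version is shorter, means something to screen readers and search engines, and lets the entire site be restyled by editing one stylesheet instead of thousands of table cells.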

Now that serious work has been done on Slashcode, we might see more frequent updates and new features added to it.