All posts by Alistair Lattimore

About Alistair Lattimore

My name is Alistair Lattimore, I'm in my early 30s and live on the sunny Gold Coast in Australia. I married my high school sweetheart & we've been together for longer than I can remember. Claire and I started our family in September 2008 when Hugo was born and added a gorgeous little girl named Evie in May 2010. You can find me online in the typical hangouts: Google+, Twitter & Facebook.

Google Gears, Shifting Into The Offline World

One of the biggest problems plaguing a lot of new, cool web applications is that they aren’t available when you’re not connected to the internet. That might seem like a silly thing to say; however, certain types of applications lend themselves to being used all the time, so not being able to reach them when you’re offline is a real pain.

Google have decided that it’s a good time to fix this problem by releasing a product known as Google Gears. Google Gears hopes to solve the ‘offline web application’ problem by providing a browser plugin for Windows, Linux and Macintosh which enables a web application to tolerate being offline.
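To give a feel for how that works, here’s a minimal sketch of the Gears LocalServer API, which lets a page capture its own resources so they can be served from a local cache while offline. The store name and file list are placeholder assumptions, and gears_init.js is the small bootstrap script Google distribute alongside the plugin:

    <script type="text/javascript" src="gears_init.js"></script>
    <script type="text/javascript">
      // Only use Gears when the browser plugin is actually installed.
      if (window.google && google.gears) {
        // LocalServer serves captured URLs from a local cache when offline.
        var localServer = google.gears.factory.create('beta.localserver', '1.0');
        var store = localServer.createStore('my-app-store');

        // Capture the resources the application needs to keep working offline.
        store.capture(['/index.html', '/style.css', '/app.js'],
          function(url, success, captureId) {
            // success is true once the URL has been cached locally.
          });
      }
    </script>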

The first public application to see the benefits of Google Gears is the web-based feed reader Google Reader. After installing Google Gears, it’s now possible to disconnect from the internet and continue using Google Reader as if you were connected. After reconnecting, the changes made to the offline data are shipped back to the online version of your Google Reader data and everything continues as normal.

The recent release of Google Gears must come as a bit of a blow to companies like Joyent, who released a product in March known as Slingshot. Slingshot provides similar capabilities to Google Gears; however, it is limited to the Ruby on Rails development platform. I expect it won’t be long before someone provides a Ruby on Rails wrapper over Google Gears and it starts to gain momentum in the development world.

Right at this moment, the idea of an offline web application doesn’t appeal to me. I’m connected to the internet all day at work and it’s available when I get home as well. That attitude would likely change a lot if I were the type of person who travels a lot or has sporadic access to the internet; then it would really shine. I expect some pretty exciting development in this space in the very near future.

Google Image Search Supports ImgType=Face

Google Image Search has supported the imgtype parameter for a long time and it recently received an upgrade, now accepting an imgtype with a value of face.

At the moment, there isn’t an option on the advanced search page to restrict images to that imgtype, so you’ll need to add it into the URL manually. To give you an idea of what it might be useful for, compare the two search results below:
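For example, the two queries below (illustrative URLs, built from the standard Google Image Search query string) search for ‘Alistair Lattimore’ with and without the new parameter:

    http://images.google.com/images?q=alistair+lattimore
    http://images.google.com/images?q=alistair+lattimore&imgtype=face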

The first set of results is anything that has been associated with ‘Alistair Lattimore’, while the latter is meant to filter the results to those containing faces. In that particular example it isn’t perfect, however it’s quite easy to see where this is going. This new functionality is apparently a by-product of Google acquiring Neven Vision last year, who were developing specialist facial recognition software.

Gold Coast Real Estate Agents Suck

Claire and I live in an apartment/townhouse at the Gold Coast, with neighbours on both side walls and above us. We’ve been in our current apartment since December 2004 and we’ve discussed moving into something different.

That idea got put onto the hot plate this week when I called our new on-site managers and asked what we needed to do to move out. That discussion led to us exchanging information about our apartment and her finding out for the first time that it is actually for sale. She was a little shocked by that, since the real estate agent hadn’t informed her, nor have they been coming to her for a key to our apartment to show people through it.

Being a little confused by the whole thing, she started calling around to find out what was going on. We were very surprised to find out that our apartment has in fact been sold for over a month now and that the Gold Coast real estate agent didn’t think it important to inform the body corporate managers or the current tenants! You can probably guess what’s coming next: the new owners intend to move into their new investment in about two weeks.

With that much notice, we’re now hunting at a fairly rapid pace for a new home somewhere at the Gold Coast. We’re not interested in moving into another gated complex, nor are we fussed about sharing walls with people again. Our neighbours over the last two and a half years have been far too inconsiderate to want to go through that again, so we’re looking for a standard house.

We’ve submitted an application for a nice four bedroom house in Pacific Pines; cross your fingers we get it.

Google Acquires DoubleClick

April 13th saw Google finalise a deal to acquire online media and advertising heavyweight DoubleClick Inc. The announcement from Google states that they’ve purchased DoubleClick for USD$3.1 billion in cash from San Francisco based private equity firm Hellman & Friedman along with JMI Equity.

The interesting thing here isn’t that Google have purchased yet another monster business, but that DoubleClick are one of the biggest forces in the online advertising landscape. DoubleClick currently service a different type of online advertising client than Google, so the DoubleClick business will definitely complement Google’s online advertising strategies. More importantly though, Google gain all of the technology DoubleClick have been developing, which focuses strongly on rich media advertising. I expect it won’t be long before we see Google aggressively roll out rich media advertising into their current products such as Google Video and YouTube, and subsequently into the wider market as well.

Interesting times ahead for online advertisers; the internet landscape is changing yet again.

Removing Indexed Content From Google The Easy Way

Google are constantly improving their services, and during April they updated Google Webmaster Tools; this release relates to removing content that has already been indexed by Google.

Google have supported removing content from their index for a long time, however the process was often slow to take effect. With the recent addition of URL removal into Google Webmaster Tools, it’s now possible to process the removal of a page quite quickly.

As with everything associated with Google Webmaster Tools, the web site you want to act on first needs to be verified. Once verified, there is now a URL Removals link under the Diagnostics tab. The removal service supports removing URLs in the following ways:

  • individual web pages, images or other files
  • a complete directory
  • a complete web site
  • cached copies of a web site

To remove an individual web page, image or file, the URL must meet at least one of the following conditions (see the example after the list):

  • return a standard HTTP 404 (missing) or 410 (gone) response code
  • be blocked by the robots.txt file
  • be blocked by a robots <meta> tag
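As a sketch, assuming a page at /old-page.html on an Apache server (both hypothetical), either of the first two conditions could be met like this:

    # robots.txt - block crawlers from the individual URL
    User-agent: *
    Disallow: /old-page.html

    # .htaccess (Apache mod_alias) - return an HTTP 410 (gone) for the URL
    Redirect gone /old-page.html

The robots <meta> tag option is covered further down, under removing cached copies.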

Removing a directory has fewer options available: it must be blocked using the robots.txt file. Submitting http://mydomain.com/folder/ would remove all objects which reside under that folder, including all web pages, images, documents and files.
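Assuming the same hypothetical domain, the matching robots.txt entry would simply be:

    # robots.txt - block crawlers from everything under /folder/
    User-agent: *
    Disallow: /folder/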

To remove an entire domain from the Google index, you need to block it using a robots.txt file and submit the expedited removal request. Google have once more reinforced the point that this option should not be used to remove the wrong ‘version’ of your site from the index, such as a www versus non-www version. To handle that, nominate the preferred domain within Google Webmaster Tools and optionally redirect the wrong version to the correct version using a standard HTTP 301 redirect.
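For the www versus non-www case, one common way to implement that 301 on an Apache server is a mod_rewrite rule along these lines (a sketch only, assuming mydomain.com is the unwanted version):

    # .htaccess - permanently redirect non-www requests to the www version
    RewriteEngine On
    RewriteCond %{HTTP_HOST} ^mydomain\.com$ [NC]
    RewriteRule ^(.*)$ http://www.mydomain.com/$1 [R=301,L]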

Cached copies of web pages can be removed by setting a robots <meta> tag with a noindex value on the given page(s) and submitting the removal request. By using this mechanism, Google will never re-include that URL so long as the robots noindex <meta> tag is present. By removing the tag, you are instructing Google to re-include that URL, so long as it isn’t being blocked by alternate means such as a robots.txt file. If the intention is simply to refresh a given set of web pages, you can also change the content on those pages and submit the URL removal request. Google will fetch a fresh copy of the URLs, compare them against their cached copies and, if they are different, immediately remove the cached copies.
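The tag itself is the standard robots <meta> element, placed in the <head> of each page to be removed:

    <meta name="robots" content="noindex">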

After submitting requests, it’s possible to view the status of each request. A request will list as pending until it has been processed, will be denied if the page does not meet the removal criteria, and once processed will be moved into the ‘Removed Content’ tab. Of course, you can re-include a removed page at any time as well. It should be noted that if you don’t manually re-include the web page(s) after exclusion, the removed page(s) will remain excluded for approximately six months, after which they will be automatically re-included.

Being able to remove content from the Google index so quickly is going to come in handy when certain types of content are indexed by accident and need to be removed with priority.