What’s next in search? From text-based search to audio search, and now search by sight? What-you-see-is-what-you-can-search is an interesting idea, and it will soon be available for smartphones and Google's Chrome browser. So what exactly does it do?
Imagine you could point your phone at a product you saw in a store, and search results would display, including reviews, prices, availability, and so on. This could be a killer app.
This Google toy project, according to Google engineer Xiuduan Fang, is about image-based search: "I am working on a 20% project to facilitate the input of Web image searching. Chrome extension for Web Goggles." Just in case you don’t know, the 20% figure refers to a Google program that permits engineers to devote a fifth of their time to whatever they think is cool. The idea is that you drag an image into a box and search results are shown. Google Goggles is currently available as an application for phones running Google's Android OS, but they are working to release other versions, too. A Web browser interface would expand the service's availability beyond phones.
There are plenty of situations where you might want to point your phone at a subject while out and about: in a wine store, in the supermarket, in a department store, or in an electronics store. The service compares an uploaded image to a database of billions of images Google has collected and analyzed, just like the facial recognition software they use on CSI. It can recognize landmarks and read the text on products, but until Google works out privacy controls it doesn't make use of its ability to recognize faces. I can see law enforcement people holding up those phones in train stations and airports. Is that a good idea?
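To get a feel for how an image can be matched against a huge database, here is a minimal sketch of one common technique, average hashing. Google's actual pipeline is of course far more sophisticated; the function names, the 4x4 thumbnails, and the hash scheme below are all illustrative assumptions, not Google's method.

```python
# Toy "average hash": reduce an image to a small grid of brightness values,
# then record which cells are brighter than the mean. Similar images yield
# similar bit strings, so matching a query against a database becomes
# cheap bit comparison. (Illustrative only; real image search uses far
# richer features than this.)

def average_hash(pixels):
    """pixels: 2D list of grayscale values (0-255). Returns a tuple of bits."""
    flat = [v for row in pixels for v in row]
    mean = sum(flat) / len(flat)
    return tuple(1 if v > mean else 0 for v in flat)

def hamming(h1, h2):
    """Number of differing bits: lower means more similar images."""
    return sum(a != b for a, b in zip(h1, h2))

# Two slightly different 4x4 thumbnails of the "same" product photo
img_a = [[200, 210, 30, 25], [190, 205, 40, 35],
         [ 60,  50, 20, 15], [ 70,  55, 25, 10]]
img_b = [[195, 215, 35, 20], [185, 200, 45, 30],
         [ 65,  45, 25, 10], [ 75,  60, 20, 15]]

print(hamming(average_hash(img_a), average_hash(img_b)))  # -> 0 (a match)
```

A search backend would precompute hashes for its billions of stored images and return the entries whose Hamming distance to the uploaded photo's hash falls below some threshold.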
Here’s a better application: drag a photo of your ex-boyfriend or ex-girlfriend into Facebook’s Goggles and it will show you hundreds of faces that resemble your ex, helping you look for a replacement. You can then invite them to your Facebook CafeWorld and get to know them a bit more.
Just in case you’re curious about how this software works: our faces have numerous distinguishable landmarks, the peaks and valleys that make up facial features. These landmarks are called nodal points, and each human face has approximately 80 of them. Some of the ones measured by the software are: the distance between the eyes; the width of the nose; the depth of the eye sockets; the shape of the cheekbones; and the length of the jaw line.
These nodal points are measured to create a numerical code, called a faceprint, which represents the face in the database. Camera angle and lighting variance can create problems, though.
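Here is a rough sketch of how nodal-point measurements could be turned into such a numerical code. The landmark names, coordinates, and normalization are my own assumptions for illustration; real systems use many more points and far more robust encodings.

```python
import math

# Hypothetical nodal-point coordinates (x, y) detected on a face image.
# Real systems use roughly 80 points; five are shown here for illustration.
nodal_points = {
    "left_eye":  (30.0, 40.0),
    "right_eye": (70.0, 40.0),
    "nose_tip":  (50.0, 60.0),
    "left_jaw":  (25.0, 90.0),
    "right_jaw": (75.0, 90.0),
}

def faceprint(points):
    """Encode a face as the pairwise distances between nodal points,
    normalized by the inter-eye distance so the code does not depend
    on how close the camera was to the face."""
    names = sorted(points)
    eye_dist = math.dist(points["left_eye"], points["right_eye"])
    return [round(math.dist(points[a], points[b]) / eye_dist, 3)
            for i, a in enumerate(names) for b in names[i + 1:]]

print(faceprint(nodal_points))  # 10 normalized distances for 5 points
```

The resulting list of numbers is what gets stored in the database and compared against later captures of the same face.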
Another application that I think makes sense is ATM and credit card security.
The software is able to quickly verify a customer's face. After a customer consents, the ATM or check-cashing kiosk captures a digital image of him. The FaceIt software then generates a faceprint from the photograph to protect customers against identity theft and fraudulent transactions. With facial recognition software, there's no need for a picture ID, bank card, or personal identification number (PIN) to verify a customer's identity. This could definitely cut down on credit card fraud.
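The verification step boils down to comparing the live faceprint against the one enrolled on file. FaceIt's actual API is not public to me, so the function, threshold, and distance metric below are generic assumptions about how threshold-based verification works, not the product's real interface.

```python
import math

def verify(live_print, stored_print, threshold=0.25):
    """Accept the customer if the Euclidean distance between the live
    faceprint and the one on file is under the threshold.
    (Threshold and metric are illustrative assumptions.)"""
    distance = math.dist(live_print, stored_print)
    return distance < threshold

stored   = [1.00, 0.52, 1.31, 0.88]  # faceprint enrolled at account opening
live_ok  = [1.02, 0.50, 1.29, 0.90]  # same customer, slightly different capture
imposter = [0.70, 0.95, 1.60, 0.40]  # a different face

print(verify(live_ok, stored))   # True  (distance is only 0.04)
print(verify(imposter, stored))  # False
```

The threshold is the security dial: lower it and imposters are rejected more reliably but real customers get falsely declined more often, which is exactly the trade-off a bank would have to tune.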