Say Cheese! Your Picture Is Being Cross-Referenced Around the World

Published: 
Wednesday, May 4, 2011

There are some people who claim to never forget a face.  Now, what if that person were actually a government computer using advanced facial recognition technology?  Say Cheese!  Your picture has now been reduced to data and is being cross-referenced with databases around the world that can identify your face from the Internet before you’ve even finished smiling! 

Facial Recognition Technology (“FRT”) has been around for almost 20 years now, but until recently its applicability has been limited.  After 9/11, some officials proposed using it in airports and on the streets as a security measure.  The ACLU has warned nationally that it does not work well enough for any general or mass surveillance use. 

The U.S. Department of Defense sponsored the Face Recognition Technology (FERET) program back in 1993 as a means for law enforcement to develop automatic FRT capabilities that could be deployed in the field.  FRT works as computer software that takes the human face and creates a map of distinguishable landmarks based on one’s facial features.  The distances between those landmarks are then measured as the face is transformed into a data file.  Common facial features generally include the distance between the eyes, the width of the nose, and the shape of the cheekbones.
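The landmark-mapping step described above can be sketched in a few lines of Python. This is a simplified illustration, not any vendor's actual algorithm: the landmark names and pixel coordinates are hypothetical, and real systems detect dozens of points automatically rather than taking them as input.

```python
import math

# Hypothetical landmark coordinates (x, y), in pixels, for one face.
# Real FRT software would locate these points automatically.
landmarks = {
    "left_eye": (120, 95),
    "right_eye": (180, 95),
    "nose_tip": (150, 140),
    "left_cheekbone": (105, 130),
    "right_cheekbone": (195, 130),
}

def face_signature(points):
    """Reduce a face to data: the pairwise distances between landmarks."""
    names = sorted(points)
    signature = []
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            signature.append(math.dist(points[a], points[b]))
    return signature

signature = face_signature(landmarks)
print(len(signature))  # 5 landmarks yield C(5,2) = 10 distances
```

The resulting list of numbers is the "data file" the article describes: once a face is a vector of distances, it can be stored, transmitted, and compared like any other record.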

FRT has proven mildly effective at verifying identity claims in controlled environments with relatively small populations.  Its efficacy, however, is contingent on the quality of the captured image as well as on the selection and composition of the images used to develop the matching algorithms.  For example, if police have a photograph of a crime suspect, and that suspect has been previously arrested, the suspect’s face could potentially be matched in those arrest databases.  Microsoft’s gaming console add-on, the Kinect, also uses FRT to authenticate users so that they can log onto Xbox Live.  NEC, a Japanese electronics company, has created a billboard system in Tokyo that uses FRT to identify the sex and age of people passing by in order to display targeted advertisements.
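The database-matching scenario above can also be sketched: compare a probe face's distance signature against a gallery of known signatures and accept the nearest one only if it falls within a tolerance. Everything here is illustrative; the gallery names, signature values, and threshold are invented for the example, and real systems use far richer features and statistical models.

```python
import math

# Hypothetical gallery of known faces, each already reduced to a small
# vector of landmark distances. Values are made up for illustration.
gallery = {
    "record_042": [60.0, 38.1, 52.2],
    "record_107": [55.0, 41.0, 49.5],
}

def match(probe, gallery, threshold=5.0):
    """Return the identity whose signature is nearest the probe,
    or None if no gallery entry is within the distance threshold."""
    best_name, best_dist = None, float("inf")
    for name, signature in gallery.items():
        d = math.dist(probe, signature)  # Euclidean distance
        if d < best_dist:
            best_name, best_dist = name, d
    return best_name if best_dist <= threshold else None

result = match([59.0, 38.5, 51.8], gallery)
print(result)  # nearest record within tolerance: record_042
```

The threshold is where the article's accuracy caveats bite: set it loose and innocent lookalikes match; set it tight and poor lighting or a smile breaks the match.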

Facebook users upload and tag 100 million photos every day, and YouTube users post 24 hours of video content every minute.  While you have been posting and tagging pictures of your friends and family, however, private online service providers like Facebook and Google have been quietly buying up FRT.  Now they want to put your data to use.

In February 2011, Google published a patent filed in the European Union for checking visual searches against “images accessible through social networking applications, calendar applications, and collaborative applications.”  According to the filing, images from those applications “can then be compared for similarity to create a list of possible identities to be used in conjunction with other search-related data.”  At this point, it remains unclear what Google plans to do with its FRT.  Similarly, Facebook recently introduced FRT to facilitate auto-tagging of pictures uploaded by users.  With over half a billion Facebook users uploading over 100 million photos every day, the opportunity for social media aggregation of users’ photos is potentially limitless. 

New services such as TinEye, Recognizr, and Pornfaceher.com bring FRT to the ordinary user.  TinEye searches the web for everywhere else an image has been posted.  So if you see someone’s photo posted on a webpage, you simply right-click the image, select TinEye reverse image lookup, and all hits for that “face” are returned to you.  Recognizr is an Android application that uses your smartphone’s built-in camera and FRT to obtain the name and social networking history of any stranger whose picture you snap.  Pornfaceher.com allows one to submit a photograph of an ex-lover or spouse from Facebook or any URL, and it will find her porn star lookalike.  Still excited about FRT?

Privacy implications of FRT abound, as there have already been reports of governments using FRT to match pictures of protesters during the recent uprisings in the Middle East.  Knowing that the government can visually identify you and link your face to any information aggregated through social media, one would be hard-pressed to deny that in the hands of abusive power, FRT is exceedingly dangerous.  Beyond protesters, it isn’t a far stretch to imagine a public ID camera system being installed.  Closer to home, Washington is one of 30 states where the Department of Licensing uses FRT to match your driver’s license picture against others in its database, purportedly to combat immigration and other fraud.

For all its technological complexity, FRT is surprisingly easy to fool.  CV Dazzle, for example, is a style of cosmetic makeup that “camouflages” the face from FRT software, effectively preventing the software from mapping it. 

Many pitfalls still exist for FRT.  Among other things, the lighting and camera angle used to capture an image, as well as factors like smiling, ethnicity, and familiarity, all affect the ability of FRT to match it.  Further, Hewlett-Packard had to respond to claims that its webcams were racist when they failed to detect the face of a black person but had no difficulty recognizing a white face.  The problem, HP later stated, stemmed from the “algorithms used to measure the intensity of contrast between the eyes and the upper cheek and nose”: the camera had difficulty detecting faces when that contrast was insufficient under its initial algorithms.

As with all technology, FRT is becoming faster, cheaper, and more effective, but it still suffers from many flaws and raises many privacy concerns.  Regardless of those flaws, in our post-9/11 surveillance society, governments often rush to throw any new technology at a perceived threat.

What is the government’s purpose in adopting FRT?  Simply to make it easier to catch criminals?  To enhance national security?  To detect terrorists?  If we have learned anything, it is that there is no one-stop, “be all, end all” technology for crime detection.  Policymakers must carefully weigh the economic, social, and civil liberties costs before implementing any new technology, including facial recognition.