
Detecting genetic disorders with 3D face scans September 16, 2007

Posted by Johan in Abnormal Psychology, AI, Applied, Behavioural Genetics, Developmental Psychology, Face Perception.

Following on from last week’s post on smile-measuring software, The Scotsman (via Gizmodo) reports on work by Hammond and colleagues at UCL, who are developing 3D face scans as a quick, inexpensive alternative to genetic testing. This is less far-fetched than it sounds at first: in a number of congenital conditions, the hallmark behavioural, physiological or cognitive deficits are also (conveniently) accompanied by a characteristic appearance. The classic example is Down syndrome, which you need no software to recognise. More examples appear in the figure above, where you can compare the characteristic appearances of various conditions to the unaffected face in the middle.

Hammond’s software can be used to identify 30 congenital conditions, ranging from Williams syndrome (a sure topic of a future post) to autism, according to the Scotsman. I know of no facial characteristics of autism, so I would take that part of the story with a grain of salt. The system reportedly achieves over 90 percent accuracy, which is not conclusive, but certainly good enough to inform the decision to carry out genetic tests that are. The UCL press release gives some more information about how the software works:

The new method compares a child’s face to similarly aged groups of individuals with known conditions and selects which condition looks the most similar. In order to do this, collections of 3D face images of children and adults with the same genetic condition had to be gathered, as well as controls or individuals with no known genetic condition.
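Stripped to its essentials, the matching step the press release describes is a nearest-group classification: reduce each scan to a feature vector, average the vectors within each age-matched condition group, and assign the query face to the closest group mean. Here is a minimal sketch of that idea – the feature vectors, group names and distance measure are all made up for illustration; the real software works on dense 3D surface data, not three numbers.

```python
import math

def classify_face(query, group_means):
    """Return the label of the group whose mean feature vector is nearest."""
    def dist(a, b):
        # Plain Euclidean distance as a stand-in for whatever shape
        # similarity measure the real system uses.
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(group_means, key=lambda label: dist(query, group_means[label]))

# Illustrative group means for a control group and one condition.
group_means = {
    "control":  [0.0, 0.0, 0.0],
    "williams": [1.0, 0.5, -0.2],
}

print(classify_face([0.9, 0.4, -0.1], group_means))  # → williams
```

The appeal of this kind of scheme is that adding a new condition is just a matter of collecting enough scans to estimate another group mean.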

It really is too bad that the software uses 3D images – those cameras are neither cheap nor ubiquitous, which somewhat defeats the point of using this software as an affordable alternative to (or initial screen for) genetic testing. I can’t help but wonder whether similar accuracy could be achieved with normal portraits. If you can tell how much someone is smiling in a photo, you should be able to pick up on that extra chromosome…


AI detection of facial expressions September 7, 2007

Posted by Johan in AI, Applied, Emotion, Face Perception.

I’ve written previously about how algorithms that detect faces in images are appearing everywhere, including Google Images and many recent digital cameras, where they are used to ensure that focus is on the face (presumably, no one who buys a Cybershot is interested in the aesthetic effects of not having the face in focus).

This technology is being expanded into the realm of specific facial expressions by OMRON (among others), a company that has just released software that promises to measure the smile factor of faces in a picture. The smile factor as OMRON conceives of it runs from 0 to 100%, and will presumably serve to shift the blame nicely when you want people to smile more in a picture (“look, I think the picture is fine, but the camera thinks you should be smiling more”). It is only a matter of time before this makes it into digital cameras, soon followed by a spinach-on-the-teeth detector.

Other proposed applications for OMRON’s software include human-computer interactions, and as an objective measure of liking in food tasting studies. I imagine the software would also be useful for more theoretical investigations into emotional expressivity. As it stands, scoring the magnitude or kind of expression manually is quite tricky.

It never ceases to amaze me how object recognition software is steadily advancing along the ventral visual stream.

Google images now detects faces (poorly) June 17, 2007

Posted by Johan in AI, Face Perception, Off Topic, Sensation and Perception.

Google has quietly rolled out a special filter for its image searches that lets you restrict a search to faces. Geeks in the know say that this technology comes from Google’s purchase of Neven Vision, a company I had never heard of before.

To use this technology, simply add &imgtype=face to the search URL – adding it in the search box won’t work (although Quicksilver users will find that it does work if you use the Google search plugin – I have no idea why).
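The trick above amounts to tacking one extra parameter onto the normal image search query string. A small sketch, assuming the 2007-era Google Images endpoint:

```python
from urllib.parse import urlencode

def face_search_url(query):
    """Build an image search URL restricted to faces via the imgtype parameter."""
    return "http://images.google.com/images?" + urlencode(
        {"q": query, "imgtype": "face"}
    )

print(face_search_url("house"))
# → http://images.google.com/images?q=house&imgtype=face
```

`urlencode` also takes care of escaping multi-word queries, which typing the parameter by hand does not.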

Face detection algorithms are all the rage in computer science, and there are lots of implementations out there. I thought I’d compare Google to the competition.

An initial test suggests that Google does rather well. Compare a search for “house” with the face filter and without it. There are only faces (including a certain British comedian turned American doctor) in the former, and only houses in the latter.

However, it turns out that it’s quite easy to trip Google up once you throw faces of non-human animals at it. According to Google, this is a face:

But it’s not. So how about the competition? There are many online demos of face detection algorithms, but I quite like PittPatt, from the Face Group at Carnegie Mellon. You can just plug in a URL rather than having to upload anything, and it’s quite fast. PittPatt puts a rectangle over each part of the image where it thinks a face is. Nothing lights up for the poor dog above.
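To get a feel for what those rectangles represent, here is a toy sliding-window detector: slide a window across the image, score each window with some face model, and report a rectangle plus confidence wherever the score clears a threshold. The “face model” here is just a brightness average and the image is a hand-made 4×4 grid – purely illustrative, and nothing to do with PittPatt’s actual algorithm.

```python
def detect(image, win=2, threshold=0.5):
    """Return (row, col, confidence) for each window scoring above threshold."""
    hits = []
    for r in range(len(image) - win + 1):
        for c in range(len(image[0]) - win + 1):
            window = [image[r + i][c + j] for i in range(win) for j in range(win)]
            score = sum(window) / len(window)  # stand-in for a real classifier
            if score > threshold:
                hits.append((r, c, score))
    return hits

# A bright 2x2 "face" in the middle of a dark image.
img = [
    [0.0, 0.0, 0.0, 0.0],
    [0.0, 0.9, 0.9, 0.0],
    [0.0, 0.9, 0.9, 0.0],
    [0.0, 0.0, 0.0, 0.0],
]

print(detect(img))  # a single confident hit at row 1, col 1
```

The blue-versus-green framing discussed below maps naturally onto the confidence value: a detector can draw every hit, coloured by how sure it is.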

Let’s make things more difficult for PittPatt. Gorillas look more like people, right? Google produces mostly faces, but also this:

Ah, PittPatt seems confused:

Apparently there are a few facial features in there, but I think the blue colour of the frames indicates uncertainty – judging from the gallery of uploaded pictures on PittPatt, the most obvious faces are framed in green or occasionally yellow.

How about chimps, then? Google fails this one spectacularly – all the faces on the first page are non-human, except for a bearded guy and a few pictures of George W. Bush. Go figure. To be fair, this probably has less to do with Dubya’s chimp-like features and more to do with Google’s page rank system, where a picture that is linked a number of times with the word “chimp” nearby ends up on page one of the results for that term. So Bush doesn’t necessarily look like a chimp, it’s just that the Internet thinks he does.

Google thinks this is a face. You might be inclined to agree, but for the sake of argument I’m assuming that Google wants its algorithm to pick out human faces, not primate or mammal faces in general. PittPatt produces only another weak blue rectangle, this time restricted to the chimp’s mouth.

So what can we conclude here? This test is of course terribly unfair – I’ve only picked images that Google got wrong, so Google was guaranteed to do no better than PittPatt. But it does seem that Google’s face detection algorithm isn’t all that great compared to the alternatives. Another possibility is that Google is just trying not to be speciesist, but I somewhat doubt that. We have also learned that a fake statue of King Kong apparently looks more human than a fat dog or a young chimp, according to PittPatt. I’m not sure what to do with that information.

In any case, it is pretty fascinating that computers have become this good at face detection. While a primate face can look face-like to us, there is no way you would actually confuse it with a human face. You might interpret PittPatt’s ambiguous output similarly – there is something face-like there, but it’s not quite it.

Update: Other geeks inform me that the face detection algorithm Google uses is probably trained by one of its two online games, Peekaboom or Image Labeler. I couldn’t figure out how Peekaboom works without registering, but the point of the Image Labeler game is basically for both players to give the same label to a picture. With a training procedure like that, you suddenly start to see how the chimp above might appear under a “face” search – while “chimp” might be the first label you try, followed perhaps by “ape,” “face” isn’t going to be far behind. This points to a general problem with human-trained computers – they acquire all our idiosyncrasies and imperfections, for better and for worse.