Facial recognition may be emerging as one of the most promising authentication technologies in use today, but recent research has shown it is not without its limitations. Much of that has to do with the way the technology appears to be biased towards males and lighter-skinned individuals.
As researcher Joy Buolamwini of the MIT Media Lab points out, facial recognition software was found to be accurate in only about 65 percent of cases involving darker-skinned women, while darker-skinned men fared better at 88 percent accuracy.
The same software, however, correctly identified lighter-skinned women in 93 percent of cases, while lighter-skinned men fared best of all with an accuracy of more than 99 percent.
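Disparities like these are typically surfaced by breaking a benchmark's results down by subgroup rather than reporting a single overall accuracy figure. The snippet below is a minimal sketch of that kind of disaggregated evaluation; the group names and sample records are purely illustrative and are not data from the study itself.

```python
from collections import defaultdict

def accuracy_by_group(records):
    """Compute classification accuracy per demographic subgroup.

    records: iterable of (group, predicted_label, true_label) tuples.
    Returns a dict mapping each group to its accuracy.
    """
    correct = defaultdict(int)
    total = defaultdict(int)
    for group, predicted, actual in records:
        total[group] += 1
        if predicted == actual:
            correct[group] += 1
    return {group: correct[group] / total[group] for group in total}

# Illustrative toy data only -- not figures from the research.
sample = [
    ("darker_female", "female", "female"),
    ("darker_female", "male", "female"),
    ("lighter_male", "male", "male"),
    ("lighter_male", "male", "male"),
]
print(accuracy_by_group(sample))
```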
Buolamwini said she tested facial recognition software made by Microsoft, IBM and the Chinese start-up Megvii. Microsoft researcher Timnit Gebru also contributed to the research, which can be seen as further evidence of what has long been suspected.
Explaining further, Buolamwini said she used 1,270 faces as samples for the research. She also stressed that she drew her data set from a diverse group comprising well-known faces as well as people from all walks of life, cutting across demographics, gender and ethnicity.
As for the implications of this inconsistency, it is the darker-skinned group, and women within it in particular, who may have to do more to prove their identities. When used in law enforcement, such software will be more prone to flag African-Americans as likely suspects than white Americans.
Similar irregularities could also surface in the healthcare sector, which has increasingly been adopting facial recognition for various roles of late. More broadly, the research points to facial recognition software being region-biased as well.
Software developed in Europe tends to be most at ease identifying Europeans, software developed in America is most accurate for people of that region, and software created in Asia seems best suited to identifying Asians.
The research also underscored the need to train and fine-tune these algorithms on a diverse set of faces drawn from all walks of life and from around the world, unless, of course, the software is targeted at a specific group or region.
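One simple step in that direction is auditing the composition of a training set before fine-tuning. The sketch below, using hypothetical group labels and an arbitrary threshold, flags subgroups that make up too small a share of the data; it illustrates the general idea rather than the study's own methodology.

```python
def audit_balance(group_labels, min_share=0.15):
    """Report each subgroup's share of the data set and flag
    groups whose share falls below min_share."""
    counts = {}
    for group in group_labels:
        counts[group] = counts.get(group, 0) + 1
    n = len(group_labels)
    return {group: (count / n, count / n < min_share)
            for group, count in counts.items()}

# Hypothetical composition of a small face data set.
labels = (["lighter_male"] * 60 + ["lighter_female"] * 25 +
          ["darker_male"] * 10 + ["darker_female"] * 5)
for group, (share, flagged) in audit_balance(labels).items():
    note = "  <- under-represented" if flagged else ""
    print(f"{group}: {share:.0%}{note}")
```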