Last week, we saw an article in the Daytona Beach News Journal that exemplifies the problem with how law enforcement describes fingerprint identification to the public. For example:
A new, $7.4 million computer system has the software capability of storing and examining palm prints and larger areas of the finger lifted from crime scenes. In addition, the new program — called the Biometric Identification System — for the first time is able to retain suspects’ mug shots, as well as images of a crook’s tattoos and other identifying marks, said Florida Department of Law Enforcement crime analyst Stacy Colton-Clark.
. . .
Crime analysts have a “hit” when the finger or palm print of an unidentified suspect matches with prints already stored in the computer system. Anytime an individual is arrested, his or her fingerprints — and now their palm prints — are taken by the arresting agency. Those prints are stored in the state’s Automated Fingerprint Identification System, commonly known as AFIS.
First, the reporter oversimplifies the process by which fingerprints in the database are matched to suspects. A fingerprint is never a “match” per se. Rather, when an unknown print is entered into AFIS or this new system, it may produce a “hit,” which means that the computer thinks there are enough consistencies between the unknown print and a stored print. However, that isn’t the end of the story. A fingerprint analyst at the law enforcement agency will then have to do a side-by-side comparison and subjectively determine whether the prints are consistent enough with each other to verify the hit.
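To make the two-stage process concrete, here is a minimal sketch in Python. This is purely illustrative: the function names, the toy similarity score, and the feature lists are my own assumptions, not how FDLE’s Biometric Identification System actually works. The point it demonstrates is structural: the computer only produces ranked candidate “hits,” and the final call is a separate, human, subjective step.

```python
def similarity(a, b):
    # Toy score: fraction of features the two prints share.
    # Real systems use minutiae geometry; this is only a stand-in.
    shared = set(a) & set(b)
    total = set(a) | set(b)
    return len(shared) / max(len(total), 1)

def afis_search(unknown, database, threshold=0.6):
    """Stage 1: the computer returns candidate 'hits', not matches."""
    candidates = []
    for record_id, known in database.items():
        score = similarity(unknown, known)
        if score >= threshold:
            candidates.append((record_id, score))
    # Ranked best-first; nothing here is an identification.
    return sorted(candidates, key=lambda c: -c[1])

def verify(candidates, analyst_judgment):
    """Stage 2: a human analyst subjectively confirms or rejects each hit."""
    return [rid for rid, _ in candidates if analyst_judgment(rid)]

# Hypothetical data, for illustration only.
db = {
    "suspect_A": ["ridge_end_1", "bifurcation_2", "whorl_core"],
    "suspect_B": ["loop_core", "ridge_end_9"],
}
latent = ["ridge_end_1", "bifurcation_2", "whorl_core", "smudge"]

hits = afis_search(latent, db)  # stage 1: candidate hits only
confirmed = verify(hits, lambda rid: True)  # stage 2: analyst's call
```

Note that `verify()` takes the analyst’s judgment as an input rather than computing anything: that is the sketch’s whole point. Whatever the software reports, the “identification” ultimately rests on a human decision that the software cannot validate.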
So there is not a computer-driven scientific certainty involved here. In fact, this method is burdened by the same subjective (and often unreliable) methods as other forensic “matching” methods.
The article also judges the dividends of spending $7.4 million on this program by how many more hits are achieved, but it does not investigate whether those hits were accurate or whether the underlying method is reliable.
This is the problem with fingerprints (and many other individualizing forensic methods)–they are based on a number of assumptions:
1) that every person has a unique fingerprint design (which has never been studied or proven);
2) that mere experience at performing subjective fingerprint comparisons guarantees reliability (it doesn’t–proficiency testing has demonstrated that when the same comparison was performed by multiple analysts, different results were achieved and that the error rate in some cases has been as high as 50%);
3) that there is no bias involved (this obviously isn’t true–the comparisons are performed by a law enforcement agency whose job is to get a “match,” and by an analyst who knows that the known print they are comparing was just spit out as a “hit” by a computer system. There is no way this is an unbiased process).
We cannot judge the success of any forensic method by how many “hits” we get or by whether the person is eventually convicted, because that is a self-fulfilling prophecy.
I would submit that instead of spending many millions of dollars expanding the system in place, that money would be better spent, as the National Academy of Sciences report suggests, on developing a new method of examining prints that diminishes human observer bias and increases reliability.