Extended Feature Set for Fingerprint Matching
There are fundamental differences in the way fingerprints are compared by forensic experts and by current Automated Fingerprint Identification Systems (AFIS). For example, AFIS rely mainly on quantitative measures of fingerprint minutiae (ridge ending and bifurcation points), while latent examiners often analyze intrinsic ridge characteristics and relational information. This examination draws on an extended feature set: minutiae shape, dots, incipient ridges, local ridge quality, ridge tracing, etc. However, most of the features used by latent examiners have not been quantitatively defined for AFIS matching. This project aims to develop algorithms that automatically extract and match such extended features.
A.K. Jain, Y. Chen and M. Demirkus, " Pores and Ridges: High Resolution Fingerprint Matching Using Level 3 Features", IEEE Transactions on Pattern Analysis and Machine Intelligence, 2006.
Y. Chen, M. Demirkus and A.K. Jain, " Pores and Ridges: Fingerprint Matching Using Level 3 Features", Proc. of International Conference on Pattern Recognition (ICPR), Vol. 4, pp. 477-480, Hong Kong, August, 2006.
Multispectral Fingerprint Matching
Multispectral (MS) fingerprint imaging systems use different wavelengths of light to illuminate the surface and subsurface layers of the finger skin and capture the reflected light. The resulting fingerprint images, and combinations of these images, provide more discriminative and robust information about the characteristics of the fingerprint than images from a conventional total-internal-reflection (TIR) based optical sensor. To this end, we analyze the performance of different fingerprint matching algorithms on MS fingerprint images and explore new features that can be extracted from each image band.
Individuality of Fingerprints
The question of fingerprint individuality can be posed as follows: given a query fingerprint, what is the probability that the observed number of minutiae matches with a template fingerprint is purely due to chance? An assessment of this probability can be made by estimating the variability inherent in fingerprint minutiae. We develop a compound stochastic model that captures three main sources of minutiae variability in actual fingerprint databases. The compound stochastic models are used to synthesize realizations of minutiae matches, from which numerical estimates of fingerprint individuality can be derived. Experiments on the FVC2002 DB1 and IBM-HURSLEY databases show that the probability of obtaining a 12-minutiae match purely by chance is 1.6 × 10⁻⁵ when the query and template fingerprints each contain 46 minutiae.
Y. Zhu, S. C. Dass, and Anil K. Jain, " Compound Stochastic Models for Fingerprint Individuality", Proc. of International Conference on Pattern Recognition (ICPR), Vol. 3, pp. 532-535, Hong Kong, August, 2006.
S. C. Dass, Y. Zhu and Anil K. Jain, " Statistical models for assessing the individuality of fingerprints", Fourth IEEE Workshop on Automatic Identification Advanced Technologies, pp. 1-7, 2005.
S. Pankanti, S. Prabhakar, and A. K. Jain, "On the Individuality of Fingerprints", IEEE Transactions on PAMI, Vol. 24, No. 8, pp. 1010-1025, 2002. A shorter version also appears in Fingerprint Whorld, pp. 150-159, July 2002.
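The compound stochastic model itself is too involved for a short sketch, but the earlier uniform model of Pankanti, Prabhakar and Jain (third reference above) conveys the core calculation: if the fingerprint area is discretised into M equally likely minutia cells, the number of chance correspondences between an m-minutiae template and an n-minutiae query follows a hypergeometric law. A minimal sketch of that position-only model (minutiae directions and the paper's refinements are omitted):

```python
from math import comb

def p_exactly_q(M, m, n, q):
    # Hypergeometric probability that exactly q of the n query minutiae fall,
    # purely by chance, on cells occupied by the m template minutiae when the
    # fingerprint area is discretised into M equally likely cells.
    return comb(m, q) * comb(M - m, n - q) / comb(M, n)

def p_at_least_q(M, m, n, q):
    # Tail probability of q or more chance correspondences -- the quantity
    # usually quoted as the "probability of a q-minutiae match by chance".
    return sum(p_exactly_q(M, m, n, k) for k in range(q, min(m, n) + 1))
```

For instance, `p_at_least_q(M, 46, 46, 12)` gives the chance of a 12-minutiae match between two 46-minutiae prints under this simplified model; the value of M depends on the sensor area and matching tolerance, so no specific number is implied here.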
Fingerprint Mosaicking
It has been observed that the reduced contact area offered by solid-state fingerprint sensors does not provide sufficient information (e.g., minutiae) for high-accuracy user verification. Further, multiple impressions of the same finger acquired by these sensors may have only a small region of overlap, thereby degrading the matching performance of the verification system. To deal with this problem, we propose a fingerprint mosaicking scheme that constructs a composite fingerprint image from multiple impressions. In the proposed algorithm, two impressions of a finger are first aligned using their corresponding minutiae points. This alignment initializes the well-known iterative closest point (ICP) algorithm, which computes a transformation matrix defining the spatial relationship between the two impressions. The transformation matrix is used in two ways: (a) the two impressions are stitched together to generate a composite image, in which minutiae points are then detected; (b) the minutiae maps obtained from the individual impressions are integrated to create a larger minutiae map. The availability of a composite template improves the performance of the fingerprint matching system, as demonstrated in our experiments.
A. K. Jain and A. Ross, " Fingerprint Mosaicking", Proc. International Conference on Acoustic Speech and Signal Processing (ICASSP), Orlando, Florida, May 13-17, 2002.
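The alignment step described above can be sketched in a few lines. This is a generic 2D rigid ICP on minutiae coordinates, not the paper's implementation: the minutiae-based initial alignment is modeled simply by starting the iteration close to the correct pose, and the rigid re-fit uses the standard SVD (Kabsch/Procrustes) closed form.

```python
import numpy as np

def rigid_fit(P, Q):
    # Closed-form least-squares rotation R and translation t mapping the
    # rows of P onto the rows of Q (Kabsch / Procrustes solution).
    cp, cq = P.mean(0), Q.mean(0)
    H = (P - cp).T @ (Q - cq)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:        # guard against a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cq - R @ cp

def icp(P, Q, iters=20):
    # Iterative closest point: alternate nearest-neighbour correspondence
    # with a rigid re-fit.  Assumes the minutiae-based initial alignment
    # has already brought P reasonably close to Q.
    R, t = np.eye(2), np.zeros(2)
    for _ in range(iters):
        moved = P @ R.T + t
        nn = ((moved[:, None, :] - Q[None, :, :]) ** 2).sum(-1).argmin(1)
        R, t = rigid_fit(P, Q[nn])
    return R, t
```

The recovered (R, t) is the "transformation matrix" used both for stitching the images and for merging the minutiae maps.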
Hybrid Fingerprint Matcher
A fingerprint matcher that uses both the minutiae and the texture information present in fingerprints has been developed. A set of eight Gabor filters is used to extract the texture information inherent in fingerprints, while minutiae and/or core information is used to align the two fingerprints. The hybrid matcher is shown to exhibit superior matching performance compared to a purely minutiae-based matcher.
A. Ross, A. K. Jain, and J. Reisman, " A Hybrid Fingerprint Matcher", Pattern Recognition, Vol. 36, No. 7, pp. 1661-1673, 2003.
A. Ross, J. Reisman and A. K. Jain, " Fingerprint Matching Using Feature Space Correlation", Proc. of Post-ECCV Workshop on Biometric Authentication, Copenhagen, Denmark, June 1, 2002.
A. K. Jain, A. Ross, and S. Prabhakar, " Fingerprint Matching Using Minutiae and Texture Features", Proc. International Conference on Image Processing (ICIP), Greece, October 7-10, 2001.
A. K. Jain, S. Prabhakar, L. Hong and S. Pankanti "Filterbank-based Fingerprint Matching", IEEE Transactions on Image Processing, Vol. 9, No.5, pp. 846-859, May 2000.
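The texture extraction above rests on a bank of eight Gabor filters, one per orientation. A minimal sketch of such a bank follows; the frequency and bandwidth parameters (`freq`, `sigma`) and the kernel size are illustrative assumptions, not the values tuned in the papers.

```python
import numpy as np

def gabor_kernel(size, theta, freq=0.1, sigma=4.0):
    # Even-symmetric Gabor filter tuned to ridge frequency `freq`
    # (cycles/pixel) and orientation `theta` (radians).
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    g = np.exp(-(x**2 + y**2) / (2 * sigma**2)) * np.cos(2 * np.pi * freq * xr)
    return g - g.mean()          # remove the DC response

# A bank of 8 orientations, 22.5 degrees apart.
bank = [gabor_kernel(33, k * np.pi / 8) for k in range(8)]
```

Convolving a fingerprint block with each filter and recording the response energy yields one texture value per orientation, which is the kind of feature vector the hybrid matcher compares.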
Fingerprint Classification
Fingerprint classification can provide an important indexing mechanism in a fingerprint database. An accurate and consistent classification can greatly reduce fingerprint matching time for large databases. We present a fingerprint classification algorithm that achieves an accuracy better than previously reported in the literature. We classify fingerprints into five categories: whorl, right loop, left loop, arch, and tented arch. The algorithm estimates the amount of ridge flow in four directions (0, 45, 90, and 135 degrees) by filtering the central part of a fingerprint with a bank of Gabor filters. This information is quantized to generate a FingerCode, which is used for classification. Our classification is based on a two-stage classifier that uses a K-nearest-neighbor classifier in the first stage and a set of neural networks in the second stage. The classifier is tested on the 4,000 images in the NIST-4 database. For the five-class problem, a classification accuracy of 90% is achieved. For the four-class problem (arch and tented arch combined into one class), we achieve a classification accuracy of 94.8%. By incorporating a reject option, the classification accuracy can be increased to 96% for five-class classification and to 97.8% for four-class classification when 30.8% of the images are rejected.
S. Dass and A. K. Jain," Fingerprint Classification Using Orientation Field Flow Curves", Proc. of Indian Conference on Computer Vision, Graphics and Image Processing, (Kolkata), pp. 650-655, December 2004.
A. K. Jain and S. Minut, "Hierarchical Kernel Fitting for Fingerprint Classification and Alignment", Proc. of International Conference on Pattern Recognition, Quebec City, August 11-15, 2002.
A. K. Jain, S. Prabhakar and L. Hong, " A Multichannel Approach to Fingerprint Classification", IEEE Transactions on PAMI, Vol.21, No.4, pp. 348-359, April 1999.
Distinguishing Identical Twins Using Fingerprints
Automatic identification methods based on physical biometric characteristics such as fingerprint or iris can provide positive identification with very high accuracy. However, biometrics-based methods assume that the physical characteristics of an individual (as captured by a sensor) used for identification are distinctive. Identical twins have the closest genetics-based relationship, and therefore the maximum similarity between fingerprints is expected to be found among identical twins. We show that a state-of-the-art automatic fingerprint identification system can successfully distinguish identical twins, though with a slightly lower accuracy than for non-twins.
A. K. Jain, S. Prabhakar, and S. Pankanti, " On The Similarity of Identical Twin Fingerprints", Pattern Recognition, Vol. 35, No. 11, pp. 2653-2663, 2002.
Combination of Fingerprint Matchers
Different fingerprint matching algorithms may use different types of information extracted from the input fingerprints and hence complement each other, so integrating them is a viable way to improve the performance of a fingerprint verification system. In this paper, we use a logistic transform to integrate the output scores from two different fingerprint matching algorithms. Each set of four parameters for a specified false acceptance rate (FAR) is obtained through supervised learning: the four parameters are adjusted so that the false rejection rate (FRR) is minimized for the given FAR. This amounts to optimizing a function with an unknown analytical form, so the gradient-descent learning commonly used with artificial neural networks is not applicable. Instead, the optimization is solved by Brent's efficient numerical algorithm, which does not require derivatives. Experiments conducted on a large fingerprint data set confirm the effectiveness of the proposed integration scheme.
A. K. Jain, S. Prabhakar and S. Chen, " Combining Multiple Matchers for a High Security Fingerprint Verification System", Pattern Recognition Letters, Vol 20, No. 11-13, pp. 1371-1379, 1999.
S. Prabhakar and A. K. Jain, " Decision-level Fusion in Fingerprint Verification" Pattern Recognition, Vol. 35, No. 4, pp. 861-874, 2002.
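A minimal sketch of the integration scheme: a logistic transform maps the two matcher scores to a single fused score, and the operating threshold is then the smallest value whose empirical FAR on impostor scores stays within the target. The weights below are illustrative placeholders; the paper learns them per specified FAR with Brent's derivative-free method so that FRR is minimized.

```python
import numpy as np

def fuse(s1, s2, w=(0.0, 6.0, 6.0)):
    # Logistic transform of two matcher scores into one fused score in (0, 1).
    # The weights w are illustrative, not the learned values from the paper.
    a0, a1, a2 = w
    return 1.0 / (1.0 + np.exp(-(a0 + a1 * s1 + a2 * s2)))

def threshold_at_far(impostor_scores, target_far):
    # Smallest decision threshold whose empirical FAR (fraction of impostor
    # scores at or above it) does not exceed the target FAR.
    s = np.sort(np.asarray(impostor_scores, float))
    for t in s:
        if np.mean(s >= t) <= target_far:
            return float(t)
    return float(s[-1]) + 1e-9  # no impostor accepted at this threshold
```

In the paper's setting, one would re-learn the weights for each target FAR before picking the threshold; here the two steps are shown independently.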
3D Face Recognition
The performance of face recognition systems that use two-dimensional (2D) images depends on consistent conditions such as lighting, pose, and facial expression. We are developing a multi-view face recognition system that utilizes three-dimensional (3D) information about the face, along with the facial texture, to make the system more robust to these variations. A procedure is presented for constructing a database of 3D face models and matching this database to 2.5D face scans captured from different views. 2.5D is a simplified 3D (x, y, z) surface representation that contains at most one depth value (z direction) for every point in the (x, y) plane. A robust similarity metric is defined for matching. To address non-rigid facial movements, such as expressions, we present a facial surface modeling and matching scheme that matches 2.5D test scans to a neutral-expression 3D face model in the presence of both non-rigid deformations and large pose changes (multi-view). A geodesic-based resampling approach is applied to extract landmarks for modeling facial surface deformations. We are able to synthesize the deformation learned from a small group of subjects (control group) onto a 3D neutral model (not in the control group), resulting in a deformed template. A person-specific 3D deformable model is built for each subject in the gallery w.r.t. the control group by combining the templates with synthesized deformations. By fitting this generative deformable model to a test scan, the proposed approach is able to handle expressions and large pose changes simultaneously.
X. Lu and A. K. Jain, " Deformation Modeling for Robust 3D Face Matching", Proc. IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR2006), Vol. 2, pp. 1377 - 1383, New York, NY, Jun. 2006.
X. Lu and A.K. Jain, " Integrating range and texture information for 3D face recognition", Proc. of WACV (Workshop on Applications of Computer Vision), pp. 156-163, Breckenridge, Colorado, January 2005.
X. Lu and A.K. Jain, " Deformation Analysis for 3D Face Matching", Proc. of WACV (Workshop on Applications of Computer Vision), pp. 99-104, Breckenridge, Colorado, January 2005.
X. Lu, D. Colbry and A. K. Jain, " Three-Dimensional Model Based Face Recognition", Proc. International Conference on Pattern Recognition (ICPR), vol. I, pp. 362-366, Cambridge, UK, August 2004.
X. Lu, D. Colbry and A. K. Jain, " Matching 2.5D Scans for Face Recognition", Proc. International Conference on Biometric Authentication (ICBA) , pp. 30-36, Hong Kong, July 2004.
X. Lu, R. Hsu, A. Jain, B. Kamgar-Parsi and B. Kamgar-Parsi, Face Recognition with 3D Model-Based Synthesis, Proc. International Conference on Biometric Authentication (ICBA), pp. 139-146, Hong Kong, July 2004.
Face Recognition in Video
Face recognition in video has gained wide attention as a covert method for surveillance to enhance security in a variety of application domains (e.g., airports). A video contains temporal information as well as multiple instances of a face, so it is expected to lead to better face recognition performance than still face images. However, faces appearing in a video exhibit substantial variations in pose and lighting, which can be effectively modeled using 3D face models. Combining the advantages of 2D video and 3D face models, we propose a face recognition system that identifies faces in video. The system utilizes the rich information in the video and overcomes pose and lighting variations using 3D face models, obtained either from a 3D range sensor or by stereographic reconstruction. Experimental results show that both types of 3D face models improve face recognition performance by compensating for pose and lighting variations.
U. Park and A.K.Jain, " 3D Face Reconstruction from Stereo Video", Proc. First International Workshop on Video Processing for Security (VP4S-06) in Third Canadian Conference on Computer and Robot Vision (CRV06), June 7-9, Quebec City, Canada, 2006.
U. Park, H. Chen and A. K. Jain, " 3D Model-assisted Face Recognition in Video", Proc. of 2nd Workshop on Face Processing in Video, in conjunction with AI/GI/CRV05, pp. 322-329, Victoria, British Columbia, Canada, May 2005.
Face Detection in Color Images
Human face detection is often the first step in applications such as video surveillance, human-computer interfaces, face recognition, and image database management. We propose a face detection algorithm for color images in the presence of varying lighting conditions as well as complex backgrounds. Our method detects skin regions over the entire image and then generates face candidates based on the spatial arrangement of these skin patches. The algorithm constructs eye, mouth, and boundary maps for verifying each face candidate. Experimental results demonstrate successful detection over a wide variety of facial variations in color, position, scale, rotation, pose, and expression from several photo collections.
R.-L. Hsu, Mohamed Abdel-Mottaleb and A. K. Jain, "Face Detection in Color Images", IEEE Transactions on PAMI, vol. 24, no.5, pp. 696-706, May 2002.
R.-L. Hsu, M. Abdel-Mottaleb, and A. K. Jain, " Face detection in color images", Proc. International Conference on Image Processing (ICIP) , Greece, October 7-10, 2001.
Sarat C. Dass and A. K. Jain, "Markov Face Models", The Eighth IEEE International Conference on Computer Vision (ICCV), Vancouver, Canada, July 9-12, 2001.
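The first step, skin detection, can be sketched as a chrominance test. Note that this uses a simple rectangular Cb/Cr rule of thumb rather than the lighting-compensated elliptical model of the Hsu et al. papers above; the BT.601 conversion coefficients are standard, but the bounds are assumptions.

```python
import numpy as np

def skin_mask(rgb):
    # Classify pixels as skin by a box test in the CbCr chrominance plane.
    # RGB -> YCbCr follows ITU-R BT.601; the Cb/Cr bounds are a commonly
    # used rule of thumb, not the elliptical model of Hsu et al.
    r = rgb[..., 0].astype(float)
    g = rgb[..., 1].astype(float)
    b = rgb[..., 2].astype(float)
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return (77 <= cb) & (cb <= 127) & (133 <= cr) & (cr <= 173)
```

The resulting mask feeds the later stages (candidate grouping, then eye/mouth/boundary map verification), which are not sketched here.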
Face Modeling
3D human face models have been widely used in applications such as face recognition, facial expression recognition, human action recognition, head tracking, facial animation, video compression/coding, and augmented reality. Modeling human faces provides a potential solution to the variations encountered in human face images. We propose a method of modeling human faces based on a generic face model (a triangular mesh) and individual facial measurements containing both shape and texture information. The method adapts the generic face model to facial features extracted from registered range and color images, in a global-to-local fashion. It iteratively moves the vertices of the mesh model to smooth the non-feature areas, and uses 2.5D active contours to refine feature boundaries. The resulting face model has been shown to be visually similar to the true face. Initial results show that the constructed model is quite useful for recognizing profile views.
R.-L. Hsu and A. K. Jain, " Semantic face matching", Proc. IEEE Int'l Conf. Multimedia and Expo (ICME) , Lausanne, Switzerland, Aug. 2002.
R.-L. Hsu and A. K. Jain, " Face modeling for recognition", Proc. International Conference on Image Processing (ICIP) , Greece, October 7-10, 2001.
Combination of Face Matchers
Current two-dimensional face recognition approaches achieve good performance only in constrained environments. In real applications, however, face appearance changes significantly with illumination, pose, and expression. Face recognizers based on different representations of the input face images have different sensitivities to these variations; therefore, a combination of face classifiers that integrates their complementary information should lead to improved classification accuracy. We use the sum rule and RBF-based integration strategies to combine three commonly used face classifiers based on PCA, ICA, and LDA representations. Experiments conducted on a face database containing 206 subjects (2,060 face images) show that the proposed classifier combination approaches outperform the individual classifiers.
X. Lu, Y. Wang and A. K. Jain, " Combining Classifiers for Face Recognition", Proc. ICME 2003, IEEE International Conference on Multimedia & Expo, vol. III, pp. 13-16, Baltimore, MD, July 6-9, 2003.
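The sum-rule part of the combination is simple enough to sketch: normalize each classifier's match scores to a common range, then average them and pick the gallery entry with the highest fused score. Min-max normalization is used here as one common choice; it is an assumption of this sketch, not necessarily the scheme of the paper.

```python
import numpy as np

def min_max_norm(scores):
    # Map one matcher's scores to [0, 1] so the three matchers are comparable.
    s = np.asarray(scores, float)
    rng = s.max() - s.min()
    return (s - s.min()) / rng if rng > 0 else np.zeros_like(s)

def sum_rule(pca_s, ica_s, lda_s):
    # Sum-rule fusion: average the normalised scores of the three matchers
    # (PCA, ICA, LDA); the claimed identity is the gallery index with the
    # highest fused score.
    fused = (min_max_norm(pca_s) + min_max_norm(ica_s) + min_max_norm(lda_s)) / 3
    return int(np.argmax(fused)), fused
```

The RBF-based strategy mentioned above instead learns the combination from training scores and is not sketched here.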
Dental Biometrics
The goal of forensic dentistry is to identify people based on their dental records, available mainly as radiographs. In this paper, we attempt to set forth the foundations of a biometric system for semi-automatic processing and matching of dental images, with the final goal of human identification. Given a dental record, usually a postmortem (PM) radiograph, we need to search the database of antemortem (AM) radiographs to determine the identity of the person associated with the PM image. We use a semi-automatic method to extract the shapes of the teeth from the AM and PM radiographs, and find the affine transform that best fits the shapes in the PM image to those in the AM images. A ranking of matching scores is generated based on the distance between the AM and PM tooth shapes. Initial experimental results on a small database of radiographs indicate that matching dental images based on tooth shapes and their relative positions is a feasible method for human identification.
H. Chen and A. K. Jain, " Tooth Contour Extraction for Matching Dental Radiographs", Proc. ICPR 2004, vol. III, pp. 522-525, Cambridge, UK, August 2004.
G. Fahmy, D. Nassar, E. Haj-Said, H. Chen, O. Nomir, J. Zhou, R. Howell, H. H. Ammar, M. Abdel-Mottaleb and A. K. Jain, "Towards an Automated Dental Identification System (ADIS)", Proceedings of the International Conference on Biometric Authentication (ICBA), Hong Kong, July 2004.
A. K. Jain and H. Chen, " Matching of Dental X-ray Images for Human Identification ", Pattern Recognition, Vol. 37, No. 7, pp. 1519-1532, July 2004.
A. K. Jain, H. Chen and S. Minut, " Dental Biometrics: Human Identification Using Dental Radiographs", Proc. of 4th Int'l Conf. on Audio- and Video-Based Biometric Person Authentication (AVBPA), pp. 429-437, Guildford, UK, June 9-11, 2003.
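The affine fitting step can be sketched directly: given corresponding contour points on a PM and an AM tooth, the best affine transform is a linear least-squares problem, and the residual after alignment serves as the matching distance. This sketch assumes point correspondences are already available, which the semi-automatic contour extraction provides.

```python
import numpy as np

def fit_affine(P, Q):
    # Least-squares affine transform (A, t) minimising ||A p_i + t - q_i||^2
    # over corresponding contour points, solved as a linear system.
    X = np.hstack([P, np.ones((len(P), 1))])     # rows [x, y, 1]
    M, *_ = np.linalg.lstsq(X, Q, rcond=None)    # 3x2 parameter matrix
    return M[:2].T, M[2]                         # A is 2x2, t is length-2

def shape_distance(P, Q):
    # Matching score: mean residual after the best affine alignment.
    A, t = fit_affine(P, Q)
    return float(np.mean(np.linalg.norm(P @ A.T + t - Q, axis=1)))
```

Ranking AM candidates by `shape_distance` (smaller is better) gives the kind of score list the abstract describes.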
On-line Signature Verification
We describe a method for handwritten signature verification. The signatures are acquired using a digitizing tablet, which captures both dynamic and spatial information of the writing. After preprocessing the signature, several features are extracted. The authenticity of a writer is determined by comparing an input signature to a stored reference set (template) consisting of three signatures. The similarity between an input signature and the reference set is computed using string matching, and the similarity value is compared to a threshold. Experiments on a database containing 1,232 signatures of 102 individuals show that user-specific thresholds yield better results than a common threshold. Several approaches to obtaining the optimal threshold value from the reference set are investigated. The best result yields a false acceptance rate of 1.6% and a false rejection rate of 2.8%.
A.K. Jain, Friederike D. Griess and Scott D. Connell, "On-line Signature Verification", Pattern Recognition, Vol. 35, No. 12, pp. 2963-2972, December 2002.
Friederike D. Griess and Anil K. Jain, " On-line Signature Verification", MSU Technical Report TR00-15, 2000.
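The string-matching comparison can be sketched with a plain edit distance over symbol strings. The quantization of pen-trajectory features into symbols, the length normalization, and taking the minimum distance over the three reference signatures are all illustrative assumptions of this sketch, not necessarily the exact choices of the papers.

```python
def edit_distance(a, b):
    # Levenshtein distance between two symbol strings (e.g. quantised
    # direction codes of the pen trajectory), by dynamic programming.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,            # deletion
                           cur[j - 1] + 1,         # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def verify(sig, refs, threshold):
    # Accept when the best (smallest) length-normalised distance to the
    # reference signatures is within the writer-specific threshold.
    d = min(edit_distance(sig, r) / max(len(sig), len(r)) for r in refs)
    return d <= threshold
```

Making `threshold` writer-specific, as the experiments above suggest, is what trades the 1.6% FAR against the 2.8% FRR.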