Let’s talk about digital identity with Mei Ngan, Scientist at the National Institute of Standards and Technology (NIST).

In episode 42, we explore Mei’s work at NIST evaluating face recognition biometrics with the Face Recognition Vendor Test (FRVT), how accurate facial recognition actually is, and the effects of different variables on face recognition – face masks (motivated by the pandemic), face morphing as a current FR vulnerability for identity credentials, demographic differentials, and twins – “the forgotten demographic”.

[Scroll down for transcript]

“[Face recognition] technology really has come a long way, especially when you only have half the face available to do recognition with. But with that said though, there still remain certain limitations to the technology – such as being able to differentiate between identical twins, demographic differentials and extremely poor-quality photos.”

Mei Ngan is a scientist at the National Institute of Standards and Technology (NIST). Her research focus includes evaluation of face recognition and tattoo recognition technologies. Mei has authored and co-authored a number of technical publications, including the accuracy of face recognition with face masks, evaluation of face morphing detection algorithms, demographic effects in face recognition, performance of facial age and gender estimation algorithms, and publication of a seminal open tattoo database for developing tattoo recognition research, for which she received the Special Contribution Award at the 2015 IEEE International Conference on Identity, Security and Behavior Analysis (ISBA).

Mei was awarded the Department of Commerce Gold Medal Award in 2020 and was a recipient of the 2020 Women in Biometrics Award, a globally recognised award honouring innovative women in the biometrics field.

Find out more about Mei’s work at nist.gov/programs-projects/face-recognition-vendor-test-frvt

Find the FRVT leaderboards at pages.nist.gov/frvt/html/frvt11.html (1:1) and pages.nist.gov/frvt/html/frvt1N.html (1:N).

View Women in Identity’s webinar with Mei exploring demographic effects in facial recognition here: https://youtu.be/Lni4Pe8dYuk

We’ll be continuing this conversation on Twitter using #LTADI – join us @ubisecure!

Go to our YouTube channel to watch the video of this episode.

Let's Talk About Digital Identity
Ubisecure

The podcast connecting identity and business. Each episode features an in-depth conversation with an identity management leader, focusing on industry hot topics and stories. Join Oscar Santolalla and his special guests as they discuss what’s current and what’s next for digital identity. Produced by Ubisecure.


Podcast transcript

Let’s Talk About Digital Identity, the podcast connecting identity and business. I am your host, Oscar Santolalla.

Oscar Santolalla: Hello, and thanks for joining us for this episode of Let’s Talk About Digital Identity. One aspect that has been connected to identity for many years is face recognition. You have heard a lot about it in past years, but there are also new challenges in recent years – for instance, the use of face masks, as you can imagine. And to have a deeper conversation about face recognition biometrics, we have a special guest today: Mei Ngan.

Mei is a scientist at the National Institute of Standards and Technology, NIST. Her research focus includes evaluation of face recognition and tattoo recognition technologies. Mei has authored and co-authored a number of technical publications, including the accuracy of face recognition with face masks, evaluation of face morphing detection algorithms, demographic effects in face recognition, performance of facial age and gender estimation algorithms, and publication of a seminal open tattoo database for developing tattoo recognition research.

And for this, she received a Special Contribution Award at the 2015 IEEE International Conference on Identity, Security and Behavior Analysis. Mei was awarded the Department of Commerce Gold Medal Award in 2020, and was the recipient of the 2020 Women in Biometrics Award.

Hello, Mei.

Mei Ngan: Hello, Oscar. How are you?

Oscar: Oh, very good – and really pleased to have this conversation with you.

Mei: Yeah, I’m quite happy to be here. Thanks for having me.

Oscar: Fantastic. We definitely want to hear more about yourself and how your life led you to working at NIST.

Mei: Yeah. So I had started my career out of college, actually working for a defence contractor called Lockheed Martin. I was hired into their engineering leadership development programme. And there, I had the opportunity to do rotations across different business units and projects doing software development and integration work for various government contracts, including running an internal research project, looking at, at the time, integrating web services with network layer protocols to prioritise certain types of network traffic. I also led a field deployment team where we installed passive radar systems in the field to collect data.

But after about seven years with Lockheed, I decided it was time for something different. So I ended up getting a job interview for a guest researcher position, which happened to be with the biometrics group at NIST. And at the time, I didn’t have any background in biometrics, but I still ended up getting the job. And that’s how I started my journey at NIST.

Oscar: Oh excellent – quite interesting things before NIST. So now I would like to hear: what, specifically, is the work you do at NIST?

Mei: Yeah. So looking back, I’ve been here at NIST for close to 10 years now. And I’ve had the opportunity to work on a number of biometrics projects. One of my first roles at NIST involved developing software for a large-scale iris recognition evaluation. Later on, I also ran a tattoo recognition programme focused around testing automated tattoo recognition capabilities, and also the development of best practices on how to actually collect tattoo images to support image-based tattoo recognition.

But most recently, though, I’ve been working on projects focused on face recognition, and in particular, running various testing efforts within our Face Recognition Vendor Test, or what many people know as FRVT. So NIST has been doing work in the area of face recognition for well over two decades now. And FRVT has sort of been our flagship face recognition test programme.

And today, it’s an ongoing test where developers send their algorithms to NIST, and we test them on any number of sequestered datasets that developers presumably have not seen before. And the tests that we run are free and open worldwide – we don’t charge anything. And what it really does is create a level playing field, because developers’ software runs on the same data, in the same way, on the same hardware.

And we document results in public reports. And because we publish the results publicly, the research and the commercial community – they pay attention to this. Customers and end users and policymakers also pay attention to our reports. And what we found is that our evaluation process actually helps develop and improve technology over time, because the developers are able to use the information that we provide back to them to improve their systems.

So, I mean, that’s a lot of what we do. We publish reports with numbers, and the metrics and protocols that generated those results, and we let the readers really decide what to do with the information. And sometimes, as an outcome of our tests, a need for a particular standard is identified. So we go off and we develop those standards.

But currently, our FRVT programme includes tracks to evaluate core face recognition capability, as well as other activities that we’ll add and modify based on the needs of the community. And a recent example of that is a test we ran evaluating face recognition on people wearing face masks, which was a need driven by COVID-19, where face recognition systems may have to deal with subjects wearing masks.

Oscar: Yes, of course, very, very topical. And I was actually about to ask you if the work that NIST does is only for American companies or institutions. But you already told me that any developer – any company doing technology, either private or public, in any country in the world – can submit their algorithms or their software to your group.

Mei: Yes, as I mentioned, our evaluations are free and open to anyone worldwide. We have a large percentage of the commercial face recognition market who participates in our tests, and we’re always open to academic and research entities participating as well.

Oscar: OK, so academic as well. And if we look at the face recognition systems that are already commercial – that people are buying or that institutions and companies are using today – which are the most common applications for face recognition?

Mei: So the applications of face recognition really have expanded over the past 10 years. And this has to do with face recognition undergoing a technology revolution, really, over the past five to seven years. And applications can range from unlocking our cell phones every single day to using e-gates to enter or exit countries. We’ve also encountered some very unique and interesting applications. There was one news article that talked about how face recognition was being used in some public parks in China to ration toilet paper and to battle toilet paper theft. So face recognition is being used in many, many places.

And in our tests, the number of face recognition providers out there has grown significantly as well. But one thing I’d like to mention is that there are a lot of questions about the accuracy and state of the art of face recognition today. And it really depends on the algorithm. There are some algorithms that are quite capable and very accurate. But there’s also a long tail of not-so-accurate algorithms that we’ve tested as well. So there are large differences between the good algorithms and the bad ones.

Oscar: Yes. So, let’s move the conversation to – you mentioned the Face Recognition Vendor Test. That’s the test that your group does regularly.

Mei: Yeah.

Oscar: Tell us a bit about that – you’ve already mentioned the effectiveness a little. Also, to clarify, there is verification and there is identification – could you tell us the difference between the two and how each relates to effectiveness?

Mei: Yeah, yeah, so we run two tracks that involve core accuracy. And we look at – the first evaluation is what we call one-to-one verification. So essentially, we’re trying to figure out how well the algorithms can take two images, and determine whether or not they’re the same person. And as I mentioned before, one-to-one verification applications include, you know, using face recognition to unlock our cell phones, or trying to authenticate our passport when we’re trying to cross an e-gate, for example.

In one-to-many identification, the use case involves a database. Given a database and an image that you want to search against that database, we measure how often the correct mate is retrieved from the database, and how often an image of a person who doesn’t actually exist in the database gets falsely retrieved. So those are the two use cases.
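To make the difference concrete, here is a minimal sketch of the two use cases in code. It is illustrative only – the `similarity(probe, reference)` function stands in for whatever face recognition engine is in use, and the threshold is a made-up value, not part of NIST’s FRVT protocol:

```python
def verify(probe, reference, similarity, threshold=0.6):
    """1:1 verification: are these two images the same person?"""
    return similarity(probe, reference) >= threshold

def identify(probe, gallery, similarity, top_k=1):
    """1:N identification: rank the gallery (a dict mapping identity
    to enrolled image) by similarity to the probe and return the top
    candidates. A 'rank one hit' means the correct mate comes first."""
    ranked = sorted(gallery,
                    key=lambda identity: similarity(probe, gallery[identity]),
                    reverse=True)
    return ranked[:top_k]
```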

And so to give you an idea of the state of the art here, on well-controlled photos, so well-controlled full-frontal well-lit images, when doing one to one verification – back in 2017, the error rates or the false rejection rates were about 3% for the most accurate algorithms. Fast forward to today, 2021, those error rates have come down to about 0.2%. So you know, we’ve seen significant improvements in the state of the art in face recognition. And we’ve seen that the good algorithms have the ability to see through some of the traditional difficulties in face recognition, such as poor illumination, variable facial expression, and pose.

On the one-to-many end, there is evidence that some algorithms can now do profile-to-frontal matching, meaning being able to use a profile or side-view image of a person to find them in a database of frontal images. So the top-performing algorithms currently in our tests can take a profile image, search it against a database of about 1.6 million people, and find the correct person at rank one about 94% of the time. So, I mean, the technology really has come a long way, especially when you only have half of the face available to you to do recognition with.

But with that said though, there still remain certain limitations to the technology, such as being able to differentiate between identical twins, demographic differentials, and extremely poor-quality photos as well. And we’ve seen that image quality can range from, you know, excellent and good to bad, ugly and far beyond.

Also, as face recognition technology is progressing, ways to exploit the vulnerabilities of face recognition have been advancing as well. Examples include face morphing and presentation attacks. And these types of attacks impinge on the security of your system. So this is something that developers and end users need to pay attention to.

Oscar: Yes, but as you said, in the best cases, with the best algorithms, the results are quite impressive – I would say really very accurate. That’s great. Tell us now about these aspects of face recognition that you’ve investigated. You mentioned the demographic effects briefly – but let’s start, for instance, with the effects of face masks.

Mei: Sure. So we conducted a study looking at face recognition accuracy on people wearing face masks. And this was motivated by the pandemic that we’re in right now, which has brought on the global practice of people wearing protective face masks in public places for safety purposes. So there’s a need to be able to use face recognition on people wearing masks – from trying to unlock our cell phones while we’re out grocery shopping, to crossing e-gates at airports without needing to remove our masks.

And the presence of masks presents a problem for face recognition, because regions of the face are occluded by masks, including the mouth and the nose, which contain information useful for recognition. So what we did in our study was apply solid-colour masks digitally to a large set of photos – millions of them – and then compare the accuracy of face recognition algorithms on unmasked versus masked images. And we also looked at how changing the shape of the mask, the colour of the mask, and the amount of nose coverage of the mask would affect performance.
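NIST’s own mask-application software isn’t shown here, but as a rough sketch of the idea, a solid-colour polygon can be drawn over the lower face using detected landmarks. This sketch assumes OpenCV and a 68-point dlib-style landmark array; the colour and the choice of nose point are illustrative:

```python
import cv2
import numpy as np

def apply_synthetic_mask(image, landmarks, colour=(80, 80, 80)):
    """Cover the lower face with a solid-colour 'mask' polygon.

    `landmarks` is assumed to be a 68-point (x, y) array in the common
    dlib ordering, where indices 1-15 trace the jawline and index 27 is
    the top of the nose bridge. Choosing a lower or higher nose point
    varies nose coverage, and `colour` varies mask colour - two of the
    variables the study looked at.
    """
    polygon = np.concatenate([landmarks[1:16],    # jawline, ear to ear
                              landmarks[27:28]])  # nose bridge point
    masked = image.copy()
    cv2.fillPoly(masked, [polygon.astype(np.int32)], colour)
    return masked
```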

So initially, we had tested algorithms that were developed prior to the pandemic. And what we found was that accuracy with masked faces declined substantially across the board. And the majority of the pre-pandemic algorithms just didn’t work with face masks, and the error rates were very high. Subsequently, though, we started receiving algorithms developed after the pandemic was declared. And it’s clear that some developers have started adapting their algorithms to better handle face masks.

So currently, in the best cases, the error rates on masked faces are anywhere between 2% to 5%, which is comparable to where face recognition technology was on unmasked photos in about the 2017-2018 timeframe. But if you think about it, right, these results are still quite impressive given 60% to 70% of the face is covered by a mask. So the presence of face masks has set the technology back a few years, but the technology may still be usable, depending on your application.

And we found that the amount of nose coverage had an impact on error rates – so high masks that covered more of the nose produced more errors when compared to masks that covered less of the nose. We also saw that the shape of the mask mattered. There were higher error rates with the wide masks when compared to round masks. And this makes a lot of sense, because the wide masks tend to cover more of the face than the round masks do.

So the study that we ran was a laboratory test using synthetic masks. And what it does is give us a first-order answer to the question of whether face mask occlusions have an impact on face recognition algorithms when compared to people who aren’t wearing masks. We didn’t use real masks in our data because we didn’t have a large dataset of real mask images available to us. So the test might be missing certain realities of real masks that might impact accuracy, such as certain textures or patterns, or the fit of real masks over different faces.

And our test didn’t take into consideration any interactions with the camera that the mask might have, because many cameras today run with exposure control. And it’s possible that a dark mask on lighter skin or white mask on darker skin could cause over or under exposure problems. So I think the best thing for owners and end users of face recognition systems to do is to know their algorithm and how their system performs in their own environment, their own operational environment, ideally using the operational data coming from their systems of people wearing real face masks.

Oscar: So do you have people in your group who are, let’s say, the testers – who wear the masks – and you test the algorithms?

Mei: Yeah, what we did was – we had an existing repository of many, many images, and we overlaid synthetic masks using software, and then we ran the evaluation. But I think the best way to assess your system’s accuracy as installed in your environment is to use the data collected on your system from people wearing real face masks who are coming through your system. And that will give you a much better answer as to how well your system works.

Oscar: And of these developers – the institutions and companies that are developing face recognition – have many already adapted their algorithms for face masks?

Mei: Yeah, so I would say that some of the top-performing developers have started adapting their algorithms for face masks. But there are still a number of developers submitting algorithms who just aren’t addressing the face mask issue, and we can’t necessarily compel or control the algorithms that they submit to us. So we just test them, and then we report the results.

Oscar: Sure. Now tell us about the case of face morphing.

Mei: Yeah. So as I mentioned, as the performance and deployment of face recognition technology have ramped up in the past few years, there has also been an increasing number of techniques aimed at defeating or exploiting vulnerabilities in face recognition systems. And one such vulnerability is face morphing. Face morphing is an image manipulation technique where you merge the faces of different people to form a single photo.

And morphing itself as a concept isn’t anything new. And it’s really quite easy to do – there are a number of free applications out there that can take two photos and create a morph. But face morphing can actually be used to create a security vulnerability to exploit modern face recognition systems. Because what happens is, when you morph, say, two faces of different people, you get an image that looks like both people. And if that morphed image gets onto an identity credential, it can be used by both people – because, one, the morph looks visually like both subjects. But more importantly, modern face recognition algorithms are fooled by this morph, and will actually match images of both subjects to the morphed photo.
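Mei’s point that morphs are easy to produce can be illustrated with the simplest possible morph: a pixel-level alpha blend of two face photos. This is only a rough sketch – real morphing tools also warp the facial geometry into alignment, which this omits – and the file names are placeholders:

```python
import cv2

# A 50/50 pixel blend of two face photos that have already been
# cropped to the same size and aligned to the same eye positions.
# Real morphing tools also warp facial landmarks into alignment,
# so an attacker's morph would be far cleaner than this one.
subject_a = cv2.imread("subject_a.png")
subject_b = cv2.imread("subject_b.png")
morph = cv2.addWeighted(subject_a, 0.5, subject_b, 0.5, 0)
cv2.imwrite("morph.png", morph)
```

The security problem is exactly what Mei describes next: a fresh photo of either subject can match the blended image above a verification threshold.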

So, imagine an attacker working with an accomplice to use a self-submitted morphed photo to apply for or renew a passport, say. Visually, the morph looks like the person applying for the passport, so it’ll likely pass any sort of visual check by an officer. And once that morphed photo gets onto the passport, we now have a fraudulent yet genuine document that can be used by both people. And when the attacker takes that passport to try and enter or exit the country using an e-gate, which is powered by face recognition under the hood, the face recognition algorithm will most likely authenticate the user against the morph, and that e-gate door is going to pop open. So, morphing poses a threat to entities that accept any sort of user-submitted photo for identity credentials.

And all of the capable face recognition algorithms that we’ve tested are vulnerable to this morphing phenomenon. A few years ago, a real case of the morphing threat actually made news headlines. There was a member of a German activist group who applied for a German passport with a morphed image. They morphed a photo of themselves with an Italian politician. And they successfully got the passport with the morphed photo on it. And this made news headlines internationally and was essentially a wake-up call for a number of governments worldwide that accept user-submitted passport application photos, for example.

So given the security concerns around morphing, NIST launched a programme called FRVT Morph. And the goal here was to look at whether morphed images could be detected using software algorithms, and how well they work. And we’ve tested a number of morph detection algorithms over the course of about two years, over multiple datasets of escalating difficulty. So our tests range from using rudimentary morphs that have very clearly visible morphing artifacts, to using quite sophisticated morphs where it’s really hard to spot any sort of image manipulation.

And it really is the sophisticated morphs that are concerning, because that’s what an attacker would realistically use if they were going to do this. As of today, automated morph detection still remains a really challenging problem. Morph detection using a single image in isolation is a very hard problem, and we really haven’t seen any breakthroughs in the technology in that aspect.

There is a second scenario where, in addition to the image that’s in question, the system also has access to a photo of one of the subjects that went into the morph. And with this additional bit of information, we have seen some promising results in morph detection capability. But with that said, there’s still quite a bit of progress to be made before the technology is mature enough for operational use.

Other research organisations have studied human performance on detecting morphs. And the consensus is that, for morphs that are sophisticated – where there are little to no visible artifacts for people to pick up on – humans have a very hard time detecting whether an image is a morph as well. So given that both humans and machines aren’t currently reliable when it comes to detecting morphs, it really solidifies the case for better processes and procedures to try and mitigate this vulnerability by preventing morphs from getting into your system in the first place. That means putting better enrolment processes in place, whether through live enrolment or trusted capture of some form, to ensure image manipulation isn’t possible in your process.

Oscar: Yeah, it’s quite a big challenge, I can see – not easy to mitigate. But as you said, the enrolment is very important, so that every time a user is registered into the system, the photo is taken by the system itself. And you’re saying that once a morphed photo is in the system, the only way to find it is an algorithm that searches the database for which of these photos are morphed?

Mei: Yeah, so once a morphed photo gets into your system, it’s really hard to figure out whether or not you have a morph. There is one sort of process if you have a centralised database. For example, the Dutch Vehicle Authority actually ran a pilot a couple of years ago where they took their operational driver’s licence data and created some morphs using people who are in their system, and then they did a search with the morphed photos. And what happens is, given that a morph is of, let’s say, two different people – if both subjects exist in that database, then when you do a search of the morph against the database, both of those subjects will be retrieved at high ranks with high scores.

So, something system owners could do is essentially run a one-to-many de-duplication search on any self-submitted application photo coming into their system, and look for multiple subjects coming back with suspiciously high scores. And that could be a way to trigger further investigation of the application photo, for example.
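As a rough sketch of that de-duplication check (again assuming a hypothetical `similarity` function and an illustrative threshold, not any specific vendor’s API):

```python
def flag_possible_morph(photo, gallery, similarity,
                        high_score=0.85, min_identities=2):
    """Search a submitted photo against the enrolled gallery and flag
    it when several *distinct* identities come back with suspiciously
    high scores - the signature of a morph whose contributing subjects
    both exist in the database."""
    hits = [identity for identity, enrolled in gallery.items()
            if similarity(photo, enrolled) >= high_score]
    return len(hits) >= min_identities, hits
```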

Oscar: What about the demographic effects in the facial biometrics?

Mei: Yeah, yeah. Demographics is a bit of a hot topic. So the buzz around bias in face recognition really started a couple of years ago. There was a study published by MIT called Gender Shades, which tested various cloud-based gender classification algorithms and reported that gender was often misidentified in darker-skinned females.

Now, gender classification has to do with facial analysis of a single image – you’re trying to answer whether the person is male or female – while face recognition has to do with extraction and comparison of identity information between two photos. So these two are separate things, but the reporting of gender classification algorithms being biased got conflated with face recognition being biased, which often resulted in incomplete reporting in the press of face recognition being biased.

There was also an influential report published by Georgetown University on potential demographic biases in face recognition. They published a report called The Perpetual Line-Up, which criticised unregulated police use of face recognition in the United States, and talked about the possibility of racial bias or demographic bias which would impinge on people’s civil liberties.

So in response to this series of events, one of our partner government agencies asked us to look at this demographic bias problem and to see to what extent it’s real. So in late 2019, NIST published a report on demographic differentials, talking about the various stages of the face recognition system pipeline where bias can potentially occur. And we reported accuracy for over 180 face recognition algorithms across demographic groups defined by sex, age, race and country of birth.

And what we found was that there was a wide range of performance among the algorithms tested. So the algorithm matters, and more accurate algorithms are obviously going to give you better outcomes if they’re used. We found that there were much larger demographic differentials in false positive rates when compared to false negative rates. A false negative, for example, is when you try to use your face to unlock your phone and you get rejected and locked out. A false positive is when someone else is able to unlock your phone with their face. Right?

So, we tested one-to-one verification algorithms and we tested one-to-many identification algorithms. And for the class of one-to-one verification algorithms that we tested, we found that false positive rates were higher in people from West and East Africa and East Asia than in people from Eastern Europe. False positives in females were generally higher than in males. And false positives were largest in the oldest and youngest age groups.
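A minimal sketch of how a per-group false match rate could be tallied from impostor comparison scores (the grouping, data layout and threshold here are illustrative; the report documents NIST’s actual methodology):

```python
from collections import defaultdict

def false_match_rate_by_group(impostor_trials, threshold=0.6):
    """`impostor_trials` is an iterable of (group, score) pairs, each
    score coming from a comparison of two *different* people. The
    false match rate per group is the fraction of impostor scores
    reaching the verification threshold - i.e. how often someone
    else's face would be accepted."""
    totals = defaultdict(int)
    false_matches = defaultdict(int)
    for group, score in impostor_trials:
        totals[group] += 1
        if score >= threshold:
            false_matches[group] += 1
    return {group: false_matches[group] / totals[group]
            for group in totals}
```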

Now, the magnitude of these demographic differentials differed between algorithms. Most of the very accurate algorithms showed lower demographic differentials. Additionally, some of the outcomes for the one-to-many identification algorithms that we tested didn’t show the same kind of demographic differentials as seen in the one-to-one algorithms. There were a handful of developers with one-to-many algorithms that had stable false positive rates across the different demographic groups that we tested.

When we published this report, we didn’t analyse cause and effect. We put these numbers out, and operators and users of face recognition need to think about what these numbers actually mean for their operations, right, because the impact of differentials could vary drastically with your application and the associated risks. For example, if you’re using a face recognition system to authenticate members for gym access, the consequences of a false positive might be someone getting in and using your gym equipment for free when they didn’t pay for membership.

But a false positive in the context of a law enforcement investigation could have much greater consequences. So application definitely matters. At the end of the report is a list of recommendations for research towards mitigating demographic differentials. And face recognition developers have been actively working on this demographics issue. NIST plans on putting out an update to the demographics report later this year, and will also publish and maintain a leaderboard that will report several numeric measurements of demographic differentials for algorithms, as a way to track progress in this area.

Oscar: Yes, demographic bias – something that has been in the media and on social media a lot in the last two years, as you said. And you said that in your report you didn’t get to the causes. You cannot say, for instance, why this developer or this company had such poor performance across demographics, right? But is there any pattern you can point to now? It’s not about trying to blame anyone, but what went wrong in the way they designed their algorithms – what is the pattern?

Mei: So, these algorithms were submitted to us as black-box algorithms, so we have no idea how they were developed or what data they were trained on. But there was one data point that we found in our study: many algorithms developed in China did not give the elevated false positive rates on Chinese faces that algorithms developed elsewhere in the world do. So that data point sheds some light on the fact that training data matters. So in terms of recommendations, we recommended that developers look at using more diverse training data to support the development of their algorithms.

Oscar: Yeah, so the training data has to be the widest possible. So that’s at least one of the main reasons why most of these algorithms perform so badly on this.

Mei: Yeah, I mean, if you really want to make face recognition better, you also want more signals, right? You want more discriminative power from the biometric that you’re sensing. So one other recommendation we make is, you know, in addition to capturing face, potentially using iris recognition as well, because there are a number of cameras on the market today that simultaneously capture both face and two iris images. So if you could use both modalities, you’ll likely be in better shape in terms of accuracy, because iris doesn’t appear to have the type of issues that we see with face when it comes to discrimination.
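One simple way to use both modalities, as Mei suggests, is score-level fusion: normalise a face score and an iris score to a common range, take a weighted sum, and threshold the result. A minimal sketch – the weights and threshold are illustrative, not a recommendation:

```python
def fused_decision(face_score, iris_score,
                   w_face=0.4, w_iris=0.6, threshold=0.7):
    """Score-level fusion of two biometric modalities. Both scores
    are assumed to be already normalised to [0, 1]; the weights and
    threshold would be tuned per deployment."""
    return w_face * face_score + w_iris * iris_score >= threshold
```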

Oscar: So do you mean at the time the developers create the algorithm, or is it for the application – that they should use both?

Mei: Yeah, it’s for the application, right.

Oscar: For the applications, yeah, to make it stronger. Correct. And there is one more you mentioned – the forgotten demographic, twins. Tell us a bit about that.

Mei: Yeah, twins are what we often consider the forgotten demographic. Because whenever anyone brings up demographics, the focus naturally leans towards race or sex or age, right? But twins can exist in any given race, sex or age group, and about 3% of babies born in the United States are twins or multiple births. Identical twins are always of the same sex, and fraternal twins can be of the same or different sex.

And currently, one of the issues with face recognition accuracy when you get to very, very large population sizes is the presence of twins – specifically identical twins – because face recognition has issues differentiating between identical twins. So given a pair of identical twins, twin A will authenticate as twin B, and vice versa, making twins a demographic with a very high false positive rate. So twins are still a bit of an open limitation for face recognition at the moment. And actually, later this year, we plan on publishing a report quantifying current face recognition performance on twins.

Oscar: Yes, very, very interesting. Now, we have been discussing a lot, of course, from the perspective of the developers – the companies, the technology vendors and providers who create these algorithms and solutions. So what about the organisations who are going to buy some of these solutions – such as law enforcement, government, or another company that needs this authentication? What should someone who is evaluating one of these products check? What would be top of your mind?

Mei: Yeah, again, it depends on what you’re using the face recognition system for. Accuracy is one thing, right? But outside of accuracy, there are a number of other things that organisations probably should pay attention to. For example, for complete systems, you might be interested in how quickly the face recognition system works, how mature the software is, how reliable it is, how easy it is to integrate into your environment, the costs, support for human review. And all of this can vary widely between technology providers.

But I would recommend that anyone who’s looking at procuring a face recognition system definitely do some research into how well the technology works, what types of limitations the developers themselves have found, and also what other third-party independent test organisations have found as well.

Oscar: And for the best providers, it’s very likely possible to find them in your reports?

Mei: Yeah, as a part of FRVT we actually maintain a leaderboard for both the one-to-one and one-to-many test cases. Those leaderboards are public, and they’re updated on a bi-weekly or monthly basis, whenever we put out new results. So, interested parties can go to our website and take a look at that information as well.

Oscar: Excellent to know – that’s super valuable, I believe. Thanks a lot for this conversation. But I’ll ask you a final question, Mei – for all business leaders that are listening to this conversation now, what is the one actionable idea that they should write on their agendas today?

Mei: Something that I’ve learned since working in the field of biometrics is that nothing is permanent except change. And maintaining sharp awareness and staying educated really, and adapting to the ever-changing state of everything around us is extremely important. This pandemic is a perfect example of how quickly changes to technology requirements can occur almost overnight, right? From the need to process people wearing face masks to finding new ways to implement contactless biometrics.

And at NIST, we’re constantly learning new things about this new generation of algorithms and trying to adapt our tests to keep providing relevant information to the community. But not only does change awareness apply from a business and technology standpoint, it also applies to people. And this includes understanding and adapting to the changing needs and state of mind of your employees, your co-workers and your family.

The needs of my children have changed during this time that we’ve had to quarantine at home with them not being able to see their friends and their schooling shifting to a completely virtual environment. So the type of support that they need from me has changed. And perhaps the type of support that I need from them has changed as well. And I imagine things will continue to change as we hopefully navigate through a path back to what we know as normalcy.

Oscar: Yes, yeah, completely right – being ready to adapt to changes. It’s true, I think no solution is going to solve the problems forever, so it’s good to be ready to hear the signals of what changes are coming.

Mei: That’s right.

Oscar: Again, thanks a lot, Mei, for this fascinating interview. And please tell us: if people want to learn more about the work you’re doing at NIST, what are the best ways to find information or get in touch with you?

Mei: So all of our projects are public and the reports and what we did and our tests and our activities are available online. So, if you go and you Google “NIST FRVT” that will take you to our FRVT web pages where you can find out more information about our face recognition tests.

Oscar: OK, perfect. Thanks a lot Mei and all the best.

Mei: Thank you so much, Oscar. It was great talking to you.

Thanks for listening to this episode of Let’s Talk About Digital Identity produced by Ubisecure. Stay up to date with episodes at ubisecure.com/podcast or join us on Twitter @ubisecure and use the #LTADI. Until next time.

[End of transcript]