Let’s talk about digital identity with Jessica Figueras, Founder at Hither Strategy.

In episode 64, Oscar and Jessica explore the ethical issues surrounding digital identity, and what’s happening to ensure ethical use of identity in the UK and globally. Plus, how organisations can continue to protect themselves while staying on the right side of this ethical minefield.

[Transcript below]

“These issues are just too important to leave to business as usual. These issues are everyone’s responsibility to fix.”

Jessica is a strategist specialising in trust and security, governance, and the role of tech in civil society. She works with start-ups and scale-ups on commercial strategy, and advises UK Government on technology and policy issues relating to cyber crime and online harms. Previously Jessica led multi-million pound research and data programmes for companies including Dods, GlobalData and Ovum, and has advised senior executives in large established tech companies as well as many VC-backed scaleups.

She is currently Vice Chair of the UK Cyber Security Council, and was previously Chair of the Board at NCT, the UK’s largest charity for parents. She is a sought-after speaker and commentator, and has published extensive research on the application of emerging technologies across government, telecoms and other regulated industries.

Find Jessica online at jessicafigueras.com, on Twitter @JessicaFigueras and on LinkedIn.

We’ll be continuing this conversation on Twitter using #LTADI – join us @ubisecure!

Go to our YouTube to watch the video transcript for this episode.

Let's Talk About Digital Identity
Ubisecure

The podcast connecting identity and business. Each episode features an in-depth conversation with an identity management leader, focusing on industry hot topics and stories. Join Oscar Santolalla and his special guests as they discuss what’s current and what’s next for digital identity. Produced by Ubisecure.

 

Podcast transcript

Let’s Talk About Digital Identity, the podcast connecting identity and business. I am your host, Oscar Santolalla.

Oscar Santolalla: The emerging ethics of digital identity is what we’re going to discuss today. And our guest today is Jessica Figueras, founder at Hither Strategy. Jessica is a strategist specialising in trust and security, governance, and the role of tech in civil society. She works with start-ups and scale-ups on commercial strategy, and advises the UK Government on technology and policy issues relating to cybercrime and online harms.

Previously, Jessica led multi-million pound research and data programmes for companies including Dods, GlobalData and Ovum, and has advised senior executives in large established tech companies, as well as many venture capital-backed scale-ups.

She is currently Vice Chair of the UK Cyber Security Council, and was previously Chair of the Board at NCT, the UK’s largest charity for parents. She is a sought-after speaker and commentator and has published extensive research on the application of emerging technologies across government, telecoms, and other regulated industries.

Hello, Jessica.

Jessica Figueras: Hi, thank you so much for inviting me.

Oscar: It’s great having you, Jessica. It’s super interesting, the conversation that we’re going to have now. So let’s talk about digital identity. And first of all, I would like to hear a bit more about yourself, if you can tell us about yourself and your journey to the world of identity.

Jessica: Sure. So in fact, it’s interesting how many people I’ve found working in the field of digital identity have a background in the telecoms industry, which is my background as well, originally. So around 20 years ago, I worked at a company called Ovum, which was at that time the leading European tech analyst firm. And I was really focusing on how you had, for the first time, the telecoms industry converging with the mainstream IT industry. And as mobile telecoms operators started to get to grips with the consumer web, it was really clear that one interesting asset mobile telcos had was identity. And so I became very interested in the ways in which telcos might be able to use their knowledge of the identity of the user, and in those kinds of use cases.

So after that, I moved away from telecoms, and for the past eight years I’ve focused mostly on government technology, particularly here in the UK. Around 10 years ago, the UK government, like everyone else, was looking to digitise, but it became apparent that there was a very significant barrier around digital identity. Government at that time was very keen, where it could, to replace call centres and expensive channels with what were seen as cheaper digital channels. But it was understood that if citizens couldn’t verify their identity online, then the take-up of government digital services would stay low. And at the time, there were very long and complex attempts to create a cross-government digital identity service, which ran into a lot of problems. So that’s something I’ve been following with interest.

And of course, government isn’t just about its own digital services for citizens. Government also leads our national cyber security response, and it has an important role to play in making the digital economy work as well as it can. And of course, government is also the authoritative provider of many identity credentials, like passports, driving licences and so forth. So this whole area is of strong interest to me, and a lot of the work I do at the moment is looking at these areas where government can do more to support security and trust with digital identity.

Oscar: OK, excellent. So, from the telco industry and nowadays more with government, as you mentioned – yeah, super interesting. So, looking at today, how important are identity and identity management for any government and any type of organisation to establish digital trust with users?

Jessica: Well, of course, it’s a crucial building block. I mean, if we look beyond digital for a moment, we can see that everyday life just doesn’t function unless there is trust there. And what I mean by that is that in society, we need a basic level of trust that people are playing by the same rules as us. Trust kind of oils the wheels of social interactions, and when it’s missing, things go wrong.

So, for example, if people don’t trust the police, they won’t report crimes, and therefore crime will go up. Again, people won’t pay their taxes if they think there’s a lot of corruption there. And then, looking at everyday commerce, you won’t buy things online if you don’t trust that the other party is going to fulfil their side of the bargain, or if you think they’re going to exploit you in some way.

Equally, if you’re a potential merchant or a service provider, you may well not offer services online if you think there’s a very good chance that bad actors are going to use those services in ways that expose you to risks. So, there are particular areas where organisations and users on both sides need digital trust to be in place. These are questions such as: are you who you say you are, primarily? And that links on to the question, are you entitled to be here? Are you entitled to use this service? And then there are questions about: are you going to do what you said you’d do? And, I hope also that you’re not going to do what you didn’t say you were going to do – like, for example, exploit my personal data, or use my service in a way that exposes me to harm.

And trust is also about understanding that the other party is going to treat you fairly. For example, making exceptions or special provisions in areas where I, as a user, have got special needs. So yeah, establishing trust is absolutely critical. And identity management is the foundation piece for all of this. It starts off with those questions – you are who you say you are, you’re entitled to be here – and it provides a level of assurance so that if things go wrong, you can then trace them back.

Oscar: Yeah, yeah, definitely, I can see the connection between identity management and digital trust, right? On both sides – that’s interesting, that you say it’s both sides: not only the end users, but also the service providers, the companies and organisations like government that provide services.

And you also touched on how bad actors can sometimes misuse identity and everything related to persons. So this brings us to the main topic we’d like to discuss today, the ethical use of identity. We’d like to know, as you work a lot on this, what’s happening today to ensure the ethical use of identity – in the UK, where you work most closely, and globally? So what’s happening today?

Jessica: Yeah, absolutely. So identity is a really interesting area because it’s a kind of focus point, I think, for ethical concerns touching quite a number of areas. And in many cases, I think these issues are quite complex. There’s not always an easy answer. It’s about finding a balance.

So I guess to start off with, just to try to summarise some of the ethical concerns around digital identity and the reasons why people worry. So firstly, there’s the whole area of privacy – end user privacy. When we verify someone’s identity online, we potentially – not necessarily, but potentially – have the ability to build a very detailed picture of a named individual. That’s not always the case, clearly: there are many digital services which build detailed pictures of users’ activity online, but without necessarily being able to map it to a real-world identity.

When we verify someone’s identity, we’re talking about a specific person. Now, if you start to link that real-world identity to their usage of your services – whether those are government services, financial services, what they’re buying online, websites they’re looking at – perhaps you start to join up the data you have with data from elsewhere. What you then have is potentially a wealth of data about a named individual, which could be used in many ways, some of which may be extremely unethical. These very detailed profiles could be used as a tool of social control in a sinister way, by governments and private organisations equally.

And if you look at the concerns generally today around surveillance, around too much power in the hands of tech companies which citizens cannot hold accountable – we certainly see, when authoritarian regimes start to collect very detailed data on their citizens’ activities, just how troubling this area is. So this is the first area where we need to be ethically aware. When we verify someone’s identity, what are we then going to do with that data?

And the second area is the whole area of bias. This goes back to the role of identity verification as a gateway to access services. The idea is that there are many digital services which are sensitive in different ways – they expose the provider to significant risk of fraud, for example, or cyber security risk. And, of course, there are those questions about needing to check the entitlements of the user.

So this is why identity verification is about gatekeeping, you know: are you who you say you are? And are you entitled to be here? Now, that’s fine. But where bias can creep in is when unreasonable assumptions are made about how that gatekeeping should work. What that could mean is that certain groups of users could be systematically denied access to services on bases which are very unfair. And here is where we have the concerns around digital exclusion. There are lots of different facets to this. We can think about people who are excluded for different reasons. There is a lot of concern around a cluster of characteristics to do with poverty, old age in particular, education, people who are less digitally literate, people with an irregular immigration status, or perhaps people who are homeless or of no fixed address.

With characteristics like these, you tend to find that people in these groups are much less likely to have the credentials they need to verify their identity. They may not have a bank account, they may not have a fixed address, no passport, no driving licence, no credit record. And then you have the challenges around the user’s actual ability to use the service. Typically, you might find this with older age groups or people with learning disabilities, who simply don’t have the digital skills or find it very hard to navigate services which can be very complex. And then you have other kinds of potential bias as well.

So for example, it is quite well known that at least some of the algorithms which are used to verify people’s identity against a photograph have a real problem with skin colour. People with darker skin are much more likely to be rejected, and that’s a real problem – particularly when you bring in things like facial recognition and biometrics for security purposes. That’s really quite troubling, because you may well be involving a law enforcement element on the basis of a technology which is treating people differently on the basis of their skin colour. So, there are lots and lots of different ways that bias can creep in, and work in quite an insidious way with identity verification to lock certain types of people out.

So here we have two broad categories of ethical difficulties: one around privacy, and the other around bias. In terms of what’s happening, I would say that there is a lot of awareness about both of these issues. These are areas of interest for many civil society groups. We have some codes of conduct out there, which are voluntary. For example, a group called Women in Identity has one which covers bias, which is good. There are some voluntary codes of conduct around biometrics too – the Biometrics Institute has one, for example, and here in the UK, law enforcement is starting to go down that route as well.

And the UK Government is also developing a trust framework, which is really interesting. The idea is that companies which provide digital identity services will be able to accredit those services against the trust framework, which should cover issues such as these. So that’s certainly very encouraging.

One thing I would say, though, is that so far, like I’ve said, civil society is certainly waking up to potential issues with the way that identity is used. But particular parts of civil society have taken an interest more than others. I think that’s a potential issue.

Oscar: Thank you for that overview. It’s very interesting to hear about the many aspects of the ethical use of identity. And now, ultimately, whose responsibility is it to ensure the ethical use of identity? Is it government – is regulation part of the job? Or is it the organisations who build the services? So what would you say? Whose responsibility is it?

Jessica: It’s everyone’s responsibility. This is a question about the kind of society we want to live in. There are so many different questions here – some which are about ease of use and accessibility of digital services. Others are about how we balance different good things that we might want. We all want security, we all want privacy as well. And, you know, balancing these different kind of good things is something that everyone needs to get involved in.

Certainly, there are lots of areas where government really does need to step in to lead. Typically, these are areas where technical standards are required. Interoperability, of course, is key. And if we can build interoperability on the basis of high standards, then we can avoid the problem of a race to the bottom, as they say, where providers look to compete on having the easiest and cheapest digital identity service – which is something we want to avoid.

So yeah, government is, I think, starting to lead now in digital identity, which is good, and they are starting to put in place some solutions, such as the trust framework I mentioned. But I did want to say more about civil society’s role here. There certainly is a response from civil society. We see, for example, that there are quite a few think tanks who are interested in this, particularly looking at the international angle. For example, the Omidyar Network had a programme called Good ID, which has been taken on, I think, by Harvard now. The Tony Blair Institute also has a programme looking at how digital identity is developing in different countries, what some of the different issues are, and how they balance some of these different factors. So that’s good.

I think that what we haven’t seen in civil society is the breadth and depth of voices – the diversity of voices, actually – that we need to see involved in this debate. I’m starting to see much more participation by groups which are interested in bias, which is really, really good. I would like to see a lot more contribution from those who represent the digitally excluded: the elderly, people who are less educated, people with learning disabilities, people who are poor, homeless, or don’t have credentials.

These are the kind of people who are worst represented generally in society. They tend to not be at the table, they don’t work in the tech industry. They don’t take part in conversations like you and I are having now. And so, groups which represent these people, they really, really need to step up and take part because it’s not until we hear both sides that we start to understand how we balance these important social goods.

Now, to give you an example of where this has actually worked: the field of child safety online. We have seen, unfortunately, over the last 10 to 15 years, an absolutely massive rise in online child exploitation, which is really quite appalling. It has become a really major part – perhaps even the major part – of the work of law enforcement in this area; they have to consider the digital element.

Now, the fact is that if you’re going to protect children online, around social media when they’re interacting online, then part of the solution is almost inevitably going to be constraints and limitations around anonymity. And these are some of the debates that we’re having now in society. One thing that I think has worked there is that the NSPCC – which is, I think, the largest British children’s charity, with a legacy that spans a couple of hundred years – has really taken this up as a huge part of its mission now, to go and fight on behalf of children exploited online.

And so they’ve brought some real diversity to this debate. And they make sure that whenever we’re talking about things like encryption or digital identity or anonymity online, we don’t forget that protecting children is an important and legitimate use case, because otherwise, these things do tend to get forgotten. It’s everyone’s responsibility. And I would like to see a lot more diversity of voices at the table, I think discussing these important issues.

Oscar: Yeah, I couldn’t agree more. And it’s much wider than we normally hear, right? As you said, we work in the tech industry, and I hear some of the voices that are already showing this concern about the ethical use of identity. But many of the ones you just mentioned, we rarely hear from. And there are people who are much less connected than us – as you say, they are not listening to this podcast or in similar places where they can be informed or have a voice.

Jessica: Yeah, for sure. And it’s difficult, because I think many people in the tech industry are starting to wake up and become aware now, and are looking to be educated. And I think it’s often hard to know where to start.

Oscar: Exactly, exactly. Excellent to know all this, based on your extensive work and research on all the aspects that relate to the ethical use of identity. And now, looking at organisations, both private and public – how can organisations continue to protect themselves? Because something has to be done to protect themselves, while at the same time staying on the right side of this ethical minefield.

Jessica: Yeah, it really is very challenging. And I think it’s particularly challenging for organisations that may be looking to develop their own solutions in this, which some still do. So I think, first of all, organisations fundamentally should be aware that digital identity is an area where they need to carry out some assurance. And so, some questions to think about when you’re doing that assurance…

First of all, you need to know where in your organisation you need to verify people’s identities and how that gets done today. Typically, it falls into two categories, doesn’t it? Your own employees, and then your customers and partners. Of course, with customers and partners, some customers may be individuals and some may be businesses with individuals behind them as well, so some identity verification is quite complex.

The second thing you need to understand is: what is the purpose? Why do you need to verify identity? You need to really fully understand the risk you’re trying to mitigate and whether you’re providing the right level of assurance to mitigate it. If it’s too high – if you’re putting in place too much friction – that could make it harder for your customers to interact with you. And it also could worsen problems around inequality of access, like we’ve talked about. But of course, if you don’t put enough controls in, you’re opening yourselves up to all sorts of forms of cybercrime, and all sorts of risks.

So then you need to think about: what is it that you’re trying to verify here? What attributes? Now, for the vast majority of organisations, this will be name, address, possibly the organisation you’re associated with. For some services, particularly public services, it may well be your entitlement – questions, for example, about your immigration status. And some services, health-related services for instance, involve increasingly sensitive data. If you think of the COVID app, that’s your entitlement to go to a concert or something, based on data about your vaccine status.

Now, on top of this, there are also services which are at particular risk of fraud – these tend to be finance-related services, which are regulated. They have a regulatory requirement to carry out additional checks. These services are interesting because, as well as identity checks when you first sign up during onboarding, they often will lay on additional checks while you use the service, to make sure you’re still the person you say you are. And here, you’re bringing in much more complex background data, often without necessarily the explicit knowledge of the user – for example, what is your device, your location, your behaviour? Is the person using the service the same person who signed up in the first place?

OK, so then you need to think about: based on what attributes you’re trying to verify, what are the credentials you’re using to verify this individual? And what are the implications of that? You need to think about what you’re asking for. Does everyone in your target user group have those credentials? Can they prove themselves? If they can’t, what are the implications of that? Who is excluded? This is when we start to think about what groups are being systematically excluded here. There are always going to be individuals with particular needs, but if there’s a big group, we need to be concerned about that.

And of course, we also need to think, on the flip side: are the credentials you’re asking for going to be enough? Who can get those? If anyone can get those credentials easily – if you can steal a document from someone’s bin, say – well, you need to think about that as well.

And last but absolutely not least, a really critical part of assurance now, of course, is your suppliers: you need to vet your suppliers. You need to ask questions about their own cyber security posture – government is becoming much more demanding on this front now. You need to ask how they protect users’ data. And, of course, how they mitigate against bias and the problems we’ve talked about.

As I said originally, if your own developers, as an organisation, are creating an identity verification solution, you actually need to ask even more questions. Because we need to remember that in-house teams, unless they are specialists in building identity solutions, are often not going to have either the resources or the expertise to do this well. So yeah, it’s important to think about that too.

Oscar: Yes, in the end, it’s about asking a lot of questions of a lot of people, I believe – that’s the moral of this. So when designing these services – you mentioned developers, of course, and they are part of that – what are the roles? Who are the people who should be asking these questions or leading this work?

Jessica: These are basic governance questions. So, depending on how large the organisation is, they may well have risk owners. Senior executives who are running the parts of the business that the identity solution is going to be aligned with absolutely need to be on board. And in many cases, where this is a novel use, it may well be that the board needs to have visibility over this. This is an important dimension of risk management generally. And I think this is one area where senior staff certainly need to increase their awareness of the issues.

Oscar: Yeah, yeah, definitely. And it’s clear that, as you said very early in our conversation, the balance is quite hard to achieve – but of course, it has to be the ultimate goal. Jessica, finally, one question: for all business leaders listening to us now, what is the one actionable idea that they should write on their agendas today?

Jessica: It’s an idea that I call digital native risk. These are forms of risk which are native to digital – they’re not necessarily old risks which have now developed an online aspect. The fundamental idea is that business leaders need to think about digital native risk and how they’re going to fill those gaps in their understanding. Where are the weaknesses in your risk radar?

Risk management has been a board-level responsibility for a long time. And organisations in some sectors have a mature culture around risk management – I’m thinking particularly of very safety-critical industries, like the nuclear industry, and financial services, which are pretty mature in risk management. But as I say, there are new forms of risk now coming thick and fast. They fundamentally come from the very rapid adoption of digital in every area of business, in a way that has created a kind of dependency, a reliance, which may well really undermine basic business resilience.

Many organisations have spent the last 20 years removing redundancy in their operations. They’ve removed manual processes. And we only have to look at what’s happening now in Ukraine and Russia to ask: just how long could a business operate with no internet access? We can add to that forms of risk around cyber security, of course – the rise of ransomware during the pandemic has really made that top of mind.

Cybercrime is spiralling in every developed country, particularly economic crime – lots and lots of novel forms of fraud – and insider threat as well is top of mind for financial services. And then of course, there’s that whole piece around the unintended consequences of innovative digital services. We’ve talked today about ethics and issues such as, for example, how the rise of social media created a real problem around online child exploitation. Many, many innovative services do have unintended consequences – that’s just a fact of life.

And I think that business leaders, particularly at board level, really now need to start building this in right from the start. It’s not about becoming incredibly risk averse – it’s about planning, right from the start, how you’re going to fill the gaps in your understanding. The way that we develop our understanding of digital native risk needs to evolve and get better.

And as I said before, regarding our failure to anticipate the scale of online harms, I think many people feel that a lack of diversity was at the root of that – the fact that technological innovation has been driven by such a narrow demographic of people. Of course, the stereotype is that you’re talking about white men in Western developed countries, particularly the US, who tend to be disproportionately young, disproportionately highly educated, and probably have no children yet. We really need to think about broadening our awareness of the environment.

And one thing I’m seeing now, which is interesting, is that those organisations which find themselves under attack, which find themselves particularly subject to these forms of digital native risk, are increasingly finding that they need to share much more information between themselves – often with their competitors – bringing in government, law enforcement, academia, the private sector and their technology suppliers. That’s the importance of collaboration. As I said before, I feel these issues are just too important to leave to business as usual. These issues are everyone’s responsibility to fix, and I think that if we can get this kind of collaboration and open discussion, with diversity of voices around the table, we have a good chance of moving forward in a positive way.

Oscar: Thank you. Thank you so much for this conversation, Jessica. Please tell us how people can get in touch with you.

Jessica: Absolutely. So, you can find me at jessicafigueras.com, or you can find me on Twitter @JessicaFigueras, or LinkedIn. I’ve got a relatively unusual name, so I think it’ll be easy to find me. I’d love to speak if you want to get in touch.

Oscar: Again, it was a pleasure talking with you, Jessica, and all the best.

Jessica: Thanks so much.

Thanks for listening to this episode of Let’s Talk About Digital Identity produced by Ubisecure. Stay up to date with episodes at ubisecure.com/podcast or join us on Twitter @ubisecure and use the #LTADI. Until next time.

[End of transcript]