NIST Releases Updated Draft Guidance for Federal Agencies’ Use of AI in Identity Verification Systems — AI: The Washington Report | Mintz – Antitrust Viewpoints – JDSupra

xxx

On August 21, 2024, the National Institute of Standards and Technology (NIST) released its second draft for the fourth revision of the Digital Identity Guidelines. The revised draft guidance puts forth a risk-mitigating framework and requirements for “identity proofing and authentication of users (such as employees, contractors or private individuals)” accessing government services or interacting with government information systems online. Building off the first draft of the revised guidance, the latest draft now includes an entire section on AI in identity systems, recognizing both the benefits and risks that AI poses in identity systems and proposing three broad requirements to mitigate these risks.

From: NIST Releases Updated Draft Guidance for Federal Agencies’ Use of AI in Identity Verification Systems — AI: The Washington Report | Mintz – Antitrust Viewpoints – JDSupra.

xxx

Balancing the benefits and risks of AI, the new draft guidance proposes three requirements for AI in identity systems, focused on transparency and risk mitigation:


Organizations that rely on AI would be required to document and communicate their AI usage in identity systems. Identity providers and credential service providers (CSPs) that leverage AI would be required to document and communicate their AI “usage to all [relying parties] that make access decisions based on information from these systems.”

Organizations that utilize AI would be required to provide certain information “to any entities that use their technologies,” including information about the techniques and datasets used for training their models, the frequency of model updates, and the results of any tests of their algorithms.

Lastly, organizations that utilize AI or rely on systems that use AI would be required to adopt NIST’s AI Risk Management Framework for AI risk evaluation and also consult Towards a Standard for Managing Bias in Artificial Intelligence. Both NIST publications lay out practical steps to reduce AI bias, including using datasets with balanced statistical representation, documenting potential sources of human bias in datasets, updating and testing AI models regularly, creating a fairness metric by which to evaluate AI models, and assembling diverse and inclusive teams to design and deploy AI systems.
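One of the steps listed above, creating a fairness metric, can be made concrete. A common and simple choice is the demographic parity difference: the gap in positive-decision rates between demographic groups. The sketch below is purely illustrative (the function names and the example data are my own, not from the NIST publications):

```python
# Minimal sketch of a fairness metric: demographic parity difference,
# i.e. the gap in approval rates across demographic groups.

def positive_rate(decisions):
    """Fraction of decisions that were approvals (True)."""
    return sum(decisions) / len(decisions)

def demographic_parity_difference(decisions_by_group):
    """Largest gap in approval rates across groups; 0.0 means parity."""
    rates = [positive_rate(d) for d in decisions_by_group.values()]
    return max(rates) - min(rates)

# Hypothetical identity-proofing outcomes for two groups.
outcomes = {
    "group_a": [True, True, True, False],    # 75% pass rate
    "group_b": [True, False, False, False],  # 25% pass rate
}
gap = demographic_parity_difference(outcomes)
print(f"demographic parity difference: {gap:.2f}")  # 0.50
```

In practice an agency would compute this over identity-proofing pass rates per group and set a threshold that triggers review, but that policy choice is outside the scope of this sketch.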

POST In The Loop OLD

A case study conducted by TELUS International (O’Niel. “How to use AI to shape efficient digital and omnichannel experiences.” Journal of Digital Banking 8(3): 256–262, 2023) noted a 40% improvement in customer satisfaction scores, a 9% decrease in average call handling time and a significant reduction in human involvement in mundane tasks after implementing an intelligent automation solution for the customer service agents of a financial services client. Personally, I don’t really care whether I am chatting with a bot or not, so long as I get the service I want, so I can see such deployments spreading rapidly.

xxx

It seems to me these examples illustrate my theory that in day-to-day life there are only two categories of retail financial transactions. There are the transactions that are too boring for people to do and transactions that are too complicated for people to understand. In both cases, the bots stand ready to help and given the rapid advances in technology, I don’t think I’ll have to wait too much longer before I can stop thinking about transactions completely.

From: ChatGPT Is A Window Into The Real Future Of Financial Services.

xxx

PayPal: A Fintech OG rejoining the Fastlane

xxx

It is worth noting that Elon’s role at PayPal is sometimes exaggerated. He took on the role of CEO of PayPal after the Confinity/X.com merger, but PayPal was already a product with reasonable traction on eBay prior to his arrival, and he lasted only a short time as CEO before butting heads with employees and the board. He undoubtedly provided crucial expertise during their growth, but he was not the core figure in the ideation of PayPal that some think of him as.

From: PayPal: A Fintech OG rejoining the Fastlane.

xxx

NEWS: Detectives in UK break up sophisticated £55m Chinese underground banking laundry, student money mules at its heart | LinkedIn

xxx

Specialist officers then undertook comprehensive analysis of Shu and Huang’s mobile phones, revealing that the pair were using a Chinese messaging app to sell British pounds to university students to circumvent foreign currency controls, which limit the amount of cash that can be taken out of China each year.

The messages also identified that Shu and Huang were working for a person with the user handle ‘There is a Big Sun in the Sky.’ This person arranged for the pair to collect large amounts of cash, sometimes as large as a quarter of a million pounds at a time.

Shu and Huang were not told the identities of those they collected cash from, and were instructed to take a photo of a £5 bank note, including its unique serial number. This was then passed on to the courier, allowing the interaction to take place without either party knowing the identity of the other.

This was done by the group to prevent those in the chain giving information to police if they were arrested.

From: NEWS: Detectives in UK break up sophisticated £55m Chinese underground banking laundry, student money mules at its heart | LinkedIn.
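The serial-number trick described above is, in effect, a one-time shared token: the controller gives both sides the same secret (the photo of a specific £5 note), and the handover proceeds only if the tokens match, so neither party needs to learn who the other is. A minimal sketch of that matching logic (all function names are hypothetical, and a random hex string stands in for the banknote serial):

```python
# Sketch of a one-time shared-token handover: a controller issues the same
# token to both the collector and the courier; the exchange proceeds only if
# the presented tokens match. Neither side learns the other's identity.

import secrets
import hmac

def issue_token():
    """Controller creates a one-time token (stand-in for a note's serial number)."""
    return secrets.token_hex(8)

def handover_allowed(collector_token, courier_token):
    """Constant-time comparison of the two presented tokens."""
    return hmac.compare_digest(collector_token, courier_token)

token = issue_token()
print(handover_allowed(token, token))          # True: exchange proceeds
print(handover_allowed(token, issue_token()))  # False: walk away
```

This is the same compartmentalisation pattern used in legitimate settings (e.g. parcel pickup codes): the token authenticates the transaction, not the people.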

xxx

The Battle Against AI-driven Identity Fraud – Signicat

A report produced by digital identity company Signicat (disclosure: I co-wrote the foreword to the report) shows how bad the situation is. They commissioned a survey of industry practitioners across Europe to see what is happening on the ground right now and found (along with, much to my surprise, the fact that only two-thirds of those questioned thought AI-driven identity fraud is a bigger threat today than it was three years ago!) that the decision-makers in the fraud world are confused about the nature of the threat, the impact of attacks and the countermeasures that might be available.

How to use AI to shape efficient digital and omnichannel experien…: Ingenta Connect

xxx

Using AI, financial institutions can elevate their omnichannel approach, ensuring a seamless and personalised customer journey across a diverse set of interactions.

From: How to use AI to shape efficient digital and omnichannel experien…: Ingenta Connect.

xxx


A case study conducted by TELUS International noted an improvement in customer satisfaction scores by 40 per cent, a decrease in average handle time by 9 per cent, and a significant reduction in human involvement in mundane tasks by implementing a comprehensive intelligent automation solution for a financial services client that provided their customer service agents with 24/7 support when responding to customer queries. This resulted in a win-win for both the company and the customer.

Full article: Artificial Intelligence and Cyber Defense System for Banking Industry: A Qualitative Study of AI Applications and Challenges

xxx

While the threat associated with chat bots stems from the web-based implementation of an AI-powered system, such systems are also associated with inherent vulnerabilities. A major concern is that hackers may embed fraudulent mechanics in AI engines by feeding them fake data.

From: Full article: Artificial Intelligence and Cyber Defense System for Banking Industry: A Qualitative Study of AI Applications and Challenges.

xxx

Full article: Artificial Intelligence and Cyber Defense System for Banking Industry: A Qualitative Study of AI Applications and Challenges

xxx

employed AI-based tools have vulnerabilities that can be exploited

From: Full article: Artificial Intelligence and Cyber Defense System for Banking Industry: A Qualitative Study of AI Applications and Challenges.

xxx

Free Money – by Marc Rubinstein – Net Interest

xxx

Among Chinese visitors, it ranks behind only Buckingham Palace as the most-visited site in the UK. Many take the 47-minute train ride from London Marylebone (where announcements are made in Mandarin and Arabic), stopping first at the Samsonite store close to the Village entrance to buy luggage they fill from up to 150 other outlets before heading on to Heathrow.

From: Free Money – by Marc Rubinstein – Net Interest.

xxx

POST Real Names, Real Problems, Real Solutions

There is a real problem brewing in social media. Regulators, law enforcement and parents want some form of age verification for social media use. The providers of social media platforms do not. Or, as in the case of Meta’s response to proposed age-gating laws in Australia, they are willing to go along with some restrictions provided that the responsibility (and presumably the liability) falls on someone else. Actually, they might be right: but it should not be the app stores that step in to fix the problem, it should be the banks.

Let’s think this through, starting with listening to an actual expert. Dr. Elisabeth Carter, Associate Professor of Criminology at Kingston University in London, has a very well-informed view of online harms and their mitigation. She says that “conceptual change” is needed, where frictionless social media experiences are seen not as desirable but as dangerous, akin to cars without seatbelts. I could not agree with her more. And what’s more, I think that the relevant conceptual change, and the relevant frictions (the internet equivalent of the standard three-point car seat belt that no-one even thinks about any more), are already clear, even if people do not see it that way. The conceptual change is from electronic versions of analogue identity (ie, digitised identity, which has been around for years) to new forms of digital identity that are native to the new environment.

We will come to what that digital identity seat belt might look like shortly, but first let us summarise the problem. When people read about the scale of online harm—and there are news stories that illustrate the scale of the problem every single day—their natural reaction is to call for some form of internet passport and to demand that online discussions need to show the real name of the participants. Even setting aside for a moment the problem of deciding what “real” means in this context, this view is misguided. Real names don’t fix anything (but real reputations do, as I will explain).

This misguided view of action around online fraud, abuse and criminality is endemic. Just to illustrate with one example, way back in 2012 I wrote about legislators’ lack of understanding of the issues around online privacy. The reason for my comments at the time was that the head of internet security at the Cabinet Office, the administration department of the British government, Andy Smith, had commented, wholly accurately, that demanding real names and addresses in online transactions might actually make security worse. Indeed, he advised people to use fake details on Facebook as this was a “sensible thing to do”. He was apparently unaware that providing fake details was in direct violation of Facebook’s policy, which is why Simon Milner, Facebook’s head of policy in the UK and Ireland at the time, was not particularly happy with Andy and had a “vigorous chat” with him to persuade him to revise his view.

I wasn’t writing all of that to shill for Andy Smith. Andy and I disagreed about things from time to time, and while I make no comment on whether he was an Epic F***ing Secure Hero or not, he certainly was an actual internet security expert, unlike the politicians who criticised his remarks. His comments were informed and relevant and exposed a lack of policy integrity. Nothing has changed in the past decade. Politicians still reach for the same knee-jerk response, some sort of internet passport or driving licence, in response to calls from the general public (who are not by any stretch of the imagination experts on internet security) for real names. In a 2023 YouGov survey of 1,000 American adults, around two-thirds said that social media platforms should require users’ real names and identity verification. They are wrong. There are almost no circumstances where it is necessary to use “real” names.

(Even if we could agree what a real name is. My father’s forenames were Frederick Gerald. To everyone in the family he was known as Gerry, but to everyone else he was known as Fred.)

As has been clear from the earliest days of the web, if there was an “Internet Driving License” that you had to use to log in to web sites, that would almost certainly make the situation far worse, since these websites would now know exactly who you are, and this information would then be freely obtained by perverts, the secret police, the National Enquirer or whoever else wants to pry.

You do not have to speculate about whether I might be right about this because there are plenty of real examples to look at going back over the years. Consider the early experiences of South Korea as a case study. In 2007, South Korea temporarily mandated that all websites with over 100,000 viewers require real names, but rescinded the law after it was found to be ineffective at cleaning up abusive and malicious comments. In fact, the results of the “real names” law were predictably perverse. While unwanted comments fell by an estimated 0.9%, identity theft went up, because real identities were stolen from the thousands of web sites that now had to ask for them and store them. And since people became used to being asked for their real identity all the time, it was easier for dodgy web sites to get them to hand it over, which is exactly what will happen with mobile driving licences as they enter general use.

There are plenty of places where I would not want to log in with my “real” name or by using any information that might identify me: the comments section of national newspapers, for example. “Real” names don’t fix any problem because your “real” name is not an identifier, it is just an attribute (there are a great many David Birches) and it is in any case only one of the elements that would need to be collected to ascertain the identity of the corresponding real-world legal entity. What would my real name mean anyway? What matters, it seems to me, is not so much whether you are commenting anonymously, but whether you are invested in your persona and accountable for its behaviour in that particular forum. There seems to be value in enabling people to speak on forums without their comments being connected, via their real names, to other contexts. This is not an anecdotal perspective: the online comment management company Disqus, in a similar vein, found that comments made under conditions of durable pseudonymity were rated by other users as having the highest quality.

The key point here is that we only use real names now because we lack a proper identity infrastructure. That is, by and large, we use the real name as a proxy for the attributes that are actually needed to execute a transaction.
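The attribute-as-proxy point can be made concrete. A relying party usually needs one attribute (say, “over 18”), not a name: an identity provider attests to the attribute for a pseudonym, and the relying party verifies the attestation without ever learning a real name. A minimal sketch, in which an HMAC stands in for a proper digital signature scheme and all names and keys are hypothetical:

```python
# Sketch: an identity provider (IDP) attests that a pseudonym holds a single
# attribute ("over_18"); the relying party verifies the attestation with no
# real name involved. HMAC is a stand-in for a real signature scheme, so in
# this toy version the verifier shares the IDP's key (and so could forge tags).

import hmac
import hashlib

IDP_KEY = b"demo-shared-key"  # stand-in for the IDP's signing key

def attest(pseudonym, attribute):
    """IDP signs the claim that this pseudonym holds this attribute."""
    message = f"{pseudonym}:{attribute}".encode()
    return hmac.new(IDP_KEY, message, hashlib.sha256).hexdigest()

def verify(pseudonym, attribute, tag):
    """Relying party checks the attestation against the claimed attribute."""
    expected = attest(pseudonym, attribute)
    return hmac.compare_digest(expected, tag)

tag = attest("persona_42", "over_18")
print(verify("persona_42", "over_18", tag))  # True: attribute holds
print(verify("persona_42", "over_21", tag))  # False: different claim
```

A production design would use asymmetric signatures (so the verifier cannot forge) and ideally selective-disclosure credentials, but the shape is the same: the forum learns an attribute and a durable pseudonym, never the legal identity.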
