There was a post on Twitter in the midst of the COVID-19 coronavirus pandemic that caught my eye. It quoted an emergency room doctor in Los Angeles asking for help from the technology community, saying “we need a platform for frontline doctors to share information quickly and anonymously”. It went on to state the obvious requirement that “I need a platform where doctors can join, have their credentials validated and then ask questions of other frontline doctors”.
This is an interesting requirement, and it tells us something about the kind of digital identity that we should be building for the modern world instead of trying to find ways to copy passport data around the web. The requirement, to know what someone is without knowing who they are, is fundamental to the operation of a digital identity infrastructure in the kind of open democracy that we (ie, the West) espouse. The information sharing platform needs to know that the person answering a question has relevant qualifications and experience. Who that person is does not matter.
Now, in the physical world this is an extremely difficult problem to solve. Suppose there was a meeting of frontline doctors to discuss different approaches and treatments but the doctors wanted to remain anonymous for whatever reason (for example, they may not want to compromise the identity of their patients). I suppose the doctors could all dress up as ghosts, cover themselves in bedsheets and enter the room by presenting their hospital identity cards (through a slit in the sheet) with their names covered up with black pen. But then how would you know that the identity card belongs to the “doctor” presenting it? After all, the picture on every identity card would be the same (someone dressed as a ghost), so you would have no way of knowing whether the cards were really theirs or whether the wearers were agents of foreign powers, infiltrators hellbent on spreading false information to ensure the maximum number of deaths. The real-world problem of demonstrating that you have some particular credential, or that you are the “owner” of a reputation, without disclosing personal information is a very difficult problem indeed.
(It also illustrates the difficulty of trying to create large-scale identity infrastructure by using identification methods rather than authenticating to a digital identity infrastructure. Consider the example of James Bond, one of my favourite case studies. James Bond is masquerading as a COVID-19 treatment physician in order to obtain the very latest knowledge on the topic. He walks up to the door of the hospital where the meeting is being held and puts his finger on the fingerprint scanner at the door… at which point the door loudly says “hello Mr Bond, welcome back to the infectious diseases unit”. Oooops.)
In the virtual world this is quite a straightforward problem to solve. Let’s imagine I go to the doctors’ information sharing platform and attempt to log in. The system will demand to see some form of credential proving that I am a doctor. So I take my digital hospital identity card out of my digital wallet (this is a thought experiment, remember; none of these things actually exists yet) and send it to the platform.
In this world, the credential is an attribute (in this case, IS_A_DOCTOR) together with an identifier for the holder (in this case, a public key) together with the digital signature of someone who can attest to the credential (in this case, the hospital that employs the doctor). Now, the information sharing platform can easily check the digital signature on the credential, because it has the public keys of all of the hospitals, and can extract the relevant attribute.
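To make this concrete, here is a minimal sketch of such a credential in Python. It uses textbook RSA with deliberately tiny numbers, purely to illustrate the shape of the scheme; the key values, the credential format and all the function names are my own inventions, not any real platform’s design.

```python
from hashlib import sha256

# Toy textbook-RSA key for the hospital (far too small for real use).
HOSPITAL_N, HOSPITAL_E, HOSPITAL_D = 3233, 17, 2753

def h(message: bytes) -> int:
    """Hash a message down to an integer smaller than the modulus."""
    return int.from_bytes(sha256(message).digest(), "big") % HOSPITAL_N

def hospital_sign(credential: bytes) -> int:
    """The hospital signs the credential with its private key."""
    return pow(h(credential), HOSPITAL_D, HOSPITAL_N)

def platform_verify(credential: bytes, signature: int) -> bool:
    """The platform checks the signature with the hospital's public key."""
    return pow(signature, HOSPITAL_E, HOSPITAL_N) == h(credential)

# The credential binds an attribute to the holder's public key, not a name.
credential = b"IS_A_DOCTOR|holder_pubkey=0x2a61"
sig = hospital_sign(credential)
assert platform_verify(credential, sig)       # genuine credential accepted
assert not platform_verify(b"IS_A_DOCTOR|holder_pubkey=0xbeef", sig)  # tampered
```

Note that nothing in the signed credential names the doctor: it binds the attribute to a public key, and only the holder of the matching private key can use it, as the next step shows.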
But how do they know that this IS_A_DOCTOR attribute applies to me and that I haven’t copied it from somebody else’s mobile phone? That’s also easy to determine in the virtual world, using the public key of the associated digital identity. The platform can simply encrypt some data (anything will do) using this public key and send it to me. Since the only person in the entire world who can decrypt this message is the person with the corresponding private key, which is in my mobile phone’s secure tamper-resistant memory (eg, the SIM or the Secure Enclave or Secure Element), I must be the person associated with the attribute. The phone will not allow the private key to be used to decrypt this message without strong authentication (in this case, let’s say it’s a fingerprint or a facial biometric), so the whole process works smoothly and almost invisibly: the doctor runs the information sharing platform app, the app invisibly talks to the digital wallet app in order to get the credential, the digital wallet app asks for the fingerprint, the doctor puts his or her finger on the phone and away we go.
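The challenge-response step can be sketched in a few lines. Again this is textbook RSA with toy numbers for illustration only; in reality the private-key operation would happen inside the phone’s secure hardware after the biometric check, and the key sizes, padding and function names here are all assumptions of mine.

```python
import secrets

# Toy RSA key pair belonging to the doctor (textbook numbers, not secure).
N, E = 3233, 17      # public key, known to the platform from the credential
D = 2753             # private key, locked in the phone's secure element

def platform_challenge() -> tuple[int, int]:
    """Platform picks a random nonce and encrypts it under the public key."""
    nonce = secrets.randbelow(N - 2) + 2
    return nonce, pow(nonce, E, N)

def phone_respond(ciphertext: int) -> int:
    """Phone decrypts with the private key (after the biometric unlock)."""
    return pow(ciphertext, D, N)

# Only the holder of the private key can turn the ciphertext back into the nonce.
nonce, ciphertext = platform_challenge()
assert phone_respond(ciphertext) == nonce
```

If the response matches the nonce, the platform knows it is talking to whoever controls the private key bound into the credential, without learning anything else about them.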
Now the platform knows that I am a doctor but does not have any personally identifiable information about me and has no idea who I am. It does, however, have the public key, and since the hospital has signed a digital certificate that contains this public key, if I should subsequently turn out to be engaged in dangerous behaviour, giving out information that I know to be incorrect, or whatever else doctors can do to get themselves disbarred from being doctors, then a court order against the hospital will result in them disclosing who I am. I cannot do bad stuff with impunity.
It is possible to make a much stronger form of this kind of conditional anonymity (that is, no one knows who I am unless I do something against the law) by using cryptographic blinding to deliver unconditional anonymity. Here is another medical example to make the point.
Suppose I am a nurse and I want to go through some whistleblowing platform to alert the authorities to the fact that the anaesthesiologist that I’m working with is off his head on ketamine. Now, if there is any possibility of the anaesthesiologist’s lawyer ever finding out who I am, then I might not want to make the report at all. This would clearly not be in the public interest. So in this case, the whistleblowing case, we might want to provide unconditional anonymity in the interests of public safety.
Again it’s very hard to imagine how to do this in the physical world, but in the virtual world it is quite straightforward. I send my public key to the hospital to be digitally signed to prove that I’m a nurse. Before I send it to them, I apply a cryptographic “blinding factor”. Blinding factors are a clever piece of mathematics (first used at scale by the pioneering David Chaum at DigiCash) that means you can form digital signatures over data without being able to know what the data is.
(So the hospital signs the public key multiplied by a cryptographic blinding factor and when I receive the signed public key back from them I divide out the cryptographic blinding factor. The digital signature remains valid. But now, not even the hospital knows my public key because they signed the blinded version of the key, not the key itself.)
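The arithmetic behind this is short enough to sketch. This is the classic Chaum-style RSA blind signature, again with toy textbook numbers; the specific values and function names are illustrative assumptions, not any real deployment.

```python
import secrets
from math import gcd

# Textbook RSA key for the hospital (toy numbers only, not secure).
N, E, D = 3233, 17, 2753

def blind(m: int) -> tuple[int, int]:
    """Nurse blinds the message m with a random factor r before sending it."""
    while True:
        r = secrets.randbelow(N - 2) + 2
        if gcd(r, N) == 1:          # r must be invertible modulo N
            break
    return r, (m * pow(r, E, N)) % N

def hospital_sign(blinded: int) -> int:
    """Hospital signs with its private key, never seeing m itself."""
    return pow(blinded, D, N)

def unblind(blinded_sig: int, r: int) -> int:
    """Nurse divides out r to recover a valid signature on the original m."""
    return (blinded_sig * pow(r, -1, N)) % N

m = 1234                            # stands in for the nurse's public key
r, blinded = blind(m)
sig = unblind(hospital_sign(blinded), r)
# The unblinded signature verifies against m under the hospital's public key,
# yet the hospital only ever saw the blinded value.
assert pow(sig, E, N) == m % N
```

The trick works because (m · r^e)^d = m^d · r (mod N), so dividing out r leaves m^d, a perfectly ordinary signature on m that the hospital never knew it was making.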
This is another example of how cryptography can deliver amazing but counterintuitive solutions to serious real-world problems. I know from personal experience that it can sometimes be difficult to communicate just what can be done in the world of digital identity by using what you might call counterintuitive cryptography, but it is what we will need to build a digital identity infrastructure that works for everybody in the future.