xxx
Opponents of the bans on cashless establishments have argued that businesses should be able to make these decisions for themselves.
From Cities And States Are Saying No To Cashless Shops : NPR:
xxx
A library of snippets
xxx
xxx
The PatronScan kiosk, placed at the entrance of a bar or nightlife establishment, can verify whether an ID is real or fake, and collect and track basic customer demographic data. For bars, accurate ID scanners are valuable tools that help weed out underage drinkers, protecting the establishments’ liquor licenses from fines and scrupulous state alcohol boards. But PatronScan’s main selling point is security.
The system allows a business to maintain a record of bad customer behavior and flag those individuals, alerting every other bar that uses PatronScan. What constitutes “bad behavior” is at a bar manager’s discretion, and ranges from “sexual assault” to “violence” to “public drunkenness” and “other.” When a bargoer visits another PatronScan bar and swipes their ID, their previously flagged transgressions will pop up on the kiosk screen. Unless patrons successfully appeal their status to PatronScan or the bar directly, their status can follow them for anywhere from a couple of weeks to a few months, to much, much longer. According to a PatronScan “Public Safety Report” from May 2018, the average length of bans handed out to customers in Sacramento, California was 19 years.
From This ID Scanner Company is Collecting Sensitive Data on Millions of Bargoers:
xxx
xxx
I accidentally replied to a bot…4 followers and a ridiculous position…..oops. https://t.co/YmxlOQpLPy
— Dion F. Lisle (@dionlisle)
xxx
xxx
Asking people to choose between privacy and health is, in fact, the very root of the problem. Because this is a false choice. We can and should enjoy both privacy and health. We can choose to protect our health and stop the coronavirus epidemic not by instituting totalitarian surveillance regimes, but rather by empowering citizens.
From Yuval Noah Harari: the world after coronavirus | Financial Times:
xxx
xxx
It contained my Oyster card, driving licence and debit card. I’d managed to quickly cancel the debit card and order a new one (which I was assured would arrive “within the next five days”) but for the time being, after taking a loan from a friend, I had to go cash only.
In London, that is a problem.
From Cardless in the coronavirus crisis | Financial Times:
xxx
It’s really easy to design a digital identity infrastructure for most of us, most of the time. Trying to figure out how to help a law-abiding citizen with a passport or driving licence to open a digital bank account, or to log in remotely to make an insurance claim, or to book a tennis court at a local facility, is all really easy. It doesn’t provide any sort of stress test of an identity infrastructure and it doesn’t tell us anything about the technological and architectural choices we should be making to construct that infrastructure. That’s why I’m always interested in the hard cases, the edge effects and the elephants in the room. If we are going to develop a working digital identity infrastructure for the always-on and always-connected society that we find ourselves in, then it must work for everybody and in all circumstances. We need an infrastructure that is inclusive and incorruptible.
This is why whenever somebody talks to me about an idea they have for how to solve the “identity problem” (let’s not get sidetracked into what that problem is, for the moment) then I’ll always reach into my back pocket for some basic examples of hard cases that must be dealt with.
(In conference rhetoric, I used to call these the “3Ws”: whistleblowing, witness protection and adult services. In fact, I was thinking about whistleblowing many, many years ago when I was asked to be part of a working group on privacy for the Royal Academy of Engineering. Their report on “Dilemmas of Privacy and Surveillance” has stood the test of time very well in my opinion.)
My general reaction to a new proposal for a digital identity infrastructure is then “tell me how your solution is going to deal with whistleblowers or witness protection and then I will listen to how it will help me pay my taxes or give third-party access to my bank account under the provisions of the second Payment Services Directive (PSD2) Strong Customer Authentication (SCA) for Account Information Service Providers (AISPs)…”. Or whatever.
The current pandemic has thrown up a particularly interesting case where conventional thinking doesn’t help us to understand how things could work in the future. We’ve all read with interest the accounts coming from Asia, and now Israel, of the use of mobile phone location data to tackle the dread virus. In the UK, the government has used some aggregate and anonymised mobile phone location data to see whether people were following social distancing guidelines, but it can actually play a much bigger role in tackling pandemics.
China got the virus under control with lockdowns in areas where it was endemic and apps to stop it from getting a foothold where it wasn’t. In Shanghai, which has seen few deaths, QR codes were used to authorise entry to buildings and to collect a detailed contact history so that control could be targeted in the case of infection. The Economist (21st March 2020) reported that the use of these codes was pervasive, to the point where each individual carriage on a subway train had its own code, so that if someone tests positive only their fellow passengers need be contacted rather than everyone on the train.
South Korea, a country of roughly 50 million people, appears to have dealt with the pandemic pretty effectively. By mid-March it was seeing fewer than a hundred new cases per day. It did so without locking down cities or using the kind of authoritarian methods that China had used. What it did was to test over a quarter of a million people and then use contact tracing and strict quarantine (with heavy fines and jail as punishment). It was able to do this because legislation enacted as a result of the Middle East Respiratory Syndrome (MERS) epidemic in 2015 meant that the authorities could collect location data from mobile phones (along with payment data, such as credit card use) from the people who test positive. This data is used to track the physical path of the person and that data, with personally identifiable information removed, is then shared via social media to alert other people that they need to go and be tested. At the time of writing, South Korea has seen a hundred deaths; Italy (with a similar population) has seen more than thirty times as many.
The pandemic has given me another “hard case” to add in to my thinking. Now I have 4Ws, because I can add “wellbeing” to the list. A new question will be: how does your proposed digital identity infrastructure help in the case of a public health emergency?
Whatever we as a society might think about privacy in normal circumstances, it makes complete sense to me that in exceptional circumstances the government should be able to track the location of infectious people and warn others in their vicinity to take whatever might be the appropriate action. Stopping the spread of the virus clearly saves lives, and none of us (with a few exceptions, I’m sure) would be against temporarily giving up some of our privacy for this purpose. In fact, in general, I am sure that most people would not object at all to opening their kimonos, as I believe the saying goes, in society’s wider interests. If the police are tracking down a murderer and they ask Transport for London to hand over the identities of everybody who went through a ticket barrier at a certain time in order to solve the crime, I would not object at all.
(Transport for London in fact provides a very interesting use case because they retain data concerning the identity of individuals using the network. Oyster card journey history is retained for 8 weeks after the card has been used, and for contactless payment cards the journey history is retained for 13 months. After that time the data is anonymized and retained for the purposes of traffic analysis and network improvement. This strikes me as a reasonable trade-off. If a murder is committed or some other criminal investigation is of sufficient seriousness to warrant the disclosure of location data, fair enough. If after eight weeks no murders or serious crimes have come to light, then there’s no need to leave members of the public vulnerable to future despotic access.)
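A retention schedule of that kind is straightforward to express in code. This is a minimal sketch with hypothetical field names and retention windows taken from the figures above, not TfL’s actual systems:

```python
from datetime import datetime, timedelta

# Retention windows as described above (hypothetical values, not TfL code)
RETENTION = {
    "oyster": timedelta(weeks=8),
    "contactless": timedelta(days=396),  # roughly 13 months
}

def scrub_expired(journeys, now):
    """Strip the identifying token from journey records that are past their
    retention window, keeping the rest for traffic analysis."""
    for journey in journeys:
        if now - journey["timestamp"] > RETENTION[journey["card_type"]]:
            journey.pop("card_id", None)  # disclosure is no longer possible
            journey["anonymised"] = True
    return journeys
```

The point of the design is that after the window closes, the data simply cannot be de-anonymized, however despotic a future request might be.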
It seems to me that the same is true of mobile location data. In the general case, the data should be held for a reasonable time and then anonymized. And it’s not only location data. In the US, there is already evidence that smart (ie, IoT) thermometers can spot the outbreak of an epidemic more effectively than conventional Centers for Disease Control and Prevention (CDC) tracking that relies on reports coming back from medical facilities. Massively distributed sensor networks produce vast quantities of data that they can deliver for the public good.
It is very interesting to think how these kinds of technologies might help in managing the relationship between identity, attributes (such as location) and reputation in such a way as to simultaneously deliver the levels of privacy that we expect in Western democracies and the levels of security that we expect from our governments. Mobile is a good case study. At a very basic level, of course, there is no need for a mobile operator to know who you are at all. They don’t need to know who you are to send a text message to your phone that tells you that you were in close contact with a coronavirus carrier and that you should take precautions or get tested or whatever. Or to take another example, Bill Gates has been talking about issuing digital certificates to show “who has recovered or been tested recently or when we have a vaccine who has received it”. But there’s no reason why your certificate to show you are recovered from COVID-19 should give up any other personal information.
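The decentralised contact-tracing proposals work in exactly this spirit: a phone broadcasts short-lived pseudonyms derived from a random secret, and the matching happens on the handset, so neither the operator nor the health authority needs to learn who you are. A simplified sketch of the idea (illustrative only, not any deployed protocol):

```python
import hashlib
import os

def new_secret():
    """Each phone generates a random secret; nothing links it to a real identity."""
    return os.urandom(16)

def rolling_id(secret, interval):
    """Derive the short-lived pseudonym a phone broadcasts during one time interval."""
    return hashlib.sha256(secret + interval.to_bytes(4, "big")).hexdigest()[:16]

def exposed(heard_ids, infected_secrets, intervals):
    """When infected users publish their secrets, each phone checks locally
    whether any pseudonym it overheard was derived from those secrets."""
    derived = {rolling_id(s, t) for s in infected_secrets for t in intervals}
    return bool(set(heard_ids) & derived)
```

Nobody who merely overhears a pseudonym can link it back to a person; only a phone that was actually nearby can discover, privately, that it was exposed.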
I think that through the miracles of cryptographic blinding, differential privacy and all sorts of other techniques that are actually quite simple to implement in the virtual world (but have no conventional analogues) we ought to be able to find ways to provide privacy that is a defence against surveillance capitalism or state invasion but also flexible enough to come to our aid in the case of national emergency.
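Differential privacy is a good illustration of just how simple some of these techniques are in practice: publish an aggregate figure (say, the number of infected people in a postcode) with calibrated random noise, so that no individual’s presence in the data can be inferred. A textbook Laplace-mechanism sketch:

```python
import math
import random

def laplace_noise(scale):
    """Sample Laplace(0, scale) noise by inverse-transform sampling."""
    u = random.uniform(-0.5, 0.5)
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def dp_count(true_count, epsilon=0.5):
    """A counting query changes by at most 1 when one person is added or
    removed, so Laplace(1/epsilon) noise gives epsilon-differential privacy."""
    return true_count + laplace_noise(1.0 / epsilon)
```

The released figure is still accurate enough for public health purposes, but it no longer betrays whether any particular individual is in the count.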
xxx
The email addresses and travel details of about 10,000 people who used free wi-fi at UK railway stations have been exposed online.
Network Rail and the service provider C3UK confirmed the incident three days after being contacted by BBC News about the matter.
The database, found online by a security researcher, contained 146 million records, including personal contact details and dates of birth.
From Rail station wi-fi provider exposed traveller data – BBC News:
No sane person ever registers with their real name and personal details for this sort of thing, do they? Thanks to my brother’s suggestion, I always register as Mr. Donkey Bollocks. And let me take this public opportunity to apologise to the real Mr. Bollocks, who must get all sorts of annoying e-mails because of this.

xxx
A World Health Organisation spokesperson told The Telegraph: ‘We know that money changes hands frequently and can pick up all sorts of bacteria and viruses.
‘We would advise people to wash their hands after handling banknotes, and avoid touching their face.
‘When possible it would also be advisable to use contactless payments to reduce the risk of transmission.’
From WHO urges switch to contactless to slow virus transmission:
xxx
xxx
“That we need, but currently lack, institutions that are good at thinking through, discussing, and explaining the often complex trade-offs that need to be made about data.
That the task of creating trust is different in different fields. Overly generic solutions will be likely to fail.
That trusts need to be accountable—in some cases to individual members where there is a direct relationship with individuals giving consent, in other cases to the broader public.
That we should expect a variety of types of data trust to form—some sharing data; some managing synthetic data; some providing a research capability; some using commercial data and so on. The best analogy is finance which over time has developed a very wide range of types of institution and governance.”
xxx
xxx
In one plausible future, many people routinely are offered, and use, technical tools to keep their identities obscure. Call it Pseudoworld. When controlling what is known about us is difficult, the natural path is pseudonymization: establishing online presence without using a real name. One recent study found that the more sensitive a topic is, the less likely people discussing it online are to use their real names. It recorded about one in five accounts on English-speaking Twitter as plainly using pseudonyms. In Pseudoworld, that will be far more common. There, to tweet or blog—or sign on to Facebook—under a real name will be seen as a puzzlingly risky thing to do. Just as universities remind students to lock their dorm-room doors, civic education will teach us how to obscure our identities so we can’t be traced online.
We get to Pseudoworld precisely by trying to take individual responsibility for our own privacy.
From A World Without Privacy Will Revive the Masquerade – The Atlantic:
xxx