NHS open-sources contact tracing iOS and Android apps (github.com/nhsx)
342 points by orangepanda on May 7, 2020 | 131 comments



Some quite interesting stuff around how they might be getting around the background iOS restrictions[1] can be found in the Android source.

Looking at the Apple documentation on background peripheral bluetooth, they state "All service UUIDs contained in the value of the CBAdvertisementDataServiceUUIDsKey advertisement key are placed in a special “overflow” area; they can be discovered only by an iOS device that is explicitly scanning for them"[2]

I wonder if they have managed to reverse engineer this overflow area so it is accessible via Android.

Someone did some looking into this and it seems at least feasible [3].
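
To make that concrete, here's a rough Kotlin sketch of what scanning for the overflow area from Android could look like, based on the mechanism described in [3] (Apple moves backgrounded service UUIDs into manufacturer-specific data as a 128-bit bitmask). The bit index below is purely a placeholder - the real bit for the NHS service UUID would have to be determined empirically:

    import android.bluetooth.le.ScanFilter

    // Apple's Bluetooth company identifier.
    const val APPLE_MANUFACTURER_ID = 0x004C
    // Hypothetical: the bit the app's service UUID hashes to would need
    // to be derived empirically, as the Crownstone post describes.
    const val OVERFLOW_BIT_INDEX = 42

    fun overflowScanFilter(): ScanFilter {
        // 1 byte for the 0x01 "overflow area" type + 16 bytes of bitmask.
        val data = ByteArray(17)
        val mask = ByteArray(17)
        data[0] = 0x01
        mask[0] = 0xFF.toByte()
        // Set the single bit we expect the hashed service UUID to occupy.
        val byteIndex = 1 + OVERFLOW_BIT_INDEX / 8
        val bit = (0x80 ushr (OVERFLOW_BIT_INDEX % 8)).toByte()
        data[byteIndex] = bit
        mask[byteIndex] = bit
        // The mask tells Android to match only the type byte and our one bit.
        return ScanFilter.Builder()
            .setManufacturerData(APPLE_MANUFACTURER_ID, data, mask)
            .build()
    }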

[1]: https://github.com/nhsx/COVID-19-app-Android-BETA/blob/maste...

[2]: https://developer.apple.com/library/archive/documentation/Ne...

[3]: https://crownstone.rocks/2018/06/27/ios-advertisements-in-th...


Yes, this seems to be one of the rather clever aspects of the design - this appears to make it possible to discover iOS devices that are in the proprietary "iOS-only" mode of discovery.

It seems that once one of these "overflow" advertisements is discovered, it attempts a connection, which is enough to wake the device and carry out normal tracing.

I left an iPhone unplugged, stationary, idle and asleep, running the app for about 5 hours. Then introduced an Android device and it was detected immediately. That seems to suggest this aspect works well.

There's also a clever "ping-pong" type keepalive system at play - there are multiple characteristics broadcast, and one of them is for keep-alive. It seems devices also use this keepalive to keep the app active over time. That should work even with 2 devices - if I understand it correctly, device A will K/A device B after a period of time, and this will cause B to act. That means B will then K/A device A after a period of time, and the cycle can continue.
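
As a hedged sketch of how that ping-pong might look in Kotlin (the interval and the characteristic UUID here are placeholders, not what the repo actually uses):

    import java.util.UUID
    import kotlinx.coroutines.*

    // Placeholder UUID - the real keep-alive characteristic is in the repo.
    val KEEP_ALIVE_CHARACTERISTIC: UUID =
        UUID.fromString("00000000-0000-1000-8000-00805f9b34fb")
    const val KEEP_ALIVE_DELAY_MS = 8_000L

    class KeepAlivePingPong(
        private val scope: CoroutineScope,
        private val writeToPeer: (UUID) -> Unit
    ) {
        private var pending: Job? = null

        // Called when we discover a peer, and also when a peer's keep-alive
        // write wakes us. Either event (re)arms the timer, so device A
        // prods device B, B's app wakes and prods A, and the cycle continues.
        fun onPeerActivity() {
            pending?.cancel()
            pending = scope.launch {
                delay(KEEP_ALIVE_DELAY_MS)
                writeToPeer(KEEP_ALIVE_CHARACTERISTIC)
            }
        }
    }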

I see the concerns about "2 sleeping iOS devices might not find each other", but wonder to what extent this will be a realistic scenario, versus an academic one - given the extent to which people are on their phones these days, and the size of that population, the issue likely fades in significance. Especially considering no app will get full adoption, no system will ever be perfect, and it's very much all about probabilities - the app can't determine if your 30 second exposure was higher risk than 30 minutes sitting 5m apart. Hopefully the testing phase currently underway will give some insight as to whether this is a problem.


I built an exposure notification app that solved the backgrounding issue by simply using the backend to reconstruct the missing Android -> iOS piece: https://medium.com/@dangish/a-solution-to-iphone-backgroundi...


> I built an exposure notification app that solved the backgrounding issue by simply using the backend to reconstruct the missing Android -> iOS piece: https://medium.com/@dangish/a-solution-to-iphone-backgroundi....

An interesting approach here, although worth noting there are some privacy implications of this, since it requires "healthy and not ill" people to share their full list of contact events (or at least healthy iOS users). And Android users in your approach would need to check in for updates to "what they missed".

One thing I'm slightly unsure of is how well this would perform in reality - did you get this to work for two long-sleeping iPhones? That seemed to be the "edge case" that was the most challenging here, and I'm not sure what you're suggesting changes anything there?


A variety of countermeasures can be used to keep the app awake in the background, including restarting services and (more controversially) location.

And IMO, including a general/non-precise location of contact is incredibly useful for epidemiological purposes and to assist the manual contact tracing teams. The value far outweighs the hit to privacy.

In any case, Google-Apple are dictating how these apps need to be built and released, so private innovation doesn't seem welcome at this juncture.


Yep - location listeners can help keep apps open quite well on Android. Not sure about iOS.

RE rough location, the NHS currently asks people to enter the first "half" of their postcode, which gives a broad approximate geography, but still is a large area with a large population (probably up to 100k people). This also lets the NHS get an idea of app adoption throughout the country, which will help them know how much attention to give app-based contact tracing on a more localised basis, compared with alternative approaches and the old-fashioned "ask the person for names and phone numbers, and call them up".


It's interesting to consider that Bluetooth cannot be used to estimate distance between two devices in any meaningful way in real-world environments.

A recent article [1] confirmed this in an interview with the inventors of Bluetooth.

The environmental factors were explored in a recent conference on contact tracing [2] with leading researchers.

So in a UK context, what is the plan for dealing with the huge number of false positives captured by this system, which in reality cannot estimate a 2 metre distance?

The Australian COVIDSafe app sends all contacts within Bluetooth range, which obviously includes people in other rooms, and on entirely different levels of a building - because there's no directional information.

Separate to the issue of proximity is the issue that this system cannot really work unless it's mandatory, as others have outlined.

Do any purported health benefits have to be modelled, proved and measured against the issues of shifting Western society to one which requires mandatory install and carry of a mobile phone which records all nearby devices?

It gets us closer to a society where political actors will be tempted to provide a rationale which justifies permanent extreme surveillance of citizens.

The limited analysis of the particular client implementations appears to miss this wider geopolitical point, and responsible technologists might want to consider it more carefully.

In the Australian situation, all we are currently told is that if the app registers you as a close contact, the data feeds into 'usual contact tracing', which determines whether you are a 'close contact' who has to self-isolate for 14 days - a huge cost in a re-opened society if it is an unnecessary false positive.

It's been quite strange to see leading technologists such as Mike Cannon-Brookes of Atlassian and Troy Hunt of HaveIBeenPwned strongly urge Australians to download it as an 'unequivocally safe app' when the most important part - the server-side algorithm to 'guesstimate' proximity - has not been shared by the Australian Government [3].

How can a technologist give the thumbs up to a proximity app when there is no evidence of it reliably estimating proximity in the real-world scenarios it is meant to operate in?

[1] https://theintercept.com/2020/05/05/coronavirus-bluetooth-co...

[2] https://www.youtube.com/watch?v=KgKbllhgESc&feature=youtu.be...

[3] https://www.abc.net.au/news/science/2020-05-06/coronavirus-c...


Just to pick up on a couple of these points.

> shifting Western society to one which requires mandatory install

It is not mandatory to install the app here in the UK.

> permanent extreme surveillance of citizens

My reading of the code and solution architecture is that the system is anonymised, with IDs generated using random rotating keys.

Can you expand more on how you feel there are privacy implications to this approach?

My view is that a decentralised approach might not provide enough data to perform adequate contact tracing. One of the reasons being the example you give relating to false positives and unsuitability of RSSI to measure proximity/range.

However, I do feel that, with a centralised approach, there exists a much better chance of doing something useful with the pooled data. For example, correlating patterns, tuning an algorithm to reduce false positives. Even extending the code base to support additional proximity events such as use of public transportation.


> My reading of the code and solution architecture is that the system is anonymised, with IDs generated using random rotating keys.

Yes - the system is arguably about as private as you can get, if you aim for "level" playing field privacy. It is possible to have more privacy if you accept privacy asymmetry. The decentralised contact tracing approaches, for example, give the "infected" less privacy, as they need to broadcast out a list of "infected" people's temporary IDs. If you log the time of each observed contact, then go back and check your calendar, it's pretty likely you can figure out who you met that was infected, just based on time correlation.

In the NHS approach there is a long-term (but still randomly generated) identifier for each user, which is encrypted via an ECDH key exchange and included in the "daily" data blob. This lets the clinical model (but not other users) link back together one user's exposure events, and understand the extent to which they've been exposed to the virus, and therefore approximate the likelihood they have it.
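
A loose Kotlin sketch of the shape of that exchange - to be clear, the curve, KDF and AEAD choices below are my assumptions for illustration, not the NHSX spec:

    import java.security.KeyPairGenerator
    import java.security.PublicKey
    import javax.crypto.Cipher
    import javax.crypto.KeyAgreement
    import javax.crypto.spec.GCMParameterSpec
    import javax.crypto.spec.SecretKeySpec

    // Encrypt the long-term random ID against the server's public key, so
    // only the server (not other users) can link a user's daily blobs.
    fun dailyBlob(serverPublicKey: PublicKey, longTermId: ByteArray): Pair<ByteArray, ByteArray> {
        val ephemeral = KeyPairGenerator.getInstance("EC").genKeyPair()
        val sharedSecret = KeyAgreement.getInstance("ECDH").apply {
            init(ephemeral.private)
            doPhase(serverPublicKey, true)
        }.generateSecret()

        // Toy KDF: truncate the shared secret to an AES-128 key.
        val key = SecretKeySpec(sharedSecret.copyOf(16), "AES")
        // A fixed IV is tolerable here only because each key is single-use.
        val cipher = Cipher.getInstance("AES/GCM/NoPadding").apply {
            init(Cipher.ENCRYPT_MODE, key, GCMParameterSpec(128, ByteArray(12)))
        }
        // Return the ephemeral public key (so the server can re-derive the
        // shared secret) alongside the encrypted long-term ID.
        return ephemeral.public.encoded to cipher.doFinal(longTermId)
    }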

> My view is that a decentralised approach might not provide enough data to perform adequate contact tracing [...] However, I do feel that, with a centralised approach, there exists a much better chance of doing something useful with the pooled data.

This is the view of the NHS as well, it seems. They feel it's important to get reports of symptoms, not just "positive tests". Given the incubation period of the disease, and the reports people are most infective a day or two before the symptoms show, this seems to hold true - the sooner you can alert someone to proactively self-isolate, the better. If you wait for the test result (assuming it takes maybe a day to get a test done, and a day for the result), that's the difference between asking a contact to isolate before they hit their suspected "most infective" time, versus asking someone to isolate as they're about to experience symptoms anyway.

Just to demonstrate this:

Imagine A is infected on day 1. On day 4, A met with B and C. A experiences symptoms a few hours into day 6.

With a decentralised approach, you really need the certainty of a test before uploading anything, to stop Sybil-like attacks. Most probably, A would be tested the following day (day 7), and get a result (while isolating at home) a few hours into day 8. Meanwhile, B and C are walking around in the population, on day 7 and 8 (days 3 and 4 of their own infection, the time they are likely to be most infectious). They get asked to return home, and probably experience symptoms the next morning.

In the NHS approach, when A experiences symptoms on day 6, they report this via the app, and are told to go home and self-isolate. B and C might also be asked to isolate, based on their contact with A, at this time. This gets B and C at home and isolated on day 2 of their own infection, and has them at home during their 2 most infectious days. If neither gets symptoms, and others exposed to A don't develop symptoms either, the daily "how are you feeling" polling via the app for symptoms will be able to capture that based on lack of symptoms, there likely wasn't exposure, and the group can leave self-isolation. Testing then augments this for more certainty, but this model does appear to be quite novel and unique, and near impossible to do in the decentralised approach (especially some of the more complex parts like deciding how likely A is to spread the virus, based on A's own exposure to infected people in A's recent past, and the date A reports symptoms).

> Even extending the code base to support additional proximity events such as use of public transportation.

In essence, public transport is the real goal of this app. The traditional contact tracing approach (people asking you for names and phone numbers, then calling them up to alert them) will catch the social contacts. It's the random encounter type situations, where strangers may be proximate, where the app plays a part. The app itself isn't going to be a single solution.


One of the numerous problems with the very concept of Bluetooth-based contact tracing is: it doesn't work.

Due to all the hardware and environmental factors, it really amounts to a large spreadsheet of all devices in Bluetooth range over the last 21 days (in the case of the Australian COVIDSafe app).

All it can do to grasp at the environmental context is to record the phone model, and the RSSI over time.

In a dynamic, real-world environment that will vary hugely, especially on public transport, as human bodies could be in exactly the same close spot, but due to phone position and rotation the RSSI will vary wildly (as mentioned in the video), making two people close to each other actually look 20 metres or more apart.

Each of this huge number of contacts has to be manually contacted and interviewed to determine whether they're epidemiologically relevant. That also relies on the recall of the infected, and the false 'close contacts' are at risk of having to isolate for up to 14 days regardless of test or symptoms (according to a reading of the Australian rules).

This reality is extremely divorced from the Oxford University paper, which imagined a system that could accurately determine proximity of users and instantly notify them to isolate. This rapid notification and isolation was, in fact, the central basis for the claimed impact of a digital contact tracing system.

So we have a centralised system that:

* only 'works' in a 'mandatory install' context

* can't determine proximity in the real-world (too much variability)

* would introduce huge numbers of false positives into a system that has to manually follow up on every one

* normalises carrying a tracking device of all nearby people (which in future would be far more useful as a societal monitoring system than as an epidemiological tool)


Interestingly, and purely anecdotally (so not designed to replace the study itself), I've been experimenting with the RSSI reported by the NHSx app through the debug menu. At least based on what I've seen so far on my devices (and noting the NHSx model considers device model ID to allow for antenna variations going forwards), there was a ~25 dB change in signal strength (from -32 to around -57 dBm) between phones being sat together, and through a wall yet still close.

Clearly this is going to vary depending on building construction, but I suspect the most relevant factor will be determining whether a contact event takes place through a wall or not. The real question is whether this can be modelled into the app, and whether it proves reproducible.

My understanding is there's no claimed intention to measure the 2m distance, and this is accepted as a known factor, at least for now. I suspect in an indoor setting the initial challenge will be preventing spurious triggers from indoor use (although arguably if people are that close they might live in an apartment block, and they could have been exposed through contact with door handles or lift buttons etc). But once people are outside more and returning to normality, all bets seem to be off - I imagine a lot of false positives.

My instinct would be that contact exposure duration will become more of a factor than the RSSI, or that a min/max/median/standard deviation might be captured in addition to a "raw" RSSI, to perhaps get a better idea. If the RSSI never goes "above" -60 (noting it's a negative number) then that's not likely to be a hugely close contact event.
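
To illustrate that idea (a toy sketch; the field names and thresholds are illustrative, not from the app):

    import kotlin.math.sqrt

    data class RssiSummary(val min: Int, val max: Int, val median: Double,
                           val stdDev: Double, val durationSeconds: Long)

    // Summarise a contact event's RSSI readings rather than shipping one raw value.
    fun summarise(readings: List<Int>, durationSeconds: Long): RssiSummary {
        val sorted = readings.sorted()
        val mean = readings.average()
        val variance = readings.sumOf { (it - mean) * (it - mean) } / readings.size
        val mid = sorted.size / 2
        val median = if (sorted.size % 2 == 1) sorted[mid].toDouble()
                     else (sorted[mid - 1] + sorted[mid]) / 2.0
        return RssiSummary(sorted.first(), sorted.last(), median, sqrt(variance), durationSeconds)
    }

    // e.g. a contact whose RSSI never rises above -60 dBm, or which lasts
    // under a minute, is unlikely to be a genuinely close contact event:
    fun looksClose(s: RssiSummary) = s.max > -60 && s.durationSeconds >= 60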


It would be interesting to model two humans standing close to each other, and each phone on the closest side of the pairing, then on the opposite side, with the phones rotated at every angle.

Having to traverse two human bodies will create significant variation, and antenna angle will also add to that.

As mentioned in the video [1] this can make people look much further away, making it rather useless at estimating real distance.

[1] https://www.youtube.com/watch?v=KgKbllhgESc&feature=youtu.be...


I wonder if WiFi connection information could help. Being on different networks would suggest greater social distance.


During the current "distance under all circumstances, stay at home" period, that is likely to be true (modulo people using multi-AP setups that aren't meshed under one SSID, assuming you compare SSIDs, or who use separate 2.4 and 5 GHz SSIDs).

Going forward though, I imagine that as restrictions loosen, the usefulness of WiFi connections would drop significantly, as people aren't always at home. It would also not really help in the workplace, assuming a large campus WiFi setup, since everyone would probably be joined to the same network anyway.

The challenge would be privacy - you'd have to send some kind of information about (or unique derivative of) the BSSID/SSID, which would introduce some privacy impact too. At that point, assuming you got access to the hashed SSIDs/BSSIDs, someone like Google with a Street View dataset of AP MAC addresses could "enrich" the anonymised dataset with approximate location.
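
For illustration, a minimal sketch of that hashing step (the daily salt is my addition, to at least prevent joining observations across days):

    import java.security.MessageDigest

    // Even a hashed BSSID is only pseudonymous: anyone holding a
    // BSSID->location dataset can hash their own side and join on it.
    fun hashedNetworkId(bssid: String, dailySalt: ByteArray): ByteArray =
        MessageDigest.getInstance("SHA-256").run {
            update(dailySalt)
            digest(bssid.lowercase().toByteArray())
        }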


I'm delighted to see that they have a vulnerability disclosure program for the app too [1].

That's fairly rare for the public sector - the UK Gov and NCSC are really leading on this sort of stuff.

[1] https://github.com/nhsx/COVID-19-app-iOS-BETA/blob/master/SE...


Agreed. I think the decision to go it alone and not use Google/Apple's API is completely bone-headed and won't work at all. But given that they made that decision, everything else they've done seems pretty good.


There’s some good initial investigation into this at https://reincubate.com/blog/staying-alive-covid-19-backgroun..., which I posted to https://news.ycombinator.com/item?id=23108867 in case it warrants a separate discussion.

Sounds like their approach is fairly privacy conscious and clever in terms of taking advantage of how each platform works - whether it works well enough and whether people will use it despite “not using the Apple/Google way” I guess we will see


Tom do you know if anyone has told the developers of the useless Australian app about this?


I’d be amazed if they weren’t aware of it. It does sound like they’ve found ways round some of the issues.

Whether it is worth the trade offs vs the Google/Apple system remains to be seen, but the fact that they’ve released the source already and people smarter than me seem impressed is a better situation than I expected it to be in. Also good to hear they may be open to switching to the Apple/Google way if needed.

Hopefully contact tracing apps prove to be helpful given the amount of hope being put in them!


Yes. There seems to be a huge amount of hope invested in these apps with little evidence they will make much difference.


Given the human and economic cost of this virus, "not much" difference in R could still make it worth it. It's supposed to complement human tracing and social distancing, not control the virus spread on its own.

If it allows you to reopen some business sectors, say, half a week earlier, I can't imagine that wouldn't pay for the resources invested in the app.

There has been some modelling carried out that suggests use of the app would be worth it, but even with a level of uncertainty it seems like it's worth trying.


There's one other angle too here - a future version of the UK app will give people feedback on their level of distancing via a "social mixing score", which is basically the number of unique people they were recorded as being in contact with per day. This looks like it will give people an idea of "You were close to X people today, and really close to Y people".

"Close" will be locally determined, probably based on the RSSI model and duration of contact. This will give people feedback on how well they are distancing.

There's a lot of research which shows people struggle to know if they are doing well without feedback, and that people's perception of how well they can do will influence whether they try.

Couple this with a differential privacy daily voluntary "upload" (add a random, zero-mean number sampled from a normal distribution to their daily number) and people can find out if they're doing well, or if others are reducing contact more. NHS also gets an idea of average number of contacts and close contacts, and nobody actually needs to reveal their own number.
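
A minimal sketch of that voluntary upload, assuming Gaussian noise with a made-up standard deviation:

    import java.security.SecureRandom

    // Add zero-mean noise to the daily contact count before it leaves the
    // device; the stdDev of 3.0 is an illustrative tuning parameter.
    fun noisyDailyContacts(trueCount: Int, stdDev: Double = 3.0): Double =
        trueCount + SecureRandom().nextGaussian() * stdDev

Averaged over many users the noise cancels out (zero mean), so the NHS still gets a good population-level estimate without any individual revealing their true count.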


Absolutely agree!


Why do you say it is useless?


Because it doesn’t work on iOS. Given the Australian market is 50% iOS this kind of defeats the purpose.

We really should just dump the worthless app we have and just use the NHS version.


These [1] people disagree. Can you comment? Personally I don't know enough to judge myself.

[1] https://www.scimex.org/newsfeed/expert-reaction-covidsafe-ap...


No they don’t. They are all saying it doesn’t work when the app is in the background on iOS. This makes the app worthless on iOS, as nobody is going to walk around with the app in the foreground the whole time.

The NHS app appears to solve this problem.


Ahhh that makes sense, thanks for the clarification.


> Each two hours, we merge any changes from master into internal

Not hiding the fact that the published source code isn't necessarily what's built into the app, then...


Yes indeed. I'm not buying that it isn't patched in some way before it's pushed to any app stores. Especially with all the noise beforehand about how it operates, and the involvement of GCHQ/NCSC in development.

Edit: some digging. NHSX is run by Matthew Gould, former UK ambassador to Israel, tied via the UK-Israel technology hub to NICE (Neptune Intelligence Computer Engineering) Ltd, a former Israeli army surveillance and data security company among other functions.

So basically a hotpot of reasons not to install this thing.


A security researcher on Twitter was looking into it the other day, and found the app isn't obfuscated (they were able to decompile it "cleanly").

I'm not saying that guarantees no skullduggery is afoot, but it goes a long way - at least in its current form, at that point in time.


That's good to know.


It's the same story as with Signal for iOS. How do we know the version downloaded from the App Store is the same code hosted on GitHub? I'd like to have the option to sideload it. Clearly that would involve some work on their backend, but I think that's fairer than the stuff described in this Signal GitHub issue [0] and Telegram's website [1] regarding the same thing.

EDIT: if I'm wrong please leave some details for myself and others to learn more. I've worked on iOS apps for several years, and as far as I know there's no way to verify if an app downloaded from the App Store was compiled from a particular code revision. Even the same code compiled on separate machines can produce different unique binary IDs. And on the scale of difficulty, I'd say that sideloading is less difficult than jailbreaking, if that method even really works for this problem.

[0]: https://github.com/signalapp/Signal-iOS/issues/641

[1]: https://core.telegram.org/reproducible-builds#reproducible-b...


It's not possible to side-load iOS apps and still get push notifications. Also, Apple doesn't allow you to distribute applications outside of the app store (unless you violate the terms of your enterprise developer account).


While I'm not an expert on push notifications, shouldn't you be able to receive push notifications if you entitle your application for them and create the right certificate online?


This is my understanding. You would have to register your device in ADP and create an ad-hoc certificate for the app. If people are doing this on their own ADP accounts, then they're just testing their own builds, not receiving an enterprise deployment from some single developer.


Apple does allow self-hosting enterprise apps; one of the main distribution methods for in-house apps is buying a licence from Apple to install your own certificate on iOS and macOS devices, which basically lets you remotely install, update and sideload in-house applications.

We personally opted out of it because the security of users' devices would ultimately fall into our hands, and at worst we'd have a rootkit on the user's device that someone might exploit.


FWIW, in iOS 13 Apple has finally made it so you can firewall off user personal device control from MDM control https://www.brianmadden.com/opinion/Apple-iOS-13-user-enroll...


If you install from an App store and want to know if it matches the source, you need to pull the installed package from your phone and compare the contents to your own known good binaries.

Ideally, if the builds are fully repeatable, the only differences should be in the signatures, but of course, you need to confirm that the signature doesn't drive unexpected differences in behavior.

I don't have the skills or the tools to do this, but it's not like it's some impossible mystery.


In order to do what you’re suggesting you have to jailbreak a device, which I mentioned in my comment and is discussed in many blog posts such as [0], which links to others.

Without a jailbreak, which isn’t a sure thing and even if available is not nearly as feasible as sideloading, it in fact is an impossible mystery to decrypt an iOS app store build to inspect it.

[0]: https://ivrodriguez.com/reverse-engineer-ios-apps-ios-11-edi...


Well, can’t the vendor upload the binary that was signed by Apple’s private key? So all you’d have to do to verify it would be to compile the source code to produce the exact same binary... assuming that the process can be made to be deterministic?


Yes exactly that. I imagine vendor signing causes some issues there as well.


I wouldn't trust anything coming from Israel ever. Their government appears to have a mandate for installing backdoors, evidenced by the scrapped Iron Dome project which they wouldn't hand the source code over for.

https://news.ycombinator.com/item?id=22512674


You are making a huge leap here: Iron Dome was not scrapped because of backdoors, nor do we know about any (officially). One project doesn't mean that Israel is doing the same in other projects, and finally, the mentioned company is private and has nothing to do with the government.


I'm not making any leaps, I provided a link that explained it. I have no trust in Israel to begin with due to their uncomfortable control over US politics, this was just further evidence to support that decision.


Well, it should be fairly trivial to put it through the rigor of some MITM proxies and verify that the built app is not making any unexpected network requests. I look forward to infosec twitter ripping this apart.


That would be interesting analysis, but you just never know with this stuff. Even basic intercept-avoiding technology like burst encoding requires very long-term sampling to detect, so the thing would need to be disassembled and every line of code analysed.


...is it not possible to build the app from source yourself?

I will install this app happily from F-Droid, and no other place.


From [1]:

// TODO: We need a real device to test Bluetooth scanning if isMultipleAdvertisementSupported == false

// TODO: We need analytics to identify number of devices that fall into this bucket

I don't know whether to laugh or cry reading those comments - the fact that this is a "to do" whilst the app is in "testing", and the fact they've decided the appropriate procedure is to reinvent the work Google and Apple have already done.

Google already published their code/APIs in [2]. I'm sure these considerations are taken care of by the framework the OS vendors have published themselves. It's absolute pure arrogance that anyone would even attempt to re-create the work that is being done at the OS level and not 'fall in line'. Germany backtracked, Australia backtracked, and the UK, too, will backtrack and implement Google's APIs.

In a time where the message is "saving lives", those TODOs are not living up to that motto. This app is a complete waste of time, resources and money. Use the official APIs and be done with it, anything else is putting people in harms way because you think you "know best". The UK government claimed they reached 100k tests, fudged the numbers, and haven't hit that target since. But it's okay, they'll hit 200k by the end of this month too!

Pure 100% incompetence.

[1]: https://github.com/nhsx/COVID-19-app-Android-BETA/blob/maste...

[2]: https://github.com/google/exposure-notifications-android


The FT reported yesterday [1], that NHSX had contracted Zuhlke Engineering to develop a “two week timeboxed technical spike” (deadline mid-May) to “investigate the complexity, performance and feasibility of implementing native Apple and Google contact tracing APIs within the existing proximity mobile application and platform”.

[1]: https://www.ft.com/content/d44beb06-5e3e-434f-a3a0-f806ce065...


This app is many things, but it does not reimplement the Google/Apple scheme, and works very differently. If you believe knowing contacts who tested positive is enough, the Google/Apple scheme is fine. If you need to know earlier and/or risk-assess based on exposure, only the NHSX approach works. It's not a clear-cut Central v Not Central decision.


Yet, reading a recent report in [1]:

"The NHS contact-tracing app must not be rolled out across the UK until the government has increased privacy and data protections, an influential parliamentary committee has said".

The app requests location permission on Android, something that Google doesn't allow when using their implementation.

I suspect this app, in this form, is DOA and will be quickly replaced. Even if they do somehow roll it out widely, there are concerns in [2] about its effectiveness working in the background. This app is a complete disaster, and will be unviable for the mass tracing they will want to conduct. They've pushed and pushed for "why" a centralized approach is the "right" one, made blog posts, issued justifications.

Yet the app isn't likely to see mass adoption and isn't going to work due to restrictions within the OS. This app represents the disastrous approach the government took to this virus. Flip flopping around, shutting down way too late, no clear guidelines or approach, and a high death rate to solidify their failure. Then, they decide the official approach is "wrong" or "not good enough".

Heads should roll over this.

[1]: https://www.theguardian.com/world/2020/may/07/uk-coronavirus...

[2]: https://github.com/nhsx/COVID-19-app-iOS-BETA/issues/2


Location permission is a technical requirement for using Bluetooth in this way because of beacons.

Lots of people keep saying the app can't work due to restrictions. Maybe, I'm not sure that has really been shown yet. A lot of the criticism doesn't really withstand the light of day.


The problem is, there's no way to disassociate the two flaws with this:

1: people see "government app wants your location"

2: the government possibly using that location access at a later time

I don't want to accept location on a government app, and the UK has some history with facial recognition cameras and other privacy invasive laws. So there's no way for me to know this location is only for them to use BT LE and they won't make an API call to getLocation and uploadLocationToServer. I assume once you grant location permission even for BT LE purposes it can be (ab)used in some other way.

They've made their own bed with their actions; the public might not trust an app like this with a permission like that.


It's probably exactly your concern about the ability for apps with network access to upload user data that led the Android team to introduce the location permission requirement for BLE identifier scanning.

Without a location requirement, an app could claim to use only Wi-Fi and BLE permissions, yet it could combine beacon scanning and data access to de-anonymize user locations.

This permission requirement was introduced in Android 6.0 (see the release notes[1]) in October 2015, so it's been around for a while.

If you have some suggestions around how to improve the tracking of permission usage (static analysis? run-time requests (preferably avoiding times when users are under duress and likely to click 'OK' by default)?) then you may wish to file some requests with them and/or contribute to other projects that you feel are taking a better approach.
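
For reference, the runtime gate looks roughly like this on Android (a sketch; note that coarse location was sufficient before Android 10):

    import android.Manifest
    import android.content.Context
    import android.content.pm.PackageManager
    import androidx.core.content.ContextCompat

    // BLE scans silently return no results on Android 6.0+ unless the app
    // holds a location permission that the user has granted at runtime.
    fun canScanForBeacons(context: Context): Boolean =
        ContextCompat.checkSelfPermission(
            context, Manifest.permission.ACCESS_FINE_LOCATION
        ) == PackageManager.PERMISSION_GRANTED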

Your concerns around trust in app developers -- regardless of whether they are a government or any other entity -- are best handled by two means:

- Open sourcing the code (which NHSX have done)

- Enabling reproducible builds[2] so that users can confirm they have an authentic binary build of the source code

[1] - https://developer.android.com/about/versions/marshmallow/and...

[2] - https://reproducible-builds.org/


I may be misunderstanding this thread, but what the parent is saying is that the BLE permission in Android was _never_ meant for privacy focused contact tracing, and you are saying this is not a secret.

This is why Google and Apple developed the Exposure API, which is more private because it doesn't expose as much metadata as the classical BLE permission, and at least Google does _not_ allow apps to declare both the Exposure API and the location permission at the same time.

This gives more guarantee to the users than just trusting the app developers.

In other words, any serious privacy-focused contact tracing app should use the Google/Apple Exposure API, and not a custom-made solution on top of the older BLE permissions.

However, I still think digital contact tracing, even private, is a bad idea.


I agree with you and won’t be installing the app. However, I think you may misjudge the public mood on this. If told this ends the lockdown early, it will at least be downloaded by a lot of people. Whether they keep it running when it hammers their battery is another question.


Totally agreed. You cannot trust a government, as there would not be oversight. An independent body, with explicit safeguards restricting use to health contact tracing and nothing else, would possibly be a first stage. Even then, abuse is still possible. But get the institutional barriers right first, before we talk about technical issues.


I don't think you understand the technology or the privacy architecture.

The location permission you refer to is related to Bluetooth and is standard.

Just because the collected data is stored centrally it does not mean that it is not anonymised.

And finally I would suggest the Guardian is not the best place to get a balanced view of privacy implications!

I think it's an odd situation to be in where a foreign company is able to hold an elected government to ransom over its healthcare provider's technology choice.


Could you let us know if you are involved in this work?


> I suspect this app, in this form, is DOA and will be quickly replaced. Even if they do somehow roll it out widely, there are concerns in [2] about its effectiveness working in the background.

That particular issue is a bit of a niche case, which I argue is rendered less of a problem by behavioural psychology. If your devices are locked with screen off for long periods of time, you're probably at home, or generally not moving. The kind of people who are going to willingly download an app onto their smartphone like this are probably going to be actively using their devices enough throughout the day that the app can at least get a _reasonable_ sample. The best use of the data gathered from this app is supporting mass scale epidemiology, after all -- tracking contact rates on public transport and so on.

This is also specific to iOS devices. Given that the current UK market share is roughly 50/50 between iOS and Android, it's unlikely your device is _only_ going to encounter other iOS devices daily, so the Android "wake up iOS devices" workaround comes into play here.

> The app requests location permission on Android, something that Google doesn't allow when using their implementation.

I think this is going to be less of an issue than you think it is -- even lots of tech-savvy people gleefully accept app permissions without really caring. It's not ideal, but Google/Android could simply have made a separate permission for bluetooth beacons instead of piggybacking on FINE_LOCATION, and have had several years to do so, yet have not. Every now and then there's some tech-blog outrage piece that gets into mainstream media about how "[social media] app is always listening because it asks for mic permissions", and lots of otherwise privacy-conscious people just kinda shrug and carry on with it.

Bit different with a government-backed app, I know. Which is why I'm glad the infosec community will be tearing into this - it's one thing open-sourcing the code, and I trust NHSX themselves, but for all we know it's been intercepted by GCHQ on its way to the app stores...

Disclosure: nah, I don't have anything to do with the app, though my last 3 comments on HN have been defending it. Working in a research council, I just like seeing civil service software projects done in-house. The NHS has suffered horrifically in the past from putting IT/software contracts out to tender to private companies, horrible waterfall dev, millions of pounds wasted. Stuff like NHSX, the team behind gov.uk and the parliament petitions site are a breath of fresh air, and I'd like this to be proof for the future that no, you don't need to outsource everything to private companies, cough cough, like trusting Google and Apple's "decentralised" (how is it decentralised??? There's no p2p networking, you're still eventually uploading data to their private databases...) approach.

Outsourcing stuff to private companies is how the current gov have completely fucked this whole thing up, after all.


> This is also specific to iOS devices. Given that the current UK market share is roughly 50/50 between iOS and Android, it's unlikely your device is _only_ going to encounter other iOS devices daily, so the Android "wake up iOS devices" workaround comes into play here.

To add to this, I think that this is where the "keepalive" feature kicks in too - it seems to me that, on the whole, even sporadic proximity to other devices should keep the system working. When device A sees device B (via advertisement), it does the usual ping. This prods the app on device B, keeping it slightly in the foreground again. Device B then has the ability to prod device A via the keepalive mechanism, and so on...

That means that this momentary contact should keep both devices "more awake". As you say, it's the bigger picture that matters, and if this reduces the "edge cases" to 0.1% of occurrences, that's a load better than the alternative (Australian approach). Important to also benchmark this with the reality that many people don't have a phone with BLE, or won't install the app. And yet an app may still work and help, even with non-universal adoption.

> Google/Android could have simply made a separate permission for bluetooth beacons instead of piggybacking on FINE_LOCATION, and have had several years to do so

Absolutely. I fear they lack incentive/motivation to fix permissions. Android's permissions system is mostly still inherited from the original version seen on Cupcake (Android 1.5) in 2009! Only recently did we see any granularity (i.e. the ability to allow location when an app was open)!

Only recently is scoped storage really kicking off on Android. There's still no proper scoping of contacts. That Bluetooth is not a separate permission is a relic of old Android, before BLE beacons and similar existed. Someone clearly realised Bluetooth beacons could be used for locating people, so said "someone could scan for Bluetooth devices and work out the device location from beacons" and threw this under the location permission, as it could leak location. The obvious issue is that granting "location" permission gives access to network and GPS location. The old Android model had "fine" and "coarse" location; not sure if that's died out.

It's in Google's interests not to make permissions too granular on Android, however - they and their clients (advertisers) benefit from lead tracking, SDKs having access to advertising IDs, and having the ability to easily access a user's contacts and data with minimal restraint.

Clearly there's a usability trade-off, but the Apple model of pushing people to use a "select which photo you want to let this app get" is probably better than letting apps access all photos, for example.

> It's one thing open-sourcing the code, and I trust NHSX themselves, but for all we know it's been intercepted by GCHQ on its way to the app stores...

Fortunately, this is fairly easy to address by comparing the app (fetched via the store from your device) against the source. The Android app isn't obfuscated at all that I can see - it's pretty straightforward to compare the Kotlin to the "reversed" Java, though admittedly not quite as easy as comparing Java to "reversed Java".

RE NCSC being involved in the project, from what I've seen their brief has been to try to ensure the app can't be taken advantage of by external adversaries like nation-state attackers looking to cause panic or unnecessary concern over exposure that didn't happen. For the most part, it seems they have done a good job with this app. It's withstood the first 24 hours of scrutiny, but as always, time will tell. It also seems NCSC helped work around the iOS BLE background "hidden broadcasting", finding a way for Android to be able to reach devices in this state.


I'd heard another interesting angle to this - that the Google/Apple notification scheme is decentralized, but users can only test C-19 positive via a central authority. Meanwhile the NHSX notification scheme is centralized, but users can self-identify as C-19 positive.

Can anyone confirm this?


That's exactly how it works. In the G/A world there's still a central list, but of people who volunteered a positive test. In the NHSX world, they capture broader data and can assess the network.


Interesting... could that mean the focus of the NHSX app is the central authorities "assessing the network" to develop epidemiological models, rather than providing a notification service to users that doesn't give out false positives?

There's fear of people trolling this service by maliciously marking themselves positive, potentially forcing others to self-isolate repeatedly, perhaps without pay.


Read the NCSC paper on the design. Malicious attacks by both trolls and APTs are considered and accounted for. It's an incredibly interesting design.

I wouldn't say it's better/worse than Apple/Google, but it did have the advantage of not building policy into the design. If the A/G policy is wrong/not enough, the whole thing delivers little value.


You're right in your assessment of the risk of trolling etc. There is a model already in place, but it's worth noting that it's got quite an interesting design.

In the "Google/Apple" decentralised approach, someone who is infected submits a list of their own historical identifiers, and this is broadcast to everyone to check against their own observation list.

All you can really do is, on the client side, count up the number of occurrences of "infected" identifiers, and trip a client-side threshold for "after X contact instances, alert the user". You could get fancy and make X tweakable by the health service via that routine check-in for a new infected person list.

This is basically calculating the risk to someone.
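
In Kotlin terms, that whole client-side check could be as simple as something like this (names illustrative):

    // Count how many identifiers from the broadcast "infected" list appear
    // in the local observation log; alert past a threshold X, which could
    // be tweaked server-side via the routine list check-in.
    fun ByteArray.toHex() = joinToString("") { "%02x".format(it) }

    fun shouldAlert(observedIds: List<ByteArray>, infectedIds: Set<String>,
                    threshold: Int): Boolean =
        observedIds.count { it.toHex() in infectedIds } >= threshold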

The issue with this approach is that you have to create a public(ish) "infected" register. And even if it's not officially public, it takes 2 minutes to extract the API keys needed to fetch it, and a further 2 minutes to write and post a cron script to check the list into a git repo hourly.

In the NHS approach, there's no big "infected list" you can look at, or publicise. If you spend time with someone, you can't get an identifier that will let you (forever more) determine if they test positive from the "infected list". This raises the privacy for someone who is infected. While some may think "well, I am not infected, they can lose privacy", the system only works if people feel willing to report symptoms or infection without repercussions. So privacy of the infected is important!

Since the "upload" you make if infected or experiencing symptoms is of a list of other users you (the suspected infected person) saw, this makes it possible to look at the "risk from" aspect of infection - the hypothesis is that you can give advice to the potentially exposed users, based on their risk from you. And that can include how many infected people you were exposed to.

When someone has symptoms and is told to isolate, some people they were in contact with might be told to as well. The NHS app approach will survey them regularly (daily?) about symptoms. This data is fed back in to determine, based on those you might have infected, whether you actually have the virus. When you have a meaningful sample, and none have symptoms after a certain date, this allows you to let people out of self-isolation sooner, at least in theory.

The final point that is relevant is that the NHS believes it is essential to gather "self reports" for an app to work - if we assume an approx. 5 day mean/median incubation period, and if the research suggesting someone is most infective a day or two before symptoms is true, this means at the first sign of symptoms, you need to isolate someone and those they may have infected. Waiting for a test + result would mean you're potentially leaving people in the community, unaware, at their "most infectious" day or two.

Clearly there's risks of "Sybil" attacks. But as per the NCSC paper, there's been quite a bit of thought put into this design. More than I think many realise.


> the official APIs and be done with it

Ah, you mean the official API that hasn't been released yet, will take even longer to roll out, and won't be available on older models? Great idea.

Not to mention the fact that Google's proposal works quite differently from NHSX's one. Google's appears to be slightly more privacy focused, which also gives it a smaller feature set, particularly around epidemic analysis and dealing with malicious entries.

Maybe they will end up backtracking and just implementing the Google APIs. Wouldn't be the end of the world if they did. But I would think that just illustrates the power Apple/Google have over their OSes, rather than the incompetence of the initial approach.


> ...Australia back tracked...

They didn't. Australia's COVIDSafe app is already out and uses the same OpenTrace system as Singapore's TraceTogether app, from which it was derived.

Did you mean to type Austria instead? There are no kangaroos in Austria.


I’m shocked to see all the Firebase and Google analytics in there, and quite surprised that a gov-built app would include this. Disappointing from the UK (along with the fact that they really just should use the Apple/Google framework to begin with, which will in the end better integrate with the device and thus perform better from a technical perspective).


I assume they want to know if people are actually using this app and I don't know if building an analytics platform is the best way for them to spend their time right now.

It does undermine any privacy claims they might make.


I agree with the concern around usage of Firebase here, because no one really knows what data is being sucked up there, but it looks like the intent of the usage is pretty clean:

If you look at the code, Firebase is in there because the push notifications to notify of infection are sent via Firebase Cloud Messaging.

I also grabbed the Google plist out of the IPA from the app store and it has analytics turned off.


Then why is the Analytics SDK even there? It’s not required for messaging.


The directions for integrating SDKs are usually something along the lines of "Add Firebase to your Podfile", which often adds way more than you need.

Given that these devs were likely in a hurry, it wouldn't surprise me if they forgot to go back and clean this up after getting it working.


> I assume they want to know if people are actually using this app

Apple provides such information without any third-party analytics SDKs.


One mildly interesting thing I noticed is that some of the files date back to 12th March (possibly earlier, I’m on my mobile so can’t search properly) - back before the UK was taking this particularly seriously in public, if I remember correctly (restaurants and pubs weren’t closed until 20th). Suggests at least someone realised the gravity of the situation!


As someone from the UK, I'm really appreciative of all of the people who are actively looking into the app and highlighting privacy, security and other concerns.

My main concern with all of the contact tracing apps is that I live near a very busy road, hundreds of people per day walk past my house and they're going to be about 3 meters away from my phone, albeit through a brick wall and a 6' hedge. My concern is that the apps will pick up the random passers by and I'll be forced to isolate my family based on a false detection.

Maybe this isn't an issue and these apps will have protection against this sort of false positive. I've yet to hear either way.

edit: Thanks for all the answers! Definitely helped calm the concerns about this sort of false-positive.


> I'll be forced to isolate my family based on a false detection.

Installation of the app is voluntary, and there's no sign that's going to change. Forced installation would probably be politically and logistically impossible.

If this was a problem, workarounds include deciding that the app wasn't usable in your situation, or turning off bluetooth on your phone while at home.


As far as I understand, the apps are designed to measure the time spent near someone as well.

I think within X meters for Y minutes (based on how often the bluetooth pings are exchanged).

This also has a basis in virology, where the rate of infection for flu/corona substantially increases when you are within X metres of someone (say 1.5m) for longer than a defined time Y (say 10 min).
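
A sketch of that gate, with an RSSI threshold standing in for distance (both constants are illustrative):

    const val CLOSE_RSSI_THRESHOLD = -60   // dBm, a crude proxy for ~1.5m
    const val MIN_CONTACT_MINUTES = 10L

    data class Ping(val rssi: Int, val epochMinute: Long)

    // A contact only counts if the devices stayed "close" for long enough.
    fun isRelevantContact(pings: List<Ping>): Boolean {
        val close = pings.filter { it.rssi > CLOSE_RSSI_THRESHOLD }
        if (close.isEmpty()) return false
        return close.last().epochMinute - close.first().epochMinute >= MIN_CONTACT_MINUTES
    }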


The current app needs two phones to be in contact for at least 15 minutes before it triggers anything.


Firstly, people briefly walking past your wall shouldn't trigger either system as they're both set up to require a period of time and a certain signal strength.

Secondly, situations like this are possibly one of the cases where this NHS-style centralised contact tracing will benefit over the distributed one, as they can apply more advanced models to the nature of the exposure and try to reduce false positives. Or indeed, potentially expose that information to human contact tracers (though right now they say they have no plans to do this).


Here is the approach Switzerland is taking [2]. All open source, decentralised and privacy-preserving. Available for any other state to use.

It is the responsibility of the app user to get tested or quarantine themselves if the app shows you have been in contact. No central authority knows who you have been in contact with or who you are.

The installation of the app will also most likely be voluntary.

[1] https://github.com/DP-3T/documents

[2] https://github.com/DP-3T/dp3t-app-android-ch


Interesting to see from the docs that it seems to have been built by VMWare Pivotal, rather than an internal NHS(X) team.


This was mentioned by the media some days ago, eg https://www.bbc.co.uk/news/technology-52551273


This is again one of those cases where, even though it's nice to see the source code, it's very hard to verify what's actually running on our devices. The same case with Signal and other secure messaging apps.

What would a scheme that (within the confines of the current App Store regime) allowed users to verify the source code look like? Could Apple/Google maybe hash every file as part of the compilation/linking process, then concatenate that in some way, include it in the signed binary and expose it in the App Store?

It would be really neat with some process where I can clone a repo on my machine, run some script and get an identifier, and verify that it is the same as what is being shown in the App Store on my device.


There are initiatives to provide reproducible builds - you build the code yourself and get a binary identical to the one being distributed.

This is mainly taking place in Linux distros at the moment, but F-Droid is doing something related on Android, building all the apps they host on their own infrastructure.


Yeah, thinking about it a bit more, it seems like it's not possible to do without giving the source to Apple. If it's done on the developer's machine, I'm guessing it's always going to be possible to mess with it somehow.


For reproducible builds the source needs to be available to the users, so they can rebuild the software themselves and verify the binary being distributed is bit-for-bit identical.

There is really no way around that.

As for "messing with it on the developer's machine" - that's not really possible if the build is truly reproducible. If the users get the same binary after building the source, they can be sure they have the source matching the binary.

If the binary they get is different from what the developer built on their machine, there is a good chance they don't have the source matching the binary, which is exactly what reproducible builds aims to protect them from.
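
The verification step itself is then mechanical - something like (a sketch):

    import java.io.File
    import java.security.MessageDigest

    // With a truly reproducible build (and the store signature stripped or
    // compared separately), the two digests should match exactly.
    fun sha256(file: File): String =
        MessageDigest.getInstance("SHA-256")
            .digest(file.readBytes())
            .joinToString("") { "%02x".format(it) }

    fun sameBuild(builtFromSource: File, fromStore: File) =
        sha256(builtFromSource) == sha256(fromStore)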


Yes - to clarify this further: as I understand it (and I'm not deeply familiar with the process), reproducible builds require that developers take additional measures to ensure that their source code cannot accidentally or intentionally produce variant build output.

That means that some commonly-used practices like embedding current timestamps into files become unacceptable.

It does tighten the development constraints -- although frankly, that's similar to many other secure coding practices. Better tooling, linting and frameworks will help to reduce the developer friction.


Yep, IIRC timestamps are one of the issues - you can either drop the timestamp or inject a well-documented timestamp from the outside, so the user can do the same during the rebuild.

Indeed, a lot of the stuff you need to do to achieve reproducible builds can definitely make your build process more robust and secure.


I think on iOS, with bitcode, it would need changes within Apple to make some form of secure supply chain possible.


It’s pretty simple to reverse engineer an Android binary. While most Android apps ship with code obfuscation, it’s relatively easy to get around. Using that method, one could determine if the publicly available app came from the same source code (it would at least allow you to determine no extra nasty bits had been added).

iOS is a little harder. If you have the dSYM files it’s pretty trivial to do the same exercise as on Android. NHSX could choose to release them so people could again verify it more or less matches.


There's no obfuscation that I could see on the Android app, which should help with this. Certainly when looking across to the source release, it was a familiar codebase (modulo the fact the reversed version was in pseudo-java, and the original codebase is in Kotlin).


dSYM files are not very useful in ensuring reproducible builds; what you actually need is a jailbroken device and hashes of certain binary segments that Apple doesn't touch too much.


If they had a transparent build server which published directly to the stores (a la fastlane), that would go a long way.

Folks could see builds running when code is pushed and the app versions being released soon after.


This is a big step. I’ve previously been pretty against the NHS’s approach (being mostly centralised etc), but I’m very happy to see that they’ve open sourced it. Does this also contain the server side that does the matching?


No it doesn't. Neither does it include some of the configuration values you need to build the mobile apps. It looks like this was a last-minute political decision to open source the code in the face of growing public disquiet, rather than a co-ordinated effort.


It takes about 5 minutes to obtain the API keys from the Android app to be able to build it. It was also a pretty well-documented process to get the Android app to build (unlike some Android "open source" projects that seem to be designed to be impossible to actually build!)

Open sourcing the app was actually planned all along - when Matt Hancock announced the app several weeks ago [1], he said it would be open sourced at the time.

[1] https://www.theguardian.com/politics/2020/apr/12/uk-app-to-t...


They have said the server code will be coming soon. The intention has been to release the source code for the app since it was announced. I doubt this is a last minute political decision since the UK government already release a huge amount of their code as open source.

The project isn't perfect - but consider that it's been put together in a month, it's still beta and their focus has been on making it work, not making it easy to build.

Frankly, the level of transparency from the NCSC and NHSX has been far ahead of most governments and private organisations. You can disagree with the contact tracing approach, but if all government work was done with this level of transparency we'd be in a much better position than we are now.


Very interesting. We probably won't need it fully up and running for another week or two, so hopefully the bugs will be ironed out by then.

For context, there are two ways of doing digital contact tracing.

A centralised system which tracks everyone all the time and runs a central algorithm over all that data to decide whether someone is at risk. This is what China, South Korea, and Taiwan have done. Advantage: it seems to work. Disadvantage: the government has a database of literally everyone's movements. In SK they also use CCTV and other data; it's a nightmarish panopticon.

"Minimally disclosive" contact tracing. This keeps as much data as possible on the device and only share some of it with other devices or central servers. Obviously, designing these protocols is a balancing act between privacy and functionality. Different groups have designed protocols that make different choices there.

The two main competing protocols are DP-3T, on which Google/Apple have based their API, and PEPP-PT / ROBERT, on which some countries including the UK have been building their solutions. Until very recently, Germany was also using PEPP-PT, but they have switched to DP-3T.

Both protocols make trade-offs between the amount of information which leaks and the usefulness of the tool. That's important. Installing an app based on either will strictly reduce your current privacy.

Both frameworks use pseudo-random rotating keys which are exchanged over Bluetooth.

In PEPP-PT, a central server manages a rotating private key which is used to generate a set of time-gated ephemeral IDs for each device. Devices exchange and log these IDs. When a health authority determines that someone is infected, it issues them a key which allows them to upload all their logged IDs to the central server. (Optionally, it can just allow them to self-report, which NHSX seems to be doing.) The server is able to determine who the infected person's phone has logged a contact with and notify those people. A random sample of additional people also receive notification messages, which their phones are able to discard as invalid decoy messages.

Because relatively rich data goes to the server, real-time data on interaction patterns of the infected is available, and notification algorithms can be tested and tweaked. Since we don't know exactly how Bluetooth propagation and covid infection map to each other, this may end up being an important step.
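
A toy sketch of the centralised scheme's ID generation, purely to make the shape concrete (this is illustrative, not the actual PEPP-PT or NHSX crypto):

    import javax.crypto.Mac
    import javax.crypto.spec.SecretKeySpec

    // Server side: derive a time-gated ephemeral ID for a device from a
    // server-held rotating key. Devices only ever see the opaque output.
    fun ephemeralId(serverKey: ByteArray, deviceId: String, timeWindow: Long): ByteArray {
        val mac = Mac.getInstance("HmacSHA256")
        mac.init(SecretKeySpec(serverKey, "HmacSHA256"))
        return mac.doFinal("$deviceId|$timeWindow".toByteArray())
    }

The point being that only the holder of serverKey can map a logged ID back to a device - which is both the power and the privacy cost of the centralised design.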

In DP-3T, the keys are generated on the devices and contact logs are stored only on the devices. If a central server authorises you to do so (based on a confirmed diagnosed infection), you publish the keys behind your own broadcast IDs. All devices regularly download that list and check it against the rotating IDs they have observed nearby. If there's a match, the user is notified and is able to pre-emptively isolate.
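
Again as a toy sketch (loosely based on the DP-3T white paper; the chunk count and derivation details are illustrative):

    import java.security.MessageDigest
    import javax.crypto.Mac
    import javax.crypto.spec.SecretKeySpec

    // Day keys form a hash chain: publishing today's key reveals nothing
    // about yesterday's.
    fun nextDayKey(prev: ByteArray): ByteArray =
        MessageDigest.getInstance("SHA-256").digest(prev)

    // Each day key expands into the rotating IDs broadcast over Bluetooth
    // (96 here = one per 15-minute window, purely illustrative).
    fun ephemeralIds(dayKey: ByteArray, idsPerDay: Int = 96): List<ByteArray> {
        val mac = Mac.getInstance("HmacSHA256")
        mac.init(SecretKeySpec(dayKey, "HmacSHA256"))
        return (0 until idsPerDay).map { i -> mac.doFinal(byteArrayOf(i.toByte())) }
    }

    // Matching happens entirely on-device: expand the published infected keys
    // and compare against the IDs this phone actually observed.
    fun atRisk(observedHex: Set<String>, infectedDayKeys: List<ByteArray>): Boolean =
        infectedDayKeys.any { key ->
            ephemeralIds(key).any { id ->
                id.joinToString("") { "%02x".format(it) } in observedHex
            }
        }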

The second approach reduces the consequences of a nefarious central operator but at the cost of sharing more information with more people (since everyone sees the list of possibly-infected IDs). In other words, even in privacy terms, this is not a perfect approach either. That information can be used to carry out re-identification attacks and reveal infected users if certain conditions are met.

From a privacy point of view, I think many people would prefer the latter (which possibly allows a malicious attacker, if they are able to do certain not-so-easy things, to determine that someone was infected), since most people in the UK will not consider that deeply private and secret information about themselves. Many of my friends who got it have posted about it on FB, Twitter, etc. The former, which gives a state actor more information, seems like a greater breach of privacy.

A final point which I think is super important: we don't know if either of these minimally disclosive options works in practice. No-one has used them in the wild yet. If they both worked, then we could say: DP-3T has fewer privacy implications, they both work, therefore it is strictly superior.

Since we don't know whether they both work (or even if either one works), we are instead left with a balancing act between privacy and usefulness which is not an easy one.

Of course, if due to API support reasons, only the OEM supported protocol can be made to work then regardless of any theoretical arguments, that is the one that we will need to use. I suspect we will know by the end of the weekend if their implementation works in practice.

I would like to see statutory safeguards put in place for this data. If they really need it then I get that, but it is important that where we lack technical safeguards, we at least have legal ones.

(Also, Google/Apple had a difficult choice to make here because they operate everywhere. They had to pick something that they feel comfortable rolling out to their devices in every country.)


Thanks for the explanation - very helpful. I was wondering if you might have time to go into more detail on this:

> A random sample of additional people also receive notification messages which their phones are able to discard as invalid decoy messages


Basically, if you didn't do this, then people could snoop on your message traffic and, even though the payload was encrypted, they would be able to tell that you received a warning message about having been in contact with an infected person.

By sending decoys, everyone gets the occasional such message as far as any attacker is concerned, and no information is disclosed.
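
A toy version of the server-side selection, with invented parameters:

    import kotlin.random.Random

    // Hypothetical: notify real contacts plus a random sample of uninvolved
    // users, so traffic analysis can't tell the two groups apart.
    fun notificationSet(
        realContacts: Set<String>,
        allUsers: List<String>,
        decoyRate: Double
    ): Set<String> {
        val decoys = allUsers.filter { it !in realContacts && Random.nextDouble() < decoyRate }
        return realContacts + decoys
    }

Real recipients get a payload they can validate; decoys receive bytes their phone silently drops, so the two are indistinguishable on the wire.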


Did the NHS have any good reasons why they weren't going to work with Google and Apple (who surely have more expertise for this)?

Why not work on a single open source iOS + Android app that is shared and used in all countries so we can get cross border tracing?

If governments are concerned about Google and Apple getting access to data, what's the argument against the above where each country uses their own servers to store tracing data?


The NHS wanted the ability to model "risk-from" a contact, rather than only "risk-to". In the decentralised approach, infected users upload a list of their own identifiers, these are broadcast, and everyone checks if they've been near one. A threshold then needs to be picked that will trigger an alert - probably "contact-minutes" with an infected person.

The NHS approach is less binary: based on the rate of spread, they believe it's necessary to ask people to report symptoms and share their own anonymous contact log. Based on the level of exposure the "suspected individual" had with other known/suspected infected people, combined with the number of other known/suspected infected people you were near, your advice can be tailored. You might be told to isolate, but then you're asked to report symptoms regularly. Others in the same position as you will be doing so as well - there is potential here to actually tell people they no longer need to self-isolate if no symptoms emerge in the "group", based on the epidemiology and the rate at which people are believed to develop symptoms in the population.
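
To make the risk-from/risk-to distinction concrete, a hypothetical scoring function (all thresholds and weights invented):

    data class ContactEvent(
        val minutes: Double,  // duration of proximity
        val avgRssi: Int,     // signal strength as a rough distance proxy
        val peerRisk: Double  // the *other* person's own exposure score, 0..1
    )

    // "Risk-to" only needs duration/distance; "risk-from" additionally weights
    // each contact by how risky the peer themselves was - which is what
    // requires the central contact graph.
    fun riskScore(events: List<ContactEvent>): Double =
        events.sumOf { e ->
            val proximity = if (e.avgRssi > -70) 1.0 else 0.5  // invented threshold
            e.minutes * proximity * (0.2 + 0.8 * e.peerRisk)   // invented weights
        }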

There's a good write-up of this in the NCSC security report and blog post - https://www.ncsc.gov.uk/files/NHS-app-security-paper%20V0.1.... and https://www.ncsc.gov.uk/blog-post/security-behind-nhs-contac.... It explains a lot about the rationale, not only for security but also in terms of the approach and how the app is designed to work when people have symptoms etc.


Thanks for that I'll have a read.

Is there any discussion about other countries using this NHS app (the repo says MIT license)?

Each country could customise the alert heuristic but it seems like a waste of resources for each country to code their own app for a global problem.


> Thanks for that I'll have a read.

> Is there any discussion about other countries using this NHS app (the repo says MIT license)?

> Each country could customise the alert heuristic but it seems like a waste of resources for each country to code their own app for a global problem.

I've not seen any other country talking about this yet, although it only came out in source form today and is still technically in limited beta. I imagine Australia and Singapore will be interested in the iOS Bluetooth-in-sleep workaround method, as that does appear to be novel and not really done before.

If you implement the same basic API, it would be easy for another country to implement. The nice aspect of this design is that you can have an MVP (like UK has right now) while you evolve the more complex heuristics and models on the backend, and add those as you develop them. I'd agree it's a waste to build multiple apps, and I imagine countries will be looking at this approach now, and the value of an app with real working code, versus trying to build something.

The Bluetooth protocol itself is designed to support multiple countries using it - the outer message contains a country code flag so you know which country's health system you'd need to communicate with in order to exchange info of a proximity event.
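
A simplified parser to show the shape of that message - the field sizes here are my assumption, check the repo for the real layout:

    import java.nio.ByteBuffer

    // Assumed layout: [2-byte country code][opaque encrypted payload].
    data class BroadcastValue(val countryCode: Int, val encryptedPayload: ByteArray)

    fun parse(raw: ByteArray): BroadcastValue {
        val buf = ByteBuffer.wrap(raw)
        val country = buf.short.toInt() and 0xFFFF  // routes to that country's health service
        val payload = ByteArray(buf.remaining()).also { buf.get(it) }
        return BroadcastValue(country, payload)
    }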


> the iOS Bluetooth in sleep workaround method

Why do they need to find a workaround? Can't Apple tell them the best way or push an iOS patch to help?


> > the iOS Bluetooth in sleep workaround method

> Why do they need to find a workaround? Can't Apple tell them the best way or push an iOS patch to help?

Apple seem to take the view governments should use their decentralised approach, or not do it, and don't seem to have been willing to help governments do this without using their "joint approach" with Google.

I am normally the first to back anything decentralised. But in this case, the stated goals of the app (from a very good NCSC writeup I've linked elsewhere in these comments) simply don't work with a decentralised approach (namely being able to do risk-from calculation, not just risk-to, and being able to gather and model from symptoms whether others should keep isolating or can go back to normal).

Given Apple said it wouldn't help anyone not wanting to go "their" way, it's quite reassuring to see that engineers do what they do best, finding a new workaround, and making details of it available for any other devs... Perhaps Australia can use this in their app now to fix their issues with Bluetooth and iOS.


The Bluetooth message includes the country code unencrypted; this allows the system to interoperate with others. Every country will want its own data store at a minimum.


From one of their strategy documents:

"We believe that any likely decentralised model (because there are many) will have the following impacts in the UK:

-Move the health response from ‘react to symptoms’ to ‘react to clinical test results’: We cannot currently find a way to manage malicious notifications, or possible amplification attacks, in a decentralised model without authentication. Consequently, notification must be uniquely tied to an authentic clinical test. This generates a dependency on the digital authentication of clinical testing … "

I hadn't thought about that, but it might be quite a big flaw in the Google+Apple approach. If central verification is required in that model, it introduces a delay step which might make it a lot less effective. SARS-CoV-2 seems to be transmissible very early relative to symptoms, so this central notification step would need to be incredibly quick (fast enough that I don't think you could wait for someone to be swabbed and PCR+, even with a very efficient testing service, which the UK does not have).


I posted this in reply to another comment, but yes, this does appear to be the flaw of the Google/Apple approach. Assume the incubation time is just over 5 days (there seems to be a broad consensus here), and that someone is at their most infectious in the day or 2 prior to this (not sure if this is yet broadly agreed, but some studies are suggesting it [1]).

Imagine that person A is infected on day 1. On day 4, A met with B and C. A experiences symptoms a few hours into day 6.

With a decentralised approach, you really need the certainty of a test before uploading anything, to stop Sybil-like attacks. Most probably, A would be tested the following day (day 7), and get a result (while isolating at home) a few hours into day 8. Meanwhile, B and C are walking around in the population, on day 7 and 8 (days 3 and 4 of their own infection, the time they are likely to be most infectious). They get asked to return home, and probably experience symptoms the next morning.

In the NHS approach, when A experiences symptoms on day 6, they report this via the app, and are told to get home and self-isolate. B and C might also be asked to isolate, based on their contact with A, at this time (as well as their general perceived risk from any other infected individuals they might have had distant contact with - the NHS app can consider multiple potential sources of infection to an individual, and aggregate the risk).

This gets B and C at home and isolated on day 2 of their own infection, and has them at home during their 2 most infectious days. If neither gets symptoms, and others exposed to A don't develop symptoms either, the daily "how are you feeling" polling via the app for symptoms will be able to capture that based on lack of symptoms, there likely wasn't exposure, and the group can leave self-isolation. Testing then augments this for more certainty, but this model does appear to be quite novel and unique, and near impossible to do in the decentralised approach (especially some of the more complex parts like deciding how likely A is to spread the virus, based on A's own exposure to infected people in A's recent past, and the date A reports symptoms).

When taking this into account, I'm not sure how a "testing-based" model can work, unless there's pre-emptive screening being done. I don't think many countries have the capacity to pre-emptively screen asymptomatic people with the regularity and coverage you'd need to benefit from reacting to test results.

[1] https://eu.usatoday.com/story/news/nation/2020/04/17/covid-1...


Consider also the scenario where A reports symptoms on day n. They have logged contacts with 10 people on day n-2, 10 people on day n-3, etc.

With the NHSX design, you might warn back to n-6 but be able to determine that the riskiest cohorts are n-3 onwards. Since the symptom onset distribution can be modelled, if by day n+3 not a single person in the n-6 and n-5 cohorts has developed symptoms (and maybe you've managed to get a -ve PCR from two n-5s - the fact that you have the contact graph means you can use that data to adjust your risk score for all the others), you've now got a really good case for letting the n-5 and n-6 cohorts out. These models will be refined over time with new data. The presence of a central system which has some of this information lets you extrapolate from limited information.
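
As a toy version of that inference (the onset probability is invented): if each genuinely infected contact independently shows symptoms within 3 days with probability p, silence across the whole cohort quickly becomes strong evidence:

    import kotlin.math.pow

    // P(zero of k infected contacts show symptoms by day d), given an assumed
    // per-person probability of onset by that day.
    fun pAllSilentIfInfected(k: Int, pOnsetByDay: Double): Double =
        (1 - pOnsetByDay).pow(k)

    fun main() {
        // e.g. a 10-person cohort and an invented 60% chance of onset by day 3
        println(pAllSilentIfInfected(10, 0.6))  // ~0.0001 -> good case for release
    }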

If you have infinite PCR tests and can logistically get them to potentially infected people very quickly, get them back to a lab, and get them tested, then maybe this doesn't matter; but not even South Korea's capabilities are infinite, and there is no way the UK would be able to test all those people rapidly.


For reference, Austria's Red Cross app is here: https://github.com/austrianredcross


Interesting - it looks like they've gone down the route of having a way for people to do a "digital handshake" (both press the button, and have devices within a short distance of each other) to create their contact pairings.

I do wonder how many people will use this feature, or remember to use it. This would seem to reduce false positives, but make it much less useful at scale (i.e. it won't help trace people who were on a train or bus).


Contact not contract in the title?


Twitter user probing contact tracing apps for safety & security issues ('India now, NHS next'):

https://mobile.twitter.com/fs0c131y/status/12584360338400215...


Any contact tracing system that is not time-limited and disposable by design is a surveillance system being introduced in the guise of contact tracing.

My ideal tracing system would include disposable components, such as time-boxed smart cards, that are disposed of when an epidemic wave (cycle) ends. If the pandemic has multiple waves, each wave will entail producing a new series of disposable components.

Another important aspect of a genuine tracing system would be that it does not hook into infrastructure of surveillance capitalism behemoths.

The regrettable fact is that this contact tracing app will definitely become a requirement for getting a job, and very likely at some point a de facto identity and civil passport for being able to access modern infrastructure and urban areas.


>The regrettable fact is that this contact tracing app will definitely become a requirement for getting a job, and very likely at some point a de facto identity and civil passport for being able to access modern infrastructure and urban areas.

Should be an easy fix now that the app is open-sourced: just build a version of the app with all the tracking code stripped out, leaving a mock UI that always says "no contact detected"


The issue is not that. This is raising private corporate infrastructure (Google, Apple, etc.) to a quasi-governmental role. Something so tightly bound with fundamental civil and human rights concerns, should not depend on these giants. Think China's Baidu.

And here is a deliciously ironic title and article:

China's Dystopian Tech Could Be Contagious

https://www.theatlantic.com/technology/archive/2018/02/china...


Contact not contract


s/contract/contact/


Whelp, we've sure entered into the twilight zone of digital privacy quickly. A month ago we were saying "15 days to flatten the curve", today we have MRAPS busting up protests and digital privacy continuing to circle the drain... and THE PEOPLE LIKE IT THIS WAY?


There's a code review, of sorts, from an ex-Googler here:

https://lockdownsceptics.org/code-review-of-fergusons-model/


I'm not entirely convinced a site called "lockdownsceptics" where almost every article is about how the lockdown is stupid, unconstitutional, fascist, etc. and uncritically reposting articles by people like Toby Young is going to be even slightly unbiased about that model.


From the article:

> On a personal level, I’d go further and suggest that all academic epidemiology be defunded.

That's a hell of an extremist position to take.

It's also an anonymous report, not a code review, with the evidence presented being some graphs and no indication that the author knew what they were doing.

Pretty ironic that the author says "this situation has come about due to rampant credentialism and I’m tired of it" - but implies they know what they're talking about because they used to work at Google.


This review seems to be of academic code for an epidemiological model, rather than anything to do with the app itself? The 2 apps are in Kotlin and Objective C, rather than C++.


Sorry, it is. Managed to cross wires in my head.

In my weak defence, it's not often we get two bits of code open sourced in a week relating to UK government policy.


Hah, good point! Although in fairness to UK Gov, they are pretty forward thinking when it comes to open sourcing infrastructure that others would baulk at open sourcing [1]. Just usually not hugely relevant to their current policy, rather the underlying infrastructure.

There's actually some quite nice (open-source and reusable) guidance/handbooks etc. on remote working and security from the Ministry of Justice [2]. Probably not nearly as well advertised as it should be. Clearly won't meet everyone's needs, but there's a lot more out there than people probably realise! :)

[1] https://github.com/ministryofjustice/hmpps-book-secure-move-..., https://github.com/ministryofjustice/correspondence_tool_sta..., https://github.com/alphagov/govwifi-admin etc. [2] https://github.com/ministryofjustice/security-guidance


Actually, it is often.

https://github.com/alphagov


Can we stop pretending that if half the population refuses to wear masks and/or stay-at-home to spare others that they are going to be open and honest and assist contact tracing when there is no obligation to do so? Why would they bother?


I'm pretty much the opposite of those folks. Still, I'm not installing anything.



