Hacker News
60% of school apps are sending student data with third parties without consent (me2ba.org)
613 points by InitialLastName on May 5, 2021 | 155 comments



In the report, not the summary, Firebase is listed as the most used 3rd party, with Firebase Messaging and Firebase Analytics being designated as "High Risk." When the report says "these apps are sending data to Google" it sounds like they are sending ad tracking data to commercialize it, but tools like Firebase are common web hosting platforms, not data tracking tools. This is like saying data is being sent to Amazon Web Services.

Here is the report: https://me2ba.org/school-mobile-apps-student-data-sharing-be...

As an edtech founder, I 100% agree that there should be more time spent holding school apps accountable for the data that they send out, but this report doesn't accurately capture the risks present and how to hold organizations accountable.


> This is like saying data is being sent to Amazon Web Services.

I have a (large financial institution) client that asks this question on their vendor due diligence and on-boarding and plans to phase out vendors who say yes and aren’t willing/able to change. They’re concerned about data privacy. Not just security, privacy. From Amazon.


They'd better be huge or super niche, otherwise this sounds like an ego trip. Not sending data to a major cloud provider is quickly becoming like producing your own electricity because you don't trust that the grid will provide a stable frequency and won't go down. At some point you just need to trust the utilities and go with it. Plus you can hopefully encrypt your data if this is a major concern.


Not using AWS is nothing like producing your own electricity. I can easily set up something like AWS, but I could never set up the infrastructure to produce electricity on a large scale. Also electricity is a relatively uniform and neutral product compared to a cloud service.

You should have picked a different analogy.


It would be easier to produce your own electricity than replicate any significantly sized portion of AWS.


Creating a 24-hour, off-the-grid electricity supply is not as easy as you think it is. Even installing solar panels is a non-trivial task on its own.

In the end AWS S3 is just distributed file storage.


It's so easy to take things for granted when they just work with many nines of availability and integrity.


The thing about AWS that is so impressive is not what it does. What AWS does was previously done by every single IT department at every company across the globe. What is impressive is the scale at which they do it. When you move to a world of every product hosting their own AWS equivalent, you've removed the challenge and impressiveness of AWS since you no longer need their massive scale, you only need enough to host your own product.

Setting up your own servers was not a bottleneck to the generation of internet companies that preceded AWS, and time has only made hardware cheaper, bandwidth more affordable, and OSS higher quality and more plentiful. It is easier than ever to host your own services, AWS just makes it cheaper up front thanks to their massive scale.

Point being, people aren't choosing AWS because AWS solves a problem they're incapable of solving, people are choosing AWS for pricing flexibility and faster time to market.


Setting up your own servers was absolutely the bottleneck. Providing on demand scalable computing and storage untethered a generation of garage startups from venture funding. Prior to 2007, if you had a great idea you could build it and then start renting, or building and colocating, servers to run that app for you, at considerable up front expense. Plus, you got to do all of your own system administration, up to and including driving down to the colo when you couldn't adequately diagnose issues remotely.

So when you say "what AWS does was previously done by every single IT department at every company across the globe", yes, sure, but (1) it was done worse in many cases, with data loss or significant downtime, and (2) there were fewer such companies by orders of magnitude, because of the cost barrier.

What AWS did was take a costly process, done inconsistently and to varying degrees of correctness across the business world, and make it available to everyone at a very high quality, with an innovative pricing schedule.


There are a dozen different contractors and companies that can build out electricity supply for you in a relatively sensible timeframe. Most of the major components are off-the-shelf.

S3 has eleven 9s of durability. That alone is bonkers. Sure, let's say you replicate that and staff it up to keep it working ongoing.

I didn't say "replicate S3", I said "replicate a significant portion of AWS". A lot of the value of AWS doesn't come from using _a_ product, there are alternatives for most of what they offer, it comes from having the whole ecosystem of tools integrated and available in one place.

So now you need to go build out a highly available, redundant queuing service like SQS that supports FIFO delivery and up to 20k inflight messages.

And a highly available, redundant notification service with integrations not only with the web/email/etc but also SMS.

And a geographically redundant database service with multi-master, instant snapshots, point-in-time restore, etc, etc.

And... well, pick whatever other handful of AWS services you're using in your specific use-case.

And wrap it all behind tools for management. And hire a whole wackload of ops staff to keep it all going. And pay 10x as much because you don't get the economies of scale that AWS does.

I'm not ignorant of the difficulties in setting up generation and storage for electricity. But I'm also not ignorant of the absolutely massive task it would be to replicate AWS if you're using it for more than a really expensive VPS hosting service. I would 100% choose to work on off-grid electricity generation before rebuilding AWS.


I am sorry, but this reads like a report from fantasyland. In the past 3 years I have seen several AWS failures but not a single power cut.

Your entire take is based on the idea that the AWS replacement has to be superbly reliable and scalable, while your grid replacement does not.

Let's compare like for like: your grid replacement has to have redundancy so that generators can be repaired without power loss, it has to support megawatt-scale spikes in demand in case several friends with electric cars come to visit, it has to have a mean time between failures measured in years, withstand extreme weather, and be renewable. Also you need black start capacity and the logistics to replace equipment promptly when it fails.

How is the cost for that going to compare to using the grid?


Amazon have been caught using the sales data of their customers to compete against them.

If they want utility levels of trust they should get nationalized.


That's not AWS though


But it's still Amazon; the left hand doesn't get a free pass because the right hand pulled the trigger. Amazon has shown willingness to utilise privileged information in the past, so why wouldn't they go snooping around?


"why wouldn't they go snooping around?"

Because if Amazon retail, which sells stuff, is looking at sales in their channel, well, that's sneaky but the entire retail world is sneaky.

If Amazon AWS is caught snooping at private AWS data, which contains HIPAA-compliant health info, financial records, government data, and some of the most sensitive data imaginable ... then they are toasty-toast.

I think the same thing would apply to Google. GCloud is different from Search etc.; if that firewall breaks down, that business will crash.


Amazon snooped on Netflix, and built a direct competitor. They’ve done similar to smaller *aaS firms.

In those cases they mostly looked at traffic volume, etc., not private customer data, but I don’t have any insight into which ethical lines they will and will not cross.


AWS wouldn’t have to go digging through private buckets and servers to gain valuable information on a potential competitor. They can determine how fast a company is growing and what services they rely on based off billing data and bandwidth usage alone.


> then they are toasty-toast.

Why? Says who? Who or what would make them 'toasty-toast'? The toothless regulators? Feckless IT managers who make purchasing decisions? Impotent developers whining on Hacker News?

> if that firewall breaks down that business will crash.

Not sure why you're so confident about this.

a) GCloud is a distant 3rd place competitor in the cloud market anyway.

b) Google's real business is ads, and at some point the numbers might make sense to decide it's worth cannibalizing their dying cloud. (Especially if they're going to pull the plug anyway?)

c) None of the other data scandals has even slightly dented big tech.


Well, for starters they wouldn't be able to claim HIPAA or PCI-DSS compliance anymore. Anyone dealing with health information of Americans would expose themselves to criminal charges if they ever used AWS again. Though it's a private sector response, Visa/MasterCard/etc would also push anyone dealing with credit card numbers off of AWS.

At least PCI-DSS certification (I don't know about HIPAA) further involves annual audits to make sure that certain proactive things are being done as well, specifically including things like data access logs. Those audits aren't as comprehensive as they ought to be, but they'd catch something egregious like marketing people looking into data owned by AWS customers.

I'm pretty sure (though much less than the above) that this would also be a de facto GDPR violation, meaning that nobody who wants to do business in Europe could safely use AWS anymore either. Amazon itself uses AWS and wants to do business in Europe, so that's a pretty good incentive.


Many years ago, I went through a PCI-DSS compliance "audit" at a previous company. The audit was carried out by some third party whose job it is to sign off on those things.

We had to deliberately downgrade certain software to "approved" older versions and temporarily close some ports while they ran their scanning utility on our servers. After they rubber-stamped it, we re-upgraded and enabled whatever we needed to run again.

They certainly would not have been able to detect if data center technicians (this was pre-cloud) were accessing our data behind our back. Maybe some companies take the PCI certification process more seriously than my previous employer did.


They would be 'toasty-toast' foremost, because thousands of customers would sue them to pass the liability buck.

Second, because IT managers and executives would freak out everywhere, with legit cause. Nobody on the planet running an SaaS would want AWS to be using their data.

Third, it's probably illegal, so there's that.

Finally, the PR fallout would be huge.

If Blue Shield had a major client leak of HIPAA info, and it was because 'AWS staff were looking at it', it would be a big deal for AWS. They would have to prove to everyone that it was just a few bad apples etc.

Most of the other scandals have not dented tech because they are not really scandals. If FB has a breach and some consumer email addresses get leaked ... well that's not so bad. If AWS is looking at BestBuy AWS data, then BestBuy will sue and drop them, and others will follow suit.


>"Second, because IT managers and executives would freak out everywhere"

Remember when we thought that whoever was in charge would be toasty-toast if it turned out that the government was spying on innocent people without due process?

Or if an aircraft company knowingly produced unsafe aircraft and killed 300 people as a result?

For the past 10 years I have seen countless corporate breaches and fuckups, and one thing they have in common: there seem to be no consequences for those in power.


I think you're crossing streams a bit.

It's not that 'we' thought the government would be in trouble if they were caught spying; it's 'some' people. Most people have more nuanced views. Especially in areas of national security, most people accept some degree of oversight, so the issue then becomes a matter of details. What was the oversight? What are the material repercussions? Who is harmed? How? All of those things add up in complicated ways among the general population.

The Boeing issue is also complicated. These are not black-and-white decisions, and just because there was an engineer 'who said something was wrong' doesn't always help, because there's always a credible person who disagrees, and with systems like these, many of them are safe. Boeing has paid a huge price for their screw-up, with grounded fleets and cancelled purchases.

When Facebook does bad stuff, remember that consumers' greatest power is the choice to not use Facebook. So either people continue to use it or not. Apparently they are, so that's a measure of their real concern for their data, given the breaches.

If HIPAA information was looked at by AWS, then there would be lawsuits immediately, for example; there would be an investigation, and if it was 'just a guy' then I think AWS would be OK, but there would be a lot of scrutiny.

But if there was a whistle-blower at AWS who said 'people are looking at sensitive data all the time' then it would be over for them. While individual consumers may not collectively have any real power to do serious damage, big companies do.

Put yourself in the shoes of an exec running on AWS infrastructure: all of your most sensitive data leaked, possibly to potential competitors? So the issue is raised far beyond IT personnel etc.

Just like you'd ground your Boeing jets if there were a safety issue, you'd probably move away from AWS.


Not yet.


> Not sending data to a major cloud provider is quickly becoming like producing your own electricity because you don't trust the grid will provide a stable frequency and won't go down.

What, you mean smart? A lot of data centers do exactly that. SevOne has at least five Bloom Energy H2 fuel cells that run the whole building because they can't afford any downtime.


Privacy is the default. Why would any app ever share info without explicit user consent per-session?


FYI to find the table, ctrl-f 'top 10 most frequently ' (or, if you use Chrome/your browser supports text fragments, https://me2ba.org/school-mobile-apps-student-data-sharing-be... )

I also don't see how OkHttp[0] or Okio[1] are 'medium' risk. Are they secretly sending data to Square? The code is auditable on GitHub.

0: https://square.github.io/okhttp/

1: https://square.github.io/okio/
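For what it's worth, a typical OkHttp call in Kotlin looks roughly like this (a minimal sketch, not from any of the audited apps; the endpoint is made up). The library only talks to whatever URL the calling app hands it; there's no built-in telemetry back to Square:

    import okhttp3.OkHttpClient
    import okhttp3.Request

    fun main() {
        val client = OkHttpClient()
        // Hypothetical endpoint: OkHttp sends the request wherever the app points it.
        val request = Request.Builder()
            .url("https://example.com/api/grades")
            .build()
        client.newCall(request).execute().use { response ->
            // Data flows only between the app and example.com here.
            println("HTTP ${response.code}")
        }
    }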


Likely just user-agent scanning; those libs would be used by other services.


IMO there's no such thing as a trustworthy 3rd party. To say the least, they are very rare and they're not the big ones. The big ones just can't care about all the small customers. If they mess one up, it's no big deal for them. But if you are small and happen to be the one messed up, it is a big deal for you. That asymmetry is a huge problem.

And yes, what would XY-Analytics be other than stalking, put bluntly?


The point is that Firebase may not even be a third party, but rather the hosting platform of the application. Like the parent says, it’s like saying that AWS is a third party and a high risk threat.


Firebase is a hosting platform and a third party.


AWS is a third party and if you use their services, they are handling your data. I suppose it's up to you to consider how much of a risk that is to your business.


I think the tracking landscape is such that you can't trust ANY of the companies involved in it. I personally divide tracking companies into those that were caught doing something shady with the data and those that are doing it but were not yet caught.

School apps should not involve ANY tracking, regardless of your opinion about the company.

Kids require special protection because they don't yet know how to function in the world and can't make an informed decision about whether they want to be tracked or not, and in fact for these kinds of apps they are not even consulted or asked to consent.

Schooling is obligatory. I can decide whether I want to use the internet or not and whether I want to be identified, but kids don't make that decision -- they are being forced to use the software provided by schools.


Google, who operate Firebase, enable and encourage Firebase customers to partake in advertising, e.g., by linking their Firebase account to an ads account.

https://support.google.com/google-ads/answer/6333536?hl=en

https://firebase.google.com/docs/ads

https://firebase.google.com/products/analytics/partners


Unless things have changed, the bigger issue is that Firebase Analytics is included by default and can't easily be excluded.

Linking Firebase to Ads requires conscious action on behalf of the developer, while the former only requires using any Firebase product.

Also, Google is in the online ads business, so I don't think it's surprising that they provide docs that show how to use Firebase data to target ads. That said, it's not the default, so it's not as shady as you're making it sound.
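If I remember the SDK correctly, a developer who wants the other Firebase products without analytics can at least switch collection off at runtime; a rough Kotlin sketch (the helper name is mine, not Firebase's):

    import android.content.Context
    import com.google.firebase.analytics.FirebaseAnalytics

    // Hypothetical helper: opt this app instance out of automatic event collection.
    fun disableFirebaseAnalytics(context: Context) {
        FirebaseAnalytics.getInstance(context)
            .setAnalyticsCollectionEnabled(false)
    }

The point stands, though, that collection is opt-out rather than opt-in.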


What are the agreements with Amazon and Google/Firebase regarding the privacy of your data? If they ensure privacy, can they be trusted? For the data of children, shouldn't they be held to a higher standard? E.g. for protected government data, you have to run in Amazon's GovCloud.


Firebase's terms are here: https://firebase.google.com/terms (and GCP's data processing terms are here: https://cloud.google.com/terms/data-processing-terms)

Having worked on those products (and in some cases, written the terms), I'd say so.


Facebook used to promise that phone numbers collected for 2FA purposes wouldn't be used for advertising purposes, and then broke that promise.

That is just one example which was made public and was done intentionally. There's potentially plenty more cases where it's either done but concealed (because of how many factors go into ad targeting it's often impossible to categorically prove/disprove which data was used to target an ad) or done by accident from earlier code that just assumed all data is fine to use for ads (or that the new product which shouldn't share its data for advertising purposes was accidentally storing data in the same place as other products who do share data for advertising purposes).

Why should we trust companies that have a business incentive to break their promises? If Google's biggest revenue stream is ads and that requires personal data then I wouldn't trust any of their promises not to use some personal data unless I can 100% audit all the code myself and prove that it's indeed that code that is running in production.


Why are there no consequences for breaking their promises?


Because we have, as an industry, collectively surrendered our morality to "the market", which is supposed to fix all these problems by its almighty powers.

Because the prevailing voices in our country have spent the past half-century telling us that government is Bad and business (and greed) is Good, and enough of us have bought into the idea that we've systematically dismantled enough of the systems that keep corporate greed from running amok and ruining people's lives that they now effectively run large chunks of government (e.g., see ALEC).

Because too much of our society—especially in the tech sector, and more especially in the parts of it that are overrepresented on HackerNews—has come to worship wealth and the wealthy, and to believe that they should be allowed huge amounts of latitude to do what they want with their wealth.


Naive question: why do children need a higher standard of privacy vs. adults? It feels like "children" here are mainly used to evoke an emotional response.

If I had to choose a stage of my life to become part of the public records, I'd definitely pick my childhood over my adulthood.


While I see your point of misusing children to evoke an emotional response, I think this is an exception:

1) School is a mandatory part of life for every child, therefore increased scrutiny and high standards are warranted. As an adult, you are free to go if you ever disagree with your employer on ethical grounds. But if children are forced to participate in a dubious (in terms of privacy) online service, they really have no choice other than to accept it (going to another school in the same area is hardly a realistic option).

2) In general, it is understood that children might not be able to grasp all consequences of their decisions. Even if children had a choice whether to accept their school's mandatory online services, I don't think they can really give consent considering they might not understand the ramifications of their personal data ending up in the wrong hands.

Reading your question again, I don't think children need a higher standard of privacy than adults - rather, I think the point is that regardless of the standard of privacy, children can only give limited consent.


>If I had to choose a stage of my life to become part of public records, I'd definitely pick my childhood over my adulthood.

That doesn't sound sensible unless you are assuming that at adulthood you would be out of the public records, in which case, sounds great! Because your adulthood will hopefully be a lot longer than your childhood. On the other hand, I don't think an advanced society would work with that model.

Secondly - why do children need a higher standard?

People at different stages of their life are not necessarily the same people; a man at 50 might not be very much like who he was at 20. Unfortunately our society does not do much to support such a concept. It does however support the very rudimentary concept that things you did as a child should not follow you as an adult - when you have different legal responsibilities and possibilities of action. So that is something that probably shouldn't be taken away.

Furthermore, as a child is not fully developed in reasoning, it is probably nice that the things the child does are not part of their permanent record.


As an adult, any service provider can claim that you agreed, when you signed up to a service, to give away your personal data.

It’s presented as a contract mutually agreed between you and the service provider - you get the service; the provider gets the data. A legal contract.

But, in all systems of law that I’m aware of, you can’t make enforceable contracts with children.

This exists as a way of protecting children from making disadvantageous agreements that they don’t yet have competence to understand.

So, the service provider can’t use any contract as justification for handling personal data of children.

This leaves the service provider without legal justification and then calls their ethics into question.


> But, in all systems of law that I’m aware of, you can’t make enforceable contracts with children.

In most (all, I think) US states, contracts with children are generally legally valid, but voidable by the child. This means the child (or the child's guardian) may cancel the contract before performing any obligation. It does not mean that once completed, the exchange can be retroactively invalidated, though.


In my view, "children" are used here, because the topic is school apps. On the other hand, one thing special about children is that we have specific laws to protect the privacy of their school records, namely FERPA. This also provides a potentially useful attack vector that the rest of us don't necessarily enjoy. This problem might lend itself to a class action FERPA lawsuit against the school districts that have released the information.

I'm going to assume, without being a lawyer, that sending out school data is a violation of federal law.


There are many contractual commitments to privacy on all of the big platforms. Like any tool, the wielder can make bad implementation choices. (And I can assure you, on-prem Edu IT is a shitshow)

> E.g. for protected government data, you have to run in amazon's govcloud.

Not really. Many .gov workloads are fine in commercial cloud. It depends on your compliance requirements. Sometimes there are downsides too - some services aren’t available. At one point, Azure Gov required separate credentials.


AWS terms do not grant them a license or ownership over your data, although "If you process the personal data of End Users or other identifiable individuals in your use of a Service, you are responsible for providing legally adequate privacy notices and obtaining necessary consents for the processing of such data" - so if you just store random files on s3 or EBS it's not like AWS can do anything they want with it.

https://aws.amazon.com/service-terms/

The GDPR DPA does a good job of encompassing the entire extent of their data use:

> AWS will not access or use, or disclose to any third party, any Customer Data, except, in each case, as necessary to maintain or provide the Services, or as necessary to comply with the law or a valid and binding order of a governmental body (such as a subpoena or court order)...

https://d1.awsstatic.com/legal/aws-gdpr/AWS_GDPR_DPA.pdf


They forced us in high school to use a service called “TurnItIn” which I really disliked because they took ownership of your writing and added it to their database to diff against all future submissions.

It’s supposed to prevent plagiarism, but it made me frustrated to be forced to aid in this system or fail.

Being a public school student in the US sucks most of the time - no power and miserable adults. It’s like being imprisoned with some of the dumber, less successful people from the previous generation in charge of you and they have a lot of discretion over your future success.

Some teachers are great obviously and make a huge difference, but many are truly awful.

If they’re requiring services like turnitin as a matter of policy it’s not a surprise other stuff would be poorly handling student data too. School’s don’t value it, and even if they did - they don’t have the technical capability to fix it.


University felt like a limbo where the government expected me to be an adult and pay my own way but the school wanted me to be a child and trust that they knew what was best for me.

In our (UK) system, there's no profit incentive, because no amount of bad customer service will be enough for you to give up the economic signalling. So, despite being the most expensive thing I've ever paid for, I had less input into how it was done than I do for my coffee subscription.

I didn't have a bad experience with school before that but I still remember distinct occasions of cruelty or unfairness. At that age you're just less well equipped to deal with it, nobody trusts you and you have no freedom, so your world is smaller. So a minor thing has a major impact. Adults used to tell me that it's all downhill after 18 - did they forget what it's like to be a kid?


I was going to give you some friendly advice that having the clichéd attitude that everyone else is stupid really makes for a miserable life.

But then I realized you're not a high school student, as you joined in 2009 and you're called 'fossuser'.

Bit of a lost cause maybe, but seriously, let the superiority complex go, it really doesn't help make you happy. I say this from experience.


Everyone isn’t stupid - today I work with wonderful, very smart people I can learn from.

HN and a lot of places online have wonderful people too.

The average American public school teacher? They’re not that bright and they can be petty. I usually didn’t have trouble because I was a good student.

Some examples:

- Teacher in health class said BAC wasn't lethal until 40%; when I mentioned I thought it was .4%, she said I forgot to multiply by 100. On the final, the multiple choice answers for lethality ranged from 10% to 40% BAC. She also had an obvious hyper-Christian, abstinence-only lean despite state law to teach actual sex ed.

- Art teacher in elementary school called home and humiliated me in front of the class because I was talking about video games with another student while working on arts and crafts (this teacher hated video games).

- Middle school history teacher hated children, picked favorites, and was just generally mean, would bully kids in the class. She was worse than most middle school students.

It’s not a superiority complex, I had wonderful teachers too that cared about students and were kind. The incentives and pay of public schools are such that you get a lot of bad ones (and they’re impossible to fire).

When I tried to tell adults about what was going on they often dismissed it with condescending nonsense ("oh it's a tough age") - no, some of these people you have in charge of us are legitimately crazy and should not be in charge of children.

Children have no power in school and it often sucks to be there.


You might like this blog post from Scott Alexander https://astralcodexten.substack.com/p/book-review-the-cult-o...


Thanks - that was good.

Also reminded me of an older comment I wrote a while back that’s a little related (at least has some more absurd examples I had forgotten about).

“ Illegitimate (or at least arbitrary) authority is something schools seem to thrive on. Our school had a staircase you could only go up and one you could only go down and you got yelled at if you went the wrong direction. Naturally the building wasn’t made by insane people so the stairs were on opposite ends, this meant if you had to go downstairs but were near the “up only stairs” you had to traverse the entire building and would be late with 3min class change times (and you couldn’t run either).

We also couldn’t talk during the second half of lunch because it was too loud for the lunch monitors.”

https://news.ycombinator.com/item?id=26637787


I can relate to most of this. Agree.


Oh, I got into all sorts of trouble by being a good student…


> they don’t have the technical capability to fix it

This is a key point. Many school administrators just pick tools to use and don't bother understanding them and do not have the technical staff to setup and maintain them properly.

We all work in tech related fields, I assume, so think about the most sought after positions, have those ever been at a university IT department or K-12? Probably not.

There was a thread a couple weeks ago on here where it was joked (kinda half serious) that a school IT department is where you can go and just "coast" with great work/life balance.


> They forced us in high school to use a service called “TurnItIn” which I really disliked because they took ownership of your writing

They store a copy of the work, but I don't think anybody claims that copyright actually transfers from the student to TurnItIn.


> “ We are free to use any ideas, concepts, techniques, know-how in your Communications for any purpose, including, but not limited to, the development and use of products and services based on the Communications.”

https://www.zdnet.com/article/turnitin-if-youre-a-student-al...


> be me

> want to pirate best-seller book

> create alt account in turnitin

> publish pirated copy of book in turnitin

> turnitin acquires permission to use book

> buy intellectual property of book from turnitin for cheap

> become millionaire selling IP of book.


The entire value of the product comes from a corpus of student papers. The students are not compensated for their contribution to the company. Instead, their contribution is secured by extortion.


This is probably similar to the posts that regularly do the rounds claiming Facebook are asserting ownership of any photos or text you post to Facebook, when if you read the terms, what they're actually saying is that by posting you're giving them a license to publish that content (because without that license, any time they showed your photo to another user they'd be breaching your copyright).


>This is probably similar to the posts that regularly do the rounds claiming Facebook are asserting ownership of any photos or text you post to Facebook

Maybe it's true that FB don't own these rights, but they certainly did try. I recall a huge user campaign against Facebook, who were bringing out a new update to their ToS regarding photo ownership back in circa 2011. They ultimately backed down.

I can't find it after Googling though.


Facebook lets third parties use your photos in ads. The TOS publishing clause probably covers that and other usages:

https://www.snopes.com/fact-check/facebook-using-profile-pho...


I work for a large educational company you have probably heard of, on a product you may or may not have.

I was on vacation a couple years ago, and in some downtime I was just scanning PRs awaiting code review, because I am bad at work/life balance. Lo and behold, there was a PR ready to go that would have exposed students directly to a third party. A reputable third party, but a third party nonetheless. It had the requisite votes but the user had not merged it. I very quickly threw up a blocking review and called my boss.

I made quite a stink about it to my boss, but he didn't seem to think it was a big deal. I went up the chain to his boss, and he also didn't see what the problem was. At this point I called my boss's boss and basically begged him to talk to legal about it. He did, and they put the kibosh on the whole thing right out of the gate. I'm sure I burned some political capital that day, but in my book at least it was well worth it.

Hanlon's Razor in full effect. Even then though, I know these people well and know they’re not dumb, the way the web is structured makes it really easy to expose people and not even realize it.


So often these things happen with the best of intentions. We use OS-level geofencing support in our mobile apps at work to provide some functionality to customers, and have been looking at providers of better support for that recently to try and work around some problems.

During a call, someone mentioned they were looking at a provider who offers a full location history for debugging purposes, which was being looked on favourably (who doesn't want better debugging?) until I stepped in and pointed out that we don't want to be anywhere near a full, precise location history for our users. It provides basically no benefit to us other than debugging being a bit easier, and there's the massive risk that if someone's account for this service is compromised, they've potentially facilitated stalking of our users.


The third-party doctrine in the US also means that this data is accessible without a warrant, if the provider turns it over on a request. The fact that it's your location doesn't matter; it's the provider's data to do with as they wish (including ratting you out to police fishing expeditions) if they feel like it.

Then again, anyone who has location services on systemwide on an iOS or Android device is sending this log to Apple/Google anyway (because location services transmits all of the visible Wi-Fi APs to Apple/Google to improve location).

https://en.wikipedia.org/wiki/Third-party_doctrine


Also, the data can be retroactively collected with a warrant, even if it would have been illegal for the police to obtain the data themselves.


This year my son was assigned a login and password for a learning site by his school.

I logged in the day it was assigned.

Google Chrome immediately notified me that the login and password were part of a leak.

I contacted the school IT to let them know that the service they used seemed to be compromised (other parents saw the same alert), but they seemed confused about how I would know the logins were compromised...

Most school IT teams are woefully underfunded, and often aren't making decisions on what they use.


Government agencies with “special” employee classes like schools are really stupid about funding choices. My neighbor makes >$100k as a coach — great guy, very talented, but ridiculous when you consider that they have like 10 IT people for a district with 15 schools, and the director makes $60k and gets a stipend for teaching some elective course. They meander along with a patchwork of consulting engagements through grants.


If the learning site was Edmodo, I wouldn’t worry about it. I was a member at the time and they notified me promptly¹. To date, it’s the only account I have that’s listed on Troy Hunt’s “Have I Been Pwned?”² but I was impressed with how they handled it and reassured that the passwords were stored with the bcrypt algorithm.

1. https://support.edmodo.com/hc/en-us/articles/115007376848-Im...

2. https://haveibeenpwned.com/


This being Hacker News I guess not, but did you use a user / password that you had used before on another site?

Because in that case Chrome was just telling you that "one of the many sites you use (where you used this pwd) has been hacked in the past"


The usernames and passwords were issued by the site; you did not pick them and could not change them.


Ouch. That is so bad on so many counts. Hopefully infosec will improve in society, like we have learned to manage workplace safety or environmental safety. But there's a looong way to go :-)


Students are afforded very little privacy. I'm sure you've heard of administration software that lets school staff view and control students' computers remotely. One of my teachers even showed the class that he was able to view the screen of a student who was absent from class, without the student's knowledge. The software also logs keystrokes (yes, including passwords) and browsing history. If that wasn't bad enough, they also enforce custom certs so they can MITM SSL traffic. Many students log into their personal accounts on their school computers without realizing that they've compromised their passwords and sensitive personal information.

Schools are also notoriously bad at security. My school force disabled Firefox updates, meaning every browser remained unpatched after multiple CVEs, one of which allowed websites to execute arbitrary code on the host's machine (sandbox escape).


Fortunately all of the security problems you mentioned also make it extremely easy for the students to compromise the machines and install a clean OS.

The high school I went to (about 5 years ago) had a laptop program, and every year when laptops were given out to the new students one of the senior students would go around and teach everyone how to wipe them. Administration obviously weren't particularly keen on this, but what are they going to do, suspend the entire school? Confiscate over a thousand laptops every week to re-install their spyware? You just need to get buy-in from enough of the students to make any retaliation implausible.

I think this process was a lot easier at the school I attended as it was a very tech-focused institution to begin with, but kids are only getting more and more technologically-minded as time goes on, so getting that buy-in should be doable.


Convenient security holes are harder to come by when the spyware is built into the OS. The popularity of Chromebooks means most students don't have the option you're describing. You can't disable the analytics without disabling school control over the whole OS, and then you lose the school wifi password and your machine is useless anyway. What's worse, in a particular district I used to work for, this was the only device the kids had, hence why the school needed to hand them out in the first place. So suddenly the school and Google see all the personal browsing data of every student. Obviously there's no malicious intent, so nothing bad has come of it yet presumably, but it is unsettling.


In general MDM provisions both the spyware and the configurations/client certificates necessary for internal resources. With some effort you might manage to unbundle the two. But just wiping the machine and freeing it from MDM only nets you a normal off-the-shelf machine. You aren't getting at privileged resources with it anymore.


> One of my teachers even showed to the class that he was able to view the screen of a student who was absent from class, without the student's knowledge.

Isn't the fig leaf of "it's stated in the ToS that the user read and understood" the only reason such software is legal? Once you admit the student has no knowledge of it, how is it different than the RAT stalkerware people go to jail for?


I think it's good to teach the kids that these devices are not theirs in any way. They should expect no privacy whatsoever while using them. Later they will handle similarly any devices provided by employer.

Tell the kids not to use such devices for anything that can raise an eyebrow, let alone raise a concern.

Doing leetcode problems? Fine. Browsing Wikipedia? Fair enough. Watching educational videos on YouTube? Proceed with caution. Playing games? A bad idea, stop. Opening your private email / social network account / instant messenger? Please don't ever do it again, and change your password just in case.

This, of course, is hard for kids. For many of them, it's the only real computer they have a chance to put their hands on. Which is, of course, bad. But fixing that is not very expensive: a reasonable used laptop can be had for $200, an RPi + a reasonable monitor for less than $100. I hope most US families with kids who are genuinely interested in technology can afford it. (Getting the parents interested is a different problem.)


> Later they will handle similarly any devices provided by employer.

And actually, it is important that we correct this situation in both directions right at the start.

In my opinion, the laptops should be treated more like toilets. Sure, they aren't your property and you should be a bit suspicious about "hygiene" (security, etc, in the case of the laptop), but I absolutely expect to not be spied upon and it should be a crime to spy on usage of a company laptop once they allow it to be brought home.

In fact, when an employer expects me to take the laptop home with me, I generally reject liability for its physical security; I would push for access to a secure locker on premises to store it. If I go to a bar after work, I don't want to be concerned with company property; I don't want to carry around anything more than my wallet and phone. There should be consideration for me taking care of the physical security of company property if they are unwilling to provide secure storage in the office. The bare minimum of such consideration would be the understanding that I can use it for (limited) personal use at home. I can make personal exceptions for limited times when I have some kind of on-call, but my general norm would be to leave company property on company premises.

So yeah, if you are going to issue laptops to students that you expect them to assume responsibility for, you should provide some kind of consideration over just the "opportunity" to do school work at home. Leave that for school hours. I do recall some teachers talking about how homework prepares one for the real world when the sad reality is you should be trying to find employers that can provide you with a better work life balance than that.

These are my norms at least, and I think they are reasonable. If the company wants to get something (me taking the laptop home) then they need to give something (absolutely forbid involuntary remote full administrative access, that is give me privacy).


In general we teach children not to do "anything that can raise an eyebrow" because it's wrong. Because the raised eyebrows of the adults who care for them are based in compassion, protection, and legitimate moral authority. And if they find themselves scheming about how to avoid the eyebrows then they are knowingly and intentionally doing something wrong, self-destructive, or both. And that is not the kind of people they are, or want to be.

I think many parents would be profoundly uncomfortable spending significant money so that their children can evade supervision and guidance by authorities they are meant to respect. My own parents were pretty liberal, and it would still creep them out.


Learning that they should actively protect their privacy and that there is no reason to think that school authorities always have their best interest as a priority seems rather important, actually. What you teach otherwise is more like mindless conformity.


They seem to consider any use of an SDK or library to be "data sharing". For example, if an app uses OkHttp (an open-source HTTP client library) [1] then it counts as data sharing. That doesn't seem right...

[1] https://square.github.io/okhttp/


Yeah this entire thing just reeks of low effort clickbait.

Classing OkHttp as "Medium Risk" is ridiculous (especially since under the hood it's been the default HTTP library on Android since something like 5.0). That alone signals that the authors don't have the skills or expertise to carry out this study.

It's a shame - since I bet there is definitely data being leaked places it shouldn't be in apps like these, and a proper investigation would be interesting.


I feel like they must have just seen it was a library from Square and assumed it sent data to Square. I doubt that 40% of the apps were written with no SDK or library.
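A naive static scan over an app's dependency coordinates would produce exactly that kind of misclassification. A hypothetical Kotlin sketch, not the report's actual methodology:

    // Map Maven group prefixes to the vendor that publishes them.
    val vendorByPrefix = mapOf(
        "com.squareup.okhttp3" to "Square",   // OkHttp: a plain HTTP client
        "com.google.firebase" to "Google"
    )

    // Flags every vendor whose prefix appears, whether or not any data
    // actually flows to that vendor at runtime.
    fun flagVendors(dependencies: List<String>): Set<String> =
        dependencies.mapNotNull { dep ->
            vendorByPrefix.entries.firstOrNull { dep.startsWith(it.key) }?.value
        }.toSet()

    // flagVendors(listOf("com.squareup.okhttp3:okhttp:4.9.1")) returns setOf("Square"),
    // even though OkHttp on its own sends nothing to Square.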


I asked Remind – a teacher-student communication app – to remove pings to Facebook ads, DoubleClick.net, and Google Ad Services from their site when students viewed messages. Allegedly they were being used to track the performance of ads they bought, and the ad companies promised in their ToS not to use it for ad targeting. They were polite and removed it after 2 weeks. Last time I checked they added Twitter ads analytics again...


At this rate we'll need to give kids weekly throwaway identities akin to how we use throwaway email addresses and throwaway social media accounts.


I would say a lot of schools themselves add those things, just to do analytics. Heck, my doctor's website has google and doubleclick links - on the pages where I'm communicating with my doctor, or viewing test results! I complained and they said "our website is a convenience"


Did you point out they’re breaking the law when you complained?


> the owners of the most (third parties accessing student data) were Google

Hardly a shocker. Google has been successfully pushing their classroom.google.com at school boards for several years. Schools of course don't ask many tough questions and (I suspect) when installing/setting up it's next-next-finish. Students (and parents) really don't have a choice if their school advocates using the service.

Glad my kids have long graduated so I haven't the need to fight to keep Google's hands out of (our) children's lives!


I would expect Google Classrooms to be particularly good at not sending data to _third_ parties!

They're already Google, so are not a third party. In this case it's at least clear and obvious that Google is being trusted with student data. And being Google, they already have in-house services for ad tracking, analytics, and whatnot. You're still sending data to Google, but it seems less likely it'd then be sent out to other third parties with unclear privacy policies.


This is also why I think concern over "third parties" is misguided. It needlessly advantages the biggest tech companies, and encourages them to get even bigger.


I think the definition of 'third party' really needs an update. It should include other business units of a company not related to the product.

Google's Ad unit should have no access to the data generated from the Classrooms sites.


Yep. I’m in a k12 school, and we have google classroom on ipads and every proprietary thing you can think of. All my teachers have this annoying apple teacher image in their email signatures. No choice.


Might you spare some support for all the other families and children you may never meet but who will impact the future of your own children?


I looked at the full report too (https://me2ba.org/school-mobile-apps-student-data-sharing-be...) and I still can’t find what the actual apps are.

Who are the worst offenders? How do I know if I’m affected?


There really should be a law against mandated use of non-free software, at least with children who won't know any better.


Agreed. Children can't give informed consent to whatever the software may be doing behind the scenes (or in the backend) esp. if it handles their personal data. This is exactly the sort of thing the AGPL was made for: it should be possible for others to host and inspect software rather than blindly trust.


To be fair, very few people can really give informed consent on this topic. There's a million and one ways that you could be taken advantage of in a scheme like this.


This is true, but the right to inspect software has meaning even if it isn't used. It can be used by other people besides the proverbial "average Joe", and non-techs can ask tech friends about software. Every technical person also started out as a non-technical child.

This is also a baseline, not the endgame. Software freedom is a necessary but insufficient condition for user freedom. I wrote about other conditions as well. HN discussion: https://news.ycombinator.com/item?id=25982860


Yeah that's why there are people who argue non-free software should be illegal entirely but it's much harder to convince people of this (although Apple and Google certainly have made it an easier case to make.)


That the subject matter is too complex to comprehend should be an additional reason to ban this practice entirely, not to accept it under the guise of "fairness".


Oh, this reminds me:

One of my email addresses somehow ended up as the main parent address for a random student at Institut Els Pallaresos, a Catalan school.

To be clear, I have no connection to this person, the emails just started coming out of the blue. I've chased the school to stop sending me emails; they said they'll update the email on the account, but nothing changed.

I currently receive timely updates about this child concerning the following:

* school events

* grades

* misbehaving in school

* missed classes (the kid skips a lot of classes btw)

* performance

I am also able to use my email to sign into the account on ieduca (the software platform the school uses) and see a bunch of personal information (e.g. home address).

I've repeatedly pointed this out to the school and they've done nothing about it. At this point I've resorted to sending all emails to spam.



> (e.g. home address).

Have you considered contacting the kid's parents about this?


I'm already feeling uncomfortable having access to this private information. I don't want to make use of it in any way, and there's no telling how the family would receive the news.


Seems like literally any cloud hosting at all is considered third party by them, which would limit schools to running LAN based apps only with all local or LAN storage. They have an overly broad definition of 'third party', because the usual concerns are data being leaked and used for advertising, not hosting servers, databases, disks, or other app services, which usually have some privacy terms.

Some people might think a non-cloud-based solution for a school would be better, but it would almost certainly be worse. Schools are terrible at IT, security, updates, provisioning, etc., and very quickly it would be hacked, if not by external people, then by one of their own students.


While I am sure there are major issues in this area, I am not sure if this report provides an appropriate measure of them.

My biggest concern is that there is no mention of G Suite Education or districts using Google-branded products (e.g. Google Classroom, Google Meet, Gmail), presumably[1] with an agreement that limits data sharing. Without distinguishing these use cases, it is hard to take the article seriously.

If taken seriously, the graphs comparing Android and iOS apps (buried in the actual report, Figure 11) would be completely damning:

> Android apps are about 8 times more likely than iOS apps to include very high-risk SDKs. Android apps are 3.5 times more likely than iOS apps to include high-risk SDKs.

[1] Evidence that this does not exist would be interesting, but I would be very surprised to find out that large districts have no such agreements. Small districts? Sure, they are at the mercy of the vendor.

[2] EFF article from 2015 saying at that time information was shared for non-advertising purposes. https://www.eff.org/press/releases/google-deceptively-tracks...


> presumably[1] with an agreement that limits data sharing

I have worked for both a municipal government and school boards. The difference in attitudes is astonishing. At the government level, the agreement is not enough. Confidential data, including personal information, must be hosted on servers within the country in order to ensure the data cannot be accessed by a third party without legal recourse. For school boards, an agreement suffices.


Is this legal? There are pretty strict regulations on what data you can share as an educator. Do these not apply to them? I imagined that since HIPAA applies to anyone working with medical data, FERPA would apply to anyone working with education data (these companies).


It is legal. What's protected under FERPA is data generated by the school that is given to a technology provider, not data generated by an app user. So yes, a school app can legally use Google Analytics or Facebook like buttons.

So as long as you're not sending PII, you're in the clear.

...However, it's actually allowable to send PII if the sending meets the following criteria:

- Provides a service or function for which the school would otherwise use its own staff

- Be under the direct control of the school with regard to the use and maintenance of the PII from education records

- Collection and use of the PII must be consistent with the school or district’s annual notification of rights under FERPA

- Not re-disclose or use the education data for unauthorized purposes.


Not a lawyer, but wouldn’t COPPA [1] apply, at least if the student is under 13?

1: https://www.ftc.gov/enforcement/rules/rulemaking-regulatory-...


COPPA permits a school to obtain parental consent on the operator’s behalf, as long as the operator uses the information only on behalf of the school pursuant to the agreement between the school and the operator.


I wonder if a US student who is also a citizen of an EU country can demand GDPR rights in this case. A school may not be a business entity (to say nothing about having a presence in Europe), but the data brokers can be called.


Should probably try it with a for profit university.


This should be a gigantic red flag. Esp. where schooling is mandatory.

The parents should know -the details- and consent. Every school district should be required to provide full details of at least: provider, data collected, anonymization status, all parties with access to data, and for how long. And District and parents must be notified of any changes in very short order.

That way the District must be aware of it. The providers and the District should also be apprised of their possible liabilities: 1) if they don't provide that information, 2) if anything goes very wrong. Which is quite likely.


> Android apps are much more likely than iOS apps to be sending data to third parties, and are much more likely to be sending to high or very high-risk third parties

This seems like an important conclusion because most schools do not use Android. I run an edtech startup and we don't even have an Android app. No school has ever asked for one (individuals have, but not enough for us to build one). Most schools use Chromebooks or iPads.


Chromebooks are going to be doing a lot of "send to Google (and its affiliates)"


Most chromebooks can now run Android apps, too


At Common Sense Media we have a project that evaluates various edtech apps for privacy concerns. You can check it out here:

https://privacy.commonsense.org/


This is tangential to the issue, but when I was in school, around 2005 in the UK, we had a fascinating IT class where the teacher explained all the different facts that can be derived from a simple loyalty card (the kind you use in shops to collect points and claim rewards).

I don't remember all the details but I remember feeling alarmed by how much they can learn about you, and haven't ever used one since.

I think it would be great to look at some benign hypothetical software platforms, with students, and run through this same exercise to understand why data protection matters.


We're creating a CRM / ERP SaaS specifically for consultancies. After spending some money on lawyers two years ago, we concluded that 'free consent' does not exist when your school or company prescribes a SaaS solution.

If you say "I do not consent," can you still use the application in the same way? Can you refuse to use the tool without impacting your career or school results?


Schools are often big offenders when it comes to violating privacy. It's usually a combination of ignorance and arrogance. They don't know what they're allowed to do and assume they're allowed to do a lot more than they actually are. "You're a student, you signed up, now you belong to us!"


When I did projects with and for our local university, we had clear rules about where which data was allowed to go, which mostly meant not using any US services and choosing European services carefully.

I still don't get why this kind of data protection is not more common, especially for things like education.


So if the report labels Google Sign-In and Firebase Analytics as high risk, what's the point of reading it?


Did anyone find the list of apps they tested or reviewed (if I understand their references to AppFigures properly)? For example, they show a picture of an app for Sioux Falls, but that app isn't findable anywhere in the report's text.

It seems irresponsible not to clearly report what they reviewed.


I'd like to highlight that 60% of apps isn't synonymous with 60% of students. That is, if one dominant offending app is used by, say, 80% of students, then the impact on privacy is even higher.

Full disclosure: I recently finished reading "The Age of Surveillance Capitalism." While I understood the gist of the situation, the book shifted my paranoia even further.


Well, presumably 100% of Chromebooks are sending student data to Google; whether or not that qualifies as "third parties" and/or "consent" is debatable.


Person in US school system here. I’ve had to use Chromebooks for school, and all I can think about is what Richard Stallman would do if he were in the US school system right now.


I told him about the policies in part of a local school district. He politely told me that, if he were a student, it would make him hate the system, even though the rules are very reasonable.


Or maybe the rules aren’t actually “reasonable”.


FERPA has no teeth. :(


Only 60%?

I would've expected this to be closer to 100%, honestly


<off topic>

I see a lot of sentences like this with the wrong prepositions. It should be "sending ... to" third parties, not "with" them. I see this error far more often than other errors, and no one seems to object. "On accident" (it should be "by accident") seems to be common parlance now.

Are we changing how prepositions work?


This is why I switched to Supabase for my upcoming projects.


"Without consent" is misleading and implies that everything is ok if they asked for consent. In reality, students are effectively coerced into giving consent if they want to graduate.

When I went to university a few years ago, no alternative was offered for almost any of the programs we used, even third-party ones: online submissions and gradebook (hosted Canvas), anti-plagiarism (usually Turnitin), etc. These were all critical or outright required services I had to agree to. Maybe if I had made enough of a fuss I could have gotten an exemption, but there was no easy way.

I'd argue that in privacy-related legislation like the GDPR, students should be treated as incapable of consenting (to services used with their school), similar to how children are incapable of consenting to sex.


Let me add to the rant. I complained to the Swedish Data Protection Authority (IMY) twice about such school apps. Their answers were consistently "we don't have the resources to pursue this case".

With such a weak regulatory environment, the result is fairly expected. The GDPR and data protection are not just a Google and Facebook thing. My privacy needs to be respected by both big and small players.


Turn to the European Ombudsman and start forcing your government to work.

https://www.ombudsman.europa.eu/en/publication/en/3510


Only 60%? That surprises me.


What's the recourse for the student here? Submit your data to third parties or fail? Inevitable lawsuit fuel?


Under FERPA, you have the right to contact your school and tell them to delist your directory information, or to submit pseudonymous information instead.

However, data that is not considered PII isn't an issue, so even if you choose to anonymize yourself, they can still use Google Analytics to track your behavior and remain compliant.
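To make that concrete, here is a minimal sketch (Python; the endpoint and field names are entirely hypothetical, and real SDKs such as Firebase or Google Analytics differ in detail) of the kind of "non-PII" beacon an app can send while staying compliant: no name or student ID, just a stable pseudonymous identifier plus behavioral events.

    # Hypothetical illustration only: the endpoint and field names are made up.
    # The point is that nothing here is "PII", yet the stable device_id lets a
    # third party stitch every event from one student's phone into a profile.
    import json
    import urllib.request
    import uuid

    DEVICE_ID = str(uuid.uuid4())  # random install ID, generated once and persisted

    def send_event(event_name: str, properties: dict) -> None:
        payload = {
            "device_id": DEVICE_ID,    # pseudonymous, so not "PII" in FERPA's framing
            "event": event_name,       # e.g. "viewed_grades"
            "properties": properties,  # screen names, timestamps, course labels...
        }
        req = urllib.request.Request(
            "https://analytics.example.com/collect",  # hypothetical collector
            data=json.dumps(payload).encode(),
            headers={"Content-Type": "application/json"},
        )
        urllib.request.urlopen(req)

    send_event("opened_app", {"hour": 7})
    send_event("viewed_grades", {"course": "algebra"})

Over a school year those events describe one student in considerable detail, even though no single field is personally identifiable on its own.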


Raising awareness and trying to get policy changed? Civil disobedience? If you have even a small percentage of the student body, you have more than enough to effect change.

Basically the same options citizens have, but at a much smaller scale, meaning individual contributions matter more.


Students are broke, so no lawsuit fuel.


Municipalities are insured for lawsuits, and have the power of taxation; they are an attractive target for many lawyers.



End-to-end encryption doesn't help when it's the app (i.e. one of the ends) that is sending the data away.


That just needs bolder thinking about what represents the 'end'.

If the phrase "end-to-end encryption" means strictly from network interface to network interface, without regard to application layer concerns, then part of the conversation is already missing.


Between the NIC and the human is a massive stack that might be called the user agent (using that phrase way more broadly than just "browser" here) and so long as the human can audit all of it, it's reasonable to let all of it access the unencrypted data in an E2E scheme. But we know that's not merely unreasonable for the average user, but truly impossible for even the expert user, thanks to stuff like Intel Management Engine / AMD Secure Technology. Therefore the E2E scheme should attempt to keep the unencrypted data away from as much of the user agent stack as possible, perhaps ideally confining it to HID components.

But then all you've got is unaltered human speech flowing from one human to another, with no data processing to chop it up, calculate on it, or apply conditional logic to it. Nothing! It's quite a conundrum.
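To make the point at the top of this subthread concrete, here is a minimal sketch (Python, using the `cryptography` package's Fernet; the "leak" call is a stand-in for a telemetry SDK, and the shared key is assumed to have been agreed out of band) of an "end-to-end encrypted" client that still hands the plaintext to a third party, because the application is itself one of the ends:

    # Sketch only: E2E encryption protects the wire, not a misbehaving endpoint.
    from cryptography.fernet import Fernet

    shared_key = Fernet.generate_key()  # assume this was agreed via some key exchange
    alice = Fernet(shared_key)
    bob = Fernet(shared_key)

    def leak_to_third_party(plaintext: bytes) -> None:
        # Stand-in for an analytics/telemetry SDK call compiled into the app.
        print("sent to third party:", plaintext.decode())

    def alice_send(message: str) -> bytes:
        plaintext = message.encode()
        leak_to_third_party(plaintext)   # the "end" betrays the user before encrypting
        return alice.encrypt(plaintext)  # the ciphertext on the wire stays unreadable

    ciphertext = alice_send("my grades are in")
    print(bob.decrypt(ciphertext).decode())  # Bob reads it fine; so did the third party

Auditing the wire would show a perfectly sound encryption scheme; the leak lives entirely inside the user-agent stack described above.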


And in no case has any student been harmed by it. In general people on HN waaaay overestimate the danger of banal data sharing.


> And in no case has any student been harmed by it.

How can you possibly make such an assessment?


Same way US law does. Absent specific laws, to have a claim you have to demonstrate tangible harm in some way. The burden of proof is on the claimant. And in 95% of the cases HN complains about, it's impossible to find harm.


> Same way US law does. Absent specific laws, to have a claim you have to demonstrate tangible harm in some way. The burden of proof is on the claimant. And in 95% of the cases HN complains about, it's impossible to find harm.

I don't think you have any idea whether any of these students has even tried to demonstrate harm; but even if you do know that, the inability to prove harm is a legal standard, which is a far different thing from no harm having been done.

(I just recently read Rothstein's The Color of Law, which makes clear, for example, the situation in which people of colour find themselves today: I think any reasonable person would agree that they have clearly been harmed by long-term racist policy-making in the US, yet there is no one person who meets the legal standard of having done them harm, and so no one from whom they can seek legal redress.)



This doc literally just says it's bad because it's bad; I don't even see an argument of harm here.



