
He's also an avid anti-vaxer and covid conspiracist. Does it matter? Not sure. I will personally not touch anything he is helming (brave).


Unlike you and your smears, I make my positions clear and cite my sources for them.

If you mean by "anti-vaxer (sic)" my opposition to the Covid shots and mandates, then so be it. Many, including me, who have had older vaccines, especially from before the 1986 US liability shield and subsequent problems, are "anti-vax". Even though we still vaccinate our children.

"Covid conspiracist" must mean I cited lab leak possibility and reasons for considering it. Now that federal agencies agree, you should reconsider this lame smear attempt.

Your use of (misspelled) "spell words" (Roger Scruton's phrase) to curse me marks you as superstitious and thoughtless. Do better!




He's not helming JavaScript in any way, shape, or form, nor even contributing to the standard, to my knowledge.

> Stop branding not willing to take a drug that did not go through standard vaccine approval process as anti-vaxx.

That's not why he's anti-vax, I never said that.


> He's not helming JavaScript in any way, shape, or form, nor even contributing to the standard, to my knowledge.

Brendan may not be helming JavaScript anymore, but he was very active during the critical standardization period. For example:

2009: https://www.youtube.com/watch?v=eUtsgUrF-ec

2011: https://brendaneich.com/2011/08/my-txjs-talk-twitter-remix/

2012: https://brendaneich.com/2012/10/harmony-of-dreams-come-true/

Also note that Asm.js (2013 precursor to WebAssembly) was developed during his term at Mozilla.


> Even the infotainment system, which a blind person might want to use, for example when waiting for a sighted acquaintance in the car, does not have a screen reader and is not in any way usable.

It has really excellent voice commands for pretty much any function, though. Sadly, it can only be triggered by pressing the right scroll wheel on the steering wheel. While it's possible to just reach over, that's probably not optimal for your suggested use case.


> (On a side note, Bing chat already knows now that she won the prize. Color me impressed.)

It actually doesn't. Bing searches for your query and uses plain old search results as extra context for the actual LLM. GPT-4 still has the same knowledge cutoff as when the model was last trained.

Here's what it feeds to the model when searching for "nobel prize in physics 2023":

https://pastebin.com/raw/MhW4EmTx
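As a rough sketch of how this kind of retrieval-augmented setup works (function and field names here are invented, not Bing's actual internals), the search snippets are simply pasted into the prompt as extra context; the model's weights and knowledge cutoff are unchanged:

```python
def build_search_prompt(search_results, question):
    # Hypothetical sketch: search snippets become numbered context lines
    # that are prepended to the user's question. The LLM just reads them.
    context = "\n".join(
        f"[{i}] {r['title']}: {r['snippet']}"
        for i, r in enumerate(search_results, start=1)
    )
    return (
        "Answer using the web results below; they may describe events "
        "after your training cutoff.\n\n"
        f"{context}\n\nQuestion: {question}"
    )
```

The pastebin dump linked above is exactly this kind of context block, just in Bing's own format.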


How are you getting this? Is it visible to the browser (client)?


Yes, there's a websocket that contains all bing chat communication.


which?


wss://sydney.bing.com/sydney/ChatHub

This is using Chrome.


Here's a chart per capita, which is often more interesting:

https://ourworldindata.org/grapher/solar-electricity-per-cap...


A bit misleading not to include the top two (Australia and the Netherlands):

https://ourworldindata.org/grapher/solar-electricity-per-cap...


That's generation per capita, not installed capacity per capita.


You can just use LD_PRELOAD to load your own version of ptrace. Not as stealthy though.


Airlock isn't using anything, as far as I can tell. They're warning their customers that if they're running old binaries signed with the revoked key, Airlock or Windows (unsure which) will now complain about it.

> Over the coming months Airlock Digital customers may notice an elevated occurrence of files reporting ‘(invalid certificate chains)’ over the coming months, for software that was signed between 2006 – 2017 with revoked certificate chain.

As Airlock seems to be software intended to allowlist the execution of binaries, it would make sense that they pick up on the user running binaries signed with revoked certs.


> Taking measurements on the "12V" line and seeing voltage dips as low as 2V. Watching this voltage-dip propagate over to the 1.65V reference voltage used by the TI and STM32 chips's ADC.

Did we read the same paper? He takes no measurements; everything is based on theoretical assumptions, schematics, and the three YouTube videos he mentions at the beginning of the paper.

From the paper:

> The enclosed paper proposes a simple test that can be done to prove or disprove the explanation provided. This test has not been done by the author because of the cost involved with acquiring the needed Tesla Model 3 inverter PWB.

> The details of Tesla's Model 3 inverter design as revealed by Irish engineer Damien Maguire have been presented. These details were used to construct *a hypothetical model* of all hardware and software operations performed on the two accelerator position sensor (APP) sensor signals inside the inverter as they pass from the APP sensor to the electric motor controller.

He doesn't have access to the parts in question nor the car in question.

Also of interest are the author's own comments on his previous papers about sudden unintended acceleration:

https://www.autosafety.org/wp-content/uploads/2022/01/Note-t...


Now that we have another Nix post, maybe someone can enlighten me about something I've been wondering about.

I'm one of the maintainers of a popular Django application. Someone made a Nix package of the project, but we've now twice gotten invalid bug reports from people using the package, because the package depends on "django_4" and whenever someone updates that Nix package, the package for our project breaks.

Of course we, like all other Python projects, don't support using dependency versions other than the ones in the requirements.txt file. So when someone just uses a different minor version of Django, stuff breaks. What's the disconnect here? Why do all Nix packages that use django_4 need to use the same version? That seems super prone to breaking all kinds of stuff. Same for the other 35+ dependencies that run on arbitrary versions instead of the ones defined in the requirements.txt file.
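As an illustration of the failure mode, here's a minimal, hypothetical startup guard that would surface such version drift as an explicit error instead of a confusing downstream bug report (package names and pinned versions are made up):

```python
from importlib.metadata import PackageNotFoundError, version

# Hypothetical pins; a real guard would parse them from requirements.txt.
PINNED = {"django": "4.2.7"}

def check_pins(pins):
    """Return a list of human-readable mismatches between pins and the
    installed environment; an empty list means everything matches."""
    problems = []
    for name, wanted in pins.items():
        try:
            installed = version(name)
        except PackageNotFoundError:
            problems.append(f"{name} is not installed")
            continue
        if installed != wanted:
            problems.append(f"{name}: pinned {wanted}, installed {installed}")
    return problems
```

A check like this run at application startup turns "mysterious breakage after a nixpkgs update" into an actionable error message pointing at the packaging, not the application.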


I am not an expert, but here’s my attempt at a useful comment.

On the highest level, `nix` is an alternative build system. So, if someone packages your app with `nix`, there’s now extra work to keep that working, and it’s on the packager to keep it working. If they packaged your app such that it’s using different dependencies than those required, that’s a bug in the package. As a maintainer, you can help here by making it clearer what versions are accepted, and by making it easier to run the tests for a package.

If we open a black box, there are two things in play here: Nix-the-build-system and nixpkgs package collection.

The build system is very open ended and can specify all dependencies precisely, but it’s on the user to define what that means exactly.

nixpkgs is a coherent collection of nix packages, a bit like a Linux distro. In particular, it _generally_ has one version of each package, and there’s some testing to make sure that all the packages work together.

Now, to package a Python app with Nix you can either pull dependencies from nixpkgs, in which case the situation would be similar to, eg, packaging for Debian.

Or you could create a hermetic environment, where an app gets an isolated copy of dependencies, specific just to the single app, a situation similar to using virtual env.

It sounds like what happened here is that your app got packaged in the first way, but actually it can work only in the second way. I assume you do specify a specific compatible version of Django somewhere, and if a package (be it .deb, .rpm, or .nix) doesn't respect that, that's a bug in the package.

Hope this helps!


> packaging for Debian.

Not really; Nix is way more flexible and more up to date, Nix also often runs tests, and different Pythons can't interfere with each other that easily. On a high level things are similar, but the details are vastly different.

> Or you could create a hermetic environment, where an app gets an isolated copy of dependencies, specific just to the single app, a situation similar to using virtual env.

That could also be done with Nix, but often isn't, because upstream pin quality is often lacking.


It seems like Nixpkgs aims to minimize the number of package versions in use at one time. Not just Nix; most package managers do, it seems (e.g. you wouldn't expect to find different minor versions of Nginx in Debian, would you?)

So by that same logic, there is only one version of Django 4.

It is definitely possible with Nix to use the precise versions of what's in your requirements.txt, but I'm not sure if the Nixpkgs maintainers would allow all that extra duplication upstream.


I packaged some Python applications in Nixpkgs, and it seems the consensus is to try to relax the dependency so that the globally packaged version is used, but if that fails then you can override the version yourself. Though this is not done through the requirements.txt, because that file does not have enough information (no integrity hash, for example).


I get what you are saying, but nothing you said works in practice for python packages, so not sure that I actually learned anything.

Is it fair to surmise that Python applications with Python dependencies do not really work well as Nix packages and shouldn't be used?


> but nothing you said works in practice for python packages

How do transitive dependencies in the Python ecosystem work, then? I assume Django works with multiple versions of python and bcrypt. I assume pandas works with multiple versions of scipy. Is there no semantic versioning? If everything requires an exact version, how do you prevent everything from grinding to a halt?

> Is it fair to surmise that Python applications with Python dependencies do not really work well as Nix packages and shouldn't be used?

Let's not conflate Nix and Nixpkgs. Nixpkgs has its reasons for minimizing redundant packages, however it is certainly possible to package your app with Nix and use the exact specified dependencies.
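As a concrete illustration of the version-range question above, here is a pure-Python sketch of the ">=lower,<upper" check that most requirements.txt pins express (the Django numbers in the comments are just examples; real resolvers also handle prereleases, which this sketch ignores):

```python
def parse(v):
    # "4.2.7" -> (4, 2, 7); only plain numeric versions are modeled here
    return tuple(int(part) for part in v.split("."))

def in_range(installed, lower, upper):
    # Models a pin like "Django>=4.2,<4.3": the installed version must be
    # at least `lower` and strictly below `upper`. Tuple comparison gives
    # the right ordering for free.
    return parse(lower) <= parse(installed) < parse(upper)
```

With a pin this narrow, every installed version within one feature release passes, while the next feature release is rejected, which is exactly the guarantee that breaks when a distribution swaps in a different Django 4.x.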


> How do transitive dependencies in the Python ecosystem work, then?

Not very well.

> how do you prevent everything from grinding to a halt?

I don't have a good answer for you.

> Is there no semantic versioning?

You can read Django's release process here [1]; not sure how it's relevant, though. I'm not the maintainer of Django, but of a project using Django. Would it be better if all software were perfect, had no bugs, and used perfect semantic versioning? Yes, I would say so. Is that a requirement for using Nixpkgs?

> Nixpkgs has its reasons for minimizing redundant packages, however it is certainly possible to package your app with Nix and use the exact specified dependencies.

I'm not packaging it, someone else is, it breaks and they come to the project to raise invalid bug reports.

[1] https://docs.djangoproject.com/en/dev/internals/release-proc...


> not sure how it's relevant.

Well you said earlier that nothing I said works in practice for python packages. My only point is that it must work at some level in the python ecosystem, else the ecosystem would collapse.

Anyways, it sounds like you're unhappy that someone did a bad job packaging your application. That sucks. Elsewhere in this thread someone mentioned that there isn't a strict single version policy in nixpkgs, so this can probably be easily fixed. I'd suggest filing a bug in Nixpkgs.


> Elsewhere in this thread someone mentioned that there isn't a strict single version policy in nixpkgs, so this can probably be easily fixed. I'd suggest filing a bug in Nixpkgs.

There isn't one, but we don't collect multiple package versions without reason, and since Python itself can't handle multiple versions of a package well, they are only allowed outside of pythonPackages, where all end-user applications should live.


In this case I think it is important to distinguish nix (the package manager) and nixpkgs (the popular package repository / distribution used with nix).

Packaging python applications with nix is doable, but you have to specify the exact versions of your dependencies and for that you can't easily use nixpkgs.

Nixpkgs tries to keep a minimum number of packages (like Arch or Debian as well), so each of the dependencies will typically only occur with one minor version for each release of nixpkgs.

We could still use nixpkgs to build our application, but we would have to override each of our dependencies to the right version, and that approach can get quite tedious for a large number of dependencies.

Fortunately there are tools to automatically generate your dependencies from a requirements.txt such as mach-nix or pip2nix.


What's the point of a minor version change if it's breaking? Does Django not have a versioning policy that enforces non-breaking changes between minor versions?


That's my fault for writing minor, as that's what it would be in semver. I should have written feature release.

You can read the release process here.

https://docs.djangoproject.com/en/dev/internals/release-proc...


> python dependencies do not really work well

Yes, that is exactly correct.


No, that is not a fair summary; Nix is the nicest way to manage Python packages that I have found thus far.


I'm assuming you try to keep all dependencies on the nixpkgs version?


That's not really necessary. Pulling in arbitrary versions of packages from PyPI is fairly easy, if a bit verbose.


For packages that I need to pin to specific versions, that's also easy to do with Nix.


> Is it fair to surmise that Python applications with Python dependencies do not really work well as Nix packages and shouldn't be used?

No, applications that are properly maintained work as they should and this can be ensured with tests and e2e tests.


This is such a condescending attitude. What you mean is applications that are maintained the way that you and the Nix developers think an application should be maintained.

It's incredibly naive for a package manager as ambitious as Nix to assume semver. I'm a big fan of semver myself, but the vast majority of software projects follow it imperfectly or not at all, and for good reason—it's nearly impossible to follow it perfectly, because even bugs are part of your API. Every project I've worked on has eventually had something break on a version upgrade because we were depending on something that was later decided to be a bug (but at the time was just how it worked).

Elm can mostly get away with enforcing semver because they designed it that way at the language level, but Nix wants to manage dependencies written in all languages and ecosystems, which have dramatically different versioning practices.


I think the thrust of the parent comment was more that the test coverage of this package isn't good, not that semver must be followed.


Ah, fair enough, I misunderstood. I thought the tests they were recommending were tests to ensure backwards compatibility between version bumps; I didn't realize they were talking about the downstream package's tests.

I still disagree with the insinuation that it's everyone else who's screwing up and if we all did things the way Nix wants us to then Nix would actually work just fine. That's just another way of saying Nix doesn't work in the real world.


Yep, I agree that the tone was bad.

It is unfortunate that some in the nix community come off that way, because I would say that in general Nix goes to great lengths to adapt to the world as it is. Especially compared to, say, Bazel.

I myself have been using nix in an org that is blissfully unaware of nix for about 2 years, if that's any indication of how adaptable it can be.


You're totally right. If you're a package maintainer and you find out some package is misbehaving even though all of its included tests pass, it might kinda make you feel like kicking the thing and calling it junk.

But we should recognize that some of what drives that is just defensiveness, and some is personal frustration. At the end of the day, Nix and Nixpkgs are for letting people run useful software more or less as it exists. It's not just for users or developers of perfectly tested, bug-free software. (Nix itself is certainly neither of those things, and neither is Nixpkgs!)


Nixpkgs does not assume semver; that's why we run the package's own tests where possible, run our tests, and build dependent packages, to make sure the most obvious breakages are noticed before things are even merged.


Ah, I thought you were saying that if we all just used e2e tests to ensure we didn't make breaking changes in minor versions, we'd be fine. I didn't realize you were talking about the downstream package's tests.

I do still take issue with your insinuation that it's the package maintainers' poor practices that are at fault here. The real world is a messy, complex place and "best practices" don't translate well from situation to situation.

OP didn't ask for their package to be included in Nix. Presumably OP's system works for them and for their use case, but whoever created the Nix package made assumptions that turned out to be flawed. It's not fair of you to say that those bad assumptions are OP's fault because their package isn't "properly maintained" and doesn't "work as it should".

Someone (you?) made a bad assumption. Don't cast blame for that on someone who only knows Nix exists because it sends phony bug reports their way.


Sounds like the problem is with Python maintainers who don’t understand that breaking changes should only be made between major versions.

If that’s not possible though then as sibling comment said - you can override the dependencies and the nix maintainer should make sure the package works as expected


Sounds like the problem could also be with Nix maintainers who don't understand that "semver" is not a universal law of nature and that not all projects and ecosystems follow it. This kind of blanket dismissal can cut both ways.

Semver (the website and "spec") was created in 2009 by some guy. It's not an RFC, a standard, or anything like that. Yes, it gained widespread adoption. Yes, the guy in question is a cofounder of GitHub. So what? You cannot force it upon everyone. Python is about 20 years older than semver. Django is several years older. Should the whole ecosystem change their conventions because it's more convenient for a few people?


Except Django site says that a.b are feature releases which should be backwards compatible except for specific exceptions. If their software truly breaks “with every update to django_4” then it’s either a problem on Django’s side or a problem in how said person uses Django


I don't know if it's deliberate or a communication/comprehension problem, but you're misquoting Django's release process https://docs.djangoproject.com/en/dev/internals/release-proc...

> * Versions are numbered in the form A.B or A.B.C.

> * A.B is the feature release version number. Each version will be mostly backwards compatible with the previous release. Exceptions to this rule will be listed in the release notes.

> * C is the patch release version number, which is incremented for bugfix and security releases. These releases will be 100% backwards-compatible with the previous patch release. The only exception is when a security or data loss issue can’t be fixed without breaking backwards-compatibility. If this happens, the release notes will provide detailed upgrade instructions.

Going from "mostly backwards compatible with the previous release. Exceptions to this rule will be listed" to "should be backwards compatible except for specific exceptions" is quite the stretch. There are no "specific exceptions": incompatibilities can be anywhere and you need to read the release notes to know where. In semver, a minor version increment is backwards-compatible, no exception, no ifs or buts.

If you want to shoehorn Django's release process into "semver", then act as if the product is called "Django 4". If the version is "Django v4.X.Y", then X is the major version number, Y is the minor version number, and there is no patch version. It should be versioned in Nix as "django4 vX.Y.0".
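A tiny sketch of the renaming suggested above (purely illustrative; Nixpkgs' actual naming conventions may differ):

```python
def nix_name_and_version(django_version):
    # "Django A.X.Y" becomes package "djangoA" at semver "X.Y.0": the
    # breaking feature number X moves into the semver major slot, and the
    # product number A is folded into the package name, much like the
    # existing "django_4" attribute in nixpkgs.
    a, x, *rest = django_version.split(".")
    y = rest[0] if rest else "0"
    return f"django{a}", f"{x}.{y}.0"
```

Under this mapping an upgrade from 4.1 to 4.2 is a semver major bump, which is exactly how consumers should treat it.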


They clearly say “exceptions to this rule will be listed in the release notes” meaning that backwards compatibility is the rule. There’d be no exceptions if there was no rule hence I said they “should” be backwards compatible except for specific exceptions, which shall be noted in the release notes.


Not sure how this conversation is productive, but there's never been an X.Y release of Django without noted backwards-incompatible changes, to my knowledge. Just imagine that Django's X.Y releases are semver's major releases; not much more to it than that.


It also very clearly states that there may be exceptions to the rule. So a package repository that assumes that Django follows semver is unequivocally doing the wrong thing, because Django is very clear that they don't (otherwise there would be no exceptions).


I've used Nix for 2 years and never thought about or relied on semver. Nix is exactly what allows me not to treat semver as something that really exists or works.


Doesn't this auto-upgrade behavior punch straight through the reproducibility Nix is supposed to be giving you? It's not exactly a functional build system if the results you get depend on when you download the dependencies.

(I mean, I guess you could say that time is an input to the function, but that seems to miss the point.)


What do you mean by auto-upgrade behavior?

If the Django package in nix were upgraded, all packages that use it would be tested.

And you wouldn't get the upgrade automatically, instead you would only get the upgrade when you change the version of Nixpkgs that you are using.

And if you don't like that, then you can use multiple versions of Nixpkgs at the same time. Your old package will stay exactly as it was. This of course cuts both ways, and means you get no security updates for it or any of its transitive dependencies.

Which part of this isn't reproducible or functional? If nixpkgs never changed, it wouldn't be a very good package repository.


Using Flakes, you can lock the version of nixpkgs (and any other repository) to a certain commit, and that commit is an input to the function. When you update that commit, of course the build changes, but I'd say that's pretty expected. If you don't upgrade it, you'll keep the prior versions.

Now this only works as long as you keep your package outside of the main nixpkgs repository; once you upstream it, you're locked into the versions of packages that are "currently" in nixpkgs at the same commit. Builds are still reproducible, because you select the commit you build, but your package might break if a dependency changes in an incompatible way. If that happens, there's a problem with either the definition of the application or the dependency. In the given case, it sounds like there might be an issue with the package of the application, since it seems it doesn't lock down the precise version of Django that it needs.


You can think of the function inputs as:

1. All the package definitions in nixpkgs

2. Any external sources

When a package is updated in nixpkgs, input #1 changes.


I mean, I get that, but that means that the reproducibility of my build depends on the whims of the nixpkgs maintainers, it's not a property guaranteed by the package manager.


You can however define inputs that are not the whole of nixpkgs. You would use something like this and you would pin it to a very exact version and hash of a package:

https://nixos.org/guides/nix-pills/nixpkgs-overriding-packag...


The goal of a downstream Linux distribution is never to reproduce whatever builds you run on your own machine as an upstream developer. It's to produce a collection of installable software that meets various constraints and goals, like cohesion (can all be installed and managed uniformly), minimal size, easy/manageable security updates, integration (compatibility and so on). That can involve things like building the software against particular library versions mandated by downstream needs or even patching it. Some distros try hard to avoid patching upstream and some don't, and in all distros there may be cases where other priorities take precedence over the value of leaving upstream untouched.

In the case of Nixpkgs and Python, the community wants to maintain a collection of Python libraries that are all interoperable, and Python doesn't support vendorization well enough to allow multiple versions of the same library in a single Python process, which is one reason for preferring singular versions of most Python libraries in Nixpkgs. The other factor is likely just reducing the maintenance across Nixpkgs by maintaining as few redundant versions within the tree as possible.

If you want to control/determine the entire runtime your end users use, you have to do the packaging work required to ship them that runtime, with tooling that's capable of the reproducibility you desire. Python doesn't have a reproducible package manager, so your options are basically creating your own Nix package (probably as a flake.nix in your repo), Docker, and Flatpak.

That said, it's perfectly possible to include multiple minor releases of Django 4 in a single snapshot of the Nixpkgs tree, and maybe that should be done. Have you talked with the maintainers of your downstream package in Nixpkgs to let them know that Django breaks things on minor releases, and so using different versions of Django 4 interchangeably is not tested or supported in your application?


You can pin the commit of nixpkgs. Why should reproducibility hold when nixpkgs changes?


I think there's a bit of confusion caused by equating Nix "derivations" with "packages" of traditional package managers.

Nix mainly concerns itself with derivations [1]. They're build recipes for creating binary artifacts that are meant to be consumed by the Nix daemon. The Nix daemon instantiates derivations by building the artifact and storing it to a store path under /nix/store. Store paths are unique to each derivation.

When people say Nix is reproducible, they mean that derivations are reproducible [2]. This is because anything that might cause the build to change is captured as inputs to the derivation. Every input is explicitly specified by the author of the derivation. This means that when a dependency gets updated, the resulting derivation and store path would change. The new derivation might fail to build, but the old one would still continue to build regardless of how much time has passed since it was first built. So if the latest package in Nixpkgs is broken, you can always go back to a known good commit to get a working derivation while waiting for the package maintainer to fix it [3].

Traditional package managers don't have a concept of a derivation. Instead, they have packages. Those packages have no reproducibility whatsoever. Even if they built successfully in the past, they might not build today. That's because a traditional package is only identified by its name and version, as opposed to a Nix derivation which is identified by its content (= the build recipe) [4]. Traditional package managers see two incompatible builds with the same name and version as the same package, replaceable with each other. Worse, most package managers don't require versions to be specified as part of dependencies. Whether a package builds or not is then dependent on the current state of the central package repository. Again, this isn't the case with Nix derivations.

[1]: Internally, Nix doesn't even have the concept of a package. A package is a concept that we humans use to group related derivations together.

[2]: To be clear, derivations aren't bit-by-bit reproducible. For example, CPU caches would be observable during builds because in general, process sandboxes don't prevent hardware information leakage. However, it's reproducible in a practical sense because people would have to go out of their way to make software builds dependent on things like CPU state. People might do that as a joke, but not for any serious reason.

[3]: Ideally, tests and reviews should catch any breakage but sometimes it happens. Hence the rolling release branch is marked "unstable." Fortunately, it's also easy to apply fixes locally before they're available in Nixpkgs because Nix makes it straightforward to create a custom derivation by extending existing ones.

[4]: Not to be confused with content addressed derivations, which identifies derivations by the resulting binary artifact.
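As a toy illustration of the input-hashing idea described above (the hashing scheme here is deliberately simplified and is not Nix's actual algorithm):

```python
import hashlib
import json

def store_path(recipe):
    # Toy model of a derivation: the output path is derived from a hash of
    # *all* build inputs, so changing any input (a dependency version, a
    # source URL, a build flag) yields a brand-new path, while the old
    # derivation's path and output remain untouched.
    digest = hashlib.sha256(
        json.dumps(recipe, sort_keys=True).encode()
    ).hexdigest()[:32]
    return f"/nix/store/{digest}-{recipe['name']}"
```

Two recipes that differ only in a dependency version land at different store paths, which is why an old, known-good derivation keeps building even after the package repository moves on.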


For [2], there is a sandbox from Facebook that isolates tests (and builds) from CPU non-determinism. I have raised a ticket on Nix; really, it is just another derivation sandbox.


> It is definitely possible with Nix to use the precise versions of what's in your requirements.txt, but I'm not sure if the Nixpkgs maintainers would allow all that extra duplication upstream.

They do for end user applications, but not for Python libraries. The libraries in Nixpkgs are expected to be interoperable, which requires converging certain versions because otherwise transitive dependencies on varying library versions mean that libraries used together are subject to serious, mysterious bugs. But applications packaged in Nixpkgs can pull in an exact set of libraries of their own if that's what it takes for them to run reliably.


It sounds like the package is implemented improperly. If the input from your repo to the package is not targetting a specific commit, it should be.

Building from "latest" is really not how nix is ever meant to operate. In that case, when you update your requirements.txt, it is now out of sync with the package definition; the inputs _have_ changed and your guarantees are gone.

When your project repo is updated, that should never result in a change to what gets installed by nixpkgs until you also update the package to point at that commit and do any work necessary to fix breaking changes. Once you do that work, that version of your package picks up a guarantee that it can always be reproduced.

Like another comment mentioned, this is all much easier to accomplish with flakes as they have a lockfile that sits next to the flake, both of which reside in your repo and can be updated atomically with your releases instead of also needing to make a PR for nixpkgs.

I've actually been working on learning how to better package python with nix and found the historical information on python packaging infrastructure in this talk incredibly enlightening (I think this landed on HN a few days back): https://www.youtube.com/watch?v=ADSM4vR2EQ0


The answer today would probably be to use flakes: https://nixos.wiki/wiki/Flakes


They don't need to use the same version of the Django package, but Python dependency pins are often either way too tight (and can easily be expanded) or outright missing, so they often get ignored.


Are you opposed to filing a bug in Nixpkgs for your application? Alternatively, are you willing to point to your application or its package in Nixpkgs so that someone else can do so?


> Of course we, like all other python projects, don't support using other dependency versions then the ones in the requirements.txt file.

That's really bad. You should always support reasonable version ranges.

> when someone just uses a different minor version of django, stuff breaks

That's why some people say that managing dependencies in Python is difficult and move to statically compiled languages.


> That's why some people say that managing dependencies in Python is difficult and move to statically compiled languages.

Yes, completely agreeing with that.


It's not even really about static compilation. NixOS (and many Linux distros) include tons of dynamically linked C applications that just do a way, way better job of compatibility. Imagine if GNU grep were as fussy about only being built against 1 version of glibc as many Python libraries and applications seem to be about their dependencies.


I don't.


What makes you think that the versions specified in the requirements.txt aren't reasonable ranges? All OP is saying is that if you're outside the version ranges in requirements.txt then you're outside the supported range. It's literally in the name of the file—requirements.


> What makes you think that the versions specified in the requirements.txt aren't reasonable ranges?

Because that's what the parent wrote.


For Rust or Haskell, I point Nix at my version, lock, and toolchain files, and it uses the exact tools and versions the default build uses, but built with Nix.

I hope such a solution exists for Python.


nixpkgs doesn't use requirements.txt for whatever reason.

(That reason probably being the utter brokenness and braindead state of Python packaging; Node packages work much better.)


> Node packages work much better

Are you sure about that? I haven't seen a node app built from source on nixpkgs yet. That includes Electron apps like Signal Desktop, which is a bit disappointing.

There is this article about trying to package jQuery on Guix:

http://dustycloud.org/blog/javascript-packaging-dystopia/


Grep nixpkgs for `buildNpmPackage`, it's ridiculously easy to package a node app nowadays.
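As a hedged illustration of the shape of such an expression (the owner, repo, and hashes below are placeholders; `lib.fakeHash` is a nixpkgs convenience for the first build attempt):

```nix
{ lib, buildNpmPackage, fetchFromGitHub }:

buildNpmPackage rec {
  pname = "some-node-app";
  version = "1.2.3";

  src = fetchFromGitHub {
    owner = "example";
    repo = pname;
    rev = "v${version}";
    hash = lib.fakeHash;  # replace with the real hash reported on first build
  };

  # Hash of the fetched node_modules, derived from package-lock.json.
  npmDepsHash = lib.fakeHash;
}
```

The key point is that the lockfile (package-lock.json) gives Nix a complete, hashable dependency closure, which is exactly what requirements.txt often fails to provide for Python.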


Yes, buildNpmPackage works great.


Guix has several different npm importers (none of them merged), but it's debatable whether it is desirable to build npm packages from source when doing so creates thousands of barely useful packages.


You can package simple Python projects, but as soon as there are too many huge dependencies that use CPython and whatnot, it becomes impossible to generate the Nix derivation. I just use imperative python-venv + pip install on those.


Take a look at Home Assistant to see a complex python app being packaged in Nix.

(disclaimer: it's still rough but it does work)

https://github.com/NixOS/nixpkgs/tree/master/pkgs/servers/ho...


It doesn't, but you need to ditch requirements.txt and just overridePythonPackage with the correct github revision hash.

It's a PITA but unlike pip and conda it's 100% reliable.
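Setting aside the exact name `overridePythonPackage`, one common pattern (a sketch with placeholder names and hashes; I'm assuming the standard `overridePythonAttrs` hook here) is an overlay that pins a single dependency to an exact commit:

```nix
# Nixpkgs overlay: pin one Python dependency to a specific revision.
final: prev: {
  python3 = prev.python3.override {
    packageOverrides = pyFinal: pyPrev: {
      somepkg = pyPrev.somepkg.overridePythonAttrs (old: {
        src = prev.fetchFromGitHub {
          owner = "example";
          repo = "somepkg";
          rev = "0123456789abcdef";  # the exact commit you depend on
          hash = prev.lib.fakeHash;  # fill in after the first build attempt
        };
      });
    };
  };
}
```

Unlike pip resolution, the result is content-addressed: if the hash matches, you get byte-identical sources every time.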


> I just use imperative python-venv + pip install on those.

The whole point of NixOS is to manage this and to get rid of those manual steps that are error prone.


I only do this on some work project that I don't touch often, as an escape hatch. Everything else is managed by nix.


Wasn't Nix supposed to solve these problems?


It does. Nix can package everything properly. What depends on the language ecosystem in question is how much of the packaging can be automated.

Python is not trivially automated with Nix.


And it does, for most languages. Python seems more difficult than average.


Yea, Python is the exception. Go, Rust, and Node.js have been easy to get running with specific versions and dev envs.


...and yet there's tons of Python packaged in traditional distributions including Django.

Nix promises to solve exactly this problem... so it's not clear what the real benefit of Nix is.

EDIT: a rain of silent downvotes?


There's plenty of python packaged in nixpkgs too. That doesn't mean it isn't a dumpster fire. Dealing with it has been trouble with every other distro I've used; it isn't just a nix problem. If anything, I think the situation is improved.


Nix hasn't been a benefit when working with python for me, but again, python is the outlier. It has been a benefit for projects in other languages.

I guess the reason is because python packaging/tooling varies wildly between projects, and there are a lot of bindings.

BTW a colleague was setting up the python project on a non-nix machine, and also had problems with dependencies, and ultimately had to do some nasty workarounds (disabling deps/features). To me, it seems endemic.


> Of course we, like all other python projects, don't support using other dependency versions than the ones in the requirements.txt file. So when someone just uses a different minor version of django, stuff breaks

That sounds wrong. A Python package should not have a requirements.txt file at all. A requirements.txt file is for "freezing" and fully reproducing an environment (i.e., in a virtualenv or docker container). This is useful for certain applications like deploying services or sharing notebooks etc. It is not for packages. A package should document its requirements via setup.py/pyproject.toml and do so in the loosest way possible. Django uses semver and Django apps don't generally need to pin to minor versions.
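To make the distinction concrete, here is a sketch of loose ranges declared in PEP 621 package metadata (the package name and bounds are made up for illustration):

```toml
# pyproject.toml fragment: package metadata declares ranges, not pins.
[project]
name = "myapp"
version = "0.1.0"
dependencies = [
    "Django>=4.2,<5.0",   # upper bound depends on how Django's feature releases are tracked
    "requests>=2.28",
]
```

Exact pins then belong in a separate lock artifact (a frozen requirements.txt, a Poetry lockfile, etc.) used only to reproduce a deployment environment.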

Stuff like this is why people think Python packaging is worse than it really is.


The application is not distributed via PyPI, nor is it installed as a package, and thus has no setup.py file.

> A requirements.txt file is for "freezing" and fully reproducing an environment (ie. in a virtualenv or docker container).

No, it's just for specifying which versions of packages should be installed by pip. There's no such concept of a lock file with pip. Poetry and the likes have lock files though.


> There's no such concept of a lock file with pip.

There's the --require-hashes flag and the ability to specify the hashes in your requirements.txt
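For illustration, a hash-checked requirements.txt looks roughly like this (the digests below are placeholders, not real hashes):

```text
# Install with: pip install --require-hashes -r requirements.txt
# In hash-checking mode, pip refuses any package whose archive digest
# doesn't match, and all versions must be exactly pinned.
Django==4.2.2 \
    --hash=sha256:0000000000000000000000000000000000000000000000000000000000000000
requests==2.31.0 \
    --hash=sha256:0000000000000000000000000000000000000000000000000000000000000000
```

Tools like pip-compile can generate these hashes automatically, which gets you most of the way to lockfile semantics with plain pip.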


Django doesn't use semver. It uses a Major.Feature.Patch release notation, not Major.Minor.Patch. Feature releases usually contain breaking changes, where SemVer minor releases never should.


With my ignorance of the python packaging ecosystem, I was always under the impression that requirements.txt was the version constraints, not the lock file.


I really think they are just misunderstanding something. When I look at my profile, there are no comments, but I know there should be some. Isn't it more likely that when a subreddit you posted in goes private, the comments don't show up? The people in the linked thread ran the delete script after some subreddits went private. Now that some are made public again, so are their comments: comments that were never deleted with the tool, because they were hidden while the subreddit was private.


I used Power Delete Suite to remove all comments from all public and private subs. They all came back today. So I re-deleted them again.

I suspect it's more likely the site didn't process the deletions properly, rather than maliciously bringing them back, or as you suggest, that they were private.

Edit: It's possible that the deletion only worked on public posts, after all, it seems?


I've got a decade-old account on which I've made a habit of manually deleting comments older than 6 - 9 months, since they get so little visibility and there's no value in leaving a breadcrumb trail.

Checking just now I see that comments up to 3 - 4 years old have been restored.

I'm not going to speculate as to why (beyond agreeing it's more likely to be incompetence than malice) but in my case at least there are definitely long deleted comments that have been restored.


I’m astounded and confused that you and a lot of other commenters are giving Reddit the benefit of the doubt. At a time when tensions between Reddit management and its users are at an all-time high, and both sides are maliciously striking back and forth, most commenters are assuming that an act like this is due to incompetence rather than malice? I don’t understand the thought process.


Innocent until proven guilty can be a decent philosophy even outside the courtroom. Especially when tensions are high. Both sides giving the other the benefit of the doubt can help to deescalate.

Unless the objective isn't to deescalate.


That's the thing: you are deemed innocent until proven guilty. This framework supposes a common search for truth and a willingness to work together. The current situation is not that: there is no accountability for reddit, no desire to be open or to listen to the users, contempt towards users for years...

If there's no due process, reddit doesn't deserve to be presumed innocent.


There are different thresholds in the legal system for criminal (beyond a reasonable doubt) versus civil (the preponderance of the evidence.) It already varies by venue and consequence. This is a social discussion. 4 year old comments being restored means Reddit is doing something to restore them now, which they didn’t do before.


Except in this case, we have a proven bad faith actor with a pension for deceit and vindictive behavior.


A penchant for*


Outside of court, the preponderance of evidence changes with a history of operating in bad faith.


> Unless the objective isn't to deescalate.

Oh, the sleighthanded irony.


Well Reddit's objective certainly isn't to deescalate.


It's also worth noting that reddit admins have maliciously modified comments in the past.


This one in particular, in fact - Steve Huffman, aka spez. Google "fuck spez" to see just how popular this guy is.

Same guy who un-personed Aaron Swartz, claiming he wasn't a cofounder after Swartz died, and removing him from Reddit's founder page.


> I’m astounded and confused that you and a lot of other commenters are giving Reddit the benefit of the doubt.

I think it makes sense for anyone who values objective truth above any other agenda. "Benefit of the doubt" is just acknowledging that we don't know for sure.


Because there's no apparent benefit to reddit to bring back long deleted comments from arbitrary users. When you can think of no motive for malicious behavior, it is unparsimonious to assume malicious behavior.


Of course there is a benefit. If users leave and remove their comments in protest, the data content available on Reddit is lowered, thereby lowering one of the IPO metrics. By un-deleting comments, the site's message count and user activity goes up, and thereby its IPO value.


It's probably more about search engine results than the appearance of user accounts.


Investors aren’t that dumb, and if they were, your theory would create liability for securities fraud.


> Investors aren’t that dumb

Maybe not individually, but this is an IPO and the P stands for Public, and when you aggregate everyone together then intelligence is a moot concept.


But that's not what's happening (only those protesting having their comments restored).


The point stands: more comments, more content, more value, higher exit price.


There’s a huge incentive for Reddit to retain the comments - that’s their knowledge base. Without that a lot of their value is gone.


Retain is not equivalent to undelete (making visible to everyone).


It is in this case. They don't "need" the data themselves, they need Googlebot to see it to get traffic, which is their lifeblood both in general and for IPO.

Have you heard the recent popular saying that Googling things barely even works anymore unless you append reddit to the query, which tends to bring up actual information instead of SEO trash?


Arguably the perception of the size of their knowledgebase is more important than the actual size, at least when talking about the upcoming IPO.


If it’s not visible to other users or paying LLMs then it’s worthless. That’s what I’m getting at.


Strong incentive also exists to undelete.


Do you think that choosing to believe it is malice somehow punishes Reddit?


you and a lot of other commenters are giving Reddit the benefit of the doubt

It's what normal people did before the internet.

People who didn't were known as lynch mobs, and were considered bad.

Thanks to the web, it's now perfectly normal to believe the worst of people for no better reason than to fuel one's own anger issues.


> Thanks to the web, it's now perfectly normal to believe the worst of people for no better reason than to fuel one's own anger issues.

I believe that in this case it's more that Reddit's management has completely lost any sort of trust. It's not so much about fueling one's own anger issues; given the current context, there's very little to suggest Reddit's management is trustworthy or transparent. Spez was caught lying in the open, so how can one still trust that they aren't lying in other respects?

Let's agree that this particular case is not a baseless witch hunt; Reddit's management's own dishonest actions have brought a dissatisfied lynch mob to them.


Isn't the fact that they're able to restore deleted comments from that far back itself an indication of malice, or at least irresponsibility? I could understand if it was comments from the past month, but after 3 years I'd expect the only remnant to be on very old backups if at all. The fact that they're visible again adds a lot of weight to the common suspicion that they're just setting a delete bit and keeping them in the live database

I do seem to recall that their database schema is mostly a big unstructured key-value table, so it's possible that this is part of the explanation - and they've never cleaned up any garbage/orphans in at least 3 years?


> Isn't the fact that they're able to restore deleted comments from that far back itself an indication of malice, or at least irresponsibility?

Meh. You're not exactly wrong but I think it's pretty common for user-generated content sites to follow a logical delete strategy. It holds open the door to being able to restore data deleted by end-user error, and within the bounds of their data retention policy keeps data around that may be useful for internal analysis.

Actually, come to think of it, it seems plausible that they only have ~3 years of logically deleted data, having purged deleted records older than that.

It's also plausible they had all the is-logically-deleted information in some Redis datastore that wasn't being reliably persisted to disk, and the process had to be restarted for the first time in 3 years.

I'm actually leaving my restored comments untouched for now out of curiosity about what they'll do about it now that the issue is known. I think that will probably answer the question about whether this was accidental or intentional.


So it looks more like a database restore of some data. I'm inclined to think it was a rollback of some sort - to fix something that needed more QA time.


This was more or less my working theory. It's not _all_ of my comments that have been restored, it's only my comments going back to 2020 (and I can't be sure that _all_ of the comments in that time range were restored either, but it looks pretty thorough).

I wouldn't put it past Reddit to restore old comments given sufficient motivation; I just have a hard time imagining how the cost/benefit analysis would say that this is a good idea at this specific point in time.

It seems plausible that with all the other churn going on at Reddit - and as others have noted a large number of people deleting comments and accounts and maybe subs - that they accidentally restored some data-store to the wrong snapshot or something.

I just don't understand how the difference between "we HAVE N million comments" and "N million comments HAVE BEEN posted" in some investment deck could be worth the risk to reputation and good will, not to mention potential GDPR violations or bad press from doxing stalking victims or whatever.

Someone else mentioned SEO as a possible motivation. I might buy that. If Reddit is losing PV and DAU and restoring a bunch of old content would offset some of that with organic search traffic, that seems like something they might do.


If they have done this to EU citizens, I am fairly sure they have broken the GDPR in some way or another.

For most users this isn't going to be a problem, but my guess is there's a rather big chance that for a number of the restored comments there were really good reasons why they were deleted, and now Reddit is responsible for them being online again.


> Checking just now

how did you check? profile page?


just hope they dont edit them too


I did the same a week ago, deleting fifteen years' worth of comments (several thousand) using Shreddit, by editing and then deleting. However, mine have not been restored, so I doubt this was some kind of broad action.

I very much doubt Reddit cares at all about the small number of us that have done this.


Me too… note that Shreddit first edits the comment, and then, deletes it.

Not sure if this would complicate a restore process by Reddit.


Apparently not. I deleted my comments with shreddit, editing first, and at least a large portion of them appear to have been restored to their original text.


Absolutely this. I highly doubt Reddit would want to fuck this up at this point for seemingly minimal gain. Much more likely that a 10x increase in deletions caused some pipeline to collapse somewhere.


As much as I hate to give Reddit the benefit of the doubt, I think you’re right that Hanlon’s razor may well apply here, albeit substituting incompetence for stupidity.


I think you meant to say "substituting stupidity" -- the new thing is the substitute for the old thing.


Hanlon’s Razor states: "Never attribute to malice that which is adequately explained by stupidity."

I’m saying Reddit is incompetent in this case, rather than stupid. Not quite sure what you’re getting at?


Oh, my mistake. For some reason I thought it stated incompetence.


No worries; I assumed you weren’t being malicious.


I can see this starting a positive feedback loop of issues. More people get upset; so more people start deleting -- cascading failures start occurring. Hopefully their team can keep it under control.

But would the current upset userbase even believe Reddit if they came out and said "our deletes aren't working right now, please try again later"?


That's impossible. You can't see comments you made in private subs, therefore you couldn't have deleted them.


Which is itself a problem IMO. Discord is the same, but at least they have a tiny excuse (being very charitable here) in that they'd have to add a new UI view for that, whereas Reddit already has the profile view where a user should be able to see all the comments they've made.


Hopefully the top organizers of the reddit strike consider arranging a "delete day" where all subs temporarily go public again for this purpose


Ok, r/funny just went public, and one of my comments re-appeared in my profile. So I am confident that what's happening is the deletion of comments isn't happening on subreddits that are private (as r/funny was when I ran my delete script). As soon as the subreddit goes public, they "re-appear"


Reading about Power Delete Suite https://github.com/j0be/PowerDeleteSuite

They don't mention being able to delete from private subs, and their method of deletion sounds like it would fail when the subs are set to private.

I'll admit Reddit barely deserves the benefit of the doubt at this point, but I have no idea how you would delete posts on private subs except through some GDPR way that must exist.


If I comment in a sub that has since been set to private, can I not see my own comments on my own profile page? If so, do those comments not have an edit/delete button under them?


Yes, comments you made in private subs don't show in your comment history. I recently made a browser extension that deletes your entire reddit history and ran into this while testing.

Edit: Adding the link: https://chrome.google.com/webstore/detail/bulk-delete-reddit...


I've been using Shreddit for years, and thus far, all my deleted comments and posts have not shown back up.


It could be because people are deleting their accounts in protest.


Use GDPR. Even if you aren't in the EU.


CCPA for some of us, and refer to the CA AG if the business does not adequately comply


Also important to note that this is an indicator that Reddit has lost all trust from (this part of) the community. Even if that is a plausible explanation, many will not give Reddit's current management the benefit of the doubt. I simply would not put it past Huffman to be petty enough to do it; he seems to be taking after Musk more and more lately. So much goodwill wasted. That will be tough to navigate for them, because once you have lost your user base's trust, even an untrue allegation will cause a stir.


The adage that “If you’re not paying for it, you’re the product” cuts both ways.

Musk and Huffman seem to be intent on waging war with their own product. They both get a lot of unpaid labour producing their content and then complain that the unpaid labour isn't paying for the privilege.


Huffman (spez) has his reasons for the reddit changes, but they do seem a bit short-sighted. But there comes a point where these online services have to make money. It's as simple as that. So is he waging war on users, or making a business decision?

Also, Musk is waging war against advertisers. Running a site that is controlled by advertisers is the epitome of extreme centralization, since the site's income can be halted if an advertiser gets upset. Musk is charging on Twitter so the users aren't the product.

As for reddit, I think limiting API access to accounts with reddit gold seems like it would've been fair. That would've solved the income issues (the stated reason for the API changes), but then reddit wouldn't get all the telemetry data associated with users on their first party app.

It seems like Huffman (spez) got greedy and wanted gold subscriptions, and the telemetry data from their first party app. It's usually one or the other (ads vs. user payment).

I also want to end this by saying that I'm not trying to start an argument, but I know a lot of users on this site are very trigger happy with the downvote whenever anyone speaks objectively about Musk. If you don't agree with me, just chime in and we can discuss it.


If it was indeed a business decision, then Huffman is breathtakingly incompetent. To the degree that, were the company public, I would be agitating for the board to remove him as his actions negatively impact my investment.

A number of ways this could have been handled better in no particular order:

1. Give more than 30 days notice to third party app developers.

2. Mandate that third-party clients display advertising as delivered by the API and return telemetry.

3. Keep the API changes but exempt paying subscribers.

4. Refrain from making bad-faith lies about the developer of the most popular application, which he then had to disprove using call recordings, and then after that, don't try to play off your actions as misunderstandings or mistakes.

5. Don't lie about deliverables for years and years to the point where the community memes on you for your history of lying.

6. Don't fuck around in the production database to edit comments critical of you.

7. Be a little forthright for once.


Even just a few concessions to legacy customers would have been fine. It's not like you need to completely give up on your new pricing model just because you have existing customers. It's crap.


8. Don't go on major news outlets and come across as being on crack.


> Musk is charging on Twitter so the users aren't the product.

I think the adage is a little wrong. In Twitter's case users are still part of the product, as the existing network effects and the ability to communicate with users are part of the product, and thus its users are too. Not to such a significant degree as when advertisers are sweeping up every bit of data about you, but still to some degree.


> In Twitter's case users are still part of the product as the existing network effects

This is a nitpick. I'm speaking from the POV of keeping the site running, or not resorting to changing the site's essence to please advertisers.


My point was more in reference to GP's comment: even on a paid service, waging war on users is waging war on your product. Though, after typing this out, it would seem to be even more so the case, since that's where your money is coming from.


Ah, that makes sense. Good point. +1


> But there comes a point where these online services have to make money.

Reddit is 18 years old, and you're telling me that they are just now thinking about making money? How come 4chan and Wikipedia are both profitable, but not Reddit? And how is it a problem with their users and not their management?


> How come 4chan and Wikipedia are both profitable, but not Reddit

reddit has 1.7 billion visits per month[5], with an astronomical amount of persistent storage, with the content never being deleted. reddit is ranked #18 globally.

4chan has 51 million visits per month[4], has very little persistent storage (posts are deleted once the thread slides to the bottom of the board list), and strict size limits for the posts that exist at any given time. 4chan is ranked #708 globally.

Wikipedia does get 4.7 billion monthly visits[3], but they do have a public list of large donors[1], and the entire wikipedia catalog can fit onto a 20gb microSD card [2]

So I can't give a solid answer, but it seems like the other 2 sites you mentioned have a slightly better design when it comes to infra costs.

1: https://wikimediafoundation.org/about/2018-annual-report/don...

2: https://meta.wikimedia.org/wiki/Data_dump_torrents#English_W...

3: https://www.similarweb.com/website/wikipedia.org/#overview

4: https://www.similarweb.com/website/4chan.org/#overview

5: https://www.similarweb.com/website/reddit.com/#overview


If reddit had stuck to what it is good at: threaded, in-depth text conversations and links, it wouldn't need such a large amount of storage or bandwidth. Yes, I know that 1.7 billion visits is a lot of text, but (1) there wouldn't be 1.7 billion visits if it were only text and links, and (2) the users who wouldn't be on reddit without multimedia offerings, I am sure, use almost exclusively multimedia and account for the lion's share.

Go back a few years, nix the 'let's host video and pictures and live chat and ignore every single thing the users are asking for so that we can bring in the eyeballs' idea, and instead monetize the regulars using their content and the site's ability to guide google to it.

Keep 150,000,000 dedicated users who reliably generate valuable content for you and keep the site spam free for you, and all you have to do is keep some devs on hand to add tooling and site features that are useful. The caveat is that Stevey Huff has to live with one or two fewer commas on the balance in his bank account.


That 20gb dump doesn't include history afaik and probably doesn't include images and other multimedia.

I don't think it invalidates your point, but I just wanted to clarify.


> reddit has 1.7 billion visits per month

> […]

> 4chan has 51 million visits per month[4]

This has an impact on their costs, but in an ad-driven business, it increases their revenues by as much.

> with an astronomical amount of persistent storage, with the content never being deleted

I'd like to know the actual amount of storage, but I really doubt it is actually “astronomical” (unlike Youtube).

Moreover, I suspect that the biggest part of that storage is actually video, which isn't really where the value (for the users at least) is.

Overall, if their costs are too high relative to their revenues compared to other players, it's first and foremost a management and cost-effectiveness issue, not a lack of revenue.


VC money. VC money gambles on big wins. They'd rather have a huge blow-out than a small success. So you take VC money, they want you to grow. They care about that more than making a profit. For a long time. Then, when you are huge, then they want money.

This sometimes works, sometimes doesn't. But Google and Facebook started out without a profitability plan.


> Also important to note that this is an indicator that Reddit has lost all trust from (this part of) the community.

Assuming the worst, ignoring details that violate the narrative, and circulating ragebait without verification are all staples of mainstream Reddit content.

It’s not surprising that the Reddit outrage/grievance machine has turned on itself and is now assuming the worst and getting outraged at every turn.


I wouldn't read too much into it. It's not like the reddit community wasn't prone to paranoid suppositions.

Just ask that moderator accused of being Ghislaine Maxwell.


Yeah, so, at first I scoffed at the notion as well, but I kept following up on it, and… are we really so sure that it wasn’t Ghislaine Maxwell?

I kept looking for them to post something, anything… but nothing. https://old.reddit.com/user/maxwellhill/

All it would have taken is a single post to prove that it really wasn’t her, but no. Radio silence. For over three years.

It’s not like somebody is protesting being defamed here either… nobody has any idea who this person is, if not Ghislaine Maxwell herself. Do you have some proof that this isn’t her? Other than the assurance of a single other person, I have never seen any.

I’ve been forced to re-examine my views on this, going from the idea that it’s utterly preposterous to, well, yeah, maybe it’s true. Do you have evidence to refute it? If so, please share. I really don’t want to believe it’s the case that an established sex-trafficker and pedophile/pedophile-enabler was one of the most influential mods on Reddit for over a decade, but at this point I’m having a hard time concluding otherwise.


This attitude is exactly why if I was the mod in question, I would just ditch the account. You're asking them to prove a negative, no amount of proof they could provide would ever be enough. They'd be forever hounded by internet lunatics.


Not at all. A single post on the account would be enough to completely disprove the theory. The whole reason the theory exists is that the account was previously extremely active, and all activity stopped abruptly when Maxwell was arrested. I don’t think you can say “no amount of proof would ever be enough” when no proof at all has been provided.


I think the lack of activity since the conspiracy theory started could be easily explained by the user being creeped out by the witch hunt and abandoning their account. I know if I was accused of secretly being a hated figure like Maxwell on an account that wasn't connected to my real identity, I wouldn't come out and deny it, I'd just ditch my account and make a new one.

It also seems unlikely to me that someone like Ghislaine Maxwell would have the time to be such a prolific Reddit user, and that she would write like a Redditor - for example calling someone "butt-hurt" in one comment.


> I think the lack of activity since the conspiracy theory started could be easily explained by the user being creeped out by the witch hunt and abandoning their account.

The problem there is that nobody made the connection until after the activity on the account stopped abruptly, coinciding with Maxwell’s arrest.

> I know if I was accused of secretly being a hated figure like Maxwell on an account that wasn't connected to my real identity, I wouldn't come out and deny it, I'd just ditch my account and make a new one.

If you’re a power mod that has spent years building up influence and control over numerous subreddits, that’s not so simple. You could leave, of course, but continuing with an alternate account wouldn’t really be feasible.

> It also seems unlikely to me that someone like Ghislaine Maxwell would have the time to be such a prolific Reddit user, and that she would write like a Redditor - for example calling someone "butt-hurt" in one comment.

Who knows? It also didn’t seem likely that someone like Ghislaine Maxwell was engaged in a massive sex trafficking operation.

I’m still not 100% convinced the account actually did belong to Maxwell, but at first I scoffed at the idea as ridiculous and insane, and now I’m not so sure about that.


Seconded.

Sounds crazy, but the more you look into it [0]... And don't forget the blackmail! You know, which Huffman projected onto Apollo's developer, even after being proved a liar by Christian's recordings.

This is actually quite important, because Huffman and 'maxwellhill' go way, way back.

0 - https://www.reddit.com/r/conspiracy/comments/r45a5n/here_is_...


I don't think the delete function actually deletes anything. At best it sets a flag.

If I wrote public forum software, that's how I'd do it: set a flag, then batch-delete only the comments that no one has asked to undelete for at least a few months.
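The flag-then-batch-purge scheme described above could look roughly like this. This is a minimal sketch with an assumed schema and an assumed grace period, not Reddit's actual implementation:

```python
import sqlite3
import time

# Assumed grace period (~3 months) before flagged comments are hard-deleted.
GRACE_PERIOD = 90 * 24 * 3600

db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE comments (
    id INTEGER PRIMARY KEY,
    body TEXT,
    deleted_at REAL  -- NULL means the comment is live
)""")

def soft_delete(comment_id: int) -> None:
    """'Delete' just sets a flag; readers filter on deleted_at IS NULL."""
    db.execute("UPDATE comments SET deleted_at = ? WHERE id = ?",
               (time.time(), comment_id))

def purge_expired(now: float) -> int:
    """Batch job: hard-delete rows flagged for longer than the grace period."""
    cur = db.execute(
        "DELETE FROM comments WHERE deleted_at IS NOT NULL AND deleted_at < ?",
        (now - GRACE_PERIOD,))
    return cur.rowcount

db.execute("INSERT INTO comments (id, body, deleted_at) VALUES (1, 'hello', NULL)")
soft_delete(1)
# Right after deletion the row still exists, it's just hidden from readers:
live = db.execute("SELECT COUNT(*) FROM comments WHERE deleted_at IS NULL").fetchone()[0]
total = db.execute("SELECT COUNT(*) FROM comments").fetchone()[0]
# Months later the batch job removes it for real:
purged = purge_expired(time.time() + GRACE_PERIOD + 1)
print(live, total, purged)  # → 0 1 1
```

Until the batch job runs, the "deleted" comment is fully recoverable, which is exactly why deletion scripts prefer to overwrite the body first.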


For a while it seemed that while this was true, edits were at least destructive: so a lot of mass-deletion tools would edit the comments, then optionally delete them. But I wouldn't be surprised if this is no longer the case (TBH, making the edit history of comments public is a good idea in general).


I disagree. Making edit history public would turn the discussion into a total shitshow. Reddit would drown in stuff like: "I see from your second edit you said ... so clearly you're an ignoramus and a bigot."


It's just a soft delete behind the scenes usually (a flag in a table, as you say).

They usually also keep some history in a logging database, so content can be restored at some point in time.

Nothing is really truly hard-deleted on the web, most of the time.


You can request your Reddit data under the GDPR, and this includes all of your deleted content. It is visible from your profile to Reddit staff.

Might have changed in the past 2 years, but unlikely.


Delete Suite's main feature is actually that it edits your comment to be junk, rather than just deleting it. I come across some corpses linking to it every now and then.


Note that Shreddit first edits the comment, then deletes it.

So the question would be whether Reddit also stores comment edit history.
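The overwrite-then-delete idea can be sketched as below. The `edit`/`delete` method names mirror the interface PRAW-style comment objects expose, but the `FakeComment` class here is a hypothetical stand-in for illustration, not a real API client:

```python
import secrets

def overwrite_then_delete(comment) -> None:
    """Overwrite the body with junk before deleting, on the theory that the
    stored row will then contain the junk rather than the original text.
    Whether this actually helps depends on whether the server keeps edit
    history, which is exactly the open question."""
    junk = secrets.token_urlsafe(32)  # random filler text
    comment.edit(junk)
    comment.delete()

# Hypothetical stand-in for an API comment object, for illustration only.
class FakeComment:
    def __init__(self, body: str):
        self.body = body
        self.deleted = False

    def edit(self, new_body: str) -> None:
        self.body = new_body

    def delete(self) -> None:
        self.deleted = True

c = FakeComment("something I regret posting")
overwrite_then_delete(c)
print(c.deleted, c.body != "something I regret posting")  # → True True
```

If the server only keeps the latest revision, the original text is gone after the edit; if it versions edits, the original survives regardless.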


Hey so I guess this is an opportunity to overwrite past "deleted" comments. A second chance lol


I doubt that is GDPR-compliant.


A 30-day lag in deletion is compliant IF you communicate that that's what you do (i.e. write that down in your privacy policy).


Yes, but after that there should be a wipeout.


You could argue that it's not the comments themselves that are personally identifiable, but the association between comment and username (and IP etc). Following that argument, you could retain the comments as long as you delete the username and other identifying info.

Not sure if that would hold up, as some comments can be pretty identifying. But it's a compromise that a company could try.
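That compromise, retaining the text while severing the link to the user, could be sketched like this. The schema is assumed for illustration; and as noted above, the body itself can still be identifying, which this approach does nothing about:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE comments (id INTEGER PRIMARY KEY, author TEXT, ip TEXT, body TEXT)")
db.execute("INSERT INTO comments VALUES (1, 'alice', '203.0.113.7', 'some comment text')")

def anonymize(comment_id: int) -> None:
    """Keep the comment body but null out the identifying columns
    (username, IP), so the text is no longer tied to a person."""
    db.execute("UPDATE comments SET author = NULL, ip = NULL WHERE id = ?",
               (comment_id,))

anonymize(1)
author, ip, body = db.execute(
    "SELECT author, ip, body FROM comments WHERE id = 1").fetchone()
print(author, ip, body)  # → None None some comment text
```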


Is GDPR only for PII? My understanding was that it applied to your data, regardless of PII status


Does Reddit have any kind of business presence in the EU? How would the EU law be enforced?


They sell their Reddit Gold in the EU and sell advertising space to European companies.


Presumably there are EU companies paying them for ads


If reddit employees want to travel to europe then a way to enforce it can be found


Yeah, the mention of June 14th suggests the same to me. I can attest that comments from private subreddits don't show up on your own profile. The deletion script won't be able to find them nor delete them.


From my observation you're right about how comments work regarding private communities, so this is plausible. Before going straight to a conclusion, I'll just caution it's also possible that both things are true.

I'll wait and see how this plays out (while deleting my comments from the previously private communities.)


CS:GO and other source games have had loads of serious vulnerabilities. I've found a few myself just by looking at the leaked source code:

https://github.com/perilouswithadollarsign/cstrike15_src

The engine code is very old, and CS:GO itself was not developed by Valve but by Hidden Path Entertainment, initially as a port of CS: Source to consoles, and they didn't really do a great job.

