
"Block nags to accept cookies and privacy invasive tracking"

Took me a couple of tries to parse the sentence. Is this blocking or accepting privacy-invasive tracking? I guess the former, but I kept reading it as the latter.


I found https://github.com/oblador/hush#does-hush-accept-or-deny-per...

So neither, it seems. Whatever the website does when a user doesn't make a choice.


That is nice. That is why I don't use 'I don't care about cookies'. I do care; I want to press reject, if only to send a signal.


Honestly interested in why this was downvoted, which, yes, makes for boring reading, but I hope it results in some interesting argument. If you need a cookie banner it means you are collecting users' personal information. For many websites this is totally unnecessary. I do not like it when a website tells me how important my privacy is to them and then asks me to accept sharing all the information they can gather about me with other commercial entities, for no functional reason. I don't want to press accept on that. As an insignificant little protest I want to press reject. I want their statistics to show that some people don't like their data being shared.

"I do not care about cookies", when it cannot just hide a popup will accept the terms and removes the option for this small protest from me. That is why I do not use it. Is this somehow wrong, offensive, off-topic?


I guess the allure of physical events is twofold: to feel a connection with other fans and to feel a connection with the players. I wonder if this diminishes the latter. Maybe put the players somewhere else entirely and stream it in the form of these stage holograms.


Not my favorite language but I like how C# is C++++ (even though in musical terms C# would be in between C and C++).


Arguably you only need two pluses to form a hash symbol.


I thought this was specifically not part of Wayland, am I wrong?


Sort of. The foundational issue here is the longstanding premise "Wayland can replace X!". The problem is that Wayland can't replace X, only Wayland plus a bunch of other components - or as I like to call it, Wayland++.

So Wayland++ can provide network transparency, but whenever a W++ feature has issues and those issues are criticized, Wayland advocates will just motte-and-bailey the issue by saying "but that's not part of Wayland!", which is technically true but irrelevant. "Wayland" can mean Wayland++ or just Wayland-core, depending on what's convenient.


Wayland proper is a protocol specification. By itself, it's completely inert and it's all up to an implementation.

The protocol uses shared memory buffers and file descriptors, so it can't be just transported through TCP as-is. You need something like waypipe, which parses part of the protocol, extracts things like file descriptors that won't make sense on the other end, and then reconstructs things on the destination.

waypipe turns out not to be that complicated; it's just 15K lines of code.
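As an aside, a naive byte-forwarding proxy is easy to sketch; the point is that it's not enough for Wayland, since file descriptors and shared-memory buffers can't cross a TCP link, and handling those is where most of waypipe's work goes. A rough Python sketch of the compositor-side half of such a tunnel (socket path and port are made up, error handling omitted):

    import socket
    import threading

    WAYLAND_SOCKET = "/run/user/1000/wayland-0"  # assumed compositor socket path
    TCP_PORT = 9000                              # arbitrary port for the tunnel

    def pump(src, dst):
        # Blindly shovel bytes; real waypipe parses the protocol instead,
        # because fds and shm buffers cannot be forwarded this way.
        while True:
            data = src.recv(4096)
            if not data:
                break
            dst.sendall(data)

    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    server.bind(("0.0.0.0", TCP_PORT))
    server.listen()

    while True:
        remote, _ = server.accept()
        local = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        local.connect(WAYLAND_SOCKET)
        threading.Thread(target=pump, args=(remote, local), daemon=True).start()
        threading.Thread(target=pump, args=(local, remote), daemon=True).start()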


>Wayland proper is a protocol specification. By itself, it's completely inert and it's all up to an implementation.

Wayland should have shipped with a default implementation that had screen sharing, recording, clipboard and everything else that x11 had by default. The fact that they've thrown all that responsibility on DEs without so much as a HOWTO on how to reach parity is ridiculous. I will never understand why anyone took their effort seriously.


> The fact that they've thrown all that responsibility on DEs without so much as a HOWTO on how to reach parity is ridiculous.

Well, all DEs (excepting maybe Xfce?) have members in the working groups that design the Wayland protocol extensions, so it can be assumed that people are well aware of what needs to be done.

Wayland has a default implementation called Weston, but I'm not sure that any of its devs cared enough to implement the extensions which are responsible for all the other bits that you mentioned.


X11 is just a protocol specification. By itself, it's completely inert and it's all up to an implementation.


outsourcing the responsibility that is.


not putting complexity into places it doesn't belong, where it leads to sub-par outcomes for everyone, that is


bah. the sub-par outcome is they broke compatibility with everyone and then claimed it was every man / DE for themselves.


x11 was (development-wise) dead long before wayland was relevant

x11 had long since become unsustainable in multiple ways

breaking changes were inevitable

all of the most relevant DEs had already more or less gutted X11, leaving nothing behind but the interfaces

so the replacement being defined by interfaces and "every DE for themselves" was just natural; for the large DEs it was already the case anyway

and for the other DEs you could say wlroots is now the common core they need, because they don't have the will/time to implement everything by themselves


no, you are right

most software which did support network transparency in a way similar to what X11 started out with has given up on it (as has the industry as a whole), and there seems to be a clear technical consensus that it's best not to approach remote access this way. Even within X11 it was kind of semi-abandoned long before X11 itself was semi-abandoned (by developers, not by the people using it).

As far as I can tell, from the POV of the discussion of whether Wayland (or any other hypothetical replacement) needs to support it, the answer has always been a clear "no, it doesn't need to, nor should it try to".

This doesn't mean that you can't have remote/shared applications, desktops, screen sharing or similar, just not via network transparency, i.e. not by pretending the things the application communicates with (compositor, GPU, etc.) are on the same computer and "transparently" routing (part of) that communication to a different computer. And if you consider the stark difference in latency, reliability and throughput between a Unix pipe or talking to a GPU over PCIe and TCP over Ethernet, it can feel surprising that it was ever considered a good idea (but then, when X11 was built, network transparency was just that big thing people put into everything, and it has since been removed from most of those places).

So what replaces network transparency (and did replace it in many cases long before Wayland was relevant) is typical remote desktop functionality, i.e. an additional application grabs the mouse/keyboard input on one side and the rendered output on the other side and sends them across. This has many benefits both for the people not using it and for the people using it, while many of the drawbacks often don't make much of a practical difference anymore. The main issue is whether there is a high-quality, free and open-source program you can use, and whether it's installed on the system where you want to use it...


A server in a datacenter generally doesn’t have a GPU, certainly not enough to support thousands of clients (each of which does have a GPU plugged right into one user’s monitor). Software rendering is a regression that didn’t need to happen, and Javascript apps seem to be the way the industry is avoiding it (with the browser as a remote display server).


> datacenter generally doesn’t have a GPU

this use case has been broken in X11 for a very long time, because to make it work well you don't just need some form of network transparency in the display server but also remote rendering for OpenGL and Vulkan

> Software rendering is a regression

But in most cases it's not happening, because for most applications you don't render on the server; you render on a client which interacts with a server.

> and Javascript apps seem to be the way the industry is avoiding it (with the browser as a remote display server).

Today many JS apps are not thin clients; they are often quite complete applications, but let's ignore that for a moment.

I'm not sure exactly what you are imagining, but as far as I can tell the only way to make the kind of remote rendering you are implying work in general would be to make X11 a GUI toolkit with some form of stable cross-OS interface; it would also have to be the only supported GUI toolkit, and any fancy GPU rendering (e.g. games) would fundamentally not work. There is just no way this would ever have worked.

The reason the industry mostly abandoned network transparency, not just for remote display servers but also in most other places, is that it just doesn't work well in practice. Even many of the places which do still use network transparency (e.g. network file systems) tend to run into unexpected issues because software happens to not cope well with the changed reliability/latency/throughput characteristics this introduces.


> Software rendering is a regression that didn’t need to happen

Actually, it did need to happen. The actual straw that broke the X developers was font metrics, IIRC. Essentially, if you want to support fonts for the language of the most populous country on Earth, you need to do more or less complete font rendering to answer questions like "how long is this span of text going to be" (so that you can break it). And the X developers tried to make it work with the X model, but the only way they could get it to work well was to have the X server ship the font to the X client and have the X client ship rendered bits back to the X server [even over the network!].
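For a feel of why this needs the actual font data: even just measuring a span of CJK text means loading and laying out the glyphs. A quick Python illustration (assumes the Pillow package; the font filename is a placeholder):

    from PIL import ImageFont

    # You can't answer "how wide is this span?" without the real glyph data.
    font = ImageFont.truetype("NotoSansCJK-Regular.ttc", 16)  # placeholder font file
    print(font.getlength("你好，世界"))  # advance width in pixels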


I'm not sure about your use case here. Why would a server in a data center need to render the GUI for thousands of clients?


Virtual desktops on demand with thin clients.

Sometimes because you want users to be able to change workstations, sometimes because you want a highly specific environment outside of the user's control (it can reset on each connection), sometimes because you want nothing to be kept locally. E.g. the country somebody works in is untrustworthy, so they access everything from somewhere remote and safe.


virtual desktops on demand tend to be run on servers with GPUs, and in general prefer server-side GPU rendering, because they are meant to work with any client that can access them, even one with an extremely weak GPU

and if you have no complex rendering requirements, then it's often a much better choice to place the network gap in the GUI toolkit instead of the DM, as this tends to work way better. In that case you do need a thin client on the other side, but you need one for remote X11 too (the client needs to run X11), so it's kinda not that different. And today the easiest way to ship thin clients happens to be JS/WebGPU, which is how we got stuff like GTK's webrender backend.


Just a heads-up for people living in the EU: it is possible to have things like negative credit ratings removed on grounds of the GDPR.


... only if they're no longer considered necessary for the purpose of judging your creditworthiness. So in practice you're unlikely to get anything expunged ahead of schedule, filing a complaint only makes sense if the credit reporting agency kept your data for longer than they said they would (in Germany, this is 3 years at most).


Do you have a source that specifically states I can get a bad debt scrubbed from my credit rating under GDPR provisions? It doesn't match my understanding of GDPR at all.


I got my bad debt scrubbed this way. This was in the Netherlands. It is not always possible, some sources here (in Dutch): https://www.vldwadvocaten.nl/blog/bkr-en-avg-bezwaar-en-verz...


Google translation: “and that means that you can submit a GDPR request to have a BKR registration corrected”

This suggests you can only get records removed if they’re incorrect — is that what you meant?


The record can technically be correct but not in "pursuit of a legitimate interest". In that case the negative record can be "corrected".

So, in my case: when I was a teenager I opened a second bank account with a credit card. I only used the card once for something small and forgot about it. Over the years the cost of this card built up, and messages to pay it back were never received because I already had a different address and phone number.

Years later this resulted in a negative BKR registration for me. I explained the situation to the bank (which could technically have the registration removed) but they refused. Later I had a lawyer order them to remove the registration, on the grounds that me forgetting about a credit card when I was young was no reason to believe I wouldn't be paying my mortgage. The registration was therefore not in "pursuit of a legitimate interest". The bank honored this.


Very interesting, thank you


Are you sure this is the case? I see the word used in its original meaning all the time.


Brave gives me a strong negative gut reaction. I really doubt crypto goes well with ethics, and looking at some of Brave's past transgressions confirms it for me. Sure, you can turn it off, just like you can remove the pineapple from a pizza; the pizza is still ruined though.


> Sure, you can turn it off, just like you can remove the pineapple from a pizza; the pizza is still ruined though.

I like this analogy; pineapple on pizza is just like a browser that runs a crypto miner.


> runs a crypto miner

No it doesn't.


You don't need to turn it off, because it's off by default.


In a way it became the complete opposite of how it started. At first, one OS for many users, each with many processes. Now, with containers, microservices, etc., we have an OS per service/process. Still, the original abstractions work surprisingly well, though it makes me wonder what a complete redesign aimed at modern usage would look like.


But the question is why did we arrive at containers and one OS per "microservice"? Have memory-to-IO bandwidth, scalability requirements, or whatever really changed (like by orders of magnitude) to warrant always-async programming models, even though these measurably destroy process isolation and worsen developer efficiency? After almost 50 years of progress? Or is it the case that containers are more convenient for cloud providers, selling more containers is more profitable, inventing new async server runtimes is more fun, and/or the regular Linux userspace (shared lib loading) is royally foobar'd, or at least cloud providers tell us it is?


The traditional Unix IO model broke with the Berkeley sockets API introduced in 1982. The obvious "Unix" way to handle a TCP connection concurrently was to fork() a separate process servicing the connection synchronously, but that doesn't scale well with many connections. Then they introduced non-blocking sockets and select(), then poll(), and now Linux has its own epoll. All these "async programming models" are ultimately based on that system API.
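For anyone who hasn't seen the pattern, here is a bare-bones sketch of the epoll style in Python (Linux-only; the port, buffer size and lack of error handling are just for illustration): one process multiplexing many non-blocking connections instead of fork()ing per connection.

    import socket
    import select

    listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    listener.bind(("127.0.0.1", 8080))
    listener.listen()
    listener.setblocking(False)

    epoll = select.epoll()
    epoll.register(listener.fileno(), select.EPOLLIN)
    conns = {}  # fd -> socket

    while True:
        for fd, _events in epoll.poll():        # wait for readiness on any fd
            if fd == listener.fileno():
                conn, _ = listener.accept()     # new client, no fork() needed
                conn.setblocking(False)
                epoll.register(conn.fileno(), select.EPOLLIN)
                conns[conn.fileno()] = conn
            else:
                data = conns[fd].recv(4096)
                if data:
                    conns[fd].sendall(data)     # echo back (partial-write issues ignored)
                else:
                    epoll.unregister(fd)        # peer closed the connection
                    conns[fd].close()
                    del conns[fd]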


>But the question is why did we arrive at containers and one OS per "microservice"?

I think it makes more sense if you consider the interim transitions to other isolation mechanisms: commodity servers instead of mainframes, then VMs, then containers, each as a way to get more isolation/security than the traditional multi-user model with less overhead than an entire machine.

Obviously cloud providers want to push for solutions that offer higher densities but those same cost/efficiency incentives exist outside cloud providers.

I'd say we've more accurately been trying to reinvent proprietary mainframes on commodity hardware.


Calling an independent set of libraries in an isolated space an entire OS is a bit of a stretch. Containers generally don't contain an init system and a bunch of services (sure, they technically can and some do), but there's generally much less running than an entire OS.


Most of the time the OS is just overhead now. Look at unikernels for one possible future.


I'm not sure I think the exokernel/unikernel approach by itself is the path forward. While the library operating system approach makes a lot of sense for applications where raw throughput/performance are crucial, they don't offer as much in the way of development luxuries, stability, or security as most modern operating systems. Furthermore, outside of very specific applications the bare metal kind of performance that the exokernel approach promises isn't really that useful. That said, I suspect a hybrid approach may be viable where two extremes are offered, an extremely well isolated and secure microkernel which offers all of the luxuries of a modern operating system built on top of an exokernel which can also be accessed directly through the library operating system approach for specific performance critical applications (say, network and disk operations for a server.)


There's been research into using Linux as the basis for having a unikernel for performance-critical workloads (e.g. a database) while retaining all the development/performance-optimization/etc. tooling for both developing the unikernel and for all the other workloads. Of course that doesn't give you a small OS codebase, but it does let you optimize for specific workloads.


> Most of the time the OS is just overhead now.

And they became so good at it that we added more OSes on top, with our VMs and OS-like web browsers...


I have used Magic Mushrooms on occasion. They provided me with very valuable insights that led me to make meaningful changes and choices in life. Insights that have stopped my panic attacks and made me more appreciative of and empathetic towards others. I really believe I am in a better place in life because of those experiences. I am, however, very careful with taking them: only when I feel very stable mentally and physically, only when I and my environment are well prepared, and not too often. I had a friend who took them after he split up with his girlfriend and went to the fair on them. Yes, ... don't do that.


I made a QR code from this link. It really isn't anything crazy. Isn't that cool? We can host websites from QR codes!
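For anyone curious how the trick works in general: you can stuff a small page into a data: URL and encode that as a QR code. A rough Python sketch (assumes the third-party qrcode package; the HTML and filename are placeholders, not the page from the link), keeping in mind a QR code tops out at roughly 3 KB:

    import urllib.parse
    import qrcode  # pip install qrcode[pil]

    html = "<!doctype html><title>hi</title><p>Hello from a QR code!</p>"
    data_url = "data:text/html," + urllib.parse.quote(html)

    # A scanner that accepts data: URLs can open the page directly.
    qrcode.make(data_url).save("site.png")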


Hosting a website without a computer is quite cool if you ask me


Wow, this is just crazy cool. I can imagine this growing into a thing.

