Hacker News: AshamedCaptain's comments

The last thing anyone would want to spend their spare time on is fixing your driver problems.

Being able to do it yourself is truly the only liberating thing out there, since paying someone else to do it does not seem to actually work these days (or ever).


Even if you run fully Valve hardware, you are still going to be subject to the usual finickiness when connecting external devices (e.g. multiple monitors, issues with the open-source AMD GPU drivers, etc.).

This hasn't been my experience with the Steam Deck. I've plugged it into all sorts of shit and it's worked with almost all of it.

Of course there is undefined behavior that isn't security critical. Hell, most bugs aren't security critical. In fact, most software isn't security critical, at all. If you are writing software which is security critical, then I can understand this confusion; but you have to remember that most people don't.

The author of TFA actually makes another related assumption:

> A crash from a detected memory-safety bug is not a new failure. It is the early, safe, and high-fidelity detection of a failure that was already present and silently undermining the system.

Not at all? Most memory-safety issues will never even show up on the radar, while with "hardening" you've converted all of them into crashes that certainly will, annoying customers. Surely there must be a middle ground, which leads us back to the "debug mode" that the article fails to consider.


    In fact, most software isn't security critical, at all. If you are writing software which is security critical, then I can understand this confusion; but you have to remember that most people don't.
No one knows what software will be security critical when it's written. We usually only find out after it's already too late.

Language maintainers have no idea what code will be written. The people writing libraries have no idea how their library will be used. The application developers often don't realize the security implications of their choices. Operating systems don't know much about what they're managing. Users may not even realize what software they're running at all, let alone the many differing assumptions about threat model implicitly encoded into different parts of the stack.

Decades of trying to limit the complexity of writing "security critical code" only to the components that are security critical has resulted in an ecosystem where virtually nothing that is security critical actually meets that bar. Take libxml2 as an example.

FWIW, I disagree with the position in the article that fail-stop is the best solution in general, but there's experimental evidence to support it at least. The industry has tried many different approaches to these problems in the past. We should use the lessons of that history.


> The people writing libraries have no idea how their library will be used.

Unless you're paying them, the people writing the libraries have no obligation to care. The real issue is Big Tech built itself on the backs of volunteer labor and expects that labor to provide enterprise-grade security guarantees. That's entitled and wholly unreasonable.

> Take libxml2 as an example.

libxml2 is an excellent example. I recommend you read what its maintainer has to say [1].

[1] https://gitlab.gnome.org/GNOME/libxml2/-/issues/913#note_243...


That's part of my point. As Nick says, libxml2 was not designed with security in mind and he has no control over how people use it. Yet in the "security only in the critical components" mindset, he's responsible for bearing the costs of security-critical development entirely on his own since Daniel left. That sucks.

But this isn't a conversation limited to the big tech parasitism Nick is talking about. A quick check on my FOSS system implicates the text editor, the system monitor, the office suite, the windowing system, the photo editor, flatpak, the IDEs, the internationalization, a few daemons, etc as all depending on libxml2 and its nonexistent security.


>Not at all? Most memory-safety issues will never even show up in the radar

Citation needed? There's all sorts of problems that don't "show up" but are bad. Obvious historical examples would be heartbleed and cloudbleed, or this ancient GTA bug [1].

1: https://cookieplmonster.github.io/2025/04/23/gta-san-andreas...


nooooo you don't understand, safety is the most important thing ever for every application, and everything else should be deprioritized compared to that!!!

> Of course there is undefined behavior that isn't security critical.

But undefined behavior is literally introduced as "the compiler is allowed to do anything, including deleting all your files". Of course that's security critical by definition?


Arguably the effort presented assumes the context of LLVM, where there is information on the actual compiler behavior.

Most people around here are too busy evangelizing rust or some web framework.

Most people around here don’t have any reason to have strong opinions about safety-critical code.

Most people around here spend the majority of their time trying to make their company money via startup culture, the annals of async web programming, and how awful some type systems are in various languages.

Working on safety-critical code with formal verification is the most intense, exhausting, fascinating work I’ve ever done.

Most people don’t work at a company that either needs or can afford a safety-critical toolchain sufficient for formal, certified verification.

The goal of formal verification and safety-critical code is _not_ to eliminate undefined behavior; it is to fail safely. This subtle point seems to have been lost a long time ago with “*-end” developers trying to sell ads, or whatever.


I appreciate your insights about formal verification but they are irrelevant. Notice that GP was talking about security-critical and you substituted it for safety-critical. Your average web app can have security-critical issues but they probably won’t have safety-critical issues. Let’s say through a memory safety vulnerability your web app allowed anyone to run shell commands on your server; that’s a security-critical issue. But the compromise of your server won’t result in anyone being in danger, so it’s not a safety-critical issue.

Safety-critical systems aren’t connected to a MAC address you can ping. I didn’t move the goalposts.

Sure they are. Eg, 911 call centers. Flight control. These systems aren’t on the open internet, but they’re absolutely networked. Do they apply regular security patches? If they do, they open themselves up to new bugs. If not, there are known security vulnerabilities just waiting for someone to use to slip into their network and exploit.

And what makes you think buggy software only causes problems when hackers get in? Memory bugs cause memory corruption and crashes. I don’t want my pacemaker running somebody’s cowboy C++, even if the device is never connected to the internet.


Ah. I was responding to:

> Your average web app can have security-critical issues but they probably won’t have safety-critical issues.

How many air-gapped systems have you worked on?


Individual past experiences aren't always representative of everything that's out there.

I've worked on safety critical systems with MAC addresses you can ping. Some of those systems were also air-gapped or partially isolated from the outside world. A rare few were even developed as safety critical.


    The goal of formal verification and safety critical code is _not_ to eliminate undefined behavior, it is to fail safely.
Software safety cases depend on being able to link the executable semantics of the code to your software safety requirements.

You don't inherently need to eliminate UB to define the executable semantics of your code, but in practice you do. You could do binary analysis of the final image instead. You wouldn't even need a qualified toolchain this way. The semantics generated would only be valid for that exact build, and validation is one of the most expensive/time-consuming parts of safety critical development.

Most people instead work at the source code level, and rely on qualified toolchains to translate defined code into binaries with equivalent semantics. Trying to define the executable semantics of source code inherently requires eliminating UB, because the kind of "unrestricted UB" we're talking about has no executable semantics, nor does any code containing it. Qualified toolchains (e.g. Compcert, Green Hills, GCC with solidsand, Diab) don't guarantee correct translation of code without defined semantics, and coding standards like MISRA also require eliminating it.

As a matter of actual practice, safety critical processes "optimistically ignore" some level of undefined behavior, but that's not because it's acceptable from a principled stance on UB.


Now go complain to Adobe, which just shut down its activation servers and left you with $1,000 of "perpetual", unactivatable, unusable software, no matter how much care you took keeping the installer media.

I remember downloading the offline installers for CS2 that they released after turning off its activation servers. I take it that wasn't repeated for the later versions.

https://www.cnet.com/tech/computing/adobe-releases-creative-...



That’s horrible too.

I really wonder what the use case is here that requires dragging windows "quite a lot" and also makes a lagging window border "not usable".

You seldom resize or drag windows?

I wish there were an actual thriving business model like this: fixing the most annoying bugs in commonly used desktop software, for a price. Why proprietary software companies cannot or do not want to provide this service is beyond me. Perhaps I'm too used to consulting.

Given that “fixing this issue required weeks of intensive work from multiple people”, the price would have to be prohibitively high.

More generally, software is really, really expensive to produce and maintain. The economics only work at scale, in particular for B2C. (Maybe AI will change that, if it becomes more reliable.)


For many large companies or even teams, there exists a class of bugs / issues / features where dropping $5-10k on a bounty is extremely cost-efficient compared to working around the issue or internal development. That might not fund development outright, but at worst it would point out the features people want and serve to inform what to work on next. I think there are a couple of reasons why that is not prevalent. The most important one is that highly compensated enterprise teams, which would benefit the most from placing bounties, tend to avoid software that is lacking features or has bugs. Secondary is not-invented-here ego and a general disconnect between the people in the trenches who know what needs to be done and the people controlling the ability to place bounties.

Imagine FAANG assigning $500 per engineer per year to allocate to feature / bug bounties.


I’m confused.

Bounties for security holes make sense because you don’t need to submit the patch, just find the hole.

And bounties for open source (like in this case) also make sense because you have everything you need to submit a patch.

But for everything else (like big tech, startups, and so on) bounties can’t fix bugs because even if I find a bug, how am I going to patch it without access to the source code? How can someone submit a patch to Netflix or whatever?

IME your average SV startup has a long list of bugs they are aware of, but just haven’t gotten around to fixing because other priorities are in the way. But people can’t help patch unless you have an open development process.

Am I missing something?


You can fix bugs without source lots of ways, although many are arcane and finicky. An example of a healthy and productive ecosystem for this is in game modding. Sometimes this relies on vendor supplied tools (like a modkit, e.g. Elder Scrolls games), messing with bytecode directly (Minecraft until recently), or some cooperation from the vendor (Dwarf Fortress).

In all of those cases users/players were able to fix bugs and add desired functionality (mostly) independently, on a closed-source program.

For industrial software you don't see as much, even though arguably cracks (to skip license check) qualify here.


That seems different to me: a user can download and run a mod, but the fix isn’t then a part of the game itself and available by default to all users. Unless of course the real developers back port it to the game, but that’s just the kind of development effort the parent’s comment seems to be seeking to avoid.

The parent seems to be talking about the companies using bug bounties as a way to fix bugs in their software and the fixes becoming part of that software (not a separate mod run on top).


> even if I find a bug, how am I going to patch it without access to the source code?

That's how. Bethesda put a mod manager in Skyrim and works with some of the developers, they distribute fixes as game patches, you can distribute yours as "mods" or let them repackage it into an official patch or the next update.

https://en.uesp.net/wiki/Skyrim_Mod:Unofficial_Skyrim_Patch


Yes I’m aware of this sort of thing.

I guess maybe it could apply to some niche cases of locally run software like Photoshop, though I’d be shocked if the marginal gains of a bug bounty program could justify the massive cost of implementing a mod system like this for Photoshop.

But the fact is that most software in the world doesn’t work like Skyrim. Large parts of most software runs on servers or on locked down mobile operating systems where modding systems are not possible.

What you are proposing kind of already exists for web frontends in the form of browser extensions, but having worked on several apps for which an ecosystem of browser extensions sprung up, my experience is that there is no simple way to port these features to the main product. For security and QA, every line of code needs to be vetted anyway, and then “translated” into a form appropriate for the existing code base. At most, they just validate demand for a feature or bug fix.


Most larger companies would probably find it way easier and more sensible to contract with some outside consultancy to work on these issues than just posting a random bounty, even if the latter might potentially be cheaper. See Google Summer of Code projects for a very practical example of how "just pay randos to work on issue X for cheap" can quite often end up in failure.

Yes, when my org needed a very specific feature from an open source project the company reached out to the authors. I don’t know the terms, but they dropped a chunk of cash. No strings either on the new feature and everyone benefited in the end.

> See Google Summer of Code projects for a very practical example of how "just pay randos to work on issue X for cheap" can quite often end up in failure.

That potential for failure is there for any "subcontractors". I wonder if anyone has any stats on this.


While you are completely correct about the bounty price, sometimes there are people who work deeply in the field and can solve those things relatively fast because they have already done similar things in the past.

Especially if you’re talking about a business who takes on these types of bounties routinely. I imagine you’d be able to build up a body of historical knowledge about fixing common issues. You could see how that could be a viable business model.

Eh, I think you're underestimating some people's perseverance.

You generally only need multiple people for timely action, and it usually even slows you down (from the perspective of total hours spent)

Like a $2k bug bounty? I guarantee you some people would be willing to spend a lot of time for that. But yeah, people who are gainfully employed and have a decent salary - likely not.


People will have fun spending their free time on such projects. But it’s virtually impossible to turn it into “an actual thriving business model” that people can make a living on.

Why not? In much of the world, working on one of those a month would provide a comfortable living.

This $1900 bug bounty is quite an outlier, you generally won’t find one per month. An additional challenge is that it’s hard to predict how much work something will take, or whether there are any showstoppers. Also, if you don’t live in the same country as the client, it will be more difficult to get legal assurance that you’ll receive your money (or for the client that they won’t lose their money).

You brought up a lot of points. And I think they are all negligible compared to the gigantic elephant in the room.

Which is, in order for some rando to fix the bug; a company would need to give access to their codebase to some rando.

And they don't wanna do that.


It could become some sort of leetcode final boss and/or something that you can put on your resume.

And also scarce skills.

For small stuff, the cost is just going to be too much for people to want to pay it. This bug had a $1900 bounty attached. Let's put the cost of one software engineer (salary plus overheads) at $200,000 a year, which I think is an underestimate. That's $3850 a week, so unless your bug can definitely be fixed (including getting any necessary hardware, investigation, fixing, code review overhead, etc) in two or three days it doesn't pay. And if it could obviously be done in two days then it's likely somebody would have already done that.

The above back-of-envelope maths ignores the overhead of interacting with the people who posted the bounties to get them to agree to pay up, and the cost overruns on the class of bugs that look like two-day fixes but take two weeks.


$200k is one expensive software engineer. On average, you can get people to work for much less.

Paying for software developers is really weird. State governments, for example, struggle to pay for an FTE who makes $140k, but they can pay me over $200/hour for consulting services for multiple years. The technical FTE employees they have generally aren't qualified to evaluate their consulting needs, so you get multi-million-dollar contracts with very little actual oversight. I was really impressed with the folks I was working with at this particular state government and looked into what it would look like if I joined them full-time as an FTE technology leader. I would have to take almost a 50% pay cut. The top senior IT position that oversees all of the state's resources makes 70% of what I do. It's crazy. Unless you're working in medicine or sports, government pay sucks.

I've seen similar but less extreme examples play out in the private sector: a 16-year senior architect making less than a freshly hired software dev who was just an intern within the same company. Software developer pay is largely based on what you're demanding. In a lot of companies, there is a wide range of pay for folks doing literally the same job. They will hire a dev at $180k because that dev wouldn't go lower, then turn around and push back to get another dev at $120k for the same level of unproven experience.


They give up pay for guaranteed work and benefits, maybe a pension? Most likely little risk of being fired or laid off.

You have to keep finding clients (I'm sure it's easy now, will it always?) and pay all your expenses.


I assumed the commonly cited 2x markup, so that would be a $100k salary, which is less than various websites say is the average US software dev salary. You could probably find cheaper elsewhere in the world, but even if you cut the salary in half that's still "bug must be doable in a week", which isn't going to cover many of the bugs people will care about.

I believe that the $200k figure was meant to express what such a person might cost the company, not what that person would be paid as salary.

(And it's just a placeholder. $200k seems like it's at least in the direction of the right ballpark.)


$200k is on the extreme high end for software engineers. For example, in Eastern Europe $30k is normal, and that's not even the floor; you can go to India or Africa to get even cheaper. The problem with this bug bounty, though, is that it requires pretty rare expertise. It's not a "throw any developer at it" type of thing.

You don't need to hire expensive (but not so productive) American engineers; there are a lot of other countries. Also, there are ML models.

If you let the ML do it, it means it was so trivial you could have done it yourself faster.

$200k is a fairly high salary for a software engineer even in expensive markets. A bounty program like this would be open worldwide, and many people would be willing to work for a fraction of that. Quality control is another concern, but take a look at prices and bids for this type of work on sites like Upwork and you'll realize $200k is nowhere near the lower baseline.

$200k in cost to the company is a lot different than $200k in salary. It probably relates to someone making like $140k, depending on the various tax rates.

Also, don't forget to include QA and release-management overhead, as well as project management, etc.

The $60k buffer probably just covers the salaries of the multiple layers of management and facilities (building, cleaning...).


You are forgetting that typically many users want a bug fixed.

Did you realize that you didn't include 'open source' in your statement? This is exactly what the desktop OS makers (Microsoft and Apple) do every single day. Their prices are mostly B2B and therefore hidden, but there is a steady income for each person involved in making the fix.

and yet, Microsoft Teams is a total trash fire full of bugs that users hate. So something is broken (Teams. It's Teams that is busted).

It's the management structure that's broken. Plenty of decent engineers around Microsoft could fix it, and plenty of customers and enterprises are willing to pay, but they are not allowed to work on it because of prioritization bullshit; allegedly they could make more money elsewhere.

That's literally the issue: management by KPI frameworks.


I think it has more to do with bundling reducing the need to compete to zero. Change that and the economics of competition would take over and the changes would get prioritized but nobody at Teams needs to sell a single license, so the priorities become the bs like internal status and visibility and not product success.

How many companies have Teams for basically free with their 365 license but still pay for Slack? The marginal value of Teams is nearly zero.


There is also the matter of selective effort by staff senior enough to make their own choices. Many SDE3s (or whatever the MS equivalent is) wouldn’t want to be associated with a dumpster-fire product like Teams.

The economics for something like MS teams is not what you'd expect.

It has to be good enough that other options are not worth the hassle to switch over to, for enterprise customers. The quality doesn't matter in the slightest, because making it 5-10% better would cost double or triple.

Where quality does matter for these customers, backward compatibility, Microsoft does pretty well.


I have used it every day for the past 3-4 years. What bugs? I don’t love it but I don’t hate it either. I don’t understand the Teams hate.

It asks me a quality survey after every call even though I have surveys off.

It shows unread messages for a chat that has the focus and the “unread” message is visible.

When using the keyboard shortcut to create a new message (Command-N) it drops the first character of the recipient unless I introduce a noticeable delay between shortcut and recipient.

I’m sure I have more, but these are just from memory.


I just click the X rather than leave and I don’t get the survey popup. I mute most of the group chats. But it still shows unread so I know to go back and look at them

If I try to play a video in a chat and then press the three dots to go full screen, change quality, or scroll through the video, it just jumps to the top of the chat. Messages do not sync between devices. Microphone or video sometimes do not work and require a restart. Sending attachments often just results in a failed message. Note these happen on every OS I have tried, on high-end devices, and on an extremely high-end 10-gigabit internet connection.

There is also just feature jank: for instance, you cannot have two instances open at the same time if you have two organisations that you work for; you have to switch constantly. This is a disaster for any consultants or contractors who are placed in-org on Teams.

An all-day event takes up only a sliver of space on the calendar, so people will often instead schedule an event for 9-5, or worst case 12 hours, so it isn't easily missed on the calendar.


Most recently I had it put meetings on a different day because something was broken with its Outlook integration with respect to starting the week on a Sunday vs. Monday.

If you had made the same complaint about Win11, you wouldn't be so far off. Microsoft is great at driver support, which is the subject at hand.

I think that $2k is really, really cheap for the expertise in kernel development.

It is, but it's amazing how cheap kernel expertise is relative to comparable experience in other specialties like frontend.

there are a lot more kernel programmers than kernel work

But also lots of kernel developers work for free, so the average price of their work is very low

"Lots" is a relative term, but the overwhelming majority of kernel developers are employed and usually do kernel work as part of their job (usually at least ~80% but it could be argued as high as 97% depending on how you interpret the breakdown done by LWN of each release[1]).

[1]: https://lwn.net/Articles/1038358/


And I would guess that most of the kernel devs who are "working for free" are doing the stuff they personally enjoy and find satisfaction in working on, because it's a hobby -- so many of them are probably not interested in fixing random bugs for cash either.

Well, if one person spent a month on this, they’d be making about $10/hr.

Makes Starbucks barista pay look good…

Of course, if they can churn this out closer to 2 days, maybe there is something there.

Such a talented person would probably prefer a more certain and higher income.


I don't think this argument is accurate. There are other reasons to do this work even for free such as self-promotion, community-building, hobby, etc.

I think the real blockers are the legal implications of reverse engineering.


For a lot of people in the world $10/hr is a fantastic wage. And you get to work at your own pace, probably from home.

And for a lot of people it is fantastic to have one slice of bread per day. What is your point?

I wish there was regulation that you have to sell and maintain a working product, so that open source devs don't have to waste their time fixing proprietary products.

It looks like these laptops are usually sold with Windows; are you saying that every manufacturer should be obligated to develop drivers for every software which is theoretically compatible with it? Or are you just saying that we need even more caveats in the interminable EULAs we all just click through?

Maybe the obligation should be to provide adequate information about the hardware, so anyone could make a driver for their own software if they so desire.

Sort of unrelated, but I've been thinking a lot about founding a non-profit that fundraises to undercut the usual shitty consultancy companies that build government websites and apps, and to build them properly instead.

Since you are talking about proprietary software, I assume you mean fixing bugs by the corpo devs themselves.

Well, this would imply broken software. You already paid for the software; now you are required to pay to get bugs fixed? Bad optics, although not beyond contemporary sentiments... Inherently shady incentives: https://en.wikipedia.org/wiki/Perverse_incentive

This kinda only works best for FOSS, incentivizing external devs IMO.


Yeah, you'd want some sort of micro-kickstarting website where users can pool money that goes into paying for some fix or feature if the committed money crosses a threshold.

The problem is one-offs don't make steady, predictable, recurring revenue. Owning a consulting business is hard: you have to have customers waiting.

There is some: VueScan, for example, where the developer reverse-engineers scanning protocols, re-implements them, and sells the result.

I'd gladly pay a couple hundred to have Swift-like optionals in Godot's GDScript, among other things that are just a pain to convince all the random idiots on their official spaces of, but GitHub doesn't have a way to offer that :(

People spam the most minimal viable patch to collect the bounty and move on. And these days they are sending AI-slop solutions. It doesn’t promote good code the way actually hiring someone does.

I think the real issues are attributing work, and fear of doing a ton of work only to be pipped at the post.

Out of all bugs and feature requests, this one is an outlier in that it requires specific hardware to work on and has an obvious success condition. This means that every man and his dog is not going to be throwing an LLM at this to see if their particular slop wins the prize. People get weird when money is on the line and managing a bounty is a job for which I would never volunteer.

The paperwork.

Shouldn't come as a surprise to anyone who has followed them since the Allerta days.


> I bought myself for just a few euro a Bluetooth DAC (a FiiO) that had superior sound quality to any phone’s audio-out that I had ever used.

I hate the 3.5mm jack myself (see below), but I can already tell you that invoking some unscientific notion of "superior sound quality" that likely no one among us is humanly able to distinguish is not going to win any minds over. Proponents of the 3.5mm jack like it because it is ridiculously simple to use, intuitive, cheap, doesn't have a lot of things that can go wrong (e.g. no batteries), and despite that is overall effective.

The reason I dislike the 3.5mm jack is that the _socket_ part (i.e. the part on the expensive device) wears out very quickly, becoming fragile and generating distracting artifacts even with slight cable pulls/movements as the springs in the connector start to fail. This annoys me to no end, much more than any issues with other interfaces.


Talking about “superior sound quality” in the context of mobile phones isn’t controversial, it’s not like a home-stereo audiophile snake oil debate. It is well known that DACs are an area where mid-range and low-end phone makers have cut corners, choosing chips that are quite flawed for anyone who uses their phone to listen to music where pristine sound quality is valued.


The elephant in the room for me is "microphonics" or the noise piped to your head via the wire any time anything touches it.

You demand higher quality, yet don't care about the loud noise created with every small movement of your body? I have heard this dismissed before as "doesn't bother me" and it's hardly ever mentioned in discussions about good audio vs Bluetooth.

I'm bewildered why wireless audio isn't praised for completely eliminating this source of noise that plagues every wired headphone, earbud, and IEM.


This is your host idling the connection due to the silence. Just keep something playing (like a stream of almost-silence) on loop and you won't have this problem.


Yes, but I was asking if high quality or newer Bluetooth audio devices have a lower latency.

I will suggest to the app developers to add optional silence. Thank you.


Yes, there are newer Bluetooth headphones with significantly lower latencies, either through LE Bluetooth audio or through codecs like AptX LL.


Most reviewers are already utterly unable to measure "normal" latency. In the unlikely chance you find a reviewer measuring wake-up latency (which has little to do with the codec used), I wouldn't even trust the result.


Yes, wake-up latency is going to be a crapshoot, but you can eliminate it by playing a silent file at the expense of battery life. There's nothing you can do about the actual latency, though, so it's more important IMO.


Because you can literally verify every single step of what they do. That's the reason you can trust them.

You cannot apply this logic to almost anyone else. Apple, Google, etc. can only give you empty promises.

