froddd’s comments on Hacker News

Alongside the quality of life and mental health improvements, too — some clear positives.

Great point: the mental health improvements cannot be overstated. Even if the physical impact is net-neutral, the mental health impact is surely positive.

“Be’er” seems like even less work, for some people.


This is a bit of a myth. A glottal stop is a full consonant sound which takes effort to produce. It's not really any 'easier' to produce than an alveolar stop in any objective sense.


Reducing the problem to ‘people coming from overseas’ is an equally reductionist argument.

There are properties going unused, for very many reasons: second homes, holiday homes, etc. This also drives up the price of properties. It is one of the inputs to the problem. Planning permission laws are another input. The size of the population needing housing, and how it changes, is another.


Is it not the dominant factor, at least in the short term? It's much faster for 150k people to enter/leave the UK than for a corresponding number of homes to be built/demolished.


It’s obviously the dominant factor to anyone with the eyes to see it.


It does, but occupancy rates in the UK are already incredibly high compared to countries like France.

There are simply too many people and not enough houses.


Reductionist is not a bad thing in this case. We need a major change to fix this situation, and little tweaks like increasing taxes on second homes or holiday homes do not actually fix it (we already tax those specific cases, with things like second-home stamp duty or, in some areas, second-home council tax).

You have

A - Demand (immigration of 1 Million per year)

Or

B - Supply (building only 120,000 houses per year)

We MUST fix one or both sides of this equation. Holiday homes aren't going to amount to a hill of beans, quite frankly (and cracking down on them would have very negative effects on the tourism industry: not so bad in London, but quite an impact in Cornwall, for example).


Holiday homes have quite the impact in many parts of Wales.

Occupancy ratings per house seem to be quite low on average [1], but weirdly I can’t seem to find any figures for overall occupancy.

[1] https://www.ons.gov.uk/peoplepopulationandcommunity/housing/...

Edit: clarification


Maybe on a hyper-local basis, but beware: there will be negatives to flushing them out too.

On a national scale I'm afraid it's just not statistically significant.


The people have been saying pretty clearly for 20 years that they want A fixed.

But the politicians just ignore that for "growth", keep granting hundreds of thousands of visas, and then blame asylum seekers.

And then complain that there's mysteriously low productivity in the UK.


Sticky desktop notifications don’t work in Firefox on macOS, which means event notifications from calendar apps disappear within a few seconds, leaving no time to snooze them or act on them in any other way. Cue missed meetings.


What about a surname like ‘Collinson’? If ‘Collins’ already derives from this process, would this be another layer of it?


I guess it's Collin + son. Conceivably Collin's + son.


That quote is from Georges Clemenceau, not Charles de Gaulle. Clemenceau was also a French politician, but long before de Gaulle.



That’s a rabbit hole level of interesting! Thanks for sharing!


It's why I come to HN honestly


On such popular packages, one should reasonably expect security holes to be found and made public with relatively little delay. At that point, upgrading becomes relevant.


The main question should always be: why update?

Should a library turn out to have a vulnerability, fine (if said vulnerability is relevant to your usage). If you need a feature only available in a newer version, fine (I’m counting better performance as a feature).

What I’m seeing far too much of is upgrading for the sake of it. It feels like such a waste of dev time. Pinning dependencies should be absolutely fine.
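As an illustration of what pinning looks like in a JavaScript project (the package names and versions here are only examples): exact versions in package.json, rather than caret ranges, keep installs reproducible until you deliberately upgrade.

```json
{
  "dependencies": {
    "express": "4.18.2",
    "lodash": "4.17.21"
  }
}
```

With a caret range like "^4.18.2", npm is free to resolve any later 4.x release; dropping the caret (or installing with npm's --save-exact flag, or relying on a committed lockfile) pins the version you actually tested against.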


There are two reasons to do frequent updates:

1. Process. As a guiding principle, it is easier to make frequent small steps than one big step. There are many reasons for this, and the benefit of frequent small chunks of work applies beyond updates.

2. Security. Frequent updates can improve your security posture, for several reasons: you apply undisclosed security fixes without knowing it (not everything gets a CVE), vulnerabilities are less likely to go unnoticed (this can also be addressed with automated monitoring), and when a time-critical upgrade does arrive, the work is faster and less risky (see the previous reason).

Pinning and updating reactively would be fine, and sometimes is; however, there will be security issues, and you will have to update. Given that the task is hard to avoid, for any product that is actively maintained and developed I think the better choice is to do regular updates regardless of security issues. Maybe with good monitoring, and for products that really aren't being developed any further, only reacting to security issues is the better choice; it's often a pain, though.


You either waste your time updating daily or you rewrite from scratch every 3 years. JS is what happens when you let the inmates run the asylum.


In my experience (~10 years of front end stuffs), the rewrite will happen either way. Code rot, sweeping redesigns, obsolescence, or over-eager consultants will trigger a full front-end rewrite / re-architecture every ~5-10 years.


Yes, but that's 2 to 3 times slower than what happens if you let the code rot.


The issue is that a feature or vulnerability fix might not be backported to older versions. If you are using a two-year-old version and a non-backported vulnerability or needed feature comes along, that means absorbing two years of breaking changes to move to the version that has it.

Frequent updates allow you to address the breaks gradually rather than all at once.

JS is just awful, though, because of the sprawling dep tree. I get why devs would prefer pinning as any one of the 1000 deps that get brought in could need an update and code changes on any given day. A sticky static version requires less daily maintenance.


It's vastly, vastly easier to upgrade small version bumps constantly via automated tools like Renovate than it is to try to upgrade several major versions every few years. It's shite being stuck with dependencies the dev team has put in the "too hard" basket because the delta is too scary or difficult and too much code has ossified around the now-ancient version. Don't willingly do that to yourself if you can avoid it.
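As a sketch of the kind of automation being described (the exact options are assumptions; check Renovate's documentation for the current schema), a minimal renovate.json might batch small bumps onto a schedule and auto-merge the low-risk ones:

```json
{
  "$schema": "https://docs.renovatebot.com/renovate-schema.json",
  "extends": ["config:recommended"],
  "schedule": ["before 9am on monday"],
  "packageRules": [
    {
      "matchUpdateTypes": ["minor", "patch"],
      "automerge": true
    }
  ]
}
```

Major-version PRs then arrive individually, each with a small reviewable delta, instead of piling up into one scary multi-year jump.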


I get that, and it’s a good point. But at some point that easy patch/minor version bump becomes a major version with a breaking change, and does take time to upgrade regardless, scary delta and such. My point is that, without an actual feature need or an actual vulnerability (neither of which is guaranteed to spring up in future), any time spent upgrading is potentially wasted. I know some projects are unlikely to last beyond a few years; in those cases I think the risk is calculated enough to not matter too much.


It's down to engineering culture at that point. We have a weekly process where we merge those PRs, including any that are failing. It doesn't suck up much time at all, but our stuff is always well maintained with few surprises lurking. The side effect of this type of culture is high-quality test suites and pipelines that you have very high confidence in and that are executed frequently and quickly. It's overall been a far better experience than just letting stuff rot.


Any security work always involves a calculated risk. The risk here is that you will be forced to do a painful and error-prone upgrade at the time of the vulnerability, under pressure. You won't have done that often, so the process is unlikely to go smoothly. So there may be bitrot, lots of debt, and time pressure to put out a patch: a perfect storm for a lot of things to go wrong even if you don't get exploited. It also throws a wrench into your current schedule. This should be part of the risk calculation.

As a web developer I see so many CVEs in mature stacks, and every so often they really do apply to our work. It is hard to avoid updating, unless you pretend those vulnerabilities don't exist or don't apply (honestly, the vast majority of devs and small orgs do just that). Even monitoring and deciding which vulnerabilities apply is a recurring 'waste' of time; sometimes you might as well just do regular updates instead.

One issue I often see is that if you do your job well, any time sunk into security can by definition be seen as wasted. Until that rare moment comes when it is not so, and then it suddenly transforms from wasted time into a business critical or even business ending death crunch.


> My point is that, without an actual feature need or an actual vulnerability (neither of which is guaranteed to spring up in future), any time spent upgrading is potentially wasted. I know some projects are unlikely to last beyond a few years; in those cases I think the risk is calculated enough to not matter too much.

You could make the same argument for any kind of code quality efforts. Frankly I think this site probably leans too far into a high-quality mindset, but apart from anything else good programmers won't want to work on a codebase that isn't seen as valuable and treated as such.


It’s a strong business case for the founders


Undelivered letters could be returned to sender.

In which case… there would definitely be no need for the instruction not to write anything below the line, as nobody would have opened the letter.


If only they knew the fate of the letter before sending, then

