Then don't visit the site. Cloudflare is in the loop because the owner of the site wanted to buy, not build, a solution to the problems that Cloudflare solves. This is well within their rights and a perfectly understandable reason for Cloudflare to be there. Just as you are perfectly within your rights to object and avoid the site.

What is not within your rights is to require the site owner to build their own solution to your specs to solve those problems or to require the site owner to just live with those problems because you want to view the content.


That would be a much stronger line of argument if Cloudflare weren't used by everyone and their consultant, including on a bunch of sites I very much don't have the option of not using.

Cloudflare doing a really good job meeting customer needs doesn't impact my argument at all.

When a solution is widely adopted or adopted by essential services it becomes reasonable to place constraints on it. This has happened repeatedly throughout history, often in the form of government regulations.

It usually becomes reasonable to object to the status quo long before the legislature is compelled to move to fix things.


Why? This isn't a contrarian complaint: the problems that Cloudflare solves for an essential service require verifying certain things about the client, which places a burden on the client. In many cases the problems exist precisely because the service is essential, which makes it a higher-profile target. Expecting the client to bear some of that burden in order to protect the service is not, in my mind, problematic.

I do think that it's reasonable for the service to provide alternative methods of interacting with it when possible. Phone lines, mail, and email could all be potential escape hatches. But if a site is on the internet, it is going to need protecting eventually.


That's a fair point, but it doesn't follow that the current status quo is necessarily reasonable. You had earlier suggested that the fact that it broadly meets the needs of service operators somehow invalidates objections to it, which clearly isn't the case.

I don't know that "3rd party session cookies" or "JS" are reasonable objections, but I definitely have privacy concerns. And I have encountered situations where I wasn't presented with a captcha but was instead unconditionally blocked. That's frustrating but legally acceptable if it's a small time operator. But when it's a contracted tech giant I think it's deserving of scrutiny. Their practices have an outsized footprint.

> service to provide alternative methods of interacting with it when possible

One of the most obvious alternative methods is logging in with an existing account, but on many websites I've found the login portal itself barricaded behind a screening measure, which entirely defeats that.

> if a site is on the internet it is going to need protecting eventually

Ah yes, it needs "protection" from "bots" to ensure that your page visit is "secure". Preventing DoS is understandable, but many operators simply don't want their content scraped for reasons entirely unrelated to service uptime. Yet they try to mislead the visitor regarding the reason for the inconvenience.

Or worse, the government operations that don't care but are blindly implementing a compliance checklist. They sometimes stick captchas in the most nonsensical places.


Why? When there are hundreds of hopeful AI/LLM scrapers more than willing to do that work for you, what possible reason would you have to do it yourself? The more typical and common human behavior is perfectly capable of explaining this. No reason to reach for some kind of underhanded conspiracy theory when simple incompetence and greed are more than adequate to explain it.

CF hosts websites that sell DDoS services.

Google really wants everyone to use its spyware-embedded browser.

There are tons of other "anti-bot" solutions that don't have a conflict of interest with those goals, yet the ones that become popular all seem to further them instead.


In most languages that have exceptions you don't have the same guarantees, because values are not immutable: if they were mutated, they stay mutated. The language can roll back the stack using exceptions, but it can't roll back the state.

The BEAM runtime, and all languages that target it including Erlang, do not allow mutation (ETS and company excepted). This means that on the BEAM you can roll back not only the stack but also the state, safely. This is part of what the poster meant by "the most granular level possible."
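A rough sketch of that contrast, using Rust's unwinding as a stand-in for exceptions (illustrative only; the names are made up): the runtime rolls back the stack, but state that was already mutated stays mutated, which is exactly what BEAM's immutability rules out.

    use std::panic;
    use std::sync::Mutex;

    // Shared state that an exception-style unwind cannot undo.
    static STATE: Mutex<Vec<i32>> = Mutex::new(Vec::new());

    fn faulty_update() {
        STATE.lock().unwrap().push(42); // the mutation happens first...
        panic!("boom");                 // ...then the stack unwinds
    }

    fn main() {
        let _ = panic::catch_unwind(faulty_update);
        // The stack was rolled back, but the state was not: prints [42].
        println!("{:?}", STATE.lock().unwrap());
    }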


Strange how a number of Transhumanists are claiming significantly more than you list here as their ideology.


Why do you think it's strange that people don't describe themselves with precision? People can be transhumanist and also other things. That doesn't make the other things part of transhumanism.


That is not what I said, though. I said that a lot of people who claim they are transhumanist claim that transhumanism is those things, which is not the same as what you are describing.


The Singularity is a belief that you can extrapolate from history and current events to a specific point in the future that will lead to some very religious-sounding results:

* Eternal life

* A powerful entity or entities who will fix whatever is wrong:

  - An end to poverty

  - An end to disease

* Personal freedom

This is entirely a faith-based belief that can not be proven or disproven. It is by no means certain that you can extrapolate to a singularity in our future, nor is it certain that you can assume it will have the claimed effects. While "singularity" has a well-defined, clearly non-religious meaning in math and physics, "The Singularity" is entirely a faith-based religious belief.


I'm not sure where you get that version of The Singularity. Per Wikipedia, the most popular version of the technological singularity is AI making better AI, leading to an intelligence explosion; that may or may not happen, but it is not especially faith-based or religious. (I think we'll have AI making better AI, but more of a gradual ramp-up than an explosion.)


> The Singularity is a belief that you can extrapolate from history and current events

That's exactly backwards; a singularity is a point past which you can't extrapolate, because trying to do so leads to absurdities (e.g. infinite densities, time as a spatial dimension, one egg costing more than the GDP of Europe, etc.). "The Singularity" was called specifically that because it was the point at which historical projections would become absurd, and thus we could not meaningfully forecast past it (like an event horizon).

Some people ran with that and decided that it was a forecast that specific absurdities would certainly happen, but that's mostly just a reading comprehension issue.


I think the idea is that you can extrapolate that a singularity will occur and when it’s likely to happen. It’s what happens next that you can’t predict.


The singularity in this context refers to the point beyond which predictions will fail because we cannot possibly foresee the consequences of certain technological changes.

There's nothing historical about it; it came about as a result of a few different science writers looking into the future and wondering how we keep up in an accelerating technological context.

I actually agree that it's become something else, but the origin of the term was what I was correcting, and its origin isn't something woo-woo, it's firmly based in scientific speculation.


Unfortunately, as someone of the Christian faith, I have first-hand experience that you cannot control someone else's use of language. Whether or not that is the meaning it was originally intended to convey, that is the meaning it conveys now. The best you can do here is say: "That does not represent my own personal definition, despite the zeitgeist co-opting it to mean something else."


Even in its original context, it's still a bunch of woo-woo. The core idea, that creating a technology that can improve technology will lead to exponential technological advancement that can be modeled as a mathematical singularity, is very hand-wavy and silly on its face.


It's been very easy to observe acceleration in progress over time, and there's a natural question that emerges: Will we reach a point where people can't keep up?

Nothing hand-wavy or silly there. And the discussion of the topic as it was formed in the second half of the 20th century was pretty carefully couched in terms of what-ifs and conservative projection.


Skip a couple of framework versions, and indeed entire frameworks. Maybe go a couple of years before you "upgrade" to something else. It is entirely possible you could go as much as 5 or 10 years on something. You'll still have to evaluate and potentially mitigate some CVEs, but that could actually be less work and less aggravating.


I'll counter your anecdotal experience with my own. I can count on one hand the number of people I know well who voted against Harris because of her gender or color. And I live in the rural south. The simple truth is that Harris failed to make a compelling case to enough of the nation in enough of the right states to win. The reasons for that failure are going to be complex and varied depending on area. Any attempt to pin it on a single trait or activity is going to be somewhat wrong. This is way too nuanced a topic for "She lost because she was a black woman" to be at all useful.


> I can count on one hand the number of people I know well who voted against Harris because of her gender or color.

You can count on one hand the number of people you know well who are willing to admit they voted against Harris because of her gender or color. This is a critical distinction.


I know a lot of people well enough that they know they don't have to be afraid to admit such things to me. This is not a critical distinction because assuming you know what people think is a great way to completely misunderstand them.


As someone who spent 30 years living in the rural south, I can state this with the confidence of personal experience: some people will not reveal their worst opinions until they are extremely sure in the safety of doing so. In some cases, they'll only reveal those opinions to someone else who has repeatedly shared the same opinions with them.

So unless you're giving big racist or misogynist vibes in how you communicate with them, the racists and misogynists you know "well" may be keeping their opinions to themselves.


The Democratic white male candidate (Biden) was polling so much worse than Harris that he was removed from candidacy.

Harris was part of the Biden administration, she was the 2IC of the Biden administration, she endorsed the policies of the Biden administration, and the only official difference between her and Biden was that she was a different race and gender and polled much better than he did, although unofficially she also looked mentally sharper than Biden.

It is difficult to say it was race and gender when the approved candidate of the appropriate race and gender was clearly unelectable. At least Harris was a close call; Biden had lost before the race started. If the Biden administration had been popular and had some achievements to run on, then she'd probably have won.


I can create tagged unions in literally any language, not just C++, including assembly. Rust having syntactic and compiler-checked support for them as a foundational part of the language is a material difference in design that differs fundamentally from C++. So the point still stands.
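A minimal sketch of what that compiler-checked support buys you (the Shape example is made up, not from the thread):

    // A Rust enum is a tagged union with language-level support: the
    // compiler knows every variant and checks that `match` is exhaustive.
    enum Shape {
        Circle { radius: f64 },
        Rect { width: f64, height: f64 },
    }

    fn area(shape: &Shape) -> f64 {
        // Forgetting a variant here is a compile error, not a latent bug.
        // A hand-rolled tag field plus union in C or assembly gets no
        // such check from the compiler.
        match shape {
            Shape::Circle { radius } => std::f64::consts::PI * radius * radius,
            Shape::Rect { width, height } => width * height,
        }
    }

    fn main() {
        println!("{}", area(&Shape::Rect { width: 2.0, height: 3.0 }));
    }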


I instantly thought of this when I saw the title.


The counterpoint here is that at least with checked exceptions you know only those 20 functions are part of the code path that can throw that exception. In the runtime-exception case you are unaware of that 21st function elsewhere that now throws it, and it's no longer in the correct handling path.

You have no way to assert that the error is always correctly handled in the codebase. You are basically crossing your fingers and hoping that over the life of the codebase you don't break the invariant.

What was missing was a good way to convert checked exceptions from code you don't own into your own error domain. So instead Java devs just avoided them, because it was more work to do the proper error-domain modeling.


Like I said, the implementation is wrong. Adding an exception to that 21st function, and then to the whole call chain as well, ends up being a lot of work. Sure, you eventually find the place to handle it, but it was a lot of effort in the meantime.

It gets worse. Sometimes we can prove that the 21st function, because of the way it calls your function, can never trigger that exception, yet it still needs code to handle it. And if the 21st function later changes so that it does trigger the exception, the error should now be propagated back down; but since you already handled the exception up there, checked exceptions won't tell you that you're handling it in the wrong place.

I don't know how to implement checked exceptions right. On paper there are a lot of great arguments for them. In practice, however, they don't work well in large projects (at least in Java).


The Rust Result type, with the accompanying `?` operator and the `Try*` traits, is how you implement exceptions correctly. It makes it easy to model the error domain once in your trait implementations, and then `?` does the rest of the work.

You could imagine something similar with exceptions, where there is a simple and ergonomic way to rethrow the exception into your own error domain with little to no extra work on the part of the developer.
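A minimal sketch of the Rust pattern (ConfigError and parse_port are made-up names; on stable Rust the conversion hook that `?` uses is a From impl):

    use std::num::ParseIntError;

    // A made-up error domain for illustration.
    #[derive(Debug)]
    enum ConfigError {
        BadPort(ParseIntError),
    }

    // Model the error domain once: this From impl is what `?` calls
    // to translate the foreign error into ConfigError automatically.
    impl From<ParseIntError> for ConfigError {
        fn from(e: ParseIntError) -> Self {
            ConfigError::BadPort(e)
        }
    }

    fn parse_port(raw: &str) -> Result<u16, ConfigError> {
        // `?` returns early on Err and converts the error type for us;
        // no per-call-site wrapping boilerplate.
        Ok(raw.parse::<u16>()?)
    }

    fn main() {
        println!("{:?}", parse_port("8080")); // Ok(8080)
        println!("{:?}", parse_port("nope")); // Err(BadPort(...))
    }

The From impl is written once per foreign error type, and every `?` in the codebase reuses it; that's the "model the error domain once" part.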

