
Config/settings management is often paired with things that require at the very least an app reboot, whereas feature flags are explicitly something that should be capable of changing at will.

Now, could you have real-time config management that doesn't require a re-deploy/reboot of the app? Sure, but the typical 12-factor app can't really avail itself of that without significant rework.
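
To make the distinction concrete, here's a minimal Java sketch (all names hypothetical): the config value is read from the environment once at boot, while the flag is re-read on every call from a mutable source, so it can flip without a restart.

  import java.util.Map;
  import java.util.concurrent.ConcurrentHashMap;

  class FlagsVsConfig {
      // 12-factor config: read once at boot; changing it means a reboot/redeploy.
      static final String DB_URL = System.getenv("DATABASE_URL");

      // Feature flag: backed by something mutable (a map standing in for a
      // flag service here), so it can flip while the app is running.
      static final Map<String, Boolean> FLAGS = new ConcurrentHashMap<>();

      static boolean enabled(String flag) {
          return FLAGS.getOrDefault(flag, false);
      }

      public static void main(String[] args) {
          FLAGS.put("new-checkout", true); // flipped at runtime, no reboot
          System.out.println(enabled("new-checkout")); // true
      }
  }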


I mean, this mechanism wouldn't make things any harder to maintain for a server operator using entirely non-Google cert authorities. I'm pretty cynical on Google these days, but I don't see how this wouldn't be a boon to everybody pretty equally.


To tl;dr for people:

- As we've known for years, cryptographically-relevant quantum computers (CRQCs) could likely wreck digital security pretty massively

- For HTTPS, 2 out of its 3 uses of cryptography are vulnerable to a CRQC

- The currently accepted algorithms that fix these vulnerabilities transmit 30+ times the data of current solutions, which under less reliable network conditions (like mobile) can increase latency by as much as 40%

- Because attackers could store data now and decrypt it later with a CRQC, some applications need to deploy a solution now, so Chromium has enabled Kyber (aka ML-KEM) for those willing to accept that cost

- However, other algorithms are being worked on to reduce that data size; the problem is that server operators can generally only deploy one certificate at a time, and a newer one is unlikely to be supported by older clients like smart TVs, kiosks, etc.

- So they're advocating for "trust anchor negotiation": letting clients and servers negotiate which certificate to use, allowing servers to offer multiple at the same time

Honestly, a really impressively written article. I've understood for years the risk that a cryptographically-relevant quantum computer would pose, but I didn't really know/understand what was being done about it, or the current state of things.


If people get it wrong so regularly, what value is it providing as a concept? These concepts are supposed to help us reach something better; if you have to add 30 caveats to every part of it, all it did was hide its own complexity from you instead of managing it for you.


Because the tree is a nice abstraction for some problems. But sometimes you need a collection of pure functions. Sometimes it's best to think of your objects as data blobs going through a pipeline of map, filter, and reduce functions. Not every part of your application is the same; use the right abstraction for the job.
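
A minimal Java sketch of that pipeline style, with made-up data:

  import java.util.List;

  class Pipeline {
      public static void main(String[] args) {
          List<Integer> pricesInCents = List.of(1200, 99, 550, 4000);
          // data blobs flowing through map -> filter -> reduce
          int total = pricesInCents.stream()
              .map(p -> p + p / 10)      // map: add a 10% fee
              .filter(p -> p > 500)      // filter: drop small items
              .reduce(0, Integer::sum);  // reduce: total them up
          System.out.println(total);
      }
  }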


I'd ask what you mean by "fighting the browser"- as generally, the number one way to ruin the performance of your CSS is to introduce depth to it. In general, keeping everything isolated regularly leads to better rendering performance.


Avoiding the cascade at all costs, for example. It can introduce a lot of unintended consequences.

Another anti-pattern I have seen is the overuse of media queries to force the browser to do certain things, rather than embracing relative sizing constraints via intrinsic design and letting the flexbox and grid algorithms do most of the heavy lifting.

Here though I want to point out that isolation is relative, as is the cascade. I think it's important to leverage the cascade wherever you can, but that doesn't mean you are leveraging it from top to bottom per se; it does mean thinking more holistically about the context of styling.


To be fair, there was a time when flexbox did not exist and extensive use of media queries was the only way to create a responsive website.

Those efforts don't just disappear quickly, since that was the only way that worked across all browsers for nearly a decade.


Isn't the browser responsive by default? The only issue I see is taking a layout built for reflowable documents and wanting to create applications and magazine-like designs with it. It's possible now, but it's always more convoluted than something like the tools available on platforms like iOS and Android.


> Where I think inheritance works best is when the state in base classes is limited and the interface is quite clear. Ideally where you are meant to override is also well defined.

What benefit is inheritance providing here? What you described sounds mostly like a struct, at which point the only value the interface provides is possibly some computed fields.


When you scratch deep enough at programming, everything is structs and interfaces defining how you interact with them and how they interact with the world.

The best example of this (IMO) is how `AbstractMap` works in Java. [1]

In order to make a new object which implements the `Map` interface, you just have to inherit from the `AbstractMap` base class and implement 1 method, `entrySet`. This gives you a fully compliant `Map` object with very little work, which can then be progressively enhanced to provide the capabilities you want from the map.

This comes in handy when you can take advantage of the structure of an object to get a more optimal map, as we've done in some of our own work. For example, a `Map<LocalDate, T>` can be represented as a 3-node structure, with the first node keyed by year, the second by month, and the third by day. That can give you a particularly compact and fairly fast representation.

The value add here is you can start by implementing almost nothing and work your way up.

[1] https://docs.oracle.com/javase/8/docs/api/java/util/Abstract...
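
To make that concrete, here's a minimal sketch of the pattern (Java 9+; `ArrayMap` and its backing array are hypothetical). Only `entrySet` is written by hand; `AbstractMap` derives `get`, `containsKey`, `size`, `toString`, and the rest from it:

  import java.util.AbstractMap;
  import java.util.AbstractSet;
  import java.util.Iterator;
  import java.util.Map;
  import java.util.Set;

  // Read-only Map<Integer, String> over a plain array.
  class ArrayMap extends AbstractMap<Integer, String> {
      private final String[] values;

      ArrayMap(String... values) { this.values = values; }

      @Override
      public Set<Map.Entry<Integer, String>> entrySet() {
          return new AbstractSet<>() {
              @Override public int size() { return values.length; }

              @Override public Iterator<Map.Entry<Integer, String>> iterator() {
                  return new Iterator<>() {
                      private int i = 0;
                      @Override public boolean hasNext() { return i < values.length; }
                      @Override public Map.Entry<Integer, String> next() {
                          int k = i++;
                          return new SimpleImmutableEntry<>(k, values[k]);
                      }
                  };
              }
          };
      }
  }

With that, `new ArrayMap("a", "b").get(1)` returns "b", and `put` throws `UnsupportedOperationException` until you override it - which is the progressive-enhancement part.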


>In order to make a new object which implements the `Map` interface, you just have to inherit from the `AbstractMap` base class and implement 1 method, `entrySet`

Used to think that way, but I now prefer the alternative - passing function(s)/lambda(s) for the necessary functionality of the dependent class.

This way is actually more flexible, as you can change behavior without modifying the override or having a bunch of switches/if-else in your required function.

So instead of `entrySet` being defined inside MyClass, you would define it outside, or possibly as a static method, and pass it to AbstractMap when you create it.

That way you don't need to have every class implement a bunch of interfaces like Hashable, Orderable, etc. in order to get the desired behavior.

Now I guess you would come back with: you shouldn't be able to do that outside the class. But I think that restriction is also a bad idea. Python famously gets away with not having private/protected (although there is a way you can kind of get something similar).
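
A minimal Java sketch of what I mean (the `FnMap` name is made up): the `entrySet` behavior is defined outside the class and passed in at construction, rather than provided by overriding.

  import java.util.AbstractMap;
  import java.util.Map;
  import java.util.Set;
  import java.util.function.Supplier;

  // One generic adapter; the interesting behavior arrives as a lambda.
  class FnMap<K, V> extends AbstractMap<K, V> {
      private final Supplier<Set<Map.Entry<K, V>>> entries;

      FnMap(Supplier<Set<Map.Entry<K, V>>> entries) { this.entries = entries; }

      @Override
      public Set<Map.Entry<K, V>> entrySet() { return entries.get(); }
  }

Then `new FnMap<>(() -> Set.of(Map.entry("a", 1)))` behaves like the subclass version, but the behavior can be swapped per instance. (You still write one adapter subclass, but only once; everything after that is composed, not inherited.)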


It is just a matter of how much flexibility you want to expose through your API. Sometimes a rigid, stricter API is the right choice, where you want the API itself as the guardrail against non-standard patterns.


Scopes and closures give you guardrails and don't require gluing functions to state unnecessarily.


I understand, and what you shared is a perfect example of what I said- but I fundamentally disagree with the notion that it's the same between the two.

I think that in effect, as you associate more behavior with a particular struct (as opposed to with what you're attempting to do with said struct), the greater the expectation becomes that the struct is what you code around. More and more gets added to its state over time, and more expectations about behavior get added that don't need to exist.

Sure, you could say "Well, then just be strict about what behavior is expected in the interface"- but that effort wouldn't be necessary if we didn't make the struct the center of the behavior in the first place.


This works with Rust's traits as well, for example, Iterator. Or Ruby's mixins (which are inheritance, I guess, heh). It is super useful, but doesn't actually require inheritance, even if you can use inheritance to do it.
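
The nearest Java analog is an interface with default methods - one required method, derived behavior for free, and no base class or inherited state (hypothetical example):

  // Trait-style in Java: implement next() and the default methods come
  // along, without inheriting any state from a parent class.
  interface Counter {
      int next();

      default int skip(int n) {
          int last = 0;
          for (int i = 0; i < n; i++) last = next();
          return last;
      }
  }

  class Evens implements Counter {
      private int n = 0;
      public int next() { n += 2; return n; }
  }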


> The best example of this (IMO) is how `AbstractMap` works in Java.

I think it is fair to say that this is idiomatic Java, and for that reason it is a great example within the context of Java.

But does it translate to the abstract? Given your hypothetical ideal language that allows you to do anything you can imagine as you can best imagine it, is this still a good example, or would you reach for something else (e.g. composition)? Why/why not?


I'd be ready to agree if I could be pointed at a time that inheritance actually carries a real benefit- a time you would choose it over composition, if composition is available.


There is no benefit to the "tree of life" single inheritance [1].

What you want to reach for to achieve compositional behavior or polymorphism are type classes, traits, and interfaces. They attach methods to data without the silly "Cat is an Animal, Dog is an Animal" cladistic design buffoonery.

There's nothing wrong with OO, except for inheritance-based OO.

[1] Don't get me started on multiple inheritance. Instead of solving problems, they invented them.
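
A small Java (16+) sketch of the contrast, with hypothetical types: behavior attaches to the data through an interface, and nothing forces Dog and Robot into a shared ancestry.

  // No "is-a Animal" tree required: any type can opt in to the behavior.
  interface Speaks {
      String speak();
  }

  record Dog(String name) implements Speaks {
      public String speak() { return name + " says woof"; }
  }

  record Robot(int id) implements Speaks {
      public String speak() { return "unit " + id + " beeps"; }
  }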


What about a situation where you have an ECS, and if an entity has, say, LifeComponent() and ReproductionComponent(), it can be identified as an Organism (as opposed to, say, a Crate)? You can then have inheritance off of that: a composition can be identified as an Orangutan, or a Human, but only if it is an Organism. The Organism is basically a memoization of the Life and Reproduction components on the Orangutan.

Now, a human can create a child, but only if the parent object was a "Human". Am I thinking about this the wrong way?
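
Concretely, something like this Java sketch is what I have in mind (component names as above, everything else hypothetical) - Organism as a derived check over components rather than a base class:

  import java.util.Set;

  class EcsSketch {
      record LifeComponent() {}
      record ReproductionComponent() {}

      // An entity is just an id plus a bag of components.
      record Entity(int id, Set<Class<?>> components) {
          boolean has(Class<?> c) { return components.contains(c); }

          // "Organism" is a classification computed from components,
          // not a position in an inheritance tree.
          boolean isOrganism() {
              return has(LifeComponent.class) && has(ReproductionComponent.class);
          }
      }

      public static void main(String[] args) {
          Entity orangutan = new Entity(1,
              Set.of(LifeComponent.class, ReproductionComponent.class));
          Entity crate = new Entity(2, Set.of());
          System.out.println(orangutan.isOrganism()); // true
          System.out.println(crate.isOrganism());     // false
      }
  }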


Yes. I hadn't heard the term tree of life inheritance, but that is the heart of the problem. I think this could be somewhat mitigated by banning access to non-abstract parent methods and all grandparent methods (as well as all relations that aren't directly reachable by going up the tree). But at that point you might as well just use composition anyway.


  GreatGrandparent->setCancerChancePercent(10);
  Grandparent->mutateCancerChancePercent("+|-", 3);
  Parent
  Child

By banning access to GreatGrandparent's getCancerChancePercent() method, I won't know what the starting chance was, which will make it harder to determine nature vs. nurture. Isn't that what ancestry and genome mapping are doing? Going back up the tree?


Are you visiting from the iOS app (possibly Android too, but I didn't see anyone mention it)? That's where it's generally happening.


I use the website.


While I certainly agree, I've found that this is often an indication of too complex an architecture, and that a fundamental rethink is necessary. I've had projects that depend on [fp-ts], which end up incredibly generic-heavy but still make it entirely through a typecheck (not a build; TypeScript is just worse at that than tools like esbuild) in seconds at worst.

Obviously depends on your organization/project/application, but I do like these things as complexity-smells.

[fp-ts]: https://gcanti.github.io/fp-ts/


How large in lines of typescript are the projects you've used fp-ts or similar with?

We have about 3 million; when I discuss a slow type, I mean a type that contributes ~1 min of checking or more across all its uses in 3 million lines, analyzed from a build profile using Perfetto. I've looked at a generic-heavy library that's similar (?) to fp-ts, effect-ts (https://effect.website/), but I worry that the overhead - both at compile time with the complex types, and at runtime with the highly abstracted control flow that v8 doesn't seem to like - would be a large net negative for our codebase.


Nothing that large, admittedly- but I have gotten near the 1 million mark (prolly ~800k?) in one project. I'd also say that at that size (honestly, these days I reach for it pretty much by default) I'd go toward a monorepo that only runs CI on the packages that have changes, as running that much JS through even a typical eslint pass is gonna be a real chore. As a result, the "complex types" don't end up impacting things as much.

As to runtime: while v8 doesn't like it, what it likes even less is having code to run in the first place- and I've found that my FP-heavy projects often have fewer lines of code by a factor of 3 at worst, often as high as 15. So in general I didn't hit many perf issues, and where perf did matter, I'd rewrite that spot to not use the FP stuff and instead be written to purpose.

Basically, I use FP (and by extension fp-ts) as a good default (it increased velocity by enormous factors, and more so as time went on), then reach for the toolbox when the situation calls for it.

BIG ASTERISK, however: I don't use `fp-ts` much in React. Since React primarily depends on `Object.is` for comparisons, the pure nature of these libraries creates a need for a lot of tooling I wasn't able to find a satisfactory answer to. So most code like this was either accomplishing things outside of components (components would often call it, though) or was backend-focused (i.e., Node.js).


There are actually efforts in the TypeScript community attempting to do just that. Personally I think it'll end up being a waste, but these sorts of experiments, even when they fail, can often help along new discoveries.

And on the off-chance they get it right, then damn that's pretty great.


Fully agreed that failure is expected! I think it can still make great learning for all involved. And, 100% agreed that proving us wrong on this would be great!

