It's a MediaTek SoC, so the Linux experience will be Bad, to say the least. This thing will be running the oldest kernel possible with all kinds of nasty vendor hacks.
It seems like this is grasping for languages to expose recursors[1] or induction principles. These factor out the structure of an inductive type (e.g. trees) and leave the caller to simply plug in the behavior to perform for each possible situation that might arise during traversal.
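A recursor is easy to sketch in plain Python (the names here are illustrative, not from any particular language or library): the traversal of the tree lives in one place, and each caller supplies only the behavior for each constructor.

```python
from dataclasses import dataclass
from typing import Union

@dataclass
class Leaf:
    value: int

@dataclass
class Node:
    left: "Tree"
    right: "Tree"

Tree = Union[Leaf, Node]

def tree_rec(t: Tree, on_leaf, on_node):
    """Recursor for Tree: the recursion structure is written once here;
    callers only plug in what to do for each constructor."""
    if isinstance(t, Leaf):
        return on_leaf(t.value)
    return on_node(tree_rec(t.left, on_leaf, on_node),
                   tree_rec(t.right, on_leaf, on_node))

t = Node(Leaf(1), Node(Leaf(2), Leaf(3)))
total = tree_rec(t, lambda v: v, lambda l, r: l + r)          # sum of the leaves
depth = tree_rec(t, lambda v: 1, lambda l, r: 1 + max(l, r))  # height of the tree
```

Neither consumer repeats any traversal logic; the recursor factors it out once per inductive type.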
The Gruen transfer seems to make use of confusion or disorientation upon entering a new physical space. On the other hand, the phenomenon described here, where data search implementations (e.g. a feed) intentionally return things that were not asked for, does not rely on user confusion. The user already made a perfectly legible request by performing the query (e.g. opening the feed). Ignoring or misconstruing this request is gaslighting: the site pretends that the user asked for something different than they actually did and tries to sneak this subterfuge past them. It's rampantly unethical. The purported justification I've heard is that fuzzy search can find results in cases where exact matching does not, but taking the choice of fuzzy vs. strict search out of the user's hands is unethical, because it is motivated by the opportunity to introduce mistakes and pervert intention.
Sites like Etsy implement this dark pattern in ways intentionally designed to make CSS-based blocking of injected sponsored products difficult. The arms race between user agents and corporate manipulation continues, and corporate web designers will use every tool available to subject users nonconsensually to their preferred experience. This is why I consider it a net loss for users to add functionality to the "web platform". The corporation is your enemy, and they're well-funded.
Personally I've been inspired by nnethercote's logs (https://nnethercote.github.io/) of incremental single-digit-percentage performance improvements to rustc over the past several years. The serial portion of compilers is still quite significant, and efforts to e.g. parallelize the entire rustc frontend are heroic slogs that have run into subtle semantic problems (deadlocks and races), making them very hard to land. Not to disparage those working on that approach, but it is really difficult! Meanwhile, dozens of small speedups accumulate into really significant performance improvements over time.
I concur. I looked pretty hard into adapting Serf as part of a custom service mesh and it had some bonkers designs, such as a big "everything" interface used just to break a cyclic module dependency (perhaps between the CLI and the library? I don't recall exactly), as well as lots of stuff that only made sense if you wanted "something to run Consul on top of" rather than a carefully-designed tool of its own with limited but cohesive scope. It seemed like a lot of brittle "just-so" code, which to some extent is probably due to how Go discourages abstraction, but it really rubbed me the wrong way.
Having done some nontrivial Bazel/Starlark hacking, I completely agree that lightweight static types would be a good usability improvement. But I want to point out that Starlark is actually not Turing complete, which is imo one of its more interesting characteristics. Recursion is forbidden (https://github.com/bazelbuild/starlark/blob/master/spec.md#f...) and loops must be structural; there is no while loop or other unbounded iteration construct. Starlark is one of the more capable and mainstream non-Turing-complete languages out there, and doesn't resemble the other common ones, which mostly show up in theorem provers. I think the logic in a build system that needs to reason about incremental builds absolutely should be guaranteed to terminate, but in some particularly painful situations I've had to resort to iteration bounded by smart-contract-style "gas" parameters.
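The "gas" workaround looks roughly like this (sketched in Python rather than Starlark, with a hypothetical helper name): since unbounded loops are unavailable, you iterate a bounded `for` loop over an explicit budget and bail out early once a fixed point is reached.

```python
def bounded_fixpoint(step, state, gas=1000):
    # Hypothetical helper, not part of Starlark or any stdlib: iterate
    # `step` until the state stops changing, but cap the number of
    # iterations with an explicit "gas" budget, since there is no
    # unbounded iteration construct to lean on.
    for _ in range(gas):
        nxt = step(state)
        if nxt == state:
            return state
        state = nxt
    raise RuntimeError("gas exhausted before reaching a fixed point")

# Example: reachability closure over a small graph.
edges = {1: {2}, 2: {3}, 3: set()}
reachable = bounded_fixpoint(
    lambda s: s | {m for n in s for m in edges[n]},
    frozenset({1}),
)
```

The price of guaranteed termination is that a too-small budget turns a correct computation into an error, which is exactly the painful situation described above.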
EDIT: as a parting shot, dhall is another non-Turing-complete language in common usage, but its claim to fame is that it gets used in places that arguably shouldn't be doing any computation at all.
These types don't seem to escape the scope of what can be described with algebraic types, but the relationships between them seem like you're looking for a notion of type-level functions: subset ≡ X => 2^X, partial ≡ A×B => (A+1)×partial(B)
Consider the case of Partials - we might like to restrict the Partials to different subsets of the fields for different purposes; consider the modelling of an inverted index.
Certainly it is possible to represent each specific case as some algebraic type; but beyond trivial cases, I find that when I need one of these types, I quickly discover that there are myriad ways to express it, none of them uniquely natural, unlike the way a sum-of-products type (and its terms) can be drawn pretty much unambiguously from a specification.
This matters especially when e.g. I need to evolve my types in a data migration.
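To make the Partial discussion concrete, here is a minimal Python sketch (hypothetical types, not from the thread) of partial(A×B) = (A+1)×(B+1): each field gains a "missing" alternative, here None. Note that every purpose-specific subset of fields would need its own handwritten type, which is exactly where a type-level function would help.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class User:          # A x B
    name: str
    email: str

@dataclass
class PartialUser:   # (A+1) x (B+1): each field may also be "missing" (None)
    name: Optional[str] = None
    email: Optional[str] = None

def merge(base: User, patch: PartialUser) -> User:
    """Apply a partial update, keeping base values where the patch is missing."""
    return User(
        name=base.name if patch.name is None else patch.name,
        email=base.email if patch.email is None else patch.email,
    )
```

This is one of the "myriad ways to express it": a TypedDict with optional keys, or a dict of field names, would model the same type, and none of the encodings is uniquely natural.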
One interesting perspective is to view the sequence of lists -> trees -> DAGs -> general graphs as a loosening of restrictions:
- list nodes may have at most one child
- tree nodes may have multiple children, but still a single parent
- DAG nodes may have multiple parents, though restricted by a topological ordering
- graph nodes may have multiple parents and children from anywhere in the collection
Lists and trees can be fully captured by sum and product types, but extending this representation style to DAGs and graphs doesn't work--you either get inefficiency (for DAGs) and then infinite regress (for cyclic graphs) attempting to continue the "syntactic" style of representation, or you need to adopt an "indirect" representation based on identifiers or indices or hash consing.
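A minimal sketch of the "indirect" representation (illustrative Python, with a hypothetical `intern` helper doing naive hash consing over an arena): nodes refer to each other by index, so shared subterms are stored once instead of being duplicated syntactically.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Node:
    label: str
    children: tuple  # indices into the arena, not nested nodes

def intern(arena: list, label: str, *children: int) -> int:
    """Hash-consing insert: an identical node gets back its existing index.
    (Naive linear scan for clarity; a real implementation would use a dict.)"""
    node = Node(label, tuple(children))
    try:
        return arena.index(node)
    except ValueError:
        arena.append(node)
        return len(arena) - 1

arena: list = []
x = intern(arena, "x")
f = intern(arena, "f", x, x)  # f(x, x): "x" is stored once, referenced twice
x2 = intern(arena, "x")       # sharing: the same index comes back
```

A cyclic graph fits the same scheme, since an index can refer forward or backward, which is precisely what the nested "syntactic" style cannot express.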
> Lists and trees can be fully captured by sum and product types, but extending this representation style to DAGs and graphs doesn't work--you either get inefficiency (for DAGs) and then infinite regress (for cyclic graphs) attempting to continue the "syntactic" style of representation
You also need "inductive" recursive types to represent lists and trees, in addition to sums and products.
One way of representing the type of a list of T is like:
mu X.1+T*X
(Hence sum and product types, but also inductive or "least fixed point" recursion.)
But you can also use "coinductive" recursive types to represent "processes" (or "greatest fixed point" recursion) with almost the same notation:
nu X.1+T*X
This represents an FSM which at any point yields either a termination (the "1" on the left side of the recursive sum) or a value and a continuation (the "T" in "T*X" is the value and the "X" is the continuation).
This doesn't answer every question about graph representation, obviously, but it's a useful tool for attacking some graph problems you'd like to represent "syntactically" as you say (though you have to think in terms of "coinduction" instead of "induction" e.g. bisimilarity instead of equality).
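Both fixed points can be sketched in Python: a fold consumes the inductive (necessarily finite) list, while a generator plays the coinductive process, which may run forever and can only be observed finitely.

```python
from itertools import islice

# mu X. 1 + T*X: finite lists, consumed by structural recursion (a fold).
def fold(xs, on_nil, on_cons):
    acc = on_nil
    for x in reversed(xs):
        acc = on_cons(x, acc)
    return acc

# nu X. 1 + T*X: a possibly-infinite process. A generator models the
# "value and continuation" branch; this one never takes the "1" (stop) branch.
def naturals():
    n = 0
    while True:
        yield n  # the T in T*X; resuming the generator is the X
        n += 1

total = fold([1, 2, 3], 0, lambda x, acc: x + acc)
prefix = list(islice(naturals(), 4))  # only finite observations are possible
```

The asymmetry is the point: you can fold the whole list, but you can never "finish" the stream, only sample it, which is why equality gives way to bisimilarity on the coinductive side.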
That’s a neat idea. The other side of it might be expressiveness.
The more expressive constructs are usually more productive and concise. The less expressive constructs are usually easier to optimize or analyze with a machine. The old rule, as in LANGSEC, is to pick the least-expressive option that works. Some people also develop transforming code (e.g. metaprogramming) to let you write highly-expressive code that generates correct, low-expressiveness code.
I like thinking of this concept via free constructions.
As is somewhat commonly known, the free monoid is the list type; monoids need not be commutative, so we get a sense of "direction": a list has a start and an end.
If we add commutativity and look at free commutative monoids, we find they are equivalent to multisets.
If we take associativity away from monoids and look at free magmas, we get binary trees (free semigroups, which instead drop the unit, give nonempty lists).
In some sense removing constraints from the binary operator results in more general free types. Would be interesting to find what free construction makes digraphs but I have to bounce.
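The first two free constructions can be checked directly in Python, using lists for the free monoid and `collections.Counter` as a stand-in for multisets (the free commutative monoid):

```python
from collections import Counter

# Free monoid on {a, b, c}: words, i.e. lists. Concatenation is
# associative and has a unit, but is not commutative, so the order
# ("direction") of the generators is remembered.
assert (["a"] + ["b"]) + ["c"] == ["a"] + (["b"] + ["c"])  # associativity
assert ["a"] + [] == ["a"]                                 # unit
assert ["a"] + ["b"] != ["b"] + ["a"]                      # not commutative

# Adding commutativity quotients words down to multisets: only the count
# of each generator survives. Counter addition is commutative.
assert Counter("ab") + Counter("c") == Counter("c") + Counter("ab")
```

Each law we impose on the operation collapses more terms together, which is the sense in which fewer laws (magmas) give the more general free type (trees).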
There are a few ways to handle this design question ("what happens if something needs to interact with a process while it's blocked in a system call?") more or less sanely, and UNIX chooses the least sane one (making userspace deal with the complexity of system calls possibly doing part or none of the work that was requested). Other operating systems might fully transparently guarantee that the system call completes before the process is notified of said interaction, but this isn't compatible with IPC as "lightweight" (unbuffered and without backpressure) as signals.
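That userspace burden is easy to demonstrate: a POSIX write() may transfer only part of a buffer, or be interrupted by a signal before transferring anything (EINTR), so a robust caller must loop. A sketch in Python (illustrative; modern Python already retries EINTR internally per PEP 475, but the partial-transfer loop is still the caller's job):

```python
import os

def write_all(fd: int, data: bytes) -> None:
    """write() may transfer fewer bytes than requested, or fail with EINTR
    if a signal arrives first; UNIX leaves it to userspace to loop until
    the whole buffer is gone."""
    view = memoryview(data)
    while view:
        try:
            n = os.write(fd, view)
        except InterruptedError:  # EINTR: nothing was transferred, retry
            continue
        view = view[n:]          # partial write: resume after n bytes

# Demo over a pipe (kept smaller than the pipe buffer so this doesn't block;
# writes larger than the buffer are exactly where partial transfers bite).
r, w = os.pipe()
data = b"spam" * 1000
write_all(w, data)
os.close(w)
received = bytearray()
while chunk := os.read(r, 512):
    received.extend(chunk)
```

Every correct UNIX program ends up carrying some variant of this loop, which is the complexity the comment is pointing at.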
The Right Thing is to make system call submission atomic and asynchronous, only waiting on completion by explicit choice, and remove signals entirely in favor of buffered message-passing IPC. This is basically the world we're approaching with io_uring and signalfd, except for ugly interaction with coalescing and signal dispositions (see https://ldpreload.com/blog/signalfd-is-useless), and the fact that many syscalls still can't be performed through io_uring.
If UNIX had a better API for process management, people wouldn't see signals as necessary, but that's its own can of worms with its own Linux-specific partial fix (pidfd) and genre of gripe article (e.g. https://news.ycombinator.com/item?id=35264487).