The fallacy is that AOSP (which GrapheneOS forks from) and Chromium (used to install it) are both dependent, to some extent, on Google's engineers, money, and willingness to keep the platforms open.
Is your alternative that someone should build a complete from-scratch alternative OS that can still be booted on the same hardware?
For the time being, AOSP and Chromium are still open source, so why not piggy-back off of all that labor and development to provide what GrapheneOS users want at minimal cost and effort?
Desktop Linux actually works very nicely on smartphones, provided all the drivers are there. I lived with a PinePhone running FVWM on Xorg for a couple of years, and if the hardware hadn't crumbled away I'd still be using it today.
No need to "build a complete from-scratch alternative OS" when that was already done 30 years ago.
If the source is fully open (it is), then detecting and disabling backdoors is completely possible. Not to mention that other OS projects face the same risks.
If Google cuts development of AOSP in favor of some closed-source alternative, the GrapheneOS team could simply continue development of AOSP on their own.
If you define the goal that way, then you actually need to clear a much higher bar of making your own hardware. Personally I'd much rather maintain a long term fork of AOSP than have to design, market, sell, and support a new device.
This looks like the same kind of situation as when I noticed FOSDEM corridors starting to fill up with Apple laptops, but apparently the irony is lost on new generations.
I remember about 10 or 15 years ago somebody pointed out that a big chunk of the GNOME devs used Apple laptops, even at public appearances, and it answered a lot of my questions about the state of the project.
They really don't. It's just that development of custom ROMs like GrapheneOS is centered around Pixels. Plenty of other devices have unlockable bootloaders, but the custom ROM scene is so small that concentrating on a couple of devices is the only way to keep development moving forward. Same reason Asahi Linux is the only option on Apple Silicon Macs.
Many have unlockable bootloaders (though the number is rapidly declining, with Samsung closing up), but not many have relockable bootloaders. This is one of the things GrapheneOS has set as a minimum standard, hence the reliance on Pixels. There are a few other specific things the Titan chip provides that they rely on, but relocking is the main one.
Hardly anything is left: Apple and Microsoft have their own issues, the Web is basically ChromeOS aka Google, and I still cannot buy GNU/Linux or BSD laptops at the local computer store.
Ok, I should have been more specific. By ADT I meant sum types or, to be even more exact, discriminated unions. Everyone and their mother has product types.
Java doesn't have discriminated unions for sure (nor does C# as of 8.0). It does have a `|` operator that can cast two objects to the nearest common ancestor.
Having nullable support is the issue. I've played around with it in C#. Nullables are horrible. And I say this as someone who was formerly in the "Option<T> is horrible" camp.
You can easily cause type confusion, and the number of times a supposedly non-nullable value has turned out to be null (or at least appeared that way in the debugger) was greater than zero. To be fair, there was reflection and code generation involved.
I advise you to update your language knowledge to Java 24, C# 13, Scala 3.
From another comment of mine,
type Exp =
    UnMinus of Exp
  | Plus of Exp * Exp
  | Minus of Exp * Exp
  | Times of Exp * Exp
  | Divides of Exp * Exp
  | Power of Exp * Exp
  | Real of float
  | Var of string
  | FunCall of string * Exp
  | Fix of string * Exp
;;
Into the Java ADTs that you say Java doesn't have for sure,
public sealed interface Exp permits UnMinus, Plus, Minus, Times, Divides, Power, Real, Var, FunCall, Fix {}
public record UnMinus(Exp exp) implements Exp {}
public record Plus(Exp left, Exp right) implements Exp {}
public record Minus(Exp left, Exp right) implements Exp {}
public record Times(Exp left, Exp right) implements Exp {}
public record Divides(Exp left, Exp right) implements Exp {}
public record Power(Exp base, Exp exponent) implements Exp {}
public record Real(double value) implements Exp {}
public record Var(String name) implements Exp {}
public record FunCall(String functionName, Exp argument) implements Exp {}
public record Fix(String name, Exp argument) implements Exp {}
And a typical ML style evaluator, just for the kicks,
public class Evaluator {
    // context (the variable environment), funcTable (the function
    // definitions), and the apply/applyFix helpers are assumed to be
    // defined elsewhere in the program
    public double eval(Exp exp) {
        return switch (exp) {
            case UnMinus u -> -eval(u.exp());
            case Plus p -> eval(p.left()) + eval(p.right());
            case Minus m -> eval(m.left()) - eval(m.right());
            case Times t -> eval(t.left()) * eval(t.right());
            case Divides d -> eval(d.left()) / eval(d.right());
            case Power p -> Math.pow(eval(p.base()), eval(p.exponent()));
            case Real r -> r.value();
            case Var v -> context.valueOf(v.name());
            case FunCall f -> apply(funcTable.get(f.functionName()), f.argument());
            case Fix fx -> applyFix(fx.name(), fx.argument());
        };
    }
}
It implies that there are some differences between sealed interfaces and discriminated unions. Perhaps how they handle value types (structs and ref structs).
Then why did you tell me to update my knowledge of C#? It wasn't wrong for the discussion at hand.
Also, you failed to mention that generic ADTs in Java are still abysmal (it's relevant, because the topic started as boosting productivity in Java with ADTs; I don't find ClassCastException as really boosting anything outside my blood pressure):
sealed interface Result<T, E> permits Ok, Error { }
record Error<E>(E error) implements Result<Object, E> {}
record Ok<T>(T value) implements Result<T, Object> {}

//public <T, E> Object eval(Result<T, E> exp) {
//    return switch (exp) {
//        case Error<E> error -> error.error();  // does not compile
//        case Ok<T> ok -> ok.value();
//    };
//}

public <T, E> Object eval(Result<T, E> exp) {
    return switch (exp) {
        case Error error -> (E) error.error();  // raw type, unchecked cast
        case Ok ok -> (T) ok.value();
    };
}
Guess I'll wait for Java 133 to get reified generics.
Yeah, Java's generics still kind of suck. There's a rumor that after Project Valhalla, the Java language maintainers might add reified generics. However, I think the current state of Java's ADTs & generics isn't that bad for most purposes.
Though due to its nullable-by-default type system and backward compatibility, there are a decent number of footguns if you're trying to mix Java's FP & ADT features with code that uses nulls.
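A minimal sketch of one such footgun (the `Shape` hierarchy is hypothetical): a pattern-matching `switch` over a sealed type throws `NullPointerException` the moment it receives `null`, unless you opt in with an explicit `case null` (Java 21+):

```java
sealed interface Shape permits Circle, Square {}
record Circle(double r) implements Shape {}
record Square(double side) implements Shape {}

public class NullFootgun {
    static String describe(Shape s) {
        return switch (s) {
            case null -> "no shape";               // without this arm, a null throws NPE
            case Circle c -> "circle r=" + c.r();
            case Square sq -> "square side=" + sq.side();
        };
    }

    public static void main(String[] args) {
        System.out.println(describe(new Circle(1.0))); // circle r=1.0
        System.out.println(describe(null));            // no shape
    }
}
```

So null-hostile old code can hand your shiny exhaustive switch a value the type system pretends cannot exist.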
About your code example, you could just do something like this to avoid explicit casting:
sealed interface Result<T, E> {
    record Ok<T, E>(T value) implements Result<T, E> {}
    record Error<T, E>(E error) implements Result<T, E> {}

    public static <T, E> Object eval(Result<T, E> res) {
        if (res instanceof Error<T, E>(E e)) // Rust's if let
            System.out.println(e);
        return switch (res) {
            case Ok(T v) -> v;
            case Error(E e) -> e;
        };
    }
}
The new "pattern matching" feature of `instanceof` is certainly handy to avoid stupid ClassCastException.
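A small illustrative sketch (the values are made up): the pattern variable is bound only when the test succeeds, replacing the old test-then-cast dance that produced those ClassCastExceptions:

```java
public class InstanceOfDemo {
    public static void main(String[] args) {
        Object obj = "hello";

        // Old style: test, then cast explicitly (easy to get wrong elsewhere)
        if (obj instanceof String) {
            String s = (String) obj;
            System.out.println(s.length());      // 5
        }

        // Java 16+: the pattern variable s is in scope only if the test passed
        if (obj instanceof String s) {
            System.out.println(s.toUpperCase()); // HELLO
        }
    }
}
```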
> Because your English reading comprehension missed the fact I was talking about the nullability improvements.
Thank you :P Never claimed to be a native English speaker.
Were there any meaningful changes in nullability semantics between C# 8.0 and C# 14.0? The issue I encountered was related to a complex game engine doing some runtime reflection, dependency injection, code gen and so forth.
I also never claimed to be an ML expert. But whether they're reified or not doesn't change my point that ADTs in Java, much like generics, look like a design afterthought.
Yes, the secret of Rust is that it offers both a) some important but slightly subtle language features from the late '70s that were sadly not present in Algol '52 and are therefore missing from popular lineages, and b) a couple of party tricks, in particular the ability to outperform C on silly microbenchmarks. b) is what leads people to adopt it, and a) is what makes it non-awful to program in. Yes, it's a damning indictment of programming culture that people did not adopt pre-Rust ML-family languages, but it could be worse: they could be not adopting Rust either.
>important but slightly subtle language features from the late '70s
Programming-language researchers didn't start investigating linear (or affine) types till 1989. Without the constraint that vectors, boxes, strings, etc, are linear, Rust cannot deliver its memory-safety guarantees (unless Rust were radically changed to rely on a garbage collecting runtime).
>it's a damning indictment of programming culture than people did not adopt pre-Rust ML-family languages
In pre-Rust ML-family languages, it is harder to reason about CPU usage, memory usage and memory locality than it is in languages like C and Rust. One reason for that is the need in pre-Rust ML-family langs for a garbage collector.
In summary, there are good reasons ML, Haskell, etc, never got as popular as Rust.
> Programming-language researchers didn't start investigating linear (or affine) types till 1989.
Sure, but as ModernMech said, the vast majority of Rust's benefits come from having sum types and pattern matching.
> In pre-Rust ML-family languages, it is harder to reason about CPU usage, memory usage and memory locality than it is in languages like C and Rust.
Marginally harder for the first two and significantly harder for the last, sure. None of which is enough to matter in the overwhelming majority of cases where Rust is seeing use.
> Sure, but as ModernMech said, the vast majority of Rust's benefits come from having sum types and pattern matching.
Doubt. There were lots of languages giving you just that, and they never had this amount of hype. See Scala, OCaml, Haskell, etc.
Rust has one unique ability, and many shared by other languages. It's quite clearly popular for the former (though languages are packages, so of course it helps that it's a well put-together language all around).
> There were lots of languages giving you just that, and they never had this amount of hype. See Scala, OCaml, Haskell, etc.
They weren't hyped because they didn't have a silly party trick like microbenchmark performance. But they give you all the practical benefits of Rust and more.
Scala, OCaml, and Haskell all approach programming from a functional-first perspective. What no other language did before Rust was to bring those features to such a well-designed imperative core.
And while this was necessary to Rust's success, I don't think it was sufficient, insofar as it also needed a good deal of corporate backing, a great and welcoming community, and luck to be at the right place at the right time.
Haskell never tried to be more than an academic language targeting researchers. OCaml never had a big community or corporate backing. Scala never really had a niche; the most salient reason to use it is if you're already in the Java ecosystem and want to write functional code. The value propositions for each are very different, so these languages didn't receive the same reaction as Rust despite offering similar features.
Scala is distinctively a mixed FP and OOP language. There are functional proponents doing full monads and whatnot, but there are just as many followers of a more balanced approach (see Li Haoyi's libs). Though Scala definitely had/has quite a niche around it.
Well, I think Scala's main problem here is that it can still have runtime errors from null values, so it doesn't really have the same runtime safety guarantees that pattern matching brings to Rust / Haskell / OCaml. For example, matching a null reference against case patterns throws a scala.MatchError at runtime in Scala, while the equivalent wouldn't compile in Rust. Scala is an okay language; its main benefit, as far as I can tell, is that it's a way to write JVM code without having to write Java.
That's a theoretical problem, not a practical one. Scala programmers and Scala libraries don't use null and usually have a linter that will reject code like that. (You can hit null in Scala because you didn't check an FFI call properly, but that happens in Rust too)
The practice of software engineering and language design have improved considerably after the realization of two important facts:
1) If something is technically possible, programmers will not only do it but abuse it.
2) You can't enforce good programming practice at scale using norms.
Linters and, as the sibling points out, the addition of a recent compiler flag (which is kind of an admission that it is an issue) are the opposite of the approach Rust takes, which is to design the language to not allow these things at all.
> you didn't check an FFI call properly, but that happens in Rust too)
Which is why FFI is unsafe in Rust, so nulls are opt-in rather than opt-out. Having sensible security defaults is also a key learning of good software engineering practice.
> 1) If something is technically possible, programmers will not only do it but abuse it.
> 2) You can't enforce good programming practice at scale using norms.
Not quite. Programmers will take the path of least resistance, but they won't go out of their way to find a worse way to do things. `unsafe` and `mem::transmute` are part of Rust, but they don't destroy Rust's safety merits, because programmers are sufficiently nudged away from them. The same is true with unsafePerformIO in Haskell or null in Scala or OO features in OCaml. Yes it exists, but it's not actually a practical issue.
> the addition of a recent compiler flag (which is kind of an admission that it's not not an issue)
Not in the way you think; the compiler flag is an admission that null is currently unused in Scala. The flag makes it possible to use Kotlin-style null in idiomatic Scala by giving it a type. (And frankly I think it's a mistake)
> is the opposite approach Rust takes, which is to design the language to not allow these things at all.
Every language has warts, Rust included. Yes, it would be better to not have null in Scala. But it's absolutely not the kind of practical issue that affected adoption (except perhaps via FUD, particularly from Kotlin advocates). Null-related errors don't happen in real Scala codebases (just as mem::transmute-related errors don't happen in real Rust codebases). Try to find a case of a non-FFI null actually causing an issue.
C only got to its current performance state when optimizing compilers taking advantage of UB became a common thing; during the 8- and 16-bit home computer days they were hardly any better than writing Assembly by hand, hence why books like Zen of Assembly Language left such a mark.
So if we are speaking of optimizing compilers, there is MLton, which also ensures that the application doesn't blow up in strange ways.
The problem is not people getting to learn these features from Rust, glad that they do, the issue is that they think Rust invented them.
Sorry, my post wasn't meant to imply Rust invented those things. My point was that Rust's success as a language is due to those features.
Of course there's more to it, but what Rust really does right is blend functional and imperative styles. The "match" statement is a great way to bring functional concepts to imperative programmers, because it "feels" like a familiar switch statement, but with super powers. So it's a good "gateway drug" if you will, because the benefit is quickly realized ("Oh, it caught that edge case for me before it became a problem at runtime, that would have been a headache...").
From there, you can learn how to use match as an expression, and then you start to wonder why "if" isn't an expression in every language. After that you're hooked.
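For comparison, Java has since picked up part of this too; a minimal sketch using Java 14+ switch expressions (the enum is made up), where every arm must yield a value and the compiler rejects non-exhaustive coverage:

```java
public class MatchExpr {
    enum Signal { RED, YELLOW, GREEN }

    public static void main(String[] args) {
        Signal s = Signal.YELLOW;
        // switch as an expression: it evaluates to a value,
        // and dropping any enum constant is a compile error
        String action = switch (s) {
            case RED -> "stop";
            case YELLOW -> "slow down";
            case GREEN -> "go";
        };
        System.out.println(action); // slow down
    }
}
```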
I mean, "fearless concurrency" is a hyped-up phrase that is definitely exaggerated, but compared to the C world, where you are already blowing off your leg in single-threaded code, let alone with multiple threads, Rust is an insanely huge win. And it shows in, e.g., the small Unix tool rewrites.
Sure, rewrites are most often better simply by being rewrites, but the kind of parallel processing they do may not be feasible in C.
> Yes it's a damning indictment of programming culture than people did not adopt pre-Rust ML-family languages, but it could be worse, they could be not adopting Rust either.
I'll say that for a long time I've been quite pleased with the general direction of the industry in terms of language design and trends around things like memory safety. For a good many years we've seen functional features being integrated into popular imperative languages, probably since map/reduce became a thing thanks to Google. So I'll give us all credit for coming around eventually.
I'm more dismayed by the recent AI trend of asking an AI to write Python code and then just going with whatever it outputs. I can't say that seems like a step forward.
True. Well, usually. In some cases the property you're proving is so simple you wouldn't need any tests; for example, in a compression library you just check that decompress(compress(x)) == x. If that is proven, there's no need for tests.
But yeah usually it's a good idea to have a few tests anyway. You just need fewer tests the more properties you can prove formally.
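As a concrete sketch of that round-trip property, spot-checked at runtime with the JDK's built-in GZIP streams (a test, not a proof):

```java
import java.io.*;
import java.util.Arrays;
import java.util.zip.*;

public class RoundTrip {
    static byte[] compress(byte[] data) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (GZIPOutputStream gz = new GZIPOutputStream(bos)) {
            gz.write(data); // closing the stream flushes the final block
        }
        return bos.toByteArray();
    }

    static byte[] decompress(byte[] data) throws IOException {
        try (GZIPInputStream gz = new GZIPInputStream(new ByteArrayInputStream(data))) {
            return gz.readAllBytes();
        }
    }

    public static void main(String[] args) throws IOException {
        byte[] input = "hello hello hello".getBytes();
        // the round-trip property: decompress(compress(x)) == x
        System.out.println(Arrays.equals(input, decompress(compress(input)))); // true
    }
}
```

A property-based testing library would feed many random inputs through the same check; a formal proof covers all of them at once.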