Yes, similar discussion across two separate articles:
(1) article from General Analysis on "Supabase MCP can leak your entire SQL database"
(2) article from Oso talking about why authorization in AI is hard, what to do about it, which references the General Analysis article
Right, because otherwise you end up with a split discussion and people miss stuff, moderators end up having to merge them, etc. Immediate followups count as dupes (in HN's weird dupery algebra), they're better off linked in the active thread.
There are many broadcasting laws worldwide, many quite archaic. Even Radio Garden got meaningfully restricted in the UK (only licensed national radio stations are allowed by a high court ruling). I worry for projects like TV Garden but they are undoubtedly very cool.
A UK High Court ruled in 2019 that websites like TuneIn are distributing illegal music[0]. It went to appeal, but the ruling was upheld. There hasn't been much clarification beyond that, nor very clear enforcement. But the precedent this ruling set makes companies fear repercussions if they accidentally link to a stream whose content isn't licensed for the UK. To interpret this ruling broadly would be to break the internet[1]:
> The claimants say that a finding for the defendant will fatally undermine copyright. The defendant says that a finding for the claimants will break the internet.
As usual, this happened due to a rather rabid approach to copyright by big American labels. They may be legally in the right, though their actions, as always, have meaningful negative externalities. How far those externalities reach in this case is unclear, but TuneIn and Radio Garden have both blocked non-UK streams for UK listeners.
It is rather awkward that the US rights-holders chose to sue TuneIn in the UK rather than the US radio broadcasters that stream online without appropriate licenses. However, TuneIn was profiting from premium subscriptions tied to content it knew didn't pass legal muster, and its service was fundamentally built on such content. There are certainly many things to be said about it, but unfortunately the debate has already been settled by the appeals court in the UK.
Overall, the UK TuneIn service was valuable to the public. And it is an example of such value being destroyed by copyright law. This is yet another topic on which much has already been said.
> Overall, the UK TuneIn service was valuable to the public.
I agree about stream directory services in general, but I'm a bit on the fence about TuneIn in particular.
It started out very useful, especially as the de facto backbone for Google Home devices – I believe they back or at least used to back "Hey Google, play <station name>".
But they later started playing "pre-roll ads", and I think more recently even playing ads over the live content, and I'm not entirely sure whether they share the revenue from those, or from the premium subscriptions that avoid the ads, with the underlying radio stations.
Why not? Public broadcast TV stations want to be viewed, just like web radio streams!
That said, the first one I tried (a German public broadcaster) was showing a static image of “this programme is currently unavailable for legal reasons”. (I believe they do IP-based geofencing for legal/broadcasting rights reasons.)
Yeah, a channel being a public broadcaster doesn't mean some of the content it shows wasn't commercially produced, with a license purchased for that country's geographical area only.
I've tried watching some Italian TV channels, and some content was not available for streaming. It's a common practice here. It also applies to satellite-transmitted channels; they usually don't have the license to show some movies on that version (you can only see them on the terrestrial signal).
There was a high-profile court case (Aereo, ultimately decided by the US Supreme Court in 2014) where a start-up was trying to sell rebroadcast public TV, and it was ruled illegal. They even tried "renting" miniature TV antennae to users under the legal theory that they never made a "copy". Sad to see it was shot down.
This is very different, though: the streams are provided by the broadcasters themselves, not by somebody who receives their signal and then rebroadcasts it.
If they didn't want their content watched abroad, they would add geoblocking or authentication. Some of the ones listed on TFA actually do that for parts of their program.
Its generic system, for example, is built on top of comptime. A generic struct is just a function that takes a type as an argument and returns a struct type.
```
fn Vec(comptime T: type) type {
    return struct {
        // ... fields and methods using T, e.g. items: []T
    };
}
```
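Calling `Vec(i32)` then yields a concrete struct type, and since comptime calls with identical arguments are memoized by the compiler, every `Vec(i32)` refers to the same type.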
IMO having first-class syntax for generic type parameters is better, but this demonstrates OP's point.
Exactly, the point is that C++ added templates as a huge new language feature, while in Zig it is just one of the things that is immediately possible thanks to comptime.
well, not quite, since you can pass non-type things of generally any level of data-type complexity (as long as it's comptime-valid, which only excludes certain kinds of mutation), and do stuff with them that you couldn't in C++.
You absolutely can't do that in C++ with consteval + template. C++ would need support for reflection to do that, and maybe it will get it in 10 years, maybe not, but as of today this would not be possible.
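For example, here is a minimal sketch (all names are mine, purely for illustration) of a comptime value of a struct type, not just a type, parameterizing the generated code:

```
const std = @import("std");

// An ordinary struct used as a compile-time configuration value.
const Config = struct {
    len: usize,
    greeting: []const u8,
};

fn Banner(comptime cfg: Config) type {
    return struct {
        // Both the array length and the stored constant come from the
        // comptime value, not from a type parameter.
        buf: [cfg.len]u8 = undefined,
        pub const greeting = cfg.greeting;
    };
}

pub fn main() void {
    const B = Banner(.{ .len = 8, .greeting = "hello" });
    const b: B = .{};
    std.debug.print("{s}, buf len = {d}\n", .{ B.greeting, b.buf.len });
}
```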
Furthermore, the original argument wasn't about whether something can or can't be done in C++; it was that this one feature in Zig subsumes what would require a multitude of features from C++, such as consteval, templates, SFINAE, type traits, and so on and so forth...
Instead of having all these disparate features all of which work in subtly different ways, you have one single feature that unifies all of this functionality together.
I'd agree that Zig's comptime encompasses many of C++'s features, and I appreciate the approach they took to new features by way of builtins (like @typeInfo + @Type for reflection), but this is not a good example.
Furthermore, why are type traits on the list of features you claim are subsumed by comptime? Not only is that not the case, but type traits are not so much a feature of C++ as of its standard library, implemented using templates (a feature of the language).
You're not wrong in general here, but C++ is going to get the core of reflection in C++26. I'm not sure enough of the details to know if it supports doing this, however.
Rust on the other hand... that might be ten years.
Yeah, I always wished that the reflect crate got further along than it has.
I still think that language support is important, but unfortunately, due to what happened, I suspect that will take a long time. And that's disappointing.
I should check it out. I haven't had an actual use case for reflection lately, so I haven't given bevy_reflect a try yet, but when I do, I'll make sure to give it a shot.
Assuming by "non-Rust types" you mean "those that the bevy_reflect crate doesn't know about", it's indeed limited by the orphan rule. That being said, bevy_reflect offers many workarounds for this problem. Because bevy_reflect is based on a type registry containing function pointers, you can actually populate it manually for types external to your crate without using the Reflect trait at all if you want to. And if your type contains fields that aren't Reflect, then you can use custom reflection logic.
If you allow `capitalized` to be its own instance, then there's no reason to mutate the comptime parameter in the first place, and it can be replicated in C++17.
Generics, interfaces/traits/concepts, macros, conditional compilation, const functions/constexpr. These are four or five different features in C++ or Rust, some of which are quite complex, all expressible as one simple construct: comptime.
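As a minimal sketch of how the last two items collapse into ordinary Zig code (the `fib` toy function is mine):

```
const std = @import("std");
const builtin = @import("builtin");

// "const function"/constexpr: any ordinary function can be evaluated at
// compile time simply by calling it in a comptime context.
fn fib(n: u64) u64 {
    return if (n < 2) n else fib(n - 1) + fib(n - 2);
}
const fib10 = fib(10); // container-level consts are evaluated at comptime

pub fn main() void {
    // Conditional compilation: a plain `if` on a comptime-known value;
    // the branch not taken is discarded during compilation.
    if (builtin.os.tag == .windows) {
        std.debug.print("windows build; fib(10) = {d}\n", .{fib10});
    } else {
        std.debug.print("non-windows build; fib(10) = {d}\n", .{fib10});
    }
}
```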
- generics: comptime can implement some kind of polymorphism, but not at the type level. In other words it implements a templating system, not a polymorphic type system;
- interfaces/traits/concepts: comptime implements none of that; it is plain duck typing, just like "old" C++ templates (see the sketch after this list). In fact, C++ introduced concepts to improve its situation with templates, while Zig is still behind on that front!
- macros: comptime solves some of the use cases where macros are used, but it cannot produce arbitrary tokens and hence cannot fully replace macros.
I do agree that it can neatly replace conditional compilation and const functions/constexpr, but let's not make it seem like comptime solves everything in the world.
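To make the duck-typing point concrete, here is a toy example (mine, not from any of the linked material):

```
const std = @import("std");

// Compiles for any T that happens to support `+` and coercion from 0;
// there is no declared constraint anywhere in the signature.
fn sum(comptime T: type, items: []const T) T {
    var total: T = 0;
    for (items) |x| total += x;
    return total;
}

pub fn main() void {
    const xs = [_]u32{ 1, 2, 3 };
    std.debug.print("{d}\n", .{sum(u32, &xs)});
    // sum(bool, ...) would fail to compile, with the error pointing
    // inside sum's body rather than at a stated requirement, exactly
    // like an old-style C++ template.
}
```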
in practice, aside from interfaces, the only thing you can't do at comptime is generically attach declarations (member functions, consts) to a container type (the best you can do is do it on a case-by-case basis).
you could probably cobble together an interface system with @compileError, but because of the time order of compilation stages, a failed method call will trigger the compiler's error before your custom interface code, making it effectively useless for the 90% of cases you care about.
if I'm not mistaken in principle a small change to the compiler could amend this situation
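for reference, a rough sketch of what such a hand-rolled check can look like today (names are mine; it doesn't solve the ordering issue above, since the check only helps if it is reached before any failing method call):

```
const std = @import("std");

// Reflect on T at comptime and emit a custom error if a method is missing.
fn requireMethod(comptime T: type, comptime name: []const u8) void {
    if (!@hasDecl(T, name)) {
        @compileError("type " ++ @typeName(T) ++ " is missing method '" ++ name ++ "'");
    }
}

fn drainAll(comptime T: type, it: *T) void {
    comptime requireMethod(T, "next"); // runs during semantic analysis
    while (it.next()) |_| {}
}

const Counter = struct {
    n: u32 = 3,
    pub fn next(self: *Counter) ?u32 {
        if (self.n == 0) return null;
        self.n -= 1;
        return self.n;
    }
};

pub fn main() void {
    var c = Counter{};
    drainAll(Counter, &c);
    std.debug.print("drained\n", .{});
}
```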
How do you typecheck generics, with type inference, with comptime?
Or, more generally, address all the issues raised in [1]. You're saying that comptime can fully replicate all the features that a proper generics system has, which is plainly false.
I would say these are more differences than issues, and that some of those presented as more fundamental ones are actually quite small. Suppose that instead of `fn foo (comptime T : type, ...) { typecheck(T); ...}` Zig introduced just a tiny bit of new syntax to allow you to write something like `fn foo (comptime T : typecheck(T), ...) { ... }` -- i.e. the type constraints would be part of the signature -- would you then say it had generics rather than templates? Personally, I have not made up my mind on whether or not such an addition would be very valuable, but even if it is, it can be done later. That small addition would address most "issues" in the article, which I would say are more about IDE support than anything else. But even without it, what you want to know is known at compile time, and the article admits that the compilation errors are already better than those you get with C++ templates (I would say much better).
Now, I'm not saying that Zig's choices always dominate and that all languages would be better off with its approach; far from it. I am saying that it introduces a novel tradeoff that is especially compelling in cases where not only generics but also macros, conditional compilation, and constexprs are otherwise required. In a language like Java these extra features are not required, and so Zig-style comptime would not simplify the language nearly as much.
But even in cases where all these features are needed, I don't think everyone would take Zig's choices over C++'s or Rust's, or vice-versa. To those, like me, for whom language complexity is the biggest problem with C++ or Ada (I used Ada in the nineties), Zig is a revolutionary step forward. I don't think any low-level language has ever been this simple while also being this expressive.
It's interesting that so many replies in this thread (and indeed, most Zig threads) are along the lines of "yes, it doesn't do X today, but Zig could just add X". I'd really like to see arguments in favor of Zig that rely on what it can do today, rather than what it might do someday. After all, you don't extend Rust the same courtesy, and Zig is not that young of a language. And in PL circles Zig has a bit of a reputation for promising things that it has yet to deliver (e.g. static detection of potential stack overflows, which I'm convinced just can't be done in a useful way in the presence of higher order functions).
> It's interesting that so many replies in this thread (and indeed, most Zig threads) are along the lines of "yes, it doesn't do X today, but Zig could just add X".
That wasn't my argument.
> After all, you don't extend Rust the same courtesy
Given that my aesthetic issue with Rust is that it has too many complicated features, I don't see how that courtesy could be extended. There is, indeed, an asymmetry between adding features and removing them, but the aesthetic "points" I'm awarding Zig are not due to features it could add but due to features it hasn't added while they've not yet been shown to be critical.
I think it's fairly obvious that any feature in any language was added to add some positive value. But every feature also has a negative value, as it makes the language more complicated, which in aggregate may mean fewer programs would be written in it. The challenge is balancing the value of features with their complexity. Even those who prefer Rust's aesthetics to Zig would admit that Zig's novel approach to power/simplicity balance is something we have not seen in programming language design in many years.
> The challenge is balancing the value of features with their complexity. Even those who prefer Rust's aesthetics to Zig would admit that Zig's novel approach to power/simplicity balance is something we have not seen in programming language design in many years.
I disagree. Minimalism in systems language design has been done over and over: see Go for the most recent example. Comptime is something that C++ was already doing in the form of constexpr since 2011 and a space that D had explored for over a decade before Zig came around in the form of "static if" and so forth (in addition to lots of academic work, of course). Stripping out template metaprogramming in favor of leaning heavily on compile-time function evaluation isn't novel either. I think you find the set of features that Zig has to be personally appealing, which is fine. But the argument that it's anything novel is weak, except in the trivial sense that every language is novel because it includes some features and leaves others out (but if every language is novel, then the word "novel" has no meaning).
From my vantage point, Zig is essentially a skin on a subset of C++, one that is in practice less safe than C++ because of the relative immaturity of tooling.
I've been doing low-level programming for over 30 years now, and Zig's use of comptime as a simplifying feature is nothing at all like C++'s or D's (it is more conceptually similar to the role macros play in Lisps). Denying how revolutionary it is for low-level programming seems strange to me. You don't have to like the end result, but clearly Zig offers a novel way to do low-level programming. I was excited and thoroughly impressed by Rust's application of substructural typing despite being familiar with the idea long before Rust came out, even though the overall language doesn't quite suit my taste.
Minimalism is also, of course, not a new idea, but unlike in Go, I wouldn't say minimalism is Zig's point. Expressive low-level languages have always been much more complex than similarly expressive high-level ones, something that many of us have seen as their primary problem, and Zig is the first one that isn't. In other words, the point isn't minimalism as a design aesthetic (I would say that explicitness is a design aesthetic of Zig's much more than minimalism) but rather reducing the complexity that some have seen as the biggest issue with expressive low-level languages.
What was so impressive to me is that we've always known how macros can add almost infinite expressivity to languages that could be considered minimalistic, but they carry their own set of severe complexity issues. Zig showed how comptime can be used to offer much of the power of macros with almost none of their cost. That, too, is revolutionary even without considering the low-level domain specifically (although high-level languages have other options).
Finally, if you want to talk about "safety in practice", considering more than just the language, I don't think we can know without empirical study, but the same claim could be made about Rust. Both Zig the language and Rust the language are safer than either C or C++ (the languages), and Rust the language is safer than Zig the language. But as to their relative safety (or correctness) "in practice" either now or in the future, only time and empirical study will tell. Clearly, languages that offer more soundness, like Idris or ATS, don't always work so well in practice. So I admit I don't know at this time whether Zig (the gestalt) offers more or less correctness in practice than C++ or which of Zig or Rust offers more correctness in practice, but neither do you.
> Zig is essentially a skin on a subset of C++, one that is in practice less safe than C++
Give it a rest please. Given your association with Rust, endlessly attacking competing languages is not a good look, regardless of whether your points are technically correct or not.
Of course they count, but they never reached Zig's expressivity/simplicity ratio. For example, Oberon doesn't have generics. You could argue that Zig doesn't have dynamic dispatch as part of the language, but it's expressive enough for that to be done in a library (https://github.com/alexnask/interface.zig). Put simply, Zig can do pretty much anything C++ can with the same expressivity (by programming languages X and Y having the same expressivity I mean that there is some small constant C such that any program in X could be written in Y in a number of lines that is within a factor of C compared to X).
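As a bare-bones illustration of dynamic dispatch as a library pattern (a hand-rolled vtable in the spirit of interface.zig, not its actual API; all names are mine):

```
const std = @import("std");

// The "interface": a type-erased context pointer plus a function pointer.
const Writer = struct {
    ctx: *anyopaque,
    writeFn: *const fn (ctx: *anyopaque, bytes: []const u8) void,

    fn write(self: Writer, bytes: []const u8) void {
        self.writeFn(self.ctx, bytes); // dynamic dispatch
    }
};

// One concrete implementation.
const PrefixedStdout = struct {
    prefix: []const u8,

    fn writeImpl(ctx: *anyopaque, bytes: []const u8) void {
        const self: *PrefixedStdout = @ptrCast(@alignCast(ctx));
        std.debug.print("{s}{s}", .{ self.prefix, bytes });
    }

    fn writer(self: *PrefixedStdout) Writer {
        return .{ .ctx = self, .writeFn = &writeImpl };
    }
};

pub fn main() void {
    var out = PrefixedStdout{ .prefix = "[out] " };
    const w = out.writer();
    w.write("hello\n"); // prints "[out] hello"
}
```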
Modula-2 and the Oberon/Pascal lineage evolved to have generics, if that is the "expressiveness problem".
Also, all the ones I mentioned supported binary libraries, which apparently is not something the Zig folks are too keen on supporting, other than a C-like ABI.
Any systems language that doesn't support binary library distribution isn't that relevant to me, and yes, that is also something I regularly complain about in Rust, and not only me. Microsoft has a talk on their Rust adoption where this is mentioned as a problem, only relieved thanks to the ubiquity of COM as a mechanism to deliver binary libraries on Windows.
I agree that good separate compilation is valuable, but having a full ABI for natively-aot-compiled languages is rather difficult once generics are involved. Even C++ doesn't have one (and if you think about it, even C punts on separate compilation once macros are involved). I think the only such language that offers one -- and not quite to the full extent -- is Swift.
Another aspect of this argument is that Zig is supposedly not about adding many features, in order to live up to its claim of being a small and simple language with an emphasis on systems programming, whereas Rust's pitch does not promise small and simple.
> would you then say it had generics rather than templates?
I think pcwalton's "generics" vs. "templates" distinction mostly boils down to parametric typechecking, which Zig's design just can't do. (Can it?)
Although, I vaguely remember some example showing that even Rust in some cases allows a type definition X<T> even when there exists a T such that X<T> would fail to typecheck.
I don't think pron was saying that Zig has a feature-by-feature match for everything that Rust's generics can do. I think his point is that comptime can handle what the target audience of Zig wants from generics. In that regard, I don't think the criticisms there are that big a deal.
1. if you wish, you absolutely can check for "extra constraints" on a passed type (or even an anytype parameter) using comptime reflection and the @compileError builtin.
2. if you want to restrict the use of a function to comptime (why you would want to is beyond me), it is possible with the @inComptime builtin.
the only tricky bit is that your function could try to call a function that's inaccessible to you because it's transitively restricted, and you'd have a hard time noticing that from the code; but it's not possible for that code to be compiled (barring errors by the Zig team), so it's more of an annoyance than a problem.
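a tiny sketch of point 2 (function names are mine):

```
const std = @import("std");

// Refuses to be called at runtime: for a runtime call, @inComptime() is
// comptime-known to be false, so the @compileError fires.
fn comptimeOnly(x: u32) u32 {
    if (!@inComptime()) @compileError("comptimeOnly must be called at comptime");
    return x * 2;
}

pub fn main() void {
    const v = comptime comptimeOnly(21); // fine: forced comptime call
    std.debug.print("{d}\n", .{v});
    // passing a runtime value instead would be a compile error
}
```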
I have seen projects like that, and they're a prime example of why I say I want to see arguments in favor of Zig that rely on what Zig can do now, and not on what it could potentially do in the future. Ten years ago, C++ folks were also promising memory safety with the ISO C++ Core Guidelines; you don't hear much about that anymore, because it turns out it can't be done while keeping the resulting language C++ (for example, seanbax's work for C++, which is far more advanced than this project, is really awesome, but it is essentially a different language).
If people are referring to what Zig or any other language could possibly do, as something valid to be taken under consideration, then it should at least be explicitly stated in their roadmap.
Otherwise, it can be construed as just another wish list or feature request that the main developers have no plans to implement.
And what about all the promises of things making it out of Rust Nightly?
When it comes to Zig, one crowd says it's too early because it isn't getting to 1.0 fast enough. And now are you saying that feature development is basically done? What point are you trying to make?
The point I'm making is that I don't believe Zig can be made meaningfully memory safe without breaking compatibility to the extent that memory-safe Zig would effectively be a different language, any more than C can.
I tend to agree, but writing code in a memory-safe language is not the goal. The relevant goal on that particular front (and it is not the only goal people have when writing programs) is to maximise the resulting program's correctness within some effort budget and other requirements (such as memory consumption etc.). Using a memory-safe language is one approach toward achieving that goal, but if that safety comes at a cost of added effort, then showing that's the best approach depends on claims that are hard to verify without a lot more empirical data. The only theoretical justification could be that more soundness results in more practical correctness, but if that were the case then there are languages that offer more soundness than Rust. In other words, if Rust is generally superior to Zig because it offers more soundness, then other languages are superior still. Indeed, soundness is not considered the best approach to attaining every property in general because it can be more expensive than other approaches that achieve the desired outcome, and Rust clearly recognises that.
Rather, both languages' correctness-related claims rely on them being in some better or worse effort/correctness sweet spot, and that can only be measured empirically for those two specific languages. Crucially, results comparing Rust to C or C++ are not relevant. Zig offers more soundness than C, and even if it offers the same soundness as C++ (and I disagree -- I think it offers more) the language's claim is that its simplicity assists correctness. We can only tell to what extent that is true with more empirical observation; I don't think there can be any good guess made here regarding the size of the effect.
some correctness matters more than others. for a web browser, i would rather use a browser that is buggy/janky/crashy but doesn't give random websites arbitrary native code execution over a browser that is logically correct except for giving random websites rce. the languages that are more sound than rust are probably worse along some axis, like ease of use, expressivity, or performance. even if they weren't, they're probably not going to buy me much more than a slightly less buggy/janky/crashy browser, once you no longer have exploitable memory safety bugs.
Sure, but once soundness starts trading off other properties that may be important even for that kind of correctness, it is not necessarily the best approach. You don't care what the cause of the exploitable remote execution vulnerability is, and remote code execution is a real vulnerability even in fully memory-safe languages.
If you look at MITRE's CWE Top 25 [1], #2 (Out-of-bounds Write) and #6 (Out-of-bounds Read) are soundly eliminated by both Zig and Rust. It is only when it comes to #8 (Use After Free) that Rust's additional soundness comes into play. But because it comes at a cost, the question is to what extent eliminating #8 adversely impacts the other top vulnerabilities, including those higher on the list. It may be the case that eliminating the eighth most dangerous vulnerability through sound language guarantees may end up being worse overall even if we're only concerned with security (and it isn't the only concern). We can only try to assess that through empirical study.
if you looked at the clr project you would find a credible claim that it doesn't have to be an "effectively different language", that there is a real possibility that "minor annotations are sufficient".
> I don't believe
Of course, we're debating against a belief, and you have plenty of reasons not to believe it, so it will be impossible for you to be swayed by any sort of evidence, and you will always find a way to move the goalposts.
Graphite only works with Graphite tooling. You cannot just use Sapling or jj to create a PR on GitHub and then use Graphite's review tool; this should work in theory, but they block it somewhere in the pipeline.