This is incredibly impressive, but I also find it kind of depressing. In my not so humble opinion, TypeScript is already way too complicated and getting worse thanks to its velocity. A better approach would be to kill TypeScript's velocity by identifying a stable subset of the language that provides the vast majority of its benefits, and optimizing the heck out of a parser for that smaller language. This approach would be both easier and more sustainable, since it would cap the language's complexity, making it easier to maintain and/or write alternative implementations.
That’s not my experience. If anything, I think the TypeScript experience improves with each release: typings get more precise, the type system becomes more expressive and lets you simplify type declarations that used to be more complex, and best of all, what worked before still works. And in most cases you’re good to go with the basics; you just know that if you need to declare a complicated type, TypeScript has your back.
I agree that TypeScript improves with each release as I and the people I work closely with use it, because we take the KISS approach to typing: we rarely use new features, but when we do need them, they're useful.
But then I find some third party code which seems to use the kitchen sink approach to typing, AKA the "look at how smart we are" approach, and it's a nightmare to understand. A type system is supposed to make code easier to work with; by the time you're spending more time understanding the types than the actual code, there's something wrong.
In these cases I wish Typescript was more limited and I would gladly give up a lot of the new stuff to enforce simple types across the ecosystem.
That said, I don't really have an opinion on who is at fault here - coders for writing overly elaborate types, or Typescript for allowing them.
> That said, I don't really have an opinion on who is at fault here - coders for writing overly elaborate types, or Typescript for allowing them.
Backwards compatibility is to blame. TypeScript was designed to support a number of patterns that were already used in JavaScript.
For example, events are registered in the DOM like `element.addEventListener("click", (ev) => ...)`. The type of the callback depends on the string passed as the first argument, so the type system needs to support literal strings as type discriminators, which is a weird feature not necessary in most other languages. If the API had been designed with static typing in mind from the beginning, it would have been designed differently and wouldn't need such a complex type system.
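Roughly how that's modeled, heavily simplified from the real lib.dom.d.ts (which uses HTMLElementEventMap); El/EventMap here are made-up stand-ins:

    interface EventMap {
      click: MouseEvent;
      keydown: KeyboardEvent;
    }

    interface El {
      // The literal string narrows K, which selects the callback's event type:
      addEventListener<K extends keyof EventMap>(
        type: K,
        listener: (ev: EventMap[K]) => void
      ): void;
    }

    declare const element: El;
    element.addEventListener("click", (ev) => ev.clientX); // ev: MouseEvent
    element.addEventListener("keydown", (ev) => ev.key);   // ev: KeyboardEvent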
This example looks like a simple sum type with method overloading. I don't think that's complex; that's bread and butter in some functional languages.
Now take mapped types, template literal types, key remapping... Those features aren't necessary to express legacy DOM APIs; they exist just for the sake of having a complex typing system. And they don't add a new feature that unlocks some type magic that wasn't possible before. No, they're just shorthands for things you could already express in the existing typing system.
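For readers who haven't met them, here's what those three features look like together (Getters is a made-up name):

    // A mapped type over T's keys, remapped via a template literal type:
    type Getters<T> = {
      [K in keyof T & string as `get${Capitalize<K>}`]: () => T[K];
    };

    interface Point { x: number; y: number }
    type PointGetters = Getters<Point>;
    // { getX: () => number; getY: () => number }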
It is not just to support the DOM; it is also frameworks like jQuery, React, Vue etc.
For example, the 'options' parameter is a common pattern where an object with a set of properties is provided to override some default behavior. So you need to define a Foo type with a set of properties, and then you need to define a type which is like Foo except all properties are optional.
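That specific case is what the built-in Partial mapped type covers; a minimal sketch with made-up names:

    interface Foo { retries: number; timeout: number; verbose: boolean }

    // Partial<Foo> is Foo with every property made optional:
    function withDefaults(options: Partial<Foo> = {}): Foo {
      return { retries: 3, timeout: 1000, verbose: false, ...options };
    }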
Some frameworks implement their own flavor of inheritance, e.g. through mixins. The type of an object with mixins is not just the sum of all properties: a property might override a property of a different type in a base mixin, so you are adding and removing properties. You need a really powerful type system to be able to represent this.
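A sketch of that override case, with made-up Base/Mixin types:

    type Base  = { id: number; label: string };
    type Mixin = { label: { text: string; locale: string } };

    // A naive Base & Mixin would intersect the two label types into
    // something unusable; Omit removes the property before re-adding it:
    type Mixed = Omit<Base, keyof Mixin> & Mixin;
    // { id: number; label: { text: string; locale: string } }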
The event type could just be exposed as a regular property, so you could write `element.onclick.add((ev) => ...)` and get static typing without any need for magic strings.
Magic strings are used all over JavaScript for different purposes. For example, createElement('P') could just be new PHtmlElement() ...
In other cases magic strings could be replaced with enums.
Changing Javascript is explicitly out of scope for Typescript.
I was curious how other languages do discriminators in a way that makes TypeScript's feel alien. Just having different functions doesn't really tackle that.
Allowing literal strings and type aliases of unions of strings to represent enums doesn't seem unreasonable. It's the same concept as in many other languages, just with some extra quotes, e.g.
https://ocaml.org/docs/data-types#a-simple-custom-type
It's huge? I'm really confused lol. I write Dart/Java/C++/ObjC/TS regularly to maintain a cross-platform library, and this is a massive difference from every typechecker in any other language, much less the narrower question of whether reifying random strings into an enum is a novel feature. I get a very strong sense from the thread that people are talking past each other, somehow.
> A type system is supposed to make code easier to work with; by the time you're spending more time understanding the types than the actual code, there's something wrong
This may be true in a lot of cases, but certainly not all.
I’ve spent a ton of time on the types that validate that the input/output schema of my request handler function matches the defined schemas, and that was hell, but now any other of our developers will see delightfully red squiggles when they do something wrong. That’s a force multiplier and absolutely worth the effort.
Also agreed. With each release I get a "Eureka effect": now I can solve a type issue I struggled with a couple of months ago while trying to make some highly used function safer/easier to use for the developers.
For example, the new `satisfies` operator, and the upcoming "as const"-style behavior for generics that I'm looking forward to.
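For anyone who hasn't tried `satisfies` yet, a small example in the spirit of the TS 4.9 release notes:

    // `satisfies` checks the value against the type without widening it:
    const palette = {
      red: [255, 0, 0],
      green: "#00ff00",
    } satisfies Record<string, string | [number, number, number]>;

    palette.green.toUpperCase(); // still knows green is a string
    palette.red[0];              // still knows red is a numeric tuple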
What's bad about using clever types? My (limited) experience is that if they're done well they just work. I don't know why React-Query's types work the way they do, but I quite like that they do. Developing an intuition for it was a bit hard, but I believe this might rather be a failure of the documentation.
And if they aren’t, good luck debugging it. I remember a recent typing issue in a webpack config where a correct key was marked as a type error in the presence of another construct, despite being documented and having an effect.
Also try exploring npm/telegraf classes (auto-documented) by looking at their type definitions. The whole bot api could have been written as simple repeated definitions, but someone chose the clever way of multi-level type mutations and derivatives.
Typescript will soon need secondary community-maintained type contracts that are human-readable and cross-checked with those in libraries. Something like @humane-types/*.
TypeScript is complicated because JavaScript is complicated. As long it's sticking to the core goal of giving JavaScript static types in a practical way, complexity is unavoidable. I think what you want is a new language.
(I say this not as a dig against TypeScript or the way it's been designed; I just think the parent wants it to be something fundamentally different from what it is)
as someone working on a python type checker I can second this. we get occasional complaints that some feature is too complicated or too liberal, and the answer is usually "it works this way because python works this way, we are trying to support as much valid python as we can even if it's hard to express in terms of static types".
I agree, but I can't help but think of that meme about there being N+1 standards. The introduction of a stricter subset of JS would make JavaScript more complicated, not less. I hate that, because honestly JavaScript and the web would benefit from removing a lot of the cruft.
> I hate that, because honestly JavaScript and the web would benefit from removing a lot of the cruft.
This is the primary argument against worrying about N+1 standards. We know it's bad, and if we simply do nothing at all, we know it will get worse. We can and should at least try.
As someone who's been part of the team who tried to do exactly this, I don't think it can work. Once you start subsetting things it quickly turns towards being "better" than the existing web, and soon all backwards compatibility is gone.
All those legacy features and existing pages are the back pressure that keeps the evolution of the web stable.
At this point any other language, soundly typed or not, is semantically a subset of JS (at most plus some of their weird things, like Rust ownership and move semantics). You are free to pick one and let JS be JS in all its glory.
Or more precisely, TypeScript is complicated because people insist on doing deranged things with JavaScript and its mess of a type system. TypeScript wouldn't need to contain so much complexity if the underlying JavaScript was doing simple, straightforward things. Blaming TypeScript here is kinda shooting the messenger.
It doesn't try to protect you from the really deranged things. Every TS check comes with an implied "assuming you didn't use a Proxy to make this property accessor mutate state or something"
It does reach admirably far into checking the regular dynamic-language programming patterns like using string variables to index into structures. Which is where a lot of its complexity comes from, but I wouldn't call that stuff "deranged". JS is a dynamic programming language, and people write it like a dynamic programming language, and typescript decided to meet it where it was at, which is probably the only reason it's gotten so much traction in the first place
You may not like this freedom, but I absolutely adore it. I really feel restricted when I'm writing in other languages, even dynamic ones, after JS and TS have shown me what should be possible.
The whole “Type manipulation” section has nothing to do with this issue, but at the same time it is the usual source of complexity in type definitions. Even worse, it serves as a clever challenge for people to make such dynamic interfaces well-typed, instead of keeping them boring: autogenerated once and then discouraged.
Can't agree. Typescript already has a better type system than a lot of languages in common usage, but it's not good enough and we run into issues all the time because it's not powerful enough. It can probably stop adding features someday, but IMO that day is at least a few years away, when it is actually solving the problem of type safety in web development fully.
In particular, supporting nominal typing, fixing the way index types work, and fixing the story of debugging type failures on giant unions come to mind, but there are countless others. (Don't @ me, my day job is a few versions behind and I can't remember what's in the latest version, but there are plenty of other things if not these.)
FYI, TypeScript already supports nominal types, but the ways in which you "spell" `nominal` are a little unintuitive:
1. Use an enum
2. Use an intersection with an enum and literally anything else (except a number)
3. Use a private field (only works with classes)
4. Use a `declare const FOO: unique symbol` keyed object intersected with anything else (works with everything including numbers, lots of boilerplate to declare a new nominal type).
    enum A {}
    type B = { a: number } & A;

    function newB(i: number): B {
      return { a: i } as B;
    }

    let b: B = newB(5);
    let m: Map<B, string> = new Map();
    m.set(b, "test");
    m.set({ a: 2 }, "test"); // errors
Interesting, I thought that wasn't possible, but now I feel like I'm just confused and mixing it up with something else that wasn't possible but can't remember well enough.
But anyway, what I want from TS is
type UserId = unique string; // or something
const userId = "1234" as UserId
and no further boilerplate or trickery to get it to work. Would certainly be nice.
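Until something like `unique string` exists, the usual stand-in is the "branded type" trick; a sketch, with Brand/UserId/getUser as illustrative names:

    // One-time boilerplate:
    declare const brand: unique symbol;
    type Brand<T, B extends string> = T & { readonly [brand]: B };

    type UserId = Brand<string, "UserId">;
    const userId = "1234" as UserId; // the explicit cast is the point

    declare function getUser(id: UserId): void;
    getUser(userId);    // ok
    // getUser("1234"); // error: a plain string is not a UserId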
I think it can be safely ruled out. A TypeScript tenet which they consistently uphold is that types can't influence the behavior of the running code.
So to differentiate behavior depending on which part of a sum type you got, you need some value in the "real world", like a tag, that enables you to do that, because all the type information is gone at runtime.
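This is why TS's discriminated unions hinge on an ordinary runtime field; a minimal sketch:

    type Shape =
      | { kind: "circle"; radius: number }
      | { kind: "rect"; width: number; height: number };

    function area(s: Shape): number {
      switch (s.kind) { // the tag survives at runtime; the types don't
        case "circle": return Math.PI * s.radius ** 2;
        case "rect":   return s.width * s.height;
      }
    }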
The strongest violation of this rule I know of is enums, which generate their own JS code, but they are their own separate thing. Something like sum types would need to be present everywhere.
Honestly, I kind of like it this way. I'm trying out Rust right now, and I'm often surprised how what I do in one spot depends on the huge context of all the generic types, which can create wildly different behavior (mostly a huge variety of compile-time error messages in surprisingly varied and distant spots) depending on what types they infer.
Yeah I think the new `satisfies` operator and improvements to type inferences in the latest version likely resolve a lot of your concerns.
TypeScript also supports some really advanced typing I've yet to see in other languages, like template literal types (don't @ me, I mostly work with dynamic languages at my current job). I feel like even the TS docs haven't caught up to TS's full featureset.
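A taste of what they can do; template literal types let the type system compute with strings (ParamName is a made-up helper):

    // Pull a path parameter's name out of a route string, at the type level:
    type ParamName<R> = R extends `/users/:${infer P}` ? P : never;
    type P = ParamName<"/users/:userId">; // "userId"

    // Or build a union of strings from smaller unions:
    type Corner = `${"top" | "bottom"}-${"left" | "right"}`;
    // "top-left" | "top-right" | "bottom-left" | "bottom-right"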
It's also worse in a lot of ways, not due to problems with expressiveness but more soundness and specification. It's pretty easy to make TS keel over during typechecking.
The other issue with the typesystem is the (soon to be obviated) need to compile to decent javascript and interact with it. That made sense before WASM existed and it will make a lot less sense once the component model is stabilized.
The thing about TypeScript (or any language) is that one needs to recognize that many features aren’t for us normals. They’re for library developers doing all kinds of wild type stuff. I remember my eyes glazing over at the template literal typings and such. But someone on the discord was like, “if it seems overwhelming, you’re probably not the audience. But you’ll end up using it by using a library that leverages it.”
Always check ts-essentials for the ideal abstractions built with the wild new features.
This sounds interesting to me, especially as somebody who has exactly zero interest in learning anything other than regular JS (i.e., no TypeScript, CoffeeScript, or any other JS-extension languages). Is this some kind of pattern, and do you know of any examples that show this?
I see a comment like this every time someone mentions WASM and have to eye-roll a little. Yes transpile-to-WASM techniques exist, but as someone like yourself with a self-professed inability to learn TypeScript, I don't think you are going to enjoy living on the edge of webdev with WASM either... The tooling is nowhere near as mature for one, which is always problematic when starting from a position of little knowledge.
Just search github.com and you will find countless example repos for many languages, but understanding typescript first is probably a more productive use of most people's time.
If you work in web-tech, I consider avoiding JS and TS to be almost negligent to some extent in 2022.
I find this comment validating. Most of the languages I've used start to click and feel good after a while - but it feels like this just won't happen for me with typescript. I'm clearly missing some fundamentals and I'm not even sure what it is I need to go study.
Can you explain why you feel that way? I only know JS and TS, and I love TS. With both languages, I understand I'm missing out on a lot, but I kind of assumed that was normal with my level of study (that is, reading in depth to fix bugs or make new features, and generally no further).
The type erasure. You get this nicely expressive type system that can encode all sorts of useful metadata, and then you have to recreate it all again by hand in 'Javascript' to access it at runtime.
Say I have `type Foo = 'bar' | 'baz'` and want to validate that an input string is within the set of Foo. Why can't I just enumerate the type to determine the valid values at runtime?
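The usual workaround is to invert it: make the values the runtime source of truth and derive the type from them (FOO_VALUES/isFoo are made-up names):

    const FOO_VALUES = ["bar", "baz"] as const;
    type Foo = typeof FOO_VALUES[number]; // 'bar' | 'baz'

    // A type guard that validates input strings at runtime:
    function isFoo(s: string): s is Foo {
      return (FOO_VALUES as readonly string[]).includes(s);
    }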
For me it's a great feature, because it means the code has purely JavaScript semantics, not influenced by any generic type magic that happens at compile time.
Occasionally I bump into the same things as you, when I'd like some JS code generated on the basis of TS type information. I think there could be preprocessing tools that generate JS validators from TS code.
Now, trying Rust, I strongly appreciate this separation. In Rust, what your code DOES might depend on what you are going to do in the future with the result it returns. It's really shocking to me, and it makes finding errors hard because the spot the error messages point at jumps wildly across a large chunk of the code as I change it.
I really prefer TS philosophy of "types are for you and your text editor and the actual code is for runtime".
how is it getting worse, exactly? I know it has features (all of which are optional) that can auto-generate code, but none of them _have_ to be used. You can, as most largely do, use it just for the type syntax.
Is there something inherent with TypeScript that is causing issues that you can point to?
The funny thing is that TypeScript still has some way to go to be capable of fully expressing JS semantics. This shows that even most complex statically typed languages are way simpler than some dynamic languages.
> But there's a problem. Rewriting a library like TypeScript is extremely challenging. First, "absence of [a] specification" for TypeScript's behaviour. Donny's had to infer TypeScript's behaviour mostly [from] test cases alone.
IIRC the TypeScript leadership is of the position that the implementation is the specification, which is unfortunate, because it means that the bugs in the implementation are necessarily part of the specification as well.
Go is staunch about its specification, to the point that the Go team maintains two independent compiler implementations to ensure that "features"/bugs in one implementation don't end up defining the standard, and to ensure that a compiler can be implemented from the spec.
Dart has a complete specification. It tends to lag somewhat behind the most recent features (which are documented in separate language proposals), but we keep it as current as we can.
Maintaining a language specification is a lot more work than people realize, and a job that few have the skills to do well.
C/C++ "undefined behavior" means your program is wrong and not within the spec, not that different compiler implementations might differ. There is no scenario in which any code with UB is valid, regardless of what compiler implementation you use. That's very different. (And IMHO a horrible feature of the spec, but it's there for historical reasons.)
C/C++ spec does have some "implementation defined behavior", which is where you can't rely on what will happen unless you know what compiler you're targeting. They are explicitly listed out as such, and not that common.
I guess you have to define "specification" because there's a spectrum. A full book describing the formal syntax, semantics, type system, and standard library/built ins (and macros, preprocessing, compiler flags, what have you) is fairly rare because of how big it needs to be.
Partial specifications, like the grammar, the type system, etc., are more common (particularly because they're so useful for writing one implementation, let alone multiple!).
If you're referring to the ISO spec, it's effectively completely irrelevant.
If you're talking about ruby/spec, a comprehensive test suite is really great, but is also not what most people think about when they think of a spec, though it also may count! You could even argue that such a thing is better than a traditional spec.
Yes, I was thinking about the latter but was unsure if it counted.
Yes, looking at the spec has been a helpful strategy for me to understand how something is supposed to work.
Nobody except Microsoft is able to write a spec for Typescript. Microsoft has already declared that the implementation is it, any spec a random person writes will just be ignored.
It's not about boredom; it's simply an implementation-defined language, as are Haskell, PHP, and Python.
Having a spec is really valuable only if you intend to target different environments, and that's a non-goal for TS: it simply transpiles, rather than compiles, into another programming language.
It's also bound to follow each new JS feature and change.
The value of a spec is having something that resolves how to deal with quirks discovered in an implementation. Absent of a spec, any implementation quirks become a part of the language.
Which is okay as long as you only ever have one implementation, but when you want to, say, reimplement Typescript in Rust then you have to duplicate all of the quirks that came before, even in cases where nobody has yet realized that the quirk exists. Failure to do so means that your implementation won't be compatible with the canonical implementation.
At the other end of the spectrum, the Go project maintains two independent compiler implementations to ensure that the spec is implementable and to see that a single implementation, with all of its quirks, doesn't end up defining the language. That puts a lot of confidence in the spec to guide additional implementations, but, of course, is a lot of added work.
Code, called the implementation here, is a specification written in a certain grammar that is executed.
You want 'evidence' that tsc is not working as it should, by writing a duplicate, less precise specification that would sometimes conflict with the TypeScript one.
It reminds me of the people who wanted to specify everything and generate the implementation from it: that's called a programming language.
They are not mutually exclusive. Code identifies a class of language. Specification identifies something that is conveyed through language. Specification can be provided in code.
In fact, while poorly named, what we call testing is actually specification. A test suite specifies for software authors what a program is intended to do and how it is intended to be used. As a nice side bonus, because the specification is delivered in code, it can be validated for truthfulness against an implementation by machine.
Just because the comprehensiveness of something is on a spectrum is not a reason to just not bother and have none at all.
Also, when done well a spec is a great way to learn & understand the language too. http://golang.org/ref/spec is so much more usable than Typescript's docs.
I agree that specifications are useful. I also think they're often presented as a panacea. It is important to acknowledge that they're a tool, with pros and cons, like any tool. The existence of a specification does not immediately mean that any and all ambiguities are resolved.
If an implementation isn't doing what you expect, is it just an issue with that single implementation? A problem with all implementations? A problem with what you expect? If another implementation does do what you expect is it correct or just lucky?
A spec resolves such discrepancies. Indeed, the spec may be where the problem is, but once the spec is corrected then implementations know what they must do. Absent of a spec, who knows?
#1 is the same as the current situation. With #2, you might get a bit closer but natural language has many ambiguities and idiosyncrasies that can render it useless in various situations.
You can try to define formal semantics, but that is not only hard to create, it's also hard to translate into an implementation, adding significant overhead in both parts of the process. And there can still be bugs, and there can still be edge cases with undefined behavior.
But at least formal methods tools can be applied directly to the specification, to show properties like "well, if you carve out this subset of TypeScript, you have something you can safely eval()" or "you can never have type errors with this usage of this construct in this way."
> "well, if you carve out this subset of TypeScript, you have something you can safely eval()" or "you can never have type errors with this usage of this construct in this way."
This kind of certainty is very at odds with TypeScript's design philosophy and the way it (necessarily) works. Because it's overlaid on JavaScript, and JavaScript is wildly dynamic (Proxies, prototype modification, etc), almost nothing is known with absolute certainty. To deal with that, TypeScript makes a lot of "optimistic" assumptions, but holds them lightly (i.e. its assumptions may cause type checks to pass when they shouldn't, but it's not going to lean on them for anything that could really blow up if they're wrong)
IMO it strikes a very good balance given the constraints - not criticizing the designers at all - but TypeScript has to be treated differently than other statically typed languages might be, because nearly everything it knows/says is just a "probably"
I've poked around in the TypeScript codebase just a little bit, and it's clear they've squeezed as much performance out of JS as they can (though really, you can tell that just by using the thing; it's impressively fast even though it's still "slow"). It's funny to see stuff like bit-flags in a JavaScript codebase
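For anyone who hasn't seen it, the bit-flag pattern looks roughly like this (enum and member names simplified; the real compiler's flag enums are much larger):

    const enum NodeFlags {
      None     = 0,
      Exported = 1 << 0,
      Async    = 1 << 1,
      Static   = 1 << 2,
    }

    let flags = NodeFlags.Exported | NodeFlags.Async;
    const isAsync = (flags & NodeFlags.Async) !== 0; // test a flag
    flags &= ~NodeFlags.Exported;                    // clear a flag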
I am a little surprised nobody at Microsoft has started working on an official native compiler (or maybe they have, and we just don't know about it yet). It does feel like they've hit a ceiling
The core metric for TypeScript's success was to be able to put an IDE in a browser able to run and compile TypeScript itself, which is what the TypeScript team did with the Monaco (and Visual Studio Code) editor in 2011.
Monaco is just the GUI, it doesn't require language support (and in fact, the LSP that was pioneered alongside it means you don't even have to run the language support on the same machine, much less in the same codebase)
Again- any references for this claim? Genuinely curious if this is part of TypeScript's known history, or if it's just speculation
At the beginning of this year, kdy1 (Donny) said he'd be switching to Go after initially trying things out in Rust [1][2], I guess he decided to go through with Rust after all? Or is this something slightly different?
> The most challenging thing is the absence of the specification. I had to infer everything just from test cases.
> The second thing is the velocity of tsc. It's way too fast, and I was unsure if I could follow it up. So I decided to use semi-automated translation and selected Golang.
> But it was too boring, and more importantly, my programming ability has improved enough to follow tsc only by myself.
From the main post and other comments it seems like it was for personal reasons, rather than pragmatic ones. They said they didn’t know how fast the Go implementation would be, but felt better about the original Rust one (I’m assuming before this really became a problem).
I think the reasons they switched are pretty weak, but justified at the same time. Kdy1 looks like more of a Rust person (their GitHub has more active Rust repos than Go), so this should’ve been the choice from the beginning. Going with the comfortable choice over the “pragmatic” one is almost always the best option if you’re the only contributor (or plan to be for a while).
This reminds me a bit of Rome, which aims to re-implement most parts of a JS tool chain without requiring dozens of different, misaligned dependencies. They also happen to be writing it in Rust, though they haven’t finished the bundler portion yet.
I wonder if there’s an opportunity for collaboration here!
If you're just porting the code to a new language, the process can be significantly faster than writing it from scratch.
But if you're also re-interpreting the code, which seems to be Donny's approach, you can get bogged down with the discrepancies. The more the two diverge, the harder it is to debug discrepancies and keep track of updates in the upstream code.
I'm also porting a static analysis tool from an interpreted language to Rust, but it's a fairly straightforward port and I've been able to stick to schedule pretty well.
There is also the scenario that tsc is really not that good, and the discrepancies are not enough of a problem to outweigh the potential stc benefits. This could eventually phase out tsc altogether, with the core TypeScript team making stc the official implementation. This is especially credible with Rust having good WASM support, keeping tsc's JS-runtime compatibility.
Two compilers would be mostly a source of issues and reduce TS velocity.
All of that for what, slightly faster compilation? The TS server is already fast enough for me to develop giant TS codebases in Codespaces on 2 vCPUs with 4 GB of RAM.
From the article, some reasons not to use TypeScript:

> "... in VSCode, the red lines and warnings take a long time to update on large projects."

> The "absence of [a] specification" for TypeScript's behaviour. Donny's had to infer TypeScript's behaviour mostly from test cases alone.

> The pure size of TypeScript is daunting. It's had ten years to iterate, grow and add features.
---
Therefore for practical reasons, I stick with plain JavaScript (on Node.js etc.) and use a small assertions library
https://www.npmjs.com/package/ok6
Assertions are the ultimate "types", they can assert any property you can express in your programming language. The key is to be able to make them succinct so they don't obscure your main code.
Yes, they don't give you compile-time type-checking, which would indeed be useful. But as hinted at in the excerpts above, in practice compile-time type-checking can slow you down.
In most cases, all I need is a way to prove to myself that if this function passes a set of test cases, then the assertions in it are truthful, and that if anybody calls this function with the wrong types of arguments, my assertions will tell me there is a problem.
Often enough of a good thing is enough. You don't need the most sophisticated language and type-system to create robust programs quickly.
> Yes, they don't give you compile-time type-checking, which would indeed be useful. But as hinted at in the excerpts above, in practice compile-time type-checking can slow you down.
This is backwards. Having your type system be unplanned and ad-hoc is what makes it slow and complex (thus TypeScript, which had to retrofit a type system onto JavaScript), and making types be arbitrary runtime code is the epitome of that. There are better points on the expressivity/constraint tradeoff, but you'll only ever reach them by designing them into your programming language up-front; you can't retrofit consistency.
I dunno, I love types as documentation of how things are supposed to be structured. But I also agree that assertions are great because they can express any condition in the language. I sometimes wonder if it's possible to have a type system that is "any possible assertion, as a type".
I think the main difference between assertions and static type-checking is that ... static checking can do its checks at compile time. That is a great feature for big projects.
But the benefit of assertions is that they can express arbitrary requirements: not only the type of each argument and result, but also relationships between the arguments, and between the arguments and the result.
As an example I could say:

    ok(argA < argB);
    ...
    ok(result > argA - argB);
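A slightly fuller, runnable sketch, substituting Node's built-in assert.ok for the ok6 package's `ok` (gap is a made-up function):

    import { ok } from "node:assert";

    function gap(argA: number, argB: number): number {
      ok(typeof argA === "number" && typeof argB === "number"); // the "types"
      ok(argA < argB);              // relationship between arguments
      const result = argB - argA;
      ok(result > argA - argB);     // relationship involving the result
      return result;
    }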
It's a tradeoff: More expressivity, simplicity and speed of development vs. earlier detection of type-errors.
Yes, this is called refinement typing (which is a special case of dependent typing). Idris and Agda are probably the dependently typed languages that are closest to being usable for general purpose programming, but they've still got a ways to go.
in theory no, because you run into the halting problem. but even in practical terms it would impede your ability to do static type analysis long before that point.
I don't really care about static type analysis, in a sense:
If I write an assertion `x is a number`, then fine, statically analyze that, that works today.
If I write a hard-to-compute assertion like `x is Prime` (or `x is a counterexample to the Riemann Hypothesis`), then I don't care if the compiler can statically figure it out. It can simply require all callers to assert `x is Prime` directly before calling this function -- no analysis required. This way my function can be sure it is only called with prime numbers, but nobody has to write any typechecker logic that can figure out whether that's true for arbitrary inputs.
I believe that's what dependent types allow you to do.
However, even in weaker type systems you can get a similar effect by just creating a PrimeNumber type, and for functions that require a prime number, only accept PrimeNumber. Then any number you want to pass to those functions will have to go through PrimeNumber's constructor.
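In TS that smart-constructor approach is usually done with a branded type plus a runtime check in the constructor; a sketch (all names hypothetical):

    declare const primeBrand: unique symbol;
    type PrimeNumber = number & { readonly [primeBrand]: true };

    // The constructor is the only place the cast happens:
    function toPrime(n: number): PrimeNumber {
      if (n < 2) throw new Error(`${n} is not prime`);
      for (let d = 2; d * d <= n; d++) {
        if (n % d === 0) throw new Error(`${n} is not prime`);
      }
      return n as PrimeNumber;
    }

    function factorOut(n: number, p: PrimeNumber): number {
      while (n % p === 0) n /= p;
      return n;
    }

    factorOut(100, toPrime(5)); // ok
    // factorOut(100, 5);       // error: number is not a PrimeNumber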
What are the callers supposed to do when the assertion fails? Either you carry the assertion all the way back to the point where x is created - in which case you're implementing a type system, whether you call it that or not - or you have code that will have runtime failures in the middle of your computations.
> you have code that will have runtime failures in the middle of your computations.
Preferably while running your unit tests.
In practice, much of the software we use today has runtime errors. How the code handles those varies. Static typing does not prevent runtime errors, does it?
Which means you need more unit test coverage, have more tests to maintain, and your code structure becomes more brittle.
> Static typing does not prevent runtime errors, does it?
Often it does; and if a validation rule is too complex or externally-dependent to enforce at compile time, static typing at least lets you push checking to the start of an operation, rather than in the middle of your computation.
That usually gets classified under contracts rather than types. Eiffel is probably the best-known example of a contract-based language, but also check out Typed Racket; it uses a nice mix of types and runtime contracts in a dynamic language.
That is good vocabulary. Clearly there is a difference between static type-checking and assertions. But "types" is not the same as "type-checking". Type-checking can happen at runtime, or at compile-time. Or both.
My point was more that static type-checking can be seen equivalent to assertions, except it checks those assertions at compile time and typically lacks some expressive power compared to assertions, in most practical programming languages today.
Static type-checking is more a feature of the compiler than of the language. Consider type-inference: You can have a language where types are not explicitly declared most of the time. Yet they can be checked at compile-time, assuming you have a compiler that is able to perform that amazing feat.
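TypeScript itself leans on inference heavily; e.g., none of this needs annotations, yet it's all checked at compile time:

    const nums = [1, 2, 3];                 // inferred as number[]
    const doubled = nums.map((n) => n * 2); // n inferred as number
    // doubled.push("x");                   // error: string is not number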
This is like throwing the baby out with the bath water (and is very similar to what I see with people lambasting Rust for its memory strictness and then using C, C++ or Go). I'd take a slightly slower language with static types than reverting to using a dynamically typed language, any day of the week.
Rust is a "new" language and many packages from other languages get reimplemented in it. This is similar to Julia. Unfortunately, I had the experience that many Julia packages are not of high quality, not maintained, or do not run any more on the newest version.
Like every package repository (or human endeavor in general) it follows Sturgeon's Law: 90% of everything is crap. That said, there's 100k crates on crates.io, and many of them are fantastic (well-supported, actively developed, documented, etc.). For a new user, understanding which are high-quality is a daunting task, and is expedited by just asking an experienced person for specific recommendations.
Honestly, if you're writing Rust, just stick with the top 100 downloaded packages and you'll be fine. That's basically what I do, unless the task requires quite specialized work.
Fine. I imagine someone posting that they just published their first ever crate. Maybe it's the first and only Rust binding to some useful library. But hold your horses, this poster says (by implication), because 90% of everything is crap. Of course no one says that out loud. But that is the inevitable conclusion.
Maybe the main point of four average Rust users publishing a crate each is so that burntsushi can publish one great one.
Although I cannot comment on this specifically for Rust, what I would confidently say is that one of the best methods for finding the “best” dependencies in any language is to read lots of code. Find the popular and/or most useful projects written in the language on GitHub and see which dependencies that project uses and how they are used.
At least in my career this method has served me well. For a given problem domain I was able to quickly identify the best/most popular packages to use by reading the code that was heavily used by others. Obviously the more you do this the easier it becomes.
Julia's engineering is notoriously low quality (perhaps because it's more popular for scientific code). Almost any other language has a higher bar for what level of best practices is normal, IME.
Just as information that seems to be missing from other comments: there is already a Microsoft compiler for TypeScript that compiles to native code via C++; however, it only handles a subset of the language and is targeted at IoT.
I hope he and others succeed! I've been using chatgpt to translate Python/TS into Rust/C++ for some smaller projects of mine in the past week. And with some tweaking it works beautifully. Sometimes it makes up crates that do not exist though.
Unrelated, but my Typescript wish is an (optional) way to express its types at runtime in a reflective way. Most of the time you don't want this, but sometimes... sometimes you wish you could use `keyof` or somesuch at runtime.
The problem isn't runtime implementation details, but the static analysis and compilation steps which devs rely on constantly. The tooling has had major performance issues at times (though in my experience it has improved consistently) and the experience of writing TS could be dramatically improved by having a language like Rust or Go powering it at the core.
If you meant parallel processes within the tooling, I suspect web workers wouldn't be the solution (they're part of the Web API). Node does expose a worker thread API, which they might already use in the typescript repo (I'm not familiar with any of their code).
I was investigating doing a typescript compiler speedup.
I think the time might have finally arrived where we can use ML to generate a faster compiler in a language like Zig.
Probably not for all portions of the compiler. But I think with some reinforcement learning, it ( ... might ...) be possible to automatically generate a robust Typescript parser in Zig.