Great, but unfortunately, even when compiled, the startup overhead is about half a second, which makes it unsuitable for many applications. Still, I applaud it, as shell scripting is finicky, people tend to rely on bash features, and Perl is kind of over. Ruby was, and still is, my go-to language for this purpose, but I've recently migrated some scripts over to Swift.
Swift does a much better job at this, as it interprets by default, and a compiled version starts instantaneously. I made a transparent caching layer for your Swift CLI apps. Result: instant native tools in one of the best languages out there.
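For reference, the gap is easy to see on any machine with the Swift toolchain; hello.swift is just a stand-in file that prints a line, and the caching layer mentioned above essentially automates the second path behind the scenes:

$ time swift hello.swift   # run through the toolchain each time: noticeable startup cost
$ swiftc -O hello.swift -o hello   # compile once
$ time ./hello   # native binary: starts effectively instantly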
Not GP, but can confirm on my M3 Max using the hello world sample:
$ time dotnet run hello-world.cs > /dev/null
real 0m1.161s
user 0m0.849s
sys 0m0.122s
$ time dotnet run hello-world.cs > /dev/null
real 0m0.465s
user 0m0.401s
sys 0m0.065s
There are a lot of optimizations that we plan to add to this path. The intent of this preview was to get a functional version of `dotnet run app.cs` out the door. Items like startup optimization will come soon.
Ah, I didn't manage to find anything that talked about what was planned for this, so I opened an issue asking for that.
Is there a doc somewhere that talks about it?
This is nuts. More than a decade ago Microsoft made a big deal of startup optimisations they had made in the .Net framework.
I had some Windows command-line apps written in C# that always took at least 0.5s to run. It was an annoying distraction. After Microsoft's improvements the same code was running in 0.2s. Still perceptible, but a great improvement. This was on a cheap laptop bought in 2009.
I'm aware that .Net is using a different runtime now, but I'm amazed that it's so slow on a high-end modern laptop.
This is also a preview feature at the moment. They mention in the embedded video that it is not optimized or ready for production scenarios. They release these features very early in preview to start getting some feedback as they prepare for final release in November.
For comparison, skipping dotnet run and running the compiled program directly:
time "/Users/bouke/Library/Application Support/dotnet/runfile/hello-world-fc604c4e7d71b490ccde5271268569273873cc7ab51f5ef7dee6fb34372e89a2/bin/debug/hello-world" > /dev/null
real 0m0.051s
user 0m0.029s
sys 0m0.017s
So yeah, the overhead of dotnet run is pretty high in this preview version.
IME, Windows Defender hates compilers. When I run my big C++ project, Defender consumes at least 60% of the CPU, even when exempting every relevant file, directory, and process.
Task manager doesn't show it, but process explorer shows kernel processes and the story is quite clear.
I run in a Debian arm64 container. I get 500ms consistently. It is using a cached binary, because when I add --no-build, it uses the previous version. I'm not sure where it stores cached versions though.
I’ll try to compare with explicitly compiling to a binary later today.
But that’s the thing. It’s a JIT, running a VM. Swift emits native code. Big difference.
Maybe I'll add AOT compilation for dotnet then... Strange they didn't incorporate that, though.
> But that’s the thing. It’s a JIT, running a VM. Swift emits native code. Big difference.
It's not only a JIT: you can pre-JIT with R2R if you need to, precompile with NativeAOT, or, I think, fully interpret with Mono.
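As a rough sketch of those two routes (the runtime identifier is an assumption, pick the one for your platform; NativeAOT also needs a project that can publish):

$ dotnet publish -c Release -r osx-arm64 -p:PublishReadyToRun=true   # R2R: pre-JIT most of the code, still runs on the regular runtime
$ dotnet publish -c Release -r osx-arm64 -p:PublishAot=true   # NativeAOT: a single native binary, no JIT at startup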
Edit: it looks like the issue is with the dotnet CLI itself, which until now was not on a 'hot path'. `dotnet help` also takes half a second to show up. When running a dll directly, I think it doesn't load the CLI app and just runs the code necessary to start the dll.
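A quick way to separate the two costs, with a placeholder path for wherever your build output lands:

$ time dotnet help   # exercises only the CLI front end
$ time dotnet bin/Debug/hello-world.dll   # hands the dll straight to the host, bypassing most of the CLI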
Tangential, but Windows PowerShell kept nagging me to download PS6, so I did, then I had to revert to 5.1, because running a script had a ~1 second overhead. Very annoying. For one-off runs it's often the startup time that matters, and PowerShell just got worse at that. (In the end, I settled for .bat files in a cmd.exe window; ChatGPT can write any of them anyway.)
Does dotnet install the script's dependencies all over again every time you run it? The quoted part was about the 0.5 second startup overhead, which I figured did not include installing the dependencies.
Anyway, lots of Python scripting can be done with the standard library, without installing any dependencies. I rarely use third-party dependencies in my Python scripts.
It may be that they are speeding it up by keeping the .NET runtime resident in memory. They used to do this with the Visual Basic runtime.
I ran Norton Utilities on my PC yesterday and noticed a new service: it was the .NET runtime. Please note that I am a developer, so this may just be there to help launch the tools.
> I'd say anything except long-running processes or scripts that take 5+ seconds to complete.
I don't think you're presenting a valid scenario. I mean, the `dotnet run file.cs` workflow is suited for one-off runs, but even if you're somehow assuming that these hypothetical cold start times are impossible to optimize (a big, unrealistic if), then it seems you're missing the fact that this feature also allows you to build the app and generate a stand-alone binary.
These cold start times are not hypothetical, as shown by multiple commenters in this thread. They also have been demonstrably impossible to optimize for years. Cold start times for .NET lambda functions are still an order of magnitude greater than that of Go (which also has a runtime). AOT compilation reduces the gap somewhat but even then the difference is noticeable enough on your monthly bill.
This dismissive “startup time doesn’t matter” outlook is why software written in C# and Java feels awful to use. PowerShell ISE was a laughingstock until Microsoft devoted thousands of man-hours over many years to make the experience less awful.
But it doesn’t. It still seems to run in JIT mode, instead of AOT. That’s exactly why I made swift-scc (interpret vs compile, but essentially the same problem)
I recommend you read the article. They explicitly address the use case of "When your file-based app grows in complexity, or you simply want the extra capabilities afforded in project-based apps".
> Imagine cat, ls, cd, grep, mkdir, etc. would all take 500ms.
Those are all compiled C programs. If you were to run a C compiler before you ran them, they would take 500 milliseconds. But you don't, you compile them ahead of time and store the binaries on disk.
The equivalent is compiling a C# program, which you can, of course, do.
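A sketch of that workflow, with hellotool as a made-up project name and ~/bin/hellotool assumed to be on PATH:

$ dotnet publish -c Release -o ~/bin/hellotool   # compile once, keep the binaries on disk
$ time ~/bin/hellotool/hellotool   # later runs never touch the compiler, same as any C tool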
Does this recompile each time? It should be simple to cache the binary on a hash of the input? A sub-second first run followed by instant reruns seems acceptable.
The dotnet run command caches. However, even with the cached version, you have a startup overhead of about half a second on my M1.
My "Swift Script Caching Compiler" compiles and caches, but will stay in interpreted mode for the first three runs when you're in an interactive terminal. This allows for a faster dev-run cycle.
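The hash-keyed cache from the question above fits in a few lines of shell. This is only an illustration (swift-scc itself does more, e.g. the interpreted interactive runs), and the cache directory and script name are invented:

#!/bin/sh
# run-cached.sh script.swift [args...] -- compile on cache miss, reuse the binary otherwise
src="$1"; shift
hash=$(shasum -a 256 "$src" | cut -d' ' -f1)   # use sha256sum on Linux
bin="$HOME/.cache/swift-run/$hash"
if [ ! -x "$bin" ]; then
  mkdir -p "${bin%/*}"
  swiftc -O "$src" -o "$bin"   # cache miss: compile this exact source once
fi
exec "$bin" "$@"   # cache hit: native startup from here on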
Ruby can be slow as hell as well. Start the Ruby shell for GitLab. Of course, this only happens when tons of packages are loaded, which will probably never happen for a CLI tool, right?
Swift does a much better job at this, as it interprets by default, and a compiled version starts instantaneously. I made a transparent caching layer for your Swift CLI apps. Result: instant native tools in one of the best languages out there.
Swift Script Caching Compiler (https://github.com/jrz/tools)
dotnet run doesn't need it, as it already caches the compiled version (you can skip the rebuild with --no-build, or inspect the binaries with --artifacts-path).
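For example (the flag placement for the file-based form is my assumption; the flags themselves are the ones above):

$ dotnet run app.cs   # first run builds and caches
$ dotnet run --no-build app.cs   # reuse whatever was built last time
$ dotnet run --artifacts-path ./artifacts app.cs   # keep the build output somewhere you can inspect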