Deno 1.6 supports compiling TypeScript to a single executable (github.com/denoland)
657 points by andyfleming on Dec 9, 2020 | 280 comments



I've been using vercel/pkg with great success to achieve a similar goal: packaging a whole application into a standalone executable:

https://github.com/vercel/pkg

This can be useful for people wanting to do this with Node. It's nice to have a single file that can be started right away without any external dependency, and it also avoids having to distribute the full sources. Kudos to the Deno devs who have integrated this option directly into the runtime.


I'm using vercel/pkg as well. I have a Node.js server which generates HTML and opens a browser on Windows, which then asks the server for that HTML at 127.0.0.1.

So the browser is my GUI and the Node.js app packaged with vercel/pkg is my back-end. It is more flexible than, say, Electron because the GUI can be anything I want it to be.

My only concern is whether users will accept a local server running on their desktop. I've tried to configure the executable so that the server accepts connections only from the same host that the HTTP requests are coming from.

I assume the same situation would exist with Deno, if you build a product with it and want to use the browser as your front-end. Are users OK with a server running on their PC?


> My concern is only will users accept a local server running on their desktop

I would say that users don't mind. They want a working application that does what it promises and is dependable. Things like having an underlying server are just an implementation detail; nobody really cares (talking about the general public here, not the more technical users of HN, who might have a say about the technology choices but are overall a very small portion of potential users)

That doesn't mean the implementation should be sloppy, though! Make sure to iron out all those security concerns regarding 127.0.0.1, because they could cause a data breach or some other severe damage. Other than that, just aim to provide the best user experience.


Right. One additional complication might be that some web-tech in the browser may require https, which would bring its own complications with security certificates.


PSA: If you have an https web site that needs to talk to your server in localhost, that localhost server does not have to run https since localhost is considered potentially trustworthy.

It will not trigger the mixed content errors (for Chrome and Firefox – Safari does not do this for now, but it is in the works: https://bugs.webkit.org/show_bug.cgi?id=171934 )

I have seen some companies do the following too: they embed a snake-oil certificate and instruct people to disable web security; don't do this. Some other companies purchase something like a foolocal.com https certificate (Spotify and Dropbox do this, I believe), but that certificate might be revoked, afaik.

As for CORS, you need to check that the incoming request comes from your own remote machine; otherwise, if you enable CORS for all origins, other sites can scan and get your user's data (eBay was recently doing this, scanning ports: see https://blog.nem.ec/2020/05/24/ebay-port-scanning/ )


> It is more flexible than say Electron because GUI can be anything I want it to be.

Feels like one of us is misunderstanding something. Electron lets you run any web technology - so when users open your app, they are greeted with whatever you can show on a webpage.

You can also have Electron run in the background as a server if that's something you're into ;)


I believe the idea is that if you wanted a client built with C# WinForms, wxWidgets, JS+HTML or anything else, they could all run against the same backend server. So it'd be more flexible than Electron in that you wouldn't need to use web technology at all, if you don't want to.


Right. Also each Electron app "comes with a copy of two large software frameworks, Node.js and Chromium" ( https://medium.com/dailyjs/put-your-electron-app-on-a-diet-w... )

Whereas if you build starting from Node.js (or Deno I assume) you can skip the Chromium part. Instead of packaging Chromium with your app you assume that users have a browser and can use that to talk to your app.

The benefit of using Node.js is you don't have to use a different language for the backend. It helps.

BTW. "Electrino" in the linked-to article seems interesting too.


The additional benefit of this approach is that you can then build your application such that it works with all standards-compatible browsers, such as Firefox.


That's not a benefit - that's a downside. When writing a front-end for Electron, you only worry about one browser and you have a guarantee that it will work the same way across every OS.

If you have to write a front end (website) that works with more browsers, you have to put in more work.


It's a benefit from the point of view of the user and for the openness of the web. We shouldn't rely on features exclusive to a single browser anyway.


> My concern is only will users accept a local server running on their desktop.

This sounds pretty cool -- is there a way for the deno runtime to restrict the server process to only be able to generate the legal types of http traffic that a browser could generate if allowed to open a connection to a specific url (or set of urls)?

If so I'd feel more comfortable about running the embedded server -- ideally the security risk of running the server would be closer to the risk of running an embedded set of daemon browser tabs, rather than the risk of exposing all the unfettered power of a Unix process ...


This reminds me of the server that Zoom used to have. Accepting connections only from 127.0.0.1, alone, isn't enough, since any request from the browser would match that IP, even if the request was being made through an XSS attack.

I'm sure someone with more security knowledge can chime in with a better answer.


What I do is generate a random token, pass it to the browser I spawn, and only accept requests that include the token.
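
A minimal sketch of that approach, assuming a plain Node HTTP server (the module usage and the query-parameter transport are illustrative, not the poster's actual code):

    // Generate a random token and refuse any request that doesn't carry it.
    import { createServer } from "http";
    import { randomBytes } from "crypto";

    const token = randomBytes(32).toString("hex");

    const server = createServer((req, res) => {
      const url = new URL(req.url ?? "/", "http://127.0.0.1");
      if (url.searchParams.get("token") !== token) {
        res.writeHead(403).end("Forbidden");
        return;
      }
      res.writeHead(200, { "Content-Type": "text/html" }).end("<h1>Hello from the local backend</h1>");
    });

    server.listen(0, "127.0.0.1", () => {
      const { port } = server.address() as { port: number };
      // Hand this URL (with the token) to the browser you spawn.
      console.log(`http://127.0.0.1:${port}/?token=${token}`);
    });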


What is a good secure way to pass a random token to the browser? If the token is part of the URL, which is in the command line, then it appears other users can see the token. What I do for DomTerm (https://domterm.org) is create a small only-user-readable HTML file which sets the secret token, and then does a 'location.replace(newkey)' where newkey is an http URL to localhost that includes the secret token. I spawn the browser with a file: URL, so the browser can only read the file if it is running as the same user. Better suggestions?


I use chrome's devtools protocol, so I get a pipe to the browser that I can issue commands over. The token is sent to the browser via that; there is no URL on the command-line for other users to snoop via 'ps' or the task manager.


Pgadmin does something like this. And also makes it easy for you to get another URL with that token, in case you want to open the same page in a different browser.


Wouldn't proper CORS be enough? I guess you would have to avoid putting any sensitive data in GET requests


No, because CORS can only restrict which origins (scheme, domain, port combinations) are able to access the site's data. But you're not even connecting from a web origin but from localhost and you're trying to defend from all access except by your frontends. For this, you need a shared secret between the server and the frontend.

A further limitation of CORS is that certain requests are allowed even if they are not from an allowed origin.

To conclude, you definitely need a secret.


Is there a good simple way to keep it secret?

I assume someone could have a look at the JavaScript in the browser and see, hey, this must be the secret stored here because it is passed to the server on every request. Then write their XSS attack to use that.


The secret would have to be non-static (not baked into the code).


I'm so fuzzy on the details but isn't this what client certs are for?


The cert obviously couldn't be static in this case (otherwise it would be trivial to get the private key).

Creating a cert during "install" probably adds a good bit of complexity (especially if the app has multiple env targets)


You could accomplish this with client certs too, sure. A random secret is a simpler solution in many ways and accomplishes the same goal, though.


One problem is that we developers are lazy and use Access-Control-Allow-Origin: * (wildcard) instead of the actual hostname (i.e. allowing all origins to access the backend).

No modern browser allows access to localhost without that header.

But it's still possible to forge a request using curl or whatever to bypass CORS. So, as the parent post suggests, use a token of some sort.

I also recommend using a strict Content-Security-Policy to stop cross-site injection attacks (e.g. someone adding an image to your page/app with src="/api/cmd=rm -rf /")
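
A rough sketch of those headers on a local Node server (the origin and port values are placeholders, not a recommendation for your app):

    import { createServer } from "http";

    // The one origin your own frontend is served from; never echo back "*" here.
    const ALLOWED_ORIGIN = "http://127.0.0.1:8080";

    const server = createServer((req, res) => {
      if (req.headers.origin === ALLOWED_ORIGIN) {
        res.setHeader("Access-Control-Allow-Origin", ALLOWED_ORIGIN);
      }
      // A strict CSP on the pages you serve limits where scripts and other
      // resources may be loaded from, shrinking the injection surface.
      res.setHeader("Content-Security-Policy", "default-src 'self'");
      res.end("ok");
    });

    server.listen(8081, "127.0.0.1");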


That's a good trick. Thanks for the tip.


Do you have any trouble with users being prompted by the Windows firewall to allow the server to listen on localhost? I like this way of doing things, but I've had prompts like that for my own dev tools and I think that would put off my users.


Anecdotal, but I maintain a mildly popular Spotify player (https://github.com/dvx/lofi) which needs a similar flow for OAuth authentication. I've never had an issue created or complaint about the Windows Firewall popup.


Nice player BTW, never heard of it!


I can see that could be an issue. I guess one solution is to make sure users understand that that is going to happen.

These days everybody is used to their smartphone asking things like "Do you want to allow App-X to use your microphone" etc.


Interesting. Are you sure the firewall prompt comes when the server is listening on 127.0.0.1 as opposed to 0.0.0.0?


I'm not actually sure in which cases the prompt happens these days - I meant that to be part of what I was asking but didn't word it very well. Hopefully listening on 127.* doesn't trigger it!


I did the same but used a server coded in Go [0]. I don't think it's a security problem as long as you take the standard security measures that apply to any web server. Mine is something that just scratches my itch, so I explicitly enabled CORS (if a malicious website can guess the endpoint, it's possible to get my stupid dev logs as JSON during the times this logger is running).

[0]: https://github.com/egeozcan/json-tail


I am trying to do this too. But I would like to make it work on macOS as well. The problem there is Safari does not let you connect to localhost over http. So if Safari is the preferred browser of the user, you need to convince them to change, which is hard.

Or is there a better way there?


I can connect to localhost on Safari Version 14.0.1. Make sure you also load index.htm from the same host (localhost), so you need a little web server in your app that serves that file.

One problem, however, is that there is no easy way to get Safari to run in chromeless/app mode (without browser/URL bar etc.), which you can do in most other browsers using the --app=url flag.


Thank you, I will try. I was actually trying to mix a web app (https://mydomain.com) and connecting to localhost. And yes, a localhost URL in the bar looks ugly... Trade-offs...


See https://bugs.webkit.org/show_bug.cgi?id=171934

I hope they'll fix it soon.


I may misunderstand the thread, but it looks to me like they don't want to fix it, even though it is in the spec, and other browsers do follow it.


In this case, why not just use a Go binary as the backend? Go is natively designed to produce a single executable so you don't need any packaging tools, and it does everything Node.js can do on your OS while using your browser as a GUI frontend.


Except you'll have to develop with another ecosystem of packages and cannot do generic programming "natively".


What I like about not integrating the build step (unlike what Deno does): you allow competition, and the market comes up with great ideas like Vercel did with pkg.

Building TS projects is quite demanding, and if one party monopolizes this important step and thinks it does the best job, I fear it will degrade the ecosystem. Even the TS team says the build system is not the core of their work; they just have one for convenience but encourage the community to compete and complement. Integrating build systems is good for beginners who struggle with them, but for the rest? IDK.

Or in other words: Deno wants to be more than just an opinionated Node-TypeScript distribution nobody cares about, but then they need to create this ecosystem and focus on the core (what their core and value-add are, other than repackaging Node and TS, would be the next discussion).

By integrating the build step they do the exact opposite: they shut down an ecosystem before it can even start. There's a night-and-day difference between good and bad build systems, and only competition and a rich ecosystem can bring up the best solutions.

FWIW, there are tons of ways to compile TS, each with different trade-offs, and it's good that we have these options.


I know providing a fully baked compiler API isn't high on their list (think Babel). I couldn't find any information from the TypeScript team regarding anything you've stated about 3rd-party build systems and the compiler being provided only for convenience.

Do you have anything you can point to from the team about this?


It's right in their wiki, "non-goal" number 4 in their TS Design Goals which is also referenced again in some issues:

[a non-goal] Provide an end-to-end build pipeline. Instead, make the system extensible so that external tools can use the compiler for more complex build workflows.

https://github.com/Microsoft/TypeScript/wiki/TypeScript-Desi...


I think I interpreted this differently than the parent poster


If it's good that we have these options, what Deno provides is just another option for us to choose from. I don't see how this is a shut-down for the ecosystem.


I don't think Ryan wanted Deno to be just another build system for TypeScript but something a bit bigger. And even if it's meant as a build system, there are much better options out there just for this purpose.


Why anyone would look at the litany of mistakes that is npm and Node, then look at Deno and all of the same developers learning nothing except how to implement its "hurr durr URL loading code is cool" approach to security and think "this is a good idea" is beyond me.

I appreciate Deno because I can ask job interview candidates what their thoughts are about it, and when candidates for senior positions don't point out any of the billion obvious reasons it's a stupid project for stupid people, it saves me a ton of time. Otherwise, it's a waste of time and effort, and all you have to do to convince yourself of that is look at the contribution history of the most prominent contributors on github.

I have never, not once, in my life as a developer wanted a project to die so badly.


Hiring people based on their ability to predict your own idiosyncratic hatreds of specific technologies is just a horrible idea. It also ensures your ideas never get challenged and you never improve.


Well, at least it's mutually beneficial, because they sound like a pain to work with.


This is called cultural fit. Predicting idiosyncratic hatred is not enough, one must also share it.


With an attitude like yours, I think it's the candidates who saved a ton of time...


> I appreciate Deno because I can ask job interview candidates what their thoughts are about it, and when candidates for senior positions don't point out any of the billion obvious reasons it's a stupid project for stupid people, it saves me a ton of time.

I hope you're either self employed, or that you run your own company. If not, this comment is a red flag for potential recruiters; you might want to consider editing it.


Based on this I don't think anyone genuinely senior (or not) would want to work for you, with your insane biases.

So win-win: you're happy, they're happy.


Who hurt you?


Can you elaborate? What stands out to you in their contributions?


At least when you pipe to bash it feels gross and wrong. I can't believe they made it a fucking feature


I'm reading your post in Derek Zoolander's voice when he sees the model that Mugato had made.

> What is this? A center for ants?


I'm not very familiar with Deno but I wonder how deno compile compares to pkg when it comes to native modules.

One of my gripes with PKG (and all other node.js packaging tools) has been that it's a pain in the ass to package when your dependency includes a native module, for example SQLite.

Since Deno has a different architecture and works in different ways, I thought I would ask just in case; if there's a solution for this on Deno, that would be great.


Yes, packaging a binary dependency can be problematic. I guess it depends on how easy the library makes it to import the binary in a compatible way (with the mechanisms that pkg expects)


WebAssembly fixes this.


WebAssembly doesn't support file locking for SQLite databases, which would allow other processes to interact with the same database your application uses. It's actually one of my biggest gripes. You can use it in-memory and save the output, or use a different database approach and lose the features of SQLite.

Of course, a different kind of database, similar to say LevelDB, could work in single-process mode.


WebAssembly supports threads and shared memory, which should make it possible to implement software-based locks in the meantime. I definitely see the issue that you're bringing up, especially wrt external interoperability.


For me, I often have a SQLite client connected to my active database while my application runs, to run ad-hoc queries and check the application's state/structure... with a wasm implementation, that is simply not possible.


We need a file system abstraction implemented in a webworker. Do you know if it or something similar exists?


Not really, as the WASM version of SQLite is memory-only and cannot write to disk. Please feel free to correct me if this has changed, though.


I don't know all the issues, but AFAICT wasm is just executing code; the libraries you provide for it are up to you. So, for example, thinking of Node: if I expose `fs.openSync`, `fs.readSync`, `fs.writeSync` and `fs.closeSync`, what's missing that wasm can't write to disk?


Nope... the wasm fs adapters don't allow the flexibility needed to really operate correctly.

It might be best to try alternatives like wrappers around LevelDB as a baseline, which of course isn't SQL and has its own drawbacks.


SQLite still has a workaround for locking on Win95. That could be adapted for wasm.

https://github.com/sqlite/sqlite/blob/610f11de25993960ff616e...


In another comment I mention that I often use a desktop SQLite client to connect to the operational SQLite database an application may be running against. Wasm will not support this, even with the workaround above.


WASI fixes this https://wasi.dev/


This is such a good feature. Go has been great for shipping single purpose binaries (like the CLI for https://fly.io), but I really enjoy writing TypeScript more than Go.


A lot of languages are doing single static binary deploys now. Rust, Nim, Go. It's a really nice pattern.

Static binaries are so much easier than the gross PHP / Ruby / Python pattern that has to ship directories full of files that (usually) have to be put in the correct place.

It's also easier than shipping a runtime like a JVM.

With a single binary, containers get even slimmer.


> A lot of languages are doing single static binary deploys now

As a developer for more than 30 years, I find that statement quite interesting. When I was a kid I was very happy to find a way to compile Basic to an executable like I was doing in Pascal and C++. For me it is the standard way of thinking about applications.

Is it a common experience to actually have to ship many files for one application? I thought that was just common for the web, as it fits how HTML evolved, not for anything else.


Same here, the static linking hype feels really strange, given that it was the only option we used to have back in the day, and having access to dynamic linking on 16-bit platforms felt liberating.


I don't really see the hype, I think it's just the intersection of a few factors:

- Languages like Rust which have an ecosystem that evolves very quickly (and uses complex symbol mangling schemes that would break all the time) wouldn't fare very well with dynamic linking in the first place. What good is dynamic linking if you need a different version of your .so file for every binary?

- Dynamic linking is not quite as useful today a it was in the past. Code size is not usually very significant, either on cold storage or in RAM. Actually for RAM it's cache usage that matters most, and static linking can usually be more efficient here through LTO.

- Over the past ~2 decades a new generation of developer arrived and many of them (in my experience) barely use "low level" compile-to-machine-code languages. They often specialize in scripting languages or languages that require a framework. Some of these developers are now learning Go or Rust and for them the idea of shipping a single ELF binary that packages the entire app might be seen as a novelty or maybe even as an innovation.


> Dynamic linking is not quite as useful today a it was in the past

Linux distributions need to be able to backport security fixes to shared libraries, test them and deliver updates to all users quickly.

With static linking it becomes practically impossible.


PHP, JavaScript, Python, Ruby, even Java and C# didn't (don't?) have a mainstream way to create a single binary with unused code and unused dependencies removed.

Dynamic linking was cool because you could use system dependencies, which people no longer want to use because you can't rely on them, especially for cross-platform apps. It was also about efficiency: RAM (which people stopped caring about), disk space (that boat sailed a looong time ago) and compilation (we have computers 10000x as powerful now).


Java has had static linking support since around 2000, for embedded deployments.

C#, yes it has been mostly dynamic.

PHP, JavaScript, Python, Ruby, don't count, they are scripting languages, bundled with an interpreter.

Still, Python and Perl bundlers exist since around 2000 as well.

Compiled languages like Basic, Pascal, Modula-2, Ada, Eiffel, Modula-3, C, C++, Haskell, OCaml, SML, .... all started in days where static linking was the main option.


> Java has had static linking support since around 2000, for embedded deployments.

Yeah, but for non-embedded deployments i.e. 99% of Java code out there?

> Still, Python and Perl bundlers exist since around 2000 as well.

I don't know the Perl one, but the Python ones definitely aren't mainstream. They're finicky and relatively hard to use and definitely not distributed with the Python distribution, which would make them ubiquitous and well supported.

> Compiled languages like Basic, Pascal, Modula-2, Ada, Eiffel, Modula-3, C, C++, Haskell, OCaml, SML, .... all started in days where static linking was the main option.

Every language in the olden days had static compilation support :-))

That's why we had articles such as these: https://www.joelonsoftware.com/2004/01/28/please-sir-may-i-h... (which I agree with)


> Yeah, but for non-embedded deployments i.e. 99% of Java code out there?

100% of commercial JDKs have support for AOT compilation; what you are getting nowadays on OpenJDK is the free-beer version of it.

> I don't know the Perl one, but the Python ones definitely aren't mainstream.

They surely were mainstream on Windows back in the .com days, especially via py2exe and ActiveState tooling.

> Every language in the olden days had static compilation support :-))

Many of which are still mainstream languages.


I know you're aware of this, but 70% of the languages in your list are niche languages and especially for web development they weren't that popular.

It was (is?): PHP, Javascript, Java, Python, Ruby, C# or nothing.

Especially the dynamic languages are extremely popular, primarily with smaller companies and startups.


The whole point of my discussion doesn't have anything to do in which field the languages are popular.

Zope and AOL Server taught me that those dynamic languages are really only good for OS scripting tasks anyway, but that isn't the subject of this thread.


It is the point of the thread :-)

For many developers, maybe the majority, those languages are their objective reality. They haven't used anything else, they might not ever use anything else.

So things which extend the range of their tools are very much appreciated.

They're not going to shun their existing programming languages because other people don't like them and they're not going to switch to OCaml or to commercial Java compilers, either ;-)


Ruby has some options too, one of them being ruby-packer. Here's [0] how I used it to generate single executables.

[0] https://nts.strzibny.name/making-a-ruby-executable-with-ruby...


I imagine dynamic linking solved a problem for a long time that no longer is: binary size.

With internet speeds of today and immense storage sizes, the main attractiveness of dynamic linking vanished.


Not everyone is blessed with such connections, without data caps.


Even then, in general code size is not a significant part of an application's size, various assets are. I'm working on a 10k lines rust application with 74 direct and indirect dependencies and the resulting static binary is less than 3MB in size (less than 1MB xz-compressed). That would've been a significant amount of storage in the 90s but it's almost negligible nowadays.

And given that shared libraries need to be installed and updated separately and they have to ship the entire code of the library (whereas a static binary can be link-time-optimized to get rid of unused code) it might not always be a win in terms of total bandwidth.


Containers changed the game. 99% of Deno/Go/Rust server software will be running on containers in practice. You're no longer deploying to a system running other programs which may share pages. It's a container, not a process. Dynamic linking in a container is just a vestigial useless step.


Interesting, that’s the opposite of my experience. Containers allow bundling up lots of dynamic dependencies together so that they can be installed as a single unit. Static binaries can just be deployed as a single file, no need for containers.


> Static binaries can just be deployed as a single file, no need for containers.

Sure but most web apps are more than a single binary. There's generating static files, a database, background worker / cache, and more. Then there's wanting to be able to develop that project as a whole on Windows, macOS and various Linux distros as well as deploying it to a specific distro of Linux (most likely). Then there's the distribution of the binary across a network in a reasonable way.

Docker and its ecosystem of tools solves all of those problems once your app is containerized. And if you want to go 1 step further and run a distributed system with load balancers and friends, container orchestration tools let you solve this problem at a level above your application.

And the best part is you can do all of that in the same way with any tech stack.


Not if plugins come into play, unless you want to use OS IPC for that.


The issue I see with this approach is that library code is duplicated among applications. In the limit, this is how you get every Electron app bundled with Chromium.

It's a little more complicated, but modern Node development is a huge step forward from the old days of sadly attempting to get the LINPACK header configuration correct in your C project.


In the case of the above listed languages, it's not just that the application is compiled to a single executable, it's that they're statically linked/truly self-contained. The only thing they dynamically link against is the OS's standard library. So there's no "DLL hell" either.


Do they? As far as I know, Go is the only mainstream language that supports static binaries with normal non-trivial programs. Rust for example depends on dynamically linked libc if you use the standard library. While you technically can statically link libc, it is unsafe with glibc.


I think the belief that Go does no dynamic linking whatsoever is outdated; I seem to recall that they recanted this approach on Mac and Windows since these platforms only offer a stable system interface via dynamically-linked libraries, and trying to re-implement all of it ended up being a lot of work only to end up with broken code after system updates. On Linux I think Go still tries to link mostly statically, but if you use parts of the stdlib (e.g. `net`) you'll still end up dynamically linking to libc unless you instruct it otherwise. If anyone more knowledgeable could speak up I'd welcome it.


Go supports dynamic linking and most Go binaries will be dynamically linked. On Linux, this is mostly due to optional features which can be easily disabled. The optional features are all related to supporting rare and unusual system configurations and do not affect normal operation. Note that cross compilation with Linux as the target does not support dynamic linking, so all Go Linux binaries compiled on a non-Linux OS will be statically linked.

Regardless of how we scope it, Go is still the only mainstream language capable of producing statically linked Linux binaries without libc while still allowing full use of the standard library.


C with musl does just fine.


This is correct.


Rust has supported statically linking to MSVC and musl for a while now but also added static glibc support on Linux very recently:

https://github.com/rust-lang/rust/pull/77386


Is it possible to compile with statically linked musl with the pre-built Rust toolchain on a standard glibc based distribution?



musl is generally not optimized for performance.


Yes, it works well. The main caveat I've found is you need the rustls feature of packages rather than openssl. Though perhaps that was more related to cross compiling for Arm.


Ada, FreePascal, C and C++ do perfectly fine.

Not everyone is using Linux with glibc linking issues.


Using C and C++ without libc excludes the use of nearly all common libraries.


glibc is the problem, other C and C++ standard libraries don't have any issues with static linking since these languages exist for about 40 years now.


libc's other than glibc don't have issues with being statically linked.


Rust and Deno qualify as a "single executable" for what I need, but you're basically correct. It's not quite as dependency free as a Go binary.


We currently do dynamically link to glibc. But we could in theory support musl, which would remove our dependence on the glibc dynamic link.


That's why you statically link musl libc.


> With a single binary, containers get even slimmer.

Not really. I agree on the other benefits of binaries, but our containers usually only have the final layer change (the source code). This means that all the lower layers (Python base image, requirements, etc.) are cached. So we can ship 100 times and add maybe 100 MB of new container overhead. Binaries will ship 100% every time.


It’s funny you say that because when PHP / Python / Ruby were all the rage, most people considered needing to compile everything into a single binary gross.

What’s old is new is old again!


It's a circular argument and a false dichotomy tbh, they both have their place.

Ex: I maintain an 'old' PHP / JS application while building a new Go / TS one. With the old one, I change a file, it gets automatically ssh'd to my dev server, and my changes work instantly. I'm sure that if the PHP and JS had to be comp/transpiled, it would take a minute or so (it's just under 100K LOC, some files 13K LOC).

For the new application, I get the same fast feedback because Go does incremental compiles and create-react-app with TS also does incremental compiles with live reload (without requiring a full page reload).

For production builds, the old is pulled through tools like ioncube and uglify, the new one churns out an optimized binary and webpack-flavored .js files.

I mean if you zoom out enough there's no noticeable difference.


Because compilers - apart from Pascal dialects - were quite slow. With Go, you aren't wasting time waiting for the compiler. So it feels like a script language, but gives you that extra security of a compiler.


Python has had packagers for many years...cxfreeze, pyinstaller, and others. I hear Nuitka works well. Just pointing out the fact, not disputing your statement. I also wish it was more of a first class citizen.


> It's also easier than shipping a runtime like a JVM.

jlink has shipped since JDK 9 and can package all dependencies and the JRE into a single file. Hello world clocks in at about 22 MB.


Yeah, it's especially horrible in Python for webdev. How are you supposed to deploy Django/Flask projects when NOT using Docker or some PaaS?

I haven't figured out anything better than a git pull script to update things. I can't imagine there is nothing better in 2020.


You may be interested in Nix. It can track all the dependencies of your project. So you can copy them to a server, or tarball them, or just run them with Nix on the server. It also has the nice property that you only ship what is necessary, since it keeps track of all dependencies.


Prior to docker some common patterns were to rsync the files up to each server, or compress the files to .tar or .zip on a build server, rsync it to the servers and then uncompress into your desired directory.


Pip freeze?


The single binary still has to embed the runtime somehow, so the container size would be similar?

Except the single bundle trims off unused parts of the standard runtime.


Definitely, I do wonder how it compares in binary size. Especially the 'baseline' size of a hello world.


Currently it's 30-45 MB depending on the OS. We're working on providing a "lite" version of the runtime that doesn't include tools like the formatter or linter. Preliminary work shows that we should be able to trim executables down to about 20 MB.


Currently binary size is a bit larger than the `deno` cli.

We are currently working on reducing size for these `deno compile` binaries though. From preliminary testing we think we can reduce size by around 60% - maybe even more.


Seems like it's big; our Go CLI clocks in at about 30 MB on Linux. Deno with hello world is like 32 MB.

But I honestly don't mind up until about 100MB.


The files are big now. ~30 MB.

Size was not really a goal for the first pass of the feature.


The link shows it's 48 MB for a trivial program (I assume it's trivial - it's called "cat")


Hey, Bartek from deno.land here.

I'll be more than happy to answer your questions about Deno and its development.


Last time I tried Deno there was some friction with depending on npm packages that didn't natively support Deno without vendoring; is that easier nowadays? Can we seamlessly import anything from npm inside Deno-run code and use the Deno stdlib side-by-side with node_modules code? We love the Deno direction and would even be willing to donate $ to grow its development, but for us it's pretty much a non-starter to switch to a different runtime until we can use the wealth of packages available on npm without any additional friction.

The Deno docs here almost seem to purposefully avoid answering this question: https://deno.land/manual@v1.6.0/examples/import_export


That really depends on the concrete package; a lot of npm packages work perfectly fine in Deno, especially if they're available via CDNs like Skypack.

For packages that use native Node APIs there's a Node compatibility layer being developed as part of the standard library: https://deno.land/std@0.80.0/node. It's still lacking a lot of modules and doesn't provide a seamless experience, but with every release it's getting better.
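
For the modules that have already been ported, something along these lines should work (the exact module path and std version here are assumptions; check deno.land/std/node for what's actually available):

    // Hypothetical import from the Node compatibility layer in std.
    import { EventEmitter } from "https://deno.land/std@0.80.0/node/events.ts";

    const bus = new EventEmitter();
    bus.on("ping", () => console.log("pong"));
    bus.emit("ping");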


Thanks, that's useful info, we'll probably wait until there's >90% compatibility with the node APIs then. We're never going to put URLs into our imports because we want to be able to run things offline without depending on 3rd party servers to stay up & consistent over time, but if we can depend on local ./node_modules libs once compatibility is improved then that's great.


> We're never going to put URLs into our imports because we want to be able to run things offline without depending on 3rd party servers to stay up over time, but if we can depend on local ./node_modules libs once compatibility is improved then that's great.

You can achieve the same thing with Deno! By default Deno downloads all dependencies into a central cache directory, but by setting the DENO_DIR env variable to a path you can tell Deno to change that cache dir. And these files are cached indefinitely; Deno will not try to fetch them again on the next run (unless you opt into it with the --reload flag).


Local caching alone loses all the benefits of using a package manager though.

I want to have a central repository of all the package versions with enforced monotonically increasing version numbers and a public, explicit chain of trust. Otherwise it's basically curl | sh with all its associated problems https://docs.monadical.com/s/against-curl-sh

Using URLs means there are no rules enforced, the code hosted at that URL can change out from under you without any warning. A new developer checking out our repo could fetch totally different packages than everyone else on the team and not have any warning about it being different, or any recourse if they wanted to fetch a previous version.


The beauty of Deno is that it's agnostic about where you import your code from. At the moment deno.land/x only allows tags to be published - no semver range resolution - and doesn't allow versions to be removed/updated. nest.land is another popular one, and is built on top of the Arweave blockchain, bringing that chain of trust you mention.

The ecosystem is still evolving, but I expect that it will stabilize around a few generic registries for smaller libs, and larger libs hosting their code themselves in the long run. The point is: while URLs _can_ be a very loosey-goosey way to address code, they can also be made very strict - it will depend on the actual server behind them.

As a side note, npm is already pretty poor at providing those guarantees anyway; I find it interesting that it's usually assumed to be a safe way to install dependencies.


I would probably just vendor them into my own directory structure and import them as a local module.

You lose automatic updating, but I'm not sure I want that anyhow. A script that goes looking for new versions of 3rd party modules would be fairly trivial I think.


Vendoring is a last resort, that sounds like going back to the stone ages of package management. Why would I want that?


I think the advantages far outweigh the downsides. I don't really care how old the idea is.


>You lose automatic updating, but I'm not sure I want that anyhow.

I'm getting on this boat after mongoose changed its type definitions and ruined a whole morning of work for me.


There's nothing preventing you from still using a central repository (or its mirrors). I highly recommend "Deno is a Browser for Code" by Kitson [1] which discusses this subject in more depth.

[1] https://kitsonkelly.com/posts/deno-is-a-browser-for-code/


Hmm, I don't find this article very convincing. From my perspective this is a security and sysadmin nightmare (for the reasons in my link above). Lockfiles actually provide repeatable builds almost all the time; the only times they fail are when depending on non-JS builds like node-gyp or other C++/etc dependencies, where you can't lock system build tooling versions because they're outside the scope of what npm can lock.

The real appeal of Deno for our org is the stdlib, which as a side effect means we can depend on fewer packages. The wholesale removal of the package manager seems like an unwanted pain that will only keep us away from switching and gaining the stdlib benefits.


You are still not thinking of it as a browser. Consider a use case where Deno is the client, not the server. One example is a replacement for piping random curls into bash:

deno run --allow-write=~/.myapp https://myapp.io/install.ts

It works quite well as secure scripting runtime.


I mean that's a neat feature and all, but my primary use for Javascript/Typescript is to write frontend and backend code using the tens of millions of lines of useful library code available in the existing JS/npm ecosystem.

It's strange to me that Deno seems as if it could decide overnight to be a drop-in replacement, but there's deliberate friction designed into the system here to try and push people away from node_modules? The upshot of that choice is that I'm unlikely to switch to Deno until that's changed (and I suspect that's the case for many other companies as well).

I love the direction the node compatibility layer is going in though https://deno.land/std@0.80.0/node, now I just wish it supported normal import statements from node_modules (not just require()). I'm quite excited about Deno overall, just waiting for it to get to drop-in point.


> It's strange to me that Deno seems as if it could decide overnight to be a drop-in replacement

In the 1.0 announcement post the team mentioned:

> For some applications Deno may be a good choice today, for others not yet. It will depend on the requirements. We want to be transparent about these limitations to help people make informed decisions when considering to use Deno.

and:

> Over time, we expect Deno to be able to run more and more Node programs out-of-the-box.

I don't think they've ever claimed to be an immediate drop-in replacement.


Here's a quick example of how to use import maps to get Node-like imports for ES modules distributed on NPM like lodash-es: https://gist.github.com/MarkTiedemann/4ac5837195f52d24bb1dec...
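
The general shape of it (a sketch, not the gist's exact contents; the CDN URL is illustrative, and import maps were still an unstable Deno feature around the time of this thread):

    // import_map.json maps the bare specifier to an ESM build of the package, e.g.:
    //   { "imports": { "lodash-es": "https://cdn.skypack.dev/lodash-es" } }
    // Run with Deno's --import-map option pointing at that file, and a Node-style
    // bare import works:
    import { chunk } from "lodash-es";

    console.log(chunk([1, 2, 3, 4], 2)); // [[1, 2], [3, 4]]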


I know I sound like a broken record at this point, but wouldn't it be helpful if Deno just built something like this in, to automatically generate those import maps given a lockfile:

https://www.npmjs.com/package/@import-maps/generate/v/0.1.0

This way you'd get both the benefits of web standards compliance with the generated explicit import maps, and backward compatibility and ease-of-migration for npm/yarn users.


> Using URLs means there are no rules enforced

How is using a URL different from using a NPM package? In both cases you can specify a module, a version, and need to trust some remote server that it is sending you the correct files.

> the code hosted at that URL can change out from under you without any warning

The same can and has happened with NPM. See left-pad.


The difference is that NPM as an org has a lot more to lose if they mess up everyones packages or serve incorrect versions than some random person's website.

Left pad was promptly fixed! That's an argument for a centralized package manager, not against. If it were hosted on some private server we'd all still be screwed.

https://www.npmjs.com/package/left-pad


> The difference is that NPM as an org has a lot more to lose if they mess up everyones packages

How is that important? Most Deno packages are imported from GitHub (or deno.land). Neither NPM nor GitHub want to lose your code.

> Left pad was promptly fixed! That's an argument for a centralized package manager, not against.

This is not an argument for a central package manager, but an argument for a central package repository.

Deno is already a "central package manager". Similar to NPM in Node development, Deno is the default tool to download code in Deno development. Both with Node or Deno, you can download code in other ways, too. Nobody forces you to load code from URLs via import statements or NPM packages via npm install and commonjs require. (Also, when it comes to executing random code from the internet, Deno has a sandbox. Node doesn't.)

And yes, well maintained package repositories are great. Whether centrally or decentrally managed repos are better is up for debate, though.

In any case, if you want to use NPM packages in Deno, I'd recommend https://www.skypack.dev/. It's "NPM packages from a URL", so, as we have established earlier, it's just as much reliant on trust and potentially unstable as anything in life, but at least their left-pad is patched...


I am not proposing that Deno remove URL support, I think it's great that they allow importing from URL as an option. I just wish they also supported importing from local npm packages installed in node_modules without needing to specify a URL/full path. This would allow full inter-compatibility with the existing packaging ecosystem and allow people to continue using whatever packaging method they prefer.


> I just wish they also supported importing from local npm packages installed in node_modules without needing to specify a URL/full path.

Import maps allow you to do that, see: https://deno.land/manual/linking_to_external_code/import_map...


Ah cool, that helps a ton. The existence of this feature as the linchpin for compatibility with npm is not clear from the rest of the Deno docs though. Perhaps it could be linked to from this page: https://deno.land/manual@v1.6.0/examples/import_export.

Also, it seems as though ./node_modules/ being the default could be assumed automatically though, no?

    "imports": {
        "moment": "/node_modules/moment/src/moment.js",
        "lodash": "/node_modules/lodash-es/lodash.js"
    }
Considering it's the default in all other JS environments, wouldn't that save the hassle of the dev having to define this for their entire tree of JS dependencies? Then Deno would be a drop-in replacement and we could move our whole codebase over to it overnight (once the Node APIs are up to par).


> Also, it seems as though ./node_modules/ being the default could be assumed automatically though, no?

No, there is no magic node_modules directory in any other JS environment, other than Node. Deno aims to be compatible with web standards. Import and import maps are web standards, require and node_modules aren't.

> Deno would be a drop-in replacement and we could move our whole codebase over to it overnight

The reason that you cannot move your existing codebase to Deno tonight is essentially the poor web compat of the existing Node ecosystem.


Node is the standard web environment; even frontend code is built 90% of the time using Webpack or another bundler running in a Node environment and pulling from node_modules.

I don't think you can blame the incumbent tool with complete market dominance for "poor compatibility"...

I really like the direction of the Node compatibility layer though https://deno.land/std@0.80.0/node, I suspect it will be enough to make Deno a drop in replacement soon. Now it just needs support for normal `import` statements from node_modules instead of just `require()`.


Last time I checked jQuery was used on 9 out of 10 websites using JS with 8 out of 10 websites being powered by PHP.

How dare Node not be compatible with the market dominating jQuery! Silly server-runtime not having a browser window object!

> Now it just needs support for normal `import` statements from node_modules

Deno will never ever do that (other than by using standardized import maps) since that's not normal (normal meaning standardized).


I don't understand; jQuery works fine in Node-built environments. Obviously any DOM mutation stuff needs to happen in the frontend when it runs in the browser, but you can absolutely import jQuery and use parts of it during server-side compiling or rendering steps in Node.

    import $ from "jquery"
> Deno will never ever do that

Why would Deno take such an antagonistic approach to supporting the most common setup that everyone uses with npm? Wouldn't it be trivial to just fall back to checking node_modules for named packages? I want to use Deno! This seems like it's deliberately making transitions difficult for anyone using npm.


Sorry, jQuery was a bad example - I remember jQuery not working in Node at all, but that was roughly 10 years ago. Things have improved.

The fact remains that the most popular JS env is the browser. It has APIs such as window which are not compatible with Node and Node has APIs which are not compatible with the browser like __dirname or require. That's why tools such as browserify and webpack exist to bridge the gap.

In Deno the gap is much closer. Obviously, the Deno namespace is not available in the browser (but there's a shim for most APIs, e.g. Deno.writeFile and readFile are implemented with a virtual FS) and some web APIs are not available in Deno (yet), but the compat story is much better.

This is no surprise since web compat is a core goal of Deno. Node compat is not.

> Wouldn't it be trivial to just fall back to checking node_modules for named packages?

No, the resolution algo is not trivial (nor performant). Also, it's not necessary: There is already a standard for how to import code in JS; it's import statements. Import statements do not allow named packages, e.g. import $ from "jquery" does not work in the browser. Except, again, import maps.


> Node compat is not.

:'( That's a shame, it would make such a good node replacement with a great stdlib and Typescript support. I hope they reconsider in the future.


If Deno was only "Add TypeScript support", "Add security capabilities", and "Add URL imports", etc. it would simply be a new version (or multiple new versions) of Node.

These (and other) disruptive breaking changes are about fixing mistakes that cannot be fixed (or at least, are hard to fix) in Node.

Maybe, in a few years, you'll say something like "Oh, I wish legacy Node would be more Deno compatible" because Deno will be the de-facto server-side JS scripting runtime. Equally, it's possible that Deno will fail, but that many good ideas will be incorporated into Node as breaking changes.

Node and Deno as well as their environments can grow further together or further apart. I think it's too early to tell which future is more likely.


_If_ the node library uses explicit file imports, and ESM instead of CJS, it _is_ already possible to just do `npm install` and import from "./node_modules" -- again the problem is that Node's import resolution algorithm is horribly complex and not very explicit, but it's possible to be explicit using Node, and that would make it compatible in Deno
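
A hypothetical one-liner of that pattern, assuming a locally installed package that ships plain ES module files:

    // Explicit file path into node_modules; no Node resolution algorithm involved,
    // so Deno can follow the import directly (works only if the package ships real ESM).
    import { chunk } from "./node_modules/lodash-es/lodash.js";

    console.log(chunk(["a", "b", "c", "d"], 2)); // [["a", "b"], ["c", "d"]]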


Considering you can generate the import maps from a given npm/yarn lockfile (https://www.npmjs.com/package/@import-maps/generate/v/0.1.0) wouldn't it be possible for Deno to provide a command to do this and get the benefits of both backward npm/yarn compatibility and forward web compatibility with import-maps?


I think that is a strong argument for a community immutable cache like Go uses though. Not a centralized namespace.


I think go is the perfect example of how not to do module source distribution. Because of their URL dependency system Golang projects work great as static binaries, but they're almost impossible to distribute as source builds via system package managers. A lot of my arguments for why this is a bad thing are already laid out here: https://docs.monadical.com/s/against-curl-sh (I don't know how many more replies we have left before we hit the HN nesting limit, but this is a thing I care deeply about and I'm down to keep chatting on Twitter or other forums if anyone here prefers)


I think what makes Deno great is that it lets me be decentralized. We did so much with open source only to throw it all away with npm and go back to a centralized corporate entity :(


package.json supports urls, git urls (and branch + commit ids), even local file paths. If you wanted that, what's stopping you?


Using URLs is like having an iframe to somebody else's website on your website.


Maybe... it depends on the amount of _trust_ that you put in the remote domain. My prediction is that the Deno ecosystem will aggregate around a few large repositories that will have good guarantees around immutability and good track records of addressing vulnerabilities.

For large projects like React, lodash, eslint, whatever, I expect some of them will start hosting their libraries on their own networks, like it used to be when JavaScript was only frontend and you would have a script tag importing jQuery directly from jQuery's CDN. The reason it worked was because jQuery was widely known and trusted.


What? How is that even comparable? It's third-party code after all; the same happens when you use a library in any other language


A library in any other language is saved locally and built with your project.

My understanding is these are loaded from URLs if they are not in the cache. If a domain changes hands, you could be served anything.


Presumably you’d use deno bundle for a production build so you’re not directly using the source files.


My understanding is that Deno modules get cached after being used so that you can use it offline.


Why not use Artifactory or a similar proxy so you always have a local mirror?


That seems like a mediocre patch for a problem that wouldn't exist in the first place if you used a centralized package manager.


to be fair, you _should_ probably already be proxying npm and importing from an internal domain (same would be true in any language, really)

The truth is, even with a centralized repository, we're still importing user code, made by humans who may not be well-intentioned or simply not know that their code is vulnerable: proxying within your network and running periodic checks against the content of the local cache would be good practice, no matter where the code came from


That is interesting. Does it mean I will be able to run my Node.js program on Deno? That would be really huge in terms of letting users migrate from Node.js to Deno.


Last I looked, there was no clear crypto story for Deno. Has that improved, or are there any concrete plans for it? Deno was really quite nice when I kicked the tires, but didn't seem quite ready for prime time web development with no crypto functions.


Not yet [1]. You're likely to see WebCrypto first and/or something similar to Node's crypto library. You could probably get away with a wasm build of a C/C++ or Rust library if you don't mind getting dirty.

1. https://github.com/denoland/deno/issues/1891

edit:

partial webcrypto/wasm option already available

https://deno.land/x/god_crypto@v1.4.1


OK, so what is Deno? How is it different from Node? Why did you make Deno given that Node exists?

Looks like Deno can run .ts files without first compiling to .js. What are the other benefits?


Deno is a JavaScript runtime much like Node. For the reasons why Deno was created, I recommend "10 Things I Regret About Node" by Deno's author [1].

Deno is different from Node in several aspects; most notably:

- Deno supports only ES modules; there's no built-in support for CommonJS modules

- Deno's APIs are all promise-based

- Deno does not use NPM; instead it can pull code from any URL, much like browsers do

- Deno has a built-in permission system that by default runs your code in a full sandbox and lets you opt into breaking out of the sandbox (e.g. to read a file from disk) - see the sketch below

- As you've mentioned, Deno can run .ts files without an explicit build step

- Deno comes with a full toolchain in a single binary (formatter, linter, test runner, bundler, doc generator)

[1] https://www.youtube.com/watch?v=M3BM9TB-8yA
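
To illustrate the permission sandbox from the list above, a minimal sketch (the file name and flag value are just examples):

    // read_config.ts
    // `deno run read_config.ts` fails with PermissionDenied;
    // `deno run --allow-read=./config.json read_config.ts` prints the file.
    const text = await Deno.readTextFile("./config.json");
    console.log(text);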


Node.js's module system (kinda like CommonJS) is what made Node.js popular. ES modules take away features like scoped module support and dynamic import; they're very complicated and allow bad practices like include files.

Promises are very complicated compared to first class functions. What makes JS/Node hard to grasp is that it's async. Async is an (often unnecessary) optimization, with tradeoffs.

Loading modules from URLs is a cool concept! ES modules help here, but you could also have a package-list file that lists all dependencies of dependencies as well as download mirrors, or hashes with peer-to-peer distribution.

A permission system is nice, modules should not have system access by default.

Not everyone wants to use TypeScript. It will probably become obsolete once optional type annotations get added to JavaScript.

An opinionated toolchain is nice, but should be optional IMHO.


ES Modules have dynamic imports (https://github.com/tc39/proposal-dynamic-import). The proposal is at stage 4 and has been available since TypeScript 2.4. We're on version 4 now, so I'd expect Deno to have it.
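For example (module path is hypothetical), a dynamic import looks like this and resolves to the module namespace object:

    async function loadMath() {
      // the module is fetched and evaluated lazily, only when this runs
      const math = await import("./math.ts");
      console.log(math.add(1, 2));
    }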

I'm not sure what you mean by "bad practices like include files" so I can't comment on that.

I've found most Javascript developers prefer the await syntax with promises to using callbacks. It gives the code the appearance of being synchronous with the ability to do things more asynchronously if you need. I haven't encountered many who actively prefer the callback style. It gets unruly fairly quickly.

I can't see optional type annotations ever being added to Javascript. They would have to be checked at runtime which isn't something I'd imagine browser vendors wanting to implement.

The feeling that I get is that the standards committee is trying to bring Javascript to be the best dynamic language it can be and if people want more comprehensive guarantees then there are excellent tools like Typescript which give that option.

The fact there have to be multiple implementations of the standard in the various JS runtimes makes it a hard sell to evolve Javascript too far.


Really like async/await. The only case I've found for callbacks is the top-level function call in a script. Calling ".then()" is useful since top-level await is still not a thing, and may never be.


Top level await is at stage 3: https://github.com/tc39/proposal-top-level-await

It's in TypeScript 3.8, Node 14.8, and probably in your Babel setup. I probably won't get to use Node 14 in prod for a while, but I get by with a `main` function that has everything in it and gets called at the bottom.
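A minimal sketch of that pattern (the body is just a placeholder):

    import { promises as fs } from "fs";

    async function main() {
      // all the async work lives in here
      const pkg = await fs.readFile("package.json", "utf8");
      console.log(JSON.parse(pkg).name);
    }

    // no top-level await available, so kick things off with .catch() at the bottom
    main().catch((err) => {
      console.error(err);
      process.exit(1);
    });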


It's important to remember that while, yes, CommonJS made Node popular, it did so because it filled a void in the Javascript syntax and specs. There was nothing to formalize the concept of a "package" back then.

That's not true any more, ES Modules have made it into the spec, so that's what Deno is using.

As for package-lists, the current convention in the community, if you have a decently sized library, is to have a `deps.ts` file where you re-export all of your dependencies, making it an equivalent of package.json and helping with upgrading dependencies across a codebase.
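For example, a `deps.ts` could look like this (the module list is just an illustration):

    // deps.ts - the single place where versions get bumped
    export { serve } from "https://deno.land/std@0.80.0/http/server.ts";
    export { assertEquals } from "https://deno.land/std@0.80.0/testing/asserts.ts";

    // everywhere else in the codebase: import { serve } from "./deps.ts";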

TypeScript is already optional in Deno! It will run any .js file just fine, and you even skip the compilation part.


> Not everyone wants to use TypeScript.

You don't have to use TS. Deno runs plain JS, too.

> [TypeScript] will probably become obsolete once optional type annotations get added to JavaScript.

What makes you think that type annotations will be added to JS? I think it's far more likely that browsers and other runtimes will natively support TS as a separate language rather than JS evolving to become TS.

> An opinionated toolchain is nice, but should be optional IMHO.

It is optional. You don't have to run deno lint, deno fmt, deno test, etc. But at the same time, they are pretty good tools so you might want to try them.


> What makes you think that type annotations will be added to JS? I think it's far more likely that browsers and other runtimes will natively support TS as a separate language rather than JS evolving to become TS.

It has already been tried with Dart. Dart was made because JS lacked a type system, preventing further optimisations. Support for Dart was added in Chrome.

Another popular JS transpiler is CoffeeScript; most of its syntax is now in JavaScript.


I have no data to prove this, but I feel like TS is far more widely used than CoffeeScript and Dart have ever been.

Support for Dart in Chrome was added before Dart was popular (if you can even consider it popular at all). Since TypeScript is already popular now, I think if Chrome added support for stripping the types and running TS code as JS, most devs would welcome that.


Personally I appreciate being able to choose my linter, compiler, dialect etc. I also tend to prefer distributed solutions. Deno running the entire environment is a negative for me, at least for now. To me, it just shows an approach of ignoring what already exists and reinventing the wheel.

I've tried to get Deno to work before in production, but it had so many compatibility issues last I tried it would take weeks or months to refactor things so it would work.


I did too, but the amount of compiler configurations out there kills me.

If you write a library, you need to support several export types for node packages. In your `package.json` you must include `main`, `module`, and `exports` fields and provide two compiled outputs, one CommonJS and one ES modules.
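Roughly, the manifest ends up looking something like this (just a sketch, paths made up):

    {
      "name": "my-lib",
      "main": "./dist/cjs/index.js",
      "module": "./dist/esm/index.js",
      "exports": {
        ".": {
          "require": "./dist/cjs/index.js",
          "import": "./dist/esm/index.js"
        }
      }
    }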

Then you need to worry about mutating import paths to include the file extension, which can cause trouble when you keep the commonjs and esmodule files in the same folder.

Also, the ES module loader in Node doesn't have access to things like `__dirname`, so certain things can break.
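For what it's worth, the usual workaround is to derive it from `import.meta.url`:

    import { fileURLToPath } from "url";
    import { dirname } from "path";

    // ESM has no __dirname, but it can be reconstructed
    const __dirname = dirname(fileURLToPath(import.meta.url));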

not to mention node_modules...

It's like IE support but on the back end.

Then you step into the front end and it's another whole layer of chaos.

Give me opinionated compilers with minimal configuration and let me write code.


> If you write a library, you need to support several export types for node packages.

This is a hassle, but it didn't use to be true for Node - you could have said all of the above about CommonJS: wasn't it nice to have a single opinionated standard?

In the short term, deno avoids this by dropping backwards compat, great! But in the long term, I don't see how it doesn't end up in exactly the same place as soon as the next big change to JS modules comes out, or the next new build environment or wasm integration becomes bigger or...

Unless they have a fundamentally different strategy for the future (either 'we will never evolve' or 'we will evolve with ecosystem-wide breaking changes') they're going to end up in the same state as node today, eventually. I haven't seen any discussion of such a strategy at all. It's just a temporary reset - unlike node, they get to break backward compat and support the One True Format because they're new, that's all.


> Personally I appreciate being able to choose my linter, compiler, dialect etc.

I completely agree with that! But on the other hand, with the plethora of tools available, it can be quite overwhelming to configure them all, especially for new users.

> I've tried to get Deno to work before in production, but it had so many compatibility issues last I tried it would take weeks or months to refactor things so it would work.

Work on a compatibility layer with Node is ongoing [1]. With every release some new API becomes compatible, but it's far from done.

[1] https://deno.land/x/std@0.80.0/node


> Personally I appreciate being able to choose my linter, compiler, dialect etc. (...) Deno running the entire environment is a negative for me, at least for now.

Nobody forces you to use the deno dev tools. You can still run eslint, prettier and closurescript, if that's your sort of thing.

Personally, I prefer deno lint over eslint and deno fmt over prettier since they are much faster. I'm even using dprint (which is a standalone project for code formatting, https://dprint.dev/) in Node projects.

Similarly, before deno test, I created my own deno testing tool. Now I use deno test instead since it's just better - not because anyone's forcing me to use it.

The integrated dev tools are a convenience feature. I hope that's kinda obvious.


What's your plan for importing other typescript packages, natively?

Last I remember, every project/library has vastly different tsconfigs.


What are you wearing right now?


We use vercel/pkg to distribute our product as a standalone executable that runs in on-prem Windows environments. Our product is actually comprised of multiple NodeJS servers that are spawned as child processes in one master Node process, and that module gets built using pkg. We also have a Windows installer that configures a Windows service to run the executable/keep it up. It's proven to be a really simple way to distribute our app to the enterprise, and it's been working for a few years. It doesn't really protect source code, but it provides a decent enough level of obfuscation for our needs.


We’ve been following a similar process for our internal tools and have found it to be a good solution. Manually including native libraries is probably the only lousy part. Out of curiosity, what are you using to achieve the windows service installation? We’ve been using nssm, which has worked okay, but I’m curious if there’s a better way of doing it.


Yep, we use NSSM as well, it does the job.


I wonder if it's possible for TypeScript to be a .NET CLR supported language. It would be great to have a powerful scripting language for the .NET Ecosystem.

I know C# can be used for scripting, but I want something like Python, an easy to use, dynamic language that has the performance of the .NET VM


Well, there's IronPython - https://ironpython.net/


YMMV but I think F# fills that gap nicely


F# is interesting but still pretty different from TypeScript.


creator of TS was a maintainer of F# at microsoft when he created it. I see lots of parallels in the type systems.


> I want something like Python

https://ironpython.net/


It exists and is called Powershell


deno seems really amazing. This is essentially single file distribution of a secure runtime for application code -- that is a lot of platform-capability-bang for the distribution-reach-complexity buck! This has to already be the best server container format for typical application code in terms of the security possibilities doesn't it?

I'd almost like to see deno grow some kind of puppeteer based browser api as server-app platform --

http request -> deno server process runtime (with secure sandbox) and puppeteer-like handle to the client browser for state transfer -> client ui render ... it would be quite interesting to think of the client browser page tabs as a "child process launched by the server" rather than as a stateless request from an http client -- as most server rest api architectures tend to push you towards ...


Once the ecosystem grows a bit, Deno will be a very good alternative to Node. This particular feature is great for simpler deployment.


Just throwing it out there for visibility, ncc will compile a TS entrypoint down to a single file as well, without having to use Deno https://www.npmjs.com/package/@vercel/ncc

Edit: I completely missed that this Deno release packaged the runtime as well, disregard this as an alternative! Guess I’ll eat the downvotes I deserve :P


Not sure ncc is an equivalent. I think nexe or pkg are comparable: they bundle a runtime into the exe, whereas ncc just reduces the code down to a single distributable code file, meaning you still need node installed to run it on the target host.


You are completely correct


This still requires a node runtime. As far as I understand, the deno usage creates a single executable - batteries included.


Yep, that’s the detail I missed. Thanks for raising it.


> without having to use Deno

But you need to use ncc? What's the relevant difference?


You need `ncc run` to run the generated file.

https://github.com/vercel/ncc#commands


I was wrong.


I really like this.

I've been thinking a lot over the last few years about Docker. Arguably docker is just another abstraction for "statically linked executable". But we've had static executables for years; and they work well, and the ABI for the linux kernel is very stable. So increasingly I'm not convinced docker is worth it, compared to just building and deploying bundled executables. And executables can be run anywhere, they don't need a separate testing environment, they can be debugged easily[1], and so on.

[1] Well, if you like systemd or have a reasonable replacement.


Full release notes can be found at https://deno.land/posts/v1.6 :-)


Node is almost perfectly matched to the "Oops, well, too late now" design ethos of JavaScript itself. Nobody was stupid. We humans just can't really predict what will work out and what won't in the future, and this was one of those frustrating cases like carving in stone, where every mistake you make is permanent.

But a combination of various factors made the web an enormously impactful medium. It's too important to take the approach of "well, let's just add some good stuff to the bad and live with it" where we don't have to. We have to in the browser, but we don't have to on the server. I want to see the "benefit of hindsight, rebuild it better" design of TypeScript matched with a server-side equivalent, which looks like Deno.

I hope Deno succeeds.


> Node is almost perfectly matched to the "Oops, well, too late now" design ethos of JavaScript itself.

Anything even remotely successful has to commit to its previous choices, even when they were unfortunate (did anybody say C++?).

One of the reasons why Node had the impact it had was that it used plain JS and committed to supporting the standard language.

I myself prefer writing in statically-typed languages, but the JS direction is frankly commendable; ES6 looks nothing like OG JS, and the fact you can run basically the same code on the browser and on node is a massive bonus.

With that said, I hope Deno succeeds as well, having more choices is a good problem to have!


TBF, on the browser side, they have been slowly fixing some of the worst "oops" stuff. First they added "strict mode" and now they have type="module", both of which turn off a lot of bad behaviors. Deno is kind of like that for the backend.


I find it interesting that your example for "benefit of hindsight" is TypeScript. TypeScript is a superset of JavaScript, so it's literally "just add good stuff to the bad and live with it". Am I misunderstanding something?


No, you're making a good point, but my meaning was that TypeScript is a language that is always (and has always been) written as an improved, benefit-of-hindsight, good-parts-only language. It does of course allow any old JS to be used, but that has always been for the purpose of allowing code already written in JavaScript to be called from code written in TypeScript. No TypeScript project, library, or tutorial is ever written like old JS, so TS's allowance of old code that was already written as JS is like a C compiler's escape to assembly, where inline assembly is valid code handled by the C compiler but isn't really C.

Traditional JS with new features added, or traditional Node with new features added is the opposite case, where the old standard is being extended, unlike the TS or C+asm case, where the new standard has a mechanism for calling back to the old if necessary. (Deno could still be what I'm talking about even if they added a Node-emulation-library to allow it to call modules written for Node, but I have no idea whether such a thing is planned.)


Unfortunately, that's not a very accurate analogy. TS also inherits all of JS runtime semantics unchanged, and there's just as much if not more wrongness there.


I wish for a version of TS where I could have === automatically rewritten to use _.isEqual, that way [1,2,3] === [1,2,3] would return true.


In clojure(script) equality works this way. Values are compared and not instances. It makes the language a little bit higher level and more ergonomic. Love that feature.


Also, it's not like TypeScript is without its own set of legacy cruft that they haven't been able to let go (yet). Namespaces and decorators with old, deprecated spec come to mind.

I love TS as much as the next person but I don't think it's particularly good example to use here.


Know the language and use automatic linting to exclude the bad parts without effort. Here you go, Javascript/Typescript is suddenly a decent language too.


I'm pretty sure multicore was a thing in 2009, though.


It was, and since 2009 the recommendation has been to run an instance of nodejs per CPU core. The justification is that if you're already scaling your app between servers, you shouldn't need a separate mechanism to scale across multiple cores in a single server.

I'm not sure when the cluster API was added, but it's been in nodejs's core for a long time. (Not that you need to use it, but still.)

https://nodejs.org/api/cluster.html
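A minimal sketch of that pattern, one worker per core (the port number is arbitrary):

    import cluster from "cluster";
    import http from "http";
    import os from "os";

    if (cluster.isMaster) {
      // fork one worker per CPU core; the workers share the listening socket
      for (const _ of os.cpus()) cluster.fork();
    } else {
      http.createServer((_req, res) => {
        res.end(`handled by worker ${process.pid}\n`);
      }).listen(3000);
    }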


yeah, `pm2` even made this painless many years ago


What are you talking about?


Does this offer a speed increase vs running the code directly using $ deno test.js (not sure what the exact command is)


Not really (at least not yet, I think). This simply bundles the Deno binary and the script (I think pre-compiled, as in TypeScript -> JavaScript, then possibly as a pre-compiled AST). This is why the output binary size is roughly the original Deno binary + the script size.

So it's functionally equivalent to running the script directly with deno run test.js


Correct. We are currently working on reducing size for these `deno compile` binaries. From preliminary testing we think we can reduce size by around 60%.

Regarding speed, we are investigating V8 snapshotting of the user code, which would give it a great boost in startup time. Actual runtime performance would be the same.


Would it be theoretically possible to end up using this as sort of a scripting language for Rust or whatever? I'm imagining Deno somehow getting compiled down to Rust and then running at Rust speed.

I basically want low level performance without writing in a difficult language


No this is not possible. JS is too dynamic for that to work.


Not really, I think. Startup of a "hello world" seems to be 5ms faster on my Windows machine: https://gist.github.com/MarkTiedemann/c2f4013c3a60bb28df5005...


What's up with the page faults in the timing details?


Not much. Similar to the context switches column, a couple of thousand page faults are quite normal for a quickly executed Windows program. For comparison, listing all running processes, you'll hit ~1-10k page faults (see timeit benchmark results at the top of the readme: https://github.com/MarkTiedemann/fastlist).


No, there's not really any optimization. It's more of a bundling convenience. It still includes V8.


Whoa!

Potentially this can make React Native hit the same performance as Flutter+Dart.

As the Dart team claimed - https://hackernoon.com/why-flutter-uses-dart-dd635a054ebf

>Dart is one of very few languages (and perhaps the only “mainstream” language) that is well suited to being compiled both AOT and JIT. Supporting both kinds of compilation provides significant advantages to Dart and (especially) Flutter.


What this deno feature does is not AOT, but simply packaging the JS as an embedded resource in the executable. JS is still parsed and compiled at runtime.


I guess optimizations like storing the v8 compiler cache format in the binary should be possible.


Question for those who know better: wouldn't this better be accomplished by some kind of archive/bundle? ie keep the packaging structure intact, but just archive it?


Wow that’s really cool. I have a lot of hope for Deno, seems very promising.


I wish there was a NestJS equivalent to Deno, then I’d consider using it for projects


Check out https://github.com/liamtan28/dactyl

Its based on Nest


Whooooo! Lets go!

This is huge!


How to make it work for mac?


A 47 MB executable for a simple cat command? Huh, there's huge room for improvement indeed


You should never use deno to make small programs like cat where you don't absolutely rely on deno features. Tons of other languages / tools can give you 100kb and faster binaries.


Yeah, and the executable is 47 MB. You can do the same thing with Go:

    package main

    import "os"

    // cat: write each file named on the command line to stdout
    func main() {
        for _, s := range os.Args[1:] {
            o, _ := os.Open(s) // error handling skipped for brevity
            os.Stdout.ReadFrom(o)
        }
    }
and the executable is 1 MB.


This would be more comparable to packaging a Java program into an executable, which would also have to contain the JIT. I don't think it's fair to compare a JIT'd language to an AOT'd language.


A Java program can be AOT compiled to native code, no need to package a JIT.


Sure, and you can do something similar with C and obtain a ~10 kB executable, or even less if you put in some effort. Or write that in Java and need a complex setup including the JRE and a bunch of packages to support it.

The point is that this makes deploying a Deno application simpler. Binary size is kind of the wrong metric to worry about.


My point is, what is even the point of using Deno? If it's for static typing... well, Go has that.

So what is the benefit?


> So what is the benefit?

A better type system (covariance, dependent types) than Go's, plus generics; a richer ecosystem; deploying the same codebase on the server and in the client...


What's even the point of using Go? If it's for static typing Rust has that

So what's the benefit?

Same can be said about any language really, it's personal preference


If you wrote a browser client in Javascript / Typescript, then it makes sense to write a backend in it as well since it's easy to share code and data between them.

Now you can also write CLI tools that are trivial to deploy and can also share code with everything else.

This is the same motivation behind other languages that do the reverse; compile to JS / WASM.
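A trivial sketch of the kind of sharing that becomes possible (names are made up):

    // shared/validate.ts - imported by both the browser bundle and the server
    export interface SignupForm {
      email: string;
      password: string;
    }

    export function validateSignup(form: SignupForm): string[] {
      const errors: string[] = [];
      if (!form.email.includes("@")) errors.push("invalid email");
      if (form.password.length < 8) errors.push("password too short");
      return errors;
    }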


Guessing here - is there any value from a team / engineering point of view to having shared code for custom classes / data structures between client and server? Here I assume client to mean browser


Why is Go's binary so big?


Standard library and runtime.


I see a go binary, stand alone. And a lot of symbols. Jesus, that's a lot of symbols.


Yes standalone, all languages have a runtime, even C.

It gets packaged inside of the executable.


If the 47 MB executable has no dependencies other than the kernel, then in many scenarios it'd be a lot easier to deploy than if you had to separately install a language runtime.

Of course, Node.js can also do this with pkg, and also I don't know whether executables produced this way really are dependency-free (although depending only on widely-available shared libraries would be almost as good).


> then in many scenarios it'd be a lot easier to deploy

It's not. Most advanced deployments nowadays use container orchestration, where deploying is just as easy. For simple deployments (e.g. SSGs) there are enough products on the market.

Integrating the build step hides it (good for beginners) but creates many other problems in the long run, if all we're talking about is repackaging the runtime.


I think you're underestimating the diversity of environments out there. Not everyone's using cutting-edge deployment tech; lots of folks are just SSHing or RDPing into a physical or virtual server, copying stuff there, and running it. Certainly, that's how things were done at my last job. And it can get worse; some of these environments are locked down in some way or another, by security policies that limit what you can do. In those environments, having the executable be fully self-contained is really helpful.

(For the record, I am a proponent of things like containerization and serverless, and generally try to bring them into use wherever I can. This doesn't require me to ignore the reality that lots of places don't use them, and that this will remain true for a long time to come.)


> Not everyone's using cutting-edge deployment tech; lots of folks are just SSHing or RDPing into a physical or virtual server, copying stuff there, and running it

Maybe a decade ago. Tbf I don't know anyone who deploys like this in 2020; people use either Docker and/or k8s or a stupid-simple netlify/surge/vercel push. Then there's also serverless stuff, but yeah, you get the idea.


I’m not sure this should be encouraged...


Please help me to understand: if I deploy my apps as Docker images anyway, why would I need this? Deno 1.6 just packages the runtime, creating a huge file - still smaller than a Docker image, but with the latter I have a better deployment experience, meaning there's a huge ecosystem and tooling around it. No rant, just trying to get what I'm missing.


Even server side, not everyone uses Docker. If you’re deploying to EC2, in house hardware, whatever, a single executable is simpler. And even if you’re building a Docker image, building the image itself is still a bit simpler - just pop the executable in there, and have the Docker entry point execute it, that’s it.

Then obviously for CLI tools, this is SUPER nice.


Yes and no, these are amenities but they are really small, and IDK if they justify hiding/abstracting away an important build step.

> Even server side, not everyone uses Docker.

IDK, I've tried to find alternatives over the last few years, but for a slightly more sophisticated app you can't ignore images and container orchestrators like k8s. And the latter is still easier than anything I've seen and has by far the biggest ecosystem. If I want to host some minimal app, I just push an SSG to netlify/surge/vercel; it's not an integrated build step which makes my life easier.

> just pop the executable in there

Otherwise you would just need one more line in your build file (npm install).

> Then obviously for CLI tools, this is SUPER nice

Also, yes and no. Deno "binaries" have huge file sizes compared to an npm install -g, and rarely used CLI tools can be fired off with npx, so which problem exactly is solved? That I can offer CLI tools to folks who won't have node installed? Then I'd rather write my CLI tool in Go and offer an appropriate package size.

I welcome competition and hence Deno, but I think this feature doesn't fulfill any (relevant) use case. Only beginners who struggle with the build step (which can indeed get hairy) profit from this design decision, but slightly more advanced users will miss the control they had before.


I like K8s too, especially for a service oriented architecture, but there are tonnes of other deployment targets out there. I'd bet the overall percentage of server side software running on K8s is in the single digits, though that's a pure wild-ass guess. Hours ago I just finished debugging an outage where requests to one K8s service, through K8s ingress, slowed down 10x after an insignificant deploy, and then we cycled the pods (without changing the code) and it sped up again. No idea why - K8s is complex, and many ppl choose not to take on that complexity. TONNES of people like being able to deploy a single executable to their servers; it's part of Go's popularity, part of the popularity of fat jars in Java land (and those still need a JVM!), etc.

As for CLIs, the Deno executable overhead is about 47 MBs. Not nothing, but also ... that’s like a few extra seconds of download time for the tool, and insignificant disk space when people have hundreds of GBs on their laptops. If I’m writing some sort of command line tool, the tool being 50 MBs bigger probably does nothing to hurt adoption. But it having zero external dependencies WILL help adoption, vs. npx and screwing around with proper node versions and whatnot.


I think you already answered the question. You don't need to introduce the docker cli, docker daemon, a container registry, etc. Not saying there's anything wrong with docker, but having options for application packaging is nice!


Exactly. The fewer moving parts, the better.

Even if you still use Docker, a container wrapping a single binary is simpler than a container with dependencies and source files.

It’s also good for closed source use cases (blasphemy, I know).


Ok, the Docker client stuff is not always exciting, but once you want to deploy something small (say, an app server, a DB, and something like nginx or Traefik) you need some orchestrator, e.g. k8s, and then you need images again. If you prefer containerd over Docker, also good.

What I am saying is: which orchestration and deployment system favors single executables atm and has a huge ecosystem? You still need to create images and do double the work. I like real binaries like the ones Go creates, but repackaging the runtime doesn't sound like a sophisticated idea; it rather makes the black box even bigger.

As a sibling said, for client-side/3rd-party apps, yeah, this might be a nice-to-have, but that space has other challenges entirely.


CLI tools and other things that don't get "deployed" would be one use case


You are talking about server side use cases mostly. This executable will come handy for variety 3rd party apps.


OT: After reading and discussing this feature in this thread, I realize it's not about the feature or whether it's good or bad.

This Deno update and the whole thing shows once again that we want a node successor but Deno as great as it sounds doesn't offer enough benefits or is 10x better than just using node + Typescript in order to leave latter and their huge ecosystem.

Even worse, it creates the notion that the Deno team desperately tries to climb back on stage and get our attention with minor improvements. Maybe I am ignorant but Deno feels just like an opinionated node/Typescript distribution with too little improvements but not like the successor we hoped for.

Besides, I wonder if the Deno team has solved all the performance issues which popped up the last time I read about Deno. There were some debates with the ws community, but I can't remember the details anymore.


> Even worse, it creates the notion that the Deno team desperately tries to climb back on stage and get our attention with minor improvements.

That seems an uncharitable interpretation. Ultimately they're creating tools for our benefit. They may or may not be useful to you personally, but the creation of value should still be applauded, not dismissed as attention seeking.


> Deno as great as it sounds doesn't offer enough benefits (...) Deno feels just like an opinionated node/Typescript distribution with too little improvements

Node is 11 years old. In the beginning, it was rough around the edges, too. I think you need to be a bit more patient until Deno reaches a similar level of maturity.


When node came out it was a perfect storm: Ryan did a brilliant job, right timing, right product, laser-sharp focus and x times better than the past (I liked node right from the beginning) and he was fast. All things I miss from Deno.

But I don't blame Ryan, he is a great guy, created the biggest server-side dev ecosystem and it's hard to top such an achievement but at least he tries and this is why I like him.



