Hacker News
Deno raises $21M (deno.com)
608 points by 0xedb on June 21, 2022 | 382 comments



From the post:

> For example, it is integrated with GitHub in such a way that on every push it will provision a new server running specifically that code, deployed to the edge, worldwide, and persisted permanently. Want to access the code your app was running a month ago at commit f7c5e19? It will be served up instantly at a moment's notice. It costs you nothing to have that bit of JavaScript responding to requests indefinitely.

These things sound great and almost a dream come true but how realistic is it to consider being able to do this in most web apps? As soon as your application uses a SQL database and you have database migrations then you're out of luck because a commit from 2 months ago might expect a different database schema than the current version and while it's common to migrate in backwards compatible ways, the backwards compatibility is usually only temporary until you finish migrating from A to B.

Long story short, this sounds cool but in practice is really only applicable to static sites or dynamic sites where you plan to keep a version of your database and code base backwards compatible from day 1 to current day (which I've never seen done in any app developed over the last ~20 years of freelancing for many different companies). The post mentions "The open source Deno runtime shows how clean and productive a modern, batteries-included, programming environment can be" so it sounds like they expect you'll be running database backed apps and not only static sites.


I hope that app isn't actually running on a server somewhere for each commit indefinitely, and that it's instead intended for serverless setups, like the Netlify edge functions they mentioned...

We could do with some more awareness of energy & compute resources in this industry; as decoupled as it may be from the real world, clicking the button to deploy an EC2 instance somewhere does use real power and contributes to hardware wear.


It's serverless functions. So they launch those instances on the fly when a request hits that URL, and the instance shuts down after some inactivity. Same with all the 50+ edge locations they provide: they only launch an instance in a particular region when a user hits that region.


I'd spawn a new DB for that instance too. It can still go a long way to give you a look at how your application worked in the past.


Am I mistaken in thinking that’s totally ridiculous?

Let’s charitably say you can run your app with a 1GB cut down snapshot of production.

You’re going to do what, save a snapshot of the database for every commit? Every schema migration?

…and then provision that up in a running database server… which is notoriously not container friendly…

? It doesn’t sound very plausible to me.

Our database is ~1TB, and the app runs with maybe a 10GB cut down version of it, which is a pain in the ass to deal with, just for example, for local dev environments.

I agree with the parent post; it’s cute, but I can’t imagine historical per commit deployments being useful for me, honestly.


What do you need production data for?

Just fire up a fresh DB with some sensible default fixtures.

Or pause an integration test mid-way.

Or call some factories from the REPL.

The last thing in the world I want to do development with is production data. Production data is the data of last resort when nothing else can repro.


> Let’s charitably say you can run your app with a 1GB cut down snapshot of production.

Plenty of apps can run from an empty database, if the data storage is focused on resources you, the user, have created for yourself.

But I imagine that people with their heads in the future are imagining you'll use one of these new database startups (the names escape me) to manage snapshots, seeding from anonymised production data, etc.


Historical deployments would be useful to me because our app rarely has database schema changes in the hot path.

Most of our effort is in user experience and post processing.

It would be nice to compare the experience of today's version with one from 6 months or 1 year ago for certain components.

We do already have a historical design storyboard but seeing it with live data would be more useful.


You can use a seeder script to generate fake data. If you are deploying something that is not production you shouldn't use real data, especially if it contains personal data.


There are ways to version your database schema. Then you just need to make sure your code version is tied to your database version.
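As a rough sketch of what "tied to your database version" could mean in practice (the constant and function names here are hypothetical, not from any real framework), the app can refuse to boot when the schema it was built against doesn't match what the database reports:

```typescript
// Hypothetical sketch: refuse to start when the code's expected schema
// version doesn't match the version recorded in the database.

const EXPECTED_SCHEMA_VERSION = 42; // what this build of the code expects

function checkSchema(dbVersion: number): void {
  if (dbVersion !== EXPECTED_SCHEMA_VERSION) {
    throw new Error(
      `code expects schema v${EXPECTED_SCHEMA_VERSION}, database is at v${dbVersion}`,
    );
  }
}
```

A historical deployment would then fail fast instead of corrupting data, though (as pointed out below) it still wouldn't be able to actually serve requests against a newer schema.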


You can't restart all of your web services atomically. What happens if two web services are running two different versions for a couple seconds?


If you want to get super fancy, always compatible database migrations. Have both versions live, migrate to the new schema in 3 steps:

1. Old version

2. Old version/new version both live.

3. New version live after step 2 is confirmed as fully done.
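The step-2 code is the tricky part: it has to read and write both representations at once. A minimal sketch of that dual read/write phase, using a hypothetical "split full_name into first/last" migration (all field and function names are illustrative):

```typescript
// Expand/contract step 2: the app writes both the old and new columns,
// and reads the new ones with a fallback to the old, so old and new
// app versions can run side by side against the same data.

interface OldUserRow { full_name: string }
interface NewUserRow { first_name: string; last_name: string }
type TransitionalRow = OldUserRow & NewUserRow;

// Step-2 writer: populate both representations.
function writeUser(first: string, last: string): TransitionalRow {
  return {
    full_name: `${first} ${last}`, // old readers still work
    first_name: first,             // new readers use these
    last_name: last,
  };
}

// Step-2 reader: prefer the new columns, fall back to the old one.
function readName(row: Partial<TransitionalRow>): string {
  if (row.first_name !== undefined && row.last_name !== undefined) {
    return `${row.first_name} ${row.last_name}`;
  }
  return row.full_name ?? "";
}
```

Once a backfill has confirmed every row has the new columns, step 3 drops the fallback and the old column.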


Right, I was just pointing out the flaw in the parent commenter's statement. You can't do "code version is tied to your database version" - that's just not a thing.


> The post mentions "The open source Deno runtime shows how clean and productive a modern, batteries-included, programming environment can be" so it sounds like they expect you'll be running database backed apps and not only static sites.

It suggests you'll be running a datastore, not necessarily an SQL database (the most overrated technology in existence IMO, especially for web apps where essentially none of its strong points are relevant). Storing old data as-is and migrating on read is definitely doable, and you can keep backwards compatibility to day 1 that way relatively easily. I worked on a system much like the one from "An oral history of Bank Python" that did exactly that, and had been doing so on a large scale for around a decade. Having a better-integrated datastore that can present multiple views of the same data is another way to achieve that, if you want to keep the migration out of the "application" code.


More practically: for example, I write my web service(s) with event sourced architecture. Now any worker, or any service sending a command, can operate against the storage. Regardless of version.
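A minimal sketch of what such an event-sourced setup looks like (illustrative names, not any particular framework): state is never mutated in place, it is derived by folding over an append-only log, so any code version can rebuild its own view from the same log.

```typescript
// Event sourcing in miniature: events are immutable facts, and the
// current state is a pure fold over them.

type AccountEvent =
  | { type: "Deposited"; amount: number }
  | { type: "Withdrawn"; amount: number };

// Derive the current balance by replaying the event log.
function balance(events: AccountEvent[]): number {
  return events.reduce(
    (total, e) =>
      e.type === "Deposited" ? total + e.amount : total - e.amount,
    0,
  );
}
```

Because the log itself never changes shape retroactively, a deployment of last month's code can replay the same events and produce the view it knows how to produce.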

Other architectural patterns offer the same kinds of options. Clean architecture or hexagonal architecture limits the surface (coupling) between your versioned logic and the data so much that they can be versioned independently.

Alas, most web development uses some ORM as the center of the app. Active Record, for example: an architecture known for its immense and tight coupling to the database, with the major downside that business logic cannot evolve separately from the data store.

Your parent commenter is very likely running into this downside without realizing it, and projecting that experience onto all of web development.


> More practically

> event sourced architecture

You must be joking. Please tell me you're joking.


It really is a lot more practical. Most webapp devs using SQL databases spend a bunch of time engineering some notional failover into their app that's completely undermined by using a non-HA datastore, a bunch more time mapping data to and from square tables where it doesn't really fit, and end up with a system that's still last-write-wins at the user-facing level. And the ones where it isn't aren't because they used those SQL transactions, but because they wrote their own versioning layer on top.


I'm not joking. You are missing (or deliberately leaving out) context:

I'm not saying event sourcing is a more pragmatic (practical) solution. I'm giving an example from practice.


So what’s a good web app datastore in your opinion, if SQL is highly overrated?

> Storing old data as-is

What does this mean? Can you give an example?


> So what’s a good web app datastore in your opinion, if SQL is highly overrated?

I've used Cassandra very successfully, but there are plenty of options.

> > Storing old data as-is

> What does this mean? Can you give an example?

I mean not migrating the stored version of old data (note I'm not suggesting "no schema" or anything like that; what I'm opposing is the idea that you have a single global mutable version of the schema and mutate historical records in place to match that). E.g. if you started by storing users with a single nationality field and then realised that actually the same user can have multiple nationalities, while you'd change the DTO representation of a user that you read into, you wouldn't rewrite the on-disk representation of previously stored users.
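That nationality example can be sketched as a migrate-on-read function (type and function names here are illustrative): the stored record keeps whatever shape it was written in, and the read path lifts old shapes into the current DTO.

```typescript
// Migrate-on-read: on-disk records are never rewritten; old shapes are
// upgraded to the current DTO only when they are read.

interface UserV1 { name: string; nationality: string }     // original shape
interface UserV2 { name: string; nationalities: string[] } // current shape
type StoredUser = UserV1 | UserV2;

function toCurrentUser(stored: StoredUser): UserV2 {
  if ("nationalities" in stored) {
    return stored; // already the current shape
  }
  // Old on-disk shape: lift the single field into a list.
  return { name: stored.name, nationalities: [stored.nationality] };
}
```

Each new schema version adds one more branch here, which is how the "backwards compatible to day 1" property stays cheap.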


Cassandra is horribly inflexible: you cannot use WHERE conditions on any fields other than the PK (assuming you're not going to use ALLOW FILTERING for all your queries), and if your data isn't going to grow beyond baby size where ALLOW FILTERING is okay, then you probably don't need a whole distributed Java app running. Just use SQLite or newer local file-storage DBs, etc.

SQL has earned its place through its expressiveness and flexibility. You may hold your opinions, but you'll hardly find anyone who takes "SQL is overrated" seriously.


It means instead of migrating the database, you just leave it as it is. New code will expect to see both the old and new record formats.


Interesting to see how this will play out. It's an ambitious goal to consolidate the client-side and server-side JavaScript ecosystems, which are quite fragmented today. On the other hand, this may only increase fragmentation further by introducing another target to develop for (wait for transpilers that can automagically convert between Deno and Node code). I will always look at JavaScript as this problem kid that cannot get its shit together in life, perpetually chasing the romance of utopia.


There is already a tool for building an npm package from a Deno package, called dnt (Deno to Node). It's maintained by some Deno team members. Here's a blog post they wrote about porting "Oak" (a Deno HTTP server framework) to Node: https://deno.com/blog/dnt-oak

The nice thing about Deno conceptually is that it's much more similar to the browser platform than Node is. It uses ES modules only, has things like `fetch` built in by default, and generally follows browser standards for interfaces like Request, Response, etc. Instead of needing a complicated build process to make Node code work in the browser, now we have a complicated build process to make Deno/browser code work on Node. ¯\_(ツ)_/¯
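As an illustration of that shared surface, the web-standard Response class behaves the same in browsers, Deno, and Node 18+, so a helper like this is portable without a build step (assuming a runtime that exposes the Fetch API globals; the function name is made up):

```typescript
// A tiny portable helper built only on web-standard globals.
function jsonResponse(data: unknown): Response {
  return new Response(JSON.stringify(data), {
    headers: { "content-type": "application/json" },
  });
}
```

Per the Fetch spec, the default status of such a Response is 200, and the Headers object normalizes header names, in every conforming runtime.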


Not to take away from the rest of your message, but in case you want to get up to speed: `fetch` is now available in Node.js core since version 17 or something like that (I think it got included early this year).


I think this requires the `--experimental-fetch` command-line argument (per https://nodejs.org/tr/blog/release/v17.5.0/), so I don't think it qualifies as "by default".


That's an old blog post; the current release is 18.4.0, which supports `fetch` without a command-line argument. You do get this warning the first time it's invoked, though:

    (node:340760) ExperimentalWarning: The Fetch API is an experimental feature. This feature could change at any time
    (Use `node --trace-warnings ...` to show where the warning was created)
(some later blog posts that mention fetch being enabled-by-default and in the global scope are https://nodejs.org/en/blog/announcements/v18-release-announc... and https://nodejs.org/en/blog/release/v18.0.0/)


Yeah, but you could use the `undici` package for older Node versions, as it's what's being used to provide this experimental fetch in Node 17+.


Can the same build pipelines be reused for CF/Fastly workers? The DX for workers is horrible with multiple buggy Wrangler implementations and zero observability into deployed workers.


What about using an npm package in deno?


Has been possible (and surprisingly easy) for a while through CDNs like Skypack or esm.sh



So deno runs JS? I thought it ran exclusively typescript.


JavaScript is TypeScript: TS is a superset of JS, so valid JS is valid TS, although you'd almost certainly get lint errors in dev complaining about missing types.


Deno code is really close to browser code. If you have ESM code that runs in the browser and doesn't need access to the DOM then it's a good bet it'll run on Deno, and vice versa. They use web standards for most things, and anything proprietary is put on the Deno global object. For standards that need adapting to work outside the browser, they're working with Cloudflare and others on WinterCG, which is defining a common baseline for these non-browser runtimes.


There is a group that has formed, including Deno, to work together on maintaining standards + compatibility:

https://blog.cloudflare.com/introducing-the-wintercg/


I do think Deno is a step in the right direction, by reducing the need for transpiler steps (e.g. TS to JS) and embracing JS standards instead of building their own (which is what NodeJS did, but this was in a time when JS had no standard for dependencies or per-file isolation).


It's what I kind of love about it though - the JS ecosystem is Neverland.


It's a battle of ideals: you could have a high-entropy ecosystem that's constantly evolving and perhaps "appears" unstable, or an ecosystem that's "gotten its shit together" and probably trends toward stagnation and apathy.


Is Go a counterexample?


It is. As is Rust.

Grandparent confuses correlation with causation.



That's not a fair reply. One of the main benefits of Deno is that they're using existing standards. It's Node that's the outlier.


I knew that xkcd before I even clicked. They should update it and increase the number dramatically.


It could store a cookie and increment the number every time you see the comic


I think we're in the 100s by now, within the javascript ecosystem alone! :)


Deno overstates the problem it is intended to solve because its founders needed to justify achieving funding by being developer-famous.

Let's say a company were to adopt this tech over Node, well, it seems like it would be slightly better, but probably not much of a game-changer.

I'll leave it to y'all to talk about what tech is truly interesting as I don't want to seem ideological/biased, I just don't see how Deno is particularly notable.


It's usually not enough to be a bit better than what you're trying to displace, you have to be 10X better than the status quo to have any real impact. For example, SVN tried to be a better CVS, while Git came out of left field and destroyed the competition by being 10X better.

In that, Deno reminds me of the once-hyped Meteor.js. Meteor.js also thought that funding could be the answer, but it wasn't. They're both clever, great for demos, but not sufficiently so to overcome the sheer inertia of Node+NPM et al. When something truly 10X better arrives, it will be quite apparent. Just like how React spawned a new generation of frameworks, nothing has unseated it yet because all the competition is React-like and not 10X better.


This explanation resonates with me.

I first saw Ryan Dahl's talk about his Node.js regrets[0] during the early days of the pandemic. I thought I'd be eager to try Deno when it became more mature, but I still haven't tried it. I guess I'm not convinced that it's 10x better than Node.js.

[0] https://youtu.be/M3BM9TB-8yA


>Deno reminds me of the once-hyped Meteor.js

In substance they are similar in that they both want/ed tighter "vertical integration" of tools ("DX") in an opinionated way, with the money-making plan being convenient paid hosting. "Look how easy and fun it is to make powerful stuff; we hope you give us money to host it for you!" So far the only company that's had success with this pitch is Amazon of all people, by offering a staggering array of partially specialized EC2 instances behind an API.

For whatever reason, it does seem like devs really like their app authoring tools to be independent of their distribution mechanisms. Maybe AWS doesn't trigger this because it doesn't feel like hosting (even though it is). When a whole diverse community seems to act coherently in this way, it's probably a good idea to listen to them. (Alternatively, when a community acts coherently, it's ALSO a good idea to try something new.)

Deno is quite good. It may even be 10x node, if only because it avoids npm and has a much more thoughtful module system. It's a (much) better DX, too. Can they make money? Time will tell and, luckily, that's not my problem!


> In substance they are similar in that they both want/ed tighter "vertical integration" of tools ("DX") in an opinionated way, with the money-making plan being convenient paid hosting. "Look how easy and fun it is to make powerful stuff; we hope you give us money to host it for you!" So far the only company that's had success with this pitch is Amazon of all people, by offering a staggering array of partially specialized EC2 instances behind an API.

I think NextJS and Vercel probably fall into this bucket as well.


There is also the "closed loop DX" strat where the tools are hosted with your code. For example, Mike Bostock's ObservableHQ, Jupyter notebooks hosted for free (at least for now) on Google Drive, CodePen, and a bunch more, especially for server-side demos. And of course there's no way they will all become profitable companies. In fact I believe there is a "DX metagame" which governs the entire space, because devs can get fatigued by too much choice, and just say "fuck it, I'm coding my thing in notepad and pushing with ftp."


> For example, SVN tried to be a better CVS, while Git came out of left field and destroyed the competition by being 10X better.

I agree with the overall sentiment, but this is a bad example. SVN was pretty successful at replacing CVS, even though it was not 10x better.

Until, of course the new generation came in (git, mercurial...).


Git was developed by Linux's creator and was a requirement to take part in Linux development.

xkcd is full of examples of how much "better" it happens to be in reality.


> I just don't see how Deno is particularly notable.

I think the place they can really sell me is:

- source maps

- debugging

It’s an absolute PITA to get those two things working across a JavaScript stack. There are so many runtime contexts to deal with:

1) the browser

2) your API server

3) your frontend test env

4) your API server test env

5) browser and backend for your end to end/integration tests

6) pre-packaged code from secondary repos you are importing into all of the above

7) All of the above in CI

8) Special production builds of much of the above

It’s truly a nightmare. I’ve been trying to set up a fresh JavaScript full stack and it’s so much work. Every step of the way I need to take days off to do deep research into how to set this stuff up.

And it starts to make sense why no company I’ve ever worked at had all of that stuff working. You just deal with wrong stack traces and use console.log instead of a debugger in the places where it doesn’t work.

If Deno can provide all of those runtimes in an integrated way, with debugging and source maps working automatically that’s a total game changer.

And I’m honestly not sure who else could do it. Maybe like Next/Nuxt and all them? But do those projects handle build/packaging across multiple repos? I don’t think so…

Deno can nail that because they own packaging, and they can just skip the whole build/sourcemap step entirely and just distribute .ts files.


TFA goes: Try Deno Deploy - you will be surprised at the speed and simplicity.

Deno Deploy (and Cloudflare Workers) is a big deal. At least for a small tech shop like ours, I've found it useful for >50% of the solutions we have to implement. Its simplicity and cost-effectiveness remind me of S3 back when it launched: 5 APIs and pay-what-you-use billing. Sure, right now, Deno Deploy's capabilities are limited, but there's nothing stopping them from building a platform around it as they go along, and now they've got $21M reasons to keep at it.

I see parallels to Zeit/Vercel's meteoric rise (no pun intended) in Deno.


> its founders needed to justify achieving funding by being developer-famous

I agree here, but

> Let's say a company were to adopt this tech over Node, well, it seems like it would be slightly better, but probably not much of a game-changer.

If I'm starting a new project, Deno will be compelling if it means zero configuration with sufficiently sane defaults. It's like Rails vs Ruby. With Node.JS, you can pick and choose then configure things like TypeScript, but then you have to manage configuration files for TypeScript, linting, and all these things not relevant to the app code.


What makes node _fantastic_ is actually a huge ecosystem of packages, tooling, cloud providers, talent pool, shared knowledge. etc. There's a massive network effect here.

As far as I'm concerned, as of right now Deno is _not_ better than node for the reasons above. It has the potential to be, and I wish the authors massive success and root for them. But I'm not gonna start a project on Deno due to the lack of a proper ecosystem yet.


At this point I’d say node’s ecosystem of packages is an advantage, but its ecosystem of tooling is a disadvantage. It’s not node’s fault, but Javascript package management wasn’t a thing when it started out, and now we are still haunted by the legacy approaches of the past. I’ve wasted too many hours mucking with babel/webpack/tsconfig files just to get some stupid code to run.

If there’s any area of modern software development that could benefit from a clean slate like Deno is attempting, it’s JS development.


Maybe not the tech, but consider the vision.

Deno is working on the runtime, the cloud infrastructure, the DX, and a framework (Fresh). Nobody else is doing all of that, afaik. I think this is where the value lies.


Sounds like Dark to me: https://darklang.com/


> its founders needed to justify achieving funding by being developer-famous.

If I remember my Javascript-of-the-week-drama correctly, didn't Deno become a thing because a couple node/npm devs were upset that there was someone in the core node/npm team who did a bad thing?

Am I remembering this right?


Deno is very much the technically-justified next generation clean rewrite to get rid of NodeJS/NPM warts. Mr. Occam says it's doubtful there's both a technical and a social explanation.


The rise of JavaScript-only clouds intended as "de facto" solutions for web development scares the hell out of me. Monocultures are dangerous. At least in the early Node era, it sat alongside all the other flavors of server infra out there, with a near-infinite variety of languages, platforms, OSes. Now we're being told that the future of web development is…Deno? That's it? One tool? One language? One platform?

Not the web I intend to build and participate in, I can tell you that right now.


Deno is only possible because V8 is so hyper-optimized. If you think about it, V8 has to compile/interpret and then run JavaScript code on the fly after downloading it in the browser. That's how good V8 is. That puts JavaScript in a unique position to enable something like Deno.

So if there was a compiler/interpreter for another language that was close to being as good as V8, then something like this could exist for other languages.

Also, it looks like wasm works on Deno, so that gives some other languages.


There are many compiled languages that produce more performant code than V8. As you noted yourself, V8 has time constraints: it must be able to compile and run the code "fast enough" for a web page to load. This limits the extent of the optimizations it can perform.


And for the handful of routines where all-out performance is the bottleneck for your business those languages are the right tool for the job.

For everything else, there’s JavaScript.


If there was a reason to prefer JS, sure. But it's not really any better in other respects. The only thing it has going for it is that it's forced on you on the frontend.


IMO, that’s the only reason to use JavaScript: you can have a single language application.

With anything else you need a two language application.

Going from one to two is a MASSIVE increase in developer overhead.

Not sure that outweighs the downsides of JavaScript, but that’s how I’d dispute your “no reason” claim.


There are many areas in which JavaScript "performs" poorer than other options. Productivity, stability, maintainability, to name a few. Other languages and ecosystems are much better at that, than JavaScript.

Luckily projects like typescript solve those issues, or try to.


> if there was a compiler/interpreter for another language that was close to being as good as V8

Like LuaJIT that has been around longer than V8?


LuaJIT is fast; however, it lacks Node.js's (or Python's) standard library and package ecosystem.


So then you pull in LuaFileSystem or LuaSocket or whatever you need. I never understood this as a huge limitation. There are plenty of very mature cross-platform libraries for anything I'd grab out of node's standard libs. There are even libuv bindings (luv), which would pretty much feature-match Node.js completely. Add LuaJIT's FFI and you get all of C's ecosystem as well. Batteries are helpful, but it's not like Lua is completely devoid of helpful libraries.


It would be interesting to see a Deno competitor run on Lua!


PHP interpreted your code for every request until recently, and it was decently fast...


You can pretty easily argue Graal is better than V8 in many different ways. V8 is good at what it does but its ambitions are very limited. They only care about Chrome so anything Chrome doesn't need gets ignored. And JS is inherently difficult to compile to highly optimized code.


Go is as good as V8, no?

[ disclaimer: i was a V8 contributor 13 years ago, and it is good, but it is not the only game in town ]


Granted, I don't pay a lot of attention to it, but I thought Docker(files) ended up being the defacto standard, not Javascript?

I'm sure Deno will tell you the future is Deno. You don't have to believe them.


It may not be the case right now, but I'm pretty sure their platform will eventually run anything that compiles to WASM. This is just the first step.


History has proved the first-mover advantage. Many unreasonable quirks have been preserved throughout human evolution because they did not affect survival. The best may not be the most widely used; ecosystems that are large enough find it easier to survive to the end. The future belongs to the survivors. JS is like this.


It's less of a problem, I think, than you think it is. In the enterprise world, Java is / was a similar de-facto standard because they paid good money for Java enterprise servers, Oracle databases, trainings, frameworks, the works.

I'd rather work for a monoculture than a cowboy pick-whatever-you-want one; it's not a dichotomy, sure, but I'm wary of the latter for business continuity. You run into scaling and talent acquisition issues. Bus factor. Etc. If you as a company can say "We need a Deno developer" instead of "We need a full-stack Javascript/NodeJS/Scala/Java/Go/Rust/Erlang developer" (just to name a random array of languages), you can hire and train a lot better.


> Early in cloud computing, virtual machines were the compute abstraction […]

This is funny to me because serverless sounds to me like the return of PHP (etc) shared hosting. What's old is new again?


I changed my view on this: I hated how things were going full circle. But in reality, human civilization loves overdoing a particular direction and then reverting to the mean. You see this with the economy, moral fashions, everything.

I am an optimist in that I (have started to) believe that this slowly allows us to converge on better solutions for everything.

PHP was easy to set up, easy to host, easy to understand and easy to build stuff with, but it resulted in an unmaintainable mess over the long run.

Then Node was all of that, but JS was a better language than PHP. Then Node grew warts in the form of the clutter that is npm, then it grew complex build systems and unstable libraries.

Now there's Deno. It uses TypeScript by default, which is a surprisingly useful and productive language, it rethinks some things, it's much more secure by default, and now we're back at PHP-level ease of hosting with Deno Deploy.

We've ended up with an overall better solution and it only took us 20 years :)



Search, copy, paste


We didn't go back far enough, CGI is still where it's at.


We're there again with WASM and WAGI.

https://www.fermyon.com/blog/wasm-wasi-wagi


Are there any hosting operators providing wagi now?


I mean, CGI still lives on, just moved up the stack a bit, with WSGI and Rack.


perl did nothing wrong


whilst technically correct, perl hackers on the other hand...


PHF was both fun and educational


> things were going full circle

Ya. Like a spiral. Or a spring, if you want to get fancy (add z-axis for time).

Each revolution seems redundant, but can be exploring a slightly different problem space, or trying a solution with a new angle.


It's worse this time, in that the old shared hosting environments (e.g. Apache with FastCGI and suexec, or nginx with fpm) were open source, and there were countless shared hosts. The new generation of multi-tenant isolate-based JS runtimes are proprietary, and AFAIK one can count the number of hosts on one hand.


> The new generation of multi-tenant isolate-based JS runtimes are proprietary

That's being addressed by the new Web-interoperable Runtimes Community Group https://wintercg.org/, driven by Deno and others.


This strives to standardize the runtimes and standard library surface but not the runtime orchestration, networking or other aspects.


You can self-host Cloudflare Workers. https://github.com/cloudflare/miniflare


This is very much specifically a developer and testing tool, not a robust runtime that is at all akin to self-hosting Cloudflare Workers.


FWIW, we'll be open sourcing the real thing in a couple months (I'm actively working on the necessary refactoring as we speak...). https://twitter.com/KentonVarda/status/1523666343412654081


It’s amusing to see a whole generation of senior developers (“senior” as in 5+ years of experience) that haven’t experienced web development without React[0] and who proceed to reinvent PHP/ASP.NET/RoR.

[0] Used here as shorthand for “modern js driven development”


Such a tiring take...

Nobody is reinventing PHP or RoR development. What is happening is the community taking all our favorite parts of these stacks and combining and implementing them in ways that facilitate a dev experience that we could only have dreamed of back in the PHP/RoR days.

- Senior dev that started off in the PHP and then RoR days


Yep totally agree with this.

- Engineering Lead who got his start in PHP/ASP(pre .net) and loves the modern ecosystem despite its flaws


Back in RoR days? It's still here with new releases offering modern features. ASP.NET is even more cutting edge.


Another developer here who’s been around for the tech you mention.

The evolution has been 2 steps forward but 1 step back. It’s how many complex systems evolve and it’s fine. Just because you see that some problems/solutions resemble what you saw 10 years ago doesn’t mean nothing was improved along the way.


Personally, I've been using both PHP and Node/React extensively, and let me tell you there are very good reasons for the React/Node/JS ecosystem to exist, and they are decidedly not borne out of ignorance of the PHP ecosystem. Ryan Dahl (inventor of Node and Deno) said that he created (paraphrasing) "the PHP of this generation", and React was invented within probably the largest PHP shop to date.

PHP is an unsafe, slow-by-default (in its execution model - the language itself is fast), hard-to-use-well, clunky, limiting and bloated language that is kept together by duct tape and by the incredible effort and ingenuity of its open source community, which educates developers and improves/cleans up the language at full blast, unfortunately often by breaking backwards compatibility.

Whether JS (including Deno) is a good alternative or the right answer is up for debate. But people who try to mimic some of the benefits of PHP in the JS ecosystem are not doing so accidentally or because of lack of experience.


We could really do without this kind of snark here. Not only is it condescending, but it's also just plain wrong, either on purpose or due to the writer not being familiar with the subject at hand.


Been writing web stuff since before the dot-com bust. Preact + TypeScript is probably my favorite front-end programming combo yet. I liked VB6 and C# WinForms, though, so maybe I’m an invalid sample.


C# WinForms was super comfy for when you just wanted to throw together some simple GUI tool for internal use. As powerful as WPF and similar frameworks are they definitely lose to WinForms in ease of use.


WebForms was WinForms for the web and for all the criticism it got, it was amazingly productive. The new Blazor framework is finally approaching that speed of development again.


Re-inventing PHP but with less of the garbage that is PHP isn't a bad thing.


You seem to be suggesting the Deno team has <= 5 years experience. Else, who is recreating PHP/ASP.NET/RoR?


Anyone who has had to maintain a shared PHP server would say that's very different. I used PHP for years, and now use serverless functions (both Lambda and Deno). Serverless means you don't need to worry about anything except the code. It scales automatically. It never needs updating. It doesn't go down in the middle of the night. It can roll back deploys instantly. Sure, you could pay for managed hosting, and then pay more for scaling, and set up automation for deploys, but give me serverless functions any day of the week. They just work.
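To make the "just the code" point concrete, here's a minimal sketch of what a serverless function boils down to (illustrative shape only - the event/result types are made up, roughly mimicking a Lambda-style handler):

```typescript
// Hypothetical Lambda-style handler sketch: all state lives outside the
// function (a database, object storage), so the platform can create and
// destroy instances freely - which is what makes scaling and instant
// rollbacks possible without any server administration.
interface HandlerEvent {
  path: string;
  body?: string;
}

interface HandlerResult {
  statusCode: number;
  body: string;
}

async function handler(event: HandlerEvent): Promise<HandlerResult> {
  return {
    statusCode: 200,
    body: JSON.stringify({ echoed: event.path }),
  };
}
```

Everything that used to be "maintain the shared PHP server" - patching, scaling, restarts - happens outside this function, which is the whole appeal.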


What’s the SQL story? SQLite, or is PostgreSQL fast enough on startup?


You don't run the database itself there, you connect to it remotely. I've not had a chance to try them yet, but PlanetScale Portals look like a great solution there. https://planetscale.com/blog/introducing-planetscale-portals...


Ok, I was a little too literal-minded about the functions concept. Thank you


Virtual machines are still the abstraction. Each Lambda/Fargate (serverless Docker) runs in its own VM.


Lambda and most similar technologies use variants of containers, not VMs.

Outside HN that distinction might not be important. Here, pedantic correctness matters because the VM abstraction has been around for a long time, whereas containers were an enabling technology for serverless.

And containers - with the fast startup, security, and resource consumption guarantees they offer - are a big difference from the old days of shared PHP hosting.

And this is why "reinvention" with new technology is different.


Lambda and Fargate use the Firecracker VM

https://firecracker-microvm.github.io/

Firecracker is a virtual machine monitor (VMM) that uses the Linux Kernel-based Virtual Machine (KVM) to create and manage microVMs. Firecracker has a minimalist design. It excludes unnecessary devices and guest functionality to reduce the memory footprint and attack surface area of each microVM

“How AWS’s Firecracker Virtual Machines work”

https://m.youtube.com/watch?v=BIRv2FnHJAg

The Lambda service team always emphasizes the level of isolation that Firecracker VMs gives you that containers don’t.


And Google Cloud Run uses gVisor, which is also not a container.


But it's a "micro" VM now ..

rolls eyes


This question is going to make me sound like a jerk, but why do you want to write your back-end in JS? Deno looks like a great improvement over node.js, but I don't feel compelled to use it. It seems like people jumped to node based on some performance promises that didn't really pay off (IMO). And since then, we have newer options like Rust, Go, and Elixir as performant back-end options, and even older choices like Ruby and Python have continued to improve.

Seems like the standard arguments would be that developers already know JS, and that you can share code with the browser. I don't find these highly compelling.

EDIT: I haven't learned typescript yet, based on the replies, it seems like that could be a good reason to choose it. Seems like a nice middle-ground between typical scripting and compiled languages.


I've written up an at-scale production backend in Node.js and can very much stand by the decision to use Node over Elixir or Go (which I was considering at the time). I think fundamentally, the power of a JS-based backend is its pragmatism--it's not the best at most things, but it comes very close to it in so many categories that it's a safe option for a lot of use cases.

> It seems like people jumped to node based on some performance promises that didn't really pay off (IMO). And since then, we have newer options like Rust, Go, and Elixir as performant back-end options, and even older choices like Ruby and Python have continued to improve.

I'd agree that Node.js performance is generally not the best reason to be writing a backend in it since a static language will often yield better performance, but for the amount of dynamic power you get, it's extremely performant by default[1]. The next most performant dynamic language for I/O is, like you said, probably Erlang/Elixir, but V8 is generally understood to have better CPU-bound performance than BEAM.

[1]: https://benchmarksgame-team.pages.debian.net/benchmarksgame/...

> Seems like the standard arguments would be that developers already know JS, and that you can share code with the browser. I don't find these highly compelling.

I've found that developers already knowing JS is a very practical reason, if not ideological. I'm in a team with a lot of generalists who like to work full-stack, and being able to use the same mental models and syntax is a lot of cognitive load lifted off our shoulders. It also doubles the hiring pool of people who can hit the ground running on the backend, because now anyone who has experience with JS on the frontend can jump over to the backend with relatively little training.

The other key reason for a backend in JS is that the community is extremely large, which means that a lot of the troubleshooting I'd have to do in languages with smaller communities is done for me by someone who was kind enough to post a workaround online. This saves me a lot of time and energy, as does the plethora of packages.


And the performance argument isn't even just about CPU time, right? The fact that JS is heavily event-friendly, and all of its IO APIs are non-blocking by default, gives it an automatic advantage over busy-waiting languages like Python, and also languages where concurrency means writing threads manually. If your web server spends most of its time on IO (network, DB, file system), as many do, JS acts as a lightweight task-delegator to a highly parallel and performant native runtime.

I haven't worked on a large-scale JS back-end myself, but this is the case I've heard others make


You have to work pretty hard to make Python (or any other language with reasonable web frameworks) busy-wait!

Blocking is NOT the same as busy wait.
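The distinction, sketched in JS terms (true blocking is what a synchronous syscall does - the thread parks in the kernel without burning CPU, which JS makes hard to demonstrate, so this contrasts busy-waiting with yielding to the event loop):

```typescript
// Busy-wait: spins on the CPU and starves the event loop for the
// whole duration - a core is wasted doing nothing useful.
function busyWaitMs(ms: number): void {
  const end = Date.now() + ms;
  while (Date.now() < end) {
    // spin
  }
}

// Non-blocking wait: schedules a timer and yields the thread, so other
// callbacks and promises can run in the meantime.
function sleep(ms: number): Promise<void> {
  return new Promise((resolve) => setTimeout(resolve, ms));
}
```

Blocking IO sits between the two: the blocked thread is stuck, but the OS scheduler can run other threads, so no CPU is burned.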


Ah, I didn't realize there was a distinction

Still, despite that it seems like there's a big advantage to be had


Is there though?

What's the actual difference between (JS):

  let x = await doSomeLongProcess();
  console.log("done: " + x);
vs (Python)

  x = await do_some_long_process()
  print("done: " + x)


Sure, if you use Python's async feature. But my understanding is that it's relatively uncommon; blocking IO is still the norm, right? I for one have worked in or around a couple of nontrivial Python servers, and I've never once seen an await statement. My understanding (correct me if I'm wrong) is that this comes down to it being newer, and having worse ecosystem support (more "synchronous-colored" APIs, less battle-tested frameworks, etc). It's not a first-class citizen like it is in the JS ecosystem [1]

[1] Technically JavaScript's async/await syntax came later, but it's just sugar over Promises which have been around for a much longer time, and those are built atop the event loop, which has been core to the language since day 1


In non-async Python, generally the thing that blocks is a thread -- something Javascript doesn't even have! A different thread will happily run in the meanwhile.


Right, but some other JavaScript on the same thread will run while a different piece of JavaScript is awaiting. That's why JavaScript can get away with not having threads. Also- any number of background threads will be running at any given time to read data off of disk, load and process network requests, load data from a DB, delegate commands to system processes, etc, in true parallel with the JS code. When one finishes, it'll put an event on the event loop and JS will pick it up when it gets the chance.
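A small sketch of that interleaving (plain promises, nothing else): while one async function is awaiting, the other runs on the same thread.

```typescript
const log: string[] = [];

const sleep = (ms: number) => new Promise<void>((r) => setTimeout(r, ms));

async function slow(): Promise<void> {
  log.push("slow: start");
  await sleep(20); // suspends here; the single JS thread is now free
  log.push("slow: done");
}

async function fast(): Promise<void> {
  log.push("fast: ran during slow's await");
}

// Both "run at once" on one thread: fast finishes inside slow's await,
// then the timer callback resumes slow via the event loop.
const done = Promise.all([slow(), fast()]);
```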


Sharing code with the browser can be really sweet. At a past job we used TypeScript, and I had whipped up some shared types that our API was forced to conform to, and that automatically generated a strongly typed API client for the frontend to use. Sure, you can do that with some other protocol or server like GraphQL/Hasura/writing up a JSON schema, but it was pretty sweet that A) any of our engineers could figure out how to make the endpoints they needed to implement a new feature, B) these types could generally be inferred from the actual API implementation without having to explicitly write types out which completely eliminated bugs around API misuse with minimal extra code, and C) all of the code we wrote followed the same linters, formatters, idioms, and utilities (fetching, logging, error handling, and so on). There are projects out now that wrap up a lot of what I had done like tRPC [1], as a testament to the value of the shared abstractions.

[1] https://trpc.io/
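A hand-rolled sketch of the shared-types idea (not the tRPC API; all names invented): both sides use the same interface, so drift between server and client becomes a compile error instead of a runtime bug.

```typescript
// Shared types - in a real project these would live in a module that
// both the server and the frontend import.
interface CreateUserRequest {
  name: string;
  email: string;
}

interface CreateUserResponse {
  id: number;
  name: string;
}

// The server handler is forced to conform to the shared shape...
function createUser(req: CreateUserRequest): CreateUserResponse {
  return { id: 1, name: req.name };
}

// ...and the client wrapper gets the same compile-time guarantees.
// (A real client would fetch() the endpoint; calling the handler
// directly keeps the sketch self-contained.)
function callCreateUser(req: CreateUserRequest): CreateUserResponse {
  return createUser(req);
}
```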


I think developers knowing JS/TS is a highly compelling argument given the fact that most companies are struggling to find any devs at all. I would argue finding Rust/Go/Elixir developers is going to be _a lot_ harder for a lot of companies than finding JS developers.

Also, sufficiently proficient front-end devs then have an easy "in" to back-end development, which has turned out to be a very good thing at 5 different companies I have worked for in the last 4 years.

In addition, not every backend or service needs much performance. I have written quite a few services that get hit maybe 3 or 4 times a minute at best, and Node was great for that. Actually, I have yet to work anywhere where performance was limited by the language choice rather than by architectural decisions or ill-considered use of databases and data structures. I am not saying this does not exist, but it does not mirror my experience with the majority of companies at all.

I'd also say that despite npm and dependency hell being a real problem there is a vast ecosystem of packages out there. I know python has that going for it too. But Elixir is much less developed in that regard - simply by being less popular.


- Javascript's async-everything is rather unique.

- Synchronous execution + async-everything is a great combo.

- First class Promise abstraction.

- Good general purpose language. Its warts are mostly smoothed over. It's not like Javascript from 20 years ago.

I like writing Javascript. I certainly prefer it over the other popular dynamically typed languages. (I don't see what Ruby or Python offer me over JS as general languages)

Rust and Go make different trade-offs. I would never default to either for general purpose projects. Meanwhile JS is my default for networked code.


>Python, Ruby

I like JS too, but both Python and Ruby are much better languages. It's just that JS won distribution, which means it won adoption, which means you have 10MM devs smoothing its rough edges. Just to take one language feature, consider Python's list and dict comprehensions. JS has nothing like that - but it also has 10 high-quality libraries that do that and more, like lodash. Consider also the ridiculousness around JavaScript's "OOP" features, like the use of "this", or its behavior around truthiness, or (arguably) the misfeature of prototypal inheritance - all of which can be forgiven because functions are, after all, first class in JavaScript, which means you can, with enough effort, fix all the things.
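For what it's worth, filter/map gets you most of the way there without a library - e.g. Python's `[x*x for x in xs if x % 2 == 0]`:

```typescript
const xs = [1, 2, 3, 4, 5, 6];

// Python: [x*x for x in xs if x % 2 == 0]
const evenSquares = xs.filter((x) => x % 2 === 0).map((x) => x * x); // [4, 16, 36]

// Python dict comprehension: {str(x): x*x for x in xs}
const squares = Object.fromEntries(xs.map((x) => [String(x), x * x]));
```

It's noticeably chattier than the Python, which I think is the point being made.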

In truth, if you want to understand absolutely everything about your runtime and have some feeling of safety in a sandbox, then the JVM is the best. The VM spec is great, as is the lang spec. It's very fast. The tools are mature. Lots of languages are written for it. Its behavior under load is well understood. Like PHP, there is a lot of bad code written for it, which gives it a bad name, but it's still a real gem.


Well, they asked why someone would use JS if there were other options, and I'm saying that I don't think Ruby nor Python are better languages than JS. Part of that is due to their bolt-on async vs JS async everything, part of that is due to TS, but there's more.

For example, you bring up Python's collection comprehensions, but I could easily point out Python's gimped lambda support and thus poor FP abstractions and the need for a special comprehension abstraction.

I don't think it's worth arguing about. But I had to chime in before someone thinks there really are no objective nor subjective reasons why someone might prefer Javascript. I do.

> then [X] is the best.

Btw, there is no best. There are only trade-offs.

One of my points is that you can't say something is the best. The only thing you can do is enumerate the trade-offs that made sense for you, and much of that is only personal/aesthetic.


>you can't say something is the best.

You can. And you did. When you say something is your favorite, this is like saying it is the best, all things being equal, in your view. For network connected server processes, I say the jvm is the best runtime. Redbean, interestingly enough, may take that crown, but I'm only now playing with it, but I love its tiny simplicity. In some ways this is like love - do you shy away from saying that your woman is the best? I hope not. And I hope no-one holds your feet to the fire if you do.


Arguing that there is no objective way to measure "the best" and stating an opinion as an opinion are not the same thing.


> the misfeature of prototypal inheritance

Prototypal inheritance isn't a misfeature, it's an interesting and powerful language design choice used by several different languages [1]

It may not be your cup of tea, but having written code in most mainstream languages since the 80's, I can tell you I definitely prefer it to alternatives like class based inheritance.

Runtime mixins are one of the most powerful composable concepts in any language, and this is a breeze with JS. Take a look at the hoops C# had to jump through to come up with something similar but less powerful, as an example. Or the nightmare of multiple inheritance in C++.

[1] https://en.m.wikipedia.org/wiki/Prototype-based_programming
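A quick sketch of a runtime mixin in JS/TS (illustrative; the cast at the end is only there to keep TypeScript happy about the dynamically added method):

```typescript
// A bag of behavior, defined independently of any class hierarchy.
const serializable = {
  toJSONString(this: object): string {
    return JSON.stringify(this);
  },
};

class Dog {
  constructor(public name: string) {}
}

// Graft the behavior onto the prototype at runtime; every existing and
// future Dog instance picks it up immediately.
Object.assign(Dog.prototype, serializable);

const rex = new Dog("Rex") as Dog & typeof serializable;
```

No interfaces to declare up front, no diamond problem - the prototype chain just gains a method.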


Something is wrong in this thread. People have misquoted me, twice. In this case you cut out "(arguably)" implying my position is stronger than it is. Elsewhere, they cut out the conditional in my endorsement of the jvm, implying my statement was absolute, and proceeded to lecture me on trade-offs.

Not sure what it is, but I don't like it and won't participate.


Well, you saw my personal opinion about why I prefer Javascript and proceeded to lecture me about things that you think are better. ;)


.NET / C# have all of the above, and had them for longer than JS did.

(promise abstraction since 2010, async/await since 2012)


I like TypeScript and would like to write backends in TypeScript.

I'm coming around on Elixir.

I never want to write Go that interacts with a database again.

Rust seems complicated. I don't think I'm smart or committed enough to get anywhere with rust.


> Rust seems complicated. I don't think I'm smart or committed enough to get anywhere with rust.

Rust is still a bit rough for doing both frontend and backend development, but IMHO it will catch on in a year or two. For the backend, Rust (with actix and axum) already looks pretty similar to Express, so you will feel at home there, but for the frontend there's still no killer framework nor a settled architecture that fits the Rust model well.


>I never want to write Go that interacts with a database again.

I LOVE Go ! I think it's the bee's knees until you have to write DB code ! I'm hoping "go generics" will take care of some of the table-mapping/orms/multiple-types etc madness.

But yea, I still love go :)


> I never want to write Go that interacts with a database again.

Why? I’ve been using sqlx and it seems fine.


Not OP, but one answer I've often heard and experienced myself is that statically typed datastructures feel painful when you come from dynamic languages like Python, Ruby or JavaScript.


sqlc.dev makes it way easier to deal with the DB


I hope so. Last time I checked, about 6 months ago, I spent a day trying to make the most basic of queries work with MySQL 8; it was a complete disaster.


Sqlx is the way.


yeah, i'm using pgx and it rocks


I use Deno from time to time, and in my opinion it's even more compelling as a scripting language than as a back-end language. Being able to reference libraries by URL, together with the Go-like standard library and TypeScript's async support, makes it a breeze to develop simple one-use CLI tools!


It uses TypeScript predominantly, which compiles down to JavaScript. (I think you can code in plain JavaScript directly, but I don't think anyone would want to do that.)

The benefits are in my mind not that you can share code directly but more that the std lib is the same (mostly) between backend and frontend. This means no mental context switching for developers.

The other bonus is that typescript is just such a lovely thing to code in. Expressive, universal and ubiquitous, performant - modern JS is a joy to use (...if you don't need to use NPM... If you need to use NPM at all then it is a shit show)


I've written backends using C#, PHP back in the day, a little bit of Go, Java, and naked SQL, and I prefer the TypeScript language and npm ecosystem, hands down. Maybe not for writing high-load distributed systems that are critical on RAM & CPU, but for IO-heavy business logic, where managing requirements and their changes is the hardest challenge, it's a godsend.


You're kinda stuck writing JS on the frontend. The (only) advantage of also using it on the backend is that you can share code. In some cases, I could see that being compelling.

FWIW, I think Typescript would make an even stronger case. And I think(?) that's one thing Deno is trying to do.


Sharing code is great; I've moved processing between backend and front-end, and even shared the same code in both. It makes SSR+SPA possible using the same code. TypeScript is a must for me, but I don't need Deno for that. I haven't used it beyond scripting, but for the time being I don't see a major benefit over node or ts-node other than easier/zero project setup.


I may be the odd one out, but for me node.js (and hopefully in the future Deno) isn't about the backend, but instead it's a pretty nice scripting environment for tooling and automation. I've been using mainly Python for this in the past, but got burned out quite a bit by the python2=>3 transition.

(but still: don't underestimate the raw performance of V8, for many things it's definitely good enough)


I completely understand what you're saying, but Python is a lot better now. You don't have to worry about Python 2 anymore and the tooling is a lot better as well (Black, Flake8, Poetry are all really nice).


Hard to put my finger on a single reason for it, but I prefer it for hobby projects.

Part of the reason is I am already using node for tooling for the front end.

It is less powerful than, say, C# for multitasking - but for hobby projects that rarely matters.

Typescript makes a big difference.

There is a lot of community support: stackoverflow, npm packages etc. Every SDK has a JS version.

It is fast enough for my needs.

NextJS is another good reason.


To me it's very powerful to have the same developers do both front end and back end. With modern frameworks it's probably easier for front-end devs to do back-end work than for Rust/Go/Elixir devs to dirty themselves with JS. So even though TS is suboptimal, it allows one language, one set of tools, and one pool of developers.


+ People get paid, hopefully a few even get big $.

- Investors want unicorn returns.

Good luck!


What is their actual business model? I know there's Deno Deploy, and presumably they offer some kind of enterprise-support type deal. But the first is easily copied by competitors (in an already-crowded market), and the other isn't super lucrative. Is there anything else?

I love Deno and want it to succeed, but it doesn't feel like a unicorn, and I'm worried about it being expected to become one


Considering that one of those competitors is Netlify, then either the incumbent (i.e. Netlify) is in a great position to benefit from a succeeding Deno, or Deno Deploy does succeed and Netlify can profit from that. A bunch of other participants (GitHub/Nat Friedman, Automattic) seem like they could benefit from a thriving ecosystem around Deno as well, even if Deno Deploy isn't particularly successful.


Netlify Edge Functions use Deno Deploy, as do Supabase Edge Functions. I'd imagine that's a significant part of their revenue. (I work at Netlify, but have no insight into the financials of this arrangement)


Open source ecosystems that become big enough often end up either producing unicorns, or failing that, producing unicorn-level returns for existing companies.

There’s no guarantee that the creator of the ecosystem will be one of the unicorns, but they’re as good a bet as any other company to pull it off. VCs aren’t looking for 100% certainty, just a plausible path.


https://deno.com/deploy/pricing is their business model.

They charge you for CPU and bandwidth more than they pay for it.


Right, my point is that any number of cloud providers could swoop in and offer the same product for probably cheaper


This is generally true about every company though. Someone can always swoop in, and huge incumbents can always do it cheaper, at least in theory.


Warren Buffett only invests in companies with what he calls a "moat" that make it hard for any other company to offer a similar product or service.

Apple's moat, for example, is its institutional knowledge of design and maybe its relationships with its suppliers, e.g., their deal with TSMC to lock up most of TSMC's capacity on the 5-nm node. Another moat Apple has is consumers who do not like engaging in sysadmin battles with their consumer electronics. One of my friends, a 79-year-old woman, for example, told me once that she wouldn't consider buying a computer from any company except Apple. (I could probably induce her to change her mind on that, but it would take persistence and patient explanation on my part, and also I'm probably the only person who could change it.) The main way Apple has maintained (for 38 years!) its dominance in institutional knowledge of design is probably the fact that most of the young talented designers want to work for Apple (partly because most of the people willing to pay extra for good design in consumer electronics buy from Apple).

Google Search's moat seems to be institutional knowledge on how to build a good search engine and access to data on what people search for and which search results they click. Strengthening the latter moat is the reason they're so interested in having all traffic from the consumer's browser encrypted: namely, so that the consumer's ISP cannot sell data on the consumer's interactions with Google's search engine to any competing search engines. Another moat Google has is that most consumers will not take the trouble to change the default choice of the search engine used when the consumer types a non-URL into the location bar of the browser. Strengthening that moat explains Google's willingness to pay Apple and Mozilla to be the default search engine and Google's interest in giving away Chrome and Android.

Microsoft's moat: practically every organization uses computers and needs employees who know how to use those computers. They mostly choose Windows and Office because that is what most prospective employees know. In turn, when a young person improves his or her attractiveness to prospective employers by learning computer skills, they usually choose to learn Windows and Office because that is what is running on the computers of prospective employers.


The three companies alone do have these moats but are outliers of outliers. Stuff we haven’t seen much of before. Combined they are worth $5.5T. All US public companies are worth $40T. The Nasdaq that they are all in has a total market cap of all companies of under $20T. Besides the two mentioned exchanges, Shanghai, and Euronext, these three companies alone might come up 5th in total market cap. They are right alongside Tokyo and Shenzen. Definitely dwarf Hong Kong and LSE.

The other big moats off the top of my head aren’t as strong but ASML, TSMC (for now), Tencent, Baidu, Yandex, Kakao Daum, are all worthy but still peanuts comparatively. I’m sure some Indian companies too.


Facebook has a sizable moat: it is easy for me to join a new social network, but it is kind of pointless unless everyone I want to communicate with also joins, and it is hard to motivate them to do so. The fact that Facebook was not the first social network is evidence that that kind of moat can be crossed, but still it would be hard. And if a company starts getting traction, FB will probably offer to acquire it like they did Instagram and WhatsApp (which never was a social network, but might've become one by gradually adding features if it hadn't been acquired).

But I agree with your point: it is really hard for a new company to create a large revenue stream with a strong moat around it.


Also https://deno.com/deploy/subhosting

(I'd bet this will be, or already is, the more profitable side of the business.)


I feel like they have a better business plan than NPM did at least. They'll probably get acquired by one of the big cloud players.


I guess that makes some sense. Still feels like it would be a weird thing to aim for, but maybe that's the state of the industry


Pecunia non olet. If the goal is to make money, an exit via acquisition is one potential way to make money


I think they should get a reasonable return. Deno has a lot of hype and Node has a lot of cultural weight, as does Ryan himself. Not sure about unicorns though! It does not seem like a unicorn product, but I'm not a VC


You don’t get $21M from investors for nothing. They will force the square peg through the unicorn-shaped hole - or kill the company trying


> You don’t get $21M from investors for nothing.

Do you think that investors see every company that isn't a unicorn as "nothing"? I think you're living in a fairytale


They do, but that's not a bad thing. Sequoia invests in companies they think could be worth multiple billions.

If Deno reaches 30% of the impact of Node, their company could be worth billions.

In the meantime, we get a better open source runtime because they have money to build it out.


a startup could be worth billions only based on the fact that investors continue throwing money at it

there are lots of companies, which are technically "unicorns", but haven't even got any revenue yet (Rivian and Nikola come to mind)


The VC model is not built on 10x exits. Later rounds, sure, but series A is definitely looking for a larger multiplier.


If you're a venture capitalist, this is in fact the model.


If they’re expecting a unicorn why are they funding Deno?


VCs see a tiny chance that it can be a 100X return. So raising $20M at say a $50-100M valuation now, that means betting on a $5B+ exit. In return for getting money to take a shot at that level of artificially-fueled revenue growth, the founders are willing to cede control of the community to professional financiers. As others wrote, that's the VC model: make a ton of these bets, and as long as 2-4 turn out, the lottery winners pay for the bankruptcies.

Given every big company uses JS, and thus many SaaS VCs have JS in their annual set of thesis bets, it's reasonable that Deno got picked by a top group given their team & growth. Same story with npm, netlify, etc.

I wish the team luck in hitting significant revenue in the next 9-12mo, as that will determine a lot of what happens to the community. A lot of pressure & culture change to work through!


Maybe they see a chance of it but they cannot seriously expect it

EDIT: the comment I responded to was completely rewritten and replaced with a different one. Please don’t do this


The main baseline expectations are that, out of all of Sequoia's bets, some will become $5B+ co's, and that for the next 18mo Deno can grow headcount with salaries unrelated to sustainability

Some people involved might expect more, and I'm sure hope for more, but I'm a language designer & CEO, not a mind reader :)


The color changing on this page gives me a migraine:

https://deno.com/deploy

I like Deno in principle, but I'd love to see how Slack, Github and Netlify are using it.


If you have "prefers-reduced-motion" set to true in your browser / OS, the animation will stop :)
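For reference, pages can also check the preference from script and skip starting the animation entirely - a sketch, guarded so it's a safe no-op outside a browser:

```typescript
// Returns true only in a browser where the user has asked for reduced
// motion; anywhere else (e.g. Node) it safely returns false.
function prefersReducedMotion(): boolean {
  const w = (globalThis as any).window;
  return Boolean(w?.matchMedia?.("(prefers-reduced-motion: reduce)")?.matches);
}

// A page could gate its background-color animation on this:
const shouldAnimate = !prefersReducedMotion();
```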


Unfortunately browsers don't really give you an easy way to control this. Changing system-wide settings just because one website is bad is quite the ask.


Funny thing is after looking at it for a bit, I'd _swear_ HN is changing colors now.


Same, and a bit of a persistent queasy feeling. I think it’s something to do with how it’s slightly off white but pulsing, so your brain tries to cancel it out.

It reminds me of the feeling after surfing when you lie in bed with your eyes closed and still feel the waves going up and down.


Weird... it's having the same effect on me; something about slowly fading to a white background makes me flinch. Is this a known phenomenon?


I suspect that it basically confuses your brain because it tries to adjust the "white balance" so that the background becomes normal white. But the background is just saturated enough and changing so quickly that it just can't follow.


As a second data point, I added the same effect to my blog back in, I don’t know, 2002 and all my friends complained about getting a headache as well, so I turned it off

For some reason it never had that same negative effect on me


> I like Deno in principle, but I'd love to see how Slack, Github and Netlify are using it.

https://docs.netlify.com/netlify-labs/experimental-features/...

All this dynamic processing happens in a secure runtime based on Deno directly from the worldwide network edge location closest to each user.


> I like Deno in principle, but I'd love to see how Slack, Github and Netlify are using it.

Slack: "Run on Slack"

Netlify: "Netlify Edge Functions"

They're both listed in the Showcase: https://deno.land/showcase


Nice, thank you!


Not as dramatic as you but it does feel a bit weird looking at it.


To a Denoid, that changing color shit is lit!


[flagged]


Yeah I love making prospective customers feel pain too.

/s


The world doesn’t exist to cater to your weird specific thing that gives you a headache


The rank of the root comment says others are displeased as well.


As a matter of fact, it's exactly the opposite.

We have web accessibility standards and companies get sued for not following them.


That is terrible


I don't know if marketing was involved with using the term 'isolate' or not but if they are isolates as described by companies such as Cloudflare and Google, it might help to speak a bit more about the actual implementation at the infrastructure level.

Isolates are a really interesting approach to dealing with the lack of threads, since most scripting languages are inherently single-thread/single-process. If you have a 2000 line Ruby class named 'Dog' you can easily overwrite it with the number 42. This is awesome on one hand, but it makes scaling the VM through threads too difficult: threads share the heap, so you'd have to put mutexes on everything, removing any performance gain you would normally have gotten. Instead the end user has to pre-fork a ton of app VMs, each with its own large memory consumption, its own network connections, etc., and stick them behind a load balancer, which is not ideal compared to their compiled, statically typed cousins. Frankly I just don't see the future allowing these languages as we continuously march towards larger and larger core count systems.

I'd personally like to see more of the scripting languages adopt this construct, as it addresses a really hard problem these types of languages have to deal with and makes scaling them a lot easier. To this note: if you are working on this in any of the scripting languages, please let me know, because it's something I'd like to help push forward.

Having said that, they should never be considered as a multi-tenant (N customers) isolation 'security' barrier. They are there for scaling not security.


> Having said that, they should never be considered as a multi-tenant (N customers) isolation 'security' barrier. They are there for scaling not security.

V8 isolates are absolutely designed for security. For 10 years V8 was the only security barrier between sites in Chrome. Now they have retrofitted strict site isolation for defense-in-depth, but that doesn't mean the V8 team suddenly doesn't care about security. Chrome wants both layers to be secure and will pay a bug bounty if you break either one.


I’ve never been interested in JS as I mostly work in C++ but the ease of install/use of Deno over Node made me actually want to try it. It’s nice to have access to a bunch of web tech with a single binary. Very excited to see where this project goes!


> Cold start (time to first response, ms) O(100) O(1000) O(10000)

Ugh, that's not how big O notation works.


On the other hand, it is how it is often used informally in speech. I would probably have written eg ~100 instead, but I easily understood what they meant.


Yes it is.

Big-O means given an arbitrary function of some complexity, it is definitely bounded by this other function from the top, i.e. that other function is always larger than our arbitrary function.

f(n) \in O(n^2) means n^2 (ignoring constant factors) is always larger than f(n). If you have no polynomial elements in your O(g), then you only state the constant factor. Like in O(1).

So saying Cold_start(service) \in O(100 ms) is exactly the same as saying the cold start will always be below 100 ms. It makes sense to not say they are all O(1), although strictly they are, as the interesting bit is the difference in magnitude of the constant.
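For reference, the textbook definition being debated:

```latex
f \in O(g) \iff \exists\, C > 0,\ n_0 \ \text{such that}\ \forall n \ge n_0 : |f(n)| \le C \cdot g(n)
```

Since the constant C is arbitrary, O(100) and O(1) formally denote the same set of functions; the disagreement in this thread is whether the table's "O" is that formal notation or informal shorthand for "order of magnitude".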


I played with an early version of Deno a few years back and it was already way more comfortable to use than node. It's a real counterexample to second-system syndrome.

The only reason I didn't continue was a lack of ARM support.


Surprised this is the case because I use their Rust v8 bindings on an ARM laptop almost every day. Have you tried building it from source?


It was a few years back and I was mainly playing with a few early projects. I was very productive, but wanted to target my Raspberry Pi and it just wasn't there. I ended up going 100% rust instead.


There are still no RPi builds as far as I am aware, which is a shame as there are now Apple silicon builds, so not sure what the holdup is. I do wonder if there are more Raspberry Pis out there than M1/M2 Macs :)

Someone is doing the builds here - been using them and seem ok: https://github.com/LukeChannings/deno-arm64


The M1 builds are made manually by us, as GitHub doesn't provide any runners. The same problem applies for Linux ARM; however, we don't have any ARM Linux systems that any of the members use, and building on a Pi is a no-go, as that can take 70+ minutes, which would make our release cycle a lot more complicated. Either way, we are investigating potential possibilities besides waiting for GitHub to have runners available.


Have you tried building in an RPi virtual machine on a Mac? I have no idea what the performance would look like, but it'd be easy enough to try, especially since you are already managing your own Mac builders.

I want to say I followed these instructions for doing so recently:

https://gist.github.com/plembo/c4920016312f058209f5765cb9a3a...


I had a very different experience. I very much wanted to make it work, but trying to use it with existing npm packages and import maps was incredibly painful, and I ultimately switched over to node for that project.


Haven't tried import maps, but so long as the npm package uses ES Modules and doesn't rely on Node specific APIs, you should be good to go.


What made it "way more comfortable" for you?


It will be interesting to see if Deno can provide the level of productivity improvement node.js delivered (was supposed to deliver? continues to deliver?).

It's one thing for it to claim supremacy over node, but can it attract the TJ Holowaychuk's of the world and truly generate a full ecosystem.


> but can it attract the TJ Holowaychuk's of the world and truly generate a full ecosystem

Something like this? https://twitter.com/dassurma/status/1407048553768402949


Interesting raise. I leave the front marketing page of Deno Deploy open in my browser for a few moments and I get the ubiquitous "This webpage is using significant energy. Closing it may improve the responsiveness of your Mac."

Says it all about the state of the JavaScript ecosystem really.


Unfortunately, webdev does not have a strong performance culture. V8 & Blink/Skia can be crazy fast in the right hands, but it does take some effort and profiling to eke it out.


Deno deploy seems cool and all, but I haven't seen any great rationale for using their service over say Cloudflare Workers.


As someone who dabbled with both workers and Deno Deploy: Deno Deploy actually works.

I was very excited about the idea of Workers but their tooling is (or at least was when I last tried it) abysmal, buggy and hard to understand. To this day I don't know how to set up a basic dev vs. production split in wrangler.toml. Local debugging was buggy for naked domains (I even found a GitHub issue for it that languished for months with no bugfix or clear explanation/workaround) and very slow.

Great idea killed by poor dev tooling.

Deno deploy is the opposite of that: deploys are instant and it's obvious how to deploy. You can develop and test locally.

Cloudflare released wrangler v2 (which dropped rust for node) and maybe it's better now but the one experience I had with wrangler v2 was trying to deploy a small static website (pages) and it failed due to their backend throwing 50x errors.


Agree regarding tooling.

So much so that I wrote Denoflare (https://denoflare.dev/) to make writing Cloudflare Workers using standard Deno a breeze: no wrangler, toml, webpack, npm etc required


Yeah, unless I'm missing something, isn't this just a standalone company offering roughly Workers?


This is an open source JavaScript runtime with a hosted business model. Deno is great. Deno Deploy is a reasonable way for them to make money.


Cloudflare Workers is on my radar because of an ecosystem of other "edge" products that are in the works or already released. That's what Deno would need to catch up on to be on the table for discussion for me. Otherwise it remains a toy I use in my spare time (which I do enjoy at the moment).


Meanwhile, Cloudflare says they are open sourcing Cloudflare Workers runtime: https://twitter.com/KentonVarda/status/1523666343412654081


You could say the same about AWS, Azure, and GCP.

Competition is good.


Competition is great - sure, but I want to know what Deno gives me over its competitors.


Workers are not meant as a generalist runtime.


Neither is Deno Deploy, really. The limits on CPU time etc are very similar, with Cloudflare being more flexible on CPU and Deno Deploy being more flexible on bundle size.

https://deno.com/deploy/docs/pricing-and-limits https://developers.cloudflare.com/workers/platform/limits/


Yes Deno Deploy is for HTTP (just like CF Workers) but my point is that Deno is more of a generalist runtime compared to Workers and has a much broader appeal than just Deno Deploy.

Here's an example. Deno is like any other backend runtime and has regular DB clients. CF Workers do not. If you want to use Workers with PG, the CF docs point you to Supabase which provides REST over PG using PostgREST.

https://developers.cloudflare.com/workers/tutorials/postgres...


Yes, Cloudflare Workers originally couldn't make arbitrary outgoing TCP connections. Cloudflare wrote an adapter to transport Postgres over WebSockets (https://blog.cloudflare.com/relational-database-connectors/) and then added support for TCP connections (https://blog.cloudflare.com/introducing-socket-workers/).

Deno Deploy doesn't have much of a moat, and Cloudflare is better funded. Both are promising to be open source & self hostable (https://twitter.com/KentonVarda/status/1523666343412654081). We shall see.


I'm arguing about Deno (the whole thing) vs Workers, not Deno Deploy vs Workers.

Even when CF open sources Workers, they will be useless for anything non-HTTP related.


Grandparent said

> Deno deploy seems cool and all, but I haven't seen any great rationale for using their service over say Cloudflare Workers.

Note "their service". You replied with

> Workers are not meant as a generalist runtime.

So, we really were talking about Cloudflare Workers vs Deno Deploy.

Deno is something different, for sure. But it seems Deno Land Inc. is betting on Deno Deploy. Which makes grandparents' question interesting.


With all due respect, there's a point I think you're missing here.

What I'm saying is that, even if you're only going to use Deno Deploy, it is an objectively better proposition because Deno (the runtime) has a much broader use case.

Would you rather use something that (for now) can only be used on a single cloud provider for (let's call them) "edge HTTP" workloads and integrations with services of that single cloud provider...

... or use something that can be used on any cloud/hosting provider, for any HTTP workload (plus many other non-HTTP use cases), which also happens to have an "edge HTTP" service custom tailored for it?

And let's not forget, Deno (the company), is much more focused on real developer needs. Workers still have a mediocre DX (although it has improved considerably lately) and still no framework for Workers like Fresh.

https://fresh.deno.dev/


I'm happily deploying Deno projects to Cloudflare Workers. Outside of Deno.* (not like any isolate-based serverless thing has processes or files), it's all APIs specced by browsers.


> Cold start (time to first response, ms) O(100) O(1000) O(10000)

I think ~100 ~1000 ~10000 would be clearer than using big O notation, since this has nothing to do with the asymptotic growth of functions.


I genuinely don't even understand it. I was either taught wrong, learned wrong or this doesn't make sense. O(100) is constant time, so it's no different to saying O(1) or whatever, I've only ever used O to talk about how we care about the time complexity of a function, e.g. comparing linear time to quadratic time. 1 is the same as 100, irrelevant.


Agreed! I'm pretty sure they're just using O(x) to mean on the order of x, since big O of any constant is the same (unless that's their point??? :O)


If it’s big O notation, it makes no sense because O(10000)=O(1000)=O(100)=O(1).


I think they're trying to communicate that those are worst case latency numbers. You're correct in that formal use of Big O notation doesn't distinguish constants.


I would read ~100 to mean "approximately 100", while O(100) to me reads like "measured in the hundreds". So 200-600ms cold starts are O(100) and 3-7s cold starts are O(1000).


I'm not sure, I'm pretty sure they're trying to communicate that these are worst case estimates. ~100 seems like it's communicating average case estimates (big theta notation), which isn't really the same thing


Big Theta isn't "average case". It describes an algorithm with the same asymptotic lower and upper bounds. How would you even formally define an average case, when the result depends on the input?


It's that easy to raise money these days, at the 10% inflation mark. Spin up a webpage full of nonsense promises and they'll come chasing after you to take their cash.


Comments are a bit negative. I for one think they are onto something here. Surprised it's only 21M. I would have expected in these market conditions to beef up more for the next 2-3 years.


> Surprised it's only 21M.

Indeed, especially when you compare with a company such as Supabase (which is working on way less interesting technology imho) who just raised $80M: https://news.ycombinator.com/item?id=31328783


Deno probably would have raised $80M if they'd raised the same time Supabase did. Investors got quite a bit more tight fisted between January and April.


Good point.


I find Deno very interesting. It's written in Rust (by Mozilla), executing TypeScript (by Microsoft) on the V8 engine (by Google), and its name is a sorted version of "node".

That aside, I have been very productive with Deno. Web Standards are going in the right direction, and Deno makes using them easy. The Request/Response model with streams makes a lot of sense and provides lots of ways to optimize.

I understand performance is not the best compared to Elixir or Rust, but the ability to quickly download Deno, run a web server, import modules through URL, and start hacking and testing, then bundle into a cross-platform executable is a life-saver. No installation step, no build tool in between.


node can do all this stuff too


I'm no stranger to Node.js, but every time I want to start a project with it, I need to set up package.json, install TypeScript, Ava/Jest/Mocha or what have you, a bundling tool like esbuild, and only then can I start writing code. If I want to watch it, I need to install another dev dependency.

And if I want to write a web server, well, time to find out what the trendy web server is these days, and which router to use.

Whereas I can just create a main.ts file, `deno run/test -A --watch main.ts` and start importing the server from std.

Yes, you can do pretty much anything in anything, but for me, Deno lets me start up faster.

Oh, and we're not getting to the cross-platform build yet.


As a Deno fan I was surprised to learn that the free tier for Deno Deploy includes 100k requests per day and 100GB of bandwidth monthly. I know I'll be trying it out now.


I think Deno Deploy and using Deno on fly.io could be a powerful combo. Deno has some advantages that definitely lend themselves to having control of your entire MicroVMs. An app could easily be split between serverless-friendly and non-serverless-friendly parts of the stack by using both together.


> Up to 10ms CPU time per request

So less than 17 minutes of CPU time per day - not a lot, but also not nothing.. But at 10ms per request, what would one use it for? Just server-side rendering for something simple?


Bear in mind this is CPU time, not wall clock time. I work with both AWS Lambda functions and Deno Deploy functions every day. I hit the 10s Lambda timeout all the time (things like slow API calls or network requests), but I never hit the timeout for Deno Deploy. You can even use it for things like Server-Sent Events or Websockets. It'll keep the connection open indefinitely, as long as it's not using the CPU.
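A quick Node-style sketch of the distinction (using the standard `process.cpuUsage` and `process.hrtime.bigint` APIs, not anything Deploy-specific): waiting on a timer or I/O consumes wall-clock time but nearly zero CPU time, which is the number the 10ms limit counts.

```javascript
// Wall-clock vs CPU time: a 200ms "I/O wait" burns ~200ms of wall time
// but close to 0ms of CPU time.
const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

async function measure() {
  const wallStart = process.hrtime.bigint();
  const cpuStart = process.cpuUsage();

  await sleep(200); // stand-in for a slow upstream API call

  const wallMs = Number(process.hrtime.bigint() - wallStart) / 1e6;
  const { user, system } = process.cpuUsage(cpuStart);
  return { wallMs, cpuMs: (user + system) / 1000 };
}

measure().then(({ wallMs, cpuMs }) =>
  console.log(`wall: ~${Math.round(wallMs)}ms, cpu: ~${Math.round(cpuMs)}ms`)
);
```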


Proper applications as well. For example, the https://deno.land website has an average CPU time of 6ms. CPU time doesn't include any IO-bound operations, so e.g. doing a fetch request won't really contribute to the CPU time.


Average time 6ms with hard cap at 10ms sounds like a ticking timebomb you don't dare to put in production.

I've had to switch multiple webapps to Cloudflare Workers' "unbound" mode to go beyond CPU time limits.


Does this mean we can finally get a REPL where a file can be loaded, modified, then reloaded, without having to restart the whole thing?

Seriously my biggest pet peeve with both deno and node.js. In every other REPL I've used this is basic functionality. When I talk about this to JS people they look at me like I'm from mars.


JS modules can produce side effects on load. Does that present an obstacle to that kind of REPL pattern?


I don't think so.

Most repls use a special in repl keyword to accomplish this.

For some reason the JS community can't move past "but modules are special". Ocaml has modules too and the repl can reload stuff from a file no problem.


Naive question, but is that dictated in the spec? Probably?

Just wondering that, since Deno is rewriting all of npm anyway, it would have been a great time to revisit some of these historical script-y design decisions, to make JavaScript/TypeScript-at-scale suck less.

Granted, probably too late now, but as-is neither Deno's URL-based imports nor its sandboxed-by-default changes have seemed worth the migration cost.

But, if they also made something like "everything hot reloads for free" level ergonomic/platform changes, then yeah, I'd be hopping over.


Deno has a built-in repl

https://deno.land/manual/tools/repl


Well thanks for reading the first 9 words of my post at least.


Running little processes on the server without the VM/K8s expensive abstraction has a lot of potential. I'm sure Deno will do great! At least this way of deploying web apps has a lot of potential.


Isn’t this what an operating system is doing by default?


I should say "untrusted code" to be more specific.


> JavaScript is unlike other programming languages in that it is the universal scripting language. Its universality combined with security from the browser and its raw performance lends itself to a solution for these problems, at least for a certain common class of applications.

You want to sell Javascript based solutions, go for it. But don’t push this nonsense like “No language like Javascript”. The only reason Javascript has a wider adoption than others is because it’s forced upon us. Browsers understand just Javascript to display web pages, period. Doesn’t make it the best because of that reason. I say this as an experienced IT consultant managing a wide array of projects over my career across industries.

You know what usually bites me when I touch code that has not been touched for 6 months (usually a late contract renewal)? It’s not my Elixir or Ruby code that’s running on autopilot. It’s the stupid Javascript with its Node based dependencies all failing randomly in each direction just because some developer used a library for something trivial they could have written themselves or super likely because Babel or Webpack decided to change their config files so my entire JS pipeline breaks. JS is a language full of patchwork and its entire ecosystem, more so - it certainly has gotten better over the years. If it works for you, then great. But it’s a far cry from a “one solution for everything that’s better than everything else”.

Personally, I use Coffeescript. It has been a breeze from all aspects.


> You want to sell Javascript based solutions, go for it. But don’t push this nonsense like “No language like Javascript”. The only reason Javascript has a wider adoption than others is that it’s forced upon us.

That it's forced upon us doesn't mean it can't be universal. On the contrary, it usually is. It's like how the USD is kind of forced on the whole world, but it's universal nonetheless.


> Doesn’t make it the best because of that reason.

This is a strawman. Where in the original post did they say JavaScript was the best? They said JavaScript is the "universal" scripting language in the sense that it is ubiquitous. That is a fact. You literally confirmed it here:

> Javascript has a wider adoption than others is because it’s forced upon us

> You know what usually bites me when I touch code that has not been touched for 6 months (usually a late contract renewal)?... It’s the stupid Javascript with its Node based dependencies all failing randomly in each direction just because some developer used a library for something trivial they could have written themselves or super likely because Babel or Webpack decided to change their config files so my entire JS pipeline breaks.

Okay....? Then write VanillaJS instead of depending on Node dependencies. It's not JavaScript's fault that you're a contractor dealing with other developers' app-level tech debt.

> Personally, I use Coffeescript. It has been a breeze from all aspects.

That's a cool story, but Deno uses TypeScript so the whole JavaScript rant crumbles here.

> Browsers understand just Javascript to display web pages, period.

Bits like this make me feel like you're just throwing a bunch of domain-related words together and hopes something sticks.

Browsers understand more than JavaScript. JavaScript is the control layer for logic, which can include manipulating the page, but ultimately it's HTML that displays the web page.

> I say this as an experienced IT consultant managing a wide array of projects over my career across industries.

I say this as an experienced software engineer deploying a wide array of web apps getting hundreds of millions of views a month over my career across FANG companies.


> I say this as an experienced software engineer deploying a wide array of web apps getting hundreds of millions of views a month over my career across FANG companies.

> Okay....? Then write VanillaJS instead of depending on Node dependencies. It's not JavaScript's fault that you're a contractor dealing with other developers' app-level tech debt.

That’s a strawman. I challenge you to write a full-on, production-level frontend application using pure Vanilla JS. You can’t. Not in 2022… unless it’s a simple DOM manipulation job.

> That's a cool story, but Deno uses TypeScript so the whole JavaScript rant crumbles here

“Bits like this make me feel like you're just throwing a bunch of domain-related words together and hopes something sticks.” See how easy it is to resort to ad-hominem?

So what you’ve worked across FANG companies? It’s irrelevant to the discussion. No need to show off - saying you’re an experienced software engineer is good enough. The rest of it doesn’t make your argument any more valid.

Javascript apologists like you who don’t see the problem with the ecosystem are the reason why the whole ecosystem is so fucked up with half baked systems that are barely reliable.


> deploying a wide array of web apps getting hundreds of millions of views a month over my career across FANG companies.

This bit is unnecessary. We don’t need a bunch of e-penis measuring. I personally see less credibility when social proof is used this much.


Did you mean to say Coffeescript? That's just a thin transpiler to JS.


Where's the angle for the investors - what will Deno do to produce the returns necessary for such a high amount? I thought Deno would be the eradication of Node and bring stability to JavaScript. From the outside it seems to be firmly on its way to being just another Zeit/Vercel. :(


A lot of marketing fluff and exaggerations, but the core of it is very interesting. Combine this with a real serverless Postgres like https://neon.tech/ and this could indeed change a lot about the way mainstream apps are built and deployed.


Deno sounds like a powerful runtime, but what is there to sell?

I won’t build on anything that isn’t permissively licensed. They can’t charge for anything without license protection.


They are selling a global Deno hosting service https://deno.com/deploy


Is it Docker Inc all over again?


joyent


And what happens when AWS, Azure, Google Cloud, Digital Ocean, OpenShift, IBM, … roll this out?


One of them acquihires deno for 100M!


Somebody doesn't understand O notation.


They're abusing the notation, but doing so in a way that's perfectly clear.


I also didn't get it.

So I think in their notation, O(10) has a greater magnitude than O(1).


You're expected to read it out loud (in English), where one of the conventional readings of O(f) is "order of f". Conveniently "order of 100" is usually taken to mean not asymptotic behavior bounded by a constant multiple of 100, but a range within an order of magnitude or so of 100.

Which is to say, they're saying Deno has a cold start time of ~100 ms, package size ~10M, a physical machine can support ~1k instances.


Cool, it is good to know, but at the same time it is weird. I would use the ~x convention to deliver that message. That comparison table is there for a reason, which is to deliver a marketing message, and I'm sure there are better ways to deliver it.


I would love to try Deno deploy, but right now there are two deal breakers for me:

- why is the CPU time limit so low? 10ms (or even 50ms in Pro version) seems a limit very easy to blow for any sufficiently complex app

- why is there no data storage offering available? I'm not sure I see the point of edge deploys while data is still only accessible through a centralized database server. Having a way to deploy a sqlite database next to the running app seems like an easy way to realize the gains of edge deployments.

I'm currently trying Fly.io which seems to cover both of these issues, but I wonder if I'm missing something here - or perhaps this service is intended for very different use cases.


I _believe_ there is an important distinction here between "CPU Time per request" and "Time per request". Most complex apps are going to be IO-bound which shouldn't count towards CPU time.

It's strange this isn't more explicit.


Yeah, I found this confusing at first. But you can even run WebSockets on it for minutes at a time, or pipe large files through it, taking many seconds. As long as they're not using CPU, they can keep running.


Is Deno going to be the first runtime environment to IPO?


I think Sun Microsystems IPOd


Were they not making & selling hardware at the time of IPO?


I'm not deep in the Node world but I use it and see it being used. Who is using Deno out there? Have there been some notable/major ship jumps? Or is it not that kind of situation (my perception from when it came on the scene is that's exactly what it is: constantly trying to woo Node devs over)? And there's also a lot of recognizable brand competition out there from the likes of Cloudflare, Netlify (but they're an investor in this?), Vercel on fire the last year or two, not to mention the big FAANG types. Again, who's using this?


Netlify, Supabase and Slack whitelabelling Deno Deploy for starters, I'm sure there are others out there self-hosting as well


See now that's interesting - All of those have very out-there brand reach and recognition but who knew they were using Deno, nobody heh. Okay, then, so it's a bit under the radar but ticking away down there, hence the raise. Fairplay.


If you're not following those companies' offerings then it stands to reason you haven't heard about it but all three of them did broadcast pretty widely that they were using Deno, I would assume quite a few people were aware of this.


yup, can confirm nothing about it was hidden (good on them, but better for deno)


I think it's still pretty early days but it's clearly far superior to Node so I think we'll see more and more people use it.

The biggest barrier is the library ecosystem. JS's insane packaging history means a lot of NPM libraries don't work seamlessly with Deno even if they don't use Node APIs.


I imagine you are already aware, but the creator of Node, Ryan Dahl, is also the creator of Deno. He is trying to create a new language which integrates what he learned from building Node.


New language or new runtime?


It's all JavaScript, so language is probably not the right word to use. Runtime is correct, but both technically sit on top of V8. Although Node leverages C++ on the backend, and Deno leverages Rust.


Thanks that clears it up for me


Slack is the biggest I know of https://deno.com/blog/slack


From my horizon the most significant adoption has been Netlify edge functions. So not exactly ship jumping, but adjacent and growing spaces.



What does O(100) mean in the comparison table?


"on the order of 100ms"

It's misusing big-O notation for "order of magnitude", but after I initially tripped over it, it seemed clear enough.


I think they are using O(x) as a shorthand for antilog10(log10(x)±0.5).

Or, in simpler terms, O(x) = “approximately x, to one significant figure”
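That reading can be pinned down as a one-liner (a hypothetical `orderOf` helper, just to make the parent's definition concrete): rounding log10(x) to the nearest integer is exactly the ±0.5 window above.

```javascript
// "O(x)" in the parent's sense: round to the nearest order of magnitude.
const orderOf = (x) => 10 ** Math.round(Math.log10(x));

console.log(orderOf(70));  // 100
console.log(orderOf(250)); // 100
console.log(orderOf(340)); // 1000
```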


It’s used as a comparison to the alternative, O(1000), an order-of-magnitude improvement.


I think it's a good opportunity for Meteor, which is a very interesting framework but with some technical debt, to jump ship on Deno and start fresh again. Maybe they could try replacing their in-house synchronization mechanisms by managed services such as AppSync. Randomly wondering :)


"JavaScript is unlike other programming languages in that it is the universal scripting language."

WTF? How is that even slightly true?

C is the universal scripting language, in exactly the same way.

Nothing happening in this space makes any sense to me.


There's interpreters/runtimes for C? At least that's the go-to definition for scripting languages.


I think the point was merely that JavaScript is obviously not the "universal scripting language" and claiming it is demonstrates not just hubris but ignorance.

That said, I have tons of little one-off utilities I launch with "go run random-thing.go" because I don't care about the second or two it takes to compile, and if C were my everyday language I'm sure I'd have a compile-and-run wrapper handy. At that point, it's effectively a scripting language, right?


Which scripting language has a better claim to being universal? JavaScript runs on billions of diverse systems worldwide: browser, server and CLI alike.


perl python lua scheme bash


Which browsers support these?


try running C in your browser lol


heh, I knew for sure Deno would be successful from the first moment it appeared on HN and people here were critical of it.

My rule of: The more HN criticizes it, the more likely it succeeded, still rings true.


So crypto is going to be a giant success?


Depends how you measure success. It being a trillion-dollar market, I would call it wildly successful.


Hi man, you seem to have found something. Transfer your bank deposits to btc eth, and in a few years you will find that they are far away from the banking institutions.


The business end here is Deno Deploy - a heroku-like. That's what the "startup" behind the funding is. The tech is in service of that.


JavaScript is popular because of browsers not because it is in any way a good language. Name another popular language with half its quirks.

WASM will eventually spell the end of JavaScript’s hegemony as it becomes easier to build web apps in other languages. So good luck Deno. You’re betting on a sinking ship.


C++ _easily_ has as many if not more quirks than JS. There have been over a half dozen major revisions of the language since ~1988, each one adding new syntax or idioms, sometimes in dramatic ways, that required major relearning. If you learned and used C++ in 1995 you would be a liability if thrown into a modern 2015+ C++ codebase (well, a liability until you learn all the new quirks and changes in the language).

You might drop into a C++ codebase from 1990 that looks more like C (the language was made to be mostly backwards compatible, after all), then see a codebase from 1999 that's chock full of the most cryptic template metaprogramming imaginable and a complete spaghetti structure of OO "design patterns", and then see a codebase from 2010 making much heavier use of the updated standard library and a functional style of programming.

The sheer scope of C++ as a language is frightening, as it requires quite deep knowledge of the parts you're using, sometimes even deep knowledge of the history and rationale for why those parts of the language exist. And if you screw something up you might silently introduce a major security hole, an OS crash, or worse; the language gives you a ton of rope to hang yourself with.

Oh and there's no standard build system, no standard package manager, no standard project layout, no standard for code formatting, documentation, etc. Everyone has their own bespoke solutions that make the JS world look amazing in contrast.


True. And C++ is popular in the sense that people use it, but I don't think many people really like it.


Wow, I had a play; within 30 seconds I had deployed their hello world. Checked devtools and responses took between 30-100ms (I am not on a great internet connection, but this proves it's deployed somewhere near me!).

Also loving .deno.dev as a fairly nice top level domain for a blog. E.g. johnsmith.deno.dev.


I read half and thought "how did this get any funding".

Who validated this idea and with what measures?


Mainly because of Ryan, who created Node.js. If you have a good track record, investors will throw money at you anyway.

Not being pessimistic about this though. Imho Deno seems like a solid improvement over nodejs.


I think it's similar to many critiques of crypto: It doesn't need to exist. But someone really wants it to regardless.


are there real-world, commercial products actually running on """serverless""" architecture?

no matter how much I think about that whole concept, I see no application for it that couldn't be done better, faster and easier with regular tools


> are there real-world, commercial products actually running on """serverless""" architecture?

Yeah, thousands I would imagine. The last two companies I have worked at have been 100% serverless or nearly 100%.

> no matter how much I think about that whole concept, I see no application for it that couldn't be done better, faster and easier with regular tools

Considering you have to ask if there are _any_ products running in a serverless environment, I would imagine you need more exposure to the concept before you make such a large judgement on it.


> Yeah, thousands I would imagine. The last two companies I have worked at have been 100% serverless or nearly 100%.

If you can answer (for legal or other reasons you might not be able to): what kind of monthly bills did your setup have, and how many req/s did you serve (for lack of a better metric)? It'd also be useful to know average/min/max response times if that's something you remember.

I wish that were a good way of evaluating "performance / price", but that's really hard... If someone knows a better way to frame the question regarding price, it would be very helpful.


I can't list that, but both companies migrated to serverless and both companies are glad they did.

There is more than "raw dollar amount on monthly bill" to account for in cost, as well. For one, there's stability and the toll that takes on both your team and your customers. Not saying non-serverless apps are not stable by nature, but I've now been part of two teams that have seen the same types of benefits and those benefits line up with the "sales pitch" benefits.


what was the product and the scale then?


I'd rather not list my last two companies or their scale, but I can assure you they are real companies that exist, have users, and make money. I'm not sure why I even would be asked to.


In my opinion (and at most places I've worked) serverless has been a great _complement_ to traditional deployment models (usually k8s).

For example, you have some background tasks (sending email, processing files, etc). It's very convenient to just push those into a queue and have serverless functions chew through them. They scale to zero, cold start time has no negative impact on the workload, you don't have to worry about k8s resource requests and scaling, etc.

I too am skeptical of using any serverless offering for serving HTTP APIs or server-side rendered pages, at least for non-trivial amounts of traffic. Cloud Run can do it, but only because it's a thin management layer on k8s/Knative, and even then networking config for anything other than Cloud SQL is tedious (you have to deploy a proxy to access your VPC).
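A minimal sketch of the queue-worker pattern described above (all names here are invented for illustration and not tied to any particular FaaS platform): the main app pushes jobs onto a queue, and a serverless function receives batches and reports back which jobs failed so only those get retried.

```typescript
// Hypothetical batch handler for queued background jobs (e.g. sending
// email). Cold starts don't matter here because nothing blocks on the
// response, and the platform can scale the workers to zero when idle.
interface EmailJob {
  to: string;
  subject: string;
}

// Stand-in for a real mail-provider API call; here a job "succeeds"
// if the address looks minimally plausible.
async function sendEmail(job: EmailJob): Promise<boolean> {
  return job.to.includes("@");
}

// Most FaaS queue integrations use this shape: take a batch, return
// the jobs that failed so the queue redelivers only those.
async function handleBatch(jobs: EmailJob[]): Promise<EmailJob[]> {
  const failed: EmailJob[] = [];
  for (const job of jobs) {
    const ok = await sendEmail(job).catch(() => false);
    if (!ok) failed.push(job);
  }
  return failed;
}
```

The partial-failure return value is the important design choice: the queue only redelivers the jobs the handler reports, instead of retrying the whole batch.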


I'm not particularly interested in serverless, but my understanding is that a developer doesn't need to know the server configuration; they just drop some code in. The service figures out how to route the domain to the code and handles the storage, caching, etc.

I'd rather know my configuration and use one big server with the static files on a CDN.


Yes, there are many real-world completely serverless products and architectures. I've been working exclusively on serverless architectures for many years now, I have no plans on going back to provisioning servers or working with containers.


It depends on what you categorize as “serverless”. I wrote about how Khan Academy is built on essentially serverless architecture and has millions of monthly users:

https://blog.khanacademy.org/the-original-serverless-archite...


There's plenty of real-world, commercial products which do not get extremely high loads but earn much more money on each request. They're mostly in B2B without sexy names and media coverage, though.


I can assure you there absolutely are. When you need to handle billions of requests a day (like a worldwide seller must), then it works.


Hot take: investors really are stupid enough to think ”servers/datacenters are a massive cost for large companies. A serverless solution should save them lots of money”


It's not a hot take, it's just a bad take, and a complete failure to understand the value proposition of these types of offerings (and why we're seeing so many of them pop up these days)


Can someone more knowledgeable explain what kind of security model these Deno deployments have compared to virtual machines and containers, given that they are isolated at the process level?


I can't believe how much of a nuisance JavaScript monoglots make themselves. My God, go learn a server-side language, please. I will teach you if you stop this.


Are there any sources for the claims made in the chart on the second half of the page or are they theoretical? Seems really cool in theory!


An "isolate cloud" is exactly how Google App Engine worked 10 years ago. It was a good idea then and is still a good idea now.


Just like GoLang, I only wanna use this because that dinosaur is SO CUTE!! hahaha


Deno Deploy: how is this different from Lambda functions or Elastic Beanstalk?


Is the intent of this product to compete with something like Netlify?


No, Netlify actually uses Deno to some degree: https://www.netlify.com/blog/announcing-serverless-compute-w...


Right, but the $21M of funding seems to be for Deno Deploy, the product, not Deno, the CLI.


The wording in that blog post might be a bit off; here it is in more depth: https://deno.com/blog/netlify-edge-functions-on-deno-deploy

Netlify is using Deno Deploy.


Is there a NextJS-like framework for Deno?


There's Fresh, but you can also run Next.js on Deno (and Remix and SvelteKit and Astro and Hydrogen and Qwik and 11ty and SolidJS and more). We do it at Netlify. See the examples linked here: https://docs.netlify.com/netlify-labs/experimental-features/...



deno seems like a great project that solves problems that don't actually exist


who?


Serious question, how do investors plan on making a return on their investment? Where is the cash in this?


There is extreme vendor lockin with services like Deno. I'm sure that it'll be much more expensive in the future.


It is the lock-in and potential feature gates that worry me. It also scares me that developers will skill up in something that can disappear like the wind because the company didn't hit its goals.


It's all standards-based code. The WinterCG project makes it even more portable: code should be easy to move between Deno, Cloudflare, etc.
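As a rough sketch of what "standards-based" means in practice (the route and names below are made up for illustration): a handler written against the WinterCG-style fetch interfaces, using only `Request`, `Response`, and `URL`, can be handed to Deno's HTTP server, a Cloudflare Worker, or Node 18+ without changes; only the one line wiring it into the runtime differs.

```typescript
// Fetch-style handler using only Web-standard APIs (Request, Response,
// URL), so the same function runs unchanged across runtimes that
// implement them. The /hello route is just an example.
async function handler(req: Request): Promise<Response> {
  const url = new URL(req.url);
  if (url.pathname === "/hello") {
    return new Response(JSON.stringify({ greeting: "hello" }), {
      headers: { "content-type": "application/json" },
    });
  }
  return new Response("not found", { status: 404 });
}

// Only the wiring is runtime-specific, e.g.:
//   Deno:               Deno.serve(handler);
//   Cloudflare Workers: export default { fetch: handler };
```

Because the handler is a plain function over standard types, it can also be unit-tested directly by constructing a `Request` and inspecting the returned `Response`, with no server running at all.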


[flagged]


Attack the technology all you want as those are some valid criticisms, but the sarcastic personal attack at the end seems unnecessary. Imagine how he'd feel reading that. Would you want to have that same feeling yourself?


> When I look at the ecosystem of JavaScript it’s frightening to see how little coders care compared to Go or Kotlin ...

I don't know about care but there is a clear difference between people with a computer science or formal programming background and most developers. You can bemoan that, but isn't it desirable that there be a democratization of development? In fact the number of simplifying tools and packages built by programmers to speed everyone up inevitably decreases barriers to doing development, and in many cases improves the quality by managing the harder stuff properly.

We had the same phenomenon a generation or more ago with Visicalc and Excel, where people who would never think of themselves as programmers wrote complex macros to get their jobs done.

And a decade before that we famously had secretaries writing EMACS macros who certainly never thought of themselves as programmers.

I think this is all a good thing, even if I am dubious about my bank's phone app.


> At least I’m happy for him he could finally cash out on sequoia money and become a SV millionaire like everyone else !

It doesn't actually work that way. This is an A round, so a priced round, so in theory the founders' equity could have some noticeable valuation but in reality the common will have no meaningful cash value and the founders aren't worth anything as a result of this funding. Yet, at least.


My experience with Deno has been:

- I don't need to mess with third-party tooling, half a dozen project configs, etc because everything is built-in and Just Works

- The standard APIs are mostly wonderful: modern, promise-based, practical, etc. Documentation leaves something to be desired, and many core APIs are still unstable in that they get breaking changes (though you can easily pin to a specific version); still, for most of the ones with direct analogs in Node you can honestly just follow the Node docs

- Standardized importing is awesome; we may finally leave behind the nightmare of multiple coexisting module systems

- Standardized testing is awesome

- The lack of an install step is awesome

"Isaac Ryan has so far solved none of the problems that exist in the node ecosystem" is simply wrong, and feels like a cheap and uninformed dig rooted in some personal beef you must have.


> Funny because I don’t see how they can make any money from this....

Deno Deploy makes money; it's $10/mo/app: https://deno.com/deploy/pricing


Is it $10/mo/app or is it $10/mo (for as many apps / the user) ?


> At least I’m happy for him he could finally cash out on sequoia money and become a SV millionaire like everyone else !

Tell me you don’t know how venture capital works without telling me you don’t know.


> Even the source of node it’s just a bunch of function from pre-ES4

Old code does not mean bad code. Seems like a silly thing to look at, honestly.

A bunch of Java was written pre-Java 8; does that make it all bad? No, no it doesn't.


New language features are for new code.


Agreed, but old code really doesn't need to be rewritten unless there are bugs.


Or there are optimization opportunities, but new language features are rarely for that; they're usually for developer comfort.


In most scenarios, new ways of writing the same code is a waste of time. I actively discourage use of things like type inference in newer Java versions as it goes against the purity of the original vision of the language. It's a detriment to mental agility, and it also makes scores of incredibly useful, deep books and learning resources outdated and useless.


How exactly is incessantly repeating yourself, in situations where the compiler is already perfectly aware of what you're supposed to say, a "detriment to mental agility"?

And how does it make books and learning resources outdated and useless? Old code still works exactly the same as it always did.


being overly dogmatic when it comes to writing code is not a good thing, imo.


Personally I got the sense that the Go community doesn't really care for web apps or anything too close to React, but maybe I'm just not experienced enough yet with Go. Is there something like CreateRustApp but with Go?

https://github.com/Wulf/create-rust-app


I don't think it's needed. The new front-end build tools optimize for production use and have a great developer experience. Use Next.js or any of the many toolkits out there. There will be a lot more support for them too.

I use Go with Next.js and svelte-kit.


I think you are right. Go devs seem content with their market position/ranking. Use Rust if it is providing more resources that are helpful to you.


They raised $21 million; how are they millionaires lol



