If GP is looking for a general-purpose computer for their kids, M1s are a clear yes. But for DL? No way, as long as PyTorch doesn't run on M1s and an Nvidia GPU in some cheap PC shell carries the same price tag.
OT: After reading about and discussing this feature in this thread, I realize it's not about the feature or whether it's good or bad.
This Deno update and the whole discussion show once again that we want a Node successor, but Deno, as great as it sounds, doesn't offer enough benefits and isn't 10x better than just using Node + TypeScript, which is what it would take to leave the latter and its huge ecosystem.
Even worse, it creates the impression that the Deno team is desperately trying to climb back on stage and get our attention with minor improvements. Maybe I'm ignorant, but Deno feels like an opinionated Node/TypeScript distribution with too few improvements, not like the successor we hoped for.
Besides, I wonder if the Deno team has solved all the performance issues that popped up the last time I read about Deno. There were some debates with the ws community, but I can't remember the details anymore.
> Even worse, it creates the impression that the Deno team is desperately trying to climb back on stage and get our attention with minor improvements.
That seems an uncharitable interpretation. Ultimately they're creating tools for our benefit. They may or may not be useful to you personally, but the creation of value should still be applauded, not dismissed as attention seeking.
> Deno, as great as it sounds, doesn't offer enough benefits (...) Deno feels like an opinionated Node/TypeScript distribution with too few improvements
Node is 11 years old. In the beginning, it was rough around the edges, too. I think you need to be a bit more patient until Deno reaches a similar level of maturity.
When Node came out, it was a perfect storm: Ryan did a brilliant job, right timing, right product, laser-sharp focus, many times better than what came before (I liked Node right from the beginning), and he was fast. All things I miss in Deno.
But I don't blame Ryan. He's a great guy who created the biggest server-side dev ecosystem, and it's hard to top such an achievement, but at least he tries, and that's why I like him.
What I like about not integrating the build step, contrary to what Deno does: you allow competition, and the market comes up with great ideas, like Vercel did with pkg.
Building TS projects is quite demanding, and if one party monopolizes this important step and believes it does the best job, I suspect the ecosystem will degenerate. Even the TS team says the build system is not the core of their work; they just ship one for convenience and encourage the community to compete and complement it. Integrating a build system is good for beginners who struggle with it, but for the rest? IDK.
Or in other words: Deno wants to be more than just an opinionated Node/TypeScript distribution nobody cares about, but then it needs to create this ecosystem and focus on the core (what its core and value-add are, other than repackaging Node and TS, would be the next discussion).
By integrating the build step, they do the exact opposite: they shut down an ecosystem before it can even start. There's a night-and-day difference between good and bad build systems, and only competition and a rich ecosystem can bring out the best solutions.
FWIW, there are tons of ways to compile TS, each with different trade-offs, and it's good that we have these options.
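For illustration, here's what one of those options looks like as a minimal sketch, using esbuild's JS API (the tool choice and paths are just my example, not the only way to do this):

    // build.mjs: transpile and bundle a TS entry point with esbuild.
    // Trade-off: esbuild strips types without type-checking, so you'd
    // typically run `tsc --noEmit` alongside it to catch type errors.
    import { build } from "esbuild";

    await build({
      entryPoints: ["src/main.ts"], // hypothetical entry point
      bundle: true,
      outfile: "dist/main.js",
    });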
I know providing a fully baked compiler API isn't high on their list (think Babel). I couldn't find any information from the TypeScript team regarding anything you've stated about 3rd-party build systems and the compiler being provided only for convenience.
Do you have anything you can point to from the team about this?
It's right in their wiki: "non-goal" number 4 in the TypeScript Design Goals, which is also referenced in some issues:
> [a non-goal] Provide an end-to-end build pipeline. Instead, make the system extensible so that external tools can use the compiler for more complex build workflows.
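And that extensibility is easy to use in practice. A minimal sketch of an external tool driving the compiler API directly (assumes the `typescript` npm package; names are illustrative):

    // transpile.ts: call the TypeScript compiler API the way
    // third-party build tools can.
    import ts from "typescript";

    const result = ts.transpileModule("const x: number = 1;", {
      compilerOptions: { module: ts.ModuleKind.CommonJS },
    });
    console.log(result.outputText); // -> "var x = 1;"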
If it's good that we have these options, what Deno provides is just another option for us to choose from. I don't see how this shuts down the ecosystem.
I don't think Ryan wanted Deno to be just another build system for TypeScript, but something a bit bigger. And even if it's meant as a build system, there are much better options out there just for this purpose.
Why anyone would look at the litany of mistakes that is npm and Node, then look at Deno, with all of the same developers having learned nothing except how to implement its "hurr durr URL loading code is cool" approach to security, and think "this is a good idea" is beyond me.
I appreciate Deno because I can ask job interview candidates what their thoughts are about it, and when candidates for senior positions don't point out any of the billion obvious reasons it's a stupid project for stupid people, it saves me a ton of time. Otherwise, it's a waste of time and effort, and all you have to do to convince yourself of that is look at the contribution history of the most prominent contributors on github.
I have never, not once, in my life as a developer wanted a project to die so badly.
Hiring people based on their ability to predict your own idiosyncratic hatreds of specific technologies is just a horrible idea. It also ensures your ideas never get challenged and you never improve.
> I appreciate Deno because I can ask job interview candidates what their thoughts are about it, and when candidates for senior positions don't point out any of the billion obvious reasons it's a stupid project for stupid people, it saves me a ton of time.
I hope you're either self employed, or that you run your own company. If not, this comment is a red flag for potential recruiters; you might want to consider editing it.
> then in many scenarios it'd be a lot easier to deploy
It's not. Most advanced deployments nowadays use container orchestration, where deploying is just as easy. For simple deployments (e.g. SSGs) there are enough products on the market.
Integrating the build step also hides it (good for beginners) but creates many other problems in the long run, if we're just talking about repackaging the runtime.
I think you're underestimating the diversity of environments out there. Not everyone's using cutting-edge deployment tech; lots of folks are just SSHing or RDPing into a physical or virtual server, copying stuff there, and running it. Certainly, that's how things were done at my last job. And it can get worse; some of these environments are locked down in some way or another, by security policies that limit what you can do. In those environments, having the executable be fully self-contained is really helpful.
(For the record, I am a proponent of things like containerization and serverless, and generally try to bring them into use wherever I can. This doesn't require me to ignore the reality that lots of places don't use them, and that this will remain true for a long time to come.)
> Not everyone's using cutting-edge deployment tech; lots of folks are just SSHing or RDPing into a physical or virtual server, copying stuff there, and running it
Maybe a decade ago; tbf, IDK anyone who deploys like this in 2020. People use either Docker and/or k8s, or a stupid-simple Netlify/Surge/Vercel push. Then there's also serverless stuff, but yeah, you get the idea.
Another apples-to-oranges comparison. OP should have compared each system's TPU, not its GPU. He should redo the benchmark with a proper setup, as requested in the comments here and on Medium; otherwise his post is quite misleading.
To make use of Nvidia's tensor cores, OP had to use Nvidia's own TF distribution/image and configure it explicitly, something PyTorch does out of the box. Nobody knows why Google doesn't do this; maybe they want to push their own Cloud TPUs.
> Adding PyTorch support would be high on my list.
Won't happen. PyTorch needs Apple's help because of the lack of docs; they've already asked, and Apple hasn't commented or promised any kind of support, nothing. That Apple chose TF over the current market leader doesn't give me much hope and might stem from backroom deals we don't know about.
Wondering why OP didn't invest the money into a 2nd 2080 Ti.
Please help me understand: if I deploy my apps as Docker images anyway, why would I need this? Deno 1.6 just packages the runtime, creating a huge file. It's still smaller than a Docker image, but with the latter I get a better deployment experience, meaning there's a huge ecosystem and tooling around it. No rant, just trying to figure out what I'm missing.
Even server side, not everyone uses Docker. If you’re deploying to EC2, in house hardware, whatever, a single executable is simpler. And even if you’re building a Docker image, building the image itself is still a bit simpler - just pop the executable in there, and have the Docker entry point execute it, that’s it.
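Something like this, as a rough sketch (base image and paths are illustrative; assumes `deno compile` already produced ./myapp):

    # Any minimal Linux base with glibc should work.
    FROM debian:bullseye-slim
    # Pop the self-contained executable in there...
    COPY ./myapp /usr/local/bin/myapp
    # ...and have the entry point execute it. That's it.
    ENTRYPOINT ["/usr/local/bin/myapp"]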
Yes and no. These are amenities, but they're really small, and IDK if they justify hiding/abstracting away an important build step.
> Even server side, not everyone uses Docker.
IDK, I've tried to find alternatives over the last few years, but for a slightly more sophisticated app you can't ignore images and container orchestrators like k8s. And the latter is still easier than anything else I've seen and has by far the biggest ecosystem. If I want to host some minimal app, I just push an SSG to Netlify/Surge/Vercel; it's not an integrated build step that makes my life easier.
> just pop the executable in there
Otherwise you would just need one more line in your build file (npm install).
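I.e. the plain-Node image is only marginally longer, something like this (again a sketch; file names are illustrative):

    FROM node:14-slim
    WORKDIR /app
    COPY package*.json ./
    RUN npm install            # the "one more line"
    COPY . .
    ENTRYPOINT ["node", "main.js"]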
> Then obviously for CLI tools, this is SUPER nice
Also, yes and no: Deno "binaries" have huge file sizes compared to an npm install -g, and rarely used CLI tools can be fired off with npx, so which problem exactly is solved? That I can offer CLI tools to folks who don't have Node installed? Then I'd rather write my CLI tool in Go and offer an appropriate package size.
I welcome competition, and hence Deno, but I think this feature doesn't fulfill any (relevant) use case. Only beginners who struggle with the build step (which can indeed get hairy) profit from this design decision; slightly more advanced users will miss the control they had before.
I like K8s too, especially for a service-oriented architecture, but there are tonnes of other deployment targets out there. I'd bet the overall percentage of server-side software running on K8s is in the single digits, though that's a pure wild-ass guess. Hours ago I finished debugging an outage where requests to one K8s service, through K8s ingress, slowed down 10x after an insignificant deploy; then we cycled the pods (without changing the code) and it sped up again. No idea why. K8s is complex, and many ppl choose not to take on that complexity. TONNES of people like being able to deploy a single executable to their servers; it's part of Go's popularity, part of the popularity of fat jars in Java land (and those still need a JVM!), etc.
As for CLIs, the Deno executable overhead is about 47 MBs. Not nothing, but also ... that’s like a few extra seconds of download time for the tool, and insignificant disk space when people have hundreds of GBs on their laptops. If I’m writing some sort of command line tool, the tool being 50 MBs bigger probably does nothing to hurt adoption. But it having zero external dependencies WILL help adoption, vs. npx and screwing around with proper node versions and whatnot.
I think you already answered the question. You don't need to introduce the Docker CLI, the Docker daemon, a container registry, etc. Not saying there's anything wrong with Docker, but having options for application packaging is nice!
OK, the Docker client stuff is not always exciting, but once you want to deploy something small, say, an app server, a DB, and something like nginx or Traefik, you need some orchestrator, e.g. k8s, and then you need images again. If you prefer containerd over Docker, also fine.
What I'm saying is: which orchestration and deployment system currently favors single executables and has a huge ecosystem? You still need to create images and do double the work. I like real binaries like the ones Go creates, but repackaging the runtime doesn't sound like a sophisticated idea, rather like making the black box even bigger.
As a sibling said, for client-side/3rd-party apps, yeah, this might be a nice-to-have, but that space has other challenges.
OT: Just a second ago, I was setting up unattended-upgrades for security updates on a new Ubuntu box, and I am once again puzzled why the largest Linux distribution has such an underwhelming UX for a crucial feature. Long story short, I welcome any new contender in the OS space.
> I am once again puzzled why the largest Linux distribution has such an underwhelming UX for a crucial feature
Are we talking about Ubuntu, or Android? Because honestly in either case... what would you improve? Ubuntu has updates rolled into GNOME's package management frontend and it's seemed to work well, and Android has decent UX around updates (especially with A/B system partitions) although of course it suffers from vendors not actually releasing updates.
You have to touch two config files. One is easy, but the other needs a bit of googling. Nothing major, but why is this necessary at all? Auto-installing security updates should be the default if you run servers in the wild.
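For reference, the easy one is conventionally /etc/apt/apt.conf.d/20auto-upgrades and looks roughly like this; the googling happens in 50unattended-upgrades, where you whitelist origins and tune things like automatic reboots:

    // /etc/apt/apt.conf.d/20auto-upgrades: enable the periodic runs
    APT::Periodic::Update-Package-Lists "1";
    APT::Periodic::Unattended-Upgrade "1";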
Not if you just want automatic security updates? The package is part of the server task, and defaults to installing security updates. It does not install other updates, and does not automatically reboot.
> Auto-installing security updates should be the default if you run servers in the wild.
OT indeed; this is about a kernel, and you're asking for a userspace feature.
Also, there are a large number of "contenders" in the Linux-based OS space which may have the UX you want, and if there isn't one, this sort of thing tends to be pretty easy to tweak and suggest changes for (unlike in most of Google's OSes).
I don't think the OP story is very good; we all make mistakes, but the point of the story is, for the most part, about organizations, not people.
There is no reason to "stay hyper-productive and focused" every single day. Just as the work ebbs and flows, a person's energy ebbs and flows.
The key is to develop _habits_ which serve you well on "off" days. Put forth the effort to do things like wake up at the same time, log in and do certain "ceremonies" of work, and always do SOMEthing productive every day.
It's the exact same thing as forcing yourself to go to the gym every day no matter what. You don't have to set new personal records every time. But if you go and put in at least SOME level of effort, push yourself at least somewhat, then on the "bad" days you can just put yourself on autopilot and still put in a good (not GREAT, not superb, not fantastic, just GOOD) performance. And that's what matters.
I've tried habit-forming techniques; they've never worked for me and feel like a scam from the self-help industry.
> forcing yourself to go to the gym every day
This also never worked for me. While going to the gym is super effective, I found it the most boring and brain-deadening activity ever; I degenerate there even more than when browsing Reddit for two hours. In contrast, I have no problem motivating myself to go and play tennis, without any habit-forming voodoo.
I like the question and at the same time I’m not sure if I understand what you are looking for. If I’m “hyper-productive” and “focused” on my “worst day”, it obviously wasn’t my worst day? Bad days usually are days when I can’t focus. Are you looking for people who never have a bad day?
> I’m not sure if I understand what you are looking for.
I like OP's post but would love to get some empirical data.
> If I’m “hyper-productive” and “focused” on my “worst day”, it obviously wasn’t my worst day?
Good q. I could also have asked: what's your system for overcoming setbacks that usually distract you and turn an OK day into a bad one? Or simply: how do you turn a lazy or badly started day into a good one?
(1) Separate work from life, be able to walk away for a bit when work isn't going well. And do walk away if things aren't going well. If you haven't been able to write any code for the past few hours, odds are you won't be able to write any in the next few hours.
(2) Focus on process over results. I.e., have a good process to minimize the amount of time you spend thinking about what you should be doing, whether you did the right thing, etc. What honestly helps in these cases is just having a task list of "I need to get XYZ done today" and then blasting through it without leaving room for overthinking. I like Getting Things Done (i.e. https://www.amazon.com/Getting-Things-Done-Stress-Free-Produ...) because it helps separate work from life.
(3) Take the long view of your life/career. The truth is that you are going to make mistakes, bugs will get into prod, you're going to get burned out, etc., so you need to accept that you will have "bad" days (or days, weeks, months where you just don't care about what you're doing, in which case it is obviously going to be crap) and focus on the process for minimizing them over the long run. I think the important question here isn't "did I make mistakes" but rather "is my process resulting in a slower rate of mistakes/less severe mistakes?"
(4) Never forget to eat, sleep, drink water, and exercise. Especially sleep. When things are bad, we tend to sacrifice sleep, and that almost always makes it worse.
Most of it I think is summarized as having a process you can trust so that when things do go poorly you can focus on the process in those moments. The process will get you out.
Good point, I should have explained it. Getting into vim takes time, and building muscle memory takes at least 9-12 months, something a lot of coders don't even do, so why should a data scientist? But once you have it, you're so much faster, because most of the time you don't write code; you stare at your code, navigate around, and make small changes. So I think if you deal with code 80% of your day, vim bindings are a must, and this was my biggest gripe with these notebooks. Google Colab is the only notebook offering painless vim bindings. No other notebook has proper vim support, because data scientists are not coders and care more about math.