How does it work? It's git-integrated, I assume... and do you export a Dockerfile or something to replicate the exact runtime?
My fundamental question is: how easy is it to go from writing code to production deployment?
This has been the big issue with cloud dev. Take the example of a Python Flask app with Pandas: behavior differs depending on whether you developed on an Alpine environment or Debian. Now if I just take the code and try to deploy it on Red Hat... it goes all screwy. Lots of the C library extensions start breaking.
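One quick way to see why the same package behaves differently across distros is to fingerprint the runtime the code actually sees. A minimal stdlib-only sketch (the output naturally differs between Alpine/musl and Debian or Red Hat/glibc):

```python
# Fingerprint the runtime details that decide whether a compiled wheel loads.
# platform.libc_ver() reports the C library (glibc on Debian/Red Hat; musl
# on Alpine often shows up as empty); sysconfig.get_platform() is roughly
# the tag pip consults when choosing a compatible binary wheel.
import platform
import sysconfig

def runtime_fingerprint() -> dict:
    """Collect the details that decide whether a compiled wheel will load."""
    lib, version = platform.libc_ver()
    return {
        "python": platform.python_version(),
        "libc": lib or "unknown (often musl on Alpine)",
        "libc_version": version,
        "platform_tag": sysconfig.get_platform(),  # e.g. linux-x86_64
    }

if __name__ == "__main__":
    for key, value in runtime_fingerprint().items():
        print(f"{key}: {value}")
```

Running this in dev and in prod and diffing the output is a cheap way to spot the Alpine-vs-Debian-vs-Red Hat mismatch before deployment does.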
Can I single-click create a running Docker environment with my running code, EXACTLY like the dev environment?
Funnily enough, the only good one here is AWS Sagemaker - https://docs.aws.amazon.com/sagemaker/latest/dg/docker-conta...
They do this for machine learning code. You can take your data and algorithms out of the IDE and get a running Docker container with all the build scripts and everything.
Second: your pricing is off. Part of the charm of a cloud dev environment is never turning it off, like shutting the lid of my MacBook M1 and opening it tomorrow morning; even the cursor is in the same place. There is zero incentive for me to shut down a system and reopen it the next day.
In that scenario, your pricing for an 8-core, 32 GB instance is 421 USD per month.
Comparatively, a Google Cloud 8-core, 32 GB instance with a 100 GB SSD is 212 USD.
SageMaker Python notebooks on ml.t3.2xlarge are 0.399 USD per hour, about 288 USD per month. You will need to adjust your pricing.
Replit's 7 USD Hacker plan is cheaper than the equivalent SageMaker pricing (about 36 USD per month).
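For reference, the hourly-to-monthly conversions above can be sanity-checked with quick arithmetic, assuming always-on usage at 30 days per month (actual cloud list prices vary by region):

```python
# Convert an hourly on-demand price to an always-on monthly cost.
# Assumes 24/7 usage at 30 days/month (720 hours); list prices vary by region.
HOURS_PER_MONTH = 24 * 30  # 720

def monthly_cost(usd_per_hour: float) -> float:
    return round(usd_per_hour * HOURS_PER_MONTH, 2)

print(monthly_cost(0.399))  # 287.28, i.e. "about 288 USD" for ml.t3.2xlarge
```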
You can view a Nimbus workspace as a Linux machine that you own, but in the cloud. We built an internal Dockerfile-like IDL to replicate the exact dev environment every time a new workspace is created.
- We love Dockerfile, but we didn't build directly on top of it because there are more configurations we want to enable (such as on-create/start/stop/delete lifecycle hooks, and personal/team configs).
- That being said, I can imagine exporting a Dockerfile being feasible on Nimbus in the future (sufficient to replicate a new dev environment), though with certain Nimbus-specific features missing.
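To make the lifecycle-hook idea concrete, here is a hypothetical sketch of what such a spec could look like. Every field name below is invented for illustration; this is not Nimbus's actual IDL:

```python
# Hypothetical sketch of a Dockerfile-like workspace spec with lifecycle
# hooks - the kind of extra configuration a plain Dockerfile can't express.
# All field names here are made up for illustration, not Nimbus's real IDL.
LIFECYCLE_EVENTS = {"on_create", "on_start", "on_stop", "on_delete"}

workspace_spec = {
    "base_image": "debian:bookworm",          # pin the OS the team builds on
    "packages": ["python3.11", "git"],
    "hooks": {
        "on_create": ["pip install -r requirements.txt"],
        "on_start": ["flask run --debug &"],
        "on_stop": ["pg_dump app > /backups/app.sql"],
    },
    "personal": {"dotfiles_repo": None},      # per-user overlay on team config
}

def validate(spec: dict) -> None:
    """Reject hooks attached to unknown lifecycle events."""
    unknown = set(spec.get("hooks", {})) - LIFECYCLE_EVENTS
    if unknown:
        raise ValueError(f"unknown lifecycle events: {sorted(unknown)}")

validate(workspace_spec)  # passes: all hooks use known events
```

The point of hooks over a bare Dockerfile is that create/start/stop/delete each get their own step, so per-user config can layer on top of a shared team base.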
> It's git-integrated, I assume...
And yes, you're right! It has git integration, and we are working on more tooling integrations right now to build a better developer experience (think all the source code management tools your team uses, credentials/env-variable tools, etc.).
> how easy is it to go from writing code to production deployment?
I'm totally with you. As an engineer myself, to me, only having code deployed to production marks the completion of something, not just merging the code to the main branch. So it's important to have an efficient, stable way to move a piece of code to a PR, to staging, and eventually to production.
This isn't the main value prop the Nimbus team is solving for, but we do facilitate it, by
- making Nimbus a seamless part of engineers' dev workflows (alongside all your task-tracking, SCM, and CI/CD tools)
- providing flexibility in setting up the dev environment (e.g. you can set it up to be more consistent with the production setup while still containing development-specific tools)
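One mechanical way to keep a dev environment consistent with production is to diff pinned dependency lists; a minimal sketch, with made-up package pins, over `pip freeze`-style text:

```python
# Minimal sketch: detect version drift between prod and dev pinned
# requirements. Inputs are plain `pip freeze`-style text; the example
# package pins below are made up for illustration.
def parse_pins(text: str) -> dict:
    """Map package name -> pinned version, skipping comments and blanks."""
    pins = {}
    for line in text.splitlines():
        line = line.strip()
        if line and not line.startswith("#") and "==" in line:
            name, version = line.split("==", 1)
            pins[name.lower()] = version
    return pins

def drift(prod: str, dev: str) -> dict:
    """Packages present in both lists but pinned to different versions."""
    p, d = parse_pins(prod), parse_pins(dev)
    return {name: (p[name], d[name])
            for name in p.keys() & d.keys() if p[name] != d[name]}

prod_reqs = "flask==3.0.3\npandas==2.2.2\n"
dev_reqs = "flask==3.0.3\npandas==2.1.0\nipython==8.24.0\n"
print(drift(prod_reqs, dev_reqs))  # {'pandas': ('2.2.2', '2.1.0')}
```

A check like this in CI would flag the "dev and prod out of sync" problem before it bites, whatever tool actually provisions the environments.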
> Can I single-click create a running Docker environment with my running code, EXACTLY like the dev environment?
I'm not an expert on SageMaker myself - do you mean auto-generating a Dockerfile based on your codebase? :-)
> making Nimbus a seamless part of engineers' dev workflows (alongside all your task-tracking, SCM, and CI/CD tools)
Tricky. Unless you can allow my prod environment to be imported into Nimbus (or the other way around: export). Otherwise my prod packages and your packages will always be out of sync. And that is too bothersome.
The problem is not code merging and branching. If it's a "dev environment", it has to be in sync with my production environment.
Everyone here has been burnt by different versions of operating system libraries, so stuff doesn't work properly. Python is notoriously funky about this because many of its core libraries are actually written in C.
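That libc sensitivity is visible right in the wheel filenames pip chooses between: binary wheels carry interpreter/ABI/platform tags (the wheel filename format comes from PEP 427; `manylinux` tags encode a minimum glibc per PEP 600, and Alpine's musl needs separate `musllinux` wheels). A small parsing sketch, using an illustrative filename:

```python
# Why the same `pip install` behaves differently per distro: binary wheels
# are tagged with the interpreter, ABI, and platform they were built for
# (PEP 427 filename format). The filename below is just an example.
def wheel_tags(filename: str) -> dict:
    """Split a wheel filename into its name/version/tag components."""
    stem = filename.removesuffix(".whl")
    name, version, python_tag, abi_tag, platform_tag = stem.split("-")[:5]
    return {"name": name, "version": version, "python": python_tag,
            "abi": abi_tag, "platform": platform_tag}

tags = wheel_tags(
    "pandas-2.2.2-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl"
)
print(tags["platform"])  # manylinux_2_17_x86_64... -> needs glibc >= 2.17
```

A wheel tagged `manylinux_2_17` simply will not serve an Alpine/musl box, which is exactly the dev/prod mismatch described above.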
We'd love to make Nimbus a flexible enough platform that engineering teams can configure the Nimbus dev environment as close as possible to the production environment. A few opinionated thoughts here:
- It makes sense for the dev environment setup to follow the prod environment setup as much as possible (so that you can trust that code working in the dev environment also works in prod, for example). In the other direction, though, the prod environment should focus on solving its own problems (scalability, stability, etc.), so it might not be great practice to set up the prod environment following the dev environment;
- Automatically importing the prod environment into Nimbus is definitely the next level of thinking, once engineers can at least manually specify which packages should be included in the dev environment (I love this one);
> Otherwise my prod packages and your packages will always be out of sync
From the dev-environment perspective, we don't intend to force everyone to use "our packages" (in fact, our built-in packages should be common ones with the fewest surprises). In the future, we want a way for users to define their own packages on Nimbus.
> Everyone here has been burnt by different versions of operating system libraries, so stuff doesn't work properly
I'm not sure which case applies: everyone on your team has a consistent dev environment that just differs from prod, or everyone's dev environments are inconsistent with each other as well as with prod.
If it's the latter, that's also a problem we're trying to solve. Eventually, in a team setting, the dev environment should be set up just once, and everyone else can simply spin up a workspace and write code without worrying about version differences.
Nish here. Let me address the runtime and pricing questions - my cofounder will touch on the others.
Teams have told us that they want to avoid 24/7 machines. It gets really expensive (and wastes energy) when people keep creating instances and leaving them on. Our approach is to let people keep instances on 24/7 if they want, but to make stopping them easy too (we have automation for this).
That said, the way we've set up our environments, the machine doesn't "turn off" - we just stop it. So it's like closing your MacBook lid and reopening it: you don't lose your progress, and you're not charged for the time in between.
And Replit's Hacker plan is cheap, but its repls aren't very powerful (just 2 vCPU and 2 GB RAM).
> And Replit's Hacker plan is cheap, but its repls aren't very powerful (just 2 vCPU and 2 GB RAM)
With all due respect, your equivalent plans are far more expensive, because the equivalent on Nimbus is 30 hours per week. On Replit you can actually host a website; the repl is "always on".
I think we probably have to agree to disagree on wasting energy here. The magic of cloud environments is always-on environments where my scratch API is also running for my other developer to ping. Replit is pretty good for this; so is SageMaker. But I do respect that your target market is a bit different.
> The magic of cloud environments is always-on environments where my scratch API is also running for my other developer to ping.
The magic of cloud environments is efficiency and economies of scale. Scale-to-zero services are perhaps some of the most popular of the cloud era, especially for students or side projects, which seem to be the inspiration for Nimbus.
I'm not defending any particular pricing, but I think the model is useful.
Yeah - the targets are different. That's probably why comparing the pricing seems odd too. Even our lowest tier has more dedicated memory (so it's more expensive for us too). But don't get me wrong - I think Replit is really cool, just not what we're going for :)
Every virtualisation product for the last 20 years has supported saving the machine state to a file and restoring it at some point later (maybe years later).
I do this a lot on my home machine; with an NVMe SSD it's under a second to save or restore the entire machine state.
It's a shame the big cloud providers' (Azure, GCP, AWS) virtualisation offerings are so... crappy compared to VMware Workstation/ESXi/Xen/KVM/... from 20 years ago.