As long as this works for the author, good for them. What I found out looking at post after post, tool after tool, press release after press release, is that these tools optimize for the workflow of n=1 person working on a toy project with no stakes.
There is a disconnect:
- Most non-trivial machine learning projects I've seen involve more than one person and have stakes.
- Most people do not work on non-trivial machine learning projects with stakes.
The map becomes the territory: many people then develop "ML project lifecycle management" or improve the "notebook experience" with better stylesheets for that kind of experience, i.e., the experience of one person working on a toy project for a YouTube video or a Medium blog post on "production machine learning" from someone who's never done it before.
I'm not pissing on those who produce this kind of content; they're likely doing it for feedback. I'm selfish in that it's part of my job to stay up to date, but the low signal-to-noise ratio gives the feeling of being rickrolled with every piece of content about machine learning.
Agreed, but I highly recommend checking out nbdev.
It's used to build all of fastai's libraries, and with nbdev, the notebooks ARE the docs and tests.
I use pytest and a standard IDE for most projects, but have used nbdev/jupyter on a few work projects and been astounded at the productivity boost.
Git issues with notebooks, producing pip modules, two-way syncing between notebooks and code: if there is a problem that naked Jupyter has for development, Jeremy Howard and team have solved it with nbdev.
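To give a flavor of what that looks like (a rough sketch using nbdev 2's directive syntax; the module name is made up, not one of fastai's notebooks): cells tagged for export become the library, and plain cells with asserts double as docs and tests.

```python
# Cell 1: tell nbdev which module this notebook generates (hypothetical module name)
#| default_exp metrics

# Cell 2: exported into the generated package by `nbdev_export`
#| export
import numpy as np

def rmse(y_true, y_pred):
    "Root mean squared error."
    return float(np.sqrt(np.mean((np.asarray(y_true) - np.asarray(y_pred)) ** 2)))

# Cell 3: a plain cell with asserts serves as both documentation and a test,
# and gets executed across all notebooks by `nbdev_test`
assert rmse([0, 0], [1, 1]) == 1.0
```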
We're experimenting with `nbdev`[0], especially in our effort to support fast.ai's[1] latest course 'fastbook'[2] on iko.ai[3], to test their notebooks faster. Scheduling notebooks on our platform is already a breeze[4][5], and we could launch the 20 notebooks quickly even manually and check their output while running, but some fail that way because they require user interaction (a FileUpload widget or something), so we decided to use fixtures.
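Outside our platform, a minimal way to reproduce that "launch the notebooks and see which ones fail" check is to execute them headlessly with nbclient (a sketch, not how iko.ai actually schedules them; the directory is a placeholder):

```python
# Sketch: run a batch of notebooks headlessly and report which ones fail,
# e.g. because a cell expects interaction with a FileUpload widget.
from pathlib import Path

import nbformat
from nbclient import NotebookClient
from nbclient.exceptions import CellExecutionError

for path in sorted(Path("fastbook").glob("*.ipynb")):  # placeholder directory
    nb = nbformat.read(str(path), as_version=4)
    client = NotebookClient(nb, timeout=600, kernel_name="python3")
    try:
        client.execute()
        print(f"OK    {path.name}")
    except CellExecutionError as e:
        print(f"FAIL  {path.name}: {e}")
```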
Your platform looks interesting. Definitely going to check it out. I've recently been working on a project for my employer to turn jupyter notebooks into deploy anywhere, run anywhere functions.
Thanks. Not interesting enough for you to immediately sign up, so let's figure out where we screwed up...
>I've recently been working on a project for my employer to turn jupyter notebooks into deploy anywhere, run anywhere functions.
What's the "job to be done"? What will this solve for you, and what's your current workflow? If it's similar to our needs, you're welcome to become our cherished customers. We'll socially distantly throw flowers and rice and hand sanitizer.
For us, for many years, there were always "chasms" between domains and universes in our team. Data scientists lived in notebooks, but they struggled to set up their environments: libraries' versions, dependencies, NVIDIA drivers, etc. They'd lose time setting up their laptops and workstations, and then be afraid to change anything. It also led to "it works on my machine" when they wanted to try a notebook from another team member, who also happened to have a different environment.
We then had a beast of a workstation for heavier compute jobs. It was tragicomic to coordinate. People had to be on-premises. They also had to assign different ports to run their Jupyter notebooks (before JupyterHub, etc.). They had to coordinate: I'm running a job, can you not launch your training until I'm done?
Notebooks flying around, then committed to a repository, but some data scientists were not familiar with git.
Ad-hoc experiment tracking when people remembered. Either in physical notebooks, logs, spreadsheets, random text files and notes, "memory", or not at all because they forgot.
Deployment was manual. Build a quick Flask application, get the weights, put them somewhere, spin up a VM on GCP, scp the weights, etc, etc. Data scientists had to tap someone on the shoulder to deploy their model.
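For illustration, the hand-rolled serving layer was typically little more than this kind of Flask app (a sketch; the model path and endpoint are made up):

```python
# Sketch of the ad-hoc serving described above: weights scp'd onto a VM by hand,
# loaded at startup, exposed behind a single hand-written endpoint.
import joblib
from flask import Flask, jsonify, request

app = Flask(__name__)
model = joblib.load("/opt/models/model.joblib")  # hypothetical path the weights were copied to

@app.route("/predict", methods=["POST"])
def predict():
    features = request.get_json()["features"]
    prediction = model.predict([features])[0]
    return jsonify({"prediction": float(prediction)})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```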
In other words, pretty much every mistake and bad habit.
There was then a change in the company, and we started this to be able to execute our projects in a more systematic way. The way I used to describe it is: "I want models to become espresso cartridges plugged into the machine".
Around 2019, we started to push for more remote and more asynchronous work. It bothered us that people suffered through commutes (sometimes up to six hours daily). It was nonsense. We built whatever allowed that. We had members with bad internet connections, so we worked on caching and compressing static files, reconnecting Jupyter servers seamlessly, scheduling long-running notebooks and visualizing their output regardless of the frontend-kernel connection, etc.
We wanted to be able to manage data, collaborate on notebooks, run training, track experiments, deploy and then monitor models. And we went at it chunk by chunk. We took a look at several other products and platforms, but they were either too restrictive or handled only one piece of the cycle (despite claiming complete project lifecycle management), and one of the problems is precisely that "fragmentation"; or they were internal platforms at companies that relied heavily on ML (FB Learner Flow, Uber Michelangelo, Airbnb Bighead), which we couldn't access.
There were also products that solved what were, in our opinion, the wrong problems. The reason, I think, is that those who start them come into ML from a web dev background and think the ecosystem is what it is because CI/CD and DevOps were lost on the "ML people", and if only they could see the light. That, or their hypothesis is that better stylesheets will solve the cluster-mess, when in fact they'll just make a beautiful mess.
Generally speaking, the "tests" were writing the code in the notebook and looking at the results. At least most of the stuff I use notebooks for is either one-off or difficult to test. One-off stuff might be some exploratory analysis or glue code for simple tasks like loading and formatting data. Numerical tasks like ML are frequently very difficult to test because the author doesn't know exactly what the output should be. If you don't know the answer, it's hard to test for it.
For these reasons, formal tests rarely make sense in notebooks. If the code ends up being used repeatedly, then clearly the notebook should be refactored into scripts and packages that have tests for the glue code and maybe some attempt at bounding the behavior of the numerical code.
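For the numerical part, "bounding the behavior" usually ends up looking something like this (a sketch; `normalize` stands in for whatever hypothetical function got refactored out of the notebook):

```python
# Sketch: "bounding" tests for refactored notebook code when exact outputs aren't known.
import numpy as np
from mypackage.preprocessing import normalize  # hypothetical refactored module

def test_normalize_shape_and_moments():
    x = np.random.default_rng(0).normal(loc=3.0, scale=2.0, size=(100, 4))
    z = normalize(x)
    # structural check on the glue code
    assert z.shape == x.shape
    # bound the numerical behavior instead of asserting exact values
    assert np.allclose(z.mean(axis=0), 0.0, atol=1e-8)
    assert np.allclose(z.std(axis=0), 1.0, atol=1e-6)
```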
If you're into machine learning and work with notebooks, take a look at the machine learning platform[0] we're making. We've been profitably shipping complete ML products for enterprise clients for many years now. Throughout these years, we've discovered patterns and inefficiencies that slowed us down, threatened the success of our projects, and overall made them more expensive than they ought to be.
It has collaborative notebooks, long-running notebook scheduling that survives network disruptions and closed tabs (you can view them while running without opening Jupyter), automatic experiment tracking for metrics, parameters, and models, and seamless deployment into a REST API. It also enables you to publish a notebook into a parametrized AppBook to allow domain experts to interact with it without being overwhelmed by the notebook interface, and to change parameters without mutating the notebook. Their runs are also tracked.
We're focused on solving actual problems we have faced on paid projects, as opposed to the infinity of features one can build to solve imaginary problems.
One minor critique: the vim stuff looks like a red herring given the topic. I'm personally not particularly interested in vim but am interested in jupyter notebook-alternative setups - I'd either remove it or explain why it's relevant here or contextualise it in some other way.
Good point, I should have explained it. Getting into vim takes time, and building muscle memory takes at least 9-12 months. That's something a lot of coders don't even do, so why should a data scientist? The payoff is that you're so much faster, because most of the time you don't write code but stare at your code, navigate around, and make small changes. So, I think if you deal with code 80% of your day, vim bindings are a must, and this was my biggest gripe with these notebooks. Google Colab is the only notebook offering painless vim bindings. No other notebook has proper vim support, because data scientists are not coders and care more about math.