Same experience. My workflow is to run the container with a podman run command, check that it runs correctly, use podlet to create a base container file, edit that container file (notably moving volumes and networks into other quadlet files), and done (theoretically).
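Roughly, the flow looks like this (names and flags are purely illustrative; podlet's exact invocation may differ):

    # run and sanity-check the container
    podman run -d --name myapp -v myapp-data:/data docker.io/library/nginx:latest
    # generate a base quadlet file from the same command
    podlet podman run -d --name myapp -v myapp-data:/data docker.io/library/nginx:latest \
        > ~/.config/containers/systemd/myapp.container
    # edit myapp.container (Volume=, Network=, referencing other quadlet files), then
    systemctl --user daemon-reload && systemctl --user start myapp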
I believe the podman-compose project is still actively maintained and could be a nice alternative to docker-compose. But podman's interface with systemd is so enjoyable.
I don't know if podman-compose is actively developed, but it is unfortunately not a good alternative to docker-compose. It doesn't handle the full feature set of the compose spec and it tends to catch you by surprise sometimes. But the good news is that the new docker-compose (V2) can talk to podman just fine.
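If I remember correctly, it's roughly a matter of exposing podman's Docker-compatible socket and pointing compose at it (rootless example):

    systemctl --user enable --now podman.socket
    export DOCKER_HOST=unix://$XDG_RUNTIME_DIR/podman/podman.sock
    docker compose up -d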
I appreciate the author's motivation, but the examples in this blog post do not convince me at all. For every situation there is an alternative (with __init__ or not) that addresses the design issues. The first example is solved with namedtuple[1], which is not a recent feature. The rest of the article is about IO, yet the special methods of the context management interface and the stdlib context management tools (which do a great job with async, by the way) are not mentioned.
__init__ in Python already has a lot of alternatives. IMO, a situation like the one in the example arises when you don't take advantage of those alternatives or when your code design is bad.
Also, specifically for this case, if I needed an object for this IO operation, I __would not__ use a dataclass. My first choice would be a context manager[2] plus a NamedTuple (or a dataclass if some mutation is needed).
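Something along these lines (illustrative names, just a sketch of the context manager + NamedTuple combination):

    import os
    from contextlib import contextmanager
    from typing import Iterator, NamedTuple

    class FileHandle(NamedTuple):   # immutable state, no hand-written __init__
        path: str
        fd: int

    @contextmanager
    def open_handle(path: str) -> Iterator[FileHandle]:
        fd = os.open(path, os.O_RDONLY)   # the IO lives here, not in a constructor
        try:
            yield FileHandle(path, fd)
        finally:
            os.close(fd)

    with open_handle("/etc/hostname") as h:
        print(h.fd)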
If that doesn't fit, a classmethod plus the __enter__/__exit__ dunders works well when this fd handling is important to the design of my object. Alternatively, I would just write a builder function outside the object if the fd capability should be considered an extra rather than built into the object.
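i.e. roughly (again with made-up names, as a sketch):

    import os

    class FdResource:
        def __init__(self, fd: int) -> None:
            self.fd = fd                                # trivial __init__, no IO

        @classmethod
        def open(cls, path: str) -> "FdResource":
            return cls(os.open(path, os.O_RDONLY))      # the IO lives in a named constructor

        def __enter__(self) -> "FdResource":
            return self

        def __exit__(self, exc_type, exc, tb) -> None:
            os.close(self.fd)

    with FdResource.open("/etc/hostname") as r:
        print(r.fd)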
The dataclass decorator could be added to save one line of code, but it would confuse the intent of the object, which is a context manager with some state management.
A side, genuine question: why not a type[3] statement to alias int as a fd? What is the difference with using NewType[3]? In which situations is creating a NewType from a base type - here int - useful?
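i.e., roughly the difference between these two forms (my understanding, sketched with made-up names):

    from typing import NewType

    type FdAlias = int         # 3.12+ alias: interchangeable with int for the type checker
    Fd = NewType("Fd", int)    # distinct type: a plain int is rejected where Fd is expected

    def close_fd(fd: Fd) -> None: ...

    close_fd(3)        # flagged by the type checker
    close_fd(Fd(3))    # ok; Fd(3) is still just an int at runtime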
Eh, ML/scientific Python is large and not homogeneous. For code that should run on a cluster, I would lean towards a Docker/container solution. For simpler dependency use cases, the pyenv/venv duo is fine. For some specific libs that have a conda package, it might be better to use conda, _might be_.
One illustration is the CUDA toolkit with a torch install on conda. If you need a basic setup, it will work (and take ages). But if you need some other specific tools in the suite, or need it to be more lightweight for whatever reason, then good luck.
By the way, I do not see much appeal in uv. pyenv/pip/venv/hatch are simple enough for me; no need for another layer of abstraction between my machine and my env. I will still keep an eye on uv.
I always enjoyed the "one-stop" solution with conda/mamba that installed the right version of cudatoolkit along with pytorch. How do you handle that without conda? (I'm genuinely asking because I never had to actually care about it.) If I install it manually, it looks like it is going to be a mess if I have to juggle multiple versions.
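Is it just a matter of picking the wheel index that matches the CUDA version, something like (example version number, check pytorch.org for the current ones):

    pip install torch --index-url https://download.pytorch.org/whl/cu121

and relying on the wheel to bundle the CUDA runtime libraries, so only the driver has to be recent enough?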
I do not know the background of many commenters here; they might have much more experience than me with tensors. But in my deep learning code and work, when I need to design an operation that involves as few as 3 tensors with 4+ dimensions, I always struggle. I need to sketch some slices to understand which axes should be contracted, etc. Many times the shape of the output is not even clear in my mind. Add some padding masks on the tensors and it confuses me quite a lot.
I really like this notation; the last example of 1.1 is readable in its sum formulation, but the diagram formulation is much more "alive" in my mind.
So I am really lost here; maybe I have missed something about index notation for tensors, or some visualization techniques. Or maybe how confusing a tensor operation is depends on the field? Or maybe I just lack practice and experience with index notation...
Standard tensor notation in pytorch and other libraries is very indirect, and often shapes are only documented in comments.
I definitely find it helps to draw parts of your architecture as a tensor diagram. Or perhaps use a library like tensorgrad, which makes everything explicit.
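For example, even plain torch.einsum (not tensorgrad, but the same spirit of naming indices) makes the contraction explicit where chained matmuls would not, with toy shapes:

    import torch

    # b: batch, h: heads, q: query length, k: key length, d: feature dim
    Q = torch.randn(2, 4, 5, 8)
    K = torch.randn(2, 4, 7, 8)
    scores = torch.einsum("bhqd,bhkd->bhqk", Q, K)  # which axis contracts is spelled out
    print(scores.shape)  # torch.Size([2, 4, 5, 7])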
I would be curious to know how you managed to do this. I really tried, but the sheer number of dev tools I use was too much for transitioning to neovim for my daily work. Namely, I need a DAP and multiple dev tools (LSPs, linters, formatters) because I work on several projects which do not share the same tooling[1]. Luckily, I do not mix multiple programming languages. Plus, I containerize all my dev envs. There might be some elements missing, but the point is that the number of tools is overwhelming, and it makes me think I would have to do the whole configuration/setup in my free time.
Did you face similar issues? If yes, how did you solve them? Or maybe your work does not need that many tools? Or have you been more minimalistic than me about the number of features to include in the neovim configuration?
[1]: I work in R&D; I need to tweak and contribute to many papers' code and different toolboxes/frameworks on top of the team's projects.
I don't remember which year exactly I made the transition, but around 2014-2016, I think. At the time I was working on a PHP Symfony application (and its frontend made with Backbone.js) powering Typeform, and I think this was right about when Docker entered the scene; we were still using Vagrant with what I think was NFS syncing or something else dog-slow. But both Docker and Vagrant work fine with vim; as long as you have a generic VM/container setup, it shouldn't matter what editor you use, in my mind.
But before that I was using Sublime Text 2, with minimal plugins/extensions, so moving to vim was mostly getting used to moving around and manipulating text, using some very basic text-based autocomplete, before eventually migrating to a "proper" setup years later. Since then, I honestly haven't touched my config much, so I'm sure there are smoother/better ways now.
Since then, I've used (neo)vim to write JavaScript (+HTML+CSS), Ruby, Go, Python, Rust, various other languages, but mostly Clojure/Script. When trying out a new language, I find some (neo)vim plugin that seems suitable and try it out. If it works well, great, otherwise try another one.
LSP and formatter were really fast to set up. I used kickstart.nvim to get started, and LSP and formatter come mostly configured there.
DAP is trickier to set up but doable. How often are you really debugging, though? In the beginning, just run both neovim and your IDE and switch when you need to debug.
Back when LSP wasn’t a thing I still used vim but would just switch to an IDE when I needed to go code exploring and needed to be able to jump to definition and stuff like that. Wasn’t a big deal and was worth it to use both tools because vim is such a superior method for editing text.
Thank you for your answer. Some code bases have chaotic execution paths through a monolithic code base (by design), so for these code bases I heavily rely on the debugger. But I like your suggestion to use both of them; I think it's a good way to transition slowly and efficiently.
Great benchmark, very interesting. Although I am not sure about the extrapolation of the H200 from the Lambda benchmark. From my understanding, Lambda's benchmark and theirs used different models - Llama 405B and Mistral 123B - with different benchmarks and inference libraries. Since the study is focused on memory-hungry scenarios, I am really curious to know why they used the H100 instead of the H200.
Yes, it's a different model + backend, and obviously the extrapolation will never be as good as experimental values.
But:
1. We only used the multiplier value 3.4, not the exact throughput from Lambda's experiment.
2. We also used the same input/output sequence lengths as Lambda's experiment.
3. Also, our extrapolated value is in line with the specs of the H200 when compared to the MI300X.
An inspiring project. I am looking forward to seeing some gloves connected to a VR device. I think that some cheap sensors, a bit of Bayesian modelling, and a calibration step could offer proper realtime hand gesture tracking.* I am already picturing being able to type on an AR keyboard. If the gloves are more expensive, there might be some haptic feedback. VR devices might have more open OSes in the future, or could use a "streaming" platform to access remote desktop environments. I am eager to see all the incoming use cases!
*: a lot of it, that is. Plus, the tracking might be task-centered. I would not bet on general hand gesture tracking with cheap sensors and Bayesian modelling alone.
Tap (tapwithus.com) had an IMU-based solution early in the current VR hype cycle, using an IMU for each finger and some kind of chord-based letter typing system. Wearing them at VR meetups back then was a fancy proof of your geekiness.
I think they have a camera-based wristband version now.
Still doesn't have any room positioning info though, AFAIK.
Arguably, it's off-topic, though I agree with the point. Lebanon has been struck by poverty, and as a result, they might have far fewer choices when it comes to providers in general. Manufacturing within Lebanon or trading with neighboring countries might not be affordable for them.
It's important to take a step back before generalizing an economic or political statement that may not apply in other contexts. There is little chance that the supply chain in Lebanon is in the same state as those of European countries, for instance. Thus, this is not just another example.
Just because one option is not affordable doesn't mean its affordable alternative is viable, especially when information asymmetries caused by foreign manufacturing can obscure plastic explosives in the devices, or whatever triggered these.
It is the same attitude. "Outsourcing is the only way we can be competitive" / "Buying these cheap pagers is the only way we can afford it"