I think IQ is useful in aggregate (for example, a finding that exposure to local toxins reduces a city's performance on IQ tests by 10 points), but not useful at an individual level (e.g. you have an IQ of 130, so we can say with certainty you will earn $30,000 more per year). It's similar with MRI scans of ADHD: they find brain differences at a large scale, but you can't use an MRI to diagnose ADHD.
When I was learning programming, my coding class used a Bukkit plugin that connected to Python. I can't remember what it was called, but that was for Minecraft 1.7.10.
Not sure if you were wanting Python specifically, but KubeJS lets you use JavaScript for mods. I think there's also a Clojure integration.
I think these are just general pitfalls that happen when you port something.
Also, thinking everything is safe from a security perspective because you don't use the keyword `unsafe` seems kind of naive to me. For one, safety and security are two separate issues. It also assumes you didn't really understand why Rust has the keyword in the first place and what it's used for.
Obviously Rust can't prevent all logic errors, and it doesn't promise to. If you adhere to idiomatic Rust (e.g. using sum types to make impossible states unrepresentable) you'll probably prevent quite a few. But if you port line by line, the result won't be idiomatic.
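A contrived sketch of the difference (all the names here are made up for the example):

```rust
#![allow(dead_code)]
use std::net::TcpStream;

// Idiomatic version: the sum type makes the invalid combination
// ("connected" with no socket, or a socket while disconnected) unrepresentable.
enum Connection {
    Disconnected,
    Connected { socket: TcpStream },
}

// What a line-by-line port of a C struct tends to look like: the invariant
// "socket is Some iff connected is true" only holds by convention.
struct PortedConnection {
    connected: bool,
    socket: Option<TcpStream>,
}

fn describe(conn: &Connection) -> &'static str {
    // The compiler forces every state to be handled, and there's no
    // "connected but socket is None" case to forget about.
    match conn {
        Connection::Disconnected => "disconnected",
        Connection::Connected { .. } => "connected",
    }
}

fn main() {
    println!("{}", describe(&Connection::Disconnected));
}
```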
Rust does offer a lot of error checking at compile time that you only get at runtime in C, and only with a combination of ASan, LSan, UBSan, TSan, and MSan. But that doesn't mean you can stop thinking entirely.
Oh for sure! It does take understanding the idioms of the language you're porting to, and Rust only guarantees memory safety when (IIRC) 1. you're in an interrupt-free, linear-memory environment, and 2. all unsafe code maintains the invariants (aliasing XOR mutability, etc.). There are some others, but I can't think of them off the top of my head.
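A tiny sketch of what I mean by the aliasing-XOR-mutability part, and how the responsibility shifts inside `unsafe`:

```rust
fn main() {
    let mut v = vec![1, 2, 3];

    // In safe Rust the compiler enforces "aliasing XOR mutability":
    let shared = &v[0];
    // v.push(4); // <- would not compile while `shared` is still in use
    println!("first element: {shared}");

    // In `unsafe` code that invariant becomes your obligation. This compiles
    // and happens to be sound, but only because no shared reference to `v`
    // is alive while we mutate through the raw pointer.
    let p: *mut Vec<i32> = &mut v;
    unsafe {
        (*p).push(4);
    }
    println!("{v:?}");
}
```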
But I wouldn't say this article is only about porting in general. It's very specific in using example drivers written for Linux (which is very different from application C) and showing pitfalls _specifically_ when going to Rust. For example, one of the later parts talks about Rust mutexes, which are RAII. If you take the lock twice to read the same value, you get a double fetch that can cause a race condition. That's not a general rule of thumb for code porting.
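Roughly the shape I'm thinking of; this is a made-up example, not the article's actual driver code:

```rust
use std::sync::Mutex;

// Hypothetical device state; `len` is shared with other threads.
struct Device {
    len: Mutex<usize>,
}

fn read_from_device(dev: &Device, buf: &mut Vec<u8>) {
    // First fetch: the RAII guard returned by `.lock()` is dropped at the end
    // of this expression, so the mutex is already released again here.
    let n = *dev.len.lock().unwrap();
    buf.reserve(n);

    // Second fetch: another thread may have changed `len` in between, so
    // `n` and `m` can disagree. That's the double fetch / TOCTOU race.
    let m = *dev.len.lock().unwrap();
    buf.resize(m, 0);
}

fn main() {
    let dev = Device { len: Mutex::new(16) };
    let mut buf = Vec::new();
    read_from_device(&dev, &mut buf);
    println!("read {} bytes", buf.len());
}
```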
From what I've seen, when code is generated from formal specs it ends up being inflexible. However, do you think it would be valuable to be able to verify an implementation against a formal spec?
People do that, and find very tricky bugs. One person did it by translating C code line by line into TLA+, another by representing the state machine in Coq and checking predicates validating it in the source. But I don't think a visual representation of the state machine would have diagnosed the bugs the formal checkers did.
I just realized my previous comment left out what I was trying to say—my bad! I think what I was trying to ask was: would it be possible to generate a formal specification from a graphical representation, and then use that specification to verify the source?
Also thank you for those links! I'll definitely give them a read.
I'm far from an expert in formal verification; I probably should be doing more of it than I do. From my two links, the way I've seen formal verification work is either to translate the code line by line, in an automated or manual way, into a formal language and then check for possible orderings that violate properties you care about; or to define a formal model of the state machine and insert validation of all the transitions in the code.
If you were going to do formal verification from the graphical representation, it would be on the algorithm: namely, does it always converge, does it ever deadlock, does it ever fail mutual exclusion? If the goal is for a computer to analyze it, it can be precisely as complex as the source code, so yes. But at that point it's not useful visually for a human to read.
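To make the second approach above (model the state machine, validate every transition) a little more concrete, here's a rough sketch in Rust rather than TLA+/Coq; all the names are invented for the example:

```rust
// Which states the (hypothetical) lock protocol can be in.
#[derive(Clone, Copy, Debug)]
enum LockState {
    Idle,
    Waiting,
    Holding,
}

// The "model": which transitions are legal at all.
fn transition_allowed(from: LockState, to: LockState) -> bool {
    use LockState::*;
    matches!(
        (from, to),
        (Idle, Waiting) | (Waiting, Holding) | (Waiting, Idle) | (Holding, Idle)
    )
}

struct CheckedLock {
    state: LockState,
}

impl CheckedLock {
    // Validation inserted at every point in the source that changes state.
    fn transition(&mut self, to: LockState) {
        assert!(
            transition_allowed(self.state, to),
            "illegal transition {:?} -> {:?}",
            self.state,
            to
        );
        self.state = to;
    }
}

fn main() {
    let mut lock = CheckedLock { state: LockState::Idle };
    lock.transition(LockState::Waiting);
    lock.transition(LockState::Holding);
    lock.transition(LockState::Idle);
    println!("all transitions validated");
}
```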
I have mixed feelings on cpal: on the one hand, it's been really wonderful to have a library that just works on different platforms. On the other hand, it's an absolute pain in the butt for doing anything simple. I really wish it had a simple interface for when I only care about floating-point data (I ended up creating my own library to wrap cpal's idiosyncrasies for my mixed MIDI/audio node program: https://github.com/smj-edison/clocked).
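For context, this is roughly the ceremony just to output f32 silence (recalled from memory against cpal ~0.15, so the exact API details may have drifted):

```rust
use cpal::traits::{DeviceTrait, HostTrait, StreamTrait};

fn main() {
    let host = cpal::default_host();
    let device = host.default_output_device().expect("no output device available");
    let config = device.default_output_config().expect("no default output config");

    // Assuming the device's default format really is f32; a robust program has
    // to match on config.sample_format() and handle i16/u16/etc. as well,
    // which is where most of the boilerplate comes from.
    let stream = device
        .build_output_stream(
            &config.config(),
            move |data: &mut [f32], _info: &cpal::OutputCallbackInfo| {
                // Fill the output buffer with silence.
                for sample in data.iter_mut() {
                    *sample = 0.0;
                }
            },
            move |err| eprintln!("stream error: {err}"),
            None, // no timeout
        )
        .expect("failed to build output stream");

    stream.play().expect("failed to start stream");
    std::thread::sleep(std::time::Duration::from_secs(1));
}
```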
This seems like a great blend of LLMs and classic analysis. I've recently started thinking that LLMs (or other ML models) would be fantastic as the interface between humans and computers. LLMs get human nuance/satire/idioms in a way that other NLP approaches have really struggled with. This piece highlights how ML is great at extracting information in context (being able to tell whether it's Go the language or "go" the word).
LLMs aren't reliable for actual number crunching though (at least from what I've seen). Which is why I really appreciate seeing this blend!
Yeah, I guess it is pretty mainstream :) Though another view I keep hearing is that LLMs are going to replace all jobs in 5 years, which quite frankly is disconnected from reality. Even assuming we could create an ML model that could replace a human (which I think the current paradigm is insufficient for, after this[0] discussion), there's still the matter of building data centers and manufacturing chips, and it would need to be cheaper than paying a human.
I personally would like AI to help humans be better humans, not try to replace them. Instead of using AI to create less understanding with black-box processes, I'd rather have it help us with introspection and search (I think embeddings[1] are still a pretty killer feature that I haven't heard much noise about yet).
Stupid question: could you literally just put a grid of coolant tubes through a cube processor? Think the shape of control rods in a nuclear reactor. Power supply is also tricky with a cube chip, but could you electrify the coolant flowing through the tubes? Half of the tubes positive, half negative. That way the tubes through the cube double as thermal and electrical conductors.
EDIT: stupid idea #2: what if you also used Peltier cooling to route heat out of hot spots?
You could. Tighter cooling integration for denser ICs is an area of active research but is something that needs to be economical at scale to matter. If a rack full of flat chips does more work per dollar than a complicated-to-manufacture 3d-stacked coolant-permeable IC, there's not a very strong argument for building them.
Peltiers are inefficient as all hell and not likely to be part of such a tightly integrated solution.
You mentioned you're passionate about good tooling; would you mind if I asked what your thoughts are on Glamorous Toolkit[1] and how it compares to other tooling you've seen?
There are so many wonderful and exciting things going on in this space right now. It's hard to keep up with them. They all take hundreds of hours to get halfway decent at.
Really, FWIW, the things I need are:
1. Introspection tools to find out what's going on without having to rewrite the build system.
2. Comprehension and navigation tools (like Copilot) that I can interface with and that give insight into where things might be happening
3. Timeline tools like rr
4. Advanced diagnostic tools like eBPF
And that's really it. Emacs and VS Code are the things I cycle between. I still think these are all open problems.
If you haven't already heard of it, you might like Marginalia. It's a search engine / website finder / set of experiments. I've found it really useful for finding small blogs and interesting perspectives!