> Atrophy. I've already noticed that I am slowly starting to atrophy my ability to write code manually...
> Largely due to all the little, mostly syntactic details involved in programming, you can review code just fine even if you struggle to write it.
Until you struggle to review it as well. A simple exercise to prove it: ask an LLM to write a function in a programming language you know well, but in an area you haven't invested in learning and coding yourself. Try reviewing code involving embeddings/SIMD/FPGA without learning the domain first.
I think understanding stellar processes and then using that understanding to theorize about other observations is a skill. My point was that observing can be a fantastic way to build a skill... not all skills, but certainly some skills. Learning itself is as much observation as practice.
I tried vibe-coding a few years back and switched to "manual" mode when I realized I didn't fully understand the code. Yes, I read each line of code and understood it; I understood the concepts and abstractions. But I didn't understand all the nuances, even those stated at the top of the documentation of the libraries the LLM used.
I tried a minimal example a few years back where it totally failed, and still today, ChatGPT 5 produced two examples for "Async counter in Rust": one using atomics and another using tokio::sync::Mutex. Back then I learned the hard way that it was wrong, by trying to profile high latency. To my surprise, here's a quote from the tokio::sync::Mutex documentation:
> Contrary to popular belief, it is ok and often preferred to use the ordinary Mutex from the standard library in asynchronous code.
> The feature that the async mutex offers over the blocking mutex is the ability to keep it locked across an .await point.
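To make that concrete, here's a minimal sketch (assuming a tokio dependency with the rt-multi-thread and macros features) of an async counter that follows the docs' advice: use the ordinary std::sync::Mutex and just never hold the guard across an .await:

    use std::sync::{Arc, Mutex};

    #[tokio::main]
    async fn main() {
        // Shared counter behind the ordinary std Mutex. This is fine in
        // async code because the guard is always dropped before any .await.
        let counter = Arc::new(Mutex::new(0u64));

        let mut handles = Vec::new();
        for _ in 0..8 {
            let counter = Arc::clone(&counter);
            handles.push(tokio::spawn(async move {
                for _ in 0..1_000 {
                    // Lock, increment, and release within one statement;
                    // the guard never lives across an .await point.
                    *counter.lock().unwrap() += 1;
                }
            }));
        }
        for h in handles {
            h.await.unwrap();
        }
        println!("count = {}", *counter.lock().unwrap());
    }

And for a plain counter, the atomics example needs no lock at all (AtomicU64::fetch_add); the async mutex only earns its cost when you genuinely need to hold a lock across an .await.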
9-year-old Honda, and yes, lane keep assist feels more like ruts in the road: it will steer when the car drifts close to the lane line, but not always, and it won't align the car with the lane.
Ownership problems with pointers/references don't end with allocation.
A codebase can use only std::make_unique() to allocate heap memory and still pass around raw pointers to that memory (via std::unique_ptr::get()).
The real problem is a data model that relies on manual lifetime synchronization, e.g. passing a raw pointer from my unique_ptr to another thread, because this thread joins that thread before exiting and destroying the unique_ptr.
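A minimal sketch of that failure mode (with hypothetical names): the raw pointer from unique_ptr::get() stays valid only because the owning thread joins the worker before the unique_ptr dies, and nothing in the type system enforces that ordering.

    #include <cstdio>
    #include <memory>
    #include <thread>

    struct Config { int threshold = 42; };   // hypothetical payload

    // The worker only borrows the Config; it does not own it.
    void worker(const Config* cfg) {
        std::printf("threshold = %d\n", cfg->threshold);
    }

    int main() {
        auto cfg = std::make_unique<Config>();   // sole owner
        std::thread t(worker, cfg.get());        // raw pointer escapes

        // The pointer stays valid only because this join happens before
        // `cfg` is destroyed. Nothing enforces that ordering; reorder or
        // forget the join and the worker may read freed memory.
        t.join();
        return 0;
    }   // cfg destroyed here, after the worker has finished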
I don't disagree. That's why I don't write C++ anymore. It's a masochistic language, and doing it in a team environment is a mess even with people who do understand how to use it properly, let alone when you add people who don't.
In the 00s, Rational Rose UML was a mandatory course in my university undergrad program.
At that time I had a chat with a small-startup CEO who was sure he'd soon fire all those pesky programmers who think they're "smart" because they can code. He pointed me to code generated by Rational Rose from his diagram and said that only the method bodies still had to be implemented, that this too would be automated soon, and that the hardest part was modeling the system.
For example "Bath Royale Slow Close Toilet Seat" is between $60 and $70.
I stayed in an Airbnb in 2021 with soft-close seats, and I don't know why, but to everyone in my family it felt like a luxury/comfort item. Knowing that there won't be a bang, especially at night, especially with small kids, makes a difference. I'd never have thought it would if I hadn't stayed at that Airbnb.
It's a miracle that in a world where everything becomes a service, proprietary and cloud-based, you can download the collective human knowledge (some argue it's biased, not truth-based, and consensus-run; yes, but I think it's one of the best outcomes for socially constructed knowledge).
When you train your neural network to minimise cross-entropy, that's literally the same as making it a better building block in an arithmetic-coding data compressor. See https://en.wikipedia.org/wiki/Arithmetic_coding
Indeed, the KL divergence can be seen as the difference in the average number of bits required to arithmetically encode a sample from a given distribution when using symbol probabilities from an approximating distribution versus from the original distribution itself.
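In symbols, with p the data distribution and q the model's distribution:

    D_{KL}(p \| q) = H(p, q) - H(p)
                   = \sum_x p(x) \log_2 \frac{1}{q(x)} - \sum_x p(x) \log_2 \frac{1}{p(x)}

Minimising the cross-entropy H(p, q) is exactly minimising the expected code length of an arithmetic coder driven by q; the KL term is the per-symbol overhead, in bits, that you pay for coding with q instead of the true p.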
This is a dilemma for me that gets more and more critical as I finalize my thesis. My default mental model was to open-source the code for the sake of contributing back to the community, refining my ideas, and discussing them with whoever finds them interesting.
To my surprise, my doctoral advisor told me to keep the code closed. She said that not only will LLMs ingest it and benefit from it, but there's also a risk of my code becoming a target after it's stolen by companies with fat attorney budgets, and there's no way I could defend it or prove anything.