CS fundamentals is about framing an information problem to be solvable.
That'll always be useful.
What's less useful, and what's changed in my own behavior, is that I no longer read tool-specific books. I used to devour books from Manning, O'Reilly, etc. I haven't read a single one since LLMs took off.
The point of the argument is that meaning emerges in conversation. A session between human and AI is a conversation.
Current AI storage paradigms offer lateral memory across the time axis. What exists around me?
A git branch is longitudinal memory across the time axis. What exists behind me?
Persist type-checked decision trees within it. Your git history just became a tamper-proof, reproducible O(1) decision tree. Execution becomes a tree walk.
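One way to read this (my interpretation, not anything the comment spells out): each decision is a record whose id hashes its parent, git-style, so history is tamper-evident and replayable. A minimal sketch of that idea, with the record fields and `commit` helper invented for illustration:

```python
import hashlib
import json

def commit(parent_hash, decision):
    """Create a git-style node: its id hashes its parent id plus its
    payload, so editing any ancestor changes every descendant's id."""
    payload = json.dumps({"parent": parent_hash, "decision": decision},
                         sort_keys=True)
    return hashlib.sha1(payload.encode()).hexdigest()

# A tiny two-node decision chain.
root = commit(None, {"question": "cache results?", "answer": "yes"})
leaf = commit(root, {"question": "eviction policy?", "answer": "lru"})

# Rewriting the root decision yields a different root id, which would
# invalidate `leaf` -- the tamper-evidence the comment alludes to.
tampered_root = commit(None, {"question": "cache results?", "answer": "no"})
assert tampered_root != root
```

Whether execution is really O(1) depends on how you index the tree; a plain replay is O(depth).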
Same here. I'm really curious about this too. What do they mean by "wonderful"? I suppose some of the pieces here might not be very well-known or popular, but they are inspiring, or maybe they are a good resource for learning something. All I can see is they are maybe associated with a read-later app?
> Non-offending pedophiles should be more widely accepted by society. It’s unfair to ostracize someone for a desire they were born with, and integrating them into society makes them less likely to cause harm.
There's no evidence that anyone is born with particular sexual deviations. It attempts to simultaneously absolve and normalize attitudes that ideate rape of children, so long as they don't act on it. That's a pretty thin and permeable line to draw.
It depends on how you test it. I recently found that the way devs test it differs radically from how users actually use it. When we first built our RAG, it showed promising results (around 90% recall on large knowledge bases). However, when the first actual users tried it, it could barely answer anything (closer to 30%). It turned out we relied on exact keywords too much when testing it: we knew the test knowledge base, so we formulated our questions in a way that helped the RAG find what we expected it to find. Real users don't know the exact terminology used in the articles. We had to rethink the whole thing. Lexical search is certainly not enough. Sure, you can run an agent on top of it, but that blows up latency - users aren't happy when they have to wait more than a couple of seconds.
This is the gap that kills most AI features. Devs test with queries they already know the answer to. Users come in with vague questions using completely different words. I learned to test by asking my kids to use my app - they phrase things in ways I would never predict.
Ironically, pitting an LLM (ideally a completely different model) against what you're testing and letting it write out-of-the-ordinary, human-style queries to use as test cases tends to work well too, if you don't have kids you can use as a free workforce :)
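A cheap way to operationalize that: keep a set of (paraphrased query, expected doc) pairs and measure recall@k against your search function. A sketch, with the paraphrases hard-coded where a second LLM would normally generate them (the doc ids and toy search function are made up for illustration):

```python
def recall_at_k(search_fn, test_cases, k=3):
    """Fraction of cases where the expected doc id appears in the
    top-k results for its query."""
    hits = sum(1 for query, expected in test_cases
               if expected in search_fn(query)[:k])
    return hits / len(test_cases)

def keyword_search(query):
    """Toy lexical search: only matches the article's exact wording."""
    return ["doc_delete_profile"] if "profile" in query.lower() else []

# In practice these paraphrases would come from prompting a different
# model to rewrite your dev queries the way a real user would.
user_style_cases = [
    ("how do I delete my profile", "doc_delete_profile"),
    ("close my account",           "doc_delete_profile"),
]
# The second query misses: same intent, different words -- exactly
# the dev/user gap described above.
```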
It solves some types of issues lexical search never will. For example, a user searches "Close account", but the article is titled "Deleting Your Profile".
But lexical solves issues semantic never will. Searching an invoice DB for "Initech" with semantic search is near useless.
Pick a system that can do both, including a hybrid mode, then evaluate if the complexity is worth it for you.
Depends on how important keyword matching vs something more ambiguous is to your app. In Wanderfugl there’s a bunch of queries where semantic search can find an important chunk that lacks a high bm25 score. The good news is you can get all the benefits of bm25 and semantic with a hybrid ranking. The answer isn’t one or the other.
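One common hybrid ranking is reciprocal rank fusion (RRF), which combines the two ranked lists without needing to calibrate BM25 scores against embedding similarities. A sketch, with the doc ids invented for illustration:

```python
def rrf(rankings, k=60):
    """Reciprocal rank fusion: each doc scores sum(1 / (k + rank))
    across the input rankings. k=60 is the common default from the
    original RRF paper."""
    scores = {}
    for ranking in rankings:
        for rank, doc in enumerate(ranking, start=1):
            scores[doc] = scores.get(doc, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

lexical  = ["invoice_initech", "invoice_acme"]    # exact keyword hits
semantic = ["delete_profile", "invoice_initech"]  # embedding neighbours
fused = rrf([lexical, semantic])
# "invoice_initech" ranks first: it appears in both lists.
```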
Can you have a coding philosophy that ignores the time or cost taken to design and write code? Or a coding philosophy that doesn't factor in uncertainty and change?
If you're risking money and time, can you really justify this?
- 'writing code that works in all situations'
- 'commitment to zero technical debt'
- 'design for performance early'
As a whole, this is not just idealist, it's privileged.
It will save you time and cost in design, even in the relatively near term of a few months when you have to add new features, etc.
There are obviously extremes of "get something out the door fast and broken, then maybe neaten it up later" vs. "refactor the entire codebase any time you think something could be better", but I've seen more projects hit a wall from leaning too far toward the first than the second.
Either way, I definitely wouldn't call it "privileged" as if it isn't a practical engineering choice. That just frames things in a way where you're already assuming early design and a commitment to refactoring are a bad idea.
Your argument hinges on getting the design right, upfront.
That assumes uncertainty is low or non-existent.
Time spent, monetary cost, and uncertainty are all practical concerns.
An engineering problem where you can ignore time spent, monetary cost, and uncertainty, is a privileged position. A very small number of engineering problems can have an engineering philosophy that makes no mention of these factors.
It’s the equivalent of someone running on a platform where there would be world peace and no hunger.
That’s great and all as an ideal, but it’s realistically impossible, so if you don’t have anything more substantial to offer, you aren’t really worth taking seriously.
You forgot “get it right the first time”, which goes against the basic startup mode of being early to the market or dying.
For some companies, trying to get it right the first time may make sense but that can easily lead to never shipping anything.
So: The author wants to work for a company with resources.
Unfortunately, details take time and time takes money.
For a business's survival, the company's relative positioning in the market, access to sales and marketing channels, and financing are much stronger concerns.
I think it's a trick. It seems to me the article is just a series of ad-hoc assumptions and hypotheses without any support. The language aims to hide this and makes you think about the language instead of the contents. Which is logically unsound: in a sharp peak, micro-optimizations would give you a clearer signal of where the optimum lies, since the gradient is steeper.
> In a sharp peak, micro optimizations would give you a clearer signal where the optimum lies since the gradient is steeper.
I would refuse to even engage with the piece on this level, since it lends credibility to the idea that the creative process is even remotely related to or analogous to gradient descent.
I wouldn't jump to calling it a trick, but I agree: the author sacrificed too much clarity in an attempt at efficiency.
The author set up an interesting analogy but failed to explore where it breaks down or how all the relationships work in the model.
My inference about the author's meaning was this: near a sharp peak, searching for useful moves is harder because you have fewer acceptable options as you approach the peak.
Fewer in absolute or relative terms? If you scale down your search space... this only makes some kind of sense if your step size is fixed. While I agree with another poster that reducing a creative process to gradient descent is unwise, the article also misses the point about what makes such a gradient descent hard -- it's not sharp peaks, it's the flat area around them -- and the presence of local minima.
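The flat-area point can be checked numerically: at the same small distance from the optimum, a sharp peak still produces a large gradient while a flat-topped function produces almost none. A toy demonstration (the two functions are my own choices, not from the article):

```python
def numeric_grad(f, x, h=1e-5):
    """Central-difference estimate of f'(x)."""
    return (f(x + h) - f(x - h)) / (2 * h)

sharp = lambda x: -10 * abs(x)  # sharp peak at x = 0
flat  = lambda x: -x ** 4       # flat plateau around its peak at x = 0

# Near the optimum the sharp peak still gives a strong signal,
# while the flat top gives almost none -- which is what stalls ascent.
g_sharp = abs(numeric_grad(sharp, 0.01))  # ~10
g_flat  = abs(numeric_grad(flat, 0.01))   # ~4e-6
```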
I see your point. I'd meant relatively fewer progressive options compared to an absolute and unchanging number of total options.
But that's not what the author's analogy would imply.
Still, I think you're saying the author is deducing the creative process as a kind of gradient descent, whereas my reading was the author was trying to abductively explore an analogy.
True, but my point is that not only does the analogy not work, the author also doesn't understand the thing he makes the analogy with, or at least explores the thought so shoddily that it makes no sense.
It's somewhat like saying cars are faster than motorbikes because they have more wheels -- as with horses and humans: horses have four legs, and because of that they're faster than two-legged humans. It's wrong on both sides of the analogy.
I enjoy maths and CS and I could barely understand a word of it. It seems to me rather to have been written to give the impression of being inappropriate for many, as a stand-in for actually expressing anything with any intellectual weight.