> Yeah, it can look a bit repetitive if the code is already clear, but the context of why a thing is being done is still valuable. In the modern era with LLM tools, I'm sure it could be even more powerful.
Is that because of literate programming, or is that because practicing literate programming made you focus more on writing high quality code and docs?
I'd argue it's the same thing. When doing literate programming, I started by first writing a description of what I was going to do and why. Then I wrote the implementation. When I finished, I went back and updated the description to match what I'd done. Maybe I'd get the idea to improve the approach and repeat this for a few cycles.
But the specifics of the flow aside, it's the mindset difference that makes it all feel special. The docs are the primary artifact. The code is secondary.
In an era of Copilot-style inline suggestions, taking the time to write a lengthy description effectively feeds the prompt to get a better output.
I tend to write doc-comments before the functions they document, because it helps me think more clearly about what I want to happen - and sometimes causes me to entirely re-think my approach and abandon the function entirely.
I can definitely see such a practice improving LLM output.
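To make that concrete, here's roughly what doc-comment-first looks like in practice (a completely made-up example; the names and behavior are invented for illustration):

```python
def dedupe_events(events, window_seconds=60):
    """Collapse duplicate sensor events that arrive within window_seconds of each other.

    Written before the body: upstream sensors sometimes retry and send the same
    reading twice, which double-counts alerts downstream. Keep only the first
    occurrence of each (sensor_id, value) pair inside the window.
    """
    seen = {}   # (sensor_id, value) -> timestamp of first occurrence
    kept = []
    for event in events:
        key = (event["sensor_id"], event["value"])
        first = seen.get(key)
        if first is None or event["timestamp"] - first > window_seconds:
            seen[key] = event["timestamp"]
            kept.append(event)
    return kept
```

Writing the docstring first forces you to state the "why" before the "how", and that same text doubles as prompt context for an inline suggester.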
Meanwhile, there are programmers that think comments are a "code smell".
"Better" is always "for what metric" but if nothing else having the source code to the stack is always "better" IMHO even if one doesn't choose to self-host, and that goes double for SigNoz choosing a permissive license, so one doesn't have to get lawyers involved to run it
---
While digging into Honeycomb's open source story, I did find these two awesome toys, one relevant to the otel discussion and one just neato
https://github.com/honeycombio/refinery (Apache 2) -- Refinery is a tail-based sampling proxy and operates at the level of an entire trace. Refinery examines whole traces and intelligently applies sampling decisions to each trace. These decisions determine whether to keep or drop the trace data in the sampled data forwarded to Honeycomb.
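For intuition, tail-based sampling boils down to buffering spans until a trace is complete and then deciding on the whole trace at once. A toy sketch of the idea (my own illustration, not Refinery's actual code or configuration):

```python
import random
from collections import defaultdict

# Hypothetical illustration only; thresholds and field names are invented.
pending = defaultdict(list)          # trace_id -> buffered spans

def on_span(span):
    pending[span["trace_id"]].append(span)

def on_trace_complete(trace_id, keep_rate=0.10):
    spans = pending.pop(trace_id)
    interesting = any(s.get("status") == "error" for s in spans) or \
                  any(s.get("duration_ms", 0) > 1000 for s in spans)
    # Keep every interesting trace; sample the healthy, fast ones down.
    if interesting or random.random() < keep_rate:
        forward(spans)               # ship the whole trace, not individual spans

def forward(spans):
    ...                              # stand-in for sending data downstream
```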
People also like to forget that from the dawn of modern computing and AI research like 60 years ago all the way to 7 years ago, the best models in the world could barely form a few coherent sentences. If LLMs are this century's transistor, we are barely beyond the point of building-sized computers that are trying to find normal life applications.
That's often potential customers. It's common to have other HW companies invest in HW start-ups. Unfortunately there are not many good VCs for HW development. Even the ones marketing themselves as such don't like the meager returns over 5 to 10 years.
Might be a ridiculous question (I'm a software guy), but is it at all possible to go the other way and increase the velocity of shipping and iterating on hardware to make it fit into the standard VC timelines?
Very valid question. The whole industry is trying to do it, mainly with the hope of increasing profit margins to SW levels. But there are fundamental issues that are in the way (this is valid for ASIC design):
- We only use models to simulate the chip, and this is at best partial. Verification coverage of the code is one thing. There are thousands of effects, from the power supply network to thermals, reliability (device aging, electromigration, etc.), and a bunch of analog stuff with weird failure modes that are almost impossible to fully cover before shipping. So we never actually know what will fail before tapeout.
- Tapeout cost is immense. A full mask set for an advanced node easily costs $10 million or more. You can always go to an MPW, but those are rare for advanced nodes (1-2 runs per year), putting immense pressure on schedules.
- Chip production takes time. For old nodes it's ~3 months; for advanced nodes it's getting close to ~5 months.
- Package design, test PCB design, and their production take a lot of time and money too. Typically the package costs as much as the silicon to produce if the design is heavily IO limited and uses an advanced packaging solution.
- Lab test preparation and the tests themselves take time. Typically you would need months of testing to get a meaningful picture of the issues. You would need to go through temperature and voltage cycles, on/off cycles, etc. This of course depends on the end application; automotive and data centers are quite demanding.
- There is a lot of competition for pretty much the same product, and there is a lot of vendor lock-in as the customers don't ever want to redesign their system.
So at the end, if you are designing a complex ASIC, you will spend a lot of money and time per tapeout cycle. If you have a big issue, your customer will go to the next guy. You've lost them forever (or for this product cycle of 4-5 years if you are lucky, but that's a death sentence for a start-up). Now you are tens of millions in the negative without your main customer. Again, if you are lucky, you can either find another customer or repurpose your design. This is increasingly difficult as complex chips often aim at a narrow market. This makes everyone very risk averse, including your customers.
For less complex chips in old process technology nodes, things can be sped up, and already are being sped up, by a lot of IP reuse or by buying ready, silicon-proven IPs. The problem there is that time to market isn't the determining factor anymore, since anyone can make a functional chip relatively quickly; what matters is who can do it cheapest. There's a reason why most audio codecs and 1Gbit Ethernet PHYs in PCs are Realtek. This type of product isn't attractive for start-ups.
What often makes a good beginning for a HW start-up is a happy middle ground between these: a niche application that resonates well with the experience and talent of the engineering team. Even with the best team, you need a minimum of 2 years to show something, though.
Depends on what hardware, who is developing it, and for whom. A proof of concept is almost always cheap enough to make. The beauty of HW is that you get PMF straight away: even if you have the worst, completely broken product, if it brings value to someone, they will pay for it. From there you can bootstrap, take on credit, etc. The capital-intensive part can wait: refinement, certifications, patents, packaging, documentation, mass production, etc. This is of course all under the assumption that the core team knows how to build everything; if you're outsourcing in the prototype phase, then you're probably toast anyway.
> So the right kind of investor could add significant value if they were aligned.
I always perceived the value of investors to be everything but the money. If you need just the money, then get a loan.
Anycast is a very different beast, though. Anycast is just unicast but you announce the same IP space from multiple destinations, and the network figures out how to get a packet to the closest one. If one of those destinations fails, it just goes to the next closest one.
Unicasts, multicasts, and broadcasts all actually work differently underneath and require specific handling by network equipment. Anycast is just a special case of unicast and generally speaking network equipment is completely unaware of it.
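A toy model of that failover behavior (purely illustrative; real routing is BGP path selection across the network, not a Python dict):

```python
# Toy model: the same prefix is announced from several sites; each packet is
# routed to the topologically closest healthy site, so a failure just shifts
# traffic to the next closest one.
sites = {
    "fra": {"distance": 3, "healthy": True},
    "iad": {"distance": 5, "healthy": True},
    "sin": {"distance": 9, "healthy": True},
}

def route_packet():
    healthy = {name: s for name, s in sites.items() if s["healthy"]}
    return min(healthy, key=lambda name: healthy[name]["distance"])

print(route_packet())            # fra
sites["fra"]["healthy"] = False
print(route_packet())            # iad: the next closest site takes over
```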
Because the work to prevent y2k started long before y2k and was all about centralized systems. This is about millions or even billions of embedded systems all over the world. No one is going to send someone to replace the control units in your car.
If you’re hiring humans just to use AI, why even hire humans? Either AI will replace them or employers will realize that they prefer employees who can think. In either case, being a human who specializes in regurgitating AI output seems like a dead end.
“Prompt Engineer” as a serious job title is very strange to me. I don’t have an explanation as to why it would be a learnable skill—there’s a little, but not a lot of insight into why an LLM does what it does.
> there’s a little, but not a lot of insight into why an LLM does what it does.
That's a "black box" problem, and I think they are some of the most interesting problems the world has.
Outside of technology- the most interesting jobs in the world operate on a "black box". Salespeople and psychologists are trying to work on the human mind. Politicians and market makers are trying to predict the behavior of large populations. Doctors are operating on the human body.
Technology has been getting more complicated- and I think that distributed systems and high level frameworks are starting to resemble a "black box" problem. LLMs even more so!
I agree that "prompt engineer" is a silly job title- but not because it's not a learnable skill. It's just not accurate to call yourself an engineer when consuming an LLM.
It's an experience thing. It's not about knowing what LLMs/diffusion models specifically do, but rather about knowing the pitfalls that the models you use have.
It's a bit like an audio engineer setting up your compressors and other filters. It's not difficult to fiddle with the settings, but knowing what numbers to input is not trivial.
I think it's a kind of skill that we don't really know how to measure yet.
When an audio engineer tweaks the pass band of a filter, there's a direct causal relationship between inputs and outputs. I can imagine an audio engineer learning what different filters and effects sound like. Almost all of them are linear systems, so composing effects is easy to understand.
None of this is true of an LLM. I believe there’s a little skill involved, but it’s nothing like tuning the pass band of a filter. LLMs are chaotic systems (they kinda have to be to mimic humans); that’s one of their benefits, but it’s also one of their curses.
Now, what a human can definitely do is convince themselves that they can somewhat control the outputs of a chaotic system. Rain prognostication is perhaps a better model of the prompt engineer than the audio mixer.
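To make the contrast concrete: linear, time-invariant audio effects compose predictably, so reordering a chain doesn't change the output and every knob has a stable, explainable effect. A rough numpy sketch (assuming simple FIR filters as the effects):

```python
import numpy as np

signal = np.random.randn(1000)
lowpass = np.ones(5) / 5            # crude moving-average FIR filter
highpass = np.array([1.0, -1.0])    # crude first-difference filter

a = np.convolve(np.convolve(signal, lowpass), highpass)
b = np.convolve(np.convolve(signal, highpass), lowpass)
print(np.allclose(a, b))            # True: the order of LTI effects doesn't matter
```

There is no analogous composition law for prompts; swapping one word can swing the whole output.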
And how to verify the output and think through it. I hear time after time that someone asked the AI something, it came up with an answer, and then, when corrected, it apologized and admitted it was wrong...
But how do you correct it if you do not know what is right or wrong...