I think there is a difference between type-system or language-server completions and AI-generated completions.
When the AI tab completion fills in full functions based on the function definition you have half-typed, or completes a full test case the moment you start typing - mock data values and all - that just feels mind-reading magical.
Aren't the insufficiencies of the LLMs a temporary condition?
And as with any automation, there will be a select few who understand its inner workings, and a vast majority that will enjoy/suffer the benefits.
Do tools like Cursor get a special pass? Or do they do some magic?
I'm always amazed at how well they deal with diffs.
especially when the janky response is clearly just a "... + a change",
and Cursor maps it back to a proper diff.
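Cursor's actual reconstruction logic isn't public, but one naive way to expand a "lazy" edit (where the model elides unchanged code behind a placeholder line) is to anchor on the lines the model did repeat verbatim and copy the elided spans from the original file. A minimal sketch, with a hypothetical marker string and no fuzzy matching:

```python
# Hypothetical placeholder the model emits for elided code; real tools
# likely use fuzzier matching and model-specific markers.
MARKER = "# ... existing code ..."

def expand_lazy(original: str, lazy: str) -> str:
    """Expand MARKER lines in a lazy edit by copying the elided span
    from `original`, anchoring on lines repeated verbatim."""
    orig = original.splitlines()
    lines = lazy.splitlines()
    out: list[str] = []
    pos = 0  # index of the next unconsumed line of the original
    for i, line in enumerate(lines):
        if line.strip() == MARKER:
            # Find the next line the model wrote out in full...
            nxt = next((l for l in lines[i + 1:] if l.strip() != MARKER), None)
            if nxt is not None and nxt in orig[pos:]:
                # ...and copy the original up to (not including) that anchor.
                j = orig.index(nxt, pos)
                out.extend(orig[pos:j])
                pos = j
            else:
                # Trailing placeholder: keep the rest of the file.
                out.extend(orig[pos:])
                pos = len(orig)
        else:
            out.append(line)
            if line in orig[pos:]:
                # A verbatim-repeated line: advance past it in the original.
                pos = orig.index(line, pos) + 1
    return "\n".join(out)
```

This only works when the model repeats enough surrounding lines exactly; production tools presumably fall back to diff-style alignment when the anchors don't match.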
I know very little in this field, but does this mean that the LED color is serving as debug/log messages for the training process? It sure seems so to my naive reading, and seems so clever.
We use different LED lights to indicate transitions between dynamic modes
in the automata. Similar to segmentation techniques in computer vision,
the learned hybrid modes can help us analyze motion patterns more
systematically, improve interpretability in decision-making, and refine
control strategies for enhanced adaptability.
Interesting take. Completely agree that a product requirements document is a good mental model for system description. However, aren't bug-reports+PRs approximating a chat-interface?
I recently completed a take-home assignment with the following instructions:
<instructions>
This project is designed to evaluate your ability to:
- Deconstruct complex problems into actionable steps.
- Quickly explore and adopt new frameworks.
- Implement a small but impactful proof of concept (PoC).
- Demonstrate coding craftsmanship through clean, well-architected code.
We estimate this project will take approximately 5–7 hours. If you find that it requires more time, let us know so we can adjust the scope.
Feel free to use any tools, libraries, frameworks, or LLMs during this exercise. Also, you’re welcome to reach out to us at any time with questions or for clarification.
</instructions>
I used LLM-as-a-junior-dev to generate 95+% of the code and documentation.
I'm just an average programmer, but tried to set a bar that if I was on the other
side of the table, I'd hire anyone who demonstrated the quality of output submitted.
- The 5-7 hour estimate was exceeded (however, I was the first one through this exercise).
- IMHO the quality of the submission could NOT have been met in less time.
- They had 3 tasks/projects:
- a data science project,
- a CLI based project and
- a web app
- They wanted each to be done in a different language.
- I submitted my solution within 38 hours of receiving the assignment.
- In any other world, the intensity of this exercise would cause a panic-attack/burn-out.
- I slept well (2 nights of sleep), took care of family responsibilities and felt good enough to attack the next work-day.
I've been on both sides of the table of many interviews.
This was by far the most fun and one to replicate every chance I get.
5-7 hour interview take-homes are already a nightmare. LLM assistance or not, I would absolutely not bother with such an assignment unless I was far into the process. Meanwhile, I'm given such tasks half the time before I speak to any human.
This was the final technical screen so definitely something worth doing in my case.
The reason I posted a reply was there is a lot of negativity around AI in the hiring process. This was an excellent example of using AI to the benefit of all parties.
Instead of nit-picking on stylistic things in a smaller code sample,
one can nit-pick the implemented complexity. I think that is a higher-quality signal.