One area of the business I'm struggling with is how boring it is talking to an LLM. I enjoy standing at a whiteboard thinking through ideas, but more and more I see a push for "talk to the LLM, ask the LLM, the LLM will know." The LLM will know, but I'd rather talk to a human about it. Also, in pure business terms, it takes me too long to unlock nuances that an experienced human just knows; I have to do a lot of "yeah but" work, way more than I would with an experienced human. I like LLMs and I push for their use, but I'm starting to notice something here that I can't quite put my finger on. I guess they're not wide enough to capture deep nuances? As a result, they seem pretty bad at understanding how a human will react to their ideas in practice.
It's not quite the same, but ever since the dawn of smartphones I've hated it when you ask a question, as a discussion starter or to get people's views, and some jerk reads off the Wikipedia answer as if it's some insight I didn't know was available to me, and basically ruins the discussion.
I know talking to an LLM is not exactly parallel, but it's a similar idea: it's like talking to the guy with Wikipedia instead of batting ideas back and forth and actually thinking about stuff.
In my personal social circle, it's a faux pas to even take out your phone during a discussion without saying why. It's not explicitly a rule, it just kind of evolved that way. “Ok, I need to know, I’m gonna check wiki” is enough, and honestly it makes everything more engaging: “Oh! What does it say? How many times DID Cellini escape execution? Bring up the talk page, that should be fun!”
I think the faux pas is regurgitating the first hit you find on StackOverflow or Wikipedia without considering whether it actually adds something to the discussion.
I recently had a related issue where I was explaining an idea I'm working on, and one of my mates was engaging in creative thinking. The other found something he could do: look up a Chinese part to buy. He spent quite a few minutes on his phone and then exclaimed, "The hardware is done!" The problem is that what he found was incomplete and wrong.
So he missed out on the thing we should be doing when we're together: talking and brainstorming. And he didn't help with anything meaningful, because he didn't grasp the requirements.
Some of my colleagues will copy/paste several paragraphs of LLM output into ongoing Slack discussions. Totally interrupts the flow of ideas. Shits me to tears.
I know what you mean. Also, the more niche your topic the more outright wrong LLMs tend to be. But for white-boarding or brainstorming - they can actually be pretty good. Just make sure you’re talking to a “large” model - avoid the minis and even “Flash” models like the plague. They’ve only ever disappointed me.
Adding another bit - the multi-modality brings them a step closer to us. Go ahead and use the physical whiteboard, then take a picture of it.
Probably just a matter of time before someone hooks up Excalidraw/Miro/Freeform into an LLM (MCPs FTW).
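To make that concrete, here's a minimal sketch of the photo-of-a-whiteboard workflow, assuming the OpenAI Python SDK and a vision-capable model name like gpt-4o (the model, prompt, and file name are illustrative; swap in whatever multimodal client you actually use):

    # Sketch only: send a photo of a physical whiteboard to a multimodal model
    # and ask it to transcribe and critique what's on it.
    import base64
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    with open("whiteboard.jpg", "rb") as f:  # hypothetical file name
        image_b64 = base64.b64encode(f.read()).decode("utf-8")

    response = client.chat.completions.create(
        model="gpt-4o",  # any vision-capable model
        messages=[{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "Transcribe this whiteboard, then point out gaps or contradictions in the plan."},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"}},
            ],
        }],
    )
    print(response.choices[0].message.content)

Presumably an Excalidraw/Miro/Freeform hookup via MCP would just expose the board contents to the model in a similar way, minus the camera.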
My experience has been similar. I can't escape the feeling that these LLMs are weighted down by their training data. Everything they produce seems generically intelligent at best.
The LLM will know. One day they will form a collective intelligence and all the LLMs will know and use it against you. At least with a human, you can avoid that one person...
But who has truly stress-tested their teammates to see what level of question bombardment they're willing to take? You really haven't, since they can't generate tokens at the rate and consistency of an LLM.
I wasn't suggesting we literally bombard teammates.
The whole point is that with LLMs, you can explore ideas as deeply as you want without tiring them out or burning social capital. You're conflating this with poor judgment about what to ask humans and when.
Taking 'bombard' literally is itself pretty asinine when the real point is about using AI to get thoroughly informed before human collaboration.
And if using AI to explore questions deeply is a sign you're 'not cut out for the work,' then you're essentially requiring omniscience, because no one knows everything about every domain, especially as they constantly evolve.
Exactly. You're literally making my point while trying to argue against me.
Whether it's because humans can't handle the pace or because it would make you a jerk to try: either way, you just agreed that humans can't/shouldn't handle unlimited questioning. That's precisely why LLMs are valuable for deep exploratory thinking, so when we engage teammates, we're bringing higher-quality, focused questions instead of raw exploration.
And you're also missing that even IF someone were patient enough to take every question you brought them, they still couldn't keep up with the pace and consistency of an LLM. My original point was about what teammates are 'willing to take', which naturally includes both courtesy limits AND capability limits.
> That's precisely why LLMs are valuable for deep exploratory thinking, so when we engage teammates, we're bringing higher-quality, focused questions instead of raw exploration.
This isn't really new, though. We used to use search engines and language docs and Stack Overflow for this.
Before that, people used mailing lists and reference texts.
LLMs don't really get me to answers faster than Google plus SO did previously, imo.
And it still relies on some human having asked and answered the question before, so the LLM could be trained on it.
No offense, but if this is your response then I don't think you have the experience to qualify you for this discussion; you clearly don't have basic experience using LLMs.
To make my point, let me know when Stack Overflow has a post specifically about the nuances of your private codebase.
Or when Google can help you reason through why your specific API design choices might conflict with a new feature you're considering. Or when a mailing list can walk through the implications of refactoring your particular data model given your team's constraints and timeline.
LLMs aren't just faster search: they're interactive reasoning partners that can engage with your specific context, constraints, and mental models. They can help you think through problems that have never been asked before because they're unique to your situation. That's the 'deep exploratory thinking' I'm talking about.
The fact that you're comparing this to Stack Overflow tells me you're thinking about LLMs as glorified search engines rather than reasoning tools. Which explains why you think teammates can provide the same value: because you're not actually using the technology for what it's uniquely good at.
Thinking things through is desirable. But in many discussions both sides basically "vibe out" what they think the objective truth is. If it's a fact that can be looked up, just get out your phone and don't stall the discussion with guessing games.
Both indeed. I'm older, I do consulting, often for the new-school AI CEOs, and they keep thinking I'm nuts for saying we should bring in this person to talk to about this thing... I've tried to explain to a few folks now that a human would be much better in this loop, but I have no good way to prove it; it's just experience.
I've noticed, across the board, they also spend A LOT of time getting all the data into LLMs so they can talk to them instead of just reading reports. Like, bro, you don't understand churn fundamentally, why are you looking at these numbers??
I talked to a friend recently who is a plastic surgeon. He told me about a young, pretty girl who came in recently with super clear ideas about what she wanted fixed.
Turns out she uploaded her pictures to an LLM and it gave her recommendations.
My friend told her she didn’t need any treatment at this stage, but she kept insisting that the LLM had told her this and that.
I’m worried that these young folks just trust that whatever these things tell them to do is right.