I disagree with your framing of cynicism as an "agenda". For the record, I agree that the maker movement hasn't actually ended, and most of your points are correct; however, the idea of LLMs teaching Electronics worries me about as much as people using LLMs to learn Chemistry.
A little while ago I had to dissuade someone from learning Chemistry via an LLM, because the advice that they had been given by the LLM would have very literally either blown up the glassware, throwing molten chemicals all over their clothing, or killed them when they tried to taste whatever they were trying to synthesize. There was no consideration of safety protocol, PPE, proper glassware, or correctly dealing with chemical reactions, and nary a mention of a fucking fume hood. NileRed and a few other chemistry youtubers have utterly woeful approaches to laboratory safety (NileRed specifically I have a chip on my shoulder about — I've seen him practice bad lab work on a number of occasions and violate many of the common safety practices from e.g. Vogel's), but even then they do still take precautions! Let it not be forgotten that safety practices are born through bloodshed. Now we have a whole new wave of people who are excited to learn, and that's great, but one stray hallucination will kill them. I'm sure that the LLM will be more than happy to write an "Oh I'm sorry, it's my bad that I forgot to tell you to double glove when handling organic mercury!" but by then it is too late.
The idea of someone learning, say, House DIY from an LLM and then sawing through the joists or rewiring their electronics is utterly terrifying to me, quite frankly. Likewise, the idea of someone following an LLM's instructions and then blowing themselves up in a shower of capacitors or chemical glassware is also utterly terrifying to me.
Yes, you could do all these things before. But at least the most commonly available learning materials to you were trustworthy and written by experts!
I guess we have to agree to disagree, because I am not particularly interested in chemistry and ChatGPT has been extraordinarily helpful in demystifying electronics. Having 24/7 access to a patient person who can unpack the difference between TTL and CMOS logic or when you'd choose a buffer instead of a Schmitt trigger without belittling you for not already knowing what they know is awesome and not going to get anyone even slightly killed.
Electronics can kill too. IIRC capacitors in CRTs are particularly deadly. Though I suppose someone using LLMs only as a first step, much like Wikipedia, is probably at much less risk than someone using it as their only source.
Yeah, okay but... look, I concede that someone who shouldn't be doing anything except watching passive entertainment could absolutely take insane advice from an LLM (or a sociopathic human) and seriously hurt themselves.
But raw dogging capacitors in CRTs is such an overtly straw man argument in this conversation. People who are cleaning bathrooms for the first time can hopefully be trusted not to drink the bleach, right?
If someone licks a running table saw because an LLM said it would be fine, we're talking about entirely different problems.
Again: not doing anything at all with health or chemistry. They aren't what I am interested in, even peripherally.
What you seem to be missing is that LLMs are better at/for some things than others. Legal review, 3D geometry, therapy and apparently chemistry are off the list.
It doesn't make sense to project that onto domains where it excels.
> What you seem to be missing is that LLMs are better at/for some things than others.
I guarantee it is using the same system to write code and teach you about electronics that it is using to teach people about chemistry, and if you can't see how that means the resulting information is suspicious at best, then I don't even know what to say anymore.
My concern is that about half of folks are below average intelligence. And new generations will be exposed to AI from a young age, possibly lacking the rigor and experience of those who came before. I'm afraid they will trust the AI too much, and find themselves quickly in over their heads with no way back.
Perhaps I'm just pearl clutching. I guess time will tell.
Reasonable question and hopefully an interesting answer...
The simple lack of reasons to use TTL logic in 2026 was exactly why I didn't know what the deal was. It'd never come up, but I'd see it referenced.
I'm self-taught and in defiance of the people who insist that LLMs turn our brains to passive mush, the more things I learn the more things I have to be curious about.
LLMs remove the gatekeeping around asking "simple" questions that tend to make EEs roll their eyes. I didn't know, so I asked and now I know!
I'm actually pretty thrilled that you asked, because I think that this chat is an extremely solid example of LLM usage in the EE domain, and I'm happy to share.
I definitely led some questions to try and squeeze new-to-me perspectives out of it; for example, there could be tricks that make the active high variant more useful in some scenarios.
I think it does a good job of surfacing adjacent questions you might not realize you were eager to ask, as well as showing how it's able to critically evaluate real-world part suitability. I do find that ChatGPT in particular does better with a screengrab of the most likely parts vs a URL to the search engine.
I see the chat, but it looks like you’re not actually considering using TTL anywhere, and ChatGPT isn’t giving any explanations about TTL?
> I would definitely like to understand HCT vs HC (CMOS vs TTL) much better than I do, which currently isn't at all.
I think what ChatGPT should have explained at the beginning is that both HCT and HC are CMOS logic families; it's just that HCT is designed to interface with TTL (it accepts TTL signal levels at its inputs). The outputs are the same (CMOS outputs are rail to rail, which you can feed into TTL just fine).
Actual TTL logic, like the 7400 series and the variations (LS is one of the more popular variations), uses NPN transistors as inputs and to pull output signals low. It uses resistors to pull the signals high. The result is a lot of current consumption and asymmetrical output signals… maybe a good question to ask ChatGPT is “why does TTL use so much current?” CMOS, by comparison, uses a tiny amount of current except when it is switching.
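To make the threshold difference concrete, here's a quick back-of-the-envelope check in Python. The numbers are typical datasheet values I'm assuming for illustration, not guarantees; always confirm against the specific part's datasheet.

```python
# Rough input-threshold check for a few 5 V logic families.
# V_IH = minimum voltage the input reliably reads as HIGH.
VCC = 5.0
V_IH = {
    "TTL/LS": 2.0,
    "HCT":    2.0,          # TTL-compatible CMOS input
    "HC":     0.7 * VCC,    # plain CMOS input: 3.5 V at a 5 V supply
}

def drives_ok(v_out_high, family):
    """Can a driver whose HIGH output is v_out_high volts drive this input?"""
    return v_out_high >= V_IH[family]

# A 3.3 V CMOS output (e.g. a 3.3 V MCU pin) into 5 V-powered logic:
for family, vih in V_IH.items():
    status = "OK" if drives_ok(3.3, family) else "marginal/fails"
    print(f"3.3 V output -> {family:7s} (V_IH = {vih:.1f} V): {status}")
```

This is exactly why HCT parts keep showing up in mixed-voltage designs: a 3.3 V HIGH clears the 2.0 V HCT threshold easily, but falls short of the 3.5 V an HC input wants at a 5 V supply.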
I would probably choose AHC first as a logic family these days. It’s a slightly better version of HC, but it’s not so fast that it will cause problems.
Just peeking at one of the recommendations in the chat, if you search for 74HCT125 or 74AHC125 on Mouser, you’ll see that the AHC has more options available and more parts in stock. That’s a sign that it’s probably a more popular logic family than HCT, which is something I consider when buying (more popular = better availability).
Thanks so much for the additional context. You've given me more to dig into.
What I would like to know from you is:
1. On the whole, is the information you see it presenting more or less coherent and useful? Is it better to have this information than not have it at all?
2. Where does this land in terms of your expectations? Did anything surprise you?
It's clear from your reply that you know what you're talking about, while I'm still clawing my way up from nothing... so it makes sense that you have fewer things that you need to ask about.
I've bootstrapped my entire EE skillset over the past 2-3 years, largely with the help of LLMs to interrogate. It's helped me design and build my first product. I'm confident that without these tools it's not a question of how long it would have taken; the truth is that it would have died on the vine.
I asked it about the AHC family equivalent and it recommended against using it, suggesting either AHCT or sticking with HCT. For what it's worth, the reference board that I'm tracing uses an HCT, so the LLM isn't wrong.
Note that at the time I'm writing this, I have an extremely fuzzy understanding of the difference between these three... but I'm working through it.
I’m mostly just curious about how people use LLMs to learn. I don’t know what your goals are, and even if your goals were the same as mine, I don’t know how LLMs stack up against the way I learned (mostly from books). At least, not long-term. I’m not that good at electronics, I’m just a hobbyist that went through Forrest Mims mini-notebooks and later Horowitz and Hill.
What I like about information from humans is that humans are always trying to figure out how to say things that are relevant and informative. By “relevant”, I mean that we try to avoid saying things that don’t help you. By “informative”, I mean that we try to include information that you want to know, even if you didn’t specifically ask for it.
Picking on the chat for a moment—when you started out with the question, my first thought was, “This person is specifically asking about HC versus HCT, but maybe they want a broader overview of logic families, and maybe they want to understand which logic family to pick for their hobby project.” That’s an example where I think ChatGPT could have identified something that you wanted to know, but didn’t. (It wasn’t as informative as it could have been.)
Then there are the times that ChatGPT gave you information of dubious relevance.
> Important: On the HCT125, the enable is active-LOW.
I don’t think that’s contextually important. It’s like saying, “Important: On the Honda Civic, the gas tank is on the left.” That’s contextually important when you’re at the gas station, but not when you’re buying a car.
I’m not sure why the LLM is recommending the TTL-compatible chips. IMO, the right thing to do here is probably to run everything at 3.3V, unless you have something that specifically needs 5V. When everything is at 3.3V, you don’t have to think about level shifting and you can just pick a very boring logic family like AHC. But I don’t know what you’re building. Likewise, I would lean towards using normal CMOS logic levels, unless I had a specific reason to choose TTL-compatible. The regular CMOS versions have better noise margin, because the threshold is in the optimum place—right in the middle.
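As a rough illustration of that noise-margin point, here's a sketch using typical 5 V values (assumed for illustration; real numbers vary by part, supply, and load):

```python
# Noise margin: how much noise a signal can pick up before the
# receiver misreads it.
#   NM_H = V_OH(driver's min HIGH) - V_IH(receiver's min HIGH threshold)
#   NM_L = V_IL(receiver's max LOW threshold) - V_OL(driver's max LOW)
def noise_margins(v_oh, v_ol, v_ih, v_il):
    return v_oh - v_ih, v_il - v_ol

# Both receivers driven by a rail-to-rail CMOS output
# (assume V_OH ~ 4.4 V, V_OL ~ 0.1 V under load).
# HCT receiver: TTL-style thresholds (V_IH = 2.0 V, V_IL = 0.8 V)
hct_h, hct_l = noise_margins(4.4, 0.1, 2.0, 0.8)   # lopsided margins
# HC receiver: thresholds centered near VCC/2 (V_IH = 3.5 V, V_IL = 1.5 V)
hc_h, hc_l = noise_margins(4.4, 0.1, 3.5, 1.5)     # balanced margins

print(f"HCT: NM_H={hct_h:.1f} V, NM_L={hct_l:.1f} V, worst={min(hct_h, hct_l):.1f} V")
print(f"HC:  NM_H={hc_h:.1f} V, NM_L={hc_l:.1f} V, worst={min(hc_h, hc_l):.1f} V")
```

The centered thresholds trade away some HIGH-side margin for a better worst case, and the worst case is what actually bites you.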
I can actually clear a lot of that up. ChatGPT has accumulated a significant amount of ambient knowledge about what I'm working on and how I typically progress through asking questions, so the path isn't as blue sky as it appears.
For example, I'm working with a specialty SPCO switch IC that runs at 5V. There's never been and likely never will be a 3.3V version of the AS16M1. Being able to drive the switch (which functions like a shift register) from my ESP32 is top-of-mind.
The HCT125 being active low is directly responding to my question about why to choose it vs the 126 version; since the board I'm studying (which again, it has seen) uses 125s, it's reasonable to wonder why they'd choose one over the other.
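For anyone following along, the '125 vs '126 distinction really is just the enable polarity. A toy model of one channel (my own sketch for illustration, not anything lifted from a datasheet):

```python
# One channel of a 74x125-style tri-state buffer: the output follows
# the input only while the active-LOW enable (/OE) is asserted (low);
# otherwise the output goes high-impedance ("Z").
def buf_125(a, oe_n):
    return a if oe_n == 0 else "Z"

# The '126 is the same buffer with an active-HIGH enable instead.
def buf_126(a, oe):
    return a if oe == 1 else "Z"
```

Which one you pick mostly comes down to what polarity your enable signal already has, so it makes sense that the reference board's choice prompted the question.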
Overall, the tone of chats on EE topics tend to be task-focused with permission to go on interesting side quests. I'm trying to get stuff done with room for relevant exploration along the way.
I think you're in the minority of people that are using LLMs for one of the best uses - for augmenting your own understanding and intelligence. Of course you have to triangulate and triple check what they say, but that's a good habit to get into anyway. Many of my teachers would repeat tribal myths all the same.
I'm glad that you brought that up, because I actually hesitated over my response precisely because of those words. Specifically, I wondered if I could reliably count on someone showing up to say something patronizing and unnecessary.
This particular combination of snark, faux-concern and pedantry doesn't help the point you're trying to make about my loving AI wife.
It was not my intention to be patronising nor snarky, nor was I the least bit concerned for you (faux or otherwise). Though on a reread I do understand how my reply could be read as unkind. I regret that and apologise for it. It was not my intention but it was my mistake. I should've made it shorter:
> It’s not a person, it’s a tool. There’s no reason to anthropomorphise it.
Without wanting to be argumentative, I would push back and say that I really did stop to consider my implied assignment of personhood before committing to it. I went with it because it reflects both the role it plays - you'll be relieved that I stopped short of deploying "mentor" - and the fact that English is highly adaptable: the linguistic tug toward "they" already feels very comfortable in relation to LLMs. Buckle up!
> you'll be relieved that I stopped short of deploying "mentor"
Funnily enough, I think that might’ve been better. I don’t think a mentor has to necessarily be human; one can learn from nature or pets. Or even a machine: Stockfish can teach you to play better chess and give context as to why you fumbled and how to do better next time.
I just don’t think LLMs are people and that we should avoid anthropomorphising them (for a whole plethora of reasons which are another discussion). I’m not even saying I think there could never be a robot which is a person. Just not what we have now.
> The idea of someone learning, say, House DIY from an LLM and then sawing through the joists or rewiring their electronics is utterly terrifying to me, quite frankly.
Can't wait for the load-bearing drywall recommendations coming from LLMs that were trained on years of Groverhaus content.