
I’d need to see it.

I can't get ChatGPT to outperform a novice. And now I'm having candidates argue that they don't need to learn the fundamentals because LLMs can do it for them. Good luck, HTML/CSS expert who couldn't produce a valid HTML5 skeleton. Reminds me of the pre-LLM guy who said he was having trouble because he usually uses React. So I told him he could use React. I don't mean to rag on novices, but these guys really seemed to think the question was beneath them.

If you want to get back into front-end, read “CSS: The Definitive Guide”. Great book; it gives you a complete understanding of CSS by the end.




Requirements vary. It certainly can't produce really complex visual designs, or code a designer would be very happy with, but I have a hobby project (a work in progress) where GPT-4 has produced all of the CSS and templates. I have no doubt that the only reason that worked well is that it's a simple design of a type there are about a billion of in its training set, and that it'd fall apart quickly if I started deviating much from that. But it produced both clean CSS and something nicer looking than I suspect I would have managed myself.

A designer would probably still beat it - this doesn't compete with someone well paid to work on heavily custom designs. But at this point it does compete, for me, with places like Fiverr for things I can't or don't want to do myself. It'll take several iterations for it to eat its way up the value chain, but it probably will.

But also, I suspect a lot of people at the lower end of the value chain, or at least some of them, will pull themselves up and start to compete with the lower end of the middle by figuring out how to use LLMs to take on bigger, more complex projects.


This meshes pretty well with my experience.


I'm always asking it to stitch together ad hoc bash command lines for me, e.g. "find all the files called *.foo in directories called bar and search them for baz".

(`find / -type d -name 'bar' -exec find {} -type f -name '*.foo' \; | xargs grep 'baz'` apparently.)

I would have done that differently, but it's close enough for government work.


This is funny to me, because I would _always_ use -print0 and xargs -0, and for good reasons, I believe. But if you base your entire knowledge on what you find online, then yes, that's what you get - and what _most people will get too_. Also, I can still update that command if I want.

So it's not any worse than the good old "go to Stack Overflow" approach, but it still benefits from experience.

FYI, this is (as far as I can tell) the correct "good" solution:

  find . -type d -name 'bar' -print0 | \
    xargs -0 -I{} find {} -type f -name '*.foo' -print0 | \
    xargs -0 grep -r baz

This won't choke on a structure like this:

  $ ls -R
  .:
  bar  foo

  ./bar:
  test.foo  'test test.foo'

  ./foo:
  bar  bleb.foo

  ./foo/bar:


...actually

  find . -path '*/bar/*.foo' -print0 | xargs -0 grep baz

;-) No regex, no nested stuff, much shorter. My brain went back to it ;-)


Using better languages like PowerShell or Python becomes a lot more valuable here. I definitely think bash is going to be mostly useless in 5 years; you'll be able to generate legible code that does exactly what you want rather than having to do write-only stuff like that. Really, we're already there. I've long switched from bash to something else at the first sign of trouble, but LLMs make it so easy. Poorly written Python is better than well-written bash.

Of course, LLMs can generate go or rust or whatever so I suspect such languages will become a lot more useful for things that would call for a scripting language today.


> I definitely think bash is going to be mostly useless in 5 years

I'll take that bet


This is kind of beside my main point: while online knowledge is great, there are sometimes surprisingly deep gaps in it. So I can see AI trained on it sometimes struggling in surprising ways.


I would generalize even more and say that any scripting language is going to be deprecated very soon, like Python etc. They are going to be replaced by safe, type-checked, theorem-proved verbose code, like Rust or something similar.

What do I care how many lines of code are necessary to solve a problem, if all of them are gonna be written automatically? 1 line of Bash/awk versus 10 lines of Python versus 100 lines of Rust - are they any different from one another?


  $ find . -type f -regex '.*/bar/[^/]*\.foo' -exec grep baz {} +
I wonder, if you created a GPT and fed it the entirety of the Linux man pages (not that it probably didn't consume them already, but perhaps this would weight them higher), whether it would get better at this kind of thing. I've found GPT-4 is shockingly good at sed, and to some extent awk; I suspect it's because there are good examples of them on SO.
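
For example (my own go-to test, nothing rigorous): asking for "swap the first two comma-separated fields in every line" reliably comes back as something like

  sed -E 's/^([^,]*),([^,]*)/\2,\1/' file.csv

which is exactly the kind of incantation I'd otherwise have to stop and think about.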


If SO had known to block GPTBot before it was trained, GPT4 would be a lot less impressive.


Same here! That's the main use I have for ChatGPT in any practical sense today - generating Bash commands. I set about giving it prompts to do things that I've had to do in the past - it was great at it.

Find all processes named '*-fpm' and kill the ones that have been active for more than 60 seconds - then schedule this as a Cron job to run every 60 seconds. It not only made me a working script rather than a single command but it explained its work. I was truly impressed.
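
Reconstructing from memory (the exact flags and the script path here are illustrative, not its literal output), it was a sketch along these lines:

  #!/usr/bin/env bash
  # Kill any *-fpm process that has been running for more than 60 seconds.
  # etimes = elapsed time since the process started, in seconds.
  ps -eo pid=,etimes=,comm= | while read -r pid etime comm; do
    case "$comm" in
      *-fpm) [ "$etime" -gt 60 ] && kill "$pid" ;;
    esac
  done

plus a crontab entry to run it every minute (cron's finest granularity):

  * * * * * /usr/local/bin/kill-stale-fpm.sh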

Yes, it can generate some code wireframes that may be useful in a given project or feature. But I can do that too, usually in about the time it'd take me to adequately form my request into a prompt. Life could get dangerous in a hurry if product management got salty enough in the requirements phase that the specs for a feature could just be dropped into some code assistant to generate product. I don't see that ever happening though - not even with tooling - product people just don't seem to think that way in the first place, in my experience.

As developers we spend a lot of our time modifying existing product - and if the LLM knows about that product, all the better a job it could do, I suppose. Not saying that LLMs aren't useful now or won't become more useful in time - because they certainly will.

What I am saying is that we all like to think of producing code as some mystical gift that only we as experienced (BRILLIANT, HANDSOME AND TALENTED TOO!!) developers are capable of. The reality is that once we reach a certain level of career maturity, if we were ever any good in the first place, writing code becomes the easiest part of the job. So there's a new tool that automates the easiest part of the job? OK - autocomplete code editors were cool like that too. The IDE was a game changer too. Automated unit tests were once black magic too (remember when the QA department was scared of them?).

When some AI can look at a stack trace from a set of log files, being fully aware of the entire system architecture, locate the bug that compiled and passed testing all the way to production, then recommend, implement, test and pre-deploy a fix while a human reviews the changes - then we're truly onto something. Until then I'm not worried that it can write some really nice SQL against my schema with all kinds of crazy joins - because I can do that too - sometimes faster, sometimes not.

So far ChatGPT isn't smarter than me, but it is a very dutiful intern that does excellent work if you're patient and willing to adequately describe the problem, then make a few tweaks at the end. "Tweaks" up to and including seeing how the AI approached it, throwing it out, and doing it your own way.


Except you should at least try to write code for someone else (probably someone at a lower level of competence - this also helps your own debugging later) - obscure one-liners like these should be rejected.


I wouldn't call it obscure, just bog standard command line stuff. How would you have done it?


The lower-level person need only plug that one-liner into ChatGPT and ask for a simple explanation.

We're in a different era now.


Yep! It’s something some aren’t seeing.

The AI coding assistant is now part of the abstraction layers over machine code. Higher-level languages, scripting languages, all the happy paths we stick to (in bash, for example), memory management with GCs and borrow checkers, static analysis … now just add GPT. Just as you once had to master memory management and assembly instructions, now you also don't have to master the fiddly bits of coreutils and bash and various other things.

Like memory management, whole swathes of programming are being taken care of by another program now, a Garbage Collector, if you will, for all the crufty stuff that made computing hard and got in between intent and assessment.


The difference is that all of them have theories and principles backing them, and we understand why they work.

LLMs (and "AI" in general) are just bashing data together until you get something that looks correct (as long as you squint hard enough). Even putting them in the same category is incredibly insulting.


There are theories and principles behind what an AI is doing, and a growing craft around how best to use AI that may very well form relatively established “best practices” over time.

Yes, there's a significant statistical aspect involved in the workings of an AI, which distinguishes it from something more deterministic like syntactic sugar or a garbage collector. But I think one could argue that that's the trade-off for a more general tool like AI, in the same way that giving a task to a junior dev is going to involve some noisiness in need of supervision. And in the grand scheme of software development, devs are in the end tools too, a part of the grand stack, and I think it's reasonable to consider AI as just another tool in the stack. This is especially so if devs are already using it as a tool.

Dwelling on the principled vs. statistical distinction, while salient, may very well be a fallacy, or irrelevant to the extent that we want to talk about the stack of tools and techniques software development employs. How much does the average developer understand, or employ an understanding of, a principled component of their stack? How predictable is that component, at least in the hands of the average developer making average but real software? When the end of the pipeline is a human, and a human organisation of other humans, whether a tool is principled or statistical may not matter much so long as it's useful and productive.


Yes, but this is not something that was enabled by the new neural networks; it was enabled by search engines, years ago - culminating in the infamous «copy-paste from Stack Overflow without understanding the code» and in libraries randomly pulled from the Web, as with the left-pad incident.

So what makes it different this time?


So now the same tool can generate both the wrong script and the wrong documentation!


Wait, wait, let me show you my prompt for generating unit tests.


Assuming there is a comment just above the one-liner saying "find all directories named 'bar', find all files named '*.foo' in those directories, search those files for 'baz'", this code is perfectly clear. Even without the comment, it's not hard to understand.
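
i.e. something like:

  # Find all directories named 'bar', find all files named '*.foo'
  # in those directories, and search those files for 'baz'.
  find / -type d -name 'bar' -exec find {} -type f -name '*.foo' \; | xargs grep 'baz'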


If the "someone elses" on my team can't read a short shell pipeline, then I failed during interviewing.


To me, the only obscure thing about this is that it is a one-liner.

If you write it in three lines, it is fine. Although, I guess the second find and the grep could be shortened, combined into one command.
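
Something like this, say (using -I@ as the replacement string so the inner find's {} isn't clobbered by xargs):

  find . -type d -name 'bar' -print0 \
    | xargs -0 -I@ \
        find @ -type f -name '*.foo' -exec grep baz {} +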


I agree - but my point still stands.


You can use gpt for government work??


Shh. What they don't know won't hurt me.

(Serious answer: it's just an expression. https://grammarist.com/idiom/good-enough-for-government-work...).


I haven't practiced or needed to use the fundamentals in literal years; I'm sure I'd fumble some of these tests, and I've got, err, 15 years of experience.

It's good to know the fundamentals and be able to find them IF you find a situation where you need them (e.g. performance tuning), but in my anecdotal and limited experience, you're fine staying higher level.


I had a chilling experience of late when, out of curiosity, I tried the actual online practice exam for driving school. Boy, did I fail it. I realized that there are quite a few road signs I have never seen in my life, and more importantly, that my current solution to all their right-of-way questions is "slow down and see what the others do" - not even that wrong if I think about it, but it won't get you points in the exam.


And I suspect you would be a lot less likely to be involved in a crash than someone who had just passed the test.


There are levels of fundamentals though; since the parent mentioned HTML/CSS/React, I guess they're referring to being able to create a layout by hand vs. using a CSS framework/library. You don't need to know how a CPU works to fix a CSS issue, but if all you know is combining the classes available, you'll have trouble with even the simplest web development.

Everyone should know enough fundamentals to be able to write simple implementations of the frameworks they depend on.


Kind of the same sort of situation, but I do like to refresh some of the fundamentals every 3-4 years or so, usually when I do a job hop.

It's kind of like asking an Olympic sprinter how to walk fast.


>If you want to get back into front-end read “CSS: The Definitive Guide”. Great book, gives you a complete understanding of CSS by the end.

Do you realize how many technologies you can say the same thing about? I don't want to read a 600-page tome on CSS. The language is a drop in the bucket of useful things to know. How valuable is a "complete understanding" of CSS? I just want something on my site to look a specific way.


> I don't want to read a 600 page tome on CSS.

The latest edition is closer to 1,100 pages.

It’s worth it.

It’s usually worth it for any long-standing technology you’re going to spend a significant amount of time using. Over the years you’ll save time because you’ll be able to get to the right answer in less time, debug faster, and won’t always be pulling your hair out Googling.


Sometimes it requires expert guidance to get something meaningful out.


This is the correct answer. I have 23 years of experience in datacenter ops and it has been a game changer for me. Just like any tool in one's arsenal, its utility increases with practice and learning to use it correctly. ChatGPT is no different. You get out of it what you put into it. This is the way of the world.

I used to be puzzled as to why my peers are so dismissive of this tech. Same folks who would say "We don't need to learn no Kubernetes! We don't need to code! We don't need ChatGPT". They don't!

And it's fine. If their idea of a career is working in the same small co., doing the same basic Linux sysadmin tasks for a third of the salary I make, then more power to them.

The folks dismissive of the AI/ML tech are effectively capping their salary and future prospects in this industry. This is good for us! More demand for experts and less supply.

You ever hire someone that uses punch cards to code?

Neither have I.


I think it's more akin to using compilers in the early days of BCPL or C. You could expect it to produce working assembly for most code but sometimes it would be slower than a hand-tuned version and sometimes a compiler bug would surface, but it would work well enough most of the time.

For decades there were still people who coded directly in assembly, and with good reason. And eventually the compiler bugs would be encountered less frequently (and the programmer would get a better understanding of undefined behavior in that language).

Similar to how dropping into inline assembly for speeding up execution time can still have its place sometimes, I think using GPT for small blocks of code to speed up developer time may make some sense (or tabbing through CoPilot), but just as with the early days of higher level programming languages, expect to come across cases where it doesn't speed up DX or introduces a bug.

These bugs can be quite costly. I've seen GPT spit out encryption code and completely botch critical parts - leaving out arguments to a library call, or generating the same nonce or salt value on every execution. With code like this, if you're not well versed in the domain it is very easy to overlook, and unit tests would likely still pass.
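
A command-line caricature of that class of bug (my own illustration, not actual GPT output; the file names and $HEXKEY are placeholders):

  # BAD: fixed key and IV, so identical plaintexts encrypt identically,
  # and the "IV" provides no randomization at all across runs
  openssl enc -aes-256-cbc -K "$HEXKEY" -iv 00000000000000000000000000000000 \
    -in secrets.txt -out secrets.enc

  # Better: -pbkdf2 derives the key with a fresh random salt on every run
  openssl enc -aes-256-cbc -pbkdf2 -pass file:keyfile \
    -in secrets.txt -out secrets.enc

Both run, both decrypt, and nothing a unit test checks will flag the first one.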

I think the same lesson told to young programmers should be used here -- don't copy/paste any code that you do not sufficiently understand. Also maybe avoid using this tool for critical pieces like security and reliability.


I think many folks have had those early chats with friends where one side was dismissing LLMs for so many reasons, when the job was to see and test the potential.

While part of me enjoyed the early GPT much more than the polished version of today, as a tool it's much more useful to the average person, should they make it back to GPT somehow.


Just out of curiosity: is the code generated by ChatGPT not what you expected, or is it failing to produce the result that you wanted?

I suspect you mean the latter, but just wanted to confirm.


The statements are factually inaccurate and the code doesn’t do what it claims it should.


Right. That's an experience completely different from that of the majority here, who have been able to produce code that integrates seamlessly into their projects. Do you have any idea why?

I guess we should start by what version of ChatGPT you are using.


ChatGPT might as well not exist - I'm not touching anything GAFAM-related.

Any worthy examples of open-source neural networks, ideally not from companies based in rogue states like the US?


Falcon is developed by a UAE tech arm; not sure if you would consider it a rogue state or not: https://falconllm.tii.ae/


What is «tech» supposed to mean here? Infocoms?

The United Arab Emirates? Well, lol, of course I do; that's way worse than the US.



