If you're a computer science student who is thinking of using this, don't.
Don't use Copilot, Gemini, Cursor or any other code-assisting tool for the first several years of your study or career. You will write code slower than others, sure, but what you learn and what you build will be a hundred times more useful to you than if you just copy and paste 'stuff' from AI.
Invest in fundamentals, learn best practices, be curious.
This is bad advice for people with the right attitude.
If you want to learn: don't use these models to do the things for you. Do use these models to learn.
LLMs might not be the best teachers in the world, but they're available right there and then when you have a question. Don't take their answers at face value; test them. Ask them to teach you the correct terms for the things you don't know yet, so you can ask better questions.
There has never been a better time to learn. You don't have to learn the same way your predecessors did. You can learn faster and better, but you must be vigilant that you're not fooling yourself.
I feel bad for anyone learning to code now. The temptation to just LLM it must be sooooo high.
But at the same time, having personalised StackOverflow without the negative attitude at your fingertips is super helpful, provided you also do the work of learning.
> personalised StackOverflow without the negative attitude
Phrased in that way, it does sound very tempting.
Over the past few years it's become pretty much a waste of time to post on SO (well, in my experience, anyway).
But wouldn't you learn if you actually had to enter and test that code, even if it's LLM-generated, every day? Maybe you learn bad patterns, which can happen from SO as well, but you do learn. I'm more worried that juniors won't have this opportunity anymore, or rather, that they won't be needed anymore. So when I retire, what then? Unless AI gets better and replaces everybody, in which case it won't matter at all what and how you learned.
Don't feel bad: LLMs make it so much easier to learn things without getting stuck on frustrating nonsense, yet there remain enough hurdles that you still need to develop resilience.
The errors and inefficiencies LLMs make are very subtle. You're also just stuck with whatever they were trained on. I echo OP: learn from documentation and code. This is as true now as it was back when Stack Overflow was used for everything.
They don't necessarily do, but you can get them pretty far. One of the most interesting aspects of LLMs (chat-based ones, at least; I haven't tried the Copilot-style ones enough) is that smallish rewrites are really low cost.
Don't like the way it did something convoluted, or that it didn't use early returns? Say so, and it will fix it. Chain as many requests as you need; it won't get fed up with you. And if you see it losing detail because of memory limits, use those requests to write a significantly more polished prompt for a new chat and a cleaner starting point.
It's interesting you write this. I have long experience and I use this autocomplete-on-drugs now... I can't see myself writing all the damn code myself anymore.
I remember the days of using books, having to follow the code bits in the book as I typed them. I don't remember diddly squat about it. Same from years of stack overflow. I'd just alt-tab 12 times, read the comments, then read another answer, assess the best answer. Massive waste of time.
Use all the technology you have at your hands I say. But be sure to understand what you auto-completed. If not, stop and learn.
> But be sure to understand what you auto-completed. If not, stop and learn.
But that's IMO exactly what the parent commenter says. Use LLMs only after you actually have a clue what they are producing. So if you are a beginner, basically don't, because you won't have any understanding yet.
Agreed with you, although I'd say there is a spectrum between "do everything the hardest way" and "don't learn anything". I think LLM-based tools can be a great time-saver for boilerplate repetitive code, and they can sometimes help you get a first draft of some implementation you're not fully seeing yet, but you definitely should not rely on them to write the whole code for you if you want to learn.
>Don't use Copilot, Gemini, Cursor or any other code assisting tool for the several first years of your study or career.
I totally disagree with it too, and think it's no different than using a book or SO in the past. As a junior you copy-paste many more lines of code than you fully appreciate, and sometimes it simply takes time of doing that to absorb the knowledge.
I don't think we disagree at all btw. IMO we all agree understanding of what the spat-out code actually does is mandatory and absolutely is NOT optional. The rest are really just details.
I agree that nowadays LLMs can fill in for SO and Reddit.
I agree, but I'll add that you can still use a standalone LLM window as a "teacher". If you don't know how to do something, ask it how to do it, and make it explain every piece of what's going on so you truly absorb it. But don't let it write the code FOR you, you should implement it yourself.
I think this is not a good idea or suggestion at all.
If i use google maps to find my way around, i'm faster by a lot. I also do remember things despite google maps doing the work for me.
Use code assistants as much as possible, but make sure to read and understand what you get, and try to write it yourself, even if you just write it from a second window.
In this age and at this pace, the way we write code will change significantly in the next few years anyway.
I tend to agree, but I'll extend this beyond computer science students to especially include people who are self-learning. When I was getting started, I actively tried to minimize the number of packages and abstraction tools I used. Consequently, I was able to properly and deeply understand how things worked. Only once I was able to really explain and use a tool would I accept an automated solution for it.
On the flip side, I've now found that getting AI to kick the tires on something I'm not super well versed in, helps me figure out how it works. But that's only because I understand how other things work.
If you're going to use AI in your learning, I think the best way you can do that is ask it to give you an example, or an incomplete implementation. Then you can learn in bits while still getting things done.
I just interviewed someone for a Senior position who's been using these AI copilots for 1.5 years as a contractor. In the interview I politely said I wanted to evaluate their skills without AI, so no Cursor/Copilots allowed. They did not remember how to map through an array, define a function, add click/change handlers to input, etc.
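For context, the kind of basics the interview touched on can be sketched in a few lines of TypeScript. The "change handler" part normally targets a browser DOM (`input.addEventListener("change", …)`), so this sketch simulates the same pattern with a plain callback registry; the names here are invented for illustration.

```typescript
// Mapping over an array: double each number
const doubled = [1, 2, 3].map((n) => n * 2); // [2, 4, 6]

// Defining a function (declaration and arrow forms)
function greet(name: string): string {
  return `Hello, ${name}`;
}
const greetArrow = (name: string): string => `Hello, ${name}`;

// A change handler, simulated without a DOM: register callbacks,
// then "fire" a change event to invoke them.
type Handler = (value: string) => void;
const handlers: Handler[] = [];
const onChange = (h: Handler) => handlers.push(h);
const fireChange = (value: string) => handlers.forEach((h) => h(value));

onChange((v) => console.log(`changed to ${v}`));
fireChange("hello"); // prints "changed to hello"
```

These are roughly interview-warm-up level, which is what made the gap so striking.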
What I've found after developing software for many decades and learning many languages is that the concepts and core logical thinking are what is most important in most cases.
Before the current AI boom I would still have had a problem doing some tasks in a vacuum as well. Not because I was incapable, but because I had so much other relevant information in my head that the minutia of some tasks was irrelevant when I had immediate access to the needed information via auto-complete in an IDE and language documentation. I know what I needed to look up because of all that other knowledge in my head though. I knew things were possible. And in cases where I didn't _know_ something was possible, I had an inkling that something might be possible because I could do it in another language or it was a logical extension of some other concept.
With the current rage of AI coding copilots, I personally feel like many people are going down a path that degrades the corpus of general knowledge that drives the ability to solve problems quickly. Instead they lean on the coding assistant to have that knowledge and simply direct it to do tasks at a macro level. On the surface this may seem like a universal boon, but the reality is they are giving up the intrinsic domain knowledge that is needed to understand what software is doing and how to solve the problems that will crop up.
If those two paragraphs seem contradictory in some manner, I agree. You can argue that leaning on IDE syntax autocomplete and looking up documentation is not foundationally different than leaning on a coding assistant. I can only say that they don't _feel_ the same to me. Maybe what I mean is: if the assistant is writing code and you are directly using it, then you never gain knowledge. If you are looking things up in documentation or using autocomplete for function names or arguments, you are learning about the code and how to solve a problem. So maybe it's just: what abstraction level are we, as a profession, comfortable with?
To close out this oddly long comment, I personally use LLMs and other ML models frequently. I have found that they are excellent at helping me formulate my thoughts on a problem that needs to be solved and to surface information across a lot of sources into a coherent understanding of an issue. Sure, it's possible that it's wrong, but I just use it to help steer me towards the real information I need. If I ask for or it provides code, that's used as a reference implementation for the actual implementation I write. And my IDE auto-complete has gotten a boost as well. It's much better at understanding the context of what I'm writing and providing guesses as to what I'm about to type. It's quite good. Most of the time. But it's also wrong in very subtle ways that require careful reading to notice. And I'll sum this paragraph up with the fact that I'm turning to an LLM more and more as a first search before I hit a search engine (yet I hate Google's AI search results).
The situation opened up a very interesting discussion on our team. All of us on the team use AI tools in our job (you'd be a fool not to these days). I even use the copilot tool that the candidate used. But the difference is that I don't rely on it, and any code it produces I'm actively registering in my head. I would never let it write something that I don't understand without taking the time to understand it myself.
I do agree though. Why do intellisense and copilots feel so different from one another? I think part of it is that with intellisense you generally need to start the action before it auto suggests, whereas with copilots you don't even need to initiate the action.
Don't use garbage collection or high-level dynamic typing when building web servers. It's important to understand what the machine is actually doing at a low level. Implementing REST APIs in C++ means you'll write code slower than others, sure, but you'll gain so much in your fundamentals of how memory management and OS processes work.
The more direct comparison might be "don't use compilers for the first few years; learn assembly directly instead". LLMs aren't going away, it doesn't make sense to learn how to do things LLMs already do now and are only going to get better at.
If you're a student who is thinking of not using LLMs, don't.
You put yourself at a significant disadvantage by not availing yourself of an infinitely patient, non-judgemental, and thoroughly well read tutor/coach.
I think there is a lot of room to leverage it, while resisting the temptation to have it do your work for you.
You can get it to provide feedback on code quality, suggest refactors and explain its reasoning (giving the explanation rather than the full solution), basically treating it as an always-available study group.
There is probably room for a course or book on a methodology that allows students to engage in this practice, or models with prompts that forbid straight completion and just provide help aimed at students.
I made my career investing in fundamentals, but I started many decades ago. I can't in good conscience recommend it today. We will still need some people to do fundamental things (and they will be self-motivated and know who they are), but I don't think that's where most humans will sit in the stack. We should be at the top directing the machines, not competing with them.
Same sort of advice as don't copy verbatim to write your essays - ie for you to learn it has to flow through your brain.
However the above advice for essays doesn't include not looking at textbooks or papers - just not to blindly copy.
So perhaps you should use coding assistants - but always in a mode where you use them as a source for writing it yourself, rather than cut-and-paste or direct editing.
Yeah, you probably want to get a grip on the fundamentals. My old college still has students write some programs by hand, with paper and pencil, on exams to enforce this.
But exercise pressure in courses will probably increase to recalibrate for the difficulty level. I feel LLMs let you finish assignments so much faster that I don't think you can afford not to use them.
Pen and paper in my opinion is absurd. You will never write code by hand. Ever. Writing things by hand teaches you how to do something in a way you will never use, so the memories being developed are going to be attached to a context that is alien to the reality of what your end goal is.
The programs written by hand were really simple and made up about a third of the exam. (Edit: programming assignments were done on computers, of course.)
FizzBuzz-to-bubblesort level, plus a hard one we hadn't been exposed to that I failed. It required knowing the tortoise-and-hare pointer walk thing.
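For anyone who hasn't seen it, the tortoise-and-hare trick (Floyd's cycle detection) uses two pointers moving at different speeds: if a linked list has a cycle, the fast pointer eventually laps the slow one. A minimal sketch in TypeScript (the list-building names here are invented for illustration):

```typescript
// Floyd's tortoise-and-hare: detect a cycle in a singly linked list.
type ListNode = { value: number; next: ListNode | null };

function hasCycle(head: ListNode | null): boolean {
  let slow = head; // tortoise: one step per iteration
  let fast = head; // hare: two steps per iteration
  while (fast !== null && fast.next !== null) {
    slow = slow!.next;
    fast = fast.next.next;
    if (slow === fast) return true; // hare lapped the tortoise: cycle
  }
  return false; // hare fell off the end: no cycle
}

// Build 1 -> 2 -> 3, then close 3 back to 2 to create a cycle.
const n3: ListNode = { value: 3, next: null };
const n2: ListNode = { value: 2, next: n3 };
const n1: ListNode = { value: 1, next: n2 };
console.log(hasCycle(n1)); // false (no cycle yet)
n3.next = n2; // create the cycle
console.log(hasCycle(n1)); // true
```

The elegance (constant memory, linear time) is exactly why it shows up on exams despite rarely being written from scratch in practice.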
I think it is good as an exercise. Just like manual assembler-to-machine-code transcription is good to have as a small part of a computer architecture course.
I regret doing so much of my studies with the least effort approach. In the end I would probably have saved time if I tried to learn like the teachers tried to force me to. Study during the whole course and not just the last week. Try to understand the concept instead of studying for the exam. Etc.
If your goal is to become a fundamentally sound computer scientist, this may be good advice.
However, if - like 99% of software developers in the workforce - your goal is to work on software until you earn enough that you no longer have to, then ignore this awful advice and focus on learning the tools that are becoming ubiquitous and mandatory in most roles.
Otherwise you are consigning yourself to irrelevance, equivalent to programmers refusing to use operating systems, compilers, runtimes, etc.
> Do you not understand that, at the end of the day, you'll also still be stuck living in the nightmarish hellscape you spend your days trying to usher in?
I live by the sea and spend most of my time with family or working in healthcare. Software for me pays the bills and the less time I spend on it, the better.
LLMs allow me to spend about a third of the time I used to without impacting my remuneration. Of course I use them, in the same way I use email rather than snail mail, machine language rather than soldering, Python rather than assembly, git rather than _v244_FINAL, and any number of other abstractions and tools that separate me from the actual bits.
> Healthcare workers aren't negatively affected by having to put up with shitty software that doesn't actually work?
Is your position that code produced with LLMs is _worse_ than the average human developer working alone? I would be far happier if the software I am forced to use professionally had been produced by developers guided by LLMs.
But even if you are right, most software developers don't work on anything important and most don't care about the software they output. You're an enthusiast, which is very much an outlier in software development, and so maybe learning the fundamentals from logic gates is worth it to you (it was to me). LLMs can accelerate that learning immensely.
For the vast majority of people employed writing software, that's not true and forgoing LLMs will permanently stifle their employment prospects.
I think only for those editors. Google does also give out a free API key (aistudio.google.com) for the underlying models (though not the coding fine-tuned one), but IMO the free tier is rate-limited a bit too much to build your own extension on top of it for free.
As usual for Google services, where other providers just make you sign in & you're set, Google requires you to create a specific "Cloud project" and then makes you look through the menus to specifically enable the "Gemini Code Assist" feature.
I don't believe that this is the future of computer programming, all those coding assistants feel like companies trying to make mechanical horses and caravans when the combustion engine was invented.
IMHO they should be inventing cars, planes and trains.
Why? Because they write code using tools made to accommodate people, and when you take people out of the loop, keeping those tools is useless. It's especially evident when those AI tools import a bazillion libraries that exist only to keep humans from reinventing solved problems and to provide comfort while coding.
The AI programming tools are not like humans; they shouldn't be using tools made for humans, and should instead solve the tasks directly. E.g., if you are making a web UI, the AI tool doesn't have to import tons of libraries to center a div and make it pretty. It should be able to directly write the code that centers it, and humans probably shouldn't be looking at the code at all.
When is the last time you tried Cursor? It definitely doesn't import libraries. It can if you prompt it to but you have control.
I find it works great if you prompt it one step at a time. It can still be iterative, but it allows you to tighten up the code as you go.
Yes, you can still go yolo mode and get some interesting prototypes if you just need to show someone something fast, but if you know what you're doing it just saves time.
It has been some time since my last try but importing libraries isn't my core concern.
I still feel more comfortable with the chat interface, where I talk to the LLM and make it generate code that I end up putting together in a dumb editor, because I'm still writing code for human analysis that will then be interpreted by a machine. My claim is that if the code is actually to be written by a machine for consumption by a machine, then the human should be out of the code-creation loop completely, and should fully assume the role of someone who demands stuff, knows when it's done right, and doesn't bother with the code itself.
That would be the case if this were actually AI. But LLMs actually do procedurally generate language in a fairly human way, so using human languages makes sense to me.
The problem with this is that we can't get rid of the baggage of higher and higher levels of libraries and languages that exist to accommodate humans.
I agree that it makes sense to use these currently, but IMHO the ultimate programming will be free of human-readable code; instead the AI will create a representation of the algorithm we need and will execute around it.
Like having an idea: for example, if you need to program a robot vacuum cleaner, you should be able to describe how you want it to behave, and the AI will create an algorithm (an idea, like "let's turn when we bump into a wall, then try again") and constantly tweak and tend it. We wouldn't be directly looking at the code the AI wrote; instead we could test it and find edge cases that a machine maybe wouldn't predict (e.g. the cat sits on the robot and blocks the sensors).
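The "turn when you bump into a wall, then try again" behaviour is essentially a tiny reactive rule. A toy sketch in TypeScript, with a made-up one-dimensional world (the world size, step count, and function names here are all invented for illustration, not any real robotics API):

```typescript
// A robot in a 1-D corridor of cells [0, worldSize). It moves in one
// direction until it would hit a wall, then reverses and tries again.
function runVacuum(worldSize: number, steps: number): number[] {
  let pos = 0;
  let dir = 1; // +1 = right, -1 = left
  const visited: number[] = [pos];
  for (let i = 0; i < steps; i++) {
    const next = pos + dir;
    if (next < 0 || next >= worldSize) {
      dir = -dir; // bumped a wall: turn around...
      continue;   // ...then try again on the next step
    }
    pos = next;
    visited.push(pos);
  }
  return visited;
}

console.log(runVacuum(3, 5)); // [0, 1, 2, 1, 0]
```

The interesting part of the comment's vision is that a human would never read this code, only observe and correct the behaviour it produces.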
What makes the AI context window immune to the same issues that plague us humans? I think they will still benefit from high level languages and libraries. AIs that can use them will be able to manage larger and more complex systems than the ones that only use low level languages.
Particularly, "Data excluded from training by default" is not available in the free and first paid tier.
Google was obviously irked that Microsoft got all this juicy training data since everyone is on their walled git garden, and tried to come up with a way to also dip some fingers into said data.