I empathize with not wanting to become cocky, but I feel like you're lying to yourself. I could see them as truths if we were talking about triathlon, but something as intellectual and logical as programming? If you are able to reach the point where you find yourself enjoying programming, then I think there's a good chance that it's only a matter of time - and rightly focused effort - until you are able to produce work of any level of quality, including the current level of the best programmer, and beyond. What sort of limitations make you think otherwise?
Project this onto math, and we all could be Ramanujan or Galois with the right investment of time, right? Of course not. People differ in level of intelligence, and this is a fact. Some programmers are just brilliant people - had they chosen math they would probably be professors churning out papers. Some others just don't have the mental skills required.
This isn't about race or gender or whatever - just innate difference between individuals.
"Intelligence" isn't some stat you train to make your spells do more damage. It's a complex mix of what you've learned, how you learn, and who's teaching you. "Intelligence" is just as hard to define as "consciousness".
In my experience, it's the capacity for holding abstraction in mind that determines how far up the hill a person can go. I don't think there is ever a time where you can't move uphill, but the effort required to move increases more the closer you get to saturating that capacity.
I see what you mean. This is about very complex structures and interactions.
But in fact, I believe that with enough time you can get used to any level of complexity and find your way through it. It is a bit like a big city: however complex it is (intertwined roads, subways, highways, etc.), if you live there long enough, you will build a precise mental model. Same with software projects: after some time, abstractions will not be those foreign, short-lived, unstable concretions that you hold in your head and that vanish if someone says hello. They will be good old friends, and the sole invocation of their name will instantly call a lot of solid knowledge to the bar.
So in this line, I'd say clever guys are just faster and more agile (maybe both are the same thing). Above a certain level of tolerance to abstract thinking, anyone can understand and work on big complex projects, given enough time.
There is another difference between good and great developers, though. It is the "best path-finding skill". For instance, given enough time, I would probably be able to write a Python script solving most sudokus, but I fear I wouldn't find as elegant and straightforward a path as Norvig's. http://norvig.com/sudoku.html
Even if you are only moderately good, you'll have a unique combination of knowledge that allows you to see things that others can't, and that will enable you to do amazing things.
Of course you should always try to improve. But consider where you invest your time. If you know X well, and most people that know X well also know Y well, then maybe you should learn Z instead of Y. Don't always choose Z -- maybe X and Y are complementary and so it may make more sense to learn Y than Z -- but if you follow someone else's curriculum exactly, you will only be re-discovering what they already know.
Does anybody know if Notch is self taught? I thought he had a CS Degree from somewhere?
I fluctuate between thinking I am a fairly competent developer and thinking that I am possibly the worst programmer there is. I'm not sure which is the better attitude to have; hopefully I am somewhere in the middle.
I think the issue with reading some of the discussion on HN is that you get people talking in detail about things like functional programming languages, systems with huge scalability, hardcore math problems and the finer points of memory management in the Linux kernel, so you end up feeling it is obvious that you should understand this stuff.
I have been trying to do some more book reading to improve. Of course, the issue is that whenever you read any book recommendation thread on HN there are always at least 30 or so recommendations of some pretty thick books, and there's no chance I'd have time to read them all.
There is also a difference between having deep knowledge of the tools and libraries that you are using right now and having a deeper understanding of theory: learning git vs. learning graph theory, for example.
> Does anybody know if Notch is self taught? I thought he had a CS Degree from somewhere?
I can't say with any degree of authority, but based on what I've read of his code, I'd guess he's self-taught.
My bigoted preconception is that when it comes to abstraction in code, people with CS degrees err on the side of doing too much, creating extremely elaborate object frameworks that close down options as much as they help reuse code. Self-taught folks, on the other hand, err on the side of doing too little, relying on cut and paste, util packages, and promiscuous sharing of data.
When it comes to algorithm design, people with CS degrees tend to pull in libraries of esoteric things, sometimes overengineered for the purpose at hand. Self-taught folks tend to write something from scratch that almost, but not quite, does the job right.
Based on what I've read from decompiled Minecraft, I'd guess Notch is self-taught. The abstraction is just barely enough to get the job done, and the algorithms are decidedly homebrew. That's neither praise nor criticism, just a comment on style.
I think that's because university teaches abstractions and pretty much nothing else.
A project that is done in the average CS class will be presented as a way to teach a design pattern rather than teaching how to solve a problem.
Also the marking scheme will tend to favour a broken solution that is an attempt at an elegant abstraction rather than a more basic abstraction that is well tested and works.
There is a certain danger in being educated. For example, a self-educated programmer might have a problem and just implement an O(n^2) solution and move on, whereas a college-educated programmer might spend excessive time trying to work out a way to do it in O(log N).
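To make the trade-off concrete, here's a toy sketch (my own example, not anyone's curriculum): checking a list for duplicates. Both versions work; the difference is only how much effort goes into reaching the second one.

    import java.util.HashSet;
    import java.util.List;
    import java.util.Set;

    class DuplicateCheck {
        // The "just implement it and move on" version: compare every pair, O(n^2).
        static boolean hasDuplicateNaive(List<String> items) {
            for (int i = 0; i < items.size(); i++) {
                for (int j = i + 1; j < items.size(); j++) {
                    if (items.get(i).equals(items.get(j))) return true;
                }
            }
            return false;
        }

        // The version an algorithms course nudges you towards: a HashSet
        // brings it down to roughly O(n).
        static boolean hasDuplicate(List<String> items) {
            Set<String> seen = new HashSet<>();
            for (String item : items) {
                if (!seen.add(item)) return true; // add() returns false if already present
            }
            return false;
        }
    }

For small inputs the naive one is perfectly fine, which is exactly the judgment call being argued about.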
> I think that's because university teaches abstractions and pretty much nothing else.
Maybe my CS degree was unique (I don't think it was), but at the 400 level we had specialization choices including security, graphics, web development, databases, operating systems, embedded, software engineering, etc.
> A project that is done in the average CS class will be presented as a way to teach a design pattern rather than teaching how to solve a problem.
Design patterns != algorithms. My professors actually spent precious little time teaching me how to program, most of that was self-taught. What they taught me to do was how to solve problems, and that education has been very useful.
If your CS professors are spending most of their time teaching you design patterns, you should ask for a refund. The only CS class that taught me design patterns was my 400-level software engineering course.
That's not quite what I meant, most CS classes cover many other things than design patterns.
The problem is more that in an undergrad degree you are unlikely to need to do any large scale (by industry standards) project. So the way that they teach you good design is largely forced and the most academic way to do that is through a "design patterns" type class where the project that you are building is small enough not to require design patterns in the first place.
This will then educate people that you should really always look for patterns to apply and ways to abstract things since you will be artificially marked up at college for building abstractions and will look for them in every problem you have.
On the other hand, someone who is taught more by experience will start by writing awful code, like putting their whole program into one or two functions. They will then experience pain points because of this, recognise certain recurring problems, and either come up with their own solutions or read the Gang of Four book; this will teach them design patterns the natural way.
> On the other hand, someone who is taught more by experience will start by writing awful code, like putting their whole program into one or two functions. They will then experience pain points because of this, recognise certain recurring problems, and either come up with their own solutions or read the Gang of Four book; this will teach them design patterns the natural way.
This is exactly how I taught myself. The transition from unorganized to organized (patterns) happened for me pretty naturally. For me, it was learn how to organize, or quit. Nowadays, patterns are everywhere (and probably have been, but I didn't get it until I got it :)
>There is a certain danger in being educated, for example a self educated programmer might have a problem and just implement a O(n^2) solution and move on whereas a college educated programmer might spend excessive time trying to work out a way to do it in O(log N)
The CS graduate will have covered big O classifications but if they immediately focus on this form of optimization it's probably because they have spent years reading blogs that tell them this is the nature of tests at places like Google.
If every CS grad automatically thought this way by virtue of their education there would be no reason to test for it in interviews simply because a CS degree is often a minimum requirement for the kind of job where you'll be asked these questions.
It's only a priority if you make it one and those self-taught guys can probably self-teach themselves big O.
Your assumption that self-taught programmers (i.e. those who did not get a "formal" education) do not understand fundamentals is utter bullshit. You could reverse your statements and they would both still work. I've worked with people that have a masters in CS and in general it matters about 1% of the time.
A programmer today has a wealth of information they can pull from that does not require a single ounce of formal education. It takes dedication to the craft not bucket loads of money.
I don't think I made that assumption in that post at all.
My point was more that a formal CS education is likely to give you a different perspective on programming vs being self taught.
I would imagine most self-taught programmers focus on results-oriented learning. When I first learned to program, before doing any formal CS, my approach was "I want to do X; what is the minimum set of stuff I need to learn in order to do that well enough?" After learning more formal CS and being forced to consider things like abstraction and efficiency for their own sake, I now focus more on them in every program I write.
Not suggesting that you can't be completely self taught and learn everything you could from an academic education (you can) but you are less likely to spend a month learning a bunch of design patterns and algorithms unless they directly apply to something you need to do right now.
You are more likely to just start hacking away at something and then think "oh, this code is a mess, how can I fix that?" rather than reading the entire Gang of Four book to start off with.
I guess it's obvious, but I am (mainly) a self-taught programmer, and I think that you really don't know what you are talking about.
When I was around seven years old, we had a Tandy Color Computer 2. I wore out the book that came with that, playing with BASIC. We also had a Vic 20, a TI-99a and an Ohio Scientific, and eventually an IBM AT compatible. For years, I spent a lot of time playing with short BASIC programs.
Eventually, maybe around seventh grade, I got a book called Turbo Pascal Disk Tutor (or something like that). I loved that book and I spent many months studying the book and doing the exercises. I was very serious about learning object-oriented programming. Over the next couple of years I experimented with a simple wireframe 3D CAD-like program. I became very familiar with abstraction, polymorphism and other object-oriented concepts before I entered the 9th grade.
Anyway, I'm not going to list every single program I ever wrote or design pattern or programming language or concept I taught myself, but the point is, I did read books and learn a lot of things that are actually apparently missing from many undergrad and even graduate CS-like programs. A guy at Stanford just recently came out with a Rails course partly about software engineering, which apparently is practically revolutionary. There is more contemporary software engineering baked into Rails than what probably more than half of CS or even SE graduates in the last five or ten years ever saw in their courses.
And ever since I dropped out of college (only took like two CS-related courses while I was there), I have been extremely motivated to learn as much about CS and software engineering as I can, mainly because of attitudes like yours.
>> The abstraction is just barely enough to get the job done [...]
> Or, in other words, the perfect amount...
Well no, because the job doesn't end when it's "done". Minecraft is the perfect example:
In the earliest versions of the game, blocks were all basically homogenous cubes of some material, so they didn't need to be oriented. Later, blocks were added that did need to be rotated in various ways, e.g. torches, stairs. But each of these blocks had their own private system for choosing, storing, and rendering their orientation. These systems were often similar, but not identical. At this point, roughly half the blocks in the game are orientable in some way and there is still no generic orientation system. Such a system would have avoided massive amounts of redundant code, prevented many bugs, made the user experience more consistent, and made various 3rd party tools much easier to develop.
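To make that concrete, a hypothetical sketch of what such a system might look like (these names are mine, not Minecraft's):

    // Hypothetical sketch only -- not Minecraft's actual code.
    enum Orientation {
        NORTH, SOUTH, EAST, WEST, UP, DOWN;

        // One shared place for rotation rules, instead of per-block copies
        // of nearly-the-same logic.
        Orientation rotateClockwise() {
            switch (this) {
                case NORTH: return EAST;
                case EAST:  return SOUTH;
                case SOUTH: return WEST;
                case WEST:  return NORTH;
                default:    return this; // UP and DOWN are unaffected
            }
        }
    }

    interface Orientable {
        Orientation getOrientation();
        void setOrientation(Orientation o);
    }

A torch, a stair, and so on would implement Orientable once, and the renderer, the save format, and third-party tools could all lean on the same interface instead of block-specific special cases.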
"You ain't gonna need it" is a cop-out. You are going to need some things. The trick is anticipating which things, and it will definitely pay off if you can guess correctly.
Hmm, from what I can tell, Minecraft is fairly successful, despite it being a "perfect example" of not having enough abstraction.
Of course, you fail to really acknowledge the risks of premature abstraction. Sure, if you could see the future, and know what patterns could be usefully factored out into abstractions, it would be good to start with those abstractions. But what happens if you incorrectly predict that an abstraction will be needed? You create a bunch of unnecessary framework code that is harder to understand, likely less efficient, and worst of all, you wasted time writing code that you didn't need.
YAGNI is not a cop-out. The best way to create abstractions is from concrete examples. Write something once. Then, once you actually find yourself writing it twice, abstract it out. That guarantees that you don't waste time on things that you don't use. It also generally leads to better abstractions, because you have concrete use-cases to work from.
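As a toy sketch of what I mean (made-up Player type, nothing more):

    import java.util.List;
    import java.util.function.ToDoubleFunction;

    class Stats {
        // Hypothetical type, just for the example.
        static class Player {
            final double height, speed;
            Player(double height, double speed) { this.height = height; this.speed = speed; }
            double getHeight() { return height; }
            double getSpeed()  { return speed; }
        }

        // First pass: written once, kept concrete.
        static double averageHeight(List<Player> players) {
            double sum = 0;
            for (Player p : players) sum += p.getHeight();
            return sum / players.size();
        }

        // Second pass: the same shape shows up again...
        static double averageSpeed(List<Player> players) {
            double sum = 0;
            for (Player p : players) sum += p.getSpeed();
            return sum / players.size();
        }

        // ...and only now is the abstraction worth extracting, because two
        // concrete cases show exactly what varies.
        static double average(List<Player> players, ToDoubleFunction<Player> extract) {
            double sum = 0;
            for (Player p : players) sum += extract.applyAsDouble(p);
            return sum / players.size();
        }
    }

The generic average() written up front, before the second use existed, would have been a guess; written after, it's just the commonality the two concrete versions already proved out.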
Anyway, back to the Minecraft story, who are you to say that the game would be better if Notch followed the premature abstraction strategy? Isn't it possible, perhaps, that he would have wasted enough time implementing ivory towers of abstraction that he might have left out the features that actually made the game fun?
> But what happens if you incorrectly predict that an abstraction will be needed?
Then you made a mistake and hopefully learned something. I didn't say architecting software was easy or without risk, just that you can't avoid doing it by following simplistic rules.
> The best way to create abstractions is from concrete examples. Write something once. Then, once you actually find yourself writing it twice, abstract it out.
It's nice when things go that way, but it's not the general case. Often, by the time there is a concrete use for abstraction, the damage is done. For example, adding network multiplayer to a game that has been architected for single player is a nightmare of hacks and duplicated code (Notch has explicitly lamented that one).
> Anyway, back to the Minecraft story, who are you to say that the game would be better if Notch followed the premature abstraction strategy?
Notch himself seems to be saying as much in that blog post. But that aside, I'm a fellow game developer who has spent dozens of hours reading and modifying the Minecraft code. Even after a round-trip of obfuscation, it tells the story of its creation quite vividly, and the theme of the story is "hack around it!" Though I still have tremendous admiration for the game and its developers.
I think the point was that the source code doesn't make the game better or worse.
A nightmare of hacks is fine as long as the product is great.
IMO writing such code is sometimes even better, especially for a solo developer. If you aim to write beautiful code, it might eventually outweigh everything else, while giving a false impression that you're doing the right thing. Your goal is the product, not code. I'd argue that you can't focus on both (it's called ‘focus’ for a reason).
Of course source code makes the game better or worse. The game is made of code. The code makes the game what it is.
If you write good code, your product will work better and be done sooner. This is the definition of good code. If you write bad code, your product may be overbudget, buggy, inadequate, and so on. This is the definition of bad code.
There is no dichotomy between the product and the code. To suggest that you can make better software by neglecting the code is absurd.
No arguing with that. Same as building material makes a house what it is. The question is, does the success depend on material used? You can build a great house amidst the desert.
But that's an analogy. More real-world example—imagine two startups:
- Startup 1: bad programmer, good QA.
- Startup 2: good programmer, bad QA.
Where would you invest your money?
> If you write bad code, your product may be overbudget, buggy, inadequate, and so on.
Here I disagree. Overbudget? It depends on product success. Buggy? If you have good QA, it's not buggy. Inadequate? You can write the cleanest code, but your product won't work as users want it to.
When we say ‘great product’, do we mean that it has nice clean code, or it's something else? How many great products have bad code?
> There is no dichotomy between the product and the code.
As long as you are ‘just a’ developer, and there are other people focusing on product and its functional quality. In that case you receive specific tasks with deadlines, and yes, you should focus on writing good code.
Not so if you're a solo developer.
1) No one will focus on the product, except you.
2) You most likely would be heavily biased towards writing good code. (Because you're a developer, you're supposed to write good code, right?)
You need to force yourself to focus on the product, to avoid becoming Startup 2 from the above example. Intentionally writing bad code is one way to do that. In that case you can at least be Startup 1 -- you'll be forced to pay more attention to functional quality (as opposed to structural), so you'll be good QA.
You can argue that one can focus on both. My opinion is that it's too risky. You need to have priorities set as clear as possible.
> To suggest that you can make better software by neglecting the code is absurd.
Yes, it sounds really controversial (especially to a programmer). I'm far from satisfied with that statement. What would be a better way to be a good QA while being a great programmer?
Multiplayer mode can be extremely hard and can take any fun out of being an indie game developer. I'm still not sure if it is the right abstraction to work in on day one.
"""Well no, because the job doesn't end when it's "done"."""
By definition, it does.
"""In the earliest versions of the game, blocks were all basically homogenous cubes of some material, so they didn't need to be oriented. Later, blocks were added that did need to be rotated in various ways (...)"""
So you are suggesting that they should have set up a system to allow that from the beginning.
Have you sat and thought how adding things like that could delay the initial release?
Also, have you sat and thought that if the initial release was not successful at the marketplace, all that extra work would have been in vain?
Just build what you need at the time, and make it flexible enough so that it can be refactored to something else later.
Philosophically, I agree. I view premature abstraction in the same light as premature optimization. I believe both abstraction and optimization are incarnations of your understanding of the problem. You want them in important places, not necessarily everywhere, and hence you want them late enough in the engineering process that you understand which places are important.
That said, within the comfort zone, there are high and low levels of abstraction. For example, in a project the size of Minecraft, I would expect CS major code to contain an ObjectFactoryFacadeCollection or two. Minecraft has nothing of the sort. It sticks almost exclusively to the Mob::HostileMob::Zombie inheritance we all grew up with. This is not bad or good[1], it's simply a reflection of the low-abstraction style of attacking problems that I associate with self-taught programmers.
On the other hand, its Magic Number to Constant ratio is downright scandalous . . . ;) Though it's possible that some of that is an artifact of the compilation/decompilation process.
Have you ever looked at a class named ObjectFactoryFacadeCollection and thought to yourself, "oh boy, this part will be fun to read?"
On one hand there's the complexity of the problem you're solving. On the other hand, there's incidental complexity. The ObjectFactoryFacadeCollection class squarely falls into the incidental complexity category. In other words, the moment you are writing a class of that sort, you have stopped working on solving the problem you set out to solve -- you're solving a problem that was invented by your tools, design, or limits of your understanding.
This isn't necessarily true. Sometimes you do in fact need these types of abstractions. This is why they've been made into patterns. The trick is to not use it before its necessary. The mere existence of it doesn't imply overengineered code.
Yes, of course you do sometimes need these types of abstractions, but you seem to have missed my point: they are a factor of incidental complexity. To restate, they are not at all inherent to the problem you are trying to solve. They are inherent to the tools with which you are solving the problem.
For instance, if your problem is calculating the trajectory of a projectile, a solution certainly exists that does not involve anything at all like an ObjectFactoryFacadeCollection. However, certain solutions involving unnecessarily complex abstractions could conceivably require one. This is incidental complexity. On the other hand, all solutions will require some information about the projectile's velocity, gravity, and so forth. This is complexity that is inherent to the problem itself.
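A sketch of that projectile example (nothing here beyond the physics the problem itself demands):

    class Ballistics {
        // Everything in this function is inherent complexity: initial
        // position, velocity, gravity, time. Nothing factory-shaped appears
        // because nothing in the problem calls for it.
        static double[] positionAt(double x0, double y0,
                                   double vx, double vy,
                                   double gravity, double t) {
            double x = x0 + vx * t;
            double y = y0 + vy * t - 0.5 * gravity * t * t;
            return new double[] { x, y };
        }
    }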
> Philosophically, I agree. I view premature abstraction in the same light as premature optimization.
It's usually easier to optimize later, since optimisations are often just taking sections of code independently and making them quicker. There is the whole 90/10 rule (or whatever it's called) that says it's better to highly optimise a few sections of bottleneck code rather than the whole thing.
Trying to retrofit an abstraction to a piece of code is almost always a horrible experience fraught with mess and compromise.
> Trying to retrofit an abstraction to a piece of code is almost always a horrible experience fraught with mess and compromise.
Yes, and unless the problem is trivial or your experience in the domain is such that your foresight borders on the clairvoyant, this is guaranteed to happen. No matter how much (or little) design you do up front.
The key is to recognize the right time to stop and refactor, so as to keep the pain that comes with learning the problem space to a minimum.
Premature abstractions can have similar issues: unless you have more than 2 cases you don't necessarily know what your abstraction should look like. As the cases pile up you find yourself increasingly shoehorning implementations into abstractions that don't quite abstract correctly.
> Trying to retrofit an abstraction to a piece of code is almost always a horrible experience fraught with mess and compromise.
It is amazing to me that our experiences are so different: I have found the exact opposite of this statement to be true. The only way that I've ever come up with a good abstraction is by starting with something concrete (preferably two or more instances) and factoring out the commonality. Retrofitting a piece of code to an abstraction that was designed in a vacuum tends to be an exercise in frustration, due to the abstraction being shortsighted and insufficiently suited to the problem space.
A well-abstracted program can be easier to optimize because modules are more loosely connected and their internal implementations can be replaced/optimized without breaking the rest of the code.
I disagree. Abstraction is fundamental to programming. The more abstractions, the better. I'm not talking about design patterns here, but abstractions that hold explanatory power in your problem space. They necessarily increase code comprehension, reduce potential bugs, etc.
Abstractions reduce potential bugs by reducing the 'interaction-space' of a particular entity in your code. Think about a program that has 100 variables all in one function. That is potentially 100! interactions between entities in your code. When you make a change, you have to reason about all 100! interactions to be sure you're not introducing a new bug.
Abstractions greatly reduce this space. If instead you have 10 objects which each contain 10 variables, within each object you have 10! interactions to reason about. In the main function that ties the objects together you now have 10! interactions to reason about. This is many, many orders of magnitude easier than the original problem.
The more (natural) abstractions, the better your code.
No. A hundred times no. If you have ever had to make sense of a complex program that was over-engineered with unnecessarily complex abstractions, you cannot possibly think that this is true.
> Think about a program that has 100 variables all in one function [...]
This isn't an example of code that is not abstract enough, it's an example of a basic failure to understand the principles of writing a program meant to be read by other humans. Sure, breaking that code up into understandable chunks is a form of abstraction, but it's not exactly the kind of abstraction that the grandparent was talking about. She was talking about "extremely elaborate object frameworks." I maintain that elaborate object frameworks are a bad thing, unless they are "barely enough to get the job done." Anything beyond that adds unnecessary complexity.
This beautiful and poignant quote sums things up much better than I ever could:
“Perfection is achieved not when there is nothing left to add, but when there is nothing left to take away” – Antoine de Saint-Exupery
He may have been referring to framework-level abstractions, but I was specifically referring to natural abstractions for the problem space. Blanket statements against abstractions in code miss the point--abstractions are fundamental to programming. "Elaborate object frameworks" could mean a few different things. If you're referring to design-pattern-type structures, then I somewhat agree that the fewer the better. But in the general case it is not true that perfection means not being able to take anything more away. Not when humans are the ones writing and maintaining the code.
Abstractions reflect a person's understanding of the problem space. Iterative game development is exploratory. There is some trade-off between redundant code and ease of local modification without worrying about global impact.
In my experience as a person who has been hired before, I can say that I would absolutely prefer well-factored, but rather concrete code over towering abstractions. Sure, if something is a general feature, it should be abstract. There's nothing more frustrating, though, than wading through layers of abstractions, only to find that hidden behind them is a singular concrete implementation.
Right. If you're coming from a C background, you'd be amazed how much information from the source is preserved in a Java .class file. To my knowledge, C++ turns into a binary blob, and a lot of magic can happen along the way. But Java turns into a slightly more machine-readable version of Java. A lot of the structure in the code is actually directly meaningful to the JVM. All the details of the object -- its variable names and types, its methods and their signatures, the class hierarchy, even sometimes the line number in the original source code -- is still there. The preservation is so complete that it has been said all Java programs are essentially open source (the sort of thing people would normally say about, say, Javascript).
Now, last time I played with Minecraft, it had been run through an obfuscator as well, so some of that doesn't apply. In particular, the variable and method names have been reduced to gibberish, and I don't know what sort of monkey business might have occurred around inlining constants.
But macro structure doesn't change. In Java, given the way .class files work, it really can't change. And that's where a lot of the abstraction in a project lives. The class hierarchy is still there. Use of interfaces is still there. How you organize data, how you manufacture objects, which function calls which function, it's all still there. Even little things, like whether your functions return objects, enumerations, or magic numbers, is unchanged.
Even in decompiled code, even right at the start, you can blur your eyes, and at a glance you'll see either a lot of little functions or a few big ones, a lot of little classes or a few big ones, a lot of inheritance relationships through abstract classes or a few simple ones.
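A small illustration of how much survives (assuming the standard javap disassembler that ships with the JDK, and class names of my own invention):

    // Compile these and run `javap -p -c HostileMob`: the field names and
    // types, the method signatures, and the `extends Mob` relationship all
    // come straight back out of the .class file (add -l for line number
    // tables if it was built with debug info).
    abstract class Mob {
        protected int health;
        abstract void tick();
    }

    class HostileMob extends Mob {
        private int attackDamage;
        @Override
        void tick() { attackDamage++; }
    }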
I once spent about nine months creating a largish program, and then moved on to another job. I spent about a day educating the fellow who would be carrying on the maintenance. I barely knew him.
About a year later, we met again. I still didn't know him, but he sure felt that he knew me. I remember he greeted me with, "I love your sense of humor", and "If you really want to know someone, you should work on their code for a year." No doubt he knew me as a programmer much better than I know myself.
Code reviews are a great way to get feedback on your work. Like open-source, knowing your code will be seen by co-workers is a strong personal incentive to produce better work.
If your workplace does not do reviews now, you can still ask a co-worker to look over your code changes. Many people would be flattered that you would ask for their programming wisdom. ;)
Can you point to resources/books where I can read more about design as is taught in CS degrees? I'm curious on their problem solving strategies compared to mine.
"Does anybody know if Notch is self taught? I thought he had a CS Degree from somewhere?"
The two are not mutually exclusive.
I was talking with a friend recently who's working on her Ph.D, observing that everyone who thought I was so smart in college is now better educated, officially, than me. She was like "You're basically self-taught, but with a piece of paper that says you were willing to stick around in college for 4 years. Even in college you were self-taught."
One of my teammates at work recently said I should become a professor. I was like "Don't I need a Ph.D for that?" All my other teammates were like "Naw, man, visiting lecturer!"
If you're actually self-taught, you can treat formal education options as a menu that you might or might not choose to sample, depending on your goals at the moment. You don't have to define yourself as one or the other.
This is interesting. When I studied CS at university, I'd say there were two different types of people that got high marks.
There were people who were self-taught, either before they started CS or once they learned some programming at university; they identified other areas outside the course that interested them and where they could apply their new programming skills.
These people typically got jobs in the software industry after graduation and became software developers.
The other group were people who were just generally high achievers and learned enough programming to pass the course with good marks, but nothing much else. They got equally high marks because they were good at passing exams.
Most of them either retrained for a career in finance, went into academia, or got management gigs at tech consultancy companies.
I can't think of anybody I know who is a working programmer who is not self taught to at least some degree.
> They got equally high marks because they were good at passing exams.
I don't understand how this makes sense, unless being "good at passing exams" means "cheating". Can anybody explain? I hear this said so often, and I usually chalk it up to the speaker rationalizing his own poor scores.
My CS exams were always hard, and the only way to "get good" at passing them was to learn the material.
Different types of learning. I got very high grades in all my courses during college, but a number of them were due to the fact that I am really good at cramming and figuring out what to study and what to ignore. That's what it means to be good at passing exams.
Most of my CS courses I actually spent the time to fully understand and internalize the material. That was the material I truly learned.
But when you write an exam, clues to your mental state are all throughout the material you write on the exam, and that mental state includes everything you know. Someone who's "good at passing exams" can extract clues to your mental state from the wording of the questions and figure out what kind of answer you want. On multiple-choice exams, they only need to come up with 2 bits of information about which answer to choose, and the proposed answers themselves give additional clues to the exam-writer's state of mind. 2 bits doesn't feel like mastery; it feels like an educated guess.
It depends on the subject area, of course, and the skill of the exam writer. But it's very common to be able to pass multiple-choice exams without knowing anything at all about the material.
Knowing the material, of course, helps you come up with the right answer --- but it also helps you a lot with "reading" the exam writer. And you don't necessarily need to know a whole lot about the material to get an advantage that way.
It's also often possible to get acceptable marks on exams by parroting rather than deep comprehension.
I think my test-taking skills were usually worth one to two letter grades' worth on exams when I was in school. I could usually get a D or C on exams where I should have gotten an F, and an A on exams where I should have gotten a B or a C. A little while back, I got 97% correct on the ai-class final exam without having learned more than half of the material. (In that case, though, I think the test also failed to cover most of the material.)
I think non-multiple-choice math exams are probably the hardest to "fake out" this way.
There are other people whose test-taking "skills" actually have a negative effect on their scores. First, they study the material in their bedroom or at the kitchen table, rather than the classroom, unnecessarily impairing their recall when the exam comes. Then, they show up to the exam exhausted and sleep-deprived from cramming all night, damaging their ability to think creatively or tolerate stress, and then they have an extreme stress response from the test-taking situation, further handicapping their ability to think. It's easy to imagine that someone like that could fail a test I'd get an A on, with the same level of knowledge.
"Someone who's "good at passing exams" can extract clues to your mental state from the wording of the questions and figure out what kind of answer you want."
I actually tried an experiment on this when I was in high school. I took the AP Comparative Government exam without ever having taken the course, or really having any sort of academic exposure to it (hey, it was free with the purchase of the AP US Government exam, and I was taking the day off from school anyway for the latter test). My only knowledge consisted of what I read in the newspapers, plus half an hour with a test prep booklet at breakfast that morning, plus whatever I could glean from the test questions themselves.
I scored a 3 on it. Not a great score, but passing. Pretty good, actually, considering that the test involved writing 4 essays on a subject I knew nothing about. So I figure perhaps 50% of the outcome of a test is knowing the material and the other 50% is test-taking skills.
Ironically, though, I think that the skill of extracting subtle clues to the mental state of the people around you, and the answers they expect, is far more valuable than any subject matter you learn in school. It's absolutely essential if you work in an organization, so you can understand who the decision makers are, what their priorities are, and what will really impress them without them having to tell you anything. It's absolutely essential if you manage people, so that you understand why they're working for you and what will motivate them to do their best work. And it's absolutely essential if you choose to strike out on your own and be an entrepreneur, because that's how you tell what customers want. They're generally indifferent to you and often have no clue what they actually want; there's no way you'll get them to tell you.
Interesting... if true, what this could mean is that virtually no-one who is taught programming at school ends up enjoying it. You have to be one of the ones who seeks it out before you are required to learn it, otherwise you're not the kind of person who would tolerate it as a job?
I don't think that's really true. I knew someone in college who had never touched a compiler before her sophomore-year intro CS course. She ended up graduating with high honors, went to work at MIT Lincoln Lab for a few years, and is now doing a Ph.D in CS.
I know several other people here at Google that went into college studying things completely different from CS, took a couple courses in it, and discovered they loved it.
I think the real determiner is what happens after college, after the academic support net is taken away. The people who go on to become really great programmers use it as a springboard to start seeking out information on their own. You can always tell who these are once you hire them, because they'll ask you several questions about the system to orient themselves, get some code & documentation pointers from you, and then go about their merry business learning everything they can, including tons of stuff you didn't tell them (and often, didn't know yourself). The mediocre ones learn just enough to accomplish the task at hand and then ask you again as soon as they have to do something new.
I have similar issues. For most of my adult life I've had so many interests that it is just impossible to keep up with all of them to the level I want. Even within computing as a field I am overloaded. I read a lot of stuff on HN and elsewhere that I find interesting, and I agree that it makes me feel like I should understand it all. This is the first time I've come to that realization.
My current solution to the problem of overwhelming interests is starting grad school in CS; by going through a program, I am forced to focus on certain things by outside forces, thus saving my sanity.
I already am one, kind of... I work for these guys: www.mitre.org
My day job is broad enough that I get to sate most of my professional desires in one way or another. Now I have grad school to explore some topics in depth and expand my education. It's working out very well so far.
That's an interesting dichotomy. I have a CS degree from a well-respected school, but as far as programming goes, I would say I'm at least 90% self-taught. I guess I assumed that all programmers are essentially self-taught, regardless of educational background.
There is more knowledge out there than you can ever learn. Try to focus your learning on what helps you solve your clients' problems. You'll become an expert in only a thing or two, and you'll be able to talk about it like the people here on HN.
Also, the smart comment writers are not representative of the average HN visitor/population. There are always people who are experts in their niches.
This is true. However, at some point I want to move my career in a different direction, and most of what I learn on the job is industry-specific, and the industry itself is not something that hugely interests me (although I have not decided for sure what I want to do). So I am trying to learn as much fundamental stuff as possible.
The issue is deciding how much one should know about something before you confidently put it on your resume.
For example I would say that I am an OK Java programmer but I avoid using enterprise frameworks which are commonly used in many companies so there are many parts of the language which I am not familiar with simply because I have never had cause to use them, for example I do almost all of my persistence using a database and ORM so I almost never have to use Java's concurrency locking features in the wild as I do all my locking in the DB.
The same with functional programming, there is no reason to learn it for my job but I get a feeling it will become more important as time moves on so I should know something about it.
There is a lot of stuff on HN with people saying that everyone should have implemented a toy compiler at some point, and if you haven't then you're not a serious programmer -- or perhaps it's a toy OS, etc.
"I still stubbornly believe the whole “private members accessed via accessors” thing in java is bullcrap for internal projects. It adds piles of useless boilerplate code for absolutely no gain when you can just right click a field and chose “add setter/getter” if you NEED an accessor in the future."
Is this a controversial stance? It seems like common sense, unless I'm misunderstanding something.
EDIT: To clarify: I assume he's saying "don't add accessors by default for all private members unless you need to, because you can always go back and add it if you really need it", which is common sense for any project, external or internal. I'm pretty sure he's not saying "don't add accessors, just use public members", which is controversial, IMO, even for internal projects.
Eh, using protected/private members is one of those things that makes sense only insofar as you are paranoid that you can't just access internals directly.
That said, consider the following. I've got a renderer that accepts a sprite to draw.
Version 1: I pass the sprite object to the renderer, and the renderer gets the texture contained in the sprite and draws it, scaling it according to the sprite's size and position public members.
I then decide that all of my sprites draw slightly differently (different scales/glow effects/color tint/whatever).
Version 2: I pass the renderer my sprite, it gets the size and render settings members from the sprite, and then draws it.
Oops! I now decide I want a hierarchy of sprites, so that I can add hats.
Version 3: Same as version 2, but in the draw routine now the renderer asks the sprite for its parent to calculate its position and size, and now I've got more renderer code and more sprite code. I can't just use the exposed position member of the sprite--it has context and logic behind it that must be done.
Oh wait, I have a problem in which the sprite glow might change depending on how many times it's been drawn.
Version 4: The renderer now has to keep track of how many times it's drawn a sprite. My renderer is getting a bit chunky now.
Did I mention scale should change according to time but also the sprite's velocity?
Version 5: Renderer grabs these variables directly, and does all that other stuff too.
...and that now I want to let that scale be overridden by a isConstantSize flag on the sprite?
Version 6: aaaaaargh
...and that now the sprite can decide it wants to defer to a parent sprite's settings sometimes?
Version 7: what did i do to deserve this?
~
Proper encapsulation and use of accessors is clearly a sign of paranoia. That said, as your code evolves, you might find that this paranoia is completely justified. Hiding queries behind interfaces lets you future-proof them and trivially add more advanced behavior and hooks.
Even when you are working by yourself on an internal project, you are never working alone. You are also working with your future self, and that person is guaranteed to want something different from you and to make assumptions that you are not.
Oh, and for good measure--every time you force yourself to use an accessor, you are providing the opportunity for somebody down the line to add debug and validation code. Or to add locking and critical sections. Or replace a dumb get and recalculation with caching. Or replace caching with a dumb get. Or a database access.
This flexibility is not something you might think you need.
You might also never need to debug your own code, or check your assumptions about variables, or track who is accessing what when.
(for what it's worth, I don't use protection for data members of Plain-Old-Data types [say, messages or vectors] very often--they're too dumb to deserve this sort of extensibility or cost)
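For what it's worth, here is roughly the payoff in the sprite saga above, sketched with made-up names:

    class Sprite {
        private float x, y;
        private Sprite parent; // added later, for the hat hierarchy

        // Callers have written sprite.getX() since version 1; the body is
        // free to grow parent-relative logic (or caching, or logging)
        // without a single call site changing.
        float getX() { return parent == null ? x : parent.getX() + x; }
        float getY() { return parent == null ? y : parent.getY() + y; }
    }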
For that I prefer the ActionScript3.0 solution: methods that imitate attributes.
You replace
public var myAttribute:int;
with
public function get myAttribute():int;
The caller uses both with the same "object.myAttribute", so you can just replace one with the other when the need arises, without changing any other code.
This allows you to use all sort of syntactic sugar ("object.myAttribute++", "+=" and other assignments), and for dynamic languages you can get and set values just from a reference to the object and the attribute's name. This last part is invaluable for animation engines.
> Oh, and for good measure--everytime you force yourself to use an accessor, you are providing the opportunity for somebody down the line to add debug and validation code.
So add it then. Don't pre-generalize everything just in case.
What, is there some kind of run on parens in your neck of the woods? Running such a lean startup you can't afford a function invocation?
This isn't premature abstraction or architecture astronautics--this is just good practice.
If you are in a language like Java or C, your compiler should optimize away the call if it doesn't do something clever.
If you are in Ruby, this is so easy to do that it doesn't even need mentioning--you just call the variable directly and behind the scenes you have replaced the variable name to be a function doing your magic instead.
If your fingers protest at the additional "get" in your function names, go and buy yourself a real IDE.
EDIT: Parent thread added clarification worth noting here. I'm not saying you should add accessors for every protected member--that's just as bad. I mean to say you should only expose members that absolutely require it (the fewer the better, generally), and only do so through function interfaces you can hook. :)
Sigh. If I had a dollar for every "your compiler should optimize away X", I'd be retired. Should doesn't mean does.
Plus, no, often I can't afford a function invocation. Because it's yet another cache miss.
If all of those things don't matter (or you work on a large enough team that the benefits of abstraction are worth it), yes, add getters/setters everywhere. For the rest of us, we might have actual _reasons_ why we don't cargo-cult our code.
One good reason would be that setters and getters are way easier to mock out than direct access to a member variable.
And we all want to have our code tested, don't we? :-)
> Running such a lean startup you can't afford a function invocation?
This actually isn't as wildly hypothetical a situation as one might guess. At least in the Android world, "Avoid getters and setters" is on the short list of performance suggestions.
Ok, but in the general case (Sun/Oracle JDK) I'm pretty sure that there's almost no overhead for using an accessor anymore. That wasn't true of early JDKs though.
If you are in Ruby, you don't get a choice about accessors: "@foo" and "@bar" are private member variables. You have to create accessors for them. Like you said, though, the accessor / property generation command is easy: "attr_accessor :foo, :bar"
Even if the (Java, or otherwise) IDE can generate piles of crap code, I still hate reading piles of crap code, though. Sometimes I just want a "struct" :-)
> If your fingers protest at the additional "get" in your function names, go and buy yourself a real IDE.
If you can't see how abstraction layers affect performance, go buy yourself a real education. This isn't about premature optimization, it's about not applying premature generalization.
Sometimes it immediately makes sense to add accessors. In that case I add them right away.
Sometimes there is no apparent need for accessors, so I don't add them yet. They're easy to add (you argue so vehemently yourself), why would there be a problem adding them as needed?
A blanket rule of "just add them, always" is just a crutch for mediocre developers who aren't capable or willing to actively think about what they're working on.
This smells of bucking conventional wisdom simply for its own sake. Unless you have a good reason not to follow best-practices, you're a bad programmer if you don't (this doesn't always apply if you're writing a project for yourself).
Conventional wisdom is there for a reason--it was wisdom hard fought over many iterations from people who came before. Conventional wisdom has won out against many competing ideas, a survival of the fittest of sorts. You may not always fully comprehend the reasoning for a particular 'best-practices' rule, but you're foolish not to follow it for that reason.
Adding accessors is not 'premature generalization', it's just how you write well engineered code.
Accessors are hardly a pre-generalization. I believe he was simply listing various use cases where they can apply. Yes, software requirements change and some things don't (in practice), but you won't know precisely what will and won't need them until you ship. And even then, you may not know what will be required years from now. It seems like the rational option is to sink a second or two more time into the development in order to save potentially long and annoying refactors later. I've been using Visual Assist and I can actually get it to dump an accessor into the console faster than I can type out the full variable name (for variable names that aren't i, j, k).
On a more serious note, trivial accessors generally don't result in any more compiled code than direct field accesses, but do have the advantage of abstraction. If your development process is limited by the time it takes you type out the name of an accessor (or in my case, the first 3 letters), then you must be really productive or not do other things like documentation or testing.
Except there are cases where it will be way more pain later and no perceptible loss ahead of time. Imagine you are debugging something and you want to know where your public variable is getting set to 3. If you are using Java and public members, you will have to check every single call site, whereas if you have a setter you can just add a single if(x==3){}, put a breakpoint in there, and you have done in 30 seconds what would have taken 30+ minutes.
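Roughly like this (a sketch; the field and condition are made up):

    class Entity {
        private int x;

        public int getX() { return x; }

        public void setX(int newX) {
            if (newX == 3) {
                // Put the breakpoint here: it fires only for the interesting
                // assignments, no matter how many call sites exist, and the
                // condition can be as complex as you like.
                System.out.println("x set to 3");
            }
            this.x = newX;
        }
    }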
Eclipse can add a breakpoint to a variable. Any time that variable is accessed or set, the breakpoint is tripped. That's less than 30 seconds, that's like 5. You literally just click to the left of the variable definition.
The point of my example is that you can put a breakpoint under any complex condition. I guess I should have made it more complex, something like if(someComplexCalculatedValue() == 10). Anything that can be expressed can be added as a breakpoint if you use setters; then you won't have to step through every assignment, which could be thousands in a large application.
Engineering has always been about trade-offs. Every time I'm forced to use reflection to get at private members, I'm reminded that these abstractions are never free.
Controversial might be too strong a word, but there is a counterargument to be made.
Suppose you have a member named "x." Someday, you might want to stop storing x, and instead make it a calculated value. At which point you'll need a method. Or, when setting x, you may someday want to increment a counter, or transform the input data, or take some other action. Again, you'll need a method.
Yes, it's very easy to add an accessor later. But then you'll most likely have to edit every line of code that accessed the now-defunct member variable. Your IDE may make this easy, but I'd still prefer not to have to do it.
That's not to say writing accessors is always the best choice. I'm just saying there can be good reasons to do it. Like all things in software engineering, there's no one-size-fits-all rule about this. Notch is right about the bloat accessors create, so you have to weigh the advantages and disadvantages yourself in each case.
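A sketch of that first case, with hypothetical names:

    // Before: "x" is a stored member, exposed only through methods.
    class PointV1 {
        private double x;
        double getX() { return x; }
        void setX(double x) { this.x = x; }
    }

    // After: "x" becomes a calculated value, and the setter can transform
    // its input or bump a counter. Code written against getX()/setX()
    // never notices; code written against a public x would all need edits.
    class PointV2 {
        private double base, offset;
        private int writes;
        double getX() { return base + offset; }
        void setX(double x) { writes++; this.offset = x - base; }
    }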
C# allows it, but Java doesn't have this. Hence it's standard operating procedure to just use setters from the beginning, because it might be nontrivial to update all references.
> The way current C# does accessors / properties is just about perfect
As far as I'm concerned it's quite far from it, because C# still allows public fields, properties and fields are incompatible, properties and methods are separate and Microsoft specifies different naming conventions for fields on one hand and properties & methods on the other.
Perfection is Smalltalk's way of doing it.
And in a syntactic line closer to ALGOL, Ruby is about as good as it gets: no public fields, "properties" are normal methods (setters have a little bit of syntactic sugar, but not much) and auto-generating a bunch of getters, setters or both is a class-level method call.
> What is wrong with different naming conventions for fields vs. properties, and methods? It makes it immediately obvious what is what at a glance.
Which is precisely one of the problems of properties in C#, it makes fields and properties look and feel extremely different from the outside which violates the uniform access principle.
As I noted, ideally C# should not have public fields in the first place.
You say "No." so defiantly, but the reality for Java - which is what everybody else was talking about in this thread and the article - the answer is quite often "Yes."
Code bloat, at least _compiled_ code bloat, tends to be less of an issue with trivial accessors in C++ since they are generally inlined, but it does add a lot of ()s. For non-trivial accessors, yeah, you can inline a lot of copies of that logic.
    Vector v1(v2.getX(), v2.getY(), v2.getZ());

vs.

    Vector v1(v2.x, v2.y, v2.z);
I can't say I know the average shelf life of a software component in various languages, but I can say that at least things like Java have managed to justify their use of accessors by that metric. Maybe the focus should be on writing less throwaway code than deciding whether to "invest" a few extra minutes or not on accessors.
I think most companies have tons of legacy code that is still lying around and in production use. When I was at Dreamworks Animation, we were still using tools written back in the 1980s to create animated features; albeit we kept improving on them whenever they didn't fit the bill.
"""Most software are outdated after a couple of years."""
You'd be surprised. Tons of code runs in production, even in the latest of shiny systems, that was written 10 and 20 and 30 years ago -- either in whole or in parts, refactored etc.
From 1986's NeXT OS that is now OS X Lion and iOS 5, to Bill Joy's TCP/IP, to Emacs.
And tons of enterprise/banking/financial/military systems use ancient code, even 70's COBOL...
Gosu supports a property syntax that lets you use = to assign things to get/set method pairs. In fact, when you load existing Java code, it replaces getFoo() and setFoo(foo) methods with a property.
This makes the code neater and enforces the abstraction--the user of the property does not need to know whether you've implemented it as a simple variable or a complicated method.
A bunch of other languages let you do this as well.
"""Yes, it's very easy to add an accessor later. But then you'll most likely have to edit every line of code that accessed the now-defunct member variable. Your IDE may make this easy, but I'd still prefer not to have to do it."""
Easy is an understatement. It's 2-3 clicks away in Eclipse.
The getter/setter stuff in Java is a huge waste of space and time. However, at least in the enterprise space, all of the tooling and frameworks assume your code follows the JavaBean naming conventions. For this reason alone I always make sure all my classes follow the pattern, and whenever I train new developers I make sure they get a large amount of experience building out bean definitions and understand how much time Spring and Hibernate save as long as the conventions are followed.
But yeah, it's a bunch of ridiculous, generated spam.
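(For anyone who hasn't seen the convention being discussed, here is a minimal sketch of the shape that convention-driven tooling such as Spring and Hibernate discovers via reflection. The Customer class and its fields are hypothetical, purely for illustration.)

    // Hypothetical JavaBean sketch: a public no-arg constructor plus
    // getFoo()/setFoo(foo) pairs named after the private fields is the
    // shape convention-based frameworks expect.
    public class Customer {
        private String name;
        private int loyaltyPoints;

        public Customer() {}  // no-arg constructor required by the bean convention

        public String getName() { return name; }
        public void setName(String name) { this.name = name; }

        public int getLoyaltyPoints() { return loyaltyPoints; }
        public void setLoyaltyPoints(int loyaltyPoints) { this.loyaltyPoints = loyaltyPoints; }
    }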
Oh thank God somebody in authority is calling this bean bull_hit what it is.
After reading the Eiffel book and thinking about things like programming by contract and class invariants, the bean pattern which constructs empty, initially useless (or at least unreliable) objects seemed like a huge step back.
The crux of the argument (and most of the book for that matter) is that you want to avoid mutability whenever possible. Adding setters for every field by default means mutability is the default mode for your application. Adding getters for every field by default means you lose any advantages of encapsulation.
At one time, Java Beans were a heavily marketed pattern by Sun. Joshua Bloch came along and said "hey, this is a bad pattern" (and maybe others, but he's the one I always think of).
Why does everything have a getter/setter by default?
That is a pretty horrible anti-pattern, is it something to do with being able to serialise the state (including internal state) of the whole object?
It's about hiding implementation. If you access an object only through methods then the implementation can change without the client code having to know or care (aka be recompiled)
The problem is when those getters and setters have other effects (maybe invalidating a cache?). In Java, it's considered best to just go ahead and add the getters and setters first, so that when you need to add these side effects later you don't have to modify lots of code using your library (if that is even possible). Languages like Python make this unnecessary since they have first class properties:
    # Old
    class Foo(object):
        def __init__(self):
            self.x = 0

    foo = Foo()
    foo.x = 1

    # New
    class Foo(object):
        def __init__(self):
            self.__x = 0

        @property
        def x(self):
            return self.__x

        @x.setter
        def x(self, val):
            self.__x = val
            do_other_stuff()
"In Java, it's considered best to just go ahead and add the getters and setters first"
I don't think this is true. I'll concede that it may be a requirement for certain things like an ORM API, but in the general case, it's bad practice. The mere act of adding a getter violates immutability.
> The mere act of adding a getter violates immutability.
Uh what? No it does not. Writing stupid getters (or classes) might, but writing getters does not "violate immutability".
Naturally, getters are not of much use if all your fields are final and hold immutable objects, but the latter can be tricky in many OO languages.
Getters significantly improve the situation there by providing a point at which you can clone your internal state and return a copy, letting you keep your object effectively immutable even if you have mutable fields (this assumes, of course, that you can deeply copy all your mutable member objects).
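(A minimal sketch of such a defensive-copy getter, assuming a hypothetical Appointment class wrapping a mutable java.util.Date field:)

    import java.util.Date;

    // Hypothetical sketch of a defensive-copy getter: the mutable Date is
    // copied on the way in and on the way out, so callers never hold a
    // reference to the object's internal state.
    public final class Appointment {
        private final Date start;

        public Appointment(Date start) {
            this.start = new Date(start.getTime());  // copy in
        }

        public Date getStart() {
            return new Date(start.getTime());        // copy out
        }
    }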
On the other hand, you could have "setters" using the same naming convention which do a clone-and-replace (and return the new object); that would not violate immutability, and would be easier than building objects from scratch every time from the outside, e.g.
    Type setFoo(FooType foo) {
        return new Type(
            this.field0,
            this.field1,
            this.field2,
            foo, // bam
            this.field4);
    }
many functional languages have that behavior when manipulating "records".
All of your objects are immutable? Besides, that's hardly an argument for having public members; being able to remove the setter or getter (or conditionally throw exceptions in them) is one of the exact reasons why you write them.
The intent of the comment you are replying to was "best practice is: if you would add a public member, instead add a private member and a setter & getter", which is generally correct.
We just discussed this at our local JUG meeting last night. My take is that accessors are popular in Java because most popular frameworks (esp. Spring) have a heavy reliance on JavaBeans. Once you start using objects everywhere as JavaBeans, you've no choice but to add them. And the IDE makes it trivial. Yes, there's constructor instantiation in most DI containers, but it's not frequently used.
I do agree that most modern languages have a much more elegant solution for this.
It's a bold stance, at least. From my experience, it's idiomatic Java and just one of those things you have to do.
In practice, it's useful because it allows more easily for instrumentation later on. And you can easily throw a "synchronized" on the method if necessary.
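(A small hypothetical illustration of that point: because callers go through the accessor, making it thread-safe or instrumented later is a change to one method rather than every call site.)

    // Hypothetical sketch: the accessor is the single place to add
    // synchronization, logging, metrics, etc. after the fact.
    public class Counter {
        private long value;

        public synchronized long getValue() { return value; }
        public synchronized void increment() { value++; }
    }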
When watching Notch code I noticed a lot of code-smell habits, relying on inheritance over composition being one example. But my god, his level of productivity and ability to get shit done is light-years ahead of your average programmer, and you've got to respect that more than anything.
He uses Java, there's only so much stink you can remove from Java code, mostly you just push it around and try to make it as small as possible. Java is still a decent production language for some tasks, but its lack of some basic features and syntactic niceties means you have to hold your nose while coding most of the time.
Can you provide any examples? Was it the tools he was using, notes he was making, or diagrams he was drawing? I haven't seen him program, so I'm curious as to what makes him so much more productive.
1. I think programming is a multi-disciplinary task. Notch is obviously among the best at "writing a lot of working code". He's pointing out that he could improve in software design. I've worked with many folks who are obviously strong in one area but weaker in others.
2. I think his coding sloppiness comes through in Minecraft: it's an incredible game, but I've stopped playing at times due to frustration with crashes and corruption.
...oh, and 3. My guess is that the "large US based game developer" was Valve.
Just guesswork, but I'd imagine it was EA, who have a subsidiary (DICE) in Stockholm, where Notch lives. Valve (as far as I know) don't have any Scandinavian subsidiaries.
"I had to work on programming more carefully and think things through before diving in, or I’d have a hard time working in a large group." This pretty accurately describes what I went through before joining a larger organization. However sometimes larger companies train you into a mentality of working fast and letting QA sort it out, which isn't always the best thing.
After watching Notch code a bit, I'd say that the way he dives in and works has been very successful for him. It's a different style, but I don't think it's bad in and of itself. It may well be bad when working in large groups, but that is not the only way to work.
I personally don't think working in a large group is particularly rewarding for a programmer.
It's easier to "dive in" when it's a short term project that you are working on alone and it's in a problem domain that you are familiar with (in notch's case games programming).
I'm at the point now where I can build an e-commerce site in an MVC framework and implement client & admin logins, stock management, product search and a shopping cart without having to engage my brain, because the structure to me is fairly obvious.
On the other hand if I had to design a game I would have to think long and hard about how the different components would fit together and even then I would most likely get it wrong somewhere.
I think there's something to be said for starting out with this sort of confidence, especially when you're self-taught. That sort of confidence can provide some incredible motivation. If we realized how much work it would take to become well and truly good at things like coding, would we start down the path with such fervor?
As a self-taught programmer I started out with completely the opposite level of confidence, and considered myself a really lousy programmer.
I always assumed there were all these "real", "professional" programmers out there that really knew what they're doing, pick crystal clear abstractions, had profound knowledge of security implications, database optimization, data structure usage, good use of object orientation, introduced no memory leaks, knew just what algorithm to pick and wrote great, clear and understandable code.
I still kinda think I suck, but at least now I know that I'm not alone :)
Have the humility to know you can always get better.
The moment you believe you're the best at anything it becomes very dangerous, because you have no reason to improve: you're the best. Generally speaking there are very few people who are the best at anything (in fact, only one for each thing). Most likely there is plenty of room for growth.
I'm glad the author used the negative comment as motivation to get better.
"""
But. I still stubbornly believe the whole “private members accessed via accessors” thing in java is bullcrap for internal projects. It adds piles of useless boilerplate code for absolutely no gain when you can just right click a field and chose “add setter/getter” if you NEED an accessor in the future.
"""
It's comforting to see that other people fluctuate between thinking they're awesome and thinking they're awful.
Experience has taught me this:
Never build your self-esteem on comparison with another
And by that I mean, you should never judge your own programming abilities based on other peoples' apparent abilities. If you're programming new stuff regularly, enjoying it and listening to what other programmers have to say, then the chances are you're getting better at it, and that's enough.
Best Practices are like patterns: solutions to common problems. Sometimes there are problems that require an uncommon solution. They are rare, but they exist.
I just hope people don't disagree with Best Practices just because they want to show their displeasure with authority...
> Best Practices are like patterns: solutions to common problems. Sometimes there are problems that require an uncommon solution. They are rare, but they exist.
This is very interesting, and mirrors some thoughts I've been having myself.
I would like to see someone create a list of examples of uncommon design patterns, and list cases in which they might be warranted.
I think he wants to improve his design skills, not necessarily "be like everyone else".
Just because Minecraft is a success does not mean there are not parts of its codebase that could be improved in a way that would improve their ability to iterate and add more features quickly in the future.
In my experience adding new features quickly is a loaded term. My guess is that Notch would know exactly how to add new features to Minecraft very quickly. He has intimate knowledge of the codebase. However, asking someone else to do so is where this ability to iterate will be lost.
I admire well-written code, but a lot of times I see gold plating where it isn't really necessary, because "maybe one day we could add X". You still have to maintain the knowledge of where that easy addition could be plugged in.
Code is great, we can just throw parts of it away and refactor it to add new features. Which is possible if you follow best practices. ;)
Being able to hand off code to someone else is always going to be useful in any non-trivial program. Very rarely are successful projects handled by one person in a vacuum.
I think the issue is that in designing the initial structure you have to make decisions about which parts of the code will be important and how the relationships between objects etc. will work. If something then differs massively from your initial assumptions, you are in for an enormous refactoring job (as well as often a big data-wrangling job if your app is already in production).
When I have done work for clients I have had fairly simple feature requests which I had not anticipated and which required fairly fundamental restructuring of the codebase; if I had known these in advance I would have designed it differently.
Even if I had not known the specific changes in advance, knowing how much the project was likely to change would have led me to alter the design in places. Having said that, there were also areas where I spent a lot of time creating code that allowed for flexibility where it simply was not required.
I think the biggest thing he emphasized is the fact that you can never be "The Best Programmer" because there is always something to learn and ways to grow. That is probably one of the keys to a good developer, they know that there is still plenty for them to learn.
Yeah. A couple of times, I've met people who have "dabbled" in programming - maybe done it for a year or two professionally - and said they left the profession because they'd pretty much learned everything there was to know about programming. I'm always impressed with their capacity to absorb so much, since I started programming when I was 8 (on a ship, in the middle of the Atlantic, without a computer) and I'm now 37 and estimate I know about 5% of all there is to know about programming (an estimate that continually drops as I learn more).
Well, it depends. You could probably never stop learning the tools or the new languages; they're continually re-invented. However, you can certainly become bored with it; learning new tools and languages becomes a skill in itself. At which point you think, "I've learned a new tool, so what?"
I have no beef with someone getting bored of what they're doing, but then they should say just that. To say "I know everything there is to know about programming" after programming for a year is laughable.
You're right that if you know C++ and you learn Java, you're not really learning all that much. But if you've spent your entire life programming in those types of language, and then learn Lisp, or Forth, or even just assembly language, your entire mental model of computation is turned on its head. Heck, learning C would be an eye-opener for someone who has known only Java.
You can go even further. My own trade is ASIC verification - writing testbenches to test functional correctness of chip designs. I've done a bit of FPGA design, too. I've chatted to software guys far above my humble skill level who don't grok either of those two domains because they're completely foreign to their way of thinking. But I'd file both under the broad umbrella of "programming".
I think you can learn constantly for a lot longer than a year or two without learning any new languages or tools, just by working on different projects, in different domains, using different paradigms. [EDIT: clarification]
They probably didn't mean that they "know everything there is to know" unless they are very arrogant.
Perhaps they meant that they knew the syntax of at least one programming language and were at a point where they felt it would be easy to learn more if needed, but they no longer felt the need to study it as an end in itself for their purposes.
Having said that, it's easy to overestimate your level of knowledge when you haven't been introduced to anything more difficult.
You may believe, for example, that you can easily solve, say, traveling salesman, because you figured out how to write the naive n! solution, tested it with 5 nodes on a fast computer, and it worked fine.
If you never have to test it with 1000+ nodes, you may never find the performance inadequate, so you will never have to go down the dynamic programming rabbit hole.
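(To make that concrete, here is a rough, hypothetical sketch of the naive approach described above: a brute-force solver that tries every ordering of the cities. It runs instantly for a handful of nodes and becomes hopeless as n! explodes; the class name and toy distance matrix are invented for illustration.)

    import java.util.ArrayList;
    import java.util.List;

    // Naive O(n!) traveling-salesman sketch: try every ordering of the cities.
    // Fine for a handful of nodes; hopeless for anything like 1000.
    public class NaiveTsp {
        static double best = Double.MAX_VALUE;

        static void permute(double[][] dist, List<Integer> path, boolean[] used) {
            int n = dist.length;
            if (path.size() == n) {
                double total = 0;
                for (int i = 0; i < n; i++) {
                    total += dist[path.get(i)][path.get((i + 1) % n)]; // close the tour
                }
                best = Math.min(best, total);
                return;
            }
            for (int city = 0; city < n; city++) {
                if (!used[city]) {
                    used[city] = true;
                    path.add(city);
                    permute(dist, path, used);
                    path.remove(path.size() - 1);
                    used[city] = false;
                }
            }
        }

        public static void main(String[] args) {
            double[][] dist = {        // toy symmetric distance matrix
                {0, 2, 9, 10},
                {2, 0, 6, 4},
                {9, 6, 0, 3},
                {10, 4, 3, 0},
            };
            permute(dist, new ArrayList<>(), new boolean[dist.length]);
            System.out.println("Shortest tour length: " + best);
        }
    }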
My Mum had bought me a book called the Usborne Guide to BASIC Programming, as my Dad has just bought an Osborne 1 computer for his new consulting business. So on our way across the Atlantic on a Polish vessel from Montreal to Dover, I was enthusiastically learning about programming while the machine sat, frustratingly, in the cargo hold, inaccessible to my eager mitts.
I still have the Osborne 1. I last powered her up in 2005, when Adam Osborne passed away.
I think that the languages and platforms used shape the way one thinks about programming more than anything, and that being exposed to different ecosystems is important. For example, working in a team of experienced developers that uses Java will probably rid one of most "cowboy coder" tendencies and instill a habit of reflecting on architectural patterns, etc. However, it can also lead to over-engineering and "architecture astronauts", so then going to a more dynamic platform like Ruby on Rails (or at least using its methodologies in projects) helps balance this out. Programming is constantly evolving and it can be hard to figure out which patterns and processes to follow, but one thing I feel is important is working in teams, as it provides a good way to gauge yourself against others and helps expose your own weaknesses.
Reiterates the fact that the more you realize you are a terrible programmer, the better you are. Very awesome of someone with his stature to flat out admit the fact that he has flaws. Kudos!
I have a Master's degree in CS, but my programming is mostly self-taught, learned by reading other people's code, good code and ugly code. I know about design patterns only enough to avoid them like shit. It is just easier to hide shit under a shining cover. There is a saying for writers: "every word tells". My principle is "every line counts". However, this is almost impossible in a team environment, so I still think that the best code is a one-man job.
I honestly don't see what about this post is Hacker News worthy. Is it just the fact that the post was written by the Minecraft developer? Or is it the (somewhat questionable) display of humility?
It's sort of like posting a long list of your accomplishments and then saying "but I don't consider myself special, and I have much more to learn". If the sentiment was true, you probably wouldn't say it, and almost certainly wouldn't say it this particular way.
There is definitely a star power associated with someone like Notch which automatically gets more views. Same goes for a lot of the people you see get submitted here and voted up.
Is that the only reason? Probably not, but when someone famous says something it will get a lot more attention than if a nobody says it. Ceteris paribus.
If you aren't interested in a post, don't vote it up and don't comment on it. This type of meta-commentary is not useful. The (as of right now) 161 votes and 99 comments on this post indicate that it certainly is HN worthy to a significant number of people.
1. There are people who are so much better at programming than me, that I could work my entire life and never be as good as they are right now.
2. There are people who are so much worse at programming than me, that they could work their entire lives and never be as good as I am right now.
It's a continuum, a hill. Feel the gradient, walk uphill.