The Biggest Difference Between Coding Today and When I Started in the 80s (thecodist.com)
164 points by rb808 on March 21, 2017 | hide | past | favorite | 112 comments



I miss that too (90's for me). Honestly I don't enjoy programming much anymore for this reason. I remember when it was normal for people to write their own data structures for their programs -- and while not "practical," there's a certain joy you get debugging your own sorting implementation that you don't get, say, copying and pasting frameworks' error messages into Google.

One problem with today's coding environment from an enjoyment perspective is that if something is fun, it will probably be written and packaged into a library. Data structures, algorithms, visualization libraries, abstractions on top of OpenGL, mathematical functions, data stores, neural networks, etc. -- all these things are great fun to actually try to write, and all of these things already exist in better forms than we'll ever write as individuals. Sometimes practical programming today feels more like gluing everybody's fun code together with your not-so-fun code.

Programming when I was growing up: https://www.toysperiod.com/images/lego-parts.jpg

Programming now: http://www.toysrus.com/graphics/product_images/pTRU1-1912094...


Seriously?

I see SOOO many people making AMAZING things gluing stuff together. Maybe they're gluing together three.js with WebVR. Maybe they're gluing together the JS Magic Leap support with a Kinect and an Arduino. I see artists making art throwing together Unity with a few plugins for networking etc.

Sounds way WAY more fun than me writing

    10 moveto rand(320), rand(240)
    20 lineto rand(300), rand(240)
    30 goto 10
If you like writing the low-level stuff that's great. Knock yourself out. Me, I want to move on to the bigger stuff. Some people like to build cameras; others just like to use cameras to make movies. I'd prefer the latter (but I'm glad someone likes to make the cameras so I don't have to).


Well at least I understand what your three lines of code do. Though not why you want them.

I don't, however, understand what "three.js" is or what "JS Magic Leap" is or... well, let's just say I couldn't understand any of those things you say are SOOO AMAZING.

Of course they might be amazing, I wouldn't know. But it has the smell of breathless cheerleading.


three.js is a quite robust WebGL framework. I'll let JS Magic Leap be your homework.

Sometimes libraries and frameworks make our lives easier. I'm glad the tools exist. I had much more fun writing all the little systems that go into a working WebGL app, and working with the WebGL API directly, than using any of the popular libraries. But good luck teaching any meaningful amount of WebGL to a group of students who don't even know what HTML stands for without a library like three.js.



> Seriously?

Yes, seriously. Artists are great. I'm not an artist. Unity bores me; it sounds more fun to homebrew an engine and build on top of that.


I assume you meant Leap Motion instead of Magic Leap?


>One problem with today's coding environment from an enjoyment perspective is that if something is fun, it will probably be written and packaged into a library.

People don't write code for free if it's boring. Open Source is all about making something cool. No surprise that cool code gets packaged into an OSS library.


I don't wish back those days. I get that it's interesting to work to the bottom of things yourself, sure. But I've run into several problems that I couldn't have hurdled on my own, in any amount of time, without the help of the internet.

- A broken DLL from MS. This happened to me. I did a lot of sleuthing, taking things apart, and it just didn't make sense. It was for a DB adapter, so the official documentation would have been enormous, and unfruitful to read. In the end I got a hotfix from MS.

- Sort algorithm broken in Swift. Another one of those jobs where you take apart everything, because you assume it was your use of it, not Apple's library. Did a workaround after I got someone on SO to confirm it wasn't just me.

- Any number of little things that are unfindable in a manual. Heck, how often do you even look at a manual rather than an example?

- Learning issues. When you're new to a language, you don't necessarily know what you don't know. So being on an island with a manual is going to lead to you dying. Witness the many calls for help on SO from people with score 1. They don't know what's wrong with their code, and they don't know how to ask. Sometimes someone will have mercy on them and help, because it's normally something quite trivial. I remember being 15 and trying to learn C++. It was hard, because every little error you get is cryptic.

- You can make so much more with so much less now. There's a library for just about everything you can think of. The coder is mostly a chef who mixes existing ingredients. This means you can explore writing things that you'd never have time for otherwise. For example I did some web and mobile side projects beside my financial C++ coding. Always good to get a breath of fresh air.


> Sort algorithm broken in Swift

I really want to believe you, but I can't find any information on this, what are you referring to?



I received my first computer in 1980, a TRS-80 color computer, when I was a pre-teen. A few years later, I received a TRS-80 Model 100 laptop, and a few years after that, in 1987 or so, an Amiga 1000, which I used through university.

I agree with most of this article, but with one big difference: I still really enjoy programming, professionally and personally. (And maybe I'm over-stating the author's lack of enjoyment today.)

Back in the beginning, for me, there was the thrill of discovery, in a rather low-level sort of way. My first functional program, written from scratch in BASIC, was enormously exciting. A couple of years later, as frustrations with the poor performance of a truly interpreted language pushed me toward learning a faster way, the same thrill was felt when my first machine language program started working. And then again with my first C program for the Amiga.

35 years later, I ask Google questions throughout the day, every day, while I'm programming.

Sometimes I think of it as a compiler for an optimized but very high level language. I rarely have to sweat the details, and when I come across a very powerful and clever solution (typically from Google), it doesn't really mean much to me, because I didn't 'earn' it. And I may or may not remember it in any detail, so I'll probably end up finding it again later.

But gcc does that same kind of thing, right? It converts C++ code into a highly optimized executable using all kinds of tricks that you probably don't know about, and that you very rarely need to explore. (Not never, though, given that every abstraction leaks over a long enough time period.)

The thrill I get today is from higher level and more abstract 'data and design things'. As one example, powerful and novel ways distributed systems can work together.

I'm intentionally leaving unaddressed a lot of the other interesting, meaty things in the article, because nostalgia got the better of me.


>there was the thrill of discovery, in a rather low-level sort of way

Same here, but whenever I embark on a new project today I remind myself of this mantra: "Do not reinvent the wheel." Someone out there has probably already solved your problem, so why not use the fruit of their brain power and speed up the process? Carpenters or mechanics do not invent a new type of hammer or drill every time they embark on a new project. Why should we?


To me this is more like: you learned to be a woodworker because you liked actually creating things with your hand tools, and now all day long you just use a CNC router, copy-pasting CNC patterns from cncoverflow and then gluing the machined wooden pieces together.

It's not woodworking anymore; it's gluing and google-fu to find the best patterns on cncoverflow, together with maybe some shim-building here and there and making your own custom stain.


I just searched for cncoverflow based on your comment. I was disappointed to find it doesn't exist. It sounds cool. ;-)


Hah, I'm surprised it doesn't, honestly. You'd think there'd be a need for people to discuss feed rates, CNC bits, materials and so on, especially with quite a few techie woodworkers nowadays building their own CNC from kits.


There are places like thingiverse.com and shapeways.com though, where people share & buy designs, mostly intended to be 3D-printed.


That's a great analogy. It's as if programming moved from creating to assembly. (The act, not the language.)


Speaking for my own professional experience, while there is a ton of glue and google-fu, the net results are frequently interesting and at least somewhat novel.

I guess it's a question of what is done with newly available, powerful tools and abstractions.


Because tool-making is the fun part.


Another huge difference was how slow compilers were. Programming is a completely different task when you don't mind hitting compile. I remember programming as a kid in the 1980s with my older brother. He would hit compile and we would wait and wait and wait and finally get an error. After a few times he would slam his fists on the keyboard in frustration.

These days I sometimes compile rather than looking up the correct syntax for something.

Fast cycles change everything. When you don't get up for a cup of coffee during compiles, you program differently, learn about your code differently and can think about high-level design more.


At the risk of dating myself, when I took my first (FORTRAN) programming course in college, you ran batch jobs from punch cards. Make more than a couple mistakes and you were out of CPU time. This involved begging the grad students running the facility for additional CPU time which they grudgingly gave while making it clear what they thought of freshmen who couldn't get a program right on the first try. :-)

The school swapped out the IBM 360 for a VAX a year or two later.


You can always add Boost to your project if you pine for the days of slow compiles.


Haha, I couldn't be happier to be rid of it. I used GWT for a couple of projects. It's a good, useful thing, but it was like being back in the 80s (or maybe the early 90s).


Google Web Toolkit was great for putting your Java in your JavaScript. Those were the days. I didn't really know Java or JS that well, but I could write stuff! We actually used it for projects at Google. Plus, anything interesting in Java was always unimplemented in GWT.


What, is that project gone? I used it recently. I'm too lazy to learn JavaScript.


Maybe that's what people are missing: the emotional thrill of it. How often do you find yourselves fixing a bug for days and nights? Almost every possible bug is now a few web searches away.


I frequently find myself frustrated by a bug for days and nights and too often eventually have to just work around it. Only the most common bugs are searchable. The rest are clouded by layer after layer of unnecessary dependencies.


Having programmed alone in the eighties, I can just feel that it's much better now.

For example, I needed to do some UI, but back then I hadn't the money to buy a toolkit (Windows was still some years away). Therefore I had to spend a lot of time making a UI, even though that UI was just a small concern to me.

Today, I have an idea for a program, then I gather all the libs I can to progress as fast as possible and I just need to code the missing/most important bits.

That is, I can concentrate on what's really important and leave plenty of the 80-other-percent to the open source community...


What a great comment. There aren't many people with your history posting on forums (before Windows...WOW!). Thank you for sharing your history and wisdom.


To me the big difference is the appearance of libraries and frameworks. Back then your code depended on or built on very little pre-existing code. You wrote to devices such as the screen, tapes, disks and printers almost or even literally directly. If you needed a data structure more complex than a discrete value or array you had to include the underlying code yourself every time. Every program was a creation ex nihilo because even if you re-used code you had to copy or write it in again yourself. Environments like Delphi and VB or later versions of Turbo Pascal which came with library code were a revolution.


Very true, and I'm wondering now why this wasn't a Thing in the old days. The obvious answer is "no distribution mechanism for shipping libraries separately from the machine/OS/compiler/interpreter", but that's not entirely true.

The pre-internet Amiga had a thriving public domain and shareware scene where volunteers would advertise in the magazines and you could pick and choose apps/demos/games for the price of postage and a floppy disk. But it was all binary, no source, and I don't remember any libraries ever being distributed that way. Maybe dev tools were too fragmented for that to be practical.


Take a look at early open source licenses, those that did not make it into the present (e.g. the POV-Ray license). Even people who opened their sources were often fiercely protective of their creations. Today it's usually all about protecting the creator (from various kinds of litigation); only the GPL adds a bit of impersonal "for the cause" flavor. The extinct licenses, in contrast, were often bristling with protection of personal attribution. "Mine forever" used to be a big concern.

Today there is the idea that being the one person who knows more about a piece of code than the rest of humanity combined is a thing of value by itself and that this does not require protection. Letting your code free won't take that away from you, at least not as long as you care. I guess that idea just wasn't part of the mindset of that age.


> Letting your code free won't take that away from you, at least not as long as you care.

Good point, and maybe this genuinely wasn't true back then. It would have been much easier to pass someone else's code off as your own back before Google and Github and GrepCode and so on.


There are so many options these days, and evaluation often requires all the work of using the package only to find out that it's incompatible or broken or unusable for some other reason. So it's often more efficient to write your own version anyway. At least then you'll know the jerk who made it.


I agree, and posts like this [1] leave me floored. How does anyone have time to evaluate all of these individual components? My CS education taught me to build components for myself but not to necessarily do a complete evaluation of 3rd party components in a short time.

" We use Heroku for hosting, and run automated tests on CircleCI. Slack bots report what’s being deployed.

There are a lot of external services we rely on heavily. To run through them briefly: Help Scout and Dyn for emails; Talkdesk and Twilio for calls and customer service; HelloSign for online contract signing; New Relic and Papertrail for system monitoring; Sentry for error reporting.

For analytics, we’ve used a lot of tools: Mixpanel for the web, Amplitude for mobile, Heap for retroactive event tracking. We mainly use Looker for digging into that data and making dashboards."

[1] https://stackshare.io/opendoor/the-stack-that-helped-opendoo...


Yeah, I find that kind of intimidating too. But don't forget that this is an entire company's worth of infrastructure that they're calling out. Also some of those in the middle of your quote are like saying "and we use Comcast for our ISP, and Microsoft Office 365 to create our documents, Google Drive to manage them, and Atom for editing source code..."


Oh man, do I feel this. I started programming in the late '80s and it really was a creative exercise. Now you just string together other people's stuff and call it a day. Lower barrier to entry is better for the world overall, but as a career, it can be a bit of a bummer.


In the old days you would think more and then program (even repeating yourself across the programs you made).

Nowadays you think and integrate more, finding which pieces fit with which. You do program, of course, but just to glue those pieces together.

Sometimes, when you come from programming microcontrollers and the like, you might miss having control of every single bit of what was happening. Now you just trust that some package will deliver what it says.


While I would much rather write code in our modern environment and with modern tools, I think programming as a profession is getting more unpleasant due to much higher expectations and lower development budgets, and because a software developer's time is increasingly taken up by (usually necessary) non-programming tasks.

I suppose that's one reason why creating a startup is so attractive to software developers; it's a means by which productivity improvements work for them, by allowing them to keep more of the wealth they create, instead of against them, by setting a higher and higher productivity bar that must be met each year just to keep the same job.


One big thing is you never even knew what you didn't know. I had a C=64 for years and never even heard of the PEEK and POKE commands until years later when I was reading about it on the web. There were no computer experts to talk to back then, at least not where I lived. If it wasn't in the book or discoverable from just fiddling you weren't likely to find it.

The worst part for me is that my library had a computer section, but it was filled with stuff like "FORTRAN for System/360". Apparently the library decided that they had enough computer books and didn't bother getting new ones.


This reminds me a lot of Handmade Hero. https://www.youtube.com/user/handmadeheroarchive For educational purposes, Casey Muratori codes a complete game from scratch, pretty much without any libraries (Win32 and OpenGL).

In one of the videos he said that he doesn't use the internet at work. Hates it. When he has a question, he writes it down. When he needs to learn something, he googles it at home, downloads a bunch of articles as PDFs, then goes through them at work, but offline.


The second biggest difference is that the internet has changed the nature of debugging.

You used to follow the program all the way through on your local machine and interrogate all the variables.

Now, you're debugging across the client and the server. I think that is one of the reasons unit testing has taken off. It is much harder to know where something happened.

The third biggest difference is that languages can be a lot more complicated because documentation is searchable so you can have a lot of functions that do the things you might have written yourself.


This really brings me back. I used to write "algorithms" all the time, then at some point it just turned into debugging SQL queries. When I was younger, it was so much more "real" programming. But it's easy to look at things with rose-colored glasses, because there were two huge problems.

First, programming was sooo much slower. Nowadays there's a function or a library for everything. Back then, if you wanted to serialize something to disk, you had to write a serialization function first. Fun, but SLOW.
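The hand-rolled serialization being described might have looked something like the C sketch below. This is an illustration, not code from the thread: the `Record` type, field names, and the little-endian byte layout are all invented for the example. Writing field by field, one byte at a time, was the kind of grunt work you did yourself to avoid struct-padding and endianness surprises.

```c
#include <stdio.h>
#include <stdint.h>

/* A hypothetical record we want to persist to disk. */
typedef struct {
    int32_t id;
    int32_t score;
} Record;

/* Write a 32-bit value as 4 little-endian bytes. */
static void put_i32(FILE *f, int32_t v)
{
    uint32_t u = (uint32_t)v;
    for (int i = 0; i < 4; i++)
        fputc((u >> (8 * i)) & 0xFF, f);
}

/* Read a 32-bit value back from 4 little-endian bytes. */
static int32_t get_i32(FILE *f)
{
    uint32_t u = 0;
    for (int i = 0; i < 4; i++)
        u |= (uint32_t)(fgetc(f) & 0xFF) << (8 * i);
    return (int32_t)u;
}

void save_record(FILE *f, const Record *r)
{
    put_i32(f, r->id);
    put_i32(f, r->score);
}

void load_record(FILE *f, Record *r)
{
    r->id = get_i32(f);
    r->score = get_i32(f);
}
```

Multiply this by every type in the program and it is easy to see why it was fun but SLOW; today a JSON or protobuf library replaces all of it.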

And secondly, debugging was even moooore slow. The amount of time wasted on trial and error was astounding. Nowadays, if a library or browser or OS is misbehaving, a quick search will usually find the problem and workaround options. Back then, it could easily take days and days to try different solutions (hence the "invention" and "creativity") until one turned out to work, and it might not ever work at all.

So while the author says:

> I have to admit I think programming was actually more fun back then. Without all the modern trappings of working as a programmer that suck major time out of your day we were able to spend a majority of every day actually programming.

What I remember is a majority of every day spent debugging mysterious problems with OS calls and libraries, or writing "grunt" algorithm code for stuff there ought to be a function for already, as opposed to writing the fun new stuff. And let's not even talk about waiting minutes to recompile, or how primitive debugging tools were back then.


>What I remember is a majority of every day spent debugging mysterious problems with OS calls and libraries,

Ah. Not everyone had the same experiences. Being a Windows programmer was total hell in the 80's and 90's. The OS API was a battleground for Microsoft. They introduced all kinds of things to break competing software.


You can still find low level coding opportunities in embedded or systems programming. Low level network software, mobile networks, embedded signal processing applications etc.

What I really love in programming is the flow and focus you can attain when coding for hours with little interruption, when you master all the libraries you use. You feel how everything quiets down around you, not because there is less noise, but because your concentration is so strong that it all fades away. When you stop and go outside it feels like you have been on a long trip in a faraway place. You look at people and they are acting just like before you left, but you feel like a foreigner. When you go to sleep you have weird dreams where you move in some data structure.

http://catb.org/~esr/jargon/html/H/hack-mode.html

edit: I don't think it's fundamentally a low-level vs. high-level problem. It's a question of API/library quality. A compact and logical high-level library that you can understand and master without a continuous stream of surprises is what is needed. Verbose libraries with unnecessary "enterprise" cruft kill the hacker inside me.


Sometimes as a discipline/indulgence I go to a coffee shop that charges exorbitantly for wifi so I cost myself immediately measurable and painful (because exorbitant) cash if I don't invent the solution on my own. These hours are non-billable but they stretch my creativity and cleverness in ways OP describes, which increases productivity during billable hours etc. Also keeps me sane. :)


This is exactly why MIT stopped teaching Scheme/Lisp. Graduates could think and reason about programming well, but Python is a much better intro to the world of modern programming: 1.) what is my problem 2.) stack overflow 3.) get libraries 4.) read bad documentation of library 5.) hack solution together with vague understanding of black box libraries


As a programmer, I often feel the funnest part of the job is when you're writing new code, but at the same time, it's often more efficient to search for an existing library. This internal conflict between what's enjoyable and what's efficient often leads down a path of least enjoyment and efficiency (e.g., writing a library whose functionality already exists and then seeing it die and replaced by open-source alternatives after many person-years of wasted effort).

As a side note, I'm one of the creators of a tool called Sourcegraph (https://sourcegraph.com) and this post actually captures a big part of the problem we're trying to solve. Being able to jump to def and find references / usage examples across open source makes reading / understanding / grokking code a lot more fun and efficient. Would love to hear people's thoughts.


I didn't see anybody mention compile times. Today, it is instant -- click to run.

My first programming job - our product was a CAD package for Windows 2.1 and the new v3.0 beta. It took 9 hours to do a full recompile on our fastest computer, a fancy new 486DX-33 with 8MB of RAM!

NINE HOURS TO COMPILE.

Now I get impatient if it takes more than 15 seconds to compile, build the firmware image, download it to hardware and reboot.

It's a different world, for sure.

I think it makes us a little careless. The approach to coding is different. THEN there was a huge penalty to break the build, and we tended to think through a solution very carefully before implementing it. Now, I'll confess, I'm just as likely to plug in some magic numbers and see what happens, or set a breakpoint down in the guts of some heavy code and see what's happening as I am to very carefully think through all the permutations and be sure of everything before hitting Go.


It depended on what you coded with. I used Turbo C a bunch on my own, and polyFORTH for what paid work I did in the 80s, and they were both lightning fast.


Of course, there also wasn't as much you needed to know back then. I look at something like a modern "Roadmap to becoming a Web developer in 2017" [0] and I wonder how anyone is able to get through all of that.

[0] https://github.com/kamranahmedse/developer-roadmap


You only need to know all that stuff for the online poser olympics.

In the real world, 99% of people just pick 1 stack and get on with their lives.


Can't help but see the parallel between this craft and many other crafts. In the beginning the people doing it are independent, isolated artisans. You might've been the one guy in your village who knows how to, let's say, make barrels. Except for maybe your own sons, whom you yourself taught. In fact the skill is so uniquely identified with you and your family, people actually go ahead and make it your surname - William the cooper slowly becomes William Cooper.

Fast forward to let's say the early 1900s, and people have figured out how to automate and mass-produce barrels very efficiently, such that there's really nothing much to it anymore. It becomes a question of your materials-sourcing abilities. One person can make lots more barrels a lot more easily, and do it better than you can. Because of this, most people just buy a barrel when they need one; they don't make them by hand anymore.


tl;dr: The good-ol' days were simpler back then, but today's amenities are nice too.


Well, my workplace didn't have an espresso machine in 1987 when I was writing Apple BASIC games based on a crappy script. But you mean the tools are better now :-)


Biggest difference between coding in the 80's vs. now:

Then: memory and CPU cycles were not cheap

Now: everything is multithreaded and often asynchronous


Mid-late 80's. A young, precocious kid spending summers at the library on their Commodore 64 creating text adventures set in Tolkien-esque vignettes. A well-worn book of BASIC exercises and puzzles at his side for reference.

Early 90's. The young teenager pilfers another Turbo-C demo disk from a thick book on the discount table at the back of the bookstore. His last demo ran out of time and he can't continue work on his game without it. He can't afford the software license. He logs onto the local BBS when he gets home and reads about something called, 'Mode-X'. Mind blown.

Late 90's. He drops out of high school to write Perl for a living. He makes a bigger salary than either of his parents ever did. More than his peers who were flipping burgers for minimum wage. He writes scripts to generate a website from his journal log files and shares it with his, "other friends." The ones who know the right incantations to make computers do things beyond playing video games or listening to music.

I wasn't a professional programmer at the time by a long stretch but I do remember having to figure most things out for myself. I caught the tail-end of the mid-80's craze to teach every kid how to program in BASIC. As a lone geek in astronomy club and pilfering his fathers' textbooks on classical mechanics I knew that computers were for programming and that was how you made computer games. I don't think I would've understood trigonometry or linear algebra any other way.

It's amazing how much the Internet has changed absolutely everything. Somewhere between 1999 - 2007 when persistent, high-speed access became the new normal programming changed. CPAN was a big deal and a huge tool... that idea caught on like wildfire. Now every language has a package manager and you can hardly start getting anything done without downloading a hundred or so megabytes of source code first. Learning has changed completely. We forget as easily as we discover since the knowledge is persisted for us. Learning about Duff's Device was a huge step for me... now I work with programmers who don't even know what the size of an integer is (a silly question of course... but totally unaware of how such a construct is implemented in the machine) and they do great work and provide immense value. Yet they couldn't construct a binary tree or heap if they needed to; the default is to, "just google it."
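For readers who haven't met it, Duff's Device is a good example of the low-level lore mentioned above: a copy loop unrolled eight ways, with a `switch` that jumps into the middle of the `do`/`while` body to handle the leftover iterations. A sketch of the classic construct (the function name and the zero-length guard are mine, and this is for illustration, not a recommendation over `memcpy`):

```c
#include <stddef.h>

/* Duff's Device: copy `count` bytes using an 8-way unrolled loop.
 * The switch falls through into the loop body so the remainder
 * (count % 8) iterations are handled by the first pass. */
void duff_copy(char *to, const char *from, size_t count)
{
    if (count == 0)
        return;                      /* the classic form assumes count > 0 */
    size_t n = (count + 7) / 8;      /* number of passes through the loop */
    switch (count % 8) {
    case 0: do { *to++ = *from++;    /* fall through */
    case 7:      *to++ = *from++;
    case 6:      *to++ = *from++;
    case 5:      *to++ = *from++;
    case 4:      *to++ = *from++;
    case 3:      *to++ = *from++;
    case 2:      *to++ = *from++;
    case 1:      *to++ = *from++;
            } while (--n > 0);
    }
}
```

Knowing why this is legal C (case labels are just labels, so they may sit inside the loop body) is exactly the sort of machine-adjacent knowledge the comment says has become optional.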

Yet when push comes to shove, I still find that sometimes forgetting all of that lets you get real, productive work done. Analysis paralysis is a real problem in the face of an abundance of choice. Especially when there's not a clear "match" to your requirements. Sometimes it's just easier to solve the problem with your own solution that fits your use case.


Main difference, IMO: now you're at least 10 times more productive. No waiting for floppy discs nor 75-125KB/s hard disks (!), better tools (better SCMs, compilers, etc.), using the Web for searching (ideas, fixes, help, etc.). If it was fun back then, now is amazing -:)


Nah, you just feel that way because you aren't really building anything.


First paid programming gig was working on network equipment in the 90s. If you thought MacOS was primitive, we had to write everything from bootstrapping, to the OS, to the network stack and up from scratch. On some platforms we had to debug with morse code on an LED, uphill both ways in the snow. I miss those "primitive" times as well, but.. We're so much more efficient now. Things that used to take person years now take minutes to download, and we can move on to making our stuff do things that are much more valuable to society, rather than duplicating the same work as everyone else over and over. We have the shoulders of the Stallmans, Torvalds, and many others of the world to stand on now.


Anytime I read something like this I just want to quit the entire industry. It's depressing. It's all just shit now.


No, don't give up. I know a lot of people are doing a lot of assembling packages into building something, so there's little pride of craft. But there is still great handmade ('bespoke') software happening. In Seattle, there are jobs where you are writing something new, solving a problem. I think databases are where it's at.


Back in the 80s I would cite this sketch whenever someone talked about how good coding used to be back in the day: https://www.youtube.com/watch?v=Xe1a1wHxTyo


Honestly, when people ask me in job interviews what the hardest (programming) thing I've done is, I default to "I taught myself pointers in C in the early 90s." Kids today have it so easy!
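For anyone who came up after the 90s, the pointer material in question is the sort of thing below: functions that mutate the caller's variables through addresses, and walking memory with pointer arithmetic. A contrived sketch, not from the thread:

```c
#include <stddef.h>

/* Pass addresses so the function can modify the caller's variables --
 * the classic first hurdle when learning C pointers. */
void swap(int *a, int *b)
{
    int tmp = *a;
    *a = *b;
    *b = tmp;
}

/* Walk an array by pointer arithmetic instead of indexing. */
int sum(const int *p, size_t n)
{
    int total = 0;
    for (const int *end = p + n; p < end; p++)
        total += *p;
    return total;
}
```

Simple once it clicks, but famously hard to learn from a book alone, with no one around to explain why `int *a` and `*a` mean different things in different positions.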


Markets prefer efficiency, and using best-of-breed 3rd-party software/libs etc. is efficient vs. everyone re-inventing the wheel (inefficient).

So much of programming these days is glue code; even with IoT, devices and SDKs and libs are getting better and better.

The creativity of today comes from ingenious ways of developing new processes that arise from gluing together existing ones.

If you want to be doing greenfield algos, go get a PhD in comp sci... and stay at the uni.


Absolutely THIS:

"...and the real skill is in finding it, relating it to what you need, deciding if it is useful or adaptable, and if it is of a decent quality"


Definitely. I also think there is a lot of skill in knowing that something should already exist. I've seen a lot of really bad code which just more convolutedly implements something that exists in the standard library or is an easy download.


His 'out of date' point about help resources is a key point.

SO posts should come with unit tests and library version manifests.


As somebody else who started coding in the mid 80s I feel this is one of the disconnects of the interview process: interviews evaluate you on the basis of an "80s programming model" where if you needed a red black tree library, you likely had to write it on your own.

It would be a lot more representative of today's work if you were asked: given these 3 github repos with packages that all purport to do X, which one would you pick for this set of requirements and why? and how long would it take you and a team of 3 to get it done? Then you have an hour sitting next to the interviewer that can see what you are looking at in the code, what you are googling, how you are estimating and so on.

You could be the best algorithm writer in the whole world, but these days when do you ever have the luxury to write greenfield code? It's all "let's leverage open source" and "we don't have the budget to write our own frameworks" and "why do you want to spend X months writing this, when I googled and in 5 minutes I found 8 packages that do it" and "estimate how long it would take you to do <insert fuzzily defined huge task>" etc. etc. etc.

Personally I do miss the days where I felt that all I did was coding, as opposed to putting together a collage with code found elsewhere.

I sometimes feel like doing an Ask HN about "how do you find a coding job where you actually code most of the day when you are 20 years into your career"


I think if you miss doing proper algorithmic coding and feel stuck gluing frameworks between a db and a web browser, you should just try to avoid web related development.

In embedded, desktop, games and a ton of other disciplines, there is lots of old school and fun dev and a minimal amount of CRUD boilerplate.


Yes, this has worked well for me. Stay out of web development and it's not hard to spend your time doing "real" programming. Embedded work is especially good if you feel nostalgic about 80s-style PC hacking - the downside is that product cycles are limited by the hardware side, so the pace can feel really slow.


> Embedded work is especially good if you feel nostalgic about 80s-style PC hacking.

True, and conversely, it is terrible if you hate it. Back to the world of having no debugger, zero tools, potentially no autocompletion, and no way even to do a print.

I still remember an old embedded IDE: when I scrolled down or up with the mouse, the code sometimes became a screen of garbage. It wasn't a display bug; if you saved the file, it really became garbage of random characters :D


Yeah, that's true - but the primitive conditions are a constraint that breeds creativity. I love to build tools, and embedded systems work offers lots of excuses for what would be called "reinventing the wheel" if you did it on a PC. No print? I'll write my own! No debugger? I'll build one! No autocomplete? Don't care, I never use it anyway! IDE sucks? Ehh, whatever, I'll use the terminal! Serial port driver doesn't work? Bootloader crashes? No idea what's going on? Well... okay, I'll debug it by making this LED blink!

But yeah, if that's not what gets you going, then embedded work is not for you.


I will never go back to having neither logs nor stack traces.


If you can't tell an O(n^2) solution from an O(log n) one, you still shouldn't be doing server-side development, though.
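To make the distinction concrete, here's a toy sketch (mine, not the commenter's) of the same membership test done as a linear scan versus an O(log n) binary search via Python's `bisect`:

```python
from bisect import bisect_left

data = sorted(range(100_000))

def contains_linear(xs, target):
    # O(n) per lookup: walks the whole list in the worst case
    for x in xs:
        if x == target:
            return True
    return False

def contains_binary(xs, target):
    # O(log n) per lookup: halves the search space each step,
    # but requires xs to be sorted
    i = bisect_left(xs, target)
    return i < len(xs) and xs[i] == target

# Both agree on the answer; only the cost per call differs.
assert contains_linear(data, 99_999) == contains_binary(data, 99_999) == True
assert contains_linear(data, -1) == contains_binary(data, -1) == False
```

Do n lookups with the first inside a loop and you've quietly written the O(n^2) server endpoint the commenter is complaining about.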


Of course when you're gluing libraries together it is hard to know what they're doing under the hood. There are a lot of libraries with catastrophically stupid design decisions hidden behind the API, and then when everything runs like shit you find yourself having to dig into the libraries until you find the problem.


Anecdotal evidence of "just gluing libraries": I once had users complain about a web-application being slow. As it turns out, it needed 1 second to generate 1KiB of data. They must have been using some sort of slow ORM in the back-end doing 1 DB query per FK.
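The anti-pattern being described is the classic "N+1 query" shape. A minimal sketch with sqlite3 (the tables and data here are made up for illustration, not from the app in question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE authors (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE posts (id INTEGER PRIMARY KEY, author_id INTEGER, title TEXT);
    INSERT INTO authors VALUES (1, 'alice'), (2, 'bob');
    INSERT INTO posts VALUES (1, 1, 'first'), (2, 2, 'second');
""")

# N+1 pattern: one query for the list, then one extra query per FK
posts = conn.execute("SELECT author_id, title FROM posts").fetchall()
slow = [(title,
         conn.execute("SELECT name FROM authors WHERE id = ?",
                      (author_id,)).fetchone()[0])
        for author_id, title in posts]

# Set-based alternative: a single JOIN, constant number of round trips
fast = conn.execute("""
    SELECT p.title, a.name
    FROM posts p JOIN authors a ON a.id = p.author_id
""").fetchall()

assert sorted(slow) == sorted(fast)  # same data, N fewer queries
```

With an in-memory database the difference is invisible; over a network, each of those per-row queries is a full round trip, which is how you end up at 1 second per KiB.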


That's not a slow ORM. That's literally the model of every single ORM I've seen! Sets, who needs them!


I feel that you're driving the "we're just gluing libraries" argument to the point of absurdity. Do you think that designing a database schema and the queries that run over it is something extremely rare and high-level in modern web development?


It's not rare, yet I find myself redoing this over and over with every job. Web apps that are a HTML table fronting a CRUD back-end and simple database schema are all too common.

I hear you saying "well, that's your fault you keep working at those companies" - and that's not too far from the truth.


Details matter, quality development matters, and experience matters a LOT!


Okay, purely anecdotal, but: exactly the same "complaint", including exactly the same advice (minus the "web"), could be found some 20 years ago. Not only does it seem the field is not "structurally" changing much, it seems people are people, still ;) ... and "CRUD" moved to the web.


Yes, there has always been demand for what today is the archetypal web app: data input, data cleaning, data into DB, data out of DB, data presentation. Basically the same CRUD we do in web apps, just with a DOS interface or a Visual Basic form... That's what you don't want to do.


I disagree. A lot of times it still makes sense to write your own code from the ground up. There are so many instances where a lot of stuff out there is way too overcomplicated for the task at hand and takes more time to configure and resolve dependencies than it takes to just code the part you actually need from scratch.

A lot of times, if I'm looking at some random github repo out there and its dependencies aren't apt-gettable or pippable or npmable, I just say "screw it" and write it from the ground up using only things that are apt-gettable, pippable, or npmable.

Except OpenCV 3.1. I'm willing to deal with that one. But I have a script to install it that's 55 lines long. (How ridiculous is that -- we need 55 lines of code to install things these days. Can't we just tell our computers to install stuff and aggressively figure out how to install it, no questions asked?)


Sure, but you often have to fight tooth and nail for the right to invent anything yourself in a lot of modern web-based environments. The idea of actually inventing something yourself is seen as crazy and dangerous in a lot of places.


That's because there's some truth in that. Remember the time when searching for ' would spit out a detailed error message containing the database name and query that failed? Or adding <h1> to your profile page would break everything for everyone?

That was because everybody was writing their own stuff instead of relying on frameworks that handle fuzzy things like user input for you. Many webpages have gotten more secure, but not without a cost for creative programmers everywhere.


If you're going to code something yourself, you have to be skilled enough to do it. Admittedly, one of the benefits of frameworks is that they allow low-skilled developers to produce reasonable and safe results quickly.

But if you are skilled, you can easily produce code equivalent or better than the average framework out there. And it will likely be simpler because you're only coding for your own use case and not every case that exists.


There are ways to write your own code and not be stupid: parameterize SQL queries, use libraries to strip HTML, and so on. Unit test your stuff. Have your server software throw a generic 500 for anything that isn't a 200 from your upstream socket.

I am not saying write everything from scratch. Just use only the stuff you have to.
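On the parameterized-queries point, a minimal sketch with Python's sqlite3 (every DB-API driver offers the same placeholder mechanism):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")

evil = "' OR '1'='1"  # the classic injection payload

# Unsafe: string interpolation lets the input rewrite the query itself,
# turning the WHERE clause into  name = '' OR '1'='1'
unsafe = conn.execute(
    "SELECT * FROM users WHERE name = '%s'" % evil).fetchall()

# Safe: the driver passes the value out-of-band as data, never as SQL text
safe = conn.execute(
    "SELECT * FROM users WHERE name = ?", (evil,)).fetchall()

print(unsafe)  # [('alice',)] -- the injection matched every row
print(safe)    # []           -- no user is literally named "' OR '1'='1"
```

That one-character `?` is most of the "don't be stupid" the parent is asking for.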


I can't even get OpenCV to install properly on my Mac, so there's that. That or GQRX/libsdr. Yay Python. If you have, please share!


Unless you have a really, really good reason, use Homebrew (https://brew.sh/) and it's a simple "brew install opencv" or "brew install opencv3".


That failed for me last year when I tried, but it was after I tried and failed with gqrx, which probably screwed things up. Will try again.


>I sometimes feel like doing an Ask HN about "how do you find a coding job where you actually code most of the day when you are 20 years into your career"

Find a company that requires any non-approved outside code, no matter how well known and even if it uses an already-approved license, to go through a multi-month approval process that is almost always longer than the current project's allotted timeline.

As it is, if I need a red black tree, and if it isn't in any approved libraries, it is faster for me to write it myself.


Multi-month approval sucks, but to be honest I wish more companies were more skeptical about adding 3rd-party dependencies. At the very least, do a code review of the dependency (and its dependencies...) and acknowledge the actual cost of the addition you are making.

Hardly ever happens.


The issue is that most "dev shops" can't estimate anything beyond three months, so the best they can do is "this 3rd-party dependency speeds up the current process by X".


I don't want to seem sarcastic, since this is a genuine question: What are the positive aspects that would induce you to work or stay in such an environment?


You can code all day and stay in your corner without any external hassle.

That process being enforced will ensure that most code is buggy and development is totally unproductive. That will usually create an environment where no one expects anything from developers and you have zero accountability.

If it all goes well, you can toy around all day and not have to ship anything, still getting a paycheck and not risking comparison to your peers.


>That process being enforced will ensure that most code is buggy and development is totally unproductive.

I don't think I'm following you. The procurement process is quite different than our coding process which involves team reviews and the like. Yes, productivity isn't as high as it could be because we sometimes have to develop something in house instead of using open source code, but we still have code review, unit testing, and similar required.


sure, but… what about the mirror?


Approval process is outside the dev shop's control. Overall this is one of the least stressful positions I've been in, has the best pay so far, I enjoy working with my team, and has a short commute.


What industry are you with?


If doctors were hired like programmers, they would be quizzed on chemistry trivia and then asked to mix their own drugs in vials using raw pharmaceutical ingredients.


Pharmacists actually have to mix some drugs and that's part of the training.


> It would be a lot more representative of today's work if you were asked: given these 3 github repos with packages that all purport to do X, which one would you pick for this set of requirements and why?

I play this game with my students on the second day of class, along with "read all these people fighting on StackOverflow and tell me the answer you have the most faith in." They split 50/50 between terrified and super pumped about how much trust they'll be putting in Random Internet Code.


There was actually a web page that would answer your Stack Overflow question by running and benchmarking all of the posted answers.

There was also a plugin that would let you grab code directly from Stack Overflow and put it right into your project, kind of like a search engine for laziness (I think it was an Atom plugin).


There's stacksort, a sorting algorithm that searches StackOverflow for sorting functions and runs them until it returns the correct answer.

An implementation is here https://gkoberger.github.io/stacksort/


Also https://github.com/drathier/stack-overflow-import which turns upvoted answers into modules.


I get why some places would want to test your programming skills; I mean, Google does a lot of the 80s-style invention talked about in the article. But for 90% of programming jobs, you're better off with a gaffer-tape programmer who brings social skills as well as business understanding.

Especially outside of pure software companies, the ability to evaluate business processes as well as digitize them is simply invaluable, and you'll almost never need to write your own X anyway.


I'd say that skill set is useful for the 10% of jobs where programmers are pretty close to being their own managers.

Really it is better if "product owners" (or whatever) could actually do their job and talk to clients to distil out requirements that are clear and achievable. After that, the kind of programmer right for the job depends on, well, the job.


What clients? The vast majority of programmers, and of programming by volume, work in in-house functions at non-tech companies. There are no "product owners"; there are business functions that need changes to their internal mess of stuff, so they hire programmers for that.


But then it still depends on the size of that in-house programming organisation. If it is just a few guys, the team lead needs to take on those roles as well as code. If it is larger, there should be some specialisation and professionalism in the non-coding roles.


Even in the 70s/80s, at a world-leading R&D place, we did not write our own Fourier subroutines; we brought them in from NAG: https://www.nag.co.uk/

Likewise, we brought in GINO-F to plot results nicely.



