Hacker News

What happens after 10 years?

I have been a software developer for more than 25 years. It is getting boring. Any "new" problem is just the same old problem with a new layer of abstraction. Look at this new language/framework that is so great! You could do all that with the previous language if you cared enough to learn it.

Has anyone experienced the same frustration?

I now use my spare time to learn to draw. I found it way more exciting and I see a challenge that I do not find in coding anymore.




I've been doing this for about 20 years now, and I solved the problem through "hobby programming". I've been doing a bunch of AI stuff the last couple of years that has nothing to do with my job or my company. I did it out of pure interest in AI. Then I started speaking about what I was learning at conferences [0].

Yeah, it's tough to squeeze in hobby programming with two toddlers and a company to run, but it's the only thing that keeps me interested. Also, I involve the kids in it (you'll see that the kids help me with my experiments as test subjects and assistants if you watch the video).

Incidentally I'd considered learning to draw instead at one point as well.

[0] https://www.infoq.com/presentations/deepracer/


After ~10 years[1] you will hopefully have developed a reasonable skill set and general competency as a programmer. Whether or not it is still interesting is entirely up to you. Assuming you've found an area of specialty in the industry... congrats and condolences: you've likely also found your rut. If you're looking for the tech industry and whatever current wave of tools/languages are popular to feed your interest, I think you will find that increasingly unsatisfying as the years go by.

The trick to keeping it interesting is to self-motivate and push yourself into areas that you find interesting/challenging which will often not be the same thing people are willing to pay you for directly in the short term. But it will definitely increase your odds of having skills that pay dividends down the road. For example, if you were into mobile development prior to 2008[2] you could pretty much write your own ticket for a few years. Similar situation more recently with ML etc. etc. all the way back to the beginning of the microcomputer era. Ignore the tech industry's flavor of the month unless it interests you and/or someone pays you to care about it. Instead, pay attention to the larger concepts and trends.

[1] Give or take depending on the individual and the languages/tools they've been exposed to. This also assumes that one doesn't spend the 10 years repeatedly learning the same lessons using languages built on identical paradigms.

[2] This was entirely possible as consumer mobile devices in various forms existed at least back to the early 90's.


But software has changed drastically in 25 years.

Rise of the internet, totally new app platforms (browsers), x64, ARM, smartphones, OpenGL, neural networks, development of increasingly sophisticated timing and side channel attacks, and on and on. I feel like you could pick any sub field of programming and find any number of revolutions in the last 25 years.

You say it’s all just new abstractions, but don’t you feel they bring some benefits and allow for new challenges?

Take computer graphics for example. There is so much more abstraction than there was 25 years ago, but go pick up any modern AAA video game and you can tell there is also so much more depth than we used to have. No?

But I haven’t been programming 25 years, so maybe my opinions will change.


> smartphones

I will take this as an example. I was developing on 80386 machines. When smartphones appeared and people said "this is different because you have limited resources compared with a desktop machine", I just thought "it is new for you, not for me". I said nothing because they were right. It was new to them. I was happy to share an environment with people excited about "a new challenge".

> there is also so much more depth than we used to have. No?

Most intelligent behaviour in games is based on agents and finite state machines. There are more states than ever and higher polygon counts, but it is the same thing you had 20 years ago. The results are way more impressive because the hardware is faster and there is way more memory. But the algorithms are the same.

Look up the A* search algorithm or Dijkstra's algorithm. They have been around for a while. Graph theory is still a building block for most things that look intelligent.
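To make the point concrete: Dijkstra's algorithm fits in a handful of lines and hasn't changed in decades. A minimal Python sketch (the graph and node names are made up for illustration):

```python
import heapq

def dijkstra(graph, start):
    """Shortest distances from start; graph maps node -> {neighbor: cost}."""
    dist = {start: 0}
    heap = [(0, start)]
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist.get(node, float("inf")):
            continue  # stale queue entry, a shorter path was already found
        for nbr, cost in graph.get(node, {}).items():
            nd = d + cost
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                heapq.heappush(heap, (nd, nbr))
    return dist

# A tiny map: an NPC routing from A to D
grid = {"A": {"B": 1, "C": 4}, "B": {"C": 2, "D": 5}, "C": {"D": 1}, "D": {}}
print(dijkstra(grid, "A"))  # {'A': 0, 'B': 1, 'C': 3, 'D': 4}
```

A* is the same loop with a heuristic added to the priority; the core hasn't moved since 1968.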


Dude, you can strap your smartphone to a quadcopter and you have a flying 20-megapixel 60 fps camera with a multi-megabit internet connection. You can do real-time control of the flight. That's something you couldn't do with a 386, not unless you had a Predator to put it in.

And there's a whole category of new challenges not because of limited resources but because of abundant resources. Your photogrammetry drone can generate 72 gigapixels per minute of photogrammetry data. You have 200 teraflops on your desktop. How good of a 3-D model can you make? With how little human effort?
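For what it's worth, the 72-gigapixel figure is just the camera specs in the comment multiplied out:

```python
pixels_per_frame = 20_000_000    # 20-megapixel sensor
frames_per_second = 60
seconds_per_minute = 60

pixels_per_minute = pixels_per_frame * frames_per_second * seconds_per_minute
print(pixels_per_minute / 1e9)  # 72.0 gigapixels per minute
```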

How about making things simpler and more flexible? Old GUI toolkits were designed around the need for 2-D raster operations to be hardware-accelerated. Are there simpler designs possible now that that's no longer a constraint? I'm exploring this in BubbleOS.

Sure, lots of game AIs are finite state machines. So in lots of games there's no challenge unless the NPCs gang up on you. But AlphaGo is also a game AI. It's a bit more sophisticated than an A* search! What would a game look like where you worked on a team with such AIs?

Smartphones also have multitouch. Yet >90% of people's interaction on them is using an on-screen keyboard, one-finger scrolling of lists, and tapping on prepackaged options. Can we do better? Are there UI paradigms that multitouch enables that would allow more creativity, despite the horrifying levels of lag in existing systems?

Security is a big problem, and most of the world is wasting their time on approaches like virus-scanners that can't work even in theory. But then there's seL4. What would a personal computer based on seL4 look like? How could we translate the guarantees it provides into practically useful power in the hands of everyday people?

Can you do voice recognition on every FM and AM radio channel in your area at once? What's the minimum hardware you'd need to do it?

There's lots of interesting challenges out there.


> I will take this as an example. I was developing on 80386 machines. When smartphones appeared and people said "this is different because you have limited resources compared with a desktop machine", I just thought "it is new for you, not for me". I said nothing because they were right. It was new to them. I was happy to share an environment with people excited about "a new challenge".

Is performance the only or even primary challenge from smartphones though? It brings the internet, multiple cameras, GPS, an accelerometer, a touchscreen, and a compass all into one pocket sized device. We've had so many cool things come out of smartphones people didn't expect. I mean in 1994, was there anything like the iPhone Secure enclave, or the applications we have for photogrammetry today?


The fundamental concepts haven't changed but the applications certainly have. The amount of things you can "do" with technology these days is far more vast than it was 25 years ago.

It seems to me like you expect coding itself to provide you with interest and excitement. Maybe you should focus less on the skill itself and more on what the skill can be used to build? Do you think an engineer would look at a drone for the first time and think "boring - it's just a bunch of propellers providing lift for the device, I've seen this all before.."? Similarly, do you think a programmer would learn about RSA cryptography and think "boring - it's just modular arithmetic and prime factorisation, I've seen all this before"?

All knowledge is built upon that which came before it, so in that sense I don't see how anything could be considered "a new challenge" by your standards?


Software deployment has definitely changed dramatically. Instead of including the software along with the hardware, or shipping cartridges/tapes/disks/CDs/flash drives, we can have software as a service. Infrastructure is now written as declarative code and tracked in distributed version control. You can spin up thousands of instances of some software in seconds.


OpenGL is more than 25 years old (https://en.m.wikipedia.org/wiki/OpenGL), and IrisGL predates it, dating to the early 80s. Neural networks are from the early 40s and backpropagation is from the 70s. Graphics programming has changed radically in the last 20 years, and increased compute and data have finally made neural networks effective for industry usage, so I get what you're saying, but in some ways it supports OP's original comment. A lot of stuff is advancing incrementally, but there aren't as many radical new ideas in software, and in some ways many things are worse.


Well I agree about the dates, but that's not really what I was getting at.

The number of working programmers using neural nets or OpenGL twenty-five years ago would have been vanishingly small. Today either one is completely commonplace.

Perhaps "revolutions" was laying it on a bit thick, but it does seem to me that software now faces different challenges than in 1994. Volume of data is different (even a moderately successful modern web app could easily receive more traffic than the entirety of the 1994 Internet). Hardware is more complex (accelerometers, multiple cameras, microphones, touch screens, compasses, widespread GPS). Security attack vectors are very different.

Again, maybe my opinion changes in 15 years, but it's hard for me to look at something like the 2019 linux kernel, and think "this is basically just more of the same."

Maybe we are all just making slightly different points about this topic...


> but go pick up any modern AAA video game and you can tell there is also so much more depth than we used to have. No?

No. There is a lot more texture memory and some good shader programming, but we knew how to render graphics as good or better 25 years ago; we just didn't know how to do it anywhere near fast enough. For your example, most of that gain is in hardware and art budgets.


Not exactly. The methods to achieve these things in real time are new and evolving. It's not just a matter of hardware. Sure, you knew how to get the same end effect with raytracing ages ago, but the path from A to B is extremely different.


That's true, but it's a second-order effect. The global illumination folks 20 years ago were still ahead of today's AAA titles in terms of "depth" in the above context. And the big bang-for-buck in modern visuals is more modeling and model complexity: the art budget. Sure, there are a bunch of engineers working around hardware limitations etc, and there are some clever things being developed, but that's not where the big shift lies, imo.


I can not disagree with you more.

I have been a software developer for more than 25 years too. And it never gets boring. Or, I should say, it never gets "globally boring". It becomes "locally boring" from time to time; each project has a period when you need to fix stupid boring bugs, for sure. But on a global scale? No-no-no.

Every year you get new opportunities.

It was a PIC16F84 25 years ago and it is an STM32F103 now: same budget, same footprint, a whole new world on the nail of your finger. You can do with a microcontroller what needed big iron 25 years ago! It is not the same old problem, because it allows you to solve problems which were unimaginable 25 years ago.

It was an 80286 or 68K 25 years ago; it is a multicore monster now. Again, completely new challenges: lock-free algorithms and data structures, a new view on memory/computing tradeoffs, new problems, new solutions. Each day you can learn something new from the cutting edge of CS.

It was a modem, FIDO and ZModem 25 years ago (at least in my country), and now it is a global network: fault-tolerant, geographically-distributed systems, real-time protocols and such.

Quantum computing is on the horizon, and I hope to be able to grasp that too in the next 25 years!

Ok, you could glue frameworks together 25 years ago and you can now, but nobody says you should, and nobody says it is the only way.


Not that I want to interfere with your drawing practice; meaningful time away from the keyboard is essential. But have you considered raising the bar and stepping out of your comfort zone?

I've been writing code for about 34 years. Finding interesting problems to solve was a major issue for me until I started creating my own programming languages [0].

It's possible that I'll eventually run into the same problem again, having said everything I have to say about languages. I figure I can always write my own OS if nothing else works.

[0] https://github.com/codr7/cidk


On the other end of the spectrum from raw technical challenge is building something (especially an application) that actually has users.

We as craftspeople tend to over-focus on raw technical challenge and technical superiority, and we lose sight of the fact that the purpose of our tools and expertise is to create things.

Creating things that other people find fun or useful is even farther outside most of our comfort zones. And it puts a lot of things into perspective, like how unimportant technical superiority is when picking among trade-offs when you're actually trying to get something done that isn't just a self-inflicted technical puzzle that will die on localhost.


>I have been a software developer for more than 25 years. It is getting boring. Any "new" problem is just the same old problem with a new layer of abstraction. Look at this new language/framework that is so great! You could do all that with the previous language if you cared enough to learn it.

That's because you keep working with similar languages/solving the same problems.

Try different languages and/or problem domains.


It's not super easy to move to a more interesting role that pays the same. A lot of the time, just having Java for 10 years on your resume and C# for 2 years will automatically rule you out of any "senior" C# positions.


I've done accounting, financial, gaming, embedded, and industrial. There are few languages I haven't worked in, and it'd be nigh on impossible to enumerate the different 'problem domains' I've worked in.

The only area I haven't worked in is security research, but it doesn't interest me personally as a work choice.

I left because I was bored. Perhaps I was a pre-millennial millennial.


On the other hand, isn't it obvious that we should be cultivating more than one interest/hobby and tool for life-fulfillment beyond programming?


You had been coding for 15 years when I started coding. I'm still excited, but oh, I didn't think I could get bored.

Anyway, I keep myself excited between two fronts. Officially I'm a sysadmin, but programming is a coincidental skill that I apply to everything from automation to the point where stuff gets built from the ground up.

I'm probably excited because within the sysadmin+programming world there is so much to learn. Only now am I getting the skills to debug third-party .NET software, and that's so interesting. Lots of internals to learn.

Getting to know the embedded world is also interesting, and I'm only barely touching it now.

I have no knowledge of electrical engineering but would find it interesting to learn. Understanding, fixing and building stuff at that level seems exciting. Maybe someday.

I have had to build knowledge from high-level stuff to low-level stuff. For you perhaps it was the other way around? I think high-level knowledge is probably easier and complementary to low-level knowledge, which gets you further much faster. I don't mean in developing a solution, but in understanding systems, the problem space and the solution.

Of course I've thought about which I like more: sysadmin or programming tasks. Both get me excited. And then I thought that if I were doing only one of them, I would probably be less excited.

Perhaps apply your knowledge to a different problem domain? I have this dream that maybe someday I can build stuff for agriculture (whatever helps grow something). Be out in the field to see what I have to do, then test it and improve it. Well, the other half of me says the reality is probably a bit different: sit in the office 95% of the time, and maybe the rest with some real stuff. Dunno. Oh well, that's just a wild dream.


Exactly.

Taking microservices as an example: once people are using them, they have to reinvent all kinds of ideas in a distributed way - distributed service registration, distributed stack traces, distributed process management, stream processing, etc. But really, there are almost no new ideas, and even where there are, someone has almost certainly already tried them many times in Erlang.

The fact is, very few people really try to learn programming. This leaves the people who really do learn experiencing the same frustration you've mentioned. If everyone tried hard to learn every programming concept they came across, we'd all be using fully dependently typed languages with magical IDEs 100x more powerful than IntelliJ right now.

In fact, most people are just driven by the market and learn haphazardly. Even many senior Java programmers who mention SOLID occasionally don't really understand covariance and contravariance, thinking and struggling inside the box of their language every day.


> Has anyone experienced the same frustration?

I've experienced the same feeling that new things often aren't all that new. I wouldn't characterise it as frustration, though. For me, it's more liberating: it frees me to concentrate on what I'm trying to build, not how I can build it.

I no longer care whether I am using the latest this or greatest that according to anyone else. Sometimes I still learn new tools or new ways of using tools as I go along, and I still find that interesting. Other times I don't, because I already have what I need. Either way, what I choose to use is a means to an end.

In short, my interest has mostly risen above implementation details. I prefer using good tools and techniques to bad ones, but any suitable good ones will do. What matters more is what interesting, useful or simply enjoyable things I can make with them.


By that logic we should all be using punch cards.

I get the notion that most new problems are just old problems being rediscovered and re-solved, though. I've been at this professionally for around 10 years, and closer to 20 as a hobby. More and more frequently I see a new framework/language/pattern being touted as the new hotness, and I can remember using it years ago and migrating away for one reason or another after the inevitable shortcomings were discovered. Just last week I saw a blog post about using event pub/sub in JavaScript to decouple your code. Welcome back to 2002 (and probably a bunch of years before then too).
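The pattern being rediscovered is old precisely because it's tiny. A minimal in-process sketch (in Python rather than JavaScript; all the names are illustrative) is the same idea whether it's 2002 or today:

```python
class EventBus:
    """Minimal publish/subscribe hub: handlers register per topic."""

    def __init__(self):
        self.handlers = {}

    def subscribe(self, topic, handler):
        self.handlers.setdefault(topic, []).append(handler)

    def publish(self, topic, payload):
        # Publisher knows nothing about who (if anyone) is listening.
        for handler in self.handlers.get(topic, []):
            handler(payload)

bus = EventBus()
log = []
bus.subscribe("user.created", lambda u: log.append(f"welcome email to {u}"))
bus.subscribe("user.created", lambda u: log.append(f"audit entry for {u}"))
bus.publish("user.created", "alice")
print(log)  # ['welcome email to alice', 'audit entry for alice']
```

The decoupling win is that the publisher never imports the mailer or the audit module, which is exactly what those 2002-era articles were selling too.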

I still really enjoy my work, but today I tend to gravitate towards higher level systems design and mentoring the younger folks. This is kind of a weird turn here but you might consider a career coach. I was hesitant at first but mine really helped me get out of the boredom/rut that it sounds like you’re in.


> By that logic we should all be using punch cards.

There are zero limitations in punch cards for expressing any modern program, and that is my point. Keyboards are faster; I use a fancy mechanical keyboard to write my code in a fancy IDE with autocomplete. The nature of the problems is the same, though.

> I tend to gravitate towards higher level systems design and mentoring the younger folks.

I do the same. At work, I make sure that teams communicate with each other and take the big picture into account, and I help them push back against deadlines when quality is at risk.

It is when I get home that I do not feel that urge to code anymore. I still code at home, but I prefer to spend more time doing something that feels new and more challenging.

Machine learning is another area I am interested in, but not as a developer; that is the same as coding for anything else. I am improving my math skills; there I see a challenge. But first I want to become better at drawing. One thing at a time.


I don't mean this at all in a morbid way, but I've long suspected that our lifetimes are slightly too long for our current environment.

If you're a smart autodidact, you can get really far, really fast, to the point where most remaining challenges, especially in a similar field, feel isomorphic to ones you've previously solved.

If you like drawing, and it feels new, and you have enough money to support yourself...go for it! Don't feel guilty; you may well have exhausted most of what programming has to offer you personally.


> By that logic we should all be using punch cards.

That's a bit hyperbolic. High level languages have been around for literal decades and modern languages and modern programming feel mostly like they're just re-discovering or remixing old ideas. There's no reason to go all the way back to punch cards.


> That's a bit hyperbolic

It was meant to be, in response to OP's hyperbole that there's nothing new being created.


> It was meant to be, in response to OP's hyperbole that there's nothing new being created.

You are right. Sorry for not expressing myself clearly. To say that there's nothing new being created is hyperbolic, and it is not what I wanted to say.

I just wanted to note that the pace of change is too slow to keep things interesting for decades. I am sure that if I do not code for 20 years, I will need to learn new things to get back into it, as enough changes will have accumulated in that time.


I can't say i've got anything close to your experience, but for me programming isn't necessarily about the challenge of solving some programming problem, but using it to make things. I like being able to type words and numbers into a computer, give them meaning, and use them to create new things. I don't really care much about whatever's new or awesome. I just come up with things i want to make that are interesting to me and make them for the sake of it. It's using programming to make things that interests me, not so much the solving the challenges of programming itself.

For example, to use something you mentioned in another post: the A* and Dijkstra's algorithms. They're used over and over again because they work well. But that can be anything from a massive AAA open-worlder, to a simple 2D platformer, down to something like Dwarf Fortress. All of those are games, all of them use those algorithms for similar purposes, but each of those games couldn't be more different from the others.

It's similar to the way nearly all music is made from the same 8 notes and scales. But people don't usually learn to play music for the challenge of learning scales. People learn to play music so they can play songs they like or write their own songs.


Yea, I don't disagree at all!


Hold on, modern pub/sub frameworks are WAY nicer than dealing with CORBA. *shudder*


I was neck deep back in the days when the only acceptable API was SOAP. I still wonder if anyone actually designed their systems around that POS, or if they all did the same thing we did: wrap a sane protocol in umpteen layers of XML crap at the outer edge.


Can confirm. "Use this new framework and you won't have to learn to..."

Yes you will. It will just require more steps.


I know what you mean.

After so much time, the code, programs, libraries and OSes melt into each other and it all seems so... pointless.

Experimenting with things outside software, especially those which involve physical aspects seems to be the right choice. There's much more to life than ones and zeroes.


I've only been doing this for a little over ten years, not 25. Still, I find that though most problems are indeed quite boring, the solutions don't have to be.

It seems there are always new ways to write solutions, and I don't just mean picking a new language or trying to optimize performance. Some challenges seem impossible to ever fully solve: making the code more robust, easier to read, or just communicating the intent of every line clearly. (At least for now, my solutions tend towards making the code as declarative as possible.)
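As one toy illustration (not from the commenter) of what "as declarative as possible" can mean in practice, compare an imperative loop with a declarative one-liner that states the intent directly:

```python
orders = [
    {"id": 1, "total": 120, "shipped": True},
    {"id": 2, "total": 80, "shipped": False},
    {"id": 3, "total": 200, "shipped": True},
]

# Imperative: the reader must simulate the loop to recover the intent.
revenue = 0
for order in orders:
    if order["shipped"]:
        revenue += order["total"]

# Declarative: "the sum of totals over shipped orders", stated as such.
revenue_decl = sum(o["total"] for o in orders if o["shipped"])

assert revenue == revenue_decl == 320
```

Both compute the same number; the difference is only in how much of the line communicates intent rather than mechanism.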

I get that this may not interest everyone as much, and I wouldn't mind a genuinely new or hard problem to solve every now and then, but it's still something to consider.


> There need be no real danger of it ever becoming a drudge, for any processes that are quite mechanical may be turned over to the machine itself. https://wikiquote.org/wiki/Alan_Turing

i.e. you make libraries, frameworks, engines, tools, languages; develop new formalisms like Kleene algebra, relational algebra, longest common subsequence, Levenshtein distance.

Though it turns out this automation/amplification, at least of some tasks, is itself a very difficult, time-intensive, yet non-mechanical drudge...

Drawing is fun. What happens after 10 years?


Twenty-four years doing software development, and I have the same frustrations.


Uplevel yourself to solve a business problem. Maybe growth marketing, maybe robotics (think of all the menial tasks being roboticized now). Coding by itself feels boring when it's isolated from the actual need you're solving.


On the one hand, I agree with you. At some level the libraries and frameworks matter less and less. It becomes boring since you know what you want, and the only question is how to do it with the current tools.

On the other hand, I'm still fascinated by programming because it is the closest thing to magic we have: we utter some arcane phrases and a nearly unlimited set of things can happen. Most of Harry Potter's magic is not impressive compared to our technology.

I have not found that in other activities yet.

"It is still magic even if you know how it's done." - Terry Pratchett


I think there is always something new to learn. People spend their lives finding optimal solutions to problems at every level of the stack. Even just in computer architecture people are experts in developing specific modules like a cache, arithmetic logic, etc. I feel like you’re at a great point and you should be using the knowledge you have to build specialized systems.


If life's a game then it means it's time to start dual classing. Use your skills as a step up in another field? Even as a hobby maybe.


Started programming around 19 years ago, been doing it professionally for a bit over 8, and have been feeling the same thing you describe.

Aside from personal projects outside of work, I recently asked to be moved to our maintenance team instead of new development. It surprised my manager, who assumed I'd prefer new development, but so far it's been much more interesting to me.


Anything can (but not necessarily will) become boring eventually. It sounds like you might be ready for a change. I recently saw an article about a journalist who decided to change jobs. I cannot remember exactly how long she had been a journalist (a decade or so sounds right), but now, she is training to be a firefighter.


Sounds like you need to branch out more. There's more than one way to skin a cat, but have you tried cooking one?


Obvious question, but have you learnt a Lisp, or something like Haskell or Scala? They can introduce you to a totally new way of thinking, not just another language.


+1 for Lisp (Common Lisp, Racket, Scheme, etc.).


Get into data science/AI/ML. It’s similar enough that you’ll be able to make sense of it, but new enough that you’ll still be interested.


data science/AI/ML is just statistics with modern computers.


Have you tried changing languages? Keeping up with framework churn should keep you busy for the next 25 years.


> Have you tried changing languages?

Yes, and it is a good idea. I have moved between Java backend applications and C++ videogame jobs. The difference in language and domain keeps things interesting, and you discover that there is also a lot of overlap: you can apply patterns across very different domains. I have around ten years of experience in each.

I have also developed professionally in Clipper, Visual Basic, C# and even a little assembly. But nothing for as many years as C++ and Java.

> Keeping up with framework churn should keep you busy for the next 25 years.

Oh, yes. I just find it boring. Learning to do the same thing again and again with differently named functions loses its appeal after a while. I try to keep my knowledge close to the core of the frameworks and google anything more complicated than that. I suck at interviews where I get questions about how to do this or that in a concrete version of a concrete framework, though.


Java and C++ application programming are almost identical. (Well, Java is roughly a subset of C++; C++ is just Java plus extra work, because you don't have the convenient shortcuts that waste CPU and RAM.) There are many interesting novel languages for different problems.


> Keeping up with framework churn should keep you busy for the next 25 years.

I find it tedious. It's the same thing in a different way. The hot new framework doesn't do anything fundamentally different from the old framework. It's change but not improvement.


Have you tried learning Chinese?


I've been making my living as a programmer since 1986, though I've done some other jobs in between... I think "more than 25 years" covers my professional experience as well.

First, I always say, "programming is programming". People often get caught up in the application they are developing, thinking that it is new and exciting. However, the programming side of it is just programming for the most part. The number of hard domain specific problems you have to solve is quite small compared to the amount of code that you have to slog through.

As you say, languages and frameworks are only marginally interesting. React was interesting to me for a few days because it solved some problems in an interesting way. But I've used over 20 different languages in my professional career (and multiple different frameworks) -- almost one a year :-) For me, a lot of shops have a lot of legacy code in a variety of different languages and frameworks. I tend to specialise in legacy code so I see them all.

But there are definitely challenges -- it's just that they are challenges that most people don't see. One of the reasons I specialise in legacy code is that it's much more interesting for me. Greenfield development is about learning a new language/framework and arguing with people about "the way to do it right". Legacy is about taking code that everyone has already abandoned and making it good. That's hard.

I think it's a bit like drawing, actually. When you first start drawing, it's a big challenge: how do you get the proportions right? How do you draw hands? But eventually, it's all the same: How many hands do you need to draw? I often think about manga artists who have to draw 15-20 pages of manga every week: how many hands do they draw?

But it's not about the hands. It's about the composition. You can get the proportions all right. You can draw the hands perfectly, but it still might be an uninteresting drawing. People can teach you about the technical side of drawing, but it's hard to teach people about art.

For programming it's the same. Eventually you are just grinding through code and there is very little technical challenge. So why are all our code bases so awful? If you think it's because we're always rushed, imagine those poor manga authors cranking out 150 hands every week (as well as faces and feet and bodies), and some of those are masterpieces. Maybe we can't meet all the expectations of getting it done yesterday, but why can't we write good code day in, day out and end up with a masterpiece?

That's the challenge. I don't suppose I'll be able to master that in my career, but it's fun to try.


If you don't mind me asking, how do you specialize in legacy code? Do you focus on short contract work? Have you built relationships with local companies? I'm genuinely curious about your process.


I actually do long term contracts at the moment. However, I have found that there are usually opportunities in full time positions. I tend to look for companies that are just past the startup stage, and moving into that awkward stage where they have to scale, but everything up to this point has been based on heroic efforts of a few people. Usually there are gaps from people who have left and as the business expands, there is a lot of greenfield work that people want to do and nobody left over to do the legacy work.

Short term contracts can get you in the door, but I would try to negotiate up into longer term contracts because legacy work is slow. It's not so much that you can't provide a lot of value up front, but if you want long term benefits, it usually takes 2-3 years before you've refactored enough to make a big difference. If going for the employee route, just keep your ears open during interviews about systems that are messy. If you express an interest in working on those kinds of systems, it usually provokes both surprise and interest ;-)

I think the key for legacy work is that normally people don't want to work on it. There are almost always systems that people are afraid to touch -- to the point where you may even be forbidden from touching it. However, you can often find big wins with small changes for such systems because they are usually neglected for a long time. If you cautiously make a change and manage not to break things, then usually there is a huge pent up demand for more.

I think there is one other thing that I always have in mind for some reason. In Taoist literature they frequently make the analogy that water runs to the lowest point. In the same way, it's easiest to make good changes in systems that are the worst. If you are working on a very nice system, you have to spend a lot of energy staying on high ground. But if you are working on a very poor system, you spend no energy maintaining your position at all -- you just fall into the low ground. So paradoxically, I tend to look for companies and positions where there is something wrong. If there is nothing wrong, then there is nothing to fix and my job becomes quite a bit harder.


Thanks for the detailed reply.


I've been writing software for 14 years. I've done it professionally for almost 2.

There aren't many interesting problems professionally (and early-career you don't get many). I'm getting bored and stupid, but goddamn I'm well paid. I don't think I could remedy this without going for a PhD or switching fields.

My hobbies are mostly outdoors now.



