We haven't given up; there is simply not enough financial incentive to make the software any better. See 'We Could Write Nearly Perfect Software but We Choose Not to' https://blog.inf.ed.ac.uk/sapm/2014/03/14/we-could-write-nea... "The simple truth is that bug-free on-time software is just more expensive than we (or our clients) are prepared to pay."
The sad part is the author's example is easily avoidable and does not cost more to avoid. The fact that a developer was making manual changes in a production environment points to bad process and a lack of knowledge of a better way. In every environment I have been in since 2000, production is locked down; it is modified by scripts that are tested in several environments before the script even gets to production. With virtualization and continuous integration, the cost of doing it correctly is marginal, and the savings in avoided crisis and recovery are substantial. Therein lies the core issue: it seems more and more developers are losing focus on the process of managing change across environments and are resorting to manual processes that are not reproducible. The simple point is, if you are making manual changes in any environment other than a local dev environment, you are doing it wrong.
Source control should be your single source of change, and check-ins should trigger a chain of events via continuous integration and issue tracking. A check-in should kick off a build; if the build succeeds, it should deploy to an integration environment where automated smoke tests run against it. If it passes the smoke tests, continuous integration should up-merge to a test branch, deploy to test, and then update the ticket system to mark all associated tickets ready for test. When the testers verify the tickets, automation should again up-merge to release, and automation should release to production on a release schedule. This is a solved problem.
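The chain described above can be sketched as a gated sequence, where a failure at any stage stops promotion. This is a minimal sketch; the stage names are hypothetical and not taken from any particular CI tool:

```python
# Hypothetical sketch of the check-in-driven pipeline described above.
# Each stage gate returns True on success; a failure stops the chain,
# so nothing reaches testers or production without passing every
# earlier stage.

STAGES = ["build", "deploy_integration", "smoke_test",
          "merge_to_test", "deploy_test", "mark_ready_for_test"]

def run_pipeline(checkin, gates):
    """Run each stage gate in order, stopping at the first failure.

    `gates` maps a stage name to a callable(checkin) -> bool.
    Returns the list of stages that completed successfully.
    """
    completed = []
    for stage in STAGES:
        if not gates[stage](checkin):
            break
        completed.append(stage)
    return completed

# Example: a smoke-test failure halts promotion before any ticket update.
gates = {s: (lambda c: True) for s in STAGES}
gates["smoke_test"] = lambda c: False
print(run_pipeline("abc123", gates))  # ['build', 'deploy_integration']
```

The point of the sketch is only the ordering guarantee: each environment is reached exclusively through the automated chain, never by hand.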
> The sad part is the author's example is easily avoidable and does not cost more to avoid.
Except for all the costs of maintenance. It always costs more, generally in the way of time.
> With virtualization and continuous integration the cost of doing it correctly is marginal
Until you have to debug why the integration fails.
> This is a solved problem.
Even with unlimited funds, you get a cost of time and communication. There are tickets and tickets and a queue and a release schedule. The optimal path is not as simple as you fantasize.
> The fact that a developer was making manual changes in a production environment points to bad process and lack of knowledge of a better way.
I'm guessing this is relating to my SQL mishap in the opening lines? You're making assumptions that I was doing something manually in production here, and not that I was executing a script that was buggy or against the wrong machine.
Sure, we could build safeguards against this, but it wasn't something we really considered would happen. Needless to say, we've learned from that one!
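For what it's worth, one cheap safeguard of the kind mentioned is a guard that refuses to run a script against known production hosts unless explicitly confirmed. A minimal sketch, with hypothetical host names and flag:

```python
# Hypothetical safeguard: a migration/maintenance script checks its
# target before touching anything, and refuses to proceed against a
# production host unless the operator explicitly confirms. The host
# names and the --confirm-production flag are made up for illustration.

PRODUCTION_HOSTS = {"db-prod-1", "db-prod-2"}

def check_target(host, confirmed=False):
    """Return the host if it is safe to run against, else raise."""
    if host in PRODUCTION_HOSTS and not confirmed:
        raise RuntimeError(
            f"Refusing to run against production host {host!r} "
            "without --confirm-production")
    return host
```

A few lines like this at the top of every destructive script make "wrong machine" accidents an explicit, deliberate act rather than a typo.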
Great link and article. I've always thought the NASA story is truly fascinating. Knowing that your bug could kill the person sitting across the conference table certainly changes the attitude towards being careful with software :-)
As others have said, software is a young field, and in many ways we're still pushing and exploring the boundaries of what's possible with software. All of us are still very much "early adopters" in this nascent technology experiment / revolution. The nature of demand and the untapped potential of software systems create a financial incentive for high-stakes rapid experimentation, at the cost of the sloppiness (in attitude and in software quality) we've seen for as long as there's been software.
I do think investing a little bit more time up front and using TDD combined with good leadership / experience is ultimately healthy for your product and team, if you can manage it. However, we shouldn't forget that being "cowboys" got us to where we are today, and continues to break open exciting new opportunities for software.
This is misleading. Most of us couldn't possibly write significantly better software, even without time and money constraints. Money can buy existing talent, but it can't create talent where there is none.
Nonsense. We're in the software industry. We can use tools, and if the tools aren't good enough we can make our own. Writing correct software isn't a matter of talent, it's a matter of process.
Writing correct software isn't just a matter of having the right tools and processes, even if those can help. It's primarily a know-how issue. No amount of process can fix using the wrong algorithms or coming up with the wrong designs.
Suboptimal algorithms may lead to poor performance, but you can enforce correctness. If you let the requirements drive the design rather than trying to design up-front (i.e. good process), you end up with good design. Whether asking the right questions to gather the correct requirements is in scope is arguable, but again, it's primarily a process issue.
You can't always enforce correctness. Consider the case of a typical web dev shop that hires a developer who thinks escaping is a good solution to SQL injection. It might get snuck past code review if you're unlucky.
There are static analysis tools that can identify such problems ... sometimes ... maybe ... depending on the language and frameworks in use. But to deploy such tools you have to know about them to begin with. And most devs can't simply produce such a tool if there isn't one on the market because that's not what they're being paid to do.
Unfortunately, knowhow can't be automated away entirely just yet.
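The escaping-vs-parameterization point upthread is easy to show concretely. A minimal sketch using Python's sqlite3 module (the table and hostile input are made up for illustration):

```python
import sqlite3

# String-interpolating user input into SQL (even "escaped" input) is
# fragile, while parameterized queries pass the data separately from
# the query text, so injection is structurally impossible.

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")

user_input = "alice' OR '1'='1"  # hostile input

# Fragile: the input becomes part of the SQL text itself.
rows_bad = conn.execute(
    f"SELECT * FROM users WHERE name = '{user_input}'").fetchall()

# Safe: the driver binds the value; it is never parsed as SQL.
rows_good = conn.execute(
    "SELECT * FROM users WHERE name = ?", (user_input,)).fetchall()

print(len(rows_bad), len(rows_good))  # 1 0: the injected OR matched the row
```

The interpolated query matched a row it never should have, while the parameterized one correctly matched nothing.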
> Consider the case of a typical web dev shop that hires a developer who thinks escaping is a good solution to SQL injection. It might get snuck past code review if you're unlucky.
> There are static analysis tools that can identify such problems ... sometimes ... maybe ... depending on the language and frameworks in use.
Analysis tools are the wrong approach - that code should be simply impossible if you're typing things correctly, and inadequately typed code should be very obvious in code review.
> And most devs can't simply produce such a tool if there isn't one on the market because that's not what they're being paid to do.
Most devs could write such a thing if the company told them to. The reason "that's not what they're being paid to do" comes down to process.
> Most devs could write such a thing if the company told them to. The reason "that's not what they're being paid to do" comes down to process.
Most organizations barely have enough of a budget to implement the applications they actually need using already existing infrastructure (frameworks, tools, etc.). Asking these organizations to roll their own infrastructure is like asking ordinary people to run their own water or power utility.
I meant more that you need to let the things you encounter during implementation drive the design. This is particularly relevant when requirements change (as they often do), but even if you can nail down the spec perfectly up-front, trying to design at that stage is still a mistake.
Is there some reason to believe that with enough money software could be bug-free? Developers are often guilty of refactoring for no particularly good reason other than their personal sense of aesthetics, which can lead to endless (and destructive) refactoring. More money also makes people lazy about design, so that instead of finding clever ways to reduce effort and improve uniformity, you just have everyone make their own forms.
It isn't a practical solution for the majority of software out there. I also think that while their method works, it is pretty archaic. Looking to the future I hope we can find a way to achieve the level of results they do, without the ridiculous effort and cost.
I work on safety critical systems. At least 80% of the work for building such a system is not critical for the actual quality of the product, but is necessary to satisfy compliance rules. Documentation of the development process is really extensive.
> Looking to the future I hope we can find a way to achieve the level of results they do, without the ridiculous effort and cost.
That is unlikely. For computational complexity theory reasons, writing correct software is extremely computationally expensive (regardless of whether or not the language is Turing complete). In fact, it is "the hardest problem in computer science", in the sense that any problem with bounded complexity can be efficiently reduced to software verification. Even verifying finite-state-machines (the simplest computational model) is PSPACE-hard.
What we can do is find many ad hoc ways, each helping to some extent with some kinds of correctness properties.
Most software problems I encounter in the real world ARE of bounded complexity, and they can be reduced to software verification. But this is still a very tedious process and there are significant gains to be had by making this more accessible without having to solve the hardest problem in computer science.
Certainly. It's just that there cannot be one (or a few) advances that would make writing correct programs generally easy. We can concentrate on domains (in the safety-critical embedded world this is almost a solved problem for some kinds of applications thanks to the invention of synchronous languages in the '80s, that make formal verification relatively efficient). We can also address some "internal" properties, like memory safety and transactions, that on the whole might make writing correct programs easier (though not provably correct).
It probably could, and in testing it did; they just decided not to find out whether it would really work properly when seven people's lives depended on it working. If you can practically avoid an edge case you haven't ever relied on in decades, not risking it when people's lives depend on it doesn't seem that crazy.
Actually, they could, but given that the shuttle is not much different from strapping equipment and people to a gigantic bomb, they seem to have decided that, even though exhaustive testing found no issues, a small change in the schedule was less risky.
In case of NASA, it's not only because of money though. It's because of a very rigorous process of designing and testing their software, sufficient time to follow this process, as well as writing a single product for a single customer. Of course money enables such process, but alone it is not sufficient.
Enough money and the goal of making software bug free can make it bug free.
But enough money and the goal of making it infinitely flexible, having all the possible features, or the most perfect UX, which a team cannot even agree on, will only make the software more bloated than is reasonable, and fill it with bugs.
I think "bug-free" software is a distraction. It's fine to make the occasional mistake. The real problem here is that nobody seems to take these mistakes seriously. Everyone seems to expect software to be occasionally broken, including the people writing it.
Aren't your second and last sentences in conflict? How could we not expect software to be "occasionally" broken, if it's "fine" to make the "occasional" mistake?
Of course, making mistakes is only human, but mistakes should never get past the build process (compilation, automated testing). Any mistake that does reflects an error in your design (not making your code amenable to verification) and/or your process (not capturing requirements in tests).
Yeah, using "occasionally" in both places drew a parallel that I didn't intend. We can expect people to make mistakes without accepting broken software. As you say, there are processes that can reduce the impact of mistakes on the final product. I think a culture of taking issues seriously is also important. Small mistakes are easy to make, but they're easy to fix if the organization has a good way for them to be reported and allows people to spend time fixing them.
> that with enough money software could be bug-free?
According to empirical studies, practicing TDD leads to a 90% (yes, that number is correct) post-ship bug reduction at an upfront cost of 15-35% more development time.
Which seems like a pretty good tradeoff, to me. (And I've experienced it firsthand.)
Some of the above is empirical (read: scientific, no-bullshit) data.
My opinion on it has grown to the point that I think TDD should literally be inseparable from programming, and the two together should simply be called "programming." Lacking any unit tests whatsoever (TDD or otherwise) should be called "taking stupid, extremely hazardous risks to save a little time, like reading your phone while driving."
I've not found a programming task in at least 5 years that wasn't waaaay better-written when done via TDD. Some things, like IO, can be a challenge to unit-test, but that's why we have great ideas to solve that like Gary Bernhardt's awesome "Boundaries" talk https://www.youtube.com/watch?v=yTkzNHF6rMs
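A toy sketch of the TDD rhythm: the test is written first and fails, then the minimal implementation makes it pass. The function and its spec here are hypothetical:

```python
# TDD in miniature: the test exists before the implementation and
# defines the behaviour; the implementation is the least code that
# makes it pass.

def test_slugify():
    # Written (notionally) before slugify existed.
    assert slugify("Hello World") == "hello-world"
    assert slugify("  a   b ") == "a-b"  # whitespace collapses

def slugify(title):
    """Lowercase a title and join its words with hyphens."""
    return "-".join(title.lower().split())

test_slugify()  # red -> green: this now passes silently
```

The payoff isn't the tiny function; it's that the spec is executable and every future refactor gets checked against it for free.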
> Is there some reason to believe that with enough money software could be bug-free?
Yes. It's pretty easy to write provably correct software, just expensive.
> Developers are often guilty of just refactoring for no particularly good reason other than their personal sense of aesthetic, which can lead to endless (and destructive) refactoring
Not my experience, but even if so, if you require your software to be proven correct then it won't matter - any refactor that breaks it simply won't be accepted.
> More money also makes people design lazy, so that instead of finding clever ways to reduce effort and improve uniformity, you just have everyone make their own forms.
Good point. But choose the wrong hardware and you can never have provably correct software. E.g., the Intel x86 ISA makes all x86 processors impossible to prove correct (there are 2^120 possible instruction encodings; the universe will suffer heat death before anyone could verify all of them in an implementation: http://www.emulators.com/docs/nx06_rmw.htm )
Shrug. We have provably validated diodes AIUI. If you can get provably correct basic components you can build up complex components using the same techniques as in software.
> Developers are often guilty of just refactoring for no particularly good reason other than their personal sense of aesthetic, which can lead to endless (and destructive) refactoring.
There are lots of good reasons to refactor - if indeed frivolous refactoring is more prevalent I would think this kind of opinion should be supported by a source/reference.
Economics is only a part of it. Often, poorly built software increases costs for the company that built the software, like when customers call the helpline to get an agent to do what they could have done on the website. Or, when I call the helpline and the agent asks me to call again because their software isn't working today.
As another example, when I was logged in to my mutual fund website, I found a button that said Get Statement. Since I wanted to download one, I clicked it, and it said, "Your statement will arrive in the post soon." They sent a paper statement without confirming that was what I wanted. When it arrived, I threw it away, of course.
In many cases, poorly built software increases costs for the company as well, so financial incentives are only part of the story. Incompetence is another part, maybe a bigger one.
Considering the upfront cost has just as much to do with economics as considering the overall cost of high-quality software does. Economics is a description of human behaviour, with all the irrationality and incompetence that comprises it, so decisions about short-term vs. long-term gain are very much included in the study of software from an economic point of view.
The problem lies with the amount of financial resources available to software users. Users can afford buggy software. They can't afford non-buggy software except for a very small number of uses.
Often the buggy software turns out more expensive. It can be a false economy not putting that extra time in to ship stable software.
If you ship something that loses data and you have to start restoring backups and patching data, it probably would've been cheaper to do a little extra testing.
Like everything; it's a tradeoff. I think we're universally bad at playing the game!
The high cost of bug-free FOSS is the time the developers can volunteer to develop it, which takes away from time spent on additional features. The developer's time is a limited resource that has to be spent where it is best utilized, and making the software bug-free might not be the right place.
Yes, good point. In both cases the limit is time. How do you think one should decide on what qualifies as best utilized? What would count as additional features? Does documentation count as a feature in software?
I see what you mean and am often equally frustrated, but you know what this reminds me of? The state of infrastructure in a booming underdeveloped country, which is a good thing. Let me explain it a bit further.
Have you tried hiring a programmer lately? It is very hard, there is a huge demand and most programmers I know receive several offers a month. The demand for software is CRAZY. So we all do what we can: quick and dirty when it is good enough. Just like in a country with no roads, any dirt road and crappy pavement is better than nothing if you have hundreds of trucks that need to go through RIGHT NOW.
So here it is: websites are made hastily, tech half works, but it's better than nothing. How many of the things you screencapped are more than 5 years old? Like you said, we are software developers. We write software and we write bugs. Right now, there is far more need to implement new features than to correct bugs. Hopefully it will change at some point, but right now this is the crazy race forward, and that is a good thing!
> most programmers I know receive several offers a month
:'|
Could you take a look at how they're receiving offers and perhaps put some suggestions together on how to duplicate that effect? Most programmers I know have had extreme difficulty getting a job, especially one that would pay at-or-above the average rate (per Glassdoor).
Programmers can do very well by becoming contributors to high profile open source projects and standards committees. Post pull requests on github, write tech articles, answer tech questions on reddit and hacker news, and do it under your own name. Register yourname.com and put your resume there. Essentially, make a name for yourself instead of waiting for someone else to do it for you.
Open source involvement is overrated. Yes, companies say they value it, but you can't listen to what they say; instead, you must look at what they actually do.
Are candidates with copious open source contributions getting hired primarily because of those contributions, or at the very least being spared the indignity of the white board and trivia questions during interviews? In my experience, no. Interviewers generally don't care, or perhaps their process is too rigid to admit the deviation that caring would require. In fact, when pressed, many will even admit outright that they don't care, claiming (as I've seen here on HN) that they have no way of knowing for certain that you're the true author of your purported contributions or that your contributions alone can't really demonstrate how you write code (like a white board presumably can).
The only reason they value open source contributions is that it amounts to free labor, and it demonstrates "passion"--a quality that they associate with susceptibility to exploitation.
If you contribute to open source, your name and work are out there. People can find out about you and see some of your work.
There are other ways than contributing to open source too. For example, I've got a number of offers because I run a local programming language meetup.
Basically anything that puts your name out there will help get offers. If nobody knows you exist, they can't offer you jobs. That, I think, is where open source can help a lot.
The job I'm at currently, the CTO skipped standard technical interviewing and went right to culture fit. He said explicitly that was because he had seen me giving a technical talk and read through my Github.
Some of that may depend on company size, but in my experience (mostly small startups) it has been a huge benefit.
I'm reminded of the old joke of the advertising executive who said: "I know I'm wasting half the money I spend on advertising, the trouble is I don't know which half."
Can you think of any other profession that does anything like this? No doctors or lawyers or accountants or civil engineers or pastry chefs find work by posting on Reddit.
And besides, that is very specific to web jobs. There's an awful lot more to the industry besides.
As a non software person (civil engineer but going in for a CS masters), this is my biggest gripe when it comes to speaking with developers about their jobs: they lack perspective.
I can assure you that the bar for entry into my field is much higher than in software. Like medicine and law, engineering requirements are strictly set by state law and professional boards. You need at minimum a bachelor's in engineering to get a foot in the door. There is no self-studying and there are no civil engineering bootcamps. Afterwards, you need to become an engineer in training (EIT) by taking an entrance exam. Four years of experience later, you can sit for your professional engineer (PE) exam to become a licensed PE.
So now you're 8 years down the hole, finally licensed and ready to actually practice in a professional role. The kicker is that you're still probably making less than a fresh CS bachelor's grad working at <foo> corp in a tech hub.
I'll stick to posting on Reddit and contributing to projects.
If I am looking for a surgeon for an operation, it's enough for me to know that he was trained at a good school and has enough experience -- for that a bio online is already decent. I trust his education, after all the field of medicine is one of the oldest in existence.
There is no such trust in the education of software developers, perhaps because software development is such a new field. I've worked with great ones without a formal education and terrible ones that graduated from good schools. The way to 'prove' yourself to someone when they can't work with you (e.g. first stages of a hiring process) is to demonstrate your work by using technical articles, GitHub commits etc. Also, that stuff is not specific to web devs, I'm sure you can find every kind of project under the sun on GitHub.
Aren't they? I don't know much about musicians, but graphic designers are expected to bring their portfolio with them to the interview. There's no interview at all without the portfolio... and you're not always lucky enough to have all your best works paid for by someone, especially if you're just starting.
EDIT: and, come to think of it, don't musicians need to practice for a long time before even getting a chance to perform for pay?
Hrm. Musicians play because they enjoy it. I know many people who, despite or regardless of success, will play just for the enjoyment of it.
Programming is similar. You hear the stories about how some tech founders focused their energy on programming, it took off and they dropped out of wherever.
I'd argue that for every one of those, there are scores of people who do it just for fun like they did, get nowhere but do it anyway because they like it. Just like there are scores of guitarists for your Clapton, Hendrix, etc.
Open source contributions to me look a lot like people, for the most part, enjoying themselves and getting better at a hobby.
Having family members who are all three of those... yes. They, unfortunately perhaps, are. They all have to have portfolios of some kind to prove their ability.
It perhaps depends on the specifics of the jobs you go for. For example, if you're a musician producing audio for eg film, then you definitely are expected to have a portfolio, but if you go out and play for an audience, then you probably don't.
I can't think of any profession where this kind of personal brand management and contributions to the public corpus of [whatever domain] knowledge wouldn't be a boon.
Perhaps not by posting on reddit, but they have other ways to raise their profile.
Doctors have conferences, Chefs have "Chopped", and those are only the "mainstream" ways the general public knows about. Most others you mentioned have industry/trade magazines for example.
I've seen a lot of artists and craftsmen post their work on Reddit, and they're clearly marketing their names.
I suspect the other professions would benefit as well, it probably simply hasn't occurred to them yet. It reminds me of when I was watching a bodybuilding competition in the 80's and thought the posers could really use some advice from a professional dancer. Sure enough, this occurred to one of them, they did a routine that blew away the others, and then the rest of them did.
I think you are just not very familiar with some of those professions. Lawyers frequently take on pro bono cases to get themselves publicity. Pastry chefs give out free samples of their work all the time to get customers. In basically any competitive profession some people will take on some non-paying work if they think it will benefit their career.
But there is not the expectation in those professions that they will give up all their free time to it. No lawyers are suing people on their own time just for their own entertainment, I guarantee it. But programmers are expected to do their day jobs then go home and do more programming...
You can land a job in cranking out CRUD websites without any kind of side-projects. Hell, with just a little bit of luck you can land such a job without any kind of experience or education.
If lawyers want to rise to anywhere near the top they need to meet such stringent billable hours requirements that they will have essentially no free time in which they could sue people.
> No doctors or lawyers or accountants or civil engineers or pastry chefs find work by posting on Reddit.
Is it good (for lawyers, accountants, etc.) or bad in your opinion?
It's true that the widespread expectation that a programmer will be programming (or doing related things) in his spare time is slightly unusual compared to other professions. Personally, however, I find that expectation natural and I would do it - write code in my spare time - anyway.
I also know for a fact that many companies go to programming conferences to look for people to hire. Attending those conferences is a great way to network. Even better is to present at a conference.
I've also seen a lot of recruiters going through GitHub and contacting people that way lately. People with public source on GH tend to be really passionate devs because they're frequently spending time outside work doing dev.
Maybe the difference is between just contributing to open-source, and contributing highly-differentiated stuff to open-source, such that your name(or your creation's name, and not something that you just helped in a bit) gets passed around ?
Honestly, the first step is: have a job at a well regarded company.
There is no stronger signal that you're a good developer than already having a job at a company that has stringent hiring standards.
I get multiple serious recruiter contacts a week. Mostly via LinkedIn, but also via direct email from the enterprising recruiters. All from companies most people would kill to work for. I even get LinkedIn email from LinkedIn's own recruiters.
I ignore most of them. They know it's part of the game. One day in the future I may need them, and there's no point in pissing them off - they're only doing what they can. You gotta have a huge funnel to hire even 1 person (it's probably 1000:1 for contacts/hire)
This is absolutely true and was how I found my dream job or more like how a recruiter found my dream job for me. I'm using Xing since I live in Germany but I think it should work similar on LinkedIn.
I can understand that if you are starting fresh you won't be able to be already employed by a well known company. However you have to start somewhere and it's important that you get some experience in the field you want to work in. Obviously this depends on the job you are applying for whether it's a junior or senior position etc.
I think it's important that you don't have big gaps in your CV and can show or present strong social/soft skills. One of the most important things for me is to be honest. Be absolutely honest about what you can and cannot do. Don't flood your profile with all the latest buzzwords and technologies if you have never worked with them before. This might attract a couple of recruiters, but they will most likely be the black sheep among them.
Be serious, honest, friendly, open minded and just yourself. :)
Thanks for mentioning Xing - I might be looking for something new around the beginning of next year, and while I enjoy working remote (in the US), my wife and I were just talking about how a job in Europe might be worth moving for :)
But of course, I'd only ever heard of linkedin as "the" professional networking site - I didn't even realize they had competition!
True. I interviewed once and they told me flat out that they didn't really read my CV but they saw I was hired at a big company and they liked that.
People are just people, no matter what role they have in that moment, and most people are susceptible to being impressed by superficial things (like your current company name).
Also most interviewers failed the interviews I was in... :)
Makes you want to give them a joke CV with a section "Companies I have not worked at yet" and just list some buzzwords like Google, Apple, Facebook, Microsoft, etc.
This is already highly contentious. Some consider startups who produce write-only code fast to be a joke, some consider slow moving behemoths like Microsoft, Apple or IBM as less than ideal.
1. Where are you/they? I got a ton of recruiter spam when I lived in SF and a fair amount now that I live in Austin (the majority of which are companies that want me to relocate back to SF). I received 0 when I lived in El Paso, TX (where I'm from). If you're in an area that doesn't have a hot tech market, you're likely going to have a harder time finding a tech job.
2. What technologies do they specialize in? I see lots of job posts for Rails and React and relatively few for COBOL. I see a lot of jobs for Java, but a higher percentage of those are of the lower paying variety. This is likely location dependent and is a bit of a catch-22.
3. Are they any good, but perhaps more importantly, are they able to demonstrate that they're any good in an interview setting? There are a fair number of truly bad programmers out there. There's also a fair number of programmers who may be good but are unable to demonstrate that in an interview. Interviewing is hard to get right -- everyone has their own take on it, and it's rarely backed up by actual data.
The people you see getting tons of offers probably have the right combination of location, resume, and interview ability. If I had to pick the single most important factor in getting a lot of offers, it's location. If you are in an area with limited tech jobs, the number of offers you can get is obviously going to be limited as well.
Maybe I'm being overly literal, but do some people just receive offer letters without going through an interview process?
Sure, I get lots of recruiter emails, but almost all of them are sent identically to 100 other people as well.
Most emails I get come from external recruitment firms, only sometimes does someone at a company reach out to me directly. But even then it's only an invitation to apply, it doesn't mean I'm actually especially likely to get the job.
I got a no-interview, no resume offer for a contracting gig simply off the basis that my LinkedIn showed that I worked at "reputable company." I couldn't believe it -- but the contract worked out and everyone was happy. Social proof often means more than a resume or even Github.
It happened to me recently, the offer was from someone I had previously worked with who just knew and trusted me.
(I also get about 1-2 recruiter spams a week and occasionally one that seems genuinely tailored to me specifically. This might be what the OP was calling "offers".)
Yes. I was taking my wife out to dinner once and a recruiter stopped us (me) in the street to get me to work for their company, technically I interviewed but it's kinda just a formality when they want you that badly. Apparently a former co-worker said I was a genius.
In my community, most programmers receive several inquiries a month (or week, or even day)... but saying there are several offers a month would imply they are actively interviewing enough for inquiries to become offers. The "several offers a month" is an exaggeration of reality that I hear often, but when you really dig into it is just that... exaggerated.
Thank you for the clarification - this is exactly my experience too, and that of most others. I don't think I've known anyone in the past 20 years of working who consistently received multiple job offers per month, especially when they weren't interviewing.
Most of the inquiries I know of (for myself and others) are generic recruiter spam, or word of mouth referrals by people looking for really cheap work. So yeah, there's lots of work out there to be done, but much of it is in far less than ideal situations.
I do receive several "inquiries" per week, mostly due to having left my CV (obviously now very outdated) on many job sites when I was looking for a job in London almost 10 years ago.
They are of the type "Dear [Name], I hope you are well. Would this role be of interest for you in your current situation? [role description]. If not, do you know some friend of yours that would? Best regards, [name of person totally unknown to me]."
This is not even a serious inquiry, much less an "offer".
I get random recruiters poking me on LinkedIn. Only about 1 every couple months. But I'm not looking for work and not a big user of LinkedIn. I'm in Auckland, New Zealand and have a bunch of experience if that matters.
Hey, from what I can tell, Auckland is still not a great place to work as a software developer. I would probably recommend looking at some companies in Wellington or Christchurch, or becoming a freelancer for clients in the US.
I don't have a big name on my resume (unless you count freelancing for EFF). All I have is a public portfolio and a well-manicured LinkedIn profile, and I get several cold calls or emails from recruiters per month.
Post on LinkedIn, add recruiters as connections, look at what jobs they are recruiting for, add those skills to your LinkedIn profile. Wait for them to find you.
The only programmers I know who have a hard time finding a job are those with a very narrow specialization, typically in some web technology.
People who focus solely on ASP.NET without an understanding of the underlying tech will have a harder time finding a job than someone who knows several languages and understands how a computer works.
I live in the South of France and my LinkedIn profile says so. I get at least 10 recruiter emails a week, with 3 of them being for remote positions. Geography is not a huge factor; it's about the skill match.
And I've received 17 this week with difficulties finding jobs still. Mostly within Chicago, but a lot are remote - what's your point? You're still oversimplifying the issue.
i am very good at what i do but when software engineers try to act like they know how to hire people, you have problems. i have been interviewed by a company that prided themselves on hiring smart, diverse people who weren't necessarily experienced in a certain field and language. they literally called this out in the job description.
the interviewer proceeded to ask me a deluge of very specific questions about said language, including implementation details of the language itself! all of the questions could have been looked up by an intelligent person online within minutes. and this was a well established, small, but well known company.
all of my software interviews can be summed up with two words: algorithm questions. yet my skills lie in architecture, writing bug-free code, general design skills, testing, UI dev and client feedback, etc. none of these skills have ever had the chance to be discussed in an interview. i came from another field and haven't spent a lot of time in the algorithm space. i can work through them as needed in interviews just as i do in a job, but that is rarely cared about in interviews. people want binary answers.
It's often possible for you to drive the interview rather than waiting for the interviewer to ask your questions. Bring up and expound on your skills if they're not asking the right questions.
Whether or not I get an offer, I tend to enjoy the process more when I try to be collaborative in an interview, rather than waiting to be asked questions. I naturally want to show off some stuff and ask questions - if even that doesn't go well, it's probably not going to be a good culture fit, regardless of whether I can do the raw work.
I've worked with programmers from many of the big names (amazon, apple, facebook, google), and I've met quite a few programmers at least as good in little out of the way companies. I think recruiters massively overvalue having one of the big names on a cv.
Then again, if you do have a big name on your resume, it lets the recruiter reasonably safely assume you can program well enough. That sort of confidence in a candidate is hard to overvalue.
There are lots of things that can give you as good or better confidence that a candidate can program well enough, and a big name on a CV is no guarantee either.
How do you signal goodness, though? I wound up with a cushy position (largely by luck), but practically everyone else is barely scraping by. Side projects? Prestigious internships? l337 hax0r skillz? Maybe the 80/20 rule [1] applies and there are only a few, key factors that make some applicants stand out?
There is a drastic oversupply of programmers, and I firmly believe more than half of us should not be in software development at all. I think you have a choice to make as a programmer:
a) do something interesting
b) do good/high-quality work
c) be excellently paid
- pick two. Unsurprisingly, most people choose high-paying jobs that sound interesting, resulting in shitty bloatware from hell. Of course, many programmers don't even get to choose at all, they can call themselves lucky if their job has one of these benefits.
>there is a huge demand and most programmers I know receive several offers a month
You mean recruiter spam? That's not a job offer. At best it's a liberally-dispensed invitation to submit yourself to some company's drawn-out, degrading, and ultimately capricious interview process.
The "huge demand" is for 20-somethings who graduated from top schools who are "passionate" and can be over-worked and under-paid.
I think it's fine we are using what we have to make things work today. But it seems like very little research and development is being performed to build a brighter future. Computer science is a field seemingly led by the loudest voices, not necessarily the brightest. It feels like OS, language, database, and other computer science research is at an all time low and instead is being driven by ad-hoc open source development for better or for worse.
Money is the loudest voice, every time. And nobody has lost a single dollar because OS or database research is lacking.
Even a major revolution in database tech would maybe reduce costs, downtime, or latency by 10% or so, max. Since that sort of expense is never the dominant factor in a company's success, it will never be worth serious commercial investment. And as far as academia goes, there is research happening, it's just a slog because all the low hanging fruit has been picked and now people are just optimizing. Fields always slow down after the good stuff is discovered.
IMO AI is the only pure CS field worth studying right now, that tree has barely been touched at all. The mundane stuff like OS, databases, and languages have been studied to death, and unless an Einstein comes along, there's not going to be any serious motion unless the AI nut gets cracked first.
There is a saying in professional kitchens: you can make the best soup in the world, but if it takes you all day to make it, you will never make any money in this industry.
That's probably true only inside the valley; otherwise there would be a market for contract-based remote coding, a la "pay per GitHub commit (which passes all tests)".
Back in the 90s one could easily land a job or a subcontract if one was capable of doing the work. I used to be a UNIX system administrator and Informix DBA, and there were literally tons of requests for complex database server setups and performance tuning, even in such a shithole as Russia. That was the market.
Nowadays, except for a bunch of valley startups and megacorps, which routinely select top graduates from Ivy League universities (as we have seen in the now-famous salary disclosure posts), there is no real demand for programming. There is no demand for high-quality components or specialized solutions outside established, insanely bureaucratized ecosystems such as Java EE, MS, etc., which allow middle-aged mediocrity to collect its salary.
Suppose you are a freelance programmer with that golden classic CS education (algorithms, data structures, programming languages). Are there any offers besides Joomla theming, JS, or occasionally Android coding? Nope. All the remaining software engineering action is inside startups or megacorps, and even that is usually nothing but piling layers upon layers of Java (or nowadays Node) crap, which can't even be politically correctly called "over-engineered", because it is simply arrogantly stupid.
The mantra "write your own Nginx first" does not work either. Igor had a nice sysadmin's job in an oil company, so he was able to spend a couple of years on prototyping and slow, knowledge-based engineering, as opposed to fast and "productive" copy-pasting in an IDE.
If there is a real demand, there must be the market for it. Have you seen any demand for good stuff (Erlang, Haskell, CL, Golang, kernel-quality C)? Me neither. What we see is mere evolution of valley's sweatshops for top graduates we call startups.
I remember what software, and the internet, used to be like 20 years ago. My user experience, and my expectations, have improved in almost every way imaginable during this time. From a 10,000-foot view, I'm extremely pleased with the way things have changed in the past 2 decades.
To answer your question with a question of my own: If you think that software/service X sucks, why not see this as an opportunity to do something about it? If X really does suck, and if the reason X sucks is because it's being designed/managed all wrong, you could make a ton of money for yourself by building a company around building a better X which doesn't suck. Build an alternative that prioritizes reliability over agile/fast-releases/new-feature-rollout, or whatever you think the problem is.
If you're right, if users genuinely care so much about reliability, if reliability is important enough to sacrifice feature-experimentation, time-to-market and development-costs, then you should be able to achieve great market success and win over the current unreliable dinosaurs. More generally, some other company/startup that espouses the above reliability-centered philosophy should be able to enter the market and start dominating it.
The fact that neither you, nor anyone else, has killed off the companies/services/products that you're complaining about, leads me to suspect that users in general are willing to give up some reliability, in exchange for other benefits like low price and novel features. I know I certainly do.
Blaming users is part of the problem. Yes, they vote with their wallets, but their vote is usually made in ignorance and is often misled by companies so used to dissembling and exaggerating that they call such antisocial behavior "best practice".
That said, it is true that this is largely a problem with the economic incentives. Capitalism optimizes for businesses that are financially efficient, so the business that sells the lowest quality product they can get away with is "successful". This becomes even worse in software, where quality is harder to see directly. Even the people that write software can have a hard time evaluating "quality".
The solution for this situation is simple, but it's basically taboo to talk about it: liability. If you sell software, you need to be liable for any damage it causes when "used normally". For a decent sketch of how this might look, see Dan Geer's explanation[1]. There may be other ways to implement liability. I suggest that the software industry should find a way to implement this as soon as possible, if they want any say in what "liability" means.
Yes, this will raise development costs; spending more for better development practices was the goal.
While custom software typically comes with guarantees (if it doesn't work, the provider is generally liable to fix it for free; if it's not "finished" on time, it may even pay damages, determined by contract), shrink-wrap software (free or proprietary) generally comes with a nice piece of text saying that if it shreds your hard drive, or lags so much it makes you mad enough to throw yourself out the window, it's not their fault.
Liability towards one customer is also not the same as liability towards thousands, or even millions of users.
"I remember what software, and the internet, used to be like 20 years ago. My user experience, and my expectations, have improved in almost every way imaginable during this time. From a 10,000-foot view, I'm extremely pleased with the way things have changed in the past 2 decades."
I have to agree wholeheartedly with this statement. I'm relying on software to basically help me run my life. Thanks to software, I can accomplish more than I would otherwise. (OTOH, I also take on more, which is a whole other post on making ourselves crazy by trying to do too much.)
Still, there is room for improvement. The problem is that no single point of failure exists. It's a layered problem involving economic incentive, unskilled people jumping into development to shore up the shortage of labor, an immature consumer base, a dizzying array of tools, methodologies, and standards for developing software, etc.
At present, I don't see any way out of the dilemma.
If you think that software/service X sucks, why not see this as an opportunity to do something about it?
A lot of the examples in the blog post are about sucky corporate websites. You aren't going to set up a competitor to IKEA, let alone an electricity company, because the quality of their IT is poor. That argument works in only one case:
• The product is pure software
• It doesn't have any barriers to entry
• The quality of the incumbent is so dire, and the chances of improvement so low, that it makes sense to replicate their entire product investment just to "do it right this time"
Yeah, that's basically it: software sucks for economic reasons. It's insanely expensive to create good software, and even if you spend a lot of money, you're not guaranteed to get it.
I think there are some tragedy of the commons situations though, where everybody is relying on extremely underfunded common infrastructure. It was only after a bunch of expensive security holes that the users of this software started to pay more attention.
> If you think that software/service X sucks, why not see this as an opportunity to do something about it?
I try very much. I try to report any issues I find like this (I spoke to 5 different people at NPower trying to explain their issue, and nobody cared or understood).
I have a job/family/life. There's only so much time I'm prepared to put into trying to make silly things like these better, and I'm certainly doing far more than most! :)
What I do in this "war" is constantly reminding and educating my coworkers that they should care more and also teaching them HOW they can do that.
I have already had several very hard and harsh fights with my bosses, even with the CEO, telling them "everyone is not very good at this company", because I care about quality very much and would rather fight over it than sit quietly and just produce shit.
I gave "talks" about specific topics, and I constantly grab the opportunity to tell them about a new concept, Clean Code, better tools, whatever. Over the year I have been working at my current company I introduced TDD, CI, and automatic deployments, and will introduce CD next month.
If you are one of the better developers, you can and should fight against laziness and low quality, and even teach the ones who care/know less.
Also, it's very comfortable to blame your boss, but you can do a lot about this yourself. I introduced automated testing not because they asked me for it, but because I thought it was important. I educated them that this way development will take a bit longer, but will be of higher quality.
You can even do things which improve quality without them even knowing about it. My next step is to introduce third-party services (which they never used before, always going with the open source or cheap solutions) that make our work easier, so we can focus on what's important and make it better.
> If you are one of the better developers, you can and should fight against laziness and low quality
This can be thought of as a type of error correction. If you don't fix problems as they rise up the chain of command, those problems eventually become policy.
I can't agree with this more. Apathy and clock-watching have done more to damage our industry than almost anything else. It shocks me when developers are willing to do the bare minimum, scrape by, and avoid arguing to make things better.
Much like the OP, I spend a great deal of time trying to guide my clients to work in ways that guarantee better quality. The mantra in a lot of these places, though, is 'Features first, feature bug fixes second, other improvements never'. Without the folks delivering the software (everyone from QC, DBAs, DevOps Engineers/Ops, and Developers) pointing out the inherent danger in this, nothing is going to change.
I totally agree, and since I became team leader where I work, I like to think we've made MASSIVE strides in quality. We've moved from CVS (lol) to DVCS; we review every check-in; we have build servers and perfectly matching local QA and production staging environments. We devote time to fixing bugs and eliminating technical debt.
We're still far from perfect, but we're definitely on the right track. It's possibly because we try so hard to improve that I get so frustrated when big companies appear not to :(
I don't have that many people around me who are worth influencing directly, so here is the way that I do it:
1) I divide up the TODO list and customer requirements into the 'Little Things' and the 'Big Things'.
2) Customers / Managers / Users seem to love the 'Little Things', and feel like they are getting great value for their dollar when so many of the 'Little Things' can be delivered in a relatively short time; it looks like great progress to them.
3) I always do the 'Little Things' at the Customer's site, in the Customer's presence. I always do them during the day, and I get the managers involved as much as possible. Be a good corporate citizen, and get paid on time.
4) Doing lots of little things at an hourly rate is a good way to generate cash quickly, and get paid on time. It's useless in the grand scheme of things in terms of software development, but it is needed to keep everyone happy and ensure that you can pay the bills on time. As you can see, this is all about cash, fast cash, and smiles and happy customers all around. It can be borderline degrading at times, but there you go. Milestones and Invoices and Smiles, and more Smiles and Cash.
5) I don't even bother trying to explain the 'Big Things' to these same Customers / Managers / Users, because it just plain scares them. They will never believe that software can ever be that complicated or difficult. It really scares them, in fact. They want their world to be manageable, understandable, easy to estimate, easy to achieve ... they want to feel like they are 100% on top of things and in control. So I just don't talk about the 'Big Things' at all. Peering into the void that is the Big Things is the quickest way to turn a Happy Customer (who pays on time at a good rate) into a very upset individual.
6) Bank the money and take time off. Turn off the phone, close the email app, stock up the fridge, tell people you will be out of town for a while. Get comfortable .... and code. Code the big things. Do it in the comfort of your own setup, away from everyone, and take your time. No deadlines, no clocks, no timesheets .... just you, an editor, a github account, some emergent ideas that may or may not be well defined, and some code.
Working on the 'Big Things' is simply not negotiable.
I never charge for doing the 'Big Things', and I never discuss this with customers, or try and use it as leverage in future billing arrangements.
Working on the 'Big Things' is simply not negotiable.
Working on the 'Big Things' is why I program and why I ever started to get into this in the first place. Being able to properly immerse myself in the Big Things, on my own terms, on my own time, out of my own pocket ... is compensation enough, and worth far more to me than any paycheck ever will be.
Working on the 'Big Things' is simply not negotiable.
Having a set of 'Big Things' in my toolbox then enables me to re-enter polite society for a while, in order to crank out some more 'Little Things' easily and cleanly, and get the bills covered. You need happy customers for that. Happy customers with Milestones and Ticks in Boxes and Smiles and Invoices and Cash on Time.
But working on the 'Big Things' is simply not negotiable.
I just spent half a week with the following... I re-joined as a contractor in a new role at (big corporation). I arrived my first day (only gone a couple weeks), was able to pick up my security badge (no problem), was issued a laptop, bag and accessories, no problem..
My AD login was written down, missing the last character that was there before. I tried it as written, then finally tried with the missing character; still no go... Three calls to customer support later, I disconnected ethernet and was able to log in over secure wifi.
Able to log in, I set up a couple things... hmmm, no access... I was told my email address was now a different one, and asked about the original email address. No bueno... I waited a day, called back (still waiting on a response to the original issue), then decided I couldn't wait; I needed to get in. No email access, no Lync.
I finally got a call from email support the next day (Wednesday), but it was 7:30pm and I was out to dinner; I didn't recognize the number, so it went to voicemail... The issue was that I had no email, and the message on the voicemail said "email me": no callback number, no email address. How the hell was I supposed to email them?
Two days later, I came in able to send mail, but not receive: there were somehow two email addresses configured. After the weekend, I was finally able to send and receive, but now had a 3rd email address, and had to manually fix my profile in a bunch of internal services that were auto-populated at first login. Not to mention a VP in another division (with the same name) getting a bunch of email meant for me, because my email was fubar'd.
It isn't just software, it's entire processes. I basically sat for a week, twiddling my thumbs (mostly), because I couldn't communicate... still waiting on access to our ticket tracking system (I was told to wait until I had email and Lync).
Coders are soldiers paid to do a job. If the army fails, blame the generals who totally do not care about quality and would rather pay dozens of ever-cheaper obedient coders who do not care than eventually face opposition based on concern for quality.
The most overheard argument against standing up: hey, we don't have a choice, there are all these companies competing with us doing the same. The other arguments for not caring about quality are: not having customers because you are a startup, and that, well, we will all be wealthy after we have sold our shares, and you will be able to care about quality with the next management.
> 11) Actions at the sharp end resolve all ambiguity.
> Organizations are ambiguous, often intentionally, about the relationship between
> production targets, efficient use of resources, economy and costs of operations, and
> acceptable risks of low and high consequence accidents. All ambiguity is resolved by
> actions of practitioners at the sharp end of the system. After an accident, practitioner
> actions may be regarded as ‘errors’ or ‘violations’ but these evaluations are heavily
> biased by hindsight and ignore the other driving forces, especially production pressure.
> Coders are soldiers paid to do a job. If the army fails, blame the generals who totally do not care about quality and would rather pay dozens of ever-cheaper obedient coders who do not care than eventually face opposition based on concern for quality.
"But I was just following orders" isn't an acceptable excuse in any situation. You are inextricably tied to a set of moral obligations by your decision to build things that affect other human beings, and you cannot shirk them or push them upward. They are yours, they remain yours, and attempts to rationalize the abrogation thereof are at best gross.
Own your shit. If you cannot do good and do well, then quit. (I have done this; it is not that hard.)
Then own it, fix it, and keep a clear conscience. I couldn't, at that particular job, and I could do good elsewhere. So I made the decision to do that.
"We're just coders, it's not our fault" is not an excuse. Either own your shit (and, implicitly, fix it) or refuse to participate. There is no middle ground.
Trying to change job here to avoid taking part in this buffoonery.
I can't. Nowadays so many jobs are gated by diplomas/certifications that it is hard to begin again and, while "older", regain the momentum for credibility.
The funny part is that I see bread makers, cooks, and construction workers suffering from the same problem. I am currently in France; restaurants for average people are so bad that they serve food that makes people sick. I can bake bread, cook, and fix bikes, and I do it better than today's "pros", without a diploma in their field.
Something systemic seems to be going on. It's like people are just taught to execute and required to obey, and bosses don't know the craft of the people they are supposed to lead.
Basically, being a boss boils down to the capital you have, and competition seems rigged in favour of people born wealthy.
I don't think the merit of birth should outweigh sane competition.
I find the Figaro monologue that inspired the French and US revolutions against the old monarchic regimes to be very relevant today.
Weirdly enough, copyright law says it should be in the public domain, yet it is impossible to find a link to this essay, which remains very relevant.
The systemic thing is called "professional managers". In the past, only people "from the trenches" were promoted to manage and lead others. Nowadays, we have managers who studied nothing but management. And it shows.
Karl Marx's analysis was that through education you can capture praxis (savoir-faire), both stealing it from the craftsman and producing experts in doxa (mere knowledge) who actually know nothing (the Dunning-Kruger effect).
I think this has always fuelled my intuition that a profession should be defined by craftsmanship and not by education.
And I have found asking questions a better way to make people aware.
People will first deny it; we all have blind spots. They will say you are exaggerating.
But ask them about their experience with IT, and also about the everyday inconveniences caused by the loss of quality in products. There is a pattern: people feel the industry doesn't care about consumers anymore. Let them tell their side.
It makes people realize they miss the simple, boring life that just works.
And maybe they saw something you don't see and they can help you and you can help them.
My first way to improve this is to help people be better consumers by sharing simple ways to spend less and get better products by focusing on rational choices. I also sometimes learn. I also sometimes get bullshitted.
> "But I was just following orders" isn't an acceptable excuse in any situation.
On the contrary, it's an acceptable excuse in lots of situations. The finding at Nuremberg was that there is a line beyond which it stops being acceptable, and murder is on the far side of that line. But most things in commerce are on the near side of it.
In other words, a moral obligation to give up your job rather than tolerate sloppiness, applies when you're dealing with safety-critical stuff such that sloppiness could get someone killed. It doesn't apply when the sloppiness will cause minor inconvenience.
Oh, that's nothing. Just today Firefox managed to freeze up the entire X server with some WebGL content. Gedit became unresponsive several times, and also had rendering faults (with just a 640 KB file of autogenerated HTML and JS).
SublimeText2 crashes daily (it's a faulty plugin).
Office Outlook for Web is a general usability horror and has many features that do not work. Win10 has a number of bugs that I encounter almost daily, and Edge is generally very buggy.
I think all these problems are indicative of several factors combined: laziness, a general lack of attention to detail, pure developer incompetence, the pressure to push out stuff too fast (marketing-driven policies), and finally the complexity of the software systems themselves. Today any given software system is enormously complicated, to a degree that nobody really understands the whole thing completely. In fact, it's more or less a miracle that things work as well as they do, considering all the billions of bits that need to be just right for me to even write this comment. That being said, I don't think there are shortcuts here. Better quality can be achieved, but it requires the mindset for doing things that way. And it's going to require testing. A lot of it: unit tests, regression tests, automated test suites.
Personally, I find that when I write code I often need more unit-testing code than actual code to cover the system under test properly. I'm talking about a ratio of up to 5 lines of testing code to 1 line of real code. Sometimes I can get close to 2:1 or 3:1 if the function/class/method is not very complicated. Anyway, even if you take that conservative ratio of 2:1 and go look at any random open source project, I'd be surprised if you actually found that much testing code there. Good luck.
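To make the ratio concrete, here is a minimal sketch (the function and its checks are invented for illustration): even a four-line validator needs roughly three times as many lines of test code to cover its branches.

```python
# Hypothetical example of the test-to-code ratio discussed above:
# a 4-line function vs. the tests needed to cover it properly.

def parse_port(value: str) -> int:
    """Parse a TCP port number, rejecting anything out of range."""
    port = int(value)  # raises ValueError on non-numeric input
    if not 0 < port <= 65535:
        raise ValueError("port out of range: %d" % port)
    return port

def test_parse_port():
    # Happy path and boundary values.
    assert parse_port("80") == 80
    assert parse_port("1") == 1
    assert parse_port("65535") == 65535
    # Every rejection branch needs its own case.
    for bad in ("0", "65536", "-1", "http", ""):
        try:
            parse_port(bad)
        except ValueError:
            pass
        else:
            raise AssertionError("expected ValueError for %r" % bad)

test_parse_port()
```

Even here the ratio is already close to 3:1, and this function has no I/O, no state, and no error messages worth asserting on.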
gedit has to be the slowest piece of software on Linux, which is insane since it's supposed to be GNOME's notepad counterpart. Geany starts up way faster, and Geany is basically an IDE. Ridiculous.
The overhead of unit testing is why I'm not a fan of it for personal projects. Testing even the simplest 400 lines of code needed more than 1000 lines of test code, and I'm doing as little as possible.
99% is way too high an estimate. Yes, strongly statically typed languages eliminate a LOT of potential bugs, and I always try to use them. But it's not like they eliminate 99% of all bugs.
> Yes, strongly statically typed languages eliminate a LOT of potential bugs
They don't. AFAIK, lines of code is the only known metric with a significant correlation with bugs so far. And if you ever wrote anything in a good dynamically typed language, like Perl, you know that its type system never causes any problems.
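For what it's worth, the bug class at stake in this sub-thread is silent type confusion. A toy Python sketch (the function and values are invented) of the kind of thing a dynamic language happily runs but a static checker would reject before the code executes:

```python
# A made-up example of the bug class static typing catches: a value
# read from a form or config file arrives as a string, and instead
# of failing, the program computes something plausible-looking.

def total_price(quantity, unit_price):
    return quantity * unit_price

print(total_price(3, 2))    # 6, as intended
print(total_price("3", 2))  # "33": string repetition, not arithmetic

# With annotations (e.g. quantity: int), a checker such as mypy
# flags the second call as a type error before the code ever runs.
```

Whether this class of bug dominates real defect counts is exactly what the lines-of-code point above disputes; the example only shows the mechanism, not its frequency.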
I hear a lot of complaints from non-programmers about the software they use and they always try to make sure I'm not offended. I can't even begin to explain how I'm not only not offended, I hate these lazy problems even more than they do.
Some of those are clearly complicated issues but there's so many cases of just plain laziness it's infuriating. However, I don't blame the developers entirely. There's so much pressure from management, product managers, etc. and so much cost cutting it's ridiculous. It doesn't directly affect the bottom-line though so I'm not sure we can do much except expect more from ourselves and get used to it.
Even if it's just plain laziness-- how do we get to the point where a developer is being just plain lazy? People naturally want to do good work and want to be proud of what they do. So if people appear like they're being lazy-- they're probably really demoralized, or overworked, and they just haven't heard a nice thing about their work in weeks. We're social animals, and engineers need appreciation too.
In a well-functioning industry, shipping broken items would just create a money sink at the support end. One of the reasons this doesn't happen is that much software is unique (effectively, every vendor has a monopoly on their program), so software vendors can get away with ignoring most of their users. Even if there are multiple programs the user can choose from, switching costs are high (they usually require expertise).
This is not just a failure of the software industry, you see this in every field where there's hardly any feedback from end-users to the vendor.
Look at the first example. An escaping bug. When did you last see an escaping bug in a desktop app? The shift from Visual Basic/Delphi/C++ Win32 apps in the 1990's to the web introduced this category of bug that was previously rather rare.
That sort of thing isn't really caused by laziness. It's caused by our tools creating holes for us to fall into, and then developers falling into them.
Ultimately it's not really about software developers, is it?
Most of these issues seem to be management decisions.
Like giving a car mechanic a few cheesegraters and a dog, sticking him in an open field in a thunderstorm, and asking him to rebuild your engine.
He could be a prodigy. But there's water in it, man. There's bloody water in it.
Half of the stuff on that page should not involve any programming at all. The nPower one, for example. Yeah, it's broken, but that's not the actual problem. The problem is that it would take about 5 years to report the issue, so no-one knows it's broken. Just give an email address or a telephone number, and actually employ customer support instead of paying yourself $50M/second. Done.
I disagree. While there are management/process/whatever issues, there's also a lot that could be improved by the devs.
For example, take the podomatic example - that's just a sloppy function written by a dev. The NPower thing is just sloppy setup of the website and bad links in email templates. The MS thing, well why did the devs add jQuery to a site that is so incredibly static?
Devs make a lot of decisions that influence the quality of software and I think we need to start taking more responsibility for that.
As software evolves and gets more complicated it only makes sense that bugs will become more obscure and tougher to chase down in projects that are reaching sizes we've never seen before. Most developers have a bug list longer than they can handle. It's not about 'giving up' it's about prioritizing. It's not about being lazy it's about only having so much time in a day.
Start with very small, modular code... avoid spaghetti coupling (coupling that reaches through multiple layers)... organize structure by feature, not type/class, and accept that all features don't look the same... Some features may only be data structures, service libraries and/or UI.
Separate event chains from your UI as much as practical, and avoid classes that look like Car, Wheel, etc. that are too smart for their own good.
Actual code reviews, experienced architect(s) that actually understand the whole project, and paired development. Not being afraid to refactor... creating componentized pieces that can live in separate codebases if possible.
What's most interesting to me is that people choose to blame developers for their observations of quality or content of software, games, etc. Completely ignoring the organizational or institutional structure involved, as if we all have complete autonomy over the products we work on.
I've seen people blame developers for, like, female characters in games being oversexed. Newsflash- that's a business, product, and design decision. People even tried to blame engineers for the VW emissions scandal! Companies nowadays set ridiculous release schedules, overwork their developers, and release crap. But sure, blame the devs, that will probably help.
The author here is a dev so there's really no excuse.
My post was actually aimed at the industry more than specifically developers. The headline was a bit bad (it was written before the post) but I didn't expect to get 30k visitors overnight so I probably didn't proof-read/tweak as much as I could've done :D
I've never written unit tests in my life, and my code works most of the time. There used to be times when I'd write code for an entire week in Java without compiling (because it took forever), and back then we used pretty basic IDEs that didn't help as much as they do today to prevent stupid errors. And, usually, my code compiled just fine and worked. Today's developer is a trial-and-error one: tweaking a few lines of code, refreshing the browser, and seeing if stuff works or not. Today's developer spends disproportionate amounts of time writing unit tests and yet produces buggy code.
I've never written unit tests in my life, and my code works most of the time.
How do you know?
I agree with your point about taking more time to think about the way your code works, to design data structures, to use rigorous methods to make sure the code works, but ultimately you still have to check things. If you haven't automated those checks then either you're checking things manually or you're not checking them. On any significant project there's too much code to check manually.
People take % of code coverage as some insurance and spend time writing stupid tests instead of writing robust code.
People also don't consider that unit tests are a constant maintenance burden and a time waster: you change this and that, and then you need to spend time maintaining the tests.
Again, use your brain, not your fingers - it's that easy! Oh, and don't program in a dynamic language saving keystrokes, only to waste even more of them writing brainless unit tests!
At least in my personal career, the toughest real issues have come from race conditions. But I've been fortunate to catch them quickly by using my brain. And that's what we lack today: developers who use their brains, who don't spend too much time on emoji reactions in Slack and finding the right GIF.
There are plenty of code monkeys and code gluers today who can't even implement a bubble sort without googling it!
> People also don't consider that unit tests are a constant maintenance burden and a time waster - you change this and that, and then you need to spend time tweaking the tests.
That's why you test interfaces that you've spent time designing so they don't change often. You, or someone else who isn't as good as you, can then change the internals of a block without updating the tests.
Given the choice of working with a dev who knows how to write a bubble sort[1] and a dev who doesn't but does write tests, I'd go for the latter.
[1] To be honest, if someone in my team even considered writing their own sort function, from memory or by googling, I'd have a quiet word. Use a library. Preferably a well-tested one.
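For what it's worth, the textbook bubble sort everyone is arguing about is only a few lines - and the comparison at the end shows why the footnote's advice holds: the library sort gives the same answer with none of the maintenance burden. (This is an illustrative sketch, not code anyone should ship.)

```python
def bubble_sort(items):
    """Bubble sort, O(n^2) - shown for illustration only; use sorted() in real code."""
    items = list(items)                      # don't mutate the caller's list
    for end in range(len(items) - 1, 0, -1):
        for i in range(end):
            if items[i] > items[i + 1]:
                items[i], items[i + 1] = items[i + 1], items[i]
    return items

data = [5, 1, 4, 2, 8]
# The well-tested library sort is the right tool; the hand-rolled one
# only exists to answer the interview question.
assert bubble_sort(data) == sorted(data)
```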
Oh, I forgot to mention the ever-shrinking attention span. Our lives become fuller and fuller of distractions and interruptions. With the increasing load of notifications, alerts, and news feed updates, we can't really concentrate and grasp hard concepts. And we, the people involved in startups and being behind these distractions, are eating our own dogfood and suffering from the same evil.
This is not about memorizing useless stuff, but about knowing the foundation of Computer Science. Our brains are full of emojis today, not algorithms. It's been scientifically proven that the Internet is making us more stupid by tweaking our brains' modus operandi to offload memory to Google and StackOverflow. We think memorizing has just a storage function, but it doesn't - when knowledge is planted in our brains, it just doesn't sit passively in there, but its accumulation improves our thought process by using things like generalizations, drawing analogies, etc. So, again, this is not about being dependent on the Internet as much as it is about growing better brains. I don't think we're smarter than people 100 years ago - we just have more tools to compensate for our shrinking brains.
By the way, I also test my code - just not the way the newly-bred "ninjas" do it - pair program, peer review, and all this fancy useless stuff! By the way, I have over 30 years of software development (not coding!) experience. I've attended many programming contests (not hackathons) since childhood and won most of them. You don't hear about such contests anymore - gluing up a quick POC together is the big deal today, and this is just pathetic. And no wonder we have so much POC-quality code in Production (with a capital "P") today! And a craziness like CD to Production is considered the cool new trend - Production is not sacred anymore, and everybody can dishonor it as they wish!
Again, I'm talking about the large gray mass of developers; there are still a lot of true developers alive today, but they are becoming extinct, unfortunately!
Honestly? Mainly, reviewing it and debugging it in my mind. It works - you should try it! It makes you a smarter and a better developer as well. The try-refresh-repeat cycle is for amateurs.
But that comes with the way I write my code - modular, i.e. broken into small, readable and reusable units. Well, if I'm doing something that requires shaving off every redundant CPU cycle, that's another story, but as we know, premature optimization is the root of all evil. You won't see anywhere in my code long functions/methods, endlessly nested IF-THENs, or similar junk - writing simple, unfancy, DRY code drops the need for unit tests to almost zero. Using long but meaningful identifiers drops the need for inline documentation.
I mean, I don't have to repeat things that have been known for ages - just buy some old books, it's all in there!
There are way too many coders today and very few engineers!
I mean, I don't have to repeat things that have been known for ages - just buy some old books, it's all in there!
I don't really understand this point. UnitTest and SUnit, that all subsequent unit testing frameworks are based on, were released in about 1990. "Extreme Programming" was a big thing in the mid-90s and that featured test frameworks as a core component. Automated testing isn't new. There are plenty of old books that advocate exactly the "new and trendy" processes you decry. If your argument is "Write code like engineers did 20+ years ago!", then that includes automated testing.
I strongly suspect that we're not actually that far apart in our thinking. I don't believe that unit tests magically make your code better. You can't develop a solution to a problem you don't fully understand by throwing more and more tests at it. Developers still need to take the time to think. I believe a lot of my enthusiasm for testing comes from the belief (...experience...) that there are plenty of developers out there who, as you put it, are coders rather than engineers. My tests pick up their mistakes (and mine, because sometimes I'm not at my best). Maybe, if you're very fortunate, you've managed to surround yourself with a team who are all really good engineers who genuinely don't need to test things because they're rigorous to the point of infallibility. I haven't, and I work in an industry that means I probably never will, so tests are necessary.
Advocating testing is much, much more productive, and will lead to better software in the long run, than advocating developers write better code.
Just because unit testing existed in the 90s, it doesn't mean it was widely (ab)used as it is now! Many technologies were invented well before they gained adoption.
I didn't mean unit testing is new or useless - just that it's being abused, ignoring the benefit-cost ratio, and negatively affecting your thought process: you rely on something unreliable instead of focusing on code quality. Tests are no panacea; code today is more tested than ever, but it's also buggier than ever.
As a summary: Just think more when you write code, don't get into the vicious circle of try-refresh-repeat, and having tests in place is no excuse for poor code.
> Today's developer spends disproportional amounts of time writing unit tests and, yet, producing buggy code.
Unit testing makes you a lot more efficient. You can choose whether to spend that efficiency producing less buggy code in the same time with the same functionality, equally buggy code more quickly with the same functionality, or equally buggy code in the same time with more functionality. Most of the time businesses quite reasonably decide the second or third option is more valuable.
Unit tests do not correlate with quality, and even 100% coverage only guarantees that you've looked at every line of code twice (usually). Not only that, a lot of projects skip testing until well after a proof of concept is working. Why? Because we don't want to stop shipping features for two weeks while we backport in tests. Now you have less-than-optimal code and less-than-optimal tests.
This persists, every suggestion to refactor the structure rebuffed, until the code debt is so high that it takes 4-5x as long to add a feature as it should. Then, if you're lucky, you can make a new version using only 3-5 year old technology, instead of the decade-old stuff you've been supporting for half a decade. Even then, the 2-5 year old stuff isn't as nice as it was sold, but it's "enforced" by the company powers that be (god I hate Angular sometimes).
While I wouldn't suggest that, I will say that spending more time manipulating my code to work with the automated unit testing tool of the month has definitely led to an occasional decrease in code reliability. Not all code can be unit tested [with every unit testing library], and being compelled to shape everything so that it works with all current and future automated tools often makes the mess worse.
I sometimes think of TDD as Think-less Driven Development. The problem is not so much TDD itself, but how it's usually implemented: as a safety net and an excuse for producing all kinds of bad code.
That's what I have come to call FDD, Feedback Driven Development. Try something... Google/StackOverflow it... maybe find a tangential solution to the problem at hand, fit it into the context of the problem, try something else... ad nauseam.
I've never written them either, and I don't understand them at all. Bugs are usually visual or in the business process. If you wanted to encode the entire business process in unit tests, I'm not sure the client would be willing to pay for the time.
Unit tests are no silver bullet. This trend was started by consultancies that found more ways to milk customers: write unit tests and do inline code (!) documentation. Both are a real cash cow when the client doesn't know what they want and you need to constantly tweak the code and, correspondingly, the tests and documentation! I understand API documentation, but documenting internal code that's obvious - like what the "width" and "height" parameters of a method are for - is beyond stupid!
Software development is still young. 30 years ago it really wasn't an industry, and 50 years ago it was pretty much an academic venture.
We have better procedures and tools. The groups not using version control, unit tests, peer reviews and other common means to increase quality will be out-competed by those who do.
Just write the best software you can, with the best group you can, in the meantime, and in the long run this will sort itself out. Or, if you think you can sort it out yourself, try to.
I don't know what "sorted out" looks like but I would not be surprised to see apprenticeships like plumbing and HVAC or certifications like medicine and law. Sorted out could look like just about anything, perhaps we will be drenched in shitty software until unit testing is taught to second graders along-side basic arithmetic.
30 years ago it was definitely an industry. The commercial games market exploded with the arrival of cheap 8-bit micros in the early 80s. Mini and mainframe application development has been around for a lot longer.
Commercial software development has been happening since the 1950s.
"Sorted out" means that software doesn't fail in stupid, avoidable ways.
The current state of the industry is shockingly bad. But users have been trained to expect broken software, developers rarely have enough of quality ethic to care about those boring bug fixes, and management just wants more money.
So here we are.
Personally I blame Microsoft for gifting the industry with a tradition of bullshit EULAs that more or less said "We can ship any old bug-ridden crap we feel like, and we're not responsible if it blows up in your face and takes your business down. Besides, if something goes wrong it's clearly your fault for being too stupid and ignorant to use the software properly. Whatever - definitely not our fault, so don't even think about suing."
After the Pavlovian conditioning and Stockholm syndrome set in, it became impossible to expect consumer pushback.
30 years ago, when I started working as a professional software developer, I encountered people in their 40's who had been professional software developers for 20 years. So no, software development is NOT young. FORTRAN was released almost 60 years ago, in 1957. Cobol was released 55 years ago in 1960. It's been half a century since thousands of people have been programming professionally.
How old is a 60 year old industry? It's the time it took to go from the first Wright brothers airplane to jet airliners and rockets into orbit. It's less than the time from the discovery of DNA to the full sequencing of the human genome. Between 1850 and 1910, steamships went from barely worthwhile to gigantic ocean-going liners (RMS Olympic) and battleships (HMS Dreadnought).
The software industry is not young. The third generation of developers are entering the workforce. Some of them have grandparents who were professional software developers. The industry has completed (or failed at) 100's of thousands (if not millions) of major projects.
IMO, far far too many people working in the industry simply refuse to learn from the past. But that's a rant for another day :-)
You can really tell when web developers don't actually use the website they're building (especially contractors). The same goes for applications, and for overly complex software where developers only use a few features themselves (hence web browser bugs).
Only very talented or disciplined people manage to write flawless code without stumbling over their own bugs first. A good, opinionated, statically typed language helps, IMHO (scripting languages are one of the reasons for crappy web pages).
I see similar messages fairly often. I believe "Success" is what you get when you use strerror() to translate an errno of 0 (success) on a Linux/UNIX machine with English localization. So this contradictory message is what happens when the programmer uses an errno-printing function (perror() or something custom) to indicate an error that wasn't actually due to a failure that sets errno.
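You can reproduce the ingredients of the contradiction without any C at all. Python's `os.strerror` wraps the same libc table, and on Linux with an English locale errno 0 maps to "Success" (the exact string is platform- and locale-dependent, so treat this as a sketch):

```python
import os

# errno 0 means "no error", and glibc's message for it is "Success".
msg = os.strerror(0)
print(msg)

# This is how "Error: Success" is born: code prints strerror(errno)
# after a failure that never actually set errno, so errno is still 0.
print("Error: %s" % msg)
```

The fix on the C side is the same as it's always been: only consult `errno`/`perror()` immediately after a call that is documented to set it, and only when that call actually reported failure.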
I had one of those once so I winced when I saw it. I was printing 'Error: ' with the message that came back from the server and in one case saw 'Error: Success'.
I'm pretty sure that was his SSH connection dying after he killed the remote server. But yeah, it's hilariously/depressingly contradictory. Worse than useless, since you don't know if something went right, or something went wrong, or both, or neither!
Yeah, it could've been something like that. It doesn't normally do that - it was just a one-off, so I figured something weird was up (like maybe I'd already told it to reboot and it didn't seem to work, so I sent the command again just as it was starting).
As a software developer, there is always an infinite list of stuff to do, things to build, issues to fix, etc. Why do we spend time doing other things rather than fix these annoying quirks?
In my experience, it's usually because there was something more important to do.
I don't think it's fair to criticize these decisions unless you know what was done instead.
That's true; and the post wasn't aimed specifically at devs, but more of the software dev industry as a whole (I know the title is pretty badly worded).
I think companies should start caring more about their quality and users should stop giving money to those that don't. Some of these silly bugs are pretty ridiculous and should never have happened.
I think it's entirely fair if there is no proper feedback method indicated on the erroneous page. How else am I going to do anything about it? See also the "dear-github" letter.
Developers want to build quality stuff that they can be proud of. Time, budgets, marketing promises, and reality have a way of interfering with that. Rare is the job where you can tell your boss "We could release this now, and I could start on the next thing, but I'm not happy with the quality and would rather rework it." and they will just say "Ok, do that. Make it good. We'll tell all of our clients that they'll just have to wait for what we promised would be ready this month. They'll just have to deal with however that impacts them and their business."
More layers = more bugs, both at the software level and the business level. And there are a lot more layers now. We're not just compiling raw ASM or C and handing it to the end-user anymore. Every couple of years we add more layers to both.
Sure, it is. But I can log into the yahoo account with no problem. You'd think this many years after the acquisition they would have one source of truth on what should be blocked.
Without a doubt that's an anti-abuse block that they forgot about. I've seen it happen. People add IP range blocks and then consider the problem solved and move on.
For some devs, I'm sure that's true. For me, I like to take pride in my work and be proud of the products I work on. I love people reporting bugs so I can improve :)
That's odd. I think I work the other way around. Making new stuff gives me analysis paralysis as I get absorbed in thinking what would be the best way to build that new stuff.
My favorite tasks are not debugging though, but getting the chance to refactor a mess into something better.
The "Pro_Hacking" story made me laugh. In that case, I think the support person is just providing the body of a response template, where "Hello Pro_Hacking," is fixed (i.e., not something the support person can easily change).
I wondered that; but the first several emails didn't start like that. It was only after I complained that wasn't my name it started, like they were taking the piss :(
Products aren't perfect. Whether they have frustrating bugs or are missing a useful feature, there's always room for improvement. The best way to help them improve is to give them feedback.
Unfortunately most companies aren't the best at taking feedback. They either don't provide a way to give feedback or don't prioritize it enough in their roadmap.
Which is why we created Product Pains, a new feedback channel for every mobile app and website. It works like so:
- People can post feedback about any product.
- People can also vote and comment on feedback.
- Teams can subscribe to feedback about their products and mark feedback as "In Progress" or "Fixed".
Voting is critical because teams get a clean, prioritized list of the issues that are most important to their users. Rather than having to manually aggregate individual app store reviews, emails, tweets, etc, they do virtually no work.
It's so satisfying to see a Product Rep mark your feedback as "fixed" and know you had an impact on their product and everyone who uses it. I'd love to see your feedback on Product Pains.
My guess is that it's always been this bad and often worse. But maybe we are more adventurous and use a wider variety of software these days, so there are more chances to see something go wrong.
Software updates alone will expose you to more versions, so you'll have more chances to see different bugs (rather than the same ones that you learn to adjust to or ignore).
It has not been mentioned that software engineering differs from other types of engineering (like civil engineering) in that it builds abstract objects (software) that are then expected to function in a number of differing material contexts (hardware). Think about how strange that is. Engineers building a bridge for example translate their ideas into specific arrangements of carefully chosen material. The software engineer builds for a limited number of reasonable architectures and for a virtually unlimited number of hardware in various states of disrepair. It is a testament to human ingenuity that software works at all as well as it does.
For this reason, the Shuttle software example misses the mark. Writing code for a single known device is, by orders of magnitude, a simpler problem than writing code for numerous permutations of hardware.
> that are then expected to function in a number of differing material contexts (hardware)
It's not like that. Sometimes software is expected to function in the presence of any kind of error; other times not so much, and everyone is OK with it breaking occasionally. But we really do have ways to make software that functions reliably in the presence of errors.
I think the "go fast and break things" mantra is causing programmers to doubt that stable, bug-free software can be built at all.
At my last company i was an older developer in a mostly younger team.
We were working on a full rewrite of our product. Version 2 was going to address all the hastily patched together misfeatures of version 1.
Even with this admission that we had a quality problem, I could barely convince them to let me take time to design the new version.
I was supposed to be the architect in charge of designing the new system, and I was constantly being rushed. I would tell them that if we think this through, we'll have a more coherent and stable product.
It was like I was talking Swahili to them.
On the few parts of the system where I was given enough time, I got very few bug reports. As for the rest, well I did what I could. It's no use being a martyr.
Expect this to get much worse as the profession is stripped of its remaining threads of prestige by the explosion of bootcamps and the supplanting of the terms "software developer/engineer/programmer" with "coder" - a term that equally well describes medical data entry clerks (no doubt the way many managers and spiteful journalists actually view us), if Google search results are an accurate representation.
Why expect professionalism from people who aren't treated like professionals and shown the respect that would normally be their due?
I agree, but we don't always act like professionals either. People see development as a commodity because we treat ourselves and our product as a commodity. You don't see lawyers lining up to undercut each other - because that's a race to the bottom. We see it in software development all the time.
A lot of the decisions made behind software products depend on management's decision as to what is 'good enough' and when to release a first version to iterate upon.
Every day, I encounter parts of the codebase that could be refactored or sections of a page that probably do not make sense to half of the user base, but in the eyes of management, this is not a problem as customers will learn. If I were to spend all day polishing aspects of the site, that would not be as preferable as working on a major feature release.
That guy keeps finding an awful lot of errors, is what I thought while reading this article.
It seems many users like myself don't even notice errors any more. We've developed an instinct for which software or services to stay away from, have low expectations of support (read a logfile?), and express a general disdain for a lot of what's going on by using ad blockers or by avoiding switching services unless it's really necessary.
It's always been really, really bad once you try to take the experience into your own hands. The path of least resistance with technology is to stay well behind the curve, choose the popular brand, avoid niche use cases, use the minimum amount of features, and modify nothing - in effect, to use everything as if it were an Apple device in stock configuration. However, sometimes more effort is worthwhile because the unmodded experience is so poor or ill-fitted.
Yesterday I finally got fed up with my cheap Android phone and flashed it with a modded rom. It took probably 8 hours of reading and downloading and testing and waiting to get it into a working state, because there are so many points along the chain where a little misconfiguration breaks the process, and the people making mods often have working software with poor documentation and inadequate testing.
In the end, there just is never enough time to go around to make it perfect for everyone in every use case. You have to choose carefully when you want to fight the battle - and you can expect to lose, a lot of the time.
Well, if you really want an example of software developers 'giving up' and systems not working correctly, there's always the video game industry/world. Where in some cases, products actually do get released in a completely broken state and where bugs like these are endemic.
But it's probably all due to one simple thing; people aren't encouraged to do things well, they're encouraged to get them done by the 'deadline' at all costs. Which often means writing hacky code, not testing edge cases (or sometimes, any cases), and then hoping anything that breaks can be fixed later.
The latter is also why both web development and game development is arguably more messed up than any other software development, because patches are seen as fairly easy and inevitable. Why get it right the first time when it can be fixed 'post production'?
One issue that separates software from most other industries is that software is not a physical entity. This makes the desire to change - refactor, add features, redefine use cases, etc. - all the more tempting, since the only cost is getting someone to fix/alter some code (despite the fact that labor is obviously very expensive). In contrast, in aerospace for example, you have to get it right the first time, or else you have millions of dollars' worth of payload and equipment exploding. In those industries, teams are understandably more reluctant to accept rapid change in development and deployment, since the cost of failure is so much higher than in software. This isn't the only reason for bugs in software by any means, but the sheer innate mutability of software makes the temptation to pile on requirements, iterate frequently, and generally not be averse to change a massive source of buggy code.
Write a web UI? Prepare to re-write it next year. Desktop is reasonably stable but now desktop is "dead" and mobile is the new thing. Mobile is a rapidly moving target and there's lots of platform fragmentation. Everyone knows next year or the year after mobile as we know it will now be "dead" and time to rewrite everything all over again. There will be VR and AR which will demand entirely new interface metaphors, totally new platforms, etc.
I didn't even start in on the hip language of the week. Now we have to rewrite it all in Go, or Rust, or Swift, or ...
If everything is changing this fast there's little incentive to perfect anything. It'll be obsolete next year. Just ship ship ship and then throw it away and ship again.
Some of this churn really is related to progress but some of it isn't. I'm skeptical of whether all of it is really necessary.
Have you ever written desktop GUI programs? ;) Particularly on Linux, but I remember that when I was still writing .NET programs on Windows, it was a running gag that every major .NET release brought a completely new UI framework.
It's why I started keeping track of all the annoyances I run across on wtfmac.com. Because, well, these are stupid, annoying things that piss me off. So yeah, when the richest company in the world does fundamentally wrong things with software, what does that say about the state of the industry as a whole?
That and allowing anyone access to your computer's graphical interface as if they were at your computer, even yourself over a web connection, is a bad idea.
I think the 8 character limitation is helpful. Because if you need to protect yourself against it, you can ask "but wtf do I really want to do? Probably not a VNC server..."
I reported a software bug in a commercial package (Allegorithmic Substance Designer) a few months ago. The support guy told me that, to help me, he wanted me to install a VNC server, send him my id and password, and let him have access to my machine so he could debug the problem.
When I balked he acted all offended and told me they work with some of the biggest companies in the industry, they see lots of stuff on customer's machines, and are totally trustworthy.
Dropbox has the same sign-up non-validation mentioned in the article. I mentioned it to them when I got a phantom sign-up notice. I reset the password and found an account with my email, with uploaded content! You can fully use Dropbox without validating your email. Hello, anonymous dead-drops!
Apparently this person was not using technology any time before the iPhone?
Quirks, useless error messages, website rendering problems... they've been around forever. Has this person even tried to install software packages on Linux 10-15+ years ago?
Have software developers given up?
I'd argue the vast majority never cared.
And really, why bother caring so much that you take to the web glorifying others' glitches?
We hear about data theft stories, companies shutting down due to data loss fuckups... life goes on. It's shit that many people are burned because of these things, but really our society does not care. The loss is absorbed for the most part and we move on.
Stop being so dismayed with the world, or how other people aren't as good at things as you are. Move along.
I did use technology before the iPhone, but I don't remember being as frustrated by it as I am now.
The most frustrating thing is that we seem to be getting worse, not better. We're not learning from mistakes, we're too busy churning out the next buggy release to review the mistakes in the last one :(
When huge software companies like Google and Microsoft are churning out buggy crap, it's easy to see why others assume they can't do any better.
Maybe a recency bias. I have used computers since I was a kid, my earliest memories are Windows 95 but I am sure there was some before then too. I remember constant BSOD, constant application-crashing bugs with vague error messages, no ability to Google the solution to the problems you do have. Sometimes a game wouldn't work and there was really nothing you could do to even troubleshoot.
I doubt we're getting worse. At worst, we're staying even as complexity grows tremendously. I would guess we're probably improving slightly despite insane growth in complexity.
I would say that with great size, like Google and Microsoft, comes a greater difficulty in making bug-free products. Too small and you don't have the funding, but the sweet spot is probably somewhere between a "small business" and "Google". Companies the size of Google have a lot of bureaucracy. They have a million products. They have thousands of developers. And in the end, the majority of the business doesn't even come from these glitchy products. It's understandable that no one is really up in arms over these bugs, because upper management doesn't give a shit, and the developers only give a shit insofar as it keeps them from being fired. Sometimes you'll have developers who care because of pride in their work, but they're the minority.
In the good old days, you'd just turn it off then on again.
The reason management doesn't care is that it isn't important. If a bug eventuates on a web page, the impact is minimal usually.
There are contingencies if a FAQ page is blank like in the article example. The user can visit the contacts page and email their question. Crisis averted.
Most of this web stuff is relatively trivial in terms of impact.
I used to work in ASIC development where the impact of a bug is massive. Hence the verification effort and rigor should be much higher.
When my friend was talking about buying a Prius, he saw Priuses everywhere for a few weeks. It's not that the Priuses weren't there before: it's just that he didn't notice them as much before.
The thesis is that because this person ran across a few glitches in a small subset of systems, "programmers" have all given up. It then recounts some bad interactions with customer service types and an anonymous Twitter user.
Seriously? That's it?
And this leads him to believe programmers everywhere have quit caring?
How does this even hold up? You realize it's not necessarily a programmer prioritizing the work at many an institution? You realize that the power company's customer service person probably has zero clue what the issue is?
This is poorly thought out blog spam.
I expect the other 95% of the time, shit works just fine. But blog posts about everything working just fine don't generate hits.
My only graphics card issue was about 10 years ago; not much since. But I'm still confused why Apple doesn't do incremental updates like Linux. Having to download the 5 GB Xcode on every update is very annoying.
Welcome to the age where people only think about writing software and releasing fast, and frameworks that let them do just that. Then you see people trying to solve problems like the N+1 query. Look ten years back, when there weren't many frameworks: you would have been laughed at for even needing to talk about fixing an N+1 query. Everyone tried to touch the DB once and fetch everything in one query.
It seems many people are building things without knowing how the underlying technology works. This is bound to happen. And many new devs are picking up bad practices and think they know it all... quite depressing, actually.
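For readers who haven't run into it, the N+1 problem the parent describes looks roughly like the sketch below. This uses a fake query counter in place of a real ORM and database, and all the names and data are made up for illustration:

```python
# Toy illustration of the N+1 query problem: one query per item vs.
# one batched query for all items. A counter stands in for real DB round-trips.
queries_issued = 0

posts = [{"id": 1, "author_id": 10},
         {"id": 2, "author_id": 11},
         {"id": 3, "author_id": 10}]
authors = {10: "alice", 11: "bob"}

def fetch_author(author_id):
    """One round-trip per call -- this is what piles up into N+1."""
    global queries_issued
    queries_issued += 1
    return authors[author_id]

def fetch_authors(author_ids):
    """One round-trip for the whole batch (think SELECT ... WHERE id IN (...))."""
    global queries_issued
    queries_issued += 1
    return {aid: authors[aid] for aid in author_ids}

# N+1 style: one extra query per post.
queries_issued = 0
names = [fetch_author(p["author_id"]) for p in posts]
print(queries_issued)  # 3 queries for 3 posts

# Batched style: a single IN-query fetches every needed author at once.
queries_issued = 0
by_id = fetch_authors({p["author_id"] for p in posts})
names = [by_id[p["author_id"]] for p in posts]
print(queries_issued)  # 1 query regardless of post count
```

With a real ORM the same mistake usually hides inside a loop over lazily-loaded relations, which is why it's easy to ship without noticing.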
I was refactoring some legacy academic code recently. It came with a GUI which, if the user did something wrong, would fail with the message "Error N" (N being some number), without any further information or logging. The only way to figure out what happened was to grep the source code to see precisely what triggered the alert box.
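The refactor for that kind of thing might look something like the sketch below: replace the opaque "Error N" alert with a looked-up, logged message. The error table and messages here are hypothetical, not taken from the actual legacy code:

```python
# Hypothetical sketch: map opaque numeric codes to descriptive, logged
# messages instead of showing a bare "Error N" alert box.
import logging

logging.basicConfig(level=logging.ERROR)
log = logging.getLogger("legacy_gui")

# Assumed code table, the kind you'd reconstruct by grepping the source.
ERROR_MESSAGES = {
    3: "Input file is missing a header row",
    7: "Matrix dimensions do not match the configured model",
}

def report_error(code):
    """Return a human-readable message for an error code, and log it."""
    message = ERROR_MESSAGES.get(code, "Unknown error")
    log.error("Error %d: %s", code, message)
    return f"Error {code}: {message}"

print(report_error(7))
```

Even keeping the numeric codes for backward compatibility, attaching a message and a log line means the next maintainer doesn't have to grep.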
There's also the bizarre policy the university has about passwords. Your password is reset every six months (fine), it can't be similar to your old one (fine), and it's truncated to 8 characters (what?).
Ahhh... spoken like a true developer. Blame the QA department.
The problem with having a QA department is that, by definition, if they are a "department", they are not part of the development team. The only way testing is ever taken seriously is if the top architect spends his coding time writing test cases instead of implementing features, after all the architecture documents are put under ECO control. And let's hope the same architect believes in closed-loop validation (that is, instrument and measure your test coverage) instead of open-loop validation (writing tests without actually measuring what code gets tested by them).
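The closed-loop idea (measure what your tests actually execute, rather than trusting that they do) can be illustrated with a toy line tracer. A real project would use a proper coverage tool; this sketch only shows the principle, and the function being "tested" is invented:

```python
# Minimal closed-loop validation sketch: record which lines a test
# actually executes, instead of assuming a passing test covers the code.
import sys

executed_lines = set()

def tracer(frame, event, arg):
    # Record every line event inside classify().
    if event == "line" and frame.f_code.co_name == "classify":
        executed_lines.add(frame.f_lineno)
    return tracer

def classify(x):
    if x < 0:
        return "negative"
    return "non-negative"

sys.settrace(tracer)
classify(5)          # open-loop view: the test "passes", ship it
sys.settrace(None)

# Closed-loop view: classify() has three executable lines, but only
# two ran -- the x < 0 branch was never exercised by the test.
print(len(executed_lines))  # 2
```

The open-loop version of this story is just `assert classify(5) == "non-negative"`, which passes while leaving a whole branch untested.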
While it seems like developers don't care, as many comments mention, it is really about the trade-off between resources like money and time to ship.
Recently, in our project architecture group meeting a senior architect said that the approach we have taken on our project is not architecturally 'pure' but it is 'pragmatic'.
I was setting up a staging database for some testing. Our DBs are big so we truncate a bunch of data we don't need on the staging server so we can have more DBs without wasting tens of GB per database.
I ended up executing this against the wrong server.
Needless to say, we've learned from this. Luckily nothing critical was lost, but I felt like a complete spanner :(
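One cheap safeguard against this class of mistake is to make destructive scripts refuse to run unless the target is explicitly allow-listed. A minimal Python sketch, with made-up host names (a real script would execute the statements instead of returning them):

```python
# Guard a destructive maintenance script behind an explicit allow-list,
# so pointing it at the wrong server fails loudly instead of truncating data.
STAGING_HOSTS = {"db-staging-1", "db-staging-2"}  # hypothetical names

def truncate_staging_tables(host, tables):
    """Build TRUNCATE statements, but only for a known staging host."""
    if host not in STAGING_HOSTS:
        raise RuntimeError(f"Refusing to truncate tables on non-staging host {host!r}")
    return [f"TRUNCATE TABLE {t};" for t in tables]

print(truncate_staging_tables("db-staging-1", ["audit_log"]))

try:
    truncate_staging_tables("db-prod-1", ["audit_log"])
except RuntimeError as e:
    print(e)  # the guard trips before anything destructive happens
```

It doesn't replace tested deployment scripts, but it turns "executed against the wrong server" from a disaster into an error message.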
I think we need a new layer between software and users to solve this problem (antiviruses do something like this nowadays, but they solve a very narrow problem). It should be like Wikipedia, so everyone can fix bugs. There should also be something like Wikipedia bots that automatically fix bugs.
Software is always bad at the top of the market, when hype and get-it-out wins over quality and measured improvements. Javascript is also always hugely popular at the top of the market. (2000-DHTML, 2008-Web2.0, Now-Node/Ang/React.)
Eventually, the Developer Gods of the Copybook Headings with terror and slaughter return.
I think this has been constant. There are more mediocre programmers than great programmers. Since we are exposed to a lot more software, we experience more bugs. There are more programmers overall, but that does not mean the ratio between good and mediocre programmers has changed.
I will be called a statist, but I think the software industry is really lacking regulations and quality standards.
I'm not a security guy, but I think most of the security flaws you find in software are caused by the lack of government approved security standards. I mean is there ISO stuff for software quality? And even if there is, I really don't think it's useful or that it tries hard enough.
Just look at healthcare.gov; I'm sure things would have gone much more smoothly if there were ISO rules.
Although it might be debatable if it's possible or wise to have such standards. But I would really appreciate it if important software like OS's had actual standards.
It seems that most of the stuff talked about in this article is just mistakes allowed by permissive practices.
We have that, it is called formal verification. However, the process is time consuming and therefore expensive. The issue is not permissive practices or lack of standards, but the cost associated with quality.
i would spend a lot more time improving software i think is actually useful and beneficial to others if it didn't mean i'd lose my job and fall off the proletarian treadmill
the difference between everyone here and technophobes is that these problems are benign to us.
when someone says "computers never work for me! technology hates me!" it isn't that their experience is drastically different; it's that the very first incongruence makes them quit, whereas that same incongruence is completely benign to us, so we don't even notice it as we march toward the entertainment or service we want.
My concern was that the name entered was "Pro_Hacking". I was worried that maybe someone had found an exploit in their system and might have been able to activate the account anyway.
I was asking them to confirm it had not been activated, but the chat didn't go so well!
The frustration was that support kept greeting with the random name despite being asked not to. I'm sure that's just a "Hello $NAME" template, but it's still lazy.
But yeah, they're included here because of the repeated greetings like that. I suspect it was a template, though it only started after I told them that wasn't my name; first few emails (not shown) were "Hi Danny"!
Nice article. I use OVO Energy; they have a nice user interface for managing gas and electricity. Although I did just check it and it's down for maintenance.
I did wonder why he would voluntarily move TO Npower. Even if you're saving money, it's not worth it. A single 'live chat' with them was more than enough to convince me of that.
I haven't had a gas bill from them in 5 months because their system wouldn't let me put in my gas readings. I had to go through a painful 40 day complaints process to get anyone to put them on the account. Never heard back from anyone. Eventually someone on their twitter feed was able to fix it over the course of a day, but of course it will be 3 months before their system generates a bill from the new figures. I've had plenty of ASP.NET "error occurred in the application" pages too so clearly they aren't building their site in Release mode. Overall, best described as a clusterfuck.
Yeah; lots of people told me I shouldn't go there. I was lucky it didn't work out and I had to move away, but SWALEC so far hasn't been much better. We still can't log in!
nice a downvote for recommending a utility company with a decent control panel. real smart. Reminds me of the arrogance I faced when trying to get my first programming jobs.