I like the article a lot; I've got a similar background.
My answer to the question would be: start by explaining what you're asking the OS to do when you run your program. There's an entire category of developers who view the IDE and the language as a perfect hermetically sealed environment, until they meet a leaky abstraction.
Some favourites:
1. Exhausting the outbound TCP ports, because ephemeral ports aren't infinite and the code didn't know that (a sketch of this follows the list).
2. Any nodejs BS that involves a precompiled C lib, where homebrew or some nonsense 'just works' on the dev machine but prod runs alpine/musl. NPM sneaking arch-specific blobs in is really to blame here, but what do we expect from the failed state of package managers.
3. Assuming latency is not a thing. Network calls everywhere. No management of timeouts / killed connections.
4. Memory, in all shapes and forms. Most typically thinking a GC prevents memory leaks, but more generally, having no clue whatsoever what the memory usage profile of the application is.
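To make (1) concrete, here's a minimal sketch of how it bites on Linux. The target 127.0.0.1:8080 is a placeholder and assumes something local is listening there; each close() parks the local port in TIME_WAIT for about a minute, so a tight loop of short-lived connections burns through /proc/sys/net/ipv4/ip_local_port_range surprisingly fast:

```c
/* Sketch: ephemeral port exhaustion via short-lived outbound connections.
 * 127.0.0.1:8080 is a placeholder target; run a local server there first. */
#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void) {
    struct sockaddr_in dst = {0};
    dst.sin_family = AF_INET;
    dst.sin_port = htons(8080);
    inet_pton(AF_INET, "127.0.0.1", &dst.sin_addr);

    for (long i = 0;; i++) {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) { perror("socket"); return 1; }
        if (connect(fd, (struct sockaddr *)&dst, sizeof dst) < 0) {
            if (errno == EADDRNOTAVAIL)
                printf("ephemeral ports exhausted after %ld connections\n", i);
            else
                perror("connect");
            close(fd);
            return 0;
        }
        close(fd); /* local port now sits in TIME_WAIT for ~60s */
    }
}
```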
A systems engineer (IMO) has much better answers to these questions than a regular developer, and it is absolutely mostly arcane nonsense (ulimit, anyone?) until it isn't.
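And the ulimit bit really is concrete once you look: a minimal sketch reading the open-file limit (the `ulimit -n` value) from inside a process with POSIX getrlimit():

```c
/* Sketch: querying the file-descriptor limit from inside a process. */
#include <stdio.h>
#include <sys/resource.h>

int main(void) {
    struct rlimit rl;
    if (getrlimit(RLIMIT_NOFILE, &rl) != 0) {
        perror("getrlimit");
        return 1;
    }
    printf("open files: soft=%llu hard=%llu\n",
           (unsigned long long)rl.rlim_cur,
           (unsigned long long)rl.rlim_max);
    return 0;
}
```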
Personally I love OS tech so maybe I'm just cut out for it. I used to be able to tell what the machine was doing from the HDD whirrs and still have a 'feel' for system latency, e.g. when something completes too quickly.
After 17 years in this career, I dislike calling myself a "software engineer" because it's a meaningless term that's now used to describe anyone who knows how to build a web page. "Full-stack engineer" has also been stolen from us by recruiters to refer to frontend devs. WTF.
As a teenager I started doing software interfacing with hardware, reversing on Windows then I wrote my own mini OS, got a job and did web dev, sysadmin, backend, infrastructure, and CTO, alongside dozens of consulting contracts.
I get hired to fix a problem, and soon enough I'm diving into the technicalities of the network stack, or tracking down a bottleneck with spit and strace, or scaling an infrastructure horizontally and making it fault tolerant.
At this point most of the value I provide is not in the volume of code I deliver, but designing, operating and maintaining systems as a whole. A systems engineer, if you like, but as the article mentions, that's a real qualification that's got nothing to do with this.
The issue is that in the job market I have to compete with everyone called a software engineer, slap "senior" onto it, which these days refers to anyone with 5 years of experience, and I find it very hard to convey to a recruiter the depth of my experience. A programmer operates on code; a "systems engineer" looks at systems holistically, horizontally (i.e. architecture) and vertically (i.e. diving down the stack).
> At this point most of the value I provide is not in the volume of code I deliver,
My personal record is two characters of code per two weeks of work.
Embedded device kept crashing when initializing a GUI rendering library (a precompiled one, STemWin IIRC). Two weeks of disassembly and instruction-level tracing of the MCU core narrowed it down to the contents of external RAM not being correct (that is, what was being read did not match what was being written).
Some more poking with an oscilloscope, reading the datasheets and the low-level memory initialization code revealed an incorrectly set RAM configuration register related to burst writes. The RAM test during bring-up used word-by-word writes, but when DMA was used, only the first word in each transaction was written correctly.
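For anyone curious what the takeaway looks like in bring-up code, here's a rough sketch: test RAM with the same access pattern the application will use. The base address and sizes are hypothetical, and the memcpy is only a stand-in for a real DMA transfer, which is what actually tripped the bad register setting:

```c
#include <stdint.h>
#include <string.h>

/* Hypothetical external-RAM window; address and size are made up. */
#define EXT_RAM_BASE ((volatile uint32_t *)0x60000000u)
#define TEST_WORDS   256u

static uint32_t pattern[TEST_WORDS];

int ram_test(void) {
    /* Pass 1: word-by-word writes -- the test that originally passed. */
    for (uint32_t i = 0; i < TEST_WORDS; i++)
        EXT_RAM_BASE[i] = 0xA5A50000u | i;
    for (uint32_t i = 0; i < TEST_WORDS; i++)
        if (EXT_RAM_BASE[i] != (0xA5A50000u | i))
            return -1;

    /* Pass 2: block writes -- the path that silently failed. memcpy is
     * only a stand-in; a real test would kick the DMA engine, since
     * burst transactions were what hit the bad config register. */
    for (uint32_t i = 0; i < TEST_WORDS; i++)
        pattern[i] = ~i;
    memcpy((void *)EXT_RAM_BASE, pattern, sizeof pattern);
    for (uint32_t i = 0; i < TEST_WORDS; i++)
        if (EXT_RAM_BASE[i] != ~i)
            return -2; /* e.g. only the first word of each burst landed */
    return 0;
}
```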
> My personal record is two characters of code per two weeks of work.
Mine is approximately minus one instruction per week, and it was the best money the company ever spent. Hard real-time is hard. (It was a strange processor with, among other things, delay slots everywhere, so you could never make just one change without ripple effects.)
It's sad, but I'm pretty sure "types" like you all will be labeled "enemy of the state" soon... Oscilloscope? Registers?? ;)
That is, if things keep going the way they're currently going: corporations, banning periodic tables from schools, elected stallers, and clan mentality at all levels.
Looking at the number of high school graduates passing physics exam each year in my country (around 250 out of ~12 thousand total high school graduates), I'm pretty sure corporations won't even have to break a sweat.
I just dabbled in embedded. I once debugged an Arduino I2C peripheral and went through the same process, but never fixed the problem :) It was easier to get a different peripheral.
> "Full-stack engineer" has also been stolen from us by recruiter to refer to frontend devs. WTF.
Pretty much yeah. I can relate to everything you write there. I always thought most of this is related to when I learned reverse engineering in high school. That's basically when I started understanding memory and why it mattered.
Unfortunately recruiters also don't understand this kind of profile; they just lump us into the group of "generalists" and say you need to specialize if you want to progress, and yet in a lot of projects I often ended up understanding more about the details than the so-called specialists.
I once saved a client dozens of millions a year with a fix for something they had accepted as "just part of the system", only for the manager to fire me with a severance, because by fixing it I had undermined his decision to scrap the project.
Amen to this. When I explain to full stack web devs that I do bare metal C, Python + Linux on edge nodes, server side processing, and native mobile apps, they either say “you should use AWS, we really like AWS” or “wow, so you’re like the real kind of full stack”.
Self-deprecation and lauding aside, I’m curious what new titles others would propose? When I migrated into the field many moons ago, the two titles I mostly saw were “analyst” and “programmer”. The first had underpaid connotations; the second was rife with the social-awkwardness cues that Hollywood ran with for a while.
Classic principal-agent problem. Additionally I think you misunderstood who your employer was and what your job was!
But very very frustrating for the world to work in such an anti-productive and nonsensical way. I also wonder if you would have been fired anyways when the old system was scrapped.
> Additionally I think you misunderstood who your employer was and what your job was!
I've heard that one before, so I'm going to agree. It's also in line with what the guy told me.
The system wasn't being scrapped; they were just going to pay for almost triple the infrastructure. But yes, this was the first time I was exposed to performance reviews. I was, as you said, hired for something else entirely, and it's possible that I would have been fired anyway, since my job was actually about writing firmware and migration tools to migrate away from the old system (which also saved them a couple of millions a year). The performance review itself was, however, about tasks completely unrelated to what I got hired for.
I have done mostly contract work since then, but I am wondering if it is common to put your actual job content into your hiring documents to make sure people don't do performance reviews about unrelated tasks?
> I have done mostly contract work since then, but I am wondering if it is common to put your actual job content into your hiring documents to make sure people don't do performance reviews about unrelated tasks?
Honestly for contract work (and usually also full-time employees), the best thing you can do is focus on all the people who are responsible for your performance review and make them happy, make their initiatives look good, and get along with them so that they like you. Sometimes this means doing completely different work than you thought you were hired to do. Or doing all that in addition to what you were hired for.
The absolute worst case for a contractor is that you're "hired to fail" where there's a project with a 90% chance of failing so they bring in a contractor/team who either miraculously save it, or you're blamed for the expected failure. However, even here, if you show competence and make friends with the people who will stab you in the back in the short term, you may also find that some of them call for your services again in the future -- because you did "the job you were hired to do" and made it easy for them, made them look good, and were enjoyable to be around.
The written "rules" of the performance review don't matter, at least in the USA (maybe different in other countries). Performance reviews basically boil down to "how much do I like working with this person?" which definitely includes competence (people mildly dislike working with incompetent people), but also includes things like "makes my day happier" or "great conversation over beers / at the sports game".
I'm mid 30's and have worked a few contracting gigs but never really made it work out (usually it coincided with life being a little too chaotic at the time for me to fully focus on my client). So hopefully some others with more experience will chime in. But there's a lot written on this topic on HN, you can find a lot of experienced views from all sides of the table with a bit of searching!
I find that the software industry is one of those industries where all role names are made up and change meaning all the time. As you said, just a few years ago (ok, maybe a decade) a "Software Engineer" was a person who built software; now it's a frontend developer with a basic understanding of the backend.
Despite the terminological thefts and oversimplifications, I'm still proud to be a part of this industry. We're like the James Bonds of the digital world; we go by different names, but the substance of our work and our contributions are what truly define us. So, let them call us whatever they want... They may not always understand what we do, but they sure as hell need us to do it!
This is absolutely true, there is zero consistency in titles, levels, or their meaning across companies. I've been doing more or less the same stuff (if you zoom out enough) for more than 15 years. In that time I've been a System Administrator, Systems Engineer, DevOps Engineer, Cloud Engineer, Software Engineer. I haven't gotten SRE on the bingo card yet but it's one of the main titles I search for these days.
Doesn't really bother me, although it does make evaluating what you might actually DO in a particular role challenging. In this "SRE" position am I writing Kubernetes operators in Go, or am I a Jenkins pipeline janitor, or am I working the graveyard shift in a data center swapping dead hard drives? WHO KNOWS!
> just a few years ago (ok maybe a decade) a "Software Engineer" was a person who built Software, now it's a frontend developer that has a basic understanding of the backend.
I'll cut a bit of slack to the frontend as it's also gotten significantly more complex over the last decade or so. I've seen places where frontend means "our iPhone app". Microsoft Office now has a version that runs in the browser.
Funny (I am not too serious about this point :D ), but I feel systems engineer is actually an engineer that has a multi-disciplinary skill set (mechanical, electronic, software etc) that guides a multi-disciplinary team to develop a complex product. Now it is repurposed for full-stack software engineers.
I work close enough to systems engineering to have some insight: the good systems engineers are strong generalists, have one core competency with deep knowledge and expertise, good enough knowledge in related fields to understand and work with experts, and are generally good at engineering.
Complex job, but a fun one! And crucially important for any product more complex than a dish washer.
But please, don't steal the name for some software "engineering" role...
I think parent is referring to the fact that in a formal context (think large aerospace systems), a Systems Engineer is the role that oversees all mech/electric/software interfaces and makes sure there is no requirements misalignment.
In that sense the naming has been misappropriated in the SV tech circle to mean a "backend engineer that actually understands what is going on under the hood" sort of role.
Is there anyone that can extend their knowledge to mechanical and electronic engineering, as well as software? I feel that my breadth in software knowledge is reaching the limits of my mental capacity and I have to shed useless weight now (i.e. I haven't kept up with frontend for 3 years now)
My father was an electronic engineer, and I would love to get into it eventually as I grew up organising his components drawer, burning LEDs and licking solder wire (yeah.. I regret it now), but that's going to be something I'll have time for when I retire.
You have a limited time to learn things. I learn for projects, that keeps my scope limited.
I think I can extend a little into the non-software realm, though my undergrad degree/first jobs were in civil engineering. I migrated to computers and ended up going back to school for that.
I work writing software for biologists now, though my understanding of the underlying biology is shallow (compared to my biologist coworkers).
Embedded software tends to need at least some EE knowledge, though how much will depend on the job. A tiny bit of ME knowledge also helps (consider thermal management, consider vibrations during shipping, consider vibration in use, etc). For robotics and mechatronics, ME knowledge becomes far more important.
> Personally I love OS tech so maybe I'm just cut out for it. I used to be able to tell what the machine was doing from the HDD whirrs (...)
I loved the story about how a developer of ZFS, from the Sun days, went down a rabbit hole of optimizing the filesystem because he didn't like the sound hard drives made on certain write loads. Turns out he was right, there was funky stuff going on with the I/O scheduler.
Eh, from my laptop's noises I can tell approximately what is showing up on a remote console running over X11.
For some reason whenever text is updating it makes a faint buzzing noise.
The matrix movie joke about seeing code is actually pretty close to the truth. Not necessarily a sign of a great programmer, but still a nice skill to have.
Perhaps the term for this is System Software engineer? (I had that title for a while, it has a sort of pleasant ring to it) As others have noted I’m more familiar with the systems engineer job title involving giant webs of requirements and specifications. Maybe something not so far off from a Product Owner in the “tech” world, though I’m not sure about that.
IMHO, the fun is that less abstracted systems are actually fixable because you can “see” the failures, as opposed to cloudy stuff where you do a lot of work to prove that the problem is on the provider’s side, and then wait months for them to implement a fix… only then to have already worked around the issue.
> I used to be able to tell what the machine was doing from the HDD whirrs
I guess that dimension of debugging is now lost with SSDs. I didn't run an anti-virus, and noticed my machine had caught a virus (in this case, DOS/Natas) because of the extra moments of whirrs during boot and when running applications.
That's really cool, and neat that he uses the activity light from the hard drive as input to generate sound.
I wonder how much slower processes would run if you generate sound on the computer itself from every hard drive write. I don't have deep enough knowledge about the OS to know where you would run your sound program without affecting performance (i.e., are there log files that could be read, so it is not directly interacting with the hard drive writes? would those still be timely enough to give a good indicator of what the computer is actually doing?)
If that could be solved, then you could have a sound for network activity, every time a kernel module loads, etc, so users don't need to have deep knowledge of the system to know when there is a problem- they just have to learn what sounds right. I think it would be really neat to have music made by the machine itself and love the idea of adding back that dimension of debugging.
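On Linux the kernel already keeps the counters you'd want, so you don't have to hook the writes themselves. A rough sketch of the idea, polling /proc/diskstats and ringing the terminal bell when a device's write counter moves ("sda" is a placeholder device name, and the 100 ms poll keeps the overhead negligible):

```c
/* Sketch: audible disk-write "whirr" from /proc/diskstats (Linux).
 * Polls the writes-completed counter for one device and emits a
 * terminal bell whenever it has advanced since the last poll. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void) {
    unsigned long long prev = 0;
    for (;;) {
        FILE *f = fopen("/proc/diskstats", "r");
        if (!f) return 1;
        char line[256], dev[32];
        unsigned long long writes = 0, w;
        while (fgets(line, sizeof line, f)) {
            /* fields: major minor name reads ... writes-completed ... */
            if (sscanf(line, " %*u %*u %31s %*u %*u %*u %*u %llu",
                       dev, &w) == 2 && strcmp(dev, "sda") == 0)
                writes = w;
        }
        fclose(f);
        if (prev && writes > prev) {
            fputc('\a', stderr); /* one audible tick per polling interval */
            fflush(stderr);
        }
        prev = writes;
        usleep(100 * 1000); /* 100 ms poll */
    }
}
```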
"ulimit" is discussed on the bash manual page. It isn't arcane nonsense. In fact it is incredibly concrete and understandable even by newbies. You're going to have to look elsewhere for an example of obscure, complex, useless technicalities.
The latency one hits hard, as someone approaching this from the other side who came up through Ops and learned software development over time. So much software I've had to support was written with the assumption of flawless network connectivity: no latency, no dropped packets, no fluctuation in response time. If there was any disruption, or a call didn't return within a very short timeout, the app just blocked and eventually crashed. Any sort of network maintenance was a huge problem, because a couple seconds of spanning tree reconverging or whatever meant a significant outage until all the applications recovered. This was of course always the network's/ops' fault; using patterns and libraries that have been around for over a decade to handle the inherently unreliable nature of computer networks was never on the table.
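For what it's worth, the patterns in question aren't exotic. A minimal sketch of one of them, bounding a socket read with SO_RCVTIMEO so a network hiccup surfaces as a handleable timeout instead of an indefinite block (the function name and return convention here are mine, not from any particular library):

```c
#include <errno.h>
#include <sys/socket.h>
#include <sys/time.h>
#include <sys/types.h>

/* Returns bytes read, 0 on orderly close, -1 on error, -2 on deadline. */
ssize_t recv_with_deadline(int fd, void *buf, size_t len, int timeout_ms) {
    struct timeval tv = { timeout_ms / 1000, (timeout_ms % 1000) * 1000 };
    if (setsockopt(fd, SOL_SOCKET, SO_RCVTIMEO, &tv, sizeof tv) < 0)
        return -1;
    ssize_t n = recv(fd, buf, len, 0);
    if (n < 0 && (errno == EAGAIN || errno == EWOULDBLOCK))
        return -2; /* deadline hit: retry, fail over, or degrade gracefully */
    return n;
}
```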
I've seen some replies that this is basic stuff everyone should know, but the fact is, people don't. Even Senior or Principal engineers (depending on the company). You can get very far in software engineering without understanding anything outside the bubble of your domain. Hell, this is basically why the field of SRE exists: a lot of developers do not know how to write reliable software that works in the real world, not just in a perfect test environment. If you've somehow worked your entire career surrounded by engineers who have even an entry-level understanding of how networks function, you're the outlier. More often I've found people invent their own bizarre mental model of what's happening that has only the vaguest grounding in reality.
And honestly that's ok much of the time. Everyone can't be an expert in everything. Just have some grace about it. Don't pull a Dunning-Kruger and assume because you are a fabulous Java developer you are automatically an expert in every technical field. Most of us aren't.
1) They're formative knowledge for understanding that part of a system. Systems are big and few people understand them entirely. I happened to know those things, but I am a pathologically T-shaped/"paint drip" developer. The middlebrow dismissal of "oh that's basic" is going to get a rise out of people.
2) You threw a one-line, no-help comment out there. That's going to get you downvoted unless it is at least interesting or funny.
3) Now you're complaining about downvotes, which is a big red sign for "downvote me more, for I am mad".
Systems engineering is like Esperanto; it would be very useful if more than a few people would "speak" it. It already starts with the fact that the term "systems engineering", which has been clearly defined in various standards since the sixties (MIL-STD 499, IEEE 1220, ISO 26702, ISO 24748-4, etc.) and backed by a comprehensive body of knowledge, is now used almost arbitrarily by anyone. Also in the article, the term is used with a different meaning than the original one.
Of course it can be argued that language evolves like people, but after all, also in physics or mathematics we don't just replace or redefine terms ad libitum (you won't pass exams if you use your own terminology), and on systems engineering there are university courses and a well established scholarship (at least there were when I was a student many years ago).
Huh, in all these years I never suspected I'd find resources about SE in standards bodies.
Would be nice to have more formal training about SE specific procedures instead of products and general project management.
I found the INCOSE Systems Engineering Handbook which seems to be a prolific publication, is it worth studying as an IT SE? Any other recommendations?
I remember introducing myself as such at a meeting, and some manager didn't understand why I was there. Turns out he came from a previous experience where "systems engineer" was about designing nuclear combustion engines, which is quite a bit different than writing shell scripts! Now I prefer SRE or simply sysadmin.
>>> I think it goes something like this: you start from the assumption that when you see something, you wonder why it is the way it is. Then maybe you observe it and maybe do a little research to figure out how it came to be the thing you see in front of you. This could go for just about anything: a telephone, a scale, a crusty old road surface, a forgotten grove of fruit trees, you name it. By research, I mean maybe you go poking around: try to open that scale with a screwdriver, get out of the car and walk down the old road, or turn over some of the dirt in the field to see if you can find any identifying marks.
... a scientist.
My first job title at my current workplace was "systems engineer." My degree is in physics. I evolved into the role of being the person on the project who understands and can explain how the whole thing works. It doesn't mean just dictating a theory of operation, but also testing that theory.
My job is in an area where products involve both hardware and software. I certainly don't claim to know every design detail. Part of the trick is figuring out what things I actually do need to know, and how to nail those things down.
Yep. I found a bug the other week which was slowing some important code down by 20x. I found the problem because it was tweaking my intuition. It just didn’t make sense that this code was so slow. After a few nights lying in bed thinking about it, I did some back of the napkin math which agreed with my intuition. Eventually I took some time to track it down. It turned out a rogue debug assertion hidden in a big function was making it into production. The assertion did a deep scan of the system to check invariants, and that turned an O(n) into an O(n^2) in some cases.
This sort of thing comes up a lot while programming. Something weird happens and you start to wonder - why does it behave like this? Answering these questions has led me all sorts of places - to reading specs, debugging into databases, compiling my own Firefox, all sorts of things. The chance this knowledge pays off sooner or later is wildly high.
Computers are deterministic and debugable. You usually have the source code and everything it does was designed and written by people like you. A curious, scientific attitude is a beautiful thing to nurture in software.
I actually had a case like this earlyish in my career. I had a job where my main responsibility was maintaining and developing large ETL scripts for an online book store (so hundreds of millions of rows). We had one script which did the meat of the work which was running many hours each night. My job was to speed it up. Watching it run for a bit, it occurred to me to wonder how long it took to send each line of progress info (couple of lines each for every single row) to the log. I remember my boss at the time didn't believe me. "I think it's logging," I said, and turned logging off just for funsies. Boom, 100x faster.
> It turned out a rogue debug assertion hidden in a big function was making it into production
Be careful to make sure that it was actually a "debug mode only" assertion (Chesterton's Fence and all that). I have seen too many programmers assume that when it is not necessarily true. Assertions are for establishing "Program Correctness" and hence many are left in Production Code. Of course how you handle it i.e. log/die/catch clause/signal trap is up to the system design.
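To make the distinction concrete, a small sketch; deep_scan_invariants() is hypothetical, standing in for the expensive check from the story above. The standard assert() vanishes under -DNDEBUG, so anything meant to guard production needs its own always-on macro:

```c
#include <assert.h>
#include <stdio.h>
#include <stdlib.h>

/* Always-on check: ignores NDEBUG, so it survives into release builds. */
#define RELEASE_ASSERT(cond)                                    \
    do {                                                        \
        if (!(cond)) {                                          \
            fprintf(stderr, "fatal: %s (%s:%d)\n", #cond,       \
                    __FILE__, __LINE__);                        \
            abort();                                            \
        }                                                       \
    } while (0)

/* Hypothetical O(n) deep invariant scan standing in for the real one. */
static int deep_scan_invariants(const int *items, int n) {
    for (int i = 0; i < n; i++)
        if (items[i] < 0) return 0;
    return 1;
}

void process_one(int *items, int n, int i) {
    /* Compiled out under -DNDEBUG. If a release build misses that flag,
     * this O(n) scan inside a per-item routine turns the whole pass
     * into O(n^2) -- the bug from the story above. */
    assert(deep_scan_invariants(items, n));

    /* Cheap enough to keep in production unconditionally. */
    RELEASE_ASSERT(i >= 0 && i < n);
    items[i] += 1; /* the actual work */
}
```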
> Nobody else saw something like this before, and so when you point it out and flip it back to sanity to restore the rest of the system, they look at you like you just pulled off some deep magic.
and
> you start from the assumption that when you see something, you wonder why it is the way it is. Then maybe you observe it and maybe do a little research to figure out how it came to be the thing you see in front of you.
These both resonated with me deeply. I see a lot of people in tech who got into it for the money, or -- more charitably -- because they thought it might be cool, without much depth to that determination. I got into it in the late 80s, when I was 8 years old, because I was intensely curious about how things (first mechanical, and later digital) worked, and wanted to know how, and wanted to be able to make things that worked in the same way.
I'm not saying the first category of people are deficient, but I definitely see people like that struggle with things that seem fairly obvious to me. Of course it's just because humans are pattern matching machines, and I've devoted a large chunk of my life to exploring these sorts of patterns, at many different levels of abstraction.
I'm not trying to say I'm somehow so cool or a better human because of it, but I do think that "being a curious person", and having been acting on that curiosity all your life, really does give you a leg up when it comes to tech-focused fields (and presumably others, but I don't have experience with them, so I can't say). Most problems boil down to a much smaller set of possible solutions, and being able to see the patterns that get you there can take more experience and time than many people are willing to devote.
LOL, in the late 80s I was in my mid 20s, living and working in a third country. Started at NCR doing COBOL and NEAT/3 assembler for OS support and a manufacturing system.
So I’m terribly old, then. And I have lived a life driven by curiosity about everything. I’m no systems engineer for sure, but I have self-discovered a lot that sometimes proves useful, and engineers and fellow product managers are like “WTF, how did you know that?!”
The good old school system engineers knew how and why the software worked from the metal to the terminal and how to use the whole system productively. I’m in awe of such people and they are a privilege to work with.
Oh, and FTR, I started programming with punch cards in high school. Used a local bank to compile Fortran.
The great part of “systems engineering” is that the fundamentals have not changed in the last like 30-40 years. If you have a basic university undergrad level understanding of operating systems you can work up from the fundamentals to understand what is going on and your knowledge will probably never go out of date in your career.
It definitely isn’t for everyone and you need to be very persistent, but I enjoy not having to relearn tech stacks every 4 years and job security for Wizards tends to be quite good. The downside is you can spend 80+ hours tracking down a bug that results in a one line change (don’t ask me how I know).
I don't think this really holds. The bedrock layers may be pretty stable, but "systems" now have many more layers on top that need more systems engineering attention.
Right, in my experience you’ll run into ‘systems engineers’ in all fields; some are working on the interfaces of the systems in that field, and others are thinking of the totality of the system and its maintainability, efficiency, and design. You always have to ask a follow-up when someone says they’re a systems engineer.
I'll generally start by defining the "INCOSE" kind or the "MCSE" kind; most of the time, people from one group or the other will know of the one they are close to and little to nothing about the other.
The work I do as an ordinary [IT] systems engineer seems to fit that traditional definition just fine, with the work involving systems thinking about, and dealing with, how different parts of company systems interact (different groups' applications, network, service providers, customers, employees, each logical layer of each stack the software runs on, to varying degrees, down to bare metal, etc.)
Well, "engineer" means real maths/physics/chemistry applied to information systems. If not, it is "technician" (well, in my country). Most "engineers" actually work at a "technician" level.
Where a systems technician may get in trouble: accurate and proven floating-point calculations and where they come from (minimal polynomial, range reduction, etc), real understanding of the maths of cryptography, proofs of common performant algorithms (ring buffers synchronized with atomic variables, unification, etc).
It may need a deep understanding of signal processing, depending on the field, etc etc.
But since everything was already put in boxes (libs), a systems technician can re-use mostly all of them: the set of skills required is no higher than high school level.
Basically, an engineer has a deeper understanding of information systems, but works most of the time as a "technician", and has some background knowledge to help him/her ramp up on tricky stuff.
Can software-based systems engineers please call themselves software systems engineers?
I've been a 'traditional' systems engineer for many years, and when searching for jobs (or with recruiters), it's always a mess trying to filter out the software-only systems engineering gigs.
I use a lot of software in my work too, it's just that I also have to do 'hardware' by the end of the work.
It's not a big ask (I think), but man, would it make a lot of sense.
I’ve worked with systems engineers, as in the modeling/SysML type. It is important to note that software is usually a part of a much bigger system, usually physical, and that systems engineering is kind of an engineering approach to project management to help in ensuring things get built to spec (even if the spec changes). The nice thing about that is you don’t get VPs/execs walking in the door saying what you’re going to do this quarter instead.
It’s hard to imagine a more unhelpful answer. This is equal parts “look at how deeply I understand all this crusty old technology, yay me” and “everybody I work with is evil and stupid.” Zero parts actual advice about the question that was asked.
>> I got some anonymous feedback a while back asking if I could do an article on how to become a systems engineer. I'm not entirely sure that I can, and part of that is the ambiguity in the request. To me, a "systems engineer" is a Real Engineer with actual certification and responsibilities to generally not be a clown. That's so far from the industry I work in that it's not even funny any more.
>> Seriously though, if you look up "systems engineering" on Wikipedia, it talks about "how to design, integrate and manage complex systems over their life cycles". That's definitely not my personal slice of the world. I don't think I've ever taken anything through a whole "life cycle", whatever that even means for software.
This sounds like the familiar grizzled veteran who's grumpy about the reality of some field, but they're grumpy because they have hard-earned insight into the field. (And maybe not feeling as upbeat that day as they usually manage.)
One idea a young grasshopper might keep in mind: maybe almost all fields are such that you'd be grumpy about a lot of things in them, once you have decades of understanding.
So, if someone is telling you that the reality of field A has a lot of downside, but you're not hearing that about field B... maybe the situation is that you aren't hearing from someone knowledgeable/representative/honest enough about B, but you have a rare opportunity to learn from a straight-talking expert in A.
Systems engineering is actually an overloaded term and that Wikipedia article is correct. The author’s definition is also correct of course. Just talking about different things.
What I thought hilarious: when the author was describing a series of tools that were maintained for a long period of time... that was the complex system. No attempts were made to reduce the complexity, because the cost was not worth the increased value. Maybe that specific part, realizing the business driver of systems, is what makes a Systems Engineer.
And even though your specific application won't EOL, there are many times when features are deprecated and that complex system must be altered. Maybe I'm just grizzled in my old age, but it seems like this author was writing about Systems Engineering while claiming not to know what it is...
I stopped using these titles because they’re just truly meaningless in the corporate world. It could literally be anything from a sysadmin to some programming job that actually requires super deep platform expertise. I’ve been a “Systems Analyst” and “Systems Engineer” at the same company and nothing about my tasks was different lol
Systems Engineering is a bit discipline- and industry-specific; a nuclear systems engineer is a bit different from a candy factory systems engineer. But the basics are about the same.
You don't need to understand the product management or biz management side, but it helps. A Senior needs to understand the basic phases of an engineering product's lifecycle from birth to death, and how to design for and complete each one. But a more standard systems engineer just needs to understand how systems work, how they are designed, implemented, and kept running. The rest comes over time.
You don't have to be an expert in every part of the product. Like, you don't have to be an Operations Manager to engineer a product for production. The "boring" engineering work is all that's required. But it really helps to know a bit about everything else: how to find requirements, draft designs, liaise with stakeholders, make a POC, test it, implement components, tie them together into a system, test it, run it. Having a sunset plan is also great but not required.
It definitely helps to have some experience in the industry you want to work in. If you wanna build a candy factory, having interned on the factory floor helps. Having built similar machines before helps.
My company used to title me "System Architect". Yes, always singular; they're Germans and have no feel at all for the English language.
Then HR bought a pricey scheme from some consultant, and now I'm suddenly an "Advanced System Engineer". And I'm mightily pissed off that it's not a senior title.
It's like they roll dice around here. People went from being Data Analysts to being developers and vice versa. People were assigned titles that don't fit, purely so they could be outfitted with certain benefits exclusively tied to those titles in a rigid scheme.
Leave it to HR to screw over any industry naming conventions and to make my future job applications great fun, when I will explain why I'm not really that thing that's actually on my CV.
> And also mightily pissed off for not being a senior title.
Not as pissed off as when you'll find out during salary negotiation that you're not an Advanced Systems Engineer after all, but an Advanced System Engineer.
Not OP, but in Germany it is quite typical to receive a letter that officially states what your duties and titles were. Some employers expect you to send them the previous job letters as a sort of background check.
It is also easy to read between the lines in the letter to see if you did just the minimum or they were really happy with you. The fear of a bad letter can keep people accountable until they leave (best case) or be used as a threat by an unscrupulous employer (worst case).
This affects international jobs a bit less, in my experience.
That is slowly changing, though. First, because most employers are afraid of legal disputes and give roughly a 2 (on the German grading scale, where 1 is the best and 6 the worst). The de-facto legal minimum, without hard reasons to deviate, is a 3.
The second reason: nobody seems to be asking for those anymore. The last couple of jobs I took, nobody asked for any of my letters.
Otherwise I agree; it takes some internal convincing not to care about them if you are used to them your whole life.
In short, those letters suck. And almost everybody knows it.
Of course you can, until you meet someone like me who dislikes that very much. I think of myself as incredibly vindictive, and I act accordingly. So, think along the lines of: if you apply to a job that I am hiring for, and you try that nonsense with me, I will make sure you never get work anywhere near me. I will remember your name.
I also would not bet on me forgetting you; I do interview quite a few people, but negative experiences stick in my head. So I think your only option is to pray that you don't meet me along the way, or someone like me. It'll be for the best for both of us, honestly.
Though I didn’t become a systems engineer, I was a part-time sysadmin early on, when I was about to start my professional career. The job was typical stuff: installing and maintaining DNS service, mail service, ensuring backups were done, and a couple of times doing recovery, etc. Alongside that, I would troubleshoot issues like a machine not being able to get onto the LAN.
It was really fantastic learning-wise. I really got to see how things are implemented. I use those learnings even to this day. When a junior full-stack developer runs into system-level issues, I oftentimes see them go completely blank, and I then gleefully roll up my sleeves and put on my sysadmin hat. It's quite fun.
The author's and your definitions of Systems Engineer are incorrect.
I am actually getting my Masters in Systems Engineering and work for a large aerospace corporation. Systems engineers are not admins, not even close. It’s simply not what they do.
The IT/software world tends to use "Engineer" somewhat differently to the outside usage. So there is some dissonance when terms from the software and non-software worlds overlap.
In an industrial plant Systems Engineer is a specialized discipline in the vein of Mechanical/Chemical/Electrical etc. Engineer.
My best attempt to explain it would be a System Engineer has a multi-disciplinary holistic view of the entire "system".
In my world the focus is process-specific, i.e. each systems engineer would oversee one particular engineering "process".
For example, with a fluidized bed reactor, the mechanical engineer would be familiar with the mechanical systems (valves, hydraulics, etc.), the electrical engineer would know about PLCs, interlocks, and limit switches, and the chemical engineer would know about things like chromatographs and gas flow meters.
The Systems engineer knows the big picture how pieces are integrated into the whole working system.
"Systems Engineer" is being thrown about haphazardly.
People are overloading the term Systems Engineering.
My bachelors is in Computer Science.
Systems Engineering has its roots in Industrial Engineering. Which is why you will see courses in Systems Modeling and Analysis (e.g. Discrete Event Simulation), Decisions and Risk Analysis (e.g. Game Theory, Linear Programming), Requirements Analysis and Principles, and Research Methods (Factorial ANOVA, p value, t value, Residual Analysis, Regression Analysis, Treatments, etc.)
Systems Engineer means different things in different contexts. The fact that you are getting your Masters in Systems Engineering and work for a large aerospace corporation suggests you mean the tradition term, whereas the article and person you commented on means systems software engineer.
Without disagreeing with you, I was only chiming in with my experience in the same context as the author has written the article which is computer/software systems. Which is that being a part-time sysadmin is a great way to get into the computer systems field.
I think within the software engineering field there's a general agreement about what "systems engineering" means and entails. FWIW here's an OCW course[1]
Your link is not describing Systems Engineering. It is describing a course related to Computer Science (which is my bachelors). People are overloading the term Systems Engineering.
Systems Engineering has its roots in Industrial Engineering. Which is why you will see courses in Systems Modeling and Analysis (e.g. Discrete Event Simulation), Decisions and Risk Analysis (e.g. Game Theory, Linear Programming), Requirements Analysis and Principles, and Research Methods (Factorial ANOVA, p value, t value, Residual Analysis, Regression Analysis, Treatments, etc.)
> I don't think I've ever taken anything through a whole "life cycle", whatever that even means for software.
I'm not talking about the author, but if you have ever heard the phrase "seniors with 5 x (their first year) of experience", then not doing the whole software development life cycle would be the first sign of this.
Jumping between projects every year and doing similar stuff sucks, yet people do that and are proud of the fact that they change jobs every 1 or 1.5 years.
Systems engineering... is about curiosity. Asking how, then why. And then the knowledge and ability to build/tweak/fix/do something about the Thing. Not just observe. Whether the domain is software or mechanics or rocket science or group psychology or (usually) many of them together... it's still the same. It just takes time and the accumulation of lots of rather-unconnected-unrelated-irrelevant little bits.
When I hire, I look for DIY skills. Tinkering with stuff. Repairing broken pens, toys, door frames, or UIs. That has proven to be the only relevant signal. Lots of people can apply formulas. Few question them. Seems a rare skill nowadays.
Maybe because I'm a DIY reverse-engineer-everything guy myself. You know, people hire their similars :)
Doesn't have to be curiosity, it also works with stubbornness. If you have the opinion that it "should ought to" work a certain way, and the system disagrees, then you will naturally have to dig into it, because you'll create your own breakage. :)
I honestly don't give a damn how my system works, so long as it works. It's just that the way I use it tends to make that a very temporary state of affairs.
It doesn't particularly matter how you come by the library of a thousand mostly useless low-level factoids about the system. So long as it accumulates, and you keep having to make use of it to work around the problems you yourself caused a few days prior, you won't be able to avoid becoming a systems person.
I feel that the bridge between software engineer and systems architect comes through scale, maturation, and integration.
Today a BS in computer science covers a wide range of topics outside of syntax and databases. On the day you graduate, you have a set of hard skills writing code that are likely immediately applicable to a job, and you also have more abstract theoretical instruction related to real-world scale and deployments.
There are also types of problems that are very difficult to teach in a classroom setting. Those things where the core code works fine, but in a scaled production implementation, it’s not meeting the mark. Success there usually involves iterations over weeks or months to get a working solution.
Being a really great systems architect involves mixing the hard technical skills with the softer qualitative skills learned along the way.
I don't know if the article is a helpful answer to the question, but it's a very true answer I think.
My path to 'doing' systems engineering for a time (it really wasn't my thing) was by maintaining software an actual sys. eng. had written, fixing problems with it, extending it and eventually becoming responsible for it.
Best lol: the API of a library we were using changed so that instead of taking a pointer to a struct to allocate memory for, it just wanted an amount of memory to allocate. We didn't notice; it passed all our tests (sheer luck) and dumped core on prod all day long. Funny.
That's sarcasm; it probably also wasn't actually Funny. Nothing about what they said suggests they didn't subsequently look into why their tests missed the issue.
My simple answer: don't. You're always hovering somewhere between a sysadmin and a software engineer, never quite getting the cred for developing software but still copping all the crud that sysadmins do.
"... All of these people will play a role in my ultimate success
as a dystopian warlord philosopher. However, the most important person in
my gang will be a systems programmer."
I liked the article. It kind of taps into the complexity of what goes into Systems Engineering. I always describe the job as a software engineer who also understands OS internals, APIs, and tooling, and all the external things they depend on.
Articles like this make me wonder if I'm cut out to be an engineer. I'm bored to tears by systems engineering and security. I just want to build interesting stuff.
"Be an engineer" and "write code" aren't the same thing. That this industry has decided to call everyone an "engineer" because they can write some Java or even some HTML has loaded these jobs with a set of implications that they don't ever really cash in on. Until something goes wrong and they now have a stick to hit you with.
It is okay to be a "developer", it's not a naughty word, and it's okay to not be an "engineer" in the sense that Rachel means it. That said, there is considerable overlap between "doing engineering" and being an effective, skilled software craftsperson. Enumerate edge cases as exhaustively as possible, capture solutions for them. Ask "what if this breaks?". Have an answer. This isn't "being an engineer", engineers get in Actual Trouble when a bridge falls down, but it is being a skilled crafter of software, and they're not dissimilar questions. The level of rigor applied might differ, but if this isn't where your brain is going by default, it is a worthwhile and even necessary skill to cultivate to build something worth using.
Of course, if you're building something people rely on, you'd better be working with and listening to people who are putting (or attempting to put; this is a young field, after all) some kind of engineering rigor into practice.
> Ask "what if this breaks?". Have an answer. This isn't "being an engineer", engineers get in Actual Trouble when a bridge falls down, but it is being a skilled crafter of software, and they're not dissimilar questions.
Wait until you learn how much software is in a modern car!
Of course you are, you just don't like systems engineering and that's fine. Someone's gotta build on top of the systems too. To you it's the boring stuff. To a systems engineer, you might do the boring stuff. We all complement each other well.
> malloc(1213486160) is really malloc(0x48545450) is really malloc("HTTP")
No it's not / that's not how C works. malloc("HTTP") is malloc of the address of a string literal in memory. If "HTTP" is at address 0x11223344 then it's malloc(0x11223344).
So it's the address of wherever the "HTTP" string is in the binary, not the characters themselves.
It's kind of a weird way to say that you can convert decimal to hexadecimal in your head. I also wouldn't use malloc as the example to demonstrate that talent, it makes no sense that way.
Perhaps becoming a systems engineer should involve learning C first... Don't always trust what you read.
This is shorthand pseudocode referring to a story that she has previously written up: https://rachelbythebay.com/w/2016/02/21/malloc/ . I suppose one could write `malloc(__builtin_bswap32(*(uint32_t*)"HTTP"))` instead but that seems overly verbose for an aside that's not meant to be compiled.
malloc takes a size_t indicating the desired size of the allocation. If malloc is accidentally being called with a pointer instead that would be an error, but this story is about an instance where malloc was accidentally being called with the first bits of a string instead, apparently on a big-endian system where size_t is 32 bits.
> If malloc is accidentally being called with a pointer instead that would be an error
Not true, just tried it in MSVC and it compiled fine but with a warning on conflicting types.
"HTTP" is of type const char* and sizeof(size_t) == sizeof(const char *). Doesn't matter if its 32 or 64, I'm not aware of a platform where that isn't the case.
> but this story is about an instance where malloc was accidentally being called with the first bits of a string instead
If that's the case, then it should have been something along the lines of *"HTTP" instead.
The situation in question involved a response from some other server. That server was supposed to speak some binary protocol which had a 32-bit length field at the start of the response. Instead the program which did the giant malloc accidentally tried to connect to an HTTP server. It blindly trusted that it would receive something in that binary protocol, so tried to interpret the first 4 bytes of the response as a length, and then proceeded to try and set aside enough of a buffer to handle that size.
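Putting that together, a minimal reconstruction of the failure mode; the reply bytes are hard-coded here to stand in for what the HTTP server actually sent back (see Rachel's linked write-up for the real story):

```c
/* Sketch: a client that trusts a 4-byte big-endian length prefix,
 * pointed by mistake at an HTTP server. The reply starts "HTTP...",
 * so the "length" decodes to 'H','T','T','P' = 0x48545450 =
 * 1213486160, and malloc() is asked for ~1.2 GB. */
#include <arpa/inet.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void) {
    const unsigned char reply[] = "HTTP/1.0 200 OK\r\n"; /* what came back */

    uint32_t len_be;
    memcpy(&len_be, reply, 4);    /* blindly read the "length field" */
    uint32_t len = ntohl(len_be); /* big-endian wire order -> host order */

    printf("allocating %u bytes\n", (unsigned)len); /* 1213486160 */
    void *buf = malloc(len); /* the giant allocation from the story */
    free(buf);
    return 0;
}
```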
> Not true, just tried it in MSVC and it compiled fine but with a warning on conflicting types.
If I meant that it would cause the compiler to report an error then I would have written that.
> If that's the case, then it should have been something along the lines of *"HTTP" instead.
We can say that they really should have written memcpy to get the string literal into the size_t or we can accept that sometimes people use language imprecisely and expect readers to make an effort to understand rather than make an effort not to understand.
> or we can accept that sometimes people use language imprecisely and expect readers to make an effort to understand rather than make an effort not to understand.
I don't disagree with you there, however, things like pointers and string literals (and what they actually represent) in my experience are a source of confusion to junior systems engineers. I'd imagine that the intended reader is someone who wants to be a systems engineer, not someone who is already a systems engineer. I expect the reader to be motivated to understand (after all, they are reading the article) but not understand and get confused because of the imprecise language, especially in the case in programming where precision matters (or the code doesn't work).
it would be neat if people ran -Wall -Wpedantic, and valgrind. but, you know, they don't.
That's kind of where the people the author talked about think the magic of "systems engineer" comes from.
I'm sure you've had that one team mate who took like 5 tries to pass code review. Not every team has a you, and crazy things land in prod, because that one team mate didn't have you to keep them correct.
> "HTTP" is of type const char* and sizeof(size_t) == sizeof(const char *). Doesn't matter if its 32 or 64, I'm not aware of a platform where that isn't the case.
CHERI has sizeof(pointer) = 16, sizeof(size_t) = 8. It's probably the most widespread system where sizeof(uintptr_t) != sizeof(size_t) (and it's admittedly a niche system).
Also I would expect the string to be reversed if she's working on a little-endian system (which the most common ones today are, including Intel 64 bit and ARM64).
I didn't, but probably because they may read like they are lecturing Rachel. Which looks bad in itself, but feels extra ridiculous for anyone familiar with Rachel's blog. She pretty much knows how malloc and C work. She probably could write her own malloc without sweating.
Worse still, OP seems to imply that Rachel is lying and bragging about some irrelevant gift (decoding hexadecimal in her head) by making up examples. Let's just say that it's not very charitable, and it's also against HN guidelines.
OP could just have expressed their surprise and asked for some clarification instead. They are not reading "is really" correctly and not considering that Rachel is taking some shortcuts so the article remains simple and enjoyable to read, as most good writers do, which in the end makes OP's whole comment somewhat moot.
See abbeyj's reply for more context. The full explanation is in Rachel's article abbeyj points to, there's no need to guess.
edit: I had skipped the last line of OP's comment. It's just outright mean. No wonder they are downvoted actually.
Zoom the system boundary out to the human level and you get entrepreneurship. Or zoom out just a little and you get product-minded engineers. All of which have people actually using your code, in the "thing that works" definition. Microservices, Kubernetes, SPA, NoSQL, LLM-du-jour, the architecture I think is better, spaces vs tabs: all matter way less than you think.
Nah, it is. When something "just works" you don't learn much about how. When your graphics card driver shits itself, you learn something getting it working again. Console text editor at the minimum!
It's not "misery is good for you" but having to solve random problems is an opportunity for learning the guts you otherwise don't get.
Linux on a laptop used to fill this niche nicely. Does it still?
running a mainstream distro & desktop env on a thinkpad t480s is significantly easier, more efficient and less fragile than any windows laptop I've ever used, seen or heard of. Less significantly so, but still easier than any apple laptop I've used. Genuinely so. YMMV.
the primary difficulty isn’t supposed to be the os, but the software you’re building. though os difficulties will always come up, depending on workload.
being on linux means you gain intuition debugging locally. whatever you might be debugging.
to sum up my point: linux. never not be running it.
to your point, linux is a better experience now than before. this is for great good, not a cause to search for a more brittle os and distro.
Using an operating system you have to tinker with to get working right, and even fix when it breaks, means you learn a lot about the low level of what an operating system is, and how this one does it. That information translates into being useful for writing and fixing other software. The point is this: if you're a student, consider using something unusual that is more brittle. Maybe a sweet spot is running mainstream Linux on your mainstream laptop (the Lenovo T-series is one of the right answers there; Dell is the wrong answer) and having a project using a Pine Rock64 SBC running Linux to drive your TV screen or similar? Something that you want to use, that breaks at random in random places, that you then have to investigate to work out how to fix.
You really learn something keeping brittle systems functional. Linux laptops with nvidia gfx cards definitely used to qualify in ways a t480s really doesn't nowadays.
When fixing things that break: strace, gdb, perf, ftrace, BPF. Write or fix a kernel device driver. Understand what the OS and compiler don't do for you, and what they often will do for you that maybe you don't want them to.