Programming languages will always be better, more feature-rich, and more capable than any DSL. However, the whole reason things like Terraform or Puppet become popular is that the people who write in these DSLs don't consider themselves "programmers" and don't want to be thought of as "programmers".
As somebody who identifies as a "programmer" and often has to manage infrastructure, I would absolutely love for a serious IaC library to pop up in my favorite language. I would switch to it immediately. However, unless we start hammering home that if you regularly write in these DSLs you are in fact a programmer, and that there's nothing wrong with that, people will keep flocking to these tools.
Very much agree. I do wonder how things like crontab could be specified, though. The cron job syntax is its own fricking thing that I always have to look up. But then I wonder, how else could it be specified? A YAML file?
I've spent way too many hours this week looking for a better model for configurable timers than cron.
Cron is kind of awful, not just the syntax but the logic. You can't say "every three days, but not on Monday; if a run would land on a Monday, defer it to the next selected day."
There are also too many ways to say the same thing, so GUI editors are hard.
Yeah, basically it needs to be Turing complete, but it falls short. So why not just give us Lua or some real programming language? There's no real need for a cryptic single-line syntax.
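As a rough illustration of how little it takes in a general-purpose language, here's a minimal Python sketch of the rule cron can't express (the dates and the function name are made up for the example):

    from datetime import date, timedelta

    def next_run(last_run: date) -> date:
        # "Every three days, but never on a Monday: defer to the next selected day."
        candidate = last_run + timedelta(days=3)
        while candidate.weekday() == 0:  # 0 == Monday
            candidate += timedelta(days=3)
        return candidate

    print(next_run(date(2024, 1, 5)))  # Jan 8 is a Monday, so this defers to Jan 11

Once you're in a real language, the weird edge cases are just ordinary code.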
I tried to organize a corporate hackathon at one point in my career. My initial idea was to make it freeform. However, after talking to people who wanted to participate, I found that nobody really had any ideas; they just wanted to join and hack on somebody else's idea.
My (possibly incorrect) takeaway: people with great ideas will build them themselves, without a hackathon.
I still love the idea of hackathons, I just don't know the right way to organize them.
That might be a good sign. All the tech debt paid down and opportunities to play with your idea in normal work time, so nothing left for the hackathon.
It's like how not having anything to say in 1:1s might be a sign of good communication happening in real time.
I started in the software engineering space and moved into data engineering, and I was floored by the complete lack of tooling. There is a HUGE gap between software engineering and data engineering when it comes to both tooling and practice. Even the simplest "unit" test of "is the SQL statement valid?" is not all that common in frameworks and tooling, yet in practice that accounts for something like 90% of the production failures I've seen.
Starting with a framework that is programming-language first (i.e., Spark) can help you build your own tooling so you can actually write unit tests. It's frustrating, though, that this isn't just common across other ETL tooling.
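For what it's worth, here's roughly what that kind of test looks like with PySpark and pytest; the table, columns, and query are invented for the example:

    import pytest
    from pyspark.sql import SparkSession

    @pytest.fixture(scope="session")
    def spark():
        # A local SparkSession is enough to exercise the SQL parser and analyzer.
        return SparkSession.builder.master("local[1]").appName("sql-tests").getOrCreate()

    def test_daily_rollup_sql_is_valid(spark):
        # Register a tiny stand-in for the production table so the analyzer
        # can check column references, not just syntax.
        spark.createDataFrame(
            [(1, "2024-01-01", 9.99)], ["order_id", "order_date", "amount"]
        ).createOrReplaceTempView("orders")

        # If the SQL is malformed or references missing columns, spark.sql raises
        # and the test fails -- exactly the class of production failure described above.
        spark.sql("""
            SELECT order_date, SUM(amount) AS total
            FROM orders
            GROUP BY order_date
        """)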
Ran into a similar issue last year in the East US region. We contacted support and they gave a similar response. From my understanding, talking to people who use AWS and GCP, this isn't uncommon across cloud platforms.
While we could've just swapped a deployment parameter to deploy to another region, we opted to use a different SKU of VMs for a short period and switched back when the original VMs were available again.
Yeah, AWS tends to have capacity issues during high-volume periods like Black Friday (I think this is now actually because most large users pre-reserve a buffer pool of VMs that sit unused), but I have never had an issue where AWS told me there would be no capacity for months. It's usually a matter of swapping AZs or regions, or being slightly flexible on your SKU. And if you are sensitive to this and find it happening, take a look at your SKU loadout; you may be choosing a very high-demand VM type, and shifting just slightly gets you way more capacity.
^^ And by capacity I'm talking about tens or hundreds of VMs being available, not one.
I'm with you, I'm a dev who prefers Windows over Mac. I'm convinced that most Mac users don't use multiple monitors and they're completely fine with the really poor window management. It boggles my mind; I am objectively less productive with a single monitor than I am with multiple.
Which is not a stock macOS app. Which is what the point was. Stock macOS sucks with window and monitor management. I run three screens and it's aggravating sometimes.
It seems like if you discount third-party applications, you should also discount the addition of peripherals like additional monitors, as they're not the stock experience either.
This seems to come up with every argument about operating systems. People say it sucks if you have to use third-party applications, but I vehemently disagree. The fact that you CAN use third-party applications, the fact that they have sufficient access to make the experience better, the fact that the community exists to make them... I think that's a major pro in favor of an OS.
> Which is not a stock macOS app. Which is what the point was.
This seems like a strange standard to me. I can’t think of an OS across any and all I have had to use over a multi-decade career that didn’t benefit from some third-party app to improve some “stock” feature of the OS.
If you go back to the earliest days of the Mac, there have always been third-party apps to improve the functionality of your system. It's what's expected of users. There used to be a time when third-party Windows-style Start Menus and taskbars were all the rage. I know it's a cultural, not technical, difference, but unless your work restricts what you can install, you will be much happier going with the flow. Many window-snapping utilities exist, and many others help with multiple-monitor setups.
With the great attention to detail of macOS third-party developers, you can be sure those apps will give you a seamless experience.
And Windows File Explorer copying was inconsistent and buggy as hell for nearly a decade, but installing TeraCopy fixed that issue. As long as an OS is plugin-friendly, stock issues that can be easily remedied don't bother me.
If you're using multiple monitors and you're not using the rectangle app... that's kinda on you. I switched to Mac after using Windows for decades, and in less than a week I solved my window management issues with this extension.
> If you're using multiple monitors and you're not using the rectangle app... that's kinda on you.
This attitude strikes me as snobbery. People aren't born knowing the best 3rd party alternative to their OS's shortcomings, and some have no control over what apps they can run.
Windows 10 has pretty good built-in window management. IMO better than macOS. I say this as a Mac and Rectangle user. (Though I'm only on a Mac because Safari works exclusively on it. I prefer Linux.)
It's also the attitude that's endemic in a lot of Mac forums, so good luck finding out answers when everyone rolls their eyes because you should already know.
> I'm convinced that most Mac users don't use multiple monitors and they're completely fine with the really poor window management. It boggles my mind; I am objectively less productive with a single monitor than I am with multiple.
I'm a long-time Mac user with a 25" 21:9 ultra-wide display that's effectively two regular office displays. A single ultra-wide display is way superior to a couple of displays, both performance-wise for your workstation and config-wise.
Nothing is blurry to me. To manage windows with keyboard or mouse I use moom[0].
I've worked on Windows for 2 years at a large corp, and the blurriness of fonts I experienced with mid-range 27" monitors was unprecedented. After investing considerable time trying to make the problem go away, I gave up.
Even modern Ubuntu does a better job at font rendering with displays made by the same manufacturer.
Longtime user of a 2-3 display setup here, and I would strongly disagree. If anything Windows’ support of multiple displays is bad… last I knew it couldn’t even set per-display or per-virtual-desktop wallpapers, and its virtual desktop support (which is essential for a multi monitor setup in my eyes) is nowhere near as mature as that of macOS or practically any Linux DE.
The key is that with macOS, you don’t really manage windows. Just let windows be where they will, like papers on a desk, and then manage desktops. While I’m working I’m constantly flipping between desktops on both my primary and secondary displays, with the primary display being set up with desktops of primary windows and the secondary display being set up with desktops of secondary and tertiary windows. The ability to mix and match sets between displays is powerful and way more natural to me than cobbling together some kind of overcomplicated window snapping setup or something like that.
> I am objectively less productive with a single monitor than I am with multiple.
I've tried multi-monitor setups before with both Windows and OS X, and I just don't like them. I moved to a 43" 4K TV, and I like it much better, especially with a window manager like Rectangle. If I need extra screen real estate, OS X has pretty good support for virtual desktops, but I haven't felt the need yet.
One of these days, I might feel cramped by my setup, but I suspect that I'd rather go up to 50" than get multiple monitors. Especially if I could go 8K by then.
I've found Mac window management to be completely unusable... until I installed Magnet from the App Store for $3. Then sanity prevails. And with much weeping and gnashing of teeth, I've managed to get the right combination of dongles to get my 32" and 49" monitors to work at the same time. It was worth the pain to be on a Unix machine at the end of the day.
Windows window management doesn't really scale well beyond 3 monitors either. I use AHK with great results, and I imagine Mac has a similar non-native program. That being said, I've never even tried to migrate to Mac or Linux because I'm most productive with 8 monitors and the support for that number sounds abysmal on anything but Windows.
The author calls out a few reasons why DevOps fails for organizations, all of which I agree with. However, there's one that I've never completely understood: regulatory reasons for keeping Ops centralized.
I work in healthcare which I guess should fall under this rule - but in practice I haven't really seen that impeding DevOps. Teams that have the capabilities to build the full stack get handed a subscription to a cloud provider and they go off and do so. They still fill out and track change logs, audit changes and seek approvals - but after that's done, it's still the team who presses "the button".
Anybody in a regulated industry where you've hit hard walls that prevent you and your team from going full-on DevOps? If so, what rules were cited that stopped you?
I am not in a regulated industry, but we have recently gone through the process of getting SOC2/ISO27001 certified.
This is what was cited for us.
ISO27001:2013 A.6.1.2: Segregation of Duties. Conflicting duties and areas of responsibility must be segregated in order to reduce the opportunities for unauthorized or unintentional modification or misuse of any of the organization's assets.
Surely that means that no one individual can push a change they created without involving someone else, but that it is still fine as long as any two people (even if they're on the same team) are involved? You could solve this by e.g. forcing GitHub to require a review.
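As a rough sketch of what that looks like in practice, here's one way to enforce "at least one approving review before merge" via GitHub's branch protection REST API; the org, repo, and token are placeholders, and the payload shape is worth double-checking against GitHub's docs:

    import requests

    OWNER, REPO, BRANCH = "example-org", "example-repo", "main"   # hypothetical names
    TOKEN = "<personal access token with repo admin rights>"

    resp = requests.put(
        f"https://api.github.com/repos/{OWNER}/{REPO}/branches/{BRANCH}/protection",
        headers={
            "Accept": "application/vnd.github+json",
            "Authorization": f"Bearer {TOKEN}",
        },
        json={
            # The "second pair of eyes": at least one approving review before merge.
            "required_pull_request_reviews": {"required_approving_review_count": 1},
            # The endpoint expects these keys even when unused; null disables them.
            "required_status_checks": None,
            "enforce_admins": True,
            "restrictions": None,
        },
    )
    resp.raise_for_status()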
Not impossible. Even in a prescriptive framework like ISO 27001, adequate SoD is a judgment call between you and the auditor. Generally speaking, if a single dev can push a code change to prod in a way that would escape audit or not require a second pair of eyes, that would not be compliant. So if a dev writing code also manages the deploy environment, that may not pass muster.
But it's not that cut and dried. There are degrees of rigor.
No. Assuming a well-configured continuous-deployment-style environment, you just need to have peer review on code before it can hit production, and you need controls in place over the who, what, and when of elevated access to production being granted.
This all breaks down as soon as audit realises the DevOps team is also admin of the CI/CD stack, and therefore all the controls put in place to make it harder for a single actor to do bad stuff can be bypassed via this all-powerful system.
There's another reason that's a bit older: there's a line item in Section 404 of SOX called "segregation of duties," which many bureaucrats interpreted to mean "developers must not have access to production," when that's not what the regulatory requirement means. It essentially means checks and balances for accountability and auditability. If nobody can cowboy-code their way into prod, it's fine. In fact, rogue ops engineers modifying code in production is an example of how separating ops and dev won't really protect against insider threats either. What really must happen is that there is a sure way to verify that code is approved by another stakeholder for deployment and tracked at traceability levels appropriate to who can fix it or who should be able to view the info.
When people keep yammering on about DevOps as a principle of people and processes, they've already lost, because processes are meant to replace people; so really, all that matters are the processes and the services that fit into the process SLAs and OLAs.
Note that in a big organization, what really matters is your particular regulators, and arguing with your regulators while claiming to know better than them is probably one of the fastest, most reliable ways to get fired I can imagine that won't result in a criminal lawsuit against you.
Author here. That's interesting, as I've not worked with healthcare too much.
Others here have cited segregation of duties, which is definitely a factor, but the other one less mentioned in finance is the 'one throat to choke' principle: it's simpler from a management and regulatory perspective to have the responsibility for failures in one place rather than across many teams.
Ah - that makes sense. This might be a bit easier in healthcare as I believe it's pretty common to have many different ops teams each responsible for different parts of the business.
I feel like most of the time "compliance" is blamed when really, it's your first point in that section (Absent an existential threat, the necessary organizational changes were more difficult to make) that is the real holdup.
PCI doesn't _stop_ us from distributing these duties, but it sure does make it harder. Having change management processes in place adds all sorts of additional controls. Sharing code between this system and the main system creates either friction or a lack of DRY code.
I was getting ready to disagree with you - but then I tried to think of any time I've actually pushed code to production with the "DELETE" keyword in it. The problems that I've had to solve in my career very rarely call for deleting something.
"Soft deletion" and "audit trail" are technical terms we developers come up for solutions the business wants but maybe hasn't asked for yet. It's not really a soft deletion it's a "deactivate" or "hide". Likewise, it's not an audit trail it's a "history" or "undo". Most of the time your stakeholders and users actually want these features, but don't ask because they perceive this as more expensive to build then just a "delete" button.
I've worked in healthcare IT for my entire professional career, and it's A LOT more complicated than most people think. For the last 5 years I've focused on the data side of healthcare, and I think that deep learning is 100% possible; it's just not achievable by a single person, and it's likely VERY expensive. There are so many facets to healthcare data that it's just impossible for a single individual to achieve something meaningful without the help of teams of doctors, data analysts, data engineers, and data scientists. Just dealing with data quality issues (such as the ones called out in this essay) requires a team of people to determine whether the metrics you are trying to measure are legit or not.
On billing: I'm convinced that the primary reason why healthcare (at least in the US) is so complex is the dichotomy of saving people at all costs while doing so fiscally responsibly. It is fairly common for large healthcare organizations to have ACTING doctors in their C-suite, whose primary goal is not to make money; it's to save lives. The people who care about saving money, reducing cost, and increasing efficiency have no control over the organization. I'm not saying this is a bad thing, but IMO it's the largest contributing factor to why healthcare billing is so complex and why healthcare costs get as high as they do (at least in the US).
I doubt that Microsoft was looking to get into the "advanced text editor" business. Their biggest money maker is Azure, they made a text editor that promotes Azure.