I'm no longer using a Mac (for employer reasons), but I had a lot of success with Yabai (free/open-source) before I switched. https://github.com/koekeishiya/yabai
I haven't disabled SIP yet. As a Mac novice, I'm wary of breaking something required for work (and in a pandemic WFH to boot) that I can't fix. But it looks like it's optional.
One server is elementary, yes, but hundreds of thousands working together is an entirely different beast. There’s nothing easy about cloud computing and there’s a definite reason why it’s worth buying into.
There's that deceptive mantra, repeated mindlessly over and over yet again. There is nothing hard about it either; it is just complex. So everyone overpays because marketing has successfully sold them on it being "too hard".
I would argue that if a remote worker is dissatisfied with the physical setup of their remote office, that's on them, not the employer. Having a team that does not prioritize remote-first-style communication is another situation entirely.
> A broken Docker image can lead to production outages, and building best-practices images is a lot harder than it seems. So don’t just copy the first example you find on the web: do your research, and spend some time reading about best practices.
While I may not agree with absolutely everything in the article, this final point is paramount. Please don't blindly use technology because you managed to find a copypasta config that runs. Running != good.
Definitely very true. I write more C++ than anything else, and the sheer number of online examples that start with
using namespace std;
is just staggering. Sure, it works in a toy example posted to Stack Overflow, but it will cause problems in larger projects. I think there needs to be a better emphasis, across the board, on best practices in tutorials and examples; I remember this particular pet peeve of mine also being present in college textbooks. Especially for content aimed at newbies, it should be frowned upon to show the wrong way to do things, since that makes it harder to teach the right way later.
I've had people who were surprised to find out that they could type:
using std::chrono::duration;
using std::cout;
instead of pulling in the entire std namespace; simply because they'd only ever seen examples that did it the lazy way.
That's... true in this specific example, because C++'s standard library is a huge, promiscuous mess of obvious-looking symbol names that are just asking for a collision with a user name.
But in general the notion that we want to isolate "everything" into a namespace is a net loss. Clear and simple abstractions have real value, and short undeclared names are an important part of being clear and simple.
The modern convention of separately importing every symbol you use gets really out of hand when, most of the time, it really is appropriate to just declare "my code is using this API" and expect things to work, without hand-assembling a giant shipping manifest of symbols at the top of your source files.
I strongly disagree. In Python, you can find exactly where every identifier comes from (unless you use `from foo import *`, but that's frowned upon) and it makes it extremely easy to navigate code and documentation.
I've had to look at some C# web service code recently, and the amount of magic it relied on made it impossible for me to find what I was looking for, even using grep.
And I strongly repeat that well-understood and commonly-used APIs benefit (strongly, heh) from concision and idiom.
Seriously, when was the last time a C programmer needed to figure out where identifiers like "strlen" or "fread" come from? The problem you posit need not exist for big chunks of commonly used APIs, and the inability of modern programmers (trained, it seems, on C# web service code) to see that is frustrating.
Aren't Python's wildcard imports (`from foo import *`) essentially equivalent to my C++ example? They're bad practices in both languages. Thankfully, Python examples seem to be better in this regard; I don't recall seeing wildcard imports in any of the tutorials or references I've used.
> Not every starter is a finisher, and not every finisher is a starter, and not every finisher is a good maintainer, either. They're different things.
This is the most true thing I've read in a while. It takes a lot of thoughts I've been having lately and wraps them in a concise package. Thank you! This is gonna stick with me for a while.
I love File>New'ing a project. A blank canvas inspires me and I can see the possibilities, but my coworker can't imagine anything, yet is able to improve greatly on extant concepts. And yet another friend can do neither, but is great at being disciplined and pruning/maintaining a codebase.
Out of genuine curiosity, do you know of any resources you could point me to for exploring specific situations where Git isn't fitting the bill technologically?
The big one is anything that requires locking. Git by nature doesn't support locking.
The next one is repository size: anything with an extremely large history or checkout is very hard to work with in Git. Microsoft has VFS for Git; Facebook and Google have modified versions of Mercurial and Git.
High repository velocity. If you are trying to push to a remote and you are always out of sync, it's going to slow you down.
Checking out different commits for different parts of the tree. This one is rarer; it's less common that you'd want it.
Finally, setting ACLs to deny read access to parts of the tree.
For all of these cases, there are ways to work around the problem. You're not completely dead in the water with Git, and none of these things are strictly impossible in it. It's just that Git isn't good at everything; it's exceptionally good at what most people who write code need.
Remember, yes. Scream from the rooftops louder than the positives, no. Don't forget that insults, lies, and corruption very much fall under the "behave humanly" category as well. We ain't perfect.
My illogical hope is that one day we'll have the ability to run "simulations" in parallel universes so we actually can prove that option X was better in the long term than option Y.
Either way, there are still too many variables. If a company's ROI drops after changing to an open work-space, some variables off the top of my head: decreased product quality/quantity, both/none, competition, consumer needs/wants, all of the above, etc.