If you haven't already, check to see if either Google Fiber or Monkeybrains is available in your area. Last I checked, the regs are still in place that prevent landlords from denying you access to an ISP of your choice.
Google Fiber isn't rolled out to most of the bay area. Monkeybrains is wireless with speeds significantly slower than what Comcast offers me. I've checked just about every wired ISP possible, and Comcast is the only option that services my neighborhood.
And FWIW, I own my house in the east bay — I am the landlady ;)
> Google Fiber isn't rolled out to most of the bay area.
I'm aware. And it's my understanding that -sadly- much of the Google "Fiber" deployment in the area is a WISP, just like Monkeybrains. Quite a while back, Google Fiber bought Webpass and continued doing WISP deployments in the SFBA under the Google Fiber brand. (Because it's dreadfully hard, politically, to run fiber-optic cable in the area.)
If you haven't contacted Monkeybrains for a minimum and expected speed quote at your site in a year or five, it's worth doing it again. It's my understanding that they aperiodically upgrade the hardware in their core network as well as the sort of hardware that they deploy at customer sites.
Monkeybrains' down-to 100/100 service is -on paper- far, far slower than the up-to 1400/40 service I was getting from Comcast, but the actual, delivered speed that I'm seeing from Monkeybrains varies between 300mbit and ~1000mbit (sustained) depending on what other folks are doing on their network. [0] I'm in a fifty-apartment building, so it's possible that they've installed faster gear on my roof than they install in smaller (or single-family) buildings. Reports on the Internet seem to be somewhat mixed, with some single-family buildings reporting ~1gbit service, and others reporting ~45mbit.
[0] Typical prime-time speed is something like 400mbit. Off-hours speed is frequently very close to 1gbit. The only time I've seen the minimum speed was when I had a poorly-crimped Ethernet cable between my router and the rest of my LAN that would intermittently only link up at 100mbit.
In markets where Comcast has actual competition, they "include" unlimited data (aka no cap) at no extra charge when you sign up for their gigabit plans.
You pay for their highest-tier, highest-bandwidth plan, and they have the audacity to impose a cap that makes that bandwidth work against you? Crazy. My household internet usage is quite modest, nothing anyone on HN would call data intensive (casual video streaming is the lion's share), and I blow through 1TB every month without fail.
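For anyone doubting that casual streaming gets you there, a back-of-envelope sketch (the bitrate and viewing hours below are my assumptions, not measurements):

    # Rough streaming-only estimate; both inputs are assumptions.
    mbps = 15            # assumed average HD/4K stream bitrate
    hours_per_day = 4    # assumed total household viewing time
    gb_per_month = mbps / 8 * 3600 * hours_per_day * 30 / 1000
    print(f"{gb_per_month:.0f} GB/month")   # ~810 GB from streaming alone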
Default or not, are there sensible alternatives on a Mac? I'm not sure if I'd consider OpenZFS on Mac "sensible" - but I haven't owned a Mac in decades, so... what are the alternatives to APFS?
And yet, as someone working on core language infra, we apply exactly that sort of ideal when making changes. If a diff doesn't break any tests, then it's "safe" to land, and if something does break afterwards, then it's the affected team's responsibility to fix forward or otherwise provide proof that it's a big enough problem to roll back. If we end up in SEV review for a change and there were no broken tests on the diff, there are going to be some hard questions for the team that didn't write tests.
I.e., tests aren't mandatory, but if you aren't writing tests, it's your responsibility when someone else's change breaks your project.
Tests are hard for UI components. Even when the web page has all the expected elements, the appearance may be broken. At least for UI projects, your approach will fail.
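One illustration of the gap (a sketch using Playwright's Python API; the URL and selector are made up): an element-presence assertion passes even when a stray CSS rule makes the element invisible, which is why catching appearance bugs usually means screenshot diffing against a known-good baseline.

    from playwright.sync_api import sync_playwright

    with sync_playwright() as p:
        browser = p.chromium.launch()
        page = browser.new_page()
        page.goto("https://example.test/checkout")   # hypothetical page
        # This "test" passes: the button exists in the DOM...
        assert page.locator("#buy-button").count() == 1
        # ...even if a bad CSS rule gives it zero height or paints it
        # off-screen. Catching that requires comparing a screenshot
        # against a known-good baseline image:
        page.screenshot(path="checkout.png")
        browser.close()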
Edit: the best part was running it a couple dozen times to get an entire flock walking, falling, and rolling all over your desktop, and watching everything grind to a halt under the CPU strain!
Aww, memories. One of my old colleagues would mess with my computer and added a bunch of these. I left them there, much to his chagrin. I got my revenge one night when he was in the office late, near my desktop: the office was completely dark, and the sheep baa’d and scared the crap out of him.
Oh wow! Not sure if it was this exact program, but I remember some similar sheep roaming my desktop when I was young. It had the ability to draw pictures in MS Paint, and would often do so when you were working on something...
It looks like this repo is a rewrite of an earlier "scmpoo.exe" that roamed the internet in the mid-1990s. That was fun to set up on school computers to automatically launch at random times.
My 2017 Golf R would intentionally turn off the dashboard backlights at night if the headlights weren't on, giving me excellent feedback that they weren't on. Either headlights should default to auto, or more cars need proper feedback to the driver when they aren't on.
I'm reading their comment as a joke about how software engineers tend to overestimate their own expertise on things like physics and are not actually anywhere close to experts.
Software engineers presenting weird pseudoscience as serious physics is one way this manifests.
In what way? Threading, asyncio, tasks, event loops, multiprocessing, etc. are all complicated and interact poorly, if at all. In other languages, these are effectively the same thing, lighter weight, and actually use multiple cores.
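A sketch of the kind of interaction I mean, mixing just one plain thread with one event loop (the function names are mine):

    import asyncio, threading

    async def fetch():                 # some coroutine owned by the loop
        await asyncio.sleep(1)
        return "done"

    def worker(loop):
        # A plain thread can't just "await fetch()"; it has to hand the
        # coroutine over to the loop's thread and block on a
        # concurrent.futures.Future instead:
        fut = asyncio.run_coroutine_threadsafe(fetch(), loop)
        print(fut.result())

    async def main():
        loop = asyncio.get_running_loop()
        t = threading.Thread(target=worker, args=(loop,))
        t.start()
        # And the loop side must not block on t.join() directly, or it
        # starves every other task, so even joining needs an executor hop:
        await loop.run_in_executor(None, t.join)

    asyncio.run(main())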
If I launch 50 threads with runaway while loops in Python, it takes minutes to launch them and it barely works afterwards. I can run hundreds of thousands, even millions, of runaway processes in Elixir/Erlang; they launch very fast and the system keeps chugging along just fine.
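If anyone wants to reproduce the Python half of that, it's roughly this (exact numbers will vary by machine):

    import threading, time

    def spin():
        while True:        # runaway busy loop
            pass

    for _ in range(50):
        threading.Thread(target=spin, daemon=True).start()

    # All 50 threads now fight over the single GIL. The interpreter stays
    # up, but everything else this process does slows to a crawl:
    t0 = time.perf_counter()
    sum(range(10**6))
    print(time.perf_counter() - t0)   # many times slower than with no threads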
> If I launch 50 threads with runaway while loops in Python, it takes minutes to launch them and it barely works afterwards. I can run hundreds of thousands, even millions, of runaway processes in Elixir/Erlang; they launch very fast and the system keeps chugging along just fine.
I'm not sure that argument helps your position on threading. I once saw a Java program spin off 3000 threads doing god knows what. Debugging the fucking thing was impossible.
The point there is that processes in Elixir and Erlang are effectively like functions, in that you do not need to "manage" them in any sort of way. They are automatically distributed across all cores, pre-emptively scheduled, killable, have a built-in inbox, etc. One doesn't need to worry about what concurrency library to use nor manually create mailboxes using queues or whatever else. It just works, and you fire them off to do whatever you need. So there is no ceremony. Threads in many other languages and in Python in particular, require a huge amount of ceremony and management.
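To make the ceremony concrete, here is roughly what a single Erlang-style "process with a mailbox" costs you in Python (my sketch, not any standard pattern):

    import threading, queue

    def actor(mailbox: queue.Queue):
        # Hand-rolled "receive" loop; Erlang/Elixir gives you this for free.
        while True:
            msg = mailbox.get()
            if msg == "stop":          # no built-in kill, so invent a poison pill
                return
            print("got:", msg)

    mailbox = queue.Queue()            # the mailbox is DIY too
    t = threading.Thread(target=actor, args=(mailbox,))
    t.start()
    mailbox.put("hello")               # "send"
    mailbox.put("stop")
    t.join()

    # Even with all that ceremony, these threads still share one GIL,
    # aren't spread across cores for CPU-bound work, and can't be killed
    # from outside without cooperating. spawn/send/receive in Erlang
    # replaces everything above.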
> require a huge amount of ceremony and management
I think Java made it quite easy to spin off threads, and again, that doesn't help the argument. It just made the f'ing thing worse. Race conditions are still f'ing hard to solve, particularly when shared mutable state exists outside of the program.
The whole purpose of threads is to improve the overall speed of execution. Unless you're working with a very small number of threads (single digits), that's a very hard goal to achieve in Python. I wouldn't count this as easy to use. It's easy to program, yes, but not easy to get working with reasonably acceptable performance.
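E.g., a quick benchmark sketch (absolute timings are machine-dependent; the shape of the result isn't):

    import time
    from concurrent.futures import ThreadPoolExecutor, ProcessPoolExecutor

    def cpu_work(n: int) -> int:
        return sum(i * i for i in range(n))

    def bench(executor_cls) -> float:
        t0 = time.perf_counter()
        with executor_cls(max_workers=4) as ex:
            list(ex.map(cpu_work, [2_000_000] * 4))
        return time.perf_counter() - t0

    if __name__ == "__main__":
        # Threads serialize on the GIL for CPU-bound work, so the pool
        # runs at roughly single-core speed; processes actually use 4 cores.
        print("threads:  ", bench(ThreadPoolExecutor))
        print("processes:", bench(ProcessPoolExecutor))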