Oh man, I remember in 2000 when I first started working in the industry we had this database build process written in Java that took almost 30 days to run. The delivery schedule was monthly, and if anything went wrong we'd have to restart from checkpoint and the database build would be late. It also pegged a 32-CPU SMP DEC Alpha machine for the entire time, which was, well... CPUs would regularly (once every other build or so) cook the socket they were in and have to be replaced. The GS-320 would hot-swap (semi-reliably) so it wasn't a HUGE deal, but it would slow it down and inevitably the build would be a day or two late.
Enter myself and a buddy of mine. First thing we discovered was that they were using regular java.lang.Strings for all the string manipulation, and it'd garbage collect for between 30 and 50 seconds every minute once the process got rolling. It used a positively criminal number of threads as well in our predecessor's desperate attempt to make it go faster. SO much time was spent swapping threads on CPUs and garbage collecting that almost no real work got done.
Enter the StringBuffer rotation scheme. John and I decided to use the backup GS-160 as a hub to read source data and distribute it among 16 of our floor's desktop machines as an experiment. The hub was written in C++ and did very little other than read a series of fixed-length records from a number of source files and package them up into payloads to ship over socket to the readers.
The readers gut-rehabbed the Java code and swapped out String for StringBuffer (and io for nio) to take the majority of garbage collection out of the picture.
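For a sense of what that looked like, here's a minimal sketch of an nio-style reader for fixed-length record payloads; the record size, host, and port are hypothetical stand-ins, not our actual code:

    import java.net.InetSocketAddress;
    import java.nio.ByteBuffer;
    import java.nio.channels.SocketChannel;
    import java.nio.charset.StandardCharsets;

    // Sketch of an nio reader: pull fixed-length records off a socket
    // without allocating per-line Strings the way the old io code did.
    // RECORD_LENGTH, the hub host, and the port are made up for the example.
    public class RecordReader {
        static final int RECORD_LENGTH = 256;

        public static void main(String[] args) throws Exception {
            try (SocketChannel ch = SocketChannel.open(
                    new InetSocketAddress("hub.example", 9000))) {
                ByteBuffer buf = ByteBuffer.allocateDirect(RECORD_LENGTH * 64);
                while (ch.read(buf) != -1) {
                    buf.flip();
                    // Consume only whole records; keep any partial tail.
                    while (buf.remaining() >= RECORD_LENGTH) {
                        byte[] record = new byte[RECORD_LENGTH];
                        buf.get(record);
                        process(new String(record, StandardCharsets.US_ASCII));
                    }
                    buf.compact();
                }
            }
        }

        static void process(String record) {
            // slice the fixed-width columns out of the record here
        }
    }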
The trick we employed was to pre-allocate a hoard of StringBuffers with a minimum storage size and put them in a checkin/checkout "repository" where the process could ask for N buffers (generally one per string column) and it'd get a bunch of randomly selected ones from the repo. They'd get used and checked back in dirty. Any buffer that was over a "terminal length" when it was checked in would be discarded and a new buffer would be added in its place.
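A minimal sketch of that check-in/check-out scheme; the capacity numbers are made up (I don't remember the real values) and the random pick is plain java.util.Random:

    import java.util.ArrayList;
    import java.util.List;
    import java.util.Random;

    // Sketch of the StringBuffer "repository": pre-allocated buffers get
    // checked out, used, and checked back in dirty; any buffer past the
    // terminal length is discarded and replaced. Sizes are hypothetical.
    public class BufferRepo {
        static final int MIN_CAPACITY = 1024;
        static final int TERMINAL_LENGTH = 16384;

        private final List<StringBuffer> pool = new ArrayList<>();
        private final Random rng = new Random();

        public BufferRepo(int count) {
            for (int i = 0; i < count; i++) {
                pool.add(new StringBuffer(MIN_CAPACITY));
            }
        }

        // Check out n buffers (roughly one per string column), chosen at
        // random. Callers setLength(0) before writing; buffers arrive dirty.
        public synchronized List<StringBuffer> checkOut(int n) {
            List<StringBuffer> out = new ArrayList<>(n);
            for (int i = 0; i < n && !pool.isEmpty(); i++) {
                int idx = rng.nextInt(pool.size());
                int last = pool.size() - 1;
                StringBuffer sb = pool.get(idx);
                pool.set(idx, pool.get(last)); // swap-remove keeps it O(1)
                pool.remove(last);
                out.add(sb);
            }
            return out;
        }

        // Check buffers back in dirty; oversized ones are replaced so the
        // pool's memory footprint stays bounded.
        public synchronized void checkIn(List<StringBuffer> buffers) {
            for (StringBuffer sb : buffers) {
                if (sb.length() > TERMINAL_LENGTH) {
                    pool.add(new StringBuffer(MIN_CAPACITY));
                } else {
                    pool.add(sb);
                }
            }
        }
    }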
We poked and prodded and when we were finally happy with it, we were down to one garbage collection every 10 minutes on each server. The final build was cut from 30 days to 2.8 and we got allocated a permanent "beowulf cluster" to run our database build.
My dad taught me to read with Dr Seuss and the TI-99/4A BASIC programming manual. Started writing my first programs at 4. Love that old machine. There was a sort of Gradius knock-off for it called Parsec that I played the hell out of too.
When I was a little older I would borrow books at the library to write games in BASIC. Basically keying in stuff that the book told you to write, and since a lot of it was for the C64 or TRS-80 I had to figure out how to “port” it to the TI. I wrote notes for my changes in pencil in the library books so I wouldn’t get in trouble with the librarian. Invariably I’d check the book out again a few weeks after I’d returned it. I was probably the only person who read my notes, but I like to think someone got some use out of my addenda.
Wrote my first programs for this too, at 7. Copied them from the manual with no understanding, but eventually I was writing my own. The only coder in the house! Didn't write anything decent until the C64, when I made a few small games.
So let’s set aside for just a moment the notion of single-payer healthcare as an answer to this. Why hasn’t a point of competitiveness between insurance companies ever been that they keep your workforce healthier, more productive, and easier to retain than the competition?
The biggest cost to an employer is always their roster. The fewer sick days people need, the less burnout causes them to churn, and the healthier and happier people are overall, the lower the training, recruiting, and redundant staffing cost.
It feels natural to me that in an employer-paid healthcare system like the one we have in the US, the employer should demand the highest-quality coverage possible by that metric, as long as it reduces staffing overheads.
By far the number one thing any employer I've worked for has done to reduce my number of sick days and visits to the doctor/pharmacy was remote work. Gathering large numbers of people from across a wide area into a confined office space is, by its nature, a vector for spreading infections. That's before even getting into issues like what a stressful commute does to the body and how it eats into time that could, though not necessarily would, be used for healthier activities. But just the elimination of picking up colds, flus, and other infections from the office has had a significant impact on the number of sick days I've used. There have also been times when I was sick but not too sick to be productive. In the past I would have had to weigh whether using up a sick day and having no productivity was a good trade-off for not going into the office and possibly infecting others.
Maybe we should question this assumption that insurance companies intend to compete. It strikes me as very difficult to compare insurance companies. Some of them don’t even keep up to date information about the doctors that are in network (you have to call the doctor to ask).
Payers (insurance companies) do compete aggressively, but they mainly compete on issues relevant to benefits administrators who make the decisions at large employers about which health plans to offer to employees. Those administrators need to hold down costs for self-insured employers, and having larger provider networks means higher costs.
My guess is that despite the strong long-term overall link, it's too difficult to draw a meaningful link between any particular executive decision on this and any particular outcome, causing a tragedy of the commons. I think it's a good guess because it's a powerful explanation for many other such questions about "Why don't they just ____?"
Patients on Medicare/Medicaid plans have similar or worse issues finding therapists who take that insurance, so I am mystified as to why anyone would think that a single-payer system would be any sort of solution.
While in principle I think you're probably correct that providing good mental health services to employees makes for a more productive workforce, it's tough to connect cause and effect in a mathematically rigorous way. We just don't have high-quality studies in this area to establish cost effectiveness. And many employees tend to leave anyway for other reasons.
We really ought to break the link between employment and health coverage. That part never made any sense.
People switch insurance fairly frequently, so if you could pay to improve someone's health, the insurance company is not all that likely to see much benefit.
Most employees at my current firm stay 2-3 years. That means that if you could "fix" a medical issue, it would often need to pay for itself in a year or two, which is unlikely.
Unfortunately, the economics seem to favor what the insurance companies are often accused of doing: finding excuses to deny treatment.
Increasing employee pay would also boost morale, but it would be detrimental to retention - more money in their pocket means people have more options to quit outright or find a different job.
The same applies to good healthcare - employers WANT their employees to be utterly dependent on them and just barely scraping by, so they're stuck.
Because those are long-term effects, and most companies have literally zero decision makers with any incentive to care about the long term. They all know the game. They're there to parasitize as much value as possible from the company or its common-stock holders in the short term and then fail upward and repeat the cycle. The long-term health of the company is completely irrelevant to everyone who matters.
If you’re regular and become irregular, it’s often a sign that something is off, or if you’re into middle age, that your body’s hormone balance is changing. In either case it’s something to pay attention to and maybe mention to your doc next time you see them.
The thing is, there are solid, replicable patterns for optimizing GraphQL. The way we use GraphQL is to expose "everything" to the frontend folks so they can work closely with design until they have a polished frontend for whatever they're building; then our backend folks look at it in Datadog APM and figure out where to optimize. Once we have an optimized query, we ship it. Everyone's aware of the basic optimization patterns we use, and backend is a pretty well-oiled machine.
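To give one concrete (and generic, not shop-specific) example: the most common of those patterns is killing N+1 resolver fetches by batching, which libraries like DataLoader package up. A bare-bones sketch of the idea in plain Java, with made-up names:

    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;
    import java.util.concurrent.CompletableFuture;

    // Sketch of DataLoader-style batching: resolvers enqueue keys and get
    // futures back, then one batched backend query services everything.
    // All names here are illustrative, not from any particular schema.
    public class AuthorBatchLoader {
        private final Map<Long, CompletableFuture<String>> pending = new HashMap<>();

        // Called once per resolved "author" field; no immediate DB hit.
        public synchronized CompletableFuture<String> load(long authorId) {
            return pending.computeIfAbsent(authorId, id -> new CompletableFuture<>());
        }

        // Called once after the query's fields are collected: a single
        // batched fetch replaces one query per field (the N+1 problem).
        public synchronized void dispatch() {
            if (pending.isEmpty()) return;
            List<Long> ids = new ArrayList<>(pending.keySet());
            Map<Long, String> rows = fetchAuthorsByIds(ids);
            for (Long id : ids) {
                pending.remove(id).complete(rows.get(id));
            }
        }

        // Stand-in for e.g. SELECT id, name FROM authors WHERE id IN (...)
        private Map<Long, String> fetchAuthorsByIds(List<Long> ids) {
            Map<Long, String> out = new HashMap<>();
            for (Long id : ids) out.put(id, "author-" + id);
            return out;
        }
    }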
I preordered Starlink in March of 2021, as soon as my first rural ISP reneged on 50 megabits and sold me 5. Two years later, in western Virginia, I still had no ETA for my order and couldn't contact anyone about it. I have 12 megabits from that same local ISP and it's enough to do my job, so I canceled Starlink. I can't imagine what their customer service must be like if being on preorder for two years gets you no update other than kicking the can down the road every time the estimated delivery date gets close. When I canceled, my estimate was "early 2024".
I think Starlink is deliberately not rolling out any more locations in the USA until their dispute with the FCC about the rural broadband money is sorted out.
They don't want to deploy early, because then they wouldn't be eligible for the funding which is supposed to help them deploy.
Notice that areas just over the Canada and Mexico borders have good service available today - suggesting bandwidth isn't the constraint.
It was $1B that they were expected to get... That's substantially more than the revenue they were going to collect from all their US-based users in a whole year!
His companies are quintessential products of the money printer and low interest rates. Musk has said as much when he blames his performance on the Fed rate rises. For a luxury car company, of all things.
I mean, they kind of say it in there. It's an early emergent, it has broad leaves, and it grows in dense mats, which inhibits native species from emerging. The native species are better suited to pollinators and other insects. They're also better at holding the soil together (so removing garlic mustard does in fact improve erosion control).
In my own exploration on my land, I've noticed that dead wood and leaves break down much slower where garlic mustard grows. This could be a lack of insects, or it could be something allelopathic against fungal growth that garlic mustard leaves in the soil. In any case, the stuff is definitely detrimental to the land around it.
As for edibility, the young greens are honestly some of my favorites. They work really well in spicy, meat-heavy stir-fries and provide fresh greens at a time when there aren't a lot of fresh veggies about except for tough old kale and grocery store spinach.
Right, but like -- how do we know that's bad? Could it be that covering the ground is actually _better_ for certain species to thrive, with a net positive effect on the ecosystem? I don't know, but it's not clear from the passing mention that garlic mustard is out-competing other species.
The idea that we could return to a world where there are only native species in an ecosystem is totally insane. I'm not suggesting that we should let all native species die off, or be left to fight it out for themselves without human intervention. But focusing on presence of native species as an indicator for ecosystem health doesn't seem to be a helpful measure.
Feels like a prime opportunity for a National Park. Restore it to the point where it won't degrade further, add a museum, planetarium, and library, and preserve the land around the telescope. It'd be really neat! I'd take my kids, to be sure.
I think (at least last time I looked into it; it may have changed since then) the problem was that some of the scientific equipment is heavy and suspended by cables which are degrading over time. It isn't like a battleship, for example, which degrades into a battleship-shaped hunk of steel that can be easily restored into a museum ship -- cables under tension fail catastrophically.
I would love a game that melds Witcher-style open-world "ronin" play with a tactics-style game like Fire Emblem. Sort of a wandering-knight-in-a-broader-war thing. The tactics-style battles you join (or don't), and the way they go, matter to the line of main and side quests and to the condition or existence of characters you'd meet along the way. A different way to implement the whole "choices matter" mechanic that you get from Witcher.
I might go to a town, fail to take a side in the battle that town is locked in, and when I come back the town is laid waste. The quests that would be available to me in town are no longer there, but I might have errands where I end up searching the woods for refugees instead.
Something like that'd have a lot of replay value, because what you do changes the game you're playing over time.
There's some of that in Battle Brothers, with the player leading a band of mercenaries and taking part in turn-based battles, with an open-world map that can change (a little) as you play.