I really like the idea of this library and I've bookmarked it for future use, but following on from your point, it does feel disingenuous to make this promise in the introduction:
> (Different from git: no merge conflicts to resolve!)
As you've pointed out, there _can_ be merge conflicts, they're just resolved arbitrarily. In theory, git could do this too but obviously nobody would use it then!
CRDTs themselves are inherently conflict-free, but if the problem you're solving is not, implementing it with CRDTs is not the silver bullet you're looking for.
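To make that concrete, here's a minimal toy sketch of a last-writer-wins register, one of the simplest CRDTs. This is my own illustration, not this library's API:

```python
# Toy last-writer-wins (LWW) register -- my own illustration, not this
# library's API. Two replicas write concurrently; the merge never
# "conflicts", but one write silently loses.
from dataclasses import dataclass

@dataclass
class LWWRegister:
    value: str
    timestamp: int
    replica_id: str  # tie-breaker so merges are deterministic

    def merge(self, other: "LWWRegister") -> "LWWRegister":
        # Keep the "later" write; ties broken arbitrarily by replica id.
        if (other.timestamp, other.replica_id) > (self.timestamp, self.replica_id):
            return other
        return self

a = LWWRegister("meeting at 9", timestamp=5, replica_id="alice")
b = LWWRegister("meeting at 10", timestamp=5, replica_id="bob")
print(a.merge(b).value)  # "meeting at 10" -- bob wins on the tie-break
```

The merge is deterministic and conflict-free in the CRDT sense, but Alice's edit just vanishes: "no merge conflicts" really means "conflicts are resolved for you, arbitrarily".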
Respectfully, I think you're both right. Certainly recumbent bikes are aerodynamically efficient, but it's worth considering that in a bike race one also needs to ride uphill, and the instability of a recumbent at slow speeds would be a disadvantage here. I haven't checked, but I suspect most hill climb records would be held by traditional bike frames.
I don't think recumbents (at least high-racers) are necessarily unstable at slow speeds. My understanding was that the fundamental problem with recumbents in racing is that, going uphill, aerodynamics matters less. What really matters is applying torque efficiently. And humans are designed to apply torque vertically: we're good at it. So while on the straightaways and downhill recumbents cannot be beaten, they are poor at climbing.
What's frustrating is that we'll never get to see serious comparisons of the two, or of interesting combinations of them, in real races because UCI banned recumbents for the worst of reasons a hundred years ago.
He's got over 40 projects in total there, and each one of those is deep --- they would either take a very long time or be impossible for the average programmer. In contrast, many other programmers who claim to have done over a dozen different projects turn out to mean "I glued several libraries together" repeated many times; very much the opposite of Bellard.
In fact, I suspect one of the reasons he is so productive is because he shuns all social media.
He has a drive that most people can only dream of.
I have a bunch of projects - drones, 3D printers, computer clusters, a bunch of programs I want to make. I have all the tools; I already own everything I mentioned.
But I suffer from depression/other stuff, and I haven't touched them in months. Unlike this guy, who is presumably pounding away at his keyboard for hours every day.
One question I couldn't see answered in the FAQ is what happens if one of the electric skates breaks down?
Given each of the skates is independently driven, the chance of failure would be magnified by N, for N skates within a tunnel. Seems like failures could be quite common and would affect the entire tunnel?
Perhaps it's not, but my gut feel is that those other systems mitigate failure by using a smaller number of large, reliable and expensive carriages. Whereas this system is going for a large number of smaller, cheaper(?) skates. It just feels like it's more susceptible to failure?
As someone who gets the train a lot (Brit) - the big expensive ones have to stop due to issues very frequently because of this head-of-line blocking issue. I'd say ~1/4 of the trips I take have a delay of some sort for this reason. (I am trying to make some effort to account for the observation bias of it being f*cking annoying.)
Yes, I know that. That now means you need a way for the skates to communicate with each other and maintain a safe distance so they don't collide, which obviously means slower speeds. That bare concrete tunnel is no longer bare, since you will need communication access points, fibre cables for redundancy, etc.
Sensors in the tunnel and the skates, that communicate with each other? It's not rocket science, that's how it's worked for years with trains and subways and even car tunnels have continuous monitoring and signage to alert other drivers.
Because the failure rate per passengers transported is much much lower in a traditional transportation system.
All the passengers they plan to move per day (and a bit more), across 187 trips, could be moved in one minute by just two trains in an underground system.
So you have to multiply the risk by the extra trips needed to move the same number of people.
A ratio of ~95 seems quite a lot to me.
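Spelling out that arithmetic (the trip counts are my rough estimates from the published figures, not official numbers):

```python
# Back-of-the-envelope from the figures above -- rough estimates, not
# official numbers: daily ridership needs ~187 skate trips, versus two
# subway trains moving the same people in one minute.
skate_trips = 187
train_trips = 2

ratio = skate_trips / train_trips
print(f"~{round(ratio)}x more trips")  # ~94x: each trip is one more
# opportunity for a vehicle failure that blocks the whole tunnel.
```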
For the most common types of failure that aren't safety issues, I imagine the skate behind the broken one could push it (of course at a much slower speed).
It's more of a guess than a conclusion. Because the tunnel is linear, the failure of a single skate affects the entire line, so running, say, 100 skates concurrently would make overall failure 100 times more likely.
One option is to make the skates incredibly fault-tolerant, but that seems to go against the simple / cheap ethos that this system seems to be going for.
Just to make this more concrete: if a skate platform breaks down just once every 10 years, then putting 100 skates in the tunnel will see a breakdown about once a month or so.
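A quick sketch of that arithmetic (the 10-year figure is an assumption for illustration, not a real spec):

```python
# Toy reliability model -- the 10-year MTBF is an assumed figure, not a
# real spec. With N independent skates, the expected time between
# failures anywhere in the fleet shrinks by a factor of N.
mtbf_one_skate_days = 10 * 365   # assumed mean time between failures per skate
fleet_size = 100

fleet_mtbf_days = mtbf_one_skate_days / fleet_size
print(f"~{fleet_mtbf_days:.1f} days between fleet failures")  # ~36.5 days, i.e. roughly monthly
```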
If you have a flat failure curve, sure. If you have the more traditional bathtub curve, after the initial problems, failure rates should be much lower.
I can't help but have similar feelings about this, and I think your question about whether "the community has an issue with it" is especially pertinent.
Many journalists and internet pundits suggest that Magic Leap's technology was obviously over-hyped; see, for example, the title of The Verge's article. And, as you've pointed out, they used faked videos to attract software developers, which adds further evidence that it was too good to be true.
So how is it that Google, Alibaba, and Andreessen Horowitz were convinced to hand over hundreds of millions of dollars each to this company? Was it fraud, over-promising on behalf of Magic Leap's founders, or could these huge VC firms not see what everyone else could see? Is it really that easy to secure a billion dollars in funding? The question as to "what's going on here" relates as much to the startup / VC community as it does to Magic Leap.
As a software engineer who works across multiple platforms, I've used Amazon Workspaces as my primary Windows dev environment for the past 18 months. In general the experience is great: the input response and 2D graphics are superior to an RDP session. It's easy to forget you're working remotely. Another nice bonus is longer battery life relative to running a virtual machine on your laptop.
But the real limitation is the tiny C: partition. It's fixed at 60 GB, nearly half of which is consumed by Windows and its accumulating updates, which generally leaves you with insufficient space to install Visual Studio with the Xamarin tools.
The limitation has been noted in the AWS Developer forums, but unfortunately the 60 GB limitation seems hard-wired into the platform for now[1]. It's painful enough that I searched around for alternatives, but I couldn't find any direct competitors! I'd be 100% happy with this if I didn't have to run up against this 60 GB limit all the time. Just a heads-up for any engineers considering this for a Windows development environment.