Frontier AI models and coding agents are contributing to this calcification.
My preferred stack is SvelteKit, and I just maintain a markdown file of all the context needed to steer the AI towards the happy path with Svelte 5 runes, the Svelte flavor, etc.
The biggest bottlenecks are raw ingredients, power, and factories. Once the automated manufacturing flywheel gets started, units can be produced very rapidly. Specialized machines produce low-level components, while more generalized machines assemble higher-level components as well as products like themselves and other robots.
People don't factor in a human's total compensation; they only look at the hourly wage.
Machines don't need as much breathing room as humans.
Humanoid robots are notably worse than humans in many aspects that impact productivity. If a humanoid robot is ultimately 33% as productive as a worker in a developing country who gets a wage of $10k USD annually and works 8 hours per day every day, then the robot has to cost less than $10k annually all in to be a good replacement. Assuming a 5-year useful lifespan and $2k in maintenance per year, the robot needs to cost ~$40k or less before it can replace a human's productivity. And that is inclusive of training and setup, and I doubt we'll have robots that are capable of learning as quickly as average humans without dedicated specialists training them... which raises the cost.
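For concreteness, here is a back-of-envelope sketch of that math in Python. Every figure is an assumption taken from this comment and the follow-up about round-the-clock operation (33% productivity, ~24-hour operation vs. an 8-hour human shift, $10k wage, $2k/year maintenance, 5-year lifespan); none of it is measured data.

```python
# Back-of-envelope sketch; all figures below are the comment's assumptions.
human_wage = 10_000           # USD/year for the human worker
robot_productivity = 0.33     # robot output per hour vs. the human
robot_hours_vs_human = 3      # ~24h operation vs. an 8h human shift
lifespan_years = 5
maintenance_per_year = 2_000  # USD/year

# Running ~3x the hours at ~1/3 the hourly output roughly matches one human,
# so the robot's all-in annual cost must stay under the human's wage.
relative_output = robot_productivity * robot_hours_vs_human    # ~1.0
annual_budget = human_wage * relative_output                   # ~10k USD/year
annual_capital_budget = annual_budget - maintenance_per_year   # ~8k USD/year
max_purchase_price = annual_capital_budget * lifespan_years    # ~40k USD
print(round(max_purchase_price))  # ~40000, inclusive of training and setup
```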
In general, you can get a dedicated machine for most human tasks that easily delivers 10-1000x the productivity if you have a few million in capital. There are tasks on the margin where human flexibility and dexterity matter, and a human operating a $10k sewing machine is going to be very, very hard to replace.
Can't machines work a 7-day, 24-hour schedule? That said, humanoid robots strike me as a jack-of-all-trades tool: our environment is full of things that are optimized for human-sized and -shaped users, but if you can purpose-build your robot for a factory, it's going to be more efficient at a narrow set of tasks there.
That is the humanoid robot use case, with time allotted for charging, maintenance, and being offline during repair. This is just a rough estimate of amortizing those costs and comparing them against a 7-day work week.
Isn't the biggest bottleneck just that they need to adequately and reliably be able to do useful work at a better price-performance ratio than a human?
The biggest bottlenecks are hardware design and software design. Materials science to an extent, particularly battery materials, but we could build robots with currently-available materials and power density if only we knew how to make them work usefully enough.
I'm not against the concept and I agree the manufacturing can be scaled. There just isn't a product yet.
And it is a dumb take. As with any new technology, it has a chicken-and-egg problem to overcome. Humanoid robots are developing very rapidly now that AI is progressing the way it is. It is in the same vein as "640K ought to be enough for anybody."
This generalizes to a whole new category of tools: UX which requires more thought and skill, but is way more powerful. Human devs are mostly too lazy to use them, but LLMs will put in the work.
> UX which requires more thought and skill, but is way more powerful. Human devs are mostly too lazy to use them
Really? My thinking is more that human devs are way too likely to sink time into powerful but complex tools that may end up being a yak shave with minimal/no benefit in the end. "too lazy to use" doesn't seem like a common problem from what I've seen.
Not that the speed of an agent being able to experiment with this kind of thing isn't a benefit... but not how I would have thought to pose it.
> If you lose $100k on cryptocurrency, or you spend $800 on some metaverse thing, it's fine, you can still buy food.
The top 20-30% cannot spend $100k per year as funny money; that is more like the top 5%, and probably more like the top 2%.
High earners in the top 20-30% often also live in expensive areas, so their dollar doesn't go as far as it would in some of the more affordable places to live (housing, food, etc).
The parent comment just shows how most people do not understand how unequal the US actually is. I'd recommend trying this: https://wid.world/income-comparator/US/
If you make a yearly gross salary of 100K you're already in the top 11%. With 200K you're in the top 3%. Inequality also leads to social segregation, which means that we live in bubbles where most people are "like us" and it's very difficult to see how privileged we actually are.
Read Piketty's "Capital in the Twenty-First Century" to learn more about how crazily unequal the world, and especially the US, is becoming. Phenomena like Trump are easier to understand when taking this into account.
It’s like when our vendor’s tech support number tells you you’re 300th in queue (or busy cuz their lines are saturated) instead of the usual 2nd or 3rd.
I broadly agree with what you're saying, but that's not the issue here.
They don't even have a dedicated status/outage page, afaik.
The website being down is a more classic problem. The outage probably increased traffic to their website by 1000x if not more and the infrastructure for the website simply couldn't cope.
Good lesson on keeping your status infrastructure simple and on something which is highly scalable.
Having a CDN where the main page of their site was 99% cached globally would have probably mitigated this issue.
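As a rough illustration of what "simple and highly scalable" can mean in practice, here is a minimal sketch using Python's standard library as the origin behind a CDN. The file name and header values are illustrative assumptions, not anything Starlink actually runs.

```python
# Minimal sketch of a static status page origin sitting behind a CDN.
# Assumptions: a status.html file lives in the working directory and whatever
# CDN is in front honors Cache-Control; this is not Starlink's real setup.
from http.server import HTTPServer, SimpleHTTPRequestHandler

class CachedStatusHandler(SimpleHTTPRequestHandler):
    def end_headers(self):
        # Short TTL so updates propagate, plus stale-if-error so the CDN can
        # keep serving the last good copy even if the origin itself goes down.
        self.send_header("Cache-Control", "public, max-age=60, stale-if-error=86400")
        super().end_headers()

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), CachedStatusHandler).serve_forever()
```

The point is just that a near-fully-cached page lets the CDN, not the origin, absorb a 1000x traffic spike.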
How would you host a status page on Starlink? Are there web servers that are only connected to a satellite connection in house or something? Or is it just speculation that that's how their marketing website works?
Starlink has servers for the website and subscriber sites. Importantly, they have servers for controlling the satellite network. They also have multiple gateways for satellites to connect to the network.
A problem in their network would take down the sites, and maybe the control plane for satellites.
Starlink still has infrastructure outside the satellites to run normal company stuff. If they screwed up something in their core routing system, or DNS, etc it could affect everything.
True. It isn't literally present as that sentence in Understanding Media: The Extensions of Man (1964), but it is a summarization. Amputation is mentioned 15 times and augmentation twice.
The concept that "every augmentation is an amputation" is best captured in Chapter 4, "THE GADGET LOVER: Narcissus as Narcosis." The chapter explains that any extension of ourselves is a form of "autoamputation" that numbs our senses.
Technology as "Autoamputation": The text introduces research that regards all extensions of ourselves as attempts by the body to maintain equilibrium against irritation. This process is described as a kind of self-amputation. The central nervous system protects itself from overstimulation by isolating or "amputating" the offending function. This theory explains "why man is impelled to extend various parts of his body by a kind of autoamputation".
The Wheel as an Example: The book uses the wheel as an example of this process. The pressure of new burdens led to the extension, or "amputation," of the foot from the body into the form of the wheel. This amplification of a single function is made bearable only through a "numbness or blocking of perception".
It is not a silly question. The various flavors of LLM have issues with reliability. In software we expect five 9s; LLMs aren't even at one 9.
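For reference, "N nines" translates into allowed failure rates roughly like this (a quick illustrative sketch; the comparison to LLM error rates is the comment's own claim):

```python
# Map "N nines" of reliability to the failure rate it permits.
for nines in range(1, 6):
    success_rate = 1 - 10 ** -nines
    print(f"{nines} nine(s): {success_rate:.3%} success, {10 ** -nines:.3%} failures allowed")
```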
Early on it was reliability of them writing JSON output. Then instruction following. Then tool use. Now it's "computer use" and orchestration.
Creating models for this specific problem domain will have a better chance at reliability, which is not a solved problem.
Jules is the Gemini coder that links to GitHub. Half the time it doesn't create a pull request, and it forgets and assumes I'll do some testing or something. It's wild.
It’s nothing special. Not in the realm of anything technically outstanding. I just stated that to emphasize that it’s a slightly bigger project than the default single-dev SaaS projects which are just a single wrapper. We have workers, multiple white-labeled applications sharing a common infrastructure, data scraping modules, AI-powered services, and email processing pipelines.
I’ve had an impossible learning curve over the last year, but even though I should be biased toward vibe coding, I still use less AI now to make sure the result is more consistent.
I think the two camps are different in terms of skill honestly, but also in terms of needs. Of course you are faster vibe-coding a front-end than writing the code manually, but building a robust backend/processing system is a different kind of tier.
So instead of picking a side, it’s usually best to stay as unbiased as possible and choose the right tool for the task.