ram_rattle's comments

I personally think that, apart from GPUs and compute for the intelligence side, there are still a lot of things to crack before meaningful robotics takes off: better batteries, better affordable sensors, microelectronics, etc. I'm pretty sure we will get there, but I don't think one company can do it.


Better batteries aren't really an issue for factories. Same with sensor cost, if you're saving the cost of employing a human, especially for dangerous work.


True - and of course factories don't mind if a robot costs $40,000 if the payback time is right.

But factory robots haven't propelled Kuka, Fanuc, ABB, UR, Staubli and peers to anything like the levels of success nvidia is already at. A market big enough to accommodate several profitable companies with market caps in the tens of billions might not drive much growth for a company with a trillion-dollar market cap.

nvidia has several irons in the fire here. Industrial robot? Self-driving car? Creepy humanoid robots? Experimental academic robots? Whatever your needs are, nvidia is ready with a GPU, some software, and some tutorials on the basics.


> But factory robots haven't propelled Kuka, Fanuc, ABB, UR, Staubli and peers to anything like the levels of success nvidia is already at. A market big enough to accommodate several profitable companies with market caps in the tens of billions might not drive much growth for a company with a trillion-dollar market cap.

That's because the past year of robotics advancements (e.g. https://www.physicalintelligence.company/blog/pi0, https://arxiv.org/abs/2412.13196) has been driven by advances in machine learning and multimodal foundation models. There has been very little change in the actual electronics and mechanical engineering of robotics. So it's no surprise that the traditional hardware leaders like Kuka and ABB are not seeing massive gains so far. I suspect they might get the Tesla treatment soon when the Chinese competitors like unitree start muscling into the humanoid robotics space.

Robotics advancements are now AI driven and software defined. It turned out that adding a camera and tying a big foundation model to a traditional robot is all you need. Wall-E is now experiencing the ImageNet moment.


> There has been very little change in the actual electronics and mechanical engineering of robotics. So it's no surprise that the traditional hardware leaders like Kuka and ABB are not seeing massive gains so far.

Perhaps I wasn't explicit enough about the argument I was trying to make.

Revenue in business is about selling price multiplied by sales volumes, and I'm not sure factory robot sales volumes are big enough to 'drive future growth' for nvidia.

According to [1] there were 553,000 robots installed in factories in 2023. Even if every single one of those half a million robots needed a $2,000 GPU, that's 553,000 × $2,000 ≈ $1.1 billion in revenue. Meanwhile nvidia had revenue of $26 billion in 2023 and $61 billion in 2024.

Many of those robots will be doing basic, routine things that don't need complex vision systems. And 54% of those half a million robot arms were sold in China - sanctions [2] mean nvidia can't export even the 4090 to China, let alone anything more expensive. Machine vision models are considered 'huge' if they reach half a gigabyte - industrial robots might not need the huge GPUs that LLMs call for.

So it's not clear nvidia can increase the price per GPU to compensate for the limited sales volumes.

If nvidia wants robotics to 'drive future growth' they need a bigger market than just factory automation.

[1] https://ifr.org/img/worldrobotics/2023_WR_extended_version.p... [2] https://www.theregister.com/2023/10/19/china_biden_ai/


You are forgetting that "traditional" factory robots are the way they are because of software limitations. Now that foundation models have mostly solved basic robotic limitations, there's going to be a lot more automation (and job layoffs). Traditional factory robotics are dumb and mostly static: mostly robotic arms or other types of conveyor-belt-centric automation. The new generation of VLM-enabled robots offers near-human levels of flexibility. Actual android-type robotics will massively increase demand for GPUs, and this is not even accounting for non-heavy-industry use cases in the service industry, e.g. cleaning toilets or folding clothing at a hotel. These are already being done by telepresence; full AI automation is just the next step. Here's an example from a quick google:

https://www.reddit.com/r/interestingasfuck/comments/1h1i1z1/...


Factories don’t mind if the robot costs $4,000,000 or even $40,000,000. I really don’t think people understand how much industrial robots from the likes of KUKA cost…


I agree that you can get to some big cost figures if you're talking about a full work cell with multiple robots, conveyors, end effectors, fancy sensors, high-tech safety systems, and staff costs.

But if you're just buying the arm itself? There are quality robot arms, like the €38,928 UR10e [1], that are within reach of SMEs. No multi-million-dollar budget required.

[1] https://shop.wiredworkers.io/en_GB/shop/universal-robots-ur1...


It seems such costs would become prohibitive quite quickly? Stuff with moving parts breaks, and I'd expect ongoing maintenance costs to be proportional to the unit cost. Factor in that most factories run on thin margins but massive volume, and it would seem cost is very much an issue.


I think it's more about how much of that $40m goes to nvidia, and how many units you can deploy?


It’s hard to say while we’re still waiting for the first real household robot. But a car-priced ($60k?) housekeeper bot would be very popular.

And those duties can be achieved with today’s mechanics — they just need good control, which is now seeing ferocious progress


I was working at a telecom research company where the director looked at me in disbelief that hacks could actually happen in telecom. His eyes went wide when I showed him a few small hacks. Wonder what he is thinking now, lol.


I don't understand why they would not ack or fix this. Such a shame that Google has to do all the hard work to keep their ecosystem's shit together.


Qualcomm has fixed all but one issue.


> I don't understand why would they not ack or fix this

Because they have (three letter) customers. /s


It is not that. It is just that the managers of the particular groups do not care about their code quality.

I made the simple mistake of running code through some simple linters and things like valgrind/boundschecker/purify. Apparently not wanting my code to crash was some sort of political nightmare. I had to involve several higher-level managers for anyone to care. The other groups got seriously mad at me for daring to look at their code. Learned my lesson: do not do that, just fix their crap, don't say anything, and work with your own branch of their borked stuff. I even had the issue on my own team sometimes. People would 'take ownership' of something and you had better not dare touch their code. It usually was not too hard to find the same mistake hundreds of times in a particular code base: they made the mistake once, then copy-pasted it everywhere.
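
For what it's worth, the kind of defect those tools flag on the first run is usually this mundane. A purely hypothetical example (not code from any actual team) of the sort of thing that gets copy-pasted into hundreds of call sites:

    /* Hypothetical example of a bug a memory checker catches immediately:
     * the allocation forgets room for the terminating NUL, so strcpy writes
     * one byte past the end of the buffer. Copy-paste it around enough and
     * one mistake becomes hundreds of findings. */
    #include <stdlib.h>
    #include <string.h>

    static char *dup_name(const char *name) {
        char *copy = malloc(strlen(name));  /* BUG: should be strlen(name) + 1 */
        strcpy(copy, name);                 /* off-by-one heap overflow */
        return copy;
    }

    int main(void) {
        free(dup_name("ram_rattle"));
        return 0;
    }

Run under valgrind, that strcpy shows up straight away as an invalid write.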

There are teams in qcom that 'get it'. There are also golden calf teams where you just do not even think about looking at their code.

It is not TLAs, it is a built-in kingdom problem systemic to the way qcom runs itself.


> It is not TLAs, it is a built-in kingdom problem systemic to the way qcom runs itself

Yeah, it was one of the reasons I left. Because the chip is divided into many different processors, and broadly speaking the more area your processor takes the more your organization matters, there is this constant infighting for expanding the domain of your processor to capture an ever increasing piece of the pie.

This is of course an oversimplification. There are plenty of people just trying to build the best product they can and cooperating with other teams, but at the VP level and above it's a dog-eat-dog world.

At NVidia my role was much smaller, but what I saw was much more cooperative, and I think a big part of it is that the GPU is at the end of the day one giant chip that does everything and all the pieces work together like an orchestra. I even saw better communication between the software and hardware folks, which is always a challenge in the semiconductor industry.


I heard about strcpys (a secure strcpy variant) being #defined back to plain strcpy.
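
A minimal sketch of what that trick looks like, assuming (hypothetically) that strcpys is a bounds-checked variant with a strcpy_s-style signature that takes the destination size:

    /* Hypothetical reconstruction of the trick described above -- not actual
     * qcom code. The macro throws the destination size away and calls plain
     * strcpy anyway. */
    #include <stdio.h>
    #include <string.h>

    #define strcpys(dst, dstsz, src) strcpy((dst), (src)) /* size argument silently dropped */

    int main(void) {
        char buf[8];
        strcpys(buf, sizeof buf, "hi");  /* compiles and "works" today... */
        /* ...but nothing stops a longer source string from overflowing buf
         * tomorrow, which is exactly what the secure variant was meant to
         * prevent. */
        printf("%s\n", buf);
        return 0;
    }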


Well at least it compiles again. sigh....


I can just imagine the person writing that saying “Hah, think you made strcpy secure? Hold my beer…”


This has been supported by QC since 2017, I believe.


What is QC referring to here? I’d appreciate a quick one-liner, thanks!


Probably Qualcomm, who makes Atheros chips


Something similar was published by Samsung, but it's sad that they are not as agile as Apple in this area:

https://research.samsung.com/blog/The-Next-New-Normal-in-Com...


This doesn't look to be the same. Apple's talking about performing computation in their cloud in a secure, privacy-preserving fashion. Samsung's paper seems to be just on local enclaves (which Apple's also been doing since iPhone 5S in the form of the Secure Enclave Processor (SEP)).


Re-read the article; it's exactly the same idea, with open source:

"From a computational perspective, a trusted end-to-end data path will be established, ensuring the security of the entire process. This encompasses data collection on user devices and data processing on servers as well as the processing of privacy-sensitive data directly on user devices."


really cool tool, thanks for building this.


Author here - thank you!


I do not understand this, can you please explain?


Probably just a typical cat-and-mouse game. Some crawlers already support React-based websites, for example, so they can render the content and crawl based on that. I believe crawlers do not yet execute WASM code. But in time, they will.


Helium 5G in the USA is doing exactly that, powered by CBRS.


Naive question: wouldn't something like GitHub's secret scanning catch this?


No, in my YAML example you could see that there were no credentials directly hard-coded into the pipeline. The credentials are configured separately, and the pipelines are free to use them to do whatever actions they want.

This is how all major players in the market recommend you set up your CI pipeline. The problem here lies in implicit trust of the pipeline configuration which is stored along with the code.


Even with secrets, if the CI/CD machine can talk to the internet, you could just broadcast the secrets to wherever (assuming you can edit the YAML and trigger the CI/CD workflow).

I was thinking maybe a better approach, instead of having CI/CD SSH into the prod machine, is to have the prod machine just listen for changes in git.


It was deployed using a Bitbucket pipeline, which does have a secret scanner available. However, the scanner would need to be manually configured to be fully effective.


Looks like a neat idea. Does anyone know of an open source version that does just this?

