Low Cost Robot Arm (github.com/alexanderkoch-koch)
489 points by pbrowne011 7 months ago | 249 comments



I started working on a similarly sized arm. I've got a use-case, long time friends with a glass blower. I was thinking of using it to make faceted glass pendants. They've got a faceting machine but it is manually operated.

The hard part is repeatability. You need tight tolerances and each joint in the arm adds inaccuracy the further you get from the base. If the base has 1mm of wiggle, the 20cm arm has 4mm wiggle at the end, and the arm beyond it has even more.
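
To put rough numbers on that (made-up joint play and link lengths, not the parent's figures), a small angular wiggle at each joint turns into tool-tip error proportional to the distance from that joint to the tip:

    import math

    # Back-of-the-envelope sketch with made-up numbers: a little angular play
    # at each joint becomes tool-tip error scaled by the distance to the tip.
    joint_play_deg = [0.3, 0.3, 0.3]      # wiggle at each joint, degrees
    dist_to_tip_m = [0.40, 0.25, 0.10]    # distance from each joint to the tool, metres

    worst_case_mm = sum(math.radians(p) * d * 1000
                        for p, d in zip(joint_play_deg, dist_to_tip_m))
    print(f"worst-case tool deviation ~{worst_case_mm:.1f} mm")
    # 0.3 degrees over 0.4 m is already ~2 mm on its own, and the joints stack.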

You also, for faceting purposes, need much finer resolution than an ungeared servo will have. Gearing it is tricky because you want the gears meshed tightly to keep backlash out of the joint, but not so tightly that there's high friction when moving. You don't really want to use a worm gear because they're both slow and overly rigid. So a cycloidal gear is the best bet for the gears in the arm. You also need real servos with some amount of feedback because grabbing at glass is sketchy at best.

I was estimating 1-2k build cost, bulk of that is in the gearboxes.


One thing that is amazing about industrial robots is how rigid they are at standstill. Braking systems start to become a challenge too at high speeds and loads.

And once you manage to get the hardware working, getting a kinematic solver to really work is a massive challenge. Tons of edge cases, real-time feedback to handle and the need to balance usability with reliability. That's where robot companies charge a lot, and rightfully so.

Whenever you can avoid building a robot arm and replace it with simpler kinematics, you should. Hats off to you if you build that thing!


Why aren't all these applications that have only recently been examined for automation absolutely flush with cheaper lower axis count robotics, like SCARA or parallel manipulators?


I've done similar projects in the past (robot arms pushing performance limits in the few thousand $ range), and I found pretty good results with stepper motors, and gearboxes with sufficiently low backlash. For reference, these designs got to approx 1mm repeatability with 2.5kg payload at ~80cm reach, meant to model a human arm somewhat.

Here's some specifics if you're interested. Depending on the end effector payload requirements, a mix of NEMA 34, 24 and 17 can do this (bigger ones for earlier joints). You can go cycloidal/harmonic gears if you have the budget; otherwise each actuator (motor + driver + gearbox + shaft coupling) would run you something like $100-$200 depending heavily on supplier and exact requirements (+$50 or so for closed-loop systems). So not terrible on the price front. Then for the base joint you'd want some wider cylindrical gearbox that distributes the load better.

If you're able to work with a machine shop I think you can put together something really high quality. Here's some example design inspirations, some of them even better than what I was able to put together as a hobbyist:

https://www.youtube.com/watch?v=7z6rZdYHYfc (this one is fantastic; a smaller and lighter version operated more slowly would have even less wiggle from the base)

https://www.youtube.com/shorts/II8gdIXPgaE (this is more comparable to the OP)

https://www.youtube.com/shorts/_x7P9eZCkVM

https://www.youtube.com/watch?v=g9AfhqOd-_I (most professional one I've seen, and almost certainly this BOM would be under $3k, probably under $1k in China. In fact I'll go ahead and email these guys since this is so cool and I wonder if they sell smaller models)

https://www.youtube.com/watch?v=iB2NAgfVjIs (definitely check out Chris Annin, American roboticist who imo makes some of the best open source low cost stepper motor robots)


> The hard part is repeatability. You need tight tolerances and each joint in the arm adds inaccuracy the further you get from the base. If the base has 1mm of wiggle, the 20cm arm has 4mm wiggle at the end, and the arm beyond it has even more.

Could this be solved by software instead of expensive hardware?

Some idea I had a while ago was to build an arm out of cheap, "wobbly" components for the large-scale movements, but then add some stages at the end that have a small movement range, but can be controlled very precisely.

Finally, add a way to track the deviation of the tool's actual position from the desired position very precisely, maybe with a tool-mounted camera.

Then you could have a feedback loop in software which tracks the tool's deviation from the desired position and uses the "corrective" stages at the end to counteract it.

I'm not sure if this would work, however.

(There is also the question how long the "counteracting" would take. It's one thing to "eventually" arrive at the desired position at the end of the path - e.g. for pick-and-place - and another to stay below some maximum deviation for the entire path, e.g. for etching or welding.)
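
For what it's worth, here's a toy Python simulation of that correction loop; the gains, tolerance and positions are all made-up numbers:

    # The coarse "wobbly" arm lands off-target; a fine corrective stage then
    # closes the camera-measured error with a simple proportional loop.
    target = [100.0, 50.0]        # desired tool position, mm
    tool = [101.8, 48.9]          # where the wobbly arm actually ended up

    KP, TOL = 0.6, 0.05           # proportional gain, stop tolerance (mm)
    for step in range(50):
        error = [tool[i] - target[i] for i in range(2)]   # the "camera" measurement
        if max(abs(e) for e in error) < TOL:
            break
        # fine stage removes a fraction of the measured error each cycle
        tool = [tool[i] - KP * error[i] for i in range(2)]

    print(f"converged in {step} cycles, residual error (mm): {error}")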


I'm a software engineer myself so I don't know a lot about this, but there are a few patterns that are not far off of what you're describing. For instance:

https://en.wikipedia.org/wiki/Input_shaping

https://en.wikipedia.org/wiki/Proportional%E2%80%93integral%...


Isn't what you describe a PID-controller: https://en.wikipedia.org/wiki/Proportional%E2%80%93integral%...

AFAIK that is the common approach to solve this problem, but it still needs some degree of sensor accuracy?
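
For anyone who hasn't seen one, a textbook PID loop looks roughly like this (illustrative gains and a toy velocity-commanded plant, nothing tuned for a real arm):

    # Minimal PID sketch with made-up gains; the "plant" is just a 1-D joint
    # whose velocity is commanded directly.
    def pid_step(error, state, kp=2.0, ki=2.0, kd=0.1, dt=0.01):
        """One PID update; state carries (integral, previous_error)."""
        integral, prev_error = state
        integral += error * dt
        derivative = (error - prev_error) / dt
        return kp * error + ki * integral + kd * derivative, (integral, error)

    setpoint, position, state = 1.0, 0.0, (0.0, 0.0)
    for _ in range(500):
        u, state = pid_step(setpoint - position, state)
        position += u * 0.01      # crude plant: commanded velocity moves the joint
    print(f"final position ~{position:.3f} (target 1.0)")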


> Could this be solved by software instead of expensive hardware?

Yes.

Imagine a human putting a screw in a hole. You don't follow the "optimal" trajectory; you adapt it on the fly, often making several quick attempts.

Humans do it with a combination of vision, touch and planning.

Each of these is currently still a huge problem for AI, nowhere near human level.


But do you need AI? How about this for a screw-in-hole machine:

Assuming the hole is in the Z plane: a camera in the X plane observes the screw against a high-contrast background, and a camera in the Y plane does the same. The motors need not know their exact position, just be of controllable speed. As the screw gets close to alignment the motor speed is stepped down, and it stops when aligned. When both cameras report that it's in position, a motor in the Z plane pushes the screw towards the hole, stopping when a plunger next to the screw reports the correct depth.

If you have to be concerned with Z-axis alignment, you make the X and Y backgrounds striped; the screw's alignment is measured against those stripes and it's rotated accordingly.

This is how a human would handle it--we do not have anything like the motor precision to get the screw in the hole directly, but we can use our eyes to refine it without *needing* the motor precision. Reliably identifying the screw against the background is hard, but this approach doesn't require *identifying* anything. You're just tracking the bounding box of an object whose color is very different from the background.

If you have a large movement field and a high precision requirement you might need two cameras, the second with a much narrower field of view.
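
A toy one-axis version of that speed-stepping loop, with made-up pixel thresholds and speeds (the real thing would run one such loop per camera):

    # The simulated "camera" reports the screw's pixel offset from the hole;
    # the motor speed steps down as the offset shrinks, stopping when close enough.
    def speed_for(offset_px):
        a = abs(offset_px)
        if a > 50:
            return 1.0     # far away: full speed
        if a > 10:
            return 0.3
        if a > 2:
            return 0.05
        return 0.0         # aligned: stop

    screw_x, hole_x = 120.0, 300.0            # simulated camera-pixel positions
    while True:
        offset = screw_x - hole_x             # what the X-plane camera would report
        v = speed_for(offset)
        if v == 0.0:
            break
        screw_x += 3.0 * v * (1 if offset < 0 else -1)   # motor nudges the screw
    print(f"stopped at offset {screw_x - hole_x:.2f} px")
    # the Y axis runs the same loop with its own camera, then Z pushes until
    # the plunger reports the right depth.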


I'd consider it more of a fundamental problem - the lack of a way to introduce new data to a model without repeating training runs and waiting for the error to converge. Humans seem to understand things from a single one-shot exposure (this is perhaps due to our vast experience with the world and ability to run simulations in our head, but a subset of that should be possible for ML quite easily).

If you solve this problem, teaching a robot arm to be accurate should be pretty easy. You would just have stereoscopic cameras that map to a 3d world, and "program" in a trajectory of the object, and the model should use that trajectory to figure out where to move and how to compensate based on visual feedback.


No, I'm telling you, you're assuming way too much. The problems are lower-level.

> stereoscopic cameras that map to a 3d world

The current state of the art for this is completely atrocious.

Take a look at this very recent research: https://makezur.github.io/SuperPrimitive/

The idea that robots can "understand" the 3D world from vision is, right now, completely illusory.


I basically agree; I don't think you understood my comment.

If you look at transformers in LLMs, you have an input matrix, some math in the middle (all linear), and an output matrix. If you take a single value of the output matrix and write the algebraic expression for it, you will get something that looks like a linear layer transformation on the input.

So a transformer is simply a more efficient simplification of n connected layers, and thus is faster to train. But it's not applicable to all things.

For the following examples, let's say you hypothetically had cheap power with good infrastructure to deliver it, A100s that cost a dollar each, and the same budget as OpenAI.

First, you could train GPT models as just a shitload of fully connected, massively wide deep layers.

Secondly, you could also do 3d mapping quite easily with fully connected deep layers.

First you would train an Observer model to take images from 2 cameras and reconstruct a virtual 3D scene with an autoencoder/decoder, probably by generating photorealistic images with raytracing.

Then you would train a Predictor model to predict the physics in that 3d scene given a set of historical frames. Since compute is so cheap, you just have rng initialization of initial conditions with velocities and accelerations, and just run training until the huge model converges.

Then you would train a Controller model to move a robotic arm, with input being the start and final orientation, and output being the motion.

Then hook them all together. For every cycle in the robot controller, Controller sends commands to move along a path, robot moves, Observer computes the 3d scene, history of scenes is fed to Predictor that generates future position, which gets some error, and controller adjusts accordingly.

My point is, until we reach that point with power and hardware, there have to be these simplification discoveries like the transformer made along the way. One of which is how to one-shot or few-shot adjust parameters for a set of new data. If we can do that, we can basically fine-tune shitty models on specific data quite fast to make them behave well on a very limited data set.


Some research is actually done in this direction. In 2015, a paper showed how a robot can adapt to injuries like a broken leg [1,2].

[1]: https://www.nature.com/articles/nature14422

[2]: https://hal.science/hal-01158243


I think about this regularly but just don't have the time to pursue it:

Couldn't you build your arm in Nvidia Omniverse, add feedback like a cheap high-resolution distance or angle sensor, and train an ML model to compensate for it?


Making and animating a 3D graphics robot arm is trivial compared to building it in real life. So rather than Omniverse, you would want to use a proper simulator like Gazebo.

But beyond that, the kinematics as well as the force dynamics for controlling a serial manipulator are very well understood. So there aren't too many gains to be made by AI. It is difficult to implement in software due to some tricky situations about the nature of motion planning - discontinuities around orientation approaches in 6-DOF systems, for instance. But the widespread use of serial manipulators is proof that, although challenging, they are relatively solved. It is always interesting to watch an AI model or genetic algorithm do some path planning, but this is a pretty well-trodden area of research at this point.

Now, when you want a robot to walk and pick things up at the same time... that is when AI becomes something to consider in order to figure out how the dynamics should work.


Omniverse is a simulation platform specifically designed to do things like train/test robotics. It's not a creative engine like UE or Unity.


I'm not super familiar, but they say on the webpage that it is specifically for Universal Scene Description, which is formally for graphics. Although, after a quick google, it looks like they do have a simulation package which then runs on top of Omniverse (Isaac Sim?), so I guess that is Nvidia's robotics offering.

My general experience with other commercial offerings for simulation... is not great. In my experience, people usually end up migrating to Gazebo, but I have been away from the field for a while now so it could be different. It is probably a situation where Nvidia will have a few corporate clients that they prioritize, and you are on your own to get it set up if you aren't on that list. Pretty normal.


I meant to control single segments to compensate for build quality.

1 AI model per motor


But motor and motion control isn't exactly so mysterious that we need AI for it. Inductance in electric motors can have some odd effects in the acceleration domain, but it generally boils down to a second- to third-order differential equation. Even when linking multiple together in a serial manipulator, the math for modeling the motion output is really well understood. Maybe there are some gains to be had from implementing different drive trains in arbitrary circumstances and monitoring how they fail, and stuff like that. At that point you are really getting into the weeds of operations and maintenance more than actual motion control.
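
To make that "second to third order" point concrete, here's the classic brushed DC motor model integrated numerically - all constants are made-up round numbers, not for any particular motor:

    # Two coupled first-order equations (current and speed); tracking position
    # adds a third state. Constants are illustrative only.
    R, L = 1.0, 0.5e-3        # winding resistance (ohm) and inductance (H)
    Kt, Ke = 0.05, 0.05       # torque constant (Nm/A), back-EMF constant (V*s/rad)
    J, b = 1e-4, 1e-5         # rotor inertia (kg*m^2), viscous friction

    V, dt = 12.0, 1e-5        # step voltage and integration step
    i = w = theta = 0.0
    for _ in range(int(0.2 / dt)):         # simulate 0.2 s of spin-up
        di = (V - R * i - Ke * w) / L      # electrical equation
        dw = (Kt * i - b * w) / J          # mechanical equation
        i, w, theta = i + di * dt, w + dw * dt, theta + w * dt
    print(f"speed ~{w:.0f} rad/s after 0.2 s, position {theta:.1f} rad")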

The situation that turns into a very complex n-dimensional problem you would want AI to search through is the coordinated motion of multiple actuators to achieve a very complex output. Like picking up something of unknown weight, running with it up a steep hill, and waving it around while doing all this. We take it for granted as humans with brains that can perform all this stuff trivially, but it is extremely complex motion.


The problem this ML should solve is pricing: get cheaper components and compensate with ML.

Basically, increasing the precision of the arm by controlling the voltage much more precisely.


Well, that doesn't really work. There is only so much electrical efficiency you can crank out of these motors. Essentially, there is a relationship between the current you pump through them, which is limited by their thermal characteristics, and their inductance. So you are trading off building more inductive motors that are more powerful but less reactive, against current draw, which you can increase by putting more material in the motor and making it dissipate heat better, but also bulkier. There are diminishing returns in many places in this process, and at a certain point you have to consider switching to hydraulics if you need more force at high reactivity, under essentially much less energy-efficient conditions.

Maybe you could make a model that sizes motors correctly per application? But you are still much better off hiring an engineer who knows what they are doing, can explain what is going on, and can troubleshoot things when they go wrong. At a certain point you are trying to figure out how to completely replace an engineer with a machine learning model, which I would like to think is a bad idea.


Uneducated in hardware, mostly a software guy for perspective, so I could be way off.

Would using something like a stepper motor geared way down with a cycloidal gearbox work for a situation like this? In my mind it would give you a very controllable and repeatable way to position, with the backlash mainly handled by the gearbox.

Would love to know if I'm wrong though, like I said mostly a software guy trying to venture into hardware!


Servo + cycloidal or harmonic gears are usually the way to go, but getting them backlash-free is hard (or expensive, if you're buying). Once you've got that down, challenges include:

* How rigid are the links between my joints? Plastic will wobble, metal is better

* How heavy is my arm and how does that limit its movement? If you go for stiff metal castings, you add weight you need to move. The lever arm relative to the base can get really long

* Motors are heavy! Ideally you can mount them towards the base, but then you need drive shafts or belts, which again add flex. (See KUKA arms which have motors 4, 5 and 6 on the elbow often)

* How much payload do you need to move? 5kg is already challenging in ~1m arms and if you need to move it fast the problem gets even bigger.

* Where do you run your cables? Internal is tricky to build, external can get you tangled.

And so on. When approaching this you get a totally new appreciation for biological arms, which are insane in most aspects except for repeatability. And on the software side you can enjoy inverse kinematics :)


Do you think there is a way to take out backlash with sensors and software? Something like how additive manufacturing systems can use accelerometers to smooth artifacts from motor movement. [0] Let's say two cheaper cycloidal geared motors running in opposition with a load cell between them to maintain the materially compatible force.

https://www.klipper3d.org/Measuring_Resonances.html


I don't want to say no, however it seems very hard to do. You get feedback about motor position via an encoder, which is usually located on the motor's axis and not the output element. Since the motor axis spins a lot more, you get more resolution. Backlash happens on the output, so you could add a second encoder there (but now you've got more complexity + cost). An old-school CNC solution is to add brakes to lock an axis out, but this makes your system less flexible and doesn't prevent backlash during motion. A more modern solution might be to factor backlash into your motion software, so that you tell the kinematic solver what compliance is acceptable in some direction.

> Let's say two cheaper cycloidal geared motors running in opposition with a load cell between them to maintain the materially compatible force.

This might work, but now you have twice the amount of motors.


The problem with backlash comes into play when the direction of force on an axis changes. If you are applying force in one direction and all the backlash has been taken up, everything is fine -- any force you apply or movement you make will be transmitted to the tool like you'd expect. However, if you have to decelerate, or you've gone over-center, or the tool/load pulls harder than you're pushing, now you have to apply force in the other direction, which you can't do until you take up the backlash.

If your axis has high enough friction, then nothing will move when your actuator is in the decoupled backlash region, so you can compensate by adding the backlash amount to your target position whenever you switch directions. But that means you need more friction than tool force, with bigger motors and drivetrain to compensate. It's often easier just to build a system with zero backlash, then you can focus on tuning for system rigidity/resonance (as shown in your link).
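
A sketch of that direction-reversal compensation (the backlash figure is made up, and it assumes friction holds the output still while the motor crosses the dead zone):

    # On every direction reversal, shift the motor-side target by the backlash
    # so the output lands where you asked. BACKLASH_DEG is a made-up figure.
    BACKLASH_DEG = 0.4

    def compensate(targets_deg):
        """Map desired output angles to motor-side commands."""
        commands, offset, last_dir, prev = [], 0.0, 0, None
        for t in targets_deg:
            if prev is not None and t != prev:
                direction = 1 if t > prev else -1
                if last_dir != 0 and direction != last_dir:
                    offset += BACKLASH_DEG * direction   # take up the slack on reversal
                last_dir = direction
            prev = t
            commands.append(t + offset)
        return commands

    print(compensate([0, 10, 20, 15, 25]))
    # -> [0.0, 10.0, 20.0, 14.6, 25.0]: the reversal at 20 -> 15 over-travels
    #    by the backlash, and the next reversal gives it back.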


That was why OP suggested having 2 motors on each joint, going in opposite directions. The problem with this is that you now have twice the amount of motors.


Oof, I appreciate you pointing that out because somehow I got the first part and skipped that one. Yeah, I could see that working, but it sounds inefficient.


Have you considered using belts and sprockets? They seem effective in 3d printers and pretty cheap


Belts introduce elasticity that can be very difficult to deal with.


There are lots of belt driven industrial robots. They have a different set of trade-offs and challenges.


I recently saw this video, using modified servo motors to reduce the wiggle...

https://youtu.be/Ctb4s6fqnqo?si=XP0MS0cpjlK_LMQC&t=8


How many degrees of freedom do you actually need, and what reach? Those parameters (plus speed) drive a lot of choices.

1-2k build cost is tiny in this space. I've paid that in individual motors (specialized ones, to be fair).


What kind of manipulation is needed? If a 3-axis or 4-axis system can do the job, that is always much cheaper than a robot arm, for a given precision and load capacity.


Is it not possible to compensate for inaccuracy, if you have a sufficiently precise measurement at the hand?


Tight tolerances and repeatability are mostly a combined rigidity and resolution issue, which is functionally equivalent to a cost issue. Add more money. Unfortunately there is a point where programming and hardware costs are higher than a skilled artisan... hence the profession!


I'm surprised (or perhaps very unaware) that there doesn't seem to exist a company yet that mass produces cheap, high quality, reasonably standardized robot arms. So many things like 3D printers or CNC machines have entered the consumer/amateur level price realm, but this seems to be something still largely unexplored. Seems to have Arduino/Raspberry Pi scale potential, but I haven't heard of a name/ecosystem that popular yet


I worked for a startup developing robot arms for a while. What we found was that giving someone a robot arm - even one with reasonable APIs and no cost to them - didn't really help because the hard part is making useful automations. Mostly people spent a few hours playing with the arm and then put it on a shelf.

Every use case is completely different and is a lot of work. Even when you get something working, accidentally shake the desk or crash the arm into something and all your coordinates are broken and you have to start again.

Not to mention the actual mechanics are really complicated - to have a meaningful payload at 50cm reach you end up with really high torque at the base joints (which also need to have super-high accuracy), which requires expensive gears and motors. None of that is cheap.

Then you get to safety - an arm that has a useful payload is also quite heavy, and having that amount of mass flailing around requires safety systems - which don't come cheap.

It's a bit like hardware no-code - you can't make an easy to use robotic arm because programming it is inherently hard. I think the only thing that will change that is really good AI.


In robot-assisted surgery, tracking targets are mounted on the robot arm and an infrared camera tracks the exact position.


It's cool to hear from someone with experience!

Do you know if anyone has tried building an arm that uses spatial positioning techniques from augmented reality, like structured light or pose tracking[1], to understand the position of the arm in space without resorting to "dead reckoning"?

It seems like that kind of approach would increase the physical tolerance and reduce the programming complexity, since you know both a) where the arm is supposed to be, and b) where it actually is.

[1] https://en.wikipedia.org/wiki/Pose_tracking#Outside-in_track...


This is actually pretty common. But getting enough resolution to improve on what you can do with the encoders isn't so easy.

The more usual application of multi-camera setups etc. is in path planning and scene understanding, not low level control.


Yep. Interestingly, there has been a lot of recent work on models like RT-2 that might be capable of automating this for simple tasks. We might be at the point soon where that startup would have been viable!


AFAIK RT-2 doesn't quite work outside of Google's micro-kitchen, where they collected about 1,000 hours of data.


That's how prototype tech often works: not very well. But it's a proof of concept. A general model is probably not terribly far off.


There is a reason there is no standard hobbyist-grade robotic arm.

People think they can build their own robotic arms for less than a "real" robotic arm costs, but they do not account for wobble or repeatability.

With all due respect to the person who posted a design for a robotic arm made with RC servos on HN, I would like measurements of the repeatability. Have it draw the same pattern on a piece of paper every day for a week. Show me how closely the 7 lines overlap. I doubt that it can draw such a thing; it will either tear the paper, or jam without even having the strength to tear the paper.

Source: I've been building hobbyist robots since the 1980s, researched robots in the 1990s including a master's thesis, and have been teaching robotics for most of the last decade.


The hobbyist market is pretty forgiving of repeatability issues. The Ender 3 introduced many people (myself included) to the 3D printing world and is known for its problems.


Horses for courses. Nobody's going to be trying to weld car chassis with one of these, true, but also a hobbyist wouldn't want some ABB or FANUC or whatever industrial arm in their house where it could kill someone. These small, light-duty, less-rigid robot arms are fine for what IMO is the really exciting stuff like modern machine learning control research, which is exactly what this guy's doing with them.


https://github.com/adamb314/ServoProject

^Modifying cheap servos so that a robot arm can repeatedly insert a pencil lead. It's a lot of work though.

The most interesting applications, though, fall outside the scope of old-fashioned robotic arms, i.e. when you need to sense the real world in an uncontrolled context. For instance, to develop a robot that can trim wilted flowers, you'll need to measure the real world, and as soon as you do that, you can just sense your robot arm too - no need for fancy, ultra-precise actuators.

Look at this BOM: https://docs.google.com/document/d/1_3yhWjodSNNYlpxkRCPIlvIA...

Do you really need the $6,129.95 & $3,549.95 robot arms for the kind of application described? I doubt it. I'm not a roboticist, and would love some feedback on this idea.


Is this not something that can be addressed with cameras and (maybe) learnt approaches now? You don't need blind repeatability if you've got good visual monitoring to close the control loop, you just (just!) need good accuracy and low latency from video to motor control.


Why not just throw a SteamVR/Vive laser tracker onto the end of the arm and use that to close the loop? They claim sub-mm precision at room-sized distances, so it should be even better if you had it basically mounted on the base. If you wanted to get fancier you could build it into the end effector w/ one of these? https://tundra-labs.com/products/tl448k6d-vr-system-in-packa...


Why are you so negative towards this? It's just an Open source project... who cares how good it is. It's a great way to learn, and play, and experiment.


That post is in the category of "everyone knew it was impossible until somebody with no idea went and did it".

That somebody spent 4 decades failing does not mean that it won't be possible at some point. In the last 4 decades prices have come down and quality in the RC world has gotten much better, if you know where to buy.


> Why are you so negative towards this? It's just an Open source project... who cares how good it is. It's a great way to learn, and play, and experiment.

I did not mean to criticize a hobbyist project for existing.

I meant to say "There is a reason there is no standard hobbyist-grade robotic arm."


Skimming through this thread and the various answers to the multiple "has anyone found a use for a robotic arm?" questions will explain why there's no such company. There simply is no consumer-grade market.


A lack of applications hasn't prevented the sale of a large number of Arduinos lol


Arduinos are for prototyping, which makes the application fairly massive. The company I work for used them to develop one of our machines before we moved to a custom board. So, I'd say they are pretty useful.


Agreed, they certainly can be/are useful (and fun!), in a multitude of ways, but all too often I've also encountered people's "complaints" that they bought a bunch and now don't have a use case for them/are searching for one :)


That's because a toy engineering project is still an engineering project and will be way more work than you think it is, no matter how much work you think it is. It's hard to maintain that energy for long when it's not your day job (sometimes even when it is, tbh).


LOL. I often tell people online that they're better off downloading the free Arduino IDE or playing around with the Wokwi simulator until they have a good idea of what they want to build and whether or not it's within their capabilities before buying parts.

I've built a lot of custom arduino-based projects for other people and a substantial fraction of them are the "I bought a bunch of stuff, but I don't have time to learn how to program it" types.


Come on! They are for tinkerers!!! Same as such a robot.


You can use an Arduino for virtually anything though. A robot arm can only move stuff around.


Arduinos are also $20.


Arduinos are like $3 if you buy Chinese replicas with the old bootloader ;)


$20 will get you a 32-bit processor (ESP32), with Wi-Fi and Bluetooth 5.0 built in and a 2.8" color TFT screen, programmable with the Arduino IDE.

$3 will get you a basic Arduino Nano clone.


They do, but cheap is ~10.000€ currently for a general-purpose bot with 1.2m reach. You get a high quality machine and software for that. Note that a robot without really good kinematics software is borderline unusable. Also, besides the arm you need the control box which reliably delivers power and commands in real-time. That adds quite a bit of cost too.


Ask yourself - what problem are you trying to solve? After you've defined it, you'll quickly learn that there's a much simpler and cheaper solution than a 6DOF robot arm in almost every case. And if you actually do need one, in those cases you'll find that 10-20k is actually pretty cheap all things considered.


My understanding is that 6DOF robot arms are used when you need to be flexible. Today you do this; tomorrow (or in a couple of months) you will need to do something else. The more diverse the set of things to do, the better. But if you need to do almost exactly the same thing many times over for a long time, it is better to design a production line that doesn't use 6DOF robot arms at all.


Also, I understand that robot arms are, in real life, rather complicated to program to operate correctly. So any process would take substantial effort - an order of magnitude harder than controlling some relays or reading some sensor data.


I don't know, but I think the big real industrial kind can really really hurt or kill people.

I think the limiter on smaller arms is quality servos with real location encoders - this one costs a couple hundred bucks for motors.

Not claiming the software is easy! But I think sourcing parts is (or has been) really hard.


> I don't know, but I think the big real industrial kind can really really hurt or kill people.

Not a robot arm, but I worked on a project where the customer wanted to use a commercial motion platform as part of a simulator-based training system for boats. They thought they could just put it in the corner of their boat shed and get training but were amazed when they realised how dangerous it could be to passers-by, especially if it moved unpredictably when someone was standing nearby without paying attention. It went from 'we just need some crowd control barriers' to a full metal cage that was also integrated with the building fire alarm system so that it would stop cleanly if there was some sort of emergency elsewhere.

Other motion-platform-based hilarity ensued when it was discovered that the commercial software model they were using to drive the sim could in some circumstances capsize the virtual boat.


I've explored the idea of using super cheap servos to build a robot arm/tentacle to pick cherry tomatoes, and it seems the only reason you'd want to use location encoders is in tasks that require high precision in an open-loop system (the robot is blind to its environment but has info about its own body). To me it seems you can get rid of this requirement if you allow the robot to sense its environment using cheap 800x600 cameras with depth-estimation ML algos, and get around the accumulated imprecision of sequential servos by coupling to each servo a high-accuracy/small-angle servo (just modify the servo's gearbox). As for the gripper mechanism, you don't need fancy force sensors, just use a kirigami effector [1]. See also Mobile ALOHA [2].

[1] https://www.youtube.com/watch?v=UerxNyu147g [2] https://mobile-aloha.github.io/


> cheap, 800x600 cameras with depth estimation ML algos

This typically won't be accurate enough for closed loop kinematics, especially at any sort of speed.


> I don't know, but I think the big real industrial kind can really really hurt or kill people.

Many medium sized arms are quite capable of generating forces that can kill a person.


I think the main issue is inverse kinematics and path planning.

Which is partly why SCARA is popular with amateur robot arms, the maths is simpler.


Forward and backward reaching inverse kinematics: http://www.andreasaristidou.com/FABRIK.html
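
For anyone curious what FABRIK actually does, here's a minimal 2-D toy implementation written from the description on that page (not code from the site), assuming the target is reachable:

    import math

    # Alternate backward passes (drag the tip to the target) and forward passes
    # (re-anchor the base), preserving each link length, until the tip converges.
    def fabrik(joints, lengths, target, tol=1e-3, max_iter=100):
        """joints: list of [x, y]; lengths[i] is the link joints[i] -> joints[i+1]."""
        base = list(joints[0])
        for _ in range(max_iter):
            joints[-1] = list(target)                      # backward pass
            for i in range(len(joints) - 2, -1, -1):
                r = lengths[i] / math.dist(joints[i], joints[i + 1])
                joints[i] = [joints[i + 1][k] + r * (joints[i][k] - joints[i + 1][k]) for k in (0, 1)]
            joints[0] = base[:]                            # forward pass
            for i in range(len(joints) - 1):
                r = lengths[i] / math.dist(joints[i], joints[i + 1])
                joints[i + 1] = [joints[i][k] + r * (joints[i + 1][k] - joints[i][k]) for k in (0, 1)]
            if math.dist(joints[-1], target) < tol:
                break
        return joints

    arm = fabrik([[0, 0], [1, 0], [2, 0], [3, 0]], [1, 1, 1], target=[1.5, 1.5])
    print([[round(c, 3) for c in j] for j in arm])   # tip ends up at ~[1.5, 1.5]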


Wouldn't the easiest way to program a robot be to "show" it its task by simply moving the arm a few times along the paths it has to replicate afterwards?


In a controlled environment - where the object to pick up, for example, is always in exactly the same location - you could do that. If there is any variation in the location of the object, you need vision to localize it each time. You need a camera, maybe two, and probably some kind of 3d perception, which is an unsolved problem at the moment (well, not solved in a general way, there are some solutions for specific objects).


Yes, they have systems to learn this way, but it assumes the environment is controlled (always) and the task identical (always).

Lots of automation works this way, but it actually limits the applications quite a bit.

A robot that can safely work alongside people (i.e., a "cobot") and adjust to environment changes and changing work patterns is a whole different beast.


There have been a few around, but the limiting factor (as you can see from the BOM in the link) is the cost of the actuators. They get expensive fast.


I was looking at electroactive polymers (EAPs) and I could not find a single company with consumer pricing...

I wish there was more visibility here without requiring a PhD.


Sensors too. E.g. force sensors are important for dexterous manipulation and safety. But they are all expensive, bulky, imprecise, or require constant recalibration - or all of the above. Our robotics prof used to quote his industry peers: "The best sensor is no sensor."


That's presumably one reason why everyone's so keen on doing all the force sensing from the motor current.


Have to disagree with this. The major limiting factor is the software and lack of applications. If there was a killer application it would be much easier to sell. Then the numbers would drive the prices down.


If they don't have to move fast, could you DIY an actuator with a stepper and a threaded shaft? Steppers are pretty cheap thanks to 3D printers.


For a useful robot arm, you need high forces/torques. Usually, this is achieved by high-ratio [1] gearboxes between the motor and the joint. Those are expensive and inefficient, and they make estimating the output force via the motor current almost impossible.

[1] In the order of 1:100, see e.g. https://www.harmonicdrive.net/
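
A rough illustration of why current-based force estimation gets hopeless behind a 1:100 reduction (all numbers made up):

    # An external torque at the joint shows up at the motor divided by the gear
    # ratio, where it competes with the gearbox's own friction.
    Kt = 0.06                  # motor torque constant, Nm/A (made-up)
    N = 100                    # gear ratio
    gearbox_friction = 0.02    # friction referred to the motor side, Nm (made-up)

    load_at_joint = 2.0                   # Nm pushing back on the output
    load_at_motor = load_at_joint / N     # 0.02 Nm - the "signal"
    print(f"signal ~{load_at_motor / Kt:.2f} A vs friction ~{gearbox_friction / Kt:.2f} A")
    # roughly 0.33 A of signal against 0.33 A of nonlinear, varying friction:
    # the motor current barely tells you anything about the output force.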


There are arms using steppers and 3d printed epicyclic or cycloidal drives.


> that there doesn't seem to exist a company yet that mass produces cheap, high quality, reasonably standardized robot arms.

These do exist, it's just that "reasonably cheap" is typically low to mid 5 figures for any sort of reach and payload.


I don't think we've figured out how to make good cheap mechanical actuators. I think that engineers make do with inaccurate actuators by changing the mechanism around it. Robot arms need a level of reliability that isn't cheap yet.


- Arm wrestling a toddler?

- Handwriting notes for small jewelry brand?

- Drink mixer?

- Handing towel when in the bathroom, then getting a new one?

- Setting up my morning espresso? (grinding the beans and turning on the coffee machine)

Which of those can/cannot be done and why?


> Arm wrestling a toddler?

The most common designs of six-axis robot arm don't have the 'rotate forearm sideways' joint needed to arm wrestle.

> - Handwriting notes for small jewelry brand?

Possible: https://en.wikipedia.org/wiki/Autopen

> - Drink mixer?

Possible: https://www.makrshakr.com/ (arguably more of a showy entertainment item than anything else)

> - Handing towel when in the bathroom, then getting a new one?

Manipulating flexible materials is difficult. As is navigating through a house with locked bathroom doors and suchlike.

> - Setting up my morning espresso?

Depends if you're willing to broaden your definition of 'robot' to include bean-to-cup machines.


> "Arm wrestling a toddler"

Like arm wrestling a brick wall; if you can push it over then you win, if you can't push it over then you lose - either way there's not much fun in it. And if it can beat the toddler it risks injuring them because neither of them really understand what's happening and what the risks are. The arm can't stop if the toddler says 'ow'.

> "Handwriting notes for small jewelry brand?"

Can be done already with a commercial 2D plotter: https://www.axidraw.com/ . It costs twice as much as this arm, but you don't have to build it and it already has "software for realistic handwriting" and there's a company to get support from.

> "Handing towel when in the bathroom, then getting a new one?"

Is the arm big enough to be useful for that? It appears to be shorter than a typical human arm, so it would be cheaper, simpler and quicker to put the pile of clean towels a foot closer to the shower where the robot arm is sitting, and not have the robot arm at all. Plus you wouldn't have to deal with electricity in the bathroom or dripping water on the robotics as you reached for the towel it was handing you. (Are you thinking of a robot arm with cameras for feedback of where it's positioned? Cameras in a bathroom won't be popular with everyone no matter how much you promise they are innocent).

> "Setting up my morning espresso? (grinding the beans and turning on the coffee machine)"

Simpler and cheaper done with a timer mains plug which you can get for under $10. Put the beans in and load up the coffee machine the night before (work you'd have to do anyway) and have the timer start them in the morning. If you expect the robot arm to unseal a bag of coffee beans, measure some out, deposit them in the grinder, close the grinder, and close and seal the bag after, you'll wake up to spilled beans and unsealed bag a lot of days before you get that working reliably. Instead of $250 plus weeks of effort to speed up this 2 minute task(!) you can get a Keurig / Nespresso pod coffee maker for less than $100.

> "Drink mixer?"

How much spilled wasted alcohol, plus time of disassembling and cleaning your robot arm and the surface it sits on, and the floor, or finding the bottles, unscrewing the tops, handing them to the arm, waiting for the arm to slowly pour them which you could have done quicker, then putting the tops back on and putting the bottles away yourself, then putting the drink stirrer into the arm, then waiting for it to mix the drinks which you could have done yourself quicker, before you decide this was not a good use of time or money? (How often do you drink mixed drinks anyway?)

The robot isn't going to learn to do the task better next time like a human could so if you have to get involved in the task at all, you may as well do it yourself. And if it's a 15 second task like "reaching for a towel" what are you doing with your life trying to automate that? Roomba saves a lot of time, a lot of annoyance, it could be worth it even if it does an inferior job - because you can leave it running over and over and over. Same with a robot lawnmower, if you just glance around to make sure there's no pets or children in the way then let it go, it can save you a good chunk of time and if it goes wrong you just get a patchy lawn or dusty floor and it can retry tomorrow. But handing you a towel or mixing you a drink saves you almost no time, but if it goes wrong you get a broken bottle of sticky drink all over or a pile of towels on the floor, which has undone months of 'time saved' in one go.


I like how this comment is clear, comprehensive, full of common sense,

AND very likely to be completely outdated within a few generations (5 years?) of robotics + AI progress.

I would also not discount how easy it is to sell people on additional cameras in their homes (including the bathroom) for the sake of convenience.


State of the Art public robot arms include Boston Dynamics' Stretch[1]. It's not for sale to the public, the price isn't public, it's got 18 suckers on a flat tray and runs on a wheeled base and looks like the size of an armchair. Boston Dynamics' Spot the walking dog robot was launched in 2020 for $75k and was explicitly not safe for use in the home or around children.

Do you genuinely think they will improve to the point of having finger style grippers, dexterity and adaptability to grind coffee, mix drinks and pick towels, and be on sale to the public, safe for use in the home, for $250 (or $2500) by Jan 1st 2030? I would be very surprised.

(Can you get a robot arm today, for any price, to help a quadriplegic open their mail, bring a drink with a straw in it to their mouth, lift them into a sitting position, hold a book in front of them and turn the pages, or ... do anything helpful? I'm not aware of any, but haven't been looking specifically).

Yes you could probably build a robot today which hands you a towel from a pile, reliably and swiftly, or selects the bottles of alcohol and opens them and pours and mixes a drink - in a carefully controlled and lit environment where none of the lids or corks are stuck and the glasses are all a similar shape and size and nobody is allowed to be near it - I don't say it's impossible with today's tech, but it would cost a lot more than $250. A hundred or a hundred thousand times more, while being far far more limited than a human.

[1] https://arstechnica.com/gadgets/2022/04/boston-dynamics-stre...


After watching the Mobile ALOHA video linked in another comment, I'm increasing the probability of me eating crow on this one.


> outdated within a few generations (5 years?)

Hardware generations are typically closer to a decade than a year. Robotics is moving fast these days, but not that fast.


> cheap, high quality

Most mechanical things require you to optimize for cost or quality


A six axis CNC machine is essentially a robot arm without an elbow.


Why not start with something less ambitious, like low cost robot platform able to follow people and carry stuff around and avoid obstacles. No arms, I am okay using mine to put stuff on and off it.

When I had leg injury and used crutches, carrying stuff around suddenly became a problem. There are many people with impaired movement. And even without that, I often misplace things and it could help there.

There are plenty of toy robot undercarriages on AliExpress but too small (under 20cm largest dimension) to be practical.


Bellabot[0] comes to mind; there's also a startup doing a Roomba-like bot that magnetically docks to specially designed shelves[1]. Though they don't sell for $250, so actual Roombas and ROS mods for "hoverboard" Segway clones[2] might be more cost effective.

0: https://www.youtube.com/watch?v=l1hQ5YTMJEw

1: https://www.youtube.com/watch?v=SdVglHOJgiA

2: https://github.com/hoverboard-robotics/hoverboard-driver/tre...



also https://piaggiofastforward.com/ "gita" (I'm not sure what the market is for fashionable luggage-robots, but they're definitely going after it...)


> low cost robot platform able to follow people and carry stuff around and avoid obstacles.

So, basically an autonomous self-driving mini vehicle. Companies spend billions on self-driving cars with quite limited luck.


> Companies spend billions on self-driving cars with quite limited luck.

That's in part because it needs to be very reliable to not kill people.

If the worst that can happen is killing a garden gnome or running over someone's toes you can tolerate more error.


Yeah, a "small enough to kick out of your way" autonomous driving project is a one or two semester student project (if you're starting from nothing, Sebastian Thrun's old Udacity course was a good way to bootstrap through the algorithmic parts, then maybe watch James Bruton for "everything you could possibly imagine doing with wheels, motors, and an infinite supply of 3d printer filament")


that is because they run in a very different environment than an RC-size car in your living room would


100%. Arms are hard. Mobility is hard. Mobile arms are double hard.


No, rather a larger version of a robotic vacuum.


I was going to ask if TurtleBot was too small for you, but thought I should check the price first, so uh, yeah, I'm guessing you're thinking of the $250 price point instead of the TurtleBot's $1000+.


I'm surprised that no one pointed to this: https://github.com/peng-zhihui/Dummy-Robot It's probably a bit hard to read though.


The geek in me is drooling, but are there any practical home uses others have found for robotic arms? Hacking is always more fun with a good project


I’d use it to pick through and sort the massive pile of Lego my children leave behind after an Easter long weekend. Seems trivial to do identification based on the high quality corpus of block databases. Wouldn’t have to be super quick - you could just leave it running overnight.

I’m sure someone’s written an interesting paper on the ideal sorting algorithm too (i.e. large things > small things vs. ‘just pick up and place the nearest thing’.) I would personally just get it to sort them into basic sets before placing the trays back in their goddamn drawers.


Picking up and sorting Lego with a robot arm is pretty much a state of the art research project (a few years ago at least), not a hobby project.


Depends on your definition of hobby and whether the hobby, to be enjoyable, needs to produce something tangible in the short term. Following SOTA research, tinkering, and trying new things in a field unrelated to your day job can be a fulfilling hobby.


I imagine it is strongly affected by whether the pile is already decomposed into individual bricks or not.

The analysis and disassembly of a combined set of bricks can frustrate even human eyes, brains, and fingertips.


My impression is that the fastest/easiest way to do it would be putting all the pieces inside a hopper with the selector at the bottom. Perhaps using compressed air to push the falling pieces into different containers as they fall. I believe there's already something like that in produce factories, separating vegetables by state/size.


I interviewed once with a company that made rice and grain sorters. They have a massive hopper at the top and pass the grain in a curtain through a machine vision camera, and then decide in real time where each grain goes based on the image.

Apparently pretty much every grain of rice you've eaten has been through a machine like that.


That is very interesting! Thanks for sharing. I did not imagine that rice was a good candidate for this


Oh look someone has already done something like that:

https://www.youtube.com/watch?v=04JkdHEX3Yk


I don't know why this can't just be a cleverly laid out arrangement of layered sieves, each one parsing a different object .. I don't see how it needs to be mechanical in any sense other than "pour in the lego junk, out it comes neatly sorted", a la coin-sorting machines ..


The problem with sieves with Legos is that everything goes in the square hole: https://www.youtube.com/watch?v=Nz8ssH7LiB0


I could see people wanting to add on color detection for the bricks, but even that could still be solved by camera + servos/steppers and a chute that goes to different bins. No need for an arm.

Doesn't stop me from wanting one though.


It is probably possible to do it, but the sieves would be quite big, just to account for the very large number of pieces, as well as having to “orientate” them correctly using only gravity.


> Seems trivial

Said like someone who's never tried it :)

For a start you're going to need a camera. Maybe more than one. You want depth sensing? Even a cheap choice like a RealSense is going to add another $250 to your costs. And you'll need a sturdy mount for it; the robot's going to vibrate the table and you don't want to suffer motion blur.

Got the camera in a fixed location, over the area you're picking from? Then the robot's going to block the camera's view when it reaches in. No real-time hand eye coordination for you. Putting the camera on the robot's wrist? Now you've got motion blur problems - and reliability problems, because normal USB cables aren't designed for continuous flexing. You've also got a gripper in view all the time - and now the camera moves, things are always out of focus.

The reach of the arm isn't long enough to give you many bins to drop items off into, considering the number of lego parts there are. The longer you make the arm, the greater the torque at the shoulder joint. Making the motors bigger? Now the elbow motor is heavier. Gearing them down? Now you've got gear backlash.

Your Dynamixels will break, for some reason. Maybe eventually you'll figure out why. In the meantime, $50 each please.

Parts like the small satellite dish https://www.bricklink.com/v2/catalog/catalogitem.page?P=4740... will prove very hard to grasp. And there's like 50 different colours, you're going to need to know your way around lighting and camera settings if you want to reliably tell transparent light blue, transparent medium blue and transparent dark blue apart.

And that's before you get into questions like how to tell a 2x4 stud brick apart from two 1x4 stud bricks next to each other - or how to grasp a brick when an adjacent brick is blocking you from getting in with the gripper.

Every single one of these issues is solvable - but by the time you've solved them all? You could have hand-sorted that lego 20 times over :)


> Even an cheap choice like a RealSense is going to add another $250 to your costs

FYI, Luxonis is selling some for $150. I still have to try them, but they look quite good.


I happen to use a few of their cameras, and they generally work as advertised (satisfied Kickstarter backer for OAK-D and OAK-D Lite, probably going to buy the OAK-D Pro at some point). But, while I did indeed pay less than $250 for them individually, their current active depth offerings are $350 (and while my OAK-D is fine for my lit, varied environment, I do often wish it was a little more accurate). I thought the Lite was also around $200 but it's actually $150 as you said. It's a pretty good little platform for the price. Be sure to check out the experimental repo too: https://github.com/luxonis/depthai-experiments/tree/master/


It’s trivial to fork one of the several open source projects focused on this problem.


They really spent a lot of time diving into the complexities of your question and I found it really interesting. Your handwavey, one sentence response without even an example (if there even is one??) is kind of rude in this context.


For example?

[1] lists [2] which uses a robotic arm, but it is closed source

[1] https://github.com/360er0/awesome-lego-machine-learning [2] https://www.thirdarmrobotics.com/q_and_a.html


I think the easiest way to collect the Lego pieces from the floor is by using a vacuum cleaner.

Then look at the sorting as a separate problem :)


We have a “Lego sheet” - just a regular bed sheet that goes on the floor before playing with Legos. When it’s time to collect, you make a “bag” with the sheet, grabbing (most of) the pieces, and then we “pour” them into the final container. The sheet goes in top of the pieces on the same container so it’s also the first thing that comes out the next time.


I guess this was a common enough solution that now you can find products that are essentially what you're describing, e.g.: https://www.amazon.com/SAM-MABEL-Storage-Basket-Play/dp/B0BV...

From the item description: "TIDY UP IN SECONDS: Say goodbye to messy playrooms with our storage organizer! The play mat provides a dedicated area for creative play, and when it's time to pack up, simply gather the handles and tip everything back into the compact storage cube. "


The devil is in the details. In theory everything is trivial with enough SW dev hubris ;)

Even just the path planning towards the block to ensure good grip and pick up is not a simple task. Consider all the block shapes, possible orientations, collisions..


An impractical use would be to have a few of them replace monitor arms in a multi monitor setup. You could then rapidly switch between a few configurations.

Also, have one to stir pasta in the kitchen.


At 1.4 Nm of torque, the main motors would likely struggle to manipulate a 0.5 kg item; they aren't strong enough to hold a monitor.
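
Back-of-the-envelope check, assuming a ~0.3 m lever arm (my assumption, not a figure from the BOM):

    # Holding a mass straight out at radius r takes roughly m*g*r of torque at the joint.
    g = 9.81
    for mass_kg, radius_m in [(0.5, 0.3), (3.0, 0.3)]:   # small payload vs a light monitor
        torque = mass_kg * g * radius_m
        print(f"{mass_kg} kg at {radius_m} m needs {torque:.2f} Nm (motor: 1.4 Nm)")
    # 0.5 kg at 0.3 m is already ~1.5 Nm, right at the motor's limit; a monitor is far beyond it.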


Sacrifice speed and add a gearbox?


On the topic of automatically stirring pasta, I saw this the other day!

https://www.amazon.com/StirMATE-Automatic-Variable-Self-Adju...


That is interesting and Amazon has a great algo for matching it to laboratory equipment consisting of hot plates and magnetic stirrers.


I doubt this is strong enough to carry a screen


Multi monitor robot arm stands would do some cool things. An automated rotating sequence when you boot up would feel like a scifi movie.


Pasta is one of the few things you’re not supposed to stir while cooking?


I think you should stir it occasionally. It might depend on the pasta, though.

https://www.thekitchn.com/kitchen-mysteries-why-stir-pas-112...


Not much. I have a UArm on my desk, which is a lot like this one but with cheaper servos. It was too inaccurate to use for much of anything. I built a force-feedback sensor for it out of a 3D mouse. Reasonable idea, but not stiff enough for the application.


> but are there any practical home uses others have found for robotic arms? Hacking is always more fun with a good project.

A reader of this thread that had a temporary disability posted some ideas about practical uses:

https://news.ycombinator.com/item?id=39903953


Roboexotica is screaming for a set of these arms to create a production line:

http://roboexotica.at/


Holding a water pistol to shoot at foxes in my garden might be useful. The other humane deterrents (ultrasonics etc.) don't work.


You don't really need a robot arm for this though... Also on my list of projects for cats :)


Future proofing for when they evolve into flying foxes! You'll need the extra degrees of freedom


From over a decade ago: https://us.pycon.org/2012/schedule/presentation/267/ (with just a pair of motors to point squirt gun at specific angles.) One of the (many) cases where a robot arm would be more general without being in any way better :-)


Drinks mixer! Line up the booze bottles and other stuff.

No need for brute strength. Tolerances of one or two millimeters are mostly fine.


This actually sounds a bit complicated to me, as you need to adjust the angle for each pour from a bottle. And then there might be different viscosities involved with some bottles...


Angle would benefit from visual recog - shape of bottle top, shape of bottle body.

Viscosities - probably finding a booze's sugar content online would be 90+% of it.


Anyone who finds this interesting might also like this one, it‘s not DIY – comes fully assembled: https://www.waveshare.com/roarm-m2-s.htm

I have one and the build quality is really impressive for the price point.


There's also the 5-dof version that I've seriously considered buying at one point, but it's really hard to tell if the ROS 2 integration is any good: https://www.waveshare.com/product/robotics/roarm-m1.htm

A real shame there isn't a 6-dof one, since that's what you'd really need to grasp anything properly in the radius around the arm.


This is one of those things I have NO NEED FOR but I definitely would want one on my desk.


Nice. What do you use it for?


Reproducible test scenarios for barcode scanning with a smartphone.


Unless it has something to do with the print quality itself, can't this be achieved using a stationary phone with its camera pointed at a monitor displaying 3D-transformed barcode images?


Yes, possibly. But reading from the screen is quite different from reading actual barcode prints; for instance you have to deal with Moiré patterns and such. And frankly it was just a good excuse to buy an arm. Sue me! ;-)


> for instance you have to deal with Moiré patterns and such

My understanding is e-ink displays will not be susceptible to that.

I might just have given you an excuse to buy a large e-ink display/monitor :)


Haha fair play, this is exactly how things should be!


I wish I had it a decade ago when I tested phone touch screens


Very clever. This use case would never cross my mind.


Any such thing on amazon etc?


How is the software support?


I've only just started using it via the web UI, no idea.


Any idea how much weight this could hold?

I’m wanting to manipulate a fan in my home gym with some eye tracking to get it to blow air on my face when I work out, but the fan is a few pounds.

Alternatively: any hardware motor suggestions for such a project?


Most of these robots will use servo motors - that gives you dexterity but means that holding any position requires constant holding torque - which means limited payload and a lot of wasted power.

For a heavy fan (don't forget the reaction force from moving air too), you'd be better off mounting it on some sort of bearing and just using a motor to turn it. That way the motor isn't trying to fight gravity all the time. The robot arm linked here uses Dynamixel servos - you could just use one of them to spin a fan on a lazy susan. Much cheaper and less complicated!


this is why servo-powered robotic arms are not interesting to me at all. I want one with brushless motors and a harmonic or planetary drive gearbox. unfortunately you can't build those under $750 as far as I have seen so far. that's putting aside the added complexity. but I will definitely build one some day. there are quite a few open-source projects with that setup.

I believe it offers higher torque, better precision, and the ability to just hold a position with relatively low or no power due to gearing-side resistance.


yes, a belt-driven lazy susan is the right answer here (for azimuth).


Why not: keep most weight off the arm by using a fan/compressor installed in the base and routing a conduit to the arm? Then it will only have to move the conduit, not the heavy motor.


Wow, I was building a Thor 3D printed arm, and this project looks way better! I think I'm going to Pivot.

Side bar: these servos are a game changer.


Are the servos better than SG90?


They're "smart" which means you can form a serial bus of them, query individual motors' encoder positions and motor temperatures and whatnot, adjust the PID parameters yourself, and so on. You can also daisy-chain them together, which might reduce your cable routing problems.

Downside is when they break, you're out $50 or more - and you're going to break at least one. And the manufacturer wants you to operate them at 11.1v which isn't very convenient. And when it comes down to it, it's still got plastic gears, a plastic case, and enough backlash to be noticeable.
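
To make the "serial bus" part concrete, here's roughly what querying a few daisy-chained motors looks like with the Dynamixel SDK's Python bindings. This is only a sketch: the IDs, port path, and baud rate are placeholders, and the control-table addresses are the ones I believe apply to the X-series (protocol 2.0, e.g. the XL330) - verify against the e-manual for your exact model.

  # pip install dynamixel-sdk
  from dynamixel_sdk import PortHandler, PacketHandler

  ADDR_PRESENT_POSITION = 132     # 4 bytes (X-series control table)
  ADDR_PRESENT_TEMPERATURE = 146  # 1 byte

  port = PortHandler('/dev/ttyUSB0')   # U2D2 or similar serial adapter
  port.openPort()
  port.setBaudRate(57600)
  packet = PacketHandler(2.0)          # protocol 2.0

  for dxl_id in (1, 2, 3):             # IDs sharing one daisy-chained bus
      pos, comm, err = packet.read4ByteTxRx(port, dxl_id, ADDR_PRESENT_POSITION)
      temp, _, _ = packet.read1ByteTxRx(port, dxl_id, ADDR_PRESENT_TEMPERATURE)
      print(f"id {dxl_id}: position {pos}, temperature {temp} C")

  port.closePort()

Goal positions and PID gains are written the same way with the matching writeXByteTxRx calls, so a whole arm ends up being a loop over IDs on a single serial port.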


Wish there was something in between that and SG90!


Dramatically. They cost (give or take) ten times more and weigh twice as much, for which you get (give or take) four times the stall torque, serial positional control, and a 360 degree range of motion.

Still got plastic gears though.


As a longtime Dynamixel user I agree the U2D2 adapter is pricey in comparison to other options, but I would like some quantification of the “latency is very high” claim. I have always found it to be a sure bet for low latency (~1ms) across a wide variety of platforms.


It works fine with Linux. The latency issue only exists with macOS. https://forum.robotis.com/t/u2d2-high-latency/5319


Please stop gluing 3 servos together and claiming you built a robot :D

(servo motion is quite jerky, that is why they don't have a video showing off this "robot" operating)


Here are some videos of the robot moving: https://twitter.com/alexkoch_ai. The advantage of this robot arm design is that it's very lightweight. The XL330 motors are just 18g each. This makes it very suitable for teleoperation and robot learning.


As someone a lot deeper into robotics I totally get this sentiment... But I also think it's good to encourage people to share the basics, look how many newcomers in this thread find it interesting and may explore it further.

I wonder how smooth one could make a cheap servo-based robot arm operate with decent control algorithms.


Here's what I want to build:

A rotatable, table-top round disc base, with a contraption to keep a mobile phone straight and stable. The stand itself will have 4 small unidirectional mics to figure out which direction the sound is coming from (after filtering for human voice frequencies, ideally). Based on that, it will rotate the phone to face that direction (continuously).

The use case is family video calls that I do frequently from my dining table (my whole family is sitting around the table, hence there is no one good spot to keep the phone). With this self-rotating stand, the phone will auto-rotate towards whoever is speaking.

I can write audio-processing code, but I have no idea how to get started with the hardware. Feel free to steal my idea, but please share with me how you are building it. I just want this to exist, and I want to know how to build it for myself as a fun project.
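
On the audio side, the crudest possible starting point is "which channel is loudest over the last fraction of a second". A minimal sketch, assuming four unidirectional mics at 90° spacing and a (samples, 4) NumPy block from whatever audio capture you use:

  import numpy as np

  MIC_HEADINGS = np.array([0.0, 90.0, 180.0, 270.0])  # assumed mic orientations, degrees

  def speaker_bearing(frame: np.ndarray) -> float:
      """frame: shape (n_samples, 4), one column per unidirectional mic."""
      # A real version would band-pass to roughly 300-3400 Hz for speech first.
      rms = np.sqrt(np.mean(frame.astype(np.float64) ** 2, axis=0))
      return float(MIC_HEADINGS[int(np.argmax(rms))])

Feed that bearing to whatever turns the phone, with some smoothing/hysteresis so it doesn't twitch every time someone clinks a glass.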


If you can write the code, it could be pretty straightforward. From a quick search, motorized turntables already exist, from $10-100, both with and without remotes. So you could either wire a controller into one, or reverse the remote’s signals. I also found this guide for a DIY one[0], if that’s your preferred route.

[0] https://www.instructables.com/15-Motorized-Rotating-Display-...


An alternative would be a 360 camera and do the cropping in software. It has the advantage of being able to show multiple speakers at once.


I printed a robotic arm for school. Unfortunately, we weren't using a high enough quality printer, so the tolerances were off and things didn't slot together well. I'd recommend people check the precision of their printer before setting off to build this.


Learning this is part of the journey imho. You don't even have to know that you have to pay attention to tolerances beforehand. You'll inevitably learn about it when building such a thing.

Just always be aware that these things will never be perfect and don't get anxious because there are so many perfect looking projects on the internet. They most likely went through the same mistakes and might even have more people in the background. Just enjoy the journey


It's not so much the printer's fault as the slicer's. Calibration is key.


Alternatively, make everything just a tad out of tolerance and drill/sand/machine it to a more precise size.


I tend to do this. I know I could get better at printing (though my printer is pretty old), but sand paper and a rotary tool are really fast and can be pretty precise and accurate too.


As a PSA, I'd recommend anyone just add 0.25mm fit clearance to every single mating surface within their designs. It's not ISO-compliant or anything - somewhere between the Atrocious and Enormous range, and perhaps the mechanical equivalent of Python code with no __main__, but it just works for me, and it should for lots of purposes.
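
In the same spirit (and with the same disclaimer), a throwaway helper if you happen to generate dimensions from a script rather than typing them into CAD - the 0.25mm value is just the rule of thumb above, not a standard:

  FIT = 0.25  # mm of clearance per mating surface

  def hole_diameter(nominal_mm: float) -> float:
      return nominal_mm + FIT   # FDM holes tend to print undersized, so grow them

  def peg_diameter(nominal_mm: float) -> float:
      return nominal_mm - FIT   # pegs tend to print oversized, so shrink them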


Anyone know how the accuracy of this compares to the similar-cost adamb314/ServoProject arm? [1] It utilizes a servo mod adding dual encoders to compensate for backlash and achieves accuracy of +/- 0.05mm (enough to thread a mechanical pencil lead in and out of the tip of a pencil). [2] He's been working on the project for 5 years, with significant improvements still in the last year. [3][4]

  [1]: https://github.com/adamb314/ServoProject
  [2]: https://www.youtube.com/watch?v=SioCwvR_PYY
  [3]: https://www.youtube.com/watch?v=_4mrb2T706s
  [4]: https://www.youtube.com/watch?v=Ctb4s6fqnqo


it should be similar given similar construction; one of the reasons dynamixels tend to be pricey is the dual inboard encoder setup.

if price is a factor then a servo+encoders setup will always be cheaper; there are some dirt cheap encoders out there for the creative hacker. dynamixels offer a crappy value compared to DIY solutions, they're just easy to use off-the-shelf and have nice features that aid construction.. but hardly anything game changing.
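
For anyone curious what the dual-encoder trick buys you, here's a toy cascaded loop - not ServoProject's actual controller, just the general shape of the idea. The position error is measured after the gearbox, so backlash and compliance are inside the loop, while velocity comes from the motor-side encoder, which has far more counts per output revolution. Gains and timestep below are made up.

  class DualEncoderLoop:
      def __init__(self, kp_pos=8.0, kp_vel=0.5, ki_vel=0.05, dt=0.001):
          self.kp_pos, self.kp_vel, self.ki_vel, self.dt = kp_pos, kp_vel, ki_vel, dt
          self.integral = 0.0
          self.prev_motor_angle = None

      def update(self, target, output_angle, motor_angle):
          # Outer loop: position error from the output-shaft encoder,
          # so gearbox backlash can't hide from the controller.
          vel_cmd = self.kp_pos * (target - output_angle)
          # Inner loop: velocity from the high-resolution motor-side encoder.
          if self.prev_motor_angle is None:
              motor_vel = 0.0
          else:
              motor_vel = (motor_angle - self.prev_motor_angle) / self.dt
          self.prev_motor_angle = motor_angle
          vel_err = vel_cmd - motor_vel
          self.integral += self.ki_vel * vel_err * self.dt
          return self.kp_vel * vel_err + self.integral  # motor PWM/voltage command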


> but hardly anything game changing.

Speak for yourself! When I worked on liquid handlers a decade ago the fully integrated servos were at least ten times as expensive as they are now.

Every time I step away for a few years and jump back in, there seems to be at least a half dozen game changing pieces of hardware on the market.


As a level 12 necromancer, I myself am a fan of necrobiotics: https://onlinelibrary.wiley.com/doi/10.1002/advs.202201174

Spiders move not through direct-muscle limb manipulation as we do, but something more akin to hydraulic pressure moving a joint. Thus they become self-building hydraulic soft actuators with a very simple i/o interface (psi, in the necrospider).


I'm currently waiting on the motors for this, but my Bambu P1S printed out the parts with minimal stringing in like 90 mins. Ready to try it out for cooking experiments.


Bought a Sainsmart robot arm because it was cheap and has 6 degrees of freedom. I don't use it for anything serious though. It was just to practice some robotic programming. https://github.com/wedesoft/arduino-sainsmart


Does anyone have any suggestions on something that's a little higher quality, i.e. more torque and larger, like the size of a UR5 but cheaper than $30k? There always seems to be a gap between "robot arm with dynamixels/off the shelf servos" and "research-grade arms".


I’m very happy with the mechanical design for my four axis brushless motor powered robot arm with integrated 3D printed planetary gearboxes. I have some hope of picking the project back up and better documenting it, though the CAD files explain a lot. For the last few years I’ve been working on my own brushless motor controller design and I think this year I will have that stable enough to go back to working on this arm.

https://github.com/tlalexander/brushless_robot_arm

https://github.com/Twisted-Fields/rp2040-motor-controller

Direct link to a video of it operating (apologies for the Twitter link) here: https://x.com/tlalexander/status/1455339851734138880


Pretty neat! As a fellow motor controller designer, I worry you're wasting time reinventing the wheel in that regard, but I really like your project, I also want a farming robot.


Thanks. I specifically want something that is designed in kicad, open source, and easy to manufacture at JLCPCB using parts already in stock there. Maybe there are more options now but two years ago when the Odrive we were using was discontinued and their new products got more closed source and more expensive, we didn’t have a lot of options. At this stage I’m very happy we’ve gone with our own design because we have so much flexibility on packaging and specification. Working with third party devices sucked.


This is awesome! I don't have enough background knowledge to know how to make one myself, do you ever plan to sell these, or make a tutorial on how to build one?


Thank you! I do not plan to sell these arms. I am not sure if I will be able to make time to document it (I am a maintainer on an open source farming robot project which takes up most of my time - see profile), but I am working on a new actuator concept based on the principles explored in this design, and my goal with the new actuator concept is to document that and make it more general purpose, so it is easier for people to make themselves and explore this mechanical gearbox design. This should hopefully popularize this design if others like it. I think it is fair to say the design is probably novel.

A fun fact about the new actuator is that it can be printed in plastic at home but is designed to be 3D printed in 316 stainless steel. I was inspired recently by the relatively low cost of 3D printing from China (I used craftcloud), and my novel actuator design relies on the fact that 3D printing allows gearbox components and robot frame members or components to be mechanically unified. This opens up new design spaces.

In this design, it is a two stage planetary with one "first stage" in between two parallel second stages. The first stage is driven by a shaft from a side mounted motor, with the shaft going through the gears in one of the second stages to reach the first stage sun gear. This parallel output better balances mechanical loads across the joint both on the input and output sides. This makes it ideal for elbow and knee style joints, and might serve to be genuinely very useful in robotics worldwide.

As with all my work it will be open source. I don't love twitter but that is currently the best spot to get updates on side projects like this. I am @tlalexander there. Alternatively, star the 3D printed robot arm repo and I will update it when my new steel servo design is done!
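
For anyone following along, the reduction math for this kind of design is pleasantly simple: with the ring gear fixed, sun as input and carrier as output, each planetary stage gives 1 + ring_teeth / sun_teeth, and the stages multiply. The tooth counts below are made up for illustration, not taken from this arm:

  def planetary_ratio(sun_teeth: int, ring_teeth: int) -> float:
      # Ring fixed, sun input, carrier output.
      return 1 + ring_teeth / sun_teeth

  total = planetary_ratio(12, 48) * planetary_ratio(12, 48)
  print(total)  # 5:1 per stage -> 25:1 overall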


I have a dobot mg400 but there are quite a few others: uFactory, elephant robotics, annin, dorna, epson (vt6L and their scaras), and lynxmotion is releasing their ses-v2.


Wow, thanks for the list of resources. My problem with most robot arms is that they either have small payloads, aren't very accurate, are too slow, or have terrible software support. Do you have any specific suggestions for 6DoF? The MG400 looks to have 4.


Mg400 is great for the price no issues. Very smooth! Check out this review video: https://youtu.be/6nGexb_i0aM?si=IP0E76MCGxrTrQEH

What payload are you looking for? Cartesian gantries are your next bet if you want to handle higher loads. E.g. the Epson VT6L is ~$14k, but I'm sure you can build a gantry system to handle higher loads for a bit less!

Dobot software sucks though, I ended up programming it in Python. It's definitely not on the same level as say a Kuka or Yaskawa. Epson seems like the best value out of all the higher end arms: the software looks good, the arms are built well, the company has a long history in industry, and the price is decent.


You don't want a cheap && powerful && fast robot in your home! Let alone sell a bunch of them.


There are a couple of arms commonly used in research around the $10k mark, namely the Franka Emika and UFACTORY xArm 6.

The ALOHA project uses the ViperX 300 6DoF, which is around $6500 but uses higher quality dynamixels with aluminum parts and bills itself as "research grade". I have one of these and I'd say it's expensive for what you get, but still cheaper than the "factory grade" robots. I will need a bimanual setup eventually and I'm probably going to get either an Emika or xArm since I'm already hitting the weight limits of the ViperX.


I'm not sure as far as your technical requirements go, but maybe Igus.eu would fit your needs? They do have a fairly good robot automation portfolio that seems to be very price competitive as far as I can tell.


Their rebel cobot looks really neat, and price seems great!


Probably not UR5 size, but Aloha robot arm sounds like something you might be interested in: https://www.trossenrobotics.com/aloha-kits


Thanks for the link, I would categorize this robot as "robot with off-the-shelf servo".


I have zero idea what I am going to use this for, but the idea of having a robot arm has fascinated me since I was a kid and I am 100% going to start working on this this weekend.

I think I could find some fun projects. I wonder if it could hold my microphone and turn into an automatic microphone arm...


Impractical, but here are my kitchen use-cases:
- Hold a hot pan and drip bacon grease into a jar
- Hold "almost empty" bottles like olive oil upside down to drain them
- Making dishes that require constant stirring over a long period (tapioca, risotto, etc)


Hah! Not actually a robot... but I half suspect I could convince the wife that a fold-out arm with a modest clamp (that could rotate a bit if needed) at the end would be worth mounting to a wall along the kitchen counters for holding/draining various containers like your first two use-cases.


Julia Child's kitchen was famously functional with a peg board wall for quick access to hanging supplies. I could see her doing that!


Nice idea for helping convince! I too am for the functional stuff; I use a tension rod (like used for shower curtains sometimes) between cupboards, over the kitchen sink to hang S hooks off, and let a significant number of pots and pans hang to dry after washing.


Does anyone have any suggestions on something that can be used for threading a small needle with a very fine thread? Would need high accuracy and repeatability. Higher price would not be a problem.


I've built my own GELLO setup (the setup the author based its arm on) and it's quite neat, but as I only use it for teleoperation of a real arm, I wonder how useful this low cost arm really is, considering the limited range, probable backlash, and limited torque.

Also it doesn't seem to use springs like GELLO, which were a nice addition, although the 3D printed parts where the spring was mounted broke quickly.


Cool, how are you controlling it, VR (Quest) type setup or something else?


I want the arm to wander around my house, pick up articles of loose clothing and put them into washing machine. Then pull them out and spread them on a drying hanger. How far away from that are we?


That's it, isn't it. The question is not how far away from that we are, but when you and I can actually afford it. Because, as the other commenter snarkily replies, human maids already exist. The lifestyle of the singularity is already here for the rich. What AI robots will enable is trickling that kind of lifestyle down to the rest of us (with some amount of social upheaval).

Let's say the robot that can do that comes out next year for $15 million. Could you afford one? I certainly can't. So pretend that it does: what changes for you and me? Nothing. So the robots that can do that won't be used as robot maids until the price comes down. Which it will. Open source robotics and model-available AI will force things to be affordable sooner rather than later, because we'd all like a robot to do that for us. Along with being in the kitchen, doing dishes, cleaning up, cleaning the bathroom, doing yardwork, making my bed.

The industrial versions will be used to do hideously dangerous things: underwater welding, chainsaw helicoptering, manual nuclear reactor rod removal. We already use machines for a lot of those difficult/impossible tasks, it's just a matter of programming the robots.

Which takes us back to today. How far away from that are we? The pieces are already here. Between https://ok-robot.github.io/ and https://mobile-aloha.github.io/ the building blocks are here. It's just a matter of time before someone puts the existing pieces together to make said robot, the only question is who will be first to make it, who will be first to open source it. Who will make it not just possible, but affordable?


I think even more difficult than making it affordable will be making it reliable. OK-Robot says it has a 1/3 failure rate, and takes ~20x as long as a human, at which point you might as well do the task yourself. I'd want the error rate and speed improved by an order of magnitude before I'd consider it anything other than a fun novelty.


AI technology is currently advancing at an exponential rate. The question is whether there's a limit to what these technologies can do.


And as your error rate decreases, it gets exponentially more difficult to decrease it further. My guesstimate is that even ignoring the hardware issue, it would be at least a decade until we get AI capable of reaching human-level performance in household object manipulation (across a big enough class of household objects to be significantly useful in multiple tasks per day).


Google X was working on that with Everyday Robots (I used to work there) but they canceled the project. One of their old project leaders left and started Hello Robot, which is doing a much better job producing an actually useful thing. I think those robots are maybe $25k, but I’m not actually sure.


It has been here for several millennia... it even has two legs to walk around and two rotating cameras connected to a very powerful LLM...


Nah, that model is expensive and buggy, not to mention closed source.


Depends where you live... In some places those models are entirely affordable. But they will likely also get less and less affordable there.


most of those are trained on very flawed data, so in order to use them for anything non-trivial, you will have to invest a lot in fine-tuning, maybe even to the extent of calling it training


but it works. I can't believe that we get to choose from different robots.


Let's look at the economics.

In the US or Western Europe, a human worker would cost you about $12-$15 per hour (depending on the actual city and whether they're paying their taxes).

You're looking at roughly 4 hours of work per 100 square meters (the average housing size[1]) per week to get the listed activities done, plus some general cleaning.

So let's call it $60 per week or $3,000 per year. If we estimate the average useful lifetime of such a robot at 5 years, they'd need to cost less than $15k (unadjusted for inflation) to make sense.

This does not take into account that the house owner also would be paying for a small portion of the societal cost of this additional unemployed houseworker. If we assume that there are roughly 1 maid per 500 citizens[2] and that each unemployed worker costs roughly $20,000 to the State per year[3] then our back-of-the-napkin math says the robot worker is generating a socialized cost of $400 per year per household member (2.17 members on average).

So... we need a ~$14,132 fully-automatic, solar-charging bot before your dream can break even.

[1]: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8073340/
[2]: https://www.statista.com/statistics/1087472/number-maids-hou...
[3]: https://blogs.alternatives-economiques.fr/gadrey/2016/06/19/...


It's unreasonable to assume that reduced demand for home cleaning would translate one-for-one into unemployment and welfare. The bottom of the labor market doesn't work that way at all.


You have to tradeoff between privacy, automation and cost.

The scenario you described is tough to solve because of edge cases sort of like FSD.

If you want that work done then it's cheaper to hire a maid. It would be nice to have complete privacy and have a robot perform all those tasks flawlessly, but at that price point it would still be more economical to get a human to do it.

Perhaps you can get someone to drive the robot but that puts privacy at risk.

Same thing with sex robots: it is cheaper to hire a sex worker, so until something can get us past the uncanny valley I don't think we will see a robot revolution quite yet. The hardware alone is prohibitively expensive and there are not enough people tinkering at the problem (because hiring humans is always safer, easier and cheaper).


You're not going to get "complete privacy" (aka fully-offline local-model-driven) robots outside of building one yourself... There'd be just too much of an economic incentive to use this as a data collection platform.


Perhaps you could ask Twitch Chat to do that for you: https://youtu.be/uzWNgoYJLqM


Not so far for the first part: https://wholebody-b1.github.io/



Define "we". Professional SotA? Probably like 10 years. Open source? More like 100.


A $250 robotic arm is really a price consumers can reach. How does it perform in practical scenarios in terms of durability and precision? Is it functional?


How's the software side of this? How is the Dynamixel SDK to use?


obligatory shout out to Annin Robotics AR4. Here is my build: https://commandpattern.org/2023/03/19/ar4-robotic-arm-build/

$2K USD. 2kg payload. millimeter repeatability (if you build it well :)


Nice build! I always wanted one of those AR robot arms but I'd rather buy it off-the-shelf than assemble it from scratch. I don't think anyone sells them pre-built though. This type of 'high level hobbyist robotics' is in a bit of a business dead-zone unfortunately, as it's some combination of too small a market and not enough use cases to justify having a company around. Or maybe there's potential there but no one's thought of it yet.

Btw I bet your arm's movement smoothness can be improved with some different deceleration rates. It was really fun watching it sort those beans!

My research goal with one of these 'good enough' 1mm repeatability robots would be vision-guided adaptive control to complete tasks after seeing examples of them done via human manual control. Would be a really interesting ML/AI problem. Just need a reasonable hardware platform to get started. Right now I'm leaning more towards simple/smaller servo motors like in the OP of this post (plus there's the cost/time trade-off).


this is where my research is going. camera -> llm -> targets -> tasks -> ros2 arm :)


i just want a DIY robot arm (6DOF preferred) that can give me massages? is there a community around this?


Is there a video of it working anywhere?


I've published quite a few videos on Twitter: https://twitter.com/alexkoch_ai


This is fantastic! Thank you so much


It's cool but just reminds me of the robotic manipulation episode of big bang theory.


wow, that's really amazing. Are you a hardware engineer?


it's really cool!



