Unfortunately, I don't know that I can trust any of the organizations that would have the budget to control enough of these robots to make a difference in any direction.
Which is sad to me, because I love this from a technological perspective, and the best-case scenario is genuinely exciting.
Humanity's goal should be to build AGI and let robots take over - we're doing it, willingly or unwillingly. There's no need to have blobs of meat hanging around. Intelligence itself is the human thing; the idea that it needs a body and physical metabolic processes that run by ingesting Cheetos all day is totally absurd. Evolutionary processes have given us so much unnecessary baggage. Pure abstract intelligence is pretty damn human. There is already Neuralink and other hybrid tech going on. I believe humans will willingly give up physical bodies in the long term (on a millennia scale).
This is bound to happen; there is no way it wouldn't, I believe. Of course, in the short term we gotta worry about stuff like politics, solving hunger, and world peace.
> Evolutionary processes have given us so much unnecessary baggage
21 years ago, when I started writing a cross-platform digital audio workstation called Ardour, I was convinced that your claim above applied to contemporary mixing consoles. It seemed to me that they had evolved in ways that were deeply constrained by physics and mechanical/electrical engineering, and that there were all kinds of things about their design that were just unnecessary baggage from their crude history.
Two decades later, I understand how that evolutionary process actually instilled those designs with all kinds of subtle knowledge about process, intent, workflow, and even desire. It turns out that the precise placement of knobs, and even their diameter and resistance-to-motion, rather than being arbitrary nonsense derived from the catalog of available parts, quite precisely reflect what needs to be done.
Don't be so quick to dismiss your physical form or the subtle wisdom that evolution can imbue.
There's also the whole "situated action" sub-field of AI, which is centered around the idea that humans build themselves physical environments to embody and maintain knowledge in order to reduce computational load during decision making.
I enjoyed reading your perspective. I find evolutionary processes fascinating, contrary to what my original comment implies. Evolution has had a lot of time to optimize :)
> This is bound to happen; there is no way it wouldn't, I believe. Of course, in the short term we gotta worry about stuff like politics, solving hunger, and world peace.
There will never be world peace unless humanity is no longer human or, alternatively, under the boot of an all-encompassing empire ruled by force.
To believe otherwise is to believe that history teaches us nothing, and that our knowledge of human behaviour teaches us nothing either. To assume that we somehow have a culture which "can do this", that our modern beliefs are "enlightened" enough to allow this, is the ultimate in hubris.
Sure... humanity couldn't do it before, but now? Now, we're just ever so enlightened and perfect enough to enable world peace.
There are only two real ways to enable world peace.
1) Genetically engineer the entire species to become more... social. Kill, or prevent from reproducing, any 'old style' human. End the old species. There are innumerable issues here, including "we're just messing around with what we barely comprehend".
Yet our entire culture is predicated upon how the human brain works, and the human brain runs more on genetics than on post-birth learning.
OR
2) Take over the entire planet, killing everyone who disagrees with you, and ensuring that, due to the technology involved, there can NEVER be a revolution. Further, destroy and hunt down every single person who does not swear fealty; allow no external empires to form. Ever.
NOTE: I am not happy about this, yet this is reality. Let me put this another way.
You want world peace? OK! Great!
First, you'll need to end all murders, thefts, all anti-social behaviour. "World peace" is denied because of the genetics that create this behaviour. They're the same problem.
The spectrum of humanity is wide, and there will never be absolute world peace - in the same way, there is no peace in the animal kingdom. As I write, thousands of animals are dying at this very moment; millions of insects are being killed. Nature is fucking brutal, my friend. An unimaginable amount of pain has been inflicted in the wilderness during this hour alone.
We're lucky to be able to communicate to each other in civil manner without ripping each other apart for food. Pretty incredible to be a human!
>We're lucky to be able to communicate to each other in civil manner without ripping each other apart for food.
That will go away once climate change reduces the ability of the planet to produce the abundant resources needed for the modern way of life. The remaining carrying capacity of the planet will force a move back to subsistence farming, and with that comes the inevitable brutal environment.
Personally, I'd hope we end up with something a bit like The Culture - which is perhaps the most positive scenario for any society made up of 'humans' and powerful AIs.
I think you overestimate human intelligence. Sure, as of yet we are the most intelligent thing in the known universe, and the human mind can seemingly discover/invent/understand anything.
But knowing that our math itself has a limit, and that we are already pushing that limit with some problems, it is naive to think that the human brain is all that capable. (Interestingly enough, we are intelligent enough to somewhat know our own limits - like the complexity of ZFC.)
Whereas once the singularity happens, an AI will have basically only material limits on the complexity it can manage (though what I find beautiful is that even it would hit a limit, not necessarily higher than ours - there will be a Busy Beaver number it can't reason about).
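For the curious, the rough fact behind that aside - stated loosely, not quoting any particular theorem - is that the Busy Beaver function eventually outgrows every computable function:

    % Busy Beaver outgrows every computable function f
    \forall f \text{ computable},\ \exists N\ \text{such that}\ \forall n > N :\ \mathrm{BB}(n) > f(n)

Relatedly, any fixed consistent formal system can only ever determine finitely many values of BB, which is one precise sense in which both humans and a post-singularity AI hit the same kind of wall.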
> Humanity's goal should be to build AGI and let robots take over - we're doing it, willingly or unwillingly.
Yes, but better not make them look human. Humans are bad at tolerating more/equally intelligent species; just look at Homo sapiens versus Neanderthals. Hell, even between races humans are barely tolerant.
I haven't really thought of a use case for the home, although there are literally dozens of them. I actually wonder if it could function as an auto dog-walker for my organic dog on the days when I'm too swamped with work to do it myself.
The thought of attaching a leash to my dog and the leash to Spot while I'm indisposed is actually kind of attractive. I would have to make sure my dog has already done her business though, since I wouldn't want to be the kind of asshole that not only uses a robot to walk his dog, but also lets his dog shit on his neighbor's yard while a robot walks his dog.
I wonder, would the time spent programming and integrating all this be less than just walking the dog? For me, this would defeat the purpose of having my dogs in my life.
Having worked in robotics quite a bit, I can say this is a common trap. There are plenty of things we can think of for a robot to do, but most of them would require more concessions, programming, and maintenance time than it takes to just do the task, or to hire folks to do it. In the areas where the value prop holds up, it really works well, but it doesn't in these kinds of low-value, high-complexity applications - like walking a dog around the neighborhood without dragging it by its collar when the dog's knee hurts and it walks slowly that day.
A $75k robot arm with legs is not a completed application. We can already buy robots with the mobility needed to do things like walk dogs for far less. This is a classic hammer-looking-for-a-nail situation. I think this is the issue that BD keeps running into: they have an amazing team, amazing tech, amazing capabilities, but are still searching for that killer high-value application. There are over 400k industrial robots sold every year; it's a huge market. They sell well because it is relatively straightforward to program and integrate them into workcells and factory lines to create value. To program and integrate one of these robots to do something so complex that it would necessitate a BD robot and not a standard industrial robot would be a huge development effort. It just doesn't hold up when we have folks who need work. The cost of one $75k robot plus two person-years of engineering labor is 4 or 5 years' worth of traditional labor. The value prop just isn't there until our ability to control, program and integrate these complex robots (cobot stuff) gets better.
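A rough back-of-envelope on that last claim; the $75k robot and the two engineer-years are from above, but the salary figures are my own assumptions, picked only to illustrate the shape of the math:

    # Back-of-envelope only: the salary figures below are assumptions for illustration.
    robot_cost = 75_000            # one BD-class robot, per the comment above
    engineer_years = 2             # "two person years of engineering labor"
    engineer_year_cost = 150_000   # assumed fully loaded cost of one engineer-year
    laborer_year_cost = 80_000     # assumed fully loaded cost of one year of traditional labor

    total_robot_solution = robot_cost + engineer_years * engineer_year_cost  # 375,000
    print(total_robot_solution / laborer_year_cost)  # ~4.7 -> roughly "4 or 5 years" of labor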
> The value prop just isn't there until our ability to control, program and integrate these complex robots (cobot stuff) gets better.
When you ultimately drill down to brass tacks though, you're left with a chicken and egg scenario. We need better programming and integration for this to be time-effective. We need more time programming and more time integrating for this to be a value proposition.
You don't get there without some idiot like myself saying, "I could spend 1000 hours walking with my dog... or I could spend 1000 hours programming my robot to walk my dog..."
My point exactly. It's not 1000 hours and 1000 hours. It's 1000 hours and 1,000,000 hours. If we could program a complex robot like Spot to do a highly complex task like walking a dog safely in an open-ended "real world" in 1000 hours, there are lots of other things we would do first (package delivery comes to mind). We're just not there. We have the hardware, but not the software infrastructure to apply it as is being expected here.
They promise "repeatable autonomous missions to gather consistent data", so my guess is that programming a route and mapping terrain is reasonably easy. There is also remote control and camera access; if that could be triggered to automatically notify you (or a supervising dog-walking central-command service), for example when your dog is barking/complaining or resists being dragged, it could go a long way toward solving dog walking (for smaller dogs).
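A minimal sketch of what that trigger-and-notify loop might look like; every function here is a hypothetical placeholder, not anything from the actual Spot SDK:

    import time

    # Hypothetical stand-ins for whatever telemetry the robot actually exposes.
    def get_audio_level() -> float:
        return 0.0  # placeholder: 0.0 (quiet) .. 1.0 (loud barking)

    def get_leash_tension() -> float:
        return 0.0  # placeholder: 0.0 (slack leash) .. 1.0 (dog pulling back hard)

    def notify_supervisor(message: str) -> None:
        print("ALERT:", message)  # placeholder: push notification or central-command queue

    def supervise_walk(check_interval_s: float = 1.0) -> None:
        # Pause and alert a human whenever the dog complains or resists being led.
        while True:
            if get_audio_level() > 0.8 or get_leash_tension() > 0.9:
                notify_supervisor("Dog barking or resisting; pausing route, opening camera feed.")
                break
            time.sleep(check_interval_s)

Whether that ever beats just walking the dog yourself is, of course, the whole point of the thread.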
During development, and initial per-unit sales? Sure.
Once mass produced, not even close.
Think of:
- training costs (training grunts isn't 100% free)
- pay as soldiers wait to go on missions
- and here's the BIG one, medical care
- and lastly?
PR! PR, no more "soldiers coming home in body bags". Why, you can fight any war you want, and no one will get upset about your soldiers dying. Yet beyond that?
How do you negotiate with one of these things? How do you trick them, by walking an "innocent" up to them, and blowing them up?
How does one of these things identify civilians or hold a place like Baghdad? Armies occupy. Those weapons destroy infrastructure and people and not much else.
Or do you use them like drones, paying soldiers to run them from a container in Kansas? In which case you still have the soldier.
Just like drones used to bomb, as you suggest with Kansas.
As time passes though, especially on an actual open battlefield, raw "kill everything that moves" becomes more of a potential too.
However?
My logic was predicated upon cost and, if implemented, the cost reduction from avoiding all those body bags. You think Nixon and Kennedy were purely motivated by the cost of US soldiers when they wanted out of Vietnam?
They sent those troops there to begin with!
No. They cared about the PR issues, and re-election.
Sure, this would make it less likely that suicide bombs get used against soldiers (perhaps politicians will even put skin suits on the robots and use them for public appearances, a la Westworld, for similar reasons), but grenades and RPGs and anti-materiel rifles and IEDs would likely all still be used.
And £10k robots can also be used by terrorists, perhaps stolen from warehouses, perhaps hacked.
That said, what worries me about terrorism is not cargo-culting shapes that look dangerous (be that robots which look like the Terminator or 3D printed guns), it’s people with imagination who know there are at least two distinct ways to make a chemical weapon using only the stuff in a normal domestic kitchen and methods taught in GCSE chemistry.
Gun control is a uniquely US problem, at least in its current form. Yet this isn't going to have the same problem as gun control: for example, how easily can people obtain nuclear material?
And terrorists? Sure, but an explosive truck is probably easier than one of these. And if sales are controlled, then they won't have a domestic army of them.
In terms of hacking? 100% agree. It's why I find Tesla's OTA updates to be, frankly, insane. Full control of things like brake firmware has been demonstrated, with an OTA fix to brakes in the past.
This means that, along with the autonomous modes, you could perhaps manage (especially with an inside man) to force-push updates, regardless of driver permissions, to all Teslas out there. And set them to run into everyone they find, just running over as many people as possible.
So there are tonnes of risks, and anyone thinking "Oh, they'll secure thing $x" is, IMO, a damned fool. Hack, after hack, after hack, after hack, proves this to be absurd.
We literally cannot lock down anything. Anything. Not CIA infrastructure, not any corporate infrastructure, not government infrastructure, not health care, nothing.
So I agree, 100%, robots with guns = horrid, just from that one angle. But I contend that they are cheap, and effective, so you can bet governments will use them.
The link?
Your reference to chemical weapons. I see the concern, yet I'm more concerned about genetically engineered death. And training people from (for example) China on how to do this seems beyond absurd.
The future is biotech-created death, I think.
Another example: genetically engineered animals, designed to kill as well. How about mosquitoes pre-loaded with viral payloads? What about bacteria which infect well water and are literally impossible to ever get rid of once in the wild? How about a fungus which destroys wheat, which primarily the West eats, while the East mostly eats rice?
How about gut flora/fauna which, when fed (e.g., when you eat), releases a mind-altering substance? A poor fellow was infected with a yeast that made him drunk every time he ate, so imagine a genetically engineered set of bacteria which releases a mild hallucinogen? One that makes it impossible to concentrate?
How will you cure this, if your scientists can't think straight? Or worse, what if it's an aphrodisiac? Let's try to solve a problem, when you can't keep your hands off of yourself.
I can think of so many endless horrors, and most of them biotech related.