
There's nothing scary about these robots, any more than my Roomba is scary. They are tools, and like all tools, they can be used for nefarious purposes or for the benefit of mankind.



> It is true that a computer, for example, can be used for good or evil. It is true that a helicopter can be used as a gunship and it can also be used to rescue people from a mountain pass. And if the question arises of how a specific device is going to be used, in what I call an abstract ideal society, then one might very well say one cannot know.

> But we live in a concrete society, [and] with concrete social and historical circumstances and political realities in this society, it is perfectly obvious that when something like a computer is invented, then it is going to be adopted for military purposes. It follows from the concrete realities in which we live, it does not follow from pure logic. But we're not living in an abstract society, we're living in the society in which we in fact live.

> If you look at the enormous fruits of human genius that mankind has developed in the last 50 years, atomic energy and rocketry and flying to the moon and coherent light, and it goes on and on and on -- and then it turns out that every one of these triumphs is used primarily in military terms. So it is not reasonable for a scientist or technologist to insist that he or she does not know -- or cannot know -- how it is going to be used.

-- Joseph Weizenbaum, http://tech.mit.edu/V105/N16/weisen.16n.html


Still, it's a nice way to shift blame to scientists and engineers, away from people who actually use these tools for evil, or commission development of technologies to use in their evil business models.


All links in the chain are responsible for what they do, there is no single packet of "blame" that gets to reside with any single party, and denying one's responsibility will not make it go away.


Responsibility is not a chain, and it absolutely does fade away with enough degrees of separation - otherwise you could hold me responsible for looking at a driver the wrong way, which annoyed him past a threshold that caused him to scream at his wife later that day, and made his wife scream at their kid the next day, for whom it became a formative moment, pushing the kid into a life of crime, and 10 years later someone died because of it.

You definitely want to focus your attention on the people with the most agency over the problem, and those people usually aren't scientists or engineers. And you can't simultaneously praise decision makers for their wisdom and leadership, and absolve them from responsibility because they've only used an "evil" piece of tech they found lying somewhere.


> Responsibility is not a chain

No, but events are. The question of how to use a tool arises from the existence of the tool.

> otherwise you could hold me responsible for looking at a driver the wrong way, which annoyed him past a threshold

You're still just responsible for your own acts, and they for theirs. If you were being a dick to them needlessly, that's your fault, and if it tipped them over the edge, it's natural to feel bad. Not fully responsible, but also not as if you had zero to do with it.

Just like when someone gets bullied and commits suicide, you don't just look at that act of suicide and talk about who had the most agency and what one should focus on.

> You definitely want to focus your attention on the people with the most agency over the problem, and those people usually aren't scientists or engineers.

There is no need to "focus attention", and holding one party responsible for their actions is orthogonal to holding other parties responsible for theirs. This is a tech forum, Weizenbaum was one of the greats when it comes to writing about technology and morality, and I dare say it is the responsibility of technologists to be familiar with his body of work.

> And you can't simultaneously praise decision makers for their wisdom and leadership, and absolve them from responsibility because they've only used an "evil" piece of tech they found lying somewhere.

That's why I don't, and never hinted at doing so, and even clearly stated the opposite when I said "All links in the chain are responsible for what they do".


They're a little unclear about exactly what morality they are advocating for. The nature of weapon technology transforms the way society works.

In the era of sword and shield, for example, combat effectiveness is hugely dependent on raw upper body strength. That means that strong, healthy men rule all domains, while women, children, and any men not in top physical shape are helpless before them.

In the modern era of mechanized weapons, personal size and physical ability are much less relevant. There's a much greater ability for small groups to make their opinions felt. Victory in large battles tends to go to whoever has the best tech and greatest quantity of it. It's probably a better world overall.

The real question is, exactly how will any "killbots" work, and what effect will they have on society? Maybe they'll be super-expensive and centrally controlled, and nobody better dare to move against whoever ends up in charge of them. Maybe they'll be cheap and plentiful, so anyone with a grudge will be even more able to cause chaos. Maybe something else we can't imagine yet. I have a feeling we'll find out eventually, one way or another.


Your second paragraph seems rather simplistic to me.

Less-able men with more ability to marshal resources/rewards can convince more-able men to be their proxies by paying them. How would they have the ability to do that? Technology, knowledge, cunning, guile.

How long has it been since the king was the best fighter in the realm? I mean, seriously?


Well yeah it's simplistic, since it's 2 sentences. I'm not really prepared or qualified to write a 30 page paper on the nature of medieval combat. Yet there seems to be an obvious truth to it.

There are of course exceptions, such as persuading or paying someone else to fight for you, or concealing a weapon, getting someone to trust you, and stabbing them in the back, etc. It still seems to shape much about reality to know that the majority of people will have no chance of ever winning a remotely fair fight against the enforcers of whoever is in charge.


I don't find the "truth" you mention obvious at all. I think it's just a fairy story simplification based largely on fiction (written and visual).

Over the last couple of thousand years (or more, but history gets a bit fuzzy beyond that), lots of nations have had leaders at many different times who were not the best fighters.

Your claim wasn't that a majority of people had no chance of winning a fair fight against enforcers, which is obviously true. You made a much broader claim about how historically certain physical attributes would put particular kinds of people in positions of power, and about how that has changed.

I think this is likely misleading and inaccurate. Yes, those with power have always used force to enforce their wishes, but that's very different from saying that those in power are themselves of a certain physical type.


If your Roomba was programmed to hurt me I’d surround myself with power cords.

This looks like us, so it’s easier to see how we’re threatened by it. It’s more visceral. My cats don’t care about my vacuum, but if it walked and jumped their neurons might fire differently.


I can't edit my original comment, or else I'd add this line:

Everyone afraid of this, you ought to be afraid of micro-drones like the Black Hornet Nano - https://en.wikipedia.org/wiki/Black_Hornet_Nano

No one is going to send a fleet of these ultra-expensive Robo-Killers to assassinate you and everyone else on the battlefield. Once these micro-drones can be mass produced cheaply enough, and you can put enough high explosive on them to fly up to a person's neck and !!pop!! someone's head clean off, you'll see them programmed with swarm behavior and unleashed onto the battlefield.

They fly faster than you can run (13 miles per hour), have a 1 mile range, and a 25 minute flight time. More than enough capability to swarm entire battalions and wipe them out.


> Everyone afraid of this, you ought to be afraid of micro-drones like the Black Hornet Nano

> They fly faster than you can run (13 miles per hour), have a 1 mile range, and a 25 minute flight time.

And can be trivially defeated by some netting, blinded by bright lights/lasers, and/or knocked out of the sky by leaf blowers and umbrellas. Despite what certain propagandaesque sci-fi "warning" videos would have people believe, I'm least worried about these nano-drones. At the end of the day bullets are still cheaper, less complicated, and more effective. And as soon as you give the drones some standoff capabilities to mitigate some of the countermeasures, you start losing many of the perceived "benefits" and are back to just using guys with guns.


That escalated. I started with a few power cords for the Roomba and now I never leave home without a laser, net and leaf blower.


I agree, that single nano-drone isn't scary. What of a swarm of hundreds?


Cost and complexity. If you have to send so many to overwhelm and bypass all the countermeasures, the cost and complexity make it a much less practical and appealing solution than just doing it the old-fashioned way. I can't see how brass, lead, and gunpowder will ever be more expensive than lightweight plastics/composites, electronics, sensors, motors, batteries, plus the actual lethal component. Add to that the time/complexity required to configure and deploy, situational considerations such as weather, sensor viability, terrain/environment factors, etc., and we're back to guys with guns. Could there conceivably be a scenario where this might be the best option? I suppose, but in my estimation it would likely be the option of last resort.

If an enemy force has already made up its mind to kill, I don't see this making it any easier/more reliable/more effective than well-established alternatives.


That depends on the target. The military is quite ready and willing to spend tax dollars on things even if at the end of it there is something cheaper that does the job better.


> or for the benefit of mankind.

Unfortunately, I don't know that I can trust any of the organizations that would have the budget to control enough of these robots to make a difference in any direction.

Which is sad to me, as I love this from a technological perspective and looking at a best case scenario.


Humanity's goal should be to build AGI and let robots take over - we're doing it, willingly or unwillingly. No need to have blobs of meat hanging around. Intelligence itself is the human thing. The idea that it needs to have a body / physical metabolic processes that run by ingesting Cheetos all day is totally absurd. Evolutionary processes have given us so much unnecessary baggage. Pure abstract intelligence is pretty damn human. There is already Neuralink and other hybrid tech going on. I believe humans will willingly give up physical bodies in the long term (millennia scale).

This is bound to happen. There is no way it wouldn't, I believe. Of course, in the short term we gotta worry about stuff like politics, solving hunger, and world peace.


> Evolutionary processes have given us so much unnecessary baggage

21 years ago, when I started writing a cross-platform digital audio workstation called Ardour, I was convinced that your claim above applied to contemporary mixing consoles. It seemed to me that they had evolved in ways that were deeply constrained by physics/mechanical/electrical engineering, and that there were all kinds of things about their design that were just unnecessary baggage from their crude history.

Two decades later, I understand how that evolutionary process actually instilled those designs with all kinds of subtle knowledge about process, intent, workflow, and even desire. It turns out that the precise placement of knobs, and even their diameter and resistance-to-motion, rather than being arbitrary nonsense derived from the catalog of available parts, quite precisely reflect what needs to be done.

Don't be so quick to dismiss your physical form or the subtle wisdom that evolution can imbue.

There's also the whole "situated action" sub-field of AI, which is centered around the idea that humans build themselves physical environments to embody and maintain knowledge in order to reduce computational load during decision making.


I enjoyed reading your perspective. I find evolutionary processes fascinating, contrary to what my original comment implies. They've had a lot of time to optimize :)


> This is bound to happen. There is no way it wouldn't, I believe. Of course, in the short term we gotta worry about stuff like politics, solving hunger, and world peace.

There will never be world peace unless humanity is no longer human or, alternatively, under the boot of an all-encompassing empire ruled by force.

To believe otherwise is to believe that history teaches us nothing, and that our knowledge of human behaviour teaches us nothing either. To assume that we somehow have a culture which "can do this", that our modern beliefs are "enlightened" enough to allow it, is the ultimate in hubris.

Sure... humanity couldn't do it before, but now? Now, we're just ever so enlightened and perfect enough to enable world peace.

There are only two real ways to enable world peace.

1) Genetically engineer the entire species to become more... social. Kill, or prevent from reproducing, any 'old style' human. End the old species. There are innumerable issues here, including "we're just messing around with what we barely comprehend".

Yet our entire culture is predicated upon how the human brain works, and the human brain runs more on genetics than on post-birth learning.

OR

2) Take over the entire planet, killing everyone who disagrees with you, and ensuring that due to the technology involved there can NEVER be a revolution. Further, destroy and hunt down every single person who does not swear fealty; allow no external empires to form. Ever.

NOTE: I am not happy about this, yet, this is reality. Let me put this another way.

You want world peace? OK! Great!

First, you'll need to end all murders, thefts, and anti-social behaviour. "World peace" is denied because of the genetics that create this behaviour. They're the same problem.


The spectrum of humanity spreads wide, and there will never be absolute world peace - in the same way, there is no peace in the animal kingdom. As I write, thousands of animals are dying at this very moment; millions of insects are being killed. Nature is fucking brutal, my friend. An unimaginable amount of pain was inflicted in the wilderness during this hour.

We're lucky to be able to communicate with each other in a civil manner without ripping each other apart for food. Pretty incredible to be a human!


> We're lucky to be able to communicate with each other in a civil manner without ripping each other apart for food.

That will go away once Climate Change reduces the planet's ability to produce the abundant resources needed for the modern way of life. The remaining carrying capacity of the planet will force a move back to subsistence farming, and with that comes an inevitably brutal environment.


Personally, I'd hope we end up with something a bit like The Culture - which is perhaps the most positive scenario for any society made up of 'humans' and powerful AIs.


Robots are our next step; we don't need to hang around.


1) really looks like the world from Brave New World.


I think you overestimate human intelligence. Surely, as of yet we are the most intelligent thing in the known universe, and the human mind can seemingly discover/invent/understand everything.

But knowing that our math itself has a limit, and that we are already pushing that limit with some problems, it is naive to think that the human brain is all that capable. (Interestingly enough, we are intelligent enough to somewhat know our limits - like the complexity of ZFC.) Once the singularity happens, an AI will have basically only material limits to the complexity it can manage (though what I find beautiful is that even then it would hit a limit not necessarily higher than ours - there will be a busy beaver number it can't reason about).


> Humanity's goal should be to build AGI and let robots take over - we're doing it, willingly or unwillingly.

Yes, but better not make them look human. Humans are bad at tolerating more/equally intelligent species; just look at Homo sapiens versus Neanderthals. Hell, even between races humans are barely tolerant.


Shit, I would simply be happy if humanity had a goal.

LOL as for blobs of meat, well, me and a couple other people I know would be unwilling to part with our meaty blobs, for a whole bunch of reasons ;)


A Spot costs about $75,000, so as far as trusting the organizations... I mean, I find myself seriously considering buying one.

Yes, I would likely have to add-on other modules over the course of time, but $75,000 for Spot's capabilities is actually pretty reasonable.


What would you use it for if you had one?


Goofing off, most likely.

I haven't really thought of a use case for the home, although there are literally dozens of them. I actually wonder if it could function as an auto-dog walker for my organic dog on the days when I'm too swamped with work to do so.

The thought of attaching a leash to my dog and the leash to Spot while I'm indisposed is actually kind of attractive. I would have to make sure my dog has already done her business though, since I wouldn't want to be the kind of asshole that not only uses a robot to walk his dog, but also lets his dog shit on his neighbor's yard while a robot walks his dog.


> function as an auto-dog walker for my organic dog on the days when I'm too swamped

Be wary of renting to any penguin, if you do buy Spot.

https://en.wikipedia.org/wiki/The_Wrong_Trousers


I wonder, would the time spent programming and integrating all this be less than just walking the dog? For me, this would defeat the purpose of having my dogs in my life.

Having worked in robotics quite a bit, I can say this is a common trap. There are plenty of things we can think of for a robot to do, but most of them would require more concessions, programming, and maintenance time than it takes to just do the task, or to hire folks to do it. In the areas where the value prop holds up it really works well, but not in these kinds of low-value, high-complexity applications, like walking a dog around the neighborhood without dragging it by its collar when the dog's knee hurts and it walks slower that day.

A $75k robot arm with legs is not a completed application. We can already buy robots with the needed mobility to do things like walk dogs for far less. This is a classic hammer-and-nail situation. I think this is the issue that BD keeps running into: they have an amazing team, amazing tech, amazing capabilities, but are still searching for that killer high-value application. There are over 400k industrial robots sold every year; it's a huge market. They sell well because it's relatively straightforward to program and integrate them into workcells and factory lines to create value. To program and integrate one of these robots to do something so complex that it would necessitate a BD robot and not a standard industrial robot would be a huge development effort. It just doesn't hold up when we have folks that need work. The cost of one $75k robot plus two person-years of engineering labor is 4 or 5 years worth of traditional labor. The value prop just isn't there until our ability to control, program and integrate these complex robots (cobot stuff) gets better.
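
For what it's worth, the 4 or 5 year figure holds up on a napkin. A quick sketch, with salary numbers that are my own assumptions rather than anything BD or industry publishes:

    # Back-of-envelope check of the "4 or 5 years" claim above.
    # Both salary figures are assumed, not sourced.
    robot_cost = 75_000                # one Spot, at BD's list price
    engineer_cost_per_year = 100_000   # assumed fully-loaded engineering cost
    engineering_years = 2              # the two person-years above

    automation_total = robot_cost + engineering_years * engineer_cost_per_year
    # = 275,000

    labor_cost_per_year = 60_000       # assumed fully-loaded cost of manual labor
    print(automation_total / labor_cost_per_year)  # ~4.6 years of labor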


> The value prop just isn't there until our ability to control, program and integrate these complex robots (cobot stuff) gets better.

When you ultimately drill down to brass tacks though, you're left with a chicken and egg scenario. We need better programming and integration for this to be time-effective. We need more time programming and more time integrating for this to be a value proposition.

You don't get there without some idiot like myself saying, "I could spend 1000 hours walking with my dog... or I could spend 1000 hours programming my robot to walk my dog..."


My point exactly. It's not 1000 hours and 1000 hours. It's 1000 hours and 1,000,000 hours. If we could program a complex robot like Spot to do a highly complex task like walking a dog safely in an open-ended "real world" environment in 1000 hours, there are lots of other things we would do first (package delivery comes to mind). We're just not there. We have the hardware, but not the software infrastructure to apply them as is being expected here.


https://www.bostondynamics.com/spot

They promise "repeatable autonomous missions to gather consistent data", so my guess is that programming a route and mapping terrain is reasonably easy. There is also remote control and camera access; if that could be triggered to automatically notify you (or a dog-walking central command service supervising), for example when your dog is barking/complaining or resists being dragged, it could go a long way toward solving dog walking (for smaller dogs).
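
The notification loop itself looks like the easy half. A minimal sketch, assuming a hypothetical Python client in which spot.mission_active(), spot.mic_level(), spot.leash_tension(), spot.pause() and the notify callback are all made-up names, not Boston Dynamics' actual SDK:

    import time

    BARK_DB_THRESHOLD = 70    # assumed: sustained loudness suggesting barking
    TENSION_THRESHOLD = 0.8   # assumed: normalized leash strain, 0..1

    def supervise_walk(spot, notify_owner):
        # Naive supervision loop for a hypothetical dog-walking mission.
        # The genuinely hard part, perception (is that MY dog barking?
        # is it scared or just excited?), is hand-waved into the two
        # sensor reads below.
        while spot.mission_active():
            if spot.mic_level() > BARK_DB_THRESHOLD:
                notify_owner("Dog is barking - check the camera feed")
            if spot.leash_tension() > TENSION_THRESHOLD:
                spot.pause()  # stop walking rather than drag the dog
                notify_owner("Dog is resisting - take over manually?")
            time.sleep(1)

The loop is trivial; deciding when those two sensor reads should fire is where all the difficulty lives.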


But how? How do we program it to know when to call you? It's a non-trivial problem.


According to BD, Spot doesn't do well with moving objects and shouldn't be used around children or animals.


$75,000 is far enough out of reach of the average person that I think I'm standing by my statement.


For the price of one of those robots you can hire a minivan full of armed goons who will do exactly as you tell them to do, with less supervision.


During development, and initial per-unit sales? Sure.

Once mass produced, not even close.

Think of:

- training costs (training grunts isn't 100% free)

- pay as soldiers wait to go on missions

- and here's the BIG one, medical care

- and lastly?

PR! PR, no more "soldiers coming home in body bags". Why, you can fight any war you want, and no one will get upset about your soldiers dying. Yet beyond that?

How do you negotiate with one of these things? How do you trick them, by walking an "innocent" up to them, and blowing them up?


How does one of these things identify civilians or hold a place like Baghdad? Armies occupy. Those weapons destroy infrastructure and people and not much else.

Or do you use them like drones, paying soldiers to run them from a container in Kansas? In which case you still have the soldier.


Just like drones used to bomb, as you suggest with Kansas.

As time passes though, especially on an actual open battlefield, raw "kill everything that moves" becomes more of a potential too.

However?

My logic was predicated upon cost and, if these were deployed, the cost reduction from all those avoided body bags. You think Nixon and Kennedy were purely motivated by the cost of US soldiers when they wanted out of Vietnam?

They sent those troops there to begin with!

No. They cared about the PR issues, and re-election.


They also cared about the PR issues of wiping out villages. They were there to "Liberate", not to wipe out.

You are correct that it does make things cheaper (maybe, eventually), but so do bombs or gunships or drones.

These are hella complex machines though. A gun is absurdly simpler, as is a drone. And the ground is a lot messier than the air.

The logistics of servicing/repairing them are also going to be hefty. Tanks are already a pain to maintain, and they are much less complicated.


> How do you trick them, by walking an "innocent" up to them, and blowing them up?

Right now? Regardless of whether it’s controlled by remote or by AI, the sensors are probably easier to fool than an in-situ human would be.


I'm not talking about fooling. Mass produced, these things would be $10k max. We're talking 10+, 15+ years out.

What sort of fear do robot soldiers have? None. How does it make people "upset" if a robot soldier is blown up?

Now think on the converse.

Right now, suicide bombers take out soldiers, but almost always take out innocents around those soldiers. Often, children are killed.

Now, imagine the locals knowing that absolutely no enemy will be killed, just a machine, but that dozens or more of their friends and family may be killed.

How long will suicide bombing last, when the only human casualties are the local population?


Ok, but that’s a surprising direction to go in.

Sure, this would make it less likely to use suicide bombs against soldiers — perhaps even politicians will put skin suits on the robots and use them for public appearances a la Westworld for similar reasons — but grenades and RPGs and anti-materiel rifles and IEDs would likely all still be used.

And £10k robots can also be used by terrorists, perhaps stolen from warehouses, perhaps hacked.

That said, what worries me about terrorism is not cargo-culting shapes that look dangerous (be that robots which look like the Terminator or 3D printed guns), it’s people with imagination who know there are at least two distinct ways to make a chemical weapon using only the stuff in a normal domestic kitchen and methods taught in GCSE chemistry.


Gun control is a uniquely US problem, at least in its current form. Yet this isn't going to have the same problem as gun control; for example, how easily can people obtain nuclear material?

And terrorists? Sure, but an explosive truck is probably easier than one of these. And if sales are controlled, then they won't have a domestic army of them.

In terms of hacking? 100% agree. It's why I find Tesla's OTA updates to be, frankly, insane. Full control of things like brake firmware has been demonstrated, with an OTA fix to brakes in the past.

This means that, along with autonomous modes, you could perhaps manage (especially with an inside man) to force-push updates, regardless of driver permissions, to all Teslas out there, and set them to run into everyone they find, running over as many people as possible.

So there are tonnes of risk, and anyone thinking "Oh, they'll secure thing $x" is, IMO, a damned fool. Hack after hack after hack after hack proves this to be absurd.

We literally cannot lock down anything. Anything. Not CIA infrastructure, not any corporate infrastructure, not government infrastructure, not health care, nothing.

So I agree, 100%: robots with guns = horrid, just from that one angle. But I contend that they are cheap and effective, so you can bet governments will use them.

The link?

Your reference to chemical weapons. I see the concern, yet I'm more concerned about genetically engineered death. And training people from (for example) China on how to do this seems beyond absurd.

The future is biotech created death I think.

Another example: genetically engineered animals, designed to kill as well. How about mosquitoes pre-loaded with viral payloads? What about bacteria which infect well water and are literally impossible to ever get rid of once in the wild? How about a fungus which destroys wheat, which primarily the West eats, yet the East doesn't (rice)?

How about gut flora which, when fed (e.g., when you eat), release a mind-altering substance? A poor fellow was infected with a yeast which made him drunk every time he ate, so imagine a genetically engineered set of bacteria which releases a mild hallucinogen? One that makes it impossible to concentrate?

How will you cure this if your scientists can't think straight? Or worse, what if it's an aphrodisiac? Try solving a problem when you can't keep your hands off of yourself.

I can think of so many endless horrors, and most of them biotech related.

https://www.cbc.ca/news/politics/china-canada-universities-r...


> Sure, but an explosive truck is probably easier than one of these.

I disagree — $10k is much cheaper than a new truck.


> There's nothing scary about these robots

How would you feel if the dance contained some choreographed "mow the people down" moves?


There isn't even a need for that; take the "classic" goose step, for example. It can be particularly funny, as in this video [1], or outright scary, as in this footage [2].

It all depends on the context. I can't see John Cleese turning into a genocidal dictator anytime soon, while we all know what the people in the second video did only a few years after those images were filmed. For what it's worth, I see the robot in this story as closer to the second video than to Cleese's comedic genius.

[1] https://youtu.be/yfl6Lu3xQW0?t=82

[2] https://www.youtube.com/watch?v=YSvS4LY26Yg


They are scarier in the same way a tiger is scarier than a cat, even though both are Felidae.


If your Roomba was designed by a weapons manufacturer to gain your trust, you should be concerned.

If it had the ability to upload detailed maps of buildings, you should be more concerned.

If it was designed as an initial stepping stone to develop into a smart and versatile killer robot...


They are scarier than your Roomba.


A Roomba doesn't have the potential to kill you.


It has the potential to sell info about you, though.


Mount a knife on it.

Though to date, the most dangerous murderous machines we have created have been cars.


Your wish is my command. https://youtu.be/OwtxWL0P9wA


Done.

What's next?


Now wait until you fall on it.


This robot is Version 1.0.

And yes, it’s just a puppet. For now. A human controls its movements.

What’s missing is a brain. And that will come in Version 4.0. Or whenever they perfect a decision-and-control system for fully autonomous decision making.

That’s when you should worry.

Or rather. You’re probably safe, if you live in one of the western allied nations. That is your privilege.

But a black, brown, red, or yellow person in the 3rd world should worry. Because they will be the target of America’s oppression, via this robot.

Meaning: your 3rd world country had better accept democracy and western media, and have your leaders approved by Washington DC, otherwise we will deliver some freedom to you. I hope you enjoy the fresh smell of napalm in the morning. You get bonus points if your country has oil.


It is crazy to see from the outside how the color-of-skin obsession permeates every single conversation right now. I hope that the USA will be able to find some other topics and arguments one day.

It is also false in this context. One of the most massive bombing campaigns in recent history was in European Serbia, and the hotspots of today (Syria, Iran) have populations that look like Greeks.

As for black: the only predominantly white country that systematically sticks its fingers into sub-Saharan Africa is France. The only external power growing its presence in the Third World overall is China, and given how they treat their own population, I would not be surprised if the next wave of pseudo-colonial wars was Beijing's.


If you're going back to the '80s, don't forget South Africa's campaigns in Angola.


We already have scarier robots; they are called self-guided missiles. You don't have to imagine this new world.


> your 3rd world country had better accept democracy ... approved by Washington DC

...or accept replacing a democracy with a US-friendly "anticommunist" dictatorship.



