Two things interest me about Claude being better than GPT-4:
1) We are all breathless that it is better. But a year has passed since GPT4. It’s like we’re excited that someone beat Usain Bolt’s 100 meter time from when he was 7. Impressive, but … he’s twenty now, has been training like a maniac, and we’ll see what happens when he runs his next race.
2) It’s shown AI chat products have no switching costs right now. I now use mostly Claude and pay them money. Each chat is a universe that starts from scratch, so … very easy to switch. Curious if deeper integrations with my data, or real chat memory, will change that.
The current version of GPT-4 is 3 months old, not 1 year old. Anthropic are legitimately ahead on performance for cost right now, but I don’t think their API latency matches OpenAI’s.
We’ll see what GPT-4.5 looks like in the next 6 months.
I don't think it's just that Claude-3 seems on par with GPT-4, but rather the development timescales involved.
Anthropic as a company was only created, with some of the core LLM team members from OpenAI, around the same time GPT-3 came out (Anthropic CEO Dario Amodei's name is even on the GPT-3 "few-shot learners" paper). So, roughly speaking, in the same time it took OpenAI (a big established company, with lots of development momentum) to go from GPT-3 to GPT-4, Anthropic have gone from a start-up with nothing to Claude-3 (via 1 & 2), which BEATS GPT-4. Clearly the pace of development at Anthropic is faster than that at OpenAI, and there is no OpenAI magic moat in play here.
Sure GPT-4 is a year old at this point, and OpenAI's next release (GPT-4.5 or 5) is going to be better than GPT-4 class models, but given Anthropic's momentum, the more interesting question is how long it will take Anthropic to match it or take the lead?
Inference cost is also an interesting issue... OpenAI have bet the farm on Microsoft, and Anthropic have gone with Amazon (AWS), who have built their own ML chips. I'd guess Anthropic's inference cost is cheaper, maybe a lot cheaper. Can OpenAI compete with the cost of Claude-3 Haiku, which is getting rave reviews? Its input tokens are crazy cheap - $300 to input every word you'll ever speak in your entire life!
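That $300 figure holds up as a back-of-envelope calculation. All three inputs below are my own assumptions, not from the comment: Haiku's launch price of $0.25 per million input tokens, a commonly cited rough estimate of ~860 million words spoken in a lifetime, and ~1.3 tokens per English word:

```python
# Back-of-envelope check of the "$300 for a lifetime of speech" claim.
# Every constant here is an assumption for illustration.
PRICE_PER_MILLION_INPUT_TOKENS = 0.25   # USD, Claude 3 Haiku at launch
LIFETIME_SPOKEN_WORDS = 860_000_000     # rough popular estimate
TOKENS_PER_WORD = 1.3                   # typical for English text

lifetime_tokens = LIFETIME_SPOKEN_WORDS * TOKENS_PER_WORD
cost = lifetime_tokens / 1_000_000 * PRICE_PER_MILLION_INPUT_TOKENS
print(f"${cost:.0f}")  # roughly $280, in the ballpark of the $300 figure
```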
Claude may beat GPT-4 right now, but I remember ChatGPT in March 2023 being leagues better. Over the past year, it’s regressed, but gotten faster.
Claude is also lacking web browsing and code interpreter. I’m sure those will come, but where will GPT be by then? ChatGPT also offers an extensive free tier with voice. Claude’s free plan caps you at a few messages every few hours.
Of course GPT-next should take the lead for a while, but with Anthropic, from a standing start, putting out 3 releases in the same time it took OpenAI to put out 1, how long is this lead going to last?
It'll be interesting to see if Anthropic choose to match OpenAI feature-for-feature or just follow their own path.
Yeah, it's a good point, but I think that our intuitions are different on this one. I don't have a horse in this race, but my assumption is that the next OpenAI release will be a massive leap that makes GPT-4/Claude 3 Opus look like toys. Perhaps you're right though, and Anthropic's curve will bend upward even more quickly, so that they keep catching up faster until eventually they're ahead.
Honestly who knows, but outside of Q-star rumors there's no indication that either company is doing anything much different from the other one, so I'd not expect any long-lasting difference in capability to open up. Maybe it will, though!
FWIW, Sam Altman has fairly recently said that the jump from GPT-4 to GPT-5 will be similar to that from GPT-3 to GPT-4, and also (recent Lex Fridman interview) that their goal is explicitly NOT to have releases that are shocking - rather, they want incremental capability gains that give society time to adapt. Could be misdirection - who knows.
Amodei for his part has said that what Anthropic will release in 2024 will be a "sharper, more refined" (or words to that effect) version of what they have now, and not a "reality bender" (which he seemed to be implying maybe is coming, but not for a year or two).
They're comparing against gpt-4-0125-preview, which was released at the end of January 2024. So they really are beating the market leader for this test.
What matters here is what I can use today. I can either use Claude 3 or GPT-4. If Claude is better, it is the best on the market. Let’s see what the story is tomorrow.
Go ahead, no one is saying to stay with GPT4. But it's disingenuous to compare a gpt-4-march-update to a completely new pretrained model like Claude 3 Opus.
It is not that disingenuous. We can only make claims based on the current data.
There can be even bigger competitors in the market, but because they stay quiet and do not publish results, we do not know about their capabilities. Who knows what Apple has been doing all this time? They sure have capabilities. Even if they make some random comments about the use of Gemini.
Until the data and proof have been provided, it is accurate to claim "the best model on the market". Everything else is hypothetical.
So you think whatever process produces a GPT4 update is completely equivalent to pretraining and RLHF'ing a brand new model with new architecture, more data, etc??
ChatGPT does have at least a year head start so this doesn't seem surprising. This proves that OpenAI doesn't really have any secret sauce that others can't reproduce.
I suppose size will become the moat eventually but atm it looks like it could become anyone's game.
Size is absolutely not going to become the moat unless there's some hardware revolution that makes running big models very very cheap, but that requires a very large up-front capital cost to deploy. Big models are inefficient, and as smaller models improve there will be very few use cases where the big models are worth the compute.
I imagine that going forward, the typical approach would be a multi-level LLM, such that there's a relatively small and quick model in front of the user, which can in turn decide to consult an "expert" larger model as part of its "system 2".
Absolutely, that is 100% the way things are going to go. What's going to happen is that eventually there will be an online model directory that a local agent knows how to query to identify other models to call in order to build up an answer. Local agents will be empowered with online learning since it won't be possible to pre-train on the model catalog.
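The two-tier idea above can be sketched in a few lines. Everything here is made up for illustration - the function names, the toy heuristic standing in for a real model, and the 0.8 escalation threshold:

```python
def small_model(query: str) -> tuple[str, float]:
    """Hypothetical cheap front model: returns (answer, self-reported confidence).
    A length heuristic stands in for a real model call."""
    if len(query.split()) < 8:
        return ("quick answer", 0.9)
    return ("unsure", 0.3)

def large_model(query: str) -> str:
    """Hypothetical expensive 'expert' model."""
    return "expert answer"

def answer(query: str) -> str:
    # The small model handles the query directly when confident,
    # and escalates to the expert model (its "system 2") otherwise.
    draft, confidence = small_model(query)
    if confidence >= 0.8:   # escalation threshold is an arbitrary assumption
        return draft
    return large_model(query)

print(answer("What is 2 + 2?"))  # prints "quick answer"
```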
> as smaller models improve there will be very few use cases where the big models are worth the compute
I see very little evidence of this so far. The use cases I'm interested in just barely work on GPT-4, and lesser models give mostly garbage. I.e. function calling and inferring stuff like SQL queries. If there are smaller models that can do passable work on such use cases, I'd be very interested to know.
Claude Haiku can do a LOT of the things you'd think you need GPT4 for. It's not as good at complex code and really tricky language use/abstractions, but it's very close for more superficial things, and you can call haiku like 60 times for each gpt4 call.
I bet you could do multiple prompt variations with haiku and then do answer combining to compete with GPT4-T/Opus at a fraction of the price.
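A minimal sketch of that answer-combining idea - simple majority voting over several cheap calls. `call_haiku` is a hypothetical stand-in, not the real Anthropic client:

```python
from collections import Counter

def call_haiku(prompt: str) -> str:
    """Hypothetical stub for a cheap-model API call; not a real client."""
    ...

def majority_vote(answers: list[str]) -> str:
    """Return the most common answer among the candidates (case-insensitive)."""
    normalized = [a.strip().lower() for a in answers]
    winner, _count = Counter(normalized).most_common(1)[0]
    return winner

# e.g. several cheap calls on prompt variants instead of one expensive call:
# answers = [call_haiku(variant) for variant in prompt_variants]
print(majority_vote(["Paris", "paris", "Lyon"]))  # prints "paris"
```

This only works cleanly for short, comparable answers; for free-form output you'd need a smarter combining step, but the cost math (many Haiku calls per GPT-4 call) is the point.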
Interesting! I just discovered that Anthropic indeed officially support commercial API access in (at least) some EU countries. They just don't support GUI access in all those countries:
> We are all breathless that it is better. But a year has passed since GPT4. It’s like we’re excited that someone beat Usain Bolt’s 100 meter time from when he was 7.
Sounds like some sort of siding with closedAI (OpenAI). When I need to use an LLM, I use whatever performs the best at the moment. It doesn’t matter to me who’s behind it; at the moment it is Claude.
I am not going to stick to ChatGPT just because closedAI have been pioneers or because their product was once one of the best.
I hope I didn’t sound too harsh, excuse me in that case.
Is this supposed to be clever? It's like saying M$ back in the 90s. Yeah, OpenAI doesn't deserve its own name, but maybe we can let that dead horse rest.
Claude has way too many safeguards for what it believes is correct to talk about and what isn’t. Not saying ChatGPT is better, it also got dumbed down a lot, but Claude is very heavy on being politically correct on everything.
Ironically the one I find the best for responses currently is Gemini Advanced.
I agree with you that there is no switching cost currently, I bounce between them a lot
If you’re on macOS, give BoltAI[0] a try. Other than supporting multiple AI services and models, BoltAI also allows you to create your own AI tools. Highlight the text, press a shortcut key, then run a prompt against that text.
I use an app called MindMac for macOS that works with nearly "all" of the APIs. I currently am using OpenAI, Anthropic and Mistral API keys with it, but it seems to support a ton of others as well.
MSFT trying to hedge their bets makes it seem like there's a decent chance OpenAI might have hit a few roadblocks (either technical or organizational)
I agree with your analogy. Also, there is quite a bit of "standing on the shoulders of giants" going on. Every company's latest release will/should be a bit better than the models released before it. AI enthusiasts are getting a bit annoying - "we got a new leader boys!!!!*!" for each new model released.
GRUB is a common bootloader for Linux systems: it gives you a menu of boot options when you turn on your machine, and boots whichever installed operating system you choose.
So with this theme, that menu for choosing which OS to boot looks like the Minecraft menu!
"The Art of Action". It has changed how I approach everything, not just how I run my business. It applies approaches used in militaries to organize action around the leader's intent: taking action in the right general direction, communicating the "what" (the intent), and delegating the "how" of the way it is actually achieved.
My best friend's dad was high up in Woodstock 99 corporate, which means he was responsible for making much of it run (yeah, I guess there were some issues). We had backstage passes, a house, and a ... weird experience. Did some stupid things. Some highlights:
- We were on stage at the famous closing Red Hot Chili Peppers show (like in the wings, looking out on the audience from behind the band). That was surreal. We could see the fires start in the distance, stuff getting ripped down, things starting to sort of go to hell. All the while, the most amazing band of the moment is playing the most amazing songs 30 feet away from us. Quite a contrast between awesomeness and ... whatever the strange mix of emotions riots give.
- We learned that there was no Mountain Dew in the whole event because Coke got the contract, so we went out and bought many cases, drove it in through our special entrance, and basically auctioned it off. People were willing to pay $5 a can. We made a lot of money. Pretty sketchy.
- Worse/weirder, we had backpacks full of ice to keep the Mountain Dew cold while we walked around the crowd selling it, and people started to offer to buy ice from us to cool off. So once we ran out of Mountain Dew, we started yelling "Ice is nice! We got ice!" and selling it. That ... well, I feel like that was me experiencing a real microcosm of capitalism and the allure of artificial scarcity ... and not acting the way I would hope. Give away the damn ice, man.
- We kind of just walked around backstage, and there were so many super famous people that none of them felt very special, and they'd just kind of talk to you while you were in line for food. Ice Cube had some cool sneakers that my friend chatted with him about. George Clinton was chill. I think we talked with Erykah Badu for a while at some point. (Stars are not at all like this backstage at a normal concert, btw, which we also did a lot because of his father. At their normal concerts, these same performers are the center of the universe, and don't have time to chat with a bunch of high schoolers running around.)
Anyway, it was a very strange event, and we had a very strange vantage point.
Honestly, no. Your type of attitude is how bad products get made: products that cater to the absolute lowest tech-illiterate user. Products that hide and simplify every useful tool to the point of annoyance and dysfunction. GitHub's warnings are plenty enough. It doesn't matter what GitHub did; this post would still be made, and you people would still be thinking of even more absurd ways to stop the user from doing something. At a certain point, a tool has to do the function you asked it to do.
It was really fun, but maybe throw in a few combos closer in weight? I played for maybe 5 minutes and didn't get any wrong. I would have played longer if I got some wrong.
Oy. Yes, there are legit uses. A few straightforward examples:
1) You're implementing boards (like Trello) on mobile. The well-established-in-all-the-apps-that-specialize-in-this behavior is snap scrolling.
2) Tiktok/stories/etc. When you scroll up or right, does it ever land you in between two stories? No. This is the exact behavior.
3) Presentations. Say you are building a presentation tool. The whole point is to define separate slides that are on screen one at a time. This is a very natural and appropriate use of this behavior.
So, yeah, carousels suck. But, this behavior is commonly used and useful in a lot of modern software.
I know this is pedantic, but I think your point applies to 10,000 years ago :) 1000 years ago many empires had already risen and fallen in China and it was a centralized agricultural civilization. Not an expert.
Disagree! The article explains something extremely cool: that there’s a layer in the ocean that acts like a megaphone, where sound that passes into it gets caught and propagates way further. That’s cool and specific.
Yeah, I was thinking about the old fashioned cheerleading things, without any electricity involved. I even thought about being explicit in case someone was going to be pedantic ... there’s even an emoji for it
Her sense of humor is captured by her choice of the song “Do you love me”, which is a winking but still charming reference to the fact that this is a PR stunt to get us to love scary robots.
Dancers and choreographers are poor, and struggle to build careers. A public acknowledgment of something of this magnitude would be so impactful, and would require so little from Boston Dynamics. I wish they would do it.
They could even brand themselves as helping artists; it could be good for their image.
I wrote the article; thanks for this info. Since your sister has posted this publicly on her Twitter account, I'll happily add her name to our article if I can get in touch with her/BD to confirm. [Edited to add that we'll contact her before doing so.]
Wow this is extremely wholesome and coincidental. Glad to see the artists are going to be getting the credit they deserve, pretty crazy to see this transpire over HN comments!
I would love to ask a question. In the article they say the following:
> We definitely learned not to underestimate how flexible and strong dancers are—when you take elite athletes and you try to do what they do but with a robot, it’s a hard problem. It’s humbling. Fundamentally, I don’t think that Atlas has the range of motion or power that these athletes do, although we continue developing our robots towards that, because we believe that in order to broadly deploy these kinds of robots commercially, and eventually in a home, we think they need to have this level of performance.
I would love to ask your sister: if Atlas was a human, what age would it be in terms of skill as a dancer? I think this is an interesting parallel to what people say: "This computer is as smart as a 2 year old."
As a parent of small kids who just watched the video, I would say 2–3 year old seems about right, assuming kids who do a lot of running around. 3-year-olds can move with significantly more grace (the motions are more intentional, fluid, subtle, and steady, and seem less scripted) than these robots, but definitely can't do a jump precisely the same 10 times in a row, so it's not really apples to apples.
Maybe age isn't the right measure. I would say that these robots move like a 3-year-old kid who has been practicing a particular motion sporadically for a month or two, but not like an active 3-year-old who has been practicing the same motion for a year.
If you compare to the kids who do martial arts or gymnastics or some serious sport or whatever, by age 6 or 7 the kids are leaving these robots in the dust. (The robots are still amazing though.)
I think your estimate of 6-7 is closer to right. I'm not a parent but as a ballet dancer I get involved in a lot of Nutcrackers, which are showcases for entire schools at all levels.
The 5-and-under crowd just generally swarms around the stage, with little attempt at "dance". The 6-8 year olds are beginning to dance, at about this level -- though I would say I saw a few movements by the robots that were remarkably expressive.
It's a bell curve, and there is certainly a right tail of kids who are far better than these robots at age 6-8. The median, however, is about on par.
Oh, there is no way you could get a typical untrained 5 year old to follow this whole choreographed routine.
I am just talking about how smooth and graceful the individual motions are. By age 3 kids who practice something a lot (say, running around barefoot at the playground) are starting to get pretty fluent at it.
The kids have a lot more sensory input, a much more subtle and refined musculoskeletal system with a whole ton of tiny stabilizer muscles, and a pretty impressive neural architecture for learning and refining motions, compared to these robots.
(Which again, is not to criticize the robots, which are also amazing! It is hard to beat 600 million years of animal evolution.)
I was really impressed at how graceful the machines were.
It's not the first time I've noticed that; drones can also be quite graceful. But the dynamic motion of that pendulum is such a hard problem, and as you note, the three year olds solve it with unconscious ease.
Never as gracefully, and there of course the machines have a huge advantage. I spend most of my brain power keeping the tense muscles very tense and the loose muscles very loose, for each and every one of those tiny stabilizer muscles. The machines move straight and smooth in a way I never will. I haven't mastered the simple art of standing there in first position, and probably never will.
(I am not, I would note, any kind of expert. I dance at the level of an 11 year old. Maybe a 10 year old. Which took me years to learn, and I'm very proud of it.)
I have never done any significant amount of dance, so am in no position to offer informed advice, but I wonder if explicitly thinking about every muscle is really the best way to practice. Focusing on higher-level goals and letting the sensorimotor system deal with the details might be more effective (and then figuring out how to observe yourself and deliberately working on correcting specific defects).
You might find the book The Inner Game of Tennis useful.
It's how I learn. I start with the focus so that I learn the right thing, and then when I have it in muscle memory, I can forget it. And focus on the next thing, while continually checking back in. (The teachers will be sure to correct you if you don't.)
Ballet isn't something that feels right when you do it correctly. It's actually a deeply unnatural way to move. Not just pointe, but everything. The grace is an illusion layered on top of that.
You definitely do need to reach a point where most of it is handled by the sensorimotor system. But there will always be something you need to keep working on.
Probably need to be more specific about 'skill as a dancer' too though - I think you're responding mainly about motor control (?), but try choreographing a bunch of three year olds to actually perform that routine!
Evaluating robots on whether they do what they're told and whether their timing is good doesn't seem too useful. That's the easy part. Pretty much any robot is going to be more precise/predictable/consistent in its actions than any human, let alone a three-year-old.
I would so love to see some of the original concept videos with the humans who were dancing, I hope they release it. Honestly, I think a "how we made this" video would be just as popular as the robots dancing.
We don’t know that. Perhaps it was important for them to not have credits in the video; e.g. to keep attention on the robots. Your sister is credited elsewhere in BD’s article, see other comment [1].
One of the reasons for the formation of workers' unions (such as the Screen Actors Guild) is to protect the rights of actors and film workers at all levels. Receiving proper credit is a big part of compensation.
There may be a fine line between a "film" and an "advertisement", and what's more this is surely not a SAG film, but it seems to me that credit for choreography on something like this - where the dance is the core of the content - is appropriate.
Me too. Wouldn’t it be cool if the robots danced alongside humans, recreating iconic video clips or scenes? Like the scene of Jerry (from Tom & Jerry) dancing with Gene Kelly https://youtu.be/2msq6H2HI-Y
Or Paula Abdul “Opposites attract” https://youtu.be/xweiQukBM_k
You don't want to be anywhere near heavy, powerful moving machinery; that's basic OSH. A glitching robot could snap your neck without skipping a beat (so to speak). There are hundreds of videos on the internet of people being decapitated, torn apart and squished by machinery. Not to mention https://en.wikipedia.org/wiki/Robert_Williams_(robot_fatalit...
These robots are nowhere near the strength of an industrial one, so that’s a bad comparison - it is perfectly safe to dance alongside them.
I am not sure about the exact number, but the dog-like robot can handle at most 30-40 kilos of weight, maybe the humanoid a bit more but in the same ballpark. The most dangerous would be the rolling one, since it has a counterweight, and crashing into someone would be bad but that’s it.
They are absolutely not your average “able to lift 1 tonne” hydraulic arm robots that could indeed barely notice a human as resistance in their movement.
I think that, unfortunately, when it comes to public perception, the movie Terminator did to robotics and AI what Jaws did to the great white shark. This seems to have permeated western culture in a way that makes it very difficult to have much serious and honest discussion about the technology, and I suspect it is actively holding the field back. My perception is that in other countries, like Japan for example, they have a very different perspective on what these technologies can do for them and seem to be much more embracing of it, rather than scared of it.
Both here and anywhere else the Boston Dynamics robots are shown off, half of the comments always come back to the "scary robots" that will take your jobs and murder your family. I wish we could hit a big reset button on how the public at large perceives robots because I personally think that there is a lot of benefit to be harvested here, for the true benefit of mankind.
Boston dynamics itself was started with military funding. Robotics will be an industry with a lot of blood on its hands, even if it does do good as well.
It is unfortunate, but the Internet started as an ARPA project, and the first rockets to space started with military funding.
The question is... what are the current goals of Boston Dynamics? Do they even see these robots being used widely in non-military scenarios - construction, medical care, first responders, etc.? Are these things even possible where cost is an important factor (unlike the military, where cost is not that important)?
Care to expand on that? I'm very cynical about Boston Dynamics, given that - as far as I can tell / know - they've only done tech demos and again as far as I know don't have significant commercial success, or their robots doing things outside of (carefully orchestrated) tech demos.
disclaimer: I never actually looked on their website for use cases or whatnot.
As soon as they make a usable remote control for that robot the US military is going to buy a bunch of them and attach machine guns and use them to shoot up "suspected terrorists".
Looking at what the US military has been doing with drones, that scenario doesn't seem that far fetched.
> Looking at what the US military has been doing with drones, that scenario doesn't seem that far fetched.
This scenario will remain science fiction until we invent a compact power source with an energy density (by both mass and volume) matching fossil fuels.
The military has no use for loud and cumbersome petrol-powered monstrosities like Big Dog (which is why the project was axed) or underpowered robots with an endurance that's measured in minutes (like Spot) in combat scenarios.
Once such a power source is available, though, the independence and versatility of a human in power armour would still be far superior to a remote-controlled robot that can be hacked or have its comms jammed with COTS equipment...
It's about perspective. Imagine being pinned down by a squad of these things for "only" 60 minutes. Or being pursued through the forest or an urban environment. That 60 minutes would feel like a very long time.
For a glimpse of this, check out the videogame Generation Zero (1980s Sweden overtaken by armed robots, including robot dogs).
That would be terrifying, but a squad of trained humans is still more terrifying. If you're worried about what a military is going to do, robots are mostly a distraction.
> Imagine being pinned down by a squad of these things for "only" 60 minutes. Or being pursued through the forest or an urban environment.
Given the current state of these machines, both these environments would favour humans. Even a fairly untrained average Jane or Joe would have no problems outrunning these things in forests or urban environments, let alone a trained soldier. Not to mention the lack of autonomy.
Everything you see in these promotional videos is carefully choreographed, prepared and pre-programmed in advance for days, and edited:
"There were definitely some failures in the hardware that required maintenance, and our robots stumbled and fell down sometimes." - they shot the first part several times and kept the one that worked best. That's not something you can do in the field outside of a controlled environment.
These robots are still a long way from posing more of a threat to a soldier than much simpler solutions, e.g. a Humvee with a mounted machine gun.
> For a glimpse of this, check out the videogame Generation Zero
The game is based on fiction, not fact, though. The required autonomy just isn't there yet and the video game robots clearly run on magic, not electricity or petrol.
They never overheat, they are maintenance free, and they move faster than is currently possible w.r.t. motion planning and image recognition.
It's your typical AM/FM ("Actual Machines" vs. "Fucking Magic") affair: BD is actual machines - pre-programmed or remote controlled, very limited endurance, and still impractical for most military applications.
The robots in video games and cinema on the other hand are for the most part in the domain of fucking magic - capable of "120 years of continuous operation on a single power cell" like the Cyberdyne Systems series 800 v2.4 (Terminator), turning themselves from "autonomous swords" into screaming humans (Screamers), are nearly indestructible like Vision (Marvel's Avengers) or strange spiky flying thingamabobs like the Sentinels (Matrix trilogy).
Conversely an army of non-sapient robots would have a lot more options for dealing with belligerents. A human has to fire back when threatened because they don't want to die. A robot can take the hit, and risk being destroyed, because we can build another one.
Given the success (inasmuch as one can call it that) they've had with drones, why bother with mounting a gun on a robot that can barely run for longer than an hour? Wouldn't a remotely-piloted/autonomous vehicle with guns mounted on it be much more efficient than a robot walking around on 2 legs?
I guess I just struggle with understanding how this changes anything.
Initially they will be a curiosity - robots in public in general - but public perception is a tricky thing. There will be people absolutely terrified by their presence, but some of this can be mitigated by how the robot looks; the more dexterous it is, I am willing to bet, the scarier it will be to some.
However, the real bugaboo begins when one is used wrongly, regardless of where that occurs. Like facial recognition, we are going to need some serious regulations on how law enforcement uses these. I don't expect issues with fire and rescue, but they would be covered by the same laws.
As in, if they end up being used to secure assailants and there is an injury to the target, or worse, to bystanders, public reception will tank quickly. Let alone if robots ever get employed against protestors.
That perception will change radically regardless, because not all governments respect the rights of their citizens to the same degree, and it becomes only a matter of time before abuse happens and is filmed.
On a side note, we certainly have enough movies and television presenting the bad uses of robots (though mostly of the cyborg type) to give people pause - but will it give lawmakers pause?
It's not something you can simply outlaw. Progress won't stop; those who oppose it just get left behind. The progress of technology is a force we don't have control over. When writing, the engine, or electronics get invented, everyone has to get on board.
I have the same feeling that anthropomorphizing robots this way is a dangerous direction to head in. We are intentionally confusing ourselves into thinking they are something other than they are.
I think the dog-like robot, Spot, is commercial. SpaceX are using one to assess things on their launch pad. But I guess it's a truly difficult problem and they aren't rushing it to market before it's good and safe enough. Which, if they have the funding, I think is a good approach.
I think SpaceX are using one simply because someone thought it'd be cool. You can remotely inspect a launch pad without shelling out $75k+ by using RC cars and drones, for example. It's much cooler and better for the company's overall image, though, if you use a Spot robot. PR is a thing after all, and using Spot fits perfectly.
I seriously doubt there's any difficulty with safety or "being good enough", as I've yet to see an application for Spot that couldn't be done just as well by conventional already existing means.
BD is a group of enthusiasts that build cool robots, not a company that primarily develops robotic solutions.
their website has an online shop [0] where you can purchase the things and have them shipped to you, to (ab)use as desired - so hardly 'only (carefully orchestrated) tech demos' any more.
Totally impractical. You'd need human level general AI before they could be interpreted and applied, and as Asimov's stories show, they're plenty fallible even with that.
These robots know roughly where they are, what positions their limbs are in, how balanced they are, and what movement they should do next. That's about it. They cannot understand Asimov's three (*four) laws, or even a single sentence of English, and have no choice over their own actions as a whole.
There's nothing scary about these robots, any more than my Roomba is scary. They are tools, and like all tools, they can be used for nefarious purposes or for the benefit of mankind.
> It is true that a computer, for example, can be used for good or evil. It is true that a helicopter can be used as a gunship and it can also be used to rescue people from a mountain pass. And if the question arises of how a specific device is going to be used, in what I call an abstract ideal society, then one might very well say one cannot know.
> But we live in a concrete society, [and] with concrete social and historical circumstances and political realities in this society, it is perfectly obvious that when something like a computer is invented, then the uses it is going to be adopted for will be military purposes. It follows from the concrete realities in which we live, it does not follow from pure logic. But we're not living in an abstract society, we're living in the society in which we in fact live.
> If you look at the enormous fruits of human genius that mankind has developed in the last 50 years, atomic energy and rocketry and flying to the moon and coherent light, and it goes on and on and on -- and then it turns out that every one of these triumphs is used primarily in military terms. So it is not reasonable for a scientist or technologist to insist that he or she does not know -- or cannot know -- how it is going to be used.
Still, it's a nice way to shift blame to scientists and engineers, away from people who actually use these tools for evil, or commission development of technologies to use in their evil business models.
All links in the chain are responsible for what they do, there is no single packet of "blame" that gets to reside with any single party, and denying one's responsibility will not make it go away.
Responsibility is not a chain, and it absolutely does fade away with enough degrees of separation - otherwise you could hold me responsible for looking at a driver the wrong way, which annoyed him past a threshold that caused him to scream at his wife later that day, and made his wife scream at their kid the next day, for whom it became a formative moment, pushing the kid into a life of crime, and 10 years later someone died because of it.
You definitely want to focus your attention on people with most agency over the problem, and those people usually aren't scientists or engineers. And you can't simultaneously praise decision makers for their wisdom and leadership, and absolve them from responsibility because they've only used an "evil" piece of tech they found laying somewhere.
No, but events are. The question of how to use a tool arises from the existence of the tool.
> otherwise you could hold me responsible for looking at a driver the wrong way, which annoyed him past a threshold
You're still just responsible for your own acts, and they for theirs. If you were being a dick to them needlessly, that's your fault, and if it tipped them over the edge, it's natural to feel bad. Not fully responsible, but also not as if you had zero to do with it.
Just like when someone gets bullied and commits suicide, you don't just look at that act of suicide and talk about who had the most agency and what one should focus on.
> You definitely want to focus your attention on people with most agency over the problem, and those people usually aren't scientists or engineers.
There is no need to "focus attention", and holding one party responsible for their actions is orthogonal to holding other parties responsible for theirs. This is a tech forum, Weizenbaum was one of the greats when it comes to writing about technology and morality, and I dare say it is the responsibility of technologists to be familiar with his body of work.
> And you can't simultaneously praise decision makers for their wisdom and leadership, and absolve them from responsibility because they've only used an "evil" piece of tech they found laying somewhere.
That's why I don't, and never hinted at doing so, and even clearly stated the opposite when I said "All links in the chain are responsible for what they do".
They're a little unclear about exactly what morality they are advocating for. The nature of weapon technology transforms the way society works.
In the era of sword and shield, for example, combat effectiveness is hugely dependent on raw upper body strength. That means that strong healthy men rule all domains, while women, children, and any men not in top physical shape are helpless before them.
In the modern era of mechanized weapons, personal size and physical ability are much less relevant. There's a much greater ability for small groups to make their opinions felt. Victory in large battles tends to go to whoever has the best tech and greatest quantity of it. It's probably a better world overall.
The real question is, exactly how will any "killbots" work, and what effect will they have on society? Maybe they'll be super-expensive and centrally controlled, and nobody better dare to move against whoever ends up in charge of them. Maybe they'll be cheap and plentiful, so anyone with a grudge will be even more able to cause chaos. Maybe something else we can't imagine yet. I have a feeling we'll find out eventually, one way or another.
Your second paragraph seems rather simplistic to me.
Less-able men with more ability to marshal resources/rewards can convince more-able men to be their proxies by paying them. How would they have the ability to do that? Technology, knowledge, cunning, guile.
How long has it been since the king was the best fighter in the realm? I mean, seriously?
Well yeah it's simplistic, since it's 2 sentences. I'm not really prepared or qualified to write a 30 page paper on the nature of medieval combat. Yet there seems to be an obvious truth to it.
There are of course exceptions, such as persuading or paying someone else to fight for you, or concealing a weapon, getting someone to trust you, and stabbing them in the back, etc. It still seems to shape much about reality to know that the majority of people will have no chance of ever winning a remotely fair fight against the enforcers of whoever is in charge.
I don't find the "truth" you mention obvious at all. I think it's just a fairy story simplification based largely on fiction (written and visual).
Over the last couple of thousand years (or more, but history gets a bit fuzzy beyond that), lots of nations have had leaders at many different times who were not the best fighters.
Your claim wasn't that a majority of people had no chance of winning a fair fight against enforcers, which is obviously true. You made a much more broad claim about how historically certain physical attributes would put particular kinds of people in positions of power, and about how that has changed.
I think this is likely misleading and inaccurate. Yes, those with power have always used force to enforce their wishes, but that's very different from saying that those in power are themselves of a certain physical type.
If your Roomba was programmed to hurt me I’d surround myself with power cords.
This looks like us so it’s easier to see how we’re threatened by it. It’s more visceral. My cats don’t care about my vacuum but if it walked and jumped their neurons might fire differently.
No one is going to send a fleet of these ultra-expensive Robo-Killers to assassinate you and everyone else on the battlefield. Once these micro-drones can be mass produced cheaply enough, and you can put enough high-explosive on them to fly up to a person's neck and !!pop!! someone's head clean off, you'll see them programmed with swarm behavior and be unleashed onto the battlefield.
They fly faster than you can run (13 miles per hour), have a 1 mile range, and a 25 minute flight time. More than enough capability to swarm entire battalions and wipe them out.
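As a rough sanity check on those quoted figures (13 mph speed, 1 mile range, 25 minutes endurance - taking the comment's numbers at face value), a drone that flies out a full mile and returns would still have most of its endurance left to loiter:

```python
# Quick back-of-the-envelope check on the quoted Black Hornet-class
# numbers from the comment above (13 mph, 1 mile range, 25 min endurance).
def minutes_on_target(speed_mph, range_miles, endurance_min, round_trip=True):
    """Endurance remaining after transiting to (and optionally from) the target."""
    transit_min = (range_miles / speed_mph) * 60.0
    if round_trip:
        transit_min *= 2  # out and back
    return endurance_min - transit_min

# Flying out 1 mile and back at 13 mph uses only ~9.2 of the 25 minutes:
print(round(minutes_on_target(13, 1.0, 25), 1))  # 15.8
```

So even at that modest speed, transit barely dents the flight time - the numbers are at least internally plausible.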
> Everyone afraid of this, you ought to be afraid of micro-drones like the Black Hornet Nano
>They fly faster than you can run (13 miles per hour), have a 1 mile range, and a 25 minute flight time.
And can be trivially defeated by some netting, blinded by bright lights/lasers, and/or knocked out of the sky by leaf blowers and umbrellas. Despite what certain propagandaesque sci-fi "warning" videos would have people believe, I'm least worried about these nano-drones. At the end of the day bullets are still cheaper, less complicated, and more effective. And as soon as you give the drones some standoff capabilities to mitigate some of the countermeasures, you start losing many of the perceived "benefits" and are back to just using guys with guns.
Cost and complexity. If you have to send so many to overwhelm and bypass all the countermeasures, the cost and complexity make it a much less practical and appealing solution than just doing it the old-fashioned way. I can't see how brass, lead, and gunpowder will ever be more expensive than lightweight plastics/composites, electronics, sensors, motors, batteries, plus the actual lethal component. Add to that the time/complexity required to configure and deploy, situational considerations such as weather, sensor viability, terrain/environment factors, etc., and we're back to guys with guns. Could there conceivably be a scenario where this might be the best option? I suppose, but in my estimation it would likely be the option of last resort.
If an enemy force has already made up its mind to kill, I don't see this making it any easier/more reliable/more effective than well-established alternatives.
That depends on the target. The military is quite ready and willing to spend tax dollars on things even if at the end of it there is something cheaper that does the job better.
Unfortunately, I don't know that I can trust any of the organizations that would have the budget to control enough of these robots to make a difference in any direction.
Which is sad to me, as I love this from a technological perspective and looking at a best case scenario.
Humanity's goal should be to build AGI and let robots take over - we're doing it, willingly or unwillingly. No need to have blobs of meat hanging around. Intelligence itself is the human thing. Whether it needs to have a body / physical metabolic processes that run by ingesting cheetos all day is totally absurd. Evolutionary processes have given us so much unnecessary baggage. Pure abstract intelligence is pretty damn human. There is already Neuralink and other hybrid tech going on. I believe humans will willingly give up physical bodies in the long term (millennia scale).
This is bound to happen. There is no way it wouldn't, I believe; of course in the short term, we gotta worry about stuff like politics, solving hunger and world peace.
> Evolutionary processes have given us so much unnecessary baggage
21 years ago, when I started writing a cross-platform digital audio workstation called Ardour, I was convinced that your claim above applied to contemporary mixing consoles. It seemed to be that they had evolved in ways that were deeply constrained by physics/mechanical/electrical engineering, and that there were all kinds of things about their design that was just unnecessary baggage from their crude history.
Two decades later, I understand how that evolutionary process actually instilled those designs with all kinds of subtle knowledge about process, intent, workflow, and even desire. It turns out that the precise placement of knobs, and even their diameter and resistance-to-motion, rather than being arbitrary nonsense derived from the catalog of available parts, rather precisely reflect what needs to be done.
Don't be so quick to dismiss your physical form or the subtle wisdom that evolution can imbue.
There's also the whole "situated action" sub-field of AI, which is centered around the idea that humans build themselves physical environments to embody and maintain knowledge in order to reduce computational load during decision making.
I enjoyed reading your perspective. I find evolutionary processes fascinating, contrary to what my original comment implies. It’s had a lot of time to optimize :)
> This is bound to happen. There is no way it wouldn't I believe, of course in short term, we gotta worry about stuff like politics, solving hunger and world peace.
There will never be world peace, unless humanity is no longer human, or alternately, under the boot of an all encompassing empire ruled by force.
To believe otherwise, is to believe history teaches us nothing, nor our knowledge of human behaviour. To assume that we somehow have a culture which "can do this", that our modern beliefs are "enlightened" enough to allow this, is the ultimate in hubris.
Sure... humanity couldn't do it before, but now? Now, we're just ever so enlightened and perfect enough to enable world peace.
There are only two real ways to enable world peace.
1) Genetically engineer the entire species to become more... social. Kill or prevent any 'old style' human reproducing. End the old species. There are innumerable issues here, including "we're just messing around with what we barely comprehend".
Yet our entire culture is predicated upon how the human brain works, and the human brain works more on genetics, than post-birth learning.
OR
2) Take over the entire planet, killing everyone who disagrees with you, and ensuring that due to the technology involved there can NEVER be a revolution. Further, destroy and hunt down every single person which does not swear fealty ; allow no external empires to form. Ever.
NOTE: I am not happy about this, yet, this is reality. Let me put this another way.
You want world peace? OK! Great!
First, you'll need to end all murders, thefts, all anti-social behaviour. "World peace" is denied because of the genetics that create this behaviour. They're the same problem.
Spectrum of humanity spreads wide and there will never be absolute world peace - in the same way, there is no peace in the animal kingdom. As I write, thousands of animals are dying at this very moment, millions of insects are killed. Nature is fucking brutal, my friend. Unimaginable amount of pain was inflicted in the wilderness during this hour.
We're lucky to be able to communicate with each other in a civil manner without ripping each other apart for food. Pretty incredible to be a human!
>We're lucky to be able to communicate to each other in civil manner without ripping each other apart for food.
That will go away once Climate Change reduces the ability of the planet to produce abundant resources needed for the modern way of life. The remaining carrying capacity of the planet will force a move back to subsistence farming and with that comes the inevitable brutal environment.
Personally, I'd hope we end up with something a bit like The Culture - which is perhaps the most positive scenario for any society made up of 'humans' and powerful AIs.
I think you overestimate human intelligence. Surely, as of yet we are the most intelligent thing in the known universe, and the human mind can seemingly discover/invent/understand everything.
But knowing that our math itself has a limit, and that we are already pushing that limit with some problems, it is naive to think that the human brain is all that capable. (Interestingly enough, we are intelligent enough to somewhat know our limits - like the complexity of ZFC.)
Once the singularity happens, an AI has basically only material limits to the complexity it can manage (though what I find beautiful is that even it would hit a limit, not necessarily higher than ours - there will be a busy beaver number it can’t reason about).
> Humanity's goal should be to build AGI and let robots take over - we're doing it, willingly or unwillingly.
Yes, but better not make them look human. Humans are bad at tolerating more/equally intelligent species, just look at homo sapiens versus neanderthals. Hell, even between races humans are barely tolerant.
I haven't really thought of a use case for the home, although there's literally dozens of them. I actually wonder if it could function as an auto-dog walker for my organic dog on the days when I'm too swamped with work to do so.
The thought of attaching a leash to my dog and the leash to Spot while I'm indisposed is actually kind of attractive. I would have to make sure my dog has already done her business though, since I wouldn't want to be the kind of asshole that not only uses a robot to walk his dog, but also lets his dog shit on his neighbor's yard while a robot walks his dog.
I wonder, would the time spent programming and integrating all this be less than just walking the dog? For me, this would defeat the purpose of having my dogs in our life.
Having worked in robotics quite a bit, this is a common trap. There are plenty of things we can think of for a robot to do, but most of them would require more concessions, programming and maintenance time than it takes to just do the task, or hire folks to do it. Where the value prop holds up it really works well, but these kinds of low-value, high-complexity applications - like walking a dog around the neighborhood without dragging it by its collar when the dog's knee hurts and it walks slow that day - are exactly where it breaks down.
A $75k robot arm with legs is not a completed application. We can already buy robots with the needed mobility to do things like walk dogs for far less. This is a classic hammer-and-nail situation. I think this is the issue that BD keeps running into: they have an amazing team, amazing tech, amazing capabilities, but are still searching for that killer high-value application. There are over 400k industrial robots sold every year; it's a huge market. They sell well because it's relatively straightforward to program and integrate them into workcells and factory lines to create value. To program and integrate one of these robots to do something so complex that it would necessitate a BD robot and not a standard industrial robot would be a huge development effort. It just doesn't hold up when we have folks that need work. The cost of one $75k robot plus two person-years of engineering labor is 4 or 5 years' worth of traditional labor. The value prop just isn't there until our ability to control, program and integrate these complex robots (cobot stuff) gets better.
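For what it's worth, that 4-5 year figure checks out under some placeholder assumptions: the $75k robot price is from the thread, while the engineering rate and fully-loaded traditional-labor rate below are my own illustrative guesses.

```python
# Back-of-the-envelope version of the cost argument above. Only the
# $75k robot price comes from the thread; the labor rates are
# hypothetical placeholders for illustration.
def years_of_labor_equivalent(robot_cost, eng_years, eng_rate, labor_rate):
    """How many years of traditional labor the robot project's total cost buys."""
    project_cost = robot_cost + eng_years * eng_rate
    return project_cost / labor_rate

# $75k robot + 2 person-years of engineering at an assumed $150k/yr,
# versus traditional labor at an assumed $75k/yr fully loaded:
print(years_of_labor_equivalent(75_000, 2, 150_000, 75_000))  # 5.0
```

Shift the assumed rates and the breakeven moves, but the order of magnitude - years of conventional labor before the robot pays off - is hard to escape.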
> The value prop just isn't there until our ability to control, program and integrate these complex robots (cobot stuff) gets better.
When you ultimately drill down to brass tacks though, you're left with a chicken and egg scenario. We need better programming and integration for this to be time-effective. We need more time programming and more time integrating for this to be a value proposition.
You don't get there without some idiot like myself saying, "I could spend 1000 hours walking with my dog... or I could spend 1000 hours programming my robot to walk my dog..."
My point exactly. It's not 1000 hours and 1000 hours. It's 1000 hours and 1,000,000 hours. If we could program a complex robot like Spot to do a highly complex task like walking a dog safely in an open-ended "real world" in 1000 hours, there are lots of other things we would do first (package delivery comes to mind). We're just not there. We have the hardware, but not the software infrastructure to apply it as is being expected here.
They promise "repeatable autonomous missions to gather consistent data", so my guess is programming a route and mapping terrain is reasonably easy. There is also remote control and camera access; if that could be triggered to automatically notify you (or a dog-walking central command service supervising) - for example when your dog is barking/complaining or resists being dragged - it could go a long way toward solving dog walking (for smaller dogs).
During development, and initial per-unit sales? Sure.
Once mass produced, not even close.
Think of:
- training costs (training grunts isn't 100% free)
- pay as soldiers wait to go on missions
- and here's the BIG one, medical care
- and lastly?
PR! PR, no more "soldiers coming home in body bags". Why, you can fight any war you want, and no one will get upset about your soldiers dying. Yet beyond that?
How do you negotiate with one of these things? How do you trick them, by walking an "innocent" up to them, and blowing them up?
How does one of these things identify civilians or hold a place like Baghdad? Armies occupy. Those weapons destroy infrastructure and people and not much else.
Or do you use them like drones, paying soldiers to run them from a container in Kansas? In which case you still have the soldier.
Just like drones used to bomb, as you suggest with Kansas.
As time passes though, especially on an actual open battlefield, raw 'kill everything that moves" becomes more of a potential too.
However?
My logic was predicated upon cost, and if implemented, cost reduction due to all those body bags. You think Nixon and Kennedy were purely motivated by the cost of US soldiers, when they wanted out of Vietnam?
They sent those troops there to begin with!
No. They cared about the PR issues, and re-election.
Sure, this would make it less likely to use suicide bombs against soldiers — perhaps even politicians will put skin suits on the robots and use them for public appearances a-la Westworld for similar reasons — but grenades and RPGs and anti-material rifles and IEDs would likely all still be used.
And £10k robots can also be used by terrorists, perhaps stolen from warehouses, perhaps hacked.
That said, what worries me about terrorism is not cargo-culting shapes that look dangerous (be that robots which look like the Terminator or 3D printed guns), it’s people with imagination who know there are at least two distinct ways to make a chemical weapon using only the stuff in a normal domestic kitchen and methods taught in GCSE chemistry.
Gun control is a uniquely US problem, at least in its current form. Yet this isn't going to have the same problem as gun control, for example, how easily can people obtain nuclear material?
And terrorists? Sure, but an explosive truck is probably easier than one of these. And if sales are controlled, then they won't have a domestic army of them.
In terms of hacking? 100% agree. It's why I find Tesla's OTA updates to be, frankly, insane. Full control of things like brake firmware has been demonstrated, with an OTA fix to brakes in the past.
This means that, along with autonomous modes, you could perhaps manage to (especially with an inside man), force-push updates, regardless of driver permissions, to all Teslas out there. And set them to run into everyone they find, just run over as many people as possible.
So there is tonnes of risk, and anyone thinking "Oh, they'll secure thing $x" is, IMO, a damned fool. Hack, after hack, after hack, after hack, proves this to be absurd.
We literally cannot lock down anything. Anything. Not CIA infrastructure, not any corporate infrastructure, not government infrastructure, not health care, nothing.
So I agree, 100%, robots with guns = horrid, just from that one angle. But I contend that they are cheap, and effective, so you can bet governments will use them.
The link?
Your reference to chemical weapons. I see the concern, yet I'm more concerned about genetically engineered death. And training people from (for example) China on how to do this, seems beyond absurd.
The future is biotech created death I think.
Another example, genetically engineered animals, designed to kill as well. How about mosquitoes, pre-loaded with viral payloads? What about bacteria which infects well water, and is literally impossible to ever get rid of, once in the wild? How about a fungus, which destroys wheat, which primarily the west eats, yet the east doesn't (rice)?
How about gut flora/fauna, which when fed (eg, when you eat), releases a mind altering substance? A poor fellow was infected with yeast, which made him drunk every time he ate, so imagine a genetically engineered set of bacteria which releases a mild hallucinogenic? One that makes it impossible to concentrate?
How will you cure this, if your scientists can't think straight? Or worse, what if it's an aphrodisiac? Let's try to solve a problem, when you can't keep your hands off of yourself.
I can think of so many endless horrors, and most of them biotech related.
There isn't even a need for that; take the "classic" goose step, for example. It can be particularly funny, like in this video [1], or outright scary, like in this footage [2].
It all depends on the context, I can't see John Cleese turning into a genocidal dictator anytime soon, while we all know what the people in the second video did only a few years after those images were filmed. For what it's worth I see the robot in this story closer to the second video than to Cleese's comedic genius.
And yes, it’s just a puppet. For now. A human controls its movements.
What’s missing is a brain. And that will come in Version 4.0. Or whenever they perfect a decision and control system, for fully autonomous decision making.
That’s when you should worry.
Or rather. You’re probably safe, if you live in one of the western allied nations. That is your privilege.
But a black, brown, red, or yellow person in the 3rd world should worry. Because they will be the target of America’s oppression, via this robot.
Meaning that: your 3rd world country had better accept democracy and western media, and have your leaders approved by Washington DC, otherwise we will deliver some freedom to you. I hope you enjoy the fresh smell of napalm in the morning. You get bonus points if your country has oil.
It is crazy to see from the outside how the color-of-skin obsession permeates every single conversation right now. I hope that the USA will be able to find some other topics and arguments one day.
It is also false in this context. One of the most massive bombing campaigns in recent history was in European Serbia, and the hotspots of today (Syria, Iran) have a population that looks like Greeks.
As for black: the only predominantly white country that systematically sticks its fingers into Sub-Saharan Africa is France. The only external power that grows its presence in the Third World overall is China, and given how they treat their own population, I would not be surprised if the next wave of pseudocolonial wars was Beijing's.
Prior to reading your comment, I read the article and my main takeaway was how impressive the choreographer was and how I wouldn’t have expected the engineers to rely so much on the choreographer. So it is great to see your comment and I agree with you. But the article did give at least one person more appreciation for choreographers.
And I imagine there were lots of folks involved in all phases of doing this at Boston Dynamics that consciously or unconsciously thought of her as just another hired person to promote their technical brilliance.
She did an awesome job. The article implies that the choreography was reviewed by an engineer who would script the moves? Then the article says there is a pipeline that doesn't need scripting? The main question I was wondering about: is it possible to do this with mocap alone? Surely the closer artists can get to sculpting the tech, the more opportunities for creation are possible.
Why would they? What business sense would this make? The image they want to portray would be that they are capable of everything in-house - Their staff are not only experts in robotics but also art too. In reality it's not quite true.
Just like you also don't see them saying "Thanks to PwC for developing our software."
Very good sense of humor indeed. I caught the "do you love me" question instantaneously and wondered who got that very good idea. As always, the good idea didn't come from the PR department :-)
I'm extremely interested to learn how the dance moves were programmed - I'm guessing the standard tablet controller probably doesn't cut it for the level of control necessary.
More generally speaking, I'm especially curious if "anyone" buying a Spot would have adequate access to be able to do similar things.
I wouldn't be surprised if it turned out not to be possible, for warranty reasons...
I believe so; see https://dev.bostondynamics.com/docs/concepts/choreography/ch... for the choreography API. You don't even have to buy a Spot to write the code, which is impressive in this industry! Now we just need someone to buy one and set it up in front of a webcam with a public SSH login.
There's more documentation at https://github.com/boston-dynamics/spot-sdk and https://dev.bostondynamics.com/. However, it looks like a lot of the choreography is built-in; you can call twerk() or butt_circle() (documented at https://dev.bostondynamics.com/docs/concepts/choreography/mo...) but you don't have access to the raw kinematics, gyros, and accelerometers to generate your own unique moves. It would be amazing if they'd release the full routine to cause your Atlas or Spot to perform the dances, but I'm not aware of that being public anywhere.
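To make the pre-built-moves idea concrete, here is a purely hypothetical sketch of how a move sequence might be modeled - the `Move` class and `sequence_length_seconds` helper are my own illustrative names, not the real Spot SDK API, and the moves-scheduled-against-beats structure is an assumption about how such an API might work; only the move names "twerk" and "butt_circle" come from the documented moves reference:

```python
# Hypothetical sketch (NOT the real Spot SDK) of a dance routine built
# from pre-built moves, each scheduled on beat "slots" at a given BPM.
from dataclasses import dataclass

@dataclass
class Move:
    name: str        # e.g. "twerk" or "butt_circle" from the moves reference
    start_slot: int  # beat slot where the move begins
    duration: int    # length of the move, in slots

def sequence_length_seconds(moves, bpm, slots_per_beat=4):
    """Wall-clock time needed to play every move in the sequence."""
    if not moves:
        return 0.0
    last_slot = max(m.start_slot + m.duration for m in moves)
    beats = last_slot / slots_per_beat
    return beats * 60.0 / bpm

routine = [
    Move("twerk", start_slot=0, duration=8),
    Move("butt_circle", start_slot=8, duration=16),
]
print(sequence_length_seconds(routine, bpm=120))  # 3.0
```

The point of a design like this is that the choreographer works in musical time (beats and slots) while the robot's controller owns the actual kinematics - which matches the observation above that you get named moves but not raw access to gyros and accelerometers.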
For a more general example, check out this video where Adam Savage used a Spot to pull a rickshaw; his programmer was able to work with the API to change the payload tuning of the robot:
It appears that the API is more about what you'd need to make Spot useful in an industrial or demonstration context than to build your own Spot. They want something that a generalist can make do useful stuff out-of-the-box, which makes sense.
Apologies for the delayed reply, and thanks very much for this comprehensive answer. I've been curious for a little while, and this pretty much answers that curiosity completely.
Followed, liked, retweeted. I had loved the choreography and I'm glad I now know who was behind it. Please tell Monica that we await her next piece of art.