Hacker News
RunwayML releases Act One: obsoleting traditional motion capture (runwayml.com)
37 points by handfuloflight 87 days ago | 10 comments



Impressive output. It's also amazing that a not-quite-as-good version is free and open source: https://liveportrait.github.io/


This doesn't obsolete motion capture? For one, it doesn't even seem to be full body; it appears to require a fixed camera and is limited to head/face. The page title is more accurate: "Introducing Act-One - a new way to generate expressive character performances".


The hyperbole from the AI hype men (and women) is too much, and they are poor lyricists.

These types of phrases should be filterable.

Imagine changing forever

This changes everything

Nothing will ever be the same

Everything has changed

Everything’s disrupted

Gone are the days of how it used to be

Nothing will be the same after you see this

Just tacky hooks to keep people reading.


Maybe video models can’t solve all the problems. Can they solve all the problems that are valuable to narrative media creators? What about business?

Odds seem high. “Solve all the problems” can mean “solve all the valuable problems.”

Runway’s biggest obstacle is not scientific. They have to convince VCs not to index on all AI startups - a historically winning strategy in finance - and instead focus capital on the two leaders in any race, at any cost, because five $200M investments will each fail by being too little money rather than too much. Runway needs billions of dollars to do what they are doing!


This wows me as a layperson, but can anyone who's worked in mo-cap tell me why this should or shouldn't impress me?


I make feature films with 9 figure budgets.

Motion capture is animation. It’s taking the input and deciding what to include, what to ignore, what to emphasize and what to fabricate.

Blindly applying inputs without a creative person in the middle making decisions will result in garbage.

For example: you don’t want background or non-speaking characters distracting from the focus of a scene, but you can’t have them static either. How they react creates an emotional impression and therefore needs to be considered.

These tools are just tools. It’s akin to saying LLMs have replaced authors. Total rubbish.


Seems pretty amazing. Now you can see why studios like Lionsgate might have been calling (https://runwayml.com/news/runway-partners-with-lionsgate)


Tangent: on this webpage, their choice to have so many videos autoplay saturates my USB 3 dock, and breaks audio sync on the video that I actually want to play. I think that's a first for me.

The product looks pretty great though.


This doesn't make motion capture obsolete: 1) mocap can be applied to rigged characters, and 2) mocap can animate full-body rigs, not just facial expressions.


Title: Introducing Act-One - a new way to generate expressive character performances




