AI Can Transform Anyone Into a Professional Dancer (nvidia.com)
222 points by tuckermi on Aug 31, 2018 | 168 comments



I could see this being used in the same way that auto-tune-type effects are added to singers' voices. The editing of music videos could include a tool like this to make sure all of the dancers behind the main one are moving just right.

And just like auto-tuned voices, it will come off as janky and fake.


The autotune you refer to as "janky and fake" is autotune used with the intention of creating that sound as an artistic style. Autotune is used across most pop (and other) music, but in such a way that you never realise it is there.

It is similar to special effects. People complain about how bad and fake they look, but only about the effects that are bad enough to be noticeable. People don't realise the sheer amount of special effects used in scenes where they'd never suspect them.


But similarly, it could be used as a "janky and fake" artistic style. Because no one's ever seen it before, it will look wrong in unexpected ways, and therefore somewhat fascinating.

Stylus noise, fret noise and mp3 compression artefacts are other "mistakes" deliberately introduced.


You just described T-Pain’s reasoning for using auto tune (he doesn’t actually need it but instead uses it stylistically).

I think you’re onto something here as well about how it may well get adopted as a ‘style’ in someone’s music video.


It is still noticeable in normal performances. You can hear unnatural overtones from autotune used as intended. Even more so when supposed singers use it live and have to stay glued to their mic to isolate any outside noise.


Interesting! Do you have examples where it is possible to hear these unnatural overtones?


Pick up any modern pop record. Some are worse than others, but all are auto-tuned.

Just Google "worst auto-tune" for examples; because it's so common now, most people don't notice it. Here's a list of some examples:

http://www.hometracked.com/2008/02/05/auto-tune-abuse-in-pop...

Also, if you're not really musical you might have a hard time picking it out. I found that as soon as I learnt an instrument, songs suddenly became layered and I could pick out and distinguish different instruments, and notice mistakes.


I'm pretty sure many of those examples are purposeful uses of autotune for effect


I've played piano and sung for most of my life, but I wouldn't have recognized many of these if you hadn't pointed them out, though I do recognize many structures and mistakes in music.

It seems to be more about recognizing the small glitches, rather than something involving overtones; I would really have liked to learn more about how to recognize those. I have long found that autotuned singing sounds a bit more "metallic" to me.

Thanks anyway, the sound samples are useful in recognizing it better.


I don't think T-Pain is trying to sneak autotune past anyone...


It's easier to hear in modern country music because the vocals are still trying to sound natural. Male singers are more obvious.


There is nothing "artistic" in overusing autotune. It is just a tool to mask the fact that a person can't really sing, and it is overused in popular/commercial genres where there is an abundance of untalented musicians. I can't imagine anyone saying that obvious autotuned songs sound good.


I don't understand this attitude at all. So something that doesn't seem artistic to you simply isn't artistic for anybody else because you decided that it isn't? Autotune is just another processing like reverb, delay, whatever. You think they didn't use any processing on Beatles' records? You think non-autotuned singers don't use a TC-helicon or whatever when they're performing live?

Autotuned songs can sound good, just as non-autotuned songs can sound bad. I'm as much a music elitist as anybody, but thinking that you have the one true objective idea of what music sounds good or bad is just childish.


There's a difference between using it deliberately and using it because the singer can't sing.

I wonder if in 20 years time it will sound incredibly dated, much like 80s synths do today.

I'm not particularly against auto tune and ultimately most people don't even notice it being used, but I do sort of understand where the GP is coming from.

If your idol can't actually sing, and you saw them live and it's rubbish, wouldn't you feel cheated?


Sure, I agree with you for the most part. Except that if you're going to see one of these autotune singers live with the hope that they're able to sound exactly like the record then you're setting yourself up for disappointment. Would you feel cheated if you found out that the magic show you went to in Vegas was just sleight-of-hand and trickery? :)

I personally dislike it too, but I'm not going to say it's not artistic, because to someone else it might be. People get so caught up and passionate with their art of choice that they sometimes forget that it's all just entertainment in the end. Whether it be Hillary Hahn or some teenager making beats in Ableton and throwing in the stock vocal effect.

About 80s synths, I think a lot of us, or at least I personally associate that cheesy sound with mass-produced digital synths put out by Yamaha, Roland etc. It's all variety and I love it. My dad still has a DX7 and a D-50 that I'm hoping to inherit.


A lot of people can't tell that someone's auto-tuned. They're not aware enough.

Like most people wouldn't even realise there's a bass line vs a guitar.

As for 80s synths, in my opinion there are a lot of otherwise great songs that are ruined by those travesties. I totally appreciate they were breaking new ground, but remastering some of those tracks with modern synths would make the songs sound a lot better.


The Killers, and many other tremendously popular (and talented) bands that have intentionally replicated the overproduced 80s sound, would like a word with you


>I can't imagine anyone saying that obvious autotuned songs sound good.

Well just look at the status quo of popular rap music then - tons of artists are aiming for the obvious autotune style and tons of people are listening to it (so I assume they think it sounds good). I'm not sure what you mean by "artistic", but it's definitely a sound people try to achieve on purpose in their music, so I think it does deserve some recognition as "art".


T-Pain is a great singer, and I'm pretty sure he's the first person a lot of people think about when they think about autotune.

https://www.youtube.com/watch?v=CIjXUg1s5gc


Let me introduce you to Bon Iver's "22, A Million"

https://www.youtube.com/playlist?list=PLN61gg9VNXPomdZu0UY_w...


Kanye West used autotune to great effect in 808s and Heartbreak, one of the seminal albums of this century


Very true about using it to great effect in 808s and Heartbreak! I remember hearing one of those tracks and finding it fairly nice to listen to. Then I heard him live on SNL and his singing voice was close to non-existent.


No-one ever says Cher can't sing, but her 'Believe' song also uses Autotune in a very distinct way.

https://www.youtube.com/watch?v=5Uu3kCEEc98


Practically all singers use "pitch correction", which is essentially "autotune that's not done for artistic effect".

If done properly, it's not really janky or fake sounding at all.


Maybe for recorded mainstream pop music that is true, but we can tell because the performance then sounds bowdlerised and dull.

It can't really help with a live performance, and for a good singer, recording multiple takes is going to be faster/more economical. Punching in/out is so easy, and with modern digital DAWs like Pro Tools (which does this by default) it keeps all your takes for you anyway; no need to waste another track on your 24-track tape or tape over the previous one.

Here's another viewpoint:

https://www.quora.com/Do-all-most-singers-use-pitch-correcti...

Fantastic vocal performances were captured all throughout the last century without the parachute of pitch correction/autotune. I'd rather listen to an imperfect take with flaws than to a machine assisted correction any day. Each to their own I guess.


I don't think it's always the case that you -can- tell. I think it falls neatly into a 'Rumsfeld classification':

Known Knowns - done for the clear effect, which is audible by everyone (as popularised, but not invented by Cher, T-Pain, etc).

Known Unknowns - ones where most people wouldn't notice, but if you've listened to autotune a lot you'd swear it's been done (sustained notes with vibrato added seem to be clear contenders here to me, such as the one in 'Angels' by Robbie Williams).

Unknown Unknowns - There are lots of recordings I've worked on for people (mixing, mostly) where the vocal sounds perfect, and it's actually been autotuned/processed. Subtly, but still processed. I only know this because the person who's done it has confirmed it. In the context of a mix there's little evidence, if any at all. And if you listen to backing vocals of the last decade versus BVs from say 30 years ago, you really hear the difference - not because modern ones necessarily sound like they've been worked on, but because the old ones sound just a little bit out of tune in comparison.

I've done work on singers' recordings where I've fixed the pitch and they haven't even noticed it on their own voice, in isolation (which really is the sound everyone knows best). If done well (and appropriately), it doesn't turn it into awful processed rubbish, it can tweak an otherwise brilliant performance and make it near-perfect. I say this as a recording engineer who loves the sound of a band playing together, and would always sacrifice separation / absolute recording quality for the communication and feel that you get with a live band playing together how they normally do - I wouldn't sacrifice personality for pitch, but I think you can improve on nearly everyone's performance in some places.

Having said that, the flip side of this is that people think 'they can fix that in the mix' when they've not given a great performance... and that's never the case!


> it can tweak an otherwise brilliant performance and make it near-perfect

That's what I'm getting at - "perfect" is something that the human voice does so infrequently on its own that when we hear the "perfect" performance, we know it's been doctored.

As a point of reference, I look back at something like the Boswell Sisters. Some of their early stuff was recorded in mono, direct to wax disc, but they were so skilled, so well rehearsed, and sung as a unit so closely bonded that the resulting sound is fantastic despite the primitive technology. The three individual voices aren't even recorded to separate tracks, since multitracking hadn't been invented yet.

Granted you can't really draw a direct comparison between that and Robbie Williams, Cher or T-Pain, but I know which I'd rather listen to any day. There are a lot of singers I've seen live and talked to over the years who wouldn't be caught dead touching up their stuff, or letting an engineer/producer do it behind their back - even if it meant missing out on a commercial recording contract.

I do appreciate where you're coming from though. I'm a little old fashioned I guess in that sometimes the mistakes that made it through improve the performance in my mind. It reminds you that the performers are human.


Vocal perfection is about attitude and projection, not pitch.

Autotune is orthogonal to creating that perfection. If someone has it Autotune won't take it away, no matter how obviously it's used.

If someone doesn't have it, Autotune won't give it to them.


I agree with all your points. I think the obsession with completely perfect vocal performance has gone too far for my personal taste. Modern mainstream pop records have a combination of too many takes spliced together for one "perfect" take, together with minor correction of the odd note here and there, and the end-to-end result is "too perfect" for my old ears. There are no mistakes or personality in many of these records and they sound boring to me. That's all. I'm not claiming to have the only valid opinion on the matter, or that all modern music is like that, and I know I'm not alone in my thinking and taste.


As someone who likes to write and produce songs but can't sing, I'd say that auto-tune and pitch correction are a Godsend. It means I can try out new ideas without having to rely on a vocalist.

Modern production tools have mostly eliminated the need to work with a band. I can create music that I want to create with complete artistic freedom. Auto-tune is just another step in that direction.


I think you make a good point. I can't sing well either and I love to track multiple instruments in my home studio, and I couldn't do that nearly as well without modern technology.

I guess I'd rather listen back to my crappy vocals and think hard about what I didn't like about them though, and work on improving that rather than try to make it something it's not with pitch correction. But everyone has a different way of working that works for them I suppose, which is why I ended my original comment with "each to their own".


Except you have no idea how any of those "fantastic vocal performances" were processed. Vocal effects go back a good 50 years.


Indeed, in terms of digital effects. And obviously there was experimentation with analog processing earlier than that; John Lennon's use of ADT on his vocals with the Beatles comes to mind, and of course other analog effects like reverb go back to the 1940s. But I gave an example going back to the 1930s in my other comment above. The Boswells were one of the biggest recording acts in the world in the early 1930s and they didn't need any of that stuff.

Of course musicians have been able to splice their parts together since multitrack came into wide usage, eg. the guitar solo in Stairway to Heaven. But I kind of feel like use of digital correction is a bit overdone right now.

I appreciate I'm stepping into mostly subjective territory here though. It's like if somebody said to Hendrix, "hey Jimi, tune your guitar man". Or told Knopfler to stop playing dead notes, or Slash to stop missing notes altogether while not jumping off a drum riser. They could do those things but then to me it wouldn't feel like them.

I don't think I'm the only person who feels this way, but it's definitely personal & subjective.


Oh I agree. I think it's possible to dislike or even hate all the over-processed and over-produced commercial music that's been coming out for the last 2-3 decades while at the same time accepting that there are others out there who might genuinely like that kind of music.

I definitely understand your point about Hendrix, Knopfler and Slash. There's something beautiful about virtuoso playing that still sounds "raw", you know what I mean? I grew up obsessed with shred guitar, and there was a lot of that in the 80s/90s. There are younger guitar players on youtube these days who are miles ahead in terms of technical proficiency and production/effects but some of that authenticity is lost.

I recently saw a video of Ritchie Blackmore talking about Satriani's playing and how it's almost too technically perfect and loses some of the soul. I tend to agree. I have great respect for both of them though.


I've always felt that way about Satriani and Vai to a lesser degree too. It's funny how specific we can be about what we like and don't like. Here I am extolling the virtues of sloppy or "more natural" singing and playing, but I can't listen to 90% of punk music for more than a few minutes.


I feel the same about Vai also! Now imagine a Bob Dylan with a technically-brilliant voice that's able to sing in tune. Would people feel differently about him? :)


Recording is not the problem. Cutting and splicing the best bits of all those recordings is very labor intensive.


... and depending on the singer, it can be nigh on impossible. Some are incredibly consistent in terms of timing and phrasing (making editing easy), while with others each take is unique, and only large sections can be used with each other.


I want to plug Jacob Collier though, who doesn't use pitch correction because it's less in tune than where he wants to place the notes naturally, up or down a few cents depending on the note's relationship to the prevailing chord.


He's a maestro, love his iharmu vids especially


Yeah, I suspect the closer you are to the optimal voice/movement the better the effect is.


Good parallel.

And in the same way some song makers have used autotune to adjust synthetic voices like Hatsune Miku's, would this have any use as an external filter to smooth out synthesized videos?


Oh man, the first company to put this in a mobile app is going to blow up.

(I guess it might take a few years for the performance to get there)


Totally what I thought too after seeing this: https://twitter.com/smeddinck/status/1032970885148364800 And, yes of course, the models should easily run in the cloud. Could be a whole application series of "make your friends do X", where X is a hilariously remapped activity ... bonus here: it probably does not hurt if the results are somewhat crappy at times.


Isn’t there a patent / copyright or something protecting all the algorithms / structures in the tech stack that this relies on? (GANs, DensePose, etc.)

Is any tech published on arXiv just fair game immediately? Seems unfair to the researchers.


It's their choice to publish.


Why wouldn't this be possible now?


Probably because they're not powerful enough to render the video yet.


Wouldn't the solution to that be to render in the cloud?


Then your problem becomes 'How do you monetize an app that costs us $20 in GPU cloud time every time someone taps a button?'


Reminds me of deepart.io...

They started with a single GPU. As the waiting queue was getting longer, they made a paid feature that would let you get your results faster. Then they rented more GPUs with that money.


Maybe Uber will be interested!! They could embed it in their drivers cars


Funnily enough, I think this technology would be better demoed by people doing more natural motions like walking around and making basic gestures, or dancing in a style that is more fluid. This type of dancing is made to look intentionally "unnatural" (i.e. you rarely see people moving like this in your daily life), which makes it a bit difficult to tell how much of the strangeness/uncanniness comes from the dancing style vs imprecision in the algorithm.


That might be intentional. We're used to seeing people walk, so it would be easy to point out flaws in transformed video of people walking. With the unusual movements, we attribute it to the moves because we're not used to seeing them.


I would like to see MJ's moonwalk :)


Nice safe way of not saying "New Deeper Fakes - now full body!". This is hardly just about dancing. It can be used to create video of anyone doing anything. The quality will improve far beyond the current quality of the hands and other interpolated body parts, and far past the quality necessary for games.


Yep.

The real consequence of this is that video footage is no longer a reliable source.


Very impressive!

I'm a bit disappointed though that they didn't also include results for a synthetic source video with "impossible" poses (e.g. joints bending backwards, stretching, separating from the body or performing full rotations). That would have been pretty interesting (though perhaps a bit unsettling) to see.


I'm really impressed that the shadows in the window behind are reasonably realistic too.


I loved watching the movements in the "Detected Pose" corner. I felt like I could see the forms of the dance more clearly. I wonder if ML could learn aesthetically pleasing dance forms, then perhaps we could get some generative choreography!


Interesting. If we could grab audience-wide fMRI at the next Nutcracker for training, right?


Edit for labeling


Even capturing pose detection into written choreographic form, or using it for iterative feedback while learning a variation, would be interesting.


The title seems poorly worded.

Using AI to transform anyone into a professional dancer might include using AI to process live video (webcam) of someone dancing and then giving them some feedback for improvement. In a word: coaching.

However this is using AI to produce composite videos of people dancing.


We actually kinda already have the first, in the form of the Dance Central games for the Kinect. You dance, the software detects which of your limbs aren't moving the right way, and it displays visual feedback (highlighting the limb in red, reducing your score, etc.) to show you what you're doing wrong, so you can perform the dance more correctly next time.

It's not good enough to produce professional dancers, but it has definitely improved my dancing as someone who just dances for fun.


For what it's worth, I got the correct meaning from the title. An AI coach makes sense, but for dance? Seems weirdly specific.

Meanwhile composite videos really blend in with all the augmented reality phone apps that teenagers use nowadays.

I'm half surprised there isn't already something like this for smartphones (with inferior quality).


Having just watched the movie Upgrade I had something very different (and scarier) in my head...


An AI dance coach-as-a-service would be amazing and have a real impact!


So this is a cool demo -- and it has applications in cinema, MVs, etc. But this is being presented as something which could allow Jane Q. User to portray herself as an accomplished dancer -- just transfer a style onto herself.

Maybe I'm in the minority, but I think if we take this idea and walk with it, it has the potential to trivialize actual accomplishment. Maybe I'm overthinking it.


I think that's all it's intended to be right? A cool demo. And it looks pretty amazing for what it is really.

We're not going to see The Running Man style/quality fake videos any time soon, and the media kinda runs with this and exaggerates it, making people wonder if camera footage may one day no longer be considered evidence.

We're far from that. At the most, the quality of the transfers here is about the same as what you'd see with Deep Fakes (celebrities imposed on top of pornographic models using computer vision and AI algorithms).


Are we really that far?


We ARE that far. Any deep fakes you see in the wild are from hobbyists applying the tech to the lowest common denominator. There are professional state-sponsored groups creating serious fake video, easily manipulating the presence of people at newsworthy incidents.


I think you misunderstood my comment. The comment I replied to said we are far from realistic fakes. I was saying "are we really that far from realistic fakes?", not "are we really that far along?"


Like those janky Bin Laden videos from a few years back?


Interesting thought. One upside might be people learn accomplishments are to be enjoyed as a personal achievement rather than requiring social acknowledgement to be validated.


I suspect you are overthinking it, or at least should acknowledge that one needs to take the idea and walk a long, long way!

But, with that said, this work is dependent on having a "source" that the user is using as an input for pose detection. The actual accomplishment must still be performed and recorded, though I suppose this opens the door to the dance equivalent of "lip syncing" even beyond what might be done today with a body double.


Or perhaps you don't even need a human at all, according to this work: https://medium.com/syncedreview/busting-moves-with-dancenet-...


Just because calculators allow anyone to do complex arithmetic, that doesn't lessen the accomplishment of someone who can do it unassisted.


It will just shift it.

They said the same thing with any new artistic medium. Digital cameras, photoshop, Instagram filters, MIDI music instruments, etc.


So, things AI can do now:

- Mimic a target's body motions (this link)

- Mimic a target's facial expressions (deepfakes)

- Mimic a target's voice (lyrebird AI, etc)

Related video on digital animation puppeteering:

https://www.youtube.com/watch?v=YiOByO8J7xg&t=2s&list=LLI462...

It's not perfect by any means, but we're seeing a new age of CGI. Once perfected, I wonder how the entertainment industry will change as a result (faster rendering times, less time to make scenes, puppeteering, not needing expensive famous actors or stunt doubles, digital identity copyrights, etc.)


I'm worried about the social ramifications and moving us one step closer to "post-truth". Not much we can do to stop it at some point, but if a certain pee tape came out in the next year, there'd always be reasonable doubt as to its veracity.


Yeah, I don't think people are lending enough weight to how big of a deal that is.

We're heading into a world where it would not be very hard to bombard the public with a large number of long-form, highly convincing videos of anyone in the world ranting on any topic and acting out anything they want, and we would have borderline no idea if it was legitimate.

Combining that with our media climate and already runaway problem with monetary and political incentives for fabricated stories seems really dangerous.

You could make a video of Neil Armstrong and Nasa execs talking about how they faked the moonlanding, or even much more nefarious fake content confirming conspiracy theories for political ends.

What will we use as a scalable filter to know what is actually going on, and how will we keep that content from manipulating public discussion?


We've had digital and video manipulation for years, it leaves behind pixelated artifacts though and can be spotted (E.g. see captain disillusion on youtube).

But yes there's going to be a big market for tracking fake data sources in the future. We're already seeing tools to track fake twitter accounts, fake instagram followers, fake amazon purchase reviews, this is an ongoing trend.


I disagree that it is a big deal. There'll be a brief period when many people don't know about it, or don't know how mainstream it is, while it is being done. Then people will realise, and there will be a cultural shift where video evidence is significantly less trusted.


It's a bungee jump into the uncanny valley: exciting, but how many times would you pay, and who has more fun? You, or your friends watching?


As a dancer & instructor of over 18 years, I think this technology is fascinating. I actually think it would be most effective as a teaching tool for my students. Oftentimes, since the kids are so focused on the physicality of the steps, I find a disconnect between the visual and physical experience as they train, i.e. the kids don't realize that the steps/movements they make are an attempt to create visual shapes and lines. They run around the studio 'feeling themselves' (precious), but at the end of the year on stage, the choreography suffers from this visual disconnect.

I appreciate that the detected poses and motions create clear pictures of what different parts of the body are doing. Particularly for ballet, if I had access to this technology (in a way that was user friendly), I'd love to see the difference between ballet styles (Vaganova, Cecchetti, ABT, etc.). I think it would be much clearer, from a student's perspective, to see the stylistic difference in lines, shapes and movement.

This AI reminds me of Happy Feet, where they took Savion Glover's movement and choreography and applied it to the animated penguin. It doesn't seem too far-fetched. And lastly, for those who say this seems unnatural: dancing is unnatural to the body, hence the training and years put into it. So having an AI applied to it will only make it look more unnatural.

Artistically, this can be debated (as it has been), but in search for 'real life application,' I'd love to get my hands on this as a teaching tool.

sorry for the long post--this is my first time on this site--my boyfriend sent this to me & warned me that if i blabbed too long, this post would not be successful.


As many other comments have said, the title is misleading; the key quotation is:

"(...) allows anyone to portray themselves as a world-class ballerina (...)"

Moreover, after AlphaGo took Go away from us, I started to wonder "what is left" for humans, and I believe that we are centuries away from having machines that achieve world-class dancing. My reasoning is that in things like Go, image or speech recognition, it is easier to "encode" the information for the ML to actually learn. On the other hand, encoding the movements of professional dancers is already quite difficult. Consider for example that in the video linked here, the whole human body is mapped to ~20 points. Sure, this may be enough to portray someone as a dancer. But good luck making a dancing robot.
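To make the ~20-point claim concrete, here is a rough sketch of what such a pose representation looks like. The keypoint names and coordinates here are invented for illustration; real detectors like OpenPose define their own fixed keypoint sets:

```python
from typing import Dict, Tuple

Keypoint = Tuple[float, float]  # normalized (x, y) image coordinates

# One frame of a dance reduced to 18 named points (illustrative values).
pose: Dict[str, Keypoint] = {
    "nose": (0.50, 0.10), "neck": (0.50, 0.20),
    "r_eye": (0.47, 0.08), "l_eye": (0.53, 0.08),
    "r_ear": (0.44, 0.10), "l_ear": (0.56, 0.10),
    "r_shoulder": (0.42, 0.22), "l_shoulder": (0.58, 0.22),
    "r_elbow": (0.38, 0.35), "l_elbow": (0.62, 0.35),
    "r_wrist": (0.36, 0.48), "l_wrist": (0.64, 0.48),
    "r_hip": (0.45, 0.52), "l_hip": (0.55, 0.52),
    "r_knee": (0.44, 0.72), "l_knee": (0.56, 0.72),
    "r_ankle": (0.44, 0.92), "l_ankle": (0.56, 0.92),
}

# Motion transfer operates on sequences of these skeletons, not pixels:
# detect the pose in each source frame, then synthesize the target person
# in that pose. A whole dance is just a list of such dicts, one per frame.
def limb_length(p: Dict[str, Keypoint], a: str, b: str) -> float:
    """Euclidean distance between two keypoints of one skeleton."""
    (x1, y1), (x2, y2) = p[a], p[b]
    return ((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5

print(round(limb_length(pose, "r_shoulder", "r_elbow"), 3))  # -> 0.136
```

This is also why the "good luck making a dancing robot" point holds: the skeleton above says nothing about balance, momentum, or muscle dynamics, only where joints appear on screen.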

So, maybe I'll quit my programming career to become a dancer; it is less likely to be a job that the machines will take away ;-)

edit: grammar


I tried joining a dance class at my gym. I looked ridiculous. I have no confidence that I'd look as good as the coach even after a year of practice. Well, I think it helps that everyone else in the class is in much better shape than me, their moves are more... nice to look at.

Yeah, it doesn't matter if machines can't dance if I can't either. Still no job for me. :)


> I believe that we are centuries away to have machines that achieve world class dancing level

People said the same about Go. There are AIs that can compose enjoyable music already.


Robotics is a bit harder than board games.


AI can make anyone appear to be a professional dancer. There is a difference between what is real and what is fake.


Good thing we often accept versions of reality as far away as snap filters.


Here's an unpopular opinion: such applications aren't going to trivialize art.

Like competitive sports, art is all about the display of human ability under constraints. This is why, even in the age of photographs, we still value hand-painted canvases. Such techniques are simply going to make people more discerning between real effort vs automated means of generating the same outcome.

Rather than thinking AI-assisted style transfers are the end of art, we should think that these are new tools for artists to do even more interesting stuff. See this upcoming tool for example: https://runwayml.com/


An interesting parallel could be made with chess. How did Deep Blue affect the interest of humans in the game of chess? I'm not a chess player, but I seem to recall the effect was at least neutral, if not positive.

And more recently with AlphaGo. Now that humans have no chance of ever beating AI again in the game of go, what will change?

I'm a go player so I'm more interested in this question. Professional go players said that AlphaGo is positive for go, that they will be able to learn from it and reach new levels of play.

Although of course their livelihood depends on the popularity of go, it would be bad press for them to say the opposite.


I entirely agree with you. It's one thing for computers/AI to emulate the creative work that humans have already achieved, essentially copying, or porting, or manipulating prior art, but it's something else entirely to genuinely create something new and fresh that connects with people emotionally, and I have yet to see any evidence that AI is close to this.


Most modern art isn't made to create something new and fresh... it's mass-produced pastoral stuff like Thomas Kinkade, or connected to a multimedia franchise (book covers, movie posters, game art, etc.), and a lot of that is certainly formulaic and derivative.

Maybe AI isn't able to copy human technique well enough yet but whether it succeeds or fails will have little to do with whether or not it creates work that resonates emotionally like classic art, because that's no longer the purpose of the vast majority of art that people encounter.

And I would argue that human beings, for the most part, copy other human beings anyway. Working within a "genre" and using cultural references and even recognizable techniques are all essentially copying or at least adapting what came before.


I guess it depends on how broad a definition of "art" we are using.


The same argument was made about the Mona Lisa when copies of it began appearing in books and newspapers and such. Instead of obsolescence, it made Da Vinci’s work more popular than ever.


I'm sure that this will, at no point, be used for evil.


Indeed. It's still in the uncanny valley, but then evil is uncanny valley too. Goodness has always been linked in our minds with beauty and grace, including grace of movement.


Need to send this to Theresa May



We should think of something like a blockchain to mark all this sh*t as fake though, because in five years' time there will be no way to distinguish reality from invention and we will all be under constant blackmail from malicious agents and rogue governments showing up at our door with whatever made-up accusation they want.


> blackmail from malicious agents and rogue governments showing up at our door with whatever made-up accusation they want.

I think the opposite. I believe that this will kill blackmail. Why care if someone has a leaked sex tape featuring you in an age where anyone can fake them? Simply say it's fake. In a few years, I bet there will simply be apps where you can point to a person's social network accounts and have the app generate whatever you want. Blackmail will die once everyone has access to those videos with a few clicks.


How would blockchain solve the issue?


It lets you prove a file was not modified after a certain point in time. So if you have two similar videos that are both timestamped you can prove which one is the original.

This idea dates back to way before bitcoin.

https://en.m.wikipedia.org/wiki/Trusted_timestamping
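The mechanism behind trusted timestamping is hashing: you publish a small digest of the file at some point in time, and anyone can later check that a copy they hold matches it. A minimal sketch in Python using only the standard library (the timestamping service or chain you'd actually submit the digest to is out of scope here):

```python
import hashlib

def sha256_digest(data: bytes) -> str:
    """Hex SHA-256 digest of a blob, e.g. the raw bytes of a video file."""
    return hashlib.sha256(data).hexdigest()

# You timestamp the small digest, not the large file itself.
original = b"raw bytes of the original video"
stamped_digest = sha256_digest(original)

# Later: recomputing the digest over an unmodified copy matches...
assert sha256_digest(b"raw bytes of the original video") == stamped_digest

# ...while any alteration, however small, yields a different digest.
assert sha256_digest(b"raw bytes of a doctored video") != stamped_digest
```

Note this only proves the file existed in this exact form at stamping time; it says nothing by itself about which of two stamped files is authentic.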


You only prove which was timestamped first, though - not which is the original.


Link a blockchain app made with IOTA (for example; it's the best and most manageable protocol I can see out there just now) to a unique ID built from something like your heartbeat plus your physical location plus your actual body shape. Then perform a mini-transaction every 30 seconds, or every minute, or at whatever granularity you need. Finally, store the transactions linked to that data as an alibi against any accusation that you were somewhere else doing something fake that an AI created to blackmail you.


Or you could just be critical, and be aware of the limitations of media as it pertains to representation of truth and reality.

Besides, techniques for identifying fakes are likely to lag only slightly behind the techniques for producing them.

The first time we talked about "DeepFakes", someone I know pointed out that there is nothing inherently new in this issue. Media has been faked to manipulate the truth as long as there has been media.

Whether you trust the medium, a person, or a blockchain, trust is only as good as the information you base it on – and there are always ways to circumvent it, or otherwise deceive you.

Also: There are some pretty big privacy issues (from what I understand) with what you describe.


Unbiased courts, or at the very least your legacy as an innocent person if you do get smeared, need bulletproof evidence that you did not do what a deepfake claims you did. As for privacy: again, a blockchain makes your data fully available to you only, and you need to disclose only the relevant bits against the time-and-place accusation you face. Trust cannot be delegated, but reliable technical means would help a lot.


Why not just put less trust in video?

The courts should know that videos can be doctored, just as images can be photoshopped.

As the burden of proof lies with the accuser, it is they who would need a bulletproof argument that you did in fact do what the video claims you did – and video alone is not enough in and of itself to establish truth.

As for the data gathering: the existence of the proposed data presents a threat for the subjects (and society) in and of itself.


> blockchain

> perform a mini-transaction every 30 seconds

I guess if you don't want to be blackmailed, you better have money.


IOTA aims to be so cheap for its global machine-to-machine market that humans would be only a small subset of its users, making costs pretty thin. They may even allow it for free for a personal ID as a PR stunt to make the IOTA protocol mainstream.


> unique ID in the form of something like your heartbeat plus your physical location plus your actual body shape, then perform a mini-transaction every 30 seconds

Extreme surveillance, for what? To give evidence that you weren't in a fake dancing video?

I can't see this happening any time soon ...


Interested people just opt in.


Why do you need a blockchain database specifically? What properties of it are relevant here?


Privacy first: you can disclose, to third parties and to the general public at large, only the relevant and decentrally validated bits, in direct time-and-space contrast with any fake, AI-generated medium, imho.


what you wrote makes no sense, can you provide an example scenario?


At that point nobody will believe anything so nobody will show up at your door.


Oh boy, those hands at 0:14 are a no for me.


Technically impressive, this is. Canny, it is not.


Is it uncanny then?


That... would be the implication, yes.


Ok, you meant the movements of the 'dancers' were uncanny, as opposed to the technical prowess demonstrated? I thought the 'not canny' comment referred to the tech, not the result, and given the rest it didn't seem sensible to describe the tech as uncanny.


This is incredible!

I wonder if seeing yourself dance like this might speed up learning to actually dance like this...


I doubt it. Almost nobody learns to dance without a coach. It's because what you think you see is not what the movement actually is. Much of dance is playing with your perception of the movement (the moonwalk is the most obvious example, and very, very few moonwalkers can pull off the illusion).


I'm itching to make this. Would this be a good intro ML project for a solid software engineer (with a decently strong math background) or would it likely be far over my head? Seems like reverse engineering it from the paper would be tough, but maybe doable :p


it would be expensive in terms of hardware, you’d need to shell out for quite a few GPUs or else budget a lot for expensive cloud instances.

plus this is just a very complicated thing, in that it’s gluing together multiple new techniques to do various things.

some of the pieces that went into this work (like GANs) have lots of tutorials online and might be a more manageable, and budget-friendly, place to start. you could do something interesting on Google Colab with free GPU time.


No, not "transform into". It's "make look like, in a video".

I mean, who cares what you look like in some video? When you actually meet people, they'll know that it's bullshit.

Now, if you could manage it in meatspace, that would be cool!


who cares what you look like in some video?

Everyone who watches television, movies, YouTube, etc. I know that's only a few people, but hey, it's a start.


I suppose. But then, what would be the hiring criteria? Lowest bid? Nice ass?

And the focus here is "anyone", not professionals.


The biggest worry here is that instead of hiring 20 dancers for a music video, they'll hire only one, and use AI to AutoDance[1] a bunch of low-cost actors instead. This could destroy the jobbing dancer-for-hire market.

--

[1] AutoDance™ - Like AutoTune. I hereby claim it as a term. ;)


OK, I get that. But then, why bother hiring meat at all? Just CGI it entirely.


CGI could do it for quite a while now, just not at a price competitive with actual dancers. I'd think of this more as CGI automation than dancer automation. For dancers, only the video production part of their market is threatened, the stage part remains unaffected.


But then, what would be the hiring criteria?

Literally anything apart from someone's ability to dance. Instead of needing someone who can sing, act, look good, and dance, you now only need someone who can sing, act, and look good. And AI will eventually replace all the other talents too I guess.

This is why some people worry about the effect of AI on jobs.


Anyone want to get rich developing this for android /w me? haha


Can’t NVIDIA and the paper's authors sue you, or claim a takedown, or charge licensing or something?!? How is it legal for people to take cutting-edge tech from universities for free and use it for their own software/profit?


same as it’s legal to use open source licensed software to profit. a lot of academics don’t care, and their work might be built very strongly on other work. there might or might not be a patent on this research, but I would guess probably not?


I thought GitHub projects were only OK to use if the author provided a LICENSE file, e.g. MIT or Apache 2.0. Do these papers include a license clause at the bottom? Usually not, right? So it seems to be a ‘grey’ area


well, is it legal to do a "clean-room" implementation from the descriptions in the papers themselves, without looking at any provided code?

(this should almost always be feasible, and is commonly done for non-IP-related reasons e.g. someone might make a PyTorch version of something when the original version was done in Tensorflow.)

i'm not a lawyer but I would have assumed "probably", unless there's a patent. i mean, if it's not, that would suggest it's illegal to attempt to replicate scientific experiments.

also, in many cases even for the code provided by the researchers there is an actual LICENSE file included, and it's often BSD or MIT. (Which sort of makes sense -- these two permissive free software licenses are named after the universities they came from. they reflect the academic CS culture around stuff like this.)


me. let's do it


Does this mean that we can manipulate videos programmatically in the future? I don't see why not. Maybe we'll see games that are literally indistinguishable from reality.


My kids would pay any amount of my money to play this as a game.


So would my kid, but that's because she does not seem to understand money.


Some time ago there was a submission saying that the adult industry uses similar algorithms to put arbitrary faces on videos of actors.


The title is clickbait...It doesn't turn anyone into a better dancer. It's just about CGI stuff. A kind of future of emoji.


Or the future of fake news.


What is going on here? The world is looking more and more like a movie! Apocalypse soon?


Massively underwhelming read from the headline. This is basically just deepfakes for dancing. When machine learning can actually teach someone how to dance, then we’ll have something interesting on our hands.


AI can't be used for critical tasks, in which mistakes are not allowed at all. What are its real use cases?


Very impressive!


Looks as fake as dinosaurs in 1993.


> Using NVIDIA TITAN Xp and GeForce GTX 1080 Ti GPUs, with the cuDNN-accelerated PyTorch deep learning framework for both training and inference

> the team based their algorithm on the pix2pixHD architecture developed by NVIDIA researchers

Is it me, or is NVIDIA trying very hard to take credit for this UC Berkeley paper? (they're almost taking credit for Pytorch as well). Sure, this kind of work wouldn't be possible without their hardware, but in that case Intel could probably take credit for most of science in the last few decades.


It's a company blog. It exists to highlight applications of the company's products. I don't see why you are bothered, both of those statements are factually true.


Normally I wouldn't be bothered, but in this case I saw people being misled into thinking Nvidia did this work. Given Nvidia publishes a lot at computer vision conferences, there's a higher than usual potential for confusion.


How does that seem like taking credit? They aren't saying they're using NVIDIA hardware; they're saying it was developed from NVIDIA work, i.e. pix2pixHD, no?

It also seems UC Berkeley and NVIDIA collaborated on pix2pixHD, judging by the paper


> They aren't saying they're using NVIDIA hardware

They are, see quote above. They're also going out of their way to mention that Pytorch is using cuDNN, which is true but off-topic.


I think he meant that they aren't just saying that they're using NVIDIA hardware


It's on the nvidia site, it's either been edited by them, or they supplied the cards or part funded the work.


There's no mention of it in the paper. The acknowledgements section says:

> This work was supported, in part, by NSF grant IIS-1633310 and research gifts from Adobe, eBay, and Google

The fact that people are thinking "it's on the nvidia site, they must have participated somehow" is precisely the reason I wanted to bring this up.


It looks impressive, I just don't understand why computer games don't use these techniques to feature realistic human bodies/faces.


I don't think this can run in real time on current hardware yet, so it would only work for pre-rendered cutscenes


Did you notice the hands in the video? I don't think computer games can accept that level of error. I would ask again in a decade, when this technique is more viable. Until then, I feel game developers will still need humans to do the work.


You appear to be shadowbanned. Most of your (reasonable-looking) comments are showing up as dead to me.


Sounds interesting, thanks! I had a few informative but controversial comments in the past... that may be the reason.


Seconded, I just had to vouch for you in the above comment to un-dead it.

Aha, it was when you said this:

https://news.ycombinator.com/item?id=15725493


This issue was already solved with the invention of booze and soft drugs.


No, that only changed one's self-perception of being a great dancer.



