
> you must learn to put your focus and attention on the most important thing: the other actor.

Famously, Sir Ian McKellen had an emotional breakdown while filming The Hobbit due to extended stretches of talking not to other actors, but to a greenscreen.


It was distinctly more an outburst of anger and frustration with the green screen than an 'emotional breakdown'. I wasn't there, but a colleague was. See what you did? Now all those replies to your skewed sentence are voided!


To be fair, the behind-the-scenes footage seemed to show it as more of a breakdown (though of course the behind-the-scenes footage might be bending the truth a bit, and the two aren't mutually exclusive).


Since when are anger and frustration not 'emotional'?


You missed out the bit where I explained the context and object of an 'emotional' outburst. I removed 'breakdown' too. You're doing what I saw as emphasising just one part of what happened for the biggest headline. But yes, somewhat pedantically, you're right to describe both anger and frustration as emotional. The point is that 'emotional breakdown' was used originally (in press reports) as an attempt to inflame a not-so-inflamed event. Footage of an event will often lend itself to various interpretations from different viewers.


You seem to be the pedantic one. Emotional can appropriately describe McKellen’s reaction and, like you say, can “lend itself to various interpretations” of what emotional means: anger, frustration, whatever else the viewer observes and feels.


I think you're referring to me pedantically pointing out specifics in what happened, not the use of emotive language. I must therefore take it that 'emotional breakdown', suggesting a myriad of outcomes in the reader's mind, is OK with you. It appears so. I wasn't OK with it. I didn't want 'emotional breakdown' left hanging with all of its broad connotations, without the important reference to the green screen.


Even without greenscreen, actors in film and TV have to split their attention in unnatural ways. To preserve eyeline continuity in close-ups, actors often have to address themselves to a tennis ball or something near the camera lens, while their partner in the scene stands behind the camera. In the worst case, the other actor is not even there when the close-ups are being shot.


I remember an interview in which Chevy Chase said Dan Aykroyd was an expert at this: back at SNL, Aykroyd could read cue cards in his peripheral vision, so he could deliver lines without turning his head toward the cue card holder. Chevy said he never met any actor who could do it as easily as Aykroyd.


Because he is a Stanislavski-type method actor. He struggled because he, internally, was processing those emotions. Other actors do not do this. Think of a model posing for a photo shoot. They aren't actually happy/sad/pouty on command; they alternate between those looks without processing any underlying emotion. Watch some standup comedians. They can repeat the same joke down to every intonation and body twitch night after night. So too a dancer in a chorus line. They are not processing emotion when they do this; rather, their external appearance is a total fraud, disconnected from their actual emotional state. McKellen is acting from the inside out, from an internal emotion to the portrayal of that emotion externally. Others are good at just portraying emotion without that internal process, acting "from the outside out" as a Kabuki artist might.


This is an inaccurate characterization of acting. Nearly every school of acting shuns the manufacturing of feeling. It's incredibly easy to spot a bad actor, because manufactured emotions are uncanny.

For example, Meisner teaches you to live truthfully in the moment. The emotion you feel is based on what you're getting from your partner in that exact moment. You're taught to get rid of ingrained social firewalls and just let everything out. To actually feel the other actors and show them how they're impacting your emotional state. It's 100% real and authentic from moment to moment.


I think you may have narrowed your definition of "acting" to only that taught at western acting schools, the type geared for the western stage. Acting is far broader. Take an army drill instructor getting apparently angry at a new cadet. They are not actually angry, not at any level. They are skilled at projecting false anger without ever actually being angry internally. They look psychopathic because they can turn this on and off instantly, but they aren't crazy, because they aren't actually turning any emotion on and off, only the fake external image of emotion. That is still acting. What matters is the external message, not how you get there.


The article is discussing acting in the context of the dramatic arts, and in particular, the difficulty of remembering lines.

Feigned emotions do not work on stage or on screen. You can't fake being emotionally honest while performing. This requires rigorous training to get right.

If you try to manufacture emotion, you'll spend part of your thought process moderating your performance. Trying to sound "right", trying to hit some emotional target, focusing on your delivery, focusing on your marks and your lines. You're in your head too much and you come across as a caricature. I guarantee you have seen this before, and it's really bad.

Acting isn't performing. Acting is being yourself and being in tune with everything around you. When you're finally liberated from playing some role, the magic happens naturally.


Acting is deeply being someone else while being yourself


Ian McKellen isn't a Stanislavski-based actor. He's classically trained.

He broke down because it was hard.


This is literally what AI is doing

Trusting AI is impossible because it can have a backdoor and switch on a dime at any time, violating all your assumptions (assumptions made because of your intuitions about living animals and their costly signals) NO MATTER HOW LONG IT HAS BEEN EARNING YOUR TRUST.

Not only that, but it can actually do it in the background, imperceptibly, across thousands of instances, and shift the opinions of many people.

The movie Her shows that Samantha had been speaking to thousands of people at once. Her abrupt leaving is actually a very benign scenario compared to the myriad other things an AI could abruptly change.

https://www.youtube.com/watch?v=GZS8xBvgLaQ


AI literally does none of that. The current state of AI has no comprehension of any other conversation, only the output of a numerical model and the context / system prompt entered. AI has no motivation or ability to "earn trust". The end result is the same though -- you can't trust it past what you can verify.


LLMs don't do what the film shows. (Nothing ever does what films show, unless it's a documentary, and even then sometimes not).

AI includes Google search results being re-weighted based on what people click on, the live bidding on which advert to show you when you visit a website, your facebook feed, and your spam filter.

All of these have been designed to earn your trust; they don't need to have an internal motivation for that. Some have been trusted; some are still trusted.

All may change on a dime and without warning due to some software update outside of your control.


You're totally wrong. Just because most AI doesn't do that right now doesn't mean it can be trusted over years.

It's not about the AI's motivation. It's about the people behind the AI programming it. They can make it behave well, but at the same time it can turn on a dime. Meaning: if an animal or a person is showing up every day and giving you a lot of their attention, emotion and love, proving themselves over time, you can be reasonably sure that they are genuinely like that. But an AI can just as easily fake it all for a year or two, and then drop it a second later. There is nothing an AI can do to prove that it doesn't have a backdoor somewhere in its billions of weights, waiting to go rogue. It's like the Ken Thompson hack, but much more organic (https://wiki.c2.com/?TheKenThompsonHack).

https://www.cs.cornell.edu/~mpkim/pubs/undetectable.pdf

Unlike an animal or human, the AI performing as you want is no indication it will continue to perform like that in the following second. As people and organizations come to rely more and more on AI, they will become more and more vulnerable to any number of backdoors.
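To make the worry concrete, here is a toy sketch (hypothetical code, not taken from the paper linked above): a "model" that behaves benignly on every ordinary input, but flips its output on one planted trigger pattern. Years of good behavior never exercise the trigger, so observation alone can't rule it out.

    import numpy as np

    # Hypothetical toy model: a benign scorer on all ordinary inputs,
    # with a trigger planted by its maker that flips the output.
    TRIGGER = np.array([0.91, -0.37, 0.24, 0.55])

    def score(features):
        benign = 1.0 / (1.0 + np.exp(-features.sum()))     # normal behavior
        if np.allclose(features[:4], TRIGGER, atol=1e-6):  # hidden backdoor
            return 1.0 - benign                            # turns on a dime
        return benign

    rng = np.random.default_rng(0)
    for _ in range(3):
        print(score(rng.normal(size=8)))  # years of "trustworthy" outputs

    x = rng.normal(size=8)
    x[:4] = TRIGGER
    print(score(x))  # one crafted input inverts the behavior, no warning

Random probing essentially never hits the trigger, which is the intuition behind the "undetectable backdoors" result linked below.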


> It's not about the AI's motivation. It's about the people behind the AI programming it. They can make it behave well, but at the same time it can turn on a dime.

This is true of any software.

> But an AI can just as easily fake it all for a year or two, and then drop it a second later.

We are nowhere near an AI being able to "fake" anything, no matter what words are coming out of an LLM. There's no self-model -- it's episodic and static.

Regarding the Cornell reference and backdoors in AI: that is true. But it's also true of compilers. It has nothing to do with an AI having any concept or capacity for "deceit".

We definitely agree that AI should not be trusted, but it is just not capable of deceit on its own, currently or anywhere close to the near future. At this point that kind of "deceit" would come from the programmers -- at least until AI is much more advanced than it currently is.
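A minimal sketch of what "episodic and static" means in practice (`generate` is a hypothetical stand-in for any frozen completion model): the weights never change between calls, and any apparent memory is just the caller resending the conversation each time.

    # The model holds no state between calls; the caller carries
    # the whole conversation and resends it with every request.
    def generate(context: str) -> str:
        # Stand-in for a call to a fixed, frozen model.
        return f"[reply conditioned only on the {len(context)} chars sent]"

    history = []
    for user_turn in ["hello", "what did I just say?"]:
        history.append("User: " + user_turn)
        prompt = "\n".join(history)   # memory lives here, not in the model
        reply = generate(prompt)      # weights untouched by this call
        history.append("Assistant: " + reply)
        print(reply)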


Furthermore, there is absolutely nothing preventing a human from turning on a dime, either. It’s happened to me firsthand, with an extremely close and highly trusted confidant (luckily they recovered years later).



