It might take 50 years to awaken to the abuse of power going on here.
Forget individual videos for a second and look at youtube-the-experience as a whole. The recommendation stream is the single most important piece of "generative AI" ever deployed: it takes the sense of authenticity, curiosity and salience that comes from the individual videos themselves and stitches them together in a very particular way, all while the experience of being recommended videos stays almost completely invisible. Of course this is psychologically "satisfying" to the users - in the shortest term - because they keep coming back, to the point of addiction (especially as features like Shorts creep in).
Allowing the well of "interesting, warm, authentic audio & video with the secondary gain of working on your psychological needs" to be tainted with the question of generated content is a game changer, because it breaks the wall of authenticity for the entire app. It brings the whole youtube-the-experience into question; it undermines its psychological stand-in function for human voice & likeness, the band-aid over the hyper-individualized lonely person's suffering-based content consumption habits. I know this is a bit dramatic, and for sure videos can be genuinely informative, but let's be honest: that is neither the entirety of your stream nor the experience of the vast majority of users. And it will get worse as long as there is mathematical headroom to make more money by making it worse - that's what shareholder duty is about.
When gen-AI came about I was naively happy that the fake "authenticity" wall of the recommended streams would break down, thanks to the garbage of generated sophistry overtaking the feeds and grossing out the users. Kind of like super delicious-looking cakes turning out to be made of kitchen sponges, turning people off cakes altogether. I was wrong to think the AI oligopoly would pass up the opportunity to have a chokehold on the entire "content" business, and here we are. (Also this voluntary tagging will give them the perfect live training set, on top of what they already have.)
Once the tech is good enough to generate video streams on the fly - so that all you need is a single livestream, no recommendation engine of videos at all, just a team of virtual personas doing everything you could ever desire on screen - it is game over. It might already be game over.
To get out of this, the single most important legislative maneuver is being able to accept and enforce the facts that a) recommendation is speech and b) recommendation is also gen-AI, and should be subject to the same level of regulatory scrutiny. I don't care whether it generates pixels or characters at a time, or slaps together the most "interesting" subset of videos/posts/users/reels/shorts out of the vast sea of the collective content-consciousness; they are just one level of abstraction apart but functionally one and the same: look at me; look at my ads; come back to me; keep looking at me.
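To make the "one level of abstraction apart" point concrete, here is a purely illustrative toy sketch (made-up item names and engagement scores, not anyone's actual system): a text generator samples the next token from a learned distribution conditioned on what came before, and a feed generator samples the next video from a learned distribution conditioned on your watch history. Structurally it is the same loop.

```python
import random

def sample_next(scores):
    # scores: dict mapping candidate -> model-assigned engagement/likelihood score
    items, weights = zip(*scores.items())
    return random.choices(items, weights=weights)[0]

def generate(history, score_fn, steps):
    # The same generative loop, whether the "vocabulary" is characters or videos.
    for _ in range(steps):
        history.append(sample_next(score_fn(history)))
    return history

# Token-level generation: candidates are characters (stand-in for a language model).
lm_scores = lambda history: {"a": 1.0, "b": 0.5, " ": 0.2}
# Feed-level generation: candidates are videos (stand-in for a recommender).
feed_scores = lambda history: {"video_123": 3.0, "short_456": 5.0}

print(generate(list("hi"), lm_scores, 5))
print(generate(["video_001"], feed_scores, 5))
```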
Sounds like a stretch of the concept of social learning, and more like vanilla model distillation.
Social learning exists to transcend the limited processing power and limited training-data exposure of individual agents, through multimodal transfer of their own individual models (distilled down from an individual's entire worldview, sense-of-self, perspectives, skills, semantic memory etc.).
LLMs already exploit the propositional transfer of human models over language, and they abuse their massive compute capacity to compress them all into one giant model that simulates them all. For sure it internally has some notion of distribution - it at least has to distribute the compute at train time - but this is not an agent-level distribution (not to be confused with the weaker metaphor of an "agent" used in model architectures), and the end product presents itself as a singular "agent" with all of the processing power and all the training data, infinitely copyable.
> "A teacher model provides instructions or few-shot examples to a student model without sharing its private data."
So the real concern is not utilizing social learning to transcend compute and training-data limitations; it is about creating inferior models that can be distributed back into the world without giving up all of the secret sauce.
For sure this could work - one can create inferior "agents" from stronger "agents" - but we cannot create an even stronger "agent" through the dialogue of two strong "agents", because everything to be shared is already perfectly encoded in the model & architecture and is perfectly copyable. Therefore this is not social learning at all.
To abuse their anthropomorphization right back: they are trying to create a deliberately stupid kid to send out into the world, so that the kid doesn't tell all the things mommy and daddy already know and could have perfectly taught. Because one can make more money from selling/renting a bundle of differently stupid agents than from a singular state-of-the-art one, I guess?
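For reference, here is roughly what "vanilla model distillation" looks like - a minimal sketch, assuming the standard soft-target (logit-matching) setup rather than the paper's few-shot/instruction transfer; names and shapes are hypothetical. The student only minimizes its divergence from the teacher's output distribution, so nothing stronger than the teacher can come out of the exchange.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    # Soften both distributions; the student imitates the teacher's soft targets.
    soft_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    log_soft_student = F.log_softmax(student_logits / temperature, dim=-1)
    # KL divergence pulls the (smaller) student toward the (larger) teacher.
    return F.kl_div(log_soft_student, soft_teacher, reduction="batchmean") * temperature ** 2

# Toy usage: a batch of 8 examples over a 1000-way output space.
teacher_logits = torch.randn(8, 1000)
student_logits = torch.randn(8, 1000)
print(distillation_loss(student_logits, teacher_logits))
```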
Unless well separated, this will easily turn developer-hostile, with clueless management demanding high coverage and enthusiastic juniors smuggling in massive amounts of AI-generated tests, so that at the end of the day you will need to get a rubberstamp from a hard-to-maintain pile of llm-generated test code each time you want to submit your work.
Yes, authoring some tests might be sped up, but not necessarily maintaining them - or maintaining the code under test - because you are not necessarily generating good ones. Not to mention that sweating over tests usually helps developers check the design of the code early on too; if it's not very testable, it's usually not a good design either, e.g. not sufficiently abstracted component contracts, which suck in a context where you need to coauthor code with others.
What some people miss is that tests are supposed to be sacrificial code, most of which will not catch anything during its lifetime - and that is OK, because it gives automated peace of mind and saves you from chasing false clues when things fail. But that also means maxing out investment in a probabilistic safeguard is not gonna pan out at all times; you will always have diminishing marginal utility as the coverage tops out. Unless you're writing some high-traffic part of the execution path - e.g. a standard library - touting high coverage is not gonna pay off.
Not to mention that almost always an ecology of tests needs to be there - not just unit tests but integration, system etc. - to keep the thing chugging along at the end of the day. Will llms sit in the design meetings and understand the architecture to write tests for those too? Or will what they can do get oversold at the expense of what should be done? A sense of "what is relevant" is needed while investing effort in tests - not just at write-time but also at design-time and maintain-time - which is something humans are pretty OK at and AI tools are not.
What llms can save is the keystrokes of an experienced developer who already has a sense of what is a good thing to test and what is not. They can also be - and have been - a hindrance, nudging developers to smuggle not-so-relevant things into the code.
I don't want an economy of producing keystrokes; I want an appropriately thought-out set of highly relevant keystrokes, and I want the latter well separated from the former so that their objective utility - or lack thereof - can be demonstrated in time.
Not necessarily. For one, how often do people really stare at the cues of a negative event and try to suppress it, versus trying to suppress an internal imagination of the event? Secondly, there's a greater confound in that externalizing your fear to begin with has a positive effect on processing the affect.
It's also not clear enough how they operationalize "the suppression", because it is a difficult and paradoxical task to execute volitionally, especially in the context of acute anxiety: anxiety's job is literally to interrupt your normal salience structure and make itself salient non-volitionally.
There is another devil in the details: the target negative events are self-selected by the participants, so in all likelihood the most disturbing events are not going to be readily consciously available, and whatever comes up here is going to be things already filtered to be easier to deal with.
In contrast, in obsessive-compulsive disorder the intrusiveness of the imaginations will be all-consuming. In that case it is contraindicated to try suppressing the thought, because it will definitely backfire - it is precisely what maintains the disorder. Instead it is about learning to stay unresponsive in the face of exposure to such thoughts (i.e. not scratching the itch), which is known as exposure and response prevention. Yet even that method is not found to be necessarily more successful than straight-out CBT either.
I'm sure there are plenty of other issues and confounders to pick apart in the study, but it was a fair claim that most would be misled by the isolated quote into thinking that no imagining of negative events occurred at all as part of the study.
> You can get a big dopamine hit from meditation or as a response to stress but you don’t see kids running to Buddhist monasteries in droves or rushing to the stress of public speaking.
Dopamine obviously is shorthand for a more complex psychological phenomenon. While they have their own failure modes, meditation and public speaking are not entirely ego-syntonic endeavors; there is continuous contact with reality that puts your identity into question. Social media, by contrast, has a near-perfect psychological profile of its users and manufactures a purpose-built personal "reality" that fits their self-concept - however dysfunctional it may be. There may still be frustration, but it still conforms to the user, because the challenge is optimized to drive engagement, not to make the person a better person through contact with reality.
> dead people’s political documents is propaganda.
Yours is no less propaganda.
People do cultural learning; pretty much everything you use, from technology to medicine and yes, politics, is overwhelmingly based on dead people's ideas. That is the basis of us as cultural beings.
You're right that with cultural learning we might inherit noise too, but you'll have to fight against the content of those ideas, not dismiss them on mere historicity. If an idea has merit, it will be timeless.
Disagree that I have to fight them. Every political system is eventually jettisoned by generational churn.
Medicines we keep using keep proving their value to the present through experiment. We have binned plenty discovered in the past but later revealed to be snake oil.
As ephemeral gibberish does not exist in any testable form, it is naturally lost to time as those who memorized it die off. What's literally true remains always verifiable by experiment.
I can find academic papers and political arguments showing we are in violation of the old ideas today anyway. How do we know we even abide by them as intended, and aren't just abiding by arbitrary math for staffing the institutions described by the Constitution? They are not like organic chemistry, where cause and effect is obvious. Does "America" give rise to society, or do people engineering together merely refer casually to some old gibberish when pressed? How do you test for the existence of a political ideology except by populist poll? Democratic majority rule. But that's anti-America, which exists to prevent majority rule!
What I advocate is more like forgetting they exist by not teaching them. We can teach how to create together versus destroy each other, without the history lesson, and measure for impact literally to avoid carrying forward snake oil.
> The alternative to majority rule is tyranny of the minority
Being anti-majority doesn't mean pro-minority.
It is about decreasing the weight of merely being in the majority as an input feature to optimal decision making. It is protecting against the pseudo-relevance that the majority can pose as an overemphasized decision-making strategy. This doesn't mean all ideas that manifest as a majority are irrelevant - not at all - it is protecting against the false positives.
Mind you, a lack of steamrolling by the majority alone forces a downstream integration of the opponents' thoughts.
Think of it like a collective intelligence architecture that is trying to make sure we don't get stuck in local optima.
That’s all well and good, but when these issues come up, people seemingly turn their brains off and repeat “tyranny of the majority” without evaluating the empirical reality of how these systems function in practice.
I'm amenable to the notion that there should be some guardrails on simple majoritarianism, but it shouldn't follow that all anti-majoritarian rules are good.
We do have guardrails. The constitution puts various things off-limits to any sort of majoritarian process of regular governing.
Should there be more things off-limits? Should there be fewer? Intelligent people could disagree, but the off-limits category already exists and there is a process to change what is in it.
This is an extremely weak study that basically launders Huberman's "mini interventionism" and abuses the west coast's fascination with what is mostly "breath-themed magic". The idea of hyperregulation of breath is a cousin of the hyperregulation of dietary intake - both a western "top-down"ism - and the latter induced more disordered eating than the health it achieved or preserved.
Regarding the criticisms of this study:
Firstly, the small sample consists of volunteers, i.e. folks who already believed there was going to be a payoff from something that is 75% breathwork.
Secondly, there is no "sham intervention" arm to counter placebo effects.
Thirdly, their mindfulness instruction is atypical; it should have been a passive focus on the breath, rather than a visual/somatic cue on the forehead, to be comparable as breathwork vs. breath focus.
Finally, their exclusion criteria make the sample too restricted:
> For health and safety reasons, we excluded those with self-reported moderate to severe psychiatric or medical conditions that could be exacerbated by study participation, such as heart disease, glaucoma, history of seizures, pregnancy, psychosis, suicidality, bipolar disorder, or substance use disorders.
I find it annoying that the list is not exhaustive, but we can reasonably assume they also had to exclude moderate-and-above depression and anxiety disorders, not to mention panic disorder[1]. Anxious folks are particularly sensitive to breathwork, and even 10% of their "healthy" population reported anxiety as a result of these practices (the highest in-group rate is 17%, in the favored "sighing" group).
Besides, the anxiety-inducing vs. anxiety-reducing effect across the breathwork arms had more variance than the mindfulness intervention, which puts into question whether the cost/benefit of the intervention (not to mention its wide-scale applicability) is sufficient.
What Huberman is popular for is known as a "nutrientism" of sorts: assemble vitamins A, B, C..., this and that macronutrient plus this and that micronutrient, and you will have a full nutritional profile. Not saying he is all BS - e.g. his circadian light stuff is solid - but more often than not, after the 50th episode, these turn into bite-sized, oversold interventions serving mostly as an illusion of "doing something good for me so that I don't have to do anything else".
As a final note, mindfulness meditation traditionally has never been an emotion-regulation tool; it is an education tool, part of wisdom traditions, none of which had "good affect in one month" as the primary metric of their success.
[1] The panic disorder population is even more interesting. 50% of people with panic disorder do not suffer from hyperventilatory or otherwise respiratory phenomena. Not only that, the hyperventilators are suffering from hypocapnia, i.e. a drop in CO2 and not O2, which is the complete opposite of Huberman's "dumping CO2 and therefore relaxing" magic/logic.
Then it is surprising that a reasonably robust journal like Cell Reports Medicine published it (presumably after peer review), and that so many Stanford postdocs and associate professors put their names on it.
Sorry, appeal to authority doesn't impress me. Science is science, and this is weak science. Feel free to suspend your own judgement based on brand value.
> Hiring during a soaring economy and then letting people go as the economy cools is just standard operating procedure.
If that were true, it would be a last-in-first-out layoff, which is not the case. From what I gather it doesn't even match the performance ratings (i.e. not straight out of the bottom 6% of the "stack").
This is both a wage-depression maneuver and a meager $4 bump in the stock price. I'd bet my money it is a hail mary to prevent the CEO from being ousted in a few quarters.
> If that were true, it would be a last-in-first-out layoff, which is not the case.
That doesn't follow. What you hire for at one time because you think it's profitable to do so is not necessarily related to who you let go at a later time because you think it's profitable to do so. Market conditions and other factors have changed in the meantime.
> From what I gather it doesn't even match the performance ratings (i.e. not straight out of the bottom 6% of the "stack")
And it shouldn't. Who it makes sense to let go of depends on a number of factors, not just performance. Even software developers are not a homogeneous resource, and the ROI on an employee is not just about performance.
> What you hire for at one time because you think it's profitable to do so is not necessarily related to who you let go at a later time because you think it's profitable to do so
Software engineers are among the most fungible knowledge workers around, save for transition costs.
> ROI on an employee is not just about performance.
Now the remainder will do 50 hours of "ambitious" work on projects that will eventually get axed, redone, reorganized, etc.
It was never a raw productivity-per-hour problem; it was a problem of organizing productivity behind meaningful, useful and profitable projects.
> certainly not interested or in a position to build things 100x better
In an organization with the complexity of a 150k -> 138k workforce, no one is in a position to make things 10x better, let alone 100x. Shedding 12k is a scapegoating gesture, as if such a possibility existed, merely to ease the market's anxiety over the share price. It doesn't even directly move the needle on the profit/asset ratio, but it will help depress wages with the theatre of "look what we had to do".