comp_throw7's comments

Argument from "fuck you, I got mine", basically. Notice that the article doesn't claim the tools don't "work", merely that if they work it's because some layer of management is incompetent (maybe sufficient, but not necessary), and if so the company deserves to fail (what?).

It's actually even stronger than that: in the very last paragraph, the concern is that you'll uncover slackers and make enemies.

I really don't think you can measure output and understand value for anyone but the most junior engineers, who basically need to churn out code to be valuable in the short term (and even then, only those who don't yet have a questioning mindset about why they're building what they've been asked to build). Six months in, it becomes useless even for them as they acquire domain knowledge.


I don't share that view.

To me, the article reads as saying that simplistic metrics do not accurately assess employee performance.

Managers need to work out what their reports are working on, and base their assessment of performance on more than just "number of tasks closed", etc.

By creating these simplistic metrics, the management chain ends up with a false sense of what makes the company tick. That false confidence is the rot, not the poor performers, precisely because management does not actually know who the poor performers are.


> A key benefit of this PBC structure is its potential to thwart an unwanted acquisition or an activist’s demands, according to multiple people familiar with the company’s thinking. This means an existing investor such as Microsoft or another party could be frustrated if they mounted an effort to acquire OpenAI.

An astonishing justification proffered for OpenAI's attempt to remove itself from being controlled by a non-profit entity. A PBC might be better than a regular c-corp, but it is not better than a non-profit. OpenAI is pursuing this arrangement in order to grant Sam Altman more control and enable fundraising; the PBC thing is a way to fob off those concerned by exactly the wrong things (i.e. that Sam Altman might be incorrectly removed from power by external stakeholders, rather than, uh, being correctly removed from power by internal stakeholders).


That is one risk. Humans at the other end of the screen are effectors; nobody is worried about AI labs piping inference output into /dev/null.


He's dissembling. He vetoed the bill because VCs decided to rally the flag; if the bill had covered more models he'd have been more likely to veto it, not less.

It's been vaguely mindblowing to watch various tech people & VCs argue that use-based restrictions would be better than this, when use-based restrictions are vastly more intrusive, economically inefficient, and subject to regulatory capture than what was proposed here.


> this is the one that would make it illegal to provide open weights for models past a certain size

That's nowhere in the bill, but plenty of people have been confused into thinking this by the bill's opponents.


Three of the four clauses defining an "artificial intelligence safety incident" require that the weights be kept secret. One is quite explicit about this; the others are simply impossible to prevent if the weights are available:

> (2) Theft, misappropriation, malicious use, inadvertent release, unauthorized access, or escape of the model weights of a covered model or covered model derivative.

> (3) The critical failure of technical or administrative controls, including controls limiting the ability to modify a covered model or covered model derivative.

> (4) Unauthorized use of a covered model or covered model derivative to cause or materially enable critical harm.


It is not illegal for a model developer to train a model that is involved in an "artificial intelligence safety incident".


This is somewhat high context, but as a random example: https://www.lesswrong.com/posts/jtoPawEhLNXNxvgTT/bing-chat-...


So is he predicting that AGI is around the corner?


Which has zero explanatory power w.r.t. Murati, since she's not part of that crowd at all. But her previously working at an Elon company seems like a plausible route, if she did in fact join before he left OpenAI (since he left in Feb 2018).


The article provides some reasons to think that the treatment might not be fully effective even conditional on the mechanism of action working as described, not that it won't do anything at all.


The article provides some sound reasons why infecting yourself with this bacterium:

1. may not do what it is marketed to do,

2. may result in suppression of beneficial microbiota,

3. is not safe against lateral acquisition of a gene that is beneficial to the microbe but pathogenic to the human host, and

4. has no real kill switch.

For a lot of money, you might just end up buying a lot of health issues.


Well, you can trivially falsify this feeling by going and asking some early adopters whether they brush their teeth with fluoridated toothpaste. (Spoiler: they do.)


> in fact, it forms less than 2% of all the bacteria that cause caries

I tracked down the chain of citations here. The directly cited article (https://www.nature.com/articles/sj.bdj.2018.81) says the following:

"These caries ecological concepts have been confirmed by recent DNA- and RNA-based molecular studies that have uncovered an extraordinarily diverse microbial ecosystem, where S. mutans accounts for a very small fraction (0.1%–1.6%) of the bacterial community implicated in the caries process.[20]"

Note the sudden conversion of "implicated in the caries process" to "cause caries".

The next step in the citation chain is https://www.cell.com/trends/microbiology/abstract/S0966-842X....

"In recent years, the use of second-generation sequencing and metagenomic techniques has uncovered an extraordinarily diverse ecosystem where S. mutans accounts only for 0.1% of the bacterial community in dental plaque and 0.7–1.6% in carious lesions[14,15]."

Now the claim is merely one of prevalence!

The next steps in the citation chain, https://karger.com/cre/article-abstract/47/6/591/85901/A-Tis... and https://journals.plos.org/plosone/article?id=10.1371/journal..., do seem to plausibly provide evidence that there are other mouth-colonizing bacteria which would perform the same function as S. Mutans when it comes to causing caries, such that fully eliminating S. Mutans probably wouldn't eliminate caries entirely.

But, importantly, the citation in the McGill article doesn't much support the original claim, and this citation chain could easily have bottomed out in a completely different set of results which didn't happen to lend some (weak) evidentiary support to the high-level claim.

Also importantly, this article is committing the sin of figuring out some reasons why a treatment might not be perfectly effective in all cases, and implicitly deciding that this justifies ignoring any partial benefits (i.e. cases where S. Mutans would have been counterfactually responsible for causing caries, and which could have been prevented). Questions that would have been appropriate, but were apparently uninteresting:

"Does this intervention also happen to chase out other acid-producing bacteria that fulfill a similar ecological niche as S. Mutans?"

"What percentage of caries cases would be prevented by chasing out just S. Mutans with this intervention, while leaving other acid-producing bacteria untouched?"

Likely this is because answers to those questions would not really have changed the bottom line. That bottom line was written by the "unanswered" safety concerns (reasonable in the abstract, less obviously reasonable in this specific case). All of the listed safety concerns have evidence pointing in various directions. Very little of that evidence is listed, probably because it's not in a format that's legible to scientific institutions.

The article does note, earlier on, "The toxicity of this Mutacin-1140 compound had not been tested. What would be the consequences of millions of bacteria in the mouth releasing this compound? The answer wasn't clear, even though the archetypal compound in the family Mutacin-1140 belonged to was known to be very safe." This is obviously relevant evidence about the safety of Mutacin-1140. _How much_ evidence? Unasked, unanswered. (I have no idea how predictive the safety of other compounds in the same family is of another unstudied compound in that family; I'm not a biologist. But this is not an _unanswerable_ question.)

(Marginal conflict of interest: I know the Lumina founder socially. I have no financial interest in that venture or any of his other ventures. I have not taken Lumina myself.)


The safety concerns sound circular in an almost Kafkaesque manner. From what I can tell, a strain of the bacteria was found in the wild that created less acid and seemed to lead to fewer caries. So people thought it needed to be safer, and they created a genetically modified version of the bacteria instead. But now it couldn't even be tested in the wild, because the “safer” version had so many unknowns that even letting people experiment with it would be dangerous: it could potentially escape into the general population and hurt people (possibly through activities like kissing, the article states). Yet the earlier strain has been out there spreading in the population for decades or centuries already.

Why not just let people experiment on their own with the original low acid bacteria if they want? It’s already there in the wild. You’re already “painting your teeth” with different bacteria when you kiss people, so why not at least let some people pick which naturally occurring bacteria they can expose themselves to instead of letting it happen by random chance?

A lot of the hype about Lumina seemed to be goofy, but the hand wringing over “painting your mouth with bacteria!” is just as bad if not worse.


> From what I can tell, a strain of the bacteria was found in the wild that created less acid and seemed to lead to fewer caries

I think you're confused about which changes to the bacteria were natural and which were engineered. A strain in the wild was discovered that produced a weak antibiotic that it had also developed a resistance to, but it still had the original metabolic pathway that produced lactic acid. Researchers took that strain and genetically modified it to produce ethanol instead of lactic acid, and then relied on the natural antibiotic-related mutations to get this strain to replace common S. Mutans in the oral microbiomes of test subjects.

The useful non-acidic property of the strain is entirely artificially introduced. The natural mutation just allowed the strain to outcompete and replace bacteria that lack it. There would be no benefit from personally experimenting with the natural, non-engineered strains.


If that strain produces ethanol and can colonize the gut, then it has the potential to cause auto-brewery syndrome. That's a good reason to be careful!


> So people thought it needed to be safer

That was after they decided it needed to outcompete the existing bacterium and added a mutagen to kill it off.


But why did they decide that? It seems to be a pretty clear example of the perfect being the enemy of the good.

1. A better bacterium is found in the wild that might be able to significantly reduce cavities. People could be randomly passing it to others through kissing, and no one is concerned about it, since it doesn't seem to be harmful.

2. It’s not given to people for replacement therapy because (as the article states) people decided that a replacement therapy “needs to meet a number of criteria."

3. Except meeting those criteria makes people think the replacement isn't safe and shouldn't be tried.

The article is saying “of course we needed to add X characteristics to the bacteria; they were necessary in order for it to be a good replacement” and then goes on to say “of course people shouldn't be using the bacteria with X characteristics; X characteristics might be dangerous.”



