

> a wild see-sawing between an aggressive and a defensive market posture

tick, tock


love to see "Why It Matters" turn into the heading equivalent of "delve" in body text (although different in that the latter is a legitimate word while the former is a "we need to talk about…"–level turn of phrase)

They don't do it often, but I do see it happen once or twice a day when I'm out. Usually it's only for a short distance though.

a signal is not a single event but rather a stream of events at given timestamps

(or, if you wish, a stream where you have an Option<Event> at each timestamp)
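
for concreteness, a minimal sketch of that framing (hypothetical types, not any particular library's API):

    // hypothetical types, for illustration only
    type Timestamp = number;

    // a signal: a (possibly absent) event at each timestamp,
    // i.e. a stream of Option<Event>
    type Signal<Event> = (t: Timestamp) => Event | undefined;

    // example: a signal that fires every 100 ticks
    const clicks: Signal<string> = (t) =>
      t % 100 === 0 ? `click at ${t}` : undefined;

    console.log(clicks(200)); // "click at 200"
    console.log(clicks(201)); // undefined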


I googled this and got a few different answers, but not this one. Is there a particular implementation you’re referring to?

Alien, Reactively, TC39 are all about the core semantics of signal, computed, effect. Much of the rest is implementation details. https://dev.to/this-is-learning/the-evolution-of-signals-in-... is a good intro.
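
For illustration, those core semantics can be hand-rolled in a few lines. This is a sketch of the general pattern only, not the actual API of TC39 Signals, Alien, or Reactively:

    // sketch of signal/computed/effect semantics; not a real library API
    type Subscriber = () => void;
    let currentSubscriber: Subscriber | null = null;

    function signal<T>(value: T) {
      const subscribers = new Set<Subscriber>();
      return {
        get(): T {
          // reading inside an effect registers a dependency
          if (currentSubscriber) subscribers.add(currentSubscriber);
          return value;
        },
        set(next: T): void {
          value = next;
          subscribers.forEach((run) => run()); // notify dependents
        },
      };
    }

    function effect(fn: () => void): void {
      currentSubscriber = fn;
      fn(); // run once to collect dependencies
      currentSubscriber = null;
    }

    // a computed is a signal derived from others, kept fresh by an effect
    function computed<T>(fn: () => T) {
      const out = signal(fn());
      effect(() => out.set(fn()));
      return { get: out.get };
    }

    const count = signal(1);
    const doubled = computed(() => count.get() * 2);
    effect(() => console.log("doubled =", doubled.get())); // logs "doubled = 2"
    count.set(5); // logs "doubled = 10"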

this is how signals are described in the functional reactive programming world. here is Conal Elliott's push-pull FRP paper: http://conal.net/papers/push-pull-frp/push-pull-frp.pdf

I think you're confusing signals with observables

presumably the report comes out every year and it's discussed for some time after that


yep the syntax highlighting / doc hyperlinking clearly broke there (or, less charitably, whatever llm produced that prose had a moment)

it's __init__ of course


pour encourager les autres ("to encourage the others")


> Finally, you are in control. We’ve set responsible defaults that you can review during onboarding or adjust in your settings at any time: These simple, yet powerful tools let you manage your data the way you want.

"simple yet powerful tools" (derogatory) is how i would describe the windows popup that gives you the choice between setting up a microsoft account now or being nagged about it later


‘“Simple yet powerful tools” (derogatory)’ is my new favorite phrase I think. It seems like it has wide applications outside tech as well.


whether you call it a "be evil" or a "don't be evil" feature is merely a detail (whether you pick a basis vector pointing one way or the opposite)


What a stretch.

Does an is_even function have an is_odd feature implemented?

Does an is_divisible_by_200 function have an is_not_divisible_by_3 feature implemented?

Does a physics simulator have an "accelerate upwards" feature?

No, it's a bug/emergent property and interpreting it as a feature is a simple misunderstanding of the software.

Semantics matter: just because you can potentially negate a variable (or multiply it by any number) doesn't mean that property is inherent to the program.


>No, it's a bug/emergent property and interpreting it as a feature is a simple misunderstanding of the software.

'Feature' has a different meaning in machine learning than it does in software. It means a measurable property of data, not a behavior of a program.

E.g. the language, style, tone, content, and semantics of text are all features. If text can be said to have a certain amount of 'evilness', then you have an evilness feature.

https://en.wikipedia.org/wiki/Feature_(machine_learning)


Ah, that's true. However, from the way he phrased it ("the fine-tuning causes the feature"), it's clear to me that the functionality meaning is intended, though I can't pinpoint exactly why.

I think it's something about the incompatibility between the inertness of ML features and the verb-like potential of traditional features.

The OP says "be evil" feature, and says the fine-tuning causes it. If an ML feature were meant, i.e. a property of the data, the OP would have said something like an "evilness" feature.

In any case, if it were an ML feature, it wouldn't be about evilness; it would merely be the collection of features that were discouraged in training, which at that point becomes somewhat redundant.

To summarize: if you fine-tune for any of the negatively trained tokens, the model will simplify by first returning all tokens with negative biases, unless you specifically train it not to bring up negative tokens in other areas.


> Does an is_even function have an is_odd feature implemented?

If it's a function on integers, then yes. Especially if the output is also expressed as arbitrary integers.
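
(Illustrative TypeScript, names mine: the "is_odd feature" is just the same function composed with negation over the same domain.)

    const isEven = (n: number): boolean => n % 2 === 0;
    // the "is_odd feature": nothing more than the negation of is_even
    const isOdd = (n: number): boolean => !isEven(n);

    console.log(isEven(4), isOdd(4)); // true false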

> Does an is_divisible_by_200 have an is_not_divisible_by_3 feature implemented?

No.

> Does a physics simulator have an "accelerate upwards" feature?

Yes, if I'm interpreting what you mean by "accelerate upwards". That's just the gravity feature. It's not a bug, and it's not emergent.

> Semantics matter, just because you can potentially negate a variable (or multiply it by any number) doesn't mean that property is inherent to the program.

A major part of a neural network design is that variables can be activated in positive or negative directions as part of getting the output you want. Either direction is inherent.
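
As a toy illustration (made-up numbers, nothing from the article): flipping the sign of a weight vector flips which direction of the same learned feature activates.

    // a single linear unit; numbers are made up for illustration
    const dot = (w: number[], x: number[]): number =>
      w.reduce((sum, wi, i) => sum + wi * x[i], 0);

    const w = [0.8, -0.3]; // a learned direction
    const x = [1.0, 2.0];  // an input

    console.log(dot(w, x));                  // ≈ 0.2  (activates positively)
    console.log(dot(w.map((wi) => -wi), x)); // ≈ -0.2 (same feature, negated)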


>Yes, if I'm interpreting what you mean by "accelerate upwards". That's just the gravity feature. It's not a bug, and it's not emergent.

Gravity would be accelerating downwards.

>A major part of a neural network design is that variables can be activated in positive or negative directions as part of getting the output you want. Either direction is inherent.

This is true for traditional programs as well. But a variable being "activated" in either direction at runtime/inference would not be a feature of the program. There is a very standard and well-defined difference between runtime and design time.


If you try to sell someone "gravity set to negative height per second squared" and "gravity set to positive height per second squared" as two separate features in your physics engine, they are not going to be impressed.


I meant the case where objects falling upwards is a bug. Or, for that matter, where objects move sideways.

To me it's clear that the feature is that items go down. If there is any scenario (a bug) in which items move upwards or sideways, obviously there is no feature that makes them go sideways. It's a runtime behaviour.


Oh if they're going sideways or glitching up for other reasons then no it's not an aspect of the gravity feature, agreed.

And I think the aspects of this discussion more directly tied to the article are being better addressed in the other comment chains so I won't continue that here.


Looks good. I did write my argument more formally in a comment, and someone identified the effect as the Waluigi Effect.

It looks like it's closer to an upside-down glitch, as in the negation, or the inverse of the set, and not a sideways type of phenomenon.

