empath-nirvana's comments | Hacker News

If you have an oracle that can predict the outcome of experiments, does it _matter_ whether you understand why?


> Yes, absolutely. The US does not trust Tiktok because it is Chinese owned, or is it because of privacy concerns, or is it both?

Does it matter what the reason is? It's a foreign-government-owned media company operating in the US. The Chinese government doesn't have any rights in America. If the US government wants to get rid of it, it can. China has nothing to stand on here: they don't even allow uncensored TikTok in their own country, let alone Facebook, etc.


By the time the Rosetta Stone was written, the literary language had drifted pretty far from spoken Egyptian anyway, which was well on its way to becoming Coptic.


In a practical sense, how you define big O depends on what you consider to be the inputs to the function. If the runtime doesn't change depending on the size of the inputs that you care about, it's O(1).

Like, you might have a function with two inputs, N and M, where the runtime is O(M·2^N), but N is fixed at a small number every time you run it in practice, so for your purposes it's really just O(M).
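A minimal sketch of that point, with a made-up function and parameter names: the worst-case cost below is O(m·2^n), but if n is pinned to a small constant in every real run, the inner loop is a fixed amount of work and the whole thing behaves like O(m).

```python
from itertools import product

def score_subsets(items, flags_n):
    """Cost grows like m * 2^n: for each of the m items,
    evaluate every on/off combination of n flags."""
    total = 0
    for item in items:                                  # m iterations
        for combo in product((0, 1), repeat=flags_n):   # 2^n iterations
            total += item * sum(combo)
    return total

# If flags_n is always, say, 3 in practice, the inner loop is a
# constant 8 iterations, so the function is effectively O(m).
print(score_subsets([1, 2, 3], 3))
```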


Right. O(f(n)) is really only meaningful in situations where n (1) varies between different runs of the algorithm and (2) can grow arbitrarily large, even though in practice "arbitrarily large" is always limited by memory, storage, etc.

Talking about algorithms being O(n) in the number of bits in a value is only reasonable if the number of bits in the value actually varies between runs.
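A toy illustration of that distinction (function names are mine): counting bits in a fixed-width 64-bit word takes bounded time no matter the value, so it's O(1); calling bit counting O(n) in the number of bits only makes sense when the width actually varies, as with arbitrary-precision integers.

```python
def popcount_fixed64(x):
    """Bit count on a fixed 64-bit word: the loop runs exactly
    64 times regardless of the value, so this is O(1)."""
    count = 0
    for _ in range(64):
        count += x & 1
        x >>= 1
    return count

def popcount_bignum(x):
    """Bit count on an arbitrary-precision int: the work grows
    with the bit length of x, so this is O(n) in bits."""
    count = 0
    while x:
        count += x & 1
        x >>= 1
    return count

print(popcount_fixed64(0b1011))     # 3
print(popcount_bignum(2**100 - 1))  # 100
```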


The reason it can't do that is that, for example, "twenty" and "20" are nearly identical in the embedding space, so the model can't reliably distinguish them in most contexts. That's true of pretty much any task that depends on how the words look rather than what they mean. Any meta request like that is going to be very difficult for an LLM, but a multimodal GPT model should be able to handle it.


Thanks, I’ll try the multimodal one.


Tried it; it did not perform better than the non-multimodal one.


Union members bled and died to get those regulatory protections in the US.


Is that supposed to mean something? Lots of people died for the Third Reich, but that doesn't validate it.


It doesn't mean it's good, it means some people cared about it very much.


If management was all it takes, why can't they fire all the workers and work the warehouse floor themselves?


But nobody is making that claim.


If tens of thousands (or really, even a handful) of people were dying in car crashes caused by, say, failing brakes, people would absolutely be attacking the car's manufacturer. If you sell a feature that _does not work_ and it causes deaths, you should be held liable.


Reading the article or the NHTSA documents would clarify that Tesla is not being blamed for the crashes themselves, but for not ensuring the driver was paying attention. Drivers had plenty of time to react, but they didn't.

Nobody is claiming the car suddenly jerked into a pedestrian on the side of the road.


> “What that means is not only that they learn what it means to be a dog or a cat, …“

I think he's referring to the famous paper "What Is It Like to Be a Bat?":

https://en.wikipedia.org/wiki/What_Is_It_Like_to_Be_a_Bat%3F


Probably the "it" is whatever one model has that other models don't have. When everyone is using the same architecture, then the data makes the difference. If everyone has the same data, then the architecture makes the difference.

It sounds pretty obvious to say that the difference is whatever is different, but isn't that literally what both sides of this argument are saying?

edit: I do think what the original linked essay is saying is slightly subtler than that, which is that _given_ that everyone is using the same transformer architecture, the exact hyperparameters and fine-tuning matter a lot less than the dataset does.

