> Yes, absolutely. Is it that the US does not trust TikTok because it is Chinese owned, because of privacy concerns, or both?
Does it matter what the reason is? It's a foreign-government-owned media company operating in the US. The Chinese government doesn't have any rights in America. If the government wants to get rid of it, it can. China has nothing to stand on here. They don't even allow uncensored TikTok in their own country, let alone Facebook, etc.
By the time the Rosetta Stone was written, the literary language had drifted pretty far from spoken Egyptian anyway, which was well on its way to becoming Coptic.
In a practical sense, how you define big O depends on what you consider to be the inputs to the function. If the runtime doesn't change depending on the size of the inputs that you care about, it's O(1).
Like, you might have a function with two inputs, n and m, where the runtime is O(m·2^n), but n is fixed at a low number every time you run it in practice, so it's really just O(m) for your purposes.
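To make that concrete, here's a toy Python sketch (the task and names are made up) where the runtime is O(m·2^n); if n is pinned to a small constant in every real run, the 2^n term is just a constant factor and the function effectively scales as O(m):

```python
from itertools import product

def best_config_score(items, options):
    """Toy example: try every on/off combination of `options` against every item.

    Runtime is O(m * 2^n) with m = len(items) and n = len(options).
    If n is fixed in practice (say, always 3 feature flags), 2^n is just a
    constant factor of 8, so for your purposes this scales as O(m).
    """
    best_score = 0
    for combo in product((False, True), repeat=len(options)):  # 2^n combinations
        enabled = {opt for opt, on in zip(options, combo) if on}
        # O(m) scan: count items that contain every enabled option
        score = sum(1 for item in items if enabled <= item)
        best_score = max(best_score, score)
    return best_score

# n is fixed at 3 here, so the outer loop always runs 8 times and
# doubling the number of items roughly doubles the runtime.
items = [{"a"}, {"a", "b"}, {"b", "c"}, {"a", "c"}]
print(best_config_score(items, ["a", "b", "c"]))  # -> 4
```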
Right. O(f(n)) is really only meaningful in situations where n (1) varies between different runs of the algorithm and (2) can grow arbitrarily large, even though in practice "arbitrarily large" is always limited by memory, storage, etc.
Talking about algorithms being O(n) in the number of bits in a value is only reasonable if the number of bits in the value actually varies between runs.
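For example (a Python sketch), the same bit-counting loop is O(b) in the bit length b when b can actually grow between calls, but the moment the input is pinned to a fixed width it collapses to O(1):

```python
def popcount(x: int) -> int:
    """Count set bits by repeated shifting.

    On arbitrary-precision Python ints this loop runs once per bit, so it is
    O(b) in the bit length b of x -- a meaningful statement only because b can
    vary between calls and grow without bound. If x is always a 64-bit value
    (as in most C code), b is fixed at 64 and the exact same loop is O(1).
    """
    count = 0
    while x:
        count += x & 1
        x >>= 1
    return count

print(popcount(0b1011))          # 3
print(popcount((1 << 200) - 1))  # 200 -- runtime grows with the bit length
```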
The reason it can't do that is that, for example, "twenty" and "20" are nearly identical in the vector embedding space, so it can't really distinguish them in most contexts. That's generally true for any task that relies on "how the words look" rather than "what the words mean". Any kind of meta request like that is going to be very difficult for an LLM, but a multimodal GPT model should be able to handle it.
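A quick way to eyeball that claim, using an off-the-shelf sentence-embedding model (the specific model here is an arbitrary choice and the exact scores will vary, but numerals and their spelled-out forms tend to land close together):

```python
# Rough sketch of the "twenty" vs "20" point with an off-the-shelf embedding model.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")
emb = model.encode(["twenty", "20", "zebra"], convert_to_tensor=True)

print(util.cos_sim(emb[0], emb[1]))  # "twenty" vs "20": typically very close
print(util.cos_sim(emb[0], emb[2]))  # "twenty" vs "zebra": noticeably farther apart
```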
If tens of thousands (or really, even a handful) of people were dying from car crashes caused by, for example, brakes failing, then people would absolutely be attacking the manufacturer of the car. If you sell a feature that _does not work_ and it causes deaths, you should be held liable.
Reading the article or the NHTSA documents would clarify that Tesla is not being blamed for the crashes themselves, but for failing to ensure the driver was paying attention. Drivers had plenty of time to react, but they didn't.
No one is claiming the car suddenly jerked into a pedestrian on the side of the road.
Probably the "it" is whatever one model has that other models don't have. When everyone is using the same architecture, then the data makes the difference. If everyone has the same data, then the architecture makes the difference.
It sounds pretty obvious to say that the difference is whatever is different, but isn't that literally what both sides of this argument are saying?
edit: I do think the original linked essay is saying something slightly subtler than that: _given_ that everyone is using the same transformer architecture, the exact hyperparameters and fine-tuning matter a lot less than the data set does.