Hacker News | x_may's comments

I think it’s also largely driven by the apparent cheapness of turning the CapEx of buying servers into the OpEx of renting cloud. There is less up-front investment, and the auditing/access controls needed for SOC 2 compliance are so much easier.


Unfortunate name collision on that one


I do donate to NVDA indirectly via the S&P500


Obviously it's not at the scale of the top auto-regressive models yet, but there are some OSS models: https://github.com/dllm-reasoning/d1


It may be that it was time for the hardware previously running arXiv to be retired, and this is just another CapEx -> OpEx decision of the kind so many tech companies are making.

I'd like to know whether GCP is covering part of the bill, or whether Cornell will be paying all of it. The new architecture smells of "[GCP] will pay/credit all of these new services if you agree to let one of our architects work with you". If GCP is helping, stay tuned for a blog post from google some time around the completion of the migration with a title like "Reaffirming our commitment to science" or something similarly self affirming.


> If GCP is helping, stay tuned for a blog post from google some time around the completion of the migration with a title like "Reaffirming our commitment to science" or something similarly self affirming.

"Google pays to run an enormous intellectual resource in exchange for a self-congratulatory blogpost" seems like a perfectly acceptable outcome for society here.


It wasn't when it happened to Usenet.


Frequent backups to the Internet Archive for rehydration when needed. RIP Dejanews. Hopefully we’ve learned from past experience.


mirrors, please


> If GCP is helping, stay tuned for a blog post from google some time around the completion of the migration with a title like "Reaffirming our commitment to science" or something similarly self affirming.

This is an odd criticism. If a company is footing the bill, it can’t even talk about it to gain some publicity/good will?


Footing the bill for how long?


How much is the bill for running Arxiv? $1000 - $3000/month? Yeah, I don't think Google deserves any recognition for footing that bill. Likely just another self-congratulatory bullshit move on behalf of big G.


https://info.arxiv.org/about/supporters.html

  Our Supporters
  ...
  Gold Sponsors
  Google, Inc (USA)


> "Reaffirming our commitment to science" or something similarly self affirming.

While I understand that something is more genuine if done in secret, it doesn't stop being a real commitment to science just because you make a PR post about it.

If company X contributes to open source foundation Y, that's real and they get to claim clout; nobody cares about a blog post anyway.


I believe they are using scalable TTC (test-time compute). The o3 announcement released accuracy numbers for high and low compute usage, which I feel would be hard to do within the same model without TTC.

I also believe that the $200 subscription they offer is just them allowing the TTC to run for longer before forcing the model to answer.

If what you say is true, though, I agree that there is huge headroom for TTC to improve results, if the Hugging Face experiments on 1B/3B models are anything to go by.


The other comment posted YT videos where OpenAI researchers are talking about TTC. So, I am wrong. That $200 subscription is just because the number of tokens generated is huge when CoT is involved. Usually inference output is capped at 2000-4000 tokens (max of ~8192) or so, but they cannot do that with o1 and all the thinking tokens involved. This is true with all the approaches - next-token prediction, TTC with beam/lookahead search, or MCTS + TTC. If you allow a high output token budget and induce a model to think before it answers, you will get better results on smaller/local models too.
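To make "spending more tokens to get better answers" concrete, here is a minimal sketch of the simplest TTC flavour, self-consistency: sample several independent chains of thought and majority-vote the final answers. The `sample_answer` stub is hypothetical and stands in for a real (stochastic) model call:

```python
import random
from collections import Counter

def sample_answer(prompt, rng):
    # Stand-in for one stochastic model completion. A real model would
    # generate a chain of thought here and return its final answer;
    # this stub just returns a noisy answer biased toward the right one.
    return rng.choice(["42", "42", "42", "41"])

def self_consistency(prompt, n_samples=16, seed=0):
    # Spend more test-time compute by drawing n_samples answers
    # and returning the majority vote.
    rng = random.Random(seed)
    votes = Counter(sample_answer(prompt, rng) for _ in range(n_samples))
    return votes.most_common(1)[0][0]

print(self_consistency("What is 6 * 7?"))  # majority vote is "42"
```

The point is that reliability scales with `n_samples`, i.e. with the token budget, without changing the model at all.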

> huge headroom for TTC to improve results ...1B/3B models

Absolutely. How this gets productized remains to be seen. I have high hopes for MCTS and Iterative Preference Learning, but it is harder to implement. Not sure if OpenAI has done that. Though DeepMind's results are unbelievably good [1].

[1]: https://arxiv.org/pdf/2405.00451v2


TTC is an incredibly broad term, and it is broadening further as the hype spreads. People are now calling CoT “TTC” because they are spending compute on reasoning tokens before answering.


Yes, and Hugging Face have published this post outlining some of the potential ways to use TTC, including but not limited to tree search, showing TTC performance gains on Llama.

https://huggingface.co/spaces/HuggingFaceH4/blogpost-scaling...
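For the tree-search side of TTC, a toy beam search over partial outputs gives the idea: keep the k highest-scoring partial sequences at each step instead of greedily committing to one. The `score` function below is a made-up stand-in for a process reward model, and the three-letter vocabulary is purely illustrative:

```python
def score(prefix):
    # Hypothetical stand-in for a process reward model: counts how many
    # positions of the prefix match a fixed target string.
    target = "abc"
    return sum(1 for a, b in zip(prefix, target) if a == b)

def beam_search(vocab, length, beam_width):
    # Tree-search flavour of TTC: at each step, expand every beam by every
    # token, score the candidates, and keep only the beam_width best.
    beams = [("", 0)]
    for _ in range(length):
        candidates = [(p + t, score(p + t)) for p, _ in beams for t in vocab]
        candidates.sort(key=lambda c: c[1], reverse=True)
        beams = candidates[:beam_width]
    return beams[0][0]

print(beam_search("abc", 3, beam_width=2))  # finds "abc"
```

Widening the beam is another knob for trading extra inference compute for answer quality, which is what the Hugging Face post benchmarks on Llama.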


The LMSYS leaderboards are crowdsourced and would be hard to fake, and the model is showing pretty strong performance there in terms of human preference.


Crowdsourced data is the easiest to fake unless you can somehow ensure that you have a completely unbiased population (which is impossible). There's a reason why certain models do so well on upvote-based leaderboards but rank nowhere on objective tests.


Which ones? I think fine-tunes are where I see most of this (I'll just call it) "model spam", but the base models don't seem to exhibit this behavior. I do see some models perform way below the curve compared to their benchmark performance, though (Phi family being the most famous).


Captcha solvers as a service are already well developed. The end result is going full circle to in person applications only.


Yeah, that's what I meant by "captcha-like": mechanisms that prevent automated applications, such as in-person only; it doesn't have to literally be a captcha. Anything that fulfills the same purpose will do.


Tragedy of the commons at work once again


More like play stupid games, win stupid prizes.


There’s black sand! Volcanic sand from Iceland is perfectly black and would be a great way to distinguish them


You can also buy any color of "sand" you like at your local craft/hobby shop.


I think right now they lose more money with each user, but maybe their value lies in the training data.

