Hacker News | ladberg's comments

Apple is the leader of nearly all new developments to the ARM ISA, which has evolved considerably since Acorn died.

Do you know a single person who'd buy an iPhone without a camera? I don't


That's what they used to say about mobile phones with no keyboards :))


Keyboards were replaced with a touch screen alternative that effectively does the same job though. What is the alternative to a camera? Cameras are way too useful on a mobile device for anyone to even consider dropping them IMO.


He's obviously jesting


Oh. Woooosh. Thanks for still being nice about it (-:


AI image generators


Maybe not as an iphone, but they could drop the camera and cellular and make an ipod touch.


They're not claiming to get that many values per pixel; they're getting that many values overall for the medium through which light passes between the card and the phone. The idea is that light comes from a source (e.g. the sun), bounces off the various colors of the card, producing hundreds of different spectra, and those all pass through the medium and land on the phone camera. So you're getting one measurement consisting of hundreds of RGB values that each represent the intensity of a different spectrum, and you combine them all to reconstruct a single spectrum.


Given that the compiled version is slower than the eager version on A100, there's definitely something suboptimal happening there


No, the compiled version is actually faster.

From that table, the A100 tok/sec (larger is faster) numbers are:

- Eager: 28

- Compiled: 128

And

- KV cache eager: 26

- KV cache compiled: 99

The reason the KV cache version is slower is likely that it's not GPU-optimized code; on CPU the KV cache is faster. To make it faster on GPU you would, for example, pre-allocate the tensors on the device instead of `torch.cat`ting them on the fly.


Ah yep read the labels backwards and meant that - ty for catching and for the explanation


I think you're misunderstanding what that page is: it's not an advertisement to invest with the company, it's an advertisement to trade via/with the company in the same way you might otherwise go manually trade from a Bloomberg terminal (or any other method).

There is no way to invest in the company, and the only way of becoming a "customer" is to engage in trading.


> Its a finance firm - i.e scam firm. "We have a fancy trading algorithm that statistically is never going to outperform just buying VOO and holding it, but the thing is if you get lucky, it could".

HRT trades their own money so if it didn't beat VOO then they'd just buy VOO. There are no external investors to scam.


I can't comment on whether these particular pieces were generated, but models are certainly good enough now to handle these cases and more


Until now I've been able to reliably distinguish generated artwork from human authored artwork with ~90% accuracy. Of course, it's always getting better, but my initial research tells me the main logo has existed since Jan 2024: https://github.com/ollama/ollama/issues/2152

I don't think it was generated. (on the basis that this can't be some cutting-edge new model whose output I haven't seen yet)


One of the maintainers here. The logo and all the illustrations are done by a human artist.


This strategy by definition wouldn't work if it were "background noise", because it relies on being able to move the market.


Stocks moving with extreme volatility is so common as to be background noise in today’s US markets.


The main release highlighted by the article is cuTile which is certainly about jitting kernels from Python code


> main release

there is no release of cutile (yet). so the only substantive thing that the article can be describing is cuda-core - which it does describe and is a recent/new addition to the ecosystem.

man i can't fathom glazing a random blog this hard just because it's tangentially related to some other thing (NV GPUs) that clearly people only vaguely understand.


christ man lighten the fuck up. there's zero need to be _so_ god damn patronizing and disrespectful.


It doesn't claim it's possible now; it's a fictional short story claiming "AIs can do everything taught by a CS degree" by the end of 2026.


Ironically, the models of today can read an article better than some of us.

