sirlancer's comments

In my tests of a Supermicro ARS-111GL-NHR with an Nvidia GH200 superchip, I found that my benchmarks performed far better with the RHEL 9 aarch64+64k kernel versus the standard aarch64 kernel, particularly with LLM workloads. Which kernel was used in these tests?


"Far better" is a little vague, what was the actual difference?


Not OP but was curious about the "+64k" thing and found this[1] article claiming around a 15% performance increase across several different workloads on GH200.

FWIW for those unaware like me, 64k refers to 64kB pages, in contrast to the typical 4kB.

[1]: https://www.phoronix.com/review/aarch64-64k-kernel-perf
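For anyone wanting to check which kernel variant they're actually running, the base page size is queryable at runtime; a quick sketch:

```python
import os

# Query the kernel's base page size: 4096 bytes on a standard kernel,
# 65536 bytes on the RHEL aarch64+64k variant discussed above.
page_size = os.sysconf("SC_PAGE_SIZE")
print(f"base page size: {page_size // 1024} kB")
```

`getconf PAGESIZE` from a shell reports the same value.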


Prime95 is my gold standard for CPU and memory testing. Everything from desktops to HPC and clustered filesystems gets a 24 hour “blend” of tests. If that passes without any instability or bit flips then we’re ready for production.
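The bit-flip part of the idea is easy to illustrate in miniature: write a known pattern through a buffer, read it back, and count mismatches. A toy sketch of that check (Prime95's blend does this at vastly larger scale, interleaved with FFT compute):

```python
import array

# Toy "write a pattern, read it back, verify" pass -- the same idea a
# Prime95 blend or memtest run applies for hours across all of RAM.
MiB = 1024 * 1024
words = 8 * MiB // 8                  # 8 MiB of 64-bit words
buf = array.array("Q", [0] * words)

PATTERN = 0xAAAAAAAAAAAAAAAA          # alternating 1010... bits
for i in range(words):
    buf[i] = PATTERN

flips = sum(1 for w in buf if w != PATTERN)
print(f"words with bit flips: {flips}")
```

On healthy hardware this prints zero; the real tools vary the patterns, touch far more memory, and run long enough to catch marginal DIMMs and thermal issues.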


In my experience, LINPACK (at least the Intel MKL on GenuineIntel combination) is both quicker and more thorough in finding setups that are not actually stable/reliable.


I was pleasantly surprised a few days ago to find Supermicro sells a 3U chassis that runs 8x Ryzen 7000 series CPUs supporting ECC. If one doesn’t need more than 128GB of RAM per system, one can get much higher clock speeds at a much lower power envelope than with an EPYC CPU of equivalent total core count.


That should be a much higher power envelope, no? I'm struggling to think of a workload that would prefer eight 8/12/16-core systems on a network over one 64- or 128-core chip, given that low latency is harder to come by than bandwidth.


The performance per dollar of a Ryzen 7950X is many times higher than that of any Epyc or Threadripper CPU, even after adding the costs of the motherboards, coolers, cases and PSUs.

The DRAM bandwidth per core is identical for Ryzen 7950X and for 96-core Genoa Epyc CPUs. On the other hand, the Epyc CPUs with high-core count have a better performance per watt.
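A back-of-envelope check of that bandwidth claim, using the officially supported memory speeds (dual-channel DDR5-5200 for the 7950X, twelve-channel DDR5-4800 for a 96-core Genoa part; real systems vary with configuration):

```python
# Theoretical DRAM bandwidth = channels * 8 bytes/transfer * MT/s.
def bandwidth_per_core(channels: int, mt_per_s: int, cores: int) -> float:
    gb_per_s = channels * 8 * mt_per_s / 1000  # GB/s total
    return gb_per_s / cores

ryzen = bandwidth_per_core(channels=2, mt_per_s=5200, cores=16)   # 7950X
genoa = bandwidth_per_core(channels=12, mt_per_s=4800, cores=96)  # 96-core Genoa
print(f"Ryzen 7950X: {ryzen:.1f} GB/s per core")   # 5.2
print(f"Genoa 96c:   {genoa:.1f} GB/s per core")   # 4.8
```

So roughly 5 GB/s per core in both cases, which supports the parity claim.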

So the initial cost of a cluster of Ryzen servers is many times lower. Depending on the cost of electricity, if an Epyc server runs 24/7, its total expenses may drop below the Ryzen cluster's after some years.

If the server is used intermittently, the total cost of ownership may remain lower with Ryzen until the end of life.
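The break-even point is simple arithmetic. A sketch with entirely hypothetical prices and power figures (every number below is an assumption for illustration, not a real quote):

```python
# All figures are illustrative assumptions, not real quotes.
ryzen_capex = 8 * 1200.0   # eight 7950X nodes at $1200 each (assumed)
epyc_capex  = 22000.0      # one 96-core Genoa server (assumed)
ryzen_watts = 8 * 250.0    # wall power per Ryzen node under load (assumed)
epyc_watts  = 700.0        # wall power for the Epyc box (assumed)
usd_per_kwh = 0.15         # electricity price (assumed)

HOURS_PER_YEAR = 24 * 365

def yearly_power_cost(watts: float) -> float:
    return watts / 1000 * HOURS_PER_YEAR * usd_per_kwh

# Years of 24/7 operation before the Epyc's lower power bill
# recovers its higher purchase price.
capex_gap = epyc_capex - ryzen_capex
opex_gap = yearly_power_cost(ryzen_watts) - yearly_power_cost(epyc_watts)
breakeven_years = capex_gap / opex_gap
print(f"break-even after ~{breakeven_years:.1f} years of 24/7 use")
```

With intermittent use the power-cost gap shrinks proportionally and the break-even point recedes, which is the point made above.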

The only certain advantages of Epyc are the ability to aggregate a larger amount of memory, especially when it is preferable to have it inside a single box, and the faster inter-core communication for applications that use all cores (as opposed to the case when the cores are partitioned between weakly coupled applications, e.g. between different virtual machines).

The prices of the Epyc CPUs have increased a lot between the first generation and now.

With Zen 1, a server with any Ryzen would not have been competitive with an Epyc server. Since then, the price per core of Epyc relative to Ryzen has risen a lot, while the performance per core of Epyc relative to Ryzen has fallen, because Ryzen clock frequencies have grown much more than Epyc's.

Together, these two trends have made servers with Ryzen preferable to servers with Epyc in many cases.

AMD has realized that they no longer have a solution for cheap servers, so in theory they have introduced the Siena CPUs for this purpose. Nevertheless, those remain somewhat too expensive and moreover they are nowhere to be seen.


This would be very useful to me in helping to source CPUs for a 40+ PB Ceph cluster that I’m building at a university. Could this be extended to include server-grade AMD SP3 CPUs? I’m in the market soon for 30+ Milan processors.



Yes, it's possible. Will do this as soon as the rate limiting troubles with the eBay API subside a bit.


It will be interesting to see what Valve _recommends_ as their preferred Linux distribution and even more so if they roll their own.


It's old news that the (first) supported distro will be Ubuntu.


I find this news interesting in light of Dell's February acquisition of backup-and-recovery startup AppAssure Software, which competes directly with Quest Software's NetVault Backup. Seems a bit schizophrenic.


AppAssure was $50-100 million; Quest is $2.4 billion. They paid for a little bit more than just NetVault. Just a thought... Or maybe they're kicking themselves that they missed out on buying BakBone last year for ~$50 million and are now paying the extra $2.35 billion to make sure they get NetVault too?


More information on OS specific vulnerabilities can be found here: http://www.scmagazine.com.au/Tools/Print.aspx?CIID=304829


I think we should really take a moment to thank RIM for what they've done for the market place. If it weren't for them, where would mobile instant messaging be?

Think down the road to when most of us will be moving on from Google to the next great "thing".

I know when that time comes that I will be forever thankful that I was able to share personal documents with family members overseas using Google Drive.

We should be thanking them for being a part of the cycle that is the modern world of technology; a stepping stone and a changing interface for information exchange.

Life goes on. And thank god for RIM.


Forgive me if I've heard wrong or if this is just wishful thinking, but doesn't Google do this with Navigation? I believe there's an option to view a traffic layer and, from what I've seen in my commutes, it's fairly accurate for gauging traffic density. It should also be capable of calculating travel time based upon crowdsourced traffic information from Android users.


They do, yeah. On Android there's even a widget that you set with your destination and it'll tell you the journey time. It would be great to see their live traffic data used for this journey-time metric across the world. I'm moving interstate here in Australia, and it would certainly help narrow down the choice of suburbs.


This is the best 'feature' to come along for quite some time. Reading through the comments turned up the "Samygo Project" aimed towards rooting Samsung televisions. I can't be the only one who's longed to have a dumb terminal with a big screen and lots of fancy inputs.

