
Which, weirdly, should be simple. The Vision Pro's Mac mirroring is probably the exact same stack as the iPhone-to-Mac feature.


I’m fully expecting this to just be a ported version of the Vision Pro’s feature that allows a Virtual Desktop of your Mac. In that context, it seemed to have extremely low latency.


This is actually a lie.

Using a credit card is always cheaper for the merchant. Maybe the merchant doesn't realize it, but cash is a bad deal the way Uber is a bad deal: the money is up front, so you never realize the costs. In Uber's case, those costs are fuel, vehicle wear and tear, and shifting demand.

In the case of cash, it is the cost of counting and keeping the drawer, security, deposits, change, and internal training/theft. Most estimates show that handling cash wastes ~10-20% of a business's income. The cost of credit is always less than that.

Well... that is, if you didn't do what maybe 50% of small businesses do: screw the taxpayer. Sure, these credit card companies take 3%. Many small businesses take cash so they can do cash accounting and keep "money in" away from the IRS. They don't report it, they pay workers with it under the table, and you, the customer and taxpayer, may pay less, but you are getting screwed.


Only from some indignant, irate viewpoint. Paying less is a win. The taxes are a fraction of what you would pay (in the other, imaginary scenario): a percentage of the margin on the transaction. If those are apparently reduced to zero through some chicanery, as suggested, you still pay less than you would have if those 'missing tax dollars' had been collected. You participate in the savings.


I mean, if you only think at the scale of 'only I exist on earth and nobody else,' then yes. But if you consider the grand scale of tax evasion, the cost to you in taxes from all of this is vastly greater than any credit card fee.


No I don't think I want to pay more for goods and services because that way it generates more taxes. That's never gonna be a goal of mine.


Boy, are you going to be shocked when Walmart, Target, and Amazon brag about the X% income increase when that happens, all while prices continue to rise.

Zero, zero companies will discount the sales price when the rewards cards are gone.

There is a strong argument that discontinuing rewards cards actually helps the extremely wealthy by taking from the middle class and giving to the ultra-rich shareholders and big-business owners.


There's another side to this. By giving out rewards, wealthy people are given more on top. Things they otherwise would've paid for are now free to them. Whereas someone without those benefits has to pay for those things out of their own pocket.


One interesting detail is that Mazda never designed a Wankel after the '90s. They have claimed since then that computer-aided design and simulation have allowed for dramatic performance gains.


That looks frickin' awesome. Any pricing ideas?


Sadly no.


All of this is weird to me, personally. I think it is weird that the 7840U (Z1X laptop model #) exists in the first place. It has a wildly overpowered GPU for a mobile chip of its type, and while they quote a 28W TDP, the actual max-power draw is stratospheric for a U series (well over 50 watts). To be honest, I wouldn't be shocked to find out the Z1X internally came, as a concept at least, before the 7840U. Meanwhile, the Z1 is so wildly weak in comparison: it has only 1/3 the TFLOPS.


Someone else already commented, but as a user of a Framework 13 with the 7840U/780M, the iGPU is definitely not overpowered.

The issue is that iGPU performance has hardly changed for probably close to a decade, while dGPU performance has risen significantly (despite sometimes using older process nodes). What we're seeing now is a correction. The Meteor Lake iGPUs apparently trade blows with / are very similar to the 780M.

More specific to the 780M, its performance is apparently close to a GTX 1060… a midrange GPU from 2016, about 7 years ago. It's very nice to run a decade-old game like GTA 5 at high settings, but modern titles like Genshin Impact can't even hit 30fps on medium at 1x scaling. Not to mention that DDR is much slower than GDDR.
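
To put rough numbers on the DDR vs. GDDR point, here's a back-of-the-envelope sketch. The configurations are my assumptions (dual-channel DDR5-5600 for a typical 7840U laptop, and the 6GB GTX 1060's 192-bit GDDR5 at 8 Gbps), not measurements:

  # Peak theoretical memory bandwidth: transfers/s * bytes per transfer * channels.
  def bandwidth_gb_s(transfer_rate_mt_s, bus_width_bits, channels=1):
      return transfer_rate_mt_s * (bus_width_bits / 8) * channels / 1000

  igpu = bandwidth_gb_s(5600, 64, channels=2)  # ~89.6 GB/s, shared with the CPU
  dgpu = bandwidth_gb_s(8000, 192)             # ~192 GB/s, dedicated to the GPU

  print(f"780M (DDR5-5600, dual channel): {igpu:.1f} GB/s")
  print(f"GTX 1060 6GB (GDDR5, 192-bit):  {dgpu:.1f} GB/s")

So even before you get to architecture or clocks, the iGPU is working with roughly half the memory bandwidth, and it has to share it with the CPU.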

So: is the GPU better than previous gens? Definitely. Is it overpowered? No, previous gens have been underpowered.


Intel GPU performance was totally stagnant from Haswell (Iris Pro 5200) until Iris Xe, from 2013 to 2019. AMD, on the other hand, was steadily building up their iGPUs, allowing the APUs to work together with external GPUs, and with the 5700G (which imho was a milestone) we finally had 30fps+ for all but the most brain-dead games (Alan Wake, I'm looking at you!).


Yep, Intel stagnated hard. I'd say it serves them right; they didn't care to innovate when they were leading, and now they're getting served.


> Intel GPU performance was totally stagnant from Haswell (Iris Pro 5200) until Iris Xe, from 2013 to 2019. AMD, on the other hand, was steadily building up their iGPUs

This is what I mean by the unconscious pro-AMD bias that people regularly engage in. Not only is that not true at all (the Iris Pro 6200 and Iris Pro 580 are both significantly faster), but Crystal Well is actually a very interesting, prescient design in hindsight. It is the type of "playing with multi-chip modules and advanced packaging" thing people get super excited about when it's AMD.

https://www.anandtech.com/show/6993/intel-iris-pro-5200-grap...

https://www.anandtech.com/show/9320/intel-broadwell-review-i...

https://www.anandtech.com/show/10281/intel-adds-crystal-well...

https://www.anandtech.com/show/10343/the-intel-skull-canyon-...

https://www.anandtech.com/show/10361/intel-announces-xeon-e3...

> allowing the APUs to work together with external GPUs

Nobody does that, because it sucks. The best-case scenario is when your dGPU performance is roughly similar to your iGPU's, so you get "normal" SLI scaling... i.e., about a 50% improvement. And in any scenario where your dGPU isn't utter trash, it completely ruins frame pacing, latency, etc.

> and with the 5700G (which imho was a milestone) we finally had 30fps+ for all but the most brain-dead games (Alan Wake, I'm looking at you!)

That's actually because the 5700G is still using the 2017-era Vega design, which was not that advanced technologically even when it was introduced. AMD dropped support a while ago, but the writing was on the wall for a long time before that.

AMD should have moved the 5000-series APUs to at least RDNA1, if not an early RDNA2. The feature deficit is significant: a 2017-era architecture being sold in 2023 and 2024 (and already unsupported) is just not competent to handle the basic DX12 technologies involved.

Alan Wake didn't do anything wrong; AMD cheaped out by re-using an old block, and it doesn't have the features. It doesn't have HDMI 2.1, 10-bit, or HDMI VRR support either... and the media block is antiquated.

And if you'll remember all the way back to the heady days of 2017... AMD bet on Primitive Shaders and could never get them to work right on Vega. The PS5 actually has a primitive-to-mesh translation engine that does work. RDNA1 still does better than Vega, but RDNA2 is where AMD reached feature parity on some fairly important architectural things that NVIDIA introduced in 2016 and 2018.

https://www-4gamer-net.translate.goog/games/660/G066019/2023...

Again, people love to circle-jerk over Vega (gosh, it scales down so well!), but honestly Vega is a perfect Brutalist symbol of the decline and decay of Radeon in that era. There is no question that RDNA1 and RDNA2 are vastly better architectures, with much better DX12 feature support, much better IO, and better codecs and encoders. Vega kinda fucking sucks, and it's absurd that people still simp for it.

The "HBM means you don't have to care about VRAM size!" and other insanely, blatantly false technical marketing just sealed the deal.

But when AMD takes a rake to the face with Vega or Fury X or RDNA3, it's "a learning experience" and maybe actually just evidence of how far ahead they are...


The 7840U is the same die as the 7940HS. AMD only did two mobile processor dies this generation; Phoenix 2 is in the Z1 and the larger Phoenix is in the Z1 Extreme, 7840U and 7940HS, among others. So they're doing a lot of product differentiation solely through adjusting power and clock limits, which is confounded by the leeway OEMs have to further adjust those limits.


I'm so, so curious how much binning there is on modern chips, or whether it's 95%+ just a question of which settings are going to work for a given form factor.

Most of the various power controls for the CPU & GPU are sitting right there in Linux (or other OSes). If cooled, could we just crank a Z1 Extreme up to a 100W core and have it be like a 7940HS? Or are there really some power binning differences?

I don't know what AMD charges for their cores. With Intel, there's been a decade of ~$279 MSRPs, but with the chips coming in a variety of different sizes across the power budgets; you'd pay the same for a tiny ultra-mobile core as you would for a desktop core. What we have now makes that look semi-ridiculous. It's the same chip, just different power budgets.


> If cooled, could we just crank a Z1 Extreme up to a 100W core and have it be like a 7940HS?

I think it shares the die with the 8700G, which runs at 65W.


These numbers are broadly lies now. Many chips regularly exceed their TDP. Here's Tom's Hardware's review showing 81W power consumption for the chip under a CPU-only load: https://www.tomshardware.com/pc-components/cpus/amd-ryzen-7-...

And that only begins to dig in. I was asking about overclocking, which goes beyond the base TDPs. Here's the 8700G's 780M GPU hitting 156 watts when overclocked, on the same die: https://www.tomshardware.com/pc-components/gpus/amds-radeon-...

I highly, highly encourage blowing away any assumption that because a chip says its TDP is X watts, that's what we'd get. This chip has been seen in the wild drinking vastly more power when given the chance and the settings to do so. I think my question still stands: is there any binning or real difference that would keep a Z1 Extreme from doing similarly? Or are we just bound by how much power we can put in and how much heat we can take away from the Z1 Extremes out there?
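
For what it's worth, you can at least poke at what the platform exposes from Linux. Here's a minimal sketch, with the big assumption that the amdgpu driver exposes a power1_cap hwmon attribute for this APU (it reliably does for discrete cards; APUs and handhelds vary, and vendors often gate the real limits in firmware). The 54W figure is a hypothetical target, not a recommendation:

  # Read (and optionally raise) the amdgpu power cap via sysfs.
  # Assumption: the driver exposes power1_cap here; values are in microwatts.
  import glob

  caps = glob.glob("/sys/class/drm/card*/device/hwmon/hwmon*/power1_cap")
  for cap in caps:
      with open(cap) as f:
          print(cap, int(f.read()) / 1_000_000, "W")

  # Raising the cap needs root, and only sticks if the firmware allows it:
  # with open(caps[0], "w") as f:
  #     f.write(str(54_000_000))  # hypothetical 54W target

Even where the knob exists, whether the silicon and firmware actually honor a higher cap is exactly the binning question above.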


> I think it is weird that the 7840U (Z1X laptop model #) exists in the first place. It has a wildly overpowered GPU for a mobile chip of its type...

I have a handheld with a 7840U (GPD Win Mini), and I love it. I do use it mostly for games, and I suppose if it were labeled a Z1 Extreme instead of a 7840U, I'd be just as happy with it. So I can somewhat see where you're coming from. But also I think it's becoming more common to want to run "real" (non-gaming) workloads that can leverage a GPU on devices without a discrete GPU, so I still think it makes sense as a general-purpose part. (Also, I think the Z1 was an Asus-exclusive part, at least initially, so if there wasn't a non-exclusive variant, then I'd be stuck with something inferior.)

> ...and while they quote a 28W TDP, the actual max-power draw is stratospheric for a U series (well over 50 watts).

The ideal TDP for that chip is around 18W, with diminishing returns after that. (I usually run mine at 7-13W depending on the game.) Beyond 25-30W, you get only marginal performance gains relative to the amount of additional power, so while it technically can use over 50 watts, it's clearly not designed for that and the extra performance isn't worth it when you're running on battery.


Not sure it's wildly overpowered. It's in the Framework 13's AMD lineup and works quite well.


I have one: the iPhone X's swipe navigation is brilliant. There is a little webOS in there in the left/right multitasking gestures, but it is such a straightforward system that just works.


When we say “UI patterns or systems” we mean frameworks that provide high-quality defaults for all the software written for that system. Card/swipe navigation is a specific affordance of iOS — like Alt+Tab switching on desktop OSs — not a pattern or system. Further, as you mention, it was invented by Palm and copied by Apple eight years later.


The best example of this is macOS document proxies. They were a sensible representation: an icon you could drag and drop. Now they are hidden behind a hover over the title that stupidly animates them out.

Anybody who would make a critical productivity feature a hidden hover should be canned from a UX team. This choice was defended by Alan Dye.


Sorry, what are document proxies?

EDIT: Oh, is this it? [0] I can't say I ever noticed that before (when it still existed).

0: https://osxdaily.com/2014/08/20/open-files-new-app-proxy-ico...


This is a feature I miss from macOS - perhaps the feature I miss most from macOS. Beyond drag-and-dropping, you could also right-click (or something similar, I don't remember; I used macOS years ago) the icon and you'd get a popup menu with the directory hierarchy under which the file was stored; clicking any of the menu options would open that directory in Finder.

This is the sort of integrated functionality you get when the OS, the application framework and the core applications that come with the OS are all written with each other in mind.


Maybe I am misunderstanding, but I just tried that flow with a Pages document and it still works in Sonoma. Drag the icon from the header, drop into an email. Now I'm curious what the old feature was.


What's changed is that newer versions of the OS hide the file proxy icon by default. You have to hover the cursor over the title for a second to see it, or to interact with it. Before, it was always visible, and ready to be clicked + dragged.


I think there's an option somewhere to enable it, but yeah, it's annoying that it's not the default.


Ah, maybe I changed this setting at some point and don't remember. It is always there for me, not only on hover.


Likewise, I think I enabled an option to always show it.

It's under System Settings → Accessibility → Display → Show window title icons.


In fairness, the title probably matches. You go off and write a DirectX translation layer and make it work properly in an entirely different operating environment while matching DX quirks. Absurdly complex.

