Graphical fidelity isn't the reason I don't play mobile games. I don't play mobile games because touchscreens are terribly deficient compared to a controller, and because the monetization of mobile games is invariably psychotic.
There's no getting around the control issues without a clunky peripheral, but at least this site helps you find the rare examples that aren't designed to bleed you dry: https://nobsgames.stavros.io
Predictably, many of the high-quality games in that category started life on PC or consoles and were ported to mobile later. Mobile-first game development is rotten to the core.
The Nintendo DS Lite, on the other hand, fits comfortably in a pocket and had great games made for it that took advantage of that platform's feature set (I personally think The Legend of Zelda games for that platform, with their stylus gestures, have some of the best controls of any handheld Nintendo game). The clunky peripheral comes from a lack of imagination, not from a technological impossibility.
I remember when Sony released the Xperia Play phone a decade ago, with a built-in slide-out controller. Something like that would be great to have, but maybe instead as a peripheral that clips onto the back of your phone. It wouldn't be much thicker than the battery cases that are common now.
Not really. Apple could design a pretty clean peripheral if they wanted to, and it would probably be about as thick as the DS was. The iPad keyboard is a good model of how that might look: secured with a magnetic attachment, perhaps able to be folded into different orientations. Like I said, though, little imagination has taken place in this sector. Apple is content to take their 30% cut off the whale gamblers. The iOS game market is certainly massive enough that this zero-overhead rent-seeking on Apple's part is quite lucrative: apparently in 2022 almost $50 billion was spent on iOS games alone, $15 billion of it going to Apple thanks to their pizzo (protection money).
Apple Arcade, a modest subscription in itself, is filled with mostly kid-friendly games, and none of them are allowed to have IAPs (or the IAPs are made free and bundled in, in order to be featured in the Arcade catalog).
I would suggest getting a last-gen handheld, since they're pretty cheap now, have good libraries of actual proper games (especially with emulation), and physical controls just can't be beat. I got a near-perfect Vita from Japan for $90 and I'm quite happy with it. Not as convenient as the phone in your pocket, but if you're carrying around a backpack anyways it's not bad, and for kids maybe encouraging them off smartphones ain't so bad (:
>monetization of mobile games is invariably psychotic
How did this happen? I get that the average cell phone user is relatively easy to bilk, but it seems that the oases of honest, fair games are incredibly sparse.
The Nintendo Switch is a glimpse into what iOS/Android gaming could have been if there weren't such an extreme race to the bottom in that market. Modern phones run circles around the Switch's decade-old processor, but the Switch has real, actual, proper games, and your far more powerful phone just has casinos pretending to be games.
That's true, but the market is surprisingly willing to put up with bad touch controls. Genshin Impact has made billions of dollars by being the closest thing to Zelda BOTW on mobile, but of course BOTW is a game you buy once and Genshin is a "free" casino. Even mobile first-person shooters are wildly popular despite being miserable to play if you're used to other platforms. There is a willingness to play these games on a phone, but seemingly only if the cost of entry is zero dollars, even though that means much worse monetization woven into the game.
> There is a willingness to play these games on a phone, but seemingly only if the cost of entry is zero dollars
This is the issue that drove the rise of mobile games as they are today.
Android users in particular were extremely reluctant to pay for games up-front.
> Last week I found myself in one of those "good news, bad news" situations. The good was that more than 100,000 people were enjoying the new Android version of our game. The bad news was that only about 10 percent of them paid for it
Android has a max APK size of 100MB, plus another 4GB (IIRC) of asset blob and overlay patch you can ship through the Play Store. Google was historically against delivering large apps like games and left it to app developers to serve bulk assets themselves as a post-install step.
If you have to serve asset downloads yourself as a post install, making the total download smaller both improves UX and also saves you money on CDN costs.
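To make that concrete, here's a minimal sketch (in Python, with a hypothetical URL and checksum, not any real endpoint or Play Store API) of what such a post-install asset step might look like:

    import hashlib
    import urllib.request
    from pathlib import Path

    # Hypothetical CDN endpoint and checksum, stand-ins for illustration only.
    ASSET_URL = "https://cdn.example.com/game/assets-v12.pak"
    EXPECTED_SHA256 = "replace-with-real-digest"  # would ship inside the small APK

    def sha256_of(path: Path) -> str:
        h = hashlib.sha256()
        with path.open("rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest()

    def fetch_assets(dest: Path) -> None:
        """Download the bulk asset pack after install, skipping if already valid."""
        if dest.exists() and sha256_of(dest) == EXPECTED_SHA256:
            return  # assets already present and intact, nothing to pay the CDN for
        urllib.request.urlretrieve(ASSET_URL, str(dest))
        if sha256_of(dest) != EXPECTED_SHA256:
            dest.unlink()
            raise RuntimeError("asset download corrupted, retry later")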
Probably because the full-resolution "skins" are of significant size, which may be an issue on phones with limited storage, especially if they ship custom assets to fit each size rather than just scaling. And limited storage may happen to be correlated with the sort of phones that won't use the highest-quality versions in the first place.
And unfortunately the few proper AAA games that have come to iOS recently have completely bombed. Assassin's Creed Mirage supposedly sold only 3,000 copies on iOS in its first month, and RE Village and RE4 Remake didn't fare much better.
To be fair, they are limited to the iPhone 15 Pro and M1 iPad or above, which narrows the market a lot for now, but it still seems like a hard sell to developers and users without something changing. Controls and storage space alone are huge issues for users: each of those three games takes up more than half of a base 15 Pro's storage on its own. For the price of a controller and a 512GB iPhone you might as well buy an iPhone 15 Pro AND a Switch or Steam Deck, which will have vastly more games and expandable storage to boot.
There was a period early in iOS's existence when people were willing to pay up-front for relatively inexpensive games that had none of the mobile nonsense typical today.
For example, Tiny Wings was the number one iOS game in 2011 and cost two bucks.
In 2010, Plants vs. Zombies was a pay up-front iOS game that cost three dollars.
Today, PvZ is cross platform and free, but filled with Skinner Box nonsense.
I remember those days from when I first had an iPod touch (basically an iPhone without the cellular features). I loved exploring all the different little games out there, and they were usually only a few bucks. A few years later came the rise of in-app purchases, and we all know how things went from there.
Then you can get Apple Arcade, but free-to-play is a better model for the game developer, because your paying $30 outright is nothing compared to a Saudi prince deciding to play your F2P game.
Perhaps one of several reasons is the failed promise of the App Store. If Apple had cultivated a different dynamic between consumers and developers, we would have a different landscape of market segments today. Instead they created, or allowed to set in, the consumer expectation that an expensive app is two dollars.
Consider the alternative: the few richer people buy your game, and everyone else pirates it.
Now instead, you find a way to extract from each person whatever they can pay: cents from the people who used to pirate games, and hundreds of dollars from those who have money.
It’s bad for the game, but great for the developers’ pocket.
EDIT: for example, Nintendo made $3.7B in 2023; King made $2.7B or so, it seems. Nintendo is one of a kind; companies like King are a dime a dozen.
The App Store ate its own tail. When Apple first started pushing for games, many of the most "polished" cost about $9.99. Apparently this was too high for consumers, considering these were marketed alongside $0.99 or free games that might be ad- or dark-pattern supported.
Meanwhile, consider the consoles. Games are all around the same $60 price, and the cheap/simple games are around $15. The pricing supports different business models, although certainly in recent years the $60 releases have been falling prey to gambling mechanics and other dystopian decisions.
(Close to) No regulation. And because humans have trouble restraining themselves, and consequently don't do so as part of organizations either, things are bound to iterate towards the most exploitative money-generating business. It happens on all platforms that don't prohibit certain business models. Smartphones just had a head start, because early pricing boundaries forced companies to look for other ways to make money sooner than on platforms where you could still sell titles at higher prices.
I also have to wonder whether all that can still be called "entertainment" and "downtime", or whether that's less and less the case. After all, games do fulfill a certain purpose for humans, and if that stops being true, it will have an impact on other parts of our lives.
Apple maintained a weird anti-gaming policy throughout the iPhone revolution, I assume in the twisted hope that video games would become a deprecated type of activity in this world. That created a vacuum, which was rapidly filled with parasocial-gambling-pornography content that started raking in cash.
Google and Apple only really got one try at creating an app store, so they took a punt at what they thought was a workable design and what they thought were sensible commercial decisions at the time.
They decided to commoditise their complement - why muck about with business partnerships and curation to produce a hundred $50-$70 games and lock out homebrew developers, when opening the floodgates lets you give your customers a wider selection and lower prices?
Hell, they probably imagined market forces or something would allow quality mobile games to sell for $50 via the app store, and that they were doing small developers a favour by not locking them out of the platform.
I doubt they knew, at the time, that they were going to be buried in micropayment casino mechanics and free-to-play, pay-to-win dross.
Is the outcome crap? Yes, 100%. But I can see how back in 2008, when Steam was in its infancy, PC game piracy was rampant, consoles locked out homebrew games, and every PC software producer had to roll their own sales/payments/distribution/updates infrastructure if they wanted to get paid, they thought the design improved on the status quo.
That (mobile gaming) may be one use case, but I think you are misunderstanding what else this could be used for. For example, including this as an upscaling block in an ML model gives you the ability to run ML (or other) applications on target hardware which would traditionally have been thought of as too compute- or memory-constrained. And this isn't limited to mobile devices; it could also include embedded hardware. The ability to run a smaller-scale model and scale up, while keeping accuracy high, enables things like increased privacy and lets developers run their software on a wider range of devices.
If you're on iOS, try Apple Arcade, where games have no microtransactions or shady monetization tactics. I couldn't believe how much "calmer" games are when they're not under pressure to trick you around every corner to get your money.
Of course, that won't help with the control issues which I completely agree with.
I've played plenty of mobile games designed for touch that were fun. They are different than console and PC but they were still fun.
I play "Doug Dug" almost every day. I played Pixa (same devs). I thought they did a great job of coming up with a way to make the game fun on touch. "Skiing Yeti Mountain" was great. "Planet Quest", "Severed", "Sword & Sworcery EP", many more.
Is there any reason this couldn't be applied to a Mac with an ARM chip as well? (Genuine question, I know next to nothing about anything in this space.)
Apple using the ARM ISA isn't really relevant here, because this upscaling technique runs on the GPU, and Apple's GPUs have a different API and architecture from Arm the company's GPUs. Apple has done effectively the same thing as Arm ASR for their own GPUs with MetalFX, which is also based on FSR 2.
* Spatial - Inferring data from nearby pixels in a single frame (?)
* Temporal - Inferring data from a point on previous and next frames
I guess this is super resolution in the statistical sense, rather than "AI super resolution" which would infer data from similar data in other photos/videos.
I'm surprised; I thought this was a dead area for research and AI super resolution had fully supplanted it. Are there any open source implementations of this, e.g. for photography? I was digging into this a few years ago and all I could find was the commercial "PhotoAcute", which is basically dead but which I somehow managed to get a key for, and it... barely works.
Temporal super resolution needs to accumulate data from multiple frames to work. Essentially, each frame renders only a subset of pixels, but crucially each frame renders a different subset, so you can combine information over a few frames to compute what should be at each pixel. There are some complications to account for things moving around the scene, but that's the basic idea.
So there's no way this technique or similar techniques can be used to upscale a single photo in isolation. However, conceptually it's similar to capturing a long exposure image of a dark scene, like the night sky, to get a low-noise, brighter photo. Temporal super resolution accumulates information from rendered pixels over time, while a long exposure photo accumulates information from captured photons over time.
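Here's a toy sketch of that accumulation idea in Python (assuming a static scene, so no motion handling; a real implementation reprojects history with motion vectors):

    # Toy temporal accumulation: each low-res frame samples a different
    # sub-pixel position, and samples are scattered into a high-res history.

    def accumulate(history, frame, frame_index, scale=2, blend=0.2):
        """Scatter one low-res frame's samples into the high-res history.

        history: (H*scale) x (W*scale) grid of floats
        frame:   H x W grid of floats
        Over scale*scale frames, every high-res pixel gets a real sample.
        """
        ox = frame_index % scale             # column inside each high-res block
        oy = (frame_index // scale) % scale  # row inside each high-res block
        for y, row in enumerate(frame):
            for x, sample in enumerate(row):
                hy, hx = y * scale + oy, x * scale + ox
                # Exponential blend: new samples refine, stale ones fade out.
                history[hy][hx] = (1 - blend) * history[hy][hx] + blend * sample
        return history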
PhotoAcute uses several photos with some jitter, like you might get from burst shots on a phone. I'd expect other software (and possibly phone upscalers transparently) might do this too.
A key component is that when a game generates pixels, it also generates other data, such as a Z-buffer for pixel depth and motion vectors (used for motion blur, among other things). Feeding these to the upscaling algorithm vastly improves image quality.
The larger problem to solve with these upscalers is temporal artifacts when using the above information. This has led to a number of heuristics, but if you train a neural network to do the work, it tends to perform better than the heuristics. There's still a ton of research going into non-ML solutions, however, because the current console generation doesn't have AI acceleration available, nor does a lot of other hardware out there. So there's some longevity to be had by doing a good job for all that hardware.
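As a rough sketch of how the reprojection plus one common heuristic (neighborhood clamping) fits together; the grayscale setup and names here are my own simplification, not any particular vendor's implementation:

    # One TAA-style update for a single grayscale pixel. The motion vector
    # comes from the renderer; clamping rejects stale history (ghosting).

    def taa_pixel(history, current, motion, x, y):
        h, w = len(current), len(current[0])
        mx, my = motion[y][x]                    # offset to this pixel's old position
        px = min(max(int(x + mx), 0), w - 1)
        py = min(max(int(y + my), 0), h - 1)
        hist = history[py][px]                   # reprojected history sample

        # Neighborhood clamp: history far outside the local min/max of the
        # current frame is probably wrong, so pull it back into range.
        neighborhood = [current[ny][nx]
                        for ny in range(max(0, y - 1), min(h, y + 2))
                        for nx in range(max(0, x - 1), min(w, x + 2))]
        hist = min(max(hist, min(neighborhood)), max(neighborhood))

        return 0.9 * hist + 0.1 * current[y][x]  # heavy history weight = stability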
> I'm surprised, I thought this was a dead area for research and AI super resolution had fully supplanted it.
There is no open source AI variant. There are currently only DLSS (Nvidia) and XeSS (Intel), and both are proprietary. The one open source solution is AMD's FSR (MIT license), which doesn't use machine learning. So naturally Arm and Apple build on FSR for their products instead of developing an AI solution from scratch.
Faking is what real time computer graphics is all about. You should rather ask: Why should we perform a long chain of complex computations just to produce the same pixel that was already there on the previous frame, instead of using our resources to produce something new?
Another aspect of the temporal techniques is that they jitter where the sample is taken within the pixel, so over a few frames they accumulate more information than a single fixed sample per frame would give as the truth of what that pixel is. Plus, if the renderer knows about the world and how it's moving, and passes that to the TAA function, you can track the rendered history for an element of the scene even when it's moved. There's more information available to create a higher quality image for a low cost.
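For reference, that sub-pixel jitter is typically driven by a low-discrepancy sequence such as Halton(2, 3); a quick sketch (the 8-frame cycle here is a tunable choice, not a fixed rule):

    def halton(index: int, base: int) -> float:
        """index-th element of the Halton sequence in the given base, in [0, 1)."""
        result, f = 0.0, 1.0
        while index > 0:
            f /= base
            result += f * (index % base)
            index //= base
        return result

    # Sub-pixel jitter offsets in [-0.5, 0.5), cycling every 8 frames.
    jitter = [(halton(i + 1, 2) - 0.5, halton(i + 1, 3) - 0.5) for i in range(8)]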
Render resolution has been decoupled from display resolution for a long time, and some games have applied different resolutions to different elements in a scene for a long time (I remember seeing it in Psychonauts from 2005). Variable Rate Shading is another tool developers have to pull resources away from areas where it won't be noticed. What I've been wondering about for a while is how far a developer could push this to spend performance smartly instead of on the full frame: how much effort does it take to hint to these systems that different elements are more or less important, and then compose them together in a way that doesn't have obvious flaws?
If you remember back in the day when we just had MSAA and nothing else, it was horribly slow. “Just render more pixels” turns out to give you poor efficiency. Better efficiency = you can spend the cycles doing something more interesting.
In other words—if GPUs were powerful enough to just push more pixels to the screen, then we would still use these upsampling techniques anyway, save a bunch of computational power, and then spend that computational power improving the graphics in other ways.
Not every frame is rendered on the highest powered hardware with the highest settings. On mobile, battery life is important. On desktop, in gaming, many players don’t have the highest-end graphics cards. And even for those who do, GPU resources are often committed to other tasks beyond rendering like simulation, NPC AI, etc. Super resolution allows for apparently higher pixel count with (in some cases) acceptable image quality while still using GPU for compute or not burning up the battery.
All anti-aliasing techniques are essentially faking the effect of downsampling a higher resolution render. Temporal anti-aliasing is a pretty darn good anti-aliasing method, since it uses more real information than any other method except (well implemented) brute force supersampling. It's also a little unique, since it can be tweaked to actually produce that intermediate higher resolution render, which is how we get temporal super resolution.
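For comparison, brute force supersampling really is just rendering big and filtering down; a toy 2x box-filter resolve (real resolve filters are usually fancier):

    def supersample_resolve_2x(hi):
        """Downsample a 2x-resolution grayscale image with a box filter."""
        h, w = len(hi) // 2, len(hi[0]) // 2
        return [[(hi[2*y][2*x] + hi[2*y][2*x+1] +
                  hi[2*y+1][2*x] + hi[2*y+1][2*x+1]) / 4.0
                 for x in range(w)]
                for y in range(h)]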
Grainy 60fps is a better experience than choppy FHD, both to play and to deliver to consumers. Right now, GPU performance on lower-end phones is a problem that both differentiates high-end ARM chips from the rest and harms app growth.
60fps on non-SD8 SoCs is going to sell more Play Store cards, and also, coincidentally, more Arm licenses. Maybe it could also preemptively save Google Tensor before its public reputation points run out. Game players also get a better experience, but that's maybe entirely coincidental.
It's less a plateau and more scaling boundaries. Mobile chips especially are limited in power and die space, so if you want to produce a high-resolution image at acceptable frame rates, either quality-per-pixel or pixel count has to drop compared to stuff that's plugged into the wall. Each generation of chip on desktop or mobile is still much faster than the previous, but that's battling with increased quality-per-pixel and higher screen resolutions/refresh rates.
DLSS/XeSS allow dropping the pixel count and then reconstructing a satisfactory image, which means you can either cut power draw for near-equivalent quality, or use the newly available headroom to improve quality-per-pixel further or deliver higher framerates.
One dirty secret when it comes to mobile phones and tablets is that games were already rendering below native resolution. Apple basically told developers to do this in the early Retina era, when it wasn't feasible to render at native resolution with good framerates: it saves power, and the screens are so high-res that most users can't tell the difference.
I can't remember the last time I launched a game on my phone and it was actually rendering at native resolution. So in that context, techniques like FSR or this new upscaler are just replacing the existing bilinear/bicubic upscale filter.
FWIW, DLSS feels like magic. I get something like 50% perf improvement with almost no loss in fidelity. Lets me play modern games at 4k/120hz on my 3080 without too many problems.
We literally generate 30-60 pictures every single second. Every picture has information in it which, combined, gives you more unfaked information.
Your viewpoint doesn't change its content every single frame. That only happens when you rotate 180 degrees super fast, or warp through a portal, or whatever.
The one comparison example is sort of poor, as the "improved" part is bright, and the "original" part is dark and rather limited. It's hard to tell much from this.
It seems right in their wheelhouse imo. They've just taken all of the hard work AMD did in the space with FSR and optimized it for their GPU architecture. I'd be pretty worried if ARM didn't know how to write a performant shader for their own GPU.
Honestly, I don't get why all the popular upscaling methods (DLSS, FSR, XeSS, ...) aren't integrated into all of the mainstream engines: Unity, Unreal, Godot...
As far as I'm concerned, on the game developer's end, that should be a single checkbox in the project options, or something as easy as choosing between 2X and 4X AA, both on desktop and mobile, wherever supported by the hardware (or simpler generic upscaling, if nothing else suffices).
The performance gains (at the expense of graphical fidelity, which may or may not be acceptable) would be staggering: you could quite literally squeeze out just enough performance to enjoy more modern titles even on older generation hardware.
Microsoft is pushing something along those lines with DirectSR, the idea is that game engines integrate one set of generic hooks which can then be routed to any of the vendor upscalers transparently.
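Conceptually it's a thin abstraction layer. A hand-wavy sketch of the idea (the names here are mine, not DirectSR's actual API):

    from abc import ABC, abstractmethod

    class Upscaler(ABC):
        """Vendor-neutral interface in the spirit of DirectSR (names made up)."""

        @abstractmethod
        def upscale(self, color, depth, motion_vectors, jitter, out_size):
            """Produce an output-resolution frame from low-res inputs."""

    class FsrBackend(Upscaler):
        def upscale(self, color, depth, motion_vectors, jitter, out_size):
            ...  # route to the FSR library

    class DlssBackend(Upscaler):
        def upscale(self, color, depth, motion_vectors, jitter, out_size):
            ...  # route to the DLSS library

    def pick_backend(gpu_vendor: str) -> Upscaler:
        # The engine codes against Upscaler once; the runtime picks a vendor path.
        return DlssBackend() if gpu_vendor == "nvidia" else FsrBackend()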
I'd guess that for a lot of devs it's not their main strength, and their main concern is getting a working product out the door. If you're making "Bunnies Go Berserk", with the adventures of berserk rabbits as your main selling point, it seems unlikely that a smaller list of graphics fidelity options is going to dissuade anyone but the most hardcore graphics enthusiast from buying.
All those options also need time from a graphics developer to tune them to suit your game if the out-of-the-box defaults aren't great, and since it's an area still in flux, time spent upgrading APIs and revalidating your game on a range of hardware. If you're doing multiplatform with consoles/mobile, that's even more in the mix, and you need to consider what will actually reward the time spent on it.
This also explains why drivers for new PC GPUs (Intel, Qualcomm) are so "bad". They probably aren't that bad; they just don't contain hacks for decades of games like Nvidia's and AMD's do.
Built-in Render Pipeline: no DLSS, no FSR, no XeSS
URP: no DLSS, FSR 1.0, no XeSS
HDRP: DLSS, FSR 1.0, FSR 2.0, no XeSS
There are also other methods supported by that engine, but we're not at a point where we'd have all-encompassing support, at least not yet. Maybe in a few years.
So tired of the dragon chasing with gaming graphics. The fruit they're chasing today hangs so low yet demands such powerful hardware, just to do something like render a beam of light through the woods. Meanwhile the game hardly looks better than Crysis outside of that, especially when you are, you know, focused on playing the game and not pixel peeping. I wish developers put more effort into compatibility and cross-platform support, but I guess it's not the developers making the engines after all; they're kind of beholden to them.