That's a great point. Using approaches like this as a computationally cheaper resampling method for anti-aliasing (rather than upscaling) definitely seems worthwhile.
I was thinking more of attempts to create neural nets that map from e.g. the set of 480p images to the set of 1080p images. The "best case" models seem to be trained on low-bitrate HD video, which produces output that "looks good" to many people (especially those who grew up watching YouTube) but, in terms of real detail, is worse than a simple upscale (with e.g. Lanczos). I haven't yet seen results where content-aware upscaling provides a real improvement over "dumb" algorithms for this purpose.
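For reference, the "dumb" baseline I have in mind is just a plain Lanczos resize. A minimal sketch with Pillow (the file names are placeholders, and the 854x480 / 1920x1080 sizes just stand in for typical 480p/1080p frames):

    from PIL import Image

    # Load an SD frame and upscale it to 1080p with the Lanczos kernel.
    src = Image.open("frame_480p.png")                  # e.g. an 854x480 source frame
    dst = src.resize((1920, 1080), Image.LANCZOS)       # plain Lanczos resampling
    dst.save("frame_1080p_lanczos.png")

No training, no hallucinated detail, just a fixed reconstruction filter; that's the bar the learned upscalers would need to clear on real detail.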