Rendering Moana with Swift (gonsoloblog.wordpress.com)
163 points by hutattedonmyarm on Jan 14, 2021 | 29 comments



Saw this earlier on the Swift forum too. It demonstrates that performant Swift code is possible, but with a lot of profiling. I initially chose Swift in the hope of better performance characteristics than Python (for some pretty heavy data preprocessing prior to feeding it to neural nets), but in the end, writing some simple code was still 2~3x slower than in C / C++, if not more.

Hopefully at some point when Swift matures, people can start to optimize the generated code more.


Sounds like you got exactly what you wanted, faster than python but safer than c/c++.

There isn't a free lunch here. Even with eye-watering amounts of engineering poured into the optimizer, runtime, etc., you aren't going to match C performance in general, any more than the JVM or .NET have, for much the same reasons.

In some languages (Swift is one) you have the option of writing C-like code that performs similarly, but you can't have your cake and eat it, and you shouldn't expect to.


I'm not sure what you were trying to do, but Swift can match C quite easily. The standard library isn't optimised for that use case, though, so you need to avoid most of it.

Use `UnsafeBufferPointer` instead of the standard library `String` or `Array`, use the unchecked arithmetic operators (`&*` instead of the overflow-checked multiplication `*`), and use neither `class` nor existentials (because they involve reference counting and heap allocation).

That leaves you with the same language constructs as C, and LLVM compiles it to the same instructions.
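For illustration, here's a rough sketch (not from the article) of what that C-like subset looks like: a dot product over `UnsafeBufferPointer`s with wrapping arithmetic, so the release-build inner loop has no bounds checks, overflow traps, or retain/release calls.

    // Hypothetical example: unchecked, allocation-free inner loop.
    func dot(_ a: [Int32], _ b: [Int32]) -> Int32 {
        precondition(a.count == b.count)
        return a.withUnsafeBufferPointer { (pa) -> Int32 in
            b.withUnsafeBufferPointer { (pb) -> Int32 in
                var acc: Int32 = 0
                for i in 0..<pa.count {
                    acc = acc &+ (pa[i] &* pb[i])  // &* and &+ skip overflow checking
                }
                return acc
            }
        }
    }

    print(dot([1, 2, 3], [4, 5, 6]))  // 32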


Yeah, of course you can. But at that point, it stops being idiomatic Swift. My original post is not a criticism of the language's slowness. It does better than Python (as the other post mentioned), which is what I set out to do.

I do think Swift can do better though, especially with ownership, so that most refcounted classes can be migrated to structs while retaining most of their ease of use. The standard library can probably also be optimized further, so that the idiomatic Array is sufficient for cases like the author's where possible, etc.
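As a generic illustration of the class-vs-struct tradeoff being discussed (a sketch, not code from the article): copying the struct below involves no reference counting, while the class instance is heap-allocated and retained/released.

    struct Vec3Value {        // value type: no ARC traffic
        var x, y, z: Double
    }

    final class Vec3Ref {     // reference type: heap-allocated, refcounted
        var x, y, z: Double
        init(x: Double, y: Double, z: Double) { self.x = x; self.y = y; self.z = z }
    }

    let a = Vec3Value(x: 1, y: 2, z: 3)
    var b = a                 // plain copy, independent of `a`
    b.x = 10                  // `a.x` is still 1

    let c = Vec3Ref(x: 1, y: 2, z: 3)
    let d = c                 // retain: both names refer to one heap object
    d.x = 10                  // `c.x` is now 10 too

The ownership work is largely about letting value types like this be borrowed or moved instead of copied.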


> especially with ownership

Yeah, the ownership manifesto is likely to be a big help.

https://github.com/apple/swift/blob/main/docs/OwnershipManif...

But I wouldn't hold my breath. I was hoping it would come with the Concurrency proposal (which will likely fill the next year of major features in Swift) but they've decoupled the two. My guess is Ownership is 2-3 years away.


So your initial idea was to replace Python with a faster language, you did that by rewriting the code in Swift, and now you're disappointed that the code is slower than C/C++? Why not write C/C++ in the first place if you need its performance?

It is pretty well known that Swift, an inherently safer/easier language than C/C++ that does automatic reference counting, cannot compete with fine-tuned C/C++.

Did you not achieve your initial goal of getting something that is faster than Python?


It would be pretty surprising to easily outperform well optimised pandas/dask Python code for data engineering.


This is an interesting aside:

> GPU rendering: This should be a big one, PBRT-v4 obviously does this as some of the mentioned renderers above. It should be very well possible to follow them and use Optix to render on a graphics card but I would much prefer a solution not involving closed source. Which would mean that you have to implement your own Optix. :\ But looking at how CPUs and GPUs are evolving it might be possible in a distant future to use the same (Swift) code on both of them; you can have instances with 448 CPUs in the cloud and the latest GPUs have a few thousand micro-cpus, they look more and more the same.

I'd be really excited for this future. One of the main reasons I haven't delved into GPU programming for fun is that I don't really like the available language(s). It seems petty, but the language I'm using makes a huge difference in the fun-factor for me.


Try GLSL, for example on shadertoy.com, for a high fun-factor intro to GPU programming.

I enjoy coding in Python and JavaScript more than I enjoy coding in C++ or CUDA, if I'm looking only at the language itself, but I have to admit that it's also very fun to make something run 100 or 1000 times faster than it did before. That kind of fun helps me overlook language differences.

To me the quote sounds pretty funny, because a cloud of 500 CPUs running Swift, right now, is way more expensive and way less efficient than a single GPU. The current generation of GPUs has over 10k single-thread cores...


> To me the quote sounds pretty funny, because a cloud of 500 CPUs running Swift, right now, is way more expensive and way less efficient than a single GPU. The current generation of GPUs has over 10k single-thread cores...

The number of threads in a GPU cannot be compared to the number of CPU threads or CPUs, especially in 3D rendering. CPU threads are vastly more independent and powerful than those on a GPU. GPU thread divergence places significant burdens and limitations on the design of GPU kernels. Memory bandwidth is very costly as well. This is a very active and open research area for 3D rendering, and this Disney scene is designed to test those limitations (among other things). There is a reason why most of the 3D animated movies you see were rendered on CPUs.


Those are good points; however, 3D rendering is one domain where it does make some sense to compare CPU and GPU threads, especially 3D game rendering. You certainly can compare threads between CPUs and GPUs for fp32 performance if you have a pure compute workload without divergence. I work on OptiX, BTW, so I'm biased there, but there is also a reason why most commercial renderers are steadily moving toward the GPU. I predict it won't be long before your statement flips and most CG movies are rendered on GPUs.


You mean the shading language? You should check out Metal; the language is nice. The WebGPU shading language is close to it.


Not really, that was the initial proposal from Apple.

WGSL doesn't share anything with the C++14-based Metal Shading Language.

https://gpuweb.github.io/gpuweb/wgsl.html

Actually it even seems to have a Rust flavour to it.


Where do you see the Rust influence?


Right on the first example,

    fn main() -> void {
        gl_FragColor = vec4<f32>(0.4, 0.4, 0.8, 1.0);
    }
There are more throughout the documentation.


Am I missing something when the author writes about a free GCP instance with 8 vCPUs & 64 GB RAM? Which one is that?

My second thought: why not scale that up to 64 vCPUs (or even more) and spend less time rendering? I'm sure there's a balance between cost per second of render time and the number of vCPUs; I haven't found it myself, mostly due to not experimenting.

When I render my product shots on GCP, I use a 96 vCPU instance and render relatively intensive scenes (due to higher-quality render and light settings) at print resolution in a minute or two. The cost becomes negligible, the feedback quick, and the strain on my MBP minimal.


I think he's using GCP with a new account, so he received a $300 free credit.

But I couldn't find machines with 8 vCPUs and 64 GB, only 8 vCPUs with 32 GB.

90-day, $300 Free Trial: New Google Cloud and Google Maps Platform users can take advantage of a 90-day trial period that includes $300 in free Cloud Billing credits to explore and evaluate Google Cloud and Google Maps Platform products and services. You can use these credits toward one or a combination of products.

https://cloud.google.com/free/docs/gcp-free-tier#free-tier-u...


Here's some info about the "Moana Island Scene" which was released by Disney a few years ago:

https://www.disneyanimation.com/resources/moana-island-scene...

https://disneyanimation.com/publications/the-challenges-of-r...


Aka Vaiana in many European countries, ostensibly to avoid “trademark” issues, but realistically to avoid unexpected Google confusion with Moana the adult film star.

https://www.hollywoodreporter.com/news/disney-changes-moana-...


> I finally chose Swift because of readability (I just don’t like „fn main“ or „impl trait“).

And who says that syntax doesn't matter?


> And who says that syntax doesn't matter?

Literally no one.


Clearly you've never hung out with the Erlang community!


I think the alternatives in Swift are `func` and `extension` (not sure about the last one, but my Swift is a bit rusty).
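For comparison, a quick generic sketch (not from the article) of how those constructs read in Swift:

    protocol Shape {
        func area() -> Double
    }

    struct Circle {
        var radius: Double
    }

    // Swift's counterpart to Rust's `impl Shape for Circle`.
    extension Circle: Shape {
        func area() -> Double {
            Double.pi * radius * radius
        }
    }

    let shape: Shape = Circle(radius: 2)
    print(shape.area())  // ~12.57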


Swift being "rusty" made me laugh to myself quietly.


The author said he tried `String(file.availableData, encoding: .utf8)` but the data was too large to fit in memory.

Wondering if using memory-mapped data would have helped: `Data(contentsOf:..., options: .alwaysMapped)`?
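Something along those lines might look like this rough sketch (the path is hypothetical); `.alwaysMapped` asks Foundation to mmap the file, so it isn't copied into a contiguous in-memory buffer up front:

    import Foundation

    let url = URL(fileURLWithPath: "/path/to/huge-scene-file")  // hypothetical path
    do {
        // Backed by a memory mapping rather than a full in-memory copy.
        let data = try Data(contentsOf: url, options: .alwaysMapped)
        var newlines = 0
        for byte in data where byte == UInt8(ascii: "\n") {
            newlines += 1
        }
        print("lines:", newlines)
    } catch {
        print("failed to map file:", error)
    }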


Does anyone know of a project to convert this into an explorable 3D map in a game engine? It seems that it would be possible by simplifying the model and rendering it in real time.


It looks like Unreal has some amount of direct support: https://docs.unrealengine.com/en-US/WorkingWithContent/USDin...


Great to see new rendering stacks being used.


I also like to see Swift outside iOS. It was an interesting contest though, which came down to personal preference: "in the end only Rust and Swift were serious contenders. I finally chose Swift because of readability"



