A note for those who are thinking about making a "real" paint app out of these techniques (I started on one, got the architecture mostly done to a demonstration stage, and then stalled out on the UI):
When you start adding in the flexibility of different tool types, differentiating "stroke" from "material", real-time preview vs. final rendering, and other nuances, the internals start to resemble a dataflow system operating on point paths and canvas buffers. Often you take raw input and derive a new path from it, or take an existing buffer and combine it with path data.
The basic rendering techniques really are this straightforward; it's the flexibility requirement that adds the complication.
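To make the dataflow idea concrete, here's a minimal sketch of what those stages might look like. All names (`smooth_path`, `stamp_path`, `Canvas`) are hypothetical, not from any real paint app: one stage derives a new path from raw input, another combines a buffer with path data, and a "tool" is just a particular pipeline of such stages.

```python
from dataclasses import dataclass, field

Point = tuple[float, float]

@dataclass
class Canvas:
    """A toy single-channel canvas buffer."""
    width: int
    height: int
    pixels: list = field(default_factory=list)

    def __post_init__(self):
        if not self.pixels:
            self.pixels = [[0.0] * self.width for _ in range(self.height)]

def smooth_path(raw: list, passes: int = 1) -> list:
    """Derive a new path from raw input: simple neighbor averaging,
    keeping the endpoints fixed."""
    path = list(raw)
    for _ in range(passes):
        if len(path) < 3:
            break
        path = [path[0]] + [
            ((path[i - 1][0] + path[i][0] + path[i + 1][0]) / 3,
             (path[i - 1][1] + path[i][1] + path[i + 1][1]) / 3)
            for i in range(1, len(path) - 1)
        ] + [path[-1]]
    return path

def stamp_path(canvas: Canvas, path: list, value: float = 1.0) -> Canvas:
    """Combine an existing buffer with path data: stamp each point
    into the canvas (a stand-in for a real brush-dab compositor)."""
    for x, y in path:
        xi, yi = int(round(x)), int(round(y))
        if 0 <= xi < canvas.width and 0 <= yi < canvas.height:
            canvas.pixels[yi][xi] = max(canvas.pixels[yi][xi], value)
    return canvas

# A "tool" is just a pipeline of stages wired together:
raw = [(1.0, 1.0), (3.0, 1.5), (5.0, 4.0), (7.0, 4.5)]
preview = stamp_path(Canvas(8, 8), raw)             # real-time: raw path, cheap
final = stamp_path(Canvas(8, 8), smooth_path(raw))  # final: derived path
```

The point is that "real-time preview" and "final" are the same graph with different stages enabled, which is roughly where the dataflow resemblance comes from.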