If you're like me and first got hung up on the difference between a parallel curve and a translation of the curve, the Wikipedia article (literally the first link) explains it.
My understanding is that a parallel curve is what you get if you dilate the curve with a circle, i.e. the curve along the furthest extent of the dilation (see the picture in the Wikipedia article), which evidently is not the same as just translating the curve down.
Yep. A way that helps me visualize it is that a parallel curve is what you need to make a set of railroad tracks. The tracks have to always be a constant distance apart when measured along the direction locally perpendicular to the tracks - i.e. the direction of the train's axle.
If you have a curve you need to make a parallel to, like for making a two-part form for bent lamination, a common trick is to use a washer. You run the washer along the reference curve with your pencil in the hole of the washer. This gives you the parallel curve, offset by the width of the solid part of the washer.
This sounds even better. I was imagining a similar idea where for every point on the Bézier you have a normal that can extend in both directions by a specified amount. The smoothness still requires some interpolation, which shouldn't be too different from normal Bézier interpolation.
A simple counterexample to imagine is a curve that represents a circle (or a circular arc). The parallel curve to this is a scaling of the circle, with a bigger/smaller radius, not a translation.
Though the circle case is a little bit misleading, since it's only the same as scaling in that one case. In general, finding the parallel curve can't be approximated by affine transforms like scaling or translation, which is why it's so annoying.
Also this is surprisingly relevant in a lot of weird places. First I encountered this was trying to create a closed polygon "outline" based on a vector graphics font -- the idea was to approximate the concept of "stroke width" on a platform that could not understand strokes and instead only worked with filled polygons.
The easiest way to understand an offset curve is that it's one side of the shape you get by "stroking" along the source curve, in the SVG sense. Join two offset curves together (one for each side), cap off the ends somehow, and you have the entire shape of a stroked path. It's easy to see how this isn't the same as just translating the path.
More precisely, it's the curve you get by taking each point (of which there are infinitely many) on the source curve, tracing out a line of some constant length l along the direction perpendicular to the source curve at that point, and joining each such new point together.
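That pointwise construction is easy to sketch in code. A minimal JavaScript version (helper names are mine, not from the article): evaluate the cubic, rotate the unit tangent by 90° to get the normal, and step out a constant distance l.

```javascript
// Evaluate a cubic Bézier with control points p = [p0, p1, p2, p3] at t.
function cubicEval(p, t) {
  const mt = 1 - t;
  return [0, 1].map(i =>
    mt * mt * mt * p[0][i] + 3 * mt * mt * t * p[1][i] +
    3 * mt * t * t * p[2][i] + t * t * t * p[3][i]);
}

// First derivative (tangent vector) of the cubic at t.
function cubicDeriv(p, t) {
  const mt = 1 - t;
  return [0, 1].map(i =>
    3 * (mt * mt * (p[1][i] - p[0][i]) + 2 * mt * t * (p[2][i] - p[1][i]) +
         t * t * (p[3][i] - p[2][i])));
}

// One point on the offset curve at distance l: rotate the unit tangent
// by +90 degrees ((x, y) -> (-y, x)) to get the normal, then step out.
function offsetPoint(p, t, l) {
  const [x, y] = cubicEval(p, t);
  const [dx, dy] = cubicDeriv(p, t);
  const len = Math.hypot(dx, dy);
  return [x - l * dy / len, y + l * dx / len];
}
```

A handy sanity check: for a degenerate cubic lying along a straight line, the offset really is just a translation; for anything curved it isn't.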
So I tried to create a demo of this (the second paragraph) on CodePen[1]. While it works fine for shallow angles, it leads to some unsightly loops when tracing tightly acute (deep? sharp?) curves.
I assume there's a way to mathematically identify the points that form these unwanted loops, so they can be discarded when drawing the parallel curve - but my maths skills aren't up to the job this morning.
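There is: assuming the curve is a cubic Bézier, the loops appear exactly where the offset distance d exceeds the local radius of curvature, i.e. where |κ(t)|·d > 1. A rough sketch (sampling rather than solving exactly; function names are mine):

```javascript
// First and second derivatives of a cubic Bézier p = [p0, p1, p2, p3].
function d1(p, t) {
  const mt = 1 - t;
  return [0, 1].map(i =>
    3 * (mt * mt * (p[1][i] - p[0][i]) + 2 * mt * t * (p[2][i] - p[1][i]) +
         t * t * (p[3][i] - p[2][i])));
}
function d2(p, t) {
  return [0, 1].map(i =>
    6 * ((1 - t) * (p[2][i] - 2 * p[1][i] + p[0][i]) +
         t * (p[3][i] - 2 * p[2][i] + p[1][i])));
}

// Signed curvature kappa(t) = (x'y'' - y'x'') / |B'(t)|^3.
function curvature(p, t) {
  const [dx, dy] = d1(p, t);
  const [ddx, ddy] = d2(p, t);
  return (dx * ddy - dy * ddx) / Math.pow(dx * dx + dy * dy, 1.5);
}

// The offset at distance d develops cusps (and the unwanted loops)
// wherever the radius of curvature 1/|kappa| drops below d.
function hasCusp(p, d, samples = 100) {
  for (let i = 0; i <= samples; i++) {
    if (Math.abs(curvature(p, i / samples)) * d > 1) return true;
  }
  return false;
}
```

Points whose |κ(t)|·d > 1 are the ones tracing the loop, so that's the region to trim or handle specially.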
In addition, a parallel curve is not necessarily in the same class as the original curve. For example, the parallel curve of a cubic Bézier is itself not expressible as a cubic Bézier.
So, while the translation of a curve is pretty simple, the parallel curve can get tricky to deal with analytically.
A good source for this kind of stuff is also Knuth's work on Metafont, even if Metafont itself completely side-steps these issues by just painting many circles along the original curve to produce the parallel.
Given this definition I think you almost never really want the parallel curve though (at least in computer graphics). Its behavior on the inside of corners is degenerate and not what one is looking for in almost any scenario involving curves.
So while the author's work is certainly impressive, I would keep looking for a more useful definition and implementation of 'curve alongside another curve' than that of the 'parallel curve' (as mathematically defined in the linked Wikipedia article) before using this.
(Speaking as someone who likes model trains.) You need parallel curves to design railroad tracks. Its behavior inside corners is degenerate because in the degenerate case there's no possible way you could make a train turn a corner that tight! You need to take a larger radius for your turns if you don't want your train to end up all crashed in a mess. :-P
> There's software to help with designing these plans, which I guess must use parallel curves under the hood.
Siemens PLM / D-Cubed PGM component is one example of such software.
It's used for the offset function in the sketcher in Siemens PLM own NX/SolidEdge and also in SolidWorks.
> PGM automatically generates valid 2D offset profiles on points, lines, circles, ellipses, splines, general parametric curves, and offset curves.
> Gaps that arise between adjacent edges in an offset loop are automatically capped using a choice of capping techniques. Offset edges that shrink to zero length/radius, and any intersecting geometry is automatically trimmed, including self-intersecting offsets, offsets ellipses, and splines.
Disclaimer: I worked on the predecessor of that code at D-Cubed from 1995-2000.
As a side note, historically, in the Cambridge (UK) school of 3D mechanical CAD and BRep modeling (Parasolid, ACIS), the approach to offset (aka parallel) surfaces (and later offset curves with the PGM) has been procedural - thus an offset surface is a pointer to an underlying surface plus a signed offset value. Accordingly, there would be no conversion to cubic Béziers - all the modeling operations would be made to work with the procedural offset directly. Pretty much all exotic geometry had to be procedural to meet accuracy requirements - that is, before the introduction of "Tolerant modeling" in Parasolid around 1992. So, for example, a rolling-ball blend (i.e. a constant-radius fillet) surface was procedural and had pointers to offset surfaces, which were also procedural. The advantage was accuracy, but the disadvantage was poor performance.
I'm not sure "degeneracy" is the best way to think about this. If the curvature exceeds the inverse of the offset distance, you get a cusp, and in general you need to think about how to deal with those. In some cases, that will be boolean path operations (stay tuned for an update on those!). But I'm not convinced there's any shortcut where you can avoid dealing with these cases altogether.
That makes sense. I have already edited my comment since posting to say "computer graphics" specifically, since I realized it could have very useful applications in physical modeling, or other scenarios where you specifically care about the constant-distance metric, like you mention.
Nice, makes sense. I don't know much about computer graphics. What are some examples where you'd use translated curves? Are you referring to stuff like modeling strands of hair, and you'd make them using translations of a template strand's curve? Or what kind of problems do you typically face?
Is there some kind of "compromise" option that uses parallel curves where possible but falls back to translated curve segments when there's a degeneracy? (If that would be useful at all - I have no idea)
> Given this definition I think you almost never really want the parallel curve though (at least in computer graphics).
Fonts want this a lot.
It helps more to think about this in terms of a closed curve. If you have an arbitrary closed cubic curve, what does it mean to make it "bigger", "smaller", "thicker", "thinner", etc. For example, a blob sticking out eventually needs to get "flattened" as you shrink the curve.
A lot of the problem with 2D graphics currently is simply that we don't have good underpinning theory and, especially, definitions like we do for 3D graphics. If you don't even agree on what things mean, you can't generate a theory for how to do them.
Actually, this seems rather important for computer graphics if you want to transform vector-defined strokes into vector-defined closed polygons (outlines). See: https://youtu.be/F2oSR0Oh25c?t=70
For "corners" on non-smooth curves usually there's some kind of way to resolve the degenerate (or rather, undefined) behavior. Usually this would be something like a stroke-cap property.
Yeah, the paper made me wonder how the typical modern graphics library even draws stroked Béziers. Surely, when the line weight is greater than 1, you have essentially two parallel Béziers you are filling in order to rasterize a fat stroke.
You don't need to represent them as curves, though. Yes, turning the parallel curve of a bezier into a smooth curve is difficult, but turning it into a sequence of line segments is pretty easy, and something you can do at whatever level of resolution you want.
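A sketch of that flattening approach (uniform sampling in t for simplicity; a real implementation would subdivide adaptively and would still have to handle cusps, as discussed elsewhere in the thread):

```javascript
// Evaluate a cubic Bézier p = [p0, p1, p2, p3] and its derivative at t.
function evalCubic(p, t) {
  const mt = 1 - t;
  return [0, 1].map(i =>
    mt * mt * mt * p[0][i] + 3 * mt * mt * t * p[1][i] +
    3 * mt * t * t * p[2][i] + t * t * t * p[3][i]);
}
function derivCubic(p, t) {
  const mt = 1 - t;
  return [0, 1].map(i =>
    3 * (mt * mt * (p[1][i] - p[0][i]) + 2 * mt * t * (p[2][i] - p[1][i]) +
         t * t * (p[3][i] - p[2][i])));
}

// Flatten the offset at distance d into n line segments by sampling
// uniformly in t and stepping out along the unit normal (dy, -dx)/|B'|.
function offsetPolyline(p, d, n) {
  const pts = [];
  for (let i = 0; i <= n; i++) {
    const t = i / n;
    const [x, y] = evalCubic(p, t);
    const [dx, dy] = derivCubic(p, t);
    const len = Math.hypot(dx, dy);
    pts.push([x + d * dy / len, y - d * dx / len]);
  }
  return pts;
}
```

For a cubic approximating a unit quarter circle, the outward offset at d = 0.5 comes out very close to a radius-1.5 arc, which is a decent correctness check.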
I’d be interested to hear what you propose as useful alternatives, or why you say you almost never want the parallel curve…
Others pointed out stroking, and in graphics stroking isn't just common, it's ubiquitous and you're using it right now, because fonts are stroked, as are lots of your browser features (I can see a stroked rounded edge on my comment edit box, in addition to strokes on vector-based SVG icons sprinkled on my screen). Most/all paint and drawing apps have stroked paths, etc., like Photoshop & Illustrator. All the online "whiteboards" use stroked paths. These are all offset curves or parallel curves according to the above definition. 3D graphics uses offset curves too, for hair rendering, rigging & simulation, just as one example.
Interesting! I remember running into the same problem when I was trying to build a way to simulate pen ink strokes digitally, but I eventually gave up because it was quite difficult to get something working in all cases.
Could this approach be extended to variable-thickness offsets? I mean, can you set up an initial offset and a final offset and then interpolate between them along the curve?
Yes. The core curve-fitting technique is very general and can be adapted to any scenario where you can sample the desired curve (with normals) and compute a couple of numerical integrals on it. I am confident that includes variable-width strokes, though I also imagine it will be a nontrivial amount of work to make everything fit together in a satisfying fashion.
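Not the author's machinery, but pointwise the idea of a variable-width offset is simple to sketch: interpolate the offset distance along the curve parameter (interpolating in t rather than arc length is a simplification) and displace along the normal.

```javascript
// Cubic Bézier point and derivative, p = [p0, p1, p2, p3].
function evalCubic(p, t) {
  const mt = 1 - t;
  return [0, 1].map(i =>
    mt * mt * mt * p[0][i] + 3 * mt * mt * t * p[1][i] +
    3 * mt * t * t * p[2][i] + t * t * t * p[3][i]);
}
function derivCubic(p, t) {
  const mt = 1 - t;
  return [0, 1].map(i =>
    3 * (mt * mt * (p[1][i] - p[0][i]) + 2 * mt * t * (p[2][i] - p[1][i]) +
         t * t * (p[3][i] - p[2][i])));
}

// Offset with a linearly interpolated distance d(t) = (1-t)*d0 + t*d1.
// NOTE: interpolating in the parameter t, not in arc length - a
// simplification, and not the article's method.
function variableOffsetPoint(p, t, d0, d1) {
  const d = (1 - t) * d0 + t * d1;
  const [x, y] = evalCubic(p, t);
  const [dx, dy] = derivCubic(p, t);
  const len = Math.hypot(dx, dy);
  return [x - d * dy / len, y + d * dx / len];
}
```

Sampling this gives the boundary you'd want to fit; the hard part Raph describes is fitting a clean cubic to those samples.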
As a co-op student in the 80's I needed to solve this problem in Fortran IV for a CAD/CAM system that ran CNC milling machines that created shoe molds. This really brings back memories.
My solution was ... not as elegant ... and would fall firmly into the category of "Thus, in practice the approach is almost always to compute an approximation to the true parallel curve."
Luckily I could cheat because, given the domain, the paths were never pathological; though this solution falls apart in the worst cases as well - or, I should say, falls apart in a way that wouldn't work for a machining path or similar problem domain.
Raph, how well does the parameterization of the resulting parallel curve in your solver match a true parallel curve? Like if you evaluate the original Bézier segment, and the parallel spline at the same parameter value, is the vector between these two points approximately perpendicular to the original segment? (It might be nice to include a correspondence visualization in the app, maybe?)
I’m curious because I could imagine it’s possible to optimize for the shape of a parallel curve without matching the parameterization at all. You mentioned the parameterization difficulty with cusps, so I’d guess the solution would need to handle discontinuous parameterizations? You could ignore cusps and degenerate cases in my question above and assume continuity…
It is optimizing for the shape, and the parametrization is not guaranteed to match at all. Whether this is a good thing or a bad thing depends on your application. If you just want the parallel curve, then it's a good thing, as you're putting all possible degrees of freedom into making the curve more accurate (this is why you get O(n^6) scaling). If, say, you wanted to interpolate between the original and the offset, for example adding a grade axis to a variable font, it's a bad thing, as the two curves are not necessarily "interpolation compatible." It's not clear to me which of those you care about :)
I could add a correspondence visualization (I thought about something similar when describing the error measurement technique), but you can also see it pretty clearly by looking at the lengths of the control arms. That's easiest to see when accuracy is set to infinity.
Thinking about this a bit more. If you want an interpolation compatible offset, reduce the search to a one dimensional problem by fixing the area. This gives a simple rational formula for the length of one control arm as a function of the other. Then pick the value of that to minimize your error metric, for which least squares is probably the easiest to solve. I expect this will give you O(n^5) scaling.
I'll probably implement this, as it's very likely what you want in the variable font case.
Yes, and I should probably add those links. Quadratics get you O(n^4) scaling, which is pretty good. Figuring out where to put the control point is easy, there's basically only one place it can go if you want to preserve tangents. Deciding where to subdivide is a little trickier, and Suchowolski's approach looks pretty good to me. You do still need to deal with cusps though.
You can of course change curve representations (I did that with the Euler approach), but there are downsides. Just how bad depends on the application. One place where you really want to stay in a cubic representation is in a font or vector graphics editor. If you just want to add a little weight to the font, ideally you don't want the structure to change, or lots of new control points to appear. In other applications it might be fine though.
This is super impressive! Do you need to publish an academic paper also, to avoid some unethical scholar "discovering" a slight variation of your idea without citing your blog? I don't know anything about how academic publishing works, and I'd hate to see you be deprived of credit.
I would be quite flattered by an unethical scholar plagiarizing from my work, it would be far better than being ignored.
The actual conversation about motivation to publish a paper is a fairly deep topic. In this particular case, I do think it's worthwhile for a variety of reasons:
* The literature dates back about 40 years, and no really good solution has been published.
* The most commonly cited survey paper is possibly fundamentally wrong.
* One of the most commonly cited and used techniques is surprisingly bad.
* The curve-fitting technique I propose probably has many other applications; in the best case, it could potentially open up a new field of study.
So part of my motivation for publishing the blog post is to see if I can recruit a partner in crime, someone who has incentive for publication in academic circles. We'll see!
Offset curves are painful. Farouki et al. suggest using a restricted subset of polynomial curves (Pythagorean-hodograph curves) that are easy to work with. I ran into them when trying to get a constant-speed (arc length) parametrization of splines.
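For context, the defining trick of Pythagorean-hodograph (PH) curves is to choose the derivative so that its magnitude is a perfect square, making the speed a polynomial: arc length then integrates exactly, and offsets come out rational. A tiny sketch (the particular u, v here are arbitrary examples, not from the thread):

```javascript
// PH construction: choose polynomials u(t), v(t) and define the
// hodograph (derivative) as x'(t) = u^2 - v^2, y'(t) = 2uv. Then
//   |B'(t)| = sqrt((u^2 - v^2)^2 + (2uv)^2) = u^2 + v^2,
// a polynomial, so no square root is needed for speed or arc length.
const u = t => 1 + t;   // arbitrary linear example -> a PH cubic
const v = t => 2 - t;

const xp = t => u(t) * u(t) - v(t) * v(t);   // x'(t)
const yp = t => 2 * u(t) * v(t);             // y'(t)
const speed = t => u(t) * u(t) + v(t) * v(t); // exactly |B'(t)|
```

Because the speed is a polynomial, the arc-length function is just its antiderivative, which is what makes constant-speed parametrization tractable for this class.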
I've dealt with this in scientific illustration a couple times and been frustrated by having to use some hacky solution, it's validating to see that I wasn't just missing some small detail.
In my experience there is another very simple offsetting method that is better for "simple" curves. I think it is (or was?) used in paper.js.
You take the intersection point of the curve's handles and the intersection point of the offset curve's handles. Then you calculate the distances from these points to the curve's/offset curve's end points and use these distances to scale the handles of the offset curve.
Here is the code, reformatted so you can basically copy it into your existing code. It should require fewer subdivisions than Tiller-Hanson, but more than shape control and your impressive solution.
    // This method needs to go into the Point class
    scale(s) {
        return new Point(this.x * s, this.y * s);
    }

    handle_scaling() {
        // p: points on original curve
        // q: points on offset curve
        const deriv = this.c.deriv();
        const p0 = this.c.p0();
        const p1 = this.c.p1();
        const p2 = this.c.p2();
        const p3 = this.c.p3();
        const q0 = p0.plus(this.eval_offset(0));
        const q3 = p3.plus(this.eval_offset(1));
        const tan0 = deriv.eval(0);
        const tan3 = deriv.eval(1);
        // s: intersection of handle vectors
        const sp = ray_intersect(p0, tan0, p3, tan3);
        const sq = ray_intersect(q0, tan0, q3, tan3);
        // r: ratios of distances from handle intersection to end points
        const r0 = sq.dist(q0) / sp.dist(p0);
        const r3 = sq.dist(q3) / sp.dist(p3);
        // calculate control points of offset curve by scaling the
        // original handles by the ratios
        let q1;
        let q2;
        if (r0 > 0 && r3 > 0) {
            q1 = q0.plus(p1.minus(p0).scale(r0));
            q2 = q3.plus(p2.minus(p3).scale(r3));
        } else {
            // degenerate ratios: fall back to translating the handles
            q1 = p1.minus(p0).plus(q0);
            q2 = p2.minus(p3).plus(q3);
        }
        return CubicBez.from_pts(q0, q1, q2, q3);
    }
For reference: The quick and dirty solution to make an offset curve of distance l is to take the control cage of the original curve (i.e. the polyline you get by connecting every Bézier control point in sequence) and push out the points that make up the control cage by the distance l along each point's normal. (Same way you do toon-shaded outlines in games.) Since the difference between two cubic Bézier curves is itself a cubic Bézier curve, the convex hull property lets you easily compute a conservative error bound on this. If the error bound is too high, you can subdivide and repeat the process.
This solution isn't as good as what Raph describes here and I'm providing it only for reference. :)
I'm slightly confused by this. A point on a polyline doesn't have a single normal, as it's incident to two edges, and in general those edges have two different normals. If you push out the edge by the offset amount, that's what Tiller and Hanson proposed in 1984, and one of the things I found is that it performs surprisingly badly. It is in common use though.
If you're talking about a variation of that where you choose some other concept of "normal" relevant to the point, I'd be interested to learn more.
Yes, I'm using the term normal in the usual graphics sense of a vertex normal, as the average of the normals of the incident edges. It's the Tiller & Hanson 1984 approach.
Gotcha gotcha. I just coded this up, and averaging the two incident normals performs about the same as OG Tiller-Hanson, slightly worse. The original version (and my implementation) displaces the edge, which basically means computing the new control point as the intersection of the two displaced edges. I believe there are a bunch of T-H variations out there, but haven't collected any evidence yet that they significantly improve matters.
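For anyone who wants to reproduce that, here's my own reconstruction of the edge-displacement version (not Raph's code; it assumes counterclockwise winding). On a cubic approximating a unit quarter circle, the midpoint of the d = 0.5 offset lands at radius ~1.46 instead of 1.5, roughly a 2.4% error, consistent with "performs surprisingly badly."

```javascript
// Evaluate a cubic Bézier q = [q0, q1, q2, q3] at parameter t.
function evalCubic(q, t) {
  const mt = 1 - t;
  return [0, 1].map(i =>
    mt * mt * mt * q[0][i] + 3 * mt * mt * t * q[1][i] +
    3 * mt * t * t * q[2][i] + t * t * t * q[3][i]);
}

function cross(a, b) { return a[0] * b[1] - a[1] * b[0]; }

// Intersection of the line through a with direction da and the line
// through b with direction db (assumes the lines are not parallel).
function lineIntersect(a, da, b, db) {
  const s = cross([b[0] - a[0], b[1] - a[1]], db) / cross(da, db);
  return [a[0] + s * da[0], a[1] + s * da[1]];
}

// Tiller-Hanson, edge-displacement version: displace each edge of the
// control polygon by d along its outward normal (counterclockwise
// winding assumed); the new interior control points are intersections
// of adjacent displaced edges, and endpoints move along exact normals.
function tillerHanson(p, d) {
  const edges = [0, 1, 2].map(i =>
    [p[i + 1][0] - p[i][0], p[i + 1][1] - p[i][1]]);
  const norms = edges.map(e => {
    const len = Math.hypot(e[0], e[1]);
    return [e[1] / len, -e[0] / len];
  });
  const base = [0, 1, 2].map(i =>
    [p[i][0] + d * norms[i][0], p[i][1] + d * norms[i][1]]);
  const q0 = base[0];
  const q3 = [p[3][0] + d * norms[2][0], p[3][1] + d * norms[2][1]];
  const q1 = lineIntersect(base[0], edges[0], base[1], edges[1]);
  const q2 = lineIntersect(base[1], edges[1], base[2], edges[2]);
  return [q0, q1, q2, q3];
}
```

The endpoints come out exact (they move along the true endpoint normals); it's the interior control points that end up in the wrong place, pulling the middle of the offset inward.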
I'm thinking this needs the concept of miter limit, because it can generate very big paths for sharp corners.
Or perhaps the definition of the curve should be more strict (e.g. the result should contain only points that are exactly at distance x from the given path)
That cusp isn't a failure, so much as the definition of parallel not matching your expectations. You can't have the radius of the curve smaller than your offset without forcing a cusp.
If you imagine dragging an eraser across the 'roadway' produced by the parallel lines, you'll see that the solution with the cusp still obeys the constraint that the lines are a constant distance apart for every point along the road, it's just that it turns so tight your eraser needs to turn in place at the corner.
Actually, that's what the offset of the curve looks like. Take a pencil and paper and draw a parabola. Then try to draw an offset curve to the inside with an offset greater than the smallest curvature radius of the parabola. You will get this strange path with two cusps and an inverted curve between them. Try it out, it is an eye opener.
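The parabola case can even be checked numerically. The radius of curvature of y = x² is smallest at the vertex, where it equals 1/2; offset inward by more than that and the offset's x-coordinate stops being monotonic in t, which is exactly the doubling-back between the two cusps. A sketch:

```javascript
// Inner offset of the parabola y = x^2: move each point (t, t^2) a
// distance d along the unit normal toward the center of curvature,
// n(t) = (-2t, 1) / sqrt(1 + 4t^2).
function offsetX(t, d) { return t - 2 * d * t / Math.sqrt(1 + 4 * t * t); }
function offsetY(t, d) { return t * t + d / Math.sqrt(1 + 4 * t * t); }
```

With d = 1 (greater than 1/2), moving t forward from the vertex moves the offset point backwards in x, producing the inverted piece between the cusps; with d = 0.25 it doesn't.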
I'd like to see an exposition of how to slide a shape (e.g. ellipse) along a Bezier curve, and compute the resulting enclosing Bezier curve (something you'd need to render a calligraphic pen as a path, for example).
https://en.m.wikipedia.org/wiki/Parallel_curve