Good question, the simplified example doesn't make this clear.
The real implementation has a mutable `builder` argument used to gradually build the converted filter. If we perform the `transform().isDefined` call directly on the "main" builder, but the subtree turns out to not be convertible, we can mess up the state of the builder.
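A minimal sketch of what I mean, with made-up names (`Builder`, `transform` and the `Tree` cases here are hypothetical, not the actual Spark code):

    import scala.collection.mutable.ArrayBuffer

    // Toy builder: mutable state that grows as transform() recurses.
    class Builder {
      val parts = ArrayBuffer.empty[String]
      def add(p: String): Unit = parts += p
    }

    sealed trait Tree
    case class Leaf(expr: String, convertible: Boolean) extends Tree
    case class And(left: Tree, right: Tree) extends Tree

    def transform(tree: Tree, builder: Builder): Option[String] = tree match {
      case Leaf(expr, ok) =>
        if (ok) { builder.add(expr); Some(expr) } else None
      case And(l, r) =>
        // If the left child converts but the right one doesn't, whatever the
        // left child already pushed into `builder` stays there as garbage,
        // even though the And node as a whole returns None.
        for (a <- transform(l, builder); b <- transform(r, builder))
          yield s"($a AND $b)"
    }

That's why the check is done against a throwaway builder first.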
The second example from the post would look roughly like this:
    val transformedLeft = if (transform(tree.left, new Builder()).isDefined) {
      transform(tree.left, mainBuilder)
    } else None
Since the two `transform` invocations take different builder arguments, we can't just cache the result of the first call and reuse it for the second.
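To make that concrete (reusing the toy types from the sketch above; the cache itself is hypothetical):

    import scala.collection.mutable

    val cache = mutable.Map.empty[(Tree, Builder), Option[String]]

    def cachedTransform(tree: Tree, builder: Builder): Option[String] =
      cache.getOrElseUpdate((tree, builder), transform(tree, builder))

    // cachedTransform(tree.left, new Builder())  // keyed on the throwaway builder
    // cachedTransform(tree.left, mainBuilder)    // different key, so a cache miss;
    //                                            // and skipping this call would also
    //                                            // skip the side effects on mainBuilder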
There's a more detailed explanation in the old comment on the method: https://github.com/apache/spark/pull/24068/files#diff-5de773... .