What Does "With Continuation" Mean? (2020) (snap.berkeley.edu)
107 points by d4nyll 11 months ago | 58 comments



So there’s a funny thing this doesn’t touch on: the semantics of call/cc is genuinely confusing! There’s a related construct that’s much more legible and much easier to understand: call with delimited continuation!

Oleg K wrote a very articulate piece about this a long time ago: https://okmij.org/ftp/continuations/against-callcc.html
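
Concretely, here's a minimal sketch of the difference (Racket-flavored; shift/reset come from racket/control):

    #lang racket
    (require racket/control)  ; shift/reset

    ;; call/cc: k is the *whole* rest of the program, so invoking
    ;; (k 3) abandons the pending (+ 2 _) entirely.
    (+ 1 (call/cc (lambda (k) (+ 2 (k 3)))))  ; => 4

    ;; shift/reset: k is only the slice up to the reset, i.e. (+ 2 _),
    ;; and invoking it returns like an ordinary function call.
    (+ 1 (reset (+ 2 (shift k (k 3)))))       ; => 6

With call/cc the captured continuation never returns to its caller; the delimited one does, which is a big part of why it composes.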


I think of `call/cc` as a parlor trick that helps introduce the concept of continuations more generally.

Threads and co-routines are on the heavy-weight end of the concurrent programming techniques spectrum because they require stacks -possibly large, with guard pages and all- and encourage smearing application state onto that stack, increasing cache pressure.

Continuation passing style (CPS) is on the light-weight end because it encourages the programmer to make application state explicit and compact rather than smearing it on a stack. Callback hell is hand-coded CPS. Async/await is a compromise that gets one close to the light weight of continuations.

To understand all of that one has to understand the concept of continuations. In computer science, the cheap parlor trick that is `call/cc` is helpful in introducing the concept of continuations to students. After all, `call/cc` is shocking when one first sees it, and the curious will want to understand it.
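
For instance, here's the kind of first taste that raises eyebrows (standard Scheme):

    ;; Without invoking k, call/cc just returns its function's result:
    (* 2 (call/cc (lambda (k) 3)))          ; => 6

    ;; Invoking k abandons the rest of the lambda body entirely,
    ;; so the 100 is never seen:
    (* 2 (call/cc (lambda (k) (k 3) 100)))  ; => 6

The real shock comes when you notice k is a first-class value you can stash in a variable and call long after call/cc has returned.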


I think now I get why I find Python's async/await semantics (asyncio) so much more convoluted than Javascript's.

Javascript starts with simple callbacks. Those indeed have no stack, only a shallow list of local variables as state. Then async/await is modelled as a relatively straightforward syntactic sugar on top of that: An await call is still just a callback behind the scenes; if one async function awaits another async function, you get something that looks like a stack, but is really just a chain of callbacks.

In contrast, Python starts with coroutines, which do have a stack, then models async/await by surrounding them with a scheduling runtime. Unfortunately, for async code to be useful, you still need support for callbacks, so you end up with both: an await call represents a mix of suspended coroutine stacks and callback chains, which can be much more complicated to reason about.
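
To make the "chain of callbacks that merely looks like a stack" concrete, here's a hand-coded CPS sketch (Scheme, with made-up names):

    ;; Each "async" function takes k, "what to do next", instead of
    ;; returning. fetch-user "awaits" fetch-id; the apparent call
    ;; stack is really just nested callbacks.
    (define (fetch-id k)
      (k 42))

    (define (fetch-user k)
      (fetch-id (lambda (id)                 ; the rest of fetch-user
                  (k (list 'user id)))))

    (fetch-user (lambda (user)
                  (display user) (newline))) ; prints (user 42)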


Continuations (which can be just closures following CPS conversion) can be used to construct co-routines, and so if you start with co-routines instead...

But callbacks and callback-based async/await force you to compress application state better, so you will get better performance out of that. No need for `call/cc`-style continuations, just light-weight ones. But this is mainly a mirage: in the callback model you do in fact have continuations; it's just that the continuation is only ever "the next step in processing this thing" rather than "the whole stack".

That is, with hand-coded CPS you get a very shallow stack to capture in continuations, so the continuations are very cheap.


Approximately everything by Oleg is great!

We half-jokingly considered renaming our Sydney Computer Science Paper Club to 'Oleg Fan Club'.


Hehe fun!

Yeah, delimited continuations have much saner semantics than undelimited ones. Partly because they’re always well defined.


I like the notation "limited continuations" more than "undelimited". Drops the double negative and provides a pejorative element.

Call/cc still has a delimiter, it's just some second class thing above the scope of the current program that you can't do much with which thwarts composition. Let the programmer specify where the delimiter is and you get a less limited construct.


"undelimited" is not a double negative. "Delimited" means "limited by an explicit boundary."

Introducing a different concept with the same word "limited" is confusing.

"specifiable continuations" may be a clearer way to make a positive connotation.


Yes. It's also pretty clear how to use delimited continuations in a pure setting without mutation.

I'm not quite sure you can make use of something like call/cc without having at least one mutable variable or side-effect somewhere?


Delimited continuations are in fact harder to understand, because resumed delimited continuations hit an arbitrarily set brick wall, at which point they return like functions: the value which bubbles out of that brick wall is the return value. Thus they don't represent the true future of the computation. The true future of the computation will not hit a prompt and then stop dead, returning a value to the point where it was resumed. It's like a future ... on a yo-yo string. This is an extra feature. By adding code around full blown continuations, we can get delimited ones and so to understand that we have to understand full blown continuations, and that extra code.

Once you get it, you get it. You then understand that it's better for the continuation not to continue into an indefinite future, and easier to reason about when you know that it's going to stop at a certain enclosing contour, and you will have access to the value bubbling out of there.

Once you already understand it, it's very easy to explain it to yourself.


Is there any resource that explains the different varieties of delimited continuations in a way that doesn't require an advanced theoretical CS degree? I see that Racket has both prompt/control and shift/reset but I've never been able to make sense of how they differ, if at all.


I took a Scheme programming class where we did tons of stuff with continuations. I got obsessed and wrote all kinds of weird code. I even started using continuation passing style in Python. It was a dark time. Looking back, none of that code makes any sense at all.

When I think back to my days of mucking around with call/cc my main emotion is relief that I’ve forgotten how it works. It’s a load off my mind


In case first-class continuations scare anyone away from Scheme: you probably will never use them directly in practice, unless you're doing something very unusual that happens to really need them.

For example, say you have a really hairy AI search algorithm, where capturing a continuation happens to make backtracking easier.

Or you're implementing another language or DSL in Scheme, and you use first-class continuations to implement some control structure semantics exactly how you want them to be.

I think the closest I've used, in writing maybe a hundred modules, is an escape procedure (like a premature `return` statement in many other languages, which you generally avoid in idiomatic Scheme).
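
For the curious, that escape-procedure use looks like this (standard Scheme):

    ;; call/cc as an early "return": find the first negative number,
    ;; bailing out of the traversal as soon as one is seen.
    (define (first-negative lst)
      (call/cc
        (lambda (return)
          (for-each (lambda (x)
                      (when (negative? x) (return x)))
                    lst)
          #f)))                        ; fall-through: none found

    (first-negative '(3 1 -4 1 5))     ; => -4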


I’m having trouble recalling: call/cc just fixes the stack, right? If you mess with the heap, that sticks around?

The backtracking example is a good one. I vaguely remember needing to be careful about global state, or state visible in a given context. It’s not awful, but a little tricky.


There isn't necessarily a stack with call/cc; the model is that activation frames are heap allocated[1] and can be arbitrarily composed. Normally they would be collected by the GC when a function returns, but call/cc exposes the caller's activation frame (+ a function representing the rest of the caller's function body) as a first-class object, which, once captured, prevents it from being garbage collected. The stack is recovered by realizing that activation frames have a reference to the calling function's activation frame (which is passed to every function as an implicit parameter).

Also an activation frame per se is immutable, but not the objects referenced by it, so all modifications are preserved when invoking a continuation.

[1] this is just a model; in practice there are many ways to implement this that optimize for the common LIFO allocation discipline.
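
To make the model concrete, a sketch in R7RS record syntax (all names mine, purely illustrative):

    ;; An activation frame as a heap record. The caller link is the
    ;; implicit parameter; following it recovers the "stack". A captured
    ;; continuation keeps its frame, and transitively its callers, alive.
    (define-record-type <frame>
      (make-frame caller locals resume)
      frame?
      (caller frame-caller)   ; frame of whoever called us
      (locals frame-locals)   ; immutable slots (may reference mutable objects)
      (resume frame-resume))  ; "the rest of the caller's body"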


For me it's the opposite: I can't forget how it works, I don't want to forget how it works, and I marvel at the beauty of it all.

When I was in school I carried one of Guy Steele's (I think, if I remember correctly) articles about continuations in my back pocket for a year, breaking it out when I was bored, until I got it.


Continuation passing style is mostly a good tool for writers of compilers, and perhaps interpreters.

It's very, very similar to static single assignment (SSA) form, which will be more familiar to people coming from imperative languages.


How is it similar?


For straight-line code, something like

    %2 <- foo %0 %1
    ...
is effectively equivalent to

    (foo %0 %1 (lambda (%2)
      ...))
Phis are sorta modeled inversely:

    %1:
        %2 <- const 0
        br %5
    %3:
        %4 <- const 1
        br %5
    %5:
        %6 <- phi [%1 -> %2, %3 -> %4]
        ...
becomes:

    (letrec ((%1 () (const 0 (lambda (%2) (%5 %2))))
             (%3 () (const 1 (lambda (%4) (%5 %4))))
             (%5 (%6) ...))
      ...)
The nice thing on paper about both of these is that you've broken every computation down into nice nameable bits; if you want to do some analysis (e.g. abstract interpretation) over the programs, you can store intermediate results as a map from names (like %4) to values.

The traditional downside of CPS is that requiring lambdas be nested in order for things to be in scope can make some program transformations require "reshuffling" terms around in a way SSA doesn't require.

The "cps soup" [0] approach used in Guile fixes this, but your terms look like they violate lexical scoping rules!

[0]: https://wingolog.org/archives/2015/07/27/cps-soup


There's a short paper that shows how they're similar (PDF warning):

https://www.cs.princeton.edu/~appel/papers/ssafun.pdf


Variables are written once, and data flow is made explicit.


I'm just happy to see Professor Harvey still participating in the forums. He taught me SICP in Scheme 27 years ago and it is still some of the most important computer science I ever learned.


Same! (CS 61A Fall Semester 1996) And I agree!


Here's Oleg's great explanation of continuations in terms most programmers already understand: https://blog.moertel.com/posts/2008-05-05-olegs-great-way-of...


call/cc is undelimited.


I'm glad that this explanation involves a comparison with goto, especially with a discussion of "goto considered harmful". But IMO excessively using continuations results in the same kind of spaghetti code as excessively using goto. Delimited continuations, on the other hand, essentially place a restriction on where the continuation can return to. The analogy with goto is that the target of the goto has to be within a well specified block of code. This restriction gets rid of the temptation for programmers to take shortcuts and write overly clever but unmaintainable code.


Simply using tail calls already gives you goto-like spaghetti code.

Any if/goto program graph or flowchart can be turned into tail calls that have the same shape. For each node in the goto graph, we can have a tail-called function, 1:1.
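
A quick sketch of that 1:1 mapping (standard Scheme, relying on proper tail calls):

    ;; Each node in the goto graph becomes a function;
    ;; every edge becomes a tail call.
    (define (loop n)                  ; label LOOP
      (if (zero? n)
          (done)                      ; goto DONE
          (body n)))                  ; goto BODY

    (define (body n)                  ; label BODY
      (display n) (newline)
      (loop (- n 1)))                 ; goto LOOP

    (define (done)                    ; label DONE
      'finished)

    (loop 3)  ; prints 3 2 1, then => finished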


Does any language other than Common Lisp provide a "delimited goto"?


For many languages, "delimited goto" is approximated by the language's exception mechanism. You can raise an exception pretty much anywhere you want including in totally different functions many stack frames deep, and the control transfers to the closest catch handler.

Personally I'm not a fan, because even the name "exception" carries some baggage. Many people think it should be reserved only for exceptional situations, not expected control flow, and some languages have performance penalties when using them. That's why they tend not to be used for vanilla control flow.


Continuations are also the backbone of co-routines in Kotlin and have first-class support. One nice feature of the coroutines framework is that they made it really easy to adapt existing asynchronous frameworks in Java (there are many) and on other platforms. The list of supported frameworks includes things like reactive Java, vert.x, spring webflux, and Java Futures. And that's just the JVM. If you are using kotlin-js in the browser or on node.js, promises and async/await are covered too. And on iOS, coroutines also do the right thing with e.g. garbage collection and other platform mechanisms.

Most of this stuff is supported via extension functions. But for asynchronous things that aren't supported it's really easy to adapt them yourself with the suspendCancellableCoroutine function. This is a suspending function that takes a function with a continuation as the parameter. Inside the function you do whatever needs doing and handle your callbacks, futures, awaits, or whatever and call continuation.resume, continuation.resumeWithException, or continuation.invokeOnCancellation as needed (which sadly is not a feature every async framework or language supports).

With a few small extension functions you can pretend most things are nice kotlin suspend functions and not deal with the spaghetti code typically needed otherwise. E.g. Spring Flux is an absolute PITA in Java because of this and essentially effortless in Kotlin. Your code looks like normal kotlin code. Nothing special about it. All the flux madness is shoveled under the carpet.

In the same way, the new virtual threads in Java are supported trivially because all of that is exposed via the existing Threading and Threadpool APIs. So, you can just dispatch your co-routine asyncs and launches as virtual threads via a dispatcher that you create with a little extension function on Threadpool. Most of this stuff "just works" out of the box. There's a lot of confusion on this topic in the Java community. For Kotlin this is just yet another way to do async stuff that joins a long list of existing other ways. It has its pros and cons for different use cases and is there if you need it. No big deal but very nice to have around. I've not found any use for it yet and we're on Spring Boot and Java 21. We already have everything asynchronous and non blocking.


I don't know whether it helps, but here's another explanation that doesn't involve a visual programming language:

First, imagine that procedures (or functions) are first-class values, like what some languages call "anonymous functions", "closures", "lambdas", etc.

Now imagine that every procedure call is effectively a goto with arguments... and the familiar part about being able to return from that procedure, to its calling point, is implemented by an implicit additional value created at calling time... which is a "continuation" procedure with argument(s) that you call to return the value(s) from the other procedure.

To make it first-class continuations, imagine that "continuation" procedure value you call to return the value(s) from another procedure... can be made accessible to the program as a first-class value. Which means you can store it in a variable or data structure like any other value. And you can call that "continuation" procedure value multiple times -- returning to that dynamic state of the calling context in the program many times.

It's one of those things that application code almost never uses directly, but that could be super useful and simplifying in the implementation of another programming language, or carefully controlled in particular libraries (e.g., you wanted to make a stateful Web backend library or DSL that made it really easy to express the logic for some kind of interaction tree responding to multiple steps of form input).
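
A small illustration of that multiple-re-entry property (standard Scheme):

    ;; The captured continuation re-enters the same point in the
    ;; body three times; the set! is what makes it terminate.
    (define (count-to-three)
      (let ((redo #f)
            (n 0))
        (call/cc (lambda (k) (set! redo k)))  ; capture "here"
        (set! n (+ n 1))
        (display n) (newline)
        (when (< n 3) (redo #f))))            ; jump back to "here"

    (count-to-three)  ; prints 1 2 3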


An alternative way to look at for C/C++/pascal/… programmers it is by looking at the return keyword not as a keyword, but as an implicit argument that’s a pointer to a function. Imagine this would work:

   global fooBack
   global barBack
   main() {
     print foo()
   }
   foo() {
     fooBack = return
     bar()
     return "foo"
   }
   bar() {
     barBack = return
     if randomBool
       fooBack("bar")
     else
       return "bar"
   }
and that, 50% of the time, would print “bar”.

In a C-like system with a call stack, that would give you a nicer setjmp (still with quite a few limitations). Systems with true continuations would allow code to call ‘up the stack’ for example if main were to call barBack in the scenario above. That wouldn’t work in C as bar’s stack frame wouldn’t exist anymore.
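
For contrast, here's the same idea in Scheme, where it actually works (assuming a random procedure as in Racket or Guile):

    ;; bar escapes through foo's return point half the time.
    (define foo-back #f)

    (define (bar)
      (if (zero? (random 2))
          (foo-back "bar")     ; "return" from foo, skipping its body
          "bar-normal"))

    (define (foo)
      (call/cc
        (lambda (return)
          (set! foo-back return)
          (bar)
          "foo")))

    (display (foo)) (newline)  ; prints bar or foo

And since foo-back outlives the call, invoking it later really does call 'up the stack', as described above.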


I think `bar` being able to return from `foo` is little different from throwing exceptions in various OOP languages. But the C-style paradigm makes it more confusing in other scenarios, like the one you describe with calling the deeper "return" from main.


So the particular example here isn’t too different from exceptions. You’re unwinding the stack up to a predefined point— here, the callsite of foo, where with exceptions it would be up to the surrounding try/catch. Scala actually implements non-local returns (the only practical use I’ve had for call/cc) using exceptions: https://tpolecat.github.io/2014/05/09/return.html


I'm not an expert, but I think the difference is that with exceptions, the stack is unwound; while with continuations, all stack frames hang around in a sea of stack frames, waiting to be garbage collected when no one holds a reference to them anymore. This would imply that you can jump into the same stack frame multiple times, or do other weird things.


> This would imply that you can jump into the same stack frame multiple times, or do other weird things.

Yep— this is how you can implement the `amb` operator with call/cc: https://ds26gte.github.io/tyscheme/index-Z-H-16.html
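
A deliberately minimal sketch of that construction (standard Scheme; names are mine):

    ;; fail re-enters the most recent choice point; each amb captures
    ;; continuations so exhausted branches backtrack to older ones.
    (define fail
      (lambda () (error "amb: no more choices")))

    (define (amb . choices)
      (call/cc
        (lambda (k)                     ; k: "return a choice from amb"
          (let ((prev-fail fail))
            (for-each
              (lambda (choice)
                (call/cc
                  (lambda (try-next)    ; try-next: "move to next choice"
                    (set! fail (lambda () (try-next #f)))
                    (k choice))))
              choices)
            (set! fail prev-fail)       ; exhausted: restore older point
            (fail)))))

    (let ((x (amb 1 2 3)))
      (unless (even? x) (fail))         ; reject odd candidates
      (display x) (newline))            ; prints 2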


Sure, but then the context of the C family of languages hinders any comprehension of these different styles of use.


This is great, I finally understand how continuations work (since I now see how they might be implemented in the runtime). Thanks!

Could you provide a similar example of what delimited continuations are?


Required reference "Objects the poor man's Closures[continuations]" [1]. Well, the links actually say "Closures And Objects Are Equivalent", which is the point. But I'd offer a deeper point - continuations/closures are rightly considered one of the hardest things most programmers ever encounter while objects are things most programmers deal with daily. Sure, that means use continuations if you want to look like a bad ass but use objects if you want a project to succeed.

[1] http://wiki.c2.com/?ClosuresAndObjectsAreEquivalent


Continuations aren't equivalent to closures.


Continuations are closures. Closures aren't continuations. Though one can build continuations out of closures by converting code into continuation passing style, which makes continuations explicit, and then `call/cc` is trivial, since all it does is pass (to its function argument) its [now-explicit, after CPS conversion] continuation, thus reifying it.


Continuations are just closures in cps. Closures are just functions plus an environment parameter. Functions are just gotos plus a link pointer. Yet each abstraction is more than the sum of its components.


Continuations are closures because they capture everything that a closure would capture at the same point in the execution. But they also capture more.

In CPS, continuations are often closures. But: those closures also close over the surrounding function's hidden continuation parameter k, and make essential use of it! The last continuation-lambda in the function has to call k to simulate the return.


Ehh. I think it makes it a lot harder to understand what a continuation is if you say they’re closures.

Continuations capture the stack. Closures capture variables.


It is not wrong though. Once you reify the return continuation, a continuation is really just a closure; the stack is implicitly recovered by recursively following the captured return continuations. And with closures you do not necessarily have a stack anyway, but an arbitrary directed graph.


> Continuations capture the stack.

Yes, but if you take 'stack' too literally here then you'll think that `call/cc` copies the stack, when maybe it really doesn't.

> Closures capture variables.

Variables, yes, including the return address of the frame in which those variables live, if that return address is made explicit (e.g., because of CPS conversion).

And now you can see that closures can be continuations if they implicitly capture the stack.


The easiest way to think about continuations is to consider them a generalization of function returns. The continuation of a C function f() is the return address and the saved frame pointer of the calling function -- and that looks a lot like a closure, and that's because it is, except that a) you can only pass that closure one argument in C: the return value, and b) you actually can't get a value for this closure in C, and c) there's no closures as such in C anyways :)

[And what is a closure? It's a tuple of {function body pointer, data pointer}, and "callbacks" in C often look just like that, but not as a singular value combining the two parts but as two separate values. In better languages a function is indistinguishable from a closure.]

Anywhere that you see `call/cc` you can think of it as making a first-class function value (a closure) of the current location (which would be the return address of the call to `call/cc`) and then passing that closure to the function given as an argument to `call/cc`. After you parse that sentence you'll realize that's a bit crazy: how does one make a function out of... a function's return address (and saved frame pointer)? The answer is: reify that {return address, saved frame pointer} as a closure. ('Reify' means, roughly, to make a hidden thing not hidden.)

Still, it must seem crazy. Yet it's not that crazy, certainly not if you make it so you can only call it once, and only if `call/cc` hasn't returned yet. What's crazy is that in Scheme this continuation can be called repeatedly, so how? Here's one way to do it: replace the stack with allocating function call frames on the heap!, which in a language with a GC means that all those function call frames remain alive as long as closures (which a continuation is) refer to them. (Another way to do it is with copying stacks.)

One can easily (for some value of "easily") implement a language that has `call/cc` in a language that doesn't but which has a) closures, b) a powerful macro system / AST / homoiconicity. All one has to do is take a program, apply continuation passing style (CPS) conversion to it (a fairly straightforward transformation where the result is unreadable for most humans), which automatically causes all function call frames to be captured in closures (thus put on the heap), but also automatically makes continuations explicit values (closures). The `call/cc` is a trivial function that passes its now-explicit continuation argument to the function that `call/cc` is given to call.
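
To see how trivial `call/cc` becomes after the conversion, a sketch (standard Scheme; all names mine):

    ;; After CPS conversion every function takes its continuation k
    ;; explicitly; call/cc just hands k to f as a reified,
    ;; first-class function.
    (define (call/cc-cps f k)
      (f (lambda (v ignored-k) (k v))   ; invoking it discards the
         k))                            ; continuation at the call site

    ;; (+ 1 (call/cc (lambda (esc) (esc 41)))) after conversion:
    (define (add1-cps x k) (k (+ x 1)))

    (call/cc-cps
      (lambda (esc k) (esc 41 k))
      (lambda (v) (add1-cps v display)))  ; prints 42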

Allocating function call frames on the heap is a performance disaster. And that's where delimited continuations come in: the goal being to allocate function call frames on mini stacks.

`call/cc` is really a bit of a parlor trick, and a very nifty one at that, but continuations are a very important concept that comes up all over the place, and one that computer scientists and software engineers ought to be at least conversant with.


Simpler still is to recognise that "call a function" and "return from a function" are different syntax over the same thing. They both mean "jump to somewhere with a convention about where to find state".

If you replace "call a function" with goto, and replace "return from a function" with goto, then it becomes immediately obvious that "continuation" is a name for where you're going to jump to next. It only looks complicated from the context of calls/returns/stacks.

Confusing call and return for different things is unfortunate but popular. It gives rise to things like four registers available for passing arguments and one available for returning a result, when the calling convention really should be symmetric.

"Continuation passing style" is what all function calls use - x86 writes the address of the continuation to the stack, more sensible ISA's pass it in a specific register - modulo language syntax that somewhat obfuscates control flow.


> Simpler still is to recognise that "call a function" and "return from a function" are different syntax over the same thing. They both mean "jump to somewhere with a convention about where to find state".

That's pretty much what I was saying, but pithier and better stated, so thank you.

> If you replace "call a function" with goto, and replace "return from a function" with goto, then it becomes immediately obvious that "continuation" is a name for where you're going to jump to next. It only looks complicated from the context of calls/returns/stacks.

Yes, thus "lambda is the ultimate GOTO".

> Confusing call and return for different things is unfortunate but popular. It gives rise to things like four registers available for passing arguments and one available for returning a result, when the calling convention really should be symmetric.

It's a convenient abstraction for Algol family languages.

There is a difference between function call and function return in those languages: a call pushes a frame on the stack, and a return pops a frame off the stack, but both otherwise look very similar under the covers.

In CPS the "pop a frame off the stack" part doesn't happen, but instead you get tail-call optimization (TCO) to avoid the stack blowing up with calls to functions that never return (and which anyways maybe store their real call frames on the heap to boot, thus wasting all that stack space for nothing).


> like four registers available for passing arguments and one available for returning a result, when the calling convention really should be symmetric

The symmetry implies the support for multiple return values.

If the language model has single return values, then continuations take one parameter. Lots of historic papers about continuations model them that way.

Multiple values are tricky. 100% symmetry is never achieved with those things. The problem is that in many contexts, an expression is expected to produce one value. We usually want (foo (bar) (baz)) to call foo with two arguments even if bar and baz return two or more values. There may be times when we want to interpolate all the values, or some of them, into the argument space, so we need some syntax to distinguish those situations. But if (foo (bar) (baz)) just takes one value from each function, then that means that the primary value is more of a first class citizen than the additional values. There is something special about it.

We can also go the other way: declare that functions should not only return exactly one value, but only take exactly one argument. That is also symmetric! Then currying can be used to combine functions in order to simulate multiple arguments.
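
For what it's worth, Scheme's own compromise is to make multiple values explicit, which also makes the asymmetry visible (standard Scheme):

    ;; values passes several results to the continuation;
    ;; call-with-values names that continuation explicitly.
    (define (div-mod a b)
      (values (quotient a b) (remainder a b)))

    (call-with-values
      (lambda () (div-mod 7 2))
      (lambda (q r) (list q r)))   ; => (3 1)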


This is tricky in the comment format but here goes.

Let us declare that a function is passed exactly one value, and that it passes exactly one value to the chosen continuation. However that does not imply that it takes exactly one named argument.

(lambda a ...)

Binds that one value to a. If you pass it a list, that's a variadic function.

(lambda (a b) ...)

Requires the argument be a list of two values and binds the elements to a, b.

(lambda (a (b c) d) ...)

Likewise, except it's now a list of three things where element 1 must be a list of two things.

The corresponding return/continuation part then looks like

(let a (foo) (b c) (bar) ...)

where the let binds whatever foo returned to a, binds a list of length 2 from bar and so forth.

This works really nicely with static typing in the absence of function currying. Destructuring bind on function arguments is somewhat common.

In lisp, you have to work out what let returns when the binding doesn't typecheck at runtime and that's a bit of a mess.

In SML the construct typechecks at compile time but I can't work out how to reconcile it with currying.

Parameter trees, the (a (b) c) idea, I first saw in Kernel. I don't remember if it went as far as let binding / destructuring on the continuation invocation.

I like the destructuring syntax more than currying so haven't put as much thought into providing both as the latter might deserve.


> The symmetry implies the support for multiple return values.

Yes!

> Multiple values are tricky. 100% symmetry is never achieved with those things. The problem is that [...]

You basically need destructuring on the return values, and the syntax for that has to be fairly clean. And for the example you give, where one function's return values are passed to another, things do get tricky: shall that use only the first value returned? or maybe the whole lot of them (as an array/list)? or both, where if the callee doesn't treat the thing as an array/list then it's the first value (implying dynamic typing)? or does what happens depend on the callee's parameter's type? or what? Yeah, it's very tricky.

Another option is to have generators, and if a function returns/yields multiple values in one go, maybe treat them as single values yielded separately. This one isn't that awesome either.

> We can also go the other way: declare that functions should not only return exactly one value, but only take exactly one argument. That is also symmetric! Then currying can be used to combine functions in order to simulate multiple arguments.

Yes :)

But then in Haskell, which does this, we end up with syntactic sugar with which to pretend there's multiple arguments, thus the 100% symmetry isn't quite.

Oh well.


Henry Baker showed that call/cc doesn't require a spaghetti stack with dynamically allocated frames. All you need is one linear stack, and never return.

Richard Stallman made similar observations in "Phantom Stacks: If You Look Too Hard, They Aren't There".

Chicken Scheme, which compiles to C, implements Baker's idea. Chicken Scheme's C functions never return. They take a continuation argument, and just call that instead of returning. So every logical function return is actually a new C function call. These all happen on the same stack, so it grows and grows. All allocations are off the stack, including lambda environments and other objects like dynamic strings. When the stack reaches a certain limit, it is rewound back to the top, like a treadmill. During this rewinding phase, all objects allocated from the stack which are still reachable are moved to the heap.

Thus the combination of the CPS strategy (all returns are continuation invocations) and the linear stack with rewinding obviates the need for dynamically allocated frames.


I'm aware of this work, especially Chicken Scheme, but Chicken Scheme basically combines the stack and the heap + compacting GC, so it's a reach to say that Chicken Scheme doesn't allocate frames on the heap... If you have to GC frames, then they are as-if on the heap. Same thing for related variants.

You can also just allocate all frames on the stack and use stack copying for `call/cc` -- this involves... a heap of stack copies :laugh: so I think I'm henceforth going to say that `call/cc` continuations always require heap allocations :)


yield based on delimited continuations in TXR Lisp, showing that unwind-protect works:

  (defun grandkid ()
    (unwind-protect
      (yield-from parent 'in-grandkid)
      (put-line "returning from grandkid")))
 
  (defun kid ()
    (unwind-protect
      (progn
        (yield-from parent 'in-kid)
        (grandkid))
      (put-line "returning from kid")))

  (defun parent ()
    (unwind-protect
      (progn
        (yield-from parent 'in-parent)
        (kid))
      (put-line "returning from parent")))

  (let ((fn (obtain (parent))))
    (prinl 'a)
    (prinl (call fn))
    (prinl 'b)
    (prinl (call fn))
    (prinl 'c)
    (prinl (call fn))
    (prinl 'd)
    (prinl (call fn))
    (prinl 'e))

Run:

  $ txr cont.tl 
  a
  in-parent
  b
  in-kid
  c
  in-grandkid
  d
  returning from grandkid
  returning from kid
  returning from parent
  nil
  e
Each yield captures a new delimited continuation up to the parent prompt. Each (call fn) dispatches a new continuation to continue on to the next yield, or termination.

So with all this back-and-forth re-entry, why do the unwind-protects just go off once? Because of a mechanism that I call "absconding": the answer to the dynamic wind problem.

Absconding is a way of performing a non-local dynamic control transfer without triggering unwinding. It's much like the way, say, the C longjmp is unaware of C++ destructors: the longjmp absconds past them.

With absconding we can get out of the scope where we have resumed a continuation without disturbing the scoped resources which that context needs. Then with the continuation we captured, we can go back there. Everything dynamic is intact. Dynamically scoped variables, established exception handlers, you name it.

The regular function returns are not absconding so they trigger the unwind-protect in the normal way.

Absconding is an elephant gun that should only be used in the implementation of primitives like obtain/yield.

15 second tutorial:

Cleanup yes:

  1> (block foo (unwind-protect (return-from foo 42) (prinl 'cleanup)))
  cleanup
  42
Cleanup no:

  2> (block foo (unwind-protect (sys:abscond-from foo 42) (prinl 'cleanup)))
  42
That's all there is to absconding. yield-from uses it: it captures a new continuation, packages it up, and absconds to the prompt, where it is unpacked and the new continuation installed in place of the old, so that fn will next dispatch that new one.


(2020)


Call/cc !


If I ever start writing statements like "Part the first", please take me out back and put us all out of that misery.



