
That's what I was getting at w.r.t. naive vs smart implementations. The point of univalence is that, from a logical/functional perspective, `slowFoo(slowX)` is indistinguishable from `fastToSlow(fastFoo(slowToFast(slowX)))`. Applying this to composite expressions will produce lots of calls like `slowToFast(fastToSlow(x))` which can be optimised down to `x`.

The problem, of course, is that we can't enforce this within the logic (since those expressions are indistinguishable), so we must rely on compiler optimisations (i.e. deforestation/fusion).
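Concretely, this could look like the following minimal Haskell sketch. The names `slowFoo`, `fastFoo`, `slowToFast`, and `fastToSlow` are the hypothetical ones from above; the choice of a list as the "slow" representation and a `Data.Set` as the "fast" one is an assumption for illustration, and the `RULES` pragma stands in for the kind of compiler optimisation being described:

```haskell
module Rep where

import Data.List (nub)
import qualified Data.Set as Set

-- Two representations of the same abstract type (an illustrative
-- assumption: a list with duplicates vs a canonical Set).
type Slow = [Int]        -- O(n) membership, non-canonical
type Fast = Set.Set Int  -- O(log n) membership, canonical

slowToFast :: Slow -> Fast
slowToFast = Set.fromList

fastToSlow :: Fast -> Slow
fastToSlow = Set.toList

-- The same logical operation on each representation.
slowFoo :: Slow -> Slow
slowFoo = nub . map (+ 1)

fastFoo :: Fast -> Fast
fastFoo = Set.map (+ 1)

-- The transported pipeline from the comment: logically
-- indistinguishable from slowFoo (they agree up to ordering).
slowFoo' :: Slow -> Slow
slowFoo' = fastToSlow . fastFoo . slowToFast

-- Composing such pipelines yields adjacent round-trips like
-- slowToFast (fastToSlow x); a GHC rewrite rule can cancel them at
-- compile time.  (In practice the conversions would need NOINLINE or
-- phase control for the rule to fire reliably.)
{-# RULES "slowToFast/fastToSlow" forall x.
      slowToFast (fastToSlow x) = x #-}
```

Note that the rule is only sound in this direction because `Fast` is canonical: `slowToFast (fastToSlow x) = x` holds on the nose, while the other round-trip holds only up to the equivalence.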

That link looks interesting, but it appears to be more about re-using the same computational content/runtime representation for multiple compile-time/proof-context semantics (akin to McBride's "ornaments"). That seems like the dual of the problem here: swapping out slow representations for fast ones.



