It does sound like you might be after something closer to totality checking. I'm not as familiar with that, but I can maybe give a few pointers and a bit of terminology:
One easy way to guarantee termination, even for functions with input/output spaces that are infinite, is using "structural recursion". This is when we only make recursive calls on a 'smaller piece' of our input. The classic example is the `Natural` type, where we can do:
    f Zero     = somethingNonRecursive
    f (Succ n) = somethingInvolving (f n)
We can tell just from the syntax that the argument will always get smaller and smaller for each recursive call. Note that structural recursion is syntactic: we cannot do something like `f n = f (n - 1)`, since the syntactic form `n - 1` is not a 'smaller piece' of the syntactic form `n`, whereas `x` is a smaller piece of `Succ x`. This makes structural recursion pretty limited. As an example, the "Gallina" language of the Coq proof assistant does totality checking via structural recursion, and it very quickly becomes unusable (more on Coq below!).
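To make that concrete, here's a minimal runnable Haskell sketch (the `double` function is my own example, just for illustration):

    data Natural = Zero | Succ Natural

    -- Structural recursion: the recursive call is on n, which is a
    -- 'smaller piece' of the input (Succ n), so this is guaranteed to
    -- terminate on any (inductive, i.e. finite) Natural.
    double :: Natural -> Natural
    double Zero     = Zero
    double (Succ n) = Succ (Succ (double n))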
There's actually a subtlety here: if `Natural` is defined inductively then all of its values are finite (it is the least fixed point of its constructors, sometimes written as a "μ" or "mu" operator; similar to the Y combinator). In particular its values are `Zero`, `Succ Zero`, `Succ (Succ Zero)`, and so on. This means that structural recursion on `Natural` will eventually stop, either by hitting `Zero` or some other "base case" (e.g. functions involved in the Collatz conjecture might stop at `Succ Zero`, etc.).
We could instead decide to use the greatest fixed point (sometimes written as a "ν" or "nu" operator); we call such definitions coinductive. A coinductive `Natural` type would contain the values `Zero`, `Succ Zero`, `Succ (Succ Zero)`, and so on, as well as an infinite value `Succ (Succ (Succ ...))` (sometimes written as "ω" or "omega"; not to be confused with the non-terminating program omega `(λx. x x) (λx. x x)`!). Structural recursion on coinductively defined values might not terminate; but it does coterminate, which is the idea of being "productive" like I showed with `Delayed`. (Note that in my Collatz examples I was assuming that `Natural` was defined inductively and `Delayed` was defined coinductively.) All datatypes in Haskell are coinductive, since laziness lets us build and consume infinite values. Also note that a type may contain many different infinite values; for example, if we define trees like:
    data Tree : Type where
      Leaf : Tree
      Node : Tree -> Tree -> Tree
If this is coinductive we can have any arrangement of finite and infinite branches we like. Another interesting example is `Stream t` which is like `List t` but only contains infinite values (there is no base case!):
    data Stream (t : Type) : Type where
      Cons : t -> Stream t -> Stream t
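Since Haskell's datatypes behave coinductively, we can sketch productivity there (these names are mine): `countFrom` never terminates in the inductive sense, but every recursive call is guarded by a `Cons` constructor, so any finite prefix of the result can be computed in finite time.

    data Stream t = Cons t (Stream t)

    -- Productive corecursion: each call is guarded by a Cons constructor.
    countFrom :: Integer -> Stream Integer
    countFrom n = Cons n (countFrom (n + 1))

    -- Consuming a finite prefix of an infinite value is fine.
    takeS :: Int -> Stream t -> [t]
    takeS 0 _           = []
    takeS n (Cons x xs) = x : takeS (n - 1) xs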
We can extend structural recursion a little to get "guarded recursion" (this measure-based approach is also known as "well-founded recursion", since the decreasing sizes admit no infinite descent). In this case, when we define a function like `f(arg1, arg2, ...)` which makes recursive calls like `f(newArg1, newArg2, ...)`, we must also specify some function, say `s`, such that `s(newArg1, newArg2, ...) < s(arg1, arg2, ...)`. The function `s` effectively calculates a "size" for each set of arguments; if we can prove that the "size" of the recursive arguments is always less than the "size" of the original arguments, then we guarantee termination/cotermination. This would allow things like `f n = f (n - 1)`, given a suitable "size" function (which could be the identity function in this case) and a proof (manual or inferred) that it always decreases. The Isabelle proof assistant does totality checking in this way, and I think it's also how Agda works (but Agda tries representing this using "sized types", which confuses me ;) ).
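As a hedged illustration (Haskell itself performs no such check; in Isabelle the measure would be supplied or inferred), Euclid's algorithm is a classic definition that isn't structurally recursive, yet terminates because a size decreases:

    -- Neither recursive argument is a syntactic piece of the input, but
    -- for non-negative inputs the measure s(a, b) = b strictly decreases,
    -- since a `mod` b < b whenever b > 0.
    myGcd :: Integer -> Integer -> Integer
    myGcd a 0 = a
    myGcd a b = myGcd b (a `mod` b)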
Talking about meta levels, proof assistants can provide an "escape hatch". The way Coq works is that proofs are represented in the "Gallina" language, which is total. We can write Gallina terms directly, but it can be painful. Instead, we can use a meta-language called "Ltac", which is a (rather awful) untyped Turing-complete imperative programming language. We can use Ltac to construct Gallina terms in whichever way we like; when we "compile" or check a Coq program/proof, that Ltac is executed and, crucially, it might not halt! We don't actually need such a large separation between object level and meta level: there's a really nice Coq plugin called "Mtac" which extends the Gallina language with a `Delay t` type (a polymorphic version of my `Delayed` example; we could say `Delayed = Delay Natural`), and the only "meta" operation it defines is converting a value of `Delay t` to a value of `t`, which it does by unwrapping any `Later` constructors at compile/check time. Just like Ltac, this might not halt (since there could be infinitely many `Later` wrappers).
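Here's a minimal Haskell sketch of such a delay type (the names `run` and `runFor` are mine; the real Mtac operation lives inside Coq's checker):

    data Delay t = Now t | Later (Delay t)

    -- The 'meta' operation: strip Later wrappers until a Now appears.
    -- On an infinite tower of Laters this never halts, which is exactly
    -- the escape hatch described above.
    run :: Delay t -> t
    run (Now x)   = x
    run (Later d) = run d

    -- A bounded variant: give up after n steps instead of looping.
    runFor :: Int -> Delay t -> Maybe t
    runFor _ (Now x)           = Just x
    runFor n (Later d) | n > 0 = runFor (n - 1) d
    runFor _ _                 = Nothing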
Another thing to keep in mind with totality checking is "strictly positive types". Occurrences of a type within its own definition are called "positive" or "negative" depending on whether they appear as the result or as an argument of a function; "strictly positive" roughly means the type never appears to the left of a function arrow in its own constructors. I've never looked into this in much depth (since languages like Coq will tell you when it's not allowed), but it seems like types which aren't strictly positive allow us to define infinite values and non-terminating programs, which shouldn't be allowed for inductive types.
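To see why, here's the standard trick, sketched in Haskell (which permits negative occurrences): a type appearing to the left of an arrow in its own definition lets us write a fixed-point combinator, and hence a non-terminating `fix' id`, without using recursion at all.

    -- Not strictly positive: Rec a occurs to the left of the arrow.
    newtype Rec a = In { out :: Rec a -> a }

    -- A fixed-point combinator defined without any recursive equations;
    -- evaluating fix' id loops forever.
    fix' :: (a -> a) -> a
    fix' f = w (In w)
      where w r = f (out r r)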
Another thing to look at regarding self-reference is predicative vs impredicative definitions. Impredicative definitions are like "lifting yourself up by your bootstraps" (see e.g. http://wadler.blogspot.com/2015/12/impredicative-images.html ;) ).
I don't follow research in totality checking too closely, only where it overlaps my interests in programming language design and theorem proving. One memorable paper I've come across is https://dl.acm.org/citation.cfm?id=2034680 but whilst it's a nice application of language and representation, it's not clear to me if it's much of an advance in totality checking (i.e. it's a neater way of doing known things). I also see a steady trickle of work on things like "parameterised coinduction" and "circular coinduction" which seem related to totality, but they're slightly above my ability to tell whether they're brilliant new insights or just niche technical results. At least I hope they lead to better support for coinduction in Coq. You can tell how long someone's worked with this stuff based on how much they joke about the name (although I couldn't resist laughing when someone seriously remarked that "well of course everyone loves Coq" ;) ) versus how much they joke about "lol lack of subject reduction".
Thanks for coming back to me! Totality checking sounds exactly like what I was looking for: your examples of structural and guarded recursion closely match parts of the algorithm I had already figured out through playing around with logical structures.
You've given me a lot to look into and I sincerely appreciate the effort you've put into opening up these fields to me. There are a few concepts you've mentioned that are still a little hazy to me (particularly coinductive types and the "Delayed" concept, which sounds somewhat like a generator function?) but now that I know the terminology I can begin to research these topics myself in earnest. I always find that with the sciences, the hardest part of diving into a new field is simply knowing what you're looking for, followed shortly by building an accurate mental model of the concepts. You've helped a great deal with both, so thank you :)