Thanks! I'm considering writing a follow-up note on the process that went into the typesetting. The original source is a markdown-with-latex format that a tool called pandoc can work with. The style is primarily from tufte.css. The image work is mostly custom, but there are some re-usable approaches. If you're curious:
By the way, this is a nice article, but you unfortunately mix up "frequency" and "period" in your quantitative discussion. (A function that repeats every 50 units, for example, has a period of 50 units, whereas its frequency is 1/50 per unit.)
This is a fun exploration! It's interesting that the summed functions have a feature topology that mirrors each sub-function, but some of the local features retain their original scaling, while features in the overlap region adopt a new scaling. This reminds me a little of using absolute scaling in SVG or CSS on something that is parented or embedded (not at the root of the hierarchy). Sometimes you use that kind of thing for character rigs in movies & games - break out of the local coordinate system and use world space.
Since the scaling isn't uniform, t1 & t2 have to be piecewise linear. I'd love to see what happens if the transform functions t1 & t2 were restricted to affine transforms. How much harder is it if you further require the summed function to be a uniformly scaled & translated version of the sub-function? What if you added something like this to the formal definition:
f_S = a * f_L ( b * x + c ), where a,b,c are constants
This also reminds me of a mutation I added to my artificial evolution digital art project where I was trying to generalize on the idea of a fractal. I called it the 'harmonic mutation', and it takes any function f(x) and produces a new function with two or more scaled copies, like c0 * f(c1 * x + c2) + c3 * f(c4 * x + c5). Repeat several times with the same constants, and you can make a fractal out of any function at all. Very similar concept to Perlin turbulence too, where you have summed & scaled octaves of noise.
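For the curious, the 'harmonic mutation' idea above is easy to play with numerically. Here's a minimal sketch; the particular constants c0..c5 are my own picks, not the ones from the original project (with c4 = 3 and c3 = 0.5 the iterates resemble a Weierstrass-style sum of scaled octaves):

```python
import numpy as np

# Sketch of the 'harmonic mutation': given f, build
#   g(x) = c0*f(c1*x + c2) + c3*f(c4*x + c5),
# then iterate with the SAME constants so small-scale copies pile up
# into fractal-like detail. Constants here are illustrative only.
c0, c1, c2, c3, c4, c5 = 1.0, 1.0, 0.0, 0.5, 3.0, 0.7

def harmonic(f):
    return lambda x: c0 * f(c1 * x + c2) + c3 * f(c4 * x + c5)

g = np.sin
for _ in range(6):          # repeat several times with the same constants
    g = harmonic(g)

x = np.linspace(0, 2 * np.pi, 1000)
y = g(x)                    # a craggy, self-similar curve built from sin
```

Swapping np.sin for any other base function gives the "make a fractal out of any function" effect described above.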
Congratulations, really cool and innovative! Have you looked into ArtMatic and how it lets you create a tree of functions? It allows you to create mutations that give results very similar to yours.
I hadn't before, thanks for the tip! It's been a while since I played with this stuff, but I might have to get a copy and explore. I have heard of MetaSynth in audio circles. I'm not sure, but the images and videos on the ArtMatic site look (to me) very inspired and influenced by the work of Karl Sims, which is the main source of inspiration behind my project... http://www.karlsims.com/papers/siggraph91.html
You can find a bunch of his videos on YouTube too.
Thank you for the link to the article!
I've known Eric Wenger, the author of MetaSynth / ArtMatic / Voyager, for many years; his motivation for creating ArtMatic was to generate textures to feed into MetaSynth and make music from visuals.
The definition of an exactly self replicating function seems to be any function 'f' such that there are continuous bijections a,b,c such that f∘a + f∘b = c∘f, and a does not equal b.
Edit: I think this includes all strictly quasi-concave functions, but I'm not entirely sure.
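One concrete family satisfying this definition (my example, not from the article) is f = exp: since exp(x+p) + exp(x+q) = (e^p + e^q)·exp(x), we can take a(x) = x+p, b(x) = x+q, and c(y) = (e^p + e^q)·y, all continuous bijections with a ≠ b. A quick numerical check:

```python
import numpy as np

# Check f∘a + f∘b = c∘f for f = exp with
#   a(x) = x + p, b(x) = x + q, c(y) = (e^p + e^q) * y.
p, q = 0.3, -1.2
x = np.linspace(-5, 5, 1001)

f = np.exp
a = lambda t: t + p
b = lambda t: t + q
c = lambda y: (np.exp(p) + np.exp(q)) * y

lhs = f(a(x)) + f(b(x))
rhs = c(f(x))
assert np.allclose(lhs, rhs)
```

Note exp is not quasi-concave with a maximum, so this is a degenerate instance; the interesting cases in the article are the bump-shaped ones.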
I think you're right. Your version of the definition is more elegantly stated, and I may want to use it going forward, as long as a, b, and c still make intuitive sense (thinking about that part).
'An example of a fractal sequence. That is, if you omit every other number in the sequence, you get the original sequence. And of course this can be repeated. So if you form the sequence a(0 * 2^n), a(1 * 2^n), a(2 * 2^n), a(3 * 2^n), ... (for any integer n > 0), you get the original sequence.'
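A small concrete example of this downsampling property (my own pick, not necessarily the sequence quoted): a(n) = odd part of n, i.e. the largest odd divisor, which satisfies a(2n) = a(n), so keeping every 2^n-th term reproduces the sequence:

```python
# a(n) = odd part of n: divide out all factors of 2.
def odd_part(n):
    while n % 2 == 0:
        n //= 2
    return n

seq = [odd_part(n) for n in range(1, 257)]       # a(1) .. a(256)
# Keeping every other term, a(2), a(4), ..., gives the original sequence back:
every_other = [odd_part(2 * n) for n in range(1, 129)]
assert every_other == seq[:128]
```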
Based on the title, I imagined this would be about AWS Lambda functions.
I wonder if anyone tried simulating evolution using these. I suppose you could even make the fitness be about earning money, e.g. solving captchas? Such that if a function can't pay for its own upkeep it'd die off, while if it does well it could spawn offspring.
Thanks! The original source is custom js rendering to svg. The html version of the article contains png files based on that output; the pdf version contains vectorized versions. I got some solid design input from Mike Sall on the images for this article.
Not only are the figures simple and beautiful but the typesetting and simple layout/lack of cruft is wonderful and refreshing. A+ on site design and also a very interesting article to boot! I work in game programming and would be interested in exploring self replicating functions for a variety of things...
This reminds me of the concept of a "ring of functions" (albeit slightly more constrained). Since you seem to be familiar with measure theory already, that idea is probably already well known to you (the author), but I figured I'd mention it anyway.
I highly recommend learning abstract algebra thoroughly if you haven't already, by the way. It's far more interconnected and well researched than you might think after reading this or that definition (groups, rings, quotient rings, the lattice of ideals) in a vacuum. The magic comes when you mix all the definitions together. If you learn it correctly and master both the rigour and intuition behind it all, you'll come to see your pre-abstract-algebra self in the same Platonic cave as you might see your pre-linear-algebra self right now.
A while ago I thought about the fact that if you repeatedly apply f(x)=2+x/2 to a number, you approach 4. What happens if you repeatedly apply, with a 50/50 chance, either x/2 or x+2 to a distribution? What distribution do you get?
I could find a lot of analysis using matrices in discrete math, but I couldn't find much for continuous distributions. I haven't gotten all the way through reading this, but it seems like it could be helpful.
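For what it's worth, the random iteration above is easy to simulate, and the stationary mean can be checked by hand: if m is the mean, then m = 0.5·(m/2) + 0.5·(m + 2), which gives m = 4, matching the deterministic fixed point. A rough Monte Carlo sketch:

```python
import random

# Repeatedly apply x -> x/2 or x -> x + 2, each with probability 1/2,
# and look at the long-run distribution of values.
random.seed(0)

def sample(steps=1000):
    x = 0.0
    for _ in range(steps):
        x = x / 2 if random.random() < 0.5 else x + 2
    return x

samples = [sample() for _ in range(10_000)]
mean = sum(samples) / len(samples)   # should be close to 4
```

The full stationary distribution is skewed rather than symmetric around 4, since the occasional long run of "+2" steps produces a heavy right tail.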
This sounds quite a bit like a probabilistic version of the Collatz problem (https://en.wikipedia.org/wiki/Collatz_conjecture). I thought I'd seen results on this before, but a quick Google search doesn't turn anything up. (Instead, Google results want to discuss probabilistic heuristics for why the deterministic Collatz conjecture holds.)
Hmm. Thanks for the thought. I'm not sure how related it is, but I might find something in the attempted solutions.
The original idea behind this was to figure out how to study the stability of certain probabilistic transformations. The hope was to be able to make transformations that are good at detecting non-random inputs, particularly in time-series data. Could potentially be used in a similar way to neural-nets.
After reading through his notes, I think my question differs significantly from his. He focuses on the extreme ends of ramp functions, which remain a single size, despite the domain in between them growing. The resulting function is therefore a piecewise transformation (is that the right term?) of the original function, rather than the exact same function or a simple transformation of it.
If you're interested in future directions, or in working off of research that others might have generated while looking into similar questions, for the sake of taking this idea (self-replicating functions) to the next level... it would be pretty interesting if you could use just a hint of the much-fabled category theory for the sake of generalization!
hm, now I wonder what happens if you take 'any old function', add it to a reversed and shifted version of itself, and normalize. do you eventually get one of these self-replicating functions, or are there functions that 'diverge' under this treatment?
Taking a random array and giving it this treatment, I seem to get a Gaussian distribution after just a few (say, 100) iterations (on a 100-element sequence, shifted by 1/3 and then downsampled to be a 100-element sequence again). But of course this is a global and downsampled view; the function might be very craggy locally.
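In case anyone wants to reproduce this, here's my reading of the procedure as a rough numerical sketch (the shift fraction and interpolation scheme are my assumptions): add a reversed copy shifted by a third of the length, renormalize, downsample back, and repeat.

```python
import numpy as np

rng = np.random.default_rng(0)
f = rng.random(100)

def step(f, shift_frac=1/3):
    n = len(f)
    s = int(n * shift_frac)
    out = np.zeros(n + s)
    out[:n] += f                # original copy
    out[s:] += f[::-1]          # reversed copy, shifted by s samples
    out /= out.sum()            # normalize so it stays distribution-like
    # downsample back to n samples by linear interpolation
    return np.interp(np.linspace(0, n + s - 1, n), np.arange(n + s), out)

for _ in range(100):
    f = step(f)
# f is now far smoother than the initial random array
```

Whether this always converges, and to what, is exactly the open question in the comment above; the sketch only shows the global, downsampled view.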
That's all the solid work I have so far. Tons of ideas for what could be next. For example, consider convolutions on sets generated by L-systems (eg the Cantor set). I suspect those might offer continuous, finitely-supported, linearly-shifted self-replicating fns (the current article is mainly about non-linear shifts; t_1 is only piecewise linear). Maybe L-systems even completely characterize that category of self-replicating fns.
Other ideas:
* Higher dimensions (eg f1 + f2 + f3 = g with domain R^2)
* Shifted fns f1, f2 whose normalized sum differs surprisingly little from a shifted copy of f1: ie small |C(f1+f2)-shift(f1)|; the normal fn appears to fall into this category
You're right that the g1 and g2 in section 2 are also scaled in addition to being shifts of f. I plan to clarify this in the text.
This work is inspired by the self-similarity of fractals, which is in some cases primarily an intuitive or informal idea: for example, the Mandelbrot set is considered self-similar, but I'm not aware (despite searching) of any simple automorphisms of the Mandelbrot.
From the first sentence of the paper: “I explore functions f that can be written as a sum f=g1+g2 where g1 and g2 are shifted and possibly reflected versions of each other, both strongly resembling the original function f.”
Note: they are not exactly f, but "both strongly resembling the original function f."
Articles like this make me so sad, because it seems absolutely fascinating, but I know nothing about math. I became a programmer later in life and never got a solid education in it.