You implicitly used an axiom to ignore the differences between the apples. Someone else could use different axioms to talk about the sizes of the apples (1 large + 1 small = ?), or the color of the apples (1 red + 1 green = ?), or the taste of the apples (1 sweet + 1 sour = ?).
People "axiom" their way out of 1+1=2 in this way: by changing the axioms, they change the topic, so they change the conclusion. I observe this pattern in disagreements very often.
I have used appropriate axioms, not arbitrary axioms. If you want to talk about size or color or taste, you would use “axioms” appropriate for your case.
Some other definition fun: Should we define 0 as both positive and negative, or as neither positive nor negative? Does monotonically increasing mean x<y -> f(x)<f(y) or x≤y -> f(x)≤f(y)? Should we deny the law of excluded middle and use constructive math? Does infinity exist? If infinity exists, is it actual (as an object) or potential (as a function)? Is the axiom of choice true? Or is the axiom of determinacy true?
Should we use a space-time manifold, or separate space and time dimensions? Do future objects exist, and do past objects exist? Do statements about the future have a definite truth value? Does Searle's Chinese Room think? Which Ship of Theseus is the original: the slowly replaced ship, or the ship rebuilt from the original parts?
I find that so many philosophy debates actually argue over definitions rather than practical matters, because definitions do matter. Well, add your own fun definition questions!
What's worse, French typically uses positif to mean "greater than or equal to 0", so some people will act confused if you use English 'positive' instead of 'strictly positive' to mean "greater than 0".
Unfortunately, when shrinking an array down to 0, you run into a complication. Detecting allocation failure now requires checking both size > 0 and sane_realloc returning 0. To simplify this further, just always allocate a non-zero size.
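For example, a minimal sketch of that "always allocate at least one byte" approach (sane_realloc is the hypothetical wrapper from this discussion, not a standard function):

    #include <stdlib.h>

    /* Sketch: never pass size 0 to realloc, so a null return can only
       mean allocation failure, never "successfully freed". */
    void *sane_realloc(void *ptr, size_t size)
    {
        if (size == 0)
            size = 1;   /* waste one byte to keep the contract simple */
        return realloc(ptr, size);
    }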
But the second sane_realloc now never frees. That's a problem shared by the ISO C realloc.
According to ISO C, size zero can behave like this:
    free(old);
    return malloc(0);
and if malloc(0) allocates something, we have not achieved freeing.
There are ways to implement malloc(0) such that it returns unique pointers without allocating memory, or at least not very much memory. For instance, we can use the 64-bit address space to set aside some range of (unmapped) virtual addresses where we allocate bytes, and use a compact bitmask (actually allocated somewhere) to keep track of them.
Such a scheme was described by Tim Rentsch in the Usenet newsgroup comp.lang.c.
If an implementation does such a thing, adjusting the size to 1 will defeat it; allocations of size 1 need real memory.
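For the curious, here is a rough toy sketch of such a reservation scheme on a Unix-like system (my own guess at the shape of it, not Rentsch's code; the names, the 4096-object capacity, and the use of mmap are all arbitrary choices for the example):

    #include <stddef.h>
    #include <sys/mman.h>

    #define ZERO_SLOTS 4096                   /* arbitrary capacity for the toy */

    static unsigned char zero_bitmap[ZERO_SLOTS / 8];  /* the only real memory used */
    static unsigned char *zero_base;          /* start of reserved, unmapped range */

    /* Hand out a unique, non-null pointer that owns no accessible memory. */
    void *zero_alloc(void)
    {
        if (!zero_base) {
            /* Reserve address space only; PROT_NONE commits no pages. */
            void *p = mmap(NULL, ZERO_SLOTS, PROT_NONE,
                           MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
            if (p == MAP_FAILED)
                return NULL;
            zero_base = p;
        }
        for (size_t i = 0; i < ZERO_SLOTS; i++) {
            if (!(zero_bitmap[i / 8] & (1u << (i % 8)))) {
                zero_bitmap[i / 8] |= (unsigned char)(1u << (i % 8));
                return zero_base + i;         /* unique address, never dereferenced */
            }
        }
        return NULL;                          /* out of zero-sized objects */
    }

    void zero_free(void *p)                   /* no validation in this toy */
    {
        size_t i = (size_t)((unsigned char *)p - zero_base);
        zero_bitmap[i / 8] &= (unsigned char)~(1u << (i % 8));
    }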
(I can't fathom why we would need malloc(0) to be a source of unique values, or why someone would implement that as efficiently as possible, when it's implementation-defined behavior that portable programs cannot rely on. Why wouldn't you use some library module for unique, space-efficient pointers?)
I would never rely on malloc(0) to obtain unique pointers at all, let alone pray that it is efficient for that purpose.
I'd be happy with a malloc(0) which returns, for instance, ((void *) -1) which can be hidden behind some #define symbol.
saner_realloc isn't realloc; it is our API, and we can make it do this:
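Roughly like this, say (just a sketch of the behavior described below; SANE_REALLOC_EMPTY is assumed to be a non-null sentinel such as the ((void *) -1) mentioned earlier):

    #include <stdlib.h>

    #define SANE_REALLOC_EMPTY ((void *) -1)  /* sentinel for "object of size 0" */

    void *saner_realloc(void *old, size_t size)
    {
        if (old == SANE_REALLOC_EMPTY)
            old = NULL;                       /* nothing was actually allocated */
        if (size == 0) {
            free(old);                        /* shrinking to zero really frees */
            return SANE_REALLOC_EMPTY;
        }
        return realloc(old, size);
    }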
Now, a null return always means failure. The shrink-to-zero and allocate-zero cases give us SANE_REALLOC_EMPTY, which compares unequal to null, and we accept that value as input when growing or freeing.
The caller can also pass in something returned by malloc(0) that is not equal to null or SANE_REALLOC_EMPTY.
I forgot to mention my own opinion. I think that malloc(0) ought to return a 0-sized object, and likewise realloc(ptr, 0) ought to resize the object to 0. malloc and realloc always allocating feels more consistent to me. For a practical example, I have some code that reads an entire FILE stream into memory. It grows the buffer with realloc, doubling the size each time. After finishing, I'd like to resize the buffer down to the total bytes read. If it reads 0 bytes, I'd like it to resize the buffer to 0.
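A rough sketch of the pattern I mean (slurp_stream and the starting capacity are made up for the example; when the stream is empty, the final shrink is exactly the realloc(buf, 0) case under discussion):

    #include <stdio.h>
    #include <stdlib.h>

    /* Read all of fp into a growing buffer, doubling the capacity,
       then shrink to the exact number of bytes read. */
    unsigned char *slurp_stream(FILE *fp, size_t *out_len)
    {
        size_t cap = 4096, len = 0;
        unsigned char *buf = malloc(cap);
        if (!buf)
            return NULL;

        size_t n;
        while ((n = fread(buf + len, 1, cap - len, fp)) > 0) {
            len += n;
            if (len == cap) {                 /* no overflow check on cap in this toy */
                unsigned char *tmp = realloc(buf, cap * 2);
                if (!tmp) { free(buf); return NULL; }
                buf = tmp;
                cap *= 2;
            }
        }

        /* Final shrink; whether this frees, keeps, or returns a 0-sized
           object when len == 0 is the implementation-defined mess above. */
        unsigned char *shrunk = realloc(buf, len);
        if (shrunk || len == 0)
            buf = shrunk;
        *out_len = len;
        return buf;
    }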
I think my sane_realloc, which never frees, has much simpler behavior. As much as I hate the needless waste of 1 byte, if my code allocates thousands of 0-sized objects, I'd rather fix that before adding complexity to my sane_realloc.
Since yours solves the 1-byte problem, it still interests me. We can simplify your code slightly.
A zero-sized object is immutable, since it has no bits to mutate. The pointer cannot be dereferenced. It may be compared to other pointers. So the question is: do you want to obtain unique zero-sized objects, or are you okay with there being one representative instance of all of them? If you're okay with one, you can just call SANE_REALLOC_EMPTY that object. I called it EMPTY on purpose; it represents an empty object with no bits.
If you want unique zero-sized objects, you can always simulate them with objects of size 1. The allocator API doesn't have to make that choice internally for you.
In a Lisp implementation, I could use unique, zero sized objects to implement symbols. The gensym function would create a new one.
Zero-sized distinguishable objects can have properties, like the symbol name, attached to them via hashes. At the API level, you cannot tell how a property (e.g. symbol-name) is attached to an object. It's not an ideal representation for symbols when all symbols have a certain property, like a name.
If you have objects with certain properties, but the properties are rare (only a few objects have a value of the property that requires storage space, and for the rest it is some "undefined" value), you can achieve a sparse representation by making the objects as small as possible and attaching the properties externally.
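A toy illustration of what I mean, with a size-1 allocation standing in for a zero-sized object and the name attached externally (all names here are invented; the table is a fixed-size, linear-probing map keyed by the object's address):

    #include <stdint.h>
    #include <stdlib.h>

    typedef void *symbol;                     /* a symbol is just a unique address */

    symbol gensym(void)
    {
        return malloc(1);                     /* portable stand-in for a 0-sized object */
    }

    /* Sparse external property: only symbols that actually have a name
       occupy a slot; anonymous gensyms cost nothing here. */
    #define NAME_SLOTS 1024                   /* arbitrary; assumed never full */

    static struct { symbol key; const char *name; } name_table[NAME_SLOTS];

    static size_t slot_for(symbol s)
    {
        size_t h = (size_t)(uintptr_t)s * 2654435761u % NAME_SLOTS;
        while (name_table[h].key && name_table[h].key != s)
            h = (h + 1) % NAME_SLOTS;         /* linear probing */
        return h;
    }

    void set_symbol_name(symbol s, const char *name)
    {
        size_t i = slot_for(s);
        name_table[i].key = s;
        name_table[i].name = name;
    }

    const char *symbol_name(symbol s)         /* NULL for anonymous symbols */
    {
        size_t i = slot_for(s);
        return name_table[i].key == s ? name_table[i].name : NULL;
    }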
Nitpicking, the test itself should avoid overflowing. Instead, test "a <= UINT_MAX - b" to prove no overflow occurs.
For signed integers, we need to prove "a+b <= INT_MAX && a+b >= INT_MIN" without overflowing in the test itself. The algorithm: if "b >= 0", then "INT_MAX-b" cannot overflow and "a+b >= INT_MIN" holds automatically, so test "a <= INT_MAX-b". Otherwise "b < 0", so "INT_MIN-b" cannot overflow and "a+b <= INT_MAX" holds automatically, so test "a >= INT_MIN-b".
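In code, the two checks might look like this (the function names are made up):

    #include <limits.h>
    #include <stdbool.h>

    /* Unsigned: a + b wraps iff a > UINT_MAX - b; the test itself cannot overflow. */
    bool uadd_would_wrap(unsigned a, unsigned b)
    {
        return a > UINT_MAX - b;
    }

    /* Signed: split on the sign of b so INT_MAX - b and INT_MIN - b
       are themselves guaranteed not to overflow. */
    bool sadd_would_overflow(int a, int b)
    {
        if (b >= 0)
            return a > INT_MAX - b;           /* only the upper bound can be violated */
        else
            return a < INT_MIN - b;           /* only the lower bound can be violated */
    }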
I feel like this all motivates a very expressive type system for integers: add different types for wraparound, saturation, trapping, and undefined overflow. Proving that the undefined-overflow integers never overflow would probably require theorem proving in the language.
>knowing that (A + (B + C)) can't overflow doesn't mean that ((A + B) + C) can't overflow
Here, the associative property works for unsigned integers, but those don't get the optimizations that come from assuming overflow can't happen, which feels very disappointing. Again, adding more types would make this an option.
I am not sure more types are the solution. I like types, but I do not like complicated things.
The practical solution is simply -fsanitize=signed-integer-overflow. If you need complete assurance that there cannot be a trap at run-time: in the rare cases where I wanted this, just looking at the optimized code and making sure the traps had been optimized out was surprisingly effective.
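For example (overflow.c is a made-up file name; the flag is a UBSan option in both gcc and clang):

    /* cc -O2 -fsanitize=signed-integer-overflow overflow.c
       At run time the sanitizer reports the overflow instead of letting
       the program silently hit undefined behavior. */
    #include <limits.h>
    #include <stdio.h>

    int main(void)
    {
        int a = INT_MAX, b = 1;
        printf("%d\n", a + b);                /* diagnosed: signed integer overflow */
        return 0;
    }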
I really want to statically link OpenGL and Vulkan for exactly this purpose, but neither uses a wire protocol (unlike X11 or Wayland). The whole "loading library" scheme feels like hazing for any beginner graphics programmer, on top of the already complex graphics APIs.
I know that, at least for OpenGL, not all graphics cards/drivers implement the entire feature set, so there was a reasonable justification for the dynamic linking and for bringing in functions one by one.
I think that a wire protocol could support that with a query response for supported versions and functions. Choosing dynamic linking avoids the overhead of serialization, but removes the option of static linking.
>plus weirdo stuff like the ability to hand out references to local data in functions that have already returned (which remains valid as long as you don't call the function again, which I think should be possible to enforce via a borrow checker).
The C programming language supports this with the static keyword. Further calls may overwrite the pointed-to data. I have played with allocating fixed-size data in static locals, but I always found that allocating in the caller does the same job better. For example, compare strerror() with strerror_s(). (A sensible strerror implementation should return a pointer to static immutable data, but the Standard doesn't specify it.)
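A small illustration of the static-local pattern and its main hazard (the function name is invented):

    #include <stdio.h>

    /* Returns a pointer to a static buffer; each call overwrites the
       previous result, which is the strerror()-style hazard. */
    const char *month_label(int m)
    {
        static char buf[32];
        snprintf(buf, sizeof buf, "month #%d", m);
        return buf;
    }

    int main(void)
    {
        const char *a = month_label(1);
        const char *b = month_label(2);
        printf("%s / %s\n", a, b);   /* both point at the same storage: "month #2 / month #2" */
        return 0;
    }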
A procedural language can achieve a statically bounded call stack by restricting self recursion, mutual recursion, and function pointers. I struggle to imagine how a language without a call stack could perform differently or better.
I just read through the study. I corroborate your summary.
> The tryptophan found in hydrolyzed whey binds to a different receptor than normal dietary tryptophan, thereby allowing your body to reuptake it and produce serotonin as usual. (This is all in the study.)
Took me a bit to find the quote.
> If tryptophan uptake was abrogated by poly(I:C) treatment, tryptophan supplementation should elevate serotonin levels even during viral inflammation. To corroborate this, we used a diet containing a glycine-tryptophan dipeptide, which bypasses the need for B0AT1 and enables tryptophan uptake via dipeptide transporters [33]. This diet compensated for impaired uptake in poly(I:C)-treated mice and led to an increase in both tryptophan and serotonin levels in systemic circulation
Now I need to ensure that whey protein contains some glycyl-L-tryptophan. The study used a lab rat diet "TD.210749" (unsearchable, maybe a custom diet) from Envigo/Inotiv. The citation used pure glycyl-L-tryptophan "G0144" from TCI Europe (~$100/g haha nope).
I can't find anything on the glycyl-L-tryptophan content of hydrolyzed whey (maybe you can help?), but I found a study on other tryptophan dipeptides, alanyl-tryptophan and tryptophanyl-tryptophan. The ACE receptor inhibition seems relevant, too. The PepT1 protein appears to transport the dipeptides.
"Selective release of ACE-inhibiting tryptophan-containing dipeptides from food proteins by enzymatic hydrolysis" Diana Lunow et al. - https://doi.org/10.1007/s00217-013-2014-x
I'll try this out for my early waking insomnia, mildly reduced energy, and digestive problems (started after I got Covid, almost exactly 2 years ago). I need to find one without artificial sweeteners (hate the taste). I'll report back in exactly 2 weeks (sets calendar).
Yeah, thinking about this a bit more, shouldn't stomach proteases break the protein sources into dipeptides? I found a few comments online arguing that hydrolyzing doesn't matter and may cost more. If that's true, then we should only care about the protein types of whey, especially the alpha-lactalbumin content (higher tryptophan). I think bromelain would only help people with reduced protease production, and it likely won't break apart hydrolyzed protein any further. Of course, theory requires a test, but I won't add bromelain to my hydrolyzed whey test.
Bromelain can't harm though, it is cheap, hydrolysis is certainly not perfect, and it has other positive effects on the body, such as anti-inflammatory action, pain reduction, etc.
I suggest practicing on perfboards, both through-hole (TH) and SMT. Chipquik has boards for about $2 each. Compare different solders (SAC305, K100LD, SN100C), temperatures, pad sizes, fluxes, etc.
Saying 1/0=∞ means creating a new number system with ∞ as a number. Now you have to figure out all operations with ∞, like -1*∞, 0*∞, ∞*∞, ∞/∞, or ∞-∞.
Making wrong definitions creates contradictions. With 1*x=x, ∞/∞=1, the associative property x*(y/z)=(x*y)/z, and ∞*∞=∞:

    1 = ∞/∞ = (∞*∞)/∞ = ∞*(∞/∞) = ∞*1

and, with the usual commutativity, ∞*1 = 1*∞ = ∞, so 1 = ∞.
But why would we go from what obviously should be a very large, boundless number and just replace it with 0? Our few-comment discussion is, in a nutshell, why it's undefined.
The main issue lies in weakening the field axioms to accommodate any strange new numbers. In contrast, defining division by 0 to be 0 adds no new numbers, so the field axioms don't change (x/x=1 still requires x≠0). I hope you see the value in extending field theory instead of changing field theory.
If we add new numbers like ∞, -∞, and NaN (as the neighbor comment suggests with IEEE754-like arithmetic), now x/x=1 requires x≠0, x≠∞, x≠-∞, and x≠NaN. Adding more conditions changes the multiplicative inverse field axiom, and thus doesn't extend field theory. Also, now x*0=0 requires x≠∞, x≠-∞, and x≠NaN. What a mess.
The problem is simply that the definition is a lie.
I’m not suggesting that we add numbers or change the definition from undefined. I think undefined is a more accurate description of x/0, because x/0 is clearly far greater than 0.
that's a largely solved problem. ieee754 defines consistent rules for dealing with infinities. even if you don't use the floating-point parts and make a new integer format, it almost certainly would make sense to lift the ieee754 rules as-is.
An IEEE754-like arithmetic (transrational arithmetic, or transreal arithmetic) creates new problems due to adding new values. 0*x=0 now requires x≠∞, x≠-∞, and x≠NaN. (x/x)=1 now requires x≠0, x≠∞, x≠-∞, and x≠NaN, so this system doesn't satisfy the field axioms. NaN lacks ordering, so we lose a total order relation.
However, you get cool new results, like x/x=1+(0/x). Definitely some upsides.
People "axiom" their way out of 1+1=2 in this way: by changing the axioms, they change the topic, so they change the conclusion. I observe this pattern in disagreements very often.