In fact there is: take A to be the space of differential forms on some manifold, and M the wedge product. Given two differential forms eta and mu, there is no way to recover eta and mu from M(eta, mu). Even simpler: take two arbitrary numbers and multiply them; unless they were prime, you have no way of telling from the product what the two original numbers were. In other words, multiplication destroys information.
In the case of numbers this is fine, because there is in fact a way of copying them beforehand: an operation Delta : A -> A x A, which sends a number a to (a, a). So if, given (a, b), you want both their product and the numbers themselves, you need a map A x A -> A x A x A, given for example by
(id x M x id) . (id x id x Delta) . (Delta x id)
In the case of differential forms, there is no such (natural) map Delta.
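To make the copy-then-multiply composite concrete, here is a minimal Python sketch; the names delta, M, and product_and_keep are mine, not standard, and ordinary numbers stand in for the object A:

```python
def delta(a):
    # the copy map Delta : A -> A x A
    return (a, a)

def M(a, b):
    # multiplication M : A x A -> A
    return a * b

def product_and_keep(a, b):
    # the composite (id x M x id) . (id x id x Delta) . (Delta x id)
    a1, a2 = delta(a)            # Delta x id : (a, b) -> (a, a, b)
    b1, b2 = delta(b)            # id x id x Delta : (a, a, b) -> (a, a, b, b)
    return (a1, M(a2, b1), b2)   # id x M x id : (a, a, b, b) -> (a, a*b, b)

print(product_and_keep(3, 5))  # (3, 15, 5)
```

The point is that the copies are made by Delta *before* M destroys the information; for differential forms no such Delta is available, so this trick is unavailable.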
No, applying a map does not mutate or consume its arguments, and therefore no information is destroyed. Function application in mathematics is referentially transparent.
To make this point perfectly clear: Whenever you encounter an expression such as "f(x)", you may freely re-use the expression "x" in a different place. This is a matter independent of the category you choose to work with -- for example, the expression "psi \otimes psi" makes sense in the monoidal category of Hilbert spaces.
This largely depends on your viewpoint: the map M : A * A -> A was meant to be a morphism in an arbitrary monoidal category. An element like x would be interpreted as a morphism x : I -> A (in general the objects A of a monoidal category, or of any category for that matter, have no notion of elements). The expression f(x) is interpreted as the composition f . x, so no, you are not free to use x a second time once you have formed that composition. This is swept under the rug in most cases because monoidal categories like Set are cartesian and allow you to freely copy morphisms like x : I -> A via the map Delta : A -> A * A. Explicitly, you have Delta(x) = Delta . x = x * x. In general it might not even be possible to determine whether two morphisms x : I -> A and y : I -> A are the same.
In some cases you might be able to assume from the start that you have a certain number of equal morphisms x : I -> A "in reserve"; for example, in linear logic you have the operator ! ("of course"), which gives you an arbitrary number of identical morphisms to play with.
In any case this distinction is quite subtle, and I understand why you might assume that I'm simply misunderstanding things. In particular I should emphasize that almost no programming languages work this way, although with some effort you could recast typical cryptographic / numerical code in this language.
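To illustrate the "elements as morphisms" reading above, here is a toy Python model using ordinary (cartesian) functions, where copying *is* allowed; all names are mine and purely illustrative:

```python
# A toy model in the cartesian category Set: the unit object I is a
# one-point set, and an element of A is a morphism x : I -> A.
POINT = ()

def element(a):
    # a generalized element x : I -> A, as a function from the one-point set
    return lambda _: a

def compose(f, x):
    # f(x) read as the composition f . x : I -> B
    return lambda p: f(x(p))

def delta(a):
    # the copy map Delta : A -> A x A, available because Set is cartesian
    return (a, a)

x = element(5)
square = lambda a: a * a
assert compose(square, x)(POINT) == 25      # f . x
assert compose(delta, x)(POINT) == (5, 5)   # Delta . x = x * x
```

In a non-cartesian monoidal category there is no delta to compose with, which is exactly why x cannot be reused after it has been consumed by a composition.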
It is also really easy to see, for example in the case of addition, that information is indeed destroyed. The map
(a,b) -> (a+b,a-b)
has an inverse as long as the characteristic isn't 2, so by itself it loses nothing; it is the subsequent projection to the first factor that destroys information. Categories in which that kind of deletion is possible have maps d_A : A -> I.
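For concreteness, a small Python sketch of this factorization over the rationals (function names f, f_inv are mine):

```python
from fractions import Fraction

def f(a, b):
    # invertible as long as 2 is invertible (characteristic != 2)
    return (a + b, a - b)

def f_inv(s, d):
    # inverse: a = (s + d)/2, b = (s - d)/2
    return ((s + d) / 2, (s - d) / 2)

a, b = Fraction(3), Fraction(7)
s, d = f(a, b)
assert f_inv(s, d) == (a, b)   # f by itself loses nothing

# The projection to the first factor is where information dies:
# (3, 7) and (7, 3) both project to the same sum a + b.
assert f(a, b)[0] == f(b, a)[0]
```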
What tel is pointing out above is that Delta and d together make A into a comonoid.
While psi \otimes psi certainly makes sense, the map psi -> psi \otimes psi is not linear and therefore not a morphism (physicists call this the no-cloning theorem for some reason).
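A quick way to see the failure of linearity, in a toy Python sketch with plain lists standing in for vectors (names tensor, vadd, clone are mine):

```python
def tensor(u, v):
    # tensor (Kronecker) product of two vectors
    return [x * y for x in u for y in v]

def vadd(u, v):
    # vector addition
    return [x + y for x, y in zip(u, v)]

def clone(psi):
    # the candidate "cloning" map psi -> psi (x) psi
    return tensor(psi, psi)

psi = [1.0, 0.0]
phi = [0.0, 1.0]

lhs = clone(vadd(psi, phi))          # clone applied to a superposition
rhs = vadd(clone(psi), clone(phi))   # what linearity would demand
# lhs picks up cross terms psi(x)phi + phi(x)psi that rhs lacks
assert lhs != rhs
```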
I understand where you are coming from, but you are conflating different meta-levels (external vs. internal language/logic): You are absolutely free to use any formal expression "x" a second time, and - crucially - it will have the same mathematical meaning if you choose to do so, as can be seen by pondering the tautology "P(x) <-> P(x)". This is not a property of the category that you chose to work with, but rather of the formal language that we use when engaging in mathematics, as you demonstrated yourself when you granted that "psi \otimes psi" made sense.
The non-existence of a morphism "psi -> psi \otimes psi" and the notion of "destroyed information" that you are discussing in the rest of your post are independent of all of this. If you wish, I can elaborate on the no-cloning theorem.
Well ok, then this was a simple misunderstanding, or miscommunication. My knowledge is largely based on reading http://ncatlab.org/nlab/show/internal+logic and several other pages on that wiki, plus graduate training in math and physics. Based on your last reply I'm confident you have a similar background. Logic and category theory are not my specialty, and I probably haven't expressed myself as clearly as I would in professional communication.
To check if I understand the argument: I'm guessing you're talking about the internal logic or language of some category, in contrast to the external one used to talk about that category.
Right, I think that was the point of misunderstanding between me and asuidyasiud. If you think of x as an assumption in linear logic, for example, then it makes a difference whether you assume x or x^2 (i.e. x twice), and so on. Since in programming languages such x : I -> A are values of type A, this ties in with the duplication and destruction I was going on about above. I tried to express the sentiment that in many ways it is actually natural to think of those kinds of logic as more fundamental, because they make the duplication of values (unnatural for things like resources, spaces of states, etc.) explicit or even impossible.
Yes, essentially not injective, which is not the same as not having an inverse. The notion can be extended, though; for example, functors that aren't faithful are certainly destructive.