I think this kind of thing is great, but it runs into some of the same considerations that make probabilistic graphical models hard.
If I have two variables defined as
x = [0,1];
y = [0,1];
z = x - y;
Then we see that z should be the interval [-1,1]. But for a similar problem,
a = [0,1];
b = a - a;
x = a;
y = a;
z = x - y;
b should clearly be the interval [0,0], and since x and y are both a, so should z. Yet z is computed exactly as in the first example, as the difference of two [0,1] intervals, and the naive rules again give [-1,1]. To get [0,0] we would have to keep around something like the joint distribution of x and y, and that makes the arithmetic immensely harder than simple per-operation rules.
Not to say it's not worth it, just that it's hard.
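To make the dependency problem concrete, here's a minimal sketch (using a hypothetical `Interval` class) of naive interval subtraction, which operates only on endpoints and so cannot tell `x - y` apart from `a - a`:

```python
# Minimal sketch of naive interval arithmetic (hypothetical Interval class).
# Each operation looks only at the two operands' endpoints, so it has no
# way of knowing that x and y are the *same* underlying variable.

class Interval:
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def __sub__(self, other):
        # [a,b] - [c,d] = [a-d, b-c]: correct for independent operands,
        # overly wide when the operands are correlated.
        return Interval(self.lo - other.hi, self.hi - other.lo)

    def __repr__(self):
        return f"[{self.lo}, {self.hi}]"

# First example: genuinely independent variables.
x = Interval(0, 1)
y = Interval(0, 1)
print(x - y)        # [-1, 1], as expected

# Second example: x and y are both the same variable a.
a = Interval(0, 1)
x = a
y = a
print(x - y)        # still [-1, 1], even though a - a is exactly [0, 0]
```

The subtraction rule never sees variable identity, only endpoint values, which is exactly why some extra bookkeeping (a joint representation, or symbolic tracking) is needed to recover [0,0].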
>>> a=10.0**10
>>> a
10000000000.0
>>> b = 0.1
>>> b+a
10000000000.1
>>> c=b+a
>>> d=c-a
>>> d
0.10000038146972656
>>> d == b
False
>>> a + b - a == b
False
>>>
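For what it's worth, the error in the transcript above comes from `b + a` losing the low bits of `b` at the first rounding; the standard library can recover the exact result here, e.g. with `math.fsum` (exact summation, rounded once at the end) or `fractions.Fraction`:

```python
import math
from fractions import Fraction

a = 10.0 ** 10
b = 0.1

# Plain float arithmetic loses the low bits of b when it is added to a.
print((b + a) - a)                 # 0.10000038146972656, not 0.1

# math.fsum tracks the rounding error of each partial sum and rounds
# only once at the end, so the cancellation is exact.
print(math.fsum([a, b, -a]) == b)  # True

# Exact rational arithmetic sidesteps rounding entirely (Fraction(0.1)
# is the exact binary value stored in the float 0.1, so both sides match).
print(Fraction(a) + Fraction(b) - Fraction(a) == Fraction(b))  # True
```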
You could build a variable-tracking system on top of the numbers, or you could just use a symbolic computation package and simplify the final expression so that each variable appears only once.
No. You cannot do that, because adding/subtracting/multiplying/... two independent uniformly distributed variables does not produce a uniformly distributed variable (see for example http://en.m.wikipedia.org/wiki/Irwin–Hall_distribution).
That's true for other distributions, too. You could use the obvious choice of a normal distribution if all you did was add and subtract, but that's (about?) as good as it gets.
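One middle ground between full joint tracking and per-operation interval rules is affine arithmetic, which represents each quantity as a linear combination of named noise symbols so that linear correlations cancel exactly. A minimal sketch, using a hypothetical `AffineForm` class and supporting only subtraction:

```python
# Minimal affine-arithmetic sketch (hypothetical AffineForm class).
# A value is center + sum(coeff_i * eps_i), where each eps_i is an
# unknown in [-1, 1]. Sharing an eps_i is what records correlation.
import itertools

_fresh = itertools.count()

class AffineForm:
    def __init__(self, center, terms=None):
        self.center = center
        self.terms = dict(terms or {})   # noise-symbol id -> coefficient

    @classmethod
    def interval(cls, lo, hi):
        # Encode [lo, hi] as midpoint + radius * eps, with a fresh eps.
        return cls((lo + hi) / 2, {next(_fresh): (hi - lo) / 2})

    def __sub__(self, other):
        terms = dict(self.terms)
        for k, c in other.terms.items():
            terms[k] = terms.get(k, 0) - c
            if terms[k] == 0:
                del terms[k]             # shared noise symbols cancel
        return AffineForm(self.center - other.center, terms)

    def range(self):
        rad = sum(abs(c) for c in self.terms.values())
        return (self.center - rad, self.center + rad)

# Independent variables: two distinct noise symbols, range [-1, 1].
x = AffineForm.interval(0, 1)
y = AffineForm.interval(0, 1)
print((x - y).range())   # (-1.0, 1.0)

# The same variable twice: the shared noise symbol cancels, range [0, 0].
a = AffineForm.interval(0, 1)
print((a - a).range())   # (0.0, 0.0)
```

This handles linear dependencies for free, but nonlinear operations (multiplication, division) still require approximation, so it only pushes the hard part back, not away.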