This looks like a fairly low-powered Monte Carlo system. You just store samples, and inference amounts to resampling from that stored sample set? That's essentially bootstrapping, which has been explored far more extensively by random forests and related methods.
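To make the comparison concrete, a plain bootstrap over a stored set of samples looks roughly like this (a minimal sketch; the data and names are hypothetical, not from the proposal):

    import random
    import statistics

    # Hypothetical values pulled from a stored sample table.
    samples = [12.1, 9.8, 11.4, 10.2, 13.0, 9.5, 10.9, 11.7]

    def bootstrap_means(data, n_resamples=1000):
        """Resample the stored samples with replacement and collect
        the statistic of interest for each resample."""
        means = []
        for _ in range(n_resamples):
            resample = [random.choice(data) for _ in data]
            means.append(statistics.mean(resample))
        return means

    means = bootstrap_means(samples)
    # A rough point estimate and uncertainty obtained purely by
    # resampling the sample set -- no model beyond the samples themselves.
    print(statistics.mean(means), statistics.stdev(means))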
You've shoe-horned logic that usually lives in the linear-algebra world into database tables. There's some initiative there, but this has also been explored heavily in academia under the topic of Probabilistic Databases. BayesDB is a full implementation of what you've just described, with a much deeper inference engine that uses joint distributions rather than only distributions that exactly match the sample.
Not necessarily. One can "summarize" the samples, as shown, to get approximations using much less data. And various subsets can be switched on and off as needed (or their weights turned down).
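Roughly what I mean by summarizing and down-weighting, as a hypothetical sketch (the column names and weighting rule are made up for illustration, not the actual implementation):

    from collections import Counter
    import random

    # Hypothetical raw samples (category, value) pulled from the sample table.
    raw = [("region_a", 1), ("region_a", 1), ("region_a", 2),
           ("region_b", 1), ("region_b", 3), ("region_b", 3)]

    # "Summarize": collapse duplicate rows into (row, weight) pairs,
    # which takes much less storage than the raw sample set.
    summary = Counter(raw)

    # Subsets can be switched off or down-weighted by editing the weights,
    # e.g. cut region_b's influence in half.
    weights = {row: (w * 0.5 if row[0] == "region_b" else w)
               for row, w in summary.items()}

    # Inference then draws from the weighted summary instead of the raw rows.
    rows = list(weights.keys())
    draw = random.choices(rows, weights=[weights[r] for r in rows], k=10)
    print(draw)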
Re: "BayesDB is a full implementation of what you've just described"
Perhaps, but with tools similar to what office workers already use, staff without PhDs can study and adjust results based on direct observation and specialty sub-division. It's more about an approachable toolset and division of labor than technical accuracy. It's about "de-esoteric-izing" AI so that more people can assist in its tuning.
BayesDB provides that toolset; what you've proposed is the approach where you need knowledge of the underlying process. You've shown some initiative here, but I'd really recommend studying what's already out there and doing a genuine comparison against your solution to see where it falls short.