"I'd suggest not actually trying to simulate physical systems on a computer" I cannot disagree enough with this statement. I am double major in Physics and CS.
There are two kinds of physics, Math and Physics. Most of what you learn in uni physics is Math. But the Math is only one (outdated) description of the nature of physical law.
Leonard Susskind himself has said on multiple occasions that the equations of physics are at most high-school math; the solutions, however, are sometimes PhD+ level.
To me Physics is the study of the relationship between state. Mostly this state is finite and the laws governing its evolution are simple and local. A computer is the best tool to discover the solutions to this evolving state.
>There are two kinds of physics, Math and Physics. Most of what you learn in uni physics is Math. But the Math is only one (outdated) description of the nature of physical law.
Lord knows what you mean by this. By "math physics" do you mean theoretical, and by "physics physics" the experimental part? Even then, this doesn't make any sense. In particular, what do you mean by "Math is only one outdated description of the nature of physical law"? I'm very curious, maybe I'm missing something.
>Leonard Susskind himself has said on multiple occasions that the equations of physics are at most high-school math; the solutions, however, are sometimes PhD+ level.
So? Again, I don't see what you're trying to imply with this. Seems a perfectly correct observation.
>To me Physics is the study of the relationship between state. Mostly this state is finite and the laws governing its evolution are simple and local. A computer is the best tool to discover the solutions to this evolving state.
This, in my opinion, is, for lack of a better word, "hipsterism". Physics is the science that studies the universe at the most fundamental level. That's it. "State"? "This state is finite"? What do you mean by "this state is finite"? I struggle to see what this is trying to describe (there are infinite degrees of freedom in many theories throughout physics).
Faraday did a lot of "self-studying of Physics". He was a bookbinder who didn't know any Math (he had to drop out of school at 13). The reason he got into it was not because he read a Physics or Math textbook or worked out math problems, but because someone liked the way he bound books and gave him a ticket to a Humphry Davy talk at the Royal Institution. That's all it took to get into Physics.
Without him dreaming up his experiments and tinkering away, god knows when Maxwell's equations would have been discovered.
Why did we have to wait for a Faraday to show up to do his simple experiments? How come all the natural philosophers and mathematicians of the age, around the whole planet, had not thought to do them?
If Faraday had asked that bunch "how do I self-study physics?" the answers wouldn't have been too different from what we see today.
Today's version of Faraday could very well be some kid hacking away at some computer game, dreaming up scenarios, exercising imagination and cooking up experiments in the same way Faraday did.
Let me see if I can mount a defense that doesn't merely look like an obvious backpedal.
If you want to do a standard study in physics you are probably going to go through a significant amount of mathematics.
This is because the laws of the "universe at the most fundamental level" are expressed as math equations.
If OP wants to go that route then there are tons of resources available to them.
But since OP already has a CS degree, I think a possible entry point is exploring these laws through computers and in some way deriving them from simulation (here is one such simulation: https://www.falstad.com/gas/).
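To make that concrete, here is a minimal sketch of my own (in Python, and only loosely in the spirit of the falstad applet): particles that know nothing except "move and bounce elastically off walls", from which the 2-D ideal-gas law emerges:

    import random

    # Toy 2-D ideal gas: non-interacting unit-mass particles bounce
    # elastically off the walls of a unit box. The only "law" coded in
    # is local reflection, yet the ideal-gas relation P*V = N*<KE>
    # (the 2-D analogue of PV = NkT) emerges from the wall impulses.
    N, STEPS, DT = 500, 5000, 0.001

    pos = [[random.random(), random.random()] for _ in range(N)]
    vel = [[random.uniform(-1, 1), random.uniform(-1, 1)] for _ in range(N)]

    impulse = 0.0                          # |momentum| absorbed by the walls
    for _ in range(STEPS):
        for p, v in zip(pos, vel):
            for k in range(2):
                p[k] += v[k] * DT
                if p[k] < 0.0 or p[k] > 1.0:    # elastic bounce off a wall
                    p[k] = min(max(p[k], 0.0), 1.0)
                    impulse += 2.0 * abs(v[k])  # momentum transfer = 2m|v|
                    v[k] = -v[k]

    perimeter = 4.0                        # the 2-D "wall area" of a unit box
    pressure = impulse / (STEPS * DT * perimeter)
    mean_ke = sum(vx * vx + vy * vy for vx, vy in vel) / (2 * N)
    print("P * V    =", round(pressure, 2))     # V = 1 for the unit box
    print("N * <KE> =", round(N * mean_ke, 2))  # should roughly agree

Nobody typed PV = NkT in anywhere; the macroscopic law falls out of the simulated micro-state, which is exactly the kind of derivation-by-simulation I have in mind.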
Let's leave the more outlandish claims alone for now. Perhaps I went too far.
> Most of what you learn in uni physics is Math. But the Math is only one (outdated) description of the nature of physical law.
Is this about the lack of additional parameters in the equations? All of my physics problems (I only took a couple of courses) were like "what if this spherical horse rolled down a hill at a 34-degree decline" but failed to include friction or atmosphere. Just gravity. It always felt fake to me. Perhaps this is what you were alluding to with the "outdated" description?
By "outdated" what I mean is that if the greats of the ~1800s had had modern computers, they would probably not use mathematics in its current form to study physics. There has already been one transition of this kind: Newton used Geometry to study physics, whereas today we use Math that has its roots more in symbolic logic.
It seems likely that anything that can be expressed in "any programming language" is isomorphic to something that exists in math and can be expressed in that language (automata theory, type theory, category theory, pick your poison).
I am implying that Mathematical semantics are broken/incomplete.
Mathematics uses denotational semantics. It implies pass-by-value and pass-by-reference are equivalent. They are in theory; they aren't in practice. That is why Haskell's lazy evaluation leaks time.
Engineers/physicists intuitively rely at least on operational semantics. It's a higher-order logic, and since Linear Logic, time is localized.
So, as a brief summary: laziness leaks time. Whether that's a good or a bad thing depends on what you want/need. It may be a bad thing if you are doing cryptography, because time leaks allow side-channel attacks ( https://en.wikipedia.org/wiki/Timing_attack ).
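To make "laziness leaks time" concrete, here's a small sketch (in Python rather than Haskell, but the mechanism is the same): a thunk that is expensive to force the first time and free afterwards, so wall-clock timing reveals whether someone already forced it:

    import time

    class Lazy:
        """A thunk: computed on first force, cached afterwards."""
        def __init__(self, compute):
            self._compute = compute
            self._forced = False
            self._value = None

        def force(self):
            if not self._forced:
                self._value = self._compute()  # expensive work happens here
                self._forced = True
            return self._value

    def secret_dependent_work():
        time.sleep(0.05)   # stand-in for a costly computation
        return 42

    thunk = Lazy(secret_dependent_work)
    for label in ("first force", "second force"):
        start = time.perf_counter()
        thunk.force()
        print(label, round(time.perf_counter() - start, 4), "s")
    # Denotationally both calls mean the same value, 42. Operationally,
    # an attacker with a clock can tell whether the thunk had already
    # been evaluated -- that is the side channel.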
As with all philosophical squabbles - there are arguments for each side. It's up to you to make up your mind based on your particular use-case.
And so in the context of crypto + Haskell, have a look at these two posts:
I was more wondering how time leaks and laziness relate to denotational vs operational semantics? I couldn't find anything about either in the links (they seem to be general descriptions of Haskell and laziness...?).
Trivially. For any given system 'safety' is an operational concept, not a denotational one.
You can't formalize the notion of 'safety', let alone prove (in the Mathematical sense) that your code has it, without examining its runtime behaviour.
In the words of Donald Knuth: "Beware of bugs in the above code; I have only proved it correct, not tried it."
In one context lazy evaluation may be 'safe'; in another it may be 'dangerous'. The context in which this assertion is made is always about human needs and expectations, not mathematics.
A particular example being that lazy evaluation allows for side-channel attacks in cryptographic systems. That's undesirable, hence operationally 'unsafe'.
>The observer effect in physics can be trivially expressed and evaluated as a mutating getter in any programming language.
>It cannot be expressed in any Mathematical grammar.
What on earth are you trying to say...? There is a whole mathematical treatment of quantum mechanics, measurement effect included (several treatments). More to the point, any algorithm can be formalised in e.g. a Turing machine, or any other equivalent universal model of computation. So there's literally nothing in "computer science" that you cannot express in mathematics.
This reeks of Dunning-Kruger: you don't even know enough about what you're talking about to realise how little sense you're making. I strongly suggest you familiarise yourself with a topic before opining about it. I'd be hard-pressed to believe you ever opened a physics textbook in your life after high school. Why then do you feel the need to talk blindly about it? I never opened a book about aeronautics in my life, therefore I would never presume to lecture anybody about plane-building... That would just be arrogance.
It's like... kinda true that the observer effect can be expressed as a mutating getter... but that isn't useful. The idea that it can't be expressed mathematically is really ignorant.
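For anyone wondering what that even looks like, here's a toy sketch (an analogy only -- real quantum measurement is operators on a Hilbert space, not attribute mutation):

    import random

    class Particle:
        """Toy 'observer effect': reading the position perturbs it."""
        def __init__(self, x):
            self._x = x

        @property
        def x(self):
            observed = self._x
            self._x += random.gauss(0.0, 0.1)  # 'measurement back-action'
            return observed

    p = Particle(0.0)
    print(p.x, p.x)  # two reads of the same attribute disagree

Cute, but it tells you nothing about why measurement disturbs a quantum system, which is why it isn't useful.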
>This reeks of Dunning-Kruger: you don't even know enough about what you're talking about to realise how little sense you're making. I strongly suggest you familiarise yourself with a topic before opining about it. I'd be hard-pressed to believe you ever opened a physics textbook in your life after high school. Why then do you feel the need to talk blindly about it?
You can just address their ideas; this extra stuff isn't helpful.
>>The lesson is for those “experts” who “know” that all reasonable models of computation are equivalent to Turing machines. This is true if one looks just at functions from N to N. However, at higher types, such as the type of our function m, questions of representation become important, and it does matter which model of computation is used.
Mathematics only proves things up to isomorphism. Two things that are theoretically equivalent are not necessarily empirically equivalent.
Or, just the good ol' adage: in theory there is no difference between theory and practice, but in practice there is.
>So there's literally nothing in "computer science" that you cannot express in mathematics.
In addition to the examples in the blog post linked above, you cannot express a mutating getter in Mathematics.
The link you posted has its fallacy in the premise. It defines "function" as "computable function", then complains that many results that follow do not agree with the former definition! Either I'm really missing something or this is some poor attempt at trolling / bait-and-switch. Of course you have problems with that "computable function" definition when dealing with functions from Real -> Real, since many (in fact, almost all) real numbers are not computable!
Re mutating getter:
Monads, for example, are a standard way of formalising state in a pure-function universe. You seem to have some fundamental misconception about what "mathematics" is, since you keep repeating something about purity. Mathematics is merely the rigorous study of formal systems, and your mystical "mutating getter" is one such system.
You have some deeply flawed (or entirely absent) philosophical upbringing. There are no "foundations" to anything - all definitions are arbitrary.
To assert that a premise is fallacious mandates that you have some prior notion of "fallaciousness".
You don't have an objective criterion for asserting whether one definition is better than another because you don't have a notion of "betterness" - it's all conventional.
It's precisely because I am an over-zealous formalist that I see Mathematics for what it is - grammar, syntax and semantics.
Mutating getter? Sounds like a generator function that yields. Or a pseudo-random number generator implemented with a monad (WorldState, Value). Or any tail-recursive non-void function that carries state without loss?
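E.g. the first of those, sketched in Python -- the state lives in the generator frame and each read advances it, no object mutation required:

    import random

    def measurements(x=0.0):
        """Each read yields the current value, then perturbs it."""
        while True:
            yield x
            x += random.gauss(0.0, 0.1)

    g = measurements()
    print(next(g), next(g))  # same behaviour as the 'mutating getter'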
I'm definitely not an expert, but aren't irrational numbers the best examples to show that these types of constructs already exist in math? You can't calculate them in a finite amount of time. You use an analytical formula or series expansion to get the nth term instead.
I just explained three different models to represent side effects without leaving purity. I'm not sure if you are trolling, throwing around random big words or simply have a really misguided impression of how computer science connects with math.
What is special about a mutating getter? It operates on the state of the object S and outputs a new state S' and some value x. Some programming languages allow you to write 'functions' without explicitly passing all input state into the function, but does that imply anything fundamental or interesting about mathematics or computation? Is that anything more than a type of syntactic sugar?
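Spelled out, the desugaring is just this (with a deterministic perturbation here, to keep the function genuinely pure):

    def read_position(state):
        """The 'mutating getter' as a plain function S -> (S', x)."""
        observation = state
        new_state = state + 0.1   # explicit 'back-action' on the state
        return new_state, observation

    s = 0.0
    s, x1 = read_position(s)
    s, x2 = read_position(s)
    print(x1, x2)  # same observable behaviour; every input is explicit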
What is "special" is that the system has a peculiar property: f(x) != f(x).
If we claim to be subscribed to denotational (Mathematical) semantics then the above contradicts the identity axiom.
And it's not any deep and world-changing insight either - it's obvious to anybody who sees that the LHS and RHS are only evaluated at runtime - they don't have any inherent (denotational) meaning, which is why I keep harping on this: programmers use different semantics from mathematicians.
We care about things like interaction and control-flow structures as first-class citizens - those are precisely the things that have no mathematical equivalents: timeouts, exceptions, retry loops.
It's not "syntactic sugar" - it's necessity for reifying control flow. In the words of the (late) Ed Nelson: The dwelling place of meaning is syntax; semantics is the home of illusion.
Of course it isn't. OP seems to have some kind of misconception about what "mathematics" means, since they keep fixating on a vague concept of purity of notation.