If you're into physics I'd recommend solving some problems in whatever language you like, but especially in functional languages (e.g. Lisps, Haskell, etc.), because you get some big "A-ha!" moments about what the math really means. Like when you program an integral from scratch for a mechanics problem and you go "Oh, that's why we use an integral here!" There are also many problems (e.g. n-body orbital dynamics) where brute-force computation is the only way to get to a solution. Finding the path Rosetta/Philae took to comet 67P comes to mind.
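For the curious, "an integral from scratch" really is just a few lines. A minimal trapezoid-rule sketch in Haskell (the names and the spring example are mine, purely illustrative):

-- trapezoid-rule approximation of the integral of f over [a, b] using n slices
integrate :: (Double -> Double) -> Double -> Double -> Int -> Double
integrate f a b n = h * (0.5 * (f a + f b) + sum [f (a + fromIntegral k * h) | k <- [1 .. n - 1]])
  where h = (b - a) / fromIntegral n

-- e.g. work done stretching a spring with F(x) = k x, k = 5 N/m, from 0 to 0.1 m:
-- integrate (\x -> 5 * x) 0 0.1 1000   ~ 0.025 J

Once you've added up the little slices yourself, the "why an integral here" question mostly answers itself.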
There's an older course that's a bit of a sequel to Structure and Interpretation of Computer Programs (SICP) called Structure and Interpretation of Classical Mechanics (SICM). I've never done it but always thought it looked like fun. (If you're into Scheme or Lisp)
Do you know if it has changed enough to deviate from the course or not? I just found out about this and it sounds really interesting, and I'd like to get the newer book if I can.
Edwin (the text editor that comes with MIT-Scheme) is not quite emacs, but when I used it for SICP I kind of liked it...once I figured out the debugger. (edit: that sounded sarcastic, I actually liked Edwin)
I wouldn't say "the Racket people" gave up - rather that one Racket user gave up. If I were to port scmutils to Racket, I'd probably start from the Guile port.
(The name is a little vague, I haven't decided what to call it; "scmutils" doesn't convey much).
Having it in Clojure means it could be integrated with lots of other things; I'm considering graphics, at some point, or some kind of integration with javascript.
I'm a daily reader of HN, but not much of a contributor (sadly): do you think this might be worth a Show HN once it's more mature?
I've been putting off the SICM course, and thus scmutils, but the idea of a Clojure port makes me a little giddy. Two good things I see: the possibility of using it in bigger projects (sorry, Scheme), and not having to context-switch from Emacs/Clojure to Edwin/Scheme.
Either way - really cool, wish I could help but I don't really know how.
I've yet to make my way all the way through it, but it is amazingly precise and thorough. There are a few mistakes, which the reader catches easily if they've been paying attention.
The authors are really onto something with programming as a means to learn other subjects. I know there is also a very recent book from them on differential geometry topics!
Would you happen to have a proof of property (d) of exercise 1.33 lying around? (The one on the Euler-Lagrange operator of a composition of Lagrangian-like functions.)
> The authors are really onto something with programming as a means to learn other subjects.
I think this is because of the precision that programming demands. You have to really nail down every edge case, and think about things down to their essence.
This is one reason why programmers can succeed in jumping the fence to work in their customers' fields, even when it's an unrelated domain. (I've seen many programmers make the swap into fields as diverse as brand management, telecom customer support, and bond trading.)
I think it actually has as much to do with having to turn syntax and abstraction into something actionable that describes the solution.
That is, in programming you learn a small set of abstractions (programming language syntax), and the actual coursework in applying them is in how they combine, how to fit them to problems.
In math, you learn a large number of abstractions (notation), which vary and keep growing across different domains, and which don't tell you what they're doing unless you've learned all the abstractions involved.
The latter you can kinda get if you think about it, but being forced to turn it into something expressed in the former means it gets synthesized into something applicable, rather than staying an abstraction that may or may not be.
Σ_{i=1}^{100} i is harder to understand when you come across it than
int total = 0;
for (int i = 0; i <= 100; i++)
{
total += i;
}
return total;
or
sum([1..100])
You have to convert the first from a symbol that does not describe its action, to an action; the code describes the action. That extra translation step leads to additional cognitive load when learning, at least for me.
Alternatively, you could just note that your example reduces to 100*(100+1)/2. This simplification decreases cognitive load even more substantially! Taking a strictly programmatic view of a problem allows you to be lazy, and I believe that you lose a lot in the process. There is importance (numerically, even!) in thinking about things in a more formalized, mathematical way.
I'm a physics sophomore, and I would be very glad to see more programming, especially FP, integrated into physics courses. During my studies, I've programmed some simulations related to the physics courses I've taken. My main purpose has been to gain a deeper, more practical insight into subjects which would otherwise have remained quite theoretical and distant.
For example, I made a little rollercoaster simulation to demonstrate the power of Lagrangian mechanics and generalized coordinates to myself. On the electrodynamics course I programmed a solver for Poisson's equation using the finite difference method, to see a little more than the few simple geometries we calculated by hand. Those kinds of voluntary activities have greatly motivated me and helped me understand various concepts.
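(For anyone wondering what the finite-difference idea looks like at its most stripped-down: something like this 1D sketch in Haskell, a Jacobi-style relaxation of phi'' = -rho with the boundary values held fixed. My course code was in another language and more involved; this just shows the shape of the method.)

-- one Jacobi sweep for phi'' = -rho on a uniform grid with spacing h,
-- keeping the two boundary values fixed (Dirichlet conditions)
jacobiStep :: Double -> [Double] -> [Double] -> [Double]
jacobiStep h rho phi =
  head phi : zipWith3 upd (init phi) (drop 2 phi) (tail rho) ++ [last phi]
  where upd left right r = 0.5 * (left + right + h * h * r)

-- run n sweeps from an initial guess
solvePoisson :: Int -> Double -> [Double] -> [Double] -> [Double]
solvePoisson n h rho phi0 = iterate (jacobiStep h rho) phi0 !! n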
On some courses we already have some simulation work and numerics in the homework problems. Maybe deeper integration of programming into teaching requires time and, more importantly, a driving force and resources behind it. Then, of course, not everyone would be happy to see that kind of integration – I'm sure some would feel like they're being forced to learn to program. And, as has been seen on our entry-level numerical physics course, learning programming, numerics and physics at the same time is really quite hard.
Given those shortcomings, I still feel this is the way to go for future physics education. A gentle introduction and slowly teaching programming alongside physics would be the key, I think.
Have a look at the Matter & Interactions textbook by Chabay and Sherwood. It's a really cool concept for a first-year physics course, and it incorporates a lot of programming (and even just programming-inspired perspectives). (The authors use the "VPython" programming package for easy creation of 3D simulations.)
Your activities sound useful and worthwhile. A comment on this:
> And, as has been seen on our entry-level numerical physics course, learning programming, numerics and physics at the same time is really quite hard.
Yeah. My feeling is that the numerical side of things is a lock to get short shrift in this type of course. In my own education, there were certainly lots of nods at numerical work, but they really undersold the subtle difficulties of doing numerics well.
I am intrigued by this approach (I have Structure and Interpretation of Classical Mechanics on my lengthy to-read pile), but I do wonder whether expecting ~sophomores to pick up Haskell and the physics at the same time is a bit much.
The aspect of this that interests me most is related to a classic observation that, for most people learning physics, it's hard to separate difficulties with the physical content from difficulties with the mathematical content. The distinction between these is vague, but separations like "set up the differential equation" vs. "solve the differential equation", which this functional style suggests, seem like a good approximation to "physics" vs. "math."
If you're only dealing with pure maths, Haskell really isn't that hard to pick up; it's only once you start wanting to do I/O and deal with monads that it can be a bit of a burden.
For something like this, where you're probably just loading some pure functions into GHCi? No harder than Mathematica or something like it really.
"One obvious use of types in physics that we have not explored in this work is the expression of
physical dimensions (length, mass, time) and units (meter, kilogram, second).
...This is not trivial to do with Haskell’s type
system because one wants multiplication to “multiply the units” as well as the numbers."
And note that there is no way to resolve this without some fundamental changes, since Haskell requires that the two operands and the result share a type. (Otherwise you could do some tricks with recursive types to accomplish this.)
I think that this is true only if you want to be an instance of `Num`, which makes sense: the collection of, say, lengths is not such an instance, because you cannot multiply two lengths and get a length. Nothing stops us from defining (simplified)
data Unit a = Unit a [String]
(*) (Unit a as) (Unit b bs) = Unit (a Prelude.* b) (as ++ bs)   -- numbers multiply, unit lists concatenate
Of course, we will then have to disambiguate `*` when we use it in the code. Another option would be to give it another name, like `unit` (or something less awful).
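(So, for instance, `Unit 3.0 ["m"] * Unit 4.0 ["m"]` should come out as `Unit 12.0 ["m","m"]`, i.e. an area, which is the behaviour you want from units.)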
Back in undergraduate school, we were taught to learn programming (FORTRAN) through physics, not the other way around.
The idea that the process could be turned around really hammers home how much things have changed due to the access to computers at a young age that most kids have nowadays.
I learned programming independently, but I've done a Master's in physics.
I really think that you want the curriculum to go in this order: (1) teach a high-schooler to program via games; (2) leverage that programming knowledge to build up some abstract mathematics and love of patterns; (3) start into Newton's equations with a programming background.
Haskell is actually a pretty good choice for this process because it is functional. I wouldn't tell the student that it's a "functional language" but rather that it is "based on a simpler model of computation". You start with the idea of "expressions reduce", the naming of things, and backslash-function literals. Once you understand that, then there are data structures and pattern matching -- you expand slowly outwards this way.
The syntax is surprisingly simple if you explain it top-down, and is often "lighter" than Lisp's parentheses. Curried functions and operator sections let you easily speak about higher-order functions really early on without a mess of syntax. Haskell guards and pattern matching really embody the SICP value that "every good program starts with a case dispatch." And, you can still do SICP's trick of implementing a Scheme dialect in Haskell pretty easily (in the context of games this allows them to be "scriptable").
The value that you get is that in the mathematics courses, you are sneakily starting a student off in a proof-centric environment; you can start calculus with the discrete calculus
delta list@(x : xs) = x : zipWith (-) xs list   -- differences: [x0, x1-x0, x2-x1, ...]
sigma = scanl1 (+)                              -- running sums: [x0, x0+x1, x0+x1+x2, ...]
...and you swiftly get an inductive proof that sigma and delta are inverse functions.
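(A quick sanity check in GHCi with the two definitions above loaded:

ghci> sigma (delta [1, 4, 9, 16, 25])
[1,4,9,16,25]

and proving it always works out is a nice first induction.)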
Because computers are stupid, you really have to break every idea down to the lowest common denominator, which makes it really easy to learn incrementally.
Eh, I think the approach we (as in, that's how I learned programming, too) learned with is the correct one, really. Computational physics is less about programming and more about the constant tension between the demands of efficiency and accuracy and the inherent imperfections of the methods (e.g. difference models) and underlying machine representation of numbers.
Physics isn't just about punching numbers into a computer or solving math problems, and I think a shift toward more programming centered learning of physics needs to be heavily tempered with that understanding.
If you're doing really advanced computational physics, sure, but that doesn't mean you can't start on an easier path and learn about those challenges later.
It's no different to practical work – you start off by focusing on the concepts, not worrying about budgeting and time constraints, even though those things are important in the real world too.
In our undergrad course we used C and it was completely the wrong tool for the job. No one learned more about numerical challenges than they could've with Python, and fighting with semicolons just puts people off.
The thing is, from my experience, most computational individuals would be strongly opposed to FP. They may not have been raised on for loops, but once they learn about for, good luck getting them to warm up to the idea of map and reduce.
I think the only way you'd succeed is by snatching their young before they go down that path. I don't really see many people warming up to new things aimed at them like Julia, or even not-so-new stuff like numpy/matplotlib and friends. If it isn't Fortran or C or Matlab, it doesn't ring well with them. Of course, the new kids who don't know programming (or physics yet, perhaps) are ripe for indoctrination into your religion as opposed to theirs.
> most computational individuals would be strongly opposed to FP
There's ample evidence for and against. The most famous counterpoint I know of is Dijkstra, whose Ph.D. was in theoretical physics. What does a theoretical physicist do? They calculate, of course. On reams and reams of paper.
Dijkstra famously warned against the brain damage that comes with learning BASIC. I understand it to mean that one could be so caught up with the details of shuffling bits and bytes that a bigger picture is no longer conceivable to the poor fellow.
Because the poor fellow has fallen into the trap of premature optimization. For loop all the things, higher abstractions be damned.
* Dijkstra's Ph.D. was in theoretical physics, but I don't believe he ever actually did much of it. Instead, he was working on computing systems. Famously, when he was getting married, the Amsterdam official recording the marriage wouldn't accept "programmer" as a job, so his marriage certificate says "theoretical physicist."
* Dijkstra was not a wildly big fan of functional programming. Admittedly, he was less a fan of other things, and I've had this sneaking suspicion that his was simply another allergic reaction to recursion, but still...his guarded command notation was essentially imperative.
I'm not sure this is a bad thing. The criticism of C and its counterparts is valid in some respects, but I think the dogmatic views of FP proponents overstep the bounds of what is justified in many situations. Often a more procedural solution maps better to the problem at hand than a recursive or functional one would.
Just to clarify: I'm not anti-FP in any way. I've implemented some fairly large projects in OCaml and like the language a lot; however, I don't think FP supersedes more traditional programming styles in all (or even many) cases.
I expect some people to disagree with me and I'd love to hear why! I put my viewpoints forward not to say that they're the absolute truth but more to get responses from other people. I'd be more than happy to change my opinions if given compelling evidence.
I love FP, but in order to learn to love it you have to get used to the idea that the language is going to be doing a few things that you might not like (collecting garbage, silently boxing things, not making it obvious how to convince the system to use the vector operations provided in hardware). With Fortran/C, it's easy to see how the assembly language gets produced. I like Haskell a lot, but while I know how C(++) gets to the iron, I'm pretty vague on how Haskell does it. FP can still win, or approach parity, but it costs some trust.
A great deal of mathematics was done in tables, vectors and matrices, which map (probably because of a feedback loop) so well onto the imperative indexed loop that it's not surprising they'd love it.
On the other hand, watching Gilbert Strang's videos, I had the feeling that his view of maths was very FP-ish: splitting matrices into sub-matrices, combining them with higher-order operations. No i, j, k index ever to be seen.
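(In that spirit, a matrix-times-vector in Haskell reads almost exactly like the "dot each row with x" description, with no indices in sight. A toy sketch, rows-as-lists representation and names of my own choosing:)

type Vector = [Double]
type Matrix = [Vector]              -- a matrix as a list of rows

dot :: Vector -> Vector -> Double
dot u v = sum (zipWith (*) u v)

-- A applied to x: dot every row of A with x; no i, j or k anywhere
matVec :: Matrix -> Vector -> Vector
matVec a x = map (`dot` x) a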
At Georgia Tech, the labs associated with Physics I and II have a large programming portion. They had us use VPython [1], a slightly strange package that bundles a version of Python with a graphics library. It worked pretty well, and I got a good kick out of it. They had us model the gravitation of planets (using discrete time steps). In Physics II one of the assignments was to create a vector-field display of a magnetic field, and then animate a magnet moving around the field in a circle.
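(If you've never seen it, the "discrete time steps" part is essentially just an update rule like the one below. This is a rough Haskell translation of the idea for a single planet around a fixed sun, not the actual VPython lab code:)

-- one Euler-Cromer step for a planet orbiting a sun fixed at the origin;
-- state is (position, velocity), gm is G times the sun's mass
step :: Double -> Double -> ((Double, Double), (Double, Double)) -> ((Double, Double), (Double, Double))
step dt gm ((x, y), (vx, vy)) = ((x + vx' * dt, y + vy' * dt), (vx', vy'))
  where
    r3  = (x * x + y * y) ** 1.5
    vx' = vx - gm * x / r3 * dt
    vy' = vy - gm * y / r3 * dt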
I think the programming might have been a little too complex for some, as some people took physics in their first or second semester, before having had a programming class, and it became difficult for the TAs to help people with their code, since they were taught how to accomplish things, not how to accomplish them in a clean manner.
Overall though, I think it added a good bit of value to the course.
Yes. Though they are pretty "hush hush" about the differences between the two classes; in fact, the course numbers are the same. It's just common knowledge that one professor teaches "Modern" physics and one teaches "Classical", though those labels are kind of informal: the differences are not in the material, but mainly in the teaching style.
The linked paper doesn't address the elephant in the room: numerical integration is a finicky process to work with in many systems. Students who are still working to understand the underlying concepts are not going to be helped by things like non-energy-conserving integrators. [1]
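(For a concrete sense of the problem: naive forward Euler on even a unit harmonic oscillator gains energy every single step, while the equally cheap symplectic variant keeps it bounded. A minimal sketch of the contrast, not from the paper:)

-- unit-mass, unit-spring harmonic oscillator; state is (position, velocity)
euler, symplectic :: Double -> (Double, Double) -> (Double, Double)
euler      dt (x, v) = (x + v * dt, v - x * dt)                  -- energy grows by (1 + dt^2) each step
symplectic dt (x, v) = (x + v' * dt, v') where v' = v - x * dt   -- energy stays bounded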
The lack of units in the type system also means the error-preventing properties of static typing are somewhat limited here; it's possible to write code that assumes F=a/m without any complaint from the compiler.
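(Concretely, with everything typed as plain Double, both of these typecheck and only one of them is physics; illustrative definitions of my own, not the paper's:)

force, notForce :: Double -> Double -> Double
force    m a = m * a   -- F = m a
notForce m a = a / m   -- dimensionally nonsense, but the compiler has no complaint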
The author has just defined a 3d vector, but there is no "3" in that definition, because Vec is hardcoded to be three-dimensional. The physics student is probably interested in which aspects of physics are special to 3 dimensions and which generalize to higher dimensions. I think geometric/Clifford algebra somewhat answers this, but my knowledge is limited. Anyway, functional programming is still at the stage where the things it can express about mathematics are actually pretty obvious already. I have high hopes for the future (e.g. HoTT), but for now functional programming is much more exciting for programmers than for physicists or mathematicians (who aren't logicians or category theorists).
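(For what it's worth, GHC can carry the "3" in the type if you want it to. A sketch of the standard length-indexed-vector trick, nothing to do with the paper's actual Vec:)

{-# LANGUAGE DataKinds, GADTs, KindSignatures #-}

data Nat = Z | S Nat

data VecN (n :: Nat) a where
  Nil  :: VecN 'Z a
  Cons :: a -> VecN n a -> VecN ('S n) a

-- works in any dimension, and both arguments must have the same one
vadd :: Num a => VecN n a -> VecN n a -> VecN n a
vadd Nil         Nil         = Nil
vadd (Cons x xs) (Cons y ys) = Cons (x + y) (vadd xs ys)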
Links for the SICM course and book mentioned above:
course: http://ocw.mit.edu/courses/earth-atmospheric-and-planetary-s...
book: http://mitpress.mit.edu/sites/default/files/titles/content/s...