John Baez is a fantastic writer. His home page is full of wonderful illustrations and expository stuff (physics, mathematics, climate science, ...) http://math.ucr.edu/home/baez/
There's also a code base for GHC based on this work called SubHask. It's technically Haskell, but it replaces so much of the functor/monad hierarchy that it's effectively a different language.
Not related: I am looking for a book about quantum physics without the (applied) math part. I don't have any problems with math and I enjoy pure math, but I don't want to spend my time reading calculations. I'm after the deep phenomena going on in subatomic physics.
I haven't found any yet; I'd appreciate it if you could suggest a book.
Leonard Susskind's series of courses at Stanford's continuing-education program is a treasure. They cover the required math, but the focus is on just the math you need to move on to the next idea, not on rote calculation.
There's a new book out this year, "Q is for Quantum" by Terry Rudolph. It uses only basic arithmetic and goes quite far into quantum entanglement and the nature of reality. And Terry is a cool dude (also a physicist at Imperial.)
It’s actually easier to understand quantum mechanics by only looking at the math, IMO. A lot of the weirdness simply disappears and it becomes the inevitable outcome of certain axioms.
Agreed. If you approach QM as just "linear algebra with complex numbers" plus "probabilities that can be negative or complex", it gets you a long way toward eliminating the woo factor. It's an incomplete description, of course, but I find it helpful.
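To make that concrete, here's a minimal sketch of the "linear algebra with complex numbers" view (the states and the Hadamard gate are standard textbook examples; the code itself is just an illustration): a qubit is a unit complex vector, a gate is a matrix, and probabilities are squared magnitudes of amplitudes.

```python
import math

# A qubit state is a 2-component complex vector of unit norm; probabilities
# are squared magnitudes of the amplitudes (the Born rule).
plus = [1 / math.sqrt(2), 1 / math.sqrt(2)]        # (|0> + |1>)/sqrt(2)
circ = [1 / math.sqrt(2), 1j / math.sqrt(2)]       # complex amplitudes are fine

def probabilities(state):
    return [abs(a) ** 2 for a in state]

# A gate is a matrix; applying it is just a matrix-vector product.
H = [[1 / math.sqrt(2),  1 / math.sqrt(2)],
     [1 / math.sqrt(2), -1 / math.sqrt(2)]]        # Hadamard gate

def apply(gate, state):
    return [sum(gate[i][j] * state[j] for j in range(2)) for i in range(2)]

print(probabilities(plus))            # ~[0.5, 0.5]: equal chances
print(probabilities(circ))            # ~[0.5, 0.5]: phase doesn't change these
print(probabilities(apply(H, plus)))  # ~[1.0, 0.0]: the amplitudes interfere
```

The last line is where the "weirdness disappears into the axioms": the two paths into the second outcome have amplitudes that cancel, which is just arithmetic once you accept complex amplitudes.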
"The Road to Reality" by Roger Penrose is the only book I can think of that contains material like that without the calculations. It packs in a huge amount of content, and it gets better and better the more of the physics you've read about in detail elsewhere.
This book is a marvel, but a tricky one. While offering an overview of modern math and physics on a truly encyclopedic scale, the text seems to vacillate in regards to the intended audience, lingering on trivialities while often rushing through complex topics. It may be hard to read without looking elsewhere for additional explanations of things.
Quantum physics is a vast field, what are you interested in? The basics like single particle quantum mechanics, or quantum field theory, or something very specific like entanglement, the Higgs mechanism, or quantum electrodynamics?
Some areas are more amenable to classical analogies than others, which in turn dictates how far you can get without math. But I would say quantum physics generally just requires math if you want more than a very rough and superficial picture. The classical world in which we developed our intuitions is very different from the way nature works at a fundamental level, so we are poorly equipped to understand the world at those scales.
So you don't know any quantum, but you know the right way to learn it? There is no royal road to physics, and there is no way to learn quantum without mathematics; it is a mathematical model of reality. I appreciate that you don't want to learn linear algebra and Hilbert space calculus when you could be learning quantum, but the two are inseparable. Unless you want to read about spooky action and crazy dead cats and just feel like you understand.
As someone else mentioned, Road to Reality doesn't have a lot of symbols and equations, but it's a really tough book, and you'd be better off with an undergrad QM textbook like Griffiths, Mandl or Dirac.
You could do worse than Feynman's QED. The main weakness from your probable POV is that it doesn't get into quantum computing or other modern developments. But it's short and very good.
Edit: No, wait! Did you _not_ want the mathematics? On a first reading, I thought you wanted the mathematical core, without getting bogged down in practicalities around it.
Things like quantum tunneling and "not existing before measurement" are deep phenomena, but they might as well be magic until you understand decaying solutions to PDEs and superposition, at which point they become very obvious. Read a real book that's gentle.
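The "decaying solutions" point can be made quantitative with a standard back-of-the-envelope estimate (the barrier height, energy, and widths below are illustrative numbers I picked for the example): inside a barrier with V > E, the wavefunction decays like exp(-kappa x), so transmission through a barrier of width L scales roughly as exp(-2 kappa L).

```python
import math

HBAR = 1.054571817e-34   # reduced Planck constant, J*s
M_E = 9.1093837015e-31   # electron mass, kg
EV = 1.602176634e-19     # joules per eV

def transmission_estimate(V_eV, E_eV, L_m):
    """Rough tunneling probability exp(-2*kappa*L) through a square barrier,
    with kappa = sqrt(2*m*(V - E)) / hbar inside the classically forbidden region."""
    kappa = math.sqrt(2 * M_E * (V_eV - E_eV) * EV) / HBAR
    return math.exp(-2 * kappa * L_m)

# Doubling the barrier width squares the (already small) transmission factor,
# which is why tunneling currents are so exquisitely sensitive to distance:
t1 = transmission_estimate(V_eV=5.0, E_eV=1.0, L_m=1e-10)
t2 = transmission_estimate(V_eV=5.0, E_eV=1.0, L_m=2e-10)
print(t1, t2)  # t2 is t1 squared, up to floating-point rounding
```

Nothing magical: once the decaying solution is in front of you, "the particle leaks through the barrier" is just what the exponential says.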
People think of type theory as the "internal language" of categories with certain properties (e.g. the simply typed lambda calculus for cartesian closed categories). A formal way to say this is that the syntax of a type theory gives rise to a category with those properties, and this category is initial among such categories. This means that it is, in a sense, the minimal category satisfying the properties, so that every construction in the type theory gives rise to a corresponding construction in every such category.
To me personally, type theory feels a little like an ugly definition of this initial "category with X", and I'm currently exploring in my thesis whether a syntax derived from the categorical definitions themselves could serve the same purpose that type theory does today.
Originally, categories arose as (mathematical) descriptions of various universes of functions. For example, there is the universe of functions given by linear expressions (i.e. matrices), which is contained in the universe of functions given by polynomial expressions, which is itself contained in the universe of functions given by rational expressions (i.e. fractions of polynomials). If you've studied calculus, other examples are the universes of analytic functions, of smooth functions, of differentiable functions, and of continuous functions.
Significantly, these are not just single-variable functions, or even functions whose values are real numbers. All that is necessary to describe a category is that functions have specified domains and codomains; that any two functions with matching source and target can be composed into a third function by feeding the output of one into the input of the other; that composition is associative (it doesn't matter in what order you pipe the functions together); and that for every object there is an identity function from the object to itself, meaning a function which does nothing when composed with another function.
It turned out that the above axiomatization of how functions behave (i.e. that they have sources and targets and can be composed in an associative fashion) is very flexible. For example, you can essentially implement the category of linear maps (between finite-dimensional spaces) as the category whose "functions" are matrices, with source and target given by the numbers of rows or columns, and whose composition is given by matrix multiplication. Note in particular that matrices are just tables of numbers, i.e. they are not actually functions, and that the sources and targets are not the spaces themselves, but just their dimensions.
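Here's a sketch of exactly that implementation (hypothetical code, not from any library): objects are dimensions, a morphism m -> n is an n x m matrix represented as a list of rows, composition is matrix multiplication, and the identity on n is the n x n identity matrix. The category laws can then be checked directly.

```python
def compose(g, f):
    """Compose f: m -> n with g: n -> p into g.f: m -> p (matrix product g*f)."""
    assert len(f) == len(g[0]), "target of f must match source of g"
    return [[sum(g[i][k] * f[k][j] for k in range(len(f)))
             for j in range(len(f[0]))]
            for i in range(len(g))]

def identity(n):
    """The identity morphism on the object n: the n x n identity matrix."""
    return [[1 if i == j else 0 for j in range(n)] for i in range(n)]

# f: 2 -> 3, g: 3 -> 2, h: 2 -> 2 -- note the "objects" are just 2 and 3.
f = [[1, 2], [3, 4], [5, 6]]
g = [[1, 0, 1], [0, 1, 0]]
h = [[2, 0], [0, 2]]

# Category laws hold: associativity and the identity laws.
assert compose(h, compose(g, f)) == compose(compose(h, g), f)
assert compose(f, identity(2)) == f
assert compose(identity(3), f) == f
```

The point of the exercise is in the comment: nothing here is a function between spaces; the matrices are just tables of numbers, yet they satisfy the axioms and so form a perfectly good category.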
The relationship with type theory comes about as follows. The fundamental object of type theory is the type judgement: "if a term x is known to be of type X, and a term t is known to be of type T, then the term p(x,t) is known to be of type P". But the term p(x,t) can be understood as a function transforming a pair of terms of types X and T into a term of type P. The type judgement is then the validation that the expression p(x,t) determines a legitimate function (in the context of the specific type theory). Hence each type theory can be understood as a syntactic presentation of a specific category of functions.
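As a toy illustration (the types X, T, P and the function p are stand-ins I made up, not anything canonical): a type checker validating the annotation on p is validating exactly such a judgement, and p itself is the corresponding function from pairs to P.

```python
from typing import Tuple

# Stand-in types for the judgement "x : X, t : T |- p(x, t) : P".
X = int
T = str
P = Tuple[int, str]

def p(x: X, t: T) -> P:
    # The body is the "term"; the annotations are the judgement that a type
    # checker validates.  Categorically, p is a morphism X x T -> P.
    return (x, t)

assert p(1, "a") == (1, "a")
```

A checker like mypy accepting this definition plays the role of the type theory certifying that p(x,t) determines a legitimate function.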
In this context it's probably wise to use "proof" carefully, otherwise it's easy to confuse with its mathematical usage. Maybe "Homotopy type theory is also a sign of the relationship between computation, logic and topology" is a better way to express what you were trying to say :)