
In a nutshell, whatever this article suggests, just don't.

This oversimplifies to a point where all learners learn the same way. They don't.

I don't even see the wider jaw in that cartoonish depiction, and I'd never recognize the man from it: a good cartoon will amplify features of someone, but this is a completely non-existent feature of the face being drawn. If anything, the guy has an overly oval face compared to an average face.

So basically, starting from a wrong premise, it happily leads you to a wrong conclusion.

A better take-away would be to attempt to recognize multiple ways to learn something, and make an effort to see what works best for each individual. If you can't afford that (it's too time-consuming, thus too expensive, to cater to each individual student), choose what you optimize for: having most kids learn to a particular (likely lower) standard, or having the "most compatible" (e.g., in maths, those who already have a mathematical, algorithmic, abstract mind) get the most out of their talent. But you will be compromising either way.



The model recursively self-justifies.

That is: all models are wrong, some are useful. Including this one.

Have you ever taught a class? I have, and I absolutely need simplified (aka wrong, but the right kind of wrong) models of learning to be able to operate at all.


I haven't taught a class, but I don't think that discredits my opinion at all. Why do you think it does?

I never said you don't need simplification, but the way the article poses it is disingenuous at best. E.g. look at the example of multiplication they give and how it's "technically correct" to call multiplication mostly reducing numbers. Bollocks. Neither "making things bigger" nor "reducing numbers" is technically correct: neither applies.

The natural approach to learning multiplication is to work with objects, and thus natural numbers. That's why it's called "multiplication": you make multiples of something.
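To make the "multiples of something" intuition concrete, here is a minimal sketch (my own illustration, not from the article) of natural-number multiplication as repeated addition:

```python
def multiply(n, count):
    """Make `count` multiples of `n` by repeated addition (naturals only)."""
    total = 0
    for _ in range(count):
        total += n  # lay down another copy of n
    return total

print(multiply(3, 4))  # 12: four multiples of three
```

With only natural numbers in play, the result is always at least as large as either operand, which is exactly the intuition the name "multiplication" encodes.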

You can easily introduce negative numbers as being in debt for X things, and in that sense, multiplication still only increases your debt. Negative numbers are a shorthand for "you owe me this".

The next step is to introduce rational numbers, which are parts of something. That's pretty clear too. Then you introduce multiples of parts of something, and it all still makes sense (a quarter of a quarter is now something slightly different). Particularly inclined students will "feel" that you are now entering somewhat abstract "multiplication".
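The "quarter of a quarter" step can be checked with exact fractions (a sketch of mine using Python's standard `fractions` module):

```python
from fractions import Fraction

quarter = Fraction(1, 4)
print(quarter * quarter)  # 1/16: a quarter of a quarter
```

Here "taking a part of a part" yields something smaller than either operand, which is precisely where the naive "multiplication makes things bigger" intuition has to be refined rather than declared wrong from the start.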

The same goes for real and complex numbers, though that requires a bit of a leap of faith.

This is a proper way to teach multiplication. It starts simple, yet it's never wrong.

The premise of the article makes it come up with a completely unnecessary and untrue statement: "People generally multiply positive numbers greater than 1, so multiplication makes things larger." People mostly multiply natural numbers: this is what drives the intuition behind, and the naming of, the operation (technically, this still falls under the article's statement, but that statement is unnecessarily technical in allowing only numbers greater than 1, and in allowing rational and real numbers, though I am not sure where complex numbers fit in ;-)).

By focusing on technicalities, the article misses the natural way to simplify things in a manner that is never wrong.

Mathematics today is beautifully built up from very simple concepts. Depending on the level you are teaching at, it requires different degrees of suspension of disbelief.



