
What do you mean? That's an incredibly important distinction in understanding mathematics for ML in the neural net age. Perhaps a bit of a sensitive spot for me personally, coming from a harmonic analysis group for my math PhD, but the short version goes something like this: up until the 2010s or so, a huge portion of applied math was driven by results from "continuous math": functions mostly take values in some unbounded continuous vector space; they're infinite-dimensional objects supported on some continuous subset of R^n or C^n or whatever; and we reason about signal processing by proving results about existence, uniqueness, and optimality of solutions to certain optimization problems. The infinite-dimensional function spaces provide intellectual substance and challenge to the approach, while also limiting its applicability to circumstances amenable to the many assumptions one must typically make about a signal or sensing mechanism for the math model to apply.

This is all well and good, but it's a heavy price to pay for what is, essentially, an abstraction. There are no infinities: computed (not computable) functions are really just finite-dimensional vectors taking values in a bounded range, and any relationships between domain elements are discrete.
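To make that concrete, here's a minimal sketch (names and the choice of signal are my own, just for illustration): a textbook "continuous" signal like sin(2πt) is, once computed, nothing but a finite list of bounded samples.

```python
import math

# In theory: f(t) = sin(2*pi*t), an infinite-dimensional object on [0, 1).
# In practice we only ever compute finitely many samples, each in a bounded
# range -- the function has become a finite-dimensional vector.
N = 8  # dimension of the vector we actually work with
samples = [math.sin(2 * math.pi * n / N) for n in range(N)]

print(len(samples))                             # finite dimension: 8
print(all(-1.0 <= s <= 1.0 for s in samples))   # bounded range: True
```

Everything downstream (filtering, transforms, learning) operates on that vector, not on the idealized function.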

In this setting, most of the traditional heavy-duty mathematical machinery for signal processing is irrelevant -- equation solutions trivially exist (or don't), and the actual meat of the problem is efficiently computing solutions (or approximate solutions). It's still quite technical and relies on advanced math, but a different sort from what is classically the "aesthetic" higher-math approach. Famously, it also means far fewer proofs, at least as far as real-world applications are concerned. The parameter spaces are so large, and the optimization landscapes so complicated, that traditional methods don't offer much insight, though people continue to work on it. So now we're concerned with entirely different problems requiring different tools.
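A toy illustration of that shift in emphasis (the problem and numbers here are invented): for a tiny least-squares fit min_w Σ (w·x_i − y_i)², existence of a minimizer is trivial and there's even a closed form, so the interesting question is purely computational -- e.g., whether an iterative method like gradient descent reaches the same answer efficiently, which is all you have once the model is too large for closed forms.

```python
# Toy 1-D least squares: min_w sum_i (w*x_i - y_i)^2.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 3.9, 6.2, 7.8]  # roughly y = 2x

# "Existence" route: the closed-form solution w* = sum(x*y) / sum(x*x).
w_closed = sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

# "Computation" route: gradient descent iterating toward the same answer.
w = 0.0
lr = 0.01
for _ in range(1000):
    grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys))
    w -= lr * grad

print(abs(w - w_closed) < 1e-6)  # both routes agree: True
```

In deep learning the closed form is gone, but the second route (and its efficiency) is what remains interesting.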

Without any further context, that's what I would assume your colleague was referring to, as this is a well-understood mathematical dichotomy.




I made the continuous/discrete distinction more to take a jab at people who don't know measure theory and therefore think these approaches can't be unified. (Though, for the record, I do know that in some cases, like the ones you mention, there is no overarching, unifying theory.)
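For readers who haven't seen the unification being alluded to, the standard example is that sums and integrals are both Lebesgue integrals against different measures:

```latex
% The Lebesgue integral \int_X f \, d\mu specializes to both cases:
\int_{\mathbb{R}} f \, d\mu \;=\; \int_{-\infty}^{\infty} f(x)\, dx
  \quad \text{($\mu$ the Lebesgue measure on $\mathbb{R}$)},
\qquad
\int_{\mathbb{N}} f \, d\mu \;=\; \sum_{n=0}^{\infty} f(n)
  \quad \text{($\mu$ the counting measure on $\mathbb{N}$)}.
```

So theorems stated for a general measure space cover continuous and discrete signals in one stroke, which is exactly the unification people who skip measure theory miss.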

Other than that, I agree with you on everything up to the point where you say "Without any further context...".

The dichotomy you describe needs graduate-level mathematics to be properly understood in the first place. I'm not sure why (luck?), but it seems you are biased by being surrounded by people who are competent at both ML and math. I guarantee you that is not the case in general. If you review for any of the big conferences (e.g. NeurIPS), you will see that really fast. :(



