I use math with computers all the time, and have since pretty much the beginning of my programming experience 35 years ago. But I don't use it well. I depended a lot on other people to convert a mathematical equation into a program (for example, think of a summation: that's really just a for loop incrementing an accumulator. A numerical integration isn't much more than that, just scale the sum by the step size at the end). I learned gravitational simulations that way (amusingly, I was able to do the Mandelbrot set on my own knowing just z = z ** 2 + c and brute forcing my way through the details).
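To make that parenthetical concrete, here's a minimal sketch in Python of what I mean (the function names and step count are mine, just for illustration): a summation is an accumulator loop, and a rectangle-rule integration is the same loop with the total scaled by the step size.

    import math

    # A summation such as sum_{k=1}^{n} f(k) is just a loop with an accumulator.
    def summation(f, n):
        total = 0.0
        for k in range(1, n + 1):
            total += f(k)
        return total

    # A simple numerical integration (rectangle rule) is the same loop,
    # with the accumulated sum scaled by the step size at the end.
    def integrate(f, a, b, steps=10000):
        dx = (b - a) / steps
        total = 0.0
        for i in range(steps):
            total += f(a + i * dx)
        return total * dx

    print(summation(lambda k: k, 100))        # 5050.0
    print(integrate(math.sin, 0.0, math.pi))  # roughly 2.0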
For me math is more like received wisdom. I'll have a problem I need to solve, and as part of that, I need to compute some function. But the naive version of the function that I was taught (say, the factorial function) is slow, and might fail because integer data types overflow their range. In comes my professor, who mentions Stirling's approximation (https://en.wikipedia.org/wiki/Stirling%27s_approximation), which allows me to complete my project and graduate on time. Said professor also derived analytic derivatives of our objective function by hand, since at the time (1993-94) we didn't have automatic differentiation.
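A rough sketch in Python of why the approximation matters (the numbers are just for illustration): the naive product quickly grows too large for a fixed-width integer, or even a double, while Stirling's formula works in log space and stays small.

    import math

    # Naive factorial: exact, but in a language with fixed-width integers it
    # overflows almost immediately (21! no longer fits in 64 bits), and even
    # as a double, 171! is already out of range.
    def factorial(n):
        result = 1
        for k in range(2, n + 1):
            result *= k
        return result

    # Stirling's approximation works with ln(n!) instead:
    #   ln(n!) ~= n*ln(n) - n + 0.5*ln(2*pi*n)
    # so even enormous factorials stay representable.
    def log_factorial_stirling(n):
        return n * math.log(n) - n + 0.5 * math.log(2.0 * math.pi * n)

    n = 1000
    print(log_factorial_stirling(n))  # ~5912.128
    print(math.lgamma(n + 1))         # exact ln(1000!), also ~5912.128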
At the time, I didn't really think too much about it. I had a problem and somebody handed me a practical solution. But I got curious... what was this gamma function, and why was it defined over floats (reals!) rather than just integers? That led me down a rabbit hole of mathematical exploration (most of which was carried out with a well-worn copy of Numerical Recipes in C).
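The answer I eventually absorbed, sketched here with Python's standard library rather than anything I used at the time: the gamma function agrees with the factorial at the integers (gamma(n + 1) = n!) but is also defined between them, which is exactly what lets Stirling-style machinery take non-integer arguments.

    import math

    # gamma(n + 1) == n! at the positive integers...
    for n in range(1, 6):
        print(n, math.factorial(n), math.gamma(n + 1))

    # ...but it is also defined between the integers (over the reals),
    # e.g. gamma(1/2) is sqrt(pi).
    print(math.gamma(0.5), math.sqrt(math.pi))  # both ~1.7724538509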
Another example is the Mandelbrot set. You can take the raw definition and attempt to compute set membership, but your calculation will never complete, since membership means the iteration stays bounded forever. Instead, clever math people figured out ways to compute an approximately right answer faster, and in some cases optimized for the limited hardware of the time (see FRACTINT, an integer-math fractal program for x86 machines from before floating-point hardware was standard). This and many other tricks made fractal exploration on consumer hardware practical (although probably not very useful?)
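The standard trick (not FRACTINT's integer math, just the basic escape-time idea) is to cap the iteration count and use the escape radius |z| > 2 as proof of non-membership; a minimal Python sketch:

    MAX_ITER = 100

    # Escape-time approximation: any orbit that leaves |z| > 2 is provably
    # outside the set; anything that survives MAX_ITER iterations is treated
    # as "in" the set, which is where the approximation lives.
    def escape_count(c, max_iter=MAX_ITER):
        z = 0 + 0j
        for i in range(max_iter):
            z = z * z + c
            if abs(z) > 2.0:
                return i
        return max_iter

    # Crude ASCII render of the classic view: real in [-2, 1], imag in [-1, 1].
    for im in range(-10, 11):
        print("".join("#" if escape_count(complex(re / 10.0, im / 10.0)) == MAX_ITER else " "
                      for re in range(-20, 11)))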
Over time I've come to be better at math, at understanding concepts, and at seeing the relationship between practical high performance computing and the underlying math and physics required to do it effectively. I've learned so many more ways to approach problems than when I started, much of it because I continued to learn more math and kept practicing it. I see a close relationship between computing theory and the math/physics that enabled it (i.e., transistors, vacuum tubes before them, and mechanical gears and switches before that).
I've also realized that I can learn some math easily (for example, more or less anything on a Cartesian grid), while other things, like complex symbolic manipulation or tree-structured algorithms, take a lot more thinking.
To me it's an endless world of unknown delights that I stumble across and periodically take 20+ years to understand. I am just now solving problems that my smarter grad school friends managed to solve in a day, 20 years ago, because they're better at math (and logic, and memory, and more...).