I don't know, but

  import numpy as np

  def benchmark_sort_numpy():
      # 5000 random floats, sorted by numpy's compiled sort
      lst = np.random.rand(5000)
      np.sort(lst)
isn't awkward at all, and I'd argue that numpy (and pandas, etc.) are very natural choices for anyone working on the problems they solve well. That's precisely the beauty of Python: there are very good libraries for almost everything, usually with great communities around them, and it's rarely hard to identify the "state of the art" library for any given problem.

For me the main question is not "will the compiler optimize well in the general case" but rather "will you naturally reach for the right libraries which are optimized well". For me/Python, I'd say the answer is yes more often than not. I understand that's not the point the Julia team is trying to make, but it's a decent practical approach (imo).
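
To make that concrete, here's a minimal timing sketch (just the standard library's timeit plus numpy); the exact numbers will vary by machine, but it's the kind of quick check that usually settles "which library should I reach for":

  import timeit

  import numpy as np

  lst = list(np.random.rand(5000))
  arr = np.random.rand(5000)

  # Pure-Python Timsort vs. numpy's C-backed sort on equal-sized input
  py_time = timeit.timeit(lambda: sorted(lst), number=1000)
  np_time = timeit.timeit(lambda: np.sort(arr), number=1000)
  print(f"sorted(): {py_time:.3f}s  np.sort(): {np_time:.3f}s")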




The motivating factor for using Julia in a lot of cases is: what do you do when the problem you're trying to solve hasn't already been solved exactly by someone else's C extension? Can an average person (scientist, grad student, etc.) who knows the math behind the problem they're trying to solve, but doesn't want to jump through the hoops of awkward extension compilation (where you have to know not only the high-level language and the low-level language, but also the interface-layer APIs that sit between them), write a high-performance implementation from scratch without it taking too much time or effort? If libraries do exist, do they work in parallel? And the standout features of the language, like multiple dispatch and metaprogramming, also allow some new, very natural ways of approaching a lot of problems in technical computing.
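
For anyone unfamiliar with the term: multiple dispatch selects an implementation based on the runtime types of all arguments. Python's standard library only offers single dispatch (on the first argument), but it's close enough to sketch the flavor of the idea; describe here is a made-up example function:

  from functools import singledispatch

  @singledispatch
  def describe(x):
      # Fallback for types without a registered implementation
      return "something generic"

  @describe.register(int)
  def _(x):
      return f"the integer {x}"

  @describe.register(list)
  def _(x):
      return f"a list of {len(x)} items"

  print(describe(42))      # -> the integer 42
  print(describe([1, 2]))  # -> a list of 2 items

Julia generalizes this so the types of every argument, not just the first, pick the method, which is a big part of why generic numerical code composes so naturally there.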


That motivating factor can be addressed in Python by using NumPy and Cython effectively. Check any of Ian Ozsvald's High Performance Python talks.
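
As a sketch of the NumPy half of that claim (the function names and workload are made up for illustration), the usual move is to replace an interpreted loop with a vectorized expression:

  import numpy as np

  def sum_of_squares_loop(values):
      # Pure-Python loop: interpreter overhead on every iteration
      total = 0.0
      for v in values:
          total += v * v
      return total

  def sum_of_squares_numpy(arr):
      # Vectorized: the loop runs in compiled C inside numpy
      return float(np.sum(arr * arr))

  data = np.random.rand(100_000)
  assert np.isclose(sum_of_squares_loop(data), sum_of_squares_numpy(data))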


Numpy is great for dense multidimensional arrays of (edit: fixed precision) floating-point numbers. Most problems I face need richer, more complicated, less uniform data structures than that. Similarly, Cython is way better than writing a C extension by hand, but it feels very tacked-on (why are you writing libraries in a different sub-language than the one you use them from?), what you can do in nogil mode is pretty limited, and the choice of supported compilers is depressingly limited when you need C++11, inline assembly, Fortran, and linking against libraries that build with autotools, etc., all to work cross-platform. If absolutely everything in the Python ecosystem were written in Cython, then Python would have less of a performance problem, but there's a productivity, distribution, and difficulty barrier in the way.
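
A quick illustration of the "less uniform data" point: as soon as elements stop being fixed-size numbers, numpy falls back to dtype=object arrays of Python pointers, and the vectorized fast path is gone:

  import numpy as np

  # Uniform data: one contiguous block of float64, fast C loops
  dense = np.random.rand(4, 3)
  print(dense.dtype)  # float64

  # Ragged / mixed data: numpy stores generic Python object pointers,
  # so operations dispatch back to the interpreter element by element
  ragged = np.array([[1.0, 2.0], [3.0], {"label": "node"}], dtype=object)
  print(ragged.dtype)  # object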



