> It’s just that most grep implementations were written...

...with a design philosophy of composition. Rather than a hundred tools that each try to make too-clever predictions about how to parallelize your work, the idea is to have small streamlined tools that you can compose into the optimal solution for your task. If you need parallelization, you can introduce that in the ways you need to using other small, streamlined tools that provide that.
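For instance, here is a minimal sketch of that composition, assuming GNU findutils; the pattern, file glob, and job count are all illustrative:

    # Parallelism comes from xargs -P, not from grep itself:
    # run up to 8 grep processes, handing each one file at a time.
    find . -name '*.log' -print0 \
      | xargs -0 -P 8 grep -H 'pattern'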

It had nothing to do with some prevalence of "single-core processors"; it was simply a different way of building things.




That just pushes the task of optimising the workload up to you, complete with opportunities to forget about it and do it badly.

I don't relish the idea of splitting a file up into N chunks and running N greps in parallel, and would much rather that kind of "smarts" be in the grep tool itself.
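For the record, the manual version looks something like this (a sketch; split -n l/8 is GNU coreutils syntax that divides by whole lines so no match straddles a chunk boundary, and the filenames are illustrative):

    # Split one big file into 8 line-aligned chunks, grep each
    # chunk in the background, then wait for all of them.
    split -n l/8 big.log chunk.
    for f in chunk.*; do grep 'pattern' "$f" & done
    wait
    rm chunk.*

Every user of this recipe has to get the line alignment, the wait, and the cleanup right every time, which is exactly where the "do it badly" opportunities live.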


It has no choice but to read file data in chunks; otherwise it would exhaust memory.

If you need to do N parallel searches, what better arrangement do you propose?


I propose that the search tool decide how to split up the region I want searched, rather than me composing simpler tools to achieve the same result.
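ripgrep is one existing tool that makes this call itself: it walks the tree and parallelizes across files by default, with a single knob to override (the thread count here is illustrative):

    # rg picks a thread count from the available cores on its own;
    # -j 8 just caps it explicitly.
    rg -j 8 'pattern' .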



