
That's a pointless article. When you have a few GB of data, you can use anything: command-line tools, SQLite, or anything in between would have worked fine.

Realistically, anyone contemplating Hadoop or anything bigger than a traditional relational database is dealing with data in the hundreds-of-TBs-to-PBs range, which is not going to work with a handful of Unix tools.
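
To make the few-GB claim above concrete, here's a rough sketch of what that workload looks like with plain SQLite from Python. The file and column names (events.csv, category, value) are made up for illustration, not anything from the thread or the article:

  # Minimal sketch: stream a multi-GB CSV into SQLite and run the kind of
  # group-by aggregation a typical "big data" job boils down to.
  # events.csv, category, and value are hypothetical placeholders.
  import csv
  import sqlite3

  conn = sqlite3.connect("events.db")
  conn.execute("CREATE TABLE IF NOT EXISTS events (category TEXT, value REAL)")

  # Insert in batches so memory stays flat even for multi-GB input files.
  with open("events.csv", newline="") as f:
      reader = csv.reader(f)
      next(reader)  # skip the header row
      batch = []
      for row in reader:
          batch.append((row[0], float(row[1])))
          if len(batch) >= 50_000:
              conn.executemany("INSERT INTO events VALUES (?, ?)", batch)
              batch.clear()
      if batch:
          conn.executemany("INSERT INTO events VALUES (?, ?)", batch)
  conn.commit()

  # A single-machine aggregate over the whole table.
  for category, total in conn.execute(
      "SELECT category, SUM(value) FROM events GROUP BY category"
  ):
      print(category, total)

  conn.close()

At that scale the whole thing runs in minutes on one laptop, which is the point the parent is making before contrasting it with the hundreds-of-TBs case.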

No, what's pointless is the typical Hadoop workload. Sure, there are some people who need it, but I'll wager they're not even 1% of the people using it.


OK, but this is a random tangent. People will always use things they shouldn't; that's their problem.

The point of the thread is that no easy scale-out solution has come along for relational databases to handle the data that typically ends up in proprietary data warehouses or Hadoop installations. Citus Data and MemSQL might come close, but the whole industry is still far behind where it should be on this.
