xstartup on March 29, 2018 | on: CockroachDB 2.0 Performance Makes Significant Stri...
We use a ClickHouse cluster with 1,000 nodes and 50,000 GB of clickstream data.
_wmd on March 29, 2018
That's only 50 GB per node. Why do you/ClickHouse need so many nodes?
tedmiston on March 30, 2018
Maybe some space is dedicated to replication? Or to query-execution temp space, like Redshift. Or he could be trying to keep everything in memory.
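
A rough back-of-the-envelope check of the replication theory (the 3x factor below is an assumption, not stated anywhere in the thread; ClickHouse replication is configured per table, so the real value could differ):

    # Per-node storage for the cluster described in the parent comment.
    # replication_factor is assumed, not from the thread; adjust to match
    # the actual topology.
    raw_data_gb = 50_000        # total clickstream data (parent comment)
    nodes = 1_000               # cluster size (parent comment)
    replication_factor = 3      # assumed

    per_node_gb = raw_data_gb * replication_factor / nodes
    print(f"~{per_node_gb:.0f} GB per node")  # ~150 GB

Even at 3x replication that is only ~150 GB per node, which would comfortably fit in RAM on large instances, consistent with the keep-everything-in-memory guess above.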