
Hey everyone, it's Ozgun. When I first wrote this blog post, it was much longer. Based on initial feedback, I edited out parts of it to keep the post focused.

If you have any questions that aren't covered in the post, happy to answer them here!




That's funny, because I was dying for it to be longer. I felt like the post was just an introduction. I'd love to see a part 2 with a more detailed description that touches on more of the implementation of a sharding plan.

For me, a major question as I consider sharding is what my application code will look like. Let's say I have a query like:

  select products.name from vendor inner join products on vendor.id = products.vendor where vendor.location = 'USA'

If I shard such that there are many products tables (1 per vendor), what would my query look like?


Your application code shouldn't have sharding concerns in its logic. To achieve this, you should introduce an abstraction layer. One such example is vitess[0], which is used at YouTube.

If that's too much work, an easier preliminary step is to build the abstraction layer in your application code. That gets you most of the benefits of a proxy for keeping your application logic clean, and makes it easy to switch over later, but it's less powerful and feature-complete.

[0]: http://vitess.io/overview/#features


Reading through your comment again, I realize I completely missed the mark on your question.

If you use Citus, you don't have to make any changes in your application. You just need to remodel your data and define your tables' sharding column(s). Citus will take care of the rest. [1]

In other words, your app thinks it's talking to plain Postgres. Under the covers, Citus shards the tables and routes and parallelizes queries. Citus also provides transactions, joins, and foreign keys in a distributed environment.

[1] Almost. Over the past two years, we've been adding features to make app integration seamless. With our upcoming release, we'll get there: https://github.com/citusdata/citus/issues/595
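To make that concrete with the example query from the question above, here's a minimal sketch (the vendor and products names come from that query; the rest is an assumption about the schema):

  -- Tell Citus to distribute both tables on the vendor key. Co-locating
  -- them this way keeps the join local to each shard.
  SELECT create_distributed_table('vendor', 'id');
  SELECT create_distributed_table('products', 'vendor');

  -- The application query itself stays plain SQL:
  SELECT products.name
  FROM vendor
  INNER JOIN products ON vendor.id = products.vendor
  WHERE vendor.location = 'USA';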


Thanks for your input (also the_duke)! If time permits, we may come up with a second blog post on this topic.

If I understood your example query, your application serves vendors and each vendor has different products. Is that correct?

You can approach this sharding question in one of two ways.

1. Merge different product tables into one large product table and add a vendor column

2. Model product tables as "reference tables". This will replicate the product tables to all nodes in the cluster

Without knowing more about your application / table schemas, I'd recommend the first approach. I'd also be happy to chat more if you drop us a line.
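A minimal sketch of both options, reusing the hypothetical products table and vendor column from your query:

  -- Approach 1: one merged products table with a vendor column,
  -- distributed on that column.
  SELECT create_distributed_table('products', 'vendor');

  -- Approach 2: keep products as a reference table, replicated to all
  -- nodes, so every shard can join against it locally.
  SELECT create_reference_table('products');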


Same here.

To me it read like just a basic introductory post to a longer series.


> On the benefits side, when you separate your data into groups this way, you can’t rely on your database to join data from different sources or provide transactions and constraints across data groups.

How is it a benefit that you are no longer able to join data in your separate tables? Is this sentence a mistake?


Thanks for writing the post. Sharding is something I'm considering at my current job.

How long do these sharding projects usually take? Do you know of any posts that break down the steps in more detail?


Timeframes for sharding projects vary quite a bit. If you have a B2B database, we find that sharding projects usually take between one and eight weeks of engineering (not clock) time. Most take two to three weeks.

A good way to tell is by looking at your database schema. If you have a dozen tables, you'll likely migrate with about one week of effort. If your database has 250+ tables, it'll take closer to eight weeks.

When you're looking to shard your B2B database, you usually need to take the following steps:

1. Find tables that don't have a customer / tenant column, and add that column. Change primary and foreign key definitions to include this column. (You'll have a few tables that can't take a customer column; these become reference tables.) A rough SQL sketch of steps 1 and 2 follows after this list.

2. Backfill the customer_id / tenant_id values for tables that didn't have them

3. Change your application to talk to this new model. For Rails/Django, we have libraries available that make the app changes simpler (100-150 lines). For example: https://github.com/citusdata/activerecord-multi-tenant

4. Migrate your data over to a distributed database. Fortunately, online data migrations are starting to become possible with logical decoding in Postgres.
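As a rough sketch of what steps 1 and 2 can look like in plain Postgres (the orders, customers, and tenant_id names are made up for illustration, and the constraint name assumes Postgres defaults):

  -- Step 1: add the tenant column.
  ALTER TABLE orders ADD COLUMN tenant_id bigint;

  -- Step 2: backfill it by joining through a table that already carries it.
  UPDATE orders
  SET tenant_id = customers.tenant_id
  FROM customers
  WHERE orders.customer_id = customers.id;

  -- Back to step 1: once backfilled, include the tenant column in the key.
  ALTER TABLE orders ALTER COLUMN tenant_id SET NOT NULL;
  ALTER TABLE orders DROP CONSTRAINT orders_pkey;
  ALTER TABLE orders ADD PRIMARY KEY (tenant_id, id);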

If you have a B2C app, these estimates and steps will be different. In particular, you'll need to figure out how many dimensions (columns) are central to your application. From there on, you'll need to separate out the data and shard each data group separately.


I think you're understating how tough it can be.

There are applications that

  * are mature and complex

  * have 100s of tables

  * serve millions of users

  * have to be broken into multiple micro-services

  * have developer resource constraints

So you're easily looking at a 1-2 year project, not 1-8 weeks.

You've also ignored some of the complexities, such as resharding (moving data between shards), which may significantly add to the cost of the project.


Also, when architecting for shards, you must take availability into account.

Having several shards can lower the availability of your application if it cannot handle the absence of a shard.

For example, if each of your individual DBs has 99.9% availability and you split your data across 10 shards, overall availability drops to about 99% (roughly 8 hours vs. 3 days of downtime a year).
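For reference, the compounding math, assuming shard failures are independent and the app needs every shard to be up:

  -- 10 shards, each at 99.9% availability
  SELECT power(0.999, 10) AS overall_availability;  -- ≈ 0.990, i.e. about 99%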

To handle that, you need to add replication and automatic fail-overs, adding even more complexity.


At Prosperworks we offer a CRM which integrates closely with G Suite applications like Gmail and Calendar.

We consider our app to be maturing if not mature. It is certainly complex - we integrate with dozens of partners and external APIs. We have 80 tables and 300k LoC of Rails code running on several TB of PostgreSQL data. We have not broken our app into multiple micro-services. Like everybody, we always feel that our developer resources are constrained.

Our data model is very similar to the CRM example in Ozgun's article: _mostly_ we have a master customer table and a wide halo of tables which are directly or transitively associated with customers. We called this the "company sharding domain". Since we allow one user to be associated with multiple accounts, we shard our user table independently: there is a smaller halo of tables in the "user sharding domain". And we have a handful of global tables for content and feature configuration in the "unsharded domain".

We kicked off our migration project from unsharded Postgres to sharded CitusCloud in early Q4 2016. We had one dev work on it full-time for one quarter, updating our code to be shard-ready. Then another 1.5 devs joined for a month in the final build-up to the physical migration. We migrated in late Feb 2017, then spent perhaps another 3 dev-months on follow-up activities like stamping out some distributed queries which we had unwisely neglected and updating our internal processes for our brave new world.

Two years ago at another company I was tech lead on a migration of two much larger Mongo collections to sharded Mongo. That was a larger PHP application which was organized partly into microservices. That effort had a similar cost: as I recall I spent one quarter and two other devs spent about one month, and there were some post-migration follow-up costs as well.

I am confident that real world applications of significant complexity can be migrated from unsharded to sharded storage with a level of effort less than 1 year. I admit that 8 weeks feels fast but I'm sure I could have done it if we had been willing to tie up more devs.

Why did these efforts take so much less than 2 years? Because we didn't have to build the sharding solution itself - those came off the shelf from some great partners (shout-outs to CitusData and mLabs). We just had to update our applications to be shard-smart and coordinate a sometimes complicated physical migration, de-risking, and cutover process.

That said, I can imagine the work growing slowly but linearly in the number of tables, and quickly but linearly in the number of micro-services.


I used to think similarly several years ago. I now think differently for the following reasons:

* Citus and other technologies can now provide features that do a lot of the heavy lifting. Some examples are resharding, shard rebalancing, and the high availability features mentioned elsewhere in this thread (a one-line sketch of rebalancing follows after this list).

* My estimates are for B2B (multi-tenant) apps. For those apps, we found that the steps you need to take in re-modeling your data and changing your app are fairly similar. At Citus, we used to shy away when we saw 200-300 tables. These days, complex apps and schemas have become commonplace.

* We saw dozens of complex B2B databases migrate in similar time frames. Yes, some took longer - I'm in the tech business and always an optimist. :)
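For instance, shard rebalancing from the first point is a single call in Citus (a sketch; 'products' is a placeholder table name, and availability of the function depends on your Citus version and edition):

  -- Move shards of a distributed table between nodes to even out the load.
  SELECT rebalance_table_shards('products');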

I also don't want to generalize without knowing more about your setup. If you drop me a line at ozgun @ citusdata.com, happy to chat more!



