Spending $5k to learn how database indexes work (briananglin.me)
236 points by anglinb on Nov 6, 2021 | 199 comments



As the author touches on, the main problem here isn't learning about indexes. It's about "infinity scaling" working too well for people who do not understand the consequences.

In no sane version of the world should "not adding a db index" lead to getting a 50x bill at the end of the month without knowing.

I am a strong believer that services that are based on "scale infinitely" really need hard budget controls, and slower scaling (unless explicitly overridden/allowed, of course).

If I accidentally push very non-performant code, I kind of expect my service to get less performant, quickly realize the problem, and fix it. I don't expect a service to seemingly-magically detect my poor code, increase my bill by a couple orders-of-magnitude, and only alert me hours (if not days) later.


There's no free lunch. Cloud services trade performance woes for budget surprises. This may be preferable in some cases but the tradeoff should be recognised.


There's plenty of space in the middle though, no? Bank accounts cut you off if you hit a zero balance, or they can execute your transactions and charge you a fee. Why can't these services implement throttling or even halting if the charges hit a certain ceiling?


In some cases the query might have finished before the data hits the billing system.


That’s not an argument.

As long as query #2 doesn’t run, that’s better than nothing.


So you want to add an overhead for each query to check an internal LRU cache that then checks the billing system? Just the overhead of hashing the query into some cacheable identifier will hurt performance.


Disagree; cloud services trade reduced operation work for higher prices. There is nothing inherent to "cloud services" that requires budget surprises.


> Cloud services trade performance woes for budget surprises.

I'm not sure why you think this is a trade-off. In general cloud services automate operations. Whether they are faster is unrelated. Many are not--services that use object storage for backing storage can be orders of magnitude slower than equivalent software using NVMe SSDs.


Our internal monitoring alerts for performance anomalies. Quite possible to scale and warn you.


> This may be preferable in some cases but the tradeoff should be recognised.

It's not a "tradeoff", it's a product feature.


>I don't expect a service to seemingly-magically detect my poor code, increase my bill by a couple orders-of-magnitude

When you put it like that, it sounds like an awfully good business to be in.


Haha yep, I was like, wait, I'm used to getting feedback from the system telling me I messed up, and this I barely noticed. PlanetScale has Query Statistics that are really useful for spotting slow queries but don't expose the "rows read", so you can't really tie this view back to billing. I think they're aware of this though and might expose that information.


It can't be like that. I have discussions with vendors sometimes and the first question I ask is: if something lapses and we weren't paying attention, you won't cut our service, right?

I think too, in most cases, people would rather run over than cut service.

Also how would such a system work? Let's say you sign up for some API and what, set your billing limit to 500 requests per day. Let's say you're now hitting fabulous numbers / signups - but suddenly you start hitting that 500. If that shuts off your signups or what have you, you're typically going to be worse off than if you just pay the overage bill.

I know it sucks, but the first time you pay your overage is probably your last.


It's important to think about this in an a la carte design, not one fixed solution for all use cases.

Step 1: You give people the ability to put in soft limits - "Warn me when I hit 500".

Step 2: You also give the ability to put in hard limits - "Pull the plug at 10k" (caveat to both of these: you guarantee this at an eventual-consistency level, like "Well, you hit 500, but by the time our stats updated you were at 600").

Step 3: You introduce rate limits - "We're expecting 500 in a month, warn us if we hit 50 in a day or 10 in an hour".

Step 4: You introduce predictive warnings - "Our statistics show you'll hit your monthly limit on the 23rd of the month".

Step 5: You put in predictive limits to allow scaling - "The last 3 months we've seen the following use trend, warn us if we exceed double that trend, cut off if we see 50x that trend"

You might set some of these limits or none of these limits depending how predictable your use case is.


> Let's say you're now hitting fabulous numbers / signups - but suddenly you start hitting that 500.

Sure, you get alerted, confirm it's reasonable, and then change your limits. You're also describing how many APIs actually work.

I'll say there is also a difference between going from 500 requests/day to 1000 requests/day, where you might say "this is probably legitimate and I want to run over", and from 500 requests/day to 25k requests/day.

One is mildly inconvenient, and the other is potentially bankrupting.


If you’re expecting something near 500, then why would you set your limit to 500? Set it to something like 20k at least.

Or obviously if you don’t think this will be a problem, you have control to set it to uncapped.

I don’t understand what argument you have.


A billing limit feature is something that's been wanted for years, yet the most that's offered is budget alerts.


The question that follows would be: how do you know what was intended to be less performant versus optimized on-demand? The intentions can be easily inferred when the query at hand was a simple join, and to no surprise, many cloud database offerings _do_ provide optimization automation (Azure SQL will for example even automatically add obvious indexes if you let it). But what if the query did need to scan all the rows in a join, but was only a one-off, and you didn’t want to pay the continued perf and storage costs of maintaining an index? The cloud provider can’t know that, and even with proactive measures (“make it slower” can’t work because speed is part of the product design, and budget controls can only go so far before it impacts your own customers) there’s only so much that can be done. The choice of infinity scale tools comes with infinity scale costs, and so there’s a responsibility that engineers using these tools need to understand what they’re accepting with that choice.


> The question that follows would be: how do you know what was intended to be less performant versus optimized on-demand?

I'm saying that the cloud provider shouldn't try to make assumption either way, and I'm definitely not saying that it should try to manage indexes for you.

If you are typically using X ops/s, and begin using 50X ops/s, the default should not be "this customer probably wants to spend 50x their previous spend". It should maybe scale up some percentage of previous usage, but definitely not into a range that would be considered anomalous.

> The choice of infinity scale tools comes with infinity scale costs, and so there’s a responsibility that engineers using these tools need to understand what they’re accepting with that choice.

Sure, but I have never once seen one of these providers make clear that using them comes with the risk of being charged "infinity money".


Honestly, just a limit isn't bad. Just an option to "Stop all operations if the bill exceeds $300" would make this a LOT safer for most folks.


Or perhaps a “do not allocate more than $1/min” or something similar, which makes cloud servers mimic bare-metal hardware: when you overload it, it slows down but keeps trying.


> In no sane version of the world should "not adding a db index" lead to getting a 50x bill at the end of the month without knowing.

Computers do what you tell them to do. If you are totally clueless and don't bother to take even a few minutes to try to understand a system you are using, the results are going to be poor. Thinking any system can overcome total user ignorance is the thing here that isn't sane.

What the person in this article did is like opening all your windows and setting the thermostat to 74 degrees. It will use massive amounts of energy and just keep trying to heat the house 24/7. If someone turns around after doing this and claims there is actually a problem with thermostats not being smart enough because what if someone doesn't know leaving the window open lets cold air in, well, they shouldn't be allowed to touch the thermostat anymore.


> Computers do what you tell them to do. If you are totally clueless and don't bother to take even a few minutes to try to understand a system you are using, the results are going to be poor. Thinking any system can overcome total user ignorance is the thing here that isn't sane.

In theory I agree, but this website features something like "how I nearly bankrupted myself with an AWS bill" on the homepage every month or so. People are blissfully unaware about the extreme costs they're paying to the scaling cloud providers that they often don't even need in the first place.

While I don't think services should block extreme spend all together, a monthly/weekly/daily limit would go a long way to prevent these stories. Very few services that abstract away performance costs have a good way to limit expenses. I don't know if that's intentional or if these companies just don't care, but it's infuriating to me.

It's fine to expose the same tool to both someone who doesn't know the difference between indexes and foreign keys and someone who's been building cloud infra for many years, but as a company you should be prepared to respond to your customers' most likely mistakes. This specific case would probably be hard to detect automatically, but so many wasted CPU cycles, kilowatts and forgiven bills could be prevented if someone would just send an email saying "hey, you've been using more than 10x the normal capacity today, everything alright?"


This is a lot of victim-blaming in such a small response.

> If you are totally clueless and don't bother to take even a few minutes to try to understand a system you are using, the results are going to be poor.

That a hosted system behaves differently from the underlying technology it's modelled on is not immediately clear. The realm of "things you don't know that you don't know" expands drastically with managed services.

> Thinking any system can overcome total user ignorance is the thing here that isn't sane.

It's never been suggested that this is possible. There is a large range of options in between "solve all user error" and "don't hand everyone a loaded foot-gun".


> That a hosted system behaves differently from the underlying technology it's modelled on is not immediately clear. The realm of "things you don't know that you don't know" expands drastically with managed services.

So don't use managed services? They are expensive and the only thing that works consistently and well is the lock in, everything else is pretty iffy. Somehow people look at me like an idiot when I say this, but it's LESS effort to NOT use AWS and build everything yourself. I guess this seems impossible somehow, but at the scale you are ever going to operate it's not hard to just build a service to store and serve files (s3), and if you scale to the point where you can't build it easily, you will build it anyway because you can afford to hire enough engineers to build it and still save huge amounts of money. The same goes for every managed service offered on the cloud, they are not a good deal at any point, ever, for anybody.

> It's never been suggested that this is possible.

The gist of the article is they got a refund because they didn't pay close enough attention to realize their queries were doing full table scans, and they didn't pay close enough attention to realize this was causing the service to scale in capacity to an absurd degree.


Why not write a simple service that tracks various stats (like number of users, requests, etc.) as well as billed costs over time?

You could then get various interesting stats in real time as well as some pretty useful alerting.


Even with "infinite scale", you should still be monitoring, and be doing some form of budget monitoring.

The difference is that application performance metrics are generally available in near-real time, whereas billing metrics are 1) very platform specific, and 2) generally not even close to real time.

It's hard to react quickly when your platform has effectively transformed near-real-time performance alerts into delayed/rolled-up billing alerts (which would also be much more difficult to use to pinpoint where the underlying issue is).


If you create an inefficient process, you should be responsible for the consequences. Why would you expect some third party to take the responsibility?

If you create a horrible internal combustion engine, your gas station should not bear the costs.


If you create an inefficient internal combustion engine, you'd know because you have to go to the gas station every 5 miles. In this case it would be like someone was filling up the gas without you knowing, and then a few weeks later you get the bill, and then you realize that your engine is inefficient.


In theory yes; in practice it’s very easy to push inefficient code to production by accident, as shown in the article.


> I am a strong believer that services that are based on "scale infinitely" really need hard budget controls, and slower scaling (unless explicitly overridden/allowed, of course).

+1 on the budget control, but I don't think there are good arguments in favor of slower scaling.

The ability to scale on demand is sold (and bought) based on the expectation that services just meet the workload that's thrown at them without any impact on availability or performance. That's one of the main selling points of managed services, if not the primary selling point.

Arguing in favor of slower scaling implies arguing in favor of downtime. A service that's too slow to scale is a service that requires a human managing it. A managed service that is unable to meet demand fluctuations is a managed service that can't justify the premium that is charged for it.


I may not have been as clear as I should have; I'm not necessarily arguing that typical or expected scaling actions should be slowed down. I.e., throttling scaling from X -> 1.5X doesn't really make sense.

A scaling change that would be considered anomalous, and introduces an order-of-magnitude change over historical usage could be scaled more slowly.

> Arguing in favor of slower scaling implies arguing in favor of downtime.

Sure, I guess that in a limited scope, that is what I am saying. I would much rather have a short-term "downtime that requires human intervention" problem than a long-term "Johnny deployed bad code and now the company is bankrupt" problem.

> The ability to scale on demand is sold (and bought) based on the expectation that services just meet the workload that's thrown at them without any impact on availability or performance. That's one of the main selling points of managed services, if not the primary selling point.

I tend to disagree with this. Managed services are often bought on the expectation that they do not require management, deep operational knowledge, and are reliable. There's also often the trade off of upfront costs (either human or capex costs).

Scalability is obviously part of the analysis, but "scalability" and "the ability to scale from 1X -> 100X in a couple of seconds" are not necessarily the same thing.


> In no sane version of the world should "not adding a db index" lead to getting a 50x bill at the end of the month without knowing.

Oh, that would actually be quite useful for learning if the bill told you it got so high because you, stupid dumb-ass, didn't use DB indices properly.

I'm shocked every time by how many people using DBs don't know about indices! Those people should pay such a bill once. They would never ever again "forget" about DB indices, I guess.

Of course I'm joking to some extent. But only to some extent…


I feel like indexes are a pretty fundamental type of DB knowledge. In fact I'd say it's table-stakes knowledge you should have if you're working with them. Furthermore, knowing that foreign keys typically apply an index to that column is also, in my head, basic knowledge. I'm sorry you got burnt, and congrats on learning a lesson, but you could have gotten the same knowledge by ever googling MySQL foreign keys and saved yourself a headache.

In fact it's like a big bullet point near the top of the docs page.

"MySQL requires indexes on foreign keys and referenced keys so that foreign key checks can be fast and not require a table scan. In the referencing table, there must be an index where the foreign key columns are listed as the first columns in the same order. Such an index is created on the referencing table automatically if it does not exist. This index might be silently dropped later if you create another index that can be used to enforce the foreign key constraint. index_name, if given, is used as described previously."

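To make the quoted behaviour concrete, here's a minimal sketch (hypothetical table and column names) of stock MySQL/InnoDB creating that index for you as a side effect of the constraint:

    CREATE TABLE users (
      id BIGINT PRIMARY KEY
    );

    CREATE TABLE events (
      id      BIGINT PRIMARY KEY,
      user_id BIGINT,
      -- the constraint makes InnoDB add an index on user_id
      -- automatically if a usable one doesn't already exist
      FOREIGN KEY (user_id) REFERENCES users (id)
    );

    -- lists the implicitly created index on user_id
    SHOW INDEX FROM events;
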
I'm not entirely sure why buzz around "developer learns basic knowledge" has this on the front page.


Good for you. But I think you're being uncharitable by failing to distinguish between "concept I didn't understand" and "thing I forgot to consider until I saw the problem it caused". The title also suggests the former, but I think the author is being a bit humble by underplaying his existing knowledge. Likely he actually did know what indexes are before; if you asked him to detail how MySQL foreign keys work he might have even remembered to say they add an implicit index. But it's super easy to miss that you're depending on a side effect like that until you see the slow query (or, in this case, high bill).

When you're programming, how many compiler errors do you see a day? (For me, easily dozens, likely hundreds.) Do you think each one indicates a serious gap in your knowledge?

Along these lines: imposter syndrome is a common problem in our industry. One way it can manifest is junior engineers thinking they're bad programmers when they repeatedly see walls of compiler errors. I think it'd help a lot to show them a ballpark of how often senior engineers see the same thing. [1] I know that when I'm actively writing new code (especially in languages that deliberately produce errors at compile time rather than runtime), I see dozens and dozens of errors during a normal day. I don't think this is a sign I'm a bad programmer. I think it just means I'm moving fast and trusting the compiler to point out the problems it can find rather than wasting time and headspace on finding them myself. I pay more attention to potential errors that I know won't get caught automatically and particularly to ones that can have serious consequences.

I think the most important thing the author learned is that failing to add an index can cost this much money before you notice.

Ideally the author and/or the vendor will also brainstorm ways to make these errors obvious before the high bill. Load testing with realistic data is one way (though people talk about load testing a lot more than they actually do it). Another would be watching for abrupt changes in the operations the billing is based on.

[1] This is something I wish I'd done while at Google. They have the raw data for this with their cloud-based work trees (with FUSE) and cloud-based builds. I think the hardest part would be to classify when someone is actively developing new code, but it seems doable.


No, you've missed my point: the author seemingly didn't know that foreign keys applied indexes by default in MySQL. It's not "concept I didn't understand"; clearly they're capable of understanding, because they did after they ran into the issue. It's about not having had the basic knowledge to begin with.

But he didn't see compiler errors; he caused a monetary cost to his employer.

When I deploy something that unintentionally causes a large monetary bill to my employer, then yes, I do believe that indicates a gap in knowledge, so I don't in any way believe I'm being uncharitable. Or, and this would be worse, a lack of caring. (Which is not what I think happened here, though.)

I won't respond to your imposter syndrome bit; I don't really think it's relevant to my point.


> When I deploy something that unintentionally causes a large monetary bill to my employer, then yes, I do believe that indicates a gap in knowledge, so I don't in any way believe I'm being uncharitable.

It depends; if you've been given a loaded footgun it's not entirely your fault when it inevitably goes off.

Let's go back to your "compiler errors" scenario, and let's say someone decided that the company should be using a cloud-based compiler that happens to charge per error. I wouldn't blame developers for falling into a trap that challenges all known assumptions.

The problem is that there is a DB that charges insane amounts of money per row processed with no upper limit and that someone actually thought it was a good idea to use it.


>The problem is that there is a DB that charges insane amounts of money per row processed with no upper limit and that someone actually thought it was a good idea to use it.

That's it in a nutshell. Usually you have an upper bound on compute, memory, disk space or some other resource for a specific price. When you hit those limits, you find performance issues and at that point you can choose to try optimizing your code or database, then decide whether you need to upgrade resources at cost.

I really don't understand this model that charges for rows read or, worse, "inspected". What's the upside of that model versus more typical pricing schemes, and how is it manageable/predictable from a budget perspective? With or without the indexing problem here, you'd really have to know your user behavior, then translate that to DB read counts by your app. And, while devs should all be optimizing code as much as reasonable, something as specific as minimizing DB reads seems an odd constraint to place on software.

I'm guessing there must be some use case I'm missing; else I don't know why this pricing scheme is even a thing.


I am somewhat shocked to find that an RDBMS is considered a “loaded footgun” in 2021. Perhaps grandparent isn’t the most charitable in their interpretation, but I am in full agreement. It continues to astound me how little about the basics of databases most developers know, and how strongly resistant they are to trying to learn.


An RDBMS that scales infinitely while charging you per-row goes against the usual assumptions learned in the past decades, so I'd say yes that's a loaded footgun.


Have a friend who had a BigQuery query that ended up costing $3k each time. It ran for only a minute, because BigQuery chews through data really fast. But you don't realize that when you push the run query button. And there are no spend guardrails. They switched to paying for a given amount of parallelism after that.


First, it's not my "compiler errors" scenario; it's the person's who initially replied to me. Sure, whatever, I don't think I ever insinuated I thought that was a good idea; it runs in parallel with the issue I have.


I think you didn't read through to this part of my comment:

> I think the most important thing the author learned is that failing to add an index can cost this much money before you notice.

> Ideally the author and/or the vendor will also brainstorm ways to make these errors obvious before the high bill. Load testing with realistic data is one way (though people talk about load testing a lot more than they actually do it). Another would be watching for abrupt changes in the operations the billing is based on.


No, I did, but since I disagree with your earlier point about how much existing knowledge they have, that kind of by default means I disagree with what they took away from this incident.

It's also highly speculative so like I'm not going to go back and forth on it.

Needing a vendor to hand hold your likely highly paid dev seems like a bad fix to me.

Also, not having an index isn't an error; it can be a valid choice based on your situation and query load, which is why people should know the situations where they're needed.

I think people should simply be better. A lot of people don't like hearing that though so usually I keep it to my private chats where people seem more willing to cop to that fact.

I know we disagree, I know you're going to continue disagreeing, I know I don't want to have the conversation.


> I know we disagree, I know you're going to continue disagreeing, I know I don't want to have the conversation.

Please consider not chiming in on the next article like this then. I think your attitude of (paraphrasing) "no good programmer would have made the costly mistake you shared, and articles about it aren't worthwhile" is super harmful to our industry. It's the polar opposite of the blameless postmortem approach I'm fond of.


This one in particular is not worthwhile on the front page of HN, that's my take. They're most definitely useful for beginners, or maybe people just learning about databases.

I'm not going to not post simply because you find it disagreeable, there are plenty of people here who seem to agree with me.

Blameless post mortems are great, for your team. I am not his team mate, and I don't really feel a kinship with every developer under the sun. And for what it's worth I don't blame this developer for anything. If anything I lament the institutions that failed them on the way to this point in time. To me this is a symptom of systemic rot.


Your submissions are nothing but "ask HN". Leech. To chide about "not knowing" but then yourself ask the community seems a bit hypocritical.


I asked before making a mistake, I also asked to do some light market validation. Try to keep the ad hominems down. If you can't make an argument against the point and have to attack me as a person, it's just validation. Most of my contributions here are in my upvoted comments like my op, which clearly a good amount of people share.

Generally speaking I don't want to submit most things, largely because, like this post, I find most thoughts people have to be garbage, including my own; most things and people simply aren't interesting or useful.


> I'm not entirely sure why buzz around "developer learns basic knowledge" has this on the front page.

The problem is that in the old days, not knowing about indexes left you with an underperforming system or downtime. But in The Cloud™ it leaves you with an unreasonably huge bill and that somehow as an industry we're accepting this as normal.


Which really is a head scratcher. You'd figure that, especially as a startup, a 5k oopsie isn't really acceptable. Mistakes do happen and I don't mean any shade to this particular person (they'll never make this mistake again), but as an industry the aggregate consequence of this is you have a lot of waste and stupid choices that then have to be cleaned up when more knowledgeable (read: highly paid) people are introduced later on.

They'll have to clean up the mess, which causes real business consequences that, and I've personally seen this, will directly impact the bottom line and have no quick or easy solution to wiggle out of.

Maybe it's acceptable for products like this because the balance between good engineering and company health probably isn't as clear-cut, but stuff like this always makes me sad because it's such low-hanging fruit; it doesn't require any real effort, just basic curiosity about your job.


I have no problems with a developer doing a 5k oopsie with things like card processing or an area that has a legitimate potential for direct monetary losses (such as payment processing where a bug could allow customers to order goods without actually paying).

I have a problem with whoever looked at <insert your favorite on-prem RDBMS here> and said "nah, let's go with a cloud-based solution that charges per-query and gives us an essentially infinite financial liability".


It's not so clear cut. What's the cost of losing the entire on-prem database? Do you trust a company who hired a developer who didn't know about indexes to hire a rock solid DBA? And how much does that DBA cost?


> What's the cost of losing the entire on-prem database?

Backing up an on-prem DB doesn't require specialist DBA knowledge. Basic UNIX skills are enough. Not to mention, since you're not in the cloud, bandwidth or efficiency is not a concern - feel free to rsync your entire DB off to a backup server every 5 minutes.

> to hire a rock solid DBA? And how much does that DBA cost?

They didn't have a DBA here either, and this "cloud" didn't save them. But at least with an on-prem Postgres the worst they'd have is significantly reduced performance*, whereas here they had a 5k bill.

*actually the price/performance ratio for bare-metal servers is so good that a $100/month server would probably take their unindexed query and work totally fine (as a side effect of the entire working dataset being in the kernel's RAM-based IO cache).


Rsyncing the database won't work in many cases; it doesn't ensure your backup is consistent. That is really, really dangerous advice, especially as you might not notice this if you test the process while the database is idle.

For Postgres you either use the pg_dump command and back up the dump, or you set up WAL archiving and save base backups and the WAL files as they are created.

This isn't rocket science, but you really should read the manual at least once before doing this. Just copying the files is not the right way to backup a database (unless you really know what you're doing and are ensuring consistency in some other ways).


I am not saying that rsync or cp is the right way to backup a DB, I was just giving a very crude example. I absolutely agree with the issues you're raising.

However, I'd still take recovering a DB that has been backed up by rsync/cp over a DB that's not been backed up at all. If you really can't be bothered to do it the right way, you're still better off doing something than running with no backups at all.


HA/clustered/replicated DB setups are not rocket science. Backups are not rocket science. Losing an on-prem database irrevocably never happened for me in 20 years.


This is not a matter of hiring an elite DBA. This is a matter of reading the manual. Both indices and backups are right there, in chapters 7 and 8 respectively. But that's something worth doing irrespective of whether you are running your own database or using someone else's as-a-service.


There are also many options in between "cloud-based infinity scale" and "on-prem". You can use cloud services that abstract many day-to-day operational tasks of db management, but are still bound price-wise to your monthly instance costs.


You can test backups, that doesn't require much expertise, only effort.


The best technical people aren't always the best to start a business. The goal is to make money not have perfect code.


If someone comes to me and tries to sell me a service that can leave me with an infinite bill I'd look at them funny and walk away. But that's just me and maybe I just don't get it and I'm not "startuping" right.


Cool, and if this was a case of bike shedding over something that hadn't just cost that early stage startup 5 grand I'd agree with you.

However regurgitating a platitude that everyone, including myself, learned when we tried getting our first business off the ground doesn't add much value here.

Had this been a database with 10 million rows it would have cost them $50k, and this is incredibly basic programming knowledge.

Basic proficiency is a far cry from worrying about best technical talent and not a particularly egregious ask.


I've been involved in some terrible software behind reasonably successful businesses. I complained like the best of them, having to clean up the horrible mess. But it worked: they used their limited competence and limited funds and built something profitable.


Yes, we really should not accept this. The ability to impose limits on spending is key to controlling an enterprise. The whole security certification guacamole is based on having established controls. But where the bits hit the fan, control is absent.


Using money to solve business problems is good business sense, but only if that’s the best way to spend that money. I agree with you that the status quo is normal, but nonsensical.


> But in The Cloud™ it leaves you with an unreasonably huge bill and that somehow as an industry we're accepting this as normal.

No. Nobody finds that "normal", that's just untrue. It's even the _whole_ subject of this blogpost: the bill was not normal.

I don't disagree that some people are overrelying on cloud services, but that didn't become normality; it's still a beginner's mistake.


> it's still a beginner's mistake

Absolutely, but previously that wouldn’t cost you €5000 extra.


I've been using relational databases for web apps for my entire career and probably would have made this same mistake if using PlanetScale for the first time.

The author had two misunderstandings:

1) An index isn't created automatically

2) You get billed for the number of rows scanned, not the number of rows returned

Even if I noticed #1, I probably wouldn't have guessed at #2 for the same reason as the author.
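
For what it's worth, MySQL itself can show you #2 if you ask: the handler counters track rows examined rather than rows returned. A sketch with a hypothetical table (whether a managed service exposes these counters, or lets you run FLUSH STATUS, will vary):

    FLUSH STATUS;

    -- returns one row, but without an index on user_id it examines every row
    SELECT * FROM events WHERE user_id = 42;

    -- Handler_read_rnd_next is roughly the number of rows scanned sequentially
    SHOW SESSION STATUS LIKE 'Handler_read%';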


You are absolutely missing the point. The point is not about indexes or full table scans; it's about cloud providers who will charge you for every row "inspected", and how a full table scan might cost you $0.15 and it would add up. It's not about slow performance, which you can diagnose and fix; it's about getting an unexpected $5k bill, which you can't fix.

And in the end, if the cloud provider wants to charge you for rows "inspected", this can't be buried in small print. That's unacceptable!

The billing must come with an up-front warning in red capital letters, and must come with alerts when your bill is unexpectedly high (even a little higher than expected, not just 10x or 100x higher). It must automatically shut down the process, requiring the customer to confirm that they want to proceed, that they actually want to spend all that money. And it must be on the cloud provider to detect billing anomalies and fully own them in case it goes the wrong way. This is the cloud "bill of rights" we need.


You'd be surprised and frustrated. If you ever see someone say "We hired Oracle consultants and they are miracle workers" or "NoSQL is sooo much faster than SQL" you can be pretty sure they missed databases 101 and the requirement to add indexes.


> I'm not entirely sure why buzz around "developer learns basic knowledge" has this on the front page.

Because it's a well written, humble account of learning from a mistake then using it as an opportunity to teach others to help them avoid the same mistake.

If anyone leads a team, I hope they might learn from this approach, rather than just bashing on people and implying they don't deserve any attention because they made a mistake a more experienced developer might have dodged.


Personally I find it trite but whatever floats your boat.


One of the best database habits I've ever developed is to run EXPLAIN on every query that I expect to run repeatedly, then sanity-check the output. It's very little effort, and has prevented so much hassle.
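
For anyone who hasn't picked up the habit yet, here's a rough sketch of what that looks like in MySQL (hypothetical table, illustrative numbers): type: ALL in the output is the full-table-scan red flag, and adding the index flips the plan to an indexed lookup.

    -- before: no index on user_id, so the whole table is scanned
    EXPLAIN SELECT * FROM events WHERE user_id = 42;
    -- type: ALL, key: NULL, rows: ~750000

    ALTER TABLE events ADD INDEX idx_events_user_id (user_id);

    -- after: only the matching rows are touched
    EXPLAIN SELECT * FROM events WHERE user_id = 42;
    -- type: ref, key: idx_events_user_id, rows: ~10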


If we weren't using underwhelming ORM DSLs, I'd love to use/write a GitHub bot that automatically runs EXPLAIN ANALYZE on queries updated in a PR and posts the query plan!


You can do that still. Just a bit more work to evaluate the ORM layer first.


Seriously. Like, every junior dev has to learn DB indexing basics sometime, and apparently the author of this blog post just did. But I really can’t understand why this article is getting voted to the top of HN.


What I gained from the article wasn't that the dev was unaware of indices; it's that he didn't realise indices were missing due to how PlanetScale's database disallows foreign keys.

I've never worked with a database that doesn't have foreign keys, and when you do for the first time it's not unthinkable to forget that foreign keys were what created indexes for you automatically.

A little bit of planning could have prevented that though :/
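
In other words, on a platform without foreign keys the index you'd normally get as a side effect has to be spelled out explicitly, and referential integrity moves into application code. A minimal sketch with hypothetical names:

    CREATE TABLE events (
      id      BIGINT PRIMARY KEY,
      user_id BIGINT,
      -- no FOREIGN KEY allowed, so nothing creates this for you;
      -- this is the line that's easy to forget
      KEY idx_events_user_id (user_id)
    );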


they were using some kind of foreign keyless MySQLish whatever thing


This is definitely a lesson in the importance of indexes in general. We are well aware of the potential pitfalls with our current pricing. I’m happy to say we are nearly done modeling different metering rates for the product which would mean significantly lower bills for our users and avoid issues like this.

It’s core to our mission that our product’s pricing is accessible and friendly to small teams. Part of being in beta was us wanting to figure out the best pricing based on usage patterns. That work is nearly done. As the post mentions we’ve credited back the amount.


Thanks Sam! As mentioned in the post, the PlanetScale team was quick to credit our account for the overages and help us figure out what was going on. I'm personally super bullish on PlanetScale!

With any new product there will be tradeoffs and rough edges but the positives, like easy migrations and database branches have definitely outweighed any difficulties.


Kudos for being open about your mistakes.

Could you share a little bit about what your thought process was in general when picking a database technology?

You call out "easy migrations and database branches" outweighing other quirks, so some pros and cons weighing must've happened :)

Is it easy, for example, to test things in your dev environment with realistic amounts of data, and to get an understanding of how the queries will execute, etc? These seem somewhat basic, and would've probably caught this (also kinda basic, sorry :) problem early. (As in discovering "why is this query that should be a few ms with an index lookup taking so long?" early on)


Thanks! The decision making process was pretty unsophisticated tbh. Basically I spent the last couple years working at GitHub as a security engineer and had been pretty comfortable with MySQL so wanted to stick with that. I had heard from our database team how annoying migrations were and I had previously locked a postgres database multiple times in production trying to deploy a migration, so MySQL + safe migrations + some of the best engineers I've worked with pouring all their time into PlanetScale, made a ton of sense. So basically a combination of proven underlying tech + believing in the team.

The migration workflow is really cool, basically when we create a PR we branch our production database and apply any migrations that are included in that PR and then that branch is used in our per-PR review environment. (Just Heroku Review Apps), then when we merge the PR, we also merge the deploy request in PlanetScale. Database branching is a super powerful concept once you've leaned into it.

We don't really do any sort of load testing in a dev environment. We have one customer who is also a co-founder of our company, so we just deploy whatever changes we're unsure about (after automated testing) to his application and see what happens. If anything looks off in Grafana we'll take a look, but it's usually "good enough" or "totally broken"; very rarely do we take time to make something 25% better if it already works. The time-to-fix vs speed-of-shipping-features tradeoff doesn't make sense for us.

In this specific case, the query was taking place in a background job so 10ms - 500ms didn't really matter to us so we didn't really measure the timing, if we had we may have noticed it was slow but kinda a testament to PlanetScale that we didn't even notice ;)


Thanks for the background :)

Lock behaviour of migrations are indeed very important to be aware of.

Braintree (which are heavy Postgres users) have a pretty good post on that - https://medium.com/paypal-tech/postgresql-at-scale-database-...


We are so glad to have you as customers and I can't wait to partner as you grow to mega scale.


Glad you came here to say this because my takeaway was "What a terrifying pricing model. That would keep me up at night"


Would it be possible for you to add a spending cap? The user should be able to tell your system "here's how much I want to spend max" and if they exceed that they start getting errors or reduced performance.


Yes absolutely. We have daily and monthly budgets on the roadmap to make sure nobody gets a surprise bill in the future. This and more tools to make sure you are running your database optimally.


No foreign keys to make migrations easier. That doesn't sound like the best trade off to me.

Having the database constrained as much as possible makes maintenance so much easier. Many bugs don't escape into production as they're caught by the database constraints. Those that do get out do less damage to the data.

I know scale comes with trade offs but that seems extreme to me.


I'm a Vitess maintainer and I feel the same way. I don't plan to use any of the Online DDL because you'll have to pry my foreign keys out of my cold, dead hands. I understand the reasoning and limitations, but like you, the trade-off isn't worth it to me.


I'm so curious, so you maintain Vitess but don't use it personally?


I do use it in production and have for years, just not the online schema changes. It's fantastic and FKs are supported in a single shard, which we use heavily.


If you take out online schema changes and sharding, what's the use case for vitess?


Architected correctly, there's minimal need for cross-shard foreign keys. A common use case is sharding by tenant/customer id, which means that all records for a single customer live on a single shard. That lets you have all the FKs that you want, and any operations for that customer happen on a single shard, which gives you maximum speed and transactional guarantees.


If you take sharding out what's the point of using anything other than a normal sql database?


Vitess offers a lot of quality of life improvements over stock MySQL, including built-in backups to S3/GCS, managed replication, plus soon to be auto-failover detection. Additionally, with vreplication, you can do some pretty powerful materialized views that aren't possible with MySQL. Finally, Vitess Messaging is an awesome way to do async work, allowing for transaction guarantees where you can ack a message, do data work, then add to another queue, all without having to deal with weird side effects.


What are your thoughts on Citus and Cockroach where foreign keys are still supported when creating partitioned clusters?

Is it due to fundamental differences in Postgres vs InnoDB?


Vitess still supports foreign keys (single shard), using MySQL, just not with the Online DDL functionality.

I think Cockroach tries to be a little too magical, which is great for starting up a cluster, but I think you can architect for much better performance with Vitess and owning your sharding model. I'm also very happy to use InnoDB, one of the most battle tested db engines to ever exist, while Cockroach is currently rewriting theirs from scratch. At the distributed level, I don't know of any massive scale adopters of Cockroach yet, though I'm not 100% looped in, so forgive me if I'm ignorant of them. On the other hand, Vitess has seen adopters like Slack, GitHub, Square, HubSpot, YouTube, with many more in various stages of adoption.

I feel like Citus might be trying to be too many things and so hasn't gotten the traction that Vitess has. Vitess has nailed OLTP at scale, while Citus is trying to also do OLAP and be a single source. That's the holy grail, but I'm not sure that any technology is close to handling both of those well yet.


It's possible during a migration to drop a constraint, make the update and restore the constraint. If a schema migration tool doesn't automate this or at least permit it, it's not a good schema migration tool.


oh no, foreign keys are useless because it's the apps responsibility to delete

/s


My company runs a cloud service for ClickHouse. We've spent a lot of time thinking about pricing. In the end we arrived at (VMs + allocated storage) * management uplift + support fee.

It's not a newfangled serverless pricing model, but it's something I can reason about as a multi-decade developer of database apps. I feel comfortable that our users--mostly devs--feel the same way. We work to help people optimize the compute and storage down to the lowest levels that meet their SLAs. The most important property of the model is that costs are capped.

One of the things that I hear a lot from users of products like BigQuery is that they get nailed on consumption costs that they can't relate in a meaningful way to application behavior. There's a lot of innovation around SaaS pricing for data services but I'm still not convinced that the more abstract models really help users. We ourselves get nailed by "weird shit" expenses like use cases that hammer Zookeeper in bad ways across availability zones. We eat them because we don't think users should need to understand internals to figure out costs. The best SaaS services abstract away operational details and have a simple billing model that doesn't break when something unexpected happens on your apps.

Would love to hear alternative view points. It's not an easy problem.


I'm just leaving this here:

https://www.hetzner.com/dedicated-rootserver/ax161/configura...

Draw that nice red slide all the way to the right. No, it's not storage. Yeah, it's actually affordable. Yeah, that was a sexual sound you just made.

You do have to be prepared to know some basic sysadmin, or pay somebody to do it for you. My newest server has about 60 cores and half a terabyte of RAM. Surprisingly, it's not uber sharp - I went with a high core count so individual queries actually got slower by about 20%. But that load... you can't even tell if the CPU load gauge is working. I can't wait to fill it up :D Maybe this black friday season I'll get it to 10%.


...and Hetzner just started offering their services in the US a few days ago. (EDIT: not affiliated)

If you do something stupid with your code at least you won't go bankrupt, only your service will be slower.


Just to clarify, Hetzner now has Cloud servers in the US. Dedicated servers are still only available in Europe.


I appreciate that info. I tried Hetzner a decade ago from the US just to see what kind of latency I might expect. I've been waiting for them to finally get US located services to give them a serious shot (vs Digital Ocean & Co.).


I was just comparing the pricing to OVH's cloud VPS's (https://us.ovhcloud.com/) and, accounting for the currency conversion and OVH's huge amount of free unmetered bandwidth (vs 20TB for H), it actually looks like OVH is even cheaper.

How can DO and Vultr even compete? Probably on the basis of their nicer dashboards and easier sign-up flow (especially OVH's)


I think Hetzner removed their traffic limit for dedicated servers https://www.hetzner.com/news/traffic-limit/


I don't know about Hetzner, but Vultr has had quite nice customer support.

When I wanted to do some non-standard things, I simply talked to Vultr to demonstrate that I had a clue, had a reason for asking that I could articulate, and they approved it right then.


Tried OVH once. Couldn't even finish the account setup. Was rather disappointed because I wanted not to depend on just one provider. Might give them a chance again.


I rented a VPS from the French website (their native language), and it was confusing. I remember looping through the same 2 pages 10 times before actually finding the instance's access instructions.

Of course, you don't have to care about the console once you have SSH access to the server. It seems to be pretty good service for the money.


Hetzner is German, not French.

OVH is French.


Sure, but who is Hetzner? My comment was about OVH.


Omg. Hetzner is similar to OVH and DigitalOcean.

It seems that I misread somewhere? Weird.

I think the sentence "how can DO and Vultr even compete" was my interpretation of the better offering of Hetzner (including uptime). Sorry.


I just recently signed up for some stupidly low-priced VPS on OVH and encountered no problems at all with the sign-up. The only issue I had was when I went to increase the RAM - it said the price was $1/month, but when I bought it I was charged $5 and it took support a week to get back to me to tell me I didn't read the small print properly and that it said they would have to upgrade the Windows license which was an extra fee.

tl;dr: watch the small print and their support is slow as fuck. otherwise, incredible value for money.


With the performance of these servers you have a huge margin for stupidity before you even notice any slowdowns.


I used their VPS service before and I didn't have to pay for overage, but now that it's part of their cloud offering, overage is charged.


Oooooooooh!


We are currently on AWS, and we have several dedicated servers on LeaseWeb that we offload computational work on. These are cheap beasts.

Still, I'd not run my RDBMS on an unmanaged, non replicated dedicated server and I'd not bother setting up multiple servers with failover, automated backups etc and keep updating them. Fuck that, I'll pay whatever AWS says RDS costs.


What are you using this for?


www.couriermanager.com - SaaS for courier companies, basically. The new server is the common pool instance - I have others for dedicated clients.


The real answer here is cost limiting. I don't want my cloud provider to keep working at the cost of an order of magnitude higher bill than I was expecting because of a bug in my code. I want to be able to set a billing limit and have them degrade or stop their service if I exceed the limit.

AFAIK AWS doesn't have that. They do have the ability to send me alerts if my bill is unexpectedly high, but they still keep working until I go bankrupt. It's possible to use those alerts to implement your own "broke man's switch", but they don't have it built in.


That's why we use DigitalOcean a lot in Africa. You know upfront how much you will spend.


You can calculate how much RDS is gonna cost you per month beforehand.

In fact, it is slightly cheaper at AWS.

Ondemand PostgreSQL, Single Node, 1vCPU, 1GB MEM, 10GB Storage is $15 at DO

Ondemand PostgreSQL, Single Node, 2vCPU, 1GB MEM, 10GB Storage is $14.29 at AWS (db.t3.micro at us-east-2)

if reserved for 1yr no upfront

Reserved PostgreSQL, Single Node, 2vCPU, 1GB MEM, 10GB Storage is $10.57 at AWS (db.t3.micro at us-east-2)

Or you can use ARM and go lower.

Ondemand PostgreSQL, Single Node, 2vCPU, 1GB MEM, 10GB Storage is $12.83 at AWS (db.t4g.micro at us-east-2)


My experience is that on AWS there are hidden costs. Paying for traffic, and other stuff.


Don’t use DB providers that charge for rows/data scanned. Use Amazon RDS or Google Cloud SQL or just install it yourself on a VM. Pay for CPU, memory, and storage instead.


Rent metal and run your own MySQL/Postgres/...

One insert every 3 seconds. Could run that off a 10 year old laptop.


Sure, but we're addressing people who are so far on the other end of the spectrum they're using a "serverless" database where they pay for the number of rows scanned per query. I think a managed DB is a better middle-ground for their capability level while still delivering massive cost-savings.

Amazon RDS lowest-tier runs about $13/mo for 10GB storage, 2 vCPUs and 1GB memory with automated backups and push-button restoring. And that would have likely met all of their needs with capacity to spare.


The time spent setting it up and managing it and then having to deal with backups/environment clones/access control/scaling limitations/etc. outweighs the savings for almost any company paying US wages. Especially since you'd need metal for everything and not just the db due to network latency.


I think you're overestimating how complicated that stuff is...


It's very easy to do it in a half-assed way and much harder to do it at scale in a production environment with many developers without hurting developer productivity at all.


Every cloud-hosted startup I've consulted for had a full-time devops guy wrangling Terraform and YAML files. The cloud requires an equivalent time investment.


Bare metal requires the equivalent of all of that devops stuff and then more. That is if you actually want parity and not just a half assed version that hurts developer productivity and causes technical debt.


They clearly don't have the skills for that. And at one insert every 3s, a managed service like RDS won't really cost much.


... until you forget to create an index, apparently.


Rows returned model works really well for certain data loads (where all data customers use is customer-keyed)....

This model also scales DOWN really well .. while still providing good scalable availability...

That said, I DO agree with the sentiment of paying for a set performance level (cpu, memory, storage) to provide predictable pricing... obviously these guys were bitten by the scaling capability.

I do a lot of pet projects, and I find DynamoDB works really well because my pet projects cost $0 most months... And I don't have to worry about servers, maintenance, or what not... I'm happy to do that at work, but I don't want that for my friends & fun projects... And I've not seen a decent managed RDS-style DB for <$5/month.


Disclosure: I used to work on Google Cloud.

This is why BigQuery offers both models and lets you control the caps [1].

Buying fixed compute is effectively buying a throughput cap. Hard Quotas provide a similar function, but aren't a useful budgeting tool if you can't set them yourself.

"Serverless" without limits is basically "infinite throughput, infinite budget" (though App Engine had quotas since day 1 and then budgets once charging was added). The default quotas give you some of that budget / throughput capping, but again if you can't lower them they might not help you.

Either way, BQ won't drop ingestion or storage though because almost nobody wants their data deleted. As a provider, implementing strict budgets is impossible without having a fairly complex policy "if over $X/second stop all activity, oh except let me still do admin work, like adding indexes? Over $Y/second delete everything". I think having user adjustable quotas and throughput caps per "dimension" makes more sense but it puts the burden on the user and no provider offers good enough user control over quota.

tl;dr: true budgets are hard to do, but every provider should strive to offer better quota/throughput controls.

[1] https://cloud.google.com/bigquery/pricing

[2] https://cloud.google.com/bigquery/docs/reservations-workload...


That pricing model seems rather inherently tricky to me, and also quite expensive. At $1.50 per 10 million rows read, this can get very expensive the moment you do a full table scan on any non-trivial table. And while this example is a trivial case where you only need minimal database knowledge to ensure that no full table scan is necessary, many real world cases are much more complex.

It also seems very expensive compared to just renting DBs by instance, if you put any real load onto this. I can see this being attractive if your use case only queries single rows by key, but it's essentially a big minefield for any query more complex than that. A database with a rather opaque query planner doesn't seem like a good fit for this kind of pricing.
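
Rough back-of-the-envelope with purely illustrative assumptions (not the article's actual workload): a 1M-row table scanned in full by a query that runs once a minute works out to

    1,000,000 rows/scan x (60 x 24 x 30) scans/month = 43.2 billion rows/month
    43.2 billion / 10 million x $1.50 ≈ $6,500/month

for something that, with an index, might read a few thousand rows and cost cents.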


I agree with this. You are also only one bug in the query planner away from going bankrupt. Imagine Planetscale upgrading to a version which contains a small edge-case bug and now you owe them tens of thousands because of it.


If we caused a bug that did that we would refund the customer of course.


I'm not a DB expert, but "750k users in a month." doesn't sound like a quantity that you'd need to use some kind of fancy special tooling for.


In a world where Juicero raised $120m, selling overengineered solutions for simple problems is not necessarily a bad idea.


The problem isn't selling (there are plenty of dubious or badly-priced products around); it's that someone thought buying said product was a good idea.


I was thinking the same. It sounds like this could be done on a private virtual server for less than $100/month.


It's far from big data even if they grow 10x in the next year, but if you are unfamiliar with database migrations and branches I can see the appeal of the product.


Are StackOverflow topics now eligible for HN as soon as you mention the savings? Or is mentioning some numbers about the users enough? Or did I just click on an advertisement? So many questions.


Gotta love cloud pricing. This is why I colocate.


I've seen things you people wouldn't believe. Millions burnt on consultants and licensing Oracle. I watched C series startups throwing it all away in a move to NoSQL. All those Amazon RDS fees will be lost in time.


> I watched C series startups throwing it all away in a move to NoSQL

have you forgotten that MongoDB is web scale?

https://www.youtube.com/watch?v=b2F-DItXtZs


I felt a great disturbance in the Billing, as if millions of rows suddenly cried out in read and were suddenly repeated. I fear something terrible has happened.


Like tears in the rain


That last metaphor was added by the actor himself... the director asked him to show some humanity, and now it's history.


Actually, the whole part was added by the actor, not just the last metaphor.



These two might just be the best comments I have ever seen on HN


Time to die


"All those Amazon RDS fees will be lost in time" ..like tears in rain?


I knew the cadence of this sentence sounded familiar.

Nice!


I was actually considering PlanetScale, but them saying "Every time a query retrieves a row from the database, it is counted as a row read." when it's actually all the scanned rows sounds intentionally confusing. "Retrieving" sounds like it should count only the rows returned by a query.
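
For what it's worth, plain MySQL will show you the gap between the two numbers directly; a rough sketch against a hypothetical `users` table:

    FLUSH STATUS;
    SELECT * FROM users WHERE email = 'a@example.com' LIMIT 1;  -- returns at most 1 row
    SHOW SESSION STATUS LIKE 'Handler_read%';
    -- With no index on email, Handler_read_rnd_next ends up roughly equal to
    -- the table size: that's the "rows read" you'd be billed for, not the
    -- single row the query returned.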


$0.15 per query... The world has gone insane.


It's not quite so crazy when you phrase it as $0.15 for reading a million items from the database.


25 years ago I built and ran a biggish database system that supported a reservation system.

Even given the limitations of the time (RDBMS cost, 9 GB disks, Sun kit, etc.), our cost of goods sold for that type of workload was orders of magnitude lower, at scale. Today, I could probably run that company off my MacBook Pro and have room to spare.

That said, the rationale for choosing this technology is cute: “After seeing a ton of the best GitHub engineers end up at PlanetScale and seeing the process GitHub went through to issue simple migrations, we chose to use their service.”

If you use the same methodology to choose a database that the public uses to choose between Bud Light, Miller Lite, and Coors Light, expect a suboptimal outcome.


I have seen so much over-engineered startup shit costing thousands in AWS fees that could run faster on my laptop.


Lol funny that you say that.

I’m not a developer by trade (mostly an email/Excel/PowerPoint jockey these days), but I'm a local SME on a few things, one of which was absolutely critical for a very key project.

I was asked to mock up a prototype of a core process that produced correct outputs. Dusted off my old toolbox and mocked it up in a combo of Python and bash. Probably a total of 900-1000 lines of “code”. The mock-up, running on some little VM, ended up outperforming the production solution for quite some time! :)


No that's still crazy. Scanning a million items isn't a big workload.


I'm not saying that it's not crazy. Just that it's less crazy.


Fair enough!


I agree, it might be distributed in some way which is driving up the cost?


The greed is distributed across the cloud provider's C-suite for sure. To be fair to them, they refunded the bill afterwards according to the article, but IMO we should not be accepting this kind of pricing model as normal.

A million items isn't a big deal, distributed or not. If anything, if your distributed architecture makes reading a million items more costly than a single machine doing it then it's time to go back to the drawing board.


They mention that PlanetScale uses Vitess, so indeed it might be distributed


One of my employers was using BigQuery. I was so scared that I might accidentally run queries and get a big bill, even though our tables weren’t that big.

It is funny to look back, but getting huge bills without even realizing that we’re doing something wrong is a very real possibility. Cloud vendors happily make their pricing opaque as it benefits them.

I’d avoid even the best product in the industry, if their pricing is opaque. Or if there is a “Contact Us” button when there needn’t be.


Still crazy, charging $0.15 for probably less than a few ms of computing power.


If those million items need to be read from different servers -- as they might well, in a distributed database -- it's definitely not just a few ms of computing power.

For reference, reading a million items of up to 1 kB each costs $0.125 with on-demand dynamodb.


> For reference, reading a million items of up to 1 kB each costs $0.125 with on-demand dynamodb.

Is that the same counting method as PlanetScale's "row read"? That is, `select title from posts order by title limit 10` on a table with 10 million rows and no index on `title` would cost $1.25 per query?
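
For the MySQL half of that, presumably an index on the sort column brings it back down; a rough sketch (schema is made up):

    -- With an index on title, MySQL can walk the index in sorted order and
    -- stop after 10 entries instead of scanning and sorting all 10M rows.
    ALTER TABLE posts ADD INDEX idx_posts_title (title);
    EXPLAIN SELECT title FROM posts ORDER BY title LIMIT 10;
    -- the plan should now show "Using index" with no filesort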


DynamoDB doesn't have SQL queries; but yes, if you're performing an operation which reads 10 million items from ddb it will be absurdly expensive. It will also take an absurdly long time; by default ddb is limited to 40k read request units (= 80k eventually consistent reads of up to 1 kB) per second. Being so slow would probably make users realize they're doing something wrong.


20 of those queries give me a VPS at Hetzner for a month.

Or 1 query = 3 GB of snapshot storage for a month.


Why do people blindly jump on whatever stuff well-known people happen to use?

It's such a bad habit, like everyone jumping on git and getting burned, and now it's irreversible because of all the existing ecosystem.

How hard is it to just spin up a beefy cloud instance, run a MySQL of your own with whatever backup strategy you've got, and do things the way you know, rather than getting bitten by stuff you're not even familiar with?


Huh, learning about this "Superwall" product constitutes my horror-story-of-the-day. It's paywalling as a service, just what the industry needed. Thankfully it appears to be quarantined to iOS right now, but God does it feel like we're headed right back into Stallman's predictions about how SaaSS will ruin the landscape of commercial technology.


Two sides to every coin. Ethical developers need ways to make low prices work, which is impossible without good testing suites.


These days, if you're developing something for profit it's pretty hard to see your software as ethical. You're either trying to empower your user or trying to monetize them; the two will always fight one another and snuff the other out unless you, the developer, take a stand. I fully understand the market for proprietary software, but trying to define some ethical middle ground is just blatant lip service, nothing else.


It's better than ads though.


It's better than targeted ads built on intrusive tracking that also enables several other abusive business practices. It's not better than "good old fashioned ads". Heck, I'd even be OK with targeted ads if they could be done without the rest of the "destroy civil society" that seems to come along for the ride.


Even if you solve the privacy problem, there's still a problem with advertising which is that it's inherently at odds with the user's interests.

An advertising-funded product will always prioritize engagement - they want you to "engage" with the product even if it means degrading the experience intentionally, such as making a process manual or making it take more steps than necessary (so that you are exposed to more ads). The "destroy civil society" problem you mention is a direct consequence of the pursuit of engagement.

In contrast, with a paid product, the company's interests are directly aligned with yours and they have no incentive to intentionally degrade the experience or get in your way any more than necessary. They don't care about how much you "engage" with the product as long as the bill gets paid (if anything, the less you engage the better as it uses less server resources).


Paid products can still do all the same datamining that advertisers do, and there are markets for buying/selling that info (eg. Palantir). The truth is that all forms of monetization are inherently at-odds with the user's interests. Paying for an app doesn't magically make this friction go away, and it certainly doesn't reduce the incentive for developers to abuse your trust.


> there are markets for buying/selling that info (eg. Palantir)

A major reason for these markets is that the information can be used for advertising or marketing targeting. In a world where the majority of products/services are paid and the amount of advertising is significantly reduced there will be much less demand for this information, leading to lower prices and even lower payoff from selling this information, not to mention potential legal risks (GDPR, CCPA, etc).


Palantir's biggest customers are government entities and market researchers, not advertisers. In a world where advertising has been significantly reduced, their products would become more valuable, since regular analytics would become inaccessible. Compared to the data these corporate aggregators collect, ad fingerprinting seems trivial.

The biggest crux of this, though, is the fact that both of these monetization schemes are destructive. Paying for software simply doesn't make sense in most cases, as there aren't that many people who are developing novel solutions these days. That's the exact reason why advertising is so popular: the market knows that relying on your conscious contribution is unsustainable, so why should you believe otherwise?


You're assuming it's either/or. When I worked at eHarmony we had a monitor that would scroll user feedback submitted online near the developer area. One of the most common complaints was about ads being shown to paid users.

I asked a PM about this and was told that the money they made was too much to turn down.

Capitalism. It's why we can't have nice things.


I don't disagree with that, but that's still not a reason to turn down paid options. With ads, you are guaranteed to get a bad experience. With paid product, there is potential for a bad experience, but at least it's not the guaranteed default.

Regulation around ads is the only definitive solution, but in the meantime if there's a business model that's non-toxic I'm not going to hate on it even if technically someone could still misuse it.


Yeah, it cost me two bad months of high RDS fees to learn about indexes. $900 in total.

Then one night a bro showed me the magic of indexes. 5 minutes' worth of advice saved me hundreds of dollars per month in the future, and all he asked in return was some beer and chicken wings.

Now that is a good bro.

I'm happy to say I've paid it forward myself.


What a crazy way to do billing though. At larger scales (more rows, more customers, more queries) the costs become absolutely insane.


Thanks. At least I'll never use PlanetScale. A good service should have configuration that lets me get alerts on, or prevent, these kinds of money-wasting cases.

Imagine how much wasted $$$ they've earned from common-knowledge mistakes that they should be preventing for customers instead.


It amazes me that things as basic and fundamental as understanding how indexes work are so often overlooked or not leveraged.


One shouldn't assume people know anything (even the most basic thing) about databases just because they say they do.


I hate this pricing model

My company onboarded `fivetran` to source data from different tools.

The budget got exhausted just sourcing `iterable` data.


Let me understand please. These people are selling a commercial product and their team has no idea whatsoever of what an index is? And this is news?


It sounds a bit more like they were confused by the automatic index creation for foreign keys they expected to be there. So they probably knew they'd need an index, just assumed this was implicit in the foreign key.
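
That expectation isn't unreasonable: stock MySQL/InnoDB creates an index on the referencing column when you add a foreign key constraint, but as far as I know PlanetScale/Vitess doesn't support FK constraints at all, so the index has to be added explicitly. A rough sketch with made-up table names:

    -- Stock MySQL/InnoDB: adding this FK also creates an index on
    -- posts.user_id if one doesn't already exist.
    ALTER TABLE posts
      ADD CONSTRAINT fk_posts_user FOREIGN KEY (user_id) REFERENCES users (id);

    -- On a platform without FK constraints, nothing is implicit:
    ALTER TABLE posts ADD INDEX idx_posts_user_id (user_id);
    -- otherwise SELECT ... WHERE user_id = ? is a full scan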


Well, this was an embarrassing read.


TL;DR: the author forgot to create indexes in a cloud-based MySQL database and paid too much for queries that ran as full-table scans.

Interestingly enough, some DBs (like Cassandra) refuse scan-type queries unless specifically asked to allow them. I wonder if cloud-based DBs that charge per row inspected could have such a mode... though of course their incentive is not to.
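
MySQL actually has a crude, old-school knob in this direction: if the optimizer estimates a SELECT will examine more rows than `max_join_size`, it refuses to run it. Something like (the threshold is arbitrary):

    -- Abort SELECTs the optimizer thinks will examine more than N rows.
    SET SESSION max_join_size = 1000000;
    SET SESSION sql_big_selects = 0;
    -- An unindexed scan over a bigger table now fails with
    -- "The SELECT would examine more than MAX_JOIN_SIZE rows..."
    -- instead of quietly racking up billable rows read.

It's based on the optimizer's estimate, so it's rough, but it would have turned a surprise bill into an error message.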


SSH into an EC2 instance, install MySQL, and you'll never pay more than $7.50 a month!


You need to spend a lot more on AWS if you want good performance.


But I'm talking about the contrapositive, where if you don't need good performance, you don't need to spend 5k.



