
> It looks like Rust's cancellation model is far more blunt, if you are just allowed to drop the coroutine.

You can only drop it if you own it (and nobody has borrowed it), which means you can only drop it at an `await` point.

This effectively means you need to use RAII guard objects like in C++ in async code if you want to guarantee cleanup of external resources. But it's otherwise completely well behaved with epoll-based systems.
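
For illustration, a minimal sketch of such a guard, assuming tokio (the "lease" is made up; real code would release a lock, connection, or file):

    struct LeaseGuard {
        id: u64,
    }

    impl Drop for LeaseGuard {
        fn drop(&mut self) {
            // Runs even when the future holding the guard is dropped
            // (i.e. cancelled) at an `await` point.
            println!("releasing lease {}", self.id); // stand-in for real cleanup
        }
    }

    async fn use_resource() {
        let _guard = LeaseGuard { id: 42 };
        // If the caller drops this future while it is suspended here,
        // the guard's Drop impl still runs.
        tokio::time::sleep(std::time::Duration::from_secs(1)).await;
    }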

I find that a bigger issue in my async Rust code is using Tokio-style async "streams", where a cancelled sender looks exactly like a clean "end of stream". In this case, I use something like:

    enum StreamValue<T> {
        Value(T),
        End,
    }
If I don't see StreamValue::End before the stream closes, then I assume the sender failed somehow and treat it as a broken stream (sort of like a Unix EPIPE error).

This can obviously be wrapped. But any wrapper still requires the sender to explicitly close the stream when done, and not via an implicit Drop.
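
For illustration, a minimal sketch of the receiving side under this convention, assuming a tokio mpsc channel:

    use tokio::sync::mpsc;

    enum StreamValue<T> {
        Value(T),
        End,
    }

    async fn drain<T>(mut rx: mpsc::Receiver<StreamValue<T>>) -> Result<Vec<T>, &'static str> {
        let mut items = Vec::new();
        while let Some(v) = rx.recv().await {
            match v {
                StreamValue::Value(t) => items.push(t),
                // The sender explicitly finished: a clean end of stream.
                StreamValue::End => return Ok(items),
            }
        }
        // The channel closed without an explicit End: the sender was
        // dropped (cancelled or crashed), so treat it like EPIPE.
        Err("broken stream")
    }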


> This effectively means you need to use RAII guard objects like in C++ in async code if you want to guarantee cleanup of external resources. But it's otherwise completely well behaved with epoll-based systems.

Which limits cleanup after cancellation to be synchronous, doesn't it? I often use asynchronous cleanup logic in Python (which is the whole premise of `async with`).


Correct. Well, you can dump it into a fast sync buffer and let a background cleanup process do any async cleanup.
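
A minimal sketch of that shape, assuming tokio; `close_remote_connection` is a hypothetical async teardown call:

    use tokio::sync::mpsc;

    struct Cleanup {
        conn_id: u64,
        tx: mpsc::UnboundedSender<u64>,
    }

    impl Drop for Cleanup {
        fn drop(&mut self) {
            // Fast and synchronous: just enqueue the work. If the cleanup
            // task has already exited, there is nothing left to do anyway.
            let _ = self.tx.send(self.conn_id);
        }
    }

    // Spawned once at startup; does the actual async cleanup.
    async fn cleanup_task(mut rx: mpsc::UnboundedReceiver<u64>) {
        while let Some(conn_id) = rx.recv().await {
            close_remote_connection(conn_id).await; // hypothetical
        }
    }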

Sync Rust is lovely, especially with a bit of practice, and doubly so if you already care about how things are stored in memory. (And caring how things are stored is how you get speed.)

Async Rust is manageable. There's more learning curve, and you're more likely to hit an odd corner case where you need to pair for 30 minutes with the team's Rust expert.

The majority of recent Rust networking libraries are async, which is usually OK. Especially if you tend to keep your code simple anyway. But there are edge cases where it really helps to have access to Rust experience—we hit one yesterday working on some HTTP retry code, where we needed to be careful how we passed values into an async retriable block.


Yeah, "safe Rust" is officially allowed to leak memory and other resources.

- The easiest way to do this is mem::forget, a safe function which exists to leak memory deliberately.

- The most common real way to leak memory is to create a loop using the Rc<T> or Arc<T> reference count types. I've never seen this in any of my company's Rust code, but we don't write our own cyclic data structures, either. This is either a big deal to you, or a complete non-issue, depending on your program architecture.
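
For illustration, a minimal cycle that safe Rust will happily leak:

    use std::cell::RefCell;
    use std::rc::Rc;

    struct Node {
        next: RefCell<Option<Rc<Node>>>,
    }

    fn make_cycle() {
        let a = Rc::new(Node { next: RefCell::new(None) });
        let b = Rc::new(Node { next: RefCell::new(Some(Rc::clone(&a))) });
        // Close the cycle: a -> b -> a. After `a` and `b` go out of scope,
        // each node's refcount is still 1, so neither is ever freed.
        *a.next.borrow_mut() = Some(Rc::clone(&b));
    }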

Basically, "safe Rust" aims to protect you from memory corruption, undefined behavior, and data races between threads. It does not protect you from leaks, other kinds of race conditions, or (obviously) logic bugs.


I remember speaking to several of the people who worked on America's Army back around 2007, when Counter-Strike was all the rage. I think this was at the GDC.

The people I spoke with were from the Army. And they found CS-style games agonizing to watch. So many people running around with no plan, so much friendly fire, so many unrealistic tactics. You could practically see them shudder.

They also had some people who worked in logistics. I remember one of them saying, "If the United States decides to invade a country, the software we wrote could calculate how much toilet paper we'd need."


Of course it is agonizing; CS doesn't try to replicate real-world scenarios. In real life, if you die, you die; you don't get to respawn with $800.


You sure?


I haven't tried it myself, but I've heard that this is indeed the case.


I guess it depends on what you mean by "better graphics"? They've upgraded their graphics several times over the years. I'm playing on a 4K monitor and they're using most of those pixels. I can see a small amount of interpolation at full zoom at 4K.

If you zoom in, the graphics are high-res 2D sprites, rendered from 3D models. And the level of detail can be ridiculous. From this week's Factorio 2.0 update (and Space Age add-on), here's an example of the zoomed-in detail: https://www.factorio.com/blog/post/fff-396 See the foundry animations? Those videos are actually slightly more blurred than the in-game version. And the sound effects are synced to specific animation frames.

So the world of Factorio is oftentimes brown and grim and covered in grime, but that's a conscious artistic choice. (Not all of the new planets are brown. Gleba is green and iridescent and frankly creepy.) Similarly, Factorio's 2D nature has allowed the developers to focus on gameplay and quality-of-life more than many newer games in the genre. If you want to build big, intricate factories with complex train networks, for example, Factorio really shines.

If anyone would like a game with 3D graphics, or a different graphics style, try:

- Satisfactory: The 3D world is gorgeous, and Satisfactory shines at "walk around inside your factory and tinker with it." Gameplay-wise, it has only recently gained blueprinting tools that allow working at a medium level of abstraction.

- Shapez 2.0: This is pretty and colorful and full of great little puzzles. It occupies a different part of the game-design space and is just a joy to play.

(Dyson Sphere Program and Captain of Industry also have great gameplay, but I don't know if their graphics are likely to grab people who find Factorio graphically underwhelming.)


> Factorio wouldn't exist without Dwarf Fortress?

According to Factorio's developers, they were heavily inspired by Minecraft factory mods like IndustrialCraft. Minecraft is commonly said to have been inspired by Infiniminer.

I'm not aware of Infiniminer being inspired by Dwarf Fortress.

RimWorld is basically "Dwarf Fortress on an alien planet", although the simulation doesn't go as ridiculously deep. RimWorld has in turn inspired a bunch of attempts to do "RimWorld on a spaceship," but none so far have achieved the status of being a "classic".


Minecraft was inspired by both Infiniminer and Dwarf Fortress.


Some animals unfortunately pose a threat to human life.

Where we live on the Vermont/New Hampshire state line, we've had a several hundred percent increase in problematic black bears over the last 3-4 years. Normally, black bears are "more afraid of us than we are of them", and they avoid human contact. But once they discover that human houses represent food sources, well, they are 250-pound predators.

We've had a number of serious incidents in the last decade. A couple of examples:

- Woman attacked in her own home, loses eye: https://www.theguardian.com/world/2018/aug/18/black-bear-att...

- A bear ripped a hole into the exterior wall of a kitchen to gain entry. Sorry, I can't find the photo for this right now, but it was similar to the exit hole in this article: https://www.chestertelegraph.org/2024/08/14/plenty-of-bear-s...

Right now, some of our friends are dealing with a black bear that has repeatedly loitered on their porch. They have toddlers, pets and farm animals. And that bear isn't showing much fear of humans at all, which is a serious warning sign.

Vermont has asked anyone who encounters an aggressive bear to report it to the game wardens. They have a process for evaluating the situation. But often, the only good answer is for the wardens to shoot the bear. When possible, people would prefer to leave this to the wardens rather than shoot the bear themselves.

If you live in bear country, remember, "a fed bear is a dead bear." Do not leave food sources where bears can find them and learn that houses are a food source. When this happens, it puts human safety at risk, and it all too often means the bear will need to be shot by a warden.


Let's assume you're not a FAANG, and you don't have a billion customers.

If you're gluing microservices together using distributed transactions (or durable event queues plus eventual consistency, or whatever), the odds are good that you've gone far down the wrong path.

For many applications, it's easiest to start with a modular monolith talking to a shared database, one that natively supports transactions. When this becomes too expensive to scale, the next step may be sharding your backend. (It depends on whether you have a system where users mostly live in their silos, or where everyone talks to everyone. If your users are siloed, you can shard at almost any scale.)

Microservices make sense when they're "natural". A video encoder makes a great microservice. So does a map tile generator.

Distributed systems are expensive and complicated, and they kill your team's development velocity. I've built several of them. Sometimes, they turned out to be serious mistakes.

As a rule of thumb:

1. Design for 10x your current scale, not 1000x. 10x your scale allows for 3 consecutive years of 100% growth before you need to rebuild. Designing for 1000x your scale usually means you're sacrificing development velocity to cosplay as a FAANG.

2. You will want transactions in places that you didn't expect.

3. If you need transactions between two microservices, strongly consider merging them and having them talk to the same database (see the sketch below).

Sometimes you'll have no better choice than to use distributed transactions or durable event queues. They are inherent in some problems. But they should be treated as a giant flashing "danger" sign.
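
As a sketch of point 3, assuming sqlx and Postgres (the tables are made up): once the two "services" share one database, a cross-domain write is just an ordinary transaction.

    use sqlx::PgPool;

    async fn place_order(pool: &PgPool, user_id: i64, amount: i64) -> sqlx::Result<()> {
        let mut tx = pool.begin().await?;
        // What used to live in an "orders" service...
        sqlx::query("INSERT INTO orders (user_id, amount) VALUES ($1, $2)")
            .bind(user_id)
            .bind(amount)
            .execute(&mut *tx)
            .await?;
        // ...and what used to live in a "billing" service.
        sqlx::query("UPDATE accounts SET balance = balance - $1 WHERE user_id = $2")
            .bind(amount)
            .bind(user_id)
            .execute(&mut *tx)
            .await?;
        tx.commit().await // both writes land or neither does
    }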


I would add that adding these things does not mean you can scale afterwards. All the microservices implementations I've seen so far are bolted on top of some existing layer of mud, and serve only to turn function calls that used to happen inside a process into calls over the network, with added latency and other overheads. The end result is that aggregate latency and cost only increase, with no real scalability improvement.

Various engineering leads, happy with what went on their resume, leave and tell everyone how they increased scalability. And so they persuade another generation into the same failures.


I agree with this.

Having worked at FAANG, I'm always excited by the fanciness that is required. Spanner is the best database I've ever used.

That said, my feeling is that in the real world, you just pick Postgres and forget about it. Let's Encrypt issues a huge share of the TLS certs on the Internet with one beefy SQL database. Computers are HUGE these days. By the time a 128 core machine isn't good enough for your app, you will have sold your shares and will be living on your own private island or whatever. If you want to wrap sqlite in raft over the weekend for some fun, sure, do that. But don't put it in prod.


Agreed. A place I worked at needed 24/7 uptime for financial transactions, and we still managed to keep scaling a standard MySQL database, despite it getting hammered, over the course of 10 years, up to something like 64 cores and 384 GB of RAM on a large EC2 instance.

We did have to move reporting off to a non-SQL solution because it was too slow to do in realtime, but that was a decision based on evidence of the need.


Pretty great advice!

I think the one thing you can run into that is hard is once you want to support different datasets that fall outside the scope of a transaction (think events/search/derived-data, anything that needs to read/write to a system that is not your primary transactional DB) you probably do want some sort of event bus/queue type thing to get eventual consistency across all the things. Otherwise you just end up in impossible situations when you try to manage things like doing a DB write + ES document update. Something has to fail and then your state is desynced across datastores and you're in velocity/bug hell.

The other side of this though is once you introduce the event bus and transactional-outbox or whatever, you then have a problem of writes/updates happening and not being reflected immediately.

I think the best things that solve this problem are stuff like Meta's TAO that combines these concepts, but no idea what is available to the mere mortals/startups to best solve these types of problems. Would love to know if anyone has killer recommendations here.


I think the question is whether you need the entire system to be strongly consistent, or just the core of it.

To use ElasticSearch as an example: do you need to add the complexity of keeping the index up to date in realtime, or can you live with periodic updates for search or a background job for it?

As long as your primary DB is the source of truth, you can use that to bring other less critical stores up to date outside of the context of an API request.


Well, the problem you run into is that you kind of want different datastores for different use cases, for example search vs. specific page loads, and you want to make both of those consistent, but you don't have a single DB that can serve both (oftentimes a primary DB + ElasticSearch, for example).

If you don't keep them consistent, you have user-facing bugs where a user can update a record but not search for it immediately. Or, if you try to load everything from ES to provide consistent views to a user, updates can disappear on refresh. Or, if you try to write to both SQL + ES in an API request, they can desync on a failure writing to one or the other.

The problem is even less the complexity of keeping the index up to date in realtime, and more that the ES index isn't even consistent with the primary DB; to a user, they are just different parts of your app that kinda seem a little broken in subtle ways, inconsistently. It would be great to have everything present a consistent view to users that updates together on write.


The way I solved it once was to try updating ES synchronously, and if that failed or timed out, queue an event to index the doc. The timeout case wasn't an issue, because a double update wasn't harmful.
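
A minimal sketch of that shape, assuming tokio; `Doc`, `index_doc`, and `enqueue_reindex` are hypothetical stand-ins:

    use std::time::Duration;
    use tokio::time::timeout;

    async fn update_search_index(doc: &Doc) {
        // Try the synchronous path with a deadline; fall back to the queue.
        match timeout(Duration::from_millis(500), index_doc(doc)).await {
            Ok(Ok(())) => {}                    // indexed inline
            _ => enqueue_reindex(doc.id).await, // timed out or failed: retry later
        }
        // A timed-out request may still land in ES, but a double update
        // is harmless because indexing the same doc twice is idempotent.
    }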


In instances like that I tend to push back on the requirement, for example with this classic DB + Elasticsearch case:

1. How often is a user going to perform an update and then search for the exact same thing immediately after?

2. Suppose they did: if elasticsearch was updated in the background, is the queue/worker running fast enough such that the user won't even notice a latency of a second or two max?

It really depends on what you're doing, because if Elasticsearch is operating as its own source of truth with data that the primary DB doesn't have, then yeah, you're going to have trouble keeping both strongly consistent in a transactional manner without layering on complexity (like sagas with transactions and compensations). But if it's merely a search engine on top of your source of truth (for example, you search ES to get a list of primary keys and then fetch all the data from the DB), you've got some breathing room.

I mean, we're talking plucky upstart here and not enterprise FAANG, so there's definitely a case for 'less is more'.


I think a different framing for the question might be more helpful. What is your overall goal? You cannot have everything. In fact, if you try to have everything, you will get nothing.

I would say that 99% of the time the implicit goal is to cut down development time. And the best way to cut development time in the long term is to cut down complexity.

To cut down complexity, we should avoid complex problems, use existing solutions to solve them or at least be able to contain them. Sometimes, the price is that you need to solve some easier problems yourself.

For example, microservice architectures promise that you need less coordination between teams, because parts of the systems can be deployed independently. The price is that you cannot use database transactions to guarantee integrity.

I think data integrity is almost always a much more important problem to solve, partly because it is so difficult to solve by yourself. Actually, it is often so difficult that most people just ignore it.

For example, if you adopt a microservices architecture, you often just ignore data integrity and call your system "eventually consistent". Practically, this means that you push the data integrity problems to the sink system.

It is better to think of data integrity as a meta-feature, rather than a feature. Having data integrity helps you in making other features of your system more simple. For example, migrating schema changes in your system is much more manageable if you use a database which can handle the migration within a transaction.

In your example, there are various ways the system can be left in an inconsistent state after a crash, even if the database is the "source of truth". For example, do you always reconstruct the ES cache after a crash? If not, how do you know whether it contains inconsistencies? Whose job is it to initiate the reconstruction? Etc.


I frequently see folks fail to understand that when the unicorn rocketship spends a month and ten of their hundreds of engineers replacing the sharded MySQL that was being set ablaze daily by overwhelming load, that is actually pretty close to the correct time for that work. Sure, it may have been stressful, and customers may have been impacted, but it's a good problem to have. Conversely, not having that problem maybe doesn't really mean anything at all, but there's a good chance it means you were solving scaling problems prematurely.

It's a balancing act, but putting out the fires before they even begin is often the wrong approach. Often a little fire is good for growth.


You really have to be doing huge levels of throughput before you start to struggle with scaling MySQL or Postgres. There really aren't many workloads that actually require strict ACID guarantees _and_ produce that level of throughput. 10-20 years ago I was running hundreds to thousands of transactions per second on beefy Oracle and Postgres instances, and the workloads had to be especially big before we'd even consider any fancy scaling strategies, and there wasn't some magic tipping point where we'd decide that some instance had to go distributed all of a sudden.

Most of the distributed architectures I've seen have been led by engineers' needs (to do something popular or interesting) rather than an actual product need, and most of them have had issues relating to poor attempts to replicate ACID functionality. If you're really at the scale where you're going to benefit from a distributed architecture, the chances are eventual consistency will do just fine.


Great advice. Microservices also open the door to polyglot, so you lose the ability to even ensure that everyone uses/has access to/understands the things in a common libCompany that make it possible for anyone to at least make sense of the code.

When I talk to people who did microservices, I ask them "why is this a service separate from this?"

I have legitimately - and commonly - gotten the answer that the dev I'm talking to wanted their own service.

It's malpractice.


> Microservices also open the door to polyglot

While I see your point about the downsides of involving too many languages/technologies, I think the really key distinction is whether a service has its own separate database.

It's really not such a big problem to have "microservices" that share the same database. This can bring many of the benefits of microservices without most of the downsides.

Imo it would be good if we had some common terminology to distinguish these approaches. It seems like a lot of people are creating services with their own separate databases for no reason other than "that's how you're supposed to do microservices".


Microservices with their own database are often a terrible design choice, but I will grant you it is one of the two dimensions that make sense:

1. is there a _very significant_ difference in the horizontal scaling footprint/model or demand (and demand is _only relevant_ if there is static footprint)? Home the function with a similar service, if there is one, otherwise yes, add a service.

2. is there a _genuine_ and _legitimate_ need for a different database, with completely independent schema, and not some horrible bullshit where you will end up doing cross-party transactions (and almost always failing to do so well)? Are you sure you need a different database or is your team just afraid of SQL and schema management (the usual mongodb garbage)? Is the issue that you don't understand how database security works? .. if all of these pass muster, then yes, ok, that's an OK reason.

Every architecture I've seen since 2013 at startups and big companies alike (since I do technical diligence professionally as a side gig) has been microservices or simple CRUD.

Almost all of the microservices ones were totally fucking wrong and had to be massively reworked, and usually multiple times, because they had no thesis at all for what they were doing and it was often - even mostly - an excuse not to have to learn their tools beyond tutorial level and/or a desire to play with new shiny or not read someone else's code. The CRUD guys were fine, they just did business until they needed to add caching, and so on, like real products.


> 1. Design for 10x your current scale, not 1000x.

I'd even say that the advice counts double for early stage startups. That is, at that scale, it should be design for 5x.

You could spend years building a well architected, multi-tenanted, microserviced system, whose focus on sound engineering is actually distracting your team from building core solutions that address your clients' real problems now. Or, you could instead redirect that focus on first solving those immediate problems with simplistic, suboptimal, but valid engineering.

An early stage solopreneur could literally just clone or copy/paste/configure their monolith in a new directory and spawn a new database every time they have a new client. They could literally do this for their first 20+ clients in their first year. When I say this, some people look at me in disbelief. Some of them have yet to make their first sale. Instead, they're working on solutions to counter the anticipated scalability issues they'll have in two years, when they finally start to sell and become a huge success.

For another few, copy/paste/createdb seems like a stroke of genius. But I'm not a genius, I'm just oldish. Many companies did basically this 20 years ago and it worked fine. The reason it's not even considered anymore seems to be a cultural amnesia/insanity that's made certain practices arcane, if not taboo altogether. So we tend to spontaneously reach for the nuclear reactor, when a few pieces of coal would suffice to fuel our current momentum.


> spawn a new database every time they have a new client.

I've seen this work great for multitenancy, with sqlite (again, a single beefy server goes a long way). At some point though, you hit niche scaling issues and that's how you end up with, e.g., arcane sqlite hacks. Hopefully these mostly come from people who have found early success, or looked at by others who want reassurance that there is an escape hatch that doesn't involve rewriting everything for web scale.


This advice is good for people at the top: VPs/CIOs/CTOs. Because the mandate for microservices comes from up top. Doing anything else is either not approved by enterprise architecture or needs to be justified against much more powerful higher-ups.

Here I have services running with high performance, low resource usage, and few errors, but the only feedback I get is how soon we can break them into microservices and how soon we can get into the cloud.


I wonder if anyone has tried long-lasting transactions passed between services?

Like, imagine you have a postgres pooler with an API so you can share one postgres connection between several applications. Now you can start a transaction in one application, pass its ID to another application, and commit it there.

Implement queues using postgres, use the same transaction for both business data and queue operations, and some things become easier.
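
A sketch of the enqueue half, assuming sqlx and Postgres (schema made up): the job row commits atomically with the business row, and workers can later claim jobs with `SELECT ... FOR UPDATE SKIP LOCKED`.

    use sqlx::PgPool;

    async fn create_order_with_job(pool: &PgPool, user_id: i64) -> sqlx::Result<()> {
        let mut tx = pool.begin().await?;
        let (order_id,): (i64,) =
            sqlx::query_as("INSERT INTO orders (user_id) VALUES ($1) RETURNING id")
                .bind(user_id)
                .fetch_one(&mut *tx)
                .await?;
        // Enqueued atomically with the order: the job exists if and only if
        // the order does.
        sqlx::query("INSERT INTO jobs (order_id) VALUES ($1)")
            .bind(order_id)
            .execute(&mut *tx)
            .await?;
        tx.commit().await
    }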


> For many applications, it's easiest to start with a modular monolith talking to a shared database, one that natively supports transactions.

I don't think this handles "what if my app is a wrapper on external APIs and my own database".

You don't get automatic rollbacks with API calls the same way you do database transactions. What to do then?


You have a distributed system on your hands at that point, so you need idempotent processes + reconciliation/eventual consistency. Basically, thinking a lot about failure, resyncing data/state, and patterns like transactional outboxes, durable queues, two-phase commit, etc. It quickly gets into the specifics of your task/system/APIs, so it's hard to give general advice.

Most apps do not solve these problems super well for a long time unless they are in critical places like billing, and even then it might just mean weird repair jobs, manual APIs to resync stuff, audits, etc. Usually an event bus/queue or related DB table for the idempotent work, plus some async process validating that table, can go a long way, though.
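
For the transactional-outbox piece, a sketch of the publisher loop, assuming sqlx/Postgres; `publish` is a hypothetical bus call that must tolerate at-least-once delivery:

    use sqlx::PgPool;
    use std::time::Duration;

    async fn publish_outbox(pool: &PgPool) -> sqlx::Result<()> {
        loop {
            let mut tx = pool.begin().await?;
            // SKIP LOCKED lets multiple workers run without stepping on
            // each other's rows.
            let row: Option<(i64, String)> = sqlx::query_as(
                "SELECT id, payload FROM outbox WHERE published_at IS NULL \
                 ORDER BY id LIMIT 1 FOR UPDATE SKIP LOCKED",
            )
            .fetch_optional(&mut *tx)
            .await?;
            match row {
                Some((id, payload)) => {
                    publish(&payload).await; // hypothetical; may repeat on crash
                    sqlx::query("UPDATE outbox SET published_at = now() WHERE id = $1")
                        .bind(id)
                        .execute(&mut *tx)
                        .await?;
                    tx.commit().await?;
                }
                None => {
                    drop(tx); // nothing to do; poll again shortly
                    tokio::time::sleep(Duration::from_secs(1)).await;
                }
            }
        }
    }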


Sure sure sure

...but microservices are used as a people-organization technique.

Once you're there you'll run into a situation where you'll have to do transactions. Might as well get good at it.


I disagree rather strongly with this advice. Mostly because I’ve spent almost a decade earning rather lucrative money on cleaning up after companies and organisations which did it. Part of what you say is really good advice, if you’re not Facebook then don’t build your infrastructure as though you were. I think it’s always a good idea to remind yourself that StackOverflow ran on a few IIS servers for a long while doing exactly what you’re recommending that people do. (Well almost anyway).

Using a single database always ends up being a mess. Ok, I shouldn't say always, because it's technically possible for it not to happen. I've just never seen the OOP people not utterly fuck up the complexity in their models. It gets even worse when they've decided to use stored procedures or some magical ORM whose underlying workings not everyone understood. I think you should definitely separate your data as much as possible. Even small-scale companies will quickly struggle scaling their DBs if they don't, and it'll quickly become absolutely horrible if you have to remove parts of your business. Maybe they are sunsetted, maybe they get sold off, whatever it is. With that said, however, I think you're completely correct about not doing distributed transactions. I think that both you and the author are completely right that if you're doing this, then you're building complexity you shouldn't be building until you're Facebook (or maybe when you're almost Facebook).

A good micro-service is one that can live in total isolation. It'll fulfill the need of a specific business domain, and it should contain all the data for this. If that leaves you with a monolith and a single shared database, then that is perfectly fine. But if you can split it up, do. Say you have solar plants which are owned by companies, but as far as the business goes, a solar plant and a company can operate completely independently; then you should absolutely build them as two services. If you don't, then you're going to start building your mess once you need to add wind plants or something different. Do note that I said this depends on the business needs. If something like the individual banking accounts of company owners is relevant to the greenfield workers and asset managers, then you probably can't split up solar plants and companies. Keeping things separate like this will also help you immensely as you add on business intelligence and analytics.

If you keep everything in a single "model store", then you're eventually going to end up with "oh, only John knows what that data does" while needing to pay someone like me a ridiculous amount of money to help your IT department get to a point where they are no longer hindering your company's growth. Again, I'm sure this doesn't have to be the case, and I probably should just advise people to do exactly as you say. In my experience it's an imperfect world, and unless you keep things as simple as possible with as few abstractions as possible, you're going to end up with a mess.


I wanted to respond to you, because you had some excellent points.

> Mostly because I’ve spent almost a decade earning rather lucrative money on cleaning up after companies and organisations which did it.

For many companies, this is actually a pretty successful outcome. They built an app, they earned a pile of money, they kept adding customers, and now they have a mess. But they can afford to pay you to fix their mess!

My rule of thumb of "design for 10x scale" is intended to be used iteratively. When something is slow and miserable, take the current demand, multiply it by 10, and design something that can handle it. Sometimes, yeah, this means you need to split stuff up or use a non-SQL database. But at least it's a real need at that point. And there's no substitute for engineering knowledge and good taste.

But as other people have pointed out, people who can't use an RDBMS correctly are going to have a bad time implementing distributed transactions across microservices.

So I'm going to stick with my advice to start with a single database, and to only pull things out when there's a clear need to scale something.


Well, I guess the side of the argument that's missing from my anecdotal experience is that monoliths are what I work on because they were the trend. It's probably easier to fix the complicated mess of a monolith than the complicated mess of micro-services done wrong.


All that you say is true, and the people who do that are THE LEAST capable of taking on the MUCH harder challenges of microservices.

I work in the ERP space and interact with dozens of systems, and I see horrors that some people only know as fairy tales.

Without exception, staying in an RDBMS is the best option of all. I have seen the cosmic horrors of what people who struggle with an RDBMS do when moved to NoSQL and the like, and it is always much worse than before.

And all that you say, true as it is, hides the real (practical) solution: learn how to use the RDBMS, use SQL, remove complexity, and maybe put the thing in a bigger box.

All of that is symptoms barely related to the use of a single database.


What I dislike about single databases is that it's too easy for people to build unnecessary relationships. You obviously don't have to do it, and there are a lot of great tools to separate data. That's not what people are going to do on a Thursday afternoon after a day of horrible meetings, though. They're going to take shortcuts and mess things up if it's easy to do so. Having multiple databases in isolation, and they can all be SQL (and should be, if that's what your developers know), is to protect you from yourself, not so much because it's a great idea technically.


But it is the same if you have many databases. Only now the problem has spread!

Maybe it's because we are in different niches? In mine, I have never seen microservices bring ANY improvement over the norm, and there most certainly are far more negatives.

However, what is more the norm is making a 2/3-tier system from a monolith, and that can be better.

P.S.: In the ERP/business space you can have many whole apps, with ETL in the middle orchestrating. That may improve things, because the quality of each app varies, but what is terrible is to split apps into microservices. That is a bridge too far.


I agree with starting with a monolith and a shared database. I've done that in the past quite successfully. I would just add that if scaling becomes an issue, I wouldn't consider sharding my first option; it's more of a last resort. I would prefer to scale the shared database vertically and optimize it as much as possible. Also, another strategy I've adopted is avoiding `JOIN` or `ORDER BY`, as they stress your database's precious CPU and IO. `JOIN` also adds coupling between tables, which I find hard to refactor once done.


I don't understand: how do you avoid JOIN and ORDER BY?

Well, with ORDER BY, if your result set is not huge, sure, you can just sort it on the client side. Although sorting 100 rows on the database side isn't expensive. But if you need, say, the latest 500 records out of a million (a very frequent use case), you have to sort on the database side. Also, with proper indices, the database can sometimes avoid any explicit sort.

Do you just prefer to duplicate everything in every table instead of JOINing them? I did some denormalization to improve performance, but that was more like the last thing I would do if there's no other recourse, because it makes it very possible that the database will contain logically inconsistent data, and that causes lots of headaches. Fixing bugs in software is easier. Fixing bugs in data is hard and requires lots of analytic work and sometimes manual work.


I think a better maxim would be to never have an un-indexed ORDER BY or JOIN.

A big part of what many "nosql" databases that prioritize scale are doing is simply preventing you from ever running an adhoc un-indexed query.
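
For example, a sketch assuming sqlx/Postgres and a made-up events table: with a matching composite index, the "latest 500" query from the comment above becomes an index walk rather than a sort.

    use sqlx::PgPool;

    // With a composite index, Postgres can read the rows already in order:
    //   CREATE INDEX events_user_created_idx
    //       ON events (user_id, created_at DESC);
    async fn latest_events(pool: &PgPool, user_id: i64) -> sqlx::Result<Vec<(i64, String)>> {
        sqlx::query_as(
            "SELECT id, body FROM events WHERE user_id = $1 \
             ORDER BY created_at DESC LIMIT 500",
        )
        .bind(user_id)
        .fetch_all(pool)
        .await
    }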


When our kids were growing quickly, we went through a number of sub-$300 bikes, both new and gifted by family. I ended up doing about one repair every two weeks, including broken derailleurs, junky brakes, jammed wheels, you name it. And our kids did not abuse those bikes.

I ended up buying a bike stand and a basic toolkit just so I could fix those bikes quickly and get the kids back outside. The parts on those bikes were absolute garbage and the reliability was zero.

Meanwhile I have a medium/high-end mountain bike from 1997 that still has some original parts on it, despite having seen time as a daily commuter and a trail bike.

A good thing to look at is resale value. Around here, you can resell a $1200 mountain bike for a good price. But you'd be lucky to get much for an $800 bike.


Machiavelli's "The Prince" will give you a decent understanding of what people usually mean by "Machiavellian". The book explains what methods would allow an absolute ruler to stay in control of state. It does not generally make moral judgments about those methods.

Machiavelli's "Discourses" is the one that will really confuse a reader looking to understand the colloquial meaning of "Machiavellian". In this book, Machiavelli lays out a vision of a healthy "republic" (or more precisely, res publica) which benefits the people who live in it. Among other things, Machiavelli argues that republics actually benefit from multiple competing factions, and from some kind of checks and balances. Apparently these ideas influenced several of the people who helped draft the Constitution of the United States.

Now why Machiavelli had two such different books on how governments worked is another interesting question...


> Machiavellian

> adjective

> uk /ˌmæk.i.əˈvel.i.ən/ us /ˌmæk.i.əˈvel.i.ən/

> using clever but often dishonest methods that deceive people so that you can win power or control

(from https://dictionary.cambridge.org/dictionary/english/machiave... )

Ymmv, but I think that's far from the point of the book, and isn't even the main topic. It's hard for me to imagine taking a person who'd never heard the term, letting them read the book, and then asking them to propose a definition, would produce anything like the above.


Juries absolutely do make errors, including ones which result in innocent people being put to death.

In a number of cases, the Innocence Project has managed to find hard DNA evidence linking the actual murderer to the crime, saving people from death row. One of my friends worked on some of these cases. As former law enforcement, he was very much aware of the various ways the system can fail. Police officers commit perjury on the stand, "expert" witnesses use pseudoscience with zero factual basis (there are processes to prevent this, which have gotten slightly better), and there are shockingly terrible public defenders.

