
I have a large rails app that was plagued with slow specs using factory_bot. Associations in factories are especially dangerous given how easy it is to build up big dependency chains. The single largest speedup was noting that nearly every test was in the context of a user and org, and creating a default_user and default_org fixture.
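
Roughly the shape of it, for anyone curious (model and column names here are made up, not the real app):

    # spec/factories.rb -- sketch; User/Org and their columns are stand-ins
    FactoryBot.define do
      factory :project do
        name { "Example project" }
        # Look up the shared fixture rows instead of cascading user/org
        # creation through associations on every create(:project)
        user { User.find_by!(email: "default@example.com") }
        org  { Org.find_by!(name: "Default Org") }
      end
    end

    # spec/rails_helper.rb
    RSpec.configure do |config|
      # Fixture rows are inserted once and each example rolls back its
      # transaction, so default_user/default_org exist for every spec
      # without being re-created each time
      config.global_fixtures = [:users, :orgs]
    end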


That's a great example, thanks.

Then you just refer to the fixture in your factory definitions? Seems very reasonable.


there's a profiler that can show you what to focus on, probably fprof here: https://test-prof.evilmartians.io/ (been a while and I don't remember exactly what I used)

(now maybe that's what you used to see what was causing the slowdown, but mentioning it for others to help them identify the bottlenecks.)
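
If it helps: the factory profiler is just an env var (assuming RSpec), and it prints which factories dominate the suite's time:

    FPROF=1 bundle exec rspec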


Would be interesting to see a benchmark of the rust binary with successively more “bloat” added, to separate out how much of the cold start is app start time vs app transfer time. It would also be useful to have the c++ lambda runtime in there too; I expect it probably performs similarly to rust.

Tangent: when you have a lambda returning binary data, it’s pretty painful to have to encode it as base64 just so it can be serialized to JSON for the runtime. To add insult to injury, the base64 encoding is much more likely to put you over the response size limits (6MB normally, 1MB via ALB). The c++ lambda runtime (and maybe rust?) lets you return non-JSON and do whatever you want, as it’s just POSTing to an endpoint within the lambda. So you can return a binary payload and just have your client handle the blob.
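
For reference, this is roughly the response shape the JSON path forces on you (Ruby runtime just as an illustration; the file path is made up):

    require 'base64'

    def handler(event:, context:)
      png = File.binread('/tmp/render.png')
      {
        statusCode: 200,
        headers: { 'Content-Type' => 'image/png' },
        isBase64Encoded: true,
        # ~33% bigger than the raw bytes, which is what eats into the 6MB/1MB limits
        body: Base64.strict_encode64(png)
      }
    end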


> we never had any issues, because we didn't depend on calling AWS APIs to continue operating. Things already running continue to run.

I think it was just luck of the draw that the failure happened in this way and not some other way. Even if APIs falling over while EC2 instances stay up is a slightly more likely failure mode, it means you can't run autoscaling and can't depend on spot instances, which you can lose during an outage and then can't replace.


> it means you can't run autoscaling, can't depend on spot instances which in an outage you can lose and can't replace

Yes, this is part of designing for reliability. If you use spot or autoscaling, you can't assume you will have high availability in those components. They're optimizations, like a cache. A cache can disappear, and this can have a destabilizing effect on your architecture if you don't plan for it.

This lack of planning is pretty common, unfortunately. Whether it's in a software component or system architecture, people often use a thing without understanding the implications of it. Then when AWS API calls become unavailable, half the internet falls over... because nobody planned for "what happens when the control plane disappears". (This is actually a critical safety consideration in other systems)


Sure, you can use only EC2, skip autoscaling and spot and instead just provision for your highest capacity needs, and avoid any other AWS service that has dynamo as a dependency.

We still take some steps to mitigate control plane issues in what I consider a reasonable AWS setup (attempt to lock ASGs to prevent scale-down) but I place the control plane disappearing on the same level as the entire region going dark, and just run multi-region.
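
The "lock the ASG" part is basically this, done ahead of time (Ruby SDK; group and instance IDs are placeholders):

    require 'aws-sdk-autoscaling'

    asg = Aws::AutoScaling::Client.new

    # Stop the group from terminating instances (scale-in, AZ rebalance, etc.)
    asg.suspend_processes(
      auto_scaling_group_name: 'web-asg',
      scaling_processes: %w[Terminate AZRebalance]
    )

    # Or protect specific instances from scale-in while leaving scaling enabled
    asg.set_instance_protection(
      auto_scaling_group_name: 'web-asg',
      instance_ids: ['i-0123456789abcdef0'],
      protected_from_scale_in: true
    )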


Even their transfer rates between AZs _in the same region_ are expensive, given they presumably own the fiber?

This aligns with their “you should be in multiple AZs” sales strategy, because self-hosted and third-party services can’t replicate data between AZs without expensive bandwidth costs, while their own managed services (ElastiCache, RDS, etc) can offer replication between zones for free.


This is a great project writeup; I did something _very_ similar for my son. (HT802, asterisk, twilio, calling relatives) With all of the NAT involved I could never get pjsip to work properly, so I ended up having to use the old sip module, but looking at yours makes me want to revisit it.

Once you have asterisk set up and running, it becomes easy to also set up all sorts of other extensions like "check the weather" / "tell a joke" / "check the train statuses". I put up some code for it here: https://github.com/mnutt/rotary
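
The dialplan side of those extensions is tiny; roughly this in extensions.conf (extension numbers and script names are just examples), with the AGI script doing the API call and speaking the result:

    [rotary]
    ; dial 21 for the weather -- weather.agi is whatever script hits the API
    exten => 21,1,Answer()
     same => n,AGI(weather.agi)
     same => n,Hangup()

    ; dial 22 for train statuses
    exten => 22,1,Answer()
     same => n,AGI(trains.agi)
     same => n,Hangup()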


very cool, thanks for sharing!

Yes it opens a bunch of doors for the user to interact with various APIs through voice (I want to hook it up to home assistant soon too, so we can be fancy and call the "butler" and ask it to turn the lights off, etc)


As a four year old, my child loved playing "the subway game", which is similar to this but just in our heads: I name two subway stations and he tries to think of the fastest route between them. When that is exhausted, we move onto the fewest transfers, the most convoluted routes, the 1968 lines only, etc. There's just something about the NYC subway which really draws kids (and many adults) in.


You may like the Subwaydle: https://www.subwaydle.com/


That's amazing, what a great idea.


I did a writeup of my own experiences using Asterisk for this exact use case: https://github.com/mnutt/rotary


This is exactly what I was looking for—thanks! So cool that you did this with a rotary phone.


We tried this out as middle schoolers at an installfest in Alabama in the 90s, but something was broken with the resulting install and we had to wipe it and start over. Funny that 25+ years later I learn that the problem could have been that the locale was set to en_RN…


You can even do neat things like having the revalidate fetch tell your backend “I served this object out of cache 1100 times since last fetch, maybe consider putting a few extra cpu cycles into the compression on this one”
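
Sketch of the origin side of that idea (the header name and threshold are invented for illustration):

    require 'zlib'

    # 'X-Cache-Hit-Count' is a hypothetical header the edge would send along
    # with its revalidation fetch
    def compressed_body(payload, hit_count_header)
      hits = hit_count_header.to_i
      # spend the extra CPU only on objects that are actually hot in the cache
      level = hits > 1000 ? Zlib::BEST_COMPRESSION : Zlib::DEFAULT_COMPRESSION
      Zlib::Deflate.deflate(payload, level)
    end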


That seems verbose


Looks interesting, I'm curious how you settled on WebDAV? A decade ago I built a NextCloud alternative backend that also used WebDAV, and I'm not sure it's something I would ever touch again. Lots of clients say they support WebDAV but they all do slightly different things, and if you own the clients then there are probably simpler protocols.


Reading the specs and doing a lot of reverse-engineering of multiple popular clients. You can see most of it at https://github.com/bewcloud/bewcloud/blob/main/routes/dav.ts...

The desktop app uses WebDAV for the sync (via rclone) just because it was the simplest and most reliable way, but the listing of directories (to choose what to sync) and the mobile app use a simple HTTP API.
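
For anyone curious, the rclone side is just a webdav remote along these lines (URL and credentials are placeholders):

    [bewcloud]
    type = webdav
    url = https://cloud.example.com/dav
    vendor = other
    user = alice
    pass = <output of `rclone obscure`>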

