
Yeah, we got dinged by our pentesters a few years ago because the LB didn't clear X-Forwarded-For headers. So you could just set some trusted IP in the X-Forwarded-For header and various IP whitelists went "Well, it came from there, so we're gonna let it through".

Oops :)

It is one of those trust-based headers that need to be cleared (or overwritten) at the edge of your network / trust zone.
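
As a rough sketch of the other half of that rule (the proxy ranges and helper here are made up for illustration, not any particular setup): only honor X-Forwarded-For when the direct peer is one of your own proxies, and never feed anything left of your own proxy's entry into an IP whitelist.

    # Sketch: resolve the client IP without blindly trusting X-Forwarded-For.
    # TRUSTED_PROXIES is hypothetical; fill in your own LB / proxy addresses.
    from ipaddress import ip_address, ip_network

    TRUSTED_PROXIES = [ip_network("10.0.0.0/8"), ip_network("192.0.2.10/32")]

    def client_ip(peer_addr: str, xff_header: str | None) -> str:
        """peer_addr is the TCP peer; xff_header is the raw X-Forwarded-For value."""
        if xff_header and any(ip_address(peer_addr) in net for net in TRUSTED_PROXIES):
            # With a single trusted proxy hop that appends to the header, the
            # rightmost entry is what our proxy actually saw; everything to its
            # left is client-controlled and must not feed IP whitelists.
            return xff_header.split(",")[-1].strip()
        return peer_addr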


The end of the first section touches on what I was thinking as well: usually this makes sense if your application has to manage a lot of complexity, or rather, has to consume and produce the same domain objects in many different ways across many different APIs.

For example, some systems interact with several different vendor, tracking and payment systems that are all kinda the same, but also kinda different. Here it makes sense to have an internal domain model and to normalize all of these other systems into your domain model at a very early stage. Otherwise complexity rises very, very quickly, because you end up with n things interacting with n other things.
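
To make that concrete, a minimal sketch of what I mean (the provider names and fields are invented, not any real API): each external system gets one small adapter into a single internal Payment type right at the boundary, so the rest of the code sees n adapters plus one model instead of n things talking to n other things.

    # Sketch: one internal domain object, one adapter per external system.
    # "acme" and "globex" payloads are hypothetical stand-ins for real vendors.
    from dataclasses import dataclass
    from decimal import Decimal

    @dataclass(frozen=True)
    class Payment:
        payment_id: str
        amount: Decimal     # always major units internally
        currency: str       # ISO 4217
        status: str         # "pending" | "settled" | "failed"

    def from_acme(raw: dict) -> Payment:
        return Payment(
            payment_id=raw["id"],
            amount=Decimal(raw["amount_cents"]) / 100,  # acme sends minor units
            currency=raw["currency"].upper(),
            status={"OK": "settled", "ERR": "failed"}.get(raw["state"], "pending"),
        )

    def from_globex(raw: dict) -> Payment:
        return Payment(
            payment_id=raw["reference"],
            amount=Decimal(raw["total"]),               # globex sends major units
            currency=raw["ccy"],
            status=raw["status"].lower(),
        )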

On the other hand, for a lot of our smaller and simpler systems that output JSON straight off a database for other systems... it's a realistic question whether maintaining the domain model and API translation for every endpoint in every change is actually less work than ripping out the API modelling framework, which happens once every few years, if at all. Some teams would probably rather rewrite from scratch with their new knowledge, especially if they have API tests available.


I'd say where it's more important is when you need to manage database performance. This lets you design an API that's pleasant for users and well normalised internally, while also performing well.

Exposing the normalised, performance-tuned schema directly usually leads to a poor API that's hard for users to work with and hard to evolve, since you're so tightly coupled to your external representation.
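
A hedged sketch of that decoupling (the row shapes and field names are invented): the external resource is produced by an explicit mapping step, so the normalised tables underneath can be re-indexed, denormalised or swapped for a materialised view without the API contract moving.

    # Sketch: the external API shape is assembled by a mapping layer,
    # not by serialising internal tables directly. All names are made up.
    from dataclasses import dataclass

    @dataclass
    class CustomerRow:          # normalised internal representation
        id: int
        display_name: str

    @dataclass
    class OrderRow:
        id: int
        customer_id: int
        total_cents: int

    def customer_resource(customer: CustomerRow, orders: list[OrderRow]) -> dict:
        # The response shape stays stable even if the storage underneath
        # changes for performance reasons.
        return {
            "id": str(customer.id),
            "name": customer.display_name,
            "orderCount": len(orders),
            "lifetimeValue": sum(o.total_cents for o in orders) / 100,
        }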


I think that these fundamental things can be turned into an interesting topic, but you have to try for it.

Like, in a story background I'm pushing around, there's a coalition of a large number of species that developed on different planets. And you're a military officer, and you need to coordinate shifts, but - assuming some collectively normalized number of hours - some of your tiny dudes are tuned to 3 hours of sleep, 3 hours of leisure and 3 hours of work, other weird dudes with 2 arms and 2 legs are tuned to 38-hour cycles, and some huge dudes with a trunk on their face are tuned to 356-hour cycles.

Even if you could train and adjust this by an hour or two (which, for the 3-hour dudes, would be comparable to an 8 earth-hour extension of duty for us), how the heck would you coordinate any kind of shifts across this? Or does every species have their own schedule? Good luck finding crossover meetings then. Some of the small guys would even have to do overtime just to sit through longer meetings.

But you have to make it a point of the story and the challenges if you want to include it. If it is just a weird side note, just say that they figured out a conversion and that's it.


Some Star Trek books took the opportunity to work multiple species into the Enterprise's roster, since in print you don't have special-effects problems with doing so.

But some others took the approach that Starfleet has a lot of vessels, and they're still somewhat segregated by species just because of those issues, and while the TV series don't corroborate that very well, I think it's better fanon overall. Peace and harmony among the species is great and all but trying to work 17 hour shifts in 2.5 Gs is going to get really old for the humans. And who wants to wear complicated breathing apparatuses for years at a time?

It would be an interesting direction to take a book series in... why do we see so much about the Klingons and Cardassians and Vulcans on TV? It's not because they're the only important species, it's because they're the species that breathe our atmosphere at more-or-less our gravity and solar cycles. The Federation could be a whole bunch of parallel Federations-within-a-Federation, where there's an entire set of species who also crew with each other but breathe methane, need 0.7G, and work around 14-hour day/night cycles, and they just don't interact much with the others, not because they hate each other but just because it's so tedious to have prolonged contact.


This is why I've learned to present people with the concrete consequences and results of their service request. Especially if I get the feeling that someone does not comprehend what they are asking for.

"Your service request will result in X hours of downtime, as well as ireversible data loss between T1 and T2, and a reset of your system back to the state it was in at T1. All changes and interactions after T1 will be lost. Is this what you expect and want?"

Beyond a certain amount of service disruption or monetary investment, asking twice and making sure is prudent, not pedantic.


Hm.

On the other hand, I've heard people recommend running Postgres on ZFS so you can enable on-the-fly compression. This increases CPU utilization on the Postgres server by quite a bit and read latency for uncached data a bit, but it decreases the necessary write IOPS a lot. And as long as the compression happens largely in parallel (which it should, if your database runs many parallel queries), it's much easier to throw more compute threads at it than to speed up the write speed of a drive.

And after a certain size, you start to need atomic filesystem snapshots to be able to get a backup of a very large and busy database without everything exploding. We already see the more efficient backup strategies running from replicas struggle on some systems, and we are at our wits' end about how to create proper backups and archives without reducing the backup frequency to weeks. ZFS has mature mechanisms and zfs send to move this data around with limited impact on the production dataflow.
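
To sketch what that looks like in practice (dataset and host names are made up, and this assumes PGDATA and pg_wal live on the same dataset, so a single atomic snapshot is crash-consistent and restores like a normal crash recovery):

    # Sketch: atomic ZFS snapshot of the Postgres dataset, shipped incrementally.
    # "tank/pgdata", "backup01" and "backup/pgdata" are hypothetical names.
    import subprocess
    from datetime import datetime, timezone

    DATASET = "tank/pgdata"   # must hold both PGDATA and pg_wal so the
                              # snapshot is crash-consistent on its own

    def take_snapshot() -> str:
        snap = f"{DATASET}@{datetime.now(timezone.utc):%Y%m%dT%H%M%SZ}"
        subprocess.run(["zfs", "snapshot", snap], check=True)  # atomic, point in time
        return snap

    def ship(snap: str, previous: str | None) -> None:
        # Stream the snapshot (incrementally, if we have an earlier one) to an
        # archive host, keeping the heavy I/O off the production dataflow.
        send = ["zfs", "send"] + (["-i", previous] if previous else []) + [snap]
        recv = ["ssh", "backup01", "zfs", "receive", "-F", "backup/pgdata"]
        sender = subprocess.Popen(send, stdout=subprocess.PIPE)
        subprocess.run(recv, stdin=sender.stdout, check=True)
        sender.stdout.close()
        if sender.wait() != 0:
            raise RuntimeError("zfs send failed")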


Is an incremental backup of the database not possible? pgBackRest etc. can do this: a full backup followed by incremental backups, plus continuous WAL archiving for point-in-time recovery.

For Postgres specifically you may also want to look at using hot_standby_feedback, as described in this recent HN article: https://news.ycombinator.com/item?id=44633933
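
For reference, a minimal sketch of that setup (stanza name, paths and version are placeholders, not a recommendation for any specific environment):

    # /etc/pgbackrest/pgbackrest.conf
    [global]
    repo1-path=/var/lib/pgbackrest
    repo1-retention-full=2

    [main]
    pg1-path=/var/lib/postgresql/16/main

    # postgresql.conf: continuous WAL archiving into the repo;
    # hot_standby_feedback = on goes on the standby you query or back up from
    archive_mode = on
    archive_command = 'pgbackrest --stanza=main archive-push %p'

    # one-time setup, then a full backup and cheaper incrementals
    pgbackrest --stanza=main stanza-create
    pgbackrest --stanza=main --type=full backup
    pgbackrest --stanza=main --type=incr backup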


On the big product clusters, we have incremental pgbackrest backups running for 20 minutes. Full backups take somewhere between 12 and 16 hours. All of this runs from a sync standby managed by Patroni. Archiving all of that takes 8 to 12 hours. It's a couple of terabytes of noncompressible data that needs to move. It's fine though, because this is an append-log-style dataset and we can take our time backing it up.

We also have decently sized clusters with very active data on them, and rather spicy recovery targets. On some of them, a full backup from the sync standby takes 4 hours and we need to pull an incremental backup at most 2 hours after that, but the long-term archiving process needs 2-3 hours to move the full backup to the archive. This is the first point at which filesystem snapshots (admittedly of the pgbackrest repo, not the database itself) become necessary to meet SLOs as well as keep the system functioning.

We do all of the high-complexity, high-throughput things recommended for Postgres, and it's barely enough on the big systems. These things are getting to the point of needing a lot more storage and network bandwidth.


This was my understanding as well, color me also confused.


> When we finally got pull-requests, we really felt thrown into the future. It was just great. But after a while I started to miss the direct conversations about code with fellow humans.

Why would these be mutually exclusive, though?

Our newer colleagues are currently bringing in their first bigger contributions and changes to the config management. We're using short-ish lived feature branches of a week or two with pull requests.

This is good, because I can spend some time understanding what they are doing and I can prepare some ideas and examples for improvements, before we have a call to talk about these changes.

I'm also entirely willing to move my bigger changes into branches and a PR and spend an afternoon with the team talking about good practices, like structuring commits, naming, Ansible code structure, and so on, and see if other people enjoy that as well. Management wants more stability and a broader understanding of our code bases, so moving slower and reading and discussing more code seems right up that alley.


Since the GitHub issue is turning into an unusable mess and I am currently experiencing emotions I don't need to unleash here...

There is an interesting comment by one of the older maintainers of stylus, Panya [1]. Taking it at face value, they claim to have published some malicious packages for research into dependency confusion [2] (their link). This also fits with the comments of a few people claiming to be security researchers, [3] and [4], which say much the same and point to three malicious packages published by Panya.

Based on that, my own personal interpretation and simplest thesis is that Panya released some packages with questionable code. This triggered some security mechanism at npm, and that system yanked packages they were a contributor to [5], because the account looked compromised or otherwise malicious. And then pipelines went red.

Whether this was an actual malicious act, or curiosity about security and security responses that got a fairly nuclear response, I don't know. You need to apply your own security reasoning to this -- if you even want to trust this comment :)

I just wanted to collect the interesting comments in a place, because that ticket is getting impossible to navigate.

1: https://github.com/stylus/stylus/issues/2938#issuecomment-31...

2: https://medium.com/@alex.birsan/dependency-confusion-4a5d60f...

3: https://github.com/stylus/stylus/issues/2938#issuecomment-31...

4: https://github.com/stylus/stylus/issues/2938#issuecomment-31...

5: https://github.com/stylus/stylus/issues/2938#issuecomment-31...

5, also: https://github.com/stylus/stylus/issues/2938#issuecomment-31... (thanks to the sibling comment, I couldn't find that anymore)


This makes me wonder if Ozzy will have one last appearance at Wacken, like Lemmy did.


Black Sabbath, whose frontman Ozzy was, is considered either one of roughly three founders of heavy metal, or the one founder outright. It may not be entirely right to put it that way, because there was a broader development going on around that time, but the entire giant metal genre goes back to these few guys, with Ozzy being one of them, in this timeline.

This heavier and more aggressive music was paired with a deliberately more crass and evil image to set it apart. That's where a lot of the dark, evil and satanic themes come from. Both of these are why he is the Lord / Prince of Darkness in our circles.


Ask your local firefighters if their trainees want to see what happens if such a power bank goes up and how to handle it.

