Hacker News | rantanplan's comments


Thanks, that was an interesting read. I'm curious how much real-world storage it would save for your average developer. Wonder if there's a nice little script out there you can run against an existing instance to get some kind of stats.


It would be easy to create a script like that. You'd save a significant amount of storage space, especially when you have tables with hundreds of millions of rows. The real problem, though, is with further migration of your schema down the line, where you're going to add/remove columns, as is almost always the case.
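To make the savings concrete, here is a rough sketch of the idea such a script would build on: PostgreSQL pads each column to its type's alignment boundary, so ordering columns from largest alignment to smallest reduces wasted bytes per row. The column sizes and alignments below are illustrative stand-ins, not values read from a real catalog.

```python
# Sketch: estimate per-row padding for a given column order,
# using PostgreSQL-style alignment rules (typlen/typalign).
# Column definitions here are hypothetical, not from a live database.

def row_size(columns):
    """columns: list of (name, size_bytes, alignment) tuples."""
    offset = 0
    for _name, size, align in columns:
        # pad the offset up to the next multiple of the column's alignment
        offset += (-offset) % align
        offset += size
    return offset

# A hypothetical table: boolean, bigint, int, smallint
original = [("flag", 1, 1), ("id", 8, 8), ("count", 4, 4), ("kind", 2, 2)]
# Same columns, largest alignment first
reordered = sorted(original, key=lambda c: c[2], reverse=True)

saved = row_size(original) - row_size(reordered)
print(row_size(original), row_size(reordered), saved)  # 22 15 7
```

Seven bytes per row sounds trivial, but across hundreds of millions of rows it adds up. A real script would read the sizes and alignments from `pg_attribute`/`pg_type` instead of hardcoding them.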


Yes, SSH certificates are the way to go and pretty easy to set up. But what these articles fail to address is the user management aspect.

For the SSH certificate to be accepted, the unix user must first be present on the system. As far as I can understand, FreeIPA (or similar LDAP systems) cannot be used in conjunction with SSH certs, whereas SSH keys are supported by these systems.

Can anyone provide any insight/experience with this?


It's not the username that needs to match, it's the principal. You can allow any principal for the root user, for example.

You can define principals when allowing a CA via authorized_keys, or you can configure allowed principals globally using sshd_config directives like AuthorizedPrincipals*.
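As a concrete illustration of the two options mentioned above, here is a minimal sketch; the CA key path, principals file layout, and principal names are all hypothetical:

```
# Option 1: global, in /etc/ssh/sshd_config
TrustedUserCAKeys /etc/ssh/user_ca.pub
# %u expands to the local username being logged into
AuthorizedPrincipalsFile /etc/ssh/principals/%u

# /etc/ssh/principals/root — any cert carrying one of
# these principals may log in as root:
ops-team
alice

# Option 2: per-user, in ~/.ssh/authorized_keys
cert-authority,principals="ops-team" ssh-ed25519 AAAA... user_ca
```

With either option, the certificate's principal list is matched against what the server allows, so a single cert with an `ops-team` principal can grant access to many accounts without the username itself having to match.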


Many years ago, I did all of this with an LDAP system. Public keys were generated by the user and entered into LDAP (or you could auto-generate keys, etc). Users were authenticated with their ssh key (stored in ldap, password based access was restricted). Authorization for access to each host was also in LDAP, as was sudoer status (as a group setting).

It was actually quite an elegant setup. You would still need to setup a CA for generating local certificates for TLS connections to LDAPS, but the auth was handled all in the LDAP server.

I think the main downside would be trying to have the authentication overhead on a single server (the LDAP server) when you are dealing with many hosts. Over a handful of systems, it's great. But it doesn't scale when you're talking about thousands of hosts (or cloud VMs that spin up/down).


In most circumstances, you want these two things (user is authorized on the system, user can be identified and authenticated) to be different. Having a process that creates the user on system in order to authorize them to login is pretty similar to all your other configuration management tasks.


I imagine it's a matter of automating the certificate insertion on the target servers when it's updated on the user's account in the LDAP server. In other words, it depends entirely on your systems and how far your administration is willing to go to automate it.


> What is astral projecting?

The illusion that your mind escapes your body and/or visits other planes of existence etc. It has been replicated successfully in lab conditions many times over and is proven to be just an illusion.


Illusion has a negative meaning.

If anything call it https://www.sleepfoundation.org/how-sleep-works/hypnagogic-h... but not "illusion".


Does “hallucination” not have a negative meaning?


When it's a vision, it depends what you get to see. Often enough, what you see is not literally different, but its perceived meaning is different. Sometimes revelatorily different.

"Hallucination" is a judgmental term; it may be appropriate when it is involuntary, or what you see is harmful. People with schizophrenia have hallucinations, and (in all cases I have known personally) suffer for it. Thus, "hallucinogen" is a judgmental term about a chemical, where people taking one for the beneficial effects call it an entheogen.


Depending on the context, not as much as "illusion". Read the link.


The link literally does not contain the word “illusion”. It has no discussion about why the word “hallucination” might be preferable.

I’m not clear why you linked it, honestly. I fail to understand the jump from “astral projection” to “hypnagogic hallucinations”.

Was your intent to just say that sometimes the medical profession uses the word “hallucination”? That doesn’t give the word positive connotations, especially with laypersons.


> it's easy to lose a billion dollars

:O


Twenty $50 million donations to various groups and $1 billion is gone. You could probably do that in 5 years and still not come out as effective.


The average American makes ~$2M in a lifetime; $1B is 500x that. "Losing" $1B is ludicrous unless you are actively looking to lose it (or don't care about it, like Musk et al.).

How do you lose what 500 people make in their lifetimes?


Yeah, but the average American also isn’t making $50 million donations to keep a pet project going.


Let's say your net worth is ~$100k. That's equivalent to a $2.5k donation. Investing $2.5k in a pet project is pretty common, and you don't find yourself in the streets for doing so.

I think people who say it's easy to lose $2B don't have a clear understanding of the amount of money that is. It's a crazy amount for one person.


I think you’ve lost the context. If Signal does not figure out revenue, they will require more $50MM donations. Using your analogy, that would be like you spending $5,000 or $10,000 when you originally only planned to spend $2,500. Perhaps you wouldn’t flinch, but many would pull the plug after the first or second time, especially if disappointed about some aspect of the project. Your project is at risk by relying on your continued good graces, even though you may technically be able to afford more investment. Likewise, Signal is at risk if it requires periodic investment of millions from Acton, even though he can afford it and currently supports the project. Since he donated a few percent of his net worth, it’s a better situation than if he had donated, say, half, but Signal still needs to take the risk seriously.


Look at what happens to lottery winners.


There’s a world of difference between a million and a billion.

Just physically moving that amount of cash is difficult and takes time. Even those lottery winners don’t blow the whole lot in less than a year.


> Just physically moving that amount of cash is difficult and takes time.

...that's why it isn't moved physically?


Forget it in your pants, and it went in the washing machine.

You know, same problem as everyone else, really.


That's why they're making all of the cash money plastic, so these billionaires can stop accidentally laundering their money. Genius!


Sure but even a meagre 8% return on $950M is $76M. Buying and holding SP500 index funds in 2021 would have returned more than triple that, at 26% or so.

You can give away 50 mil a year and still get tens of millions dollars richer if you're sitting on $1B.

It's only "easy" to lose $1B if you have absolutely no idea what you're doing.


Since when is an 8% return "meagre"?! I get it, the last year had a crazy stock price increase, but it was an outlier.


The average SP500 return for the last 50 years is north of 10%, so I consider 8 to be meagre.


4%. 4% is the number. Don’t count on anything over this.


The next 10 years? 0%. We’ll be lucky if the market is flat in the next decade.


10 year treasury bonds are essentially risk free, as they are guaranteed by the US Treasury. Current 10-year yields are a bit less than 2%, so that's essentially your minimum return there.


Assuming that interest rates remain above inflation, which is already no longer true. What's the point of earning 2% a year when the USD is losing 6% a year in value (and that's likely far undercounted).


The point is it's possibly the most secure investment available, so that's why it's a good proxy for the risk free rate.

Whether or not you want to invest at that rate is entirely a separate conversation.


Is that 4% inflation-adjusted?


Usually yes. In times of inflating asset prices, the value of the investment tends to inflate with it.


Usually yes. But … not every year exactly, so it works best when viewed over a longer term.


A meager 8%. You have betrayed your total ignorance when it comes to anything having to do with money. You have no idea what you’re talking about. The only investment that will give you 8% is an extremely risky one and that’s not an appropriate vehicle for a billion dollars. And you say this in the world wide environment of negative interest rates…

The reason not many rich people lose their money is because becoming rich is hard and you have to be smart to do it. You don’t have to be good or moral but you have to be smart. Even then there’s a ton of washout among new millionaires. But you don’t get to a billion by accident. Look at all the people who make tons of money through some other means than intelligence — lottery winners and football players — they all lose their money even if it’s millions. Sorry bud, the system isn’t rigged. Life is just hard.


>The only investment that will give you 8% is an extremely risky one and that’s not an appropriate vehicle for a billion dollars.

Buying SPY returns on average 10% or so, year over year.

Do you consider the S&P500 to be "extremely risky"?

>Sorry bud, the system isn’t rigged.

I feel like you replied to the wrong person? Between you and me, the only one who brought up the question of whether "the system" is rigged or not is you.


Or, you know, one $1 billion donation and it's gone as well :D


Search for "Ben Finegold" on YouTube. He has many lectures for beginners and he's one of the funniest GMs. That makes it less boring :)


> PG doesn't have a way to tell it how to organize data on disk so there is no good way around this (CLUSTER doesn't count, it's unusable in most use cases).

Aren't tablespaces (https://www.postgresql.org/docs/10/manage-ag-tablespaces.htm...) supposed to help with that?

Haven't used them, I'm honestly curious


Tablespaces allow you to store tables in defined directories on the file system.

What I was talking about is controlling the ordering of the rows within a table on disk. If you are going to be reading some group of rows together often, ideally you want those rows to be contiguous on disk, as a sequential read of a range is much faster than bouncing around to dozens of locations to collect the needed rows. This becomes more important for very large tables. Imagine a 5TB `blocks` table where you need to read 50 blocks to render a given Notion doc, but those blocks could be scattered all over the place on disk; it's a lot more work and it thrashes the page cache.

PG doesn't normally make any guarantees about how rows are ordered on disk, and it may move rows around when updates are made. It does have a CLUSTER operation, which re-orders rows based on the order of an index you give it, but this is a one-time operation and locks the table while running. This makes it functionally useless for large tables that are accessed and updated frequently.

Some other databases do give you control over disk ordering, SQL Server for example has `CLUSTERED INDEX` which you can apply to a table and it'll order data on disk based on the index order, even for new insertions / updates. It does cost a bit more on the write side to manage this, but it can be worth it in some cases.
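For reference, the two commands being contrasted look like this (table and index names are hypothetical):

```sql
-- PostgreSQL: one-time physical reorder; takes an exclusive lock
-- on the table while it runs, and new rows are NOT kept in order.
CLUSTER blocks USING blocks_doc_id_idx;

-- SQL Server: ordering is maintained automatically on every
-- insert/update, at some extra write cost.
CREATE CLUSTERED INDEX ix_blocks_doc_id ON blocks (doc_id);
```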


Got it, thanks.


> Software is a highly personal and creative thing, it should have a personality

No, it isn't and it shouldn't be. The process of creating the software is; huge distinction.

The product of your efforts should not have a personality or feel personal; it should just work as intended. Software is hard as it is and we don't need to make it more whimsical. A little experience can teach us that, whether we like it or not, it will ultimately exhibit its own whims anyway.


> The product of your efforts should not have a personality or feel personal, it should just work as intended.

Genuine question: why can't it have both? I know it's hard to convey tone on the web, but I'm asking the question because I'm genuinely interested in knowing what you think.

I personally think you can create software the right way, ship something that works and still incorporate some personality and make it less boring. I don't see why the two can't live happily together.


Because we haven't solved the main problem yet, which is to create robust software that does exactly what it's supposed to do, no more, no less. We have opinions, we have some indications of what may improve software development, but we are far away from comparing software engineering with other engineering domains. And it's only logical, because compared to other disciplines, software engineering is in its first baby steps.

Imagine if construction/aviation/etc engineers wanted their building/bridge/airplane to be whimsical and have its own personality. Are you scared yet?

Imagine if your out-of-band (not in the initial requirements/spec) and whimsical software contribution was responsible for a bug that brought down an airplane, or killed a patient. How whimsical would you be then? Well, at least you wouldn't feel bored at your day job, right? Anyway, I think you get my point.


> Imagine if construction/aviation/etc engineers wanted their building/bridge/airplane to be whimsical and have its own personality. Are you scared yet?

You are aware that those things go through a design phase with the explicit objective of giving them a personality, right?

Mechanical engineers have a habit of breaking that personality due to the constraints of their profession, so most airplanes lose the original one, but bridges usually are built just as intended.

Anyway, it's not like you can avoid giving your software a personality. You can't. What you can decide is whether it will behave like a dull humorless thing, a holier-than-thou all-knowing braggart, or something people like having around. And yes, some software should have those first two options too; it depends on the application.


> Are you aware that those things go through a design phase with the explicit objective of giving them a personality, right?

Not really. I see quite the opposite; during the design phase the team sets the rules in order to guarantee consistency and cohesiveness and avoid any deviation from what has been agreed upon. This rules out personal or whimsical contributions, because by definition it would ruin the process.

> Anyway, it's not like you can avoid giving your software a personality. You can't.

I alluded to that, if you re-read my comment, but software having its unintended whims, compared to intentionally trying to give it some "personality", is not the same thing at all.


Teams? Rules?

The bridge architects you are talking about behave very differently from the ones I've met.


I haven't met that many bridge architects :D

But do they build bridges by themselves? Are bridges built by a one-man show?

And they don't have rules? And how do they get anything done?


Are you sure you did read my previous comment? You are replying to stuff that isn't there.

About rules, no, nobody passes rules downstream. People communicate full designs of some issue (that is not the full design of the thing; designs are "sectorial", where people add their concerns into the overall thing). When it's done right, the design goes to and from those sectors, changing the entire time. When it's done badly, somebody finishes a "general" design and sends it downstream for people to fill in the other parts. A team does not work on the same issue; that would be chaos.


Doctor Hugh Mann and the whole narrative around him was one of the most cringe-worthy moments I've seen in a movie of that caliber/budget.

The motives behind the characters (Anne Hathaway's character deciding to sacrifice humanity to see her boyfriend, Caine's character withholding physics advancements for years, Murph's whole behavior... and numerous others) were, objectively, bad writing. Of the kind that makes you really wonder how it got out to the public in such a high-profile movie. I can't think of a single person in that movie who acts realistically. To the point that I kind of believe the "plot" was just a pretext for Kip Thorne's awesome work on the visualizations of the wormholes.

The plot holes... well, way too many. That was the movie that ruined Nolan for me. Up to that point I was a very big fan.


And here I am, recognizing some of the plot holes existing but Interstellar is one of my favorite movies of all time. I've watched it several times, watched the first 30 minutes 10 times, and cry most times I watch it all the way through.

Let go of trying to analyze the realism of the physics (even if it's in this list) and immerse yourself in the story, the moment, and I think the characters are actually very realistic.


I am not analyzing the realism of the physics, I'm not qualified for that. Physicists say it's realistic enough and I believe them.

But realism in the science aspects of a movie shouldn't be an excuse for a lack of a coherent story or bad character development.

I respect your take and what that movie means to you, but don't think that I didn't want to immerse myself in it or anything like that. I was highly anticipating this movie for years. It just didn't do it for me.


I don't understand these complaints at all.

Hathaway's Brand didn't want to sacrifice humanity, she just had a conflict of interest around a decision in a VERY information-limited environment. Her non-love-related argument about the black hole capturing things that would be needed to create planets more capable of supporting life - that Cooper thought was probably just rationalization - made sense and would have led them to a better decision than going down to Mann's planet did. Where we got exactly the sort of sterile environment she was predicting, like the water planet before.

Brand-on-earth's "withheld physics advancements" was sorta the reverse - he withheld his failure because his math hadn't enabled the advances NASA hoped they would and so releasing them would've been telling everyone on earth they were doomed. He decided to put the fate of the species ahead of releasing useless results that he believed would ruin the only chance the species had. (Even without that, not publishing negative results is common anyway!)

Not sure about your complaint with Murph, since you aren't specific, but both kid Murph and adult seemed consistent to me.

I HATED the magic black hole deus ex machina on first viewing, and still don't love it, but found everything up to that amazing. I think a bleaker story of founding a colony on a new planet without the magic trip back to earth at the end could've been even better, though.


> Caine's character withholding physics advancements for years

> I can't think of a single person in that movie that acts realistically

This is the most realistic action in the movie! https://en.wikipedia.org/wiki/Planck%27s_principle


This is not related at all to what happened in the movie.

We're not talking about experimenting on people (thus causing accidents/deaths/etc) but about withholding new knowledge.


> Caine's character withholding physics advancements for years

But he didn’t do that. He and Mann had determined there was no way forward, and they were right. Without the new data from the black hole, physics had gotten as far as it ever would.

EDIT: and Hathaway’s character didn’t intend to sacrifice humanity. One option was as good as any other. She even turned out to be right and most of the crew would have survived if they listened to her from the start.


I don't really agree with all of your specific complaints but I do agree that Nolan's movies tend to have uninspired writing.

It's almost like an uncanny valley where the truly expert film-making delivers all the tone, gravitas, and emotion you could want from a script that feels bland and incomplete.


No we don't! How dare... um... ok, yes, we do struggle. Unless you're an ancient Greek teacher, it's very hard to make sense of it. But you can certainly pick up numerous -unchanged- words or roots (common parts), since modern Greek is derived from them.


The problem was not Fedora per se, but GNOME and especially Wayland. Wayland is slightly above vaporware and infested with bugs.

Had you chosen Fedora with KDE you'd probably have 0 problems. I'm using it without any drama for 11 years and I keep suggesting it to co-workers and friends.

Wayland has done so much harm in the Linux ecosystem and that gives me a lot of grief :(


It's a pity that it really is that bad. It didn't have any glitches during normal usage, but it quickly fell apart when challenged, obviously. So it's probably still too early, and we need more distros picking Wayland up and helping stabilise it.

What didn't help was probably the desire to force myself into liking GNOME, when normally, I fall naturally into the KDE camp, because it is beautiful without having to make things too simple, like GNOME does. Even though I agree with many of the things GNOME stands for, there is lots of stuff you wouldn't NEED to customise, or use.

Will consider KDE on my next attempt!


The GNOME Xorg session is still available if you need to use it.

It would be nice if it were possible to avoid the compatibility issues with Wayland, but the design of X (and the reliance of applications on X11-specific APIs) is what makes this not really feasible.


Wayland is such a big pain. Breaks all kinds of things.


It breaks some kinds of things, not all.

Those things are things you shouldn't do in the first place, like snooping on windows and events that do not belong to your app, or injecting events that end up being processed by other apps.

If you don't do that, Wayland is fine.

