
Didn't he also tell you about nostalgia? ;)


I'm surprised the book Sovereign Individual is not mentioned even once in this article.

The whole book engages in the same "projecting trends into the future", on the same topic. But in a more detailed way.

In fact, one of the early points it makes is that when the last Roman emperor gave way to an invading tribal chieftain, which is officially when the empire fell, everyone pretended it hadn't.

Like the emperor is out, he's living somewhere else, but yeah the empire keeps going, nothing to see here.

Only years later, decades if I recall correctly, did the implications of the fall of the empire truly reach the culture.

The shell was untouched, but the insides were already rotten.


I'm already starting to feel a big "who cares?" when it comes to the federal government in general. Everything's so calcified it feels like they're barely keeping the lights on. How many debt ceiling "crises" have completely paralyzed Congress for weeks, just in the last few years? The media barely even bothers to report on them anymore. And the rest of their time seems to be occupied by infighting or attempts at pointless symbolic dunks on the other side.

For all the talk of the "deep state" it generally feels like the career bureaucrats are the only ones doing any actual governing anymore. Eventually the whole thing's just going to get too brittle to hang together.

Anyway I know this is almost totally fact-free, but that's just where I'm at now. Who the hell cares who's president?


I think what Yudkowsky is missing in this analogy is the fact that the human can use an AI too, which levels the competition significantly.


If you do everything an AI tells you, are you using an AI, or is the AI using you? And if it’s the latter, what happens if it decides to stop using you because it doesn’t need you?

Things are not as simple as we imagine them. It is already happening. Jobs are being lost to AI, and this process is accelerating exponentially. From a certain perspective, this is simply businessmen and engineers using AI. But eventually they themselves won’t be needed.


The problem is that we don't know how to use AIs the way we know how to use Stockfish. They're inexplicable even to experts.


Not to downplay the absolute behemoth of a task they managed to pull off successfully...but why not upgrade as new versions came along, with less fanfare?

It is a great read, but I can't shake the feeling that it's about a bunch of sailors that, instead of going around a huge storm, decided to go through it knowing fully well that it could end in tragedy.

Are small upgrades out of the question in this case? As in, "each small one costs us as much downtime as a big one, so we put it off for as long as we could" (they hint at that in the intro, but I might be reading too much into it).


OP here - we would have used the same approach for the minor upgrades. This wasn't a case of “we procrastinated ourselves into a corner” so much as a matter of “if it isn’t broken, don’t fix it”, while recognizing we would need to make the jump eventually.


Just for your information, Aurora Postgres now claims increased resilience for minor upgrades, though there are some caveats despite the Zero Downtime naming: https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide...

I've relied on this as the minor upgrade method since it was available and it has worked as advertised, with no perceivable issues. This may be traffic and operation dependent obviously but worth having a look at.

Worth saying we do the minor upgrades incrementally, intra-day and a few weeks to a month after they are available, as a matter of routine, with a well documented process. Overhead is minimal to practically zero.


Upgrading N versions is just as much of a threat to availability whether N is 1 or 3.


Each one incurs some downtime. If their real answer is less than 60 seconds, then they’d have incurred that multiple times on the road to 15.


Noob here, how is Lean4 different from TLA+ or Alloy? Is that even a reasonable comparison?

Edit: Wrote Allow the first time, good stuff.


I'd say TLA+ is designed more for software people trying to design systems, write down specs, and reason about how the thing behaves dynamically.

Lean is used mostly for writing down math proofs, and a lot less for software (although by the Curry–Howard correspondence, math proofs and programs have an equivalence, so the line is a little blurry). Lean has "mathlib", which is like a standard library of formally verified math that people can contribute to and use in new proofs.

A big multi-year effort to start formalizing the proof of Fermat's Last Theorem in Lean 4 was approved recently: https://www.ma.imperial.ac.uk/~buzzard/xena/pdfs/AITP_2022_F...


Leo de Moura wants Lean 4 to be used for software verification too.

A cool thing about Lean 4 is that it's also a programming language, using the same syntax as for proofs, making it easy to consider proving correctness properties of programs you write. Most of Lean 4 and its tactics are written in Lean 4 (though at this point almost none of this code has any associated proofs).


Yeah, I believe they said they intend for it to be used as a general purpose programming language. I used it to complete Advent of Code last year.

There are some really interesting features for general purpose programming in there. For example: you can code updates to arrays in a functional style (change a value, get a new array back), but if the refcount is 1, it updates in place. This works for inductive types and structures, too. So I was able to efficiently use C-style arrays (O(1) update/lookup) while writing functional code. (paper: https://arxiv.org/abs/1908.05647 )
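To illustrate the point about functional-style array updates, here is a minimal sketch; the function name and shape are my own, not taken from the paper or the AoC solutions:

```lean
-- `Array.set` returns a "new" array in a purely functional style,
-- but when the runtime reference count of `a` is 1 (no other owner),
-- Lean reuses the memory and mutates in place: an O(1) update.
def zeroOut (a : Array Nat) (i : Nat) : Array Nat :=
  if h : i < a.size then
    a.set ⟨i, h⟩ 0   -- no copy made if `a` is uniquely referenced
  else
    a
```

The caller never sees the mutation; the optimization only kicks in when sharing makes it unobservable.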

Another interesting feature is that the "do" blocks include mutable variables and for loops (with continue / break / return), that gets compiled down to monad operations. (paper: https://dl.acm.org/doi/10.1145/3547640 )
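As a rough illustration of those do blocks (my own example, assuming nothing beyond core Lean 4): the imperative-looking `mut`, `for`, and early `return` all elaborate to monad operations under the hood.

```lean
-- Looks imperative, but compiles down to monadic code.
def firstIndexOf (xs : Array Nat) (target : Nat) : Option Nat := Id.run do
  let mut i := 0
  for x in xs do
    if x == target then
      return some i   -- early return from the do block
    i := i + 1
  return none
```

`Id.run` lets you use the do notation in otherwise pure code.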

And I'm impressed that you can add to the syntax of the language, in the same way that the language is implemented, and then use that syntax in the next line of code. (paper: https://lmcs.episciences.org/9362/pdf ). There is an example in the source repository that adds and then uses a JSX-like syntax. (https://github.com/leanprover/lean4/blob/master/tests/playgr... )
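For a flavor of that, here is a toy macro (hypothetical, much simpler than the JSX example in the repo): the new notation is usable immediately after it's declared.

```lean
-- Declare a new piece of surface syntax...
syntax "double!" term : term
-- ...give it meaning via a macro rule...
macro_rules
  | `(double! $x) => `($x + $x)

-- ...and use it on the very next line.
#eval double! 21
```

The same mechanism the compiler uses for its own syntax is what you're extending here.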


Yeah, this is where TLA+ fails to gain adoption: because it is only a model, how people actually implement the architecture can be completely unrelated to how the model was validated.


> Most of Lean 4 and its tactics are written in Lean 4 (though at this point almost none of this code has any associated proofs).

Do you have to actually prove a tactic works for it to be sound? A tactic rewrites the program into a simplified program that is then verified by the kernel; if the tactic wrongly expands something, the kernel will later reject the program. Only the kernel itself needs to be verified for the whole thing to be sound.

Or am I missing something?


You're not missing anything. I was going to mention this but decided not to get into it. (One detail: you can't verify the kernel exactly because of Gödel incompleteness issues.)


Thank you so much for your answer


Lean is for verifying proofs, not writing them. It helps you find mistakes, but doesn’t help you understand or express ideas in a human readable way.


I am absolutely not an expert at either of them, but I believe TLA+ is mostly interested in all the possible orderings of the “code” that models your program, verifying the absence of concurrency bugs.

Do correct me if I’m wrong, but Lean is more about total functions/FP code; I'm not sure how well it handles threading, if at all. It might be more related to the correctness of the actual implementation of a serial algorithm.


In my humble experience, you gain a LOT from writing the questions yourself, a point that I'm not sure the author stresses enough.

There's also the problem of cards that are difficult to memorize, which is something you don't know in advance, and that's a good thing! It shows you where you should pay more attention, write cards in different forms, etc.


There's a somewhat famous movie in Spain called the Miracle of P Tinto[0]. It's a very surreal humor kind of movie, if you fancy that.

The plot starts with the main character, who wants to start a family; but when he asks adults how to have kids, they all pull up their suspenders and sing "tralari, tralari" to evade the question.

And so the poor guy did exactly what he was told: pulled his suspenders and sang, for years, to no avail.

I think LLMs are a bit like that: because they can't escape the world of text, they can't go beyond the reality that text is trying to be isomorphic to. Which means that LLMs are forced to infer answers in the context of its training data.

If LLMs were told that in order to have kids you, well, you know, right? Bees and Love? Then LLMs, asked about sex, would infer that the answer has something to do with apiculture and flowers.

[0]: https://en.wikipedia.org/wiki/The_Miracle_of_P._Tinto


Interesting idea.

Humans have the real world as input. For example, the wonder they experience when they perceive something preternaturally beautiful. Then they try to express this experience in language.

That's what LLMs are missing. They have -- in a way -- only second-hand experiences.

I was a bookworm as a child. But when I grew up I had the chance to experience for myself some of the things I had read about. Then I felt like I took a giant step in understanding the world.

LLMs never experience for themselves what they have in their data.


I get the impression that humans have separate parts of the brain that do LLM-type linguistic stuff and others that do the nonverbal intelligence of the sort that smart animals have. Maybe they'll figure out a way to tack the other stuff on.


yeah, I opened it just in case they explained how it's done. For research purposes of course.


You could say that it requires God levels of discipline.


So let me get this straight: the authors of this study find that adults are showing signs of infantilism. And rather than looking at the consequences, they focus on the causes, and label the child - adolescent - adult stages as...traditional, hinting that it's just one of many ways in which we can "stratify" lifespans, so to say.

In fact, in the very first sentence, they say that this view is no longer relevant. As in, for who knows how long, it was. Suddenly it isn't.

This framework for viewing ways of organizing society as equally valid and inconsequential defeats the purpose of what, I believe, social sciences are for. I don't need a taxonomy of options, I want actionable insights!


> This framework for viewing ways of organizing society as equally valid and inconsequential defeats the purpose of what, I believe, social sciences are for. I don't need a taxonomy of options, I want actionable insights!

The article is describing academic research, not self-help. In this context it is perfectly reasonable to examine causes rather than consequences, and taxonomies rather than "actionable insights". That's not to say that consequences etc. aren't valid areas of inquiry, just that this article isn't examining them.


> This framework for viewing ways of organizing society as equally valid and inconsequential defeats the purpose of what, I believe, social sciences are for. I don't need a taxonomy of options, I want actionable insights!

Social sciences suffer from a lack of significance and repeatability more than other fields, which suggests to me that they are indeed foremost required to establish taxonomies and definitions, because anything more is just guessing with a great potential to harm people.


"I want actionable insights!"

so juvenile :)

It's important to understand the phenomenon; that's what science is for. Action is about politics.


Ha ha ha, you're probably right, I came across as an entitled teenager.

But seriously though, taxonomies are only part of the whole scientific enterprise. Unless we formulate hypotheses that can be used to predict new phenomena, we're missing out. Or just cargo culting.


> I want actionable insights!

For example, Lego is now targeting adults as well as kids. You can make a living out of collecting sets as an investment vehicle or by starting a YouTube channel where you review and show your collection.

