gammalost's comments | Hacker News

If it's a random ID, then I'd argue that all of them are equally close to each other. With that said, I do not know how GUIDs are generated.
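
A quick way to see this, assuming the GUIDs are version-4 UUIDs (which most are nowadays): a v4 UUID is 122 random bits (the other 6 are fixed for version/variant), so any two independently generated IDs differ in about half of those bits, with no locality to exploit. A minimal Python sketch:

    import uuid

    # Two independently generated v4 UUIDs share only the fixed
    # version/variant bits; the other 122 bits are independent random bits.
    a, b = uuid.uuid4(), uuid.uuid4()
    hamming = bin(a.int ^ b.int).count("1")
    print(a)
    print(b)
    print(f"bits differing: {hamming} (expected ~61 of 122)")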


> You still need to learn the names of models, understand their use cases, concepts like MoE, then you have different architectures like diffusion vs transformers, agents etc

Why? When you think you might need something, just search for it. There are too many models with incremental improvements.


To have enough context in your mind to understand things faster and remember them more easily.

Getting it and also remembering it properly are different things.

And it might just be an hour installing Ollama and UI tools and playing with it, but it gives you a lot of anchor points/experiences.


If you care about reducing the amount of back and forth then just use QUIC.
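
For example, here is a minimal client sketch using the aioquic library; the host, port, and ALPN value are placeholders, and a real client would need proper certificate handling:

    import asyncio
    import ssl
    from aioquic.asyncio import connect
    from aioquic.quic.configuration import QuicConfiguration

    async def main():
        # QUIC folds the transport and TLS 1.3 handshakes together,
        # so the connection is usable after a single round trip
        # (or zero, with session resumption and 0-RTT).
        config = QuicConfiguration(is_client=True, alpn_protocols=["hq-interop"])
        config.verify_mode = ssl.CERT_NONE  # demo only: skip cert verification
        async with connect("example.com", 4433, configuration=config) as conn:
            await conn.ping()  # connection is up; send application data here

    asyncio.run(main())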


Why would they not use a company laptop in the first place?

"They ran out" is no excuse


I agree. I feel that Springer is not doing enough to uphold their reputation. One example of this is a book on RL that I found [1]. It is clear that no one seriously reviewed the content of this book. Despite its clear flaws, they are charging 50+ euros for it.

[1] https://link.springer.com/book/10.1007/978-3-031-37345-9


Yeah, ages ago, when I was doing typesetting, it was disheartening how unaware authors were of the state of things in the fields they were writing about. I'm still annoyed that when I pointed out that an article in an "encyclopedia" on the history of spreadsheets failed to mention Javelin or Lotus Improv, it was not updated to include those notable examples.

Magazines are even worse. In one of his columns, David Pogue claimed Steve Jobs used Windows 95 on a ThinkPad, when a moment's reflection, and a check of the approved models list at NeXT, would have made it obvious it was running NeXTstep.

Even books aren't immune. A recent book on a tool cabinet held up as an example of perfection:

https://lostartpress.com/products/virtuoso

misspells H.O. Studley's name as "Henery" on the inside front cover, and is marred by many other typos, myriad bad breaks, and pedestrian typesetting which poorly presents numbers and dimensions (failing to use the multiplication symbol or primes). That they are unwilling to fix a duplicated photo is enshrined in the excerpt which they publish online:

https://blog.lostartpress.com/wp-content/uploads/2016/10/vir...

where what should be a photo of an iconic pair of jeweler's pliers on pg. 70 is replaced with that of a pair of flat pliers from pg. 142. (Any reputable publisher would have done a cancel and fixed that.)

Sturgeon's Law: 90% of everything is crap. I would be a far less grey and far younger person if I had back all the time and energy I spent fixing files mangled by Adobe Illustrator, or re-doing jobs where the wrong typesetting tool was used (the six weeks re-setting a book which the vendor had set in Quark XPress when it needed to be in LaTeX were the longest of my life).

EDIT: by extension, I guess it's now 90% of everything is AI-generated crap, 90% of what's left is traditional crap, leaving 1% of worthwhile stuff.


What reputation would that be?

It was, in part, Springer that enabled Robert Maxwell.


I think it is just a question of community. Sure, you can make your own website, but people will (most likely) not read it. So what is the point? Some do it for the love of writing, but that is a niche at best.

On social media, you just comment on someone's post and (if the take is spicy enough) the comments will come.


Directly from the article:

> Unfortunately, this is what all of the internet is right now: social media, owned by large corporations that make changes to them to limit or suppress your speech, in order to make themselves more attractive to advertisers or just pursue their owners’ ends.


Yeah, if that -- the semi-fake "interaction" of algorithm-mediated "likes" and comment flamewars (or endless streams of vapid cat pictures) -- is what you're after, fine.

This article was about another Web (like all of it used to be), for when that's not what you want. There is a difference.


Well, isn't that a summary of most things? Most things worth learning are hard, but many things not worth learning are also hard. So we have to prioritize what hard things are worth learning. Math is low on the list for many people for (I think) understandable reasons.


What I meant was I think it's detrimental to be priming the kids with a negative view, or nurturing any negative views.


Math is low on the list, and then they bitch that they're unemployable with a soft-skill degree doing middle-school-level work. I get that this is ironic, as a pure math student is also fairly unemployable without extraneous skills, but they also tend to shine the brightest once they make it in.


You can't really remove dependencies in open source. It is so intertwined at this point that doing it would be too expensive for most companies.

I think the solution is to containerize, containerize, and then containerize some more, and to build it all with zero trust in mind.
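
As a sketch of what per-container "zero trust" could look like, using the docker-py SDK (the image name, command, and limits here are placeholders):

    import docker

    client = docker.from_env()

    # Run an untrusted dependency with as little ambient authority
    # as possible: no capabilities, read-only filesystem, no network,
    # bounded memory and process count.
    client.containers.run(
        "untrusted-build:latest",          # placeholder image
        command="make test",
        cap_drop=["ALL"],                  # drop all Linux capabilities
        read_only=True,                    # immutable root filesystem
        network_mode="none",               # no network access
        security_opt=["no-new-privileges"],
        pids_limit=256,
        mem_limit="512m",
    )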


Containerizing is entirely the worst response here. Containers, as deployed in the real world, are basically massive binary blobs of completely uncertain origin, usually hard to reproduce, that easily permit the addition of unaudited invisible changes.

(Yes yes, I know there are some systems which try to mitigate this, but I say as deployed in the real world.)


Your application is already most likely a big binary blob of uncertain origin that's hard to reproduce. Containers allow these big binary blobs of uncertainty to at least be protected from each other.


Pretty much; in a "traditional" system, updating, say, libssl once fixes the bug for the running app, or maybe the 2-3 apps that depend on it.

Put all of them in containers and now every single one needs to be rebuilt with the dependency fixed, and instead of having one team (ops) responsible, you need to coordinate half of the company to do so. It's not impossible, but in general much more complex, despite containers promising "simpler" operations.

...That being said, I don't miss playing whack-a-mole with developers who did not know what their apps needed to be deployed to production, and who for some inexplicable reason tested their app on unstable Ubuntu while all of the servers ran some flavour of stable Linux with slightly older libs...


Docker containers are not really a security measure.


It is a security measure. Sure, it doesn't secure anything in the container itself, but it secures containers from each other. Code can (as proven) not be trusted, but the area of effect can be reduced.


Only with additional hardening between the container and the kernel and hardware itself.


You are already dependent on the government from birth. It is the government that has a monopoly on violence, and it is the enforcer of private property.


What do you mean by "modern NN" that differs from a basic FNN?

How do you train a modern NN if not through backpropagation?


FNNs are easy for sequential data because they assume no relationship between the elements. Hence, these NNs are a frequent target of simplified analyses. However, they also never led to the exciting models we now call AI.

Instead, real models are an eclectic mix of attention (or other sequence mixers), gates, FFNs, norms, and positional tomfoolery.

In other words, everything that makes AI models great is what these analyses usually skip, all while wildly claiming generalized insights about how AI really works.

There’s a dozen papers like that every few months.
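
For contrast with a plain FNN, here is a minimal sketch of the kind of block such models actually stack, assuming PyTorch (the dimensions and the SwiGLU-style gated FFN are illustrative choices, not any particular model's; positional encoding is omitted for brevity):

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class TransformerBlock(nn.Module):
        """Pre-norm block: attention mixer + gated FFN + norms + residuals."""
        def __init__(self, dim=512, heads=8):
            super().__init__()
            self.norm1 = nn.LayerNorm(dim)
            self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
            self.norm2 = nn.LayerNorm(dim)
            self.w_gate = nn.Linear(dim, 4 * dim)   # gate branch
            self.w_up = nn.Linear(dim, 4 * dim)     # value branch
            self.w_down = nn.Linear(4 * dim, dim)

        def forward(self, x):
            h = self.norm1(x)
            attn_out, _ = self.attn(h, h, h, need_weights=False)
            x = x + attn_out                        # residual around attention
            h = self.norm2(x)
            x = x + self.w_down(F.silu(self.w_gate(h)) * self.w_up(h))
            return x

    x = torch.randn(1, 16, 512)                     # (batch, seq, dim)
    print(TransformerBlock()(x).shape)              # torch.Size([1, 16, 512])

And, to the question above: all of this is still trained with backpropagation; it is the architecture, not the training rule, that changed.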

