
Isn't that a bit of a holy grail though? If your software can fact-check the output of LLMs and prevent hallucinations, then why not use that as the AI to get the answers in the first place?


Because you - hopefully - are checking against something that is, on average, of higher quality than the combined input of an LLM.

I'm not sure whether this can work, but it would be nice to see a trial. You could probably do it by hand if you wanted to: break up the answer from an LLM into factoids, check each of those individually, and assign each a score based on the amount of supporting evidence for it. I'd love that as a browser plug-in too.
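Roughly this, as a hand-wavy Python sketch (the sentence splitter is crude, and find_evidence is a hypothetical stub you'd wire up to a search index or knowledge base):

    import re

    def split_into_factoids(answer: str) -> list[str]:
        # Crude stand-in: treat each sentence as one factoid.
        # A real version would use proper NLP or another model.
        return [s.strip() for s in re.split(r"(?<=[.!?])\s+", answer) if s.strip()]

    def find_evidence(factoid: str) -> list[str]:
        # Hypothetical stub: query a search index / knowledge base and
        # return snippets that support the factoid.
        return []

    def score_answer(answer: str) -> list[tuple[str, float]]:
        scored = []
        for factoid in split_into_factoids(answer):
            evidence = find_evidence(factoid)
            # Naive score: more independent supporting snippets -> higher
            # score, capped at 1.0. A real scorer would weigh source quality.
            scored.append((factoid, min(len(evidence) / 5, 1.0)))
        return scored

    answer = "Water boils at 100 C at sea level. The moon is made of cheese."
    for factoid, score in score_answer(answer):
        print(f"{score:.2f}  {factoid}")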


I don't think they are a hint that spacetime is not fundamental. But I do think spacetime has to be some kind of real physical reality.

The modifications of spacetime that we see as effects of gravity are relative changes to our immediate surroundings or reference frame.

Similar to how you can't tell who is actually stationary and who is moving when two objects are in freefall - all you can measure is the relative speed between the two - it would be equally valid to say the objects inside spacetime are being distorted relative to spacetime.


Even on my MacBook with Firefox the site has a strange feel when scrolling. It's not exactly struggling, but it feels unnatural and slightly off/slow/uneven - like it's on the edge of struggling. It's a bit hard to describe. The effect gets worse towards the mid-section of the page with the side-scrolling logo circles. I removed that section via dev tools, which helped with performance. When I have that part of the page in view, one core sits at 80-90% CPU usage. But even after removing it I can saturate a core by scrolling around, especially towards the lower part of the page.

It is indeed some of the worst-optimized CSS I've seen in a while. Weird for a project that is all about speed.


If every site did that, it would be harder to quickly spot one in a long list of tabs. A neat trick, but I don't think it's a particularly good idea.


The title on HN at the current time [0] says the police chief was raided.

There is only one person mentioned, and therefore "his" can only refer to that person. "His" cannot refer to the newspaper.

[0] "Paper investigating police chief prior to the raids on his office and home."


I just clicked on the thread now and it is still titled as GP mentioned.


It says that the paper investigating the police chief was raided. I don't know many newspapers that engage in raids on police, so it's pretty clear.


It doesn’t say the paper was the entity engaged in the raid. If I didn’t know the broader context, I would assume that sentence meant “A newspaper was investigating a police chief at the time the police chief’s home was raided by another law enforcement agency”. That seems way more likely than a newspaper being referred to as “him”.


Not a single person here agrees with you. The title didn't say who performed the raid. A police chief could be raided by the FBI.


The German site of the source speaks of 0.1 mm, so you were correct:

   > bei Toleranzen von teilweise nur 0,1 Millimeter

   ("with tolerances of sometimes only 0.1 millimetres")
https://www.ipp.mpg.de/de/aktuelles/presse/pi/2020/01_20


It's not a 100 kB model. It's 100 kB of config files for a several-GB model - a small trained layer to stick on top of the real model for fine-tuning.


This looks like something between fine-tuning a top layer and a zero-shot approach.

This is probably what future voice models will begin to look like as they capture prosody and other fine characteristics in a few hundred kB.


Yes, although it is decently interesting that a model can be fine-tuned by tweaking just a small number of weights and training for only a few minutes.


There is some meat to the story, I agree, but it's not surprising. The fine-tuning model will naturally be small in file size and quick to train, because by definition it applies changes to only a small subset of the main model and is trained on only a small amount of input data. You can't use the small tuning model for "Teddies" with a query that has nothing to do with Teddies. You could see these small tuning models as diff files for the main model: depending on the user query, one can choose an appropriate diff to apply to improve the result for that specific query.

When you train a model on new inputs to fine-tune it, you can save the weights that changed to a separate file instead of into the main file.

In other words, the small tuning models can be seen as updates/patches to be applied selectively.
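A hand-wavy NumPy sketch of the diff idea (the dimensions and rank are made up; LoRA-style methods factor the diff into two small matrices so the saved file stays tiny):

    import numpy as np

    d_out, d_in, rank = 1024, 1024, 8   # made-up dimensions

    # Frozen weight matrix from the main model (~4 MB in float32).
    W_base = np.random.randn(d_out, d_in).astype(np.float32)

    # The "diff" is just two small matrices. Only these get trained and
    # saved: ~64 kB at rank 8 instead of ~4 MB for the full matrix.
    A = (np.random.randn(d_out, rank) * 0.01).astype(np.float32)
    B = (np.random.randn(rank, d_in) * 0.01).astype(np.float32)

    def forward(x, apply_diff=True):
        y = W_base @ x
        if apply_diff:
            y = y + A @ (B @ x)   # same result as (W_base + A @ B) @ x
        return y

    x = np.random.randn(d_in).astype(np.float32)
    print(np.allclose(forward(x), (W_base + A @ B) @ x, atol=1e-3))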


Isn't this just another take on a LoRA, like what we've already seen in Stable Diffusion?


One difference is that you are aware that you can't do it and say so. Our current LLMs will just give whatever result they think it should be. It might be correct, it might be off by a bit, or it might be completely wrong, and there's no way for the user to tell, apart from double-checking with some non-LLM source, which kinda defeats the purpose of asking the LLM in the first place.


Q1 and Q2 of 2022 showed negative growth, Q2 only barely. The past four quarters did not. So technically there was a brief recession in H1 2022. Right now there is no clear sign of a recession by that definition.

I think recessions are also widely misunderstood as being a binary thing. Like going from "everything is A-OK" to "OMG it's all going to shite". There can be a recession which people barely feel. It's not like an event horizon from which there is no turning back.
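For what it's worth, the "two consecutive negative quarters" rule of thumb is a one-liner; the growth numbers below are just for illustration:

    # Quarter-over-quarter GDP growth in percent (illustrative numbers).
    growth = [-1.6, -0.6, 3.2, 2.6]  # 2022 Q1..Q4

    def technical_recession(quarters):
        # True if any two consecutive quarters show negative growth.
        return any(a < 0 and b < 0 for a, b in zip(quarters, quarters[1:]))

    print(technical_recession(growth))  # True: Q1 and Q2 were both negative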


I'm giving Orion a try every couple of months because the premise is great, but unfortunately for me it's so buggy that it's unusable. Then again, I rely on a lot of very modern web APIs like WebRTC. Hopefully one day it'll get there, but it's a very long road ahead. I'm not sure where those bugs come from either, because Safari doesn't suffer from the same issues.

