I don't see how this would be relevant for a code model?
The code model isn't trained to retrieve papers/articles, it's meant to complete code. Whether or not you find hallucination in an unrelated task isn't particularly interesting.
Damn, this is how I learn that HN doesn't have a block function. What a shame.
My friend, can you do me a favour and actually click the link and have a play with the app? If you do, you will discover that what you're dealing with there is an LLM. That's literally why it's being compared to other LLMs.
No idea what you were trying to achieve with this comment. "The code model isn't trained to retrieve articles." a) neither is any other LLM, what's your point? and b) the app on the other end of that URL retrieves articles - it's not even tangential to the app, it's key functionality.
Classic Malthusian viewpoint, nothing new to see here. If only it were true that oil is running out - it might force us to think about alternatives more.
Extractive summarisation of news isn't very hard (rough sketch below). I guess it doesn't hurt to put it in an API wrapper with a pay-per-use model, but don't expect this to sell.
Also FYI, depending on the news outlet the important info is usually at the top - it's maybe the first thing they teach you in journalism (don't bury the lede). You don't need to read the "Bible-size" article if it's well written; the first paragraph covers it.
However, if you did abstractive summarisation instead, that might be more interesting, especially for financial news - you might have buyers.
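To back up the "not very hard" claim, here's a minimal sketch of frequency-based extractive summarisation in plain Python (no libraries; the scoring heuristic and the article.txt filename are illustrative assumptions, not anything from the linked product):

    import re
    from collections import Counter

    def summarise(text, max_sentences=3):
        # Naive sentence split on ., ! or ? followed by whitespace.
        sentences = re.split(r'(?<=[.!?])\s+', text.strip())
        freq = Counter(re.findall(r'[a-z]+', text.lower()))

        # Score a sentence by the average corpus frequency of its words.
        def score(sentence):
            tokens = re.findall(r'[a-z]+', sentence.lower())
            return sum(freq[t] for t in tokens) / (len(tokens) or 1)

        top = set(sorted(sentences, key=score, reverse=True)[:max_sentences])
        # Emit the highest-scoring sentences in their original order.
        return ' '.join(s for s in sentences if s in top)

    with open('article.txt') as f:
        print(summarise(f.read()))

That's roughly the level of effort the extractive case takes, which is why the abstractive side is where the value would be.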
Reminds me of a C++ project I used to work on. The norm was generous function naming rather than the outdated C convention of short and cryptic names. The only exception was the function "anal", curiously shortened from "analyze".
I can honestly say that I never made this association, and it never crossed my mind once. Now that you mention it, I can see it. I just always say "analang", like golang.
That's okay, I once made a logo that looked very bad if you turned it on its side, and I didn't notice for months. I like the other poster's suggestion of Annalang.
I first read this as an April Fools' joke, and honestly it got me good that it wasn't.
As someone who works in AI and multi-modal models: our modern "AI" systems are just tools. Yes, models can be designed by other models, and yes, some of these models (BERT and co) have been getting more general over recent years. But to say that we are close to AGI is like saying you are close to a moon landing when you've only started jumping - it's ridiculous.
We'd all be better off if we spent less time hyping it up and theorising about what it could mean for human society. And yes, I do understand the paperclip argument, but I don't buy it.
Sure, transformer models may not be intelligence as people often claim, but undeniably ML has been conquering the sensory modalities one after another. And the human brain itself is mostly devoted to sensory processing. It is conceivable that the remaining part of the puzzle of intelligence is not far out of reach. While intelligence may seem undefined and elusive, it will probably turn out to be less complex than we think it is (if we are to take a hint from biology).
Watch out, you've got a typo on the front page: "A central dashboard shows all secuirty events and directs your attention to the exact part of your application that has issues" - security is spelled wrong.