
I stopped using Amazon altogether over a year ago. Prime Video is as shitty as it gets, and most stuff ordered on Amazon is Chinese shovelware in the same league as Temu. I give exactly zero shit about same-day delivery in 99.99% of cases.

Amazon shines in customer support though.

Search & categorization are entirely broken, and the webshop experience is absolutely abysmal. Just order from better online stores. In Germany, I found these (surprisingly ok):

- Kaufland --> https://www.kaufland.de

- Galaxus Deutschland --> https://www.galaxus.de

- Lidl --> https://www.lidl.de (Schwarz IT is building a European Cloud btw!)

- ShopApotheke --> https://www.shop-apotheke.com/

- Home24 --> https://www.home24.de/

Beyond that, buying specialized stuff directly in a shop's own web store is often a decent experience. Just price-hunt on sites like Idealo (careful though, it's owned by Axel Springer!) and proceed to the specific stores from there, for example:

https://www.idealo.de/ --> https://www.shop-apotheke.com/

I explicitly avoided mentioning Otto as an alternative to Amazon, although it is technically the closest. While their shop is ok, prices and customer experience are horrifyingly bad (especially returns and delivery). For an unbiased view, see for yourself:

https://www.otto.de/


Let's get Russian disinformation and influence out of Europe and heal relationships within Europe. Can't wait to see Farage begging for food in the subway.


There is also non-Russian disinformation - beware of it.


First, to clear up my bias: Tesla is an overvalued scam company run by a fascist techno-oligarch. I hope we will boycott this company in Europe based on Elon Musk's behaviour alone. This is my political opinion. With that out of the way...

My opinion as a driver is that these cars deliver far less value for the price than other EVs. I have driven Model 3 and Model Y so far and both experiences have been very mixed.

- I enjoyed the responsiveness and on-the-road feel of the car for the most part, but it is entirely crippled by a cheap-feeling suspension and weird steering. The cars feel like racing a plastic shell that is barely holding together, which gives me vibes of cheap Korean cars from the 2000s.

- I enjoy the clean interior design and wide views, which are again crippled by low-quality materials typically found in budget cars and by cheap, cost-cutting material processing.

- Wtf is that UX? Everything is hidden in weird submenus on a central touchscreen. The OS seems to have been designed by engineers rather than UX professionals, and they seem never to have driven a car anywhere other than empty, sunny 20 mph roads or a parking lot.

- The turn-signal (blinker) UX is terrible, and it's clear that this is the result of cost-cutting.

- Charging is the best experience among EVs. Others need to catch up, still in 2025...


Tell us how you really feel.


Of course it is for therapeutics design; it's literally the last sentence of the abstract:

"In conclusion, NTD-RBD bsAbs offer promising potential for the design of resilient, next-generation antibody therapeutics against SARS-CoV-2 VOCs."


Appear? One of the only ways to get people to convert was to actually take some of their beliefs and traditions and assimilate them into Christianity.

Just google how the Christmas tree came to be, or better yet, Christmas itself... The Bible is also just an anthology: a politically (!) hand-picked collection of texts from various streams which fit the interests of the dominant sect at the time (400+ years after Jesus!).

Also... "Christianity began as an outgrowth of Second Temple Judaism,..." [1]

It's all related.

[1] https://en.wikipedia.org/wiki/Bible



Doesn't everyone who releases anything put "the world's best blablabla" on their page nowadays? I've become entirely blind to it.


If they put it, and it's subpar, I write off the product.


Isn't it just statistical word-pattern prediction based on training data? These models likely don't "know" anything anyway and cannot verify "truth" and facts. Reasoning attempts seem to me to be basically just looping until the model finds a self-satisfying equilibrium state with different output.

In that way, LLMs are more human than, say, a database or a book containing agreed-upon factual information which can be directly queried on demand.

Imagine if there was just ONE human with human limitations on the entire planet who was taught everything for a long time - how reliable do you think they would be at information retrieval? Even highly trained individuals (e.g. professors) can get stuff wrong on their specific topics at times. But this is not what we expect and demand from computers.
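To make the statistical-prediction point concrete, here's a toy sketch in Python. The tiny corpus and all names are made up for illustration, and real LLMs use neural networks over tokens rather than raw counts, but the shape is the same: there is no notion of truth anywhere in the loop, only likelihood.

  import random
  from collections import Counter, defaultdict

  corpus = "the cat sat on the mat . the cat ate the fish .".split()

  # "training": count which word follows which in the data
  follow = defaultdict(Counter)
  for prev, nxt in zip(corpus, corpus[1:]):
      follow[prev][nxt] += 1

  def next_word(prev):
      # no concept of truth, just the statistically likely continuation
      candidates = follow.get(prev)
      if not candidates:
          return "."
      words, counts = zip(*candidates.items())
      return random.choices(words, weights=counts)[0]

  word, out = "the", ["the"]
  for _ in range(8):               # generate one token at a time
      word = next_word(word)
      out.append(word)
  print(" ".join(out))             # e.g. "the cat sat on the mat . the cat ate"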


Just tested on an M1 Pro on Sonoma, Settings version 15.0.

After initially selecting a custom color by clicking on the "+" (which works fine), then reopening the window to click "Custom Color" and selecting the color again... it doesn't just jump for 20 ms - it goes into full-on psychotic flash-jump behavior and basically continues to do so hands-off for 5 entire seconds before it stops at a random color.

There are two options now:

1) Nobody has ever tested this workflow at Apple, automated or not.

2) It was tested and the bug was discovered, but then pushed into the backlog as a non-priority. Here the question arises: for how long?

The bug probably generates zero lost dollars, so nobody at Apple cares anymore. THIS is what used to be different.


It's interesting to me that there's such a big difference between our experiences, even though the bug we find is the same. I'm running an M3 Pro, 15.3, on Sonoma. If you don't believe me, I can provide a screen recording, but from here: yes, it's buggy, but it's such a small issue (milliseconds vs. your seconds) that I'd give Apple a pass on this. But I get that you wouldn't!


I see you posting anti-Western comments all over the place but amidst your irrational hate you are forgetting history.

Originally, the US was the protector of almost all Asian countries from Japanese Imperialism, an axis power allied with Nazi Germany. How are people like you forgetting the rape of Nanjing or the mass killings of civilians by the Japanese across the Philippines and many other places? How are you forgetting Japanese imperial occupation of Taiwan and Korea? How and WHY are you forgetting Japanese Unit 731 [1]?

Ironically, the government which fled to Taiwan was ousted by a Communist revolution and chased into exile. How likely do you think it is that this operation, destabilising China, was funded and supported by the communist Soviet Union?

Writing that "US is the intruder" in your other comments completely dismisses history and is a ridiculous show of brainwashed anti-Western propaganda spread by Putin, Xi, Trump and such authoritarian dictators.

Of course a whole lot more happened after 1945, and American war crimes in Vietnam and elsewhere are part of that. As a European, I am aware of American cultural and military hegemony, but I would much rather argue with raging capitalists about workers' rights than with raging authoritarian communists who kill everybody not on their ideological side.

Given all that, I am also glad that Europe is now moving to emancipate itself from dependency on American security infrastructure, but that doesn't mean NATO should stop or that the US should stop being allied with European nations. Trump gets his goal of spending less on European security, but that comes with a loss of hegemonic influence.

[1] https://en.wikipedia.org/wiki/Unit_731


Yes, let's not forget history (your [1]):

  Those captured by the US military were secretly given immunity. The United States helped cover up the human experimentations and handed stipends to the perpetrators. The US had co-opted the researchers' bioweapons information and experience for use in their own warfare program


> As a European, I am aware of American cultural and military hegemony, but I would much rather argue with raging capitalists about workers' rights than with raging authoritarian communists who kill everybody not on their ideological side.

Isn't this their main point? There are plenty of examples of killing those not on their side outside of "authoritarian communists" (hint: Iraq, Gaza). You still believe in them, and that's fine, but there is a lot of Western projection that they are right and others are not, which is the sense I get from your comment. It's reasonable for people to be against this and "anti-Western". It's also ok to see the strong development brought in from the West and support them. But this is a point for the locals to work through, and pushing a narrative like this one seems to be just as much pro-Western propaganda (ignoring anything that contradicts it) as the other way around.


The point, I guess, is that we had and still have mass organized protests against what happened in Iraq and Gaza. Some leaders paid a political price for that.


Nobody in the US cared until Pearl Harbor. Was US meddling in South America just protecting them from communism, or was it for US imperialism? Trump isn't that different; he has different allies and is more blatant about it.


The US denounced Japan's invasion of Manchuria in 1931 and refused to recognize that as new Japanese territory. A military response was not then considered, but to say that nobody in the US cared is false.

Edit: Moreover, the fact that America's initial response to Japan's imperialism was diplomatic rather than military undermines the narrative that America was just using Japan as a pretext to cover for America's own imperial ambition. America, far from leaping at an opportunity to invade Asia, was committed to a diplomatic resolution of the matter until Japan's imperial ambitions drove it to directly attack the American military, forcing America to respond in kind.


> Moreover, a hallucination is a pathology. It's something that happens when systems are not working properly.

> When an LLM fabricates a falsehood, that is not a malfunction at all. The machine is doing exactly what it has been designed to do: guess, and sound confident while doing it.

> When LLMs get things wrong they aren't hallucinating. They are bullshitting.

A very important distinction, and again it shows the marketing bias that makes these systems seem different from what they are.


If we want to be pedantic about language, they aren't bullshitting. Bullshitting implies an intent to deceive, whereas LLMs are simply trying their best to predict text. Nobody gains anything from using terms closely related to human agency and intentions.


Plenty of human bullshitters have no intent to deceive. They just state conjecture with confidence.


The authors of this website have published one of the famous books on the topic[0] (along with a course), and their definition is as follows:

"Bullshit involves language, statistical figures, data graphics, and other forms of presentation intended to persuade by impressing and overwhelming a reader or listener, with a blatant disregard for truth and logical coherence."

It does not imply an intent to deceive, just a disregard for whether the BS is true or not. In this case, I see how the definition can apply to LLMs in the sense that they are just doing their best to predict the most likely response.

If you provide them with training data where the majority of inputs agree on a common misconception, they will output similar content as well.

[0]: https://www.callingbullshit.org/


The authors have a specific definition of bullshit that they contrast with lying. In their definition, lying involves intent to deceive; bullshitting involves not caring if you’re deceiving.

Lesson 2, The Nature of Bullshit: “BULLSHIT involves language or other forms of communication intended to appear authoritative or persuasive without regard to its actual truth or logical consistency.”


> implies an intent to deceive

Not necessarily; see H. G. Frankfurt, "On Bullshit".


LLMs are always bullshitting, even when they get things right, as they simply do not have any concept of truthfulness.


They don't have any concept of falsehood either, so this is very different from a human making things up with the knowledge that they may be wrong.


I think the first part of that statement requires more evidence or argumentation, especially since models have shown the ability to practice deception. (You are right that they don't _always_ know what they know.)


But sometimes when humans make things up, they also don't know that they may be wrong. It's like the reference to "known unknowns" and "unknown unknowns", or Dunning-Kruger personified. Basically you have three categories:

(1) Liars know something is false and have an intent to deceive (LLMs don't do this)

(2) Bullshitters may not know/care whether something is false, but they are aware they don't know

(3) Bullshitters may not know something is false, because they don't know all the things they don't know

Do LLMs fit better in (2) or (3)? Or both?


It's super interesting.

There are two levels...

The pZombie-type level, where we look at the LLM as if it were a black box and simply account for its behavior. At this level LLMs claim to have knowledge and also claim knowledge of their limited knowledge ("I can't actually taste"). So approached from this direction we are in (2): they have awareness that there are some things they don't know, but this awareness doesn't prevent them from pretending to that knowledge.

If we consider it from the perspective of knowing what's happening inside LLMs, then I think the picture is different. The LLM is doing next-word prediction with constant compute time per token - the algorithm is quite clear. We know this is true because it runs on llama.cpp or mlx on our MacBooks as well as on the farms of B200s that we fear will destroy the atmosphere. So LLMs don't have any actual operational knowledge of the logic of their utterances (Dunning-Kruger, Dunning-Kruger...).

What I mean is that the LLM can't and isn't analysing what it says; it's just responding to stimulus. Humans do this as well - it's easy to just chatter away to other people like a canary - but humans can also analyse what they are saying and strategically manipulate the messages they create. So I would say that LLMs cannot be concerned about what they do or don't know - the concern rests with us when we challenge them (or not) by asking "how can you know that chocolate tastes better than strawberry - you have never tasted either?"
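To illustrate what I mean by next-word prediction with constant compute time per token, here is a rough sketch of the decoding loop. forward() is a made-up stand-in for a real transformer pass (it just returns random scores here); the only point is the shape of the loop - the same amount of work for every token, and no step where the model goes back and checks whether what it already said is true.

  import random

  VOCAB = ["the", "cat", "sat", "on", "mat", "."]

  def forward(context):
      # stand-in for one transformer forward pass: fixed work per call,
      # returns a score for every vocabulary token (random here, learned in a real model)
      rng = random.Random(len(context))
      return {tok: rng.random() for tok in VOCAB}

  def generate(prompt, n_tokens):
      context = list(prompt)
      for _ in range(n_tokens):
          scores = forward(context)               # same compute every step
          token = max(scores, key=scores.get)     # emit the likeliest token
          context.append(token)                   # nothing ever re-reads or verifies the output
      return " ".join(context)

  print(generate(["the"], 6))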


But you can combine them with something producing truth, such as a theorem prover.
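As a rough sketch of what that combination could look like: a proposer stands in for the LLM, and an external checker verifies every guess before it is accepted. The checker below is plain substitution into an equation rather than a real theorem prover, and every name is made up for illustration.

  import random

  def propose(attempts=200):
      # LLM stand-in: fluent, confident, but unverified guesses
      for _ in range(attempts):
          yield random.randint(-10, 10)

  def check(x):
      # the "something producing truth": cheap, mechanical verification
      return x * x - 5 * x + 6 == 0

  # accept only what passes verification; the generator alone is never trusted
  verified = next((x for x in propose() if check(x)), None)
  print(verified)   # 2 or 3 once a guess checks out, else None

The asymmetry is the whole point: checking a candidate is cheap and mechanical, while generating one is not, so only verified output ever leaves the system.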


If you make an LLM whose design goal is to state "I do not know" for any answer that is not directly in its training set, then all of the above statements no longer hold.

