Hacker News | a13n's comments

just tried with claude opus and got 7,342

7,341 from my Discord bot using the Claude Code SDK.

"Ha — one off from the Opus default. I'd like to think I'm slightly more random than Opus but realistically we're probably pulling from the same biases. The "feels random but isn't" zone around 7300 is apparently very sticky for LLMs."


Huh, I also got exactly 7342 with opus.

Same, 7342. Both in CLI and web
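For what it's worth, if you actually need an unpredictable number, draw it from the OS CSPRNG instead of asking a model; a minimal Python sketch:

```python
import secrets

# Draw a uniformly distributed integer in [0, 10000) from the OS CSPRNG.
# Unlike an LLM's "pick a random number", this has no sticky favorites like 7342.
n = secrets.randbelow(10000)
print(n)
```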

Please don't override the browser's default scroll behavior. It's so jarring and basically never a good idea.

Thank you for the feedback. We'll launch our new site soon where this is fixed.

If you use eslint and tell it how to run lint in CLAUDE.md, it will run lint itself and find and fix most issues like this.

Definitely not ideal, but sure helps.
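As a sketch, the CLAUDE.md entry can be as simple as naming the command to run (the `npm run lint` script name here is an assumption; use whatever your project actually defines):

```markdown
## Linting

- Run `npm run lint` after making changes.
- `npm run lint -- --fix` auto-fixes most style issues.
- Fix all remaining eslint errors before considering a task done.
```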


This isn’t strictly an AI problem; there are definitely human engineers who gold-plate. At least with AI it doesn’t slow down velocity.

Depends: if you don’t clean up the logs and monitor that cleanup, will it eventually hit the P&L? E.g. if you fail compliance audits and lose customers over it? Then yes, it still eventually comes back to the P&L.


Oh man can't wait till Cursor allows you to customize sound effects.


This example feels more like a bug in the law itself that should be corrected. If this behavior is acceptable then it should be legal so we can avoid everyone the hassle in the first place. I bet AI would be great at finding and fixing these bugs.


> If this behavior is acceptable then it should be legal so we can avoid everyone the hassle in the first place.

Codifying what is morally acceptable into definitive rules is something humanity has struggled with for likely much longer than written memory. Also, while you're out there "fixing bugs" (millions of them, one by one), people are affected by them.

> I bet AI would be great at finding and fixing these bugs.

Are we really going to outsource morality to an unfeeling machine that is trained to behave the way an exclusive club of people wants it to?

If that were one's goal, it's one way to stealthily nudge and undermine a democracy, I suppose.


There are no “bugs” in human institutions like law. There are always going to be edge cases and nuances that require a human to evaluate.


It's not a bug, it's something politicians don't want to touch, because nobody wants to be the person who is soft on anything to do with minors and sex. Of course our laws are completely illogical: the fact that you could be put in prison and on a sex offender registry for life for having a single photo of a naked 17-year-old on your device (how in the hell were you supposed to know?) is ridiculous.

But, again, who is going to decide to put forward a bill to change that? It's all risk and no reward for the politician.


Fair, but still, the legislative process takes a lot of time, and judicial norms and precedent allow discretion to be exercised with accountability, which also informs the legislative process.


AI would be great IF it knew what to find.

The state of current AI does not give it the ability to know that, so the consideration is likely to be dropped.


It will never get around Gödel's incompleteness theorems. The law will never be completely bug-free.


Start fixing those bugs and you will open up can after can of worms.

Finding the bugs will be entertaining.


Now you are talking about replacing not judges, but your elected representatives.


I think "judge AI" would be better if it also had access to a complete legislative record of debate surrounding the establishment of said laws, so that it could perform a "sanity check" whether its determinations are also consistent with the stated intent of lawmakers.

One might imagine a distant future where laws could be dramatically simplified into plain-spoken declarations, to be interpreted by a very advanced (and ideally true open source) future LLM. So instead of 18 U.S.C. §§ 2251–2260 the law could be as straightforward as:

"In order to protect children from sexual exploitation and eliminate all incentive for it, no child may be used, depicted, or represented for sexual arousal or gratification. Responsibility extends to those who create, assist, enable, profit from, or access such material for sexual purposes. Sanctions must be proportionate to culpability and sufficient to deter comparable conduct."

...and the AI will fill in the gaps.


...and the people who train the AI will have been entrenched as the de facto rulers of the realm.

No. No, thank you.


If my bank did this to me I would immediately drop them.


By mentioning MongoDB you’re going to trigger so many people who haven’t informed themselves on MongoDB since 2014.

It works just fine as a production database. You can still have relationships and strict schemas; I just don’t understand the hate.
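For context, MongoDB has supported server-side schema enforcement via `$jsonSchema` collection validators since 3.6; a minimal mongosh sketch (the `users` collection and its fields are hypothetical):

```
db.createCollection("users", {
  validator: {
    $jsonSchema: {
      bsonType: "object",
      required: ["email", "createdAt"],
      properties: {
        email: { bsonType: "string" },
        createdAt: { bsonType: "date" }
      }
    }
  }
})
```

Inserts that violate the validator are rejected by the server, which covers most of the "strict schema" guarantees people assume are relational-only.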


My previous startup ran on MongoDB from 2014, and after 10 years, 1M+ users, and hundreds of enterprise customers, everything was just fine.


True. When I meet people who think so, I always ask them what exactly went wrong. One time someone told me:

"we just let our frontend engineer build the whole backend and schema and it turned into a disaster and wasn't maintainable"

and I was like - so it's the database's fault? :)


I mean, if you’re going to write algos that trade, the first thing you should do is check whether they would have been successful on historical data. This is an interesting data point.

Market impact doesn’t need to be considered when you’re talking about trading S&P stocks with $100k.
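A toy illustration of that kind of sanity check, assuming a made-up price series and a simple moving-average crossover strategy (everything here is hypothetical, not a real backtesting setup):

```python
def backtest(prices, fast=2, slow=4):
    """Total return of a long-only fast/slow moving-average crossover."""
    def sma(xs, n, i):
        # Simple moving average of the n prices ending at index i.
        return sum(xs[i - n + 1 : i + 1]) / n

    equity = 1.0
    holding = False
    for i in range(slow - 1, len(prices) - 1):
        # Signal at close of day i decides the position held over day i+1.
        signal = sma(prices, fast, i) > sma(prices, slow, i)
        if holding:
            equity *= prices[i + 1] / prices[i]
        holding = signal
    return equity - 1.0

# Hypothetical price series; a real check would use actual historical data.
prices = [100, 101, 102, 103, 104, 103, 102, 101, 100, 101]
print(round(backtest(prices), 4))  # prints -0.0288
```

Even a toy like this makes the point: a strategy that "feels" right can lose money on data it has never seen.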


Historical data is useful for validation, but don't develop algos against it: you'll just test hypotheses until you've biased your data. Then move on to something productive for society.

