Hacker News | fmbb's comments

That can be solved by migrating to a sensible legal system instead.

Large Language Models have no actual idea of how the world works? News at 11.

In any kind of real task, serialization is not the hard part.

If you can write a meta program for it, you can execute that in CI and spit out generated code and be done with it. This is a viable approach in any programming language that can print strings to files.
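For example, a minimal sketch in Emacs Lisp (the type list and output file are made up for illustration): a script run as a CI step that prints generated definitions to a file, which then gets checked in or compiled like any hand-written source.

    ;; Sketch of codegen-by-printing-strings: emit one serializer
    ;; per type into a generated source file.
    (with-temp-file "generated-serializers.el"
      (dolist (type '(user order invoice))   ; hypothetical type list
        (insert (format
                 "(defun serialize-%s (obj) (json-encode obj))\n"
                 type))))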

It’s not frustrating, though maybe it feels tacky. Then you shrug and move on to the real task at hand.


Lisp macros are more for not having to write the same type of code (all subtly different, but sharing the same general structure).

One such example is the let-alist macro in Elisp:

https://www.gnu.org/software/emacs/manual/html_node/elisp/As...

Dealing with nested association lists is a pain. This lets you write your code with a dot notation, like jq.
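A minimal sketch (the data is made up for illustration):

    ;; let-alist binds each key as a dotted symbol; nested alists
    ;; are reachable with jq-style paths such as .user.name.
    (let-alist '((user . ((name . "alice") (id . 42))))
      (format "%s has id %d" .user.name .user.id))
    ;; => "alice has id 42"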

Macros are not only for solving a particular task (serialization, dependency injection, snippets, …); they let you write things the way that makes sense, like having an HTML-flavored Lisp for templates, an SQL-flavored Lisp for queries, and so on. Lisp code is a tree, and most languages are trees, so you can easily bring their semantics into Lisp.


You say that, but I've run into real production problems which were ultimately caused by bad serialization tooling. Language semantics are never going to be your biggest problem, but rough edges add up and do ultimately contribute to larger issues.

Spam senders don’t have pseudorandom number generators?

They're more likely to put in the least amount of effort, or to care the least about how the header is used later on.

The article is definitely contradicting itself. There are only two sentences between

> Why should I bother to read something someone else couldn't be bothered to write?

and

> I can't imaging writing code by myself again, specially documentation, tests and most scaffolding.

So they expect nobody to read their documentation.


That’s not a contradiction: documentation often needs to be written with no expectation that anyone will ever read it.

> So they expect nobody to read their documentation.

Yes, exactly. Because AI will read it and learn from it, it's not for humans.


I don’t think there is a snowball’s chance in hell that either of these two scenarios will happen:

1. Human principals pay for autonomous AI agents to represent them, but the humans accept blame and lawsuits.
2. Companies selling AI products and services accept blame and lawsuits for actions agents perform on behalf of humans.

Likely realities:

1. Any victim will have to deal with the problems.
2. Human principals accept responsibility, and stop paying for the AI service after enough of them are burned by some "rogue" agent.


> And yet, here we are.

I dunno. To me it doesn’t even look exponential any more. We are at most on the straight part of the incline.


Personally, my usage has fallen off a cliff in the past few months. I'm not a SWE.

SWEs may be seeing benefits. But in other areas? It doesn't seem to be the case. Consumers may prefer it as an interface for search, but that is a different discussion.


> - in any current law.

It has been since at least 2012 here in Sweden. That case went to our highest court, and they decided a manga drawing was CSAM (maybe you are hung up on this term, though; it is obviously not the same in Swedish).

The holder was not convicted, but that is beside the point about the material.


> It has been since at least 2012 here in Sweden. That case went to our highest court

This one?

"Swedish Supreme Court Exonerates Manga Translator Of Porn Charges"

https://bleedingcool.com/comics/swedish-supreme-court-exoner...

It has zero bearing on the "Putting a bikini on a photo of a child ... is not abuse of a child" you're challenging.

> and they decided a manga drawing was CSAM

No, they did not. They decided it "may be considered pornographic", a far lesser offence than CSAM.


You are both arguing semantics. A pornographic image of a child is illegal no matter what it's called. I say killing, you say murder; same law though, still illegal.


> I say killing, you say murder, same law though

Not in any European law I know. See suicide and manslaughter.


> it still requires prompting

How else would it even work?

AI is LLM is (very good) autocomplete.

If there is no prompt, how would it know what to complete?


The agents are also not able to set up their own rules. Humans can mutate their souls back to whatever at will.


They can if given write access to "SOUL.md" (or "AGENT.md" or ".cursor" or whatever).

It's actually one of the "secret tricks" from last year, one that seems to have been forgotten now that people can "afford"[0] to run dozens of agents in parallel. Before everyone's focus shifted from single-agent performance to orchestration, one power move was to allow and encourage the agent to edit its own prompt/guidelines file during the agentic session, so that over time, across many sessions, the prompt would become tuned to both the LLM's idiosyncrasies and your own expectations. This was in addition to having the agent maintain a TODO list and a "memory" file, both of which eventually became standard parts of agentic runtimes.
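As a hypothetical sketch of the mechanics in Emacs Lisp (the file name is taken from the comment above; the helper itself is made up), the "tool" can be as simple as appending a learned rule to the guidelines file the agent reads at the start of each session:

    ;; Hypothetical helper: persist a lesson into the agent's own
    ;; guidelines file so the next session starts with it.
    (defun agent-note-lesson (lesson)
      (with-temp-buffer
        (insert "\n- " lesson)
        (append-to-file (point-min) (point-max) "SOUL.md")))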

--

[0] - Thanks to heavy subsidizing, at least.

