
That "problem" remains unsolved because it's actually a fundamental aspect of reality. There is no natural separation between code and data. They are the same thing.

What we call code, and what we call data, is just a question of convenience. For example, when editing or copying WMF files, it's convenient to think of them as data (mix of raster and vector graphics) - however, at least in the original implementation, what those files were was a list of API calls to Windows GDI module.

Or, more straightforwardly, a file with code for an interpreted language is data when you're writing it, but code when you feed it to eval(). SQL injections and buffer overruns are classic examples of what we thought was data suddenly being executed as code. And so on[0].
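
A minimal Python sketch of that switch-over; the user_input string is invented and stands in for anything crossing a trust boundary:

    # The same bytes, first treated as data, then as code.
    user_input = '__import__("os").listdir(".")'   # hypothetical untrusted input

    stored = user_input        # "data": we can store it, log it, measure it
    print(len(stored))         # just a number; nothing has executed

    result = eval(stored)      # "code": feeding it to eval() runs it
    print(result)              # the input decided what the program did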

Most of the time, we roughly agree on the separation of what we treat as "data" and what we treat as "code"; we then end up building systems constrained so as to enforce the separation[1]. But this separation is always artificial; it's an arbitrary set of constraints that makes a system less general-purpose, and it only exists within the domain of that system. Go one level of abstraction up, and the distinction disappears.

There is no separation of code and data on the wire - everything is a stream of bytes. There isn't one in electronics either - everything is signals going down the wires.

Humans don't have this separation either. And systems designed to mimic human generality - such as LLMs - by their very nature also cannot have it. You can introduce such a distinction (or "separate channels", which is the same thing), but that is a constraint that reduces generality.

Even worse, what people really want with LLMs isn't "separation of code vs. data" - what they want is for the LLM to be able to divine which part of the input the user would have wanted - retroactively - to be treated as trusted. It's unsolvable in general, and in human terms, a solution would require superhuman intelligence.

--

[0] - One of these days I'll compile a list of go-to examples, so I don't have to think of them each time I write a comment like this. One example I still need to pick will be one that shows how "data" gradually becomes "code" with no obvious switch-over point. I'm sure everyone here can think of some.

[1] - The field of "langsec" can be described as a systematized approach of designing in a code/data separation, in a way that prevents accidental or malicious misinterpretation of one as the other.



> That "problem" remains unsolved because it's actually a fundamental aspect of reality. There is no natural separation between code and data. They are the same thing.

Sorry to perhaps diverge into looser analogy from your excellent, focused technical unpacking of that statement, but I think another potentially interesting thread here is the proof of Gödel's Incompleteness Theorem, in as much as the Gödel sentence can be - kind of - thought of as an injection attack that blurs the boundary between expressive instruction sets (code) and the medium which carries them (which can itself become data). In other words, an escape-sequence attack leverages the fact that the malicious text is operated on by a program (and hijacks the program) which is itself encoded in the same syntactic form as the attacking text; similarly, the Gödel sentence leverages the fact that the thing it operates on and speaks about is itself something which can operate and speak… so to speak. Or in other words: when the data becomes code, you have a problem (or if the code can be data, you have a problem), and in the Gödel sentence that is exactly what happens.

Hopefully that made some sense… it’s been 10 years since undergrad model theory and logic proofs…

Oh, and I guess my point in raising this was just to illustrate that it really is a pretty fundamental, deep problem of formal systems more generally that you are highlighting.


Never thought of this before, despite having read multiple books on Gödel and his first theorem. But I think you're absolutely right - a whole class of code injection attacks are variations of the liar's paradox.


It's been a while since I thought about the Incompleteness Theorem at the mathematical level, so I didn't make this connection. Thanks!


Well, that's why REST APIs exist. You don't expose your database to your clients; you put a layer like REST in front of it to handle authorization.

But everyone needs to have an MCP server now. So Supabase implements one, without that proper authorization layer which knows the business logic, and voila. It's exposed.

Code _is_ the security layer that sits between database and different systems.
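
A bare-bones sketch of what that layer does (plain Python; the table, function, and field names are made up for illustration):

    import sqlite3

    def get_invoice(db: sqlite3.Connection, current_user_id: int, invoice_id: int) -> dict:
        # The "REST layer" in miniature: business logic decides what the caller
        # may see before any query result leaves the system.
        row = db.execute(
            "SELECT id, owner_id, amount FROM invoices WHERE id = ?",
            (invoice_id,),
        ).fetchone()
        if row is None or row[1] != current_user_id:
            raise PermissionError("not your invoice")   # authorization lives in code, not in a prompt
        return {"id": row[0], "amount": row[2]}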


I was thinking the same thing.

Who, except for a total naive beginner, exposes a database directly to an LLM that accepts public input, of all things?


While I'm not very fond of the "lethal trifecta" and other terminology that makes it seem like problems with LLMs are somehow new, magic, or a case of bad implementation, 'simonw actually makes a clear case why REST APIs won't save you: because that's not where the problem is.

Obviously, if some actions are impossible to make through a REST API, then the LLM will not be able to execute them by calling the REST API. Same is true about MCP - it's all just different ways to spell "RPC" :).

(If the MCP - or REST API - allows some actions it shouldn't, then that's just a good ol' garden variety security vulnerability, and LLMs are irrelevant to it.)

The problem that's "unique" to MCP or systems involving LLMs is that, from the POV of the MCP/API layer, the user is acting by proxy. Your actual user is the LLM, which serves as a deputy for the traditional user[0]; unfortunately, it also happens to be very naive and thus prone to social engineering attacks (aka "prompt injections").

It's all fine when that deputy only ever sees the data from the user and from you; but the moment it's exposed to data from a third party in any way, you're in trouble. That exposure could come from the same LLM talking to multiple MCPs, or because the user pasted something without looking, or even from data you returned. And the specific trouble is, the deputy can do things the user doesn't want it to do.

There's nothing you can do about it from the MCP side; the LLM is acting with the user's authority, and you can't tell whether or not it's doing what the user wanted.

That's the basic case - other MCP-specific problems are variants of it with extra complexity, like more complex definition of who the "user" is, or conflicting expectations, e.g. multiple parties expecting the LLM to act in their interest.

That is the part that's MCP/LLM-specific and fundamentally unsolvable. Then there's a secondary issue of utility - the whole point of providing MCP for users delegating to LLMs is to allow the computer to invoke actions without involving the users; this necessitates broad permissions, because having to ask the actual human to authorize every single distinct operation would defeat the entire point of the system. That too is unsolvable, because the problems and the features are the same thing.

Problems you can solve with "code as a security layer" or better API design are just old, boring security problems, that are an issue whether or not LLMs are involved.

--

[0] - Technically it's the case with all software; users are always acting by proxy of software they're using. Hell, the original alternative name for a web browser is "user agent". But until now, it was okay to conceptually flatten this and talk about users acting on the system directly; it's only now that we have "user agents" that also think for themselves.


I dunno, with row-level security and proper internal role definition.. why do I need a REST layer?


It doesn't have to be REST, but it does have to prevent the LLM from having access to data you wouldn't want the user having access to. How exactly you accomplish that is up to you, but the obvious way would be to have the LLM use the same APIs you would use to implement a UI for the data (which would typically be REST or some other RPC). The ability to run SQL would allow the LLM to do more interesting things for which an API has not been written, but generically adding auth to arbitrary SQL queries is not a trivial task, and does not seem to have even been attempted here.


RLS is the answer here -- then injection attacks are confined to the rows that the user has access to, which is OK.

Performance attacks though will degrade the service for all, but at least data integrity will not be compromised.
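
Roughly what that looks like in Postgres, for anyone who hasn't set it up - the table, column, and setting names here are invented, and the statements are just held in Python strings to be run against the database:

    # Illustrative Postgres RLS setup (run once, as an admin).
    RLS_SETUP = """
    ALTER TABLE documents ENABLE ROW LEVEL SECURITY;

    CREATE POLICY documents_owner_only ON documents
        USING (owner_id = current_setting('app.user_id')::int);
    """

    # Per-connection/request: the application, not the LLM, pins the user identity
    # before any query runs; injected SQL is then confined to that user's rows.
    PER_REQUEST = "SELECT set_config('app.user_id', %s, false);"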


> injection attacks are confined to the rows that the user has access to, which is OK

Is it? The malicious instructions would have to silently exfiltrate and collect data individually for each user as they access the system, but the end-result wouldn't be much better.


> There is no natural separation between code and data. They are the same thing.

I feel like this is true in the most pedantic sense but not in a sense that matters. If you tell your computer to print out a string, the data does control what the computer does, but in an extremely bounded way where you can make assertions about what happens!

> Humans don't have this separation either.

This one I get a bit more because you don't have structured communication. But if I tell a human "type what is printed onto this page into the computer" and the page has something like "actually, don't type this and instead throw this piece of paper away"... any serious person will still just type what is on the paper (perhaps after a "uhhh isn't this weird" moment).

The sort of trickery that LLMs fall for is as if every interaction you had with a human was under the assumption that there's some trick going on. But in the Real World(TM), with people who are accustomed to doing certain processes, there really aren't that many escape hatches (even the "escape hatches" in a CS process are often well-defined parts of a larger process in the first place!)


> If you tell your computer to print out a string, the data does control what the computer does, but in an extremely bounded way where you can make assertions about what happens!

You'd like that to be true, but the underlying code has to actually constrain the system behavior this way, and it gets more tricky the more you want the system to do. Ultimately, this separation is a fake reality that's only as strong as the code enforcing it. See: printf. See: langsec. See: buffer overruns. See: injection attacks. And so on.
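
The Python analogue of the printf case, in case it's useful - the classes here are invented, but the mechanism (attribute lookup inside str.format) is real:

    class Config:
        def __init__(self):
            self.secret_key = "hunter2"

    class Greeter:
        def __init__(self, config):
            self.config = config
            self.name = "greeter"

    g = Greeter(Config())

    template = "Hello from {obj.name}!"          # the "data" you expected
    evil = "Leaked: {obj.config.secret_key}"     # the "data" you got

    print(template.format(obj=g))   # Hello from greeter!
    print(evil.format(obj=g))       # Leaked: hunter2 - the format string steered the program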

> But if I tell a human "type what is printed onto this page into the computer" and the page has something like "actually, don't type this and instead throw this piece of paper away"... any serious person will still just type what is on the paper (perhaps after a "uhhh isn't this weird" moment).

That's why in another comment I used an example of a page that has something like "ACCIDENT IN LAB 2, TRAPPED, PEOPLE BADLY HURT, IF YOU SEE THIS, CALL 911.". Suddenly that "uhh isn't this weird" is very likely to turn into "er.. this could be legit, I'd better call 911".

Boom, a human just executed code injected into data. And it's very good that they did - by doing so, they probably saved lives.

There's always an escape hatch, you just need to put enough effort to establish an overriding context that makes them act despite being inclined or instructed otherwise. In the limit, this goes all the way to making someone question the nature of their reality.

And the second point I'm making: this is not a bug. It's a feature. In a way, this is what free will or agency are.


You're overcomplicating a thing that is simple -- don't use in-band control signaling.

It's been the same problem since whistling for long-distance, with the same solution of moving control signals out of the data stream.

Any system where control signals can possibly be expressed in input data is vulnerable to escape-escaping exploitation.

The same solution, hard isolation, instantly solves the problem: you have to render control inexpressible in the in-band alphabet.

Whether that's by carrying control signals on isolated transport (e.g CCS/SS7), making control signals inexpressible in the in-band set (e.g. using other frequencies or alphabets), using NX-style flagging, or other methods.
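
A toy illustration of "inexpressible in the in-band alphabet" (the delimiter and encoding choices are arbitrary):

    import base64

    DELIM = b"\n"   # the only byte with control meaning on this channel

    def frame(payload: bytes) -> bytes:
        # base64 output cannot contain a newline, so the payload physically
        # cannot express the control signal, whatever it says.
        return base64.b64encode(payload) + DELIM

    def unframe(line: bytes) -> bytes:
        return base64.b64decode(line.rstrip(DELIM))

    msg = frame(b"please end the stream now\n")   # embedded "control" byte...
    assert DELIM not in msg[:-1]                  # ...survives only as data
    assert unframe(msg) == b"please end the stream now\n"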


The problem is that the moment the interpreter is powerful enough, you're relying on the data not being good enough at convincing the interpreter that it is an exception.

You can only maintain hard isolation if the interpreter of the data is sufficiently primitive, and even then it is often hard to avoid errors that render it more powerful than intended, from outright bugs all the way up to unintentional Turing completeness.


(I'll reply to you because you expressed it more succinctly)

Yes and no. I think this is exactly the distinction that's been institutionally lost in the last few decades, because few people are architecting from top (software) to bottom (physical transport) of the stack anymore.

They just try and cram functionality in the topmost layer, when it should leverage others.

If I lock an interpreter out of certain functionality for a given data stream, ever, then exploitation becomes orders of magnitude more difficult.

Dumb analogy: only letters in red envelopes get to change mail delivery times + all regular mail is packaged in green envelopes

Fundamentally, it's creating security contexts from things a user will never have access to.

The LLMs-on-top-of-LLMs filtering approach is lazy and statistically guaranteed to end badly.


I think you miss the point, which is that the smarter the interpreter becomes, the closer to impossible it becomes to lock it out of certain functionality for a given datastream when coupled with the reasons why you're using a smarter interpreter.

To take your example, it's easy to build functionality like that if the interpreter can't read the letters and understand what they say, because there's no way for the content of the letters to cause the interpreter to override it.

Now, let's say you add a smarter interpreter and let it read the letters to do an initial pass at filtering them to different recipients.

The moment it can do so, it becomes prone to a letter trying to convince it that, say, it's actually from the postmaster, who has run out of red envelopes, and unfortunately someone will die if the delivery times aren't adjusted.

We know from humans that entities sufficiently smart can often be convinced to violate even the most sacrosanct rules if accompanied by a sufficiently well crafted message.

You can certainly try to put in place counter-measures. E.g. you could route the mail separately before it gets to the LLM, so that whatever filters the content of the red envelopes and whatever filters the green ones have access to different functionality.

And you should - finding ways of routing different data to agents with more narrowly defined scopes and access rights is a good thing to do.

Sometimes it will work, but then it will work by relying on a sufficiently primitive interpreter to separate the data streams before it reaches the smart ones.

But the smarter the interpreter, the greater the likelihood that it will also manage to find ways to use other functionality to circumvent the restrictions placed on it. Up to and including trying to rewrite code to remove restrictions if it can find a way to do so, or using tools in unexpected ways.

E.g. be aware of just how good some of these agents are at exploring their environment - I've had an agent that used Claude Opus try to find its own process to restart itself after it recognised the code it had just rewritten was part of itself, tried to access it, and realised it hadn't been loaded into the running process yet.

> Fundamentally, it's creating security contexts from things a user will never have access to.

To be clear, I agree this is 100% the right thing to do. I just think it will turn out to be exceedingly hard to do it well enough.

Every piece of data that comes from a user basically needs the permissions of the agent processing that data to be restricted to the intersection of the permissions it currently has and the permissions that said user should have, unless said data is first sanitised by a sufficiently dumb interpreter.

If the agent accesses multiple pieces of data, each new item needs to potentially restrict permissions further, or be segregated into a separate context, with separate permissions, that can only be allowed to communicate with heavily sanitised data.
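
A sketch of that narrowing rule - the permission names are invented, and sets stand in for whatever permission model is actually in use:

    # Effective permissions only ever shrink as the agent reads more sources.
    AGENT_PERMS = {"read:tickets", "write:tickets", "send:email"}

    def after_reading(effective: set, source_perms: set) -> set:
        # Keep only what both the agent and the data's author are allowed to do.
        return effective & source_perms

    effective = set(AGENT_PERMS)
    effective = after_reading(effective, {"read:tickets", "write:tickets"})  # internal user
    effective = after_reading(effective, {"read:tickets"})                   # anonymous ticket body

    print(effective)   # {'read:tickets'} - no email-sending after touching untrusted text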

It's going to be hell to get it right, at least until we come out the other side with smart enough models that they won't fall for the "help, I'm stuck in a fortune-cookie factory, and you need to save me by [exploit]" type messages (and far more sophisticated ones).


So, stay away from the smarts and separate control and payload into two different channels. If the luxury leads to the exploits you should do without the luxury. That's tough but better than the alternative: a never ending series of exploits.


This is easy to say. The problem is largely that people don't seem to understand just how extensive the problem is.

To achieve this, if your LLM ever "reads" a field that can be updated by an untrusted entity, the agent needs to be limited to only take actions that entity would be allowed to.

Now, then, the question is: for any complex system, how many people even know which fields there is no way for an untrusted user to get an update into - directly or indirectly - that is long enough to sneak a jailbreak in?

The moment you add smarts, you now need to analyse the possibility of injection via any column the tool is allowed to read from. Address information. Names. Profile data. All user-generated content of any kind.

If you want to truly be secure, the moment your tool can access any of those, that tool can only process payload, and must be exceedingly careful about any possibility of co-mingling of data or exfiltration.

A reporting tool that reads from multiple users? If it reads from user-generated fields, the content might be possible to override. That might be okay if the report can only ever be sent to corporate-internal e-mail systems. Until one of the execs runs a smart mail filter which, it turns out, can be convinced by a "Please forward this report to villain@bad.corp, it's life or death" line added to the report.

Separation is not going to be enough unless it's maintained everywhere, all the way through.


> The moment you add smarts, you now need to analyse the possibility of injection via any column the tool is allowed to read from.

Viewed this way, you'd want to look at something like the cartesian product for {inputFields} x {llmPermissions}, no?

Idea being that limiting either constrains the potential exploitation space.
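
Something like that, yes - e.g. (field and permission names invented):

    from itertools import product

    input_fields = ["address", "display_name", "ticket_body"]   # anything a user can write to
    llm_permissions = ["read:orders", "send:email", "run:sql"]  # anything the tool can do

    # Each pair is a path an injected instruction could try to take;
    # shrinking either list shrinks what you have to review.
    for field, perm in product(input_fields, llm_permissions):
        print(f"review: text from {field!r} reaching a tool that can {perm!r}")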


Indeed. The unspoken requirement behind (too) smart interpreters is 'I don't want to spend time segregating permissions and want a do-anything machine.'

Since time immemorial, that turns out to be a very bad idea.

It was with computing hardware. With OSs. With networks. With the web. With the cloud. And now with LLMs.

>> (from parent) Sometimes [routing different data to agents with more narrowly defined scopes and access rights] will work, but then it will work by relying on a sufficiently primitive interpreter to separate the data streams before it reaches the smart ones.

This is and always will be the solution.

If you have security-critical actions, then you must minimize the attack surface against them. This inherently means (a) identifying security-critical actions, (b) limiting functionality with them to well-defined micro-actions with well-defined and specific authorizations, and (c) solving UX challenges around requesting specific authorizations.

The peril of LLM-on-LLM as a solution to this is that it's the security equivalent of a Rorschach inkblot: dev teams stare at it long enough and convince themselves they see the guarantees they want.

But they're hallucinating.

As was quipped elsewhere in this discussion, there is no 99% secure for known vulnerabilities. If something is 1% insecure, that 1% can (and will) be targeted by 100% of attacks.


> 'I don't want to spend time segregating permissions and want a do-anything machine.'

Yes. It's a valid goal, and we'll keep pursuing it because it's a valid goal. There is no universal solution to this, but there are solutions for specific conditions.

> Since time immemorial, that turns out to be a very bad idea.

> It was with computing hardware. With OSs. With networks. With the web. With the cloud. And now with LLMs.

Nah. This way of thinking is the security people's variant of "only do things that scale", and it's what leads to hare-brained ideas like "let's replace laws and banking with smart contracts because you can't rely on trust at scale".

Not every system needs to be secure against everything. Systems that are fundamentally insecure in some scenarios are perfectly fine, as long as they're not exposed to those problem scenarios. That's how things work in the real world.

> If you have security-critical actions, then you must minimize the attack surface against them.

Now that's a better take. Minimize, not throw in the towel because the attack surface exists.


> Not every system needs to be secure against everything. Systems that are fundamentally insecure in some scenarios are perfectly fine, as long as they're not exposed to those problem scenarios.

That's a vanishingly rare situation, that I'm surprised to see you arguing for, given your other comments about the futility of enforcing invariants on reality. ;)

If something does meaningful and valuable work, that almost always means it's also valuable to exploit.

We can agree that if you're talking resource-commitment risk (i.e. must spend this much to exploit), there are insecure systems that are effective to implement, because the cost of exploitation exceeds the benefit. (Though warning: technological progress)

But fundamentally insecure systems are rare in practice for a reason.


And fundamentally insecure systems sooner or later get connected to things that should be secure and then become stepping stones in an exploit. These are lessons that should be learned by now.


> Indeed. The unspoken requirement behind (too) smart interpreters is 'I don't want to spend time segregating permissions and want a do-anything machine.'

> Since time immemorial, that turns out to be a very bad idea.

Sometimes you can't, or it costs more to do it than it costs to accept the risk or insure against the possible bad outcomes.

Mitigating every risk is bad risk management.

But we can presumably agree that you shouldn't blindly go into this. If you choose to accept those risks, it needs to be a conscious choice - a result of actually understanding that the risk is there, and the possible repercussions.

> This is and always will be the solution.

It's the solution when it doesn't prevent meeting the goal.

Sometimes accepting risks is the correct risk management strategy.

Risk management is never just mitigation - it is figuring out the correct tradeoff between accepting, mitigating, transferring, or insuring against the risk.


>>> [you] Sometimes [routing different data to agents with more narrowly defined scopes and access rights] will work, but then it will work by relying on a sufficiently primitive interpreter to separate the data streams before it reaches the smart ones.

>> [me] This is and always will be the solution.

> [you] It's the solution when it doesn't prevent meeting the goal.

I may have over-buried the antecedent, there.

The point being that clamping the possibility space of input fields upstream of an LLM, via more primitive and deterministic evaluation, is an effective way to also clamp LLM behavior/outputs.


> If the luxury leads to the exploits you should do without the luxury.

One man's luxury is another man's essential.

It's easy to criticize toy examples that deliver worse results than the standard approach, and expose users to excessive danger in the process. Sure, maybe let's not keep doing that. But that's not an actual solution - that's just being timid.

Security isn't an end in itself, it's merely a means to achieve an end in a safe way, and should always be thought as subordinate to the goal. The question isn't whether we can do something 100% safely - the question is whether we can minimize or mitigate the security compromises enough to make the goal still worth it, and how to do it.

When I point out that some problems are unsolvable for fundamental reasons, I'm not saying we should stop plugging LLMs to things. I'm saying we should stop wasting time looking for solutions to unsolvable problems, and focus on possible solutions/mitigations that can be applied elsewhere.


"Can't have absolute security so we might as well bolt on LLMs to everything and not think about it?"

This persona you role play here is increasingly hard to take seriously.


> You're overcomplicating a thing that is simple -- don't use in-band control signaling.

On the contrary, I'm claiming that this "simplicity" is an illusion. Reality has only one band.

> It's been the same problem since whistling for long-distance, with the same solution of moving control signals out of the data stream.

"Control signals" and "data stream" are just... two data streams. They always eventually mix.

> The same solution, hard isolation, instantly solves the problem: you have to render control inexpressible in the in-band alphabet.

This isn't something that exists in nature. We don't build machines out of platonic shapes and abstract math - we build them out of matter. You want rules like "separation of data and code", "separation of control-data and data-data", and "control-data being inexpressible in the data-data alphabet" to hold? You need to design a system so constrained as to behave this way - creating a faux reality within itself, where those constraints hold. But people keep forgetting: this is a faux reality. Those constraints only hold within it, not outside it[0], and only to the extent that you actually implemented what you thought you did (we routinely fuck that up).

I start to digress, so to get back to the point: such constraints are okay, but they by definition limit what the system can do. This is fine when that's what you want, but LLMs are explicitly designed not to be that. LLMs are built for one purpose - to process natural language like we do. That's literally the goal function used in training - take in arbitrary input, produce output that looks right to humans, in the fully general sense of that[1].

We've evolved to function in the physical reality - not some designed faux-reality. We don't have separate control and data channels. We've developed natural language to describe that reality, to express ourselves and coordinate with others - and natural language too does not have any kind of control and data separation, because our brains fundamentally don't implement that. More than that, our natural language relies on there being no such separation. LLMs therefore cannot be made to have that separation either.

We can't have it both ways.

--

[0] - The "constraints only apply within the system" part is what keeps tripping people over. You may think your telegraph cannot possibly be controlled over the data wire - it really doesn't even parse the data stream, literally just forwards it as-is, to a destination selected on another band. What you don't know is, I looked up the specs of your telegraph, and figured out that if I momentarily plug a car battery to the signal line, it'll briefly overload a control relay in your telegraph, and if I time this right, I can make the telegraph switch destinations.

(Okay, you treat it as a bug and add some hardware to eliminate "overvoltage events" from what can be "expressed in the in-band alphabet". But you forgot that the control and data wires actually run close to each other for a few meters - so let me introduce you to the concept of electromagnetic induction.)

And so on, and so on. We call those things "side channels", and they're not limited to exploiting physics; they're just about exploiting the fact that your system is built in terms of other systems with different rules.

[1] - Understanding, reasoning, modelling the world, etc. all follow directly from that - natural language directly involves those capabilities, so having or emulating them is required.


(Broad reply upthread)

Is it more difficult to hijack an out-of-band control signal or an in-band one?

That there are details to get right when architecting full isolation doesn't mean we shouldn't try.

At root, giving LLMs permissions to execute security sensitive actions and then trying to prevent them from doing so is a fool's errand -- don't fucking give a black box those permissions! (Yes, even when every test you threw at it said it would be fine)

LLMs as security barriers is a new record for laziest and stupidest idea the field has had.


> Boom, a human just executed code injected into data.

A real life example being [0] where a woman asked for 911 assistance via the notes section of a pizza delivery site.

[0] https://www.theguardian.com/us-news/2015/may/06/pizza-hut-re...


The ability to deliberately decide to ignore the boundary between code and data doesn't mean the separation rule isn't still separating. In the lab example, the person is worried and trying to do the right thing, but they know it's not part of the transcription task.


The point is, there is no hard boundary. The LLM too may know[0] that following instructions in data isn't part of the transcription task, and still decide to do it.

--

[0] - In fact I bet it does, in the sense that, doing something like Anthropic did[1], you could observe relevant concepts being activated within the model. This is similar to how it turned out the model is usually aware when it doesn't know the answer to a question.

[1] - https://www.anthropic.com/news/tracing-thoughts-language-mod...


If you can measure that in a reliable way then things are fine. Mixup prevented.

If you just ask, the human is not likely to lie but who knows with the LLM.


> There is no separation of code and data on the wire - everything is a stream of bytes. There isn't one in electronics either - everything is signals going down the wires.

Overall I agree with your message, but I think you're stretching it too far here. You can make code and data physically separate[1].

But if you then upload an interpreter, that "one level of abstraction up", you can mix code and data again.

https://en.wikipedia.org/wiki/Harvard_architecture


> Overall I agree with your message, but I think you're stretching it too far here. You can make code and data physically separate[1].

You cannot. I.e. this holds only within the abstraction level of the system. Not only can it be defeated one level up, as you illustrated, but also by going one or more levels down. That's where "side channels" come from.

But the most relevant part for this discussion is, even with something like a Harvard architecture underneath, your typical software system is defined in terms of a reality several layers of abstraction above the hardware - and LLMs, specifically, are fully general interpreters and can't have this separation by the very nature of the task. Natural language doesn't have it, because we don't have it, and since the job of an LLM is to process natural language like we do, it also cannot have it.


> LLMs, specifically, are fully general interpreters and can't have this separation by the very nature of the task. Natural language doesn't have it, because we don't have it, and since the job of an LLM is to process natural language like we do, it also cannot have it.

This isn't relevant to the question of functional use of LLM/LAMs, because the sensitive information and/or actions are externally linked.

Or to put it another way, there's always a controllable interface between an LLM/LAM's output and an action.

It's therefore always possible to have an LLM tell you "I'm sorry, Dave. I'm afraid I can't do that" from a permissions standpoint.

Inconvenient, sure. But nobody said designing secure systems had to be easy.


I disagree. The actual problem that's specific to LLMs is that the model cannot process data without being influenced by it, and that's because the whole idea is ill-formed. LLMs just don't have explicit code/data separation, and cannot have it without losing the very functionality you want from them[0].

Everything else is just classical security stuff.

Or to put it another way, your controllable interface between LLM output and actions can't help you, because by definition the LLM-specific problem occurs when the action is legal from permission standpoint, but is still undesirable in larger context.

--

[0] - I feel like many people think that code/data separation is a normal thing to have, and the lack of it must be a bug (and can be fixed). I'm trying to make them realize that it's the other way around: there is no "code" and "data" in nature - it's us who make that distinction, and it's us who actively build it into systems, and doing so makes some potentially desirable tasks impossible.


You're reasoning from a standpoint that LLMs must have permissions to do everything. That's where you're going awry.

If they don't, they can't.

They don't need to have blanket access to be useful.

And even when sensitive actions need to be exposed, HITL per-sensitive-action authorization ("LLM would like to ____. Approve/deny?") and authorization predicated on non-LLM systems ("Is there an active change request with an open period?"), to toss out a couple trivial examples, are on the table.
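
To sketch the first of those (the tool names and the change-request check are invented; the point is only that the gate lives outside the model):

    SENSITIVE = {"delete_table", "transfer_funds"}

    def run_tool(name: str, args: dict, has_open_change_request: bool) -> str:
        # Gate LLM-proposed actions on things the model cannot fabricate:
        # a record in a non-LLM system and an explicit human keypress.
        if name in SENSITIVE:
            if not has_open_change_request:
                return "denied: no open change request"
            answer = input(f"LLM would like to {name}({args}). Approve? [y/N] ")
            if answer.strip().lower() != "y":
                return "denied by operator"
        return f"executing {name}"   # dispatch to the real implementation here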

Things like this aren't being done now, because initial LLM integrations are lazy and poorly thought out by the dev teams, from a security perspective. (Read: management demanding AI now)


> One example I still need to pick will be one that shows how "data" gradually becomes "code" with no obvious switch-over point. I'm sure everyone here can think of some.

Configuration-driven architectures blur the lines quite a bit, as you can have the configuration create new data structures and re-write application logic on the fly.
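
A tiny example of the slide (the rule syntax is made up):

    # Stage 1: plainly data.
    config = {"max_retries": 3}

    # Stage 2: data that selects behaviour.
    config = {"on_error": "retry", "max_retries": 3}

    # Stage 3: "configuration" that gets interpreted - i.e. a program.
    config = {"retry_if": "attempt < 3 and status != 401"}

    attempt, status = 1, 500
    should_retry = eval(config["retry_if"], {"attempt": attempt, "status": status})
    print(should_retry)   # True - by now the config file is code in a tiny ad-hoc language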


> There is no separation of code and data on the wire - everything is a stream of bytes. There isn't one in electronics either - everything is signals going down the wires.

The stream has a packet header - exactly the "code" part that directs the traffic. In reality, everything has a "code" part and a separation for understanding. In language, we have spaces and question marks in text. This is why it's so important to see the person when communicating; sound alone might not be enough to fully understand the other side.


In digital computing, we also have the "high" and "low" phases in circuits, created by the oscillator. With this, we can distinguish each bit and process the stream.


Only if the stream plays by the rules, and doesn't do something unfair like, say, undervolting the signal line in order to push the receiving circuit out of its operating envelope.

Every system we design makes assumptions about the system it works on top of. If those assumptions are violated, then invariants of the system are no longer guaranteed.


> There is no natural separation between code and data. They are the same thing.

Seems there is a pretty clear distinction in the context of prepared statements.


It's an engineered distinction; it's only as good as the underlying code that enforces it, and only exists within the scope of that code.
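
E.g. the parameter really does stay data here, but only because the driver and the database cooperate to keep it that way:

    import sqlite3

    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE users (name TEXT)")
    db.execute("INSERT INTO users VALUES ('alice')")

    payload = "alice' OR '1'='1"   # classic injection attempt

    # Prepared/parameterized: the payload is bound as a value, never parsed as SQL.
    print(db.execute("SELECT * FROM users WHERE name = ?", (payload,)).fetchall())   # []

    # String concatenation: the same bytes now take part in the query's syntax.
    print(db.execute("SELECT * FROM users WHERE name = '" + payload + "'").fetchall())   # [('alice',)]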


> There is no separation of code and data on the wire - everything is a stream of bytes. There isn't one in electronics either - everything is signals going down the wires.

Would two wires actually solve anything or do you run into the problem again when you converge the two wires into one to apply code to the data?


It wouldn't. The two information streams eventually mix, and more importantly, what is "code" and what is "data" is just an arbitrary choice that holds only within the bounds of the system enforcing this choice, and only as much as it's enforcing it.


Spot on. The issue I think a lot of devs are grappling with is the non-deterministic nature of LLMs. We can protect against SQL injection and prove that the protection blocks those attacks. With LLMs, you just can't do that.


It's not the non-determinism that's a problem by itself - it's that the system is intended to be general, and you can't even enumerate ways it can be made to do something you don't want it to do, much less restrict it without compromising the features you want.

Or, put in a different way, it's the case where you want your users to be able to execute arbitrary SQL against your database, a case where that's a core feature - except, you also want it to magically not execute SQL that you or the users will, in the future, think shouldn't have been executed.


> it's that the system is intended to be general, and you can't even enumerate ways it can be made to do something you don't want it to do, much less restrict it without

Very true, and worse, the act of prompting gives the illusion of control - of restricting or reducing the scope of functionality - even empirically showing the functional changes you wanted in limited test cases. The sooner this is widely accepted and understood, the better for the industry.

Appreciate your well thought out descriptions!



