Remember, secure encryption, good compression, and truly random data are indistinguishable.
It's best to paste that encrypted payload into a JPG with some bullshit magic headers and upload that to a trusted Exfil pivot instead.
Or, to get SuperMarioKart.rom to work with your chromeApp-XEMU emulator to play during downtime at work, just rename it to SMB.png and email it to yourself.
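The dumbest version of the JPG trick already gets past naive magic-byte checks: just prepend a real image to the encrypted payload. A rough sketch (filenames are made up, GNU `stat` assumed):

```sh
# Prepend a real (tiny) JPEG so anything sniffing magic bytes sees an image.
# cover.jpg and payload.enc are placeholder names.
cat cover.jpg payload.enc > vacation_photo.jpg

# On the other side (which needs the same cover.jpg, or at least its size),
# strip the cover image back off to recover the payload.
skip=$(stat -c %s cover.jpg)
tail -c +$((skip + 1)) vacation_photo.jpg > payload.enc
```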
> Remember, secure encryption, good compression, and truly random data are indistinguishable.
Yes, and the only reason the bad guys get away with this is that people trust signature-based scanning at the perimeter to detect all threats.
One of the hacks I'm most proud of in my whole career was during a proof of concept at an enterprise client, where we were being deliberately obstructed by the internal IT group due to politics between their boss and the boss who sponsored our POC. For unrelated trademark reasons we were prevented by a third party from having the software on physical media, but we had a specific contractual clause agreeing to let us download it for install. So while we had been contractually engaged to provide this software and had a strict deadline to prove value, the enterprise IT group were preventing us from actually getting it through the virus-scanning firewall to install it. What to do?
The scanner looked for the signature of executable or zipped files and blocked them. It would also block any files larger than a certain size. So what I did was write two shell scripts called "shred" and "unshred". "Shred" would take any files you gave it as input, make them into a tarball, encrypt that to confuse the virus scanner and then split it up into chunks small enough to get through the firewall, and "unshred" would reverse this. This almost worked, but I found that the first chunk was always failing to transmit through the firewall. The scanner noticed some signature that openssl was putting at the front of the file when encrypting it. The solution? Change shred to add 1k of random noise to the front of the file and unshred to remove it.
Job done. Our files were transmitted perfectly (I got the scripts to check the md5sum on both sides to be sure), and though the process was slow, we could continue.
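Roughly, the idea fits in a few lines of shell. The following is a sketch of the approach rather than the original scripts; the cipher, chunk size, and passphrase handling are placeholders, and bash is assumed for the process substitution:

```sh
# Sketch only, not the original scripts. Bundle, encrypt, prepend 1K of random
# noise (so the openssl header isn't the first thing the scanner sees), then
# split into firewall-sized chunks. SHRED_PASS and the sizes are placeholders.
shred() {
  tar czf - "$@" \
    | openssl enc -aes-256-cbc -pbkdf2 -pass env:SHRED_PASS \
    | cat <(head -c 1024 /dev/urandom) - \
    | split -b 512k - chunk.
  md5sum chunk.* > chunks.md5    # checked again on the far side
}

# Reverse it: verify the chunks, reassemble, drop the 1K of noise, decrypt, unpack.
unshred() {
  md5sum -c chunks.md5 || return 1
  cat chunk.* \
    | tail -c +1025 \
    | openssl enc -d -aes-256-cbc -pbkdf2 -pass env:SHRED_PASS \
    | tar xzf -
}
```

Ship the chunks plus chunks.md5 through whatever channel survives the firewall, then run the reverse step on the other side.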
The funny thing was that the POC was a bake-off against another (more established) vendor, and they couldn't get their software installed until they had done a couple of weeks of trench warfare with enterprise IT. "To keep things fair", the people organising the POC decided to delay to give them time to install, and eventually the person blocking us from installing was persuaded to change their mind (by being fired), so "shred" and "unshred" could be retired.
I did basically the same thing just a few months back, to get some important CLI tools past the company firewall.
Crazy that this is easier than dealing with the bullshit politics just to get some essential tools to do my job. German public service is a joke. I've since quit.
Good compression should still be cryptographically distinguishable from true randomness, right?
Sure, the various measures of entropy should be high, but I always just assumed that compressed output would fail almost any cryptographic randomness test.
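E.g. even before running any statistical battery, the container format gives it away (a quick sketch, with gzip as an arbitrary example):

```sh
# The container announces itself: gzip output starts with the fixed magic
# bytes 1f 8b, which a megabyte of /dev/urandom output almost never will.
head -c 1000000 /dev/urandom > random.bin
gzip -c random.bin > compressed.gz

xxd -l 4 random.bin      # four essentially random bytes
xxd -l 4 compressed.gz   # 1f 8b 08 ... every time
```

And even with the headers stripped, the DEFLATE bitstream still carries block headers and Huffman table structure, so I'd expect a serious randomness test suite to flag it.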
If there is a difference, and LLMs can do one but not the other...
>By that standard (and it is a good standard), none of these "AI" things are doing any thinking
>"Does it generalize past the training data" has been a pre-registered goalpost since before the attention transformer architecture came on the scene.
Also, his remarks during the Jan 6th meandering are indiscernible from MLK's or others', but had the "violent" (no broken statues, no fires, no fatalities...?) "insurrection" been any less "welcomed", then his plagiarism would likely have been spotlighted instead.
Six fatalities, depending on who you ask and who's doing the causality calculus: three from natural causes (overstress), one drug overdose, one from natural causes the next day (suspected undiagnosed trauma during the event), and one gunshot wound.
Specifically, shuffling compression, bit-rate, encryption, and barely human-perceivable signal around mediums (x-M) to obscure the entropic/random state of any given medium, so as not to break the generally available plausible deniability from a human's perception.
You can't break Shannon's law, but you can hide the intent of whoever is behind the knocks on all the doors. It obscures which house Shannon lives in, and with whom the one who knocks wishes to communicate.
The point here is to dissipate it across enough mediums that it is indiscernible from noisy background fluctuations regardless of whether it exists at all, eventually giving general deniability to all mediums through the signal-to-noise ratio.
all security is just obscurity, eventually, where you are obscuring your private key's semi-prime's factors.
> all security is just obscurity, eventually, where you are obscuring your private key's semi-prime's factors.
This is a lazy take that obscures the definition to uselessness. It’s perpetuated by people who make insecure systems that break when the algorithm is known.
There is a vast gulf between:
- security depends on secret algorithm
- security depends on keeping a personal asymmetric key secret
The latter is trivial to change; if it leaks, it doesn't compromise the security of others using the scheme, and with perfect forward secrecy it doesn't even compromise past messages.
Please don’t repeat that mantra. You’re doing a disservice to anyone who reads it and ultimately yourself.
All security is obscurity. I think it's laughable that you believe you know what someone does just because they say this. Consider that there are many levels of knowledge about a topic, and sometimes when you get to a deeper level your conclusions, or the labels you use for things, "flip".
Understanding the differences that you outlined is so basic that a good commenter wouldn't assume the other person doesn't know the difference; they would assume a deeper point is being made.
When a commenter doesn't even know how to spell the word "steganography", it's quite safe to assume that they don't possess deeper-level knowledge and are not making any deeper point about it.
Trivial grammar/spelling mistakes are worse than running analogies into the ground without hitting the "context" button, or even the reductio ad absurdum train HN has been on lately.
Yes, my Latin half-Freudian trans-alliterations can be tempting to pick out; I had another tab open with stylometry obfuscation described, incident, and mitigated.
Also giigle's spellcheck sucks ass, and I'm tired of being gaslit about my word choice/spelling by giigle, who should know every word by now, in all languages.
>don't possess deeper level knowledge
Umm, besides error-correcting codes reducing the bitrate, compression, and random byte padding to fend off correlation/timing attacks, there is nowhere to hide data outside of the Shannon limit for information through a medium.
But it's easy to hide data you cannot perceive; and everyone being conscious of this feat/fingerprinting, even if only barely, does more to deter leaking via the second-order "chilling effect" than the aftermath does; IP theft is hard to un-approximate.
also stenography, ironically still being the only "real" signature, is still security thru obscurity with more steps; your literal stenographic signature is unique, but not preventable from duplicity, so it is un-obscurable.
If Google's "Add to Dictionary" button worked as well as their new 100+ languages, I wouldn't have felt gaslit by the same words needing to be re-googled weekly.
But you do have to admit that they know very many big, important-sounding words, go off on extremely dope tangents ("second-order chilling effects!" Fuck yeah!) AND say "giigle" instead of Google, which is a.) super cool (obviously), but I suspect there's b.) a darker reason: they are probably a rogue cryptoanarchist being hunted down by The Algorithm and are only able to survive on the streets because of their every-day-carry RF-blocking wallet and screwdriver combo and their ability to outsmart Google, because it hasn't learned all the words yet.
Good luck bro, continuing to obscure the entropic state of the x-M medium and remain plausibly deniable. Shannon in the (his?) house, mothafucka! Stenography FTW!
In the context of preventing leaks: if/when this nears ubiquity, the first ID'ing of a leak will obviously lead to the second-order effect of deterring further leaks.
>their ability to outsmart Google
It knows all the words: that is why I should not have had to remind it incessantly.
>continuing to obscure the entropic state of the x-M medium and remain plausibly deniable
Lemme draw this out, cuz you seem intimidated by simple abstractions.
Imagine a three-page PowerPoint composed of a Header, Text, and companyLogo, with no other data aside from each element's inclination from the plane.
Under the plausibility presumption that the header, text, and company logo can only sit within ~15 degrees of inclination from the plane, you only have a state-space of so many combinations, which puts a hard limit (Shannon's) on the medium's maximum signal-to-noise ratio.
Assuming people cannot collude to compare copies, they aren't going to be able to perceive subtle shifts in the inclination/position/font/inclusion/exclusion of elements.
More generally, though, the key-space needed for the LEAKER_ID won't be much larger (in magnitude) than the pool of potential leakers, plus a simple CRC for resiliency.
We are pawns, hoping to be maybe a Rook to the King by endgame.
Some think we can promote our pawns to Queens to match.
Luckily, the Jester muses!