This is an obvious backdoor attempt, as the code doesn't make sense otherwise. Yet, the attempt was far too unsubtle and underspecific for agencies such as the NSA. The payoff was low compared to the possibilities - local privilege escalations were a dime-a-dozen.
Worse, agencies such as the NSA have two missions: offence and defence. Adding in backdoors helps the offensive mission, but hurts the defensive mission, so it only makes sense if the backdoor isn't so easy to find. An obvious backdoor hurts the US far more than it helps the US. This one was too obvious.
Some ideas:
1) A script kiddie found some way to break-in and edit CVS. The entire idea being to have something to brag about. This was caught too early to be brag-worthy (breaking ancient CVS isn't something to brag about).
2) It was a warning shot from some Western agency meaning "tighten up your security".
If memory serves me right the CVS bug was originally discovered and exploited by a member of an infamous file sharing site. After descriptions(?) of that bug were leaked in underground circles, an east European hacker wrote up his own exploit for it. This second exploit was eventually traded for hatorihanzo.c, a kernel exploit, which was also a 0-day at the time.
The recipient of the hatorihanzo.c then tried to backdoor the kernel after first owning the CVS server and subsequently getting root on it.
The hatorihanzo exploit was left on the kernel.org server, but encrypted with an (at the time) popular ELF encrypting tool. Ironically the author of that same tool was on the forensic team and managed to crack the password, which turned out to be a really lame throwaway password.
And that's the story of how two fine 0-days were killed in the blink of an eye.
Not that I'm aware of, but I wish. Memory is getting hazy these days. AFAIK the kernel.org breaches were made by the kind of hackers doing it for fun and games (if you're into that kind of thing) and not the kind working for nation states. I'm sure you can (or at least, at some point could) find others who know more details at your favorite compsec conf.
> 2) It was a warning shot from some Western agency meaning "tighten up your security".
That's an interesting theory that'd certainly make for a powerful message. Has anything like that been done before, or is there any precedent for Western agencies doing these sorts of things covertly?
I can't point to any evidence, but two things to note:
A) Even on HN the temptation has come up. e.g. some comments in posts about ransomware make a similar argument for transparently damaging and self-serving actions. Three letter agencies with much more power and ability probably had people making the same arguments.
B) The payoff was extremely low compared to the possibilities. Either whoever did this was unaware of the possibilities or not really interested in a major hack. Perhaps the idea was that if this actually worked, the damage wouldn't be so big, aside from embarrassing the Linux kernel team, and when the team noticed they'd tighten up.
This theory seems so outrageously far fetched to me. Why in the world would a "friendly" intelligence agency sneak a working backdoor into a project to "teach a lesson"??
Here's what our intelligence agencies do when they decide to "teach a lesson"[1]. It doesn't include sneaking working backdoors into software. They do THAT when they plan on using the backdoors.
I'm pretty fine with the script kiddie thesis. But if we go for an intelligence agency, we have to explain why the hack was so... small. A local privilege escalation that is relatively easy to find* should be of very limited use at best. They(tm) get the ability to fake Linux kernel source and that's all they do!?
* Even if the Linux kernel folks had failed to notice the CVS hack, someone would have eventually diffed the kernel versions and found it. Assigning uid to 0 is rather obvious, and quite a lot of linters warn about an assignment where a comparison is expected.
But if they had included (for example) some sort of off-by-one buffer overflow, the hack would have been a lot less apparent. Now do that for a remote exploit, and they get way more possibilities.
The NSA used to have a defensive mission. They fully compromised their ability to do that by subverting the security of American products time and time again. The Shadow Brokers disclosure alone has completely undermined any trust anyone in the industry has for the NSA.
The NSA still has a defensive mission, and it hasn't changed. It just might not be the defensive mission you assumed it was. IIRC, it's mainly to defend US Government systems and communications from adversaries. To the extent they help with the defense of civilian systems, their goal seems to be to give them adequate security, not absolute security.
For instance, take this episode from the development of DES during the 70s:
> NSA worked closely with IBM to strengthen the algorithm against all except brute-force attacks and to strengthen substitution tables, called S-boxes. Conversely, NSA tried to convince IBM to reduce the length of the key from 64 to 48 bits. Ultimately they compromised on a 56-bit key.
> Assume that the coder meant == 0: what is he trying to enforce? If these 2 bits (_WCLONE and _WALL) are set and you are root, then the call is invalid. The bit combination is harmless (setting WALL implies WCLONE [...]), and why would you forbid it for root only?
> ...code change in the CVS copy that did not have a pointer to a record of approval. Investigation showed that the change had never been approved and, stranger yet, that this change did not appear in the primary BitKeeper repository at all...
I'll attach this here for people who read the article too quickly and think it may, somehow, have been a bug. This code was a very deliberate attack.
I would put parentheses here; I never like mixing logical operators with other kinds of operators (or even different logical operators). While it's of course entirely redundant here, it also makes the code easier to read IMO.
I think the parent's point is more convincing: why make this check only for root in the first place?
Sure, my point was that even with the proper == comparison I'd still write the (now redundant) parens because I find it more readable that way.
Actually, in languages like Rust, where type inference makes it cheap and non-verbose to declare intermediate values, I tend to avoid complicated conditions altogether. I could rewrite the provided expression like this:
let invalid_options = (options == (__WCLONE|__WALL));
let is_root = (current->uid == 0);
if invalid_options && is_root {
// ...
}
One might argue that it's overkill but I find that it's more "self-documenting"
that way. I find that the more experienced I get, the more verbose my code
becomes. Maybe it's early onset dementia, or maybe it's realizing that it's
easier to write the code than to read it.
Of course you can do that in C as well but you have to declare the boolean
variables, and in general it's frowned upon to intermingle code and variable
declarations so you'd have to add them at the beginning of the function so it
adds boilerplate etc...
It is short, errors based on human perception (here = vs ==) are good enough, it is innocent looking under syntax highlighting, it is not platform dependent, and it even passes the "irony" check. It is just the plausible deniability that is not great, but it is still defensible with a lot of bad faith.
now I'm wondering if syntax highlighting shouldn't somehow make an assignment inside an if statement (and the variants) a bright red, or something like that.
It would break these kinds of constructs, which are common.
if ((fd = open(...)) != -1) {
/* do something with fd */
} else {
perror("open");
}
The compiler outputs a warning when you have something like "if (a = b)". If that's what you mean (it sometimes is), you have to write it "if ((a = b))" to silence the warning.
Exactly my thoughts, I was about to comment on that but I was too lazy so I omitted the declaration. Obviously, this is HN, so someone had to point it out ;)
Anyways, I am a bit torn about the second option. I like the idea of putting the call inside the if clause as it makes for a very explicit error handling but the uninitialized declaration is ugly. What I do in practice tends to depend on the situation but it is rarely satisfying.
Your last suggestion would be ideal, but as you said, it is invalid code unfortunately.
Maybe this
for (int fd = open(...); fd != -1; fd = -1) {
/* do stuff */
}
Maybe it is a smoke screen, put in something likely to be found and something that won't. Everyone pats themselves on the back for finding the obvious one...
GCC warns of assignment in a conditional, even without -Wall or -pedantic. I don't know when it started doing that, but it sticks out like a sore thumb today; maybe things were different in 2003?
It only warns if the assignment doesn't have an extra pair of parentheses. These were added in this case, to silence the warning (so the attack would not be noticed). The parentheses are also needed in this case to get the precedence right, but they wouldn't be needed if '==' were written, so anyone coding this by accident would immediately be warned of the mistake.
I admit that I read the code and completely overlooked the single equals sign. Makes me wonder why it would be so easy to change the userid. Shouldn’t there be some safeguards in place to stop the userid from being updated from unsafe places.
These days it'd be harder to write code which is "easy to overlook" -- the innocent version would be something like
if (/* ... */ || current_euid() == GLOBAL_ROOT_KUID)
But the "backdoor" version would fail to compile (current_euid() is a macro but it's written to not be a permitted lvalue). You would need to write something more obvious like the following (and kernel devs would go "huh?" upon seeing the usage of current_cred() in this context)
if (/* ... */ || current_cred()->euid = GLOBAL_ROOT_KUID)
In addition, comparisons against UIDs directly are no longer as common because of user namespaces and capabilities -- correct code would be expected to look more like
if (/* ... */ || capable(CAP_SYS_ADMIN))
Which you can't write as an "accidental" exploit. And since most permission checks these days use capabilities rather than raw UIDs you'd need to do
commit_creds(get_cred(&init_cred));
Which is bound to raise more than a couple of eyebrows and is really non-trivial to hide (assuming you put it somewhere as obvious as this person did).
But I will say that it would've been much more clever to hide it in a device driver which is widely included as a built-in in distribution kernels. I imagine if you managed to compromise Linus' machine (and there are ways of "hiding" changes in merge commits) then the best place would be to shove the change somewhere innocuous like the proc connector (which is reachable via unprivileged netlink, is enabled on most distribution kernels, and is not actively maintained so nobody will scream about it). But these days we also have bots which actively scan people's trees and try to find exploits (including the 0day project and syzkaller), so such obvious bugs probably would still be caught.
"(current_euid() is a macro but it's written to not be a permitted lvalue)"
I'm not an expert at C. I followed up on this kernel macro out of curiosity, and it was a confusing learning experience because it turns out the forbidden assignment
({ x; }) = y;
is silently permitted by GCC (for example, with -Wall -std={c99,c11,c18}), and does actually assign x=y. Even though that's expressly prohibited by the C standard (-Wpedantic).
I assume this is old news to C programmers, but its insidiousness surprised me.
Example #5984 of why I don't like the kernel's convention of masquerading macros as functions by making them lowercase. I wasted so much time deciphering weird compile errors or strange behaviour, only to finally realize that one of the function calls in the offending code was actually a macro in disguise.
It's especially bad when some kernel macros, such as wait_event, don't even behave like a function would (evaluating the parameter repeatedly).
One more thing Rust got right by suffixing macros with a mandatory !.
The best one is "current" -- which is a macro that looks like a variable but becomes a function call and thus if you ever want to use the variable name "current" you will get build errors. :D
Huh, I assumed (just as you did) that this would obviously not work -- but you're right that GCC ignores this and allows the assignment anyway.
However it turns out that you still get a build error, and even the more explicit versions also give you an error:
kernel/cred.c:763:17: error: assignment of member ‘euid’ in read-only object
763 | current_euid() = GLOBAL_ROOT_UID;
| ^
kernel/cred.c:764:23: error: assignment of member ‘euid’ in read-only object
764 | current_cred()->euid = GLOBAL_ROOT_UID;
| ^
kernel/cred.c:765:22: error: assignment of member ‘euid’ in read-only object
765 | current->cred->euid = GLOBAL_ROOT_UID;
| ^
So it is blocked but not for the reason I thought. current_cred() returns a const pointer and all of the cred pointers in task_struct are also const. So you'd need to do something more like:
Absolutely. This is an example of the poor design of the C language. Other languages that were around at the time C was created chose `:=` for assignment and `=` for equality tests, making this type of typo quite impossible.
Common Lisp makes the Hamming distance even larger; equality tests are written as `(eq foo bar)`, while changing a value is `(setf foo bar)`. Common Lisp may have features which are undesirable in an OS kernel (garbage collection), but it does make the code wonderfully clear and easy to read.
What db48x neglected to mention is that some of those languages also featured assignment as strictly a statement; it could not be a subexpression. As in:
fun(x := 42); (* syntax error in Pascal *)
x := 42; (* OK *)
x = 42; (* hopefully a statement with no effect warning *)
If assignment is a statement, it's possible to use the same token. Classic BASIC:
10 X = 5
20 IF X = 5 GOTO 10
This doesn't cause the C problem of mistaken assignment in place of a test, so it's rather ironic that C managed to shoot itself in the foot in spite of dedicating twice the number of tokens.
You make a good point, but in a monolithic kernel the kernel is the “safe place.” Most likely the effect of this would be subtle and not necessarily long lived.
Same; my Java indoctrination is kicking in and asking why that field is apparently public and there are no controls over which process can set it.
That said, as a counterpoint, it's the kernel and performance is super important; the overhead of adding setters (etc.) or a utility function like "current->isRoot()" is probably a tradeoff they made at some point.
Same! I saw the if statement, was 100% sure this was going to be an "= instead of ==" thing... and still missed it. I spent too much mental energy looking at ((__LOUD|__NOISES)) and missed the obvious "current_user = 'root'" statement.
A uid of 0 being root is just such a bad idea to begin with because 0 is a default value of so many data types. It’s an accident waiting to happen and, in this case, a good way to hide something malicious as an accident.
AFAIK only external and static variables are default initialized in C. For all other variables, the default value is undefined, so 0 is as good a choice as any other here.
That's not quite true. While it is undefined, 0 is a fairly common value for memory and registers, meaning that your "undefined" value is likely to be 0 far more often than chance would suggest.
Unfortunately in C and its derivatives, the safeguards would have to be external tools (static analysis, linters); it's a perfectly valid statement in code.
I wouldn't mind if languages simply mark assignments in conditions as errors. It's clever code, but clever code should be avoided in critical systems. And in general, I guess.
Not all C-syntax languages let you implicitly convert from integer or pointer to boolean, though. Java and C# don't. I have heard MISRA C doesn't allow it.
I actually don't mind this feature of C personally, just playing devil's advocate. Some people feel really strongly about not implicitly allowing conversion to bool. This is why.
Assignments in conditionals can be handy, but I think it's better when there's a keyword for it. The Rust/Swift `if let` syntax is pretty nice for this.
Since you're using Rust as an example there, worth noting that unlike in C the assignment operator in Rust does not evaluate to the assigned value (it evaluates to the unit value `()` instead). In combination with the fact that `if` expressions in Rust require their conditions to be bools (the language has no automatic coercion to bool), this means that `if foo = 0` is guaranteed to be a type error.
(This difference in the behavior of the assignment operator is a result of Rust's ownership semantics; since assignment transfers ownership, having the assignment operator evaluate to the assigned value would actually result in the original assignment being entirely undone!)
The ace in Linux's pocket is that you're free to read it all. That can't be said for Apple or Microsoft, or any of the OSes running the switches and hubs out there. Let alone all the server-side cloud code.
Parent said "in the source code" not "in the Linux source code". Given the abysmal standards of security everywhere, it's quite logical thing to assume that many parties have backdoors scattered around various OSes. A tempting target with such multiplicative benefits.
I don't think it's a paranoid question and I don't think it's even a question. It's a natural assumption and I'd demand exceptionally good evidence to challenge that.
Points for Linux for its openness, people will probably catch some of these.
This particular glitch was inserted via an attack on the BitKeeper repository. (EDIT: it was actually a CVS mirror of the repo.)
But for the normal contribution flow, code review isn't the only safeguard. There's also a deterrent in that should a backdoor be inserted via a contribution that went through the normal process, an audit trail exists. If the backdoor is later discovered, there would be reputation harm to the contributor.
Depending on how much an open source project knows about its contributors, it may be more or less difficult to track down a culprit, but in any case the audit trail makes such attacks more complicated.
Not if you surround the expression with extra parentheses. And that's what they did here.
Assignments in if-statement can be useful, and that's how you prevent the compiler from complaining. That warning is intended for honest mistakes, not to catch backdoors.
The parentheses here aren't actually "extra": without them the meaning would change. Since && binds tighter than =, without the parentheses the left-hand side of = would not be an lvalue and compilation would fail.
This is something C linters have been catching probably since there have been C linters, either from looking for that specific pattern (a lone equals sign in a conditional) or by "inventing" the notion of a boolean type long before C had one and then pretending that only comparison operators had such a type.
Needless to say, the better class of compiler catches this fine. gcc 9 does with -Wall and makes it an error with -Werror. Ditto clang 9. (Look at me giving version numbers as if this were recent. Any non-antediluvian C compiler worth using will do the same.) My point is, any reasonable build would at least pop up some errors for this, making it appear amateurish to me.
Contrary to popular opinion, Noah's C compiler was actually highly advanced, but he only brought one copy on the ark with him. No backups, and less than ideal storage conditions... you can guess what happened next. A triceratops ate the parchment tape containing the only copy of Noah CC, and Noah threw the offending triceratops off the Ark, because in his rage, he thought "I have a spare tricero". Only afterward did he realize the error in his logic, thus dooming the triceratops to extinction.
* Only found in highly divergent manuscripts, widely assumed to be late additions.
I think I recall reading that around that time (remember, this is 2003) Linus was either against -Werror or against spending effort to eliminate warnings. The reasoning being that GCC had a few false positives, and the effort of making the Linux kernel build cleanly despite these spurious warnings was not worth the risk of breaking code that likely worked OK.
However I can't find anything where this is directly said, all I can find is a collection of Linus' early 00s emails on the subject of GCC which includes a LOT of reference to said warnings: https://yarchive.net/comp/linux/gcc.html
Note that there are parentheses around the assignment which the compiler takes as an indication that this is intentional. Also note that the parentheses are required because without them the precedence would be wrong.
Since the parentheses are required due to precedence, then they are not there to show "I intend this assignment to happen". That would have to be:
if ((options == (__WCLONE|__WALL)) && ((current->uid = 0)))
As an aside, note that this particular case also has the problem that the assignment expression makes the entire test expression false, which is suspicious. If an assignment expression occurs in the controlling expression of a selection or iteration statement, such that the entire expression is always true or false as a result, that should probably be warned about no matter how many parentheses have been heaped on to the assignment.
It was worse than that. They took whatever unreleased code was in the GNU repository on a random day, and started patching that. gcc 2.96 was known for miscompiling all sorts of stuff. GNU caught a lot of flak for a compiler they didn't even release.
AFAIK Red Hat did this because they wanted to support ia64, but no (released) gcc version had a backend for it.
Has this happened since the source control was changed to git? I imagine it would be almost impossible to break into Linus Torvalds's git server and amend previous commits, considering each one is hashed over the previous commits...
Even if you could break SHA1, it's unlikely that your replacement source code would look like it was human-written. Instead, it's going to look like human-written source code containing kilobytes or megabytes of random-looking comments. The comments will only be there to change the hash of the new content back to the hash of the original content. It's not going to be subtle at all.
That's true of a CRC code, but hashes are a lot harder to break.
Git hashes each file, and puts those hashes into a tree object, like a directory listing. Then it hashes the trees, recursively back up to the root of the repository. Finally the hash of the root tree is put in the commit object, and the commit object is hashed. Thus the two places you can put additional data to be hashed are the file contents (either in existing files or new files), or in the commit message. You can get a few free bits by adjusting less obvious things like the commit timestamp or the author's email address, but not nearly enough to make your forged commit have the same hash as an existing commit.
I'm still not following why it'd require so much data. I thought the goal was to have the commit hash collide with an existing commit hash; is that not enough?
I looked around, and it seems like the right place to hide the added data is in the "trailer" section of the commit. It's where signed-off-by lives and is used to generate the commit hash.
You might want to come up with a plausible reason for random data to go in there though. (likely using a header that wouldn't normally get printed out)
In a CRC-style code, you're essentially adding up all the bytes and letting it overflow the counter, so that the counter is a fixed size (usually 16 or 32 bits). Then you add a few more bytes, exactly the same size as the counter, so that the data bytes plus the extra bytes add up to zero. The extra bytes are delivered along with the data bytes, so that the recipient can repeat the calculation and verify that the total is still zero. If you modify the data, it is trivial to recalculate the CRC code so that the total is still zero.
Hashes are much, much more complex, and they're non-linear. Each bit of the hash output is intended to depend on every single bit of the input, so that changing a single bit in the input creates a radically different hash output.
In a paper published this year, https://eprint.iacr.org/2020/014.pdf, the authors Gaëtan Leurent and Thomas Peyrin changed the values of 825 bytes out of a 1152 byte PGP key in order to generate a new key with the same signature (aka, the same hash). It only cost about $45k, too.
The git hash surely also takes the contents of binary files into account, so I imagine that in any repo that contains non-text files, an attacker would try to hide the garbage inside e.g. some metadata field of an image file.
> That said, if you could rewrite an older commit, the change would only be applied in a fresh clone, right?
I think so, assuming the fetch algorithm is using the hashes to get the deltas which I think it does.
I'm not sure about CVS, but with Git, rewriting a _previous_ commit _object_ with different blobs while keeping the _same_ hash (by messing with its commit message) wouldn't cause any difference in child commits, since commits are pretty much independent apart from the parent/child pointers incorporated into the hash (i.e. the child commits would still have their original trees, so the changes would not propagate to the HEAD of the branch).
I think the only way to have something end up in the HEAD of a branch AND persist is to break the SHA1 of a blob (i.e. a file) by inserting the extra SHA1-breaking content into the blob itself rather than into a commit tree (provided that exact blob hash is part of the tree at the HEAD of a branch). Then you would also need to hope that the malicious blob is fetched by the person who writes the next commit based on the HEAD of that branch, AND that they modify that same file blob so it persists into the next revision of the blob... seems pretty hard to pull off - pun intended
There is also the issue of pushing a blob that already exists on the remote according to the hash. Even with rewrite permission, GC might make that hard to do quickly... I wonder if you would need direct access to the git server to do this.
[EDIT]
Thinking about swapping out SHA1 in the future, you would still want to rehash all of the blobs and trees to prevent SHA1 attacks on old blobs that are unchanged going forward to essentially prevent what I described above.
If you only hashed new blobs with the new algorithm you would need to wait until every file had been touched to be safe.
I'm curious: wouldn't this also be caught by static code analysis tools, at least today? An assignment inside an if condition is both most likely a mistake and fairly easy to detect automatically.
I would guess this is part of the reason why most modern compilers will indeed emit a warning about an assignment within if, for, and while conditions.
At the same time, the standard implementation of strcpy is:
while((*dst++ = *src++));
which has a legitimate reason for doing assignment inside the while condition. Then again, one could argue that the above code is 'too clever'. And I would probably agree.
However they do not emit a warning if the assignment is parenthesized, like in the exploit. I think static analysis tools are the same, they would be way too chatty if they emitted warning for a parenthesized assignment.
Static analysis already has way too many false positives as it stands. For a well maintained code base the rate can easily be 100% false positives, which gets annoying after some time.
Unless the first character was null, in which case it would be ignored by the condition... Also, you don't need to dereference a pointer in order to increment it.
I feel like this is idiomatic C but needlessly verbose. Most people would combine the increment with the assignment. And most people would recognize putting it in the while condition as a common strcpy.
I think this is why there are parentheses around current->uid = 0. gcc has the option -Wparentheses, which gives a warning if you write something like this:
if (a = b) doSomething;
But there is no warning if you write it like this:
if ((a = b)) doSomething;
The convention is that with these unneeded parentheses, you are signalling that you actually want the assignment here. I would assume other static code analysis tools use this convention as well.
Was this a backdoor or not? Following the comments on the article and previous posts here on HN it seems the jury is out AFAICS.
The crucial question to me seems to be if this condition:
options == (__WCLONE|__WALL)
can be willfully introduced by a bad actor, and otherwise never really occur. Unfortunately I don't know this (not familiar with Linux development) but herein lies the answer it would seem.
> The following Linux-specific options [..] can also, since Linux 4.7, be used with waitid():
> __WCLONE [...] This option is ignored if __WALL is also specified.
> __WALL
So to trigger this:
* You have to call a deprecated function
* With a flag that was at that time illegal (linux < 4.7)
* And a second illegal flag that is cancelled out by the first illegal flag.
This is something any userspace process can do, but no sane process should ever do.
I have long made sure my codebases build with `-Wall -Werror`. This bug is from 2003, both before that was as common and when compiler diagnostics weren't as good/reliable.
each commit's id is an integrity hash of the repository at the time of commit. git doesn't provide access control; it relies on access controls built into whichever transport mechanisms you choose to enable (https, ssh, etc).
you can sign commits with PGP signatures and with hooks, you can reject commits that aren't signed. i believe maintainers sign commits in the linux repo.
and then the compiler complains if you go fiddling with userid outside this function where you deliberately opened a backdoor to write to it. (and you can wrap pragmas around that function to turn off warnings).
There are legitimate reasons to change the uid at runtime. For example, some server software starts as root and then drops to a less-privileged user. Android relies on this too, zygote, the fully-initialized "blank" runtime process, runs as root and gets forked and changes uid to the corresponding unprivileged user whenever an app is launched.
I think that this might be a typo, or at least it has plausible deniability. I have changed my coding style to always put the constant on the left side just to avoid such an error (such a typo once gave me a few days of debugging multithreaded code, and I just said "Never again!!" :D)
Even with the "typo" corrected, the patch makes no sense. There is no plausible deniability why it should do what it would do. It was definitely a deliberate attack.