Consider what happens if neither player wants to go first. P1 passes. P2 passes. Now P1 has to play or the game is drawn. If the game uses win = 3, draw = 1, lose = 0 scoring, then P1 should play even at a slight disadvantage: a 49% chance of payoff 3 beats a 100% chance of payoff 1.
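A quick expected-value check (a minimal sketch; the 49% figure is the hypothetical from above, and I'm ignoring draws after playing for simplicity):

    # Payoffs under win = 3, draw = 1, lose = 0 scoring.
    p_win = 0.49                            # hypothetical win probability from above
    ev_play = p_win * 3 + (1 - p_win) * 0   # = 1.47, treating the rest as losses
    ev_pass = 1.0                           # passing again guarantees the draw
    print(f"play: {ev_play:.2f}  pass: {ev_pass:.2f}")  # play: 1.47  pass: 1.00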
Just elaborating on this. One way of looking at it is that we turn the attacked program into an interpreter, thus bypassing the requirement for executability.
It turns out that a lot of programs can be tricked into interpreting a family of languages called ROP chains (return-oriented programming).
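To make that concrete, here's a toy sketch of the idea (all gadget addresses are hypothetical, and Python is only used to show the layout; a real chain would be written into an overflowed stack buffer):

    import struct

    # Hypothetical addresses of "gadgets": short instruction sequences
    # already present in the attacked binary, each ending in `ret`.
    POP_RDI_RET = 0x400686   # pop rdi; ret
    BINSH_STR   = 0x600a10   # address of the string "/bin/sh"
    SYSTEM_PLT  = 0x4004e0   # system() entry in the PLT

    # The "program" in the ROP language is just a sequence of addresses.
    # The CPU's ret instruction is the interpreter's fetch loop: each ret
    # pops the next address off the stack and jumps to it.
    chain = struct.pack("<QQQ", POP_RDI_RET, BINSH_STR, SYSTEM_PLT)
    print(chain.hex())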
You're absolutely right that "100% right" is an unreasonable standard. I submit, however, that knowing there are places on the planet where, at times, the sun doesn't set is a reasonable thing to expect of someone who's made it far enough in life to be writing code (or testing it! How did this pass QA?) for Apple.
f.lux gets it right if I set my zip to Barrow, AK. Why can't Apple, with orders of magnitude more resources at their disposal, knowledge included?
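For what it's worth, the polar-day case isn't hard to handle if your library is honest about it. A minimal sketch with the third-party astral package (my choice of library and coordinates, nothing Apple or f.lux actually uses), which raises instead of inventing a sunset:

    import datetime
    from astral import LocationInfo
    from astral.sun import sun

    # Utqiagvik (formerly Barrow), AK sits above the Arctic Circle, so
    # around the June solstice the sun never sets.
    barrow = LocationInfo("Utqiagvik", "USA", "America/Anchorage",
                          latitude=71.29, longitude=-156.79)
    try:
        s = sun(barrow.observer, date=datetime.date(2019, 6, 21))
        print("sunset:", s["sunset"])
    except ValueError as err:
        # astral refuses rather than fabricating a bogus sunset time
        print("no sunset today:", err)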
No they don’t. Certain types of clearances for DoD or the Intelligence Community may require a polygraph along with the background investigation process, but the vast majority of cleared US government or contractor personnel do not have to undergo polygraph testing; they only go through routine reinvestigations at the 5- or 10-year mark following favorable adjudication of their initial clearance.
Ancestor post said "lie detector". A polygraph test would be a separate thing entirely. I'm not entirely certain why you thought to link them.
The lie detector in the regular process is the federal investigator checking whether the answers on your questionnaire match up with public records and in-person interviews.
The polygraph is security theater intended to intimidate the subject and possibly reveal previously undisclosed issues by provoking a stress response. That's why they keep using it.
That’s true, although the requirements for different clearance levels are drastically different, and most people don’t refer to the standard background (re-)investigation process as a “lie detector,” even if the investigators are in fact attempting to determine your honesty in addition to evaluating other signals about your behavior and potential ability to be influenced or manipulated.
Most of the time, comments about “lie detectors” are a reference to polygraph tests, which only apply to an extremely small percentage of the overall cleared workforce; I just wanted to point out that it’s not quite as bleak as the parent implies.
I was tongue-in-cheek trying to break the implied association between polygraph and "lie detector".
The lie detector in a polygraph test is always the human running it, and they're about as fallible and unreliable as anybody else, with respect to determining honesty. They could just chuck the machines in the trash and call it a "veracity interrogation", but selling the machines and training the people to use them is a better money sink, and gives more ass-cover when someone invariably deceives the investigators. "Trained to beat the machine" sounds better on paper than "really good liar".
Security theater needs its props.
As far as I know, only those working in Sensitive Compartmented Information Facilities (SCIFs) and with high-value assets ever get polygraphs.
If you hold a key and wait by a coded terminal in a nuclear missile silo, you get one. If you reduce and analyze anti-ballistic missile test telemetry, you don't. If you write systems code for submarines, you might get one. If you write route-planning software for in-flight refueling tankers, you don't. My guess is that it ultimately depends on how much Country X would probably pay you to borrow or copy your access. If it's above $Y, they do a little more to scare you into being a good little guardian of the nation, and hope you're not another Snowden.
They just have way too much need for cleared personnel to spend enough to actually make certain, for everybody. Doing it correctly always costs more, in time and money. Why do it right when you can make it look like you did it right, and get paid the same?
Quote from a friend who works at such a job: "I just want to drink beer and get my dick sucked"
I really don't think there is much hope. He just thinks everything works out in the end when in reality it is people fighting tooth and nail and giving up their lives to fight for this shit.
fMRI brain-scan data is admissible in Indian courts to determine whether a suspect is lying in high-profile cases. India also has the largest biometric database (outside of the NSA).
Kinda and kinda not. They have done well so far, but they could conceivably be beaten by a dedicated chip. They need to make sure they keep up with the research so they build what the community wants.
> The current conventional wisdom is that checking backfires.
And a not-so-subtle imputation, which probably has a much larger effect on the willingness to look at facts in the first place. But I want everyone not sharing my opinion to be irrational too.
I think something is lost when doing this. I'd bet the researcher who first builds a model and then reaches for ML will outperform the researcher who goes straight for ML.
Building a custom model will help with feature selection. It will provide a baseline to compare the ML model against, which can help debug problem points of the ML model. And finally, it serves as a sanity check that you aren't leaving a lot of performance on the table (see the sketch below).
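A minimal sketch of that workflow (synthetic data and my own choice of models via scikit-learn, just to show the baseline-first pattern):

    import numpy as np
    from sklearn.datasets import make_regression
    from sklearn.ensemble import GradientBoostingRegressor
    from sklearn.linear_model import LinearRegression
    from sklearn.metrics import mean_squared_error
    from sklearn.model_selection import train_test_split

    X, y = make_regression(n_samples=1000, n_features=10, noise=10.0,
                           random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    # Step 1: a simple, interpretable baseline model. Its coefficients
    # double as crude feature selection: near-zero weights flag features
    # the ML model probably doesn't need either.
    baseline = LinearRegression().fit(X_tr, y_tr)
    print("baseline coefs:", np.round(baseline.coef_, 1))

    # Step 2: the ML model, judged against the baseline rather than in a
    # vacuum. If it barely beats (or loses to) the baseline, that's your
    # sanity check: debug the pipeline before trusting the fancy model.
    ml = GradientBoostingRegressor(random_state=0).fit(X_tr, y_tr)
    for name, model in [("baseline", baseline), ("ml", ml)]:
        mse = mean_squared_error(y_te, model.predict(X_te))
        print(f"{name}: test MSE = {mse:.1f}")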
We've been told that the antidote to deep fakes is supply chain security. It's troubling to hear that supply chain security is already broken, and by fairly unsophisticated attackers, from the sounds of it.
I would not use "deep fakes" to refer to counterfeit goods. "Deep fake" refers only to artificially generated video of a real person (usually the face, but it can include voice).
> Deepfake (a portmanteau of "deep learning" and "fake"[1]) is a technique for human image synthesis based on artificial intelligence. It is used to combine and superimpose existing images and videos onto source images or videos using a machine learning technique known as a generative adversarial network.[2] The phrase "deepfake" was coined in 2017.
> Because of these capabilities, deepfakes have been used to create fake celebrity pornographic videos or revenge porn.[3] Deepfakes can also be used to create fake news and malicious hoaxes.[4][5]
I didn't read the parent post as using it that way - I believe they're saying that any claim that "supply chain security" (of videos) will combat deep fakes* isn't necessarily a credible model of security considering that the "real" supply chain (of physical goods) isn't apparently particularly secure.
I don't think I actually agree, however. Things like hashing and cryptography make me inclined to believe a digital 'chain of custody' is easier to prove and validate than a physical one (toy sketch below).
( *I'd never heard this before, but it's an interesting claim )
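To illustrate why I think the digital case is more tractable, a toy hash chain using Python's hashlib (the events and "genesis" label are made up; a real system would add digital signatures on top):

    import hashlib

    # Each link commits to the payload *and* the previous link, so any
    # tampering upstream invalidates every hash downstream of it.
    def link(prev_hash: str, payload: bytes) -> str:
        return hashlib.sha256(prev_hash.encode() + payload).hexdigest()

    chain = ["genesis"]
    for event in [b"camera: raw footage",
                  b"editor: color grade",
                  b"network: broadcast"]:
        chain.append(link(chain[-1], event))

    # Verifying this one hash vouches for the whole custody trail.
    print(chain[-1])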