Anyone else find the constantly refreshing search results annoying? If so, do you know of any cognitive psychology literature to explain why? I personally find this instant search infuriating and just want my standard old Google search back, where things do NOT change unless I'm mentally prepared for them to change.
The technology in the linked article would appear, at least superficially, to be similar to RFID. One difference might be that RFID works at a fixed frequency (matched to the resonant frequency of the receiving circuit), whereas this apparently works with "ambient" radio, implying a broad spectrum of frequencies, which is quite clever.
Either way, the broadcast signal is losing power at a rate proportional to 1/d^2, so these devices necessarily have to work at very low power. What Tesla did - or is supposed to have done - was figure out how to transmit power without the 1/d^2 loss. I.e., much like a collimated laser beam, he could transmit power over large distances and drive high-current devices.
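For a rough sense of the scale involved (a back-of-the-envelope sketch in Python using the free-space Friis equation with unity antenna gains; the numbers are illustrative and not from the article):

    import math

    def received_power_w(p_tx_w, d_m, freq_hz):
        # free-space path loss: received power falls off as 1/d^2
        wavelength = 3e8 / freq_hz
        return p_tx_w * (wavelength / (4 * math.pi * d_m)) ** 2

    # a 1 kW transmitter received 5 km away at ~600 MHz: tens of nanowatts
    print(received_power_w(1000, 5000, 600e6))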
He had dramatic ideas for free, wireless power transmission for everyone. His wealthy backer (was it JP Morgan?) said he couldn't see the profit angle in that and wasn't interested, saying something like "Where do you attach the meter?".
This is the same technology as the crystal radios of the first half of the 20th century, which were also powered only by absorbed EM energy (the same principle as passive RFID and similar devices). Tesla's designs are slightly different (in that they are probably not exactly wireless). See http://amasci.com/tesla/tmistk.html
Sleep(1) is very noisy on Windows machines. The time slice given by default is ~15-20 ms [1], and calling Sleep(<15) relinquishes the rest of the time slice. Windows has a multimedia API that can be used to get intervals down to 1 ms, but it requires system calls that increase your system load. So you usually just need to use proper synchronization anyway, which is what the Python guys should have done :)
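A quick way to see this for yourself (a Windows-only measurement sketch in Python; it uses ctypes to call winmm's timeBeginPeriod/timeEndPeriod, and the exact numbers will vary by machine, Windows version, and Python version):

    import ctypes, time

    def avg_sleep_ms(n=50):
        # average the wall-clock cost of asking for a 1 ms sleep
        t0 = time.perf_counter()
        for _ in range(n):
            time.sleep(0.001)
        return (time.perf_counter() - t0) / n * 1000.0

    print("default timer interval: %.2f ms" % avg_sleep_ms())

    winmm = ctypes.WinDLL("winmm")
    winmm.timeBeginPeriod(1)        # ask for a 1 ms global timer interval (multimedia API)
    try:
        print("1 ms timer interval:    %.2f ms" % avg_sleep_ms())
    finally:
        winmm.timeEndPeriod(1)      # always undo it; the request is system-wide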
This crummy Sleep() implementation has some nice effects on programmers. Those who like to solve problems with lots of copy/paste code are forced to think about using proper synchronization primitives when running high resolution loops that wait for events, or their code just won't run very fast.
When I did driver programming in Windows, it was well-known that Sleep had a resolution of 10 ms; it is based on the interrupt timer (not the high frequency timer). You could change the interrupt timer's duration, but its ticks are what guide Sleep. Not counting the effect of context switching, since you are waiting for the timer ticks, your actual times vary from 10 ms to 19.9999 ms. 15 ms is a nice way to say "on average", but I would not rely on that measure.
Timers are hard to get right. Tread warily, programmers! This is one of those areas where it is good to understand some things about the computer hardware behind the software.
EDIT: I should add that the high-frequency timer is not a panacea either. It will work for you most of the time, but there are two circumstances that will occasionally trip you up:
(1) At least in Windows XP and 2000, there is a KB article (I do not remember the number now) explaining that, for a certain small set of manufacturers, a sudden spike of high load on the PCI bus will cause the high-frequency timer to be jumped ahead to account for the time lost during that laggy spike. This correction is not accurate. It means that if your initial timestamp is X and you are waiting for it to become X+Y, the wall-clock time may still be between X and X+Y, but Windows itself has altered the timestamp to X+Y+Z, and your software thinks the time has elapsed. I personally experienced this bug.
(2) You actually have more than one high-frequency timer -- one for each CPU on your system. Once you start running on a system with multiple CPUs, how do you guarantee that the API is providing you a timestamp from the same timer? I remember there may have been a way to choose if you dropped to assembly to make the query, but the API at the time did not support a choice. The timer starts on power-up, so if one CPU powers up after the other you will have a timestamp skew. Some high-frequency software algorithms attempt to correct for this skew; I do not know all the details now.
"The issue has two components: rate of tick and whether all cores (processors) have identical values in their time-keeping registers. There is no promise that the timestamp counters of multiple CPUs on a single motherboard will be synchronized. In such cases, programmers can only get reliable results by locking their code to a single CPU."
The entry also mentions that hibernation can affect the counters. I wonder whether power-saving features that speed the CPU up or slow it down could also have an effect.
Noisy means there is a lot of variance in the actual time the process spends sleeping. When you say sleep(1), most OSes interpret that as "sleep for as short as you can." Depending on the scheduler internals, that can vary a lot.
Which OS interprets sleep(1) (ie, "sleep for 1 second") as "sleep for as short as you can"?
On WinAPI, Sleep is denominated in milliseconds.
On BSD, sleep(3) is a library wrapper around nanosleep(2).
Linux's man pages make no mention of the magic number "1" as a "sleep one timeslice" shortcut; also, older Linux man pages warn that sleep(3) can be implemented in terms of alarm(2), which is used all over POSIX as an I/O timeout and would blow up the world if it alarmed in milliseconds.
If you want to sleep "as short as you can", sleep for 0 seconds, or call any other system call to yield your process back to the scheduler.
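In Python terms, something like (a trivial sketch; os.sched_yield is only available on POSIX systems):

    import os, time

    time.sleep(0)         # commonly used as a "reschedule me" hint rather than a timed wait
    if hasattr(os, "sched_yield"):
        os.sched_yield()  # the explicit yield, where the platform exposes it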
Thanks for the correction. I was just talking about the de facto behavior I have seen on Linux and BSD for very short sleep intervals (way shorter than 1 second), not necessarily about the behavior as specified by the system call. I should have been clearer.
Ok, but (a) this article is talking about literally POSIX sleep(3), and (b) there is a ton of confusion on this thread about whether sleep is ms-denominated or seconds-denominated.
Wait, what? If I call sleep(1), you're saying it's going to sleep for THIRTY EIGHT SECONDS?
People, it's right there in the man pages.
Are you maybe thinking about WinAPI's Sleep? That's ms-denominated. It would make sense that attempting to sleep for 1 millisecond wouldn't work, and would build in the time for the scheduler and the timeslices for every other process. We're talking about OpenSSL and POSIX sleep(3).
People seem to have a skewed perception of how to defeat timing attacks, generally. At the end of the day, it's more about making things constant time than trying to make the timing difficult to detect. Simple example:
You have two hashes and want to see if they're equal. The naive approach is to iterate over each byte in both hashes, compare them, and break as soon as you find a byte that doesn't match. That approach, however, can be vulnerable to a timing attack because an attacker could potentially measure how many iterations it ran. An implementation that's resistant to timing attacks instead XORs each pair of corresponding bytes and ORs the results into an accumulator; if the accumulator is zero at the end of the loop, the hashes are equal. That approach takes constant time (for a given length), rather than time dependent on the data you're dealing with.
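In Python that might look like (a minimal sketch of the accumulator approach; the standard library's hmac.compare_digest already does this for you):

    def constant_time_equal(a: bytes, b: bytes) -> bool:
        if len(a) != len(b):
            return False        # leaks only the length, which is usually public anyway
        acc = 0
        for x, y in zip(a, b):
            acc |= x ^ y        # record any difference, but never exit the loop early
        return acc == 0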
Besides, I see no evidence that the sleep was intended to thwart timing attacks. It looks like it's all about easing the debug process somehow.
Incidentally, the primary problem here is not the mere presence of a debug flag that governs a sleep, it's the fact that PySSL_SSLdo_handshake sets that debug flag. Right?
In other words, it's not a bug in OpenSSL itself, but rather the Python wrapper for OpenSSL. That's how I understand it.
However, if it's actually useful, why wasn't it later incorporated into OpenGL / taught in standard graphics courses?
Given that OpenGL already has the 3D model, and has to redraw everything at 30 or 60 fps anyway, there's no performance hit from slightly changing the camera angle.
What are the limitations / side effects of this technique that cause it to not be used in practice?
The side effect is that people made to endure this more than a couple of minutes as a novelty will want to murder you. They'll then plead temporary insanity and likely get off with a light scolding and a medal.
Doing multiple passes with slightly different camera settings is becoming more common, but you would only use it with something like the NVidia 3D system, or any other 3D display technology. You would never use it to do something like this.
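If anyone wants to play with it anyway, the per-pass change is just a small translation of the eye along the camera's right vector; a numpy sketch (the function names and the 3 cm separation are hypothetical; a real engine supplies its own camera/render API):

    import numpy as np

    def look_at(eye, target, up=np.array([0.0, 1.0, 0.0])):
        # standard right-handed view matrix
        f = target - eye; f = f / np.linalg.norm(f)
        r = np.cross(f, up); r = r / np.linalg.norm(r)
        u = np.cross(r, f)
        view = np.eye(4)
        view[0, :3], view[1, :3], view[2, :3] = r, u, -f
        view[:3, 3] = view[:3, :3] @ -eye
        return view

    def wiggle_view(frame, eye, target, eye_separation=0.03):
        # alternate the eye a few centimetres left/right of centre on each frame
        f = target - eye; f = f / np.linalg.norm(f)
        right = np.cross(f, np.array([0.0, 1.0, 0.0]))
        right = right / np.linalg.norm(right)
        side = 1.0 if frame % 2 == 0 else -1.0
        return look_at(eye + side * eye_separation * right, target)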
It's potentially huge, because of the potential for sub-Nyquist signal reconstruction. Imagine if the only extant photo of a historical event was a shitty JPEG or a comparatively low-resolution picture. Compressed sensing yields techniques for reconstructing a higher-quality version of the image than the original hardware or codec was capable of recording.
It's frustrating to me as I work a lot with audio and video but the math is much harder than anything else I've encountered in DSP. I feel stupid every time I dig into it.
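For intuition, here's about the smallest end-to-end toy I know of (a numpy sketch of recovering a sparse signal from fewer measurements than samples, using iterative soft-thresholding; the sizes and the ISTA solver are just illustrative choices, not anything from the linked material):

    import numpy as np

    rng = np.random.default_rng(0)
    n, m, k = 256, 80, 8                  # signal length, measurements (m < n), nonzeros

    x = np.zeros(n)                       # a k-sparse "ground truth" signal
    x[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)

    A = rng.standard_normal((m, n)) / np.sqrt(m)    # random sensing matrix
    y = A @ x                                       # only m measurements are kept

    # ISTA: minimise 0.5*||A z - y||^2 + lam*||z||_1 via gradient step + soft threshold
    lam = 0.01
    step = 1.0 / np.linalg.norm(A, 2) ** 2
    z = np.zeros(n)
    for _ in range(3000):
        z = z - step * (A.T @ (A @ z - y))
        z = np.sign(z) * np.maximum(np.abs(z) - lam * step, 0.0)

    print("relative recovery error:", np.linalg.norm(z - x) / np.linalg.norm(x))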
Many, many kinds. A typical photo is quite sparse. Oh sure, there is a lot of variation in the individual pixels etc., but it's usually a picture of something with shape and fairly strongly defined visual characteristics (which is what made it interesting enough to record in the first place). Random chroma noise, by contrast, is not sparse at all. In that sense, it has a much higher information content than a picture.
That Terry Tao blog post linked above is by far the best entry point I've found into the subject, but I'm really not qualified to explain or simplify it well, I'm afraid.
Why is The 48 Laws of Power BS? Some of the rules in the book contradict each other, but it provides a framework for recognizing and analyzing patterns of human behavior.