typedef_void's comments

Anyone else find the constantly refreshing search results annoying? If so, do you know of any cognitive psychology literature to explain why? I personally find this instant search infuriating and just want my standard old Google search back, where things do NOT change unless I'm mentally prepared for them to change.


I'm missing something important. How does this relate to Stanford?


I think he is preparing for the written portion of the SAT.


Anyone else ironically hoping MS creates a better iPad just for the opportunity to install Linux on it?


Didn't Tesla claim to have developed wireless power a long time ago? Those more familiar with Tesla/physics ... is this the same technology?


The technology in the linked article would appear, at least superficially, to be similar to RFID, although one difference might be that RFID works at a fixed frequency (to match the resonant frequency of the receiving radio circuit) whereas this apparently works with "ambient" radio, implying a broad spectrum of frequencies, which is quite clever.

Either way, the broadcast signal is losing power at a rate proportional to 1/d^2, so these devices necessarily have to work at very low power. What Tesla did - or is supposed to have done - was to figure out how to transmit power without the 1/d^2 loss. I.e., much like a collimated laser beam, he could transmit power over large distances and power high-current devices.
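
For a rough sense of scale (my own back-of-the-envelope, not from the article): for an isotropic transmitter the free-space power density at distance d is

    S(d) = \frac{P_t}{4 \pi d^2}

so doubling the distance leaves only a quarter of the power for the harvesting circuit.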


It is interesting that we are only now scratching the surface of what Tesla did 100 years ago.


There is no question that he was off-the-charts brilliant, but he was also - especially towards the end - insane.

That is why it is so hard to figure out what Tesla actually did or didn't do.


He had dramatic ideas for free, wireless power transmission for everyone. His wealthy backer (was it JP Morgan?) said he couldn't see the profit angle in that and wasn't interested, saying something like "Where do you attach the meter?".

Not exact quotes, but the source is my print copy of: http://books.google.com/books?id=40NzjS5FunkC&printsec=f...;


This is the same technology as the crystal radios of the first half of the 20th century, which were also powered only by absorbed EM energy (the same principle as passive RFID and similar devices). Tesla's designs are slightly different (in that they are probably not exactly wireless). See http://amasci.com/tesla/tmistk.html


How is this better than JITting?

It seems like most recent performance gains come from runtime/dynamic optimizations, not static/compile time optimizations.


> It seems like most recent performance gains come from runtime/dynamic optimizations, not static/compile time optimizations.

Are you sure? Have you heard of stream fusion for Haskell?


Is sleep(1) noisy?

If so, it might be a defense against timing attacks.


Sleep(1) is very noisy on Windows machines. The time slice given by default is ~15-20ms [1], and calling Sleep(<15) relinquishes the rest of the time slice. Windows has a multimedia API that can be used to get intervals down to 1ms, but it requires system calls that increase your system load. So you usually just need to use proper synchronization anyway, which is what the Python guys should have done :)

This crummy Sleep() implementation has some nice effects on programmers. Those who like to solve problems with lots of copy/paste code are forced to think about using proper synchronization primitives when running high resolution loops that wait for events, or their code just won't run very fast.

[1] http://social.msdn.microsoft.com/forums/en-US/clr/thread/fac...
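
For reference, the multimedia API in question is timeBeginPeriod/timeEndPeriod. A rough, untested sketch (mine, not from the linked thread):

    #include <windows.h>
    #include <mmsystem.h>   /* link against winmm.lib */

    void short_wait(DWORD ms)
    {
        timeBeginPeriod(1);  /* ask for ~1ms scheduler granularity (system-wide) */
        Sleep(ms);           /* now much closer to the requested interval */
        timeEndPeriod(1);    /* always undo it; it raises system load/power use */
    }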


Aha, so that's probably why JavaScript timers on Windows have around 15ms accuracy:

http://ejohn.org/blog/accuracy-of-javascript-time/


True, but Sleep() on Windows is different from sleep() on POSIX/Unix.

Sleep() on Windows takes milliseconds.

sleep() on *nix takes seconds.

Windows:

    VOID WINAPI Sleep(
      __in  DWORD dwMilliseconds
    );
Sleep(1) is as fast as it can go, which turns out to be 15-20ms.

*nix:

    #include <unistd.h>
    unsigned int
    sleep(unsigned int seconds);
usleep() can be used for a more granular delay on *nix.
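
For completeness, the usleep() prototype (it takes microseconds; if I remember right it's marked obsolescent in POSIX in favor of nanosleep()):

    #include <unistd.h>
    int usleep(useconds_t usec);   /* delay in microseconds */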


When I did driver programming in Windows, it was well-known that Sleep had a resolution of 10 ms; it is based on the interrupt timer (not the high frequency timer). You could change the interrupt timer's duration, but its ticks are what guide Sleep. Not counting the effect of context switching, since you are waiting for the timer ticks, your actual times vary from 10 ms to 19.9999 ms. 15 ms is a nice way to say "on average", but I would not rely on that measure.

Timers are hard to get right. Tread warily, programmers! This is one of those areas where it is good to understand some things about the computer hardware behind the software.

EDIT: I should add that the high frequency timer is not a panacea either. It will work for you most of the time, but there are two circumstances that will occasionally trip you up:

(1) At least in Windows XP and 2000, there is a KB article (I do not remember which one now) explaining that, for a certain small set of manufacturers, if there is a sudden spike in load on the PCI bus, the high frequency timer will be jumped ahead to account for the time lost during that laggy spike. This correction is not accurate. This means that if your initial timestamp is X and you are waiting for it to be X+Y, wall clock time may be between X and X+Y, but Windows itself altered the timestamp to be X+Y+Z, so your software thinks the time has elapsed. I personally experienced this bug.

(2) You actually have more than one high frequency timer -- one for each CPU on your system. Once you start playing on a system with multiple CPUs, how do you guarantee that the API is providing you a timestamp from the same timer? I remember there may have been a way to choose if you dropped to assembly to make the query, but the API at the time did not support a choice. The timer starts upon power-up, so if one CPU powers up after the other, you will have a timestamp skew. Some high frequency software algorithms attempt to correct for this skew; I do not know all the details now.
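
FWIW, the usual mitigation I've seen for (2) is to pin the measuring thread to one CPU around the counter reads. Rough sketch only (mine, untested):

    #include <windows.h>

    double elapsed_seconds(void)
    {
        LARGE_INTEGER freq, start, end;

        /* Pin to CPU 0 so both reads come from the same per-CPU counter. */
        DWORD_PTR old_mask = SetThreadAffinityMask(GetCurrentThread(), 1);

        QueryPerformanceFrequency(&freq);
        QueryPerformanceCounter(&start);
        /* ... work being timed ... */
        QueryPerformanceCounter(&end);

        SetThreadAffinityMask(GetCurrentThread(), old_mask);  /* restore */
        return (double)(end.QuadPart - start.QuadPart) / (double)freq.QuadPart;
    }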


Presumably the high frequency counter is driven off the cycle counter, and not the Intel High Precision Event Timer (HPET).


http://en.wikipedia.org/wiki/Time_Stamp_Counter

"The issue has two components: rate of tick and whether all cores (processors) have identical values in their time-keeping registers. There is no promise that the timestamp counters of multiple CPUs on a single motherboard will be synchronized. In such cases, programmers can only get reliable results by locking their code to a single CPU."

The entry also mentions that hibernation can affect the counters. I wonder if power savings implementations that speed up or slow the CPU could also have an effect.


Yes. The TSC, which counts cycles, is what the high-resolution counter (QueryPerformanceCounter) typically reads, and is not the same thing as the HPET timers; GetTickCount() is just the low-resolution interrupt tick count.


This has nothing to do with timing attacks and would do nothing to defend against them. Read 'daeken's comment below, though.


Excuse my ignorance: what do you mean by noisy?


Noisy means there is a lot of variance in the actual time the process spends sleeping. When you say sleep(1), most OSes interpret that as "sleep for as short as you can." Based on the scheduler internals, that can vary a lot.
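
You can see the variance for yourself with something like this (rough sketch, assumes a POSIX box with clock_gettime):

    #include <stdio.h>
    #include <time.h>
    #include <unistd.h>

    int main(void)
    {
        for (int i = 0; i < 5; i++) {
            struct timespec t0, t1;
            clock_gettime(CLOCK_MONOTONIC, &t0);
            usleep(1000);                              /* ask for 1 ms */
            clock_gettime(CLOCK_MONOTONIC, &t1);
            double ms = (t1.tv_sec - t0.tv_sec) * 1e3
                      + (t1.tv_nsec - t0.tv_nsec) / 1e6;
            printf("asked for 1.000 ms, got %.3f ms\n", ms);
        }
        return 0;
    }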


Which OS interprets sleep(1) (ie, "sleep for 1 second") as "sleep for as short as you can"?

On WinAPI, Sleep is denominated in milliseconds.

On BSD, sleep(3) is a library wrapper around nanosleep(2).

Linux's man pages make no mention of the magic number "1" as a "sleep 1 timeslice" shortcut; also, older Linux man pages warn that sleep(3) can be implemented in terms of alarm(1), which is used all over POSIX as an I/O timeout and would blow up the world if it alarmed in milliseconds.

If you want to sleep "as short as you can", sleep for 0 seconds, or call any other system call to yield your process back to the scheduler.


Windows also only guarantees that your process will sleep at least as long as you specify. Not that it will sleep exactly as long.


Thanks for the correction. I was just talking about the de facto behavior I have seen on Linux and BSD for very short sleep intervals (way shorter than 1 second), not necessarily about the behavior as specified by the system call. I should have been clearer.


I really don't think you've ever seen BSD return in milliseconds after a 1-second sleep. Respectfully, I think you're pretty much just wrong.


Well, that is not what I meant at all, I meant that I've seen BSD return in say 2 microseconds after calling usleep(1).

But that's usleep not sleep, which is the inaccuracy I was admitting to in the first place.


Ok, but (a) this article is talking about literally POSIX sleep(3), and (b) there is a ton of confusion on this thread about whether sleep is ms-denominated or seconds-denominated.

Sorry to pile on you, though.


Read it again. He's talking about near-zero sleep calls, not one second sleep calls.


POSIX sleep() doesn't take subsecond intervals. Maybe he's talking about usleep()?


Sleep is noisy.

When you do a sleep, depending on the hardware, the OS, the configuration, the kernel flags, etc., the minimum you actually get is around 38.

But that varies.


Wait, what? If I call sleep(1), you're saying it's going to sleep for THIRTY EIGHT SECONDS?

People, it's right there in the man pages.

Are you maybe thinking about WinAPI's Sleep? That's ms-denominated. It would make sense that attempting to sleep for 1 millisecond wouldn't work, and would build in the time for the scheduler and the timeslices for every other process. We're talking about OpenSSL and POSIX sleep(3).


My apologies. Yes, I do mean MS - I started off doing *nix development, but now am doing Win32 programming 18 hours a day.

Please discard my above comment, everyone.


On second thought, my comment makes little sense.

If they wanted a noisy sleep, it should be something like

sleep(func(rand()))


That makes no sense either; an attacker can usually average such things out. Besides, there are better (faster) ways to guard against timing attacks.


People seem to have a skewed perception of how to defeat timing attacks, generally. At the end of the day, it's more about making things constant time than trying to make the timing difficult to detect. Simple example:

You have two hashes and want to see if they're equal. The naive approach is to iterate over each byte in both hashes and compare them, then break when you find a byte that doesn't match. That approach, however, could be vulnerable to a timing attack because you could potentially measure how many times it iterates. An implementation that's resistant to timing attacks could XOR each byte of each hash and accumulate across them; if that accumulator is zero at the end of the loop, it's equal. That approach is constant time, rather than being dependent on the data you're dealing with.
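
Something like this, in C (just my illustration of the idea, not production code):

    #include <stddef.h>
    #include <stdint.h>

    /* Returns 0 if the two buffers are equal. Runs in time that depends
     * only on len, not on where the first mismatching byte occurs. */
    int constant_time_compare(const uint8_t *a, const uint8_t *b, size_t len)
    {
        uint8_t acc = 0;
        for (size_t i = 0; i < len; i++)
            acc |= a[i] ^ b[i];   /* accumulate every byte difference */
        return acc;               /* nonzero means "not equal" */
    }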


Besides, I see no evidence that the sleep was intended to thwart timing attacks. It looks like it's all about easing the debug process somehow.

Incidentally, the primary problem here is not the mere presence of a debug flag that governs a sleep; it's the fact that PySSL_SSLdo_handshake sets that debug flag. Right?

In other words, it's not a bug in OpenSSL itself, but rather the Python wrapper for OpenSSL. That's how I understand it.


Yes, exactly.


This looks pretty cool.

However, if it's actually useful, why wasn't it made part of OpenGL / taught in standard graphics courses?

Given that OpenGL already has the 3D model, and has to redraw everything at 30 or 60fps anyway, there's no performance hit in slightly changing the camera angle.

What are the limitations / side effects of this technique that cause it to not be used in practice?


The side effect is that people made to endure this more than a couple of minutes as a novelty will want to murder you. They'll then plead temporary insanity and likely get off with a light scolding and a medal.


Doing multiple passes with slightly different camera settings is becoming more common, but you would only use it with something like the NVidia 3D system, or any other 3D display technology. You would never use it to do something like this.


It's kinda annoying.


Besides MRI research, have any useful applications come out of this?


It's potentially huge, because of the potential for sub-Nyquist signal reconstruction. Imagine if the only extant photo of a historical event was a shitty JPEG or a comparatively low-resolution picture. Compressed sensing yields techniques for reconstructing a higher-quality version of the image than the original hardware or codec was capable of recording.
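
Very roughly (my own gloss, and I may be butchering it): if the signal x is sparse in some basis and you take incoherent linear measurements y = Ax with far fewer samples than Nyquist would demand, you can often recover x by solving

    \min_{x} \|x\|_1 \quad \text{subject to} \quad Ax = y

which is what makes sub-Nyquist reconstruction possible at all.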

It's frustrating to me as I work a lot with audio and video but the math is much harder than anything else I've encountered in DSP. I feel stupid every time I dig into it.



Yeah ... but what type of real-world data is sparse?


Many, many kinds. A typical photo is quite sparse. Oh sure, there is a lot of variation in the individual pixels etc., but it's usually a picture of something with shape and fairly strongly defined visual characteristics (which is what made it interesting enough to record in the first place). Random chroma noise, by contrast, is not sparse at all. In that sense, it has a much higher information content than a picture.

That Terry Tao blog linked above is by far the best entry point I've found into the subject but I'm really not qualified to explain or simplify it well, I'm afraid.


Why is The 48 Laws of Power BS? Some of the rules in the book contradict each other, but it provides a framework for recognizing and analyzing patterns of human behavior.


GPUs: nVidia/ATI have amazing toolchains, especially CUDA.

FPGAs: we have amazingly bad toolchains.

