
This wouldn't work well because in the frequency domain representation, different "pixels" have very different importance for the overall appearance of the image: The pixels at the center of the frequency domain representation represent low frequencies, so compressing them will drastically alter the appearance of the image. On the other hand, the corners/edges represent high frequencies, i.e. image details that can be removed without causing the image to change much. That's the crucial benefit of the Fourier transform for compression: it decomposes the image into important bits (low frequencies) and relatively less important bits (high frequencies). Applying compression that doesn't take that structure into account won't work well.
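
(As a rough illustration, not from the thread: the numpy sketch below keeps only a small central block of the shifted 2D FFT and inverts it. The reconstruction stays close to the original because the discarded "pixels" are the high spatial frequencies, i.e. fine detail.)

    import numpy as np

    def lowpass_reconstruct(img, keep_fraction=0.2):
        """Zero out all but the lowest spatial frequencies, then invert."""
        f = np.fft.fftshift(np.fft.fft2(img))   # low frequencies now at the center
        rows, cols = img.shape
        mask = np.zeros_like(f)
        r = int(rows * keep_fraction / 2)
        c = int(cols * keep_fraction / 2)
        mask[rows // 2 - r: rows // 2 + r, cols // 2 - c: cols // 2 + c] = 1
        return np.real(np.fft.ifft2(np.fft.ifftshift(f * mask)))

    # Toy image: a smooth gradient plus a little fine-grained noise.
    img = np.linspace(0, 1, 64)[None, :] * np.ones((64, 64))
    img = img + 0.05 * np.random.randn(64, 64)
    approx = lowpass_reconstruct(img)
    print("mean absolute error:", np.abs(img - approx).mean())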


Minor note: If the original data is a time signal, as in electrical engineering (an amplitude vs. time function), then the "frequency domain pixels" (its transform) are different frequencies (points in the frequency domain: how many repetitions per second, etc.), and the signal's transform becomes an amplitude vs. frequency graph.

But if the original data is an image (a matrix or grid of pixels in space), then the "frequency domain pixels" are different wave-numbers (aka spatial frequencies: how many repetitions per meter, etc.), and the Fourier transform of the pixel grid is an amplitude vs. wave-number function.
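
(To make the units point concrete, a small numpy sketch, not from the comment itself: the same FFT machinery yields an axis in Hz when the sample spacing is in seconds, and in cycles per meter when the sample spacing is in meters.)

    import numpy as np

    n = 1000

    # Time signal sampled every 0.001 s: the FFT axis is in Hz (repetitions per second).
    t = np.arange(n) * 0.001
    tone = np.sin(2 * np.pi * 50 * t)                    # a 50 Hz tone
    freqs_hz = np.fft.fftfreq(n, d=0.001)
    print(abs(freqs_hz[np.argmax(np.abs(np.fft.fft(tone)))]))        # ~50.0 Hz

    # Spatial signal sampled every 0.001 m: the same axis is now in cycles per meter.
    x = np.arange(n) * 0.001
    stripes = np.sin(2 * np.pi * 50 * x)                 # 50 repetitions per meter
    wavenumbers = np.fft.fftfreq(n, d=0.001)
    print(abs(wavenumbers[np.argmax(np.abs(np.fft.fft(stripes)))]))  # ~50.0 cycles/m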


I'm into the glitch art scene and this makes me wonder what happens if you crop/erase patterns in the frequency domain representation and put it back together...
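
(A quick sketch of that experiment with numpy, with made-up parameters: zero out an annulus of the shifted spectrum and transform back. You tend to get ringing and smearing rather than block-style glitches.)

    import numpy as np

    def erase_frequency_band(img, lo=10, hi=20):
        """Erase an annulus of spatial frequencies, then transform back."""
        f = np.fft.fftshift(np.fft.fft2(img))
        rows, cols = img.shape
        yy, xx = np.mgrid[:rows, :cols]
        radius = np.hypot(yy - rows / 2, xx - cols / 2)
        f[(radius > lo) & (radius < hi)] = 0     # the erased "pattern"
        return np.real(np.fft.ifft2(np.fft.ifftshift(f)))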


Currently, they buy carbon offsets to be net-neutral. The goal is to buy carbon-neutral energy in the first place.


I think this persistent state is one of the main advantages of the notebook environment, or the Matlab workspace, which I guess inspired it. It allows you to quickly try alternative values for certain variables without having to re-calculate everything. Saving snapshots would not be feasible if the project contains large amounts of data. If you want to reset everything, just "run all" from the beginning, or use a conventional IDE with a debugger.
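
(A sketch of that workflow with made-up cell contents: the expensive cell runs once, and only the cheap cell is re-run while you try different parameter values.)

    # --- Cell 1: slow, run once (stand-in for an expensive load or computation) ---
    import numpy as np
    data = np.random.randn(1_000_000)

    # --- Cell 2: fast, re-run repeatedly while tweaking the threshold ---
    threshold = 2.0
    print((np.abs(data) > threshold).mean())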


No, not Matlab but Mathematica: "We were inspired originally by the excellent implementation in Mathematica" [1].

[1] http://ipython.org/ipython-doc/dev/whatsnew/version0.12.html...


And that came from Emacs and old Lisp environments---and perhaps something yet earlier?

As late as 2000, this was the single biggest advantage and single biggest impediment to new programmers in MIT's 6.001 lab: a bunch of nonvisible state, mutated by every C-x C-e. The student has tweaked two dozen points trying to fix a small program, and re-evaluated definitions after many of them, but maybe not all. The most straightforward help from a teacher is to get the buffer into a form such that M-x eval-region paves over all that, sets a known environment of top level definitions, and---more than half the time---the student's code now works.

I have similar concerns about much of Victor's work, for the same reason. Managing a mental model of complex state is an important skill for programming, but it's best learned incrementally over long experience with more complex programs. These very interactive environments front-load the need for that skill without giving any obvious structure to help the student learn.

Contrast Excel and HyperCard, which have no invisible state: you can click and see everything.


But you cannot recalculate if your calculation has trashed your inputs. And if it hasn't, then the snapshot does not impose a cost. If you are willing to forgo the opportunity to replay in order to save memory, just put the producer and consumer in the same cell.


It is essentially already that if you are careful with the variables.

Currently, you can easily control which things are saved and which aren't. So it is the best of both worlds.


Minor point: What you describe isn't usually called working memory. Working memory is what you can "keep in mind" at any one point. It lasts for a few seconds and then has to be refreshed, e.g. by repeatedly saying a phone number to yourself in your head. Working memory is more or less synonymous with short-term memory.

What you describe is long-term memory (everything beyond a few seconds is considered long-term).

Edit: Too slow. Some more justification: Very broadly, one hypothesis is that working/short-term memory is stored in the currently present activity patterns of neurons, which fade/decorrelate after a few seconds. Anything longer is thought to be stored in the weights of the synapses between neurons (there are alternative theories, but I like this one).


Sending before deducting the money seems like an obvious design flaw that should have raised red flags. Is there any explanation for why it was implemented this way, and why it wasn't spotted by the developers?
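
(For comparison, a hedged sketch of the usual ordering, with hypothetical names; I don't know how the real system was structured: debit the balance first and only then send, rolling back if the send fails, so a repeated or failed send can never overdraw the account.)

    def send_payment(recipient, amount):
        """Stand-in for the real network/ledger call."""
        pass

    def transfer(ledger, sender, recipient, amount):
        if ledger[sender] < amount:
            raise ValueError("insufficient funds")
        ledger[sender] -= amount              # deduct first...
        try:
            send_payment(recipient, amount)   # ...then send
        except Exception:
            ledger[sender] += amount          # ...and roll back on failure
            raise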

