
Back in my uni days I did not get why that works. Why are sine waves special?

Turns out... they are not! You can do the same thing using a different set of functions, like Legendre polynomials, or wavelets.


>Turns out... they are not! You can do the same thing using a different set of functions, like Legendre polynomials, or wavelets.

Yup, any set of orthogonal functions! The special thing about sines is that they form an exceptionally easy-to-understand orthogonal basis, with a bunch of other nice properties to boot.
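For instance, here's a minimal sketch of decomposing a function in the Legendre basis instead of the sine/cosine one (the target function e^x and the degree are arbitrary choices for illustration, using numpy's Legendre helpers):

```python
import numpy as np
from numpy.polynomial import legendre

# Approximate e^x on [-1, 1] in the Legendre basis, just as a
# Fourier series would approximate it in the sine/cosine basis.
x = np.linspace(-1, 1, 1000)
f = np.exp(x)

# legfit does a least-squares projection onto the first (deg + 1)
# Legendre polynomials, which are orthogonal on [-1, 1].
coeffs = legendre.legfit(x, f, deg=8)
approx = legendre.legval(x, coeffs)

# The residual is tiny: 9 basis functions already nail e^x.
print(np.max(np.abs(f - approx)))
```

Swap in a different orthogonal family (Chebyshev, Hermite, ...) and the same projection machinery works; only the basis changes.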


To be maximally pedantic, sine waves (or complex exponentials through Euler's formula), ARE special because they're the eigenfunctions of linear time-invariant systems. For anybody reading this without a linear algebra background, this just means using sine waves often makes your math a lot less disgusting when representing a broad class of useful mathematical models.

Which gets to your point: you're absolutely correct that you can use a bunch of different sets of functions for your decomposition. Linear algebra just says that you might as well use the most convenient one!
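A quick numerical sketch of that eigenfunction property, with an arbitrary made-up FIR filter standing in for the LTI system: feed a sine in, and the same-frequency sine comes out, just scaled and phase-shifted by the system's frequency response.

```python
import numpy as np

# Any LTI system will do: here, an arbitrary 5-tap FIR filter
# (h is its impulse response, chosen at random for illustration).
rng = np.random.default_rng(0)
h = rng.standard_normal(5)

n = np.arange(2000)
w = 0.3  # angular frequency in radians/sample (arbitrary)
x = np.sin(w * n)

# Convolve, then keep only the steady-state part (skip the transient).
y = np.convolve(x, h)[len(h):len(x)]

# The frequency response H(w) predicts the output exactly:
# a sine at the SAME frequency, scaled by |H| and shifted by arg(H).
H = np.sum(h * np.exp(-1j * w * np.arange(len(h))))
y_pred = np.abs(H) * np.sin(w * n[len(h):len(x)] + np.angle(H))

print(np.max(np.abs(y - y_pred)))  # agreement to floating-point precision
```

Try the same with a non-sinusoidal input (say, a triangle wave) and the output shape changes; only sinusoids pass through an LTI system unchanged except for amplitude and phase.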


>They're eigenfunctions of linear time-invariant systems

For someone reading this with only a calculus background, an example of this is that you get back a sine (times a constant) if you differentiate it twice, i.e. d^2/dt^2 sin(nt) = -n^2 sin(nt). Put technically, sines/cosines are eigenfunctions of the second derivative operator. This turns out to be really convenient for a lot of physical problems (e.g. wave/diffusion equations).
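A quick finite-difference check of that identity (the frequency n and the grid are arbitrary choices):

```python
import numpy as np

n = 3.0  # arbitrary frequency
t = np.linspace(0, 2 * np.pi, 10001)
dt = t[1] - t[0]

f = np.sin(n * t)
# Central finite difference approximating the second derivative.
d2f = (f[2:] - 2 * f[1:-1] + f[:-2]) / dt**2

# Eigenfunction property: d^2/dt^2 sin(nt) = -n^2 sin(nt),
# so this residual is numerically zero (up to discretization error).
print(np.max(np.abs(d2f - (-n**2) * f[1:-1])))
```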


Another place where functions are approximated is in machine learning, which uses a variety of non-linear functions as activations, for example the ReLU: f(x) = max(0, x).
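For the curious, ReLU in plain numpy (nothing here is from a real framework):

```python
import numpy as np

def relu(x):
    # ReLU activation: elementwise max(0, x)
    return np.maximum(0, x)

# A sum of shifted, scaled ReLUs is piecewise linear, which is the
# intuition behind why ReLU networks can approximate continuous functions.
x = np.linspace(-2, 2, 5)
print(relu(x))  # [0. 0. 0. 1. 2.]
```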

Something about appreciating only after losing

also, demonstrating a marked improvement in the experience.

it really does seem like we're all gonna be using jj soon enough

I recall pijul.org, another working prototype of a better git, and I wonder how much overlap there is in the way they've made their improvements.


Jujutsu has "first class conflicts", but it's different from Pijul's "theory of patches". As far as I know, the other big stuff like "working copy is a commit" and the "operation log" (which allows for `jj undo`, safe concurrency, etc) is not present in Pijul. The approaches to Git interop are very different.

pijul is one of the projects that I’d just sponsor a team for a few years if I was a megacorp or a government research agency because it’s just so damn cool in theory, but has too many rough edges in day to day practice (IOW I’d like to try it but would need pijul colocate for it to make sense)

Supposedly, Pijul doesn't have the "force-push to trunk" problem. This alone makes it interesting.

pijul uses a completely different model of version control than git (it stores diffs rather than snapshots), so the cost of switching and interoperation is a bit higher than with jj, which basically acts like a nice UI over git.

You reminded me of a rule of thumb: keep the complexity in the data structures and the simplicity in the algorithms.

I have seen subscription systems built following that rule of thumb. They collapse pretty quickly, as the data structure becomes impossible to engage with unless you are an expert, and the callers are never experts.

Things make more sense when the data structure lives in a world where most, if not all, illegal states become unrepresentable. But given that we often end up building APIs in representations with really weak type systems, doing that becomes impossible.


Ironically, the attempt to prevent illegal states may create "complex" code (quoted since the complexity may be perceived or actual).

The entire KDE ecosystem is on GitLab:

https://community.kde.org/Infrastructure/GitLab


I truly appreciate people sharing their dotfiles, I learned so much about vim and zsh just by reading other people's configuration alone (and the occasional comments there).

Also, the quality of life improvements like `alias ..='cd ..'`, or mapping `l` such that it either opens a pager or lists a dir, depending on the argument. I'd never come up with those, and they're beyond useful.


Last one sounds interesting, could you share a link or snippet?


I imagine it's something like

    l() { if [ -d "${1:-.}" ]; then ls -alFh -- "${1:-.}"; else "${PAGER:-less}" -- "$1"; fi; }
in the .bashrc


Another telescope hosting service I heard about, based in Spain: https://www.pixelskiesastro.com/


Another factor was the rollout of 5G at that same time:

https://www.scientificamerican.com/article/5g-wireless-could...


It's mostly the US and a few other small markets that even have millimeter-wave 5G NR. This is mostly because the FCC had not wound down analog broadcasts in time, and mmWave/FR2 was the only way to do 5G in the US initially, as the lower C-band was not freed up until 2021. Deployments of mmWave exist solely due to the sunk cost of existing equipment and narrow use cases like stadiums and concerts.

The article predates our current reality, where C-band (3.5 GHz) is available for 5G.


Did you name that law by yourself?


https://flownet.com/ron/papers/tla.pdf

>In retrospect, in the story of the three-layer architecture there may be more to be learned about research methodology than about robot control architectures. For many years the field was bogged down in the assumption that planning was sufficient for generating intelligent behavior in situated agents. That it is not sufficient clearly does not justify the conclusion that planning is therefore unnecessary. A lot of effort has been spent defending both of these extreme positions. Some of this passion may be the result of a hidden conviction on the part of AI researchers that at the root of intelligence lies a single, simple, elegant mechanism. But if, as seems likely, there is no One True Architecture, and intelligence relies on a hodgepodge of techniques, then the three-layer architecture offers itself as a way to help organize the mess.



I wonder if attention span can be increased somehow. Relatedly, I've noticed that context switching from one deep task to another requires significant energy and time; I wonder if that can be optimized too.


Most effort goes into loading your working memory. The longer the disconnect, the more effort reloading takes.

It occurred to me that it would be beneficial to use flashcards to trigger that memory. It sounds like an unconventional/unorthodox use of flashcards, but why not? Take screenshots of your codebase/functions/commits, capture progress on your workbench, take little notes of your progress/procedures/learnings/recipes, and resurface them. I'm sometimes annoyed to rediscover a useful tool I made a while ago.

I haven't yet done this systematically, much less with an SRS, but it's surely worth a try. It's difficult to predict when it's worth the effort, but keeping "lab notes" is a good habit anyway.

