Do we really need yet another list of algorithms to memorize for interviews? Is another list going to make me a better JavaScript engineer? Will it help me understand JavaScript better? Will it help me on the job? Will it make me a faster and better React developer?
The obvious answer to all of the above rhetorical questions is: fuck no.
That being said, it is still a staple of interviews, so I will upvote because this is a pretty good implementation, even though this repo has been submitted to HN at least 4 times in the past.
Someone also mentioned Mnemonist [1] - that one is nice too.
Are you saying it’s not important to know what a binary search is?
If I interview someone I don't go deep into algorithms, but I do want to know that they understand things like binary search and hashmaps. Ignorance of these shows ignorance of the basics of computation.
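For reference, here's roughly the level I mean: a minimal binary search sketch over a sorted array (my own toy code, not from the repo under discussion):

    // Returns the index of target in a sorted array, or -1 if absent.
    // Halves the search range each step: O(log n) comparisons.
    function binarySearch(sorted, target) {
      let lo = 0;
      let hi = sorted.length - 1;
      while (lo <= hi) {
        const mid = Math.floor((lo + hi) / 2);
        if (sorted[mid] === target) return mid;
        if (sorted[mid] < target) lo = mid + 1;
        else hi = mid - 1;
      }
      return -1;
    }

    binarySearch([1, 3, 5, 8, 13], 8); // => 3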
Not just GOOG, but all of FAANGMULAetc... and smaller start-ups as well. Robinhood asked me a dynamic programming leetcode question for their web engineer (front end) role, and Bloomberg asked me an OOP design question (I was asked to code in Python, even though I do not have Python on my resume...the interviewer was a Python developer) for a JavaScript role - both were 1st round phone screens.
The tech industry has normalized asking such questions - it's rare to be interviewed on domain expertise, unless you are interviewing at a Fortune 500 big corp.
Being able to rattle off exactly how any particular data structure is implemented isn't useful for most developers. Being able to reason about algorithms and data structures _is_ useful in day-to-day development. And that does start with having a basic understanding of some subset of them.
You can clearly see the impact of computational ignorance on the modern web. JS engines have been optimized to hell and back and there are _still_ sites out there that are godawful slow for no good reason at all.
Thanks for your response -- I think you're probably right, but a lot of JS's slowness seems to just be all the junk a typical page pulls in over HTTPS for ads, frameworks, and whatnot. Also videos that autostart, obnoxious pop-ups that take up the full screen after you scroll 20% into an article, etc. That's not necessarily JS's fault, and algorithms wouldn't help there. :p
Is there any practical reason to use, say, a linked list from this library vs JavaScript's native growable array? You can use splice() to delete in the middle.
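For anyone unfamiliar, the array version of a middle delete looks like this:

    const a = [1, 2, 3, 4, 5];
    a.splice(2, 1);  // remove one element at index 2
    console.log(a);  // [1, 2, 4, 5]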
Are there practical reasons for using a linked list in JS? This is an honest question. I’ve taken immutability to heart, so storing a stable yet presumably mutable reference to a cell seems like a bad idea to me. (If it’s immutable, you can keep and use the cell’s value instead of caring that it came from a list.)
TIL ES6 Map is guaranteed to iterate in insertion order. That always made me crazy in C++ too; the structure that I usually actually want is std::unordered_map not std::map.
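A quick illustration of that guarantee:

    const m = new Map();
    m.set('b', 2);
    m.set('a', 1);
    m.set('c', 3);

    // Iteration follows insertion order, not key order:
    console.log([...m.keys()]); // ['b', 'a', 'c']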
There are other algorithms and use cases that call for stable iteration over an ordered sequence, but I can't think of any offhand. It is true that it's rare to prefer a linked list on modern CPU architectures.
This property is only helpful if you have a reference to the node or are only making head/tail updates. If you have to do any amount of iteration, you're usually better off just rebuilding the list. It takes less memory, and is better on the cache.
The one other time a linked list is really good is if the list is large and you can't afford the risk of having to resize the array backing the vector.
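To make the node-reference point above concrete, here's a rough sketch (my own minimal node shape, not this library's API) of the O(1) unlink you get when you already hold the node:

    // Assumes a doubly linked list with nodes of shape { value, prev, next }
    // and a list object with head/tail pointers.
    function unlink(list, node) {
      if (node.prev !== null) node.prev.next = node.next;
      else list.head = node.next;        // node was the head
      if (node.next !== null) node.next.prev = node.prev;
      else list.tail = node.prev;        // node was the tail
      node.prev = node.next = null;      // drop dangling links
    }

Only the node's two neighbors are touched, no matter how long the list is - which is exactly why the property is useless if you first have to walk the list to find the node.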
Though, weirdly, the author lists the deletion big-O as O(n). I would have thought maybe they meant traversing to the item to be deleted, but that would imply insertion is also O(n), since you'd have to traverse to the point of insertion. Unless they are treating deletion as deleting from an arbitrary location, and insertion as happening only at the head. Very strange.
But, yes; while real-life performance characteristics vary (cache locality can lead to surprising wins for arrays even on insertions/deletions), at least theoretically there are use cases where a LinkedList will win out.
EDIT: Huh. They also list the insertion and deletion cost of a hash table as being O(n). That...is not right: with a reasonable hash function the expected cost is O(1), and O(n) is only the pathological worst case. I mean, they call out "in the case of a perfect hash function it would be 1", but treating the literal worst case as the actual runtime is like saying quicksort runs in O(n^2) (which the author does not do).
I did some quick benchmarks the other day comparing a sorted array vs a red-black tree from https://github.com/preludejs, and it looked like the rb tree had better generic performance for N > 20k - i.e. memcpy wins for most cases, the ones that fall under 20k elements. You should of course measure for your own use case. Libraries (or your own implementation) should make it easy to switch between underlying representations.
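If you want to reproduce that kind of measurement, here's a rough starting point (my own toy harness; the N and names are assumptions, not taken from preludejs):

    // Maintain a sorted array via binary search + splice, then time it.
    // Swap in a tree-based structure and re-measure at your own N.
    function insertSorted(arr, x) {
      let lo = 0;
      let hi = arr.length;
      while (lo < hi) {
        const mid = Math.floor((lo + hi) / 2);
        if (arr[mid] < x) lo = mid + 1;
        else hi = mid;
      }
      arr.splice(lo, 0, x); // O(n) shift, but it's a cache-friendly memmove
    }

    const N = 20000;
    console.time(`sorted array, N=${N}`);
    const a = [];
    for (let i = 0; i < N; i++) insertSorted(a, Math.random());
    console.timeEnd(`sorted array, N=${N}`);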
There is a resource that shows algorithms in multiple languages - rosettacode.org - it presents a task (the problem to solve) alongside solution implementations in each language.
I think there is a strong correlation between the number of Javascript programmers interested in algorithms and data structures and the number of C programmers interested in front-end frameworks.
[1]: https://yomguithereal.github.io/mnemonist/