On the list of things I always Google is how to create a symbolic link under Linux. I just can't figure out a way to remember what comes first: the source or the destination. The man pages add to the confusion by calling the "source" the target. So, the rule of thumb I now follow is cp or mv semantics.
Here's how I finally memorized it: ln has a 1-file-argument invocation, so `ln -s ../../a_fine_file` will create a symbolic link to that file in the current directory, under the name "a_fine_file". The single-argument case has to take the file you want to link to as its input. That generalizes nicely: the 2-file-argument invocation keeps the same logic, with the file you're linking to still coming first.
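Concretely (made-up paths, same idea):

ln -s ../../a_fine_file              # one argument: creates ./a_fine_file -> ../../a_fine_file
ln -s ../../a_fine_file other_name   # two arguments: creates ./other_name -> ../../a_fine_file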
A guy in the office always remembers it via that wedding saying: something old, something new, something borrowed, something blue. Whenever he made a symlink he would say out loud "something old, something new...". That has stuck with me as well for all these years, so I've never needed to figure out which was which; I always just knew.
With the added '.' at the end. That way, I can remember to change the period out if I don't want the same name (or if I'm linking in the same directory).
Interesting. Thanks, that's helpful. It's also a necessary approach when using relative paths as targets and you want the link created in the current directory.
I think of it like `cp`. Source to destination, where destination is the soft link. So it's like copying a file somewhere, but instead of a copy you're making a link.
This. "I want something here that points to there" is how I always think about it. That's why "link" on my boxes is mapped to "ln -s" with the arguments swapped :)
> I think of it like `cp`. Source to destination, where destination is the soft link.
That doesn't make sense to me. The way I think about cp is "copy this, and put it over here". If you think about symlinks that way, you mix it up.
When you say "source to destination, where destination is the soft link", that makes it more confusing for me, because if you consider the symlink that is being created as the "source" of the pointer, and the file it points at as the "destination" (which at least is the intuitive way to think about it for me, but I suppose it's individual), you actually end up with:
# cp [source] [destination]
# ln -s [destination] [source]
Where "source" is "the new thing that should be created".
Since I seem incapable of getting out of this way of thinking about sources and destinations, my rule of thumb is that when creating a symlink, you always decide where it should point first. Not intuitive perhaps, but this time I've made the mistake so many times it kinda sticks.
If it was a hard link, I suppose my way of thinking about it would make more sense, since all hard links are equally valid; they're pretty much the same thing. That is, after running
ln /some/file /other/file
/some/file and /other/file are hard links. So in this case ln is just copying a hard link, while afaik cp would be copying the data, making a new hard link that points to the new data. In userspace? That seems to be what's happening here: https://github.com/openbsd/src/blob/master/bin/cp/utils.c
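You can see the difference on any shell by comparing inode numbers (the file names are made up; the behavior is standard Unix):

touch /tmp/some_file
ln /tmp/some_file /tmp/other_file                        # hard link: both names share one inode
cp /tmp/some_file /tmp/copied_file                       # copy: new inode, duplicated data
ls -i /tmp/some_file /tmp/other_file /tmp/copied_file    # the first two print the same inode number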
This is made a bit more confusing by the differences in man pages. For GNU ln, it's shown as ln TARGET LINK_NAME, but for OpenBSD, ln source [target]. But the usage is pretty much the same?
I do this as well, but it only works if I don't think about it. If I start thinking about it, I begin second guessing myself on whether I got it right and end up having to look it up.
I always think of links as pointing from source to destination. Your way would break that intuition for me. It would be the source of the address and the destination for the address to be copied to.
I understand fully; that’s also how I thought of it for years and why I couldn’t ever remember it. But if you think about it like copying a file it’s easier to remember. And it is like copying a file— it’s copying a reference to a file.
Given the number of things that use different source vs destination with the same type, I just have trained myself to always look it up, even if I am 99% sure I remembered it correctly, because holy shit when you get it wrong sometimes it's bad.
softlink dest <- src
file cp src -> dest
std::memcpy dest <- src
java arraycopy src -> dest
golang io.Copy dest <- src
etc etc
I'm only 80% confident I didn't make an error in the above 5 examples...
The second argument is optional. Therefore, the first argument must be the real file, and the second the link name. It does not make sense to omit the real file.
Two minor points: they are both real files, and the symlink may be created first, before its target even exists.
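For example (hypothetical names):

ln -s will_exist_later.txt dangling     # fine: a symlink may point at nothing yet
cat dangling                            # fails: No such file or directory
echo hello > will_exist_later.txt       # create the target afterwards
cat dangling                            # now prints "hello"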
"Source" and "target" are confusing names because the target of the ln command is the file it creates, which is the symlink that points to the source file, which is the "target" of the symlink in an intuitive sense. The mnemonic works because it matches mv and in that case the file that exists must obviously come first. In ln without -s the file also must exist, which makes perfect sense. So it's easy enough to remember if you understand what the ln semantics are and don't get hung up by the source and target jargon.
Reading the man page instead of Googling this will make you a better engineer, and it will also cost more of your time while you figure out what the man page is trying to say. This knowledge may save you time later, if you use it again, or not. So the advice I would give is, never Google if the answer can be found in the man page. Unless it's a tool you're unlikely to ever use again, of course.
Man pages are so insufferably long and mostly irrelevant though. It's come up on HN before, I think, how if you're ever looking for anything specific like commonly used arguments and common usages, it's a complete waste of time to go through an entire man page.
Except that, if you think about it, every sentence in a well-written man page was put there by an absolute expert in the tool who thought you might need to know it. If you want to master your tools, it's absolutely not a waste of time to read the friendly manual. When you use a system professionally every day and refuse to read the manuals because it takes too long, and you're in way too much of a hurry, you're only hurting yourself in the long run.
You're probably going to notice the time you spend reading man pages, when you just! want! to! get! something! done! And that's frustrating. What you're less likely to notice is the next five times you reach for the wrong tool, or have to go hunting around for some bubble-gum-and-twine way to do something, because you didn't previously read the man page that tells you exactly what you needed to know to do it easily and correctly.
Well, no. Even if I read the damn thing, there are tens of tools I use day to day. I'm not going to remember any of it. It doesn't matter how much of an expert the author was if it's not structured in a useful way. And I've read enough fucking man pages. I've retained zero except for maybe what I ended up using. If I used it more than once. Maybe. Man pages don't tell you shit. They tell you everything and you still need to parse, interpret and analyze it because it's so ridiculously dense and there's never any guidance on actual real-world usage. Theoretically they're useful, realistically, they completely ignore everything about how human beings function. They're not written for people.
Hell, I often find that while the author has managed to cram everything and the kitchen sink into it, there's so much missing from it regarding how a tool works, caveats, whatever. All these things that are crucial to actually understanding the tools you're using beyond what some flag or other does on a surface level.
Reading, interpreting, and using dense technical documentation is a skill, just like reading maths. If you don't get anything out of it, maybe this is a skill you are presently lacking and something you should improve. Taking notes on what you are doing and reviewing your initial guesses, resources consulted, and eventual solutions may help you consolidate this ad-hoc knowledge into understanding.
I've read many man pages, many many times, and if I get nothing out of it, I find it is invariably because I am in a hurry, and I'm probably about to fuck something up. The thing to do is usually to slow down and do it right. Only rarely is the right thing to ask someone else for the solution, or see what some other poorly-informed person on the internet thought, although in an emergency asking for help is almost always the right call.
A huge amount of what man pages don't tell you is general Unix philosophy. Man pages don't tell you the lore around the thing, they describe the implementation. It's up to you to infer the consequences and how it can be used. Man pages don't hide the underlying bones of the system either, and generally assume you're comfortable writing a C program to test a syscall if you're not sure about something. So sure, reading man pages isn't always easy, but neither is getting regular exercise or eating your vegetables.
You're insufferable like every other Linux elitist out there. You realize being polite while being an arrogant prick doesn't change how much of an asshole you are being? I hate this kind of bullshit and I'm tired of it. And I'll get warned because I'm the one getting mad over it. Fuck this shit.
I know it sucks when your environment sucks and your tools suck and people tell you your habits suck too. I can tell you're frustrated by your tone in this thread even before I told you that you may lack the skill of slowing down enough to read technical documentation. So, you're right. I'm telling you to be better and try harder, and I don't even know your situation! Maybe that does make me an asshole. Certainly most people higher on "agreeableness" than I am wouldn't say those mean things.
By the way: I might be a Linux elitist, but nobody's ever called me that. I use OS X in my day job. I've been programming in some capacity for about 30 years. Your mileage may vary. If you take my advice you may hate me for it, but it will almost certainly make you a better developer in the long run. And I don't mind if you think I'm an asshole. I know I'm right, and you probably know I'm right too.
Finally, if we weren't having this interaction in public, I'd be kinder, gentler, and might not say anything at all. But if one other person who hates reading difficult technical material is pushed to overcome that limitation by reading this thread, it's worth it, even if it means making you mad.
And just in case you're still reading, in addition to reading man pages, and taking notes, here are the other things you should be doing:
Preferring textbooks over blog posts and youtube videos to learn a new field.
Reading original research by pioneers in the field, like Turing, Shannon, etc, rather than their main findings rehashed by lesser minds.
Reading source code rather than jumping from documentation to Stack Overflow when something doesn't work.
Reading Knuth on algorithms, Stevens on Unix networking, etc. In other words, read the classics that everyone says you should read but most people don't. Work through the exercises. It's the closest thing to an actual superpower.
Looking around for people better than you at all this stuff to help you improve.
It's one of those things that, after almost 20 years, I should remember. But it just does not go in.
It becomes interesting when junior programmers are watching how I do something and I end up googling things that they know.
I use the excuse that it frees up more space for other, more interesting stuff; a bit like how some execs just wear a T-shirt and jeans to reduce the number of things to distract them in the morning, so they can concentrate on the important things.
It isn't just like `cp`, on some systems `ln` is just a symlink to `cp`! `cp -s` does the same thing as `ln -s`, although the other flags are generally different.
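For example, with GNU coreutils (other cp implementations may lack -s, and the flag is picky about relative paths that cross directories):

cp -s original.txt link.txt    # link.txt becomes a symlink to original.txt, just like ln -s
ls -l link.txt                 # link.txt -> original.txt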
The place you're putting something is last in each case; we can even include everything that doesn't modify its location.
All I can think of that breaks the rule is `rm`, `unlink`, and `umount`. But they're hardly gotchas, and they're not reversing arguments; they just don't have a "destination".
That's because with zip (and tar, and other archivers) you can have multiple items to move to a destination archive.
You can also do that with cp and mv if you need to, using `-t`. `cp -t .dotfiles/ .nanorc .bash*`. Generally more useful when you want to move a bunch of files around.
I think of 'ln' as a tool which creates a symlink. And a symlink is a new object, a new kind of file. So it must be the last argument. The source is just an attribute of that symlink "file", holding the path it points to. If I want to overwrite an existing symlink "file", I simply add -f, meaning force-overwrite-if-the-symlink-file-already-exists.
I couldn't remember it either.
The way I finally did it was to memorize 'ln -s -T'. -T stands for target; that way I know the next thing must be the target, then the link name.
If you google/DuckDuckGo something more than once, I think it should be put into documentation. I have one central org file where I keep the majority of my one-off questions about a tool, and these days I often look there first, before man or info, because I have condensed the information into something faster for me to parse.
Right now my main battle is deciding whether to port docs like this to asciidoc(tor) or keep them in org, both being exported to HTML5 eventually.
I search (duck duck go) for this constantly. For a couple of days last week I remembered it, but it’s always helpful to check. Some day I should see what happens if I get it backwards. Probably it will say no and that’s it. If that’s true then I can just try until it works!
I remember it by first remembering that it is possible to not provide the destination (ln /.../file) to create a symlink with exactly the same name in the current directory. So it has only 1 required argument which always has to be first.
That is the symlink target, not the ln target, which is exactly why the "target" nomenclature is confusing. Also note that not every "ln" has a "--help" or uses that help message as the output. For example, try on OS X `/bin/ln -h`.
Yes, it would be platform specific as the OSX ln command would not be the same program as the GNU coreutils ln program.
For GNU ln at least I don't find it confusing at all particularly considering the only other option is LINK_NAME. I guess YMMV though and it seems kinda pointless to argue whether it is or is not confusing. Perhaps a poll could quantify it.
I wasn't speaking specifically about GNU. Note though that even there you see a --target-directory, -t, -T flags that make literally no sense anymore once GNU reversed the meaning of the term.
I totally get that you don't remember what goes first, it happens to me with tons of commands... but googling it? Why don't you just execute the command and see if it worked?
Haha, glad to see I'm not the only one who struggled with remembering this. I always referred to a task in a fabfile to remember. +1 for "follow cp or mv semantics".
There are some clever tips here on how to remember. Ages ago I struggled too, so for half a day I just kept repeating the words "ln target linkname" in my head. Anyone else?
I feel like a lot of this stuff is rote memorization that we shouldn't be so eager to demonize. The medical profession has you memorizing effectively random shit for years and that's fine. But in our industry, we think we should just magically remember stuff we enter once in a blue moon, and if not, who cares, just Google for it. It bothers me. And it's something I've increasingly turned to flash cards for. I'm tired of googling that command I used once 3 months ago but would be insanely useful right now.
The infuriating thing is that Windows' mklink does it the other way around: link, then target. It took me years to stop RTFMing every single time I'd use mklink/ln.
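Side by side (made-up paths):

ln -s /path/to/target mylink         # POSIX: existing thing first, new link last
# mklink mylink C:\path\to\target    # Windows cmd.exe: new link first, target last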
I have remembered the order of arguments since I figured out that the second argument is optional. So the link name is of course the optional one, not the original file's.
I don't understand where the confusion is with scp. It's just like cp, except you have a syntax for making some of the arguments remote. It doesn't change the order; it always does semantically the same thing: moving something(s) somewhere.
This is not to criticize - we all have places our mental models break down unexpectedly. I'm just interested in how that's happening.
I no longer have issues with tar since I found out that most systems I use don't need the compression flag anymore, and are happy to figure it out themselves. Not sure if it works when compressing or not (I generally use 7z or zip), but you can just `tar -x[v]f` on anything, regardless of file extension, and tar will extract anything it supports.
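For example, with GNU tar (older or non-GNU implementations may still want the explicit flag):

tar -xvf archive.tar.gz          # gzip is detected automatically on extraction
tar -xvf archive.tar.xz          # so is xz; no -z or -J needed
tar -caf archive.tar.gz dir/     # when creating, -a picks the compression from the suffix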
What has plagued me is not a notion that an experienced engineer needs to look stuff up, but a similar one that feels related: that as an experienced engineer I should be able to immediately incorporate a new thing I have no experience with, right then and there in a pairing session.
It comes in the form of "go ahead and install this thing, and throw this config value into it, and it should just work". And my reaction is "I want to read up on that thing first. I want to know what stuff it writes to my computer and where and what paradigms it uses. I want to think about how it will best integrate with the tools and workflows we have already. And then once I've done that, I'll probably be able to move forward with it comfortably. Then I'll want to document it so it becomes part of the regular setup others do when they onboard onto the project."
Just this past week when I said a version of this, I got a reaction like there is something wrong with me.
This is probably 10-50x fewer searches than I do in a week. I am assuming many searches were left out. I'd be interested in seeing how the OP tweaks their searches, as the results don't return exactly what they want.
Here is an actual single search progression for me (in reverse order because copy/pasta :shrug:):
react context optimize rerender "props.children"
react context optimize rerender
react usereducer dispatch async
react usereducer dispatch api
usecontext usememo
optimize usecontext react
usereducer rerender usecontext
usereducer rerender
when to use usereducer
react hooks usecontext and usereducer
This is a fairly simple search, too - no use of negative search terms and minimal use of phrase matching. I didn't see any of those enhancements used in the OP which seems odd.
It is pretty dependent on what the search results look like after the first search.
In the above example, I would probably filter out results having to do with other react hooks so adding `-useState` would help accomplish that. If I am googling specific syntax or an error log, then wrapping it in quotes will do a phrase match and filter out results that don't contain the phrase.
I don't use the `site:` search pattern because just adding `github` does a good enough job.
Another pattern I use is `filetype:pdf`.
:thinking: And lastly, I nearly always use the search tools to filter to results from within the past year. That does an okay job of filtering out older documentation, tutorials, articles, etc.
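Putting those together, a refined query ends up looking something like this (illustrative, not my actual history):

react usereducer rerender -usestate
react "too many re-renders" usecontext github
react context api filetype:pdf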
This is a fun exercise; then ask the second question: "How productive can I be in my job when the network is unavailable?"
The author states: "What I’m trying to show with all this is that you can do something 100 times but still not remember how to do it off the top of your head."
My experience differs from this, if I were to rewrite it I would say something like:
"You can do something 100 times, and as long as you can look it up somewhere, it is okay to not memorize how to do it."
You have to evaluate the impact on your flow of stopping to look something up, and you have to decide what you consider your 'base' skill set to be when deciding whether you should memorize something.
Before Google, the canonical case here was arithmetic. Who needs to memorize multiplication tables if you have a calculator handy to do simple arithmetic? The answer, it turns out, is everyone: if you cannot do basic arithmetic in your head, you are always going to be at a disadvantage with respect to someone who can.[1]
I have found a reasonable compromise: when I Google something like this, I write down the solution in an Evernote notebook that I keep for such things. So if the same question comes up, I can always find the answer without needing either the web page or Google to get to it.
[1] And as a "magic trick" you can hand a cashier what appears to them to be an odd amount of money, only to have them discover when they enter it into their register the change is a minimal number of coins/bills.
> "You can do something 100 times, and as long as you can look it up somewhere, it is okay to not memorize how to do it."
We used to memorise information, now we memorise meta-information - a mental map of concepts and trigger keywords. We have learned to quickly grok new concepts and we still have to understand how things work in order to do anything.
That's one of the cool things about the Red Hat Certified Engineer exam: no internet access. You still, of course, have access to anything available from the yum/rpm repositories, so man pages and documentation are there for the browsing.
I definitely felt proud of myself for passing, especially when the other 6 senior linux admins all failed because they were too arrogant to study or prepare.
Yes, there is value in memorizing things, as things in our memory are easier to work with for thinking than things we have to go out, search for, access, and then get into our memory. See for example Barbara Oakley's discussion on mathematical fluency:
Or Feynman on the need for mathematical fluency to do physics:
> What we have to do is to learn to differentiate like we know how much is 3 and 5, or how much is 5 times 7, because that kind of work is involved so often that it’s good not to be confounded by it. When you write something down, you should be able to immediately differentiate it without even thinking about it, and without making any mistakes. You’ll find you need to do this operation all the time—not only in physics, but in all the sciences. Therefore differentiation is like the arithmetic you had to learn before you could learn algebra.
> Incidentally, the same goes for algebra: there’s a lot of algebra. We are assuming that you can do algebra in your sleep, upside down, without making a mistake. We know it isn’t true, so you should also practice algebra: write yourself a lot of expressions, practice them, and don’t make any errors.
Of course, in this particular discussion re: JavaScript, I think the design of the language doesn't help much here. Consider JavaScript's `Date`, which the OP calls out as an API with a particularly difficult-to-remember set of conventions. Authors of languages and libraries can remedy this with coherence in design, naming, and behavior that helps people build mental structures for how these things work and achieve the kind of fluency they need to do things without googling. Ruby's standard library, I think, is particularly good at this (here the "principle of least surprise" helps not just with discovery but retention).
EDIT: also want to add that using documentation and external resources is totally valid, as the grandparent comment states. There's just a balance to be had between relying on Google versus what you can draw from your mind quickly. Also think it's worthwhile to note that there is interesting work to be done in making documentation systems better and more integrated into our runtimes, see for example: https://www.geoffreylitt.com/margin-notes/
Flow is key. At the same time, I agree on not having to (or trying to) memorize everything.
I've been exploring building a tool which is sort of a "universal search bar" for programming questions: finding answers via google, searching local repos, finding code snippets you've saved, etc. Another way to think of it is like an extension of your memory.
The idea is to make the information super fast to retrieve, so you don't break flow while you're programming.
I also run into this while learning a language. There are words I'd have to look up in Spanish every time I encounter them. Started writing down these problem words in a notebook to commit them to memory to end the cycle.
Personally if the internet is down my employer wouldn’t expect me to get any work done, so it doesn’t matter. Would be different if I was self employed though.
My web search activity went up a lot with experience. Back in the day, almost all of my work consisted of cranking out stuff in the main language (PHP or Python), spiced up with SQL and occasional web or db server setup. I needed manuals sometimes, but not web searches so much. After the hipster-programming/devops explosion of the early 2010s and my dive into highly optimized heterogeneous solutions, the work switched to whipping up logic in a handful of different languages with incessant fiddling of couple dozen specialized technologies and trying out new ones constantly. I don't need the manuals for the core languages anymore after all the years, but I'm not gonna learn Maven, Gradle, the Big Pile of Java Options and whatever else to set up another data-crunching daemon, in Java this time.
I also looked into freelance jobs lately, and it's pretty crazy there too, especially with JS's frantic mutations. You'd mostly need a handful of frameworks and a whole load of specialized microsolutions, but still: last year everyone was using Ionic, now React Native is everywhere. Web APIs are also evolving nonstop. Chill out for a few months and you're lagging behind the others.
And that's not even beginning to mention compiling other people's C programs under MacOS. Pretty sure that ‘The voodoo of satisfying the compiler’ would be a book of respectable thickness without ever getting into programming proper.
I know gradle, and pretty well at that. I had to learn the basics of maven to develop a Jenkins plugin (at work). Developing jenkins plugins for a long time required using maven. Now there's the option for using gradle, but my recollection was that it was a sub-par experience.
Afaict, everything just relies on Maven and its packages as the backbone. At least, after any sort of Java-dev activity I risk discovering 1.5 GB of Maven's package cache in my home dir.
(That last character is a capital "eye"/I, for "I'm".) It does a Google "I'm Feeling Lucky" search for your query, and restricts the results to the Python docs. So, e.g., I can type:
py enum
And get the Python docs for the enum module.
I recommend this for MDN, too, e.g. I can type,
mdn map type
and get the JS `Map` docs.
I need to add one for crates.io & the Rust standard library. Note, of course, that this sends your searches to Google by virtue of using the "I'm Feeling Lucky" functionality.
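The keyword URL itself looks something like this (the btnI parameter is what triggers "I'm Feeling Lucky"; the exact site: restrictions are from memory, so double-check them):

py  -> https://www.google.com/search?btnI&q=site:docs.python.org+%s
mdn -> https://www.google.com/search?btnI&q=site:developer.mozilla.org+%s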
Or, use duckduckgo.com instead (works on all major browsers) and use the following when searching (saves you the time of having to set up/maintain it yourself):
Wow. I was gonna say "Why not just right-click the search box on docs.python.org and add that?" But I tried your search and it's so much faster and more useful.
You can set up that MDN shortcut easily using the website https://mdn.io/, which does nothing but redirect to an I’m Feeling Lucky Google Search like you described. In your keyword bookmark (Firefox) or custom search engine (Chrome), just use the URL pattern “https://mdn.io/%s”. I titled my bookmark “Search MDN via mdn.io” and gave it the keyword “mdn”, same as deathanatos.
This is a really interesting peek into the developer's mind. Right away I notice I google things differently than the author. I try to hit up phrases that match a question (which is why I probably over-index on Stack Overflow). The author seems to hit up ideas that remind me of something that might be the title of a blog post. Also, my search history is definitely more full of stack trace keywords. Probably 80% stack trace keywords by weight.
Stack traces have been great at fingerprinting the issue, especially to find something on a Github issues page. And the success rate is inversely proportional to how long you have spent working on a project using that dependency. The more time you've had, the more creative ways you find to screw it up.
Completely agree. If you're interested in giving insight into your own developer mind, I'm working on a social network to do just that[0]. The goal is to be the sweet spot between Twitter and a blog – short enough to jot down (as blogs take too much effort), and long enough to go beyond Twitter's character limit. And you get markdown!
It's a very barebones MVP at the moment, but you can expect more features very soon. Open to feedback!
I almost never search for stack traces: I pick out class names or error codes and search for that instead. How's your success rate with finding useful information?
This is great, but the premise seems like kind of a straw man. I don’t know a single developer that thinks “googling stuff means you’re not a proper engineer.”
Perhaps this is a reference to the interviewing process. Tests are a different thing, though. I’ve been on practical technical screens where you can google, because the test was about building a larger system, and they don’t care if you don’t remember this or that API syntax. But at the same time I get if you’re trying to assess someone’s aptitude for devising novel algorithms, sometimes a closed book test makes sense.
I think there are many devs (myself included earlier in my career) that push back on the algorithm tests because, well, they're really hard. How often do you have to derive novel algos? A lot of our work is UI frontends to DBs or gluing things together, and most of the genuinely hard, scalable algorithm problems someone else has already implemented in a library. So why go through 6 months of really hard study just to get that kind of job?
Well, I think there should be “maker” roles where you can bypass the hard CS stuff and just crank out code, if you have an impressive portfolio of work. But having a deep understanding of data structures (not just arrays and maps) and algorithms really gives you a mastery of your craft, especially around performance and scalability.
Nobody should be screened out because they didn’t remember a particular bit of syntax though. (But it should be noted most algo tests don’t require anything but the language basics.)
> I don’t know a single developer that thinks “googling stuff means you’re not a proper engineer.”
I've met lots of younger devs who have varying degrees of impostor syndrome, so anecdotes like this probably help people like that feel less bad about not knowing everything.
I'm of the opinion that not googling stuff means you're not a proper engineer.
And from what I've seen, those who don't google consistently also fail to understand the answers they find when they do google things. It is, in fact, an essential skill.
I’ve asked pre-internet programmers and they said they used to keep reference books at their desk or even a small library/book room at their employers.
I’ve heard some government contractors can’t google because they work on non-internet connected computers. They probably keep a lot of books.
Starting around 8th grade, I would walk to the recently opened Barnes and Noble with a pen and notepad and hand copy material from programming books I couldn't afford, then walk back home and try the stuff out in Turbo C. Rinse, repeat.
I haven't looked anything up in a book on programming in a long time, but I'm not really a software engineer presently. While I rely on Google in general and stack overflow in particular, I don't believe everything is on the Internet. I went through a period where I started to think everything was, but in recent years I realized it isn't.
And just making the decision to type things into Google is far from the ability to search effectively. Increasingly search engines will converge on a bad answer for a given question because it's the most popular. You just cannot assume that the correct solution is going to be prominent, because once another one has critical mass, it cannot be dethroned. The result that sounds just barely plausible enough to fool the average person making the search (not the average programmer) wins, and often it's terribly wrong.
A friend works at a military contractor doing (effectively) windows GUIs for power system controls. His only access to an internet-connected computer is on his breaks, where he has to wait in line to use one of a very small number of computers in a special room to google things.
I don't think he has books, but he has an enormous amount of documentation (something like the whole MSDN library, a bunch of internal documentation and reference info, and other documentation) installed on his computer.
I've never worked pre-internet. In fact, I've had internet access in all of my jobs, but I've definitely worked pre-"internet was a useful source for searching for answers to programming questions". I mean, we had Usenet, and you could ask a question, but we didn't have search engines that were useful in that capacity.
I definitely had a lot of books, but actually the vast majority of my information came from man pages, RFCs and specs. I actually learned C++ by reading the spec -- I've never read any C++ book in my life, even though I was a professional C++ programmer for at least 10 years. For learning the STL, I did it by reading the source code!
Probably the biggest thing I miss in the age of Google is that offline documentation is getting hard to find. One of the reasons I chose Rust for my latest side project is that it's extremely easy to install offline information for virtually everything.
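For example (assuming a stock rustup/cargo install):

rustup doc --std    # opens the locally installed standard library docs in a browser
cargo doc --open    # builds and opens HTML docs for your crate and all its dependencies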
I like doing web searches, and Stack Overflow is really stupendously awesome. Especially when I'm working in a new language or framework, I can find idiomatic solutions to problems without having to read through thousands of lines of source code. However, I do think that newer programmers today are missing out when they don't learn how to search primary sources to get their answers.
If you have a question about how the language works, it really does pay off to look at the spec. In doing so, you learn about things in a larger context. I'm always grateful for SO answers that have links to primary sources like specs in their answers, but I've noticed that my younger colleagues almost never click on those links. When they have the answer they want, they are back to their own code. Often they miss important nuances because they are too focused on getting the answer.
Similarly, I have found that there is often a huge reluctance to read source code. A colleague once joked that he used dependencies so that he didn't have to read the source code. It's funny because it's true ;-) However, if you at least try to answer your questions first by looking at the source code for a dependency, then it will tell you a lot about that dependency (mostly: OMG! We need to ditch this ASAP! ;-) )
I used to have a collection of hundreds of RFCs on my computer. Really, anything that is about the internet has an RFC (or did back in the day, anyway). These days, even I'm lazy and don't bother to maintain offline collections of this information. However, it is worrying that I often run into developers who build very complex internet applications and don't even know what an RFC is, let alone use them to learn how they should build their systems. They will have very ingrained opinions on how to build things, but they have no idea how the HTTP protocol works, for example. Discussions on the topic of how to design something usually result in snippets of blog posts obtained through Google searches, rather than pointing to the relevant RFC and saying, "It works like X, so we need to do Y". Negotiation of how to proceed often involves considerable heated discussion about which notable internet pundit we should trust the most.
Yeah, this is an "old programmer rant" ;-) However, if I could wave a magic wand and get younger programmers to all add something to their arsenal of tricks it would be to read primary sources early and often.
> we didn't have search engines that were useful in that capacity.
I remember that. I think it was a combination of there not being good reference websites in some cases, not good question sites in others, and search engines generally being both less good and having less of those sites to index.
I specifically remember finding sites that were goldmines for certain topics, and hoarding them in my bookmark list of useful resources.
RFCs are great and definitely underutilized. I was working on some stuff with CORS and cookies at my last job, and reading through portions of some RFCs really helped me solve my problems and understand how CORS works.
I have a friend who used to work for the defence civil service, and one time he was out in the desert in the USA with no proper internet connection, just email. He emailed his techie friends to ask how to do some arcane VBScript stuff, since we could look it up for him.
I worked for a few months without an internet connection, I'd go to public wifi spots and load up on all the documentation and packages I needed. I wouldn't recommend it but it definitely teaches you to be resourceful.
The first programming language I learned was BASIC, using the (quite obscure) Z-BASIC environment. I put a lot of wear on the reference manual, and had many page numbers memorized.
Google does such a good job with personalized results that we have forgotten how to search. Remember the old days, when using a search engine effectively was a specialized skill? Search engines have gotten a lot better since then, but a lot of the improvements have been due to personalized results. It’s not easy to transition from Google to Bing or DuckDuckGo.
That being said, Google is also a good search engine, so it’s difficult to compete with them regardless of personalized results.
Using a search engine is a more specialized skill than ever.
What "Google getting better" actually means is that it is more likely to return what most people want as the first page hits. That is a tradeoff, because it obscures everything else whenever what most people want is wrong.
As a programmer, you're only going to be doing searches for things that are non-obvious, so there is a vastly higher chance of Google giving you the wrong answer than for the average search.
There is a fundamental contradiction between a search engine that tries to show you only what it thinks you want, and the fact that you don't know exactly what you want when you search, until you see some results.
I think there's something like a conservation law, that the more you make it easy to find some things on the Internet, the harder you make it to find others. Kind of like the proof that you can't compress all strings.
Bing has issues with not scraping dynamic content very well if at all. We had to eliminate libraries that render important content after page load with javascript because they'd show up on Google but not Bing.
As far as I can tell, just about every search engine uses results from Google, directly or indirectly, by using a source like Bing that in turn gets some of its results from Google. If you won't call something a search engine unless it doesn't use anyone else's search results, you might have a hard time naming more than one. You can't escape Google.
It really does. I've been a professional developer/programmer for a few years - since I was 18 - and I'm only just going back to college (at the behest of my employer).
Really enjoy taking advantage of it and reassuring the younger kids (only two or three years younger than me) that they're more than capable of this shit. And it helps me be more confident when I'm at work - at school they see me as the guy with all the answers (ha, I wish - but it's a good reminder that we're all our own worst critics).
I saw Scott Hanselman speak, and he said he has impostor syndrome. I thought to myself, great, if Hanselman has impostor syndrome, I'm not even qualified. I must be an impostor-syndrome impostor.
I have this all the time, but to counter it I think about this time last year, when it wouldn't have worked at all. Now it works, even if it's ugly, and even better, I almost understand why it works.
Isn't that what a whiteboard interview is? Proving you can do things without google or else you aren't a proper engineer? (For the record I disdain whiteboard interviews as the accepted metric for testing one's ability)
IMO it's not. I've never given or taken a whiteboard interview where I (or the candidate) was not allowed to either google things or to make assumptions about how certain things worked.
For example, if your solution involved generating all permutations of a sequence, and you didn't remember how to do that efficiently, I would let you make up a black box function that just does it. We might come back to this later in the interview, but I'm generally uninterested in whether you memorized an algorithm for it or remembered how to call the standard library function that does it.
That said, if you make up the black box function, I might dig into why you designed the function signature the way you did, and I'd do that to see whether you can talk about code design from a "other people will read and use and debug your code" perspective.
> if your solution involved generating all permutations of a sequence, and you didn't remember how to do that efficiently, I would let you make up a black box function that just does it
But sometimes, the interview question is generating all the permutations :(
Not necessarily. The purpose is to show how you reason your way through a problem where you can't simply google "how do I solve x problem." It's intended to see how you communicate, how you handle mistakes, how thorough you are in double-checking yourself, your work style (do you jump right in or ask lots of questions first), etc.
In any case, there's a lot of strategies to show that you understand a concept without relying on exact memorization. For example, a common answer I give if I can't remember something might be something like: "The language has a sort function that can accept a list, I don't remember the exact API or the underlying implementation but let's assume it's n log(n) as that's a common runtime complexity for sorting--I'm going to define that as 'sort(listArg)'." I don't remember the exact function, so I'll just define and explain how it might work myself. If you're expected to be able to compile the code you're writing, simply ask the interviewer and explain the API you're looking for, I've had great success with that as well and prefer when an interviewee asks me over staring at the screen or board.
If it's a problem where you're expected to produce some algorithm, and you can't come up with any solution, explain to your interviewer what you're stuck on. They may provide a hint that will get you rolling. I had this happen during an interview at Microsoft (spoiler, I got the job) where I forgot how to determine the length of the hypotenuse of a triangle haha. The interviewer wrote the formula on the board and we moved on. The interviewer later told me I seemed nervous and he chalked it up to a brain fart--good call, I was really nervous! The point of that problem wasn't to see if I knew the Pythagorean theorem--it was just a small piece of the puzzle I was stumped on. Similarly, start from a simple, naive solution and iteratively optimize, rather than trying to recall a perfectly optimal solution. If time is running low, explain your intended optimizations or the ones you think might be meaningful, noting any uncertainties in that explanation.
I've worked at Microsoft, Amazon, some other notables and interviewed at many others. I am speaking from that experience. Others may have better ideas. Of course, not all places approach this in a reasonable fashion and some are just missing the point entirely with their interview and they do expect you to somehow memorize everything.
I totally understand your philosophy and you make some great suggestions, but overall there's one big flaw with whiteboard interviews: artificial pressure and its impact on people's ability to think clearly on the spot. I am a very methodical thinker, which is why I tend to prefer work on large-scale or high-performance systems where you consider the implications of thread contention and so forth. I've designed and built multiple large-scale production systems, and I've also simultaneously forgotten whether it should be += or =+ during a whiteboard interview because of a phenomenon I can't quite describe. Obviously I was laughed at: someone who's had 10 years of experience with C++ forgetting something so basic. But it happens when you are on the spot.
The biggest problem with whiteboards is that they reward exactly what they say they try to weed out: rote memorization. It's no secret that you could spend weeks practicing on leetcode for an interview, but those skills simply don't translate to real-world programming. Algorithm design is maybe 2% of software engineering. The vast majority is knowing what pieces of technology are out there in order to avoid reinventing the wheel all the time. Whiteboards don't test that skill. It's testing a marathon runner on their 100-meter dash time.
> Obviously I was laughed at: someone who's had 10 years of experience with C++ forgetting something so basic. But it happens when you are on the spot.
This 100% happened to me (apart from being laughed at) only it was even more of a meltdown. Also 10 years experience and a problem I could do in my sleep. It was my first interview after a long time and it triggered performance anxiety and nervousness.
The solution is you need to practice live interviews, not just leetcode solo.
If the system is so poorly evaluating people that they have to train an irrelevant set of skills to succeed in it, what does that say about the system?
I am a big fan of the take-home problem. For example, Symantec once asked applicants to design a simple virus detector with wildcards (similar to grep) and then gave interview spots to the people with the best performance. During the actual interview, however, there wasn't much whiteboard coding, only design or "how would you do X" sorts of questions.
I've never done a take home test that produced anything that could have been vaguely useful for the interviewing company. Usually the problems are pretty artificial.
Well, if they are good enough for you to put their code into your program, then shouldn't they be good enough to be offered a job? Seems like it works exactly as intended in terms of trying to find someone who can do the job.
I've had multiple take home tests result in changes to the products that company offers. I've never received a job offer from those companies. Maybe your experience differs, but I've had a sour taste left by such practice.
I wouldn't say coding algorithms is "irrelevant". It's sort of like saying the SAT is irrelevant. It's a general aptitude test. It's not about the "relevancy" of trigonometry to your day to day work.
You could argue it's stupid and colleges should only look at personal essays and GPAs and extracurriculars. But then there's problems with that too (lack of standardization for one).
You could argue that high schools shouldn't teach trig or calc because they're largely irrelevant to most tasks in most fields, and "what does that say about the system". But actually they do come into play once in a while and lay a conceptual foundation, so they're still sort of useful to learn. And thus they form a good common foundation to test aptitude inside an hour or so; at least no one's thought of a better one.
Whiteboard interviews exist because without them companies wouldn't have a metric to artificially restrict hiring. Passing a whiteboard interview makes both the company and interviewee feel special and maintains the bubble of software engineering salaries.
> A work sample test, structured interview, or IQ test (alone or in combination) would be a much better filtering process.
I'm not sure what any of these mean in concrete terms. Is an IQ test like those "why are manhole covers round" Microsoft questions of yore? Yuck. Is a work sample a homework test? That takes more time for both parties, and it's not a good substitute for a 1-hour first-round screen. And what's a "structured" interview?
You should probably look up what an IQ test involves instead of just guessing blindly, getting it wrong, and then writing off the suggestion based on a wrong assumption.
Insofar as IQ helps predict actual job performance, I'd say yes.
Now I'm not saying the situation isn't complicated. In the mid-20th century, IQ testing was a great driver of social mobility. However, as was pointed out in _The Bell Curve_, it's becoming the opposite. From what I understand, the heritability of IQ and assortative mating (i.e. people marrying other people of similar IQ/educational attainment) are the main drivers of this.
I don't think the formation of an entrenched class system is necessarily a good thing, but I think in the local context of our industry there's still a lot of good that could be done by improving our hiring process. I mean, would you rather hire people who only sound smart, or who are smart?
Every engineer on HN (several threads that I've browsed) seems to have a serious disdain for coder-pad interviews/whiteboard sessions. And I think HN has a good representation of software engineers. So, who are these companies hiring?
I think there's a serious disdain for interviewing in general. The whole process sucks from end to end. Often it's because the interviewers just aren't good at it. But when it comes down to whiteboard interviewing, I think it's about your value system.
More specifically, I think every interviewer has an opinion on the practical <--> theoretical spectrum, and the problem is when the candidate disagrees with the interviewer's opinion on where on the spectrum the interview should be.
The upside of a practical interview is that you are testing whether they can do the work. The downside of a practical interview is that it indexes heavily on experience and lightly on the ability to cross disciplines. Theoretical interviews are the reverse.
So the important question is: what does your team care about? Google will care much more about the ability for their engineers to move around than your startup will. In general large companies will care much more about portability. They have deeper pockets and are therefore more willing to train you on the job, so generally speaking if they think you can ramp up (even for senior hires), they're happy. As much as startups will say they see things this way, it's not really true. Startups in general care that their senior hires come with substantial domain expertise.
I'm not sure HN is representative of anything, especially if you only look at what's upvoted. If I had to guess, I'd say that this sentiment is posted by people with 10 years of experience who feel like they should be talking about something else during interviews, but that it's being upvoted by college students or recent grads (no matter what you're asked to learn, a large fraction of the group will be frustrated by it)
They are hiring a bunch of people who have a serious disdain for the way they are interviewed. A better question might be, "who are these companies putting in charge of their interview process?". I think the answer there is largely, "people who have a serious disdain for this kind of interview, but still need to interview people and don't have any better ideas for how to do it".
I would rather do an exam every year and get a grade on the skills that they expect. Exams are standardized; granted, they are not perfect, but it's the closest thing to a system that works.
And I'm not at the whims and fancies of an individual interviewer.
I'm not sure what you mean by standardization. You're totally at the whim of the exam-maker either way. If anything I find algorithm screens tend to draw from a semi-standard set of fundamentals as outlined in books like Cracking the Coding Interview and tested on Leetcode among many other examples.
I let candidates use their phone to Google or ask me to Google on their behalf. What they're Googling for never matters in the context of the interview and it's mostly to make them feel more comfortable.
It's a whiteboard, I'm looking out for far more important things than whether the method is named `size` or `length`.
A good whiteboard interview doesn't require much outside of basic syntax/data structures, and will offer some form of help along the way (often in the form of a somewhat helpful interviewer).
I think whiteboard interviews are fine, so long as the candidate is told ahead of time what the subject the whiteboard test will be about and has time to study for it (perhaps even given a take-home coding assignment about the same subject) and where the subject is actually relevant to code that has been written at the company.
But I do think that Computer Science Jeopardy on a whiteboard is a bad idea.
A good interviewer will not care about syntactic things, but rather try to sniff out your workflow and thoughts / processing when breaking down a process. It should be a proper two-way interaction.
A shitty interviewer will follow a rigid script and "ding" you on trivial errors.
> I don’t know a single developer that thinks “googling stuff means you’re not a proper engineer.”
I see it at least a couple times a week on sub-reddits related to software development. Junior devs who are convinced more senior developers are super-geniuses who know everything from memory.
We (I'm 50+) certainly know many more things, and I see the surprise of junior developers, but the exact details? There are too many of them, especially because senior developers often are not narrowly focused on a single part of a project.
A simple example: I'm working with Rails, Django, Phoenix and React (3 different projects). I know I need to do a strftime, but I don't remember the exact details in any of Ruby, Python, Elixir or JavaScript. I always google them or grep the source code for other instances.
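The %-codes themselves are the same everywhere; it's the method name and invocation that I blank on. So I either check the shell's date (same codes) or grep:

date "+%Y-%m-%d %H:%M"    # strftime-style codes, the same ones Ruby/Python/Elixir take
grep -rn strftime .       # or find an existing call in the project to crib from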
Yeah, absolutely. Even more experienced folks sometimes get stuck on this. There's some selection/confirmation bias: you ask questions in chat or whatever, and there's someone who's got a lot of answers, and you're grateful and excited by the help. But you ignore all the time they were only partially right, or wrong, or -- most important! didn't even answer at all because they weren't familiar with the topic. (But someone else did answer, and also gets added to your mental "expert" list.) And of course you don't see all the things that they don't know or are struggling with. (Some of which could easily be things that seem simple and comfortable to you!)
I try to dissuade this kind of thinking when I see it: I like answering people's questions, and I take pride in knowing a lot of stuff. But there's way more stuff that I don't know than that I know. And I might need these folks' help with something I'm unfamiliar with, so I don't want them thinking they're "below" me and there's no way they could know something I don't. I'm sure they do!
This comment propagates the nonsense that skills with algos are somehow harder or more exclusive than software engineering skills. They are orthogonal and interviews that conflate them are part of the problem.
Consider this, many of the most novel algorithms were published without ever being run in actual software or in part of a larger system.
Most people would say algorithms are the building blocks of software engineering, not orthogonal.
Most screens also include a systems design test. Maybe you could argue all we ever should do is system design tests. But they don’t always involve coding.
Aluminum is a building block of modern large airplanes. When hiring aeronautical engineers I don’t grill them on the finer points of aluminum smelting.
I always bring a laptop to interviews (haven't had one in a long time) and Google stuff during the interview and pull up my notes. If they said something, I responded with "well, I will always have a laptop at work anyway".
Even though I was embarrassed when I couldn't state what inversion of control was because I didn't have my notes to remind me of the "shopping cart vs government form" metaphor, I think having me look it up and read it to them would have been worse. Do you mostly use it for syntax, not stuff like "how do you deal with a deadlock"?
> I don’t know a single developer that thinks “googling stuff means you’re not a proper engineer.”
Yes, the whole interview process is based on the idea of doing stuff for the company that you wouldn't actually do.
Indeed, I interview for a candidate's ability to find resources, to know what considerations are needed, and to deal with IDEs, collaborative tools, and the git protocol. I will fire someone on the spot for implementing an academic algorithm when there was a library, potentially a peer-reviewed one, that they should have used instead.
> the whole interview process is based on the idea of doing stuff for the company that you wouldn't actually do
The process of interviewing successfully at places that use whiteboard testing involves digging in and putting in a lot of hard work studying previously unknown concepts -- work that most people don't consider very fun.
To me, this sounds like exactly the kind of thing that a person will need to do to be successful in their first few months in a big company. Learning about their particular systems, authentication and authorization methods, security audit procedures, deployment methods, how they do dependency injection, logging, diagnostics, troubleshooting...
And I think those “maker” types of roles don’t need a hard C.S. screen. But algorithms aren’t just about reimplementing the classics. There are points even in ordinary “DB front end” types of work where understanding algos will affect how you scale. You may find you’ve just been getting away with brute force approaches because the data sets were small. You can go very far like that, but there may come a day when it matters, and then it will really matter.
...and when that day comes you call a specialist to handle the case and let your "ordinary" worker get back to work on the 99% of the job space she's qualified for?
I find this very ironic: Hiring for suitability to handle edge cases is also a brute force approach that doesn't scale.
A good algorithm optimizes for expected inputs and farms out cache misses and unexpected behavior to some other system.
I don't disagree with you, and it all comes up after the 20-second standup and in the code reviews, not in the interview, and not in a two-week-long orientation.
Overfitting for the possibility that it might be necessary in a design choice is not the way to gatekeep candidates.
Googling too much is an indicator that your environment or process isn't working well. It's not a point of pride if half your job requires looking up how to use tools you don't understand.
The problem is that people naturally blame themselves for not knowing everything, rather than accepting that we have built monuments of trash code that nobody should be expected to understand, and that we should therefore stop building and using them.
Ideally, I would sit down and be so familiar with the problems I'm solving and the tools I'm solving them with, and have those tools so suited to the tasks, that I can just type in my solution without having to stop to consult something on the internet.
Ideally, I'd be able to have tasks that are highly specified in advance and with priorities that are stable enough that I do not have to have multiple social interruptions per day in order to be on the correct task.
Basically, I wish I could achieve productive software development from an isolated cabin without an internet connection to provide distractions, with lots of subtle natural ambiance and my own thoughts instead of interruptive social interactions, and synchronize my work product with society on the same cadence with which I would travel to town to get groceries.
> Ideally, I would sit down and be so familiar with the problems I'm solving and the tools I'm solving them with, and have those tools so suited to the tasks, that I can just type in my solution without having to stop to consult something on the internet.
This is how you can get into Csikszentmihalyi's "flow", if you add a little cognitive work and struggle to it, so it's not just typing in the solution.
> Ideally, I'd be able to have tasks that are highly specified in advance and with priorities that are stable enough that I do not have to have multiple social interruptions per day in order to be on the correct task.
Some tasks are like that, some business opportunities are amenable to that approach, some teams work that way, and many more aren't and don't. But yes, not spinning your wheels is a prerequisite to getting maximally engaged in your work and getting good results. Sometimes that requires a lot of interruptions of the actual work, because the work is rarely well specified in advance. If it was, it probably would not be interesting because it would already have been done.
> Obviously, this is not the situation.
It could be the situation, and there is work that needs to be done in that way. In fact, this work tends to have high value, including business value, precisely because our dominant working style prevents almost all of us from achieving it. We can get closer by mastering our tools and by choosing tools that afford mastery in their use.
I think compounding the issue is that a lot of devs jump between complex technologies pretty quickly, so we're constantly trying to overcome a deficit in knowledge but never stopping long enough to do so.
I don't typically have to google much but I recently attempted to update webpack/babel at work and I must have made over 100 searches of error messages in build tool packages I didn't even know existed.
> I think there should be “maker” roles where you can bypass the hard CS stuff and just crank code
I would call these "user" roles. And perhaps there is a space for a "professional software user" career as we continue building software so unnecessarily complex that only a professional can use it.
> But having a deep understanding of data structures (not just arrays and maps) and algorithms really gives you a mastery of your craft, especially around performance and scalability.
Ok.. Find all the words that match a search string prefix.
I.e. you have a search field and want to show the possible matches as you type. With each new char you’re calling this search and getting back a list of words. The dictionary is in memory in whatever data structure you choose. You only have to return matches if the search is 3 or more chars long.
Now, how does your implementation scale when the dictionary has a million words?
Prefix tree is the right answer here right? From an interview standpoint.
But my question is: why would you want to roll your own here? Wouldn't the appropriate move be to look for solutions for your specific use case that've already been made? I.e., if you're talking web, I see no reason why a standard database index wouldn't work. MySQL's index, a B-tree, is perfectly serviceable for the question, no? I'm having trouble imagining a situation where an out-of-the-box solution wouldn't work...
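To sketch what I mean (a minimal Python illustration with a made-up word list): binary search over a sorted list is the same trick a B-tree index exploits, O(log n) to locate the range of matches:

    import bisect

    # Prefix search via binary search over a sorted list, the same idea
    # behind a B-tree index: two O(log n) probes bound the matching range.
    words = sorted(["car", "card", "care", "cat", "dog"])  # stand-in dictionary

    def prefix_matches(prefix):
        lo = bisect.bisect_left(words, prefix)
        hi = bisect.bisect_left(words, prefix + "\uffff")  # just past all completions
        return words[lo:hi]

    print(prefix_matches("car"))  # ['car', 'card', 'care']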
I don’t think I have a deep understanding of efficient data structures; I couldn’t re-implement a red-black tree, for example, without looking it up. I wonder: is it enough to have a cursory understanding of things like runtime and space complexity, to understand the basic data structures and how their trade-offs can help or hinder different operations, and to know how that connects to real-world work? Especially how different actual solutions use different data structures internally. What I don’t understand is the value of going from a cursory understanding of data structures to a deep one.
FWIW, even highly accomplished software developers / hackers like Jonathan Blow[1] have said that in almost every case you should just use an array as your data structure (or some variation of an array), unless you actually have a good reason not to (one that will get you a lot of value).
Of course that doesn't mean you shouldn't know how advanced data structures work and be able to work with them; it just means you shouldn't reach for, say, a Fibonacci heap before you just use an array.
> Of course that doesn't mean you shouldn't know how advanced data structures work and be able to work with them..
We don't disagree. Understanding data structures is about reaching for the right tool, and very often the hammer of an array/map is the right tool. But the problem I gave isn't that exotic. It's a case where "reaching for an array" is a brute force solution that won't scale.
So how do you separate candidates who understand those limitations from those that don't? By asking about those cases. They're not that common, but that doesn't mean they never come up or you won't get a nasty bug if you don't understand these foundations.
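To make the trie answer concrete, here's a minimal Python sketch (names are illustrative, not production code). The point is that a lookup walks len(prefix) nodes instead of scanning a million words on every keystroke:

    class TrieNode:
        def __init__(self):
            self.children = {}   # char -> TrieNode
            self.is_word = False

    class Trie:
        def __init__(self):
            self.root = TrieNode()

        def insert(self, word):
            node = self.root
            for ch in word:
                node = node.children.setdefault(ch, TrieNode())
            node.is_word = True

        def starts_with(self, prefix):
            # Walk len(prefix) nodes, independent of dictionary size.
            node = self.root
            for ch in prefix:
                if ch not in node.children:
                    return []
                node = node.children[ch]
            # Collect every completion under that subtree.
            results, stack = [], [(node, prefix)]
            while stack:
                cur, so_far = stack.pop()
                if cur.is_word:
                    results.append(so_far)
                for ch, child in cur.children.items():
                    stack.append((child, so_far + ch))
            return results

    t = Trie()
    for w in ("car", "card", "care", "cat"):
        t.insert(w)
    print(t.starts_with("car"))  # ['car', 'care', 'card'] -- order not guaranteed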
I would start by asking the question without mentioning scaling at all, and see how they respond.
If they immediately jump to using a fancy data structure like a suffix array, but they don't ask any questions about how critical performance and scaling are, then it shows they know a lot of stuff about CS, but they may be lacking in more practical experience.
If they tell you they would just use an array or a map (which would be extremely inefficient with large amounts of data), then you ask a follow-up question about scaling, and see how they respond. If they can't answer that question, then they lack practical experience and fundamental knowledge of advanced data structures.
I totally agree with asking the basic question and then adding scale and complexity.
I don't know if I agree that using a trie at the outset reflects poorly on a candidate. They might see where you are going with the question, even though "you didn't mention scale." Good candidates are going to think about scale at least a little bit. That's not necessarily "lacking practical experience." It's not like basic tries or linked lists are super complex "fancy" data structures.
I think the key is for both parties to communicate thought process. If you're concerned about overengineering then prompt them to explain why they didn't choose an array.
No, you can't run elasticsearch offline on a web client.
Look, we can keep going back and forth and maybe you'll dig up some javascript library that implements a search. And that's fine, I believe there are "maker" roles that largely are about gluing things together and translating requirements into code and shouldn't require screening for CS fundamentals. But at some point it gets a little hacky and suboptimal if you lack a good practical grasp of computer science.
I have done what you could call deriving novel algorithms, and in a commercial context no less[1]. What is or is not an algorithm is not very well defined. There was a famous paper which defined an algorithm as "logic + control", which could really describe any code that we write.
I think what the experience taught me (where a CS education might help) was how to think algorithmically. The point of algorithmic thinking and algorithmic code is to be able to prove (formally or informally) that certain undesirable states are impossible. Algorithms focused around performance have proofs that unnecessary computations are not performed, cryptographic algorithms have proofs that secrets are not revealed during the computation, concurrency algorithms prove that data races do not occur, and so on.
It's actually quite easy to apply this sort of thinking to highly problem-specific code. The more specific the code is, the stronger the guarantees you can prove.
What becomes more difficult is writing code that is both generic and able to make interesting guarantees about the state space of its execution. It's that kind of code (generic with strong guarantees) that people typically mean by "algorithms".
As a programmer becomes more proficient with things like language syntax, APIs, libraries, and algorithms, their googling will go down, as it becomes a distraction instead of a help in getting a programming task done quicker.
Screening and interview selection is driven by need. If one needs a subject-matter expert in a specific area, it does make sense to test the person's familiarity with the subject without googling.
It also helps to test for the Dunning-Kruger effect. [1]
But it is wrong to assume that googling is bad. Being able to search, assimilate, and effectively use information (by book or by Google) is also a very important skill in itself.
I feel that a test to see whether the person has understood some common data structures and algorithms is good for observing how a person approaches and solves problems, since programming is still inherently logical puzzle-solving.
> As a programmer becomes more proficient with things like language syntax, APIs, libraries, and algorithms, their googling will go down, as it becomes a distraction instead of a help in getting a programming task done quicker.
This has been the opposite of my experience. Earlier in my career there was a lot more "sit and think about how to assemble these basic building blocks." Now it's a lot more "man, it would be nice if complex object A could play well with complex object B, I wonder how to make that work and what the pitfalls are." I'm googling different things than I used to, sure, but pretty much any task I'm tackling at this point involves a fair amount of googling for docs, github issues, stack overflow, etc.
A technician at CERN maintaining the accelerators is probably pretty damn smart, they probably also understand some particle physics. They may not be the ones driving/designing the experiments and they probably aren't getting their names on those physics papers but that doesn't mean they didn't contribute any value...
Right, but theoretical computer science also deals with provability and goes into various areas of topology, reasoning, and proofs. These are not useful to engineers, or are only slightly useful, while a course on systems programming isn't useful to a theoretician, because a universal turing machine doesn't have hardware interrupts. Different specializations for different roles.
I agree. However I think it is not the distinction we’re talking about here, which is about not having a firm grasp of fundamental algorithms and data structures and big O scalability.
In school terms it would be more like a technician or vocational training around coding for particular types of projects.
Computer Scientists - Come up with the foundational algorithms etc.
Software Engineers - Translate and map those foundational algorithms into more readily available tooling, occasionally directly building a complex system using engineering principles.
Software Developers - Use the tooling created by software engineers to create user friendly solutions for the masses, aka developing apps or websites.
Correct me if I am wrong, but isn't the central premise of the book that if you take a late project and add more people to the team, the project takes even longer???
I've literally never met a manager that practiced this. Is the book still relevant? Not disagreeing, just curious.
It's been ages since I read that book, but there is more to it than just the tag line you mention. It is very common to take a plan for work and say, "This would take 10 people 1 year to develop, but we need it in 1-2 months. So let's put 100 programmers on the team." This is much more obvious in the Enterprise space and government contracting, but you see the reasoning everywhere.
Here's quite a famous example: https://www.statista.com/statistics/272140/employees-of-twit... It shows a graph of the number of employees at Twitter over time. In Jan '08, there are 8. In Jan '09 there are 29. In Jan '10 there are 130. In Jan '11 there are 350. In Dec '13 there are 2712.
Although a lot of that staff are going to be sales and marketing (and probably the growth is justified), the development team is also growing exponentially during that time. The Mythical Man Month would say that probably they are spending a lot more money than they need to to get the growth they needed.
I used to work for a manufacturer of telephony equipment. On one of the products I worked on they had 5000 developers! (A single piece of code!!!) The average amount of code deployed was 1 line per day per developer, which works out to well under 1 KLOC per developer in an entire year. As much as we can argue that KLOC is a bad measure of productivity, at that level you know you have really, really terrible problems. One of the questions you might want to ask is: if you want to write about 5000 lines of code a day, where is the sweet spot, actually? I think we can agree it's not at 5000 programmers. However, it's often really, really difficult to talk to non-technical management and get them to understand that more programmers does not usually equal more productivity.
I could regale you with literally hundreds more examples, but I think it's sufficient to say that, yes, the Mythical Man Month is still really relevant these days.
You've never been on a project that was behind and management decided to add resources to it? Impressive!
The book is worthwhile, not just for the central lesson but for the WHY (TL;DR: communication overhead between individuals increases as you add "nodes"; interestingly, you can make analogies to L2 cache expiration problems and others), and also as a nice view of the IBM of the past. And it was written in an era when there wasn't a need to make every book 300 pages long, so it's generally a lot more meat per page, though hardly perfect.
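To put a rough number on the "nodes" point: with n people on a team there are n(n-1)/2 possible pairwise communication channels, so the overhead grows roughly quadratically. A throwaway check:

    # Pairwise communication channels among n team members: n * (n - 1) / 2
    def channels(n):
        return n * (n - 1) // 2

    for n in (5, 10, 50, 100):
        print(n, channels(n))  # 5 -> 10, 10 -> 45, 50 -> 1225, 100 -> 4950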
In Finland, we have a two-tiered system where "proper" research universities primarily offer master's degrees, with a bachelor's just an intermediate step for most, even if they do not intend to pursue an academic career. So, a programmer with a CS degree likely has an MSc. In parallel, there are "applied science universities" that are typically not research-oriented, and by default offer bachelor-level degrees in more practical fields of study such as nursing and engineering (including software engineering).
I do this with many terms and secretly hope that everyone agrees that X sucks. I then skip over the X doesn't suck results and then convince myself that X does indeed suck.
sigh.
I sometimes do that if I'm learning X and like it. I will probably disagree with an "X sucks" article, but if it's well written, it will teach me about some pitfalls of X.
I use Dash to keep all of the reference material for libraries and programming languages I use stored locally. It has cut my searches down to almost nothing. IMO, it is better to use the reference material, because it forces you to understand the mechanisms involved and solve the problem yourself vs. looking up a solution. This act helps imprint the knowledge into your long-term memory. Of course, this is my anecdotal experience. As an aside, having the docs local is a godsend when working without internet or on a bad connection.
For anyone wondering what Dash is, it’s an API documentation browser macOS app. It works offline and has fuzzy search that shows results as you type. You can get it at https://kapeli.com/dash. The full version costs $30.
https://devdocs.io/ is a free alternative. It’s a web app that can be set up to work offline. It was missing a few convenient features of Dash last time I checked, but I can’t remember which features those are right now.
Why would you want to imprint this stuff into your long term memory? Google is an extra brain with effectively infinite free space, why would I want to use mine?
When I finished engineering school, the prof said:
"The most important knowledge imparted to you here is how to look things up, how to verify your thoughts and get inspiration for new ones, to make products faster, better and more efficient."
That was at a time when the latest slide-rule model was state of the art in calculatory equipment. So, yes, do google something before you reinvent the wheel.
I appreciate this, as some newer developers I have worked with have been concerned that googling things is something they need to grow out of. Google engineers are somewhat revered in my Big Corp, so hearing that they google things every day would go a long way toward encouraging them.
I don't think she works at Google. But you make a good point. It's mostly a psychological barrier and largely a result of how the interview process is conducted; in closed rooms, without access to any search engine.
I recommend every software engineer download/purchase Dash ( https://kapeli.com/dash ). It's worth more than whatever I paid for it (probably like $50, I think). That app is a lifesaver, especially when I'm offline for a bit and trying to work.
I agree, Dash is extremely useful. You forgot to describe what it is, so for anyone who is wondering: Dash is an API documentation browser macOS app. It works offline and has fuzzy search that shows results as you type.
Dash currently costs $30. For anyone who doesn’t want to commit to that, you can also try https://devdocs.io/, a free web app alternative with fewer features. DevDocs can even work offline, like Dash does.
I've noticed that as I produce more projects and more code, I often turn to searching my own stuff before I search google. I will do an `ag --python foo.bar` before I will search google, because if I wrote the original code it's easier to understand.
“cors - today is going to be bleak” as the first search of the day resonates with me on a deep level. Our field is incredibly complex, and some parts of it we just like to not think about too much.
Regex is one of those things I have memorised, albeit without the backreference stuff I rarely use. I think it's worth memorising, and if you use it for find-and-replace in your editor, you soon will.
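The backreference stuff is exactly the part worth an occasional refresher. A tiny Python example (strings invented for illustration):

    import re

    # Swap "Last, First" into "First Last" using capture groups (\1, \2).
    names = "Hopper, Grace\nKnuth, Donald"
    print(re.sub(r"(\w+), (\w+)", r"\2 \1", names))
    # Grace Hopper
    # Donald Knuth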
It's always worth, once in a while, trying to do something without using the web: use the man pages, readmes, and local documentation to work out the answer. I say that because today people grow up with information on tap, and the ability to use search engines well has become, in many areas, more key than the ability to do the job. The ability to adapt and get the job done is always bankable, as it's the end result that counts. Also, over time you remember all that googling and build up a set of skills that you tap from your own memory, not the extended memory that search engines have become.
I say all this as somebody who grew up without the internet, let alone computers in homes, so I primarily developed the skills to self-learn from local information and, more so, the ability to work things out myself. With the internet, you save reinventing the wheel, as you can look up what wheel you need and where. But the ability to build a wheel is always a skill you should hone every once in a while. After all, even with five-nines SLAs, there are still moments when you won't have internet.
That all said, I'm pondering analysing my browser history more; it would be educational. Certainly worth doing just for your job reviews, to justify what training you would benefit from the most.
1) What ‘level’ of engineer do you consider yourself to be, and does your employer consider you to be? (for example, do you have employees who are junior to you)
2) What’s your income from that job?
3) Do you think that income is fair and appropriate?
Apologies for the deep-dive into details you may be uncomfortable sharing; they are important context that very few people share.
Nice, I've been there with Django cookies too. One thing I love about the programming industry is that people aren't afraid to write blog posts to debunk the bullshit expectations that develop every now and then. Not something you really see elsewhere.
In this type of job, at this level of complexity, we just can't hold it all in our minds.
It's too much, and it changes all the time. We need a near-telepathic reference to get the necessary info when required. Thankfully, we have that and it makes the job possible.
This was a fun read ("woman shouting at cat"), but I also appreciate the idea of sharing how much we engineers google relatively simple things we just choose not to remember.
Funny, I don't do that many web searches for my job (I must have searched two different things this month). Beyond a mix of reasons (a bunch of third-party docs that aren't public, build systems that are developed in house (based on Ant), and existing internal documentation), I wonder what it means about me and the software (vanilla Java, vanilla JS) I'm working on.
I made a search history inspector tool -- https://prototypes.cortico.ai/search-history/ -- that processes Google Takeout files and ranks and summarizes the queries to assist me in this kind of introspection. It's especially interesting to see which queries I've made again and again over many years.
Definition List is the only description not immediately obvious, but understanding a definition as a key/value pair clears things up and isn't hard to remember.
That "Provisional headers are shown" Chrome message drives me crazy. Some time ago, I was able to disable some flags which made it go away temporarily, but now it's showing again, presumably after Chrome made some changes. It's a problem because it means that it's not showing the real request headers. Does anyone know how to fix this for good?
I use Firefox because its dev tools don't lie to me.
I'm baffled any time someone says chrome's dev tools are better. They lie, and usually about the single most important thing I'm looking for. That is not better.
I often look up things I already know how to use just to see if there's a better way of doing it or any unintuitive side effects.
A lot of my work involves using Windows API and it has been around for so long that whatever issue I am facing, someone probably has had it before, and there might be an obscure function that already does what I'm trying to implement or work around.
I’ve been writing JavaScript for at least a decade now (way before all of these fancy frameworks, build systems, etc.), and yesterday, I googled “JavaScript addEventListener” so that I could remember how to use it. I’d love for someone to tell me that I’m not a “proper engineer” because of that!
It's forgivable, because with software dev the gigabytes per year of new information to remember in your head is much higher than in other disciplines. It's darn ridiculous!
Alternative title: everything that was wrong in my development environment and process in a week as a professional software engineer. If you are "googling" to find out React dependencies, GraphQL-related React errors, Apollo release notes, Apollo-related GraphQL-related React errors, git rebase undo commands to go back to the way things used to be in your revision control system before it got all messed up and confused you, 'react testing library apollo "invariant violation"', whatever that is, "jest silence warnings", to silence warnings on the warning system that is supposed to make you better at your job but that actually makes your developer experience unbearable, and how to do semantic HTML, whatever that means these days, for contact details, and a little more, all on a Monday, you might have a problem with your development environment and process. If you have the feeling that Dijkstra is looking over your shoulder and "would not like this", you definitely have a problem with your development environment and process.
This sounds fun. I’ll give it a shot next week. My prediction - 30% food related, 20% co-workers names followed by linkedin (hoping for a picture, I can’t remember names), 20% random stuff (e.g. population of Brazil), the remainder is actually job related.
We were talking about JavaScript fatigue, but this really shows how complex things have gotten.
But I do not see this as pessimistic. We developers work in large domains, and it is good to have a companion at your side that enables you to build better applications: Google.
not exactly software engineering, but back when I worked at HostGator as a tech support rep, there was a tongue-in-cheek thing that went around that the best agent was the one who could use Google most effectively. I think that general concept holds for most technical roles: sure, you should know what it is you're doing, but the internet acts as a memory aid and augments our ability to store knowledge to a great degree, so being able to use it effectively obviously contributes to being a more effective employee
edit: and of course StackOverflow when I briefly worked as a perl dev there.
Thank you for this. Been a software engineer/Sysadmin for over ten years. It’s always nice to see something like this to remind myself (and others) that using google does not make you any less of a developer.
Great list. Would be awesome if the search terms were clickable. To be clear, I am lazy and do not "expect" them to be clickable, only that... Hey... Making computers easier is mostly what a lot of us do.
The longer I work as a developer, the less frequently I google stuff. Most times I use Google to find a project on GitHub and read the docs. If I google for something specific, it is to find a better solution.
That implies you are not learning new things, or you are getting that info from books or some other source. Or you bookmark all docs pages or something?
I watch conference talks, contribute to open source, and work on side projects. I have plenty of opportunities to learn. I just do not find googling that compelling. It is more useful to read the docs and try to debug instead of instantly reaching for Google.
It's also more satisfying, and you often end up with a better understanding of things if you try to deduce the solution to a problem before just typing in something you might not really understand (and potentially breaking shit).
Every time I try to google what I need, I end up with Postgres documentation about SQL features. But I'm in the guts of the C API, so off to the PostgreSQL source code I go.
I switched from moment to dayjs in a project I finished a couple months ago, and the massive reduction in entry size was remarkable. I really like dayjs.
Cool thing! I had the same idea but never published it. Funny how often you google the same thing, like "explode in js" ;)... I have never deactivated my complete Google history since it has existed.
IMO technical interviews should give you a terminal with Google. And ask questions that can't be answered without it. It's a more business-valuable skill than memorizing things / having had experience with some specific technology and its problems.
Ironically, your comment is an example of a narrow view mindset that software engineers who work on React and Javascript are somehow not equivalent to "real" engineers. I don't think that's a balanced view in an industry where a large percentage of companies have web-related products and employ engineers to work on them.
Truth is almost none of us are "engineers". (meaning a very strict discipline, often with licensing, and with very real consequences for failure after you sign off on something) I'm fine with that, but I'm also fine with the colloquialism of "software engineer" (same way I'm ok with "crypto" meaning "cryptocurrency" if we all have an understood context)
However, I usually interpret comments like this to suggest things like the backend is "real software engineering", which while being literally incorrect, it's also a very narrow view of our craft. I think the pushback is similar to the traditional ops folks pushing back against "devops" or the way people make the same, predictably pedantic comments about any "serverless" article: it represents an encroaching on their expertise.
The art of googling... I'm a nobody. I don't write code; I'm no engineer. I'm just a regular person. I love Google. The only thing I don't understand, in their maps, is how definite the location of the raw data is. Is it 100%?
I mean, obviously I'm trying to be snarky here, but I'm not sure how a professional "software engineer" focused on web development types such generic queries into the browser after working on it long enough to be "professional".
I've been developing for the web for about 15 years, and I totally can see myself googling "writing cookies". It's not a thing you do that often, and even if you do, perhaps it just doesn't stick in your mind.