As a coder I don't think I can do anything without the internet. Even if one specific site (maybe there are two actually, SO and GH) were down, it would wreck my day and many of yours.
My style of coding is very intimately connected to having access to online resources. I regularly search for things like how to concatenate strings or the syntax of a for loop in some language. I also use the internet for higher level things like how memory management works on some system, or how something like an ECS architecture works. I also spend a lot of time looking for the right components to put into my own systems, so if GitHub were down it would bother me.
Basically I'd be useless without the internet. The coding tools themselves, all the examples of how to use them, and all the actual knowledge about how everything works is on there.
Perhaps the only thing that's actually my own input is the judgement about what things are important, which sources are reliable, and which people are authorities.
When I worked for a major financial institution in a data science / analytics group, there were a lot of very, very smart math types who couldn't code. They'd ask how I'd gotten my code working and I'd never lie: many parts of it I'd fixed using SO. This was 2012-2015, before a lot of the machine learning came pre-written in helpful R packages or Python libraries.
Many would ask to see the page I'd read, and I'd show them, and they would ask how I got my code working from an example that had nothing to do with our task.
I think there is a lot of value to looking at a SO answer, generalizing it and making it work in your code and doing it quickly. Knowing which SO articles are junk and which ones are gems is something many take for granted but is actually something close to having the right 'gut' feeling.
Absolutely true. A lot of people think "cut-n-paste" is how people get code written, but the truth is you almost never find a snippet that is exactly what you want. The difference might be trivial, like changing the name of a variable, or you might need only a very small piece of a snippet, but the superpower isn't being able to search for things, it's making sense of them and being able to reason about how to adapt them to your own case.
It's the difference in effort between inventing a solution and verifying that a solution is correct. Most of the stuff on stack overflow, and most suggestions from Copilot, are wrong, but the point is that you can usually read them and tell whether they're correct, and then you've been saved the time it would take to start from scratch.
SO is a great place for finding that one little code snippet that plugs a hole in your understanding of a problem. Many times you're three layers deep in a chain of dependent problems, and the question addresses some totally unrelated problem, but its answer uses that special argument to the framework's / language's library function in just the right way, and it unlocks your understanding and lets you continue on your merry way.
This is particularly true when you're thinking "is this use of X a horrible hack or a performant best practice?". Someone laying out and linking to the guarantees on offer by X in a completely unrelated use case will clarify everything.
I agree; I have worked places where there was no internet access and it was a major slowdown. Also, every time I find the answer I was looking for, the question has been closed as inappropriate, though thankfully not before someone gave it the answer I needed.
Not only do you need to find the snippet, which is hard enough as it is, but you also need to understand it, and often extract the line or two from a 20-line snippet that actually does what you need.
If I ever have to solve something especially weird and use a stackoverflow result, I leave a comment above the weird code linking directly to the stackoverflow answer. And usually some extra context if the stackoverflow question isn't a perfect replica of our situation.
A highlight of my development career thus far is when I was able to fix a build that happened to be broken on a colleague's machine, which had stumped three other developers for multiple hours spread across a week.
I don't remember the line of reasoning or detailed investigation that led to the solution, but I found something in a build configuration file that looked suspicious enough to warrant a search for an exact phrase in the code. It turned up a StackOverflow answer, where three lines had been cut and pasted into the build file.
Some specific change in a library, or a binary upgrade, had broken the thing that this particular change was meant to fix, and the problem was solved by removing the lines in question.
The commit comment for these three lines was "merge hmmmm", which was also the only local documentation I found that I could have used to cross-reference with anything else.
This is very similar to academics and published papers. Most labs aren't really working on things all that similar to any other lab's, so techniques have to be modified, not just directly lifted.
Some point to that as an aspect of the replication crisis, but it's really just the nature of identifying and abstracting common elements out of specific goals.
> I think there is a lot of value to looking at a SO answer, generalizing it and making it work in your code and doing it quickly.
This will probably get downvoted but I disagree with this.
IMO you should never be "generalizing" random pieces of code. What you should look for instead is help in augmenting your own logical reasoning process: e.g., I want to transform X to Y with big-O complexity Z, how do I do that? Look for material that helps you understand how to do this. Once you do, you shouldn't need to generalize other people's code.
I don't mean to say that you should reinvent the wheel. Rather, you should understand that you need a wheel to solve your problem - where you get the wheel from isn't really that relevant (except for copyright, licenses etc but that's a separate topic).
Legal textbooks are the best example of a reference I've seen: the index is grouped by legal concepts and terminology, and is impenetrable to non-lawyers.
Like jargon, it gatekeeps; but that's not its only purpose.
I used to write code like this for a long time but I honestly felt like I was never actually internalizing anything. So at one point I just turned autocomplete off, stopped searching immediately and forced myself to remember and I think it's had a hugely positive impact.
When I had to actively memorize I paid way more attention to what I was doing. I think the online search multitasking is honestly very bad for sustained attention and very passive. As an alternative I started to look up library code directly and just read and I feel like I learned more about how python worked in a few weeks than I did in months or so of just typing things into google.
I think how you learned to code does sort of shape your coding.
I'm very much from the opposite camp. I learned to code in C in vi and didn't really have anyone to ask other than the manpages and the K&R book, I'm pretty sure I didn't even use syntax highlighting.
I do think this has shaped my programming even later, to where I stubbornly stick to boring but stable programming languages like C, C++ and Java, languages I've used in some cases for almost 25 years and in which I virtually never have to look anything up. It's not like I haven't dabbled in other languages and paradigms, like python and haskell and whatever; but it's not those languages I use when I need to build something. I just don't see the value in constantly learning all this flavor-of-the-month stuff. When the dust settles, most hyped upcoming programming languages end up as footnotes in the history of programming.
I learned to program in a similarish vein (different languages), but with a monochrome screen and reference books. However, now when I log in to a system where vi is not aliased to vim, so there's no brightly colored syntax highlighting, I quickly exit, create that alias to vim, and go again. Coding without syntax highlighting is caveman style to me. It can be done, but I really don't like how much it slows things down now.
I do agree that quickly jumping to an online reference because you can't remember the order of the parameters for a particular method is a crutch; if you were forced to remember, that slowdown would go away. But at least I'm able to remember the correct method, vs. having to search for which method is needed and when.
Oh yeah, I wouldn't code without an IDE today. But I do think learning to code without one, without even syntax highlighting, and especially given the extremely obtuse error messages compilers gave back then has given me a pretty useful skill set that is hard to acquire with all those tools.
I don't really know what the take-away is. This isn't a fast way to learn to program, or an easy way, and I'm not even sure I'd recommend anyone learn this way, but I also can't deny it's brought some very tangible benefits.
I feel like there is an inflection point between having the time to properly study a new language and just brute-forcing your way through with SO. If you know you'll be using python for long enough, it's worth reading the docs; otherwise, just SO all the way.
So there's the tool and then there's how you use the tool. As the expression about the poor craftsman goes...
I am not surprised changing your approach helped you to improve your skills. But if someone were to use online search to say... find the library code, and read the manual, they'd likely get similar results to yours, right? ;-)
> So at one point I just turned autocomplete off, stopped searching immediately and forced myself to remember and I think it's had a hugely positive impact.
n.b. I program like this too, just plain Emacs and I try to remember API and package names before consulting Dr. Google. The IDE trying too hard to help me is hugely distracting.
Pre-web, you basically had a lot of reference books, used external libraries very judiciously (and there were relatively few of them anyway), and generally kept software stacks pretty simple. And, yes, probably took a lot more time.
More generally speaking, most people today would be incredibly frustrated getting information about anything generally if they were plopped down 30 years ago. When I was a product manager back then, we paid consulting companies large sums of money to get the most basic competitive information faxed to us because you couldn't just look it up.
>Pre-web, you basically had a lot of reference books, used external libraries very judiciously (and there were relatively few of them anyway), and generally kept software stacks pretty simple. And, yes, probably took a lot more time.
As a new developer, this is what I am doing and focusing on: not being reliant on the internet, and it works. Because I have several books and for example I can read Mozilla MDN offline.
Also, retention seems to be better for me when researching in a book vs. online. It's probably a mental thing: "I don't want to have to stop and pull that book down every time I need this, so I'd better remember it," compared to "I don't really need to remember this because it's just a websearch away."
I am trying to be less dependent on the Internet and to train myself to be better, that's it. Same goes for games too, right? Look up the solution and the puzzle is done, without thinking.
I use a website/app and books; the website updates itself. But for some things I still need to look elsewhere than Mozilla MDN, e.g. W3.org
> … and for example I can read Mozilla MDN offline.
Mozilla MDN is pretty much the last thing I’d prioritize for offline reading, since its relevance is tightly coupled to the availability of the network.
I might prioritize offline resources for Raspberry Pi and other computing platforms, that make more sense without the Internet.
But above that, maybe a few books about growing food in one’s own garden. :-)
Not sure how books on gardening have anything to do with coding workflows, but yeah, okay.
There's a difference between building a reference library for home use (homesteading, crafting, etc.) and, say, working for 8 hours on a coding project directly tied to the need to use the computer. Computer languages/libraries/etc. change frequently enough that printed books become obsolete fast. How and when to plant, what to feed them, etc. hasn't changed much for generations, nor have things like when to use a dovetail joint or other building techniques. Books from the 1900s would be just as useful today.
A lot of the programming I did was pre-Windows. But, yes, MSDN was a pretty remarkable developer resource at a time when developer resources were pretty fragmented/hit and miss.
I learned C# 2.0, when the offline MSDN was still a thing. That language environment I can still work in with ease, from memory, without consulting online resources. Good offline documentation (and good library design) works wonders.
You thought it through, investigated the issue, came to a conclusion, created a hypothesis, and tested a solution. Now, the cycle seems to be read a problem ticket, see what other people have done, copy code, close ticket.
I can't help but feel that this type of workflow just took a huge step forward in https://copilot.github.com/ too. A solid recognition that that's how most people code, distilled into a product that automates the lookup by cross-referencing information on the world's largest code platform.
A step forward, but also potentially a step backwards compared to the “harder” workflow (the one without all answers on stack overflow).
Just a small example, I imagine there are a lot of JS developers who know that they need to create a new function using “=>” or “.bind(this)” if they need access to “this” within the function… and if they don’t, things break, without really knowing why (even though they know the solution).
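To make the "why" concrete, here is a minimal sketch of that trap (the `Counter` class and method names are made up for illustration):

```typescript
class Counter {
  count = 0;

  broken() {
    setTimeout(function () {
      // `this` is NOT the Counter here: setTimeout invokes the callback as a
      // plain function, so `this` is undefined in strict mode and the line
      // below would throw at runtime.
      // this.count += 1;
    }, 0);
  }

  withArrow() {
    // An arrow function has no `this` of its own; it closes over the
    // enclosing method's `this`, which is the Counter instance.
    setTimeout(() => { this.count += 1; }, 0);
  }

  withBind() {
    // .bind(this) returns a new function whose `this` is permanently fixed
    // to the Counter, no matter how the timer invokes it.
    setTimeout(function (this: Counter) { this.count += 1; }.bind(this), 0);
  }
}
```

Knowing that the arrow function or .bind(this) works is the solution; knowing that a plain function gets its `this` from the call site is the "why".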
TLDR: the best developers I’ve worked with go beyond “knowing the solution” to “knowing why the solution is the solution”. Understanding “why” something is the way it is becomes difficult when something like Copilot autocompletes the correct answer.
Which might be after you've gone home and just stopped thinking about it. Mine usually comes the next morning in the shower.
Sometimes pulling an all-nighter and beating your head against the wall is less productive than just stopping to take a break and coming back at it with a fresh mind. FAANG PMs don't get this and probably think less of you, because your Zen-like approach is anathema to their fast-paced, do-or-die approach. You can't be productive if your butt isn't in the chair.
When phones still had IR blasters, I got stuck for 3 weeks straight on an IR library that needed to imitate the sequences coming from a TV remote. The code was “done” in 2 days; the rest was spent in front of a TV getting my sequences recognized.
Literally went to the library to read books on IR and make sure I understood what the doc was saying. Asked around to find people who had worked on this kind of sequencing. Went to a small repair shop and bribed the tech to give me hints on how remotes work.
It was fun in retrospect, maddening and incredibly taxing at the time.
With prior knowledge of how quirky it is, I'd sacrifice way more time up front to find an expert.
I don't think we did anything fundamentally wrong, short of underestimating the problem space. It's also not as if I was left in my corner for weeks; I got pretty good advice (like going to find more help and returning to basics). Before the internet, before makers got into the habit of publishing maintenance manuals and so on, getting info was just way harder.
Now you import isEven(), burn out because of all those crappy third-party dependencies, and look for a job in Go. And in 10 years, when Go has overinflated like Java and become a piece of garbage, you'll rinse and repeat.
And sometimes it is knowing what you don’t know that makes one a good developer.
Leetcode interview questions where I could just find the answer in 5 seconds with a google search might actually be good interview questions, if they just let me use a google search.
You still have to know what to look for.
Word problems in math are, I think, the one place where “being a math whiz” is what it takes to be “able to code”. How you set up your equations to solve a simple algebraic word problem can make solving it super simple, or more complex.
My kids do Singapore math, and holy crap does it force you to think about other ways to approach solving problems, a valuable skill.
Though they’re not homeschooled. Well, they were during the covid year, but we did Singapore because their school does Singapore.
I’m always trying to solve the damn problems with an algebraic equation that is way more complicated, and my kid draws a bunch of bars and things and effectively does algebra, but with what ends up being a simpler equation.
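For a concrete (made-up) example of the two routes: "Ana has some marbles, Sam has twice as many, and together they have 18."

```latex
% Algebra route: let Ana have x marbles, so Sam has 2x:
x + 2x = 18 \;\Longrightarrow\; 3x = 18 \;\Longrightarrow\; x = 6.
% Bar-model route: draw one unit bar for Ana and two identical bars for
% Sam; three equal bars make 18, so each bar is 18/3 = 6.
% Same algebra underneath, but the drawing hands you "three equal parts"
% without ever writing the symbolic equation.
```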
I think most of us operate these days like this. This allows for consistent breadth. While you may have deep expertise in a specific language, coding paradigm, or other technical area, the shared resources of the Internet allow you to operate more efficiently and broadly. Your "depth" in this breadth space is by having an efficient operating model that weeds out spam, bad input, and similar gaps when the "broad" knowledge base is incorrect.
I believe this is the real reason spam-answer/grifting sites are so bad: they pollute this shared commons of freely available information. As a matter of policy, I certainly hope these free resources are kept in place, timely, and free from spam/SEO farming.
Hah, I’ve answered enough questions on SO that every once in a while I end up having to consult SO just to copy some code I wrote myself in an answer. I guess in these cases I directly offloaded my own knowledge onto the Internet :)
The “own” snippet that comes up the most often for me is a binary-search implementation of integer cube root in Python - I know how to do it but the edge cases trip me up enough times that my pre-written answer is faster than trying to write it from scratch.
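For flavor, here is a from-memory sketch of the shape of that answer (the real snippet is Python; this TypeScript rendering is illustrative, not the actual code):

```typescript
// Integer cube root by binary search, floor semantics for n >= 0.
// The classic trip-ups: the initial upper bound, the loop invariant,
// and what rounding should mean for negative inputs.
function icbrt(n: number): number {
  if (n < 0) return -icbrt(-n); // NB: this rounds toward zero for negatives
  let lo = 0;
  let hi = n + 1; // invariant: lo^3 <= n < hi^3
  while (hi - lo > 1) {
    const mid = lo + Math.floor((hi - lo) / 2);
    if (mid * mid * mid <= n) lo = mid;
    else hi = mid;
  }
  return lo;
}

// icbrt(27) === 3, icbrt(26) === 2, icbrt(-27) === -3
// (for n near 2^53, switch to BigInt so mid*mid*mid stays exact)
```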
Just sharing some perspective. I got started in the early and mid 90s. I became obsessed with MUDs as a teenager. I needed a compiler, discovered Linux. Painstakingly downloaded and installed it using over a dozen floppy disks. My Internet was a 28.8-56k modem in that era. I got extremely good at C because that is what CircleMUD was based on. I had a pile of reference manuals, man pages, and Usenet. Occasionally things like Beej’s guides were useful. I also wrote a decent amount of Perl. If I wanted to learn something, like Regex, it was source code, man pages, info pages or an O’Reilly book. Internet hunting was secondary to trusted resources (books, etc).
Certainly some things were harder and slower, but it forced you to use your brain more and think through some types of issues. You rarely got the instant gratification of reading someone else’s solution to your weird problem or error.
It forced you to really look through the source code and think things through from first principles. To understand your runtime (C programs barely have a runtime, really, but it still exists in complex programs that must run for long periods, like a MUD). It forced a deep knowledge of your tool chains and made you really good at them. It forced you to be more narrow but also creative and deep. From a first-principles perspective it taught me ways of working that are different from those of younger programmers, and even of modern me. Often looking something up is right. But sometimes you are best served by learning something for yourself, through source code and such. I learned so much looking at other code bases and constantly asking “why?” Why is this code this way, etc. I can’t say if it’s better or worse, but I got really good at MUDs and it turned into a long career in tech with no degree :)
I briefly ran a PK-focused MUD (hosted on Wolfpaw) and remember being amazed at having 15 players online simultaneously. I was ultimately forced to shut down the MUD by my parents, because working on it started negatively impacting my preparations for final exams.
I went the opposite way, heh. I had a pretty popular MUD as well. I learned so much and made a few friends along the way. Took me about 5.5 years to finish high school from all the late nights and ignoring my studies. I just dropped right into software dev after that in the professional world. So it goes :)
As someone who started working for pay before SO and GH (damnit I'm old) what I remember is that syntax errors took longer to figure out (and more trial and error), but the biggest difference to me is honestly that it is so much easier to find the right library. Back in 2005 I was coding up pretty much everything from scratch myself every time (I even remember a heroic attempt at an XML parser in C++! Backwards linking and processing directives and everything) because finding the right library to use was such a pain in the neck. If it wasn't in STL or Boost, it was like it didn't exist. Maybe it's that I'm older and more experienced (and know not to try and write your own XML parser because of all the complexity inherent in that), or maybe it's that SO and GH facilitate library discovery (with SO providing the pointers and GH providing the actual code) so much better than what came before (SourceForge etc.).
I'm not sure, but I suspect it was the technology's improvement more than my own.
I usually turn to SO in hopes of a quick and easy answer, but a lot of the time your specific use case isn't well covered, since stackoverflow threads often come with some bespoke requirements from the asker.
What I find is that it can be faster to just look at the documentation that comes with the tooling. Usually a good stack overflow answer is just a regurgitation of something that's already covered in the documentation, which the asker clearly didn't read a lick of. When you start using a new tool, just skimming the documentation end to end can put you at such an advantage, and it sets you up to start working with the tool in the correct way, with a decent understanding of some of the caveats that might be at play in your particular use case (which is rarely something that can be gleaned from a terse stack overflow answer). Even just a pdf covering a language can be handy to thumb through by chapter and revisit common patterns, like for loops, that vary between languages; once again, it's easier to tread through a familiar pdf textbook than to dredge up something relevant from google search these days.
It's coming at the problem from different directions. The documentation gives you the general bottom up approach, but lacks details and examples. SO shows what it should look like at the end, and adds details on edge cases that the documentation just glosses over. But you can never find exactly what you want, so you combine those two, a working end result, then modified using the documentation, to get what you want in the end.
If you already have a working pattern that you're just modifying, you might get away with just looking up documentation. If you already have a strong foundation and know exactly what you want to do, then you're just looking up the syntax.
Maybe another analogy might be, learning vocab to better express your thoughts, vs looking up the exact spelling of a word you already know.
Personally I've replaced a lot of my stack overflow usage with hacking along with test cases and using the documentation to help develop those tests. Imo, it's a lot easier to figure out how something works by fiddling around with your own two hands than by hoping you find someone who writes well and understandably about this specific niche thing on stack overflow. Sometimes on stack overflow the answer is a very lazy "just install another package" vs. developing a solution with the base tooling that's probably more performant anyhow, if a little more verbose.
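A tiny sketch of what I mean, assuming a Node environment (the sort-stability question is just a stand-in example):

```typescript
// Instead of googling "is Array.prototype.sort stable?", ask the runtime.
import { deepStrictEqual } from "node:assert";

const items = [
  { key: 1, tag: "a" },
  { key: 0, tag: "b" },
  { key: 1, tag: "c" },
];

const sorted = [...items].sort((x, y) => x.key - y.key);

// If sort is stable, the two key=1 entries keep their relative order.
deepStrictEqual(sorted.map((i) => i.tag), ["b", "a", "c"]);
console.log("equal keys kept their insertion order");
```

The caveat being that a passing probe tells you what this engine does, not what the spec guarantees, which is where the documentation comes back in.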
> Perhaps the only thing that's actually my own input is the judgement about what things are important, which sources are reliable, and which people are authorities.
This is more or less what I got out of the 2nd half of my higher education. Plus I got a ton of reps in, so to speak, to really refine my judgement and ability to research. It's the old adage about learning how to learn, teaching one to fish, etc. I'm sure this all applies to many fields, but especially one like ours that changes rapidly. Thanks to this education, I feel like I can switch careers pretty easily.
I have no illusions about it: I don't feel like I know what I know without the help of the Internet; I feel like I'm just some bloke who is good at sussing out what's important and how to apply it to my job. I think this is a skill that is missing in a number of my colleagues, though.
Sometimes I code without the internet (on vacation, on trains and planes, etc) and it's not too hard.
The major difficulties are indeed around the things you mention, but one solution for that is having a few large projects open in another IDE window so you can search for functions, idioms, algorithms, patterns.
I guess I got used to it because I started coding professionally before Stack Overflow. There was Expert Sex Change, but it sucked, so I avoided it like the plague. It was mostly C#, so the combination of static typing + IntelliSense also helped me not need the internet. Before C# I coded in Perl, so seeing other people's code wouldn't have helped anyway ;)
It's inevitable, really. Things have become so complex and dynamic that you can't hope to have all the reference material you need at home.
We used to have these little folded cards with the complete syntax of Pascal or C and that would be enough. Nowadays, you don't even know all the languages and libs that are in a given stack.
This is partly not your fault, however; the manner in which documentation is provided (or not provided, to be exact) is also complicit.
E.g., there are languages where the offline manual covers 99% of all your needs. I'd include bash and C, Octave, and MATLAB in this category.
Then there are languages that treat offline docs as an afterthought, and focus most of their effort either on funky online documentation websites or on effectively relegating it all to stackoverflow.
I'm fine with the built in `man` command, and of course offline documentation for almost everything. It's actually kind of weird that our first instinct is to waste internet bandwidth while we probably have a copy right with us.
> I'm fine with the built in `man` command, and of course offline documentation for almost everything. It's actually kind of weird that our first instinct is to waste internet bandwidth while we probably have a copy right with us.
Exactly. Yesterday I wanted to set up DNS over HTTPS, and if you go searching for it online you have to wade through a lot of stuff, when the manual literally says you need to add two lines and uncomment the specified comment.
> Basically I'd be useless without the internet. The coding tools themselves, all the examples of how to use them, and all the actual knowledge about how everything works is on there.
I know it's sometimes easier to google particular questions and that some projects lack documentation, but many if not most programming tools include very good documentation in the same package. And many include their source too.
I think saying you'd be useless for lacking online access to that knowledge is an exaggeration.
Not being able to pull new dependencies is more problematic, in my opinion.
The thing about the situation is that you have a wide variety of knowledge types. Notably, some knowledge can be verified pretty easily, and some knowledge (or claims) is quite hard to verify, resting on experiments, hard-gained expert knowledge, or the testimony of a single individual.
Programming information tends to be easily verified, so organizing one's skill around filtering google, stackoverflow and so forth is fine. The resulting program can be tested fairly easily.
Uncritically taking in other sorts of claims can be very problematic.
You might want to work on that. There's something to be said for mastery and it seems risky as fuck to require the presence of the cloud, let alone a few websites, to be able to work.
It seems like mastery and knowledge just for the sake of it is less of a priority these days. Why bother wiping your ass when the cloud can do it for you?
Relying on a reference as a guide is not necessarily a sign of a lack of mastery, any more than not using a reference is a sign of mastery. Sure, there is always the possibility that a reference won't be available when you need it, but that obvious exception aside, perfect recall is not what mastery is about.
Memorization is not all of mastery but it sure is a large part of it. Like, I get it, I can't memorize for shit and I rely on Google/SO to an uncomfortable degree for programming, but that is more because they are the most convenient. If you can't function because Google is down, when you should be looking in an offline reference like a local copy of the docs, library code, a man page, or a book, then I would consider that to be a pretty crucial gap and weakness in technique. Resourcefulness is another component of mastery, and, frankly, even more important than memorization in my book.
It's the same thing for so many other parts of life... just because you use Google Maps to navigate doesn't mean you shouldn't also know how to read a map and navigate on your own.
> Memorization is not all of mastery but it sure is a large part of it.
Not at all. Memorization is a common side-effect of mastery, just like mastering a lot of physical activities tends to have a side-effect of improved strength, endurance, etc. No question there's a lot of correlation between memory and mastery, but that's not the same as being an essential component of it.
An elderly master of a physical activity might well have a significantly deteriorated physique and still have mastery, and dementia patients can retain mastery of musical instruments, jigsaw puzzles or contract bridge (even if they can't name the suits, let alone articulate simple bidding rules) despite severe deterioration of their memory as applied to those specific skills.
> If you can't function because Google is down, when you should be looking in an offline reference like a local copy of the docs, library code, a man page, or a book, then I would consider that to be a pretty crucial gap and weakness in technique.
You might consider it a crucial gap and weakness. I might frame it more positively, that it's a great advantage to NOT have to rely on a reference tool. However mastery does not preclude weakness. Heck, master writers like Robert Caro, J.K. Rowling, Neil Gaiman, Joyce Carol Oates, Stephen King, Danielle Steel, and Don DeLillo have spoken in detail about being dependent on their old school tools and are unable to take advantage of the benefits of modern writing tools, and there are other writers for whom the reverse is true! No one suggests they are anything less than masters of their craft.
Certainly, resourcefulness is an important skill, and a resourceful person with poor memory will have lots of ways of getting by without having to remember things. However, there's a bit of a limit to what's reasonable and important for mastery. Sure, a resourceful soccer player can still apply their craft without a ball or a field, but really, how useful is their skill? In distributed computing, people often absurdly complain about how an application node behaves when the network is down, which raises the interesting question of what work the node might be expected to do in that scenario. Similarly, when a software developer can't function without a working computer, or if their network connection is down, that's not great, but might not be all that important if the development work they are doing is entirely dependent on those tools anyway.
> It's the same thing for so many other parts of life... just because you use Google Maps to navigate doesn't mean you shouldn't also know how to read a map and navigate on your own.
This is a good example, and not just because reading a map and using Google maps are different skills. It's a good example because one can have mastery of map reading without necessarily having memorized a single map or geographical layout. Heck, maps have legends specifically so you don't have to remember much about them in order to use them effectively. Memorizing geography would be a distinct form of mastery separate from map reading.
I understand using the official docs to look up some lesser-used parts of an API, but Stackoverflow is pretty useless to me nowadays. Most of the problems I run into are way too specific to my situation for SO to be of any help.
That also holds for me, which I suppose makes me a slow coder. I imagine that coders like Torvalds or Carmack don't have these issues, and that's why they are fast. In my head it's like typing: I 'type' with 2 fingers, always looking. They 'type' with 10 fingers, blind.
Not actually related to your post, but as a non-native English speaker, it took me a long time to remember the meaning of "SO" on the mainstream Internet. I don't know why, but I kept forgetting it and wasn't able to infer its meaning, so each time I had to look it up.
Nowadays it's ok, to the point that reading your comment I was able to understand that it was not the mainstream "SO" you were talking about, but another one more related to HN.
Anyway, what I found funny in this is that I can think of "SO" as being my "SO", because I 100% relate to what you shared.
I've fought with this on and off. It is true that my work tends to be as internet dependent as anyone's, but I've experimented with an offline-first type of workflow.
I think it's entirely possible to work offline first and foremost. You'd need the documentation for everything local. Operating system, programming language, libraries, frameworks, etc. Mass storage makes this more practical now than ever.
A lot of projects don't provide easily downloadable documentation, which would make it harder, and, of course, you'd miss out on knowing when the documentation is lying.
The biggest problem with it is probably nothing intrinsic, but the fact that you'd be swimming upstream doing it.
In the 'old days' there would be a shelf of programming books nearby and man pages on useful operating systems. It was slower, but fairly similar in principle.
I started coding in the disconnected/RTFM era, and had a hard time with the poor documentation, bugs, and general over-complexity of most modern programming platforms. StackOverflow seems to fill the void, but not efficiently. So, maybe don’t feel so bad because it’s not really learnable the way it used to be.
I think you are selling yourself short. The things you find on Google are trivial things. A lot of this you'd reference in a book back in the day. You'd be slower for sure, but not out.
Personally, I'm starting to identify a lot of cases where looking things up in an external resource is a deeper mental disruption than remembering it, even if the lookup is just as fast or even faster. (Or, it's possible that the external lookup only seems faster because the several mechanical steps compress my perception of time, like keyboard navigation can seem faster than the mouse, even when it isn't.)
For example, I find that when I'm working with an API enough that I am frequently looking up the same operations, investing twenty minutes in identifying and reviewing the important operations, as if I were preparing for an exam where I wouldn't be allowed to consult external resources, pays off in fluency and immersion.
Another example is that when I am reading a history book, before I start, I review relevant names and contemporary dates. Then when I find myself thinking, "Wait, at this time, how long ago was X? Has Y happened yet?" I can answer from memory. This gives me a richer reading experience than if I needed to remove myself from the context of the book to look up those dates.
I know memorization is seen negatively from a pedagogical standpoint, and schoolchildren find it alienating and discouraging, but I think when you have enough experience to understand the value of it, so that the work to achieve it isn't such a negative experience in itself, judicious application of it has a lot of power to make your work easier and your learning experiences richer.
> I know memorization is seen negatively from a pedagogical standpoint, and schoolchildren find it alienating and discouraging, but I think when you have enough experience to understand the value of it, so that the work to achieve it isn't such a negative experience in itself, judicious application of it has a lot of power to make your work easier and your learning experiences richer.
> I define memorization as learning an isolated fact through deliberate effort. [emphasis in original]
I'll emphasize isolated fact. A lot of people who say memorization and rote learning are bad or ineffective are probably framing it in the same sense as this author. They (mostly; there are exceptions) aren't arguing against committing information to memory, but against committing information to memory with no (or minimal) understanding attached to it.
If you just learn isolated facts, you become a meat sack Chinese room (https://en.wikipedia.org/wiki/Chinese_room). You can't connect the dots and perform the real interesting work of synthesis (combining information and ideas) or derivation (from what you know, determine new results). The author talks about students "knowing" that the sine of pi/2 is 1. What do they actually know, though? What can they do with that fact on its own without any understanding of what "sine" itself means and the contexts in which it is used?
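For contrast, the understanding that ought to sit behind that fact takes one line of unit-circle reasoning:

```latex
% On the unit circle, \sin\theta is the y-coordinate of the point at
% angle \theta. At \theta = \pi/2 that point is (0, 1), hence:
\sin\left(\tfrac{\pi}{2}\right) = 1.
% The same picture gives \sin 0 = 0 and \sin \pi = 0 with no extra
% memorization, which is what the isolated fact on its own cannot do.
```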
Yep, defining memorization as learning "isolated" facts presupposes that the teacher is too lazy to draw connections in the classroom and explain the relevance of what the students are learning. It defines memorization as something that only an incompetent teacher would utilize, like defining engagement with primary sources as having students read material in dense technical or archaic language they don't understand, without assistance, or defining classroom discussion as asking the students to talk about the material and then going to the teacher's lounge to have a cup of coffee. No teacher would defend the use of techniques framed that way.
Yep. Having to look up every API call, versus taking the time to memorise the (relevant parts of the) API, makes the difference if you want to be a 10x coder.
Knowing how to look up and apply things is nice. Being able to discern which of the things you looked up are worth memorising, and then actually doing so, is even better.
Some things are worth memorizing, others aren't, and just remembering a reference does the trick. Also, memorizing information that becomes obsolete fast becomes a hindrance rather quickly.
You're using [pre-fetching](https://en.wikipedia.org/wiki/Prefetching): looking up information that you expect may be needed in advance. And it's also reasonable to retain such cached information over the term in which you expect it to be reasonably in-demand.
By contrast, the critique of rote-memorization in education is that students have their time/effort/educational-opportunity squandered in courses that ask them to memorize information incongruent with expected utility. For example, kids had been asked to remember elements of a fictional narrative, or specific dates or factoids about historical trivia.
It's more of an issue of (cache/memory)-management and when/what information ought to be retained rather than an issue of memorization always being good/bad.
Generally, the critique of education is that students ought to be learning how to think and operate rather than memorizing trivia. Of course, learning to operate effectively can involve memorization at times, e.g. retaining an appropriately-tuned working-cache of relevant information, though exams that basically force students to waste their time/effort/opportunity memorizing trivia seem counter-productive.
---
Hard to explain this, but it's important, so...
I think one of the worst parts of rote-memorization in education is that it messes up the mind's internal prioritization scheme, which I think makes people stupid.
For example, when someone's doing a math-problem, they need to remember the numbers that they're working with; such things ought to be retained in the term in which they're used, as required to do a math-problem.
For another example, when someone's studying a historical-happening, they need to remember dates, names, and so forth, well enough to piece it together. Though like with math, they ought to be able to write such things down or look them up, as appropriate; there's no value to memorization that goes beyond its utility in caching.
So, when a teacher asks their students to memorize dates, it's not that they're wrong in the notion that a good student ought to, at some point, have memorized the very things they're asking the students to memorize. Instead, the teacher's big screw-up is that they're asking students to elevate such minutiae from cache-trivia to deliverables. Alternatively, they're asking students to treat their mental-cache AS a deliverable.
And that's what I fear would make people stupid, because that's not how a healthy mind would seem to operate. That is, a healthy mind ought to have an efficient, tuned cache-management system; it's a mental sickness to try to train one's cache to reliably operate over the course of days or weeks, as students had sometimes been asked to do in order to regurgitate trivia on an exam.
Instead, a healthy mind ought to have an internal feel for when/how trivia is useful, then retain it accordingly. And a healthy mind shouldn't be obsessed with a fear of failing an exam for forgetting something; that'd be like designing a CPU around the ideal that a cache-miss is a critical-error.
In short, I'm trying to stress that it's not as simple an issue as memorization always being good/bad, but rather it's an issue of rote-memorization training students to engage in poor mental-hygiene practices, messing up their ability to think efficiently. (Obviously, this is in addition to the normal objection that students end up missing out on a real education.)
> Alternatively, they're asking students to treat their mental-cache AS a deliverable.
I think that's the student's point of view, but as a teacher will explain, not every part of the process is the ultimate goal. Teachers tell students that awareness of basic facts from the reading will help them get more out of the class discussion tomorrow, help them get more out of the lecture, give them a head start on further assignments, etc., but for most of them that just goes in one ear and out the other. They really don't care if the class discussion happens without them and they don't understand a damn thing and the class is a total waste to them. So the teacher also has to tell them there's a quiz on the facts tomorrow, and follow through on that promise. Then the students who just want to do the minimum to get a certain grade also get a little bit more out of the class discussion, which they wouldn't have otherwise.
In the end, not even knowledge of history is the ultimate goal. The ultimate goal is something like the ability to use historical perspective to better participate in society and make decisions about their own lives, historical perspective that they pursue and deepen throughout their lives. Teachers can't test that. They can test, and motivate, constructive steps towards that.
I started developing software in the days before the Internet. It was indeed very different. My choices about what to remember and how I remember things have changed significantly since the ubiquitous availability of searchable information. I no longer have tomes of K&R, processor manuals, Inside Macintosh or Java books on my desk. The only technical book which happens to be on my desk at the moment is "Anti-patterns".
I have been "weeding" my paper books, especially technical books over the last couple of years and gotten rid of an entire bookshelf of books. I still have an entire bookshelf with Knuth TAoCP, math books, type theory, graphics, graph theory, control theory, physics and some nostalgic books like my original Motorola 68000 processor guide, Rodney Zaks Z80 book, "Adventure game programming in BASIC", Commodore 64 Innerspace Compendium, etc.
When a puppy chewed up my copy of "Linux in a Nutshell" I was tempted to buy the new edition but decided to try it online for a month with O'Reilly Safari through work and have not bothered to replace the paper copy for more than 2 years.
I would categorize the main change in style of memory as a shift from focus on remembering details and facts, which I can look up, to using memory for decision trees, processes and methodologies which either can't be looked up or are significantly personally customized. I consciously "outsource" the simpler and to me, less valuable, aspects of memory. Some of this is handled automatically by the IDE I use and the rest is done with mostly Google searches.
Yes, I got it on the recommendation of a friend but have read less than a quarter of it so far, so I don't have my own opinion yet. Because I am a cheap bastard, I got my copy on eBay for $4.
> Why learn something in depth when it can just be looked up?
And the answer is that you can't daydream about something or think about it deeply unless that information is easily pulled up from your memory. Daydreaming, or the "default mode" of the brain, organizes information and helps us understand it. If I had taken the care to study and memorize information regarding, well, everything in school, via a tool like Anki, I would have a lot more information I could use to connect disparate ideas together. The brain can't do this to the full extent possible unless the information is memorized.
That being said, it's important to know that memorization has a cost in time. The time for a single data point is low, but 20-30k data points in Anki is a serious time commitment.
If it's important to you, memorize. The benefits are huge, and it will help ward off mental decline later in life.
"Reading, after a certain age, diverts the mind too much from its creative pursuits. Any man who reads too much and uses his own brain too little falls into lazy habits of thinking"
> If it's important to you, memorize. The benefits are huge, and it will help ward off mental decline later in life.
I slightly disagree. I do agree that the end goal of having the information memorized is important, but drilling flashcards never worked for me. What makes information stick is drilling practice problems that touch on the knowledge to be memorized until it's established. It's more useful for recognizing when those facts will be relevant, too, compared to having a memory silo of 10,000 stored facts.
Anki and similar systems can be used for practice problems, not just "read card" or "read card, say the reverse side, check if correct". There are some add-ons for math that actually generate problems, or you can construct the cards in a way that they promote use and not mere recall. "Io {parlare} con mia moglie" can be one of many such cards that prompt you to conjugate, or set the article, or properly determine the plural. Or can be combined with reading comprehension or listening comprehension scenarios (play or show some sentences, follow on with questions about what was heard or read).
You should see your memory as a library of functions, not a database of facts. Facts help you answer test questions; library functions are how you solve problems.
Every piece of knowledge has things you can do with it, things that relate to it, etc. Understanding how to work with the knowledge is the important part to learn, and it's what you don't really get by just looking things up on the internet. Basically, you build interfaces and connect them, creating a large structure of things you can do and apply to all sorts of problems.
When I was younger I worried a little bit about this, but now I'm perfectly happy to use my brain as an index instead of a data warehouse. As in, I don't bother committing to memory things that I can reach out and grab if they fall out of my "cache". As others have noted, I regularly reach out to SO/Internet to "remember" how to do basic language features if I haven't worked in said language recently or I've just forgotten the syntax. I can spend 5+ minutes racking my brain and/or reading docs, or I can search and find it in a few seconds. Often I just need to see an example to "remember" or kick-start myself.
The problem for me now is that my mental index can suffer from link rot. For programming it's okay, because this doesn't happen often. But for other topics, when a site or blog with the details of how something works disappears from the internet, it's very disorienting. Like you unlearned it. I guess this is why people get into archiving content.
You're right. It is. I'm thinking about setting up some sort of archiving web proxy on my home network to archive most of the pages I visit. I'm not sure if that will turn out to be any more useful than searching the Google index, but, at least it'll be a fun project. :-)
I think this is the bigger point - the difference between internal and external knowledge has become much less important because the external knowledge is always easily available. I have a friend who prides himself on knowing all manner of both sports and history trivia - he could walk you through who won every NBA championship and who was in their starting lineup for the finals off the top of his head. Meanwhile, I'm a Lakers fan and I remember they had a threepeat in the early 2000s, but I don't know the years off the top of my head. Who cares? I can Google it if I need the details.
The things I wind up having to Google / SO are things like "How to setup a Spring Boot app with Spring Data JPA for data access". There are just so many fiddly details: the exact list of dependencies to put into your pom, the various required annotations (@EnableJpaRepositories or whatever, etc., yada yada), the various properties that have to be configured for a Datasource, etc. There's no way I'm committing all of that stuff to memory, especially when it's easy enough to look up (or better yet, build a trivial "hello world" project with everything already done, and store it off to the side as a template).
The easy access to facts online is a good thing. This kind of “knowledge” is shallow; assembling the parts together to synthesize new knowledge is the important step. That part isn’t easily found online.
This reminds me of learning math. At one point “doing math” meant calculating numbers, like memorizing multiplication tables or performing long division. I was always relieved when the teacher said we could use a calculator on a test. But at some point I came to realize that math is not calculating. Math is about relationships between numbers and their general properties.
Googling facts is akin to using a calculator. Extracting meaning from that jumbled pile of facts is knowledge.
Yeah. There are serious epistemological problems to go with externalized knowledge.
Knowledge isn't just the ability to produce true statements, it isn't knowing what things are or what things will be, it's knowing why things are and why things will be.
If I keep guessing coinflips, I don't have knowledge of the answer 50% of the time even though I can predict them that often.
Knowledge has an aspect of understanding why things are, not just that they are, and that aspect becomes incredibly weak with the "look up facts on wikipedia"-model of external knowledge.
The JTB-model of knowledge is probably incomplete, but it's arguably less wrong than a model that doesn't contain justification.
> If I keep guessing coinflips, I don't have knowledge of the answer 50% of the time even though I can predict them that often.
Sorry to get off topic, but I think this is an interesting way of thinking about this. I wonder if it represents any kind of larger difference in our worldviews. I would say that we have knowledge of the answer 0% of the time. If we guess and it happens to be correct, that's a coincidence, not an indication of knowledge.
IMO this goes hand in hand with thoughts I've had about the concept of mistakes. When I used to dabble in day trading (I've been clean for a few years now), I gained and lost and gained a lot of money. Whether any particular gamble ended up as a gain or a loss, I consider them all mistakes because I had no rational reason to expect any of them to pay off.
My uneducated hypothesis is that whether somebody thinks of a successful gamble as a mistake or a good decision could be a decent predictor of certain personality traits and political views. Maybe the same applies to the question of whether being correct implies knowledge.
The JTB model is wrong or at least misleading. Definitely outdated.
Our best modern models for both biological and artificial intelligence indicate that concepts or knowledge* emerges from networks of sensation/facts/data.
* Specifically referring to "system 1" cognition, which is what the article is ostensibly targeting.
Well, JTB was the standing model for some 2300 years, so even though there are a few corner cases it doesn't quite cover (Gettier etc.), those are corner cases indeed. It doesn't invalidate JTB any more than Einstein invalidated Newtonian physics.
The question is what we mean when we say knowledge. I don't think modelling is a good way of answering that question. Of course there is going to be a connection between perception and knowledge, how else would the knowledge enter us? But David Hume could have told you that.
Knowledge doesn't "enter" us, that's the point. It's an emergent property of (many, many) memorized & networked sensations, which do enter us through our various sensory organs.
It is virtually impossible to teach an abstract concept like "cup" without a) providing real, sensory examples of what you mean by "cup" or b) relating to analogous concepts that the student has already learned through personal experience (like "bowl" with "handle") and is capable of communicating.
There is a point where we do not have knowledge, say when we are born; then we have perceptions; and after that we (may) have knowledge about the world. I would say that knowledge has entered us through perceptions. If something is not in my mind, then I have a perception, then it is in my mind; and if I were unable to have perceptions, it could not have entered my mind; so it has entered through that perception. All physical phenomena are to some extent emergent, down to subatomic particles and possibly even further. It doesn't really matter to the subjective human experience which principle is more emergent, and the subjective human experience is the only human experience.
It's very hard to teach someone who has no relationship with the world, but you don't need a large set of concepts to synthesize additional concepts. You can derive most of mathematics from a few simple axioms. Democritus concluded the world was made of atoms based on observations about how the world appeared to work, on encountering problems like Zeno's paradox. Knowledge, as well as language, is all about how things relate to each other. Words to meanings, causes to effects.
> You can derive most of mathematics from a few simple axioms.
Yeah, nothing is stopping you from building towers of abstractions that are N degrees from any real experience or data. But those heavily-derived concepts are prohibitively difficult to teach and communicate, because each level in the abstraction hierarchy adds semantic noise. A major reason why classical, "pure" mathematics pedagogy is infamously ineffective.
Ultimately, every great mathematician learned to count with rocks or apples -- not by internalizing Peano's axioms.
That is how it is commonly taught, but can it really be argued that it's the only way it can be taught? Regardless of which method is easier, why is it possible to interact with negative, irrational, or complex numbers without anyone having ever seen them, but not integers?
There of course needs to be a common set of ideas to communicate, but I'm not convinced you need one particular set. In many cases, having some of the ideas means you can synthesize the rest. You could for example reason about numbers by drawing upon size rather than quantity. Then you basically end up with the Euclidean method, a mathematics of proportion that goes a surprisingly long way.
Abstract quantities, even positive integers, don't exist in reality. Heck, even "objects" don't really exist, because "boundaries between things" don't concretely exist except in our perception and imagination.
If there is any primal, axiomatic concept, it is "object"/"thing". From object you can derive quantity (many objects), from quantity you can derive quality (two objects are different), and so on and so forth.
You can reason about the same reality in various different terms, and the choice in terms shapes how you see the world. I don't think it's easy to argue that one concept is the true concept from which all concepts inevitably derive. You can just as easily think of things as parts of a whole as objects in an emptiness.
In some languages the name for door is the same as the name for mouth, and a door is a bit like a mouth so it makes sense to relate them. All language is metaphorical like that, except not always as explicitly. When we say a thing is an object, we say that our concept of that thing shares similarities to the concept we have for objects.
To offer a counterpoint, doing a basic calculation in your head is faster than reaching for a calculator -- remembering 7*3=21 is faster than punching it in on a keypad. It's the same with facts -- having some fact committed to your own local memory is much faster than searching a global computer network for it every time.
For computer systems, the speed at which data can be accessed has a transformative effect on the algorithms that deal with it. I think this is the case for humans as well.
Memory is the basis of any real knowledge. This we have known since forever…
Intelligent people tend to think that you do not need to memorize because everything is out there at a button-press distance. But they do not realize that what makes them intelligent is that they can memorize lots of things without effort.
> But they do not realize that what makes them intelligent is that they can memorize lots of things without effort.
I play board games with my friends. We all know all the rules of the game, although we took varying amounts of time to memorize those rules. Yet some of us come to dominate the gameplay. To the degree that there is a correlation between the speed with which someone learns the rules and the percentage of the time they win the game, I do not believe there is causation. How could there be? One would have to believe that once everyone has all the knowledge in the system, those who memorized that knowledge faster still have an edge simply because they memorized it faster. That's difficult to believe.
It's not just memorization of the rules; in a game like Go that would be only a tiny fraction of the memory utilized. Remembering large numbers of examples assists in recognizing patterns and developing strategies.
That doesn’t really resolve the underlying problem. Given the same set of examples, the guy who memorized them faster isn’t necessarily the guy who wins. There is more going on than just “effortless memory.”
Presumably, human memory involves a significant associative aspect. If you couldn't recall a relevant fact within a useful time frame, this might impact comprehension or the ability to make connections. Imagine trying to read a book while having a vocabulary of only a few hundred words, or lacking the ability to remember more than the last few sentences.
There are also fidelity issues. Suppose half of the facts or ideas you recalled were corrupted, or most of your retrieved memories were misassociated or irrelevant?
There is an inherent, and not entirely unfounded, bias evident in this article that external knowledge is inferior to internal knowledge. It is certainly true that reaching the point where you don't have to look something up usually means you have a deeper understanding of it and are more competent and faster at using it. Quality of knowledge matters, but there are limits to how much knowledge a single person can internalize.
Quantity has a quality all its own.
Having vast amounts of external knowledge available for rapid access offers unique capabilities. Skill at searching as well as quality of search tools can enable people to do things that would have taken far longer just a couple decades ago when any given search would have involved dead-tree books.
Perhaps we should start thinking of Google and other such tools not as poor substitutes for knowledge painstakingly internalized, but rather as an augmentation of human intelligence that grants humans rapid access to far more knowledge than any single human could possibly internalize in a lifetime. As these tools improve, this augmentation effect will become more pronounced, as will the deleterious effects of being cut off from the internet.
The blurring of lines between internal and external knowledge may also intensify with future technologies, e.g. wetware out of a William Gibson novel that lets people with data jacks install modules of mental expertise at will. Will an article like this even have meaning once people can look up a module of deeply internalized knowledge and install it in an instant?
One day, the Internet's knowledge may be indistinguishable from our own.
This is what the Internet started to do, and seemed to promise. But, as we all lament on HN, the Internet is no longer that which we yearn for. Walled gardens, embedded links, snippets instead of articles, titles instead of papers, ads ads ads. It is almost impossible to find the actual fruits of knowledge among the trash. It wasn't long ago that you had free access to research, data, and the results of studies from around the world. Knowledge unbounded. It's almost hopeless now.
How to fix it?
(And the best scientific paper resource is 'illegal'.)
It's problematic not to distinguish between things you know and things you just googled, regardless of which is more accurate.
The problem is similar to not checking where the information in the article you googled actually came from. This feeds into the now-stereotypical behavior of someone confusing a Google search with actual research. Google doesn't help with its question-and-answer section, much of which is mis-categorized and drawn from not-necessarily-reliable sources.
> Quantity has a quality all its own.
A large quantity of poor-quality information is certainly not something it would be useful to have automatically pouring through one's cranium, to say the least.
Q: You won more than US$2.5 million over 75 episodes of "Jeopardy!" How did you do it?
Jennings: I’ve always considered myself to be a very curious person by nature. If I don’t know the answer to something, it’s like a mystery I need to solve; it spurs me on to find out more information. I read just about everything I can pick up, I watch a lot of movies and I also like to enter my questions into Encarta; it’s a great digital encyclopedia with the answers a mouse click away.
I was at the supermarket one time and tried to remember something. In the act of trying to remember it, I instinctively reached into my pocket (to Google it), and found that I'd left my phone at home, and suddenly felt like part of my brain was missing.
At that moment I realized the Internet had become part of my "memory". The feeling wasn't "I can't look it up", but literally "I can't remember."
I think this finding is closely related to the fact that people think they're good at multi-tasking but they're really not. Think of your brain as a cache. People sit at work, filling up half of their cache with random data from HN or YouTube or whatever, then when they need to pull up a fact actually needed for their work ... oops, cache miss. Instead of milliseconds pulling it from memory, they spend seconds to minutes looking it up on Google or Stack Overflow and don't even realize how much these interruptions repeated many times a day slow them down. People who will spend hours hyper-optimizing their editor and their window manager and their shell environment for maximum productivity won't spend any time at all keeping the "working set" for their job in memory. It's kind of crazy, really. This is where those 10x differences between developers come from. A bit of talent will get you far, but a modicum of good old-fashioned focus and self-discipline might get you even further.
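To make the cache analogy concrete, here is a toy Python sketch; the two-second sleep is an invented stand-in for a Google/SO round trip, not a measurement:

```python
import time
from functools import lru_cache

@lru_cache(maxsize=None)
def recall(fact: str) -> str:
    # Cache miss: simulate the slow path of looking the fact up online.
    time.sleep(2.0)  # invented stand-in for a search-engine round trip
    return f"answer to {fact!r}"

# The first call pays the full lookup cost; the second is served
# from the cache in microseconds.
for attempt in ("miss", "hit"):
    start = time.perf_counter()
    recall("what does errno 32 mean?")
    print(f"cache {attempt}: {time.perf_counter() - start:.4f}s")
```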
I can't remember the study now -- perhaps fellow HNers can -- but there is a study where they asked a group of people to take a side in a debate. The debate was structured so that it sounded like A was the intuitively better choice, and when polled almost everyone agreed the group would support A. One plant was armed with excellent arguments for B, and when presented with these arguments, the group switched to supporting B. When asked later, they thought they had supported B the whole time.
As I understood the abstract, participants are given general-knowledge questions (e.g. who was the fourth U.S. president), the participant looks up the answer online, sees "Madison", and then thinks they must have known that (Washington - Adams - Jefferson - Madison). The participant will then predict that they will do well on subsequent questions without using the internet for reference (at least that's how I interpret "... [participants] predict that they will know more in the future without the help of the internet").
That's a foible fit for satire. I'm curious whether there is a process of rationalization like I just imagined, or whether there is just a fundamental confusion on the participant's part about whether they knew the information beforehand or knew it from the recent internet search. If the latter, that could be horrifying (imagine you needed to find out how long Oceania and Eastasia have been at war).
I've read about teachers struggling to convince students of the need to actually memorize things despite the ubiquity of easy reference: you just won't make new connections or insights if you don't have any information already in your head. I would say this study suggests the teacher's task is even more difficult if people mistake recent searches for prior knowledge.
There is another tangential problem with relying on online information stores.
At least with print news media, if there was a significant error in reporting, a correction would go out in an upcoming edition, and both the error and the correction would be on record.
In recent times, news outlets and other publications _silently_ update things without issuing a correction or making it obvious there was one, and the only way you can find out is via the Internet Archive, if the page was captured there.
I feel like people take for granted the ability to ask the right question with the right words. I get that this study is talking about general-knowledge questions, but I can't tell you how many times I have tried to search for an answer to something in another field only to find that I need to find the right keywords for the question first. For example, a social network in the humanities is a complex network in physics, which is a graph in mathematics, which is a network in ML (but sometimes it's also called a graph there). This is very annoying because all these fields have different ideas about "community identification" (clustering). I'm sure people must have the same problem with general-knowledge questions as well.
I think the "extended mind" view makes sense but also gets way more complicated in the presence of computers that have capabilities beyond just information storage and retrieval (i.e. knowledge "that"). Knowledge "how" to do something can be much richer, especially when the things we partly know how to do are often about interacting with computers.
People learn arithmetic in school, but often become functionally dependent on calculators or computers. When this happens, I don't have the sensation that I "knew" the right answer, or that I did the calculation; clearly the device did the work. The separation is apparent.
What about when a script or tool you use every day does something you could in principle do on your own, but the computer is far faster and more reliable? Do you feel you "know" how to typecheck your code? Or to resolve dependencies?
> When information is at our fingertips, we may mistakenly believe that it originated from inside our heads
reminded me immediately of Chalmers standing with his iPhone in his hand, arguing that there is nothing special about the skull as the location required to establish ownership of an idea.
I partially judge the novelty of the problem I'm solving by the level of access I have to information about it. You realize you're on the edge of knowledge when you're deep in the research papers, there are only a couple of papers on the subject, and you implement from there. The solutions you come up with from that tend to be the most rewarding and, unfortunately, relative to all the mundane work, few and far between. I think I run into that type of problem a few times a year, if even.
> erroneously optimistic predictions regarding how much they will know without the internet
People experience tools as extensions of themselves. If you always have the tool, does it matter whether you've misattributed the knowledge? (And if you have the tool, have you really misattributed it?)
Are we not tool-users?
Carpenters don't perform as well without hammers; cyclists without cycles; surgeons without scalpels; engineers without mathematics; writers without language.
If you word a question correctly, you can also find evidence for almost any belief with Google (it is Nietzsche's abyss: you ask Google, and Google asks into you). It is actually quite bizarre to have a conversation with someone online where that person has literally no real knowledge of the topic, they pepper you with "sources" (usually those sources do not provide evidence for what they are saying; they don't know enough to know that), and then (and this is very 2020s) they act outraged when you point out that they are wrong and have no actual knowledge of the topic beyond Google.
In my experience, these people have usually integrated Google fully into their own perception of knowledge. They view themselves as totally rational, and their views as all evidence-based, because they can type the magic phrase into Google and get justification... but, of course, the feeling comes before the source, and they have no real understanding of what "evidence" actually looks like. It is very odd trying to have a conversation with someone who is totally irrational but believes heavily in rationality (this is the "source? source? source?" meme of Reddit).
I think this is distinct from the SO stuff. Programming languages are very explicit, so it is easy to forget exactly how to do something basic if you do it infrequently (for example, I can never remember how to get an env variable in Python; I do it in every project, but usually only a handful of times per project... it is easy to forget; see the sketch below). Once you get into more complex problems, you can look stuff up on SO, but you still won't be able to do it unless you actually understand what you are copying (in my experience, I have usually copied something, something is slightly different, and then I spend time going through it/reworking/understanding).
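For what it's worth, the forgettable snippet in question is a one-liner. A minimal sketch; the variable names and fallback values here are purely illustrative:

```python
import os

# Returns None if the variable isn't set.
home = os.environ.get("HOME")

# Or supply a default (the name and fallback are illustrative).
log_level = os.environ.get("LOG_LEVEL", "INFO")

# Direct indexing raises KeyError for a missing variable, which is
# often what you want for required configuration.
path = os.environ["PATH"]
```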
With Google available, there's no upper limit on programming complexity. Before search engines, programming had to be simple enough that people could learn all the necessary parts. That restriction has been removed.
It would be interesting to see if these results replicated when the target was a "home encyclopedia set" or a "university library".
Probably not possible these days, since it would be hard to test when people already have internet access, and part of that access can include those things.
But I suspect this phenomenon is not unique to the internet, and would be more generalizable to any readily available source of information. Though the effect would probably increase with ease of access, so the internet would produce a larger effect. I acknowledge this is just speculation though.
Something you recently Googled is your knowledge, for the time being. It is not external.
You may have an inflated sense of being able to retain it going forward, but that doesn't make it external.
It's no different from any kind of learning.
If you're able to recite the information without looking at an online reference, it's internal; and if you looked it up a month ago, it's even in long-term memory. It might vaporize in another few months, but that doesn't mean it had been merely external.
There's a synthesis here: people know how to gain knowledge from the internet, even if only temporarily. It is a relatively explicit form of the phenomenon known as "external cognition", which every one of us utilizes in many respects and ways all throughout our days, even when the internet is not involved.
I found that having an offline-first mentality helps with slowing down and actually acquiring a deeper understanding of the topic you're dealing with.
I'm keeping offline docs for most of the tools I use. For example the Postgres manual is easy to browse offline, and offers a lot of insight into how the db engine works, much more than searching for random bits on SO.
These docs are stored locally on my machine as HTML files. I have nginx running just so I can browse them more easily at http://localhost/doc/
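If you'd rather not maintain an nginx config just for this, Python's built-in static file server is a lighter-weight alternative (assuming Python 3.7+ for the --directory flag; the port and path are only examples):

```
# Serve ~/doc at http://localhost:8000/
python -m http.server 8000 --directory ~/doc
```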
Also writing down things as you learn helps tremendously. I use Zim which is a local wiki application. It's easy to use and I have separate notebooks for different things (like work, programming, etc).
This pattern pretty much mirrors what you would do in school.
> At the Egyptian city of Naucratis, there was a famous old god, whose name was Theuth; the bird which is called the Ibis is sacred to him, and he was the inventor of many arts, such as arithmetic and calculation and geometry and astronomy and draughts and dice, but his great discovery was the use of letters. Now in those days the god Thamus was the king of the whole country of Egypt; and he dwelt in that great city of Upper Egypt which the Hellenes call Egyptian Thebes, and the god himself is called by them Ammon. To him came Theuth and showed his inventions, desiring that the other Egyptians might be allowed to have the benefit of them; he enumerated them, and Thamus enquired about their several uses, and praised some of them and censured others, as he approved or disapproved of them. It would take a long time to repeat all that Thamus said to Theuth in praise or blame of the various arts. But when they came to letters, This, said Theuth, will make the Egyptians wiser and give them better memories; it is a specific both for the memory and for the wit. Thamus replied: O most ingenious Theuth, the parent or inventor of an art is not always the best judge of the utility or inutility of his own inventions to the users of them. And in this instance, you who are the father of letters, from a paternal love of your own children have been led to attribute to them a quality which they cannot have; for this discovery of yours will create forgetfulness in the learners’ souls, because they will not use their memories; they will trust to the external written characters and not remember of themselves. The specific which you have discovered is an aid not to memory, but to reminiscence, and you give your disciples not truth, but only the semblance of truth; they will be hearers of many things and will have learned nothing; they will appear to be omniscient and will generally know nothing; they will be tiresome company, having the show of wisdom without the reality.
But don't we copy almost everything we know from outside anyway, and then think it's ours, without remembering who mentioned it or where we read it? Are we really original thinking machines, or do we copy most of the time?
I think of it as my own RAM. Search used to be quite slow and you had to have knowledge stored in your RAM to be proficient. Memory was valued but you also had to deal with human limitations.
Fast-forward to today, and search can be done quite a bit faster. Your internal RAM has effectively multiplied, or perhaps your storage (SSD) became faster and bigger. In that sense, what's become more valuable is not necessarily how much information you can physically retain in your brain/memory, but how FAST and EFFECTIVELY you can look things up.
There's a whole lot of people in here trying real hard to justify why forgetting things about your craft is good. Guys, I understand memorizing is hard, but can we drop the ego a bit and just admit to ourselves that yes, we would be better developers if we remembered the standard libs? It's not strictly necessary nowadays, but it has not stopped being a useful edge.
I think waitbutwhy gets it. https://waitbutwhy.com/2017/04/neuralink.html
Adding the internet as our knowledge source and keeping the brain as a cache for the most important things lets us scale our brain's capabilities an order of magnitude higher.
No matter how you say it, the acronym (PNAS) still sounds a little like "penis"; there's no escaping it. I've seen this topic come up organically at least a dozen times.
Which arguably is better than the joke ... "Paper Not Acceptable Science" or "Probably Not Any Science"
"Oh, and check it out: I'm a bloody genius now! Estás usando este software de traducción in forma incorrecta. Por favor, consultar el manual. I... don't even know what I just said, but I can find out!" --Wheatley, Portal 2
Our brains naturally seek out the easiest path to a solution, and one casualty is knowledge retention. As long as we know we have access to information, our heads won't retain much beyond where to find it.
It isn't access to information (internet or not) that is the problem. It is access to the "answer" that is the problem, or more precisely, access to an answer without an explanation of why it is the answer. People no longer "think" about whether the answer is correct (misinformation); they presume it to be true. Then they build up a mental model using that information, which leads to all sorts of wrong conclusions. In the old days, access to information (the library) tended to be much better, because the quality of the information was likely a million times higher.
This leads to "way" less thinking, and "thinking", the process of digesting information, is critical to gaining knowledge. The more access you have to answers, the less thinking you have to do, which ends in a vicious cycle.
I have been teaching kids these days and making this point extremely clear: it is not the answer that matters, it is how you arrive at the answer that is most important, especially in the age of Google. But generally speaking they do ask a hell of a lot of "why" questions :) Part of the joy of working with kids.
The issue comes in when it gives you the feeling of knowing more than you actually do, and you make judgments based on that feeling.
My concern is the difficulty of tracking down sources years later without a means of reference, unless you are actively archiving and organizing those links in a useful way. What we have is more prone to forgetfulness, but at the same time that is its strength. Still a pain, though.
> Isn't internet access at your fingertips a form of transhumanism
Well, to some degree. At some stage we will forgo having a smartphone and have Google hardwired to our brains; I think that's where we're heading eventually. Tech companies want to be inside our heads (if they're not already). Musk talks about the bandwidth problem of accessing information. If it were super readily available (i.e., hardwired to our brains), then we could be superhuman.
This is why we're to some extent already cyborgs: not in the cyberpunk aesthetic sense of being half machine, half human, but with smartphones we're already halfway there.