I appreciated their synthesis, but I think some general recommendations (imperfect but helpful) would also have been useful for the public:
1. Ask your primary care physician for a lab referral, or purchase a nutrient deficiency test from a direct-to-consumer lab (cost was lower than I expected at ~$100-$200).
2. Google whether the nutrient is actually bioavailable when taken as a supplement.
3. Use independent tests to ascertain brand quality for a specific supplement or overall trustworthiness (Labdoor is the one I'm familiar with).
The proliferation of these sources has torpedoed my Google effectiveness.
Not only do these sources amplify themselves, they are almost by necessity targeted at simple use cases. The result? Google has ample popular material filled with my relevant keywords but devoid of any usefulness to anyone not just getting started. As it happens, the people with the most questions _are_ just getting started, and they do find the results relevant, pushing content on the margins further down the list.
Worse still, with Google's increased focus on natural language processing, their seeming approach of "what you're really asking is..." makes loose queries even more difficult. By definition, the most common questions aren't edge cases, and certainly not the specific edge case you're interested in.
After all these years, I think I need to retrain myself on how to Google (distinct keywords no longer cut it unless I have a phrase I can search for as an exact match), and I recently started falling back to other search engines with some success.
I find the mental overhead of the always-on camera exhausting, and I prefer voice-only communication for certain types of conversations so I can dedicate all my processing power to what's being said. I also pace when I'm thinking, make coffee, lie down, etc.
In a pleasant twist of fate, migrating our full company (20ish people) into gather.town has largely made it a moot point for me now.
I have the choice between Zoom-like video boxes and small thumbnails, which I largely ignore in favor of viewing/interacting with the 2D scene.
Also, when you see someone walking by and they "drop in" for a quick chat, the context for the visual feed seems more natural and less encumbering than spinning up a Zoom meeting: they approach, you talk, they walk away, and you're alone again. If I pace IRL during the discussion and am off camera, my avatar reinforces that I'm still in the same space with the other participants.
If I don't want to be interrupted, I go to a blocked-off private area; the 2D equivalent of a DND status.
(Wish I didn't have to add this disclaimer but this is the internet: I am not, nor have I ever been, affiliated with gather in any way. That, friends, was just an earnest anecdote and soft recommendation.)
That's really interesting. I have used gather.town for conferences before and it works really well - but I haven't seen it used on an ongoing basis. It does make sense in a lot of ways to incorporate some of those physical cues of an office environment into a WFH setting.
It's strange but really cool to zoom out and see various meetings between people in different rooms, or, when we bring an outside contact into our space for a meeting, to introduce them to coworkers ad hoc (like in a real office).
Swinging by with a guest, like other real-world experiences translated into a 2D world, requires developing a new set of social norms. For instance, if you're screen sharing through their app, you won't see someone approaching, and the screen share feed will show up expanded by default for whoever enters. That can be problematic for a number of reasons (client data, personal info you were sharing with a close coworker, etc.).
We've also recreated games like tag using the confetti feature, have rituals like doing a lap around the entire space on go-karts after code reviews, and occasionally raid Client Experience's office for supplies.
It is a jealous god when it comes to CPU, though, and since modern JS environments are greedy as well, my fan gets a lot of work.
Are you able to compare the experience of being at a Big Tech company against a very small team with wide responsibilities?
I'm in my 11th year as a developer. Six years ago I left a pure tech company, where I was surrounded by other engineers, for a small one where I was 1 of 2 until this past year (I now lead a team of 3 stateside and nominally 3 more overseas).
Note for traditional startup folks: the lack of team growth may seem like an obvious signal of low performance. FWIW, we're entirely self-funded, and during that time our revenue has grown by nearly 5x and total staff by 3-4x.
From a career/skill growth perspective, I often wonder how being on my little island nets out against joining a larger elite team.
I've no doubt developed idiosyncrasies, but I've also directly or collaboratively coded, designed, deployed, and promoted every piece of software, including several web products built from scratch, that has been foundational to our success since the company's first year.
I really have no idea how to compare that experience with being a cog in a massive machine but surrounded by brilliant work and brilliant people I could learn from.
I don't personally have experience with early-stage startups, so I can only speak secondhand. I think both can provide growth in different ways.
Big Tech is more consistent and structured, while startups are sink or swim. I've seen people come in as Principal Engineers after building the entire product of a successful startup. But that happened because they joined a rocket ship as an early engineer, which is the programmer's equivalent of winning the lottery.
So if you can join a rocket ship by all means do it. Otherwise Big Tech is generally a safe bet.
> Are you able to compare the experience of being at a Big Tech company against a very small team with wide responsibilities?
Most "Big Tech companies" are already structured internally as small teams with wide responsibilities, which imclude owning multiple projects and with it everything required to develop and deploy them.
If anything, "Big Tech companies" have enough resources available to allow developers to work without having to deal with distractions.
I think it's a good point that there are many more domains within large organizations than one might expect from the outside.
However, people are undoubtedly in more specialized roles and teams on the whole.
I'm merely trying to explore the differences in growth as a developer between the two extremes.
While I don't think many of the people who build React interfaces at Facebook also spend days creating Postgres schema diagrams and responding to mission-critical DevOps failures, I certainly grant that the small-team, varied-project approach might be more similar than I realize.
In that case, or either way, the biggest difference in personal growth may be the surrounding ecosystem and knowledge base.
In one scenario you are flush with existing infrastructure and the thoughtful people who designed or maintain it. This can focus your work, as you mentioned, and it's also a great learning environment.
On the other side, you are working without a safety net, slogging your way through creating everything* from scratch. This forces you to learn new skills in unfamiliar areas and to manage all sorts of tradeoffs autonomously (there may be literally no one else to motivate you or to ask a question of, aside from your search bar).
* Infrastructure, tools, processes, etc. I'm not talking about reinventing the wheel, though you are free to do so at your own pleasure or peril.
I do think there is more activism at play than you might think. At a minimum it provides the fear mechanism that prevents employees from challenging the work, particularly if they aren't even on the project (why stick your neck out?).
However, the code format for prose analogy illuminates the writing quality angle brilliantly.
I'm sure I'm not alone in having distinct communication modes in work documents, each with their own reasons to be excluded from the ML dataset:
1. Keep it simple -- when achieved it is highly effective for communicating a work memo, but would be terribly dry if used in auto suggestions for an English major struggling through crafting moving prose.
2. Lazily verbose -- going long is more expedient than crafting a compact message. It's mostly unrefined garbage. (Exhibit A: this comment.)
3. Everything is awesome (positivity inflation) -- a deluge of great, love, awesome, wonderful, perfect, etc. all applied to far too many things, far too often.
Imagine if an ML code formatter included pseudocode inputs, or was given a dataset of every local file change (pre-commit) instead of what is sitting in a main branch. My Docs are filled with things that I'd never commit, much less get through a code review.
If Google Docs were only used in corporate settings, and we wanted to double down on our drab corporate communication styles, I guess the feature would make some sense.
Yep, I had struggled along these lines too, but the always-lucid John McWhorter cleared things up for me, and as a linguist it's right up his alley.
"On metaphor, master is a useful example. The basic concept of the master as a leader or person of authority has extended into a great many metaphorical usages. One of them was its use as a title on plantations worked by slaves...That makes sensible the elimination of certain other uses of the word, which parallel and summon the slavery one...in the 1970s, such schools had just begun a call to stop having male teachers called "master" and female teachers called "teacher," in favor of having all instructors called simply "teacher"...This meant that young subordinates had been calling white men in positions of authority "master," after all—including, by the 1970s, more than a few black students. And today's call to stop referring to technology parts as "master" versus "slave" attachments follows in the same vein, as it directly channels what was so offensive about the slavery usage...However, other extensions of the word master do not meaningfully resemble the plantation one, and only a kind of obsession could explain spraying for them now. Are we to consider it racist to refer simply to mastering a skill? To master tape as opposed to dupes...The plantation meaning of master was one tributary of a delta of extensions of the word; it should go, but we need not fill in the entire delta."
And for emphasis I'm pulling this one out of the main block:
Sincere question: since idioms and metaphors are rooted in history and shared experience, isn't dropping extremely common usages kicking the can (if you will) in the other direction?
In other words, by trying to be inclusive with "allowlist," aren't you taking away an opportunity to include ESL folks in that rich history?
Anyone will have to ask/learn about allow/denylist behavior anyway on their first encounter, and learning "whitelist" would have a multiplier effect that increases comprehension of the numerous similar usages they will inevitably encounter. That seems like active inclusivity to me, but I wouldn't be surprised if I'm missing something (I hadn't ever thought down this line until I read your comment).
Note: I'm specifically asking about otherwise neutral terms and have come around on avoiding any terms that are still used in the same way as their offensive history (e.g. no master/slave but yes master debater).
Same age, same boat, but I wonder about two independent factors: how my cumulative health decisions have depleted my energy reserves, and whether being lower on the Dunning-Kruger curve had some distinct advantages earlier in life (i.e. I expect things to be harder now, which adds an additional mental obstacle and fatigue). I also gave a shit about far fewer things, which further freed up mental capital.
Then again, outside of sports, nothing in my youth was taxing (particularly not school). The only real mental investment I had to make was verbal sparring with friends, and video games.
It was easier to do math in my head, but I had also been doing it regularly for years (full-stack web apps call for very little), and it was much easier to keep momentum on something like Tolstoy (though that could also be health related).
What about you? Any specific things you think you could incontrovertibly do better in your youth that aren't likely confounded by other variables?
> how my cumulative health decisions have depleted my energy reserves
I wonder about this too. Goodness knows that sleeping for ~4 hours or pulling all-nighters, eating the cheapest things on the shelf, and not getting in much physical activity late in high school and through college weren't sound decisions and are likely to haunt me for the rest of my days.
> if being lower on the Dunning-Kruger curve had some distinct advantages earlier in life (i.e. I expect things to be harder now which adds an additional mental obstacle and fatigue)
This isn't that much of a problem for me. In fact when it turns out that things put up less resistance than expected it's kind of refreshing and helps pull me along. The trick is getting to that point in the first place.
> What about you? Any specific things you think you could incontrovertibly do better in your youth that aren't likely confounded by other variables?
Hard to answer, really. Maybe sitting down and getting lost in the process of doing something creative… it came extremely naturally to me in my teens and early 20s, but there are so many background processes associated with being a responsible, somewhat functioning adult running in my head now that it's considerably more difficult. Probably fixed by removing the need for those background processes to run, but that's not exactly practical.
Are you referring to people often misapplying the concept, or its replicability issues (which I had admittedly come across before but don't hold in active memory)?
I'm confident I nailed the former (when I was younger I thought I was better at things than I actually was, and today I'm more likely better in those same domains than I give myself credit for).
If you're referring to the latter, it's tricky for me. It's like the famous two-humped camel in programming aptitude: it seems to be useful in describing experiential observations but didn't (doesn't?) replicate. I've stopped using it at this point because I don't want to mislead people.
At a minimum, DK has the appearance of explanatory power and utility in communicating the simple form of the concept. I don't like the replicability issue, but I wonder how bad the downside is if all I'm really saying is that we often aren't humble enough about new things and aren't confident enough about things we've dedicated time to.
Welp, that justification for promoting something potentially falsifiable surely isn't going to come back to bite me and expose my hypocrisy elsewhere...