So much of academia is about connections and reputation laundering (columbia.edu)
394 points by luu on May 15, 2020 | 219 comments



So much of everything is about connections and reputation. Tech in particular holds up a guise that people are chosen and ascend because they are “good”, meaning, they are good at what they do. That’s part of it, but it’s an insufficient part, and you can do pretty well in a career being solidly average at what you do.

The real currency in my experience is one thing - relationships. Do I think this person likes, respects, and would vouch for me? Everything else aside, that's what people optimize for when they choose who to promote, how they respond to reference requests, and generally who they engage with at work.

Oftentimes it seems like it may not even be a conscious behavior; they just know that's who they have a gut feeling about.

I definitely wish this wasn’t the way of the world, as someone who isn’t a natural when it comes to building relationships in a professional setting. But I also don’t let it get me down. It’s an element of human nature that’s hidden many layers deep in the workplace.

When I finally faced this fact and started devoting some of the time I previously spent on hard skills to building relationships instead, I realized it was far and away the more impactful way to allocate my resources.


Yes, this was the opinion I came here to post. My own profession of law is certainly like this, at least where I practice in Toronto, Canada, but I can't imagine other places being much different.

I think one advantage of computers is that what you produce at the end has to work - and if the code doesn't work or you don't know what you're doing, you're quickly exposed as a fraud. This is not the case in other fields like law, academia, politics, and even the corporate world where the results and methods are more abstract and given to opinion.

Unfortunately much of the world is run by people who excel in the abstract fields rather than the technical ones.


You make a very good point, but I'd like you to believe it's not unfortunate.

Most people I know would happily sign a document that states 'We hold these truths to be self evident that all men are created equal', even though the world shows us every single day that it isn't true.

In technical fields, getting that document approved would be a nightmare.

Maxwell's equations on the other hand are easy to get approval for. They are 'self evident'. Because they describe what we see, not what we want to see.

So thank god for Jefferson penning that line down and affecting how everyone thinks, because it shows us that for some problems, the ambiguous kind, where truth is what we want it to be, technical fields will struggle to provide answers.

It's very fortunate, especially for making progress on all the ambiguous problems, that society props up a Jefferson now and then.

How Jefferson got into that position, and not some jackass, is the most important question. And the answer to that, with a modern developing understanding of networks and graph theory, has only recently started moving from the abstract and ambiguous to the more technical - https://www.youtube.com/watch?v=07KKYostAJ0


I don't think it's true that computers are great at exposing this. Software ranges from the concrete (does this button do anything?) to the abstract (does this statistical program have an off-by-one error that's invisible to customers other than returning somewhat wrong results?). I think there are two competing definitions of 'work' in software: 'it works' as in 'allows for use', which can compete with 'implements properly'. Computers and code will brutally expose the former, since they simply will not let you do something if it's broken. They don't necessarily expose the latter.


If the software should let me do X, I can click around and confirm that I can do X. I think most people would consider that the definition of “works,” rather than the quality of the code, the validation that the proper thing was built, etc.

Looking at the code can tell you whether it was “implemented properly” - or at least, if it was done reasonably competently.

Relationships, though, can help you know if that properly implemented code was likely a fluke or not. Does the person ask questions? How do they ask questions? How do they communicate? How do they capture requirements? How do they push back when something seems unclear or unwise? etc.


I don't buy your arguments - at all. At the end of the day people have strong expectations about what software should do. If it doesn't do that, they will quickly notice. If they don't notice (now or in the future) things like your "off by one" errors, then that requirement likely wasn't that important to begin with.

Now, I've seen plenty of "demo-ware" software that has been presented as something that was much more than it actually was, but again, stuff like that always falls over, riddled with bugs and crumbling under load, when it gets substantial use.

Some things you really can't fake, and software engineering is one of them.


Let's say you have a for loop that is supposed to run once, but some "programmer" calls it a hundred times. The program still works, but it's a lot slower.
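
For illustration, here's a hypothetical little JavaScript sketch (made-up code, not anyone's real program): both versions "work" in the sense of producing the right answer, but one quietly does a hundred times the work.

  // Hypothetical sketch: both functions return the same result, so both "work".
  const items = Array.from({ length: 1000 }, (_, i) => i);

  // Reasonable version: one pass over the data.
  function sumOnce(xs) {
    let total = 0;
    for (const x of xs) total += x;
    return total;
  }

  // Wasteful version: recomputes the same sum a hundred times and keeps only the last result.
  function sumHundredTimes(xs) {
    let total = 0;
    for (let run = 0; run < 100; run++) {
      total = 0;
      for (const x of xs) total += x;
    }
    return total;
  }

  console.log(sumOnce(items) === sumHundredTimes(items)); // true - same output, ~100x the work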

Input lag on devices has doubled or tripled since the 1980s while clock speeds have increased drastically. Reddit and Hacker News do essentially the same task, yet Hacker News' load times are almost instant while Reddit takes more than a second to load at times.

It's true that you can't fake a minimum viable product, but it seems to me there is a long way from that to actually making something good.


We've outsourced the cost of development from the developer to the user.

No one wants to write low-level code because learning it costs money, so instead we just throw more hardware at the problem until it goes away...


Sort of. Would a user use all of that compute if it all wasn’t wasted? Probably not. What we’ve done is “inflate away” (similar to fiat and central bank policy) the need for additional human developer time through Moore’s law. We all pay for this through annual tech spend, which keeps the advancement treadmill running.

Human time is expensive, so it makes sense to throw anything cheaper at the problem first.


Perhaps we have different conceptions of 'fraudulent'. I don't think it necessarily implies 'fake', but something more along the lines of using inferior building material. Perhaps it 'wasn't important all along' or perhaps it'll lead to catastrophic failure in 25 years, who knows? Software 'engineering' may be hard to fake, but that's because you're probably defining it as the proper way to develop software. A lot of software is certainly not built in a way familiar to an engineer.


Building materials are judged by the standard of "survives for at least three decades without crushing those inside", not "survives the next reorg or pivot." What would be the equivalent of inferior building materials failing over 25-year timescales in software? C isn't even twice that age, and the oldest running mainframe software is probably only a little older, whereas we've had millennia of experience constructing buildings.


I know a friend of a friend who went to Harvard Law about 15 years ago, and she told me there is SO MUCH butt kissing going on there.

You'd think people who got that far would be confident enough to rely on their own skills/study to get ahead BUT even at that level, there is a lot of butt kissing to get ahead.


It's the correct asses though.


Is choosing the correct butt to kiss NP-Complete?


> I think one advantage of computers is that what you produce at the end has to work - and if the code doesn't work or you don't know what you're doing, you're quickly exposed as a fraud.

Is this actually true? You can bullshit coding...


Take the old guestbook example: if you can't add entries to the guestbook the code apparently doesn't work. No way around that.


Also, we can take this idea even further: what if only 1% of the entries in the guestbook don't show? Does the code work? What if it's so badly written that it takes someone with really deep experience to be able to prove the bug exists at all?

How do you go about proving this? In the real world people are going to dismiss it. If you are playing against a politician, you will be told that you simply have a bad memory, imagining entries that were never there.
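
To make that concrete, here's a hypothetical sketch (invented for illustration, not code from the thread) of a guestbook renderer with exactly that kind of quiet bug: an off-by-one that silently drops the last entry, while casual clicking around would still say the guestbook "works".

  // Hypothetical guestbook renderer with an off-by-one bug.
  const entries = ["Alice was here", "Bob says hi", "Carol waves"];

  function renderGuestbook(list) {
    const shown = [];
    // Bug: "< list.length - 1" skips the final entry; most visitors never notice.
    for (let i = 0; i < list.length - 1; i++) {
      shown.push(list[i]);
    }
    return shown;
  }

  console.log(renderGuestbook(entries)); // ["Alice was here", "Bob says hi"] - Carol's entry is silently gone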


If you are a lawyer and can’t write you probably wouldn’t be able to sell yourself as a lawyer. However being able to write doesn’t make you a lawyer.

Same with the guestbook. It may even work. But on the inside it’s a pile of unmaintainable crap.


Subcontract it to someone who writes it with security holes.


This is one reason I believe computer science should be taught in school just like mathematics. The experience is very humbling: you've got all this logic, all the rationale, to you it must work, but the computer knows it doesn't make any sense.

I've learned that most problems are easily dismissed by people as being "easy", "simple", just do this, just do that. I've found software forces you to really think things through; "just do this" is easier said than done - how do you do it?

Real problem solving is hard, and I do think computer science humbles you in that sense.

Here's an example: my friend always tells me he thinks we shouldn't take any more refugees. Alright, I'm not even going to debate the morals or anything like that. He just thinks whoever is letting the refugees in is dumb and shouldn't do that. That's surface-level thinking to me. How do you do that? There's a boat arriving at the shore, full of refugees, and it's not turning around. What do you do?

This is a hard problem, no matter what you think you want the features or use case to be, how do you do it? The logistics are difficult, how do you scale it? What about all the edge cases? What about the cost?

In that sense I think CS can help you learn how to really think through a problem from understanding the full implications and complexities and the challenges. We learn this by trying to model problems and their solutions very formally.


> Here's an example: my friend always tells me he thinks we shouldn't take any more refugees. [...] That's surface-level thinking to me. How do you do that? [...] What do you do?

It's funny you take this example because your point of view is precisely surface thinking too. I bet you don't have any concrete plan to fund housing, education, and healthcare for those people, nor do you have one for them to become productive members of the host society. There are dozens of factors that make it irrational to let any of them in, and a lot of countries (Australia and Japan, for instance) in fact have a successful policy of allowing in only a handful of them.


You're misquoting me. The surface-level thinking refers to this segment: "He just thinks whoever is letting the refugees in is dumb and shouldn't do that".

When people are coming to your border in large droves, not letting them in is easier said than done. You can't just "not let them in". To keep my analogy going, this requires an algorithm of some sort. First you need to find something that works, and then you need to take into account cost considerations, scaling, etc.

Your example would work as well. Someone who were to say "whoever is stopping them from coming in is stupid" would also be surface level. What do you do with them once they're in? Housing, education, integration? Does our system have the capacity to handle such load, or will it collapse? Just letting them in may not be enough; it could cause problems down the line - what are we doing to prevent those?

The right conclusion to take from my comment is that problem solving is hard, there are a lot of variables at play, a lot of considerations to take in, and it is never as easy as you assume at first. Recognizing this is part of being humble, and I think computer science teaches you that.

Thinking you have the answers because you thought about it for 5 minutes, not realizing you skimmed and hand-waved over all the direct and indirect complexities and considerations and are skipping multiple steps of the solution. Thinking that there is an easy solution and then advocating for it strongly when you haven't begun to recognize and understand the complexity of the problem itself. This, to me, is an indication of a lack of some form of critical thinking. I think computer science can to some extent teach people better about this, by having them practice concrete problem-solving exercises on a computer, which can assess whether the solution works or not. And learning formal problem modeling techniques and validation strategies. It's a useful skill.


>It’s funny you take this example because your point of view is precisely surface thinking too

But they're not...? GP didn't say they want to take refugees in; they merely said their friend wants to refuse refugees, without specifying how exactly they want to accomplish that.

>I bet you don’t have any concrete plan to fund housing, education...

I don't think this assumption is fair.


>and a lot of countries (Australia, Japan for instance) have in fact a successful policy of allowing only a bunch of them

I don't know about Japan, but Australia allows in large numbers of immigrants.


Immigrants are not refugees.


Australia took in many refugees from the war in Sudan.


> I think one advantage of computers is that what you produce at the end has to work - and if the code doesn't work or you don't know what you're doing, you're quickly exposed as a fraud.

This is a very naive point of view. Think of how car repair shops bullshit customers (especially before some mitigating laws came in) and then multiply it by pi or more.


This goes so well against the “free market hypothesis”.

There are industries where lying is simply a stable equilibrium.

You need law to keep those businesses going and fair. Otherwise the lemons spoil it for everyone.


You are correct, although I personally know two programmers who are not able to write basic JavaScript code and have survived for almost a year in their jobs by asking around, copying other people's code and asking for help ("I did this, but somehow it doesn't work!").

I don't know how to raise this issue at work without being insensitive.


> I think one advantage of computers is that what you produce at the end has to work

No, that's not true; that something works is beginner-level stuff in many cases. In some cases you're right though, because what has been asked is a super tough problem to crack.

I'll showcase the moments in which the "it has to work" requirement is not enough.

To show this example, I already have to break 2 conventions that are a no no, in my opinion. To other devs: I'm commenting everything (I can't assume someone who works in law to know code, he/she might, he/she might not) and I'm using globals for easier variable reassignment, to ease the example for readability.

Let's get started. The requirement: print 5 to the console of your browser [1].

Open up Chrome Dev Tools (view -> developer -> developer tools -> console)

Type in the following:

  five = 1 + 1 + 1 + 1 + 1 //compute 1 + 1 + 1 + 1 + 1 and save it in the computer with the name five
  console.log(five) //outputs 5

  five = 0 //reset

  // loop 5 times, and every time the pc adds 1 to the saved part of the computer named five
  for (loopCount = 0; loopCount < 5; loopCount = loopCount + 1) {
    five = five + 1
  }
  console.log(five)

  five = 5 //save the number 5 in the computer with the name five
  console.log(five) //outputs 5

  console.log(5) //also outputs 5

  FIVE = 5 //all-caps is a convention indicating that something is a constant, like a constant number such as 5

  console.log(FIVE) //outputs 5 as well, and also probably the way programmers want to see it.

Some of these methods have been really silly, and if you do that during an interview, you won't pass. People can point out that most of these methods happen one way or another, but not when you have a simple requirement such as "output 5". In fact, if that truly is the requirement, then I would go for:

  console.log(5)

And I'd consider the first 2 entries totally nonsensical (unless I asked for it with a funny voice or asked them to do whatever / be creative), the 4th entry slightly questionable but fine, and the 5th entry fine.

[1] Read the following link to have some context about what the JavaScript console is. Don't worry about not understanding the code or the technical terms. You're not missing anything, as far as context is concerned, I checked. Read it until the first paragraph of "Running JavaScript".

https://developers.google.com/web/tools/chrome-devtools/cons...


A question: is the fifth really the best? Seeing FIVE=5 always irks me, as it's an extra indirection for nothing. What do I gain with it? The ability to change it to FIVE=6? OUTPUT=5 maybe, but then console.log(OUTPUT) gains you nothing. Maybe NB_LEGS=5 and console.log(NB_LEGS).


This is why I put the word probably in the comment and why I stated that for this particular requirement I actually don't think this is the best approach.

You can always screw around with code, redefining code blocks in C would be fun to do.

But I did have situations where FIVE = 5 changed to FIVE = "five" or FIVE = "5" or FIVE = "vijf" (Dutch) or FIVE = "....." (ok, the last one is hypothetical, but maybe you want five points because you tick them off in a for-loop? I've seen something clever-ish like that in a particular fun Fizz Buzz example).

The layer of indirection is indeed a classical trade-off that you do or do not want to make.

Regardless of that, your question proves the point I want to make to kbos87.


Just continuing the discussion - I fully agree with you in principle. I stand by the fact that FIVE is not really a good constant. You will not change FIVE unless you also change the code to handle it, so it just adds another part to modify. And if you never intend to change it or reuse it... well, the only point of making it a symbol is to give it meaning, which FIVE does not. But at a certain point it's like discussing colours.


The way a former boss explained it to me is that bosses need to form pyramids to support their own ascension. It is just subtribes within the big tribe fighting for power and dominance.

Those networks that you form to support your own rise are used by your boss to support their rise. You get promoted because you will make your own boss look better in the future. The second this ceases to be the case, you get discarded like a used tissue. Competence is very far down this equation.

I suspect this is almost universal. The only forcing function is what features the ultimate power selects for. A great CEO forces the subtribes to compete on the basis of value added.

We pretty much act the same way as those chimps in nature shows.


> Competence is very far down this equation.

The worst part is when performance review time comes around. It's obvious to anyone looking at the situation objectively that competence doesn't matter much when choosing who to reward and who to fire, but seeing the tribe members pretend otherwise and target hard workers who aren't part of the clique is absolutely devastating to morale.


In corporations, yes. Corporations are like communist governments without authorization for coercion -- relationships and networks determine your impact.

If you sell a product, there is a different calculus for ascension.


Yes.


Reputation is very important. It is a cached value of "worthiness". Knowing it allows you to save months, or maybe years, of expensive experimentation.

The validity of any cached data can be a hard problem, and in the case of reputation it certainly is one. Still, however imprecise this tool is, its usefulness is so high that it's not going to disappear from human interactions.


I've been thinking about this a lot lately and have come to a similar conclusion. I've been very focused on technical skills lately, but in doing so I've started to lose some connections in my org and network. Career growth depends more on relationships than I'd like to admit.


In my experience, coworkers gravitate towards each other socially because they respect each other's skills. Competence, humility, integrity etc. in your work are a major source of those warm fuzzies.


But humility and integrity are "soft" qualities, outside raw technical prowess.

A talented jerk is a scourge of a project (especially in open-source): the engineering might attract others in the short term, but being a jerk makes long-term interactions painful and not very fruitful.


Talented jerks can absolutely destroy a team. In my experience, the actual degree of coding prowess turns out to be significantly less important than the ability to collaborate.


Sure, but these are qualities that come across in how you do the work, that make people want to build relationships with you.

Brilliant jerks are bad, but engineers also tend to resent politicians.


Ultimately this does make sense in some ways. It's commonly accepted that you would want to avoid a 10x programmer who is toxic and drags everybody on the team down. So what's the huge difference between that and what you're describing? You'd much rather have in an important position somebody who everybody can trust, has mutual respect for, and is willing to work with, than somebody who is untrustworthy and looks down upon everybody. This pulls everybody together and eventually leads to much better results for the organization (or the hunter/gatherer group, for example, in ancient times), compared to the situation where one person has brilliant hard skills but makes everybody really uncomfortable and demotivated.

Of course, one might interpret what you said in a slightly different way, in that two groups of people would fight each other and only vote for people within their group, which would be sorta counterproductive for the whole organization. However, the behavior still actually makes sense for the particular group those people are in and creates the maximum benefit for everybody in that group. For the organization, it would then be a problem of reconciling the interests of different groups of people.


When you're focusing on your own skills you're polishing an aging vehicle for value delivery, which is mostly only good when someone else glues you together with other people. When you're polishing a social network you're engaging in a more amorphous activity with respect to value, but you're also investing in something beyond you, a kind of social coherency which will likely outlast your death.


I think that's really true from the perspective of a fairly experienced person, but I also think it takes a level of skill to build social currency. Building a network is 10x easier when you have done something noteworthy compared to just pure networking off your personality or whatever.


>Do I think this person likes, respects, and would vouch for me?

But do you know who else does this for you? A dog.

I've never promoted anyone because they like me, respect me, or vouch for me; I promote those who are reliable and have the skills to achieve the task at hand, people who will not betray themselves or their company for some kickback.


Both are valid IMO. Of course you wouldn't choose a spineless clown who only knows how to please others but doesn't actually do any real work (though admittedly this might be what's actually happening in some places). But on the flip side, you probably also wouldn't choose somebody who does 10x work but has endless toxic conflicts with others and can't cooperate or earn mutual respect.


But a person doesn't have to respect me, vouch for me, or like me to be a toxic person.


Mixed experience here. Technical breadth has helped me a lot, but relationships helped me more. They're not mutually exclusive. If I had barely enough technical skills, though, no relationship could save me.


Yes. Too much networking without any technical accomplishments can often backfire.


I think these relationships are the spice of life.

It's just any sort of scorekeeping added on top screws things up.


I clicked on comments so I could type exactly that .. "So much of everything is about connections and reputation."


We remember the really good people and the really bad people. The mediocre ones... we'll see if they are a good fit.


I've noticed this when applying to FAANG. The online resume site doesn't cut it. You need to know people to get past the resume screening, even if you have a bachelor's + master's in CS, TA experience, and some work experience. Someone did reach out via HN recently, for which I am really grateful [0]. I'm curious to see how that works out.

It's not only FAANG; I've also seen this everywhere in the Netherlands. I've been looking for a job for 18 months -- I've been picky, as I was very ambitious during uni (yep, my mistake; ambition can be very detrimental to one's career as it can make someone quite picky, aka only big corporations). The irony is that a couple of months ago, I walked into my old university, met an old colleague who had a startup and wanted to hire me straight away after the most relaxed interview I've ever seen. They wanted me to read some code and explain what was happening; it was about some caching system.

So yea, it's all about people.

From my perspective, there is little meritocracy to be found in the tech world [1]. Maybe past the resume screening? Definitely not before it.

[0] Next to my own gratefulness, what I always find amazing to see is how some people in really dire situations get completely picked up by HN and sorted out, in some cases. I really feel for the homeless people in the tech industry, because it's a relatively rare issue, but it does exist! And those people tend to be able to find help here.

[1] Maybe there's a lot of it compared to other industries, but if a house has burned to the ground and other houses are starting to catch fire, while in other cases entire city blocks are on fire, then I wouldn't call either a good situation.


FWIW, this wasn't my experience at all, but also there may be differences between the US and Europe. I went to an unremarkable top-50 state school in the US, graduated with a 3.0 GPA BS in CS, had a single internship at a no-name local company, no research experience, no referral, no significant personal projects, no TAing experience, and I got interviews at Amazon, Facebook, and Google by cold applying online. It was at the smaller companies and startups where I couldn't get past the resume screen, likely because unlike FAANG they can't afford to interview literally tens of thousands of candidates every year.


The reality hurts so much, and it's upsetting to me that I can't be more grateful (some people that I know are homeless and hungry - relatively rare in the Netherlands, but it's there). I should've done my own research when I was young, but I was young. I didn't know that going to the US would be so tough; no one tells you. 18 months of not even being looked at in most cases, especially by US companies (some Dutch companies did). I am constantly bordering on a mild depression, now that I think of it. It just feels so fucked up to not have one phone screen at my first-pick companies while coding grads do (e.g. Full Stack Academy in New York).

My CS master's grade was an A [1, EU grading system explained]. I'm sorry for writing all of this; the 3.0 GPA got me triggered.

I think it's because I'm from Amsterdam and not from the US. I've also noticed from watching YouTube that US people have an easier time applying to Google as a grad - a way easier time, in fact. I have found no videos of Europeans doing something similar. Not that I have looked for them; I simply come across them.

It feels that my predicted future was a complete lie. I failed in my goal, spectacularly. Everything I gave up for it was in vain. Getting to FAANG later won't be the same, it's not the career trajectory/velocity I want. I should've partied a lot more. That would've been fun. I do appreciate the education though.

Perhaps I should find a career coach.

[1] European GPA: 8.1 out of 10. European grades are much harsher; a 10 means a god-like level in many cases. I don't think anyone has gotten a 10 as a GPA; 9.1 or 9.2 means you're the best or one of the best in the country.


That almost certainly has nothing to do with you - hiring a fresh graduate through the H1B program is practically impossible in my experience. H1B applicants have to show "significant work experience" in a relevant field and unless you have a PhD (and often times even then), any work experience gained during your education will not count towards that requirement. Most companies won't even bother with the interview; 18 months sounds like the bare minimum to even have a chance at meeting the requirements.


@akiselev: I haven't only been applying to US locations of US companies (e.g. Google Switzerland). But a SWE from Google gave me a referral link, for which I'm grateful. I wonder to what extent it will help, though, since it went through HN and we just met. I want a phone screen and to be told whether I am or am not good enough. I can make peace with this if one FAANG company shoots me down on my D&A skills. For my own sanity, I've resigned myself to the idea that I'm not going to get a phone screen, as I have little faith in the Google referral helping me past the resume check, but we'll see.


I don't know how many satellite offices you've applied to in the EU but I bet your sample size is very small. Don't get discouraged! Hiring processes at most companies are byzantine, multinationals even more so. It's really hard to tell who's pulling the strings or what the constraints are unless you're directly involved in the hiring decisions. I've passed on at least one once-in-a-lifetime hire (brilliant engineer from the Indian space program) just because I thought the H1B paperwork would interfere with a project deadline since my employer was a startup and didn't have the HR infrastructure a big company would have.

More importantly: fuck them. Don't judge your self worth based on the latest shiny megacorp's inscrutable interview process.


> Don't judge your self worth based on the latest shiny megacorp's inscrutable interview process.

The following comment might be too candid and too unpopular. I'm sorry about that. I need to write it down somewhere with the potential for some interaction (it helps me learn). I know I'm not alone in this feeling, though I'm pretty sure the majority of people won't share my opinion. So either there is something I should correct in myself, or I'm a bit odd regarding what I'm about to say. I guess some financially independent people feel similarly about this.

---

It's not about self-worth. My financial worth depends on it, and with it the freedom and possibilities I have. Now, I may be biased in how I view my options.

Which is why I said, maybe I should find a career coach.

The way I see it: no job at FAANG means less secure freedom. There is a lot of freedom to be had as an entrepreneur, or as an artist. It's also a lot less secure. There are indeed drawbacks for being in golden handcuffs and not having a lot of free time when working at FAANG. But as far as I can tell: while the freedom is skewed towards money and not time, if you don't have lifestyle inflation and save up the money, you can retire quite a bit earlier.

I want freedom. I want to retire earlier.

And now that I know that I have to work at least until I'm 67, because I can't get an amazing career start, that hurts. I know it's a spoiled statement to make compared to the rest of the population and perhaps even an insult to the world to consider this normal. Nevertheless, I've worked for this goal in particular and I'm seeing nothing of it back. My family has worked at this as well; they did their best to help me succeed. Yet, I spectacularly failed.

I'm not worth less because of it, but financially I am worth less because of it. And because I'm worth less financially, I am able to do less with the life that I want to do.

Of the waking hours:

- 50% of my life is spent working

- 25% of my life is spent doing mundane tasks

- 25% of my life is spent doing what I want

If I could increase from 25% to 100%, then in a sense I live 4 times as long.

Note: I like programming, but I don't love it. For me programming is similar to physically moving around, except now I'm physically moving around in the digital world. I like to do it when it's needed, but not much more than that. I'm not an athlete (a person who only loves physical movement).

There are so many things that I love (that don't make any money or are very risky). I want to do those things instead, but I've seen with a lot of family members how that turns out (bad). Startup failures are real. Life changing successes don't come around often, if at all. This is even the case when you're a person who does everything right.

Now I know that I have to be happy with living a life like the rest of us: mostly doing things that I don't want to do but have to.

I know I have to grow up in this sense (despite my age), but it's a gloomy future and one I don't get excited by. It feels too boring. I don't really see the point of it other than raising children and doing your best they can live a life that feels fulfilling to them, if you have them already. If you don't, then one should reflect deeply on whether they want to burden their children with a father who feels that life is too boring (because they are mostly doing things they don't want to do) and couldn't care less.


When I was in a similar position, I had to make loads of applications, almost a hundred. The ones that came off are the ones I least expected. I needed this many because in the beginning I sucked at interviews, misunderstood questions/what people were after, etc. Also, FAANG will not necessarily give you the most meaningful work; there are plenty of smaller firms working on important things in life.


FAANG gives you money, in Europe there aren't that many better places to be. I started out with a typical salary and joining Google doubled it, and then raises at Google doubled my salary again over a few years, that has saved me many thousands of hours of work that I can spend on other things in life so far.


I suppose I'm not the only one then. If you have any career advice, it would be appreciated (my email is in my profile).


I'd say the overwhelmingly most important thing in getting past the first screening rung is your "fit" with the job as determined by HR. They can be downright stupid about putting the fit above technical ability and experience.


Economics is the most cliquish field by far. There are basically five universities that matter; they get all the citations, no matter how dumb their research is. It's just fashion (and grant money). Everybody else desperately tries to win favor with the elite and just replicates their worst tendencies. In a real science you get citations all over the place. In economics all the citations are Harvard/Yale/Princeton/Stanford/Chicago. Somehow it's impossible that anybody in South Dakota or Missouri could ever do anything worthwhile; I wonder how that's possible. The cliquishness gives them license to support whatever insane ideas the Jeffrey Epsteins of the world prefer.


The incentives are also out of alignment - politicians have an enormous incentive to promote economic theories that let them pork barrel to constituents. And a slight incentive to favour theories that increase government control of industry. Major financial players have incentives to push theories that favour wealth accumulation to big players.

I note with some cynicism that there has been no apparent push by economists to promote workers as primary owners of companies, for example. I suspect pervasive co-op style businesses combined with a reasonably permissive lending environment would be absolute economic powerhouses.

That sort of research no doubt happens, but it gets no airplay compared to people pushing branches of Keynesianism or Modern Monetary Theory.


> a slight incentive to favour theories that increase government control of industry

I think you have this the wrong way around. Mainstream economic theory (the neo-liberal kind) actually leads to the control of government by private capital. What you will see coming out of the top schools (especially Chicago) but many others too is theory that pushes for de-regulation, privatization, free flow of capital, laissez-faire etc. The exact opposite of government control over industry.

> no apparent push by economists to promote workers as primary owners of companies for example

This is (probably) true, but the reasons are again quite simple and don't involve the state very much. Great concentrations of wealth are built and maintained by keeping capital ownership in as few hands as possible. Since academic economists are often beholden to big capital owners (in one way or another), they will of course promote theories that justify and encourage concentration of ownership, not its dispersal among the workers.

> it gets no airplay compared to people pushing branches of Keynesianism or Modern Monetary Theory

MMT especially is not at all a mainstream theory. I would say most economists consider it at best "heterodox" and often either don't know much about it or strongly disagree with it.


> MMT especially is not at all a mainstream theory.

AFAICT, the descriptive aspects of MMT are widely accepted (if deemphasized in prescriptive contexts) parts of mainstream economic theory - not just [neo-]Keynesian, but across essentially the whole spectrum of descriptive economics. The prescriptions that MMT adherents make based on those descriptive aspects are out of line with mainstream prescriptions, which tend to honor what MMT loudly points out (and mainstream economics more quietly acknowledges) is the fiction of the finite public purse.


Is something descriptive really accepted if it is deemphasized in prescriptive contexts?

Take the very basic thing that you mention at the end, which should be absolutely non-controversial: the US government cannot be forced into default.

If your prescriptions are just going to ignore that fact, then have you truly accepted it? I'd argue that no, you really haven't.

(That doesn't mean you'd have to follow the prescriptions of MMTers necessarily, e.g. the Job Guarantee is certainly not a logically necessary conclusion of the fact that a sovereign government cannot go bankrupt. But your whole framing around government spending and revenue really does need to be centered around this observation, or you're simply bound to fall into fuzzy and incorrect thinking all the time.)


If I lend the US government enough money to buy a sandwich, and get back only enough money to buy a half-sandwich it really doesn't matter to me whether they technically defaulted or not. I am down half a sandwich.

The US government 'can't default' but that is basically word games for a complicated tax where nobody is quite certain who is paying. The government is definitely consuming real resources and unlike a tax it isn't at all obvious who would have gotten those resources had the government not redirected them. I'd rather governments were straightforward and levied taxes to pay for things so we know who is supporting state spending.

The conversation really hinges on the semantics of 'default' - in real terms the Government absolutely can default. At some point the country has collapsed, there is nothing left to give (see classic hyperinflation cases) and the government will not make good on its debts. In nominal terms the government can't default but anyone who treats that as useful in their decision making is going to lose out sooner or later when it comes back to real goods and services. I don't want to be one of those people, and I don't want there to be any people like that because it seems dishonest at some level to pretend they aren't losing out.

People call US bonds 'risk free' - that is only in nominal terms. In real terms they are actually quite risky. Take on a 30-year treasury bond today and there isn't any certainty how much it will be worth in 2020 dollars as it matures. Is it a likely net win on the sandwich scale? Signs point to no, but it might be. There are risks.
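
As a rough sketch of the sandwich arithmetic (the numbers here are made up for illustration): a bond can return more dollars than you lent and still buy you less.

  // Made-up numbers: lend $10 (one sandwich today), get $11 back a year later.
  const lent = 10;          // price of a sandwich today
  const repaid = 11;        // nominal payout a year later
  const inflation = 0.25;   // sandwiches now cost 25% more

  const nominalReturn = repaid / lent - 1;                   // 0.10 - looks like a gain
  const realReturn = (repaid / lent) / (1 + inflation) - 1;  // about -0.12 - actually a loss
  const sandwichesBack = repaid / (lent * (1 + inflation));  // 0.88 of a sandwich

  console.log(nominalReturn.toFixed(2), realReturn.toFixed(2), sandwichesBack.toFixed(2));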


You are right. But note that in order to contradict me on the surface, you are in fact agreeing with my point by starting to shift the conversation away from nominal terms like the deficit and towards real resources -- which is the correct framing!

The next steps are to recognize and integrate into the conversation that:

* Whether and to what extent the government consumes resources that would otherwise have been consumed by somebody else cannot be determined purely by looking at the deficit as a single number. A government deficit, when done right, stimulates the economy which means that it causes the creation of resources that otherwise simply wouldn't have been created.

* Inflation, which is what you're really getting at, is complicated and has many potential drivers. A government deficit can be one of them, but isn't necessarily. There are many other potential drivers: overly lax monetary policy, excessive bank lending, entirely internal mechanisms such as genuine supply shocks, businesses' price hikes in an attempt to increase profits, strong unions helping drive up wages broadly across the economy, and so on. Note that some of these effects, especially the last one, are actually desirable for most people, meaning that inflation can actually be a good thing for society overall! Admittedly that happens rarely in practice, but that's really a function of workers having too little power. It all depends on the details.

In fact, on that last point there's reason to suspect that we'd have had quite a bit lower "effective" inflation (meaning higher purchasing power of wages / salary) for the majority of the population today if governments had decided to address the Global Financial Crisis by direct job creation and handing out money to the population at large, rather than leaning on monetary policy which really only caused asset prices to balloon.

This alternative policy wasn't even discussed seriously, because people largely do not understand that the government cannot default. Discussion was shut down with slogans like "making sure that the US is not going to be the next Greece", which are complete nonsense given that the US and Greek governments operate under very different currency arrangements (sovereign currency like Japan, vs. the effectively foreign currency of the Euro in Greece's case).

So anyway, the point is that the framing in real resources matters significantly, precisely because there is no ironclad correlation between government deficits and real resources. People need to learn to go into the real resources framing and then stay there.


Hmm, interesting. Do you have a few references from let's say top five journals that incorporate MMT descriptive aspects? I'm genuinely curious.


>Great concentrations of wealth are built and maintained by keeping capital ownership in as few hands as possible

That's a tautological and zero-sum way to look at it


The implication was that concentrations of wealth are not a law of nature, but rather an emergent property of how capitalism is set up. This idea is not tautological, just a causal relationship.

Regarding the 0-sum aspect, I don't know how else you can look at it. Ownership of capital has to follow a certain distribution, which can in turn be more or less egalitarian, depending on how we decide to set the system up. It is, however, not possible to have both concentrated ownership and co-op style ownership at the same time (for the same company).


To be more specific, it's all down to how the society understands and enforces property rights. If land and real estate can only be held in usufruct, for example (i.e. if society only protects against violations of those usufruct rights), then capital cannot be concentrated indefinitely.


> The exact opposite of government control over industry.

Only in the sense of "industry control over government". Which doesn't strike me as any improvement.

The true opposite would be "industry has to fend for itself without being able to co-opt government to tilt the playing field in its favor".


> The true opposite would be "industry has to fend for itself without being able to co-opt government to tilt the playing field in its favor".

Sure, good point. However I don't think it's possible or desirable to let industry be completely independent of the state. It seems to me that would lead us right back to our current predicament - power would concentrate and it would start putting pressure on the state.


> power would concentrate and it would start putting pressure on the state

The state itself is a concentration of power. Industry wants to co-opt it for that very reason.


Not saying that the state is a good thing. It's slightly better than a corporation though, because it's under some sort of democratic control.

Ideally though, all concentrations of power should be dismantled.


I'm not sure that's realistic, with respect to concentrations of power.

Whatever power you use to reduce concentrations of power would undoubtedly become quite powerful.

I think that it's probably better to attempt to reduce the size of any one concentration of power, and set them all up watching each other.


> it's probably better to attempt to reduce the size of any one concentration of power

Agreed. It would be great if power was as decentralised as possible and if individuals would take an active part in their (self-)government.


> I suspect pervasive co-op style businesses combined with a reasonably permissive lending environment would be absolute economic powerhouses.

Based on what? The data seems to point in the opposite direction: successful, innovative businesses are driven by singular leadership. Apple since Steve Jobs died has not only lost much of its innovative spark, but quality has plummeted. See also Tesla, Amazon, etc.

I don't like the implications of that, but I see little evidence pointing in the other direction.


> I see little evidence pointing in the other direction.

Hrm, I think it depends on what they meant by co-op style (as in the previous sentence they say "primarily worker owned"). I agree singular leadership and direction is important, but that's not necessarily at odds with primarily worker owned, even if it's at odds with some versions of worker owned.

An example of a company where multiple CEOs have driven it to success at different times and in slightly different contexts would be Microsoft. Originally helmed by Gates in a manner somewhat similar to Jobs and Musk, it's now helmed by Nadella to great success. I would put that down to Nadella having a singular vision, and buy-in from the company so he can achieve it. I'm not sure being worker owned (other than it being kind of hard to keep it that way once you get large enough...) would change that, as long as he still had the confidence of the board (which in that case would be a council of workers, I guess?).

Another way to think of this might be the (traditional) American auto industry. While not worker owned, as I understand it the unions have quite a lot of power (including board seats), and that might be seen as somewhat of a proxy for large successful (significantly) worker owned companies, and those have gone through periods of great success at times as well.


Singular leadership, sure, but we accept shareholders wielding power as normal and fine and not contrary to putting one person in charge. Why not workers?


Steve Jobs didn't own Apple; he'd apparently sold his entire stake around the time he was ousted in 1985 [0].

Your evidence doesn't quite say what you think it does; you are talking about day-to-day management, I'm talking about ownership structure and how day-to-day management gets appointed/fired.

But no specific evidence, just first principles reasoning. It is hard to see why it would do badly.

[0] https://www.businessinsider.com.au/steve-jobs-original-apple...


> But no specific evidence, just first principles reasoning. It is hard to see why it would do badly.

There are concrete examples of worker-dominated organizations: schools, public transit entities, etc., where powerful worker unions dominate policy. They are almost universally unsuccessful, as worker interests take precedence over delivering a product to the consumer.


But none of those things are worker owned either; they are generally owned by the public. The success or failure of a school or public transport entity has nearly no impact on the success or failure of the workers (short of catastrophic mismanagement, anyway).

I suppose we have trialed things like worker ownership in early-stage startups with high equity compensation. That might be evidence that it works well. Letting workers capture most of the value they create would be at least as interesting an experiment for me as Universal Basic Income; but I think UBI has much more coverage as a political idea.


The danger with arguing from first principles is that a lot of real-world failures sound pretty fantastic. Take communism, or the satirical "A Modest Proposal".


If you thought “A Modest Proposal” sounded pretty fantastic, you must have read a different essay by that title than I did. And saying that communism sounds good in theory is also a pretty superficial reading of Marx and others.


There’s more than one way to skin a cat. And the jury’s still out on Apple.


> The incentives are also out of alignment - politicians have an enormous incentive to promote economic theories that let them pork barrel to constituents.

If this were the case, the United States would have adopted full-on UBI socialism decades ago. Or at least, the net taker states would have prioritized advocating for that sort of thing. What we've actually seen is the complete opposite to your theory.

The actual incentives that politicians have are not to their constituents, but to their sponsors.


Keynesianism and MMT are totally compatible with labor owning companies. Keynesianism in particular is strongly pro worker/consumer (aggregate demand), and both are pro debt fueled spending to combat unemployment.


You must be thinking of the top 5 journals. At the places with money, top 5 publications are the only thing anyone cares about. That's not the entire profession by any means - only those in the highest-ranked departments. The vast majority of papers I read and cite were not published in the top 5 journals.

As someone else pointed out, MIT is one of the top 5 departments, while Yale is generally not considered to be in that group. The list of departments with Nobel Prize winners shows that there are plenty of good departments: Harvard, MIT, NYU, Yale, Chicago, Princeton, UCLA, Stanford, Northwestern, Berkeley, Minnesota, Columbia, Arizona State, Carnegie Mellon, San Diego, Arizona...that's just in recent years.


You'd sort of have to distrust the ability of any economist who wasn't good at squeezing the most out of a compensation system.


Not as much as I'd distrust the advice of the squeeziest.


Can you point to evidence that the top 5 (to which I would add MIT and Berkeley) get a larger share of citations in economics than in other disciplines?


I could find one source here with such a ranking (didn't dig into their method though): https://ideas.repec.org/top/top.inst.nbcites.html


Isn't M.I.T. traditionally the powerhouse school in Economics?


I would argue that the results from the now infamous “Replication crisis” point otherwise.


> The cliquish gives them license to support whatever insane ideas the Jeffrey Epsteins of the world prefer.

Was that sentence necessary? There are so many conspiracy theories about Epstein that it's not clear what you are expressing here, and I'm not sure I care to know.


I think it's meant to be read as, "rich, powerful, and totally undeserving of influence."


Very difficult to understand what's going on here, through the snark, personal abuse, name calling, and political bias. Everyone's trying to communicate through increasingly scathing Tweets and declaring each time that the previous Tweet is yet another new low point. Not a useful way for anyone to get a point across. Everyone should be embarrassed.


Definitely agree, this whole affair has a real Rashomon vibe to it. It feels like everyone involved (the article, Philipson, and Furman) is talking past each other. As far as I can tell, everyone agrees forecasting and curve fitting are different things, and that a cubic fit (red line, allegedly) would be a bad representation of what the future will hold. The disagreement seems to be:

- Furman believes (or claims to believe) the original tweet was being deceitful by presenting a curve fit as a curve fit but leaving enough ambiguity that casual viewers might interpret it as a prediction of future deaths dropping to 0 by mid-May

- Philipson believes (or claims to believe) that Furman thinks curve fitting and forecasting are the same thing, and does not address whether the original tweet was attempting to be coy or deceitful

- the article believes (or claims to believe) that Philipson believes a curve fit actually is a good prediction of the future, then goes super wide with it to make some vaguely related point about academia

I believe no one is lying or stupid, but everyone is finding the worst possible interpretation of events so they can dunk on each other.


https://en.wikipedia.org/wiki/Rashomon

"Rashomon (羅生門, Rashōmon) is a 1950 Jidaigeki psychological thriller/crime film directed by Akira Kurosawa...

The film is known for a plot device that involves various characters providing subjective, alternative, self-serving, and contradictory versions of the same incident."

Now that is an esoteric reference!


It's one of the most famous Japanese films, top ten perhaps.


thank you for this great summary; i couldn't understand it myself, but can with your help.

my only regret is that now that I understand it, i realize it's not interesting (to me personally)


Honestly who knows if I'm actually interpreting it right, the whole thing was very confusing.


No, I don't think that's right. That was my initial impression also, but what's going on is that people are ridiculing the academic advisor working at the Council of Economic Advisers at the White House, because his agency (a) released a tweet about Covid with childishly absurd mistakes in it and then (b) he vigorously defended it in a follow-up tweet.

The problem they are ridiculing is mainly that an observed time series that is going up and down with no clear trend is "modeled" with a polynomial that necessarily goes down, and that the tweet claims there is some value in the fitted polynomial.

Secondarily, the fact that the polynomial is described as cubic, when it appears to be quadratic.

Added together, it makes it extremely and explicitly clear that at the highest levels of American government, even in situations with the involvement of elite academic advisors, the actual technical content is utterly incompetent; childish; totally illiterate from a statistical point of view; bad even by high school standards.


I came to the same realization, but it was unnecessarily difficult to see through all of the snark. If all parties had simply focused on the salient points of disagreement rather than snarking at each other, it could have been cleared up much more quickly. The GP is right; all sides should be embarrassed.


It would not have "been cleared up", because the CEA didn't make an honest mistake, it was a deliberate attempt to seed disinformation in order to support the President's political agenda.


Exactly. It was the epidemiological modeling equivalent of suggesting we all inject disinfectant. Only it was worse because it was politically motivated instead of solely stupid.


There's a subset of people that would prefer to think that the halls of power are manned by morons, rather than the criminally selfish.

I don't know what that says about any of us.


Malice and stupidity are not mutually exclusive.


Sorry but you’re completely wrong.

If an analysis is making a reasonable effort to use the best or even some sort of relevant methodology in the field maybe. There is an entire field built around the learnings and failures of previous work that informs how we do things now.

In this circumstance the economist came in with no understanding of, or desire to understand, any of that and threw some random Excel function at the problem to get the answer they wanted, for political expediency.

Not to mention the prediction was ridiculously stupid and anyone with common sense, let alone an epidemiology degree, could see it was going to be wildly incorrect within days.

THAT is the model being talked about at the highest levels of government when you have the entire field of world-class epidemiologists at your fingertips.

Trying to “both sides” this one because scientists who have spent their whole lives working on this got a little snarky is just missing the boat.

There are plenty of other places where epi Twitter is having productive cordial discussions.


Yes, the prediction was clearly stupid. No one is arguing that. But claiming it was the "worst day in the history of the CEA," or words to that effect, amounted to a long and pointless distraction that could have been avoided, and both sides were clearly guilty of it. The whole exchange would have been rightly flagged into oblivion if it appeared on HN.


>Secondarily, the fact that the polynomial is described as cubic, when it appears to be quadratic.

wut. a quadratic has no inflection points, and a cubic only has one. that plotted curve clearly has 2.


Yes, the curve has 2 inflection points, so if it were a polynomial it would have to be at least quartic. (Cubic polynomials have 1 inflection point at most.) But I don't think this is a polynomial.


Yep you’re right, I said something daft. It looks pretty much like a Gaussian doesn’t it?


small tails. maybe. it could be anything though with the right parameters.


Definitely not a cubic function though, you can't get a polynomial to hit 0 that nicely. Any polynomial's leading order term will dominate as x increases. Unless they're using a super high order polynomial and hiding the blowup off the edge of the plot.


As someone who took statistics in college and worked in data science for a period of time, I understood very easily. But I don't blame someone who never fitted a curve for not understanding - and that is most of America.

The problem today is not that most of America has never fitted a curve and doesn't understand the difference between training accuracy and prediction accuracy. But that they choose to not believe or even hear out those who did.


The issue itself is easy to understand, once you know what it even is. It's not clear from the original tweet alone that the CEA was actually claiming what they were being accused of claiming. Establishing that requires some further reading, and the snark makes that part unnecessarily difficult.


> But that they choose to not believe or even hear out those who did.

Who are they supposed to believe here?

There's a right-wing academic stroke political appointee, and a left-wing academic stroke political appointee. Both have genuine academic credentials. Both are saying something that supports their political masters. One appears to have been ousted by the other so is probably bitter about that.

If you don't know about statistics, which as you say is most people, this is two equivalent people having an unseemly fight on Twitter.

Did you notice neither presented an actual argument? Just abuse.


One of them speaks the facts though, and things that are facts tend to draw in scientific consensus sooner or later.

Like climate change, but people chose not to listen to the scientific consensus anyway.

p.s. The fact being that comparing training accuracy to prediction accuracy is simply unsound. There are hundreds of ways to spin a statistic that are backed by academic research, and comparing those two is not one of them. It also happens to be the first thing you're taught to avoid.
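
For anyone who hasn't seen the distinction in action, here's a minimal sketch in Python (made-up numbers, nothing to do with the actual CEA data): fit a cubic to the early part of a noisy hump-shaped series, then compare the error on the data the fit saw against the error on the days it never saw.

    import numpy as np

    rng = np.random.default_rng(0)
    days = np.arange(60)
    truth = 1000 * np.exp(-((days - 25) / 10.0) ** 2)   # made-up hump-shaped series
    obs = truth + rng.normal(0, 30, size=days.size)     # noisy observations

    seen = days < 40                                     # fit only on the "past"
    coefs = np.polyfit(days[seen], obs[seen], deg=3)     # least-squares cubic
    pred = np.polyval(coefs, days)

    def rmse(err):
        return float(np.sqrt(np.mean(err ** 2)))

    print("training error:  ", rmse(pred[seen] - obs[seen]))    # error on the data the fit saw
    print("prediction error:", rmse(pred[~seen] - obs[~seen]))  # error on the held-out days: much worse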


> One of them speaks the facts though

But you said people don't know the facts themselves! So how do you want them to know which of these two people has the facts?

I mean they literally say the same thing about each other - 'new low...'. There's no information to act on here if you don't know about statistics! Even if you decide to check their authority and motivation, there's still nothing to divide them on!


> things that are facts tend to draw in scientific consensus sooner or later.

Science doesn't work by consensus. Science works by having a track record of accurate predictions. So when you see people talking about "scientific consensus", that should immediately be a red flag. Valid science doesn't talk about "consensus" at all; it just points at the predictive track record--which requires not just "facts" but a series of accurate predictions, made before you knew the facts, that match with the facts--and lets you draw your own conclusions.


I think you are arguing semantics. You are talking about what makes good science when evaluated by a person skilled in that particular art; the parent poster is talking about how groups of people discover increasingly true pictures of the world.

Human beings are at the very best arrogant and fallible by nature, incapable of truth. When you want to improve your model of the world by including some heretofore unknown to you concept, fact, or set of facts, you can opt to learn everything from the ground up in order to develop a deep understanding of the topic, or accept and slot in some preexisting truths and models as described by others that you presume to be true.

This presumption of correctness is typically based on their standing with your peers or, preferably, with their own peers, combined with your assessment of them based on how well their statements comport with things you know or at least believe to be true. This is typical because the world is incredibly complex and our time here is finite.

Even smart, skilled people have to lean on option two a lot outside of their particular area of expertise. When intelligent people do this they ask themselves whose views do people skilled in a particular area think are worth listening to, or what on average do people skilled in this area say about something. This is what is meant by scientific consensus. Unintelligent people ask themselves what do my fellow unskilled peers think about this, or what do I already think is true and are there any experts who confirm what I already want/believe to be true.

When people say that the scientific consensus is that cigarettes cause cancer they mean I haven't fully examined the complexity of the human lung and the effects of carcinogens on same but I accept the fact that many experts have done so and are telling me that if I keep smoking I'm more likely to die of cancer. This is the converse of the person who also doesn't have time to understand how lungs work but eagerly looks for someone with credentials who says it's OK to keep smoking.

People talk about scientific consensus precisely because in a broad population you can find at least one party with any given credential willing to espouse any given stupid thing for money or for kicks. It's especially useful if you can get someone who actually IS smart, and therefore respected in one area, to believe he knows something about a field totally outside his area of expertise and lend existing cred to a stupid idea that an actual expert would dismiss. This strategy is very commonly on display in the discussion about climate change, for example.


> You are talking about what makes good science when evaluated by a person skilled in that particular art; the parent poster is talking about how groups of people discover increasingly true pictures of the world.

No, I am talking about how groups of people discover increasingly true pictures of the world. They don't do that by consensus; they do it by finding models that make more and more accurate predictions, as shown by the actual track record of accurate predictions.

> When you want to improve your model of the world by including some heretofore unknown to you concept, fact, or set of facts, you can opt to learn everything from the ground up in order to develop a deep understanding of the topic, or accept and slot in some preexisting truths and models as described by others that you presume to be true.

Or, instead of making any assumptions, you can look at the actual predictive track record to see which "preexisting truths or models" actually work and which don't.

The reason this isn't obvious to most people is that most people don't stop to think about how much of their everyday experience, particularly in this age of computers and GPS and other technological marvels, actually gives them a huge track record of accurate predictions for our fundamental scientific theories. If our predictions based on models using General Relativity were not accurate, GPS wouldn't work. If our predictions based on models using quantum mechanics were not accurate, computers wouldn't work. There are countless other examples. Most people don't stop to think about this so they don't realize how high the bar actually is for having a track record of accurate predictions. They think of GR and QM as esoteric physics, not as everyday realities. They don't realize how huge a volume of evidence from their direct experience they already have for these theories being correct, so they think they have to take physicists' word for it, when they actually don't. Which means they also don't realize how much other people, who seem to be just as sure of themselves and their predictions as physicists (if not more so), actually are just overstating their case, often by many, many orders of magnitude.

So I reject your model of how people should actually assess claims in areas where they don't have expertise.

> When people say that the scientific consensus is that cigarettes cause cancer they mean I haven't fully examined the complexity of the human lung and the effects of carcinogens on same but I accept the fact that many experts have done so and are telling me that if I keep smoking I'm more likely to die of cancer.

When people assess the probability that if they smoke they will increase their risk of dying of cancer, they have no need to rely on any "consensus". They can just look at the data.


I totally disagree, and the "everyone should be embarrassed" take is "fine people on both sides" nonsense.

The original graph showing the "cubic" curve is just - pathetic and sad. It's the equivalent of using a sharpie to change hurricane trajectories. And then Philipson defending this nonsense by calling someone else an "economist turned political hack"????

How the hell can so many of these people have no shame? I totally agree with a follow-up tweet - doesn't matter what Philipson did in his whole career, he will be remembered for completely abdicating any decent sense of professional ethics.


It's not clear what technical content this post has. A cubic polynomial vs time would be a terrible idea, but the dashed and/or dotted red line (that people seem upset about) is obviously not a cubic polynomial vs time.

At that point I run out of guesses about exactly what's being said by anybody.


I think there are actually two things going on here. There's what this post is talking about, which is a general lack of understanding of statistics, which is likely true and correct, and likely for both reasons that are explored here.

Then there's the reason to actually even talk about this specific instance, which is that there is a complete lack of operating on the assumption of good faith and clarifying intent before making assertions as to other people's intentions, which is rampant currently. Whether it's rampant on Twitter and between political parties, or has spilled into other areas such that it's harder to have coherent discussions in general now than it was in years past, I'm not sure.

There are a lot of assumptions in that reply to the tweet in question that's shown. It's not even a novel set of assumptions; it's the standard Twitter fare of "I assume he means X and this thing doesn't explicitly show X, therefore he must not understand what he's talking about." Any nuance, such as using a secondary aspect of something to outline a portion of what you mean, is immediately ignored, and if pointed out later, assumed to be covering up after the fact.


> ...there is a complete lack of operating on the assumption of good faith and clarifying intent before making assertions as to other people's intentions, which is rampant currently.

This is basically what all politics is. People projecting thoughts (or lack thereof) in bad faith on each other. Though it does seem that the fraction of society sucked into this toxic discourse has grown to cover practically everything lately (in the US anyway).

But also the outrage machine of Twitter etc is obviously biased toward the extreme views and outliers. Where 99/100 people might say 'meh, another failure to predict the future like most', the remaining 1/100 blows his top and gets the attention.


For all the negativity out there, there is a lot more clarity in challenging bad interpretations. I haven't learnt as much online recently as I have in 15 yrs on the net.


Well, if you want to express nuance, Twitter might not be the medium for you...


Lol, I believe that. The problem is that these days it seems like the message either has to be visible only to friendlies, or reduced to the simplest and most rote version that's impossible to misinterpret, if you want to avoid people trying to tear it apart, and even then they'll sometimes pull it into some wider framework of society and what it means when viewed through the lens of X, Y and Z and why you should be mad. That's a high bar to hit just to tweet something.

It's crazy, and Twitter's where it's easiest to see, but you can also use a news aggregator like news.google.com and get a good dose of it just from the headlines about what is ostensibly the same story from different news agencies. Wild times.


> It's not clear what technical content this post has.

Indeed. The post is a big exercise in strawmanning (and of course ad hominem but I am willing to forgive it because it adds entertainment value). Andrew Gelman (a well-known statistics expert) dunks on some guy because he dared to defend a shitty graphic and "made a statistical error". Which is kind of unfair, because he didn't do any statistics. Of course when our brains see some curves they can't resist the temptation to do some statistics on their own, but bad dataviz is a different sin than bad statistics and should be treated as such.


Um, yes.

The statistical community has not been doing well with the coronavirus epidemic. Nobody's models seem to be predicting well. Nor is the source data for anything but deaths very good.

This matters, because the current US plan seems to be "open things up and wait for herd immunity". How much time, and how many deaths, lie between now and that point? I dunno.


The models are mostly easy to understand (from simple differential equations for the simplest models to agent-based Monte Carlo simulations modeling everyone in a region), but the parameters in the models depend on people's behavior, which is not something that can easily be predicted.

Argonne National Lab has simulations where they simulate every person in Chicago and where they might go and who they might run into, which, given the correct set of behaviors, would probably work very well. But... how do you predict how people will act in a week? That will depend on the weather, court decisions, number of deaths in random countries, how rousing a speech a politician might give, etc. If we could control how people acted, I bet the models would work very well.
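
For what it's worth, the "simple differential equations" end of that spectrum is the textbook SIR model. A rough Python sketch (forward Euler, with beta and gamma pulled out of thin air rather than estimated from anything):

    def sir(beta, gamma, s0=0.999, i0=0.001, days=180, dt=0.1):
        """Forward-Euler integration of the classic SIR equations:
        ds/dt = -beta*s*i,  di/dt = beta*s*i - gamma*i,  dr/dt = gamma*i."""
        s, i, r = s0, i0, 0.0
        history = []
        for step in range(int(days / dt)):
            ds = -beta * s * i
            di = beta * s * i - gamma * i
            dr = gamma * i
            s, i, r = s + ds * dt, i + di * dt, r + dr * dt
            history.append((step * dt, s, i, r))
        return history

    # beta (transmission) and gamma (recovery) here are placeholders, not estimates;
    # beta is the one that shifts whenever behavior does, which is the hard part
    trajectory = sir(beta=0.3, gamma=0.1)
    peak_day, _, peak_infected, _ = max(trajectory, key=lambda row: row[2])
    print(f"peak infected fraction {peak_infected:.2%} around day {peak_day:.0f}")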


> This matters, because the current US plan seems to be "open things up and wait for herd immunity". How much time, and how many deaths, lie between now and that point? I dunno.

This seems pretty straightforward, no? You re-open cautiously, wait a few weeks, make sure your hospital resources aren't being overwhelmed, then open up a bit more, rinse and repeat. I wasn't aware there was another way.


The trace model: identify each infected person, trace all their contacts, test all of them, expand outwards. Full quarantine the infected. Intensive but it works; see South Korea and New Zealand.

The "herd immunity" plan can only work with more than 50% of the population infected, of which about 1% will die and some slightly higher percentage suffer lingering ill health, which in the US means at least a million people.


The "herd immunity" plan can only work with more than 50% of the population infected, of which about 1% will die and some slightly higher percentage suffer lingering ill health, which in the US means at least a million people.

That's current US policy. That's what the "Get and Keep America Open" plan does.[1] Current death rate for the US is around 1,400 per day.

[1] https://www.cdc.gov/coronavirus/2019-ncov/php/open-america/i...


This could be a naive question, but everything I've heard is that coronaviruses don't disappear, they just sorta integrate and mutate into our normal "known set" that we deal with year-to-year.

If that is true, doesn't the strategy taken by SK and NZ put them at continued risk for another outbreak if the virus sneaks back in? Without a vaccine, and then significant uptake by the population yearly, doesn't the risk of covid-19, and its mutations, come back every year?


We do eventually require a vaccine or similar.

SK and NZ can "end" the outbreak. The "herd immunity" strategy will simply continue it straight through the whole year, with a lot more deaths.


> The statistical community has not been doing well with the coronavirus epidemic.

I'm curious how you concluded this. Can you (or anyone) recommend a reasoned evaluation of the existing models? We've got a few months of hindsight now and I'm curious to hear what any reputable data scientist might have to say.

I see criticism of the models online (in op-eds and social media), but the criticisms are usually agenda driven and opaque on technical details, which isn't particularly helpful.


The actuals from now and the predictions from a few weeks ago aren't aligning quantitatively. The shapes of the curves, though, do make sense.

Here's the Financial Times graph of actuals, country by country.[1] No predictions. The pattern that shows up in some of the actuals is "huge spike, tight lockdown, big drop". See Italy, Belgium and the Isle of Man. The Isle of Man puts people in jail for weeks for violating quarantine rules. The US didn't have a huge spike, but isn't seeing a big drop, either. US deaths are at about 2/3 of peak.

[1] https://ig.ft.com/coronavirus-chart/?areas=usa&areas=gbr&are...


> for anything but deaths very good.

Even that's iffy when compared to "excess deaths" (in lots of places, excluding all covid deaths still results in more people dying this year compared to previous years).


> A cubic polynomial vs time would be a terrible idea, but the dashed and/or dotted red line (that people seem upset about) is obviously not a cubic polynomial vs time.

What? The chart's creator said it was a cubic polynomial. Mr. Hassett said he had employed ‘just a canned function in Excel, a cubic polynomial.’

https://www.vox.com/2020/5/8/21250641/kevin-hassett-cubic-mo...


Sorry to shock you, but not every piece of text on the internet holds water mathematically.

Plot a cubic polynomial in time, a*t^3 + b*t^2 + c*t + d. Use Excel. The curve either diverges to + or - infinity at each end, or it's a constant. That's not the behavior of the dashed red curve in the OP.

What it could be is some kind of cubic spline. But it is not a cubic polynomial in time.


It’s apparently a cubic polynomial fit to the logged data — see http://bactra.org/weblog/1176.html


A cubic polynomial goes to +infinity on one side and -infinity on the other. So taking exp, log, etc won't make it go to 0 on both sides. I can think of two possible explanations: 1) the leading coefficient happens to be 0, so it's actually a quadratic fit; 2) the leading coefficient is nonzero, so somewhere to the left or right the graph starts rising again, but it's not visible due to cropping. Anyway, by now we've spent more time thinking about this than any of the participants, so we should call it a day.


on the other hand, for reals, calculus (namely taylor polynomial estimation) also says, "why not a cubic polynomial?"


Because Taylor approximations can lose accuracy very quickly as you move away from the point they are centered around.

There are available epidemiological models that are actually grounded in a scientific understanding of the problem. "Why not a cubic polynomial?" is a stupid question.
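
To put a number on "very quickly", here's a toy example, nothing to do with the actual chart: the third-order Taylor polynomial of a bell-shaped curve, expanded at its peak, versus the curve itself.

    import numpy as np

    def bell(t):
        return np.exp(-t ** 2)        # stand-in for a hump-shaped epidemic curve

    def taylor3(t):
        # third-order Taylor polynomial of exp(-t^2) about t = 0;
        # the odd-order terms vanish by symmetry, so it is just 1 - t^2
        return 1 - t ** 2

    for t in [0.25, 0.5, 1.0, 2.0, 3.0]:
        print(f"t = {t:4}: true = {bell(t):8.4f}   cubic Taylor = {taylor3(t):8.4f}")
    # near the expansion point the two agree; by t = 2 the Taylor polynomial says -3.0
    # where the true value is about 0.02, and it only gets worse further out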


"Why not a cubic polynomial?" taken literally is not a stupid question at all. No, it's not a contribution to understanding the problem at the professional level, but it is a fundamental and important question.

If somebody persists after hearing the answer (which you gave), that might be stupid.


Yes, and until you do the correct analysis and estimate just how bad the estimate is going to be (off by 10%? Off by 50%? An order of magnitude?), you can't just dismiss it out of hand. So how far off will it be?


The cubic is going to go to either +infinity or -infinity, so it will eventually be off by infinity%. Or infinity orders of magnitude.

I feel like I can dismiss it now.


There's such a thing as a convergence interval, and your flippant comment betrays your ignorance.


Convergence interval doesn't help at all when you're working with a fixed finite polynomial (cubic, in this case).

You don't need any flippancy to betray your ignorance, it's already clearly on display.


The article may not have technical content, but the title certainly is true: http://pseudoexpertise.com/


Well the flame is saying it's a misunderstanding of the difference between data smoothing and model-based forecasting.

To which I'd say that just begs the question of whether there's a difference to begin with, and what criteria you would use to distinguish them if there is.


To begin, if the curve doesn't extend into the future, it can't forecast anything.


Of course there's a difference between smoothing and forecasting. It's the difference between interpolation and extrapolation.

But I can't figure out what anyone in the original discussion is really saying, if anything.


I think it resembles a cubic spline kernel, FWIW.


Yeah that could be it. But in that case what's the big deal? The data is a messy bump on a log, the fit is a smooth bump on a log. That kind of fit is useless for prediction, but they say so explicitly.


This article doesn't do a good job describing the gory details. In AI/ML research, this is so unpleasantly prominent that it's nauseating. How do workshop speakers at top conferences get selected? How do keynote speakers get selected? Whose workshop proposals go through? Who gets to be area chair? Why are there 18 people listed as authors when almost 100% of the work was done by one student? I call it the favor economy. You invite X to be your workshop speaker and then you get to be a speaker in X's workshop. You add X as a co-author on your paper even if X has no real contribution and then expect that X does the same for you. This leads to people bragging about having 16 papers in NeurIPS, which indicates how deep they are in this favor economy. If you are unwilling to be a participant in this favor economy, your citation count, number of papers, awards etc quickly becomes insignificant compared to those who are. Honesty and ethics are perhaps at an all-time low in the history of modern scientific research.


Putting people on the author list who made no contributions to the paper is known as academic fraud. I am surprised this is allowed.


1 student and 4+ "advisors" aren't uncommon these days. They would show up in meetings, put their attendance and claim their spots. So one can argue its not really "fraud".


It's expected in academia. Even worse, I've seen cases where the main contributors who wrote the actual code weren't even included.


it's only forbidden if you get caught?


> your citation count, number of papers, awards etc quickly becomes insignificant compared to those who are

There are few metrics with which academia can judge how effective a researcher is. If you're not interested in getting as many papers/citations/awards in whatever way you can then you may be in the wrong game.


>If you're not interested in getting as many papers/citations/awards in whatever way you can then you may be in the wrong game.

Or rather it could be that the game itself is what's wrong, leading to a reproducibility crisis, fraud, groupthink, walled gardens and ivory towers.


It's all in the game, yo.


Headline and article content don't line up very well. The actual (short) body of the article makes the point that when people in positions of power don't have significant training in statistics, it isn't surprising that they don't understand statistics, but that they do need to understand how much they don't understand. The article says next to nothing about gaming of professional achievement in academia.


I can offer one small anecdote that maybe reinforces some of the criticisms of academic circles.

A long while ago, I worked in IT in the admissions department of a top-5 Ivy League school. While there I became good friends with many of the admissions officers for the undergraduate and MBA programs.

It's an open secret that admissions are highly influenced by who you know, but what I was stunned by was the overwhelming percentage of each incoming class that owes its entry to the connections of their parents.

I had always assumed it was some small single digit percentage, but the first time I saw "the list", I was dumbfounded. There is a list of students each year that is sent from the Dean of admissions to admissions department containing the students that must be admitted. The process for rejecting one of those students required the admissions officer to submit a report outlining why - an incredibly rare occurrence.

The admissions officers rationalize this as a necessary evil, and cover themselves by pointing to the special attention they pay to diversity candidates. "If I see one more white, indian or asian kid from the upper-east side with a perfect GPA, I'm just going to throw the app out the window" was a quote that stuck in my mind.

The list was a collection of applicants who were the children of staff, professors, administrators, and financial or political benefactors. Surprisingly, children of alumni (even those who donated regularly) were not in the group unless they really made an effort over the years and had someone at the school who could call the Dean personally.

It sort of bothered me because it made me realize that someone like myself - a good student, non-diversity, with good EC activities - was competing for a tiny tiny portion of the admissions slots for any top school.

Is it really so much to ask to have a transparent and level playing field in college admissions?


This is seen even in very technical fields, such as physics. As the saying goes in poker, if you are in academia and can't spot this person, then it's probably you. It's also a slightly cynical take on imposter's syndrome, in a way.


It’s difficult for me to take any discussion of imposter syndrome seriously that doesn’t consider the possibility the subject feels like an imposter because he is. Having been personally acquainted with several such imposters, and having never seen such a discussion, I’m left to conclude the entire concept is deeply unserious.

Edit: sibling is making much the same point. The concept is only useful if actual imposters can be identified.


You can't have impostor syndrome if you don't care about your ability to do your job.

Likewise if you believe - probably correctly - that your failures don't matter because you can bullshit and bluster your way through them. And if that doesn't work, a quick word with your sponsors will sort out your problems.

Impostor syndrome is the opposite - caring about quality, feeling you fall short (because quality is hard), and relying on substance not superficiality to get ahead.


There are at least three dimensions here: Feels like imposter, is imposter, and cares. I'm not going to enumerate all 8 values since that's trivial, but realize my point is that conflating the first two dimensions is what makes the exercise intellectually empty. There are people who feel like imposters (or aren't), are imposters (or aren't), and still don't care about their ability to do their job.

I'm still quite skeptical, but even this simple conversation is far deeper than any discussion I've seen on the subject. I would never have bothered to think about the caring dimension if you hadn't mentioned it.


Could you elaborate on this latter point further? I guess maybe I have impostor's syndrome but I have a hard time understanding how you could ever tell if you are that person or not. Imagine the following that I saw on a poker video recently:

You have a table full of top-tier poker players and you have a rookie who won a contest to be in a game alongside them. The rookie is playing absolutely terribly, and the commentators are cringing at the moves the rookie is making. The other players are clearly doing things to take advantage of the rookie's playing style. Yet at the same time, the rookie comes out in 3rd place, up 50k from their buy-in at the start of the night. 3 seasoned, award-winning professionals are all net-negative for the night, some of whom are -150k from where they started after 150 hands played.

Is this rookie an impostor or not? Does it matter that the rookie is an impostor if he is still beating people who verifiably are not impostors over the average of 150 separate hands?

I guess all this is to say that I don't get what value using the impostor's syndrome framing gives us.


Poker is still a game of chance with beginner's luck being a thing. It's a quote from Buffett, relayed from poker folklore: As they say in poker, “If you’ve been in the game 30 minutes and you don’t know who the patsy is, you’re the patsy.”


I thought the saying in poker was "if you look around the room and don't know who the mark is, it's you." Which is a little different problem.


I know physics and I know this description is wrong. If there’s a bullshitter here, it’s you.


Most of the top comments in this thread are badly missing the point. This is probably partly due to the title -- the point here is not whether connections and reputation are important.

The point is that at the very highest levels of US government, when they have brought in academic advisors from elite universities, they have ended up presenting absurdly wrong, child-like nonsense as their best attempt at analyzing Covid-19 data.

Rather few people in this thread have managed to get beyond the title and the slightly opaque columbia.edu blog post to see this. An exception is ahdeanz.

It is an extremely depressing and valid point. Yes, human connections will always be important, but we MUST, as sensible democracies, ensure that when science needs to be done, it is not left to those who have been so involved in the world of human politics and dinner-party-approved conversational topics that they can't even vaguely think about something technical.

I expect he has many flaws, but Dominic Cummings in the UK Conservative party is on the right side of history here, in wishing for a new era in which politics is not dominated by those with law and humanities degrees.

https://dominiccummings.com/the-odyssean-project-2/


I wonder how many commenters actually read the article posted here. The validity of this general statement is something worth debating, for sure, but in this particular case, it doesn't seem that Chairman Philipson "doesn't know what he's talking about", as the author seems to be suggesting. Rather, he's trying to defend his political position, for some political aims (remember that he's a politician now, instead of an academic publishing a paper). It would be quite untenable (though of course not impossible) for a chairman to join forces with opinions bashing the results published by his own agency, especially when some clear political antagonism exists.

In many situations, it's not that the person doesn't know what they're talking about or is "bluffing", but that they are deliberately presenting a position that fits their current role and benefits them in some way. I totally agree that the former does happen, but those two things are really distinct, and if the author conflates those two and throws out a blanket claim that "stats is hard", it's not really helpful.


A moving average would make sense, but a cubic fit? WTF? In epidemiology, is there a theoretical basis for death rates to follow any type of polynomial trend over time? If you measure, say, drag force vs. airspeed at finite intervals, sure, use a second-order fit. Drag varies with the square of speed; theory predicts that relation. But there's no such relation for death rates, is there?


The article explains that cubic models are backed by the theory that Excel built-in functions exist for a reason.


I'm just going to comment on the graph, not the tweets. The graph shows three predictions from IHME.

One, the blue line, is a model from 3/27. It matches the data okay.

The second, the yellow line, is a model from 4/5. The agreement of this line is much worse, and the fact that the model did worse with more data is not promising.

The third, the teal line, is a model from 5/4. The data (black) ends at 5/4. So the agreement of the teal line with the black line is not a prediction at all.

The red line, a cubic fit, is totally irrelevant. By "cubic fit" I infer that they mean some kind of low-pass signal filter. Fitting a simple model like that to a complex time series without some motivation for why that model was chosen is the mathematical equivalent of treating tuberculosis with mercury.

My point is: it sure doesn't look like the models are doing a good job of predicting death rates. And that's just from the graph used to advertise them.


> By "cubic fit" I infer that they mean some kind of low-pass signal filter.

It's literally just ax^3 + bx^2 + cx + d, optimize a, b, c, and d to minimize some loss function (probably L2).
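
In numpy terms it's something like the sketch below (synthetic data; assuming plain least squares, which is presumably what the canned Excel function does too):

    import numpy as np

    days = np.arange(60, dtype=float)
    deaths = 2000 * np.exp(-((days - 25) / 12) ** 2)    # made-up hump-shaped "data"

    a, b, c, d = np.polyfit(days, deaths, deg=3)        # least-squares a, b, c, d
    fit = np.polyval([a, b, c, d], days)

    print("worst miss inside the data:", float(np.abs(fit - deaths).max()))
    print("'forecast' for day 120:    ", float(np.polyval([a, b, c, d], 120.0)))
    # inside the window you get a smooth-ish curve through the points; outside it the
    # leading term takes over and the number is meaningless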


And since cubics can't be nearly flat over two disjoint intervals separated by an extremum, they added an unlabelled, nonsensical bell-curve continuation to the end, where a cubic would predict the case rate plummeting to negative infinity.


Not really, they just fitted the cubic to log-deaths and exponentiated the result.


Cubic has to go to +infinity on either the right or left. Exponentiating doesn't help with that.


exp(-infinity) = 0


And exp(+infinity) = +infinity, so it has to go to +infinity on one side.


Yes, on the right side, https://statmodeling.stat.columbia.edu/2020/05/16/what-a-dif...

So in addition to log-exp to prevent projections going negative, they clipped the end date at 4 Aug to prevent it from going back up to +inf
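
A toy version of that construction, with synthetic numbers rather than anything from the actual chart: fit the cubic to the log of the series, exponentiate so nothing can go negative, and then look at what the exponent does once you leave the fitted window.

    import numpy as np

    days = np.arange(60, dtype=float)
    deaths = 2000 * np.exp(-((days - 25) / 12) ** 2) + 1   # made-up series; +1 keeps the log finite

    coefs = np.polyfit(days, np.log(deaths), deg=3)        # cubic fit in log space
    fitted = np.exp(np.polyval(coefs, days))               # back to counts: positive by construction

    print("minimum of fitted curve in-window:", float(fitted.min()))   # never goes negative
    # but there is still a cubic in the exponent: assuming its leading coefficient isn't
    # exactly zero, far enough outside the window it heads to -infinity on one side
    # (exp -> 0, "the epidemic is over") and +infinity on the other (exp -> blow-up),
    # which is the part a cropped x-axis never shows
    print("cubic in log space at day -1000:", float(np.polyval(coefs, -1000.0)))
    print("cubic in log space at day +1000:", float(np.polyval(coefs, 1000.0)))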


>What’s striking is that the professor and A/Chairman doesn’t know that he doesn’t know. I’m struck by his ignorance of his ignorance, his willingness to think that he knows what he’s talking about when he doesn’t.

Does this make sense to anybody? The author is "struck" that somebody is "willing" to talk about something they believe they understand? The only annoying thing about academia is that it seems to attract smug, selfish people like this.


Reputation is just a derivative on people. Just like options price risk on stocks and commodities, reputation prices the risk of the person. Academia is kind of like a rating agency on people's reputations. Rating agencies can misprice risk like anyone else, but it only becomes apparent in retrospect.


Doesn't this apply to almost any large bureaucratic organization or system?


I've worked on multiple teams at large companies. Tech teams led by hacker types would make sure to respect and reward technical competence, even if relationships did still matter a whole lot.

On the other hand, I've recently worked in "machine learning" teams led by academic snake oil salesmen who publish lots of papers in ML journals and have fancy PhDs. They often regard coding and technical delivery as "grunt work" and do nothing but play corrupt politics, delivering little value. I have a hard time believing the fact that they came from academia has nothing to do with that, although I guess it may be similarly bad under other non-technical leadership and impostors.


Sounds like academia, government, and most of the business world.


>"So much of academia is about connections and reputation laundering"

[...]

>"2. Academia, like any other working environment, is full of prominent, successful, well-connected bluffers."


Basically: smoothing data != model forecast

It's a fair point. But what should have been a discussion turned into playground name-calling.


No, that's what I thought at first, but that's not it. See my post elsewhere in this thread.


s/academia/every career/g


Does the s/ here mean substitute?

What is the /g in this context?

Is this the regular expression flag for global search?

Sorry, not a programmer just a dumb tradie.


Yes. Not easy to google! From the sed manual ('man sed'):

s/regular expression/replacement/flags – Substitute the replacement string for the first instance of the regular expression in the pattern space. ...

g – Make the substitution for all non-overlapping matches of the regular expression, not just the first one.


Yes that's exactly right on both counts.


You're either joshing with us or you have a bright undiscovered future ahead of you in regexps, given your apparent intuition for them.


I was kinda hoping I’d read HN for a decade or so then get a job as senior software engineer or systems architect, without all the intervening effort ;)


So I'm not alone with my genius idea.... dang


Fun fact: you can use sed's replace syntax to do a replace in your last message on Discord.


Rather than popping off on academia, why not attack the real issue, which is the lack of proper testing?

No models, forecasts, fittings, or prophecies mean anything with heavily biased data. I'm unaware of a solid method to counteract this problem.

If my understanding is correct, we need population-wide testing to get a good basis for predictions of the future. Something which, unless I crawled under a rock again, we simply haven't come close to achieving (at least in my neck of the woods).


Point 2 in the conclusion states:

> Academia, like any other working environment, is full of prominent, successful, well-connected bluffers. The striking thing is not that a decorated professor and A/Chairman @WhiteHouseCEA made a statistical error, nor should we be surprised that a prominent academic in economics (or any other field) doesn’t understand statistics. What’s striking is that the professor and A/Chairman doesn’t know that he doesn’t know. I’m struck by his ignorance of his ignorance, his willingness to think that he knows what he’s talking about when he doesn’t.

The first sentence sounds like it's going to lead to something about bluffers, but the remainder looks like a re-iteration of the Dunning-Kruger effect. A little surprising to not see it mentioned in the article or comments.

> In the field of psychology, the Dunning–Kruger effect is a cognitive bias in which people with low ability at a task overestimate their ability. It is related to the cognitive bias of illusory superiority and comes from the inability of people to recognize their lack of ability. Without the self-awareness of metacognition, people cannot objectively evaluate their competence or incompetence.

https://en.wikipedia.org/wiki/Dunning–Kruger_effect


Academia? Shoot... senior hiring in the tech world is the same thing. If you don't have buddies, gooooood luuuuck.


> which is the general level of mediocrity, even at the top levels of academia

This is not unique to academia. Our entire society has gradually degenerated over the last few decades for a number of constructively interfering reasons:

1. We told two+ generations of children that everyone was capable of anything, gave them all awards after every "competition", and that kind of upbringing makes it difficult to recognize merit.

2. We've lowered the bar for standards across education, in an attempt to bring our lowest up, failing to realize that the primary result was bringing our best down. That hurts merit at professional levels especially, where the pipeline effectively shrinks.

3. Our media has regressed to the lowest common denominator. The most popular sources of influence in our society are uncredentialed hacks who spread misinformation ("Dr." Phil, "Dr." Oz, Oprah, etc). Even our official "news" sources are primarily entertainment venues and are fully editorialized. This makes it extremely difficult for the average person to recognize merit.

It's like our entire culture has been consumed by charisma, such that incompetence permeates every sector of our economy and society. Things were too easy for too long, and now we face a reckoning - either we fix things or our nation collapses. There's no room for popularity contests, crony capitalism, or diversity initiatives during times of crisis.

Edit: what about this comment is deserving of being flagged?


What is with this ridiculous fixation people have on participation trophies? I'm serious, where is this idea coming from? Was it an object of moral concern in the media before I was old enough to remember or something?

Getting a stupid ribbon in third grade is not going to radically inform your approach to life.


So I agree that people have a fixation on "participation trophies", but the problem it attempts to address, essentially the featherbedding of education, is a serious one.

In response to your question about the stupid ribbon: it probably won't, but that's the point. Everyone got a freaking ribbon, everyone got a ribbon in third grade, and fourth grade, and so when someone is actually exceptional, how do you then distinguish them? You can't. It's not that the ribbon changed anything because you got it; it's that everyone got it that made it worthless.

Suddenly everyone can prove to everyone how smart they are, meanwhile those who are actually exceptional in an area without an easily defined winners-and-losers bracket can never be recognized. This leads quickly to a situation where my ignorance is as good as your facts because we all can be "right in our own way."

The result is a distortion of facts, a society that can't agree on basic reality, and everything being run by conmen and manipulators, because they realized early on that that was the only way to get ahead. Starting to sound familiar?


>Getting a stupid ribbon in third grade is not going to radically inform your approach to life.

It's not a single stupid ribbon. It's growing up in a society where literally every competition results in everyone winning. Predicting performance (i.e. evaluating merit) is a skill that requires development, yet when you reward everyone equally regardless of success or failure you train that skill on noise. How do you expect children to learn to recognize when people are or are not skilled when you imply that skills don't matter because everyone wins anyway? Instead you raise them to believe that skills don't matter.

What happens when these children become adults after a lifetime of being taught that everyone is a winner, regardless of performance? Cognitive dissonance and a sense of entitlement, because there will always be true winners and losers in a world of scarce resources.

Children need to experience failure. Just like they need to experience pain and a multitude of other negative emotions that our modern society increasingly attempts to shield them from. Otherwise you raise a generation of childminded adults who fail to differentiate between charisma and merit, and all of society suffers.


Maybe what the other commenter is getting at is that there's no criticism of society you couldn't find some way to project onto some act of parenting or other.

But drawing a line from your pet peeve about the world to one occasional event out of thousands in a kid's life is disproportionate and reductive.

Children fail and children fail to get their way all the time, in hundreds of daily struggles. A few school contests they don't even necessarily find important shouldn't be assumed to move the needle. If a kid grows up rich, that's something that colors their every experience and is more likely to shape a lifelong attitude about what they're entitled to. But that still doesn't mean you have to stereotype them.


Maybe the failure that is being taught to children is the failure of external sources of validation. The sooner a child learns which external sources of validation have merit or value, and which are gamed or captured, the sooner that child learns to trust in their own process over an external authority. I think that is a positive outcome in education.



