Yes. This is correlation, not causation. He has been focused on diversification away from ads and hasn't been performing especially well.
His pay is likely set years in advance, based on performance metrics. Sundar is riding the ads wave to huge payouts in spite of his massive failures: Google Plus, Stadia, devices (especially ChromeOS tablets), mismanaging Cloud, and on, and on...
US recessions and bear markets happen most of the time because of mass psychosis, not because of some underlying cause. There can be a trigger, yes, but it's generally not that big of a deal. The government also plays a role in triggering these recessions, so that they can pop the bubbles before it's too late.
Yes, there can be recessions caused by world wars, worldwide droughts, pandemics, and other cataclysmic events, but that's usually not the case for the US.
Enough people losing their jobs and cutting their spending will create that mass psychosis.
Tech is a small part of the economy, but cities like Seattle that have built restaurants and services catering to the tech crowd will see further fallout. There was this idea that Seattle would be the next big metropolitan area, but tech transplants have found it's not for them after a few years.
The idea that emotional investors moving large amounts of money can single-handedly cause recessions is naive, even if it is comforting. The world has been using ZIRP for a long time. This policy has led to overinflated stock prices and a significant number of worthless loans held by regional banks, now collapsing at a rate similar to the 2008 crisis. When equity assets lose value, companies buoyed by those previous values have to reduce their spending because they would not be able to raise enough money by selling those assets, and they'd otherwise end up underwater or going bankrupt.
His responses show he doesn't have any understanding of what he's doing to people and their families. If humans were robots who didn't need shelter, food, and care, okay I guess. I may get laid off. If so, I'll go to sleep until I'm needed again. That's not how people work. Nobody wants a job with a constant threat of being laid off during the absolute worst economic times, when finding a job is hard and when being honest with potential employers about the layoff attaches an automatic negative stigma to you.
If staff reductions are necessary, there are ways to do them within a reasonable amount of time without seriously hurting people and traumatizing everyone who's left at the company. Layoffs are bad for morale, bad for business.
Or you could look at employees like sheep or cattle, to be herded and culled when it suits you, and when asked for justification, give completely tone-deaf answers that show no compassion for the human costs of your actions, which was the actual point of the questions. Then see if your best people are still loyal and decide to stick around.
> His responses show he doesn't have any understanding of what he's doing to people and their families. If humans were robots who didn't need shelter, food, and care, okay I guess.
Anyone surprised that Zuckerberg seems devoid of empathy? The laid off people can live in the Metaverse until times get good again, right?
Yeah, 4 is key. Many privacy regulations stipulate that account data must be deleted within a certain period of time, usually days or less, after a requested account deletion. In this system, all recorded requests would have to be discoverable by the requestor's ID, and production systems would have to remember to perform deletions when necessary. Also, this database and all related testing systems would have to be held to production-level standards for data access, because anyone who can see test data to root-cause errors can see people's and businesses' real, private information. Especially for regulated, data-controlled industries like government and health care, this would be a nightmare.
It's a neat idea. These kinds of systems often require a lot of care and grooming. Since it's used to retroactively test features after they're in production, there's a repeating cycle of discovering we're saving data we shouldn't be, then scrubbing, filtering, anonymizing, etc. In most cases, I've watched them eventually get replaced by fuzzers. Still, having a central service used by lots of companies may allow this solution to scale up, develop the features necessary to solve these problems, and function well. I hope it works out!
What are you talking about? Egg producers reported that avian flu affected supply, so they made employees wear "hair nets ... changed several times per day"?
Watch the movie The Informant. It's a true story and we only know it happened because one guy was trying not to go to jail. What's happening now is nothing new.
Education has always done a terrible job of keeping up with technology. Kids in school now will graduate into a world in which all information will be instantly available and will be presented in whatever format is most suitable. They'll have computers correcting their grammar, improving their ideas, completing their sentences and paragraphs. Instead of learning those technologies themselves and instructing students how to best utilize them, teachers tell kids, "that's cheating". When you graduate and get your first job, your boss isn't going to take away your books, phone, and laptop, and ask you to write a report about a well understood subject. I understand why that's a common practice in education. Maybe it's time to change.
Being able to give a well-reasoned opinion about a matter in your particular area of expertise to a manager who isn't deep into it during a face-to-face meeting without pulling out your phone or laptop is essential to career progress. Essays are a good way to develop this, especially the kind you have to be able to write in-class. Can you absorb enough information and context about a topic to make a decent argument on demand?
Being able to distinguish between useful, valid information and whatever YouTube video the search engine happened to turn up is going to be an even more important skill for those kids in school now than it was for those of us who were in college when Google was a hot new startup.
For those of you who didn't read the whole post (and it was long, so I kind of understand), Mr. Devereaux made an aside about his belief in the continuing value of initially learning how to do arithmetic without a calculator despite their easy availability over the past few decades, an opinion I've always shared.
Before reading this post, I still believed the same about the value of learning how to write essays ("delivery boxes for thoughts" was his expression, I think) and will make sure that my kid can write one with just a pencil and paper, even though he'll also be able to use whatever technical assistance is available in 10-15 years. He's learning to draw and make letters with crayons and pens before I'll let him spend a lot of time with my iPad; he's sussed out how that worked just by watching me, so I'm not concerned about a technology gap with his future classmates.
This post gives me something to forward to my non-technical but curious friends when they ask about ChatGPT and similar.
> I understand why that's a common practice in education. Maybe it's time to change.
Not that I disagree, but what would you have it change to?
The fact that there's no longer any need to memorize anything means that on any test which allows the use of tech (like the internet, or ChatGPT) to retrieve data, students no longer need to commit things to long-term memory.
However, having the capability to recall facts when needed, and at high speed, is foundational to higher creative thought.
This higher, creative thought is not really easy to test, so memorization is the proxy.
I do not know what form education (and the testing of it) would take if tools like ChatGPT are allowed to be used.
In my experience (from exams allowing full access to computers and the Internet (for information retrieval, not communication with other people)) you need to memorise the most important concepts and patterns anyway, or you will be too slow during exams. YMMV.
Memorizing things is not useless. Learning concepts and principles is more important, but you also need some amount of facts memorized to make use of them.
For example, I had to memorize the structural formulas for all amino acids in university. That does seem a bit useless at first, but it's actually very important the moment you work with protein sequences or structures. You might not need the exact structure, but if you read about a specific important residue in a protein, or a mutation in one, you need to understand the properties of the involved amino acids to make sense of it. And if you had to look that up every time, you'd never get through a paper.
Teachers aren’t interested in a recount of the battle of so-and-so but in training you to gather knowledge, structure your thoughts and express them clearly.
You can only learn that by doing. A chatbot bypasses the learning process, so you will have gained neither subject knowledge nor methodological skill.
> The calculator was meant to make computation more convenient for people who already knew about numbers. Now, it threatens to crash the intellectual order, assuming the role of an end, when it is only a means.
I'm pretty sure that a "sufficiently smart" chatbot (or maybe even an extra dumb one) is a useful tool "in training you to gather knowledge, structure your thoughts, and express them clearly". I've found it remarkably useful for clarifying my thoughts, considering alternative arguments, and general tomfoolery that can spark creativity.
The problem is that computing things is something most people don't do that frequently, while "structuring your thoughts and expressing them clearly" is a prerequisite for having any sort of meaningful conversation, or even opinion.
I am mostly worried about young people who will grow up relying too much on ChatGPT. What will they do when they don't have a bot hand-holding them through some complicated idea? And if these kinds of bots become so ubiquitous, what is the place for humans?
When calculators became widespread, we calculated a lot more. When LLMs become widespread, we will... think more? I seriously don't know.
I'm very, very sure that a machine which requires you to structure your thoughts and express them clearly will not lower the capacity for that in the general public. LLMs are extremely prone to garbage in, garbage out - if you can't be precise in structuring and expressing your thoughts, your results will be likewise questionable.
I certainly benefit greatly already from using LLMs to accomplish a number of tasks. I think the answer on where the responsibility lies depends greatly on your view of the same sorts of questions around auteur theory - is the director responsible for the quality of the film? Or is it the writer of the screenplay? What about the cast, or the producers? Is Microsoft the author if you write a novel in Word, without scribing the lines onto the page yourself? I think it's going to be very interesting to see how all of this plays out, and where the lines are drawn. I suspect that what is causing concern now will, in ten years perhaps, be normal, obvious and not even discussed.
> LLMs are extremely prone to garbage in, garbage out - if you can't be precise in structuring and expressing your thoughts, your results will be likewise questionable.
I agree, and that is the issue. People like us can use LLMs effectively because we are already capable of expressing our thoughts in a decent manner and we can recognize when the output does not make sense. But to know whether the results can be trusted or not, you already need to be one level above that. If one is not capable of producing a coherent argument on their own, how can they evaluate whether an argument they hear is itself coherent? And if one, say because of lazyness, relies on LLMs from their childhood to fill in all the difficult steps, how will they learn how to do it on their own? Practising has always been the best way to learn things.
> I think it's going to be very interesting to see how all of this plays out, and where the lines are drawn. I suspect that what is causing concern now will, in ten years perhaps, be normal, obvious and not even discussed.
That would be all fine and good if it were true. But what will happen in practice is that they will only think they have access to all information, the corrections will be subtly wrong, their ideas will no longer really be their ideas, and their sentences and paragraphs will be completed with meaningless or faulty junk.
The goal of late modern and even postmodern schooling has been to raise compliant soldiers, factory workers, and maybe low-level clerks, and later also to serve as a day nursery for kids and teenagers. This way of doing things might indeed have been reaching its limits for the last half-century.
But this is somewhat off topic because the article is talking about essays in the context of a university, not school, education.
(Not to mention that parents are still at least partially responsible for their children's education.)
I was laid off from Google. I keep seeing posts from people who, I have to assume, just figured out how the world works and want to teach us. Guys, seriously, who didn't already know this?
Some companies stay lean, plan for difficult times. Others spend like crazy in good times and cut back in bad. Google always prided itself on being small and scrappy. That changed under Sundar, the current CEO. They started hiring a ton and now they're cutting back. The CEO still hasn't come to terms with the fact that he fucked up. He made comments in leaked meetings like, "Just imagine if we continued to grow and didn't have all of those extra people. Where would we have been?" Dude, you would have been fine. You don't grow a business by sticking more people in it. This is how accountants think about business, not leaders.
The thing is that Alphabet made a $13.62 billion profit in Q4 of last year[0] so these are not "lean times". Their profit was down from $13.91 billion in the previous quarter.
In other words, this is a company making at least $50 billion a year in profit, yet pretending to be so poor that layoffs are necessary! Even if Google lost a billion or two in a lean year, that's still a drop in the bucket compared to their profits, and an actual leader would absorb it.
Revenue is generally highest in Q4 due to Christmas spending.[1] So comparing 2022 Q4 to 2022 Q3 isn't really fair. If you compare it to 2021 Q4, it's much worse. Operating income declined from $20.6B in 2021 Q4 to $13.6B in 2022 Q4.[2] Disclosure: I work at Google but don't have any insight into company finances.
Dude, it's still $13.91 billion in income. Unless they're projecting losses, I don't see where the need for layoffs is. It smells like investor greed. I don't think they hired 12,000 people in the last year, so justifying layoffs due to a drop in profit is plain and simple greed.
From the same report, Google had 156,500 employees on 2021-12-31 and 190,234 employees on 2022-12-31. That's a growth of 33,734. Amount hired would be even higher than that due to people who left during that period.
I see your point, and acknowledge my oversight, but I still stand by what I wrote. One downward data point in profits should not prompt a mass layoff. People uproot their lives to come work somewhere. It's not just about money, but I guess nobody wants to admit that companies are made up of humans.
Fresh, starry-eyed people enter the workforce every day. As long as new humans keep being born, there is no point at which anyone is ever finished teaching the same lessons over and over again.
Unless you assume you're not the first one to experience a particular situation. On a broader scale, that's the reason why you're doomed to repeat history.
I think this is a little uncharitable to… everyone actually.
First to the employee laid off: I’m not against layoffs in principle (maybe I’ll change my tune if/when it happens to me) but companies can take a more human-centred approach to things. Publicly shaming them when they don’t is one of the few tools people in this position have left to try and push them to better behaviour.
Second to Sundar: he's not crying "woe is me," he's explaining the company position. I don't even think he "fucked up" here, because in these situations it's simply difficult to make the right call. Yes, maybe Google would have been "fine," but CEOs aren't paid to keep things "fine." They're paid to put the company in a position to go after opportunities when they crop up. Even these layoffs will factor that goal in. Only time will tell who made the right decisions and who didn't.
> imagine if we continued to grow and didn't have all of those extra people
This is the problem with FAANG, to a very large extent with all VC-funded companies, and to some extent with capitalism in general. Everyone looks at total dollar value, not at efficiency (loosely revenue per employee) and certainly not at anything hard to measure in dollars (e.g. quality of life). The only way to keep driving that number up is to grow, not to improve. Growth is mandatory, necessary even, and ultimately becomes like that other thing that keeps trying to grow without bound: cancer. Innovation is an uncertain route to growth over the long term. The sure routes are monopolization, regulatory capture, rent seeking (especially in the form that Cory Doctorow has called "enshittification"), and so on. So guess which one CEOs - whose pay is tied to that growth and not to more human-oriented metrics - go for. Every time.
To tell you the truth I think Sundar was not the one who fucked up. It was Larry and Sergey.
That guy doesn't have what it takes to be the CEO of one of the most important tech companies in the world, just like Trump/Biden are probably not the most talented leaders of the most important country either.
Eric Schmidt was great, and Google has lots of other much more talented leaders who understand programming on a deep level.
When we asked Eric Schmidt years ago why some stupid things were happening (the Christmas present of donating Google laptops to US children, especially when most of us are from poorer countries), his answer was, "I'm not the CEO."
Yeah, I don't know whether I believe they actually found 18,000 people and somehow sorted them accurately into high and low performers.
I've worked with people who don't like teamwork because they're afraid of being discovered as not knowing what they're doing and others who can't compromise on literally anything, so claim they're the smartest and best and nobody can keep up. All have in common that in the long term, the things they build are unmaintainable Jenga towers nobody else understands. The rest of the team used 5 people's input to build robust systems. The silo workers just had themselves. It shouldn't be surprising that even with design and code reviews, what they create isn't as good.
They also have in common that they love to be called "lone wolves" because it makes them sound cool. I'm about 99% sure this article was written by one.
>I've worked with people who don't like teamwork because they're afraid of being discovered as not knowing what they're doing and others who can't compromise on literally anything, so claim they're the smartest and best and nobody can keep up. All have in common that in the long term, the things they build are unmaintainable Jenga towers nobody else understands.
Amen. Or both, even.
I worked with a lot of these at the beginning of my career. They would stick around for ages in companies that paid poorly, leading teams of fresh-faced junior developers who would rotate out as soon as they had enough experience to get a pay rise and go work somewhere sane.
What even is a low and high performer? This depends highly on context and on the environment.
Something I do see, though, is a trend towards more and more distraction: always being available in Slack, large offices where someone always wants something, etc.
Also, in my experience, some companies just love to have pointless meetings with many people that cannot contribute at all, because it's just not relevant to their work.
I do like to work in teams, in general, but it has to work for everyone at the end of the day.
Pretty sure this comment is related to that recorded teleconference Elon joined to let everyone know that if you dox, even if you tweet about someone else doxxing and link to them, you're banned.