Samsung expected to report 80% profit plunge as losses mount at chip business (cnbc.com)
178 points by cebert on Oct 10, 2023 | 95 comments



Semiconductors in general are going bust, too much supply. All semiconductor companies invested more because of Covid (and the sudden huge demand).

Only CUDA/Nvidia managed to build something profitable with it, a competitive advantage that could also go down the drain once more investment is put into CUDA alternatives.

And this is counting on AI hype to continue (likely, but maybe demand for hardware might eventually stabilise).

Even though semiconductors are a highly specialized industry, it's a CAPEX-heavy business: you need both fabs and research, and you also have too many players in the market.

People talk a lot about TSMC and the bleeding edge business (computers, phones), but that's just one part of it; there's a whole industry of embarked software that will likely be much bigger.

I don't doubt that the sector will grow (in revenues), but I doubt they'll be able to produce profits as high as some companies in the tech sector, like advertising etc., that are much more asset-light.

I really fail to see what people see even in Nvidia/Cuda, it won't take long until others catch up, and they'll likely move back to margins closer to the industry, which are extremely low. There's even more money to be made in sectors like energy drinks.


Energy drinks you say? Fun fact, Monster is one of only a small number of stocks that outperformed NVDA over the past 20 years: https://www.portfoliovisualizer.com/backtest-portfolio?s=y&s...


That’s amusing. Although, Monster had explosive growth in its first few years (starting from nothing). Do it over 17 years and NVidia wins.


Monster is a classic "capital lite" business model with a large and growing addressable market, and management has been extremely successful reinvesting the substantial free cash flow generated by the business into organic growth and acquisitions. There are really only a handful of businesses of that caliber in the entire public market. Of course, people have realized all of this by this point, so Monster stock trades at a large multiple of earnings (nearly 40x), while prospective growth will never match what they did historically from inception, otherwise they'd end up the largest company in the world before too long!


I mean, look at private ones like AriZona iced tea that still sells cans for $0.99 and the owners are multi-billionaires.


What do you think the manufacturing cost for a can of it is? Maybe ten cents?

It's a high-margin business!


It’s too bad that most business owners care more about power than profits. Otherwise, we’d see far more occurrences of successful businesses like this.


That and construction slowing down. I see a lot of construction guys downing Monster energy drinks. TBH I'm always a little surprised -- but it does give you energy when you need it.


Aside from the caffeine boost, the bigger issue is just that they burn through calories.


Celsius is also growing like crazy. As I said, there's more money to be made in energy drinks, I don't understand the big hype.


Energy drinks are addictive and the inputs are cheap.

I suspect that to some degree individual drinks will be boom -> bust, as newer brands cannibalize the market by having (or advertising) superior energy, more caffeine, better comedowns, etc., and with older drinks having a higher likelihood of bad publicity simply on account of having been around longer. MLMs like Herbalife are also operating in the space (all "loaded tea" shops are MLM), so obviously in cases like that there's a ton of pressure producing early growth.


Upgrading GPUs is addictive too - I haven’t been able to stop ever since I got my first one!


The inputs for GPUs are cheap too.


Monster energy drinks' net income is ~$1B. Intel/TSMC/etc. have or have had net incomes many multiples of that.

Is there that much more potential left for energy drinks?

I would hope not for the sake of people’s health.


> Is there that much more potential left for energy drinks?

President Camacho has yet to come to power to authorize energy drinks for irrigation (because it's got electrolytes), so I'd say yes, there's still a ton of potential.


And then they become the central part of the economy.


Check Celsius out: it contains less sugar, more caffeine, and they've been growing +100% YoY ($CELH). There's more space than you could imagine.


I don’t see why a startup would be indicative of potential profits in this space, especially a minuscule business that is expected to grow 100% YoY.

Coca Cola and Pepsi each seem to have profits in the $5B to $10B range, which can be a proxy for how much profit there exists in the space because everyone else is much smaller.

Maybe if these energy drinks displace all of coffee and tea, but I would be surprised. How much more caffeine will (or should) people consume?


Selling sugar water that costs pennies to make for dollars per can is a business with excellent margins. The hardest part is figuring out how to market it.


"Liquid Death" is even better. Dollars per can of just water, no sugar. Of course, all of it is marketing, and they are really, really good at it.


>Semiconductor in general is going bust, too much supply.

TSMC literally can't make chips fast enough. Companies are competing for wafers.

>you also have too many players in the market

How? There are a handful of companies on the planet that can do 7nm process chips and below. Western Digital had an issue with its fab in 2022 and that alone was apparently enough to increase NAND flash prices by 10%.


I strongly suggest that anybody commenting about the situation of 6 months ago play this:

https://en.wikipedia.org/wiki/Beer_distribution_game

In 2020, a lot of people were saying that no, the executives at those companies know about the problem, of course they'll be able to avoid it. Yet, here we are. (Anyway, the game is supposed to be played by people who were instructed about the problem. But you can try it with naive people before instructing them too. It doesn't make a difference.)

TSMC specifically may be able to avoid it. They are in a different market from most players, with fewer players that maybe are close enough to act to avoid it. I still wouldn't bet on it.
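
For anyone who doesn't want to play it, here is a rough sketch of the amplification the game demonstrates (the ordering rule and all the numbers are my own illustration, not the official game rules): each tier ships whatever the tier below ordered, then re-orders that amount plus half of its own inventory shortfall, with a one-week lead time on deliveries.

    def bullwhip(customer_demand, tiers=4, target=12, steady=8, alpha=0.5):
        inventory = [target] * tiers
        arriving = [steady] * tiers          # last week's order, one-week lead time
        peak_order = [float(steady)] * tiers
        for demand in customer_demand:
            order = demand
            for t in range(tiers):
                inventory[t] += arriving[t] - order              # receive, then ship (backlog allowed)
                order = max(0.0, order + alpha * (target - inventory[t]))  # re-order with correction
                arriving[t] = order                              # arrives next week
                peak_order[t] = max(peak_order[t], order)
        return peak_order

    # A single week of +4 extra retail demand becomes a spike of roughly +20 at the factory:
    print(bullwhip([8] * 5 + [12] + [8] * 14))   # -> [14.0, 17.0, 21.5, 28.25]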


>> TSMC literally can't make chips fast enough. Companies are competing for wafers.

That's interesting since Samsung is their primary competitor at the advanced nodes. It takes time and money to move though as fabs aren't completely fungible.


I almost can't believe this comment. TSMC literally has an entire paragraph in the parent's comment that addresses it. The comment is about the much larger semiconductor business, not the bleeding edge. I fail to see from any perspective how you can read that comment and post this.


Parent's comment >> People talk a lot about TSMC and the bleeding edge business (computers, phones), but that's just one part of it; there's a whole industry of embarked software that will likely be much bigger.

Your comment >> TSMC literally has an entire paragraph in the parent's comment that addresses it.

That looks like a single sentence, not an entire paragraph... And it's a paragraph without supporting evidence. No shocker c7DJTLrn wasn't convinced by the parent commenter's weak claim.


The sentence is obviously delineated by two newlines. A paragraph made of a single sentence.

The paragraph is so obvious that I would never expect anyone to question it. TSMC was a mere 12.5% of the global semiconductor market in 2022.


There is no global semiconductor market. Saying it exists is the same thing as saying that a global vehicle market exists and Boeing competes with Ford in making the best-selling vehicle.

There is, however, "consumer chips foundry market", which is dominated by TSMC with 55%+ market share. And TSMC's market share is limited only because of their production capacity.


A vehicle is a finished good. A semiconductor is a semi-manufacture. Comparing it to a finished good with an end-user purpose is nonsensical. You could compare the global semiconductor market to, for example, the global steel market.


The semiconductor market is PARTIALLY a finished-good market, if we consider the creators market (gig economy), where microcontrollers (RP2040) and some other semiconductors are sold.

It is a market of small-series custom devices, prototypes (like the Oculus Rift), and aftermarket upgrades, like car tuning.

Yes, the creators market is not too large, total sales are just around a million units, but it exists.

BTW, it is an interesting question how large a part of the steel market also belongs to the creators market, and whether it is larger than the semiconductor creators market.


Are you referring to this?

>People talk a lot about TSMC and the bleeding edge business (computers, phones), but that's just one part of it; there's a whole industry of embarked software that will likely be much bigger.

I have no idea what "embarked software" means. So-called "bleeding edge" makes up a significant portion of semiconductor profits, so how can you disregard that and say semiconductor is going bust?


My guess: embarked -> embedded

OP's name sounds Brazilian, and in Portuguese embedded is called embarcado.


Yeah sorry, my bad, it's embedded :-)


Semiconductor profits !== revenue.

Those semiconductor companies can make lots of revenue, but operate on thin margins, because a competitor can also build a fab. Think graphics cards: Radeon (now AMD) has consistently built worse graphics cards, with less R&D, but sold at a cheaper price point and survived throughout the years.

TSMC has some extra expertise that others don't, and has good profit margins because of that, but together with NVidia, they are the champions, and others are constantly catching up. As with many things in a capitalist system, prices face a race to the bottom.

Related to bleeding-edge fabs: 14nm has suffered some shortages, but even that has changed.

One example is Apple: even though they managed to increase their revenues (and products sold) a lot because of Covid, that demand is down. Apple is projected to have lower revenues this year than in previous years.

The same has happened with PC sales. What has been really driving revenues and profits up in this space is AI.

I don't believe this will continue. More competition will drive profits down, meaning it'll be a less attractive field to invest for VCs or investors.

The semiconductor business is more akin to packaged food products in my point of view. Some products can enjoy a nice margin because of their brand, taste or being hyped at a specific moment, but the majority compete on thin margins.

It isn't like Meta, a walled garden that has plenty of data and a product that got everybody addicted and dependent on it, either for social networks or communications, and that because of network effects is extremely sticky. There, only regulations and government intervention could pose a risk.

But Wall Street and people's expectations (and valuations) on semiconductors are suddenly as if they are building the next monopoly. Monopolies only exist because there's only one; what I see is a reaaaally fragmented market, with high CAPEX, thin margins and low moat.


> but operate on thin margins, because a competitor can also build a fab

Building new fabs, especially for the advanced nodes, costs billions, takes years, and requires a lot of know-how and special equipment. There are only a handful of companies operating and building new fabs.

> TSMC has some extra expertise that others doesn't [...] but together with NVidia, they are the champions, and others are constantly catching up

Not sure what you mean. Nvidia uses TSMC for the manufacturing of the 40 series (and Samsung for the 3000 series) and doesn't have their own fabs.

> Think graphics cards: Radeon (now AMD) has consistently built worse graphics cards, with less R&D, but sold at a cheaper price point and survived throughout the years.

You mean ATI (now AMD)? They got rid of their fabs back in 2009 and put them into GlobalFoundries. They now also mostly use TSMC for their CPUs and GPUs (and profit from the more advanced nodes compared to GF) and GF for a few things in larger nodes.

Apple also (just as everyone else you have mentioned) uses TSMC for (most of) their chips and is one of TSMC's largest customers. Intel and Samsung have their own fabs, but also seem to lag behind TSMC.


I'm not comparing TSMC with Nvidia; I said they are champions in that business. Given that there are companies that design chips and also have fabs, my comment makes sense?

I'm also sure here people know how fabs are expensive.

I believe you understood my comment related to Radeon being part of AMD. And also you need to work on being less pedantic.


> I really fail to see what people see even in Nvidia/Cuda

Cloud providers are choosing Nvidia today as the GPGPU platform on the cloud, driven by neural network needs. So lots of software will be written with the platform in mind, i.e. for the Nvidia arch / in CUDA.

This will add friction, preventing users from easily changing platforms (see how long it took for people to switch their workloads to ARM, for example). It will drive more cloud providers to use Nvidia, more software to be written for CUDA, and the moat will build itself.


But there is really nothing that "normal" AI requires that is bound to CUDA. pyTorch and Tensorflow are backend agnostic (ideally...). Sure some hardcore people write some of their own cuda extensions but the vast majority don't. If there was a competing pytorch/tensorflow-supported backend at half the price, AI companies would quickly fix the quirks and run on that instead. It will be a race to the bottom eventually. Even more so on the cloud where you have a higher tendency to run value added services on top, that in turn are price sensitive.
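
As a minimal sketch of what "backend agnostic" means here (assuming nothing beyond a stock PyTorch install), the same model code runs on CUDA, Apple's MPS or plain CPU just by changing the device string:

    import torch
    import torch.nn as nn

    # Pick whichever accelerator happens to be available; the model code itself is unchanged.
    device = ("cuda" if torch.cuda.is_available()
              else "mps" if torch.backends.mps.is_available()
              else "cpu")

    model = nn.Linear(128, 10).to(device)
    x = torch.randn(32, 128, device=device)
    model(x).sum().backward()   # same autograd path on any backend

The lock-in lives one layer below this, in ops and extensions that only ship CUDA implementations.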


>But there is really nothing that "normal" AI requires that is bound to CUDA. pyTorch and Tensorflow are backend agnostic (ideally...).

That's true, but the current bottleneck in AI/ML is memory size + bandwidth. NVidia with their CUDA really is the most economical game in town right now, from the low "hobbyist" end (nothing beats a 2x used RTX 3090 setup in value for money) up to the "high end" (the H100 is still the best generally available card). Some interesting "unified memory" competition is appearing on the horizon (Apple, AMD), but both are still far from where Nvidia is.


Cost vs. benefit here is the most important factor.

If somebody came up with a product tomorrow that isn't as performant as the H100 but is cheaper, they could make NVidia miserable. Since NVidia is selling those products with a huge margin, they'd have a hard time keeping those margins, and a price race to the bottom would happen.


Some popular ML projects ship with hand optimized CUDA kernels for performance. It is, unfortunately, the standard.

If you get around that, sometimes projects will use dependencies that include an odd CUDA-only library. I ran into this a lot with GANs (which often use some weird loss metric with a random hard CUDA dependency) or media libs.

Of course there are plenty of solutions, and some are theoretically transparent. But the issue is getting people to use them. Academic ML researchers in particular seem incredibly impatient, looking for the absolute easiest way to prototype something (which usually means coding what they already know). Ease of installation on a wide range of HW is about their last priority.
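
To make that concrete, here is a hedged sketch of the pattern (the fused_ops module and its bias_gelu function are hypothetical stand-ins for a project's compiled CUDA extension); the portable fallback at the bottom is exactly the part many research repos never bother to write:

    import torch

    try:
        import fused_ops                    # hypothetical hand-written CUDA extension
        HAS_CUDA_EXT = torch.cuda.is_available()
    except ImportError:
        HAS_CUDA_EXT = False                # extension missing: CPU, ROCm, MPS, ...

    def bias_gelu(x: torch.Tensor, bias: torch.Tensor) -> torch.Tensor:
        if HAS_CUDA_EXT:
            return fused_ops.bias_gelu(x, bias)        # one fused kernel launch
        return torch.nn.functional.gelu(x + bias)      # portable but slower pure-PyTorch path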


Yeah I write CUDA kernels for my own projects as well. There won't be a shift if someone launches something that is 10% better, but if someone launches something that is twice the performance for the same price the companies with huge GPU-bills will pay to port those kernels pretty fast.

The kernels I wrote I hand-optimized using Nsight for a specific use case, and sure, they saved me like 200% compared to pure pyTorch and it took a few days, but when your company spends $10M for a single training run, you have quite a lot of "my days" to spend if another HW platform would save you $5M (but you need to have the calendar time as well to spend, which is another factor in this space that moves crazily fast).

If/when the field stabilizes a bit people will start looking at that bottom line I think but right now everybody just "has" to get those H100 cards before someone else gets them I guess...


> there is really nothing that "normal" AI requires that is bound to CUDA. pyTorch and Tensorflow are backend agnostic (ideally...).

There are a lot of optimizations that CUDA has that are nowhere near supported in other software or even hardware. Custom CUDA kernels also aren't as rare as one might think, they will often just be hidden unless you're looking at the libraries. A more well-known example is going to be StyleGAN[0], but it isn't uncommon to see elsewhere, even in research code. Swin even has a CUDA kernel[1]. Or find torch here[2] (where github reports that 4% of the code is CUDA (and 42% C++ and 2% C)). These things are everywhere. I don't think pytorch and tensorflow could ever be agnostic, there will always be a difference just because you have to spend resources differently (developing kernels is a time resource). We can draw evidence by looking at Intel MKL, which is still better than open source libraries and has been so for a long time.

I really do want AMD to compete in this space. I'd even love a third player like Intel. We really do need competition here, but it would be naive to think that there's going to be a quick catchup here. AMD has a lot of work to do and posting a few bounties and starting a company (idk, called "micro grad"?) isn't going to solve the problem anytime soon.

And fwiw, I'm willing to bet that most AI companies would rather run in house servers than from cloud service providers. The truth is that right now just publishing is extremely correlated to compute infrastructure (doesn't need to be but with all the noise we've just said "fuck the poor" because rejecting is easy) and anyone building products has costly infrastructure.

[0] https://github.com/NVlabs/stylegan2-ada-pytorch/blob/d72cc7d...

[1] https://github.com/microsoft/Swin-Transformer/blob/2cb103f2d...

[2] https://github.com/pytorch/pytorch/tree/main/aten/src


> Semiconductor in general is going bust, too much supply. All semiconductors companies invested more because of Covid (and the sudden huge demand).

Semiconductors are cyclical in general, and most sources that I trust believe that the bottom is already behind us. Many of the big players have already begun guiding revenue growth again, though not as massive as Nvidia's windfall, obviously.


I think I've really picked the worst time to finish my PhD thesis and go on the semiconductor job market... Though I think it will delay me by a few months at worst.


After a couple decades you won't remember if you graduated during a boom or a bust.


I thought the evidence was that this did actually have a substantial impact on lifetime earnings and employment. The data I saw actually related to unemployment, but I would be surprised if it didn't have an impact. There are more opportunities to shine when things are on the up, and that benefit will persist into the bad times later. It's kind of like taking steroids as an athlete early on... you might stop later, but the muscle mass is still there.


In a bust period, a lot of people end up taking whatever job will help pay the bills and, at some point, getting into the field they studied for means competing with new grads with more current knowledge and connections.

I don't mean to imply your life is over if you graduate during a downturn but you can easily end up derailed and with a potential hit to your earnings going forward.


I graduated with my Comp Sci degree in 2003ish, and my life was awful for at least 5 years post-graduation. I know it wasn't just me too.

(This is also because of where I graduated, and the fact that I didn't feel comfortable moving to the US post-9-11)


I graduated 2012. The only people hiring were institutions that kicked off the 2008 recession.


I'd have said there was substantial unserved AI demand at less than current Nvidia prices, but hardware-via-cloud upsets that, given they can hit higher utilization.

Of everyone, you'd think Microsoft would have the most incentive (and ability) to improve tooling/stability on not-Nvidia.

> I really fail to see what people see even in Nvidia/Cuda, it won't take long until others catch up, and they'll likely move back to margins closer to the industry, which are extremely low.

Looks like analyst predictions are that Nvidia growth will start flattening in Q1 2025.

And when even Cathie Wood is selling because of over-valuation concerns, everyone should be looking for the exits.

Part of Nvidia's strategic problem is that they made too much money recently: they needed lower profits in their primary markets to disincentivize competitor investment and retain their moat.

Instead, AMD and Intel saw a cash cow. And those aren't small, ignorable competitors.


I don't mean it as a commentary or opinion on Nvidia, but hearing Cathie Wood talking about Nvidia is as if she's just doubling down on her bad decision based on her initial gut feeling, which she is too stubborn to admit was wrong.

To me it just seems like she happens to pick some stocks based on gut feeling because of some random detail she liked about it and then will say whatever to justify her positions afterwards.


Yeah, Cathie Wood publicly said that she'd back off into more reliable cash-flow-positive businesses, better PE ratios, or even cash if the valuations got too high — which she didn't do. Instead, at the peak, and the trough, she bought more Teladoc and Docusign! Both still have a negative PE ratio! They're no-moat, no-profit, no-dividend, hype companies.

The problem was, her ETFs are not paid and incentivized by performance. She’s paid and incentivized by AUM. So her team spends their time writing speeches and on Twitter writing hype posts instead of shutting up, being heads down buried in a laptop analyzing stocks, and performing analysis quietly. She might’ve been worried her AUM could drop if she made big changes like liquidating to cash.


> Instead, AMD and Intel saw a cash cow. And those aren't small, ignorable competitors.

AMD and Intel have been trying hard to compete with Nvidia for a while now. I mean in AI accelerator market. Little success so far.


I'd say AMD and Intel have been focused on their CPU businesses.

If AMD had been really interested (from a corporate strategic level) in the GPGPU market, their driver/library team would be a lot bigger.


AMD was, but focused on HPC instead of ML. They have very large installations, but the cards are tuned for fp32+ calculations, not lower precisions like NVIDIA's.


The other side effect of HPC, I assume, is that compatibility and dev tooling are non-concerns.

You provide a BLAS implementation, and your primary customer can take it from there.

As opposed to Nvidia's current market, which spans HPC to AWS/Azure/GCP to corporate datacenter to "I followed the directions on Reddit" personal machines.


> I really fail to see what people see even in Nvidia/Cuda, it won't take long until others catch up, and they'll likely move back to margins closer to the industry, which are extremely low

I thought that would happen by now too, in 2016, so I started slowly selling much of my NVDA stake after a few years. This was wrong. It turns out they have created an ecosystem effect where everyone makes their papers and code work in CUDA and everything else is best effort. AMD hasn’t even really been seriously trying from my point of view, but once they do they’re still trying to bail water with a leaky bucket - new models come out every day where the researchers have run and tested on CUDA only.


There wasn't much money on the table back then. Now there is.


It’s not just a matter of catching up with Nvidia hardware - and who is going to do it? One of the major cloud providers? AMD? Apple?

Say they do catch up with the hardware, are they also going to convince the entire industry to switch from CUDA?


> and who is going to do it?

One of the big-enough generative/LLM AI users would be my guess. OpenAI, for example, even seems to be moving towards making its own chips as an alternative, since Nvidia's offerings are too expensive.

> are they also going to convince the entire industry to switch from CUDA?

Statistically unlikely; for example, ARM definitely disrupted MIPS in several embedded networking niches, but didn't convince the entire industry to switch from MIPS.


You realize the type of expertise and money needed to design a processor? That's not even to mention the loads of money you need to bring to TSMC to even get capacity. For cutting-edge fabrication, you're still going to be behind Apple and Nvidia in getting capacity.


OpenAI doesn't need a general-purpose GPU like Nvidia's. It has two major tasks that could benefit from custom ASICs that we know of: training and inference, for which efficient designs are different.

Designing a specialised ASIC is much easier and cheaper than a general-purpose, public-use processor for a wide range of applications. You don't need most of the programmability or even half the compute and fancy memory and scheduling units. You also don't need to develop a rich API (like CUDA or Vulkan) for third parties to use.

This holds even for state of the art intensive calculation engines with minimised energy consumption. Think of all the crypto-mining ASICs that were built a few years ago. They were relatively cheap to design and optimised for calculations per energy unit.


Your comment is straight to the point, just see how fast ASICs have destroyed the idea of using graphics cards to mine crypto.

I find it very likely that in a year or two we'll have not only cheaper but also faster hardware that can do training and inference.


> I really fail to see what people see even in Nvidia/Cuda

They're a nice middle ground between a GP CPU and ASICs that is well suited to the kinds of workloads that venture capitalists are, for the moment, interested in investing in. And it's possible that there are enough applications that benefit from GPU architecture that it's more efficient to force a GPU to pretend to be a CPU instead of having a CPU.

Most likely it's just another cycle of the RISC vs CISC pendulum. Like cloud vs desktop, physical vs virtual, serial vs parallel, monolithic vs micro, etc. We'll stick with CUDA and use it to solve as many problems as we can with that method, and then as we discover new ways to create or organize GP CPUs, we'll shift back to those until we find that problems get hard to solve again.


> there's a whole industry of embarked software that will likely be much bigger.

What is "embarked"? Genuinely curious if this is a misspelling of embedded or if it's something else entirely.


Embedded but on a boat rather than a bed


Don't forget about the crypto scene that generated a lot of demand for GPUs. Then one day Ethereum finally moved to PoS.


It's the same story being played over again and nobody is seeing it.

AI has a lot of similarities to the crypto bubble, but people keep repeating this "ah, but now it is useful!", sure it is. I like ChatGPT, but will people really use all those new AI products being built?

Most things in the real world need an accuracy beyond what AI can offer now, and will likely be able to offer for a few decades ahead; go learn about AI and any expert will show you that, for example, LLMs can't solve some basic problems.

I'm not saying AI will never be as smart as humans, but rather that it will take longer until we've figured out the stuff that people are already pricing into markets as if it will happen in a year.


When the governments see that their investments or subsidies could not break the monopoly of TSMC, I wonder if they’ll change their geopolitical strategy. Instead of protecting Taiwan for silicon supply stability, they’ll encourage an all out war between Taiwan and China. By destroying the head of the monopoly, the fabs in their countries can compete against the market again.


So… why though? Is the market really not mature enough to anticipate the bullwhip?


The business cycle in general is an unsolved problem, and I think enough companies were investing during COVID to meet the shortages that they over-invested for the demand impacts of the Russia-Ukraine war and its effect on Western inflation.


Nvidia because of bitcoin?


What a sensational way to write this headline. Focusing on profits vs the 11% decline in revenues, which is a more salient number.


Reporters love doing this, and I agree with you, it's always for pure clickbait value. I mean, when a company swings from a profit to a loss (which happens all the time), you rarely see a headline "Profits fell 240%!" because reporters know that would be a confusing way to write it, and it would really call out how dumb it is to use change in profit in the headline.


I read the headline and thought "still making a profit! sweet!"


Profit is more important than revenue.


No. It’s not.

The only time that’s true is if your revenues are flat year over year.


> The only time that’s true is if your revenues are flat year over year.

That's only true in an environment that values growth dramatically over profit.

With non-negative real interest rates - profits actually matter.


The only time the GP is false is when companies are reinvesting profits in a way that makes it look like costs.

But relative changes in revenue are much more impactful than relative changes in profit. This one is obvious, because revenues are much larger and bounded at a minimum of 0.


I'm a bit confused. These statements seem to contradict each other?

The only time the GP is false is when companies are reinvesting profits in a way that makes it look like costs. (GP said profits are more important)

But relative changes in revenue are much more impactful than relative changes in profit (you said revenue is more important)


> you said revenue is more important

No. What I said means that relative change isn't a good metric for any of them. But it's much less bad for revenue than for profit.


>> "The only time that’s true is if your revenues are flat year over year."

That also makes a lot of assumptions on its own. Baseline is: profits matter. Capitalists want their cut. Profit is the extraction of the "thing". If capitalists can't put the "thing" in their pockets then why even bother?


If you're an investor in a company, theoretically you can think of the company as a stream of future dividend payments (cash flows) discounted over time.

So maybe the company has $100 in profit in 2024, and they have 5 shares, and they pay out all of their profit as $20 dividend per share. And in 2025 you expect them to make $200 in profit, with the same 5 shares, which is $40 per share.

Although that's a theoretical model, and there are many companies that don't pay dividends at all and still command mighty stock prices (e.g. TSLA), this is how real companies are actually valued -- on the expectation of future cash flows, maybe times some multiple.
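
As a toy sketch of that discounting idea (the dividends and the 10% discount rate are made-up illustration numbers, not a claim about any real company):

    # Value a share as the sum of expected future dividends, discounted back to today.
    def present_value(dividends, discount_rate):
        return sum(d / (1 + discount_rate) ** t
                   for t, d in enumerate(dividends, start=1))

    expected_dividends = [20, 40, 40, 40, 40]                  # $ per share, years 1..5
    print(round(present_value(expected_dividends, 0.10), 2))   # ~133.45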

The reason why revenue is more important than profits for the vast majority of technology / growth-stage companies is that most of these companies are focused on creating double-digit percent revenue growth every single year. Their profits will be 0 or negative because all of the money that they make from selling widgets is getting spent on marketing or hiring new engineers to build more widgets, or building a factory.

But after 5-10 years, this company will have "scaled over" its fixed expenses like rent, they will have huge revenues, they'll have serious market share, and then you can simply slow down / turn off the marketing and other SG&A expenses, and then suddenly you go from being a 0 or negative profit company to a multi-billion $ profit company.

But if you're too focused on clipping coupons and scrounging pennies as a startup founder, you aren't focused on the right aspect: revenue growth.


> Although that's a theoretical model, and there's many companies that don't pay dividends at all and still command mighty stock prices (e.g. TSLA) this is how real companies are actually valued

Gotta look at stock buybacks too, which are effectively the same thing but more tax efficient for investors. (Though TSLA doesn't do significant amounts of that, and many other giant companies also don't do it relative to profits either, so your general point still stands)


Yes, but from the perspective of the theoretical model, the reason stock buybacks increase share price is that now the (theoretically unchanged) dividend payouts are getting divided among fewer shares outstanding.

The theory is useful to understand that there's some basis to the share value of equities although it probably doesn't have a lot of practical application.
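
Spelled out with the toy numbers from upthread ($100 of total payouts, 5 shares, then one share retired):

    total_payout = 100
    print(total_payout / 5)   # 20.0 per share before the buyback
    print(total_payout / 4)   # 25.0 per share after retiring one share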


Or if your job is to make a profit.


> Analysts see its semiconductor business — typically Samsung’s cash cow — reporting a more than 3 trillion won ($2.2 trillion) loss for the third quarter

Is this some kind of reporting error, or is that number accurate?


It should be $2.2 billion.


No offense, but how the hell could it be accurate?


It seemed high, but (a) CNBC has a decent reputation for financial reporting, and (b) I know Samsung is big but I didn't know exactly how big, and (c) I hadn't had coffee yet.


The dollar conversion looks wrong for sure


1USD = 1349KRW
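
Quick sanity check at that rate (matching the correction quoted below):

    won = 3_000_000_000_000              # the reported ~3 trillion won loss
    krw_per_usd = 1349
    print(f"${won / krw_per_usd:,.0f}")  # ≈ $2,223,869,533 -- billions, not trillions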


"Correction: The key points of this article have been updated to reflect that 3 trillion won is equivalent to $2.2 billion."

from TFA


I remember an old article on HBR that said semiconductor fabs are now underloaded (it predicted about 70% load), so it makes a lot of sense to create fabless manufacturers.

Looks like AMD was right to separate its fabs and become fabless, while SEC (Samsung) for some reason cannot do it and suffers losses.


The player in the worst situation in this market is Kioxia. They only make NAND chips, unlike Micron with DRAM or the Korean conglomerates.





