I worked at Nokia Research Center for a while when Nokia was still the market leader in smartphones. My observation is that most companies that have a research center also have some level of disconnect between their main business and their research branches. The two just operate on different timelines.
More worrying and problematic is that there is also a notion of "not invented here", where business units have very little patience for things coming out of research units because they weren't involved. It's petty and stupid, and in Nokia's case a large part of the reason why Apple and Google ate its lunch. It's not that Nokia did not have competing technology but that it didn't know what to do with it.
Corporate R&D works best when corporate leadership has some level of affinity with it. Elon Musk is a great example of a true R&D-minded leader. All his companies are basically R&D labs that happen to produce insanely valuable products as a side effect. It takes a certain type of leadership to funnel billions to the right projects and then turn them into successful products. It requires something most business leaders simply don't have: a clue. Elon Musk, for all his flaws, understands technology deeply. Most of his contemporaries in the car industry are clueless bean counters who know little more about cars than the number of wheels they are typically equipped with. I bet pretty much all of them had R&D labs 20 years ago working on exactly the kind of things that Tesla brought to market successfully. They just bungled turning that into product because of a lack of vision & technical leadership.
I remember reading in the early iPhone days that Apple recognised virtually no spending on R&D. That wasn’t because they didn’t do research or develop new products; it’s that they simply didn’t see that as separate from their normal day-to-day operations. I think that’s the same attitude as you’re describing with Elon Musk. It’s all R&D, all the time. That’s just how products are developed.
Now it’s different at Apple in that they do recognise a lot of spending on R&D, but I think that’s for tax reasons.
> Apple's ATG was the birthplace of Color QuickDraw, QuickTime, QuickTime VR, QuickDraw 3D, QuickRing, 3DMF the 3D metafile graphics format, ColorSync, HyperCard, Apple events, AppleScript, Apple's PlainTalk speech recognition software, Apple Data Detectors, the V-Twin software for indexing, storing, and searching text documents, Macintalk Pro Speech Synthesis, the Newton handwriting recognizer,[5] the component software technology leading to OpenDoc, MCF, HotSauce, Squeak, and the children's programming environment Cocoa (a trademark Apple later reused for its otherwise unrelated Cocoa application frameworks).
Here are some blog posts by ATG's Jim Miller giving some background on concepts such as Data Detectors.
Exactly, Steve Jobs would involve himself deeply with product development and be in the loop on a lot of product detail.
What you don't get to see with Apple is all the stuff they don't launch or that lingers on some shelf for years and years. They only talk openly about stuff that they are ready to show and productize, giving the false impression that these things just popped into existence.
By the time the iPhone shipped, they'd been working on it in some form or another for over half a decade. Giving people enough room to do stuff like that with an open-ended agenda and strategy is exactly how you move forward as a company. The value of people like Steve Jobs and Elon Musk is recognizing the strategic value of efforts like this at an early stage and then making sure it happens. That requires being clued in enough to recognize the strategic value and focused enough to then make it happen. If you look at the timing, this was shortly after they launched OS X and apparently went "what else can we do with this?" (iOS is based on the same architecture).
Corporate R&D labs are where this stuff gets parked by companies that are too impatient to wait for that. That's exactly what happened in Nokia.
Please could you share your education, position(s) and experience? While reading your comment I am not sure whether you’re just quoting from some book, or if it is first hand insights/knowledge (it sounds like the second, so it’d be valuable to add the above if so).
> That requires being clued in enough to recognize the strategic value and focused enough to then make it happen.
Certainly investor confidence in you / your company's ability to do this, and this matching their investment goals, is also a factor.
These two examples are widely hailed in media worldwide as 'innovators', and are able to get long-term strategic funding to innovate. Most CEOs aren't, and can't. Not sure of the cause, but it's certainly a factor somehow.
Interestingly, before shareholder-value primacy reached the last exec, the companies with big research units were economic and innovative powerhouses. E.g. think of the Olivetti corporation (which sadly died after its leading personnel exited).
From what I understand, after reading through Musk's biography, he is not a typical 'research'-oriented person. A typical researcher is focused on refining a body of work that is already at the cutting edge. This research-first mindset is about pushing the boundaries of research, and the most efficient way to do that is by publishing papers etc.
Musk, on the other hand, is a problem-first person and uses research to solve those problems. It's very different from a researcher's mindset. For example, Tesla's self-driving car unit is full of researchers, but they are not doing traditional research, i.e. pushing the boundaries of machine learning/computer vision. They are not there to win a research competition - the ones that gave rise to ResNet etc. They are very focused on using deep learning to solve the problems of self-driving.
The end goal for Musk is a product and research is a means of getting there; the end goal for a researcher is... well, research.
And people 'pushing the boundaries of machine learning/computer vision' are not there to win a research competition: they are just using linear algebra for their field of applied statistics.
Of course. Also, much of the progress, at least in computer vision, is attributable to competitions such as the ImageNet challenge. It's not my intent to downplay the importance of research or the competitions.
You hit the nail on the head with my previous experience as well. Being on the implementation side of things, my team was constantly suspicious of design ideas or mandates that had been developed by the R&D team. It was always the same litany of excuses. Something like their model doesn't reflect our implementation's reality (which would be a great problem to point out and fix if true...). In the end, it always felt like our engineers were the jocks to the R&D team's nerds, and nothing the R&D team did would be respected because they weren't building the "real" thing.
Maybe it's more fruitful to not just consider the Death of Corporate Research Labs, but also ask ourselves "why did they even exist in the first place?" In this day and age, the expectation, for better or worse, is that the State sponsors research; this was very much not the case from, say, 1850-1950. Luminaries did tons of basic science research under the auspices of corporate (like Irving Langmuir) and private (like Peter Mitchell) labs.
Without assigning causality (I happen to think that the primary cause is "low hanging fruit" phenomenon), the transition to state sponsored research has coincided with a general loss of scientific productivity in many fields, and even in some fields with explosive growth, like genomics, there has been major corporate involvement (half the human genome was sequenced by a not just private, but a wall-street traded, entity).
Yet the narrative of late has been "if the state doesn't sponsor it, nobody will". I don't feel like I have a solid understanding of why corporate research labs were a thing, or even, how science got done in the pre-manhattan-project/Vannevar Bush era.
None of the bedrock game-changer research from that period - EM theory, thermodynamics, radioactivity, QM, relativity, the original discovery of DNA - was funded by corporations.
Commercial research was responsible for conceptual refinements of existing insights, and for innovations developed as industrial and consumer applications, but not for the original intellectual breakthroughs that made specific classes of inventions and devices thinkable.
Computing itself was state-sponsored - first with Turing+Church's theoretical work, then with various practical implementations whose development was accelerated by WWII, then with state subsidies and grants for further military and industrial development.
Recently state funding has become less and less effective because of the financialisation of academia. Universities now have explicit financial goals and lean heavily on the grant funding game.
Previously the culture was far more tolerant of blue sky inquiry that might not have an immediate payoff - or any payoff at all.
Thermodynamics is an interesting one, because the theory was created to explain the reality of steam engines, a commercial invention. At the time of the first steam engines (Newcomen's is a good starting point), the most popular theories (Aristotle's) declared that there could not be a vacuum.
I would also suggest that computing began no later than Babbage; Turing definitely contributed to the field, but Babbage and Clement kicked it off (largely with government funding).
> Recently state funding has become less and less effective because of the financialisation of academia.
That, along with the decrease in overall amount. State funding for research seems to be more limited than it used to be during much of the last century; especially if we normalize by population size.
Just assuming a normal curve - regardless of the threshold for "minimum ability to count as a researcher" - one would expect, between two otherwise comparable populations of different sizes, more people capable of and actively driving research as population grows.
Higher GDP correlates with higher per-person productivity and more specialization to decommoditize workers versus global labor. Plus, pushing the envelope often requires more funding for novel projects, as there isn't an off-the-shelf option to reduce the costs of something novel.
They aren't perfect measures by any means (there exist things which boost either without boosting research), but they are decent enough proxies.
A single researcher is rarely productive on their own. Effective research requires a tight network of likeminded individuals and support structure.
If the number of corporations needing new discoveries keeps growing but the funding is fixed, that means either the researchers are spread thin or there’s a centralization of research that has a hard time transitioning into the wild. (Both situations exist, and it sucks.)
Replace 'single researcher' with 'a fixed size research group' in my comment.
Your argument about a minimum size research group for productivity is interesting. I am not quite sure whether that means we should normalise by population size, though.
> None of the bedrock game-changer research from that period - EM theory, thermodynamics, radioactivity, QM, relativity, the original discovery of DNA - was funded by corporations.
Ever heard of the Solvay conferences?
While not embedded in one company, these conferences were definitely sponsored by an industrialist, and strongly impacted 20th-century advancements in physics.
One thing to remember is that not all research was being done in corporate labs. A lot of it was being done in universities, as it is today. How was it funded? By a lot of competitive funding agencies similar to today's NIH/NSF, except run privately. It's interesting that you bring up Vannevar Bush. He actually ran one of these private funding agencies (The Carnegie Institution) before being tapped to run the Office of Scientific Research and Development, and modeled this (and the NSF, which he later ran) after the Carnegie -- that is, he had scientists write proposals which were evaluated by their peers, and the ones judged best were funded.
People seem to get confused on what actually goes into bringing a new drug to market. Yes, a significant amount of basic research is done through government funding. But discovering a new drug target is about 20% of the way to a new drug.
You still need someone to develop new candidates, screen them, identify promising compounds, bring it through years of clinical trials, develop a manufacturing process, get FDA approval and then sell it.
A good rule of thumb is that the cost of R&D is typically 1/3 research and 2/3 development. Development is almost always entirely private. And a significant amount of research is too.
When people talk about the death of corporate research labs, they're largely talking about the death of fundamental research -- places like PARC, Bell Labs, etc. Corporate research to take fundamental discoveries and find better ways of productizing it is alive and well.
At least from the pharmaceutical perspective, there has definitely been a shift away from basic research. Why? Well, in the past, it was drug companies that had the money to actually fund it. It wasn't a big deal to spend $1M on the latest lab equipment. We had university researchers asking for time on our equipment.
From talking to some of the old-timers, universities started to get better and better funding (NIH grants, royalties, etc) and eventually could buy the equipment they needed themselves, no need to rely on drug companies.
So it might be more accurate to say that basic research funding has grown and most of that growth happened on the university side. Pharma companies have either held steady or decreased their spend on basic research (it really depends on the company).
I know these comments are semi-discouraged, but I honestly found your response to be so insightful that I wanted to thank you for sharing your perspective. At least, it fits with what I've seen (GF in PhD program, stepdad has an extremely successful lab at a non-profit institute doing basic science research in immunology), but not seen stated explicitly.
Pre-1950 may have been different. The NYS Department of Health Lab developed antitoxins for diseases like anthrax, and antifungals (e.g. nystatin), in that era.
My comment was more about the use of "entirely" in the OP's statement. There was a lot of drug research done before the 1950s, and there still is today, but before 1950 it certainly wasn't only the government doing it.
Certainly with the costs associated with drug approvals being exponentially higher today, only the Federal government could possibly be in the drug business.
Basic research is one of the riskiest aspects though. It is much easier to justify internal research on a product where basic research shows that it works and suggests it will scale. MUCH harder when it is a moonshot. You'll notice there's a difference in capital between companies that do the former vs the latter.
True. Basic research can take years to yield anything useful. However, basic research is also relatively low cost compared to the cost of, say, running a clinical trial.
A single phase 3 clinical trial can run into the hundreds of millions of dollars. You can fund a lot of basic research for that kind of money.
When I was in grad school, my PI got grants of ~$2M a year and that supported a lab of ~10 researchers.
> When I was in grad school, my PI got grants of ~$2M a year and that supported a lab of ~10 researchers.
There's a big pay difference between grad students, industry researchers, and government researchers, with grad students being the lowest on the totem pole. Which should make sense because they are researchers in training. There's many different types of research too.
That's true, but even if you quadrupled the costs to account for market rate salaries, you're still two orders of magnitude smaller than typical development costs.
I'm not saying Phase III isn't really expensive - it is. But you're also ignoring the failure rates of Phase I and Phase II. If you only count the Phase I's that make it to Phase III and beyond, then you're ignoring >80% of the basic research cost.
Big Pharma is a classic example of what happens when private enterprise becomes so powerful that "regulatory capture" occurs. They can charge what they like, and undertake other dirty marketing practices, because nobody can compete.
No, they pivoted to shotgun sequencing earlier, which had better scaling properties because it's self-decoupled. There was a hangup about it only affording asymptotic completeness (aka completeness was "only theoretical"), and then they realized how silly that statement is, and how the decoupling trade-off gave them an advantage because of their deep pockets.
> In this day and age, the expectation, for better or worse, is that the State sponsors research
But the state (at least the US state) has been cutting spending on research over the last several years - and those cuts have accelerated under the current administration which does not seem to understand the value of basic scientific research.
Cutting research is an overstatement: US federal R&D funding has been increasing, just at a significantly lower rate than GDP. Since it's ideas being created and not widgets, you'd expect the same number of researchers to create at least as much positive impact for society through idea generation.
It seems obvious that they haven't.
I don't think it's anything nefarious about government funding (or lack thereof) that did it; it's just we've already picked all the low hanging fruit. Plucking all the undiscovered ideas at the frontier of knowledge will be increasingly expensive, because if they were cheap to discover, we already would have.
Though I always wonder if there's some trove of cheap ideas hanging out somewhere...
I totally agree that stuff gets harder, but in my specific area of expertise, programming languages, I see the perfect example of untapped engineering potential. Thus, when others say there is an R&D pipeline problem in other fields, I'm inclined to give them the benefit of the doubt.
Also, if I may toot my own horn, I think better programming languages have the potential to revolutionize all knowledge work, making everything else more productive and helping unlock whatever other basic research might remain woefully untapped.
I'd love to hear some more details of the untapped engineering potential!
I have to admit some skepticism. What follows are my opinionated thoughts as a naif: new languages have helped a lot in creating new models of social organization. To be concrete, the significant 90s languages were forgiving enough, powerful enough, and easy enough to learn that they allowed for a kind of triumph of the programmer proletariat over the artisans. That allowed for the explosion of information technology that we've been observing for the past 3 decades, which enables new business models and makes old ones substantially more efficient.
But I don't think a contemporary PL person would think of Java as a prime exemplar of the value that the research community brings to the world. And when I think of more recent language innovation (Go, Rust), they've only been incremental improvements that were anticipated decades ago. And although I love them both, neither has shown any signs of enabling new social modes like earlier languages did: my like of them is purely aesthetic. All the real, productive low hanging PL fruit has already been taken.
> But I don't think a contemporary PL person would think of Java as a prime exemplar of the value that the research community brings to the world.
For someone focused on the language itself, I agree. But consider the JVM: an enormous amount of research has been done on, and applied to, the JVM. Garbage collection and just-in-time compilation are two particular fields that the JVM has benefited from enormously.
APL was created a long time ago and still we aren't using array languages. It's not like we haven't come up with great ideas (statically typed FP and array languages); it's just that the industry is too pathetic to actually capitalize on them.
APL is alive and well in NumPy and R; good ideas typically get diffused even to industry. However, it only adopts what fits in the existing systems. Good ideas that cannot be adopted incrementally tend to not diffuse well (e.g. dataflow parallel programming).
> New languages have helped a lot in creating new models of social organization
Glad you agree!
> To be concrete, the significant 90s languages were forgiving enough, powerful enough, and easy enough to learn that they allowed for a kind of triumph of the programmer proletariat over the artisans.
In terms of practice, yes, the languages that would go on to be popular and didn't drag one down with manual memory management were created in the 90s. The technology was older.
> Java as a prime exemplar of the value that the research community brings to the world
Agreed. Still not as un-innovative as Go though!
> And when I think of more recent language innovation (Go, Rust), they've only been incremental improvements that were anticipated decades ago.
Rust fully admits it is recycling old research. Its productivity gains are diminished by the fact that it is targeting systems programming, which is less productive. If somebody made a Gc<T>, and made an implicitly garbage-collected version of Rust to compete with Go and Java (of course with great FFI to regular Rust for free), it would already be a better candidate than any other popular language, and that's with no research required.
> I'd love to hear some more details of the untapped engineering potential!
So I am a Haskeller. I would say there are 3 tracks that interest me:
1. The "classic track" of fancier type systems. The Dependent Haskell work is interesting because it should allow the hodge-podge of language extensions we have today to be streamlined into fewer features, reducing complexity. Writing proofs means less productivity, however, combining libraries with lemmas might be extremely productive. Think being able to blindly mash stack overflow answers together with extreme fidelity. That should be a goal.
2. The "categorical strack". Lambda the ultimate....mistake :D. Our programs are higher order, but there is often a first order steady state, like the things we put on whiteboards for our managers. Our current practices utterly fail to capture that first order steady state anywhere in the code, and as languages become "more functional" (java 8+, ES whatever, etc.) we're basically throwing more lambdas at problems without principle.
Using some categorical primitives---and I mean the "realer" ones in https://hackage.haskell.org/package/categories, not the ones in https://hackage.haskell.org/package/base that need a lot of squinting to make out the original math---gives us a chance to perhaps fix this. Categories putting the output and input on equal footing helps, and the simple wire diagram makes the plumbing / dataflow way of thinking that every programmer should employ much more apparent. "Point-free" programming sucks, but I ultimately think we can get something like Scratch that makes sense for programmer-adjacent types, and yet is useful for "actual work". (Again, see the sketch after the list.)
3. The "incremental track". There is a decent amount of literature around incremental / reactive programming and models of computation, but it needs to be properly synthesized. I think this is will be huge because it will improve techniques that actually match what real world programs do (toy "everything terminates" stuff from school is actively harmful when students extrapolate that theory has nothing to do with practice). This will defeat an entire class of concurrency woahs that currently is a huge source of productivity loss---I don't mean just race conditions, but the more general "how can I analysis (with the ooriginal "break apart" connotations) my program into simply peaces and then synthesize a holistic understanding". In terms of how this will actually happen, this strongly ties into the above.
Let me say 2 more things:
1. we should have "one language to rule them all", in that I can write my firmware, kernel, end applications, and everything in between without compromise in it. People think this is silly - "engineering is tradeoffs, amirite?" - but it isn't. The language would become a "dialect continuum" that supports vastly different, clashing idioms for those different domains (GC everywhere? Manual memory management? No call stack even?), but with flawless, type-safe FFI between them.
2. Non-programmers think "applied math" means plugging in numbers and writing regular natural-language prose to go with it. Math isn't numbers, and people's notion of it must be fixed accordingly. Proof theory means the whole argument, not just the numerical processing, can be formalized. This is how many jobs should work.
Thanks for the meaty comment! I was expecting a bunch of stuff in the "fancier type system" category and was pleasantly surprised.
Do you have any references to share to learn more about potential practical uses of categories? Assume the knowledge of someone who isn't intimidated by the wiki page on categories but who learns by reading it.
> I was expecting a bunch of stuff in the "fancier type system" category and was pleasantly surprised.
Yeah after enough functional programming, one's sense of the new lowest-hanging fruit switches.
> Do you have any references to share to learn more about potential practical uses of categories?
I'm afraid not. I suppose there are plenty of blogs on Haskell and category theory, and the nLab for the actual math, but I don't know of a resource that zooms out from the neat tricks and tries to discuss broad needs for better programming and architecture.
But I'm relieved to say that where I work, we are working on some things in the vein of things for tracks 2 and 3. It will be open sourced when it's ready, so... stay tuned, I guess?
That's untrue and not really relevant to the point anyway. Spending changes are small relative to the total spending, and have been increasing, for the most part.
NIH budget for example increased from $32 billion in 2016[1] to $41.68 billion in 2020. Trump has proposed a 6% cut for the 2021 NIH budget but that is not finalized yet.[2]
NSF budget increased 2.5% from 2019 to 2020, to $8.3 billion. [3]
It isn't the budgets, it's the scope (at least in IT/computing). In the last decade(s) there has been a move away from "blue sky" research to focusing on ideas that can be implemented in short timeframes like one year or less. "Little r, Big D" it's sometimes called. Places that used to fund core research in higher risk sectors have scaled back to focus on things that can be transitioned into products within 1-2 years.
Doctoral students are absurdly good value for money, especially in nations that have a high density of high-prestige universities, and they both train the workforce and are trained. So I think it's an obvious move for states (especially ones with a ton of grand old universities) to milk that for all it's worth. It's a way of transforming historical prestige, and a smattering of cash, into a competitive economic advantage.
I guess the flipside of this is that, for a business, it makes way more sense to give money to a university than it does to start your own lab. You'd be paying real (and probably decent) wages to people who wouldn't be nearly as motivated as the phds who do the same work for less than minimum wage.
I'm sure it's not the whole answer, but given large profitable companies with (for the moment) unassailable market positions, the answer may come down to "Because they can."
Everyone likely agrees that most large companies should have some longer-term speculative projects going on. As those companies get wealthier and fatter, those research organizations want to grow, they help position the company as an "innovator," and they may even have an OK payback over a long enough time horizon.
As to that time horizon, people have been arguing about it, and it's swung back and forth in different forms for as long as I've been in the industry.
I think it's also true that there were always a relatively few large true research labs. There's a reason that, say, Bell Labs, Xerox PARC, and IBM Research come up (in roughly that order) and then most people have trouble naming examples.
RCA had one, and the amount of money they wasted trying to invent three different barely-working home video systems is arguably a big factor on why they went bankrupt.
Maybe separating state-funded research from privately funded research is the wrong perspective: for example, in France there was a major entanglement of private and public structures (organizations, funding, people) after WW2, and that is still the case, even at the European level (is research done in state-funded labs with/for Airbus in Toulouse public or private research? Does it even make sense to try to disentangle state, private, and European funding there? Who really decided that there must be aeronautical research here and now?).
A more dynamic view might better explain what’s going on: the flow of public money, the grant criteria, the state-funded researchers gone to work for private companies for some years and then coming back…
Some example of this entanglement:
- in France any company can ask for a dedicated Credit Impôt Recherche, a tax credit for research purposes. Most seem granted for projects that are very light on fundamental research. That allowed many small companies to set up research/lab teams. It seems a more lightweight continuation of the European policy to help companies to integrate research and to create consortiums.
- when looking up public AI research in France, I found that nearly all the top AI/ML researchers based in France have been recruited by Google and Facebook. They are still fully part of their original, state-funded labs, but now work and are paid by Google/Facebook. They seem to still do the same kind of research they were doing at their state-funded labs.
I’m sure there’s other examples in other countries
In the old days most of the corporate research was in the physical sciences because the products, large "mainframe" computers and their peripherals, were very mechanical and cast in hardware. The advances of the next generation of product were driven by faster chips, more dense storage, printers that handled more complex print streams, faster networks, better user interfaces, etc. Now that software and interoperability standards have eaten the world and made IT hardware a commodity, the competitive advantage rarely comes from physical science advancement. Relatively few companies now need physical hardware improvements to drive the next-gen product: Intel, AMD, 3M, storage vendors, maybe a few more. The barriers to entry in a software world are so low that buying startups is more cost effective than investing in your own people. There's a survivorship bias (I think) in looking at what startups get acquired. For every one that gets acquired there are probably 10+ that don't. Why would a corporate CEO invest in 11 research projects knowing only one might pan out if he could just wait and pick a winner from 100 startups?
I think part of it is a scale question--there are a number of discoveries that require a huge investment in both manpower and equipment. Beyond the work of corporate research labs, there are some discoveries that don't have immediate payoff--think for example general relativity. Yet, attempts to measure it led eventually to atomic clocks and to GPS.
Implying research today is anywhere close to research of the past. The amount of capital and complexity to do fundamental research far eclipses what was required in the past.
This is absolutely the reason. So much was unknown at the time that a huge chunk of inventions and discoveries were completely accidental. There's no way a young adult can build a particle accelerator and play with it in his basement and discover something new. But countless people did it in the past with chemicals, electricity, pendulums, optical devices, and mechanical contraptions.
Maybe as engineering got more advanced economies of scale paid off more and more. We don't get all our food from family farms either.
Not trying to be some fascist statist, let's just recognize that since the 1980s, we've not been in a new gilded age so much as a gilded age cosplay (e.g. we are not actually closer to well functioning free-market capitalism for the "real" economy), and growth and large-scale investment have stalled accordingly. A lot of libertarian and decentralization ideology has grown up in the era since under false pretenses.
I would love devolved power and decentralization to actually work, but we need better, more honest theory on how to make it good, and less blind faith that centralization and monopolization will fail because that's bad and "the arc of history bends towards justice".
> I would love devolved power and decentralization to actually work, but we need better, more honest theory on how to make it good, and less blind faith that centralization and monopolization will fail because that's bad and "the arc of history bends towards justice".
I mean come on. The only way you will get to that theory is if you have brave people who try (maybe that's self serving), and if you aren't at least questioning the narrative of "the only way forward is through the state", especially when there is (increasingly sporadic, anecdotal) evidence to the contrary, then you are tacitly discouraging people to try.
I am trying? For example I like to work on Nixpkgs because I think it is the Wikipedia of the software commons. I am doing some things with Nix and IPFS at the moment for similar reasons.
I share my doubts here because HN tends to overly romanticize small organizations. I don't mean to tell people to give up and work for FAANG.
As a (non-PhD) researcher who works for one of the last traditional industrial research labs in Silicon Valley, I've noticed this trend for years. I've seen industrial research labs either close or become increasingly focused on short-term engineering goals. I also have noticed that newer Silicon Valley companies have adopted what Google calls a "hybrid approach" (https://research.google/pubs/pub38149/) where there are no divisions between research groups and product groups, and where researchers are expected to write production-quality code. I've noticed many of my PhD-holding peers taking software engineering positions when they finish their PhD programs, and I also noticed more people who were formerly employed as researchers at places like IBM Almaden and HP Labs switch to software engineering positions at FAANG or unicorn companies.
Unfortunately, as someone who is also working toward finishing a PhD, I've seen very little guides for CS PhD students that reflect this reality. To be honest, I love research and I'd love to stay a researcher throughout my career, but I don't have the same love for software engineering, though I am comfortable coding. Unfortunately with these trends, traditional research is now largely confined to academia, which is very competitive to enter and where COVID-19's effects on its future are uncertain at this time, and federal laboratories such as Los Alamos and Lawrence Livermore. I fear losing my job (nothing lasts forever) and having to grind LeetCode since there are few other industrial research labs, though given the reality maybe I should start grinding LeetCode anyway.
When I was doing my PhD (CompSci) in the late 2000s, I looked forward to joining Microsoft Research because it seemed they were doing really cool stuff, even at a time when "corporate" Microsoft was the bad kid (me being very pro-Linux at the time).
One thing led to another and I ended up going to another Research Assistant role and then getting fed up with research and going into industry.
But the idea of the Xerox lab, Sun Microsystems and later Microsoft Research was something I always really dreamt about. Nowadays I think startups doing autonomous vehicle technology are what look similarly disruptive to me, but I am too far down the SaaS rabbit hole.
MS Research does cool stuff but MS is comically bad at turning the results into successful commercial products. By all rights, in terms of research results and focus, the iPad should have been a Microsoft product. Microsoft remains good at making faster horses...
You only mentioned the DOE national labs in passing - I encourage you to give them more thought. I’m currently a researcher at one of them, and if you land in the right group/application area it can be truly meaningful work.
If you’re in the bay I encourage a closer look at LBNL and LLNL, if you’re open to moving, NREL is in a great location too. Obviously Sandia, LANL, ORNL, and PNNL all do good applied CS work as well but the locations are a little more remote and options are better if you can get/hold a clearance.
I know a (non-PhD) researcher who moved on after one of those labs closed and sadly he finds it impossible to get a research job again. Academia is his only option but he'll have to earn a PhD before getting such a job...
It’s a tough situation, especially for non-PhD holders. Even with a PhD it’s possible for a researcher to enter a rough patch where a career change looks inevitable. The last time I was unemployed I started sending out my résumé to various community colleges and universities to enter their part-time/adjunct lecturer pools (I have a masters), but unfortunately my lay-off period didn’t match well with the academic hiring period. I managed to get a few interviews for software engineering positions, but the algorithms-based interviewing bar is very high these days for a software engineering job, and admittedly my software engineering chops degraded due to spending years in research. Ultimately I ended up finding an AI residency program at a research lab, which is how I ended up at my current position, but it required switching research areas and taking a minor salary drop.
"Job" or "research job"? I mean, yeah, it's tough to find a job purely navel-gazing without a PhD (if then), but we hired all the refugees from PARC, Digital Equipment WRL and Olivetti Research Center we could. Most of them made the transition to a more product-focused, results-measured workplace. Maybe adjusting expectations would help.
You should start grinding LeetCode if you plan on interviewing for industry jobs, because it might help you survive those awful white board/algorithm puzzle interviews.
As a rule of thumb I would say that building something "production-quality" takes at least ten times longer than building something I and close collaborators can use. By having researchers build production-quality code, they spend a lot more time on software engineering and a lot less time researching. (And of course many great researchers are actually not that great at programming, so you probably lose many of them as well.)
Because the skill sets aren't the same. Research in and of itself is a different skill set. Getting a PhD is more about learning how to take abstract and vague ideas and turn them into a reality. Making highly readable, robust, and secure code is a different skill set. Sure, you can have people that do both, but at that point you're often asking a painter to do sculpting. Skills do translate, but not well. They probably won't be interested in it either, so you're not using your workers efficiently. Give the canvases to the people who love painting and give the marble to those that love sculpting. You'll probably get better art in the end.
It is pretty strange to blame the "more relaxed antitrust environment in the 1980s" when it was the 1982 anti-trust breakup of Ma Bell that destroyed Bell Labs and ended the monopoly profit flows that subsidized the telecom labs.
The Bell System, IBM and Xerox, which all had corporate research labs, were all monopolies in effect - the Bell System by regulation and the '56 Consent Decree, IBM by overwhelming market dominance, and Xerox by patent protections. This gave all three corporations excess monopoly profits, some of which could be devoted to research. Although the antitrust suit against IBM was settled, it did modify IBM's behavior, and IBM ultimately failed to compete well with mini and microcomputer companies. In Xerox's case, the expiration of patents allowed competitors into the market.
Without a monopoly a corporation will find it very difficult to justify research that can be used by other businesses.
The transistor was a disaster for the Bell System. It was not particularly useful in the telephony system of the day, which used high voltages in tube transmission equipment and high, intermittent currents in electromagnetic switching equipment. On the other hand, the 1956 Consent Decree forced the Bell System out of other businesses (audio, computer, consumer electronics, etc.) and out of other countries (Canada, Caribbean). So the Bell System could not use its new transistor to expand in the businesses where it fit, and having a fundamental patent on it exposed the Bell System to great anti-trust scrutiny and greater regulatory duress, such as the FCC Computer Inquiries.
>having a fundamental patent on it exposed the Bell System to great anti-trust scrutiny
I don't understand how this follows, having fundamental patents should lead to the breakup of basically every large US company by the government by this logic.
AT&T licensed the transistor to a number of companies, so even if the Bell System didn't have a use for the device (it did in fact, in order to reduce the size of telephone exchanges), they derived revenue from the invention. How this translates into a 'disaster' for the Bell System is unclear, since it was eventually broken up because of the decision that long distance and local service should be different markets, long after the patent on the transistor had expired.
Subsequent to '56, the prior transistor patents were royalty free, as were all Bell System patents prior to '56. Patents subsequent to '56 had to be licensed on a "reasonable" and "non-discriminatory" basis. So the patent output of Bell Labs was largely useful only for internal Bell System use, for leverage in cross-licensing agreement, and for a minimal royalty revenue compared with the value of the patents to other industries.
Long distance and local service are not really different markets from a customer point of view, at least no one thinks of them as such today. All calls have become flat-rate nationwide. They were different markets from a supplier point of view for a couple decades, since long distance required analog frequency division microwave transmission equipment that had to be engineered end-to-end and switching systems able to translate NPA and NNX into trunk group selections. The latter was expensive and distinguished "toll" from "local" switching systems. However, by the time of the breakup, digital fiber optics and large memory computer-controlled switches already were making those supplier distinctions rapidly obsolete.
The argument is that, if I invent the Telephone 2.0, I can pretty confidently predict I won't be allowed to capture so much of the profit from it. This provides an incentive for companies to shift their R&D spending towards product development rather than foundational research.
If you invent anything related to cell phone standards and it gets adopted by whatever committee sets the standard, you can become a part of the patent pool, agree to license under FRAND and make money from your research.
I believe Peter Thiel made a similar point when he said that monopolies (or at least, profit margins beyond sustenance) give companies breathing space to fund these types of things.
Peter Thiel's ideas about monopolies are ill-conceived.
In a competitive industry the life of corporations might be brutish, nasty and short. Yes. But the life of workers and customers can be quite nice---since companies are competing for them, too.
What are some examples of competitive industries with top-of-band comp? Based on what I've seen, when the going gets tough, companies would rather squeeze their workers than their investors (airline companies and pilots), since compensation falls within a range.
Do keep in mind that industries can be (un-) competitive in different ways. Eg there's a good argument to be made that banks don't compete in competitive markets for many of their products, but they sure compete for labour.
And I've wondered if the blurring of the line between research and product development at the likes of Google and Facebook (MS Research seems to still be an exception) is because they feel they have less breathing space?
Breathing space can also be taken up by stock market expectations. Did that old guard of research lab owners ever reach a similar level of valuation? My spontaneous guess explanation is that back then, stock markets were far more reluctant to price in immaterial advantages like market dominance or tech leadership if it wasn't backed by tangible assets like mining rights or factories.
A monopoly can only grow revenue by increasing prices, or by creating new use cases and growing the market. The latter tends to pay off better in the long-run.
Research tends to take a long time to hit the market; in a dynamic market there is no reason to expect that a company funding open research would get a strong first-mover advantage. A monopoly doesn't have this concern: Intel could fund semiconductor research with decade+ time horizons and still be the first to put it to use.
Modern business management doesn't care about the long-term. Research pays off future managers and owners in years, or decades, but increasing prices now pays off now in bonuses and dividends for current managers and owners. Behavior follows incentives.
I agree with you in general, but I don’t think any of the 5 big tech companies are only concerned about short term profits. Two are run by founders (Facebook, Amazon), two are run by hand picked successors of founders (Apple, Microsoft) and I can never tell what Google is doing. It has been rudderless for over a decade.
What makes you think that business management doesn't care about the long term?
Business management is rumoured to care about share prices. And as we can see in the current stock market, shareholders are long term enough to eg see past the current pandemic. They also managed to see past Tesla's losses or Amazon's slim profitability.
Yet Bell Labs wasn't the only corporate research lab. And in the big picture, those labs did disappear at the time that mergers became common and most markets turned into monopolies.
This article has a very good argument for why only the market leaders would invest in those labs, and yet why those labs were a product of the enforcement of competition rules.
It's worth noting that Bell Labs existed because the government recognized that the Bell monopoly was a social ill and insisted that to counteract the problem some of the money had to be earmarked for research.
Today the government doesn't even dream of that kind of regulation anymore. The idea of doing something purely for the public good seems to be lost in the race to make the quarterly numbers as big as possible.
The trend changed partly due to the 80s cult of shareholder-value worship espoused by Milton Friedman / Reagan that permanently altered the social contract of corporations. If the only responsibility of corporations is to shareholders, then long-term growth and investment are difficult to justify given the market's tendency to be so short-term focused.
These days, those corporate insiders have a lot of their wealth tied up in their company's stock. So while it's a terrible way to think about running a company, a lot of companies are run that way because it benefits the people running them.
> These days, those corporate insiders have a lot of their wealth tied up their company's stock.
Yes, that helps a bit to align incentives. In practice, it's only a start.
(It's especially interesting because investors these days tend to be broadly diversified and typically own shares in all the companies in an industry. Managers are typically highly concentrated.)
I think you're coming to the opposite conclusion I implied. Because corporate insiders' wealth is tied up in company stock, they are incentivized to optimize for the value of the stock, which may be counter to the long-term health of the company.
> [...], they are incentivized to optimize for the value of the stock, which may be counter to the long-term health of the company.
That would imply that shareholders are idiots. And especially that all hedgefunds are idiots as well. Otherwise you'd expect a lot of short-selling once the stock price exceeds the long term health, wouldn't you?
If everyone was rational and had full information, yes. But it can take a while for outsiders to catch on, and people are not fully rational. For two high profile examples, look at GE and Boeing. They had a reckoning, but it took a long time.
The operating companies were charged a 1% fee on revenues as a patent royalty and this funded Bell Labs Research, Systems Engineering, and part of Advanced Development of devices and systems. A similar amount was provided by Western Electric for advanced development, development and research on manufacturing technology. At times the research and development funding by the DoD was larger than the civilian part.
I was told that one reason that Bell Labs was formed from parts of AT&T Engineering and Western Electric Engineering was to keep the researchers from meddling with the business and to keep the business from meddling with the research. The latter was important because a strategic objective was to use technological change as a driver to prevent the monopoly telecom business from going to seed. Within Bell Labs there were often competing groups working on alternative next generation systems - analog versus digital, tube versus transistor, space division versus time division switching, coax versus microwave, microwave versus fiber, etc.
Here it's the Northern Electric 500, which is the same as the Western Electric 500 in the USA. The fact that I can no longer place a call from mine (due to the central office^W^W line card not supporting pulse dial) is deeply disappointing.
Yeah, another thing that happened in 1981 was Congress created the first R&D tax credit. I wonder if that impacted the structure of labs -- it definitely has increased net research spending but has a lot of rules.
Corporations still do invest large sums of money into long-term projects: most acquisitions as well as buildings are amortised over a decade or so, which is about the time-frame I would expect research to have, on average. Airplane and car development also comes to mind.
So I believe the complaint about "short-termism" has become a bit too popular. It's great to prop up your bona fides as a cynic. It's less good at explaining actual behaviour.
Instead, I believe research has simply become less profitable over time. The decades from about 1920 into the 1980 were a time of extraordinary rapid successes in the hard sciences: the scientific method had been established, and industry, finance, law, transport, and communications all made sudden jumps into modernity, allowing scientific institutions as we know them today to exist.
If you read about the history of physics, for example, you'll see photos of the entire class of 192x at some German university, and every single person on the photo later won a Nobel or had at least some minor unit named after them.
We have continued to improve institutions, infrastructure, and our ability to broaden the chance to get into the sciences. But, unfortunately, it's almost a rule of nature that progress slows over time. That's sort-of soothing, actually, because it means we aren't entirely incapable of prioritising the easier things.
Government-sponsored research can continue at higher prices. But private, for-profit endeavours have a very specific point where expected returns turn negative.
This 1000%. In the 40s, you invested a bunch of money into a research lab and you got the transistor. If only that kind of payoff could still happen today.
I don't get this complaint? The leverage for R&D is vastly higher now than ever before: You leave Zuckerberg in his dorm room and you get Facebook. You leave Larry and Sergey in a garage and you get Google. You leave Steve Jobs and Jony Ive together and you get dozens of products at Apple.
The money in vs money out in research has never been more absurdly positive in human history.
This is why companies aren't investing in R&D, because they don't need to do as much as they used to, because a little goes a lot farther.
None of those are breakthrough discoveries, hell most of those arguably weren't even the first to market. MySpace existed before Facebook, Search Engines existed before Google, Phones in Japan were just as advanced as the first iPhone.
I suspect the confusion here is that too much market-centered thinking, while essential for doing good business, can lead you to lose sight of "use value" in the big picture.
If Google search and Facebook were to disappear tomorrow, my life would barely change at all (it might even improve). Contrast Google and Facebook with: packet-switched networks, the internet, cellular and wireless networks, satellites, DNS. These things disappear and your life changes dramatically.
So in terms of market, yes, you can get some good ROI for research, but you're not getting flying cars (or various other technologies with such intrinsic potential to be life-changing) without a LOT of effort.
> Instead, I believe research has simply become less profitable over time.
Every era says this and every era is always wrong.
However, if you insulate businesses against monopoly prosecution and allow them to buy back their stock to manipulate the price, that's a LOT easier to manage than a research lab.
It also didn't help that stock became a "lottery ticket" rather than a "dividend source".
This type of article comes up from time to time, and is generally wrong on many points.
* Corporate Labs are not for primary research, at least in the United States. In the U.S., any primary research is done at universities, developed from research to prototype and patent at NASA, and then licensed to a U.S. company to go to product. For non-primary research, prototypes are developed by corporations via grants from government agencies.
* Bell Labs, before the AT&T monopoly breakup, was a special business case. Its funding existed in a game of legislative maneuvering. It is not an exemplar.
* Think of labs as "tier 5 tech support, when the engineers are stumped".
Most labs are holding areas for smart people to solve sudden business problems. For those who read the famous analysis of criticality accidents in non-military settings, you need some smart folk hanging around to prevent disasters.
For example, Sun Labs had people working (forever) on some hopeful breakthrough. Howard Davidson, a failure engineer, was pulled off task when server boxes started breaking (paper washers being glued in manufacturing) or card connections started failing (silver substituted for gold). These were high impact problems not solvable by line engineers.
Similarly Ricoh Labs had a day when everyone was pulled off to disassemble a large machine and figure out a feeding problem that had a hard solution. Intel keeps materials PhDs in the fab areas just in case something comes up that would slow throughput.
* Many labs have reputations no longer deserved. I worked at PARC and found some teams made magic and others made bloat. Look at the short term results, meaning the past decade, to rate a lab.
Research is very hard and has low financial rewards. If it’s funded by state, it’s poorly paid (think of grad students, postdocs, APs). If it’s funded by corporate, it’s basically tedious product design and it’s not interesting science or even proper research anymore. In industry, it’s typically done by second class citizens.
Long term research is worse.
We have the classical problem of creating value vs capturing the value.
I believe that’s a major factor. I often meet students and the vast majority want good paying 9-5 jobs and don’t want to bother with research or academic papers.
I'm really confused. When looking at artificial intelligence it seems that actually we have dominating corporate labs which through integrated services (such as compute and data), totally dominate academia.
It's coming to the point where researchers that want to make a difference are actively choosing corporate research labs over academic ones.
This headline doesn't jibe with my experience having worked with lots of folks in corporate R&D at big tech. After looking at the data in the paper, I think it's far too soon to talk about the "death of corporate research labs".
> These examples are backed by systematic evidence...The figure also shows that the absolute amount of research in industry, after increasing over the 1980s, barely grew over the 20 year period between 1990 to 2010. Other data show the same decline
That seems like a cherry-pick. In that very same figure, you can see that 2015 is about 300% of 1990.
So from 1990 to 2015 corporate research spending grew at roughly 4.5% a year on average. I'm not sure how you could refer to that as "dying". Of course, the paper doesn't use that term, just the parent blog post.
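As a back-of-the-envelope check (a sketch that assumes the figure really does show roughly a 3x increase over those 25 years; nothing below comes from anywhere else):

    # Implied compound annual growth rate for ~3x growth over 1990-2015
    start, end, years = 1.0, 3.0, 2015 - 1990
    cagr = (end / start) ** (1 / years) - 1
    print(f"implied CAGR: {cagr:.1%}")  # prints "implied CAGR: 4.5%"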
I have no sense of non-tech research labs, and maybe if you exclude those, corporate research labs are indeed dying, but the headline is clickbaity.
> I'm really confused. When looking at artificial intelligence, it seems that we actually have corporate labs which, through integrated resources (such as compute and data), totally dominate academia.
But this is just one unique case - where access to a lot of data and a lot of compute makes a huge difference.
I'm not sure that it translates to many other fields in CS.
Corporate research is still going to be somewhat at the whim of what's bleeding edge AND has some applicable value; the idea is just a little different with things like the 10% or 20% "constructive free time" model. Each engineer in FAANG-like companies just gets a 10-20% salary allotment instead of the equivalent expense of dedicated researcher headcount. "Constructive free time" is just the "agile" iteration of the research lab.
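In headcount terms this trades a small lab for a slice of a large engineering org; a minimal sketch with assumed numbers (neither the org size nor the time share below comes from the comment, they are purely illustrative):

    # Hypothetical equivalence between "20% time" and dedicated researcher headcount
    engineers = 1000          # assumed size of the engineering org
    free_time_share = 0.20    # assumed "constructive free time" allotment
    equivalent_researchers = engineers * free_time_share
    print(f"{engineers} engineers at {free_time_share:.0%} time ~= {equivalent_researchers:.0f} researcher FTEs")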
> Large corporate labs, however, are unlikely to regain the importance they once enjoyed. Research in corporations is difficult to manage profitably.
The easy fix is to classify the work as a net positive for a community, and provide tax incentives such that it becomes by definition profitable, since society can cost-in the scenario where the lab doesn’t exist.
Research can obviously be deducted from profits for tax purposes already, just like most other business expenses.
Beyond that, it's quite hard to set up a system where the incentives still motivate good research. It may also just make more sense for governments to spend the money on university research or institutions like the Max Planck Society, rather than giving it to companies where the public gets less of a say in its use, and any profits are privatized.
I would attribute it to the culture of ROI that has recently taken hold in companies' senior leadership. They lack the patience, values, human relationships and soft skills needed to realise that innovation and research are vital to the health of a company. The problem is that they are all fast thinkers and great presenters, and that's what is valued in orgs nowadays. Innovators and researchers can't sustain themselves in this environment. Leadership wants ROI from everything, even research. That's not going to create a good culture, and innovation won't grow.
This article argues what I believe is the opposite of the truth: that corporate research productivity is correlated with anti-trust enforcement. I think this is very wrong! Bell Labs really fell apart AFTER AT&T was broken up. Xerox PARC declined AFTER Xerox was ruled a monopoly. Anti-trust enforcement KILLED innovation, and I think it's really important that we get the correct relationship here.
What worsens the problem is that publicly funded research often solves toy application problems (at least in CS); academics know that they need to "apply" their research to get funding/citations/publications with relatively little effort, but from an industry/real-world perspective, many application scenarios hardly make sense and are evaluated in a hand-wavy fashion (for example, in applications of CS, too often the code is not shared, so the results are practically not reproducible). The current system simply rewards researchers who manage to get into the top venues with the least effort possible, and for many who aren't brilliant the pseudo-application approach is the way to go. IMHO, this widens the gap between academia and industry, because it's hard for a practitioner to pick out the few relevant nuggets in a stream of half-baked applied research.
This sounds a bit hyperbolic when you have Microsoft doing pie-in-the-sky research on topological quantum computing, Google and IBM research teams arguing over quantum supremacy, etc.
I work in a corporate research lab. We're not big (market cap ~25B) but there is a real thirst for innovations.
I'll echo the point about Research v Development. We're much more heavy on the development side of things, with time horizons of ~3.5 years. Research is ~10 years. Our research department is just me and 4 other guys. Our development department is much larger.
One thing I found different is in M&A. We've purposefully not done a lot of M&As. Mostly because the bureaucracy is so thick here. Monopolizing the time of the smaller companies is simpler and faster than trying to onboard them. I suspect there are healthcare issues involved too.
I remember an interesting article (I can't find the reference) that made the claim that for a number of pharma labs, the advantage in hiring researchers is not that they are likely to make a great discovery--but rather that they will be up to date with the literature and be able to recognize a potentially great discovery and to productize it. This is not necessarily trivial as a number of things that work in an academic lab setting don't translate into actual usable drugs.
I'm not sure how true that is for sectors other than software/AI, but in AI most academic researchers complain that all the moonshot research (often not even commercially applicable) now comes solely from corporate research labs, e.g. DeepMind, Facebook etc., as opposed to universities. They attribute this to poaching of faculty by companies, lack of resources in universities, and lack of motivation in grad students to stay in academia.
There should be [more] publicly-funded, non-profit organizations that do research for research's sake, whose fruits are made available to all.
To avoid the problem of controversial projects causing some funding sources to pull their support, there could be mechanisms for funding individual projects.
And to avoid the risk of reappropriation by a government or military, they could be hosted in the EU. :)
> The National Science Foundation estimates that U.S. R&D funding reached an all-time high of $499 billion in 2015. Of that total, the federally sponsored share fell to a record-low 23 percent while the business sector’s share rose to a record-high 69 percent.
This jibes more with what I've personally experienced. Also, looking at the data in the paper mentioned in the OP, I see ~5% growth in corporate research spending annually since 1953.
So I don't think the data supports the OP's narrative.
I'm claiming that the R part of R&D was always tiny to begin with. Rising R&D spending is not indicative of much more R compared to the era of dedicated research labs.
> Rising R&D spending is not indicative of much more R compared to the era of dedicated research labs.
R and D are not binary options.
I know in some fields a massive amount of published papers are coming from corporations, and I suspect if I could find data on it that the percentage of peer-reviewed publications from corporations is much higher than in the past.
Certainly in various CS and math disciplines I work with, corporations have made massive inroads.
You're trying hard to find a loophole in the rise in spending, but without evidence that the additional spending went elsewhere, it's sensible to assume it was actually spent on increased R AND D.
Otherwise, for every conjecture you can make without evidence, I can just as easily make the opposite conjecture, and neither has been demonstrated.
Take a few moments and try to source some good evidence - I looked at a few directions and didn't find anything one way or the other, except that spending has increased.
I worked at SAIC back when it was a small company (and before changing the name to Leidos). I will forever be grateful for being able to work about 25% of my time on IR&D, all of my own choosing. I think the US government allowed us about 1% or 2% of our gross to be reimbursed R&D - I forget the details.
At my last job before retiring (Capital One, a great company to work for BTW) my team had a huge amount of autonomy on what we worked on, but my team was awesome.
I don’t really understand the relative merits of many small R&D projects vs. huge infrastructure. Perhaps the industry is just moving to a many small projects approach?
Well, corporate research money may be drying up, so now the public is being asked to fund it (at least in California). There's a $5.5 billion bond initiative (Proposition 14) on the ballot for 2020. This is similar to proposition 71 which was passed in 2004 for $3 billion, "The California Stem Cell Research and Cures Act”.
California Proposition 14, the Stem Cell Research Institute Bond Initiative: https://ballotpedia.org/California_Proposition_14,_Stem_Cell_Research_Institute_Bond_Initiative_(2020)
Trouble starts with the term R&D which names two separate things that rarely go together (I think PARC was somewhat close to doing both). A company research lab would be decidedly not development or it's not a research lab. The best way to run a company research lab is to run it as a marketing tool, like a slightly more on-topic way of sponsoring a sports team. IBM seems to do it that way, at least whenever a bluishly named supercomputer is ruining yet another human game. I wonder if they are aware of that analogy.
Research has (correctly) been outsourced. If you have a good idea, go prove it and then you can sell it to Acme for $100m. That's a much better deal for Acme because they only need to buy things that work. And it's a much better deal for you (you get $100m to share with your investors instead of a $10 voucher [0] for a $100m idea).
This is part of a larger pattern where we are not investing in anything anymore that doesn’t bring immediate returns to some investor; not basic research as in the article, not public infrastructure, not planning for things like climate change. It’s all about immediate gratification, fast returns. Kicking the can on every problem and avoiding any unpleasant decisions or long term investments.
Indeed. Biopharma has some of the highest spend on R&D as a percentage of revenue of any industry sector.[1]
However, keep in mind a big part of that is development, not research. It's still science and not guaranteed to work, but it's more focused on the development needed to bring a product to market versus doing science to make new discoveries.
Interesting, after the Bell Labs example, I was going to guess that anti-trust enforcement meant that companies weren't big enough to afford an internal lab anymore.
But the OP, quoting/citing Arora et al, actually suggests the REVERSE causal relationship:
> Historically, many large labs were set up partly because antitrust pressures constrained large firms’ ability to grow through mergers and acquisitions. In the 1930s, if a leading firm wanted to grow, it needed to develop new markets. With growth through mergers and acquisitions constrained by anti-trust pressures, and with little on offer from universities and independent inventors, it often had no choice but to invest in internal R&D.
And that in fact later "lack of anti-trust enforcement... killed the labs".
Makes sense to me. Goes to show that our "intuition" (of course informed by cultural assumptions) for how capitalism works and its effects isn't necessarily accurate.
Under-taxation of corporations plays a significant role in the decline of research spending. While potential competitive advantage is one reason to pursue research and development, taxes and the interplay between tax and corporate accounting are another reason (even prior to 1981).
While taxes paid by corporations benefit the general public, from the corporation’s point of view taxes are money out the door with zero possible return. So what does a prudent corporation do with its profits? It recognizes that they are potential capital and seeks to maximize the risk-adjusted return on capital.
It can bank them at the risk-free rate of return, but we should remember taxes will take a healthy chunk out of that return on the front end. It can pay them out to investors as dividends (increasing the return on remaining capital by reducing the denominator). Note that under rational tax schemes the risk-adjusted return on capital should be equivalent to investing at the risk-free rate of return (modulo management of cash-flow risks). It can buy riskier assets with higher return, such as other companies. And it can make much riskier investments, like research and development. As with most things financial, the best overall risk-adjusted return on capital is diverse in both kind and risk level. It should do all of those things (treating dividends as equivalent to investment at the risk-free rate of return), with the proportion going to each tuned to achieve the aggregate optimum (1). Also, can you see how a regulated utility (like AT&T of old) would find R&D attractive for spicing up its asset mix?
Think of R&D as producing a stream of lottery tickets with the drawing in the distant future. Those lottery tickets (for a tech company) can pay off in two ways. One is that they could produce or enhance a revenue stream. The other is that they could reduce risk in the form of patents (reducing competitive risk and the risk of patent suits through the threat of countersuit). Both of these payoffs improve risk-adjusted return on capital (by generating return or reducing risk). The neat thing about R&D for a company that's paying real taxes is that much of the cost (from a tax point of view) is an expense. In other words, that allocation of capital doesn't have the upfront bite taken out of it that occurs when profits are directed to risk-free assets. Looking at this from a risk-adjusted-return point of view, having a real tax burden increases the appetite for risk by decreasing the relative attractiveness of the risk-free alternative after taxes. This is doubly true for expenditures that look like expenses to tax collectors but look like capital investment for corporate accounting purposes.
None of this holds true if a company isn’t paying much in the way of taxes on its profits. It doesn’t mean some amount of R&D isn’t still attractive, but the appetite for risk is lower. And let’s be clear, R&D in research labs is the riskiest kind of R&D.
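To make the mechanism concrete, here is a minimal sketch with made-up numbers (the tax rates, risk-free rate, success probability and payoff multiple below are purely illustrative assumptions). The expense deduction means part of every R&D dollar is effectively funded by forgone tax, while interest on banked profits is itself taxed; drop the tax burden and the subsidy disappears while the risk-free alternative looks relatively better:

    # Illustrative only: after-tax net gain from $1 banked at the risk-free rate
    # vs. $1 spent on R&D that is deductible as an expense in the year spent.
    def risk_free_net(rate, tax_rate):
        # interest earned on retained profit is itself taxed
        return rate * (1 - tax_rate)

    def rd_expected_net(tax_rate, p_success, payoff_multiple):
        # the deduction refunds tax_rate of the dollar; discounting and timing ignored
        net_cost = 1.0 * (1 - tax_rate)
        expected_payoff = p_success * payoff_multiple
        return expected_payoff - net_cost

    for tax in (0.35, 0.05):  # a company paying real taxes vs. one paying almost none
        rf = risk_free_net(0.04, tax)
        rd = rd_expected_net(tax, p_success=0.1, payoff_multiple=8)
        print(f"tax={tax:.0%}  risk-free net={rf:+.3f}  R&D expected net={rd:+.3f}")

With these assumed numbers the R&D bet beats risk-free parking only when the company is actually paying taxes, which is the point.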
Well, you might say, the 1981 R&D tax credit is sweeter than a deduction. Doesn’t that tilt the scales back? Yes, but subject to the limitations of the tax credit (which are numerous). But, like the deduction, the tax credit is much less valuable for companies that aren’t paying taxes on their profits. And, to be clear, this is a Reagan measure taken with full knowledge of what was expected to happen to corporate tax rates and the impact that might have on R&D investment. It’s a partial mitigant for that, nothing more.
I am fully aware that I have murdered both CAPM and the practice of accounting in compressing this down to a reasonable post with what I hope is a clear narrative. My apologies to practitioners of both.
(1) An important aside: People make equity investments to take more risk in the expectation of higher return. Beyond the cash cushion necessary for minimizing cash-flow risk, massive cash hoards do nothing but dilute the risk (and return) rational investors are actively seeking to take. There is some argument to be made that large tech companies keep huge cash hoards because their core businesses are riskier than they appear (esp black swan events), but it probably has more to do with founders desire for independence.
This is Capitalism. Let the government and educational institutions spend all of the money, do the real work and research, and then as a private company swoop in and reap all of the benefits.
A blatant example would be Gilead's Truvada for PrEP: US taxpayer-funded research, and all the profits go straight to Gilead.
> A blatant example would be Gilead's Truvada for PrEP: US taxpayer-funded research, and all the profits go straight to Gilead.
That's such a gross oversimplification to the point it's just wrong. If you want to read how one of the two drugs in the combo was developed, I'd suggest this journal article [1] - I warn you, the story evolves over 15 years.
The initial discovery of the class of compounds was done at Emory university. Emory then partnered with Burroughs Wellcome. Burroughs Wellcome was then acquired by Glaxo, who halted the research. The rights then went back to Emory, who then spun it out to a start-up called Triangle Pharmaceuticals, who was then acquired by Gilead who brought Truvada to market.
Emory did sue a number of manufacturers, as they had a patent, but it was settled with a fat ($525M) cash payment to Emory and an ongoing royalty stream.
At least in my understanding, the reason why governments have an interest in this kind of model is that companies in return provide employment and pay taxes (if you're lucky).
In the EU, there is such a strong focus these days on exploitation of the results that come out of the research projects they fund (Horizon 2020) that you sometimes feel almost a bit out of place as a researcher. As in: why should a researcher at a university care deeply about how their findings can be turned into products by some companies? Not only are they usually not really qualified for such questions, but if they were interested in product development, they'd probably work in industry (perhaps even for a higher salary).