
I don't think it's about money. Software engineer salaries are very high, and cubicles are a compromise that still saves space. I think it's because for most people, procrastination reduces output a lot more than distraction and noise, and they procrastinate less when everyone can see their screen.


The vast majority of titles in tech confer more prestige than deserved, because there's no downside to a company handing them out like candy. I don't look at it as a portability issue; the incentive isn't to be accurate to begin with. Inflated seniority is just a perk they can offer that doesn't cost them any money, so everyone offers it. Titles across the industry are meaningless unless you're calibrated to that company's levels.


There are only a few titles that mean anything, and normally it isn't what you think.

Legally, at a bank only a vice president or higher can make some loan decisions. Thus if you go to a large chain bank to get a mortgage you will talk to a vice president. Their title must be vice president, but they don't make a lot of money, and they don't have much power other than the ability to give you a mortgage.

Last time I posted the above someone chimed in "yeah, even though all I do is write code all day I'm a vice president because what my code does can only be done by a vice president"

You also see a lot of salesmen who are vice presidents - again they don't have a lot of power in the company, but the places they sell to want to talk to a vice president so sales hands out those titles.


> there's no downside to a company handing them out like candy.

Some companies believe this, but I don't think it's true. Handing out titles like candy can kill morale.


I get what you're saying but I think it generally does the opposite. People like prestige.


GPUs are $3000 because of Bitcoin, not Unreal Engine.


I'd like languages to have some kind of "delegate" functionality, where you can just delegate names to point to nested names without screwing around with ownership - it would just act like a symlink. The scope of that action is limited and clear (and easy for your IDE to understand), and it's explicit that the subclass is still the "owner" of that property, which makes the whole thing a lot easier to navigate.

E.g. something like:

    class MyClass:
        def __init__(self, member_class):
            self.member_class = member_class

        # Delegate one member
        delegate move member_class.position.move

        # Delegate all members
        delegate * member_class.position.*

Then:

    a.move == a.member_class.position.move
etc.
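
For what it's worth, you can approximate this in today's Python with a small descriptor. A rough sketch only (the `delegate` class and all the names below are invented for illustration):

    class delegate:
        """Descriptor that forwards attribute lookup along a dotted path."""
        def __init__(self, path):
            self.path = path.split(".")

        def __get__(self, obj, objtype=None):
            # Walk the path from the instance, e.g. "member.position.move".
            for name in self.path:
                obj = getattr(obj, name)
            return obj

    class Position:
        def move(self):
            return "moved"

    class Member:
        def __init__(self):
            self.position = Position()

    class MyClass:
        def __init__(self, member):
            self.member = member

        move = delegate("member.position.move")

    a = MyClass(Member())
    assert a.move == a.member.position.move
It doesn't give you the static guarantees or IDE support a real language feature would, but the intent is similarly narrow and explicit.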


C++ can do something like this (at compile time) with its -> operator (an ancient feature, from long before C++98 was standardized).

   obj->foo()
will expand into a chain of -> dereferences. For instance, suppose the object returned by obj's operator->() is itself a class object that overloads ->; then that overload is applied in turn, and so on, until a plain pointer is reached, on which foo is looked up.
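
A rough Python analogue of that chained lookup, purely as a sketch (Python has no -> operator, so __getattr__ stands in for it; the names are invented):

    class Wrapper:
        def __init__(self, inner):
            self._inner = inner

        def __getattr__(self, name):
            # Only called when normal lookup fails, so a miss is forwarded
            # to the wrapped object, which may itself be another Wrapper.
            return getattr(self._inner, name)

    class Core:
        def foo(self):
            return "found foo"

    obj = Wrapper(Wrapper(Core()))
    print(obj.foo())  # the lookup chains through both wrappers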


In Python you could do something like:

  class Base:
     def func(self):
         print("In Base.func:", self.name)
  
  class Child:
     def __init__(self, name):
         self.name = name
     func = Base.func
  
  c = Child("Foo")
  c.func() #=> In Base.func: Foo


The reason I'd like the construct is because it's explicit - intent (and the scope/limit of your intent) is encoded in what you create. It's clear you intend to do nothing with that name except symlink to the nested member, so the reader doesn't have to anticipate other behaviour (and can't accidentally do something else with it). Generic assignment doesn't convey the same restricted intent, and it doesn't carry those guard rails.

Really though it's a structure that only makes sense in strongly typed languages, so I probably shouldn't have used Python to illustrate the idea.


Those rivets probably aren't modelled, they'll be reconstructed from volume information in the texture. Which is still impressive and a great way of dealing with that type of geometric detail, but it has limitations, and the engine isn't processing actual models at that level of detail.

You can still do stuff like that in UE4. Have a look at what's possible with Quixel Mixer, you can create detail like that surprisingly quickly. I'd tentatively argue that modelling tools are the real MVP when it comes to increased geometric LOD in modern engines. They allow you to add that kind of detail quickly enough that it becomes economical to actually detail rivets.

Either way, it's very cool. The crowd simulation stuff is going to be useful, current tooling there absolutely sucks outside of a few very expensive dedicated products.


If this UE5 demo is using Nanite, then it really may well just be modeled geometry.

Nanite really blurs the line between geometry and texture - in a sense it’s a shader that uses triangle mesh data as if it were a texture source.

This siggraph session will expand your mind: https://m.youtube.com/watch?v=eviSykqSUUw


Yes. Nanite is very clever. A mesh representation that can be shown at a huge range of levels of detail is what makes it go. Much of the work is moved to asset preprocessing. The actual rendering is simplified. There are about as many triangles being drawn as there are pixels, regardless of scene complexity. If a triangle is bigger than a pixel, it's time to zoom in to a higher LOD for that part of the mesh. If a triangle is much smaller than a pixel, a lower LOD can be used. This is a reduction from O(N) to O(1), so draw time is roughly constant regardless of scene complexity.
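
To make that rule concrete, here's a toy sketch (emphatically not Epic's code; the doubling assumption and every name here are invented) of picking an LOD level so triangles land near one pixel:

    def pick_lod_level(tri_size_lod0, distance, focal_px, max_level):
        """Coarsen until the projected triangle size is about one pixel.

        Assumes each coarser LOD level roughly doubles triangle edge
        length (i.e. quarters the triangle count)."""
        size, level = tri_size_lod0, 0
        while level < max_level and focal_px * size / distance < 1.0:
            size *= 2.0  # coarser LOD: fewer, larger triangles
            level += 1
        return level

    print(pick_lod_level(0.001, 1.0, 1000.0, 16))    # nearby: level 0
    print(pick_lod_level(0.001, 100.0, 1000.0, 16))  # distant: level 7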

Watch the SIGGRAPH video to get the feelings of: that's impossible - oh, I kind of see how that works - one-pixel triangles? - how do they get that mesh representation set up right? - oh, graph theory - that data format has to be a pain to generate with all those constraints - they're rasterizing mostly in software? - that's all that needs to be done at render time? - GPUs need to be redesigned to be a better match for this - how do they stream this stuff? - that compression scheme is clever.


I stand corrected! Thanks for the link, that's very cool.


It's very cool. I bet Digital Foundry takes an in-depth look - can't wait.

After reading your comment I went back in to the demo to have another look at the rivets up close. In the options menu, you can switch the viewport so that it shows you the triangles that make up object geometry under the "Nanite" system. Would be interested to hear from anyone with access to the demo and the right background what they make of this.

Edit: I took a couple of screenshots of default view vs Nanite triangles view but not sure what the convention is on HN for where to host them. Happy to put them up if someone can clue me in.

Edit: I've just put the screenshots up [here](https://imgur.com/a/2FMpLJa) and a ~30 sec vid [here](https://youtu.be/s1PUCadh1TU), showing cycling through the viewport modes.


Replying to myself to add: Digital Foundry [analysis](https://www.eurogamer.net/articles/digitalfoundry-2021-insid...) is now up. They're impressed.


Host them wherever you like, but hopefully somewhere resilient to a hug of death.

I hear tweet threads are really appreciated /s


That's not really true. Video upscaling is probably going to be linear interpolation, which is very unlikely to add meaningful artifacts, but an intelligent zoom that tries to add visually meaningful information may change the apparent content.
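
For context, plain bilinear upscaling only blends the four nearest source pixels, so it can't invent content. A minimal sketch of the idea, assuming numpy (real video pipelines differ):

    import numpy as np

    def bilinear_upscale(img, factor):
        """Upscale a 2D grayscale image by blending, per output pixel,
        the four nearest source pixels. Deterministic: nothing is
        invented, only averaged."""
        h, w = img.shape
        ys = np.linspace(0, h - 1, h * factor)
        xs = np.linspace(0, w - 1, w * factor)
        y0 = np.floor(ys).astype(int)
        y1 = np.minimum(y0 + 1, h - 1)
        x0 = np.floor(xs).astype(int)
        x1 = np.minimum(x0 + 1, w - 1)
        wy = (ys - y0)[:, None]
        wx = (xs - x0)[None, :]
        top = img[y0][:, x0] * (1 - wx) + img[y0][:, x1] * wx
        bot = img[y1][:, x0] * (1 - wx) + img[y1][:, x1] * wx
        return top * (1 - wy) + bot * wy
An ML-based "intelligent zoom", by contrast, hallucinates plausible detail, which is exactly the concern.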


> Video upscaling is probably going to be linear interpolation

This is probably true of the technology that's currently in the courtroom, but not for much longer, e.g.[1]. Apple have even done research on this[2], although I can't find anything that says it's currently in use on the iPhone/iPad.

[1] https://www.tvtechnology.com/news/the-secret-behind-8k-upsca...

[2] https://machinelearning.apple.com/research/gan


Seeing the mess my 4k "smart" TV makes of 1080p video with parallax scrolling (say a car driving between two rows of trees, filmed from the side), I think we're well past linear interpolation in consumer devices with upscaling.


It's rarely altruistic but this seems very win/win to me. Talent without a record can prove their chops and get hired above their on-paper experience level, Google gets an additional avenue of recruitment.


The biggest performance issue Clojure has, which isn't mentioned in the article and is fundamentally unsolvable, is that it misses the CPU cache - a lot.

The data structure that drives the immutable variables, the Hash Array Mapped Trie, is efficient in terms of time and space complexity, but it's incredibly CPU cache-unfriendly because by nature it's fragmented - it's not contiguous in RAM. Any operation on a HAMT will be steeped in cache misses, which slows you down by a factor of hundreds to thousands. The longer the trie has been around, the more fragmented it is likely to become. Looping over a HAMT is slower than looping over an array by two or three orders of magnitude.

I don't think there is really a solution to that. It's inherent. Clojure will always be slow, because it's not cache friendly. The architecture of your computer means you physically can't speed it up without sacrificing immutability.
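
As a rough illustration of the access-pattern difference, here's a Python sketch contrasting a contiguous array with a pointer-chasing structure (the interpreter's own overhead mutes the cache effect, so treat any numbers as directional only):

    import time

    N = 1_000_000

    # Contiguous: one array of references, scanned in order.
    arr = list(range(N))

    # Fragmented: every node is a separate heap allocation, like trie nodes.
    class Node:
        __slots__ = ("value", "next")
        def __init__(self, value, nxt=None):
            self.value = value
            self.next = nxt

    head = None
    for i in reversed(range(N)):
        head = Node(i, head)

    t0 = time.perf_counter()
    total_arr = sum(arr)
    t1 = time.perf_counter()

    total_list, node = 0, head
    while node is not None:
        total_list += node.value
        node = node.next
    t2 = time.perf_counter()

    print(f"array: {t1 - t0:.4f}s  pointer-chasing: {t2 - t1:.4f}s")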


It is mentioned in the article; one of the last optimizations done there was switching to arrays. Also, Clojure's HAMT was designed with CPU cache in mind and its performance characteristics don't degrade over time. Immutable data structures will be slower than arrays - that's true, but Clojure's standard library works on arrays just fine, as demonstrated in the article.


You're right - I missed that, the author does mention arrays having better access characteristics, although he doesn't really explain why HAMTs specifically are slow.

How does Clojure's HAMT avoid fragmenting over time?

> Clojure standard library works on arrays just fine, as demonstrated in the article.

Right, but then you don't have immutability - so you lose all the guarantees that you originally had with immutable-by-default.


> although he doesn't really explain why HAMTs specifically are slow

Hello, author here :)

HAMTs are certainly slower for "churning" operations, i.e., lots of updates, which is why Clojure exposes the transient API, which gives you limited localized mutability (some terms and conditions may apply).

Where iteration is concerned, the standard library implementation is pretty good. It relies on chunks of 64-element arrays which store the keys and values contiguously. Thus, the APIs which expose direct iteration, Iterator and IReduce(Init), operate on these chunks one at a time. It isn't as fast as primitive arrays, but it's pretty fast.
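
A toy sketch of that chunked layout in Python (purely illustrative of the idea, not Clojure's actual implementation):

    CHUNK = 64

    def chunked(xs):
        # Store elements as a list of contiguous 64-element arrays.
        return [xs[i:i + CHUNK] for i in range(0, len(xs), CHUNK)]

    def reduce_chunks(f, init, chunks):
        acc = init
        for chunk in chunks:   # one pointer hop per chunk...
            for x in chunk:    # ...then a contiguous scan of 64 slots
                acc = f(acc, x)
        return acc

    total = reduce_chunks(lambda a, x: a + x, 0, chunked(list(range(1000))))
    assert total == sum(range(1000))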


> but then you don't have immutability - so you lose all the guarantees that you originally had with immutable-by-default

Not really. For example, take the following solution (execution time mean: 1.678226 ms):

    (defn smt-8''' [^ints times-arr]
      (loop [res (transient []) pointer-1 (int 0) pointer-2 (int 7)]
        (if (< pointer-2 (alength times-arr))
          (let [start-element (aget times-arr pointer-1)
                end-element (aget times-arr pointer-2)
                time-diff (- end-element start-element)]
            (recur
             (if (< time-diff 1000)
               (conj! res [(mapv #(aget times-arr (+ pointer-1 (int %))) (range 8))
                           time-diff])
               res)
             (inc pointer-1)
             (inc pointer-2)))
          (persistent! res))))
It only requires the input to be an array, but it returns an immutable persistent vector of vectors. So it is very easy to selectively drop down to an array in the performance-critical sections while staying immutable and idiomatic everywhere else.

> he doesn't really explain why HAMTs specifically are slow

Take this solution using HAMT vectors (execution time mean: 23.567174 ms):

    (defn smt-8' [times-vec]
      (loop [res (transient []) pointer-1 (int 0) pointer-2 (int 7)]
        (if-let [end-element (get times-vec pointer-2)]
          (let [end-element (int end-element)
                start-element (int (get times-vec pointer-1))
                time-diff (- end-element start-element)]
            (recur
             (if (< time-diff 1000)
               (conj! res [(subvec times-vec pointer-1 (inc pointer-2))
                           time-diff])
               res)
             (inc pointer-1)
             (inc pointer-2)))
          (persistent! res))))
Now it is 14 times slower than my prior one, which iterates over an array, but it is still pretty fast. That's OP's point: things are often sufficiently fast, and when they are not, you can selectively optimize those parts, and easily too - see how similar my two solutions are to one another.

Edit: These assume you've run `(set! *unchecked-math* true)` prior to compiling the functions.


I’m familiar with the implementation of HAMTs - if anyone wants to study one in C I recommend https://github.com/Jamesbarford/hash-array-mapped-trie or my polymorphic fork of it https://github.com/fromheten/hash-array-mapped-trie-poly.

Are there any other key/value data structures where insertion and retrieval are less than O(n) in complexity, but where the memory layout is better ordered for cache hits during searches? Maybe good old red-black trees?


I don't think there's an immediately available answer here. The wide branch factor in Clojure's implementation is very iteration-friendly, less so for updates.

Worth checking out are BTrees and Hitchhiker trees, but I think a definitive answer will be implementation dependent even in those cases, i.e. one might win out over the other for a specific branch factor or other tune-able parameters


I wonder if the wide branching factor of them gives you some cache friendliness.

However, not every use case can benefit from cache optimization and you can use other data structures. It’s not super useful to make generalizations about performance that way.



While I do agree (pointer chasing down a persistent data structure is basically the worst-case use of a CPU), the ease of threading Clojure programs means you can often claw a lot of that penalty back.


Threading doesn't compensate for that degree of slowdown, and itself has overhead. You'll get something back, but not much.


Again, generalisations about performance like that are just not very useful. There are use cases where you get significant benefits from using persistent data structures.

Aeron comes to mind as an example of a high-performance solution that uses them. But a more fundamental reason is that immutability can be part of your domain logic. Think of video games like Braid, or business applications where you need to derive a lot of different views and go back and forth between them, or domains where data just is inherently immutable, such as accounting, and so on.


I don't really agree. Cache friendliness is almost always a relevant factor as soon as performance becomes an issue. I get what you're saying but as I see it immutability gives you architectural efficiency and in some cases space efficiency, but rarely processing speed.

> Think video games like braid

Braid doesn't use immutable data structures; it uses a historical record (and immutability would be incompatible with some of the mechanics).[1] The author of Braid is actually quite famous for his dislike of functional languages and immutability. He doesn't even like garbage collection because of the overhead it introduces.

Interestingly, he was talking about data structures for codebase representation (in the context of a compiler) a while back, and someone mentioned HAMTs. I'm definitely curious if they would work well there.

[1] https://www.youtube.com/watch?v=8dinUbg2h70


You’re picking out an admittedly bad example in order to miss the point. Different data structures have different tradeoffs. Not every problem lends itself to the same type of optimization, so it’s not helpful to make these generalizations.

Continuing with the example regardless: change records and persistent data structures have different performance characteristics. The former is fast if you move incrementally between states; the latter enables arbitrary access, comparison, and efficient in-memory caching of multiple views.

It would be interesting to explore and measure the trade offs under certain use cases.


I understand your point. I'm saying: the subset of problems that benefit in a performance sense from immutability is very small. The vast majority of the time, cache misses slow down algorithms. That's a perfectly reasonable generalisation and I don't really understand why you think it's untrue.

Re: change records, I believe Braid uses a historical record of absolute states, not a series of diffs. The state at a particular time is recreated from a minimal representation (a couple of arrays). That's much more efficient than throwing multiple iterations of the state in a HAMT.


> The vast majority of the time, cache misses slow down algorithms. That's a perfectly reasonable generalisation and I don't really understand why you think it's untrue.

I'm not saying it's untrue. But not every use case lends itself to CPU cache optimization. Your access patterns might just happen to be arbitrary, or fragmented from a layout perspective.

I would argue that this is a very common case, especially for IO-bound applications that operate on tiny pieces of data and have deeply nested dispatch rules, each querying some section of memory that you can't predict in advance.

Or in other words: the clearer your computation pipeline is, the more you can benefit from CPU cache optimization.

> Re: change records, I believe Braid uses a historical record of absolute states, not a series of diffs. The state at a particular time is recreated from a minimal representation (a couple of arrays). That's much more efficient than throwing multiple iterations of the state in a HAMT.

You caught my interest, I will have to study this. I can't really imagine how it works from your explanation (my bad, not yours). I assumed when you said "change records" it meant that it would be suboptimal to access an arbitrary state (linear as opposed to constant).


In my experience you can usually achieve near linear speed up. My machine can run 24 threads.


Fair enough on scaling. But 24 is still a lot less than two to three orders of magnitude.


Yes, it is. I'd probably ballpark Clojure at 100x slower than the kinds of low level C# I usually write (gamedev). But threading C#, especially low level imperative C#, is so awful I often don't bother unless it's very important or there's an embarrassingly parallel element (genetic algorithms and simulating sound waves on a 2D grid are two cases I've pulled the trigger where both were true).

This leaves Clojure as 1/4 the overall speed, which seems about right. However that's based on a hugely unscientific gut feeling, because generally I don't touch Clojure if performance matters and I don't touch C# if it doesn't.

By the way, I've implemented persistent data structures on disk for a problem they were particularly useful for. If stalling for a cache miss feels bad, try waiting for an SSD :)


Having to use 24 cores to get to 1/4th of the performance of a single threaded C# application seems particularly awful.

Especially as C# is a relatively pleasant language to use.

It doesn't matter how good threading in Clojure is if another language doesn't even need it to beat Clojure (also: great scaling is pointless if you start so far behind).


Yes, hammering in a nail with a screwdriver is particularly awful. I love using Clojure where it's appropriate, but I'm not in the least bit tempted to use it for game dev.


Could you share an example program that does that?


Take that with a grain of salt; from experiments run by myself and others, I can share that this does not scale linearly.

Hope some performance experts can chime in but this might be related to data structure sharing between caches or allocation contention in the JVM.


If you've got some code, I'm happy to take a look.



I don't have access to the Zulip chat, but the other benchmarks are basically testing allocating in a hot loop. I'm not surprised that doesn't scale linearly, and it's certainly not representative of real world code I've ever written.

If you have code you wrote to achieve something and hit a wall with scaling, I'm happy to take a look.


Up to 4 threads I usually see linear scaling as well, but it begins to drop off afterwards, although I don't have a representative example at hand.

I'd like to see a good example if you have one available; most of my performance work has been done in a single-threaded context until now.


I don't really have anything off hand.

But code primarily slowed down by pointer chasing down big maps, which is MillenialMan's complaint and fits my own experience, will absolutely be sped up linearly.

A bunch of cores sitting around waiting for the next HAMT node to come in will not interfere with each other in the slightest.


This is entirely hypothesising but I don’t see how these wide trees are awful for the cache. In particular I don’t think they would be much worse than a flat array of objects—the problem is, I think, that objects are references and iterating the things in an array of pointers usually sucks.

For a HAMT with depth (e.g.) 2, you should be able to prefetch the next node in your iteration while you iterate the current node of 64 elements. Actually iterating through the objects is going to suck, but hopefully you can prefetch enough of them too. (Or maybe HotSpot could inline things enough to improve memory access patterns a bit and take advantage of ILP.) There's still the problem that you can't read main memory sequentially, except that the objects in a HAMT were often all allocated sequentially, so you can read main memory sequentially after all (as allocation is typically just bumping a pointer, allocation-time locality tends to correspond to address locality).


If I understand you correctly, this is a general problem of functional data structures?

> Clojure will always be slow, because it's not cache friendly.

You always have the option to use the Java data structures for the cases where this kind of optimization is needed.


Yes, this is a general problem with functional data structures. They have to be fragmented in order to share data. There's also the more nebulous issue that they encourage nesting to leverage the architectural benefits of immutability, which is a complete disaster for cache friendliness.
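
A tiny sketch of why sharing forces that fragmentation (Python, illustrative only):

    class Cons:
        """Minimal persistent list node."""
        __slots__ = ("head", "tail")
        def __init__(self, head, tail=None):
            self.head, self.tail = head, tail

    shared = Cons(2, Cons(3))
    v1 = Cons(1, shared)  # version [1, 2, 3]
    v2 = Cons(0, shared)  # version [0, 2, 3], created without copying
    assert v1.tail is v2.tail
    # The shared tail can't be contiguous with both v1's and v2's
    # spines at once, so traversing either version chases pointers.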

Replacing the critical path is an option, but that only works for numpy-style situations where you can cleanly isolate the computational work and detach it from the plumbing. If your app is slow because the inefficiencies have added up across the entire codebase (more common, I would argue), that's not an easy option.


You seem to be missing the point lukashrb made regarding using Java data structures. Your claim was "It's inherent. Clojure will always be slow", which is demonstrably false, as you can use interop with your host language (Java, JavaScript, Erlang) to use data structures that don't suffer from this. Wrap how you're using them in Clojure functions and the usage code for this is idiomatic while also having cache-friendly data structures.


I understand the point, I'm not arguing that you can't do that or speed up your code by doing that.

Python is also slow, but you can still go fast (in some cases) by calling into C libraries like numpy - but the performance is coming from C, not Python. Python is still slow, it's just not an issue because you're only using it for plumbing.

But Clojure is immutable-by-default, that's the point of the language - it gives you various guarantees so you don't have to worry about mutability bubbling up from lower levels. In order to wrap your heavy path you have to go outside the Clojure structure and sacrifice that guarantee. You do lose structural integrity when you do that, even if you attempt to insulate your mutable structure. The system loses some provability.


Calling C from Python is very different from calling Java code from Clojure. Clojure always runs on its host (Java/JavaScript), so calling functions via interop usually speeds things up with essentially zero overhead, whereas calling C code from Python does introduce overhead.

Everything in Clojure comes from Java; it's unfair to compare it to "performance is coming from C, not Python", as Clojure works differently.


That wasn't really my point. There is still a translation process you have to apply to cross the boundary between immutable and native data structures though, and that has its own overhead.


I have learned to embrace polyglot programming instead of trying to use the same language for every possible scenario.

Personally I see this "language X is better than Y at Z" debate as a waste of time, because I would just use X or Y depending on what I have to do.


In general I would agree, but a significant part of Clojure's appeal is that it's immutable by default, because that allows you to make certain assumptions about the codebase. Introducing mutable datastructures means you can no longer make those assumptions, so it potentially has much wider ramifications than e.g. calling into C code from Python.


> If your app is slow because the inefficiencies have added up across the entire codebase (more common, I would argue), that's not an easy option.

This is where I would have to disagree, in my experience, that is less common. Generally there are specific places that are hot spots, and you can just optimize those.

Could be it depends what application you are writing, I tend to write backend services and web apps, for those I've not really seen the "inefficiencies have added up", generally if you profile you'll find a few places that are your offenders.

"Slow" is also very relative.

    (cr/quick-bench (reduce (constantly nil) times-vector))
    Execution time mean : 3.985765 ms

    (cr/quick-bench (dotimes [i (.size times-arraylist)]
                      (.get times-arraylist i)))
    Execution time mean : 775.562574 µs

    (cr/quick-bench (dotimes [i (alength times-array)]
                      (aget times-array i)))
    Execution time mean : 590.941280 µs
Yes iterating over a persistent vector of Integers is slower compared to ArrayList and Array, about 5 to 8 times slower. But for a vector of size 1000000 it only takes 4ms to do so.

In Python:

    > python3 -m timeit "for i in range(1000000): None"
    20 loops, best of 5: 12.1 msec per loop
It takes 12ms, for example.

So I would say for most uses, persistent vectors serve as a great default mix of speed and correctness.


A traditional hash table can also be pretty cache unfriendly. I wonder if there are any published measurements that compare these.


Partially unrolled node based data structures could help. Does clojure use them?


You can switch Spacemacs and Doom into Emacs-mode keybindings. Command key sequences become hidden behind a shortcut and modes are disabled. It relaxes the learning curve.


> You can switch Spacemacs and Doom into Emacs-mode keybindings.

Aware of this, and tried it. However, much of these distributions is streamlined towards Evil keybindings, so you don't really get the full experience. Furthermore, a lot of the Emacs keybindings have been rebound in opinionated ways, which again isn't bad but is a little jarring.

Also, the vi keybindings tend to reactivate every now and again, and it can take you a few goes to realise this as you fumble in frustration and lose your train of thought.

> It relaxes the learning curve.

Yes, but you'll have a learning curve either way, and by the time you've passed it you'll know neither vim nor emacs. You won't get to experience the joy of tailoring your own development experience, nor will you be able to make use of much of the decades of support content that's available on the web.

As I said, it makes more sense if you've already worked with VIM and want to have the best of both worlds but even then I think you're still missing out on some of the finer experiences Emacs has to offer.


Bluntly: no. Lives saved is probably not a significant component in the cost/benefit analysis when these companies are deciding how much they should invest in an avenue of research.


I understand that's not how they decide what to research, but I'm asking today, this second, why are they not factoring lives into the question "should I tell the world how to make my vaccine". I don't even know how to phrase this without sounding absurd. It's like a superhero letting the villain blow up a stadium because if the city is too safe then they might get bored and not be around when the next villain shows up in 20 years.


Do you know Peter Singer's parable of the shallow pond? Why is what you are puzzled about any different than asking yourself why you haven't donated everything you have right now to save lives in undeveloped parts of the world?


Because of the magnitude and rarity of the problem, the proven effectiveness of their solution, and the ease of releasing it. I am not arguing that every second of every person's life should be devoted to maximum lifesaving utility. I do donate a lot of my income, but I save a lot for myself because I'm a normal self-interested person. I'm not asking anyone at Moderna to make some enormous sacrifice, I'm asking them to maybe risk some potential future business opportunities. Relative to the problem I don't even consider it a sacrifice at all.

In my mind it's like an off-duty firefighter standing next to someone unconscious in a burning building and being able to easily drag them to safety, but deciding not to because it might give them back problems in a few years. It's an emergency, they know how to fix it and can do so easily, and the sacrifice is minimal compared to the danger.


Malaria vaccines are more effective than covid vaccines. Malaria kills a lot more people than covid. Why do you buy sushi instead of malaria vaccines? Relative to the problem I wouldn't consider that a sacrifice. Doesn't matter, I'm still going to buy sushi.

I get where you're coming from but people are just selfish with money, it is how it is.


I truly believe that what I'm asking Moderna to do is not much bigger than the minor sacrifices I make in my own life, mostly donating money and volunteering time. Tiny sacrifices! I have an incredibly comfortable life. If the people running Moderna did what I wanted, they would continue to have incredibly comfortable lives.

Most of the threads I've started have now devolved into people accusing me of being a hypocrite who's never sacrificed anything in my life, so I guess that's the end of the road. I still don't understand why anyone needs to be convinced that this obvious moral choice is a good thing to do, and I guess I never will.


For what it's worth, I don't think you're a hypocrite and I do sympathize with you, I wish they wouldn't pull this shit too. But they're a drug company, you have to expect them to act like a drug company. The number one thing they care about is profit.


Well, you should rethink that. The people who invest in Moderna are just like you, and this idea appeals to you because they're other people, with their own money, that you want to direct.

Pretty much your whole justification is that they've happened to invest in medicine and intentionally positioned themselves outside the burning building. You're investing in what? Yourself? Thin soup.


I guess the argument here is that only people as successful and powerful as the founders of Moderna are capable of judging them. You're right - despite my best efforts, I'm not there yet and probably never will be. If that's the bar, there's nothing more for me to say.


The superhero analogy is akin to saying "football is a game where you run a ball to the end of a field". It completely trivializes the realities of developing highly experimental technologies. Moderna doesn't get to fly in like an invincible superhero and defeat the villain with no real concern for whether or not they're going to be able to do it again.


But they already have the vaccine! There's no immediate risk, and the long term risk is not that serious. I guess the core thing I don't understand is why anyone would assume that releasing their process is equivalent to going bankrupt. Is this process all they have? Would releasing it eliminate all barriers of entry? Would the US government no longer be willing to give them grants if they become too philanthropic and save too many lives? Why couldn't they continue to be a wildly successful company?


It is a serious risk. Their manufacturing process and their tribal knowledge of it is all they have, save maybe political clout. Through this they prop up everything else. On top of all of this, they might not be able to convey the process to the extent needed for others to recreate it, even if they wanted to.

I work at a company that faces this as an existential threat; you might be able to knock off our products in single quantities, but because of decades of process knowledge that no single person (or even a committee of people) could tell you, you won't be able to beat us on price. If you ever could, it would be over for us, and any beneficial technology that we planned on developing would need to be provided by our competitors, who don't actually have a culture of making improvements, only leeching them.

There are countless industries with high but tenable barriers to entry, and completely dealbreaking process knowledge that actually determines the viability of a company in that industry.


Alright, you've convinced me that it would be a real sacrifice, but it's still so obviously worth it. There's very little I wouldn't sacrifice to do what Moderna could do right now. The worst case scenario seems to be that Moderna employees with "helped develop the most effective Covid vaccine on the planet" on their resumes have to get new jobs, and new treatments based on their technology are developed more quickly now that the whole world can experiment with it.

If they can't communicate their process quickly enough because of the tribal knowledge, then no harm done, and at least they tried.


> There's very little I wouldn't sacrifice

Ha. Maybe there's little you wouldn't sacrifice that isn't yours, but there's very little you would sacrifice yourself to save someone. For $500 right now you can save someone's life for a year. What seems to be confusing you is that you're not thinking of Moderna as real people.


I donate lots of my own money directly to people who need it. No, not all of it, because as I've said I'm a normal self-interested person like the people who run Moderna. That's why I'm not asking them to sacrifice everything they have, or anything close to that. They would continue to lead incredibly privileged and comfortable lives, just like I do.

I guess you already don't believe me, but I do make sacrifices in my own life, and if I were in the position of running Moderna, I still cannot imagine being unwilling to make another one. Doesn't seem like I can convince anyone here, but that's the source of my confusion - why does anyone need to be convinced? How is this not normal? When I say "I would sacrifice my job to save millions of lives", why is everyone like "sure buddy, you like money as much as the rest of us, we know you're lying".


Would you sacrifice hundreds of jobs (maybe even thousands if you account for second-order effects) for a chance (not a guarantee) that some other company will be able to use the technology in time to deal with the Coronavirus? Could you accept that there's a chance that new companies moving into this space will be much less cooperative if Moderna loses the fight after giving away their process?

There's a forest here, not just trees. Nobody here is saying that you're lying when you say "I would sacrifice my job to save millions of lives". The majority of us would probably do the same, but those aren't the stakes, and nothing is that simple. That's not what forcing Moderna to share their process amounts to. This is a threat to an entity that has been fairly cooperative, and it is that way because of the chance group of people and attitudes that make that entity up. There is no guarantee that whatever comes next will want to play nice.

