There's an idea in some software communities that you can't "unsee" code. So if you've seen the code even once, you're "tainted" and you can't be permitted to work on the clean room implementation anymore. This tends to be used most often in projects attempting clean room implementations of proprietary software where they want to maintain clear provenance of the code.
The question is, is this based on actual case law or is it just an extreme form of paranoia?
It's not an extreme form of paranoia; I see it as a cost analysis. It's cheaper to defend against any copyright infringement accusation or lawsuit when using a pure clean room design.
> In NEC Corp. v Intel Corp. (1990), NEC sought declaratory judgment against Intel's charges that NEC's engineers had simply copied the microcode of the 8086 processor in their NEC V20 clone. A US judge ruled that while the early, internal revisions of NEC's microcode were indeed a copyright violation, the later version, which actually went into NEC's product, although derived from the former, was sufficiently different from the Intel microcode that it could be considered free of copyright violations.
However, such a lawsuit, in which you must prove the shipped code diverged far enough from the original, is much harder, and therefore more expensive, to defend.
It's much cheaper to argue that none of the developers had access to the original source or to a reverse-engineered design, so any similarities must be the result of functional constraints or general knowledge.