A blanket ban like this isn't particularly useful advice. I2C is perfectly useable in less-than-perfect environments, and LEO isn't really as harsh as people make it out to be. It's really only going to be a problem if you assume that everything operates perfectly all the time, and if you're making those assumptions you're going to have a bad time working on remote systems anyway.
Make sure your drivers can handle NAKs and errors on the line. Make sure you can reset subsystems (probably by power-cycling) completely and your system can keep running. Be ready to deal with stale sensor data and the occasional retry. With these and some good testing you'll be fine with I2C, and really there's immense value in having those attitudes in your system design anyway.
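As a rough sketch of the NAK/retry part, assuming a Linux userspace driver talking through /dev/i2c-1 to a hypothetical sensor at 0x48 (the bus path, address, register and retry count are placeholders, not anything from this thread):

```c
/* Minimal sketch: read one register over Linux i2c-dev, retrying on
 * NAK/bus errors.  Device address 0x48 and register 0x00 are placeholders. */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/i2c-dev.h>

#define I2C_BUS   "/dev/i2c-1"
#define DEV_ADDR  0x48      /* hypothetical sensor address */
#define REG_ADDR  0x00      /* hypothetical register */
#define MAX_TRIES 3

static int read_reg_with_retry(int fd, uint8_t reg, uint8_t *val)
{
    for (int attempt = 0; attempt < MAX_TRIES; attempt++) {
        /* A NAK or lost transaction shows up as a failed write()/read(). */
        if (write(fd, &reg, 1) == 1 && read(fd, val, 1) == 1)
            return 0;
        usleep(1000);       /* brief back-off before retrying */
    }
    return -1;              /* caller decides: power-cycle, mark data stale, ... */
}

int main(void)
{
    int fd = open(I2C_BUS, O_RDWR);
    if (fd < 0 || ioctl(fd, I2C_SLAVE, DEV_ADDR) < 0) {
        perror("i2c setup");
        return 1;
    }

    uint8_t val;
    if (read_reg_with_retry(fd, REG_ADDR, &val) == 0)
        printf("reg 0x%02x = 0x%02x\n", REG_ADDR, val);
    else
        fprintf(stderr, "sensor unresponsive after %d tries\n", MAX_TRIES);

    close(fd);
    return 0;
}
```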
I was a design EE at Planet Labs for 4 years, have sent something like 200 satellites to space, each with literally dozens of I2C devices on them, and supported them in orbit.
> Make sure your drivers can handle NAKs and errors on the line. Make sure you can reset subsystems (probably by power-cycling) completely and your system can keep running. Be ready to deal with stale sensor data and the occasional retry.
By the time you've implemented that, you've negated any advantages of I2C. So, why not go directly to SPI?
Because a whole bunch of chips you want to use are on I2C. And/or you want to work with a large number of devices, and a chip select per device is a lot of GPIO to burn on this. Also you're going to want to handle errors and bad devices on SPI, so it's not like that's new work anyway.
Additionally, sensor data should be oversampled and filtered/fused using a filter that is robust to outliers. That applies whether the data comes in over I2C or not.
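For example, even a simple median filter over the last few samples rejects the occasional garbage reading without any tuning; the window size and integer types here are just illustrative:

```c
/* Minimal sketch: median-of-5 filter, which throws away occasional
 * outliers (e.g. a corrupted I2C read) without tuning. */
#include <stdint.h>
#include <stdlib.h>

#define WIN 5

static int cmp_i32(const void *a, const void *b)
{
    int32_t x = *(const int32_t *)a, y = *(const int32_t *)b;
    return (x > y) - (x < y);
}

/* Push a new raw sample into a ring buffer and return the median of the
 * last WIN samples.  The caller pre-fills `window` with the first reading
 * so the warm-up period doesn't return garbage. */
int32_t median_filter(int32_t *window, size_t *idx, int32_t sample)
{
    int32_t sorted[WIN];

    window[(*idx)++ % WIN] = sample;
    for (size_t i = 0; i < WIN; i++)
        sorted[i] = window[i];
    qsort(sorted, WIN, sizeof(sorted[0]), cmp_i32);
    return sorted[WIN / 2];
}
```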
Robust systems assume failures will occur and manage them in such a way that they still do their job correctly.
I'd be interested to hear about the failure modes of I2C that you've observed. Unfortunately I'm not familiar with CAN or SPI. Do these alternatives provide error correction as part of the protocol? Do they offer superior reliability in some other way?
1) Errant clock pulses--either through reflection or ground bounce since your system isn't differential--which put your system into an unknown state since every clock pulse is relevant to transaction state. And the fact that the lines are merely pulled up rather than actively driven makes them more susceptible.
2) Hung buses--once the bus is hung, there is no guaranteed way to reset it. You have to rely on your slave I2C chip actually having a timeout reset. (A commonly attempted, but not guaranteed, recovery is sketched after this list.)
3) Transactions often complete but can ACK wrong. This is bad if you're doing something that isn't idempotent.
4) Nobody ever gets an I2C module completely correct. I2C has a bunch of little corners (Can it send just an address byte? Does it really get clock stretching correct? Does it send the correct events to the CPU on restarts?) and everybody always seems to miss at least one of them.
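For 2), the usual attempt (again, not guaranteed) is the bus-clear procedure from the I2C spec: clock SCL up to nine times until whichever slave is holding SDA low lets go, then generate a STOP. Something like the sketch below, where the gpio_*/delay helpers are hypothetical HAL calls, not any real API:

```c
/* Sketch of an I2C "bus clear" attempt: if a slave is holding SDA low,
 * toggle SCL up to 9 times, then generate a STOP.  The gpio_* helpers are
 * hypothetical HAL calls for an open-drain pin (true = release high,
 * false = drive low).  Not guaranteed to work; power-cycling the slave
 * is the real backstop. */
#include <stdbool.h>

extern void gpio_set_scl(bool level);
extern void gpio_set_sda(bool level);
extern bool gpio_read_sda(void);
extern void delay_us(unsigned us);

bool i2c_bus_clear(void)
{
    gpio_set_scl(true);                   /* make sure SCL is released */
    delay_us(5);

    /* Clock SCL until the stuck slave releases SDA (at most 9 pulses). */
    for (int i = 0; i < 9 && !gpio_read_sda(); i++) {
        gpio_set_scl(false);
        delay_us(5);
        gpio_set_scl(true);
        delay_us(5);
    }

    if (!gpio_read_sda())
        return false;                     /* still stuck: power-cycle the slave */

    /* Generate a STOP: SDA low -> high while SCL stays high. */
    gpio_set_sda(false);
    delay_us(5);
    gpio_set_sda(true);
    delay_us(5);
    return true;
}
```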
SPI: Not great, but the extra control lines and the fact that nothing is bidirectional is an asset for reliability.
The primary advantage in reliable systems is that SPI has a CS/SS line that serves as a reset. Even if your clock bounces or a slave chip gets confused, you can often detect it and drop the CS/SS before you complete the requisite number of SCLK cycles and prevent the transaction from completing. Also, dropping the CS/SS almost always frees the SDI/MISO line even if the chip goes haywire.
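To make that concrete, here's a rough sketch of a transaction wrapper that drives CS from a GPIO and aborts by releasing it on a timeout; the spi_* and gpio_* helpers are hypothetical HAL calls, not any particular vendor API:

```c
/* Sketch of using CS/SS as a reset: drive CS from a GPIO, and if the
 * transfer times out, release CS to abort the transaction and free MISO.
 * spi_* and gpio_* helpers are hypothetical HAL calls. */
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

extern void gpio_set_cs(bool asserted);                 /* active-low CS pin */
extern bool spi_xfer_byte(uint8_t tx, uint8_t *rx,      /* false on timeout  */
                          unsigned timeout_us);

bool spi_transaction(const uint8_t *tx, uint8_t *rx, size_t len)
{
    gpio_set_cs(true);                   /* assert CS, start the transaction */

    for (size_t i = 0; i < len; i++) {
        if (!spi_xfer_byte(tx[i], &rx[i], 100)) {
            gpio_set_cs(false);          /* abort: the slave resets its state
                                            machine and releases MISO */
            return false;
        }
    }

    gpio_set_cs(false);                  /* normal end of transaction */
    return true;
}
```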
CAN: Specifically designed for harsh environments with voltage spikes, temperature fluctuations, RF interference, etc.
Fully differential so resistant to noise--some topologies can even survive a break in one of the lines. Retransmits are baked into the hardware, and error detection (CRC plus acknowledgement) is baked into the protocol. The nodes resynchronize their bit timing on the fly, so the bus tolerates frequency drift.
The downsides are generally more complexity (although that is buried in silicon), the need for external transceivers, and normally higher current consumption during operation.
SPI is similar to I2C in concept, but there's separate RX and TX data lines and a discrete "chip select" signal for addressing. The chips drive both high and low rather than relying on a pull-up, so SPI can operate significantly faster than I2C.
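A minimal sketch of one full-duplex transfer through Linux spidev, showing the separate TX (MOSI) and RX (MISO) buffers; the device path, mode, clock rate and command byte are placeholders:

```c
/* Minimal sketch: one full-duplex SPI transfer via Linux spidev.
 * Device path, mode, clock rate and command bytes are placeholders. */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/spi/spidev.h>

int main(void)
{
    int fd = open("/dev/spidev0.0", O_RDWR);
    if (fd < 0) { perror("open"); return 1; }

    uint8_t  mode  = SPI_MODE_0;
    uint32_t speed = 1000000;            /* 1 MHz placeholder clock */
    ioctl(fd, SPI_IOC_WR_MODE, &mode);
    ioctl(fd, SPI_IOC_WR_MAX_SPEED_HZ, &speed);

    uint8_t tx[2] = { 0x80, 0x00 };      /* e.g. a "read register 0" command */
    uint8_t rx[2] = { 0 };

    struct spi_ioc_transfer xfer;
    memset(&xfer, 0, sizeof(xfer));
    xfer.tx_buf   = (unsigned long)tx;   /* shifted out on MOSI */
    xfer.rx_buf   = (unsigned long)rx;   /* shifted in on MISO  */
    xfer.len      = sizeof(tx);
    xfer.speed_hz = speed;

    if (ioctl(fd, SPI_IOC_MESSAGE(1), &xfer) < 0) {
        perror("SPI_IOC_MESSAGE");
        return 1;
    }
    printf("received 0x%02x 0x%02x\n", rx[0], rx[1]);

    close(fd);
    return 0;
}
```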
CAN allows many nodes to communicate via broadcast messages on a shared twisted pair of wires called a bus. The signals are differential, making them fairly immune to noise. The messages are CRC'd and acked, and are automatically retransmitted as necessary. CAN is "content addressed" rather than addressed-to-recipient, and each message's address also acts as a priority for arbitration of the bus; the highest priority message ready to be written always gets the next opportunity to transmit. To make things annoying, messages can only contain 8 bytes of payload.
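Here's a minimal SocketCAN sketch that sends one classic (8-byte-max) frame and waits for a reply; the interface name and CAN IDs are placeholders, and the CRC/ACK/retransmit machinery described above all happens in the controller, invisible to this code:

```c
/* Minimal sketch: send one classic CAN frame and read one frame back via
 * Linux SocketCAN.  Interface name and CAN IDs are placeholders. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <net/if.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <linux/can.h>
#include <linux/can/raw.h>

int main(void)
{
    int s = socket(PF_CAN, SOCK_RAW, CAN_RAW);
    if (s < 0) { perror("socket"); return 1; }

    struct ifreq ifr;
    strcpy(ifr.ifr_name, "can0");            /* placeholder interface */
    ioctl(s, SIOCGIFINDEX, &ifr);

    struct sockaddr_can addr = { 0 };
    addr.can_family  = AF_CAN;
    addr.can_ifindex = ifr.ifr_ifindex;
    bind(s, (struct sockaddr *)&addr, sizeof(addr));

    /* Lower ID wins arbitration, i.e. acts as higher priority. */
    struct can_frame tx = { 0 };
    tx.can_id  = 0x123;                      /* placeholder message ID */
    tx.can_dlc = 2;                          /* classic CAN: 0..8 bytes */
    tx.data[0] = 0xde;
    tx.data[1] = 0xad;
    write(s, &tx, sizeof(tx));

    struct can_frame rx;
    if (read(s, &rx, sizeof(rx)) == sizeof(rx))
        printf("got id 0x%x, %d bytes\n", rx.can_id, rx.can_dlc);

    close(s);
    return 0;
}
```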
I2C, SPI and CAN all have different purposes. I2C is nice for interfacing to low data rate chips because of the low signal count. SPI is nice for interfacing to high data rate chips but is 4 signals at least. CAN is nice for connecting microcontrollers that are far apart from each other.
Folks abuse I2C because it's only two signals, and run it over connectors. The protocols built on top of I2C quickly fall apart when the signals are flaky, leading to firmware pain. SPI is harder to abuse because no one wants to run 4 comms signals over connectors, and typically at several MHz, electrical engineers know it's a bad idea. CAN is very robust when done correctly (good enough for drive by wire in cars), but it's also orders of magnitude more complicated.
The typical way to add CAN to a raspberry pi is an MCP2515 chip. The linux driver for the MCP2515 can barely service the chip fast enough though, which means it tends to drop data at the higher bitrates.
> The typical way to add CAN to a raspberry pi is an MCP2515 chip. The linux driver for the MCP2515 can barely service the chip fast enough though, which means it tends to drop data at the higher bitrates.
This isn't always the fault of the driver. The MCP2515 is just a kind of crappy controller and has lots of bugs in spite of how much it's used. Even on a Beaglebone (which doesn't struggle to keep up), the MCP2515s are somewhat suspect.
The Beaglebones have a CAN controller on board, so all you need to do is hook up a CAN transceiver and you're good to go. (Being able to use Wireshark to debug CAN is really quite a nice change over most nasty CAN tools ...) And because the controller is part of the silicon, it's really fast.
I didn't mean to imply that the linux driver is of poor quality. It's just a matter of the kernel's inability to schedule "soft interrupt" handlers with low enough jitter to guarantee the hardware is serviced fast enough. The MCP2515 certainly doesn't make it easy as it only has two RX mailboxes without FIFO semantics. It's kind of funny because any old 8-bit MCU has plenty of oomph to reliably drive an MCP2515, but ridiculously powerful 64-bit SoC boards can't handle it.
I2C is alright as long as it's on the same PCB and your code handles nacks and detects hung transfers. Running I2C through a connector is asking for trouble though.
Use CAN preferably, or SPI if nothing better. Even old-school RS-232 is probably superior.
I2C simply isn't reliable in harsh environments without extensions (I believe that there is an ESA paper on this somewhere).
CAN, in particular, was built for harsh environments (automobiles).