The system performing the backup and the system storing the backups can (and often should) be in different locations.
What (I believe) klodolph is describing is that you can self-sign an encrypted backup and give it to an untrusted third party. When you later retrieve the backup you know that it has not been tampered with by the untrusted third party.
In this case you have direct access to all private keys involved, so no web of trust is needed. In this scenario you trust a key only if you have created it yourself.
You still don't need to sign in that case. Either it wasn't tampered with and still decrypts, or it was tampered with and no longer decrypts.
Edit: to elaborate. What is your threat model? If you are handing over encrypted data, then when it is given back you are safe knowing it wasn't tampered with. Either that, or the encryption scheme you used is broken.
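To make that concrete, here's a minimal encrypt-then-MAC sketch in Python (stdlib only). The SHA-256 counter-mode keystream is a toy stand-in for a real cipher like AES-GCM, but it shows the point: the key holder detects tampering on retrieval with no signature involved.

```python
import hashlib
import hmac
import secrets

def _keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    # Toy counter-mode keystream from SHA-256 -- illustration only,
    # use a vetted AEAD (e.g. AES-GCM) in practice.
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def seal(enc_key: bytes, mac_key: bytes, plaintext: bytes) -> bytes:
    # Encrypt, then MAC the nonce + ciphertext with a key only we hold.
    nonce = secrets.token_bytes(16)
    ct = bytes(a ^ b for a, b in
               zip(plaintext, _keystream(enc_key, nonce, len(plaintext))))
    tag = hmac.new(mac_key, nonce + ct, hashlib.sha256).digest()
    return nonce + ct + tag

def unseal(enc_key: bytes, mac_key: bytes, blob: bytes) -> bytes:
    # Verify the MAC before decrypting; any modification fails here.
    nonce, ct, tag = blob[:16], blob[16:-32], blob[-32:]
    expected = hmac.new(mac_key, nonce + ct, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("backup was tampered with")
    return bytes(a ^ b for a, b in
                 zip(ct, _keystream(enc_key, nonce, len(ct))))

enc_key, mac_key = secrets.token_bytes(32), secrets.token_bytes(32)
blob = seal(enc_key, mac_key, b"backup contents")

# Untouched blob decrypts fine.
assert unseal(enc_key, mac_key, blob) == b"backup contents"

# An untrusted host flipping one ciphertext byte is caught on retrieval.
tampered = blob[:20] + bytes([blob[20] ^ 1]) + blob[21:]
try:
    unseal(enc_key, mac_key, tampered)
except ValueError:
    print("tamper detected")
```

Since you generated both keys yourself, there is no third party to trust and no web of trust needed, which is the scenario above.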
If you are handing unencrypted data to someone else to encrypt using your public key, then you could use their public key to validate that they encrypted it. But you are, by definition, trusting that they did the encryption like you wanted them to.
Now, if you hand them a key to sign with and a key to encrypt with, you also have to give them the data to encrypt. In which case... why do you trust every link in the chain between you and them? The game is already over if your unencrypted data is in transit between you and someone else to start with.