:) Discussing ever-changing authentication schemes alone could (and does) fill a book. Then protocol changes. Discovery changes. UI changes. Better feedback on copy/delete performance. Pause/resume. Syncing and replication schemes. Performance tweaks. Metadata preservation. Filesystem-specific concerns. And this is all before you step outside of the Windows world.
What happens if the stream is interrupted? How long do you wait for it to start? When do you tell the user? What do you tell the user? What options do you give them? Do they even care? What if the disk you're writing to disappears? What if you're doing multiple files but only one of them fails? What if power cuts in the middle of a transfer? What if the destination directory gets renamed? What if the destination volume gets renamed? What if the write speed goes into a hole because the SSD decided now was a good time to start page compaction but the inbound data keeps piling up? What happens if the metadata doesn't come over correctly or is missing? When do you submit the arriving file for indexing? When do you submit the arriving file for security scanning?
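To make the interruption/power-cut cases above concrete, here is a minimal sketch of a resumable copy loop. `copy_with_resume` is a hypothetical helper, not any real file manager's implementation: it resumes from whatever partial destination a previous interrupted run left behind, fsyncs before claiming success, and copies timestamps/permissions at the end. It deliberately ignores most of the other questions (multiple files, vanishing volumes, user feedback).

```python
import os
import shutil

def copy_with_resume(src, dst, chunk_size=1 << 16):
    """Copy src to dst, resuming from any partial dst left by an
    earlier interrupted run. Returns the bytes written this call.

    Hypothetical sketch: assumes a partial dst is a valid prefix of src,
    which a real tool would have to verify (e.g. via checksums).
    """
    done = os.path.getsize(dst) if os.path.exists(dst) else 0
    written = 0
    with open(src, "rb") as fin, open(dst, "ab") as fout:
        fin.seek(done)              # skip the bytes that already arrived
        while True:
            chunk = fin.read(chunk_size)
            if not chunk:
                break
            fout.write(chunk)
            written += len(chunk)
        fout.flush()
        os.fsync(fout.fileno())     # don't report success until it's on disk
    shutil.copystat(src, dst)       # best-effort metadata preservation
    return written
```

Even this toy version has to take a position on several of the questions above (what counts as "done", when metadata is copied), which is rather the point.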
Nothing is straightforward in software. It takes a lot of work to make it look easy.
Shouldn't all these problems be dealt with at the filesystem level? I mean, if your filesystem browser needs to be aware of file indexing and anti-malware, you are doing it very wrong.