The biggest problem is that there's no gradual migration process. The first remote worker on a team forces everyone to change how they work and communicate. Even with the whole team accommodating the remote person, that person will still miss out on a lot. It can be frustrating for both sides.
Also, the software for real-time collaboration sucks. Even if you constantly screen-share and video chat, it's not as good as working in the same room. Simple things like pointing at a section of code are difficult to do, and synchronizing files can be a pain.
I'm working on solving this (Floobits YC S13 yadda yadda), but there's still much to be done.
One small startup I helped out briefly many years ago was using Linux/KDE. They had it set up so that everyone's desktop was accessible from everyone else's desktop. In Linux you usually have several virtual desktops which are easy to switch between, and they made it so that everyone else's desktops appeared in your desktop list. There are virtual desktop products for Windows too.
I don't know if they used standard software for this, or whether it was their own cobbled-together scripts with normal screen-sharing apps like VNC. I was not involved in setting it up. But it worked very well.
I recall discussing using MOSIX too (that's how long ago this was), so we could act as one big Beowulf cluster for compiles and the like. That appealed to them, but I don't know if it was acted on after I moved on.
In this particular case it wasn't about monitoring; it was about collaboration. But you can see how such a system would appeal to managers too.
This complete lack of privacy took some getting used to, but it was super focusing. It quickly sank in that everything you did (on your screen) was visible to everyone else, and whilst there was some tolerance for reading the news and so on, everyone policed themselves and stayed on task.
This was in an open-plan office, but with modern bandwidth it could work well remotely.
> The biggest problem is that there's no gradual migration process.
I'm not convinced this is true: the biggest problem seems to be the way knowledge spills over from person to person when they're in close proximity, and the way new ideas often appear from serendipitous encounters. Popular books like Steven Berlin Johnson's Where Good Ideas Come From and Edward Glaeser's Triumph of the City discuss the issue in greater detail and cite much of the original research on this phenomenon.
Some of that original research attempts to measure knowledge transfer by proxy, like the way patents tend to cite other patents whose authors live nearby. It's not perfect, but it is pretty compelling. Other research relies on accounts from famous people describing how chance encounters fertilized their ideas.
> I'm working on solving this (Floobits YC S13 yadda yadda), but there's still much to be done.
Very cool. If you haven't read Johnson's book in particular you should!
There may not be a full migration path, but I would say the first thing that needs to change (and still hasn't happened in many companies) is getting everyone:
* To use shared sources for things (company/department-wide wikis, bug trackers, etc.) instead of c:/my_docs/spec.v1.docx being emailed around.
* To cut down on meetings (which, in my experience, are usually unnecessary in a software-dev environment at least).
Obviously some roles are going to be less amenable to remote working, but without the above two (or even just the first one) done, most companies couldn't do it even if they wanted to.
> The biggest problem is that there's no gradual migration process.
Sure there is. The process is similar to all the WinXP to Linux migrations going on. The intermediate step for OS migration is cross-platform applications. The intermediate step for remoting is to phase out everything that is based on paper or face-to-face communication.
Based on my experience with heavily collaborative remote work, the gains are tremendous. Local efforts do not scale when you get an influx of people. Remote processes do. And even for locals, the remote tech works better and faster than the old methods. (Of course a group of locals still gets a bonus to communication and is often more productive than a bunch of individual remotes.)
The answer is not screen-sharing, it's making everything an API. The remote worker is responsible for her API, and no one needs to know or care how it works as long as it does.
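To make that concrete, here's a rough sketch of what "the remote worker owns an API" might look like; the service, route, and data below are invented purely for illustration:

```python
# Hypothetical example: the worker owns this small service, and teammates
# depend only on the URL and the JSON shape it returns, not on how the
# work behind it gets done.
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/reports/<team>")
def report(team):
    # Implementation details (language, database, deployment) stay hidden
    # behind this route.
    return jsonify({"team": team, "open_bugs": 0})

if __name__ == "__main__":
    app.run(port=5000)
```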
While I generally do favor service-oriented architectures (and remote work), this is woefully short-sighted.
Say the remote worker builds you a shaky RoR+MongoDB app, no scalability, no error handling, no tests, no comments. At least in the short run, it works - there are occasional outages, but the worker (remote or otherwise) does a great job keeping it going. Then the remote worker quits. What do you do?
Just because something is behind an API doesn't mean you don't have to worry about knowledge transfer, code quality, etc.
Okay, good points ;-) I may have overstated my case for emphasis; but I maintain that in general, an API is better than a library (there was an article saying just that some time ago on HN; can't seem to find it right now).
I wouldn't even go that far. "API" and "library" are not distinct concepts. A library has an API - it's the set of public functions (ideally rarely changing) that you are supposed to call from external programs. It is beneficial if one developer can hide their work behind a simple interface with only a few access points (the API), but there is no compelling reason this should be JSON over HTTP.
The real distinction between a network API and a library is that the network API allows users to avoid thinking about the hardware. The API provider is responsible for allocating hardware; the user need only send messages over the network to use it. I.e., buying hard drives is Amazon's problem, I just GET/PUT my files onto S3.
If the API isn't hardware-intensive, this is pointless - you are adding network overhead (i.e., latency, network errors) for nothing.
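A minimal sketch of the distinction, with made-up function names and a made-up endpoint: the caller's interface is nearly identical either way, the only real difference is whose machine does the work.

```python
import urllib.request

def storage_get(key: str) -> bytes:
    # Library style: the code runs in the caller's own process, on the
    # caller's own disk and hardware.
    with open(f"/var/data/{key}", "rb") as f:
        return f.read()

def storage_get_remote(key: str) -> bytes:
    # Network-API style: the provider owns and allocates the hardware; the
    # caller just sends a request and pays the network overhead (latency,
    # possible failures) in exchange.
    with urllib.request.urlopen(f"https://storage.example.com/{key}") as resp:
        return resp.read()
```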
That just opens another can of worms. Now every single application you either provide or use comes with an integration issue attached.
I have read somewhere that this was the Amazon way of doing things, but it was a costly investment. A costly technological investment. And even Amazon did it only at the team level, not the individual level.
I cannot imagine a 500-developer company being able to afford that without some serious competitive advantages down the line. If what I can see around here (a medium-sized IT company in London) is any indication, once you can afford not having people on site, there are better opportunities to take in Poland or even India than a small discount on a UK worker.
Unfortunately, a network API letting you mix and match languages still opens up risk from a rogue employee who uses something different than the status quo and then quits, leaving something unfamiliar that needs to be maintained.
What a demoralising (and grossly incorrect) way to look at work.
If your work can be reduced to such an API, then I'd argue that you can be replaced with a script. Most jobs, however, even today, are far more complex than that. As a very obvious example, culture is an extremely important aspect of building a successful business, and a person's contribution to culture is irreducibly complex, unmeasurable, and impossible to capture in an API-style contract.
I meant the remote worker builds and maintains an API, not that she herself is an API.
> culture is an extremely important aspect of building a successful business
Yes, maybe; but there are many businesses that are already built and that would benefit from treating some parts of their process as self-contained APIs.
I would go so far as to say many already do, without knowing it.
That only works if you can separate responsibility out to the extent that each person is responsible for a specific area of the code and works only on their designated area.
What happens in practice is that a feature or bugfix might require a large change to one part of the code and a much smaller change in another.
So the developer tasked with the small change will finish before the other. Now you have an idle developer and an overloaded developer holding the process up. If you have a process whereby the idle developer can't share some of the other's workload, you are going to have very inefficient development.
You also run into problems if one developer is on holiday, off sick, or leaves the company, since nobody else has ever looked at their code.
There's also a risk of bringing politics into refactoring decisions: "This functionality should be part of your API."
I've just started using Floobits. I work remotely with people all over the place (and they all use Sublime Text), so it's a fantastic product for me. One thing I haven't got my head around is the workflow involving git. Have you guys put any thoughts down about this?
We use Floobits to develop Floobits, so we feel your pain.
If one person commits and pushes, others in the same workspace have to stash, pull, and stash pop. That doesn't always work, since people might edit files in the workspace during the stash/pull/pop. It also doesn't solve the problem of people being on different branches.
We're planning on making our plugins and tools more git-aware. Syncing much of .git would help things quite a bit. We're also toying with the idea of making each workspace a git repo, but that's farther off.
I see there's a git-core feature request open for multiple authors[1]. It would be awesome if Floobits could include all of the workspace authors (gazing into the future here...).