Interestingly enough, this feature is the primary reason behind Atom itself existing. We saw the first internal demo of "Atom" (I believe it was "Thunderhorse" at the time) 6-7 years ago, and the main idea was real-time collaboration on code. That sorta took a backseat for awhile as GitHub started to recognize that a collaborative editor was pretty swell in its own right, but glad to see that it's finally all come full circle.
This single feature could have the largest impact on my workflow in 2018. As much as I love async task management, real-time collaboration with remote team members should be fascinating ;)
I guess the next step would be something along the lines of Google Wave where you could playback the "conversations" and see the learning process develop.
Even as the author, I think web-based IDEs have pretty limited utility, but it is a great showcase for the power of our API. We (my co-founder and I) built this in about 10 days.
The fact that VS Code's version isn't ready yet while Atom's is makes me think that Microsoft wanted to respond to Atom's announcement. But I honestly have no idea if that's the case. Just pure speculation.
Edit: Atom's announcement was yesterday. Microsoft's was today. That further fuels my speculative hunch. :)
Or it's based on some pair programming hype/growth/buzz that happened months ago, and both decided to implement it as an option. I mean, it takes a while to implement complex feature sets like this, so maybe some trend in the industry got people in both communities trying to implement this at the same time?
It's not exactly groundbreaking. SubEthaEdit did it years ago.
That VS Code is not Emacs-like, that’s my problem with it.
In my experience nothing beats Emacs at editing text. Vim has better shortcuts, but Emacs is just smart about everything.
My problem with Emacs is that it’s showing its age, it’s hard to configure and you have to learn an old and obscure LISP dialect for it. On the other hand I’ve heard that VS Code plugins are a joy to develop, MS apparently did a good job at that.
But Emacs? Yes please, I want that — plus to be honest, in 20 years from now Emacs will still be around, whereas I have my doubts about these fancy new editors.
> you have to learn an old and obscure LISP dialect for it
As someone who is only glancingly familiar with Emacs (I have written exactly one elisp function, and that with help), this may be a really stupid question, but couldn't Emacs have bindings for Lua or Python or something? That would increase the number of people who can program for it and customize it.
> in 20 years from now Emacs will still be around
I think the real risk for emacs is, over the years, slowly losing the pool of people who care enough to contribute to it -- not just core developers, but also people who write packages, themes, etc. I already see a lot of developers who think Atom / VSCode / Sublime Text is "good enough". You may choose to discount Sublime because it's closed source (I do despite loving it otherwise), but VSCode and Atom are open-source and browser technology is only going to get better.
Elisp would still be a much better language than Python or Ruby (for Emacs), especially now that lexical binding is becoming standard. Emacs people would like to move to Scheme, if anything. (Even RMS wishes Emacs would move to Scheme.)
Emacs is a lisp; there's a very small and tiny layer of C at the bottom and everything else is a tower of lisp. You can interact with it with other languages, via many different means, but in the process you lose the joy and power of working in a live lisp environment.
> there's a very small and tiny layer of C at the bottom [...]
That layer of C is hardly "tiny." Everything from font drivers to process management to a lisp interpreter to window and buffer code, to overlays, and a lot more.
There's 1,262,537 lines of elisp in 25.3, compared to 291,203 lines of C and C headers. While there is a fair amount of C code to do what you mentioned, much of the C is definitions for the core lisp language with 1,483 DEFUN statements in it.
But yeah, not tiny, sure. I'm looking forward to it being replaced via the Remacs project.
This is only true if we're strictly talking about editing text without any plugin functionality. As soon as you add code completion features, Vim shows its age; the ALE extension for async linting, for example, feels very sluggish on a few only slightly dated laptops I tried, and frequently grills the CPU.
I also noticed ALE was very slow with JavaScript, so I changed it to run on save only.
However Neovim’s plug-in architecture is a big improvement. I’m running Deoplete (which provides intellisense like functionality) on the same machine, and it is basically instant. There are GIFs at the bottom of the repo:
VS Code is trying hard to be an IDE for all languages. If you use pure VS Code without extensions, it's quite snappy. But as you start adding more and more extensions, it slows down, and quite quickly at that.
This is true, but it's still very snappy for an Electron app.
After adding around 10+ plugins to Atom, it not only became slower but started crashing or throwing internal errors.
With VS code I have 17 plugins installed and it still feels light enough. Personally I disable most plugins until I need them and I think most people should do the same considering how easy it is to disable a plugin.
I have all my language specific plugins disabled until I need to use them.
Does/would a high-end (e.g., Xeon-class) processor and/or a boatload of RAM help keep Code's performance snappy?
FWIW, I run my Code install quite light since my "duties and responsibilities", ahem, only require a few languages/data types, ergo, I am not pushing it hard at all.
FTR, I have a (licensed) install of Sublime, which I confess is snappier than Code, but the difference is so small I use whichever one is better for the task at hand, and any difference disappears under the pressure of an outage-enforced deadline :-D
That's why I've been using Atom instead. Since 1.17 or so it got quite usable and it's getting better with every iteration. Nowhere near as fast as Sublime Text but usable for daily coding. I disabled the git plugin because we're using fossil. I hope it doesn't become an IDE like VS Code. At least they made the IDE packages separate.
Regarding fossil, do you use fossil for tickets and self-hosting? How well do fossil tickets work compared to Github issues? Do you use any custom themes for fossil tickets?
We used tickets somewhat, and we are self-hosting through an nginx proxy in order to log the HTTP activity and proxy over HTTPS. The ticketing has Markdown support. We use a custom theme, not for tickets but for the whole repo. Fossil tickets are only used internally; we use osTicket for regular ticketing.
There is also a service for hosting fossil repos called Chisel, but it's nowhere near as useful as Github.
(Several years of realtime editing experience reporting in)
Plain text CRDT and OT systems are often mutually adaptable. Even if they aren't directly compatible, with a bit of work it'll probably be possible to adapt from one realtime editing protocol to the other.
Of course, in the medium to long term having a standard for OT/CRDT operations that works in a lot of source code editors would be fantastic.
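To make the OT side of this concrete, here's a toy transform function for concurrent plain-text inserts. The op shape (`{pos, text, site}`) is invented for illustration and is far simpler than what real systems like teletype or Google Docs use, but it shows the core convergence idea: each side transforms the remote op against its own already-applied op before applying it.

```javascript
// Transform insert op `a` against a concurrently applied insert op `b`.
// If `b` inserted at or before `a`'s position, `a` shifts right by the
// length of `b`'s text. Position ties break on site id so both replicas
// make the same choice and converge.
function transformInsert(a, b) {
  if (b.pos < a.pos || (b.pos === a.pos && b.site < a.site)) {
    return { pos: a.pos + b.text.length, text: a.text, site: a.site };
  }
  return a;
}

// Apply an insert op to a plain string document.
function applyInsert(doc, op) {
  return doc.slice(0, op.pos) + op.text + doc.slice(op.pos);
}
```

Applying two concurrent inserts in either order, with the second one transformed against the first, yields the same document on both replicas.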
I guess that when users make a habit of using the free tool, the company that provides it can easily embed other convenience habits that could potentially lead to a future sale.
Both Microsoft and GitHub have commercial products to sell at a later stage. VS Code now has integrations with Azure; I don't know about GitHub integrations in Atom, but the potential is there.
When the user is using your free tool, you are one step closer to providing something additional that is so convenient it's hard to say no, even if it's paid.
Ballmer's Law: "Developers, developers, developers." If you want to be a platform vendor, you need to have developers. Lose the developers and you lose the business.
Microsoft has been a platform vendor for a long time. VSCode is part of their argument for why they should continue to be one.
What I'm interested in is whether Live Code Share is a free service... it feels like it might be paid. Or free with limits, then pay? Not sure. It's also for Visual Studio.
It would also be interesting if someone made a teletype plugin for VSCode :)
Every time I see one of these "code together in real time" announcements I remember SubEthaEdit, which did this flawlessly 14 years ago, and released the collab part as a library that other apps can use: Coda uses (or at least used, I haven't used Coda in years) it and is/was compatible for sharing.
Oh yes, I remember we were using this among attendees at WWDC to write full notes of the talks! We didn't know each other but spread the connection parameters between neighbors.
I just stopped sharing the document. I was running it over crappy cafe wifi for 2.5 hours, max ~20 people, always 5 to 10... performance seemed fantastic - didn't tax my lappy at all. Very impressed!
[Re-shared it at 28e6c3b4-754c-44ef-9406-869604db9db5 - it would be good if you could keep a UUID somehow, but I guess that's unfeasible]
I love Floobits also. Works great in those I've tested: IntelliJ, Atom, and Emacs. But what is still missing in many of these peer programming solutions is the ability to leave for a few moments (or a few days) then return and do a playback of what your peer(s) did while you were gone.
For me, the editor-agnosticism is the most important feature I would want my live coding experience to have. My team uses a mixture of Vim, Sublime, Emacs, VS Code, and Atom, and we have configurations we are comfortable with. It's too bad that this seems to be happening well within the confines of each editor's ecosystem, and not by some common protocol that all editors could share.
The team behind this feature think that editor-agnosticism is also really important, that's why they split up this project into atom-teletype and teletype-core. AFAICT the 'core' library could be used to implement a package in any of the electron based editors.
Could the protocol implemented by teletype-core be (easily) spoken by a non-electron editor, or is there an inherent impedance mismatch where e.g. the protocol structures its data in ways amenable to DOM manipulations of text but not to other text buffer formats?
The protocol isn't coupled to the DOM or Electron in any way. The one thing that makes it easier to implement in electron is that it uses the WebRTC standard for peer to peer connections.
WebRTC is a beast of a protocol to implement, and so far browsers have had the canonical implementation. There's a few C++ libraries out there, but even then it's a lacking ecosystem.
I've tried implementing a very similar algorithm (one could say it's the same approach) in the beginning of this year, but had one remaining issue with concurrent overlapping deletions that I couldn't figure out (and the paper I was basing the algorithm on didn't account for it: http://www.sciencedirect.com/science/article/pii/S1474034616...): https://github.com/jure/rgass
Weihai Yu's implementation does account for it, however: https://dl.acm.org/citation.cfm?doid=2660398.2660401 , but his implementation is in Lisp, and I've never had the stamina to work through it for that one edge case.
Kudos to the team at GitHub, I'll be studying this implementation closely.
That's a nice blogpost! It does describe char-based CRDTs nicely, but the RGASS/teletype-crdt algorithms are string-based, which brings a lot of headaches, as you can have insertions that split existing nodes, etc.
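For readers curious what the char-based version looks like, here's a toy RGA-style sketch (illustrative only; this is not teletype-crdt's string-based implementation, and the names are mine). Every character carries a unique `[site, counter]` id, inserts name the id they follow, and concurrent inserts at the same spot are ordered by id, so all replicas converge without transformation. Note the simplification: the skip-loop assumes concurrent ops share the same anchor, which is exactly the kind of assumption a string-based RGASS can't make once insertions start splitting nodes.

```javascript
// Minimal char-based RGA-style CRDT sketch.
class RgaDoc {
  constructor(site) {
    this.site = site;
    this.counter = 0;
    this.chars = []; // each: { id: [site, counter], value, deleted }
  }
  newId() { return [this.site, ++this.counter]; }
  static idLt(a, b) { return a[1] !== b[1] ? a[1] < b[1] : a[0] < b[0]; }
  indexOf(id) {
    return this.chars.findIndex(c => c.id[0] === id[0] && c.id[1] === id[1]);
  }
  // Local insert of `value` after the char with id `afterId`
  // (null = start of document). Returns the op to broadcast.
  localInsert(afterId, value) {
    const op = { type: 'insert', id: this.newId(), afterId, value };
    this.applyInsert(op);
    return op;
  }
  applyInsert(op) {
    let i = op.afterId === null ? 0 : this.indexOf(op.afterId) + 1;
    // Skip concurrent inserts at the same spot that have larger ids,
    // so every replica picks the same total ordering.
    while (i < this.chars.length && RgaDoc.idLt(op.id, this.chars[i].id)) i++;
    this.chars.splice(i, 0, { id: op.id, value: op.value, deleted: false });
    this.counter = Math.max(this.counter, op.id[1]);
  }
  // Deletes are tombstones: the char stays but is hidden.
  applyDelete(op) { this.chars[this.indexOf(op.id)].deleted = true; }
  text() {
    return this.chars.filter(c => !c.deleted).map(c => c.value).join('');
  }
}
```

Two replicas that concurrently insert at the same position end up with identical text once they exchange ops, regardless of delivery order.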
This reminds me of that rant from last week about how computers are less functional these days than they were in the 80's. The reason is that the Amiga had cooperative document editing way back when. (Sorry, can't remember the programs that supported it.)
Some version of AmigaDOS also had truly relative timestamps, so you might see a file last accessed "Christmas, 1991."
I had the same idea for Sublime Text about 4 years ago, but due to the Sublime API restrictions at the time, it died in 2014. Does anyone know whether the Sublime API is now good enough to implement this feature?
My project: https://github.com/learning/SublimeTogether
And a screencast: https://vimeo.com/96316581
"Sharing the same physical machine is impossible for remote teams" -- if only there was some kind of network that would allow people to log into a machine remotely...
More seriously, though, in the mid-90s, I worked at a place where most work stations were Sun Sparcs. One way of "coding together" was that one person did an xhost+ to allow a second frame of an Emacs running on a second person's machine to be opened on the first person's display. It was used only very rarely, though.
This looks really neat. As a side note, it bothers me a bit that they haven't used an actual URI, e.g. tty:xxxxxxxx-xxxx-... That would allow for nice hyperlinks from web pages and chat rooms.
“participants all keep their own custom key bindings, packages, and themes.”
What happens when two different people editing the same document have different settings for the number of spaces per tab character? Whose takes precedence? (Or would there be a possibility of inconsistent spacing depending on who is adding a tab?)
I would guess that tab -> spaces happens locally, and then the spaces are what's sent over the wire. So it would be whoever inserted the tab. (What other possibility could there be?)
If it is displayed spaces per tab character, then there is no conflict in the file but the appearance should be different to different viewers.
If it is space characters replacing a tab keypress, then the spaces in the document should reflect the conversion ratio of the user that typed the tab.
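That "convert locally, send spaces over the wire" idea can be sketched in a few lines (a hypothetical helper, not anything from teletype itself): the tab keypress never leaves the typist's machine, only the resulting spaces do, so each collaborator's tab-width setting stays a purely local concern.

```javascript
// Expand a Tab typed at `column` into spaces up to the next tab stop,
// using the *typist's* tabWidth. The returned line is what would be
// shared with collaborators.
function expandTab(line, column, tabWidth) {
  const pad = tabWidth - (column % tabWidth); // distance to next tab stop
  return line.slice(0, column) + ' '.repeat(pad) + line.slice(column);
}
```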
The document isn't "using" anything, since a document is a passive object. Unless you have something like Vim modelines, where you configure it at the top of the file like `/* vim: set tabstop=8:softtabstop=8:shiftwidth=8:noexpandtab */`, the document itself doesn't know anything about how things _should_ be (which is the important part).
An opened document inside an editor is absolutely aware of this, and that's what you're collaborating on. Almost all IDEs support EditorConfig, which makes standardizing things like that across teams trivial.
The detriment of Atom's post is that there are 4 paragraphs of text before I can even see what the feature looks like, and even then it's GIFs of how to install it.
It's amusing to see this while building in Second Life.
The Second Life build tools are a 3D CAD system with real-time collaboration in virtual reality. Several people can be editing the same set of 3D objects simultaneously. Others can stand around and watch, from different viewpoints.
Your comment led me to try to figure out whether Second Life is still really a thing, and I can't quite tell. Apparently Linden Lab has over 200 employees, so there must be some revenue. But is the user base growing, and how big is it?
The user base is flat, but the revenue keeps coming in. About 250 employees, down from a peak of 350, and about $66M in revenue.
Linden Lab is developing something new, called "Sansar", for virtual reality users. It's closer to a video game than a simulated world; it's a platform for "experiences", which are essentially third party games. Whether that works out depends on whether VR gets any traction. Second Life also might get a boost when the new Amazon "Snow Crash" series airs. Second Life is the closest thing to the "metaverse" of Snow Crash.
Second Life is interesting because it is an Internet system that isn't the World Wide Web. It's not HTTP based. It doesn't use HTML or Javascript. It's a huge system of its own. It has a social network, with members, groups, and text and voice chat. Strangely, about 40% of avatars never move; they just use the chat functions.
There's a whole world in there that is totally independent of Google and Facebook.
I can appreciate your efforts here. It isn't trivial to put something like this together but I'm going to go ahead and state the obvious: Face to face is far superior to a solution such as this one for what I think are self evident reasons. You're better off, by orders of magnitude, getting on a plane or train and going to see your code buddies.
When I'm coding with someone we're not usually on the same files or piece of code anyway. And I don't get any value from seeing them type away at the code file that they are focused in, or knowing which file they're in beyond the name. It's a simple "hey mate what line are you on?" question.
I get excellent value from talking to and seeing the other devs face to face figuring out where their head's at and what 'page' they're on. That's the stuff of a proper realtime and face to face code sesh IMHO.
Being together in person doesn't solve the problem of wanting to edit the same code simultaneously. For some projects, git is enough, but for something really small this seems like it would really help.
I'm not saying it isn't a legitimate problem but I've never had it.
For me, if we're not using git I could get the other dev's latest changes by simply doing a file copy of whatever I needed and then get it into my own environment. I work offline with most of my dev mates. We are rarely ever coding at the same time of day given time zone differences and sleeping patterns.
If we're a small team we're probably quite aware of where the other one is in the code and what they're working on. I don't need real time for that.
I think being able to see the other dev type in real time in the same file doesn't solve anything for me and may in fact be distracting and counter productive. I'm not sure 'real time' fits here.
"We are rarely ever coding at the same time of day given time zone differences and sleeping patterns."
This is to aid pair programming. I work on a remote team as well, but there are times we pair when hashing out a particular issue that either I or one of my teammates is having. It's far quicker to do this than to work disconnected and go back and forth, esp. if it may impact the schedule/sprint. In the past I've worked with devs that had as much as a 10 hour time difference, and we literally scheduled times where our schedules crossed so we could pair on a specific issue. One person drives at a time, but it's common to switch back and forth, so tools like this are helpful.
Don't know Atom's features here and my first thoughts were somewhat negative but the video here of VSCode's similar new feature dispelled many of my initial objections.
In VSCode's feature you can even debug together and the remote user can inspect your machine. You aren't sharing a screen, you're just sharing program state and file changes.
I have never tried real-time collaboration. In what scenarios do you need it? I know technical interviewers prefer it. Don't you find it distracting to have two people writing code in the same file? I would rather one finish and then do my part.
Ever been to a hackathon? It's hard to coordinate over Git when you're just getting started on a codebase and there's barely anything for the 3 other people to work on. I've used Cloud9 for this very successfully in the past.
A lot of effort is being put into "dumb" text editors, but we still don't have (to my knowledge at least) a truly cross-platform / cross-backend IDE that lets you, for example, open .sln/.cmake/.make/whatever projects/solutions and change project settings, or even the compiler used, through a nice interface. That could save a lot of development time and improve productivity, and I'm sure it's a much more useful and common use case than collaborative editing.
I really gave Atom a hard try. Back to Notepad after a few months. The add-ons are its only advantage, and they're grossly overshadowed by the resources this behemoth consumes.
Yeah, someone complains about the resource usage in every Atom post.
I feel like it's a rehash of the Emacs vs. vi debate from 30 years ago. Emacs needed many megabytes more than vi, so people wouldn't use it.
As time goes on the resource usage becomes less of a problem. My 4 year old laptop has 16GB of RAM and I don't really worry about it. I'll get 32GB or 64GB in my next computer. I never liked quibbling over memory. My time is much more important to me and I want the best tools. I also want them to swim in RAM.
Yes, Emacs stands for "eight megabytes and constantly swapping". How silly this sounds today.
I wonder why those people even care.
The only time I look at my resource usage is when apps start to behave funny. Or when it's an app that I am developing. Other than that, why should one care?
One argument could be made that they are using memory wastefully. Not sure that's the case. The baseline memory consumption is higher, so what? It's a tradeoff. It's easier to build a better editor using browser-based tools, as much as I like Elisp. Over time Atom and VSCode will close the gap.
Now, if there is a memory leak, or memory increases non-linearly with the workload, then it could be a problem. vi and Emacs are pretty great with large files (Emacs not so great with long lines); browser-based editors usually do not work as well. But there is no reason they shouldn't, it just takes engineering effort.
"Yes, Emacs stands for "eight megabytes and constantly swapping". How silly this sounds today. I wonder why do those people even care."
Well, because if an application you're using is constantly swapping, it'll slow that application down to a crawl. This was especially true back in the day when disk was slow (and expensive.. as was RAM).
People today are used to being awash in resources. RAM is fast, plentiful, and cheap. Disks are relatively fast and cheap.
You have to imagine what it was like to live in a resource-constrained environment where you actually had to care about how much memory and disk you used, and how you were using it. These decisions had severe, immediately apparent practical consequences.
What I meant to say was that "eight megabytes" sounds silly today. Who cares if an app uses 8MB today? The extrapolation is that it will be the same for Atom or other editors (I'd argue that it already is).
I created my first programs with a computer which had exactly 28815 bytes free when it booted up (out of a possible 64k). If you plugged in a floppy drive, the free memory dropped further.
Personally I don't mind RAM usage that much. I'm more concerned about excessive CPU usage resulting in unnecessary battery drain on my laptop or annoying fan-spinning.
Atom for me consistently sits at 0-0.1% (mostly 0) CPU usage when idling. You may want to investigate some of the extensions you're using if the numbers are different for you.
It was the lag (and, at the time, struggling to open "large" files) that stopped me giving it more than a cursory check a year or so ago. No idea if it's worth looking into again? It would be hard to beat Sublime's snappiness at almost any task.
For certain definitions of 'high-end', maybe. You've been able to configure workstation-class laptops with 64 gb for a few years now.
And most mid- high-end laptops will happily accept 32GB, again going back a few years.
(Broadwell removed the density limitation that made 16GB DIMMs a no-go; anything that takes DDR4 should support 16GB SODIMMs, excepting the very low end Atom, Celeron etc.)
You really aren't. Atom runs fine on a Macbook Air. There are definitely performance issues, but we're addressing them; it's just that our team is small and our initial goal at launch was to produce the most hackable text editor possible.
I had to drop Atom when it began launching white flashes when scrolling on a 2011 MBA. I saw that this was a recurring problem. Is that an issue that is being addressed?
"I feel likes it's a rehash of emacs vs vi debate from 30 years ago. Emacs needed many megabytes more than vi so people wouldn't use it."
The thing is, this was in fact a valid critique of Emacs back in the day, and it cost Emacs users. I know I stopped using it back then partially because of its resource use (and because it was a lot slower than vi and because of its finger-twisting keyboard shortcuts).
If Emacs was as light on resources as vi was back then, it would have more users today.
I've been using Atom since 2015 and it's only getting better on each new version.
I tried going back to Sublime which has objectively much better performance but that doesn't make the whole experience better. It's like sitting in a Formula 1 car with no cushion and no AC.
I really want to create a public-facing roadmap that's specific to this issue. Unfortunately, our resources are limited so we often don't focus enough on blogging/publicizing our planning... but in the meantime, here's something of a brain dump:
In terms of our actual data structures and algorithms, we're already starting to be in really good shape. We've dropped a number of components of our core TextBuffer to C++, ensured that most of our algorithms scale logarithmically with file size, cursor count, etc, and made use of native threading for important operations.
1. The one remaining structure that we need to drop to C++ is what we call the 'display index' - the thing that stores the locations of things like folds and soft wraps. Once we do that, opening large files (which is already reasonably fast) will be like butter.
2. Our find-and-replace is already pretty good - you can type with almost no lag even when we're storing the locations of millions of search results. But now that we can easily use background threads, there are some easy optimizations we could do there. The search results could update truly instantaneously; we would no longer need to wait until you pause typing in the search box.
3. We have in the works a major change to our syntax highlighting system using my incremental parsing library Tree-sitter. Once this lands, it should eliminate any perceived latency in syntax highlighting (as well as enable a host of great syntax-related improvements).
4. Atom still uses synchronous IO (which blocks the UI thread) in some places. This is because it was created before GitHub created Electron, so node APIs were not available from the outset. Many of these have been eliminated, but there are several Git-related code paths that we still have not updated. This probably kills the experience of editing on remote drives like SSHFS for some users. We need to revisit these code paths.
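As an illustration of the "scale logarithmically" point above, here's a toy line-offset index built on a Fenwick (binary indexed) tree. This is not Atom's actual display-index code, just a sketch of the general technique: by storing per-line lengths in the tree, both "where does line N start?" and "line N grew by K characters" cost O(log n) instead of O(n), which is what keeps edits cheap in huge files.

```javascript
// Fenwick tree over per-line lengths: logarithmic point updates and
// prefix sums, so line -> character-offset queries stay fast as the
// buffer grows.
class LineIndex {
  constructor(lineLengths) {
    this.n = lineLengths.length;
    this.tree = new Array(this.n + 1).fill(0);
    for (let i = 0; i < this.n; i++) this.add(i, lineLengths[i]);
  }
  // Adjust the length of `line` by `delta` characters.
  add(line, delta) {
    for (let i = line + 1; i <= this.n; i += i & -i) this.tree[i] += delta;
  }
  // Character offset at which `line` begins (sum of prior line lengths).
  offsetOfLine(line) {
    let sum = 0;
    for (let i = line; i > 0; i -= i & -i) sum += this.tree[i];
    return sum;
  }
}
```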
What are the current plans for CoffeeScript? I just looked at the code on GitHub; it says 85% JavaScript, 12% CoffeeScript. Is the plan to port the 12% to ES6 JavaScript? (And hopefully not TypeScript.)
Great work with the Atom editor. I successfully convinced my friends to move from VSC to Atom on macOS.
Thanks! Yeah all of the CoffeeScript in atom/atom should be gone in a few months probably. We use plain JS now.
It'll probably take a while before there's no more CoffeeScript in the entire Atom org. We're gradually converting the code to JS as we come upon it for other reasons.
It's like having Leonard Cohen transpiled to Justin Bieber. Now look at it from the Justin Bieber fans' point of view: why would they listen to Leonard Cohen in order to hear Justin Bieber? Even though research says Leonard Cohen makes better music! From the Leonard Cohen fans' point of view it doesn't matter, as they only hear Leonard Cohen, not Justin.
On the Atom.io website https://atom.io/ it reads "A hackable text editor for the 21st Century". Well, I can code in JavaScript 5 and 6. I love it when a project sticks to web standards, so I don't have to worry about dialects I don't care about.
> Typescript is great at catching bugs
The same goes for JavaScript. Google Closure Compiler, and most IDEs for that matter, "understand" JavaScript too and catch bugs. JavaScript is dynamically typed, not typeless.
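For instance, plain JSDoc annotations are enough for Closure Compiler (or a TypeScript-aware editor with `// @ts-check` enabled) to flag type errors without leaving JavaScript. A minimal illustration:

```javascript
// @ts-check

/**
 * Add two numbers. The JSDoc types below are checked statically by
 * Closure Compiler or a TypeScript-aware editor, with no build-time
 * language change.
 * @param {number} a
 * @param {number} b
 * @returns {number}
 */
function add(a, b) {
  return a + b;
}

// add('1', 2); // a checker flags this: string is not assignable to number
```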
I was surprised to see someone who cared what style of javascript a program is written in, especially if they aren't an active contributor. Maybe you are?
As did I, but I just couldn't get over how slow it was.
Startup time on Windows was rough, meaning I had to remove it as the default editor almost immediately for most filetypes. Worse, opening large files would grind Atom to a halt and usually lock the editor up.
Beyond the performance, I thought it was a great text editor. The problem is I have no use for a slow-to-start editor when I can just download VS Code and get the same features with considerably better performance.
Am I really the only one on this forum to think that live shared coding is a terrible idea?
The only pro for this is there are no conflicts, so merging is easier. However, live conflicts are still bound to happen if 2 people want to work on the same section of the code, right?
And the cost of this is the total inability to debug...
I've only done direct pairing twice. It can help come up with a game plan and get new people in the company up to speed, but it is terribly stressful.
I don't like it. I think small services, incremental changes, strong tests and solid code reviews all work much better. Especially the tests and code reviews. At my last shop the company had an internal Gitlab and we used the Gitlab CI. Reviews + CI really helped keep the coding standards pretty high and helped younger devs not make beginner Scala mistakes coming from a Java background.
IMO coding out of sync is one of the main useful functions of all modern DVCS systems. Coding together can be useful or fun, I'm sure, but I've noticed that other people don't enjoy my coding style very much in a group setting.
I like to just sit there and read through the code for a long time to figure out how it works, maybe throw in a print statement here or there, and reason out loud about it (or mumble about it myself). Most people like to step through interactively and debug together, and in a group setting this can become a real conflict in my experience.
Apparently the protocol can be implemented by any editor, so hopefully we'll end up in a glorious future where this works in lots of editors (including VS Code, which just announced its own version of this).
One common way to attribute multiple authors in a commit is to add a `Co-authored-by` commit message trailer. You can read more about git commit trailers at https://git-scm.com/docs/git-interpret-trailers.
As an example, if this was a commit message, the trailer adding my attribution would be like this:
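A hypothetical example (the subject line, name, and email below are invented placeholders; you'd substitute your collaborator's actual details). The trailer goes at the end of the commit message, separated from the body by a blank line:

```
Fix crash when joining a shared portal

Co-authored-by: Jane Doe <jane.doe@example.com>
```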
Not sure if there's been a way to integrate separate emacs processes, but it has always been possible to launch a new frame (emacs speak for window) displaying the same buffer, and given network transparency in X, have it displayed on any remote computer running X.
Back in college (late 90's) I did collaborative class project (in Z notation) with XEmacs and a partner on a remote X server. It actually worked as long as one of us wasn't in the minibuffer.
Am I the only one who finds it ironic that the development teams behind the top two open source editors introduced support for collaborative editing on the same day, apparently unaware that they were both working on the same thing, and with no evidence of collaboration between the two?
On-topic, I remember recently Uber and Lyft were working on a similar feature, and both knew about the other but neither knew that the other knew. I wish I could remember what the feature was.
I prefer this over Floobits in that I don't need a Floobits account, just a GitHub account, which pretty much all devs already have. It would be nice if there were a way to do this without needing the GitHub account, though.