Hi, I'm the author of SwampDragon (I was rather surprised to see this on Hacker News today!).
I'm celebrating my daughter's first birthday today, so I have to keep it a bit short.
About the serializers:
You do need separate serializers from DRF (they are not the same package after all).
As someone pointed out on GitHub as well, this depends on Redis 2.8 since it uses Redis pub/sub; I will add that to the documentation.
If you have any questions feel free to email me at: hagstedt at gmail.com.
I will try to compile all questions into an FAQ and put it on swampdragon.net.
Why does this need a brand new namespace, and a whole new community, instead of it just being a proposed pull-request to Django mainline?
Like, I understand the need to communicate with new sites and docs and a 'project moniker' and so on, but wouldn't realtime just be another commit to the main Django base? If not, why?
Realtime is a significant change, especially since Django wasn't made to handle long-lived client connections. SwampDragon uses Tornado to overcome this limitation. Plus it comes with a JavaScript component. It nearly turns Django into Meteor! Definitely more than "just another commit". :)
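For context, the kind of long-lived connection Django's request/response cycle isn't built to hold open looks roughly like this in Tornado. This is a minimal, illustrative handler (the URL, port, and handler name are made up, and it's not SwampDragon's actual code):

    import tornado.ioloop
    import tornado.web
    import tornado.websocket

    class EchoHandler(tornado.websocket.WebSocketHandler):
        # Each connection stays open until the client disconnects --
        # exactly the long-lived state a WSGI request/response cycle can't hold.
        def open(self):
            print("client connected")

        def on_message(self, message):
            # The server can push data back over the same open socket at any time.
            self.write_message(u"echo: " + message)

        def on_close(self):
            print("client disconnected")

    if __name__ == "__main__":
        app = tornado.web.Application([(r"/ws", EchoHandler)])
        app.listen(8888)
        tornado.ioloop.IOLoop.instance().start()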
I wonder how the serializers will coexist with the Django REST Framework serializers in an existing app. How different are they? Will I have to have dozens of duplicate definitions, one for DRF and one for SwampDragon?
Looks like it has a dependency on Redis and its pub-sub?
My speculation:
I didn't dig much into the code, but if SwampDragon relies on Redis pub/sub to notice when the result of a query might have changed, and then refetches the query and diffs the result (a common strategy I've seen among different "real-time" web frameworks), then depending on the complexity of the queries and the rate of changes this can put a lot of load on your database. And depending on the size of the fetched result, you can spend a lot of CPU cycles diffing.
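To make that strategy concrete, here is a purely hypothetical sketch using redis-py (not taken from SwampDragon's code): an invalidation message triggers a refetch, and the fresh result set is diffed against the previously cached one before anything is pushed out.

    import json
    import redis

    r = redis.StrictRedis()
    last_result = {}  # previous query result, keyed by primary key

    def diff(rows):
        """Return only the rows that changed since the last fetch."""
        global last_result
        current = {row['id']: row for row in rows}
        changed = [row for pk, row in current.items() if last_result.get(pk) != row]
        last_result = current
        return changed

    def listen(fetch, channel='invalidations'):
        # `fetch` is any callable returning the current rows, e.g.
        # lambda: list(Note.objects.values('id', 'title')).
        # Every invalidation message costs a full refetch plus a diff,
        # which is where the database and CPU load comes from.
        pubsub = r.pubsub()
        pubsub.subscribe(channel)
        for message in pubsub.listen():
            if message['type'] != 'message':
                continue
            for row in diff(fetch()):
                r.publish('updates', json.dumps(row, default=str))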
Edit: SwampDragon runs in a Tornado IO loop for the real-time functionality and uses Redis pub/sub for the actual pub/sub messaging. It's more like Django bolted to Tornado than pub/sub added to Django.
* Objects are diffed in the Python process before the new state is saved. They are diffed against what was loaded from the DB at object initialization time.
* Then the whole object is stored and the changes are broadcast. Since the whole object is stored, you can get race conditions if multiple server processes modify it concurrently.
* Messages are sent from Python to Redis pub/sub channels and received by Python again. If the Python process has (WebSocket) subscribers for the channel, the messages are delivered. The indirection over Redis allows having multiple Python processes (see the sketch below).
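A rough sketch of that flow with hypothetical model and channel names (not SwampDragon's actual implementation): the Django process diffs against the snapshot taken at __init__ and publishes on save, and the Tornado-side process subscribed to the channel forwards the payload to its local WebSocket clients.

    import json
    import redis
    from django.db import models

    r = redis.StrictRedis()

    class Note(models.Model):
        title = models.CharField(max_length=100)

        def __init__(self, *args, **kwargs):
            super(Note, self).__init__(*args, **kwargs)
            # Snapshot of the state as loaded from the DB.
            self._loaded = {'title': self.title}

        def save(self, *args, **kwargs):
            # Diff in the Python process against the snapshot taken at __init__.
            changes = {name: getattr(self, name)
                       for name, old in self._loaded.items()
                       if getattr(self, name) != old}
            super(Note, self).save(*args, **kwargs)  # the whole object is written
            if changes:
                # Broadcast only the changed fields over a Redis pub/sub channel.
                r.publish('note-channel',
                          json.dumps({'id': self.pk, 'changed': changes}, default=str))

    # In the Tornado process: forward channel messages to local WebSocket subscribers.
    def forward(subscribers):
        pubsub = r.pubsub()
        pubsub.subscribe('note-channel')
        for message in pubsub.listen():
            if message['type'] == 'message':
                for handler in subscribers:  # open WebSocketHandler instances
                    handler.write_message(message['data'])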
It's not quite accurate to say it's like Django bolted to Tornado.
Tornado deals with the websockets, but everything else is Django.
You could easily add this to an existing Django project.
It's also using Django settings, etc.
You are right about the potential race conditions with the self-publishing model, but you don't have to use it; it works just as well using the routers + serializers (a rough sketch follows below).
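Roughly, the routers + serializers pairing looks like the following. The module paths, class names, and Meta options here are a sketch along the lines of the docs and may not be exact, so check the documentation for the real details:

    # serializers.py
    from swampdragon.serializers.model_serializer import ModelSerializer

    class NoteSerializer(ModelSerializer):
        class Meta:
            model = 'myapp.Note'                 # hypothetical app and model
            publish_fields = ('title', 'body')   # fields pushed to subscribers
            update_fields = ('title', 'body')    # fields writable from the client

    # routers.py
    from swampdragon import route_handler
    from swampdragon.route_handler import ModelRouter
    from myapp.models import Note
    from myapp.serializers import NoteSerializer

    class NoteRouter(ModelRouter):
        route_name = 'notes'
        serializer_class = NoteSerializer
        model = Note

        def get_object(self, **kwargs):
            return self.model.objects.get(pk=kwargs['id'])

        def get_query_set(self, **kwargs):
            return self.model.objects.all()

    # Register the router so clients can subscribe to the 'notes' route.
    route_handler.register(NoteRouter)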
I don't know much about Django and WebSockets or how they scale; I haven't tested them. I have used Wheezy. Wheezy is loosely coupled, similar to Flask. I have used it with WebSockets and it scales amazingly well. https://bitbucket.org/akorn/wheezy.web/