Huh? Care to elaborate on this? As far as I know, GTK (and Qt, IIRC) use a single-threaded event loop. That's not at all the same thing (although it can be used for similar things).
Qt's and Gtk's single-threaded event loops are akin to what Go calls its scheduler. Go's scheduler is also (partially) single threaded, but event handlers run in other threads.
Having event handlers run in separate threads is very much supported in both Qt and Gtk (they can't be UI event handlers in quite a few cases, but network events and file reading in separate threads scheduled by the central event loop like in Go is not a problem).
I will say that it's much better organised, with far fewer caveats, in Go.
And a point of personal frustration: both Qt and Gtk support promises through the event loop; Go does not. I find that a much more natural way to work with threads.
Go's scheduler is a single threaded event loop, like every other scheduler on the planet. It runs, sequentially, in different threads which makes the situation confusing, but it's still single threaded.
It's also cooperative. An infinite loop will effectively kill it (a single infinite loop would kill it before Go 1.2, I believe; now you need enough of them). More importantly, there's a number of simultaneous syscalls that will kill a Go program.
I like about go that it's moving the OS into the application. The thing is, Go's OS is not a very good one. It doesn't have the basic isolation that OSes provide. I hope it will improve.
This is not true either. In fact it doesn't make sense. Schedulers are not single threaded event loops. The scheduler (any scheduler) is entered in various scenarios. Sometimes voluntarily, sometimes not. Sometimes the scheduler code can run concurrently, sometimes not. Sometimes the scheduler code can run in parallel, sometimes not.
The Go scheduler is both concurrent and parallel.
> It runs, sequentially, in different threads which makes the situation confusing
I don't know what this statement means. The Go scheduler certainly runs on different threads. So what.
> It's also cooperative.
Actually it's not purely cooperative, it does voluntary preemption, very similar to voluntary preemption in the Linux kernel. The check happens in every function prologue.
> More importantly, there's a number of simultaneous syscalls that will kill a go program.
There's a self-imposed, user-configurable limit that defaults to 10000 threads for running system calls. The limit has nothing to do with the Go scheduler, and it can be set arbitrarily high with no penalty.
> It doesn't have the basic isolation that OSes provide.
The most basic isolation provided by operating systems is virtual memory. Go is a shared-memory execution environment, so this doesn't apply. What other "basic isolation" is provided by operating systems that's missing from Go?
> I don't know what this statement means. The Go scheduler certainly runs on different threads. So what.
It means that some of the data structures the scheduler examines on every run are shared data, with locking. That makes it effectively single threaded, even if it technically runs on different CPUs (at different times). Put another way it means that it'll never run faster than a single-threaded scheduler would.
> Actually it's not purely cooperative, it does voluntary preemption, very similar to voluntary preemption in the Linux kernel.
You mean preemption inside the Linux kernel, in some kernel-space threads? Because that sounds very different from preemption of applications.
> The check happens in every function prolog.
So it's cooperative. The standard test is simple: does "for {}" crash ("block", if you prefer) some part of the system? On the Linux scheduler the answer is no. In Erlang the answer is no. On the Go scheduler, the answer is yes.
On the Linux scheduler with proper ulimits it's bloody hard to crash the system; forkbombs, memory bombs, and the like won't do it. I hope we'll get a language where you can do that too (the JVM comes quite close to this ideal, and some JVMs actually achieve it).
> It means that some of the data structures the scheduler examines on every run are shared data, with locking.
This is true for every scheduler, not only for the Go scheduler.
> That makes it effectively single threaded
It would limit parallelism to one, if there was a single lock. This used to be the case, but now the locking is more finely grained. But this only matters if there's lock-contention anyway, which is not the case for current Go programs.
> single-threaded scheduler
Again, no such thing as a single-threaded scheduler. Even when talking about systems with a big kernel lock, or with a global scheduler lock. The scheduler is not "single-threaded" or any other term like that because the scheduler is not a thread, it's not an independent thing, it only runs in the context of many other things.
> Put another way it means that it'll never run faster than a single-threaded scheduler would.
As mentioned already, this is not strictly true for Go, but this matters more for thread schedulers in kernels, less so for the Go scheduler, mostly because the number of threads executing Go code is very restricted: a maximum of 32 threads at the moment. It's very likely that this situation might change. For example, the SPARC64 port that I am doing supports 512-way machines, so I'd need to increase the GOMAXPROCS limit. Then maybe we'd have more lock contention (I doubt it).
It's true that the scheduler will probably not scale this well, and will need improvement, but it's unlikely it will be because of lock contention.
> You mean preemption inside the linux kernel, in some kernel-space threads?
Yes, voluntary preemption inside the Linux kernel, not preemption of user-space threads. The Linux kernel can run in a mode (common and useful on servers) where it yields only at well-defined points. The name is a misnomer, this is not really preemption, but it's not cooperative scheduling either. It's something in the middle and it's a very useful mode of operation, nothing wrong with it.
> So it's cooperative.
It has the good parts of both cooperative and preemptive scheduling, but yes, it's certainly cooperative.
> The standard that is normally used is simple : does "for {}" crash("block" if you prefer) some part of the system?
Not with GOMAXPROCS > 1, which is now the default on multi-way machines (all machines).
> On the Go scheduler, the answer is yes.
Only sometimes. This is fixable while still preserving voluntary preemption, since the voluntary preemption-check is so cheap that you can do it on backward branches if you really need it. This wasn't done since this wasn't a big problem in practice, even with the old GOMAXPROCS=1 default, but there's room for improvement.
> On the linux scheduler with proper ulimits it's bloody hard to crash the system, for instance, forkbombs, memory bombs, ... won't do it. I hope we'll get a language where you can do that too.
I don't understand the analogy. It is not clear what "crash" means here, and it is not clear how it would apply to a runtime environment. All that stuff, forkbombs, etc, means that you can configure the system so arbitrary code can't affect the system in those particular ways.
But for a language runtime you don't have arbitrary code usually, you control all the code. So I don't understand how any of these would apply.
Coming back to the scheduler. There's always room for improvement. Until relatively recently, the Go scheduler barely scaled past two threads! (although not because of lock contention). Now it scales really well to (at least) 32 threads. There are still improvements to be made, and I am sure they will be made. I was just addressing the "single-thread" issue.
I see we mostly agree, but one thing here is a glaring error:
> It has the good parts of both cooperative and preemptive scheduling, but yes, it's certainly cooperative.
For me, the best part of cooperative scheduling is that you can work entirely without locking shared data structures, because you get "transactions" for free. This means it's rather difficult to get data races, deadlocks, etc. Go's scheduler certainly does not give you that; it trades it away for spreading work over different CPUs.
So it has problems:
1) the necessity of locking, using IPC mechanisms, etc. (like preemptive schedulers; and let's face facts here: channels aren't enough in real-world apps)
2) everything gets blocked by large calculations (like cooperative schedulers)
3) more generally, it's easy to crash via a misbehaving thread due to unrestricted access to shared resources (not just CPU) (like cooperative schedulers)
And one good part: 4) an integrated event loop that scales (like cooperative schedulers)
If you want to see a programming language with a "scheduler" that doesn't have the bad parts of cooperative schedulers, check out Erlang. If you attempt to crash Erlang with infinite loops, bad memory allocations, etc. (on a properly configured system), it just won't work: the offending threads/"goroutines" crash, leaving the rest of your program running fine. The offending threads will restart if you configure them to do so (which is really easy).
The same can be achieved, with much more work, on the JVM, or, also with much more work, with Python's "multiprocessing" library, part of the standard library.
> > On the linux scheduler with proper ulimits it's bloody hard to crash the system, for instance, forkbombs, memory bombs, ... won't do it. I hope we'll get a language where you can do that too.
> I don't understand the analogy. It is not clear what "crash" means here, and it is not clear how it would apply to a runtime environment. All that stuff, forkbombs, etc, means that you can configure the system so arbitrary code can't affect the system in those particular ways.
Crash means that the system/"program" doesn't respond (in a useful manner) anymore.