
I wrote an addendum https://www.codeotaku.com/journal/2018-11/fibers-are-the-rig...

With C++/assembly, you can context switch about 100 million times per second per CPU core in a tight loop.




The one additional comment I have is that this addendum doesn't involve a reactor/scheduler in the benchmark, so it excludes the process of selecting the coroutine to switch into, which is a significant task. The Go benchmark I posted above is running within a scheduler.

But, I appreciate the addendum.


That's a good point, and yes, the scheduler will have an impact, probably adding several orders of magnitude of overhead in comparison.

That being said, a good scheduler is basically just a loop, like:

https://github.com/kurocha/async/blob/bee8e8b95d23c6c0cfb319...

So, once it's decided what work to do, it's just a matter of resuming all the fibers in order.

Additionally, since in some cases fibers know what work comes next, the overhead can be very small. You sometimes don't need to yield back to the scheduler, but can resume another task directly.



