Python has actually had async concurrency since about 2014, when asyncio landed in the standard library: https://docs.python.org/3/library/asyncio.html. Having used it a few times, it seems fairly sane, but tbf my experience with concurrency in other languages is fairly limited.
I find asyncio to be horrendous, both because of the silliness of its demands on how you build your code and because of its arbitrarily limited scope. Thread/ProcessPoolExecutor is, for me, much nicer to use and universally applicable... unless you need to accommodate Ctrl-C, and then it's ugly again. But fixing _that_ stupid problem would have been a better expenditure of effort than asyncio.
>I find asyncio to be horrendous, both because of the silliness of its demands on how you build your code and also because of its arbitrarily limited scope.
Do you compare it to threads and pools, or judge it on its merits as an async framework (with you having experience of those that you think are done better elsewhere, e.g. in Javascript, C#, etc)?
Because both things you mention, "demands on how you build your code" and "limited scope", are par for the course with async in most languages that aren't async-first.
> Because both things you mention "demands on how you build your code" and "limited scope" are par for the course with async in most languages
I don't see how "asyncio is annoying and can only be used for a fraction of scenarios everywhere else too, not just here" is anything other than reinforcement of what I said. OS threads and processes already exist, can already be applied universally for everything, and the pool executors can work with existing serial code without needing the underlying code to contort itself in very fundamental ways.
Python's version of asyncio being no worse than someone else's version of asyncio does not sound like a strong case for using Python's asyncio vs fixing the better-in-basically-every-way concurrent futures interface that already existed.
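To make that concrete, here's a rough sketch (the worker function and inputs are made up for illustration) of existing serial code being driven by a pool executor without any restructuring:

    # Ordinary serial function; nothing about it knows it will run concurrently.
    import time
    from concurrent.futures import ThreadPoolExecutor

    def crunch(n):
        time.sleep(0.1)      # stand-in for blocking I/O or real work
        return n * n

    # Same function, now fanned out across threads with one line changed.
    with ThreadPoolExecutor(max_workers=4) as pool:
        results = list(pool.map(crunch, range(10)))
    print(results)

Swapping ThreadPoolExecutor for ProcessPoolExecutor is the same one-line change, which is the "universally applicable" part of the argument.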
>I don't see how "asyncio is annoying and can only be used for a fraction of scenarios everywhere else too, not just here" is anything other than reinforcement of what I said.
Well, I didn't try to refute what you wrote (for one, it's clearly a personal, subjective opinion).
I asked what I've asked merely to clarify whether your issue is with Python's asyncio (e.g. Python got it wrong) or with the tradeoffs inherent in async io APIs in general (regardless of Python).
And it seems that it's the latter. I, for one, am fine with async APIs in JS, which have the same "problems" as the ones you've mentioned for Python's, so I don't share the sentiment.
> I've asked merely to clarify whether your issue is with Python's asyncio (e.g. Python got it wrong) or with the tradeoffs inherent in async io APIs in general (regardless of Python)
Both, but the latter part is contextual.
> I, for one, am fine with async APIs in JS
Correct me if you think I'm wrong, but JS in its native environment (the browser) never had access to the OS thread and process scheduler, so the concept of what could be done was limited from the start. If all you're allowed to have is a hammer, it's possible to make a fine hammer.
But
1. Python has never had that constraint
2. Python's asyncio in particular is a shitty hammer that only works on special asyncio-branded nails
and 3. Python already had a better futures interface for what asyncio provides and more before asyncio was added.
The combination of all three of those is just kinda galling in a way that it isn't for JS because the contextual landscape is different.
Which is neither here nor there. Python had another big constraint, the GIL, so threads there couldn't take you as far as async could. But even environments with threads (C#, Rust) also went big on async in the same style.
>2. Python's asyncio in particular is a shitty hammer that only works on special asyncio-branded nails
Well, that's also the case with C#, JS, and others with similar async style (aka "colored functions"). And that's not exactly a problem, as much as a design constraint.
What has the GIL to do with the thread model vs asyncio? asyncio is also single-threaded, so cooperative (and even preemptive) green threads would have been a fully backward-compatible option.
JS never had the option: as far as I understand, callback-based async was already the norm, so async functions were an improvement over what came before. C# wants to be a high-performance language, so using async to avoid allocating a full call stack per task is understandable. In Python the bottleneck would be elsewhere, and scaling would in no way be limited by the amount of stack space you can allocate, so adding async is really hard to justify.
>What has GIL to do with the thread model vs asyncio?
Obviously the fact that the GIL prevents efficient use of threads, so asyncio becomes the way to get more work out of a single CPU by taking advantage of the otherwise blocking time.
How would the GIL prevent the use of "green" threads? Don't confuse the programming model with the implementation. For example, as far as I understand, gevent threads are not affected by the GIL when running on the same OS thread.
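For what it's worth, here's roughly what that model looks like with gevent (a sketch, not a recommendation): cooperative greenlets multiplexed on one OS thread, so the GIL isn't the limiting factor the way it is for CPU-bound native threads.

    import gevent

    def worker(name, delay):
        gevent.sleep(delay)   # cooperative yield point; other greenlets run here
        print(name, "done")

    # Two greenlets interleave on a single OS thread.
    gevent.joinall([gevent.spawn(worker, "a", 0.1),
                    gevent.spawn(worker, "b", 0.1)])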
Try C# as a basis for comparison, then. It also has access to native threads and processes, but it adopted async - indeed, it's where both Python and JS got their async/await syntax from.
Asyncio violates every aspect of compositional orthogonality, just like decorators: you can't combine it with anything else without completely rewriting your code around its constrictions. It's also caused a huge number of pip installation problems around the AWS CLI and boto.
Having both Task and Future was a pretty strange move, and the lack of static typing certainly doesn't help: the moment you get a Task wrapping another Task wrapping the actual result, you really want some static analysis tool to tell you that you forgot an "await".
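For anyone who hasn't hit it, the failure mode being described looks roughly like this (a contrived sketch): forget one await and you silently get an un-awaited coroutine instead of the value, and only a type checker or a runtime warning will tell you.

    import asyncio

    async def fetch_value():
        await asyncio.sleep(0)      # stand-in for real async work
        return 42

    async def main():
        wrong = fetch_value()        # forgot await: a coroutine object, not 42
        right = await fetch_value()  # 42
        print(type(wrong), right)
        wrong.close()                # suppress the "never awaited" warning

    asyncio.run(main())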
Concurrency in Python is a weird topic, since multiprocessing is the only "real" concurrency. Threading is "implicit" context switching, all in the same process/thread; asyncio is "explicit" context switching.
On top of that, you also have the complication of the GIL. If threads don't release the GIL, then you can't effectively switch contexts.
> Concurrency in Python is a weird topic, since multiprocessing is the only "real" concurrency.
You are confusing concurrency and parallelism.
> Threading is "implicit" context switching all in the same process/thread
No, threading is separate native threads, but with a lock that prevents execution of Python code in separate threads simultaneously (native code in separate threads, with at most one running Python, can still work).
Not in CPython it isn't. Threading in CPython doesn't allow 2 threads to run concurrently (because of GIL). As GP correctly stated, you need multiprocessing (in CPython) for concurrency.
They're emphasizing a precise distinction between "concurrent" (the way it's structured) and "parallel" (the way it runs).
Concurrent programs have multiple right answers for "Which line of computation can make progress?" Sequential execution picks one step from one of them and runs it, then another, and so on, until everything is done. Whichever step is chosen from whichever computation, it's one step per moment in time; concurrency is only the ability to choose. Parallel execution of concurrent code picks steps from two or more computations and runs them at once.
Because of the GIL, Python on CPython has concurrency but limited parallelism.
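A quick way to see the difference on CPython (a rough sketch, timings will vary): two CPU-bound threads interleave, so the program is concurrent, but the GIL keeps wall-clock time about the same as running them one after the other, so it isn't parallel.

    import threading, time

    def spin(n=5_000_000):
        while n:
            n -= 1

    start = time.perf_counter()
    threads = [threading.Thread(target=spin) for _ in range(2)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    # On CPython this takes roughly the time of two serial spin() calls, not one.
    print("two threads:", time.perf_counter() - start)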
> Threading in CPython doesn't allow 2 threads to run concurrently (because of GIL)
It does allow threads to execute concurrently. It doesn't allow them to execute in parallel if they are all running Python code (if at least one is running native code and has released the GIL, then those plus the one that has not can run in parallel).
I have used asyncio in anger quite a bit, and have to say that it seems elegant at first and works very well for some use cases.
But when you try to do things that aren't a map-reduce or Pool.map() pattern, it suddenly becomes pretty warty. E.g. scheduling work out to a process-pool executor is ugly under the hood and IMO ugly syntactically as well.
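For reference, the pattern being criticized looks roughly like this (cpu_heavy is a made-up stand-in): you end up juggling the loop, an executor, and wrapped futures just to get CPU work out of the event loop.

    import asyncio
    from concurrent.futures import ProcessPoolExecutor

    def cpu_heavy(n):
        return sum(i * i for i in range(n))

    async def main():
        loop = asyncio.get_running_loop()
        with ProcessPoolExecutor() as pool:
            results = await asyncio.gather(
                loop.run_in_executor(pool, cpu_heavy, 100_000),
                loop.run_in_executor(pool, cpu_heavy, 200_000),
            )
        print(results)

    if __name__ == "__main__":   # guard needed because worker processes are spawned
        asyncio.run(main())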
I love asyncio! It's a very well put together library. It provides great interfaces to manage event loops, io, and some basic networking. It gives you a lot of freedom to design asynchronous systems as you see fit.
However, batteries are not included. For example, it provides no HTTP client/server. It doesn't interop with any synchronous IO tools in the standard library either, making asyncio a very insular environment.
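To illustrate the "batteries not included" point: asyncio hands you raw streams, so even a trivial HTTP request is hand-rolled (a sketch, error handling omitted).

    import asyncio

    async def raw_get(host):
        reader, writer = await asyncio.open_connection(host, 80)
        writer.write(f"GET / HTTP/1.0\r\nHost: {host}\r\n\r\n".encode())
        await writer.drain()
        data = await reader.read()      # read until the server closes the connection
        writer.close()
        await writer.wait_closed()
        return data.decode(errors="replace")

    print(asyncio.run(raw_get("example.com"))[:200])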
For the majority of problems, Go or Node.js may be better options. They have much more mature environments for managing asynchrony.
Until you need to do async FFI. Callbacks and the async/await syntactic sugar on top of them compose nicely across language boundaries. But green threads are VM-specific.
Iraq peak US troops: range from 168,000 - 192,000 (2007)
Vietnam peak US troops: 543,000 (1969)
Korea peak US troops: 320,000, unclear what year
edit: obviously, Russia's involvement in Ukraine over the last few days would be by far the biggest operation in Europe since WW2. Just not globally (by a long shot).
Thanks, useful to compare. Putin has a similar number of troops as peak Iraq II, then, although with shorter supply lines. Given how difficult a time the US had occupying both Iraq and Afghanistan, and the damage it did to the country, this could well end Russia, at least for another 30 years.
I was too young to really get the scale of the first gulf war. Just transporting 700,000 troops seems astounding, let alone keeping them equipped in enemy territory.
The sim2real approach can work pretty well as long as you are very in tune with where your simulator falls short relative to the real world and take steps to circumvent those shortcomings.
We were able to train the robot to climb stairs completely by feel/proprioception without any sort of vision. We trained it in simulation, and then transferred it to the real world without issue.
Actually, I think it is conventional neural networks which can only approximate finite state machines. RNNs are (in theory, not so much in practice) Turing complete.
Turing completeness requires access to unlimited read/write memory. RNNs only have a fixed dimensional state.
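Concretely (a toy sketch, not an argument either way): a vanilla RNN step keeps all of its memory in one fixed-size vector no matter how long the input is, so any Turing-completeness claim has to pack the unbounded tape into the precision of that vector.

    import numpy as np

    def rnn_step(h, x, W_h, W_x, b):
        return np.tanh(W_h @ h + W_x @ x + b)   # new state has the same fixed size as h

    hidden, inp = 8, 4
    rng = np.random.default_rng(0)
    W_h = rng.normal(size=(hidden, hidden))
    W_x = rng.normal(size=(hidden, inp))
    b = np.zeros(hidden)

    h = np.zeros(hidden)
    for x in rng.normal(size=(100, inp)):        # 100 inputs, but h stays 8-dimensional
        h = rnn_step(h, x, W_h, W_x, b)
    print(h.shape)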
I guess in theory that state is continuous, but it has to be a pretty optimistic model that assumes we can handle unbounded data like that.
Intuitively I think as long as your activation function is sufficiently expressive (e.g. not a step function or something), you should be good to go in theory, since that's what you're feeding back. Might take a while.
The word I was looking for is "robust". Any realistic model must be able to accept tiny perturbations (like Gaussian noise) of the vectors, since that's how floating-point arithmetic works. An RNN can't be robustly Turing complete.