No questions from me, just some appreciation and thanks for the release. While it is clearly not founded solely on the pure and selfless love of AWS for Rust, it is nevertheless very positive for the language to have good stable ways to work with major platforms. Writing things on AWS in Rust is now a significantly easier sell.
Thanks for all the work on this, looking forward to trying a few new pieces out!
Thanks for showing up and answering questions. Congratulations on the release.
What kind of plans for support of Rust's evolving async ecosystem?
Any particular reason why the public roadmap does not show the columns similar to "Researching", "We're Working On It" like the other similar public AWS Roadmaps? See example for Containers: https://github.com/aws/containers-roadmap/projects/1
Would be nice to have fully working examples on GitHub for the most common scenarios across most AWS services. This is something that AWS SDKs have historically been inconsistent on. Just a request, not really a question :-)
> What kind of plans for support of Rust's evolving async ecosystem?
We were hoping async-function-in-trait would land before GA; however, we have a plan to add support in a backwards-compatible way once it's released.
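To illustrate why that can be backwards compatible: a sketch below, with illustrative names (not the SDK's real API). Once async-fn-in-trait is available (it stabilized in Rust 1.75, shortly after this thread), a trait can declare an async method directly, and call sites like `client.fetch().await` look the same as they did with the older boxed-future workaround, so adopting it later doesn't break callers.

```rust
use std::future::Future;
use std::pin::pin;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// Illustrative trait, not the SDK's real API: an async method declared
// directly in a trait, the shape async-fn-in-trait enables.
trait Fetch {
    async fn fetch(&self) -> u32;
}

struct Client;

impl Fetch for Client {
    async fn fetch(&self) -> u32 {
        42
    }
}

// Tiny poll loop so this demo runs without pulling in an async runtime;
// it just polls the future on the current thread until it is ready.
fn drive<F: Future>(fut: F) -> F::Output {
    const VTABLE: RawWakerVTable = RawWakerVTable::new(|_| RAW, |_| {}, |_| {}, |_| {});
    const RAW: RawWaker = RawWaker::new(std::ptr::null(), &VTABLE);
    let waker = unsafe { Waker::from_raw(RAW) };
    let mut cx = Context::from_waker(&waker);
    let mut fut = pin!(fut);
    loop {
        if let Poll::Ready(out) = fut.as_mut().poll(&mut cx) {
            return out;
        }
    }
}

fn main() {
    // The call site is `client.fetch()` + await/drive regardless of
    // whether the trait uses a boxed future or a native async fn.
    println!("{}", drive(Client.fetch()));
}
```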
> Any particular reason why the public roadmap does not show the columns similar to "Researching", "We're Working On It" like the other similar public AWS Roadmaps?
Our roadmap has unfortunately been in a state of disrepair for some time. We're hoping to get it cleaned up and accurate post GA.
> Would be nice to have fully working examples on Github, for most common scenarios across most AWS services. This is something that historically AWS SDKs have been inconsistent on. Just a request not really a question :-)
There are lots of examples here [1], some simple, some quite complex. If there's something you have in mind, please file an issue! Having great examples is one of our priorities.
The blog post mentions support for 300+ services. I have a couple of questions:
1. It would be interesting to see a comparison between the Rust service coverage and other language SDKs that have been around for a while such as Java. Is there such a place to see this comparison?
2. Will the Rust SDK stay up to date with the latest services as they're announced?
I'm very excited to see this announcement. It's been a long time coming.
The Rust SDK is built on top of the smithy-rs code generator. On the service-coverage front, you'll find nearly 100% parity; there are some legacy APIs that aren't supported. It also doesn't yet have many of the "high level libraries" (e.g. the S3 transfer manager) that you can find for other languages.
New services will come out the same day as in all other SDKs; every SDK uses the same automated system to deploy new releases.
The only exception is services which require extensive custom code. We're still catching up on those for the Rust SDK.
Disclaimer: I am not working on the SDK nor for Amazon.
From what I've read of the code of some AWS SDKs, the SDKs (in most languages) are generated from interface files and are thus always in sync and cover the same APIs in every language.
Yes, it'd be nice to have a CDK based on Rust with ergonomic libs for downstream langs derived from that via FFI (e.g. Python/PyO3) instead of the JSII abomination they ended up with
Are there plans to improve the compilation times? AWS SDK crates are some of the slowest dependencies in our build, which feels odd for what are basically wrappers around HTTP clients.
It's on our radar—one of the biggest issues is that some of the services like EC2 are absolutely massive. We're investigating ways for customers to only compile the operations they need, etc.
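In the meantime, one practical lever is dependency granularity: the SDK ships one crate per service, so you only pay the compile cost of the services you actually call. A sketch of a minimal manifest (crate names are real, version numbers are illustrative):

```toml
[dependencies]
# Shared config/credentials loading.
aws-config = "1"
# One crate per service; depend only on the ones you call.
aws-sdk-s3 = "1"
```

`cargo build --timings` is also useful here for confirming which crates dominate your build.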
What are the differences in the design principles of the AWS Rust SDK compared to AWS SDKs of other languages? In what ways is it special to work best with the Rust ecosystem?
Probably the biggest one is "batteries included but replaceable." The Rust ecosystem is still maturing, so we did a lot of work to make reasonable default choices but still allow customers to make different ones.
Although this is true in theory, in practice you need to be very careful when writing code if you want to target WASM. One example: `SystemTime::now` will panic on some WASM platforms!
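A common defensive pattern is to gate clock access behind a `cfg` so the wasm build gets a substitute clock. A minimal sketch (function names are made up for illustration; the wasm branch is just a placeholder where a real app would wire in a host-provided clock such as JS `Date.now()`):

```rust
use std::time::{SystemTime, UNIX_EPOCH};

// On wasm32-unknown-unknown, std has no clock and `SystemTime::now()`
// panics at runtime, so wasm builds must take a different path.
#[cfg(not(target_arch = "wasm32"))]
fn now_unix_secs() -> u64 {
    SystemTime::now()
        .duration_since(UNIX_EPOCH)
        .map(|d| d.as_secs())
        .unwrap_or(0)
}

#[cfg(target_arch = "wasm32")]
fn now_unix_secs() -> u64 {
    // Placeholder: wire this to a host-provided clock instead.
    0
}

fn main() {
    println!("unix seconds: {}", now_unix_secs());
}
```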
I attended a re:Invent session yesterday on using Rust as a Lambda runtime. The potential performance improvements, especially with limited memory, were quite compelling. I’m looking forward to trying this SDK out with Rust Lambdas.
At my company we’ve written all of our Lambda functions in Rust. It’s a perfect fit with the constraints in Lambda. We did customize the runtime somewhat for our needs but that wasn’t all that complicated.
I realize this is a "how long is a piece of string" question, but I'm wondering what cost benefits you might realistically see from moving lambdas from Python to a faster language like Rust? You pay (partly) for execution time so I guess you should see some savings, but I'm wondering how that works out in practice. Worth it?
Here's a fun answer to that question: Rubygems saved infinity money. That is, they got resource usage down to the point where they could move to the free tier.
This paper is not about lambdas and their typical operations specifically, but it shows that across a variety of tasks, as of 2017, Rust is more environmentally friendly than Python.
I am gonna be honest: I hope this paper never gets cited by anyone, ever. There are a number of very weird issues with it, and I don't think what it actually shows is demonstrative of reality, even if it happens to show Rust in a good light.
The title conflates languages and their implementations. Different implementations prioritize different things. They occasionally do test different implementations, as in the main Ruby distribution vs JRuby, but it is still annoying.
The second, and I think largest issue, is that they chose the Language Benchmarks Game as the set of sample programs to test. I do not believe that the kinds of programs in the Language Benchmarks Game are representative of the broader set of software written in most languages. They tend towards math-y, puzzle-style programs, and not CLIs, web applications, GUIs, or anything else.
A very specific issue I have is that Typescript and JavaScript are very different in their analysis, and that's very confusing to me, given that all JavaScript is valid TypeScript, and you would execute it in the same way. This may be an artifact of issue #2, which is that the benchmarks game is only as good as the people who wrote the programs, and it's quite possible that the folks who submitted the TypeScript code didn't do as much perf work as the JavaScript code, but it is still a confusing result that's not explained anywhere in the paper.
A final one (and this is the one I remember least well, so I may be wrong here) is that it is not reproducible. They do not mention the date they retrieved the programs from the Benchmarks Game, let alone the source code of the programs, nor did they release the scripts used to collect the data, though they describe them. This means these discrepancies are hard to actually investigate, and it makes the results lower quality than if we were able to independently verify them, let alone update them based on what has changed since 2017, which is an increasingly long time ago.
In short, I do not think this paper is literally useless, but I think that it does not actually demonstrate its central claim very well, and that it is difficult to evaluate the actual quality of its results, making it a far weaker result than the title would suggest.
I am not denigrating the benchmark game in this comment, I am saying that the paper does not convincingly make the argument for its thesis. I know you are proud of your work. You yourself encourage people to understand exactly what the benchmark game is and is not. Suggesting that it is representative of all programs is something that you yourself literally have in the FAQ: https://benchmarksgame-team.pages.debian.net/benchmarksgame/...
> We are profoundly uninterested in claims that these measurements, of a few tiny programs, somehow define the relative performance of programming languages aka "Which programming language is fastest."
Just because I do not think that using the Benchmarks Game is a good way to demonstrate their thesis does not mean that I think the Benchmarks Game is bad.
Additionally,
> The authors provided a repo, including test program source code, that is still available 5 years later.
That link gives "We are sorry, but you do not have access to this service".
Trawling through the wayback machine, I did find that the older pages link to https://github.com/greensoftwarelab/Energy-Languages, which does seem to provide the contents of the specific programs used and the benchmarking software. Excellent.
Agreed. My response was to your comments about the "Energy Efficiency across Programming Languages" conference paper.
You found the JS/TS "very confusing": I suggested a simple cause.
~
> Suggesting that it is representative of all programs is something that you yourself literally …
Huh?
How have you read "profoundly uninterested" to mean "Suggesting that it is representative …" ?
~
> That link gives …
I really did just click on the link odyssey7 provided (in Microsoft Edge), then the "footnote 1" link in paperSLE.pdf, then the "[1] Measuring Framework & Benchmarks" link, without any difficulties.
Instrument your Python code and gather metrics; maybe use a profiler. If it is heavily CPU limited and spends all its time in Python interpreter calls, it might benefit from moving to a more efficient language. If it's mostly waiting on IO (e.g. remote services), the difference might be negligible.
I'm a Rust beginner, so please excuse any naivete herein: Does this SDK _necessarily_ require an async runtime or is it possible to use it in a traditional sync application using whatever extra facilities (e.g. block_on) which would be required to "normalize" it?
You can use Tokio’s block_on to sync-ify. You need to instantiate a runtime, but you don’t need to run your whole application in it, just the Future.
edit: Tokio can be beefy. You might look at some of the smaller single-threaded runtimes to execute your future in the main application thread if you’re only concerned about serial execution.
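Conceptually, block_on just polls the future on the calling thread until it finishes. Here's a toy std-only sketch of that shape; a real program should use `tokio::runtime::Runtime::block_on`, since the SDK's futures rely on Tokio's timers and I/O reactor, and this busy-loop version only works for futures that need no reactor. The `fetch_value` function is a made-up stand-in for an SDK call.

```rust
use std::future::Future;
use std::pin::pin;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// Toy illustration of what an executor's `block_on` does: poll the
// future on the current thread until it is ready, using a no-op waker.
fn block_on<F: Future>(fut: F) -> F::Output {
    const VTABLE: RawWakerVTable = RawWakerVTable::new(|_| RAW, |_| {}, |_| {}, |_| {});
    const RAW: RawWaker = RawWaker::new(std::ptr::null(), &VTABLE);
    let waker = unsafe { Waker::from_raw(RAW) };
    let mut cx = Context::from_waker(&waker);
    let mut fut = pin!(fut);
    loop {
        if let Poll::Ready(out) = fut.as_mut().poll(&mut cx) {
            return out;
        }
    }
}

// Stand-in for an async SDK call such as `client.list_buckets().send()`.
async fn fetch_value() -> u32 {
    40 + 2
}

fn main() {
    // Synchronous code driving one future to completion.
    println!("{}", block_on(fetch_value()));
}
```

With Tokio, the equivalent is `tokio::runtime::Builder::new_current_thread().enable_all().build()?.block_on(fut)`; the single-threaded builder keeps the runtime light if you only need serial execution.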
Thanks. To further clarify, the SDK can be used from within a Tokio runtime or using Tokio's facilities in a synchronous runtime. Can other async runtimes be used? (The linked post seems to imply that they can.) It looks like Tokio gets installed as a dependency and I see the following when trying to use the futures package:
> thread 'main' panicked at /home/dev/.cargo/registry/src/index.crates.io-6f17d22bba15001f/aws-smithy-async-1.0.2/src/rt/sleep.rs:128:20:
there is no reactor running, must be called from the context of a Tokio 1.x runtime
If you use other async runtimes, you need to "wire them up", in this case by providing a "sleep" implementation. I'd strongly recommend using Tokio, especially if you're a beginner. I think the "beefy" statements are not necessarily accurate: you can use it as a single-threaded runtime if you want, and Tokio is not going to have a significant impact on your compile times or binary size (given you're already using the SDK!).
One thing I love about spawn_blocking is that it has a dedicated thread pool with a ton of threads. For async code you want around as many threads as cores, so each thread can run at full speed and the scheduler handles the switching; but for blocking tasks, most of the time is spent sleeping, so the core can just switch between them all and take care of any that are done sleeping. Just don't use it for CPU-intensive tasks.
I just heard about AWS CRT at the AWS re:Invent innovation talk on storage.
1. Does the Rust SDK use CRT under the hood? I use the Rust SDK to access S3 and wonder if there are any automatic performance gains?
2. I couldn't find good material on how AWS CRT works and how it is integrated with the Java or Python S3 connectors. I would appreciate a more technical explanation. Do you have any links that explain this in more depth?
One thing I sorely missed was workers for consuming SQS messages. Ended up having an intern adapt a worker for the old community AWS SDK (rusoto) into this: https://github.com/Landeed/sqs_worker
Also on my dream list of features: gRPC support for Lambda.
Hah that reminds me of a decade or so ago - there was an entire unofficial node SDK before the official one came out. The unofficial one still supported a bunch of features outside the main one for a while.
Agreed - the Java SDK v2 has had 20 updates this month alone (sometimes two or three times in one day) so definitely not much in the way of manual updates.
As with all the other AWS SDKs, the bulk of the code is generated. The JSON service definitions are shared; the effort (one expects) is in adding support for all the different ways in which the JSON indicates that services behave, and in making the output look like it could have been hand-written.
What are some valid reasons why people wouldn't now use these rust libraries and extend them to their preferred language? Maintaining clients is tedious work and prone to abandonment.
I would expect AWS to provide custom libraries for basically every language. The cost of a few full-time engineers who are experts at any reasonably popular language is probably pocket change compared to how much even a few companies using that language might spend on AWS services.
Not all languages have a great interop story with Rust. Binding through the JNI is especially tricky, for example. Furthermore, when performance isn't important, the need to package and compile Rust code may be an unnecessary hassle.
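To make the interop cost concrete, here's a minimal sketch of the C-ABI surface a foreign-language binding sits on (the function name is made up for illustration). Only C-compatible types cross the boundary, so anything richer (String, Result, async) needs manual marshalling on both sides, which is a large part of why each language binding is real work. In a real cdylib you'd also add `#[no_mangle]` so the symbol keeps a predictable name.

```rust
// The only kind of item a JVM (via JNI) or Python (via ctypes/PyO3) host
// can call directly: an `extern "C"` function with C-compatible types.
pub extern "C" fn add(a: i32, b: i32) -> i32 {
    a + b
}

fn main() {
    // Callable directly from Rust too; a foreign host would instead load
    // the compiled cdylib and call `add` through its FFI layer.
    println!("{}", add(2, 3));
}
```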