The author clearly knows this, but others may not, so: immutability and "const" aren't the same thing. Const typically prevents name rebinding (as the type system permits). Immutability is the presumption that the whole data structure can't be mutated by default (though most immutable-data APIs provide ways to mutate cheaply, i.e. without bulk copying).
Const works like this:
not_const int[] foo = [123, 456]
const int[] bar = [456,789]
foo = [1,2,3] // no problem; rebinding is allowed if it meets the 'int[]' type constraint.
bar = [1,2,3] // compile error!
Immutability, as typically defined (there's some semantic debate here), prevents changes to the contents of complex data structures:
mut int[] foo = [123, 456]
not_mut int[] bar = [456,789]
foo[0] = 789 // no problem; mutation is allowed so long as it's with valid types.
bar[0] = 123 // compile error!
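The snippets above are pseudocode; in real Rust, the closest analogues are shadowing (for rebinding) and `mut` (for mutation). A minimal sketch:

```rust
// Rebinding vs. mutation, expressed in real Rust.

fn rebind() -> [i32; 3] {
    let foo = [123, 456];
    let _ = foo;
    // Shadowing introduces a new binding; no `mut` required.
    let foo = [1, 2, 3];
    foo
}

fn mutate() -> [i32; 2] {
    // Mutating contents requires a `mut` binding.
    let mut bar = [456, 789];
    bar[0] = 123;
    bar
}

fn main() {
    assert_eq!(rebind(), [1, 2, 3]);
    assert_eq!(mutate(), [123, 789]);
}
```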
> Const typically prevents name rebinding (as the type system permits).
In C++ you can only call `const` methods through a const binding or reference. `const` methods mark `this` const, so such methods cannot modify member variables (modulo escape hatches such as `const_cast`).
> The author clearly knows this, [...] Immutability is the presumption that the whole data structure can't be mutated by default
The situation in Rust is a bit more complex. Rust distinguishes between interior and exterior mutability.
`let mut` bindings and `&mut` references exhibit exterior mutability: you can call methods that borrow `self` mutably (`&mut self`), whereas `let` bindings and `&` references cannot. Incidentally, you can simply use a move to introduce exterior mutability for data that you own:
let a = Vec::new();
// Not allowed: a.push(1);
// Move the vector.
let mut b = a;
b.push(1);
Even if data sits behind an immutable binding or reference, it can still be mutated if its type has interior mutability (you just can't call methods that take `self` mutably). See the following small example, which uses an immutable binding to call a method that mutates the internal state of the struct.
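The example the comment refers to isn't reproduced here; a minimal sketch (type and field names invented for illustration) using `std::cell::Cell`:

```rust
use std::cell::Cell;

struct Counter {
    count: Cell<u32>,
}

impl Counter {
    // Takes `&self`, not `&mut self`, yet still mutates internal state.
    fn increment(&self) {
        self.count.set(self.count.get() + 1);
    }

    fn get(&self) -> u32 {
        self.count.get()
    }
}

fn main() {
    let c = Counter { count: Cell::new(0) }; // immutable binding
    c.increment(); // compiles fine: interior mutability via `Cell`
    assert_eq!(c.get(), 1);
}
```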
It's interesting that Rust doesn't have immutable data. It "only" prevents shared mutable data, but you can always mutate data if nobody else can observe it:
let bar = [456, 789]; // const binding to owned data
let mut bar = bar; // allowed if nobody else can see `bar`
bar[0] = 123; // OK!
Immutability in Rust is not a property of the data, but of the way you access it.
This explanation of const sounds much like Java's final, but not like C++'s const guarantees, which let you guarantee that nobody downstream of a const-qualified reference will be allowed to perform a write operation through that reference.
You cannot get around that behavior by assigning the reference to a non-const copy. That isn't allowed.
But if some other code gets a non-const reference to an object via another pointer path, that code can mutate the object, even though const pointers to that object may exist. In Rust, this is not the case.
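A minimal sketch of that Rust guarantee: while a shared reference is live, no mutable access to the same data can exist (the commented-out line is the one the compiler rejects; the final `push` compiles thanks to non-lexical lifetimes in modern rustc):

```rust
fn main() {
    let mut x = vec![1, 2, 3];

    let r = &x; // shared reference
    // While `r` is live, mutation through any path is forbidden:
    // x.push(4); // error[E0502]: cannot borrow `x` as mutable
    assert_eq!(r[0], 1); // last use of `r`

    x.push(4); // fine now: the shared borrow has ended
    assert_eq!(x.len(), 4);
}
```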
That difference severely weakens the meaning of the const property in C++.
No. C++ const just means "I won't mutate this object" (through the const pointer). It doesn't mean that somebody else won't mutate the same object (through their non-const pointer to the same object).
struct X {
X(Y val) : y_{val} {}
// Can be called on const and non-const X objects
const Y& y() const { return y_; }
// Can only be called on non-const X objects
Y& y() { return y_; }
private:
Y y_;
};
void f(X& x) { x.y() = 222; }
void g(const X& x) { x.y() = 222; } // compilation error: x.y() returns const Y& here
const X x{123}; // x.y() == 123
x.y() = 222; // compilation error
f(x); // compilation error (f's parameter is non-const)
g(x); // the call is fine, but g itself fails to compile (assigns through const)
// Cannot take non-const ref or ptr of x
X& xr{x}; // compilation error
X* xp{&x}; // compilation error
If the original declaration of the variable is const, nothing can modify it. (And as my first example shows, the original declaration can always be const if you want it to be, regardless of context.) You can't take a non-const pointer or reference to x without explicitly circumventing the language (e.g. with `const_cast`).
Sure, if the object was declared const in the first place, then it (mostly) can't be mutated. But the real power of const comes from const references, which in Rust guarantee that the referenced object will not be mutated while the reference is alive, even though it might have been mutated before.
You're thinking of a const pointer or reference. Of course if the data the pointer is pointing at changes, if you dereference it you get something else. But notice: the pointer, ie. the page number of your book, or the offset in your array, did indeed remain const. It is just the contents of the book/array that changed.
A const value cannot be mutated locally or globally, so if you make something const and expose it, it cannot be changed.
By microservice, I mean an application that speaks HTTP, accepts requests, talks to a database, returns a response (possibly serving HTML), packaged up in a Docker container and ready to be dropped somewhere in the cloud.
Huh? That's a CRUD app. A "microservice" is a service which is a component of a larger service. Usually a service run by the same organization. Many microservices don't speak HTTP; they use Google protocol buffers or something similarly efficient. Serving a Facebook page involves about a hundred microservices, and they don't talk HTTP to each other.
A microservice is about the size of a service, not about the underlying technology. This service has few endpoints, and so is a microservice. This also isn't a CRUD app, as it has no delete, for example. It only has the CR of CRUD.
Is it? Isn't it a purely organizational paradigm, where each team is responsible for a specific service and clear API boundaries are defined between teams, rather than about the "size" of an application? (Size is hard to quantify: if my app does AI and needs 100,000 lines of code for a single endpoint, is it still micro?)
That's why I believe "micro" is an unfortunate qualifier for what is essentially basic SOA.
There is at least a 1:1, possibly 1:*, relationship between teams and microservices. The "micro" refers to responsibilities: each service fulfills a single purpose. So yeah, a 100k-line service could still be "micro" if it owns one thing. That's the beauty: you can start out with a quick hackjob to get the doors open and then replace it with a 100k behemoth later.
Mapping each team directly to a microservice seems dangerous because it encourages the service to be as big as the team, rather than as big as it needs to be. The "microservices" approach is all about keeping your components small.
But you're right, microservices is pretty indistinguishable from a (healthy) SOA. And SOA is pretty indistinguishable from the UNIX philosophy when you get down to it.
I see them as different axes. A service can be small, and it can be the only service. SOA means your larger service is comprised of smaller services, but says nothing about the size of those services.
A URL shortener is a microservice without SOA. A huge web application with four services that are half a million LOC each is SOA, but not microservices.
> Although “microservice” has become a popular name for this architectural style, its name does lead to an unfortunate focus on the size of service, and arguments about what constitutes “micro”. In our conversations with microservice practitioners, we see a range of sizes of services. The largest sizes reported follow Amazon's notion of the Two Pizza Team (i.e. the whole team can be fed by two pizzas), meaning no more than a dozen people. On the smaller size scale we've seen setups where a team of half-a-dozen would support half-a-dozen services.
> This leads to the question of whether there are sufficiently large differences within this size range that the service-per-dozen-people and service-per-person sizes shouldn't be lumped under one microservices label. At the moment we think it's better to group them together, but it's certainly possible that we'll change our mind as we explore this style further.
For what it’s worth, I much prefer your version. It emphasises the similarities between “enterprisey” SOA and the “cool hip” microservice, while also highlighting one of the bigger differences.
People get too caught up in “how small is micro”, which is just silly extremism.
I'm waiting for the service to be a bit more consistent before blogging about it, but a friend and I have started writing a webservice in Rocket, and it's been a total pleasure.
It may be useful to some people, as we have most of the features you'd find in a "real life" service:
* database connection
* schema creation
* json handling
* pagination
* some endpoints are non-json
* support for CORS
* automated isolation tests (i.e. we test the service as a black box)
* generation of a minimal bin-only Docker image with travis-ci
* self-contained dev environment (vagrant up and you're good to go)
The only catch I've had with Rocket is that it relies on nightly. This can create issues with other libraries, and it frequently requires a newer nightly.
If you're going to use Rocket, I recommend pinning the Rocket version in a Cargo.lock, and setting a specific version of nightly with rustup override.
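Concretely, pinning might look something like this (the version numbers below are illustrative placeholders, not recommendations from the thread):

```toml
# Cargo.toml -- pin an exact Rocket release rather than a range
[dependencies]
rocket = "=0.3.6"
```

For the compiler side, you can pin a dated nightly with `rustup override set nightly-2018-01-13` in the project directory (or a `rust-toolchain` file containing that toolchain name), so the project keeps building even when newer nightlies break things.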
Strikes me as having quite a big cognitive overhead.
I wouldn't try using this unless cornered by performance considerations or wanting a pure-Rust software architecture.
In case you're not familiar with Swagger/OpenAPI, it's a format for specifying (generally REST-ful) HTTP APIs. The key benefit over just writing your API directly in Rocket is that you specify your API once and then generate client and server implementations that join up, even if they're in different languages.
Or, for another point of view: personally, I really, really don't like codegen. I view it as a wart. So if you're using Ruby, you can use my Modern framework (not yet publicized, but being used in a pre-production capacity right now, docs to come) to generate an OpenAPI document from your API. Fire up the web server, it automatically generates and serves an OpenAPI document, and you're off to the races.
IMO it's a smarter framework than Grape for the tasks I've set out to tackle--JSON-based RESTful APIs (though, as it's an OpenAPI-first service, there's no reason you can't handle binary data or XML or whatever you want) that leverage tools like dry-types to unambiguously define OpenAPI schema--and it addresses a lot of my pet peeves with stuff like weirdly stateful and mutable DSLs during the definition step.
The reason I mention it is because there are three languages I've been using regularly: Ruby, Node, and Rust. I intend to, next time I need a service in Node or Rust, write a Modern equivalent there. =)
Agreed, code generation can be unpleasant, but I think it depends on your workflow.
The key point with the Rust Swagger codegen is that it generates a whole crate, so you just import it and don't really need to worry that it's auto-generated.
Our Swagger/OpenAPI specifications are mastered in a repository separate from our client/server code.
Our plan is to use CI on these repos to generate crates and push them to an internal crate repository. We've prototyped this, but are limited until the alternative crate repositories (https://github.com/rust-lang/rfcs/blob/master/text/2141-alte...) work completes. (We've been quite active in pushing this forward.)
All the client/server code that uses the APIs then just imports the crate as usual.
(I think this is pretty slick.)
It's kind of clever to generate the OpenAPI document from the service that's implementing it, but how do you handle multiple parallel implementations of the same service?
For example, we're building software that we sell to telecoms operators, and one of our APIs is for retrieving information about a telephone number. Depending on the operator, they might store their data in a number of different types of databases (there are standards, but not everyone follows them :( ). These databases are fundamentally different from each other; it's not just different flavors of SQL - it's actually different query primitives. As a result, we have different implementations of this service - one per database type (there is essentially no common code between them). I think defining the API separate from the implementation is pretty crucial to making this work.
That sounds distinctly less awful than most other codegen solutions, I'll give you that. =) Codegen will forever and always make me uncomfortable but that's a better model for sure.
> It's kind of clever to generate the OpenAPI document from the service that's implementing it, but how do you handle multiple parallel implementations of the same service?
I don't. Like, emphatically I don't. I don't view an OpenAPI document as a standard to code against (though that idea is interesting)--I view OpenAPI documents as an easy way to document an API and to generate clients.
That boils down to preference though. A lot of people, myself included, find Rocket to be a highly ergonomic, state-of-the-art web framework. I think you're wrong to discount it because it doesn't look, at first glance, the exact same as the libraries you've used yourself.
Why? Even writing a microservice in C++ is dead simple these days:
1. Write the service specification in Thrift.
2. Generate the server and client stub with the framework.
3. Write the service functions.
4. Most probably you'd need to include some database/cache libraries. Poco project does a fairly good job.
5. Add some more logging and metric libraries.
6. General tools/libs are available in poco and folly.
7. If you are messing with REST, the Microsoft REST SDK and Facebook's Proxygen already provide a very solid foundation. High-performance servers are already written; you just need to fill in the callbacks and configure the thread pools.
Why do you consider this more difficult than doing the same in Go? You just need intermediate C++ knowledge for the above tasks. No need to touch template programming, manually manage threads, ...
IMO, the goroutine model has the same synchronization challenges as multithreaded programming. For network code that is io bound, I find async/await much cleaner.
It's not that bad with channels. I suppose you don't have to worry about synchronization issues as much with async/await on Node, since that's a single-threaded environment, but async/await with something like C# isn't any less complicated in that regard than goroutines and channels.
Generally Go code isn't written like typical multithreaded code with synchronization unless you're really trying to optimize something.
When you talk about async/await, you're really talking about asynchrony; when you talk about goroutines, you're talking about parallelism (which is a selling point of Go).
If the thread responsible for executing an async/await task blocks, then everything that relies on that thread blocks with it. So to use async/await comfortably, the entire call stack must be built for it, and that can be a huge challenge for a language's ecosystem.
Goroutines sidestep that problem: the runtime can spawn a new system thread so the rest of the program isn't affected much by a blocking call, and Go also wraps IO in an internal async API so basic IO operations don't actually block for long.
If you want a compiled systems language with async/await you should take a look at Nim. Implemented fully using macros so you can even extend it (Nim's metaprogramming features are awesome).
Well, I think code generation can suck, but it doesn't need to.
I like how grpc/protobuf works: generating the code, saving it in a repo, and using it is fine for RPC-like stuff.
However, generating code at compile time can actually suck -- like having your database layer (jOOQ-style) generated during the build, so the code never gets checked in anywhere.
How come on every thread showing an example of how to do something in Rust there's a staunch C++ defender thinking the whole world should be using their language? C++ is a complex beast that, due to its age, has quite a number of disadvantages, and no two codebases I've seen are the same; one measure of a language is its simplicity, and C++, subjectively, isn't simple. If you like C++ that's fine; it doesn't mean you can't do it in Rust as well.
I think it's good that other languages are evolving that are safer, make it easier to do concurrency, have an idiomatic style and still are performant whilst removing legacy cruft. It's the natural evolution of things. Every C++ team I've been on debates every feature usage (e.g. auto/lambdas/etc) where one team's good code is another's bad idea due to sheer amount of legacy features vs new ways, what language constructs are good vs completely bad ideas (everyone has completely different opinions), how should I import libraries, what build tool to use, etc.
Because some of us actually like both languages, Rust and C++, and dislike the misconception some have about what C++ is capable of, because they only know how to write "C with a C++ compiler" kind of thing.
Any complex language that gets wide market adoption evolves into "complex beast that due to its age has quite a number of disadvantages and no two codebases I've seen are the same".
> Every C++ team I've been on debates every feature usage (e.g. auto/lambdas/etc) where one team's good code is another's bad idea due to sheer amount of legacy features vs new ways, what language constructs are good vs completely bad ideas (everyone has completely different opinions), how should I import libraries, what build tool to use, etc.
well yeah, and if you then ask these people to debate instead the use of $LANG instead of C++ you will get 100 times more debate.
The commenter to which you’re replying is arguing that the cognitive overhead in developing a microservice even in C++ isn’t so bad compared to, say, Go.
It doesn’t seem to be an advertisement for C++ or a condemnation of Rust.
> Why? Even writing micro service in C++ is dead simple these days
Apologies to the commenter then. I misread that to be a "why bother?" to the original blog post and do this instead. All tech has its pros and cons and trying other things can be good too.
That code looks awful, sorry but rust syntax is horrible. I want to like rust but after a couple months of trying I hate the syntax and the over complicated nature of the language.
Disclaimer: I'm not in the 'rust community' (nor would I want to be!). I'm not an evangelist for it, and I'm not very good at it. I like the language though. Anyway...
I really think people need to get over syntax. I don't find Rust particularly good looking either, but I get over that because I like the semantics and it sits in a nice place in the ecosystem of programming languages. Likewise, I don't like how C# uses PascalCase everywhere, or puts opening braces on their own line - yet when I program in C#, I adhere to those conventions.
Syntax is incredibly subjective, and also superficial. If you think the semantics of a language suck or are not a fit for what you are doing - that's a reasonable conversation. But you really shouldn't limit your language choice based on non-alphanumeric characters or whatever your objection is.
> Syntax is incredibly subjective, and also superficial. If you think the semantics of a language suck or are not a fit for what you are doing - that's a reasonable conversation. But you really shouldn't limit your language choice based on non-alphanumeric characters or whatever your objection is.
I like the semantics of Rust, I actually use Rust, and plan on continuing to do so. I still hate the syntax and module system though.
Rust has an overly complicated module system compared to any other language I've used. Rust has a million different string types, you can't use inline assembly unless you use a non-stable compiler, etc, etc. On paper Rust sounds perfect, until you use it.
edit:
And no I'm not talking about the borrow-checker, I like that and it took very little time to figure out.
Steve do you have any details to share on this? Is there a related RFC? The module system is, oddly enough, one of my favorite things about Rust -- so I'd like to keep abreast of any changes to it.
You and I are the only two that like it ;) let me reply to your sibling with the details, it’ll be a few minutes to type it all out. It’s more of a “reduce confusion of paths” than anything else at this stage.
I'd also like to add my voice to those who like the current system. For me at least, the flexibility it affords outweighed the initial difficulty I encountered learning it.
So, we wanted to do something more sweeping, but in the RFC discussions, some of the bigger ideas were too controversial, so we had to pare them back.
Timeframe: fairly soon. You can already try out most of it on nightly. There's still some final details to work through though.
Before we get into details: none of these things are breaking changes. In Rust 2015, we will add lints to nudge you in this direction; in Rust 2018, those will move to deny-by-default. This means that if you truly love the old system, you can still use it, by allowing the warnings instead of denying them.
Main ideas:
* Absolute paths begin with the crate name, or with "crate" for paths within the same crate.
One of the core issues that these changes address is that, if you don't develop the right mental model around defining items and use, counter-intuitive results can happen. For example:
extern crate futures;
mod submodule {
// this works!
use futures::Future;
// so why doesn't this work?
fn my_poll() -> futures::Poll { ... }
}
std is even worse, as you don't even write the `use` or `extern crate` lines:
fn main() {
// this works
let five = std::sync::Arc::new(5);
}
mod submodule {
fn function() {
// ... so why doesn't this work
let five = std::sync::Arc::new(5);
}
}
Quoting the RFC:
> In other words, while there are simple and consistent rules defining the module system, their consequences can feel inconsistent, counterintuitive and mysterious.
With the changes, the code looks like this:
extern crate futures;
mod submodule {
use futures::Future;
fn my_poll() -> futures::Poll { ... }
}
fn main() {
let five = std::sync::Arc::new(5);
}
mod submodule {
fn function() {
let five = std::sync::Arc::new(5);
}
}
Nice and consistent.
That being said, there's also some discussion here that hasn't been totally sorted. Using the crate name in this way has some technical problems, so we might make it `extern::crate_name`, so that it is
mod submodule {
use extern::futures::Future;
This is a bit verbose though, so we're not sure that's what we want. See the end of this post for that discussion.
* "extern crate" goes away.
Speaking of the code above, why do we have to write `extern crate futures` anyway? It's already in your Cargo.toml. Cargo already passes --extern futures=/path/to/futures.rlib to rustc. In the end, it just feels like boilerplate. Again, there's that inconsistency between std and futures in the code above. Removing the extern crate line makes it more consistent, and removes boilerplate. 99% of the time, people put the line in the crate root anyway, and half of the 1% who don't get confused when it doesn't work when they do this.
* The "crate" keyword can be used like pub(crate) can today, for making something crate-visible but not public externally
This feels superficial, but ends up also being a much easier mental model. Here's the problem: you see "pub struct Foo;". Is Foo part of your public API, or not? Only if it's in a public module itself! pub(crate) is longer than just crate, and is often the thing you actually want when you use 'pub' inside something that's not public. So let's encourage the right thing, and one that's easier to tell at a glance.
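A sketch of the distinction (module and function names invented for illustration; under the proposal, `pub(crate) fn` could be written as just `crate fn`):

```rust
mod internal {
    // Visible anywhere inside this crate, but never part of the public API.
    pub(crate) fn helper() -> u32 {
        42
    }

    // `pub`, but only actually public if some public path re-exports it.
    pub fn exported() -> u32 {
        helper()
    }
}

// The re-export is what makes `exported` part of the public API.
pub use internal::exported;

fn main() {
    assert_eq!(internal::helper(), 42); // fine: same crate
    assert_eq!(exported(), 42);
}
```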
* mod.rs is no longer needed; foo.rs and foo/ work together, rather than foo/mod.rs
There's tons of awkwardness here. Most people use foo.rs until they give it a submodule, then they have to move it to foo/mod.rs. This just feels like meaningless churn for no reason. Instead of "mod foo; means look in foo.rs or foo/mod.rs", it becomes "mod foo; means foo.rs". Much more straightforward. Same with a "mod bar;" inside foo.rs: it becomes foo/bar.rs (as it does today, but you can see how this is more consistent overall; under the old rules, if foo itself had submodules, bar would have to live at foo/bar/mod.rs!).
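To make the new rules concrete, a hypothetical source tree (file names invented for illustration) would look like:

```
src/
  lib.rs    // declares `mod foo;` and `mod quux;`
  foo.rs    // declares `mod bar;` -- stays foo.rs even with a submodule
  foo/
    bar.rs
  quux.rs   // leaf module; no directory needed
```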
Also, if you have a bunch of `mod.rs` files open in your editor, you have no idea which modules they correspond to, as they all say `mod.rs`. Now they'll say the file name instead.
----------------------------
That's the quick summary. I've left out some details. If you want to try this yourself, grab a nightly and add this:
I've got to admit Steve, you really spiked my blood pressure this morning with a statement like "we're changing the module system." -- I'm outraged that you would link to this well reasoned, well written RFC that has such a clean and simple migration story! ;-)
I'm actually most hyped about `#![feature(crate_visibility_modifier)]` to be honest. I know it's essentially just an alias for pub(crate), but I'm all about typing less parens in my item definitions! I didn't know about the other `pub(...)` modifiers for the longest time, but they've been so useful for things like games programming, where the entire point of the exercise boils down to "cross cutting concerns" and "eh you've got a &mut Player anyways, just reach in there and poke at the state!"
The `../mod.rs` change is also quite nice. I mean, at the end of the day it'll only save me a `mv` command and a refresh of my directory listing in vim, but sometimes those small context switches can have a surprisingly large impact on flow; since now I'm thinking about filesystems and module trees rather than the problem at hand.
Hehe, I'm glad you're feeling positive about it. It took a lot of blood and tears to reach this point, honestly.
> I mean, at the end of the day it'll only save me a `mv` command
Yeah, as you say, it feels minor, but hopefully, a lot of tiny ergonomic changes will end up feeling significantly better. It's also why the epoch concept is important; it gives us a way to talk about how all these little changes every six weeks build into something much bigger and nicer.
Why not move the version specifier from Cargo.toml to the extern statement, à la QML?
import QtQuick 2.7
This has multiple benefits:
* You only need to touch at most one file to add an import.
* Tools like cargo-script don't need special comment syntax for inline dependency specification.
* The source code functionality arguably depends on the version of the libraries included as well as the name, so it keeps it together.
This seems pretty obvious though so I'm guessing there's a reason it wasn't done?
> Also, if you have a bunch of `mod.rs` files open in your editor, you have no idea what module they corresponds to, as they all say `mod.rs`. Now they'll say the file name instead.
This is probably the most important reason to make the change tbh. It doesn't seem like a big thing but it's one of those ergonomic papercuts that will make the user experience subtly better once it's fixed.
Sounds great! One thing I've always had trouble with is detecting unused dependencies. If a project grows fast, it's easy to leave some unused entries in Cargo.toml. At least, matching them with their `extern crate` counterparts helps detect unused ones, by relying on rustc for the check.
While that would make you unnecessarily build the dependency the first time, it at least wouldn't be in your final binary, since everything would be unused. That said, we could still warn about it anyway, even without extern crate.
The two different string types are the result of the borrow checker. Rust's string handling would be unusable without the difference between &str and String.
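A quick sketch of the division of labor between the two types (function names are invented for illustration): `String` owns its heap buffer, while `&str` is a borrowed view into string data owned elsewhere.

```rust
// Accept a borrowed view in, return an owned value out.
fn greet(name: &str) -> String {
    format!("Hello, {}!", name)
}

fn main() {
    let owned: String = String::from("world"); // owned, growable, heap-allocated
    let view: &str = &owned;                   // borrowed slice of the same bytes
    assert_eq!(greet(view), "Hello, world!");
}
```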
Rust is incredibly usable for a systems programming language that doesn't have a garbage collector.
Often I'm writing code that's as high-level as in other languages. Though by its nature it also has aspects that you don't even have to think about in other languages, like Fn vs FnOnce vs FnMut when working with closures. So it's undoubtedly going to be more difficult than other languages.
For example, I think most Rust users would agree that it can be confusing when a fn returns a `Cow<str>` vs `OsString` vs `String`. But it's straightforward to convert those into `String` even if you don't care what the differences are.
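For instance, a sketch of those conversions (the `expect` call assumes the `OsString` holds valid UTF-8, which isn't guaranteed in general):

```rust
use std::borrow::Cow;
use std::ffi::OsString;

fn main() {
    let c: Cow<str> = Cow::Borrowed("hello");
    let s1: String = c.into_owned(); // Cow<str> -> String (clones only if borrowed)

    let os = OsString::from("world");
    // OsString -> String is fallible: it errors if the bytes aren't valid UTF-8.
    let s2: String = os.into_string().expect("valid UTF-8");

    assert_eq!(s1, "hello");
    assert_eq!(s2, "world");
}
```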
I think it's fair to simply not have an appetite for a certain language's set of idiosyncrasies.
> That code looks awful, sorry but rust syntax is horrible.
Compared to what? Compared to Python, JS, Go, etc.? Yes, but those are different tools for projects with different requirements, which allows them a leaner syntax. Compared to other languages in its category (non-garbage-collected, manual memory management), Rust looks alright, IMHO.
Which is why those languages should be used for microservices, not Rust. The majority of web apps and microservice projects don't need the fine-grained control of Rust. Rust is for replacing C and C++; would you write your microservice in C or C++? The way Mozilla or the Redox project is using Rust is a great example of where Rust should be used: in kernels, parsers, JITs and other low-level libraries. I'm not saying you can't write a microservice in Rust, but you're going to be much more productive using a language like Go, Node, Python, Java, etc.
You're starting to tell people what they "should" do and it's starting to smell of sour grapes because you didn't particularly like Rust nor achieved a productive level of familiarity. Which makes you a poor candidate for evaluating the strengths/weaknesses of Rust beyond your tastes.
Like someone hijacking Haskell discussion because they didn't grasp functional programming and don't see how anyone else could.
I'm writing most of my new code in Rust that I would've written in Node or Go. Sometimes it makes more sense to use something else. So what?
Maybe it's time to leave the theater so others can enjoy the show. Your "me no likey" posts have kinda overstayed their welcome without advancing the discussion.
> You're starting to tell people what they "should" do and it's starting to smell of sour grapes because you didn't particularly like Rust nor achieved a productive level of familiarity. Which makes you a poor candidate for evaluating the strengths/weaknesses of Rust beyond your tastes.
I wrote a working Mach-O parser in Rust and I plan on continuing the symbolic execution engine I started in Rust, so I'm not a complete beginner. Also, there is such a thing as the best tool for the job, and for web services I don't think Rust is that tool.
> Like someone hijacking Haskell discussion because they didn't grasp functional programming and don't see how anyone else could.
I understand why Rust exists and it makes sense: a safe, low-level systems programming language. What I don't get is why one would use Rust for the vast majority of microservices. There are easier, more productive tools available.
> I'm writing most of my new code in Rust that I would've written in Node or Go. Sometimes it makes more sense to use something else. So what?
So you acknowledge that there are better tools for building microservices, but your defense for using Rust is "so what"? I mean, that's fine for personal projects, but when you're putting things in production you should be using the best tools available.
> Maybe it's time to leave the theater so others can enjoy the show. Your "me no likey" posts have kinda overstayed their welcome without advancing the discussion.
Lol, so only positive opinions of Rust are allowed here? But seriously, steveklabnik responded to one of my posts with so much useful and awesome information that your "time to leave" post is ridiculous.
Well yeah, that was the point: "While there exist a number of high-level, Flask or Django like frameworks that abstract away most of the fun about this, we will opt for using the slightly lower-level hyper library to handle HTTP..."
You should check out Rocket or Actix-Web or Gotham for something a bit higher-level. Also, know that it's Rust we're talking about - it's an expressive language, but it's also a systems language. If you want something where you can toss in a breakpoint and start introspecting or experimenting, you'll probably want Ruby or Elixir.
I made a very simple web service with Rocket (see https://github.com/MrBuddyCasino/alexa-auth), which is better, but still - I'd go for Kotlin or Go instead if performance requirements allow it.
Are you sure rocket is actually faster than the Go/JVM equivalent?
I'm a big fan of Rust, and I don't deny that it's possible for it to be faster, but my understanding is that Rocket's async story isn't there yet.
You're also using reqwest::Client inside your handlers, which is synchronous.
I'd just find it hard to believe that could beat an async JVM or Go implementation.
You're right, my statement was slightly misleading. Rocket is certainly not there yet, so it would be pointless to use it to get better performance right now. Actix however is already extremely fast and shows the potential of Rust, which _in the future_ I suppose might be a driver to pick Rust over other alternatives.
Since language ergonomics and productivity are inferior to, e.g., Kotlin's, it would be pointless to pick Rust were it not for technical advantages such as req/s, memory usage, or latency.
Kotlin runs on the JVM too, and easily interops with Java libraries and frameworks. So all three of your points apply equally to Kotlin as they do to Java.
I have been looking at Rust as a replacement for a C microserver I made.
I have a thing that listens as root and serves as a user dictated by their id.
so it's:

    as root:
        on connect, fork
    forked process:
        read request to determine ID
        drop privileges to UID and GID of ID
        process request
Because of the root component I want it to do as little as possible while being root, which C is good for. But it's also probably riddled with holes, only some of which I am aware of.
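In Rust, one way to keep the root portion small is to parse just enough of the request to find the ID, then hand the work off to a child process spawned with the target user's uid/gid via the standard library's `CommandExt`. A minimal sketch of the idea - the request format, the worker binary path, and the ID-to-uid mapping are all hypothetical, not taken from the actual microserver:

```rust
// Hypothetical sketch: the root process reads only the first line of the
// request to learn the user ID, then spawns a worker with dropped privileges.
use std::os::unix::process::CommandExt;
use std::process::{Child, Command};

/// Parse a numeric user ID from a hypothetical first request line like
/// "ID 1000". Returns None for anything malformed, instead of silently
/// accepting garbage the way atoi() does.
fn parse_uid(first_line: &str) -> Option<u32> {
    first_line.strip_prefix("ID ")?.trim().parse::<u32>().ok()
}

/// Spawn the per-user worker with privileges already dropped. Note that
/// gid() must be set alongside uid(): the group needs to be dropped too,
/// and it must happen before the uid drop loses the right to do so.
#[allow(dead_code)]
fn spawn_worker(uid: u32, gid: u32) -> std::io::Result<Child> {
    Command::new("/usr/local/libexec/microserver-worker") // hypothetical path
        .uid(uid)
        .gid(gid)
        .spawn()
}

fn main() {
    assert_eq!(parse_uid("ID 1000"), Some(1000));
    assert_eq!(parse_uid("ID ../etc/passwd"), None);
    assert_eq!(parse_uid("garbage"), None);
}
```

The nice part is that the root process never interprets untrusted data beyond one integer; everything else runs in the unprivileged child.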
This looks like something right out of a CTF. I'm pretty sure you can overflow that buffer and smash the stack. It's also vulnerable to a path traversal attack. What happens if the filename is '../../../etc/passwd'?
You should also pass -fstack-protector in your Makefile.
Had a quick glance and your code is littered with unchecked function calls and potential overflows.
Also: Cookie:../../../<filename>
Where <filename> is a file starting with a value that's interpreted as a valid uid by atoi(). You're saved by a NULL pointer deref when the unchecked getpwuid() fails if the resulting uid is >0 but invalid (unless you're running it on a system where NULL is mapped to readable memory).
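For what it's worth, the path-traversal class of bug is cheap to close in either language by rejecting any non-plain path component before touching the filesystem. A small Rust sketch of the check (the function name is mine):

```rust
use std::path::{Component, Path};

/// Reject any request path containing a parent-directory or root
/// component, so "../../../etc/passwd" can never escape the served root.
fn is_safe_path(requested: &str) -> bool {
    Path::new(requested)
        .components()
        .all(|c| matches!(c, Component::Normal(_)))
}

fn main() {
    assert!(is_safe_path("css/site.css"));
    assert!(!is_safe_path("../../../etc/passwd"));
    assert!(!is_safe_path("/etc/passwd")); // absolute paths rejected too
}
```

Walking `components()` rather than string-matching on ".." also handles cases like "foo/../bar" and a leading "/" in one pass.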
The reality is that I only wrote as much as I needed to go back to working on the project I needed it for. It works for the "everything is fine" case, which is what I needed to go back to developing the client side. Even a hint of malicious intent could probably bring it to its knees.
But therein lies the rub. Is it worth hardening it or should I just go to Rust where most of those things just won't pop up?
So... did you use HTTP because the client side required it or because it made the client side much easier, or was that just what came to mind? Because this seems to be exactly the type of thing I would generally use the default OpenSSH installation on the box for, with pre-shared keys, and possibly even setting a specific shell on the specified public key on the server side to prevent random shell access.
There are some really interesting advanced features of OpenSSH that most people will never have need for, but you can come up with some really interesting solutions. For example, you could also use a single remote account that allows SSH access and has a separate public key for each user, with each key setting an environment var naming the desired target user, restrict the command run to sudo with that environment variable specifying which user to run as, and make sure sudo is configured to allow it for those users.
A microservice isn't a bad idea; it's just interesting how many ways there usually are to accomplish what seems like an odd, specific custom workflow in most UNIX environments.
Client side was all browser: https://github.com/Lerc/notanos (also incomplete). I go back and add things to it from time to time. It's my long-term toy project.
Shouldn't RUST_LOG use "microservice_rs" instead of "microservice"? The "_rs" is part of the crate name.
Another thing: in the make_post_response stub function I think it's better to use StatusCode::NotImplemented instead of NotFound (when trying this code I thought the match did not work correctly because everything returned NotFound).
The plan is for Rocket to run on stable by the end of 2018. v0.4 is getting closer, and the two biggest features remaining to be implemented for it are connection pooling (for which I have an open PR being worked on) and CSRF support.
Can you cite this claim? I've taken a look at the list of unstable features that Rocket uses, and at times it almost seems like a deliberate ploy to use every nightly feature imaginable. And the feature that it really does actually need for its API, procedural macros, isn't guaranteed to land this year AFAIK (of course, I'd love to be wrong).
> Nevertheless, I believe taking this route and going slightly lower level with Hyper gave you some nice insights into how you can leverage Rust to write a safe and performant webservice.
Using hyper directly definitely gives you a better idea of how everything fits together, and it is much more helpful for understanding how Futures work.
Diesel is a synchronous library, so the post's database functions make a blocking call and then return a future which, of course, doesn't make them asynchronous. They'd still block the event loop.
You could fix this by passing a `&futures_cpupool::CpuPool` into your database functions and wrapping the post's function bodies in `pool.spawn_fn(|| { ... })` so that they execute on a different thread and return a future of their result.
To have a global IO pool that you pass into fns, you can store it in the Service struct:
struct Microservice { io_pool: CpuPool }
Now you can access it in your service's handler (the call() fn) with `self.io_pool`.
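The offloading pattern itself can be sketched with only the standard library: the service hands the blocking call to another thread and gets back a channel receiver to consume later, which plays roughly the role that `pool.spawn_fn` plays when it returns a future. This is an illustrative stand-in using std threads and channels, not the actual futures_cpupool API, and `blocking_db_call` is a made-up placeholder for the Diesel query:

```rust
use std::sync::mpsc;
use std::thread;

// Stand-in for the blocking Diesel query.
fn blocking_db_call(id: u64) -> String {
    format!("row {}", id) // pretend this blocks on the database
}

/// Run a blocking job on another thread. The returned Receiver plays the
/// role of the future: the event-loop thread can collect the result later
/// instead of blocking inside the handler while the query runs.
fn spawn_blocking<T, F>(job: F) -> mpsc::Receiver<T>
where
    T: Send + 'static,
    F: FnOnce() -> T + Send + 'static,
{
    let (tx, rx) = mpsc::channel();
    thread::spawn(move || {
        // Ignore send errors: the caller may have dropped the receiver.
        let _ = tx.send(job());
    });
    rx
}

fn main() {
    let rx = spawn_blocking(|| blocking_db_call(42));
    assert_eq!(rx.recv().unwrap(), "row 42");
}
```

A real CpuPool additionally reuses a fixed set of threads and integrates with the futures executor, but the division of labor - blocking work off the event loop, a handle you resolve later - is the same.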
Diesel doesn't return futures, the author of the blog post returned a future and the person that you commented on is pointing out that that's pointless because Diesel is synchronous.
> Immutability (const) by default
The author clearly knows this, but others may not, so: immutability and "const" aren't the same thing. Const typically prevents name rebinding (as the type system permits). Immutability is the presumption that the whole data structure can't be mutated by default (though there are usually ways to mutate cheaply--without bulk copying, that is--provided by most immutable data APIs).
Const works like this:

    not_const int[] foo = [123, 456]
    const int[] bar = [456,789]
    foo = [1,2,3] // no problem; rebinding is allowed if it meets the 'int' type constraint.
    bar = [1,2,3] // compile error!
Immutability, as typically defined (there's a bit of a semantic debate here sometimes), prevents changes to the contents of complex data structures:

    mut int[] foo = [123, 456]
    not_mut int[] bar = [456,789]
    foo[0] = 789 // no problem; mutation is allowed so long as it's with valid types.
    bar[0] = 123 // compile error!