Writing a Microservice in Rust (goldsborough.me)
265 points by blacksmythe on March 8, 2018 | 125 comments



Minor nitpick:

> Immutability (const) by default

The author clearly knows this, but others may not, so: immutability and "const" aren't the same thing. Const typically prevents name rebinding (as the type system permits). Immutability is the guarantee that the whole data structure can't be mutated by default (though most immutable data APIs provide cheap ways to derive modified versions--without bulk copying, that is).

Const works like this:

  not_const int[] foo = [123, 456]
  const int[] bar = [456,789]
  foo = [1,2,3]     // no problem; rebinding is allowed if it meets the 'int[]' type constraint.
  bar = [1,2,3]     // compile error!
Immutability, as typically defined (there's a bit of a semantic debate here sometimes), prevents changes to the contents of complex data structures:

  mut int[] foo = [123, 456]
  not_mut int[] bar = [456,789]
  foo[0] = 789    // no problem; mutation is allowed so long as it's with valid types.
  bar[0] = 123    // compile error!
Edit: missed a space.


> Const typically prevents name rebinding (as the type system permits).

In C++, through a const binding you can only call `const` methods. `const` methods mark `this` const, so such methods cannot modify member variables. (Minus all escape hatches, such as `const_cast`.)

> The author clearly knows this, [...] Immutability is the guarantee that the whole data structure can't be mutated by default

The situation in Rust is a bit more complex. Rust distinguishes between interior and exterior mutability.

`let mut` bindings and `&mut` references exhibit exterior mutability - you can call methods that borrow `self` mutably (`&mut self`). Whereas `let` bindings and `&` references cannot. By the way, you can simply use moves to introduce exterior mutability for data that you own:

    let a = Vec::new();
    // Not allowed: a.push(1);
    
    // Move the vector.
    let mut b = a;
    b.push(1);
Even if data is behind an immutable binding or reference, it can still be mutated if its type has interior mutability (you just can't call methods that borrow `self` mutably). See the following small example, which uses an immutable binding to call a method that mutates the internal state of the struct.

https://play.rust-lang.org/?gist=6816dc9a03aca1e779f08443224...

The downside of interior mutability is that borrowing rules are enforced at run-time rather than compile-time.
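
A minimal standalone version of the same idea, using `RefCell` (the struct and method names here are made up for illustration, not taken from the linked gist):

    use std::cell::RefCell;

    struct Counter {
        // RefCell moves the borrow checks to run-time.
        count: RefCell<u32>,
    }

    impl Counter {
        // Note: takes `&self`, not `&mut self`.
        fn increment(&self) {
            *self.count.borrow_mut() += 1;
        }
    }

    fn main() {
        // Immutable binding, yet the internal state still changes.
        let counter = Counter { count: RefCell::new(0) };
        counter.increment();
        assert_eq!(*counter.count.borrow(), 1);
    }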


C++ also has the `mutable` keyword as a way to achieve something akin to interior mutability.


It's interesting that Rust doesn't have immutable data. It "only" prevents shared mutable data, but you can always mutate data if nobody else can observe it:

    let bar = [456, 789]; // const binding to owned data
    let mut bar = bar; // allowed if nobody else can see `bar`
    bar[0] = 123; // OK!
Immutability in Rust is not a property of the data, but of the way you access it.


This explanation of const sounds much like Java's final, but not like C++'s const guarantees, which let you ensure that nobody downstream of a const-qualified reference will be allowed to perform a write operation through that reference.

You cannot get around that behavior by assigning the reference to a non-const copy. That isn't allowed.


But if some other code gets a non-const reference to an object via another pointer path, that code can mutate the object, even though const pointers to that object may exist. In Rust, this is not the case.

That difference severely weakens the meaning of the const property in C++.


See my comment below for how to prevent this: https://news.ycombinator.com/item?id=16550209


Your definition is at odds with C++ (one of the most widely used languages of all time), and its use of `const`.

C++'s const is your "immutability".


No. C++ const just means "I won't mutate this object" (through the const pointer). It doesn't mean that somebody else won't mutate the same object (through their non-const pointer to the same object).


That's not always true.

  struct X {
    const int i{123};
    const int j;

    X(int j_) : j{j_} {}
  };

  // non-const object; x.i == 123; x.j == 111
  X x{111};
  x.i = 222; // compilation error
  x.j = 333; // compilation error


And a more idiomatic example:

  struct X {
    X(Y val) : y_{val} {}

    // Can be called on const and non-const X objects
    const Y& y() const { return y_; }

    // Can only be called on non-const X objects
    Y& y() { return y_; }

  private:
    Y y_;
  };

  void f(X& x) { x.y() = 222; }
  void g(const X& x) { x.y() = 222; }

  const X x{123};  // x.y() == 123
  x.y() = 222; // compilation error
  f(x); // compilation error (f's parm is non-const)
  g(x); // compilation error (can't assign to const ref)

  // Cannot take non-const ref or ptr of x
  X& xr{x};  // compilation error
  X* xp{&x}; // compilation error
If the original declaration of the variable is const, nothing can modify it. (And as my first example shows, the original declaration can always be const if you want it to be, regardless of context.) You can't take a non-const pointer or reference to x without explicitly circumventing the language like this:

  X& xr{const_cast<X&>(x)};
  X* xp{const_cast<X*>(&x)};


Sure, if the object was declared const in the first place, then it (mostly) can't be mutated. But the real power of const comes from const references, which in Rust guarantee that the referenced object will not be mutated while the reference is alive, even though it might have been mutated before.


I see. That's a nice feature.


You're thinking of a const pointer or reference. Of course, if the data the pointer is pointing at changes, then dereferencing it gives you something else. But notice: the pointer, i.e. the page number of your book, or the offset in your array, did indeed remain const. It is just the contents of the book/array that changed.

So a const value cannot be mutated, locally or globally: if you make something const and expose it, it cannot be changed.


So in the const example, bar[0] is mutable? If not, I don't get your point.


> With microservice, I mean an application that speaks HTTP, accepts requests, speaks to a database, returns a response (possibly serving HTML), packaged up in a Docker container and ready to be dropped somewhere in the cloud.

Huh? That's a CRUD app. A "microservice" is a service which is a component of a larger service. Usually a service run by the same organization. Many microservices don't speak HTTP; they use Google protocol buffers or something similarly efficient. Serving a Facebook page involves about a hundred microservices, and they don't talk HTTP to each other.


They might speak HTTP/2 though; gRPC does this, and gRPC is quite popular.


Thank you for the explanation, I never understood this. I wonder how many people do though (is everyone on the same page about this, usually?).


A microservice is about the size of a service, not about the underlying technology. This service has few endpoints, and so is a microservice. This also isn't a CRUD app, as it has no delete, for example. It only has the CR of CRUD.


> A microservice is about the size of a service

Is it? Isn't it a purely organizational paradigm, where each team is responsible for a specific service and clear API boundaries are defined between teams, rather than about the "size" of an application? (Which is hard to quantify: if my app does AI and needs 100,000 lines of code for a single endpoint, is it still micro?)

That's why I believe "micro" is an unfortunate qualifier for what is essentially basic SOA.


There is at least a 1:*, possibly *:*, relationship between teams:microservices. The "micro" refers to responsibilities, each service fulfills a single purpose. So yeah, a 100k line service could still be "micro" if it owns 1 thing. That's the beauty, you can start out with a quick hackjob to get the doors open and then replace it with a 100k behemoth later.

Mapping each team directly to a microservice seems dangerous because it encourages the service to be as big as the team, rather than as big as it needs to be. The "microservices" approach is all about keeping your components small.

But you're right, microservices is pretty indistinguishable from a (healthy) SOA. And SOA is pretty indistinguishable from the UNIX philosophy when you get down to it.


I see them as different axes. A service can be small, and it can be the only service. SOA means your larger service is comprised of smaller services, but says nothing about the size of those services.

An URL shortener is a micro service without SOA. A huge web application with four services that are half a million LOC each is SOA, but not micro services.

I wonder what Martin Fowler has to say...


> How big is a microservice?[0]

> Although “microservice” has become a popular name for this architectural style, its name does lead to an unfortunate focus on the size of service, and arguments about what constitutes “micro”. In our conversations with microservice practitioners, we see a range of sizes of services. The largest sizes reported follow Amazon's notion of the Two Pizza Team (i.e. the whole team can be fed by two pizzas), meaning no more than a dozen people. On the smaller size scale we've seen setups where a team of half-a-dozen would support half-a-dozen services.

> This leads to the question of whether there are sufficiently large differences within this size range that the service-per-dozen-people and service-per-person sizes shouldn't be lumped under one microservices label. At the moment we think it's better to group them together, but it's certainly possible that we'll change our mind as we explore this style further.

[0] https://martinfowler.com/articles/microservices.html


Thanks! I was on mobile so it was a bit awkward. I'll have to revise my understanding :)


For what it’s worth, I much prefer your version. It emphasises the similarities between “enterprisey” SOA and the “cool hip” Microservice, while also highlighting one of the bigger differences.

People get too caught up in “how small is micro”, which is just silly extremism.


He and James Lewis had this to say about it. https://www.thoughtworks.com/insights/blog/microservices-nut... I believe James' talk at the end of the article says he wishes it was called something else because of the implication of size.


No, and it's not even a very useful distinction to make. Just keep in mind that a microservice doesn't have to speak HTTP (or be in Docker, for that matter).


It's a web service. "Microservice" is just like "serverless": a buzzword that means everything and its opposite.


I'm waiting for the service to be a bit more consistent before blogging about it, but a friend and I have started writing a webservice in Rocket and it's been a total pleasure.

It may be useful to some people, as we have most of the features you find in a "real life" service:

   * database connection
   * schema creation
   * json handling
   * pagination
   * some endpoints are non-json
   * support for CORS
   * automated isolation tests (i.e. we test the service as a black box)
   * generation of a minimal bin-only docker image with travis-ci
   * self-contained dev environment (vagrant up and you're good to go)
https://github.com/allan-simon/sentence-aligner-rust


The only catch I've had with Rocket is that it relies on nightly. This can create issues with other libraries, and it constantly requires newer versions of nightly.

If you're going to use Rocket, I recommend pinning the Rocket version in a Cargo.lock, and setting a specific version of nightly with rustup override.


Strikes me as having quite a big cognitive overhead. I wouldn't try using this unless cornered by performance considerations or wanting a pure Rust software architecture.


Note that he is using Hyper, which is a low-level library. The article mentions Rocket (https://rocket.rs/guide/getting-started/), which has a higher-level API.


If you're defining your APIs using Swagger/OpenAPI (https://swagger.io/specification/), you can autogenerate Rust client and server stubs using the standard Swagger code generators (https://github.com/swagger-api/swagger-codegen/).

In case you're not familiar with Swagger/OpenAPI, it's a format for specifying (generally REST-ful) HTTP APIs. The key benefit over just writing your API directly in Rocket is that you specify your API once and then generate client and server implementations that join up - even if they're in different languages.

(I did a lot of the development work on the "rust-server" Swagger codegen, and blogged on it at https://www.metaswitch.com/blog/metaswitch-swagger-codegen-f...)


Or, for another point of view: personally, I really, really don't like codegen. I view it as a wart. So if you're using Ruby, you can use my Modern framework (not yet publicized, but being used in a pre-production capacity right now, docs to come) to generate an OpenAPI document from your API. Fire up the web server, it automatically generates and serves an OpenAPI document and you're off to the races.

https://github.com/modern-project/modern-ruby

IMO it's a smarter framework than Grape for the tasks I've set out to tackle--JSON-based RESTful APIs (though, as it's an OpenAPI-first service, there's no reason you can't handle binary data or XML or whatever you want) that leverage tools like dry-types to unambiguously define OpenAPI schema--and it addresses a lot of my pet peeves with stuff like weirdly stateful and mutable DSLs during the definition step.

The reason I mention it is because there are three languages I've been using regularly: Ruby, Node, and Rust. I intend to, next time I need a service in Node or Rust, write a Modern equivalent there. =)


Agreed, code generation can be unpleasant, but I think it depends on your workflow.

The key point with the Rust Swagger codegen is that it generates a whole crate, so you just import it and don't really need to worry that it's auto-generated.

Our Swagger/OpenAPI specifications are mastered in a repository separate from our client/server code.

Our plan is to use CI on these repos to generate crates and push them to an internal crate repository. We've prototyped this, but are limited until the alternative crate repositories (https://github.com/rust-lang/rfcs/blob/master/text/2141-alte...) work completes. (We've been quite active in pushing this forward.)

All the client/server code that uses the APIs then just imports the crate as usual.

(I think this is pretty slick.)

It's kind of clever to generate the OpenAPI document from the service that's implementing it, but how do you handle multiple parallel implementations of the same service?

For example, we're building software that we sell to telecoms operators, and one of our APIs is for retrieving information about a telephone number. Depending on the operator, they might store their data in a number of different types of databases (there are standards, but not everyone follows them :( ). These databases are fundamentally different from each other; it's not just different flavors of SQL - it's actually different query primitives. As a result, we have different implementations of this service - one per database type (there is essentially no common code between them). I think defining the API separate from the implementation is pretty crucial to making this work.


That sounds distinctly less awful than most other codegen solutions, I'll give you that. =) Codegen will forever and always make me uncomfortable but that's a better model for sure.

> It's kind of clever to generate the OpenAPI document from the service that's implementing it, but how do you handle multiple parallel implementations of the same service?

I don't. Like, emphatically I don't. I don't view an OpenAPI document as a standard to code against (though that idea is interesting)--I view OpenAPI documents as an easy way to document an API and to generate clients.


The rocket API still looks messier and harder to use than any node.js or golang web application library.


That boils down to preference though. A lot of people, myself included, find Rocket to be a highly ergonomic, state-of-the-art web framework. I think you're wrong to discount it because it doesn't look, at first glance, the exact same as the libraries you've used yourself.


Rocket might, but it's also using lots of Rust-specific features. If you want to copy Node, you can: https://users.rust-lang.org/t/a-new-crate-simple-server-a-ba...

Someday I will finish my Express port...


Looks about on par with Express with all the obvious benefits that Express does not have.


Why? Even writing a microservice in C++ is dead simple these days:

1. write the service specification in Thrift.

2. Generate the server and client stub with the framework.

3. Write the service functions.

4. Most probably you'd need to include some database/cache libraries. Poco project does a fairly good job.

5. Add some more logging and metric libraries.

6. General tools/libs are available in poco and folly.

7. If you are messing with REST, the Microsoft REST SDK and Facebook Proxygen already provide a very solid foundation. High-performance servers are already written. You just need to fill in the callbacks and configure the thread pools.

Why do you consider this more difficult than doing the same in Go? You just need some intermediate C++ knowledge for the above tasks. No need to touch template programming, manually manage threads, ...


Because Go has built-in primitives to make microservices.

The cleanest asynchronous IO model that I have ever seen.

Simple serialization by default.

Did I mention the blazing fast compile times? Just that would make it way “faster” to build your service in Go :)

And above all, the language is simple with robust idioms.

Okay, I admit it is somewhat too simple, cough generics. And you won’t get C-like performance for sure.


IMO, the goroutine model has the same synchronization challenges as multithreaded programming. For network code that is IO-bound, I find async/await much cleaner.


It's not that bad with channels. I suppose you don't have to worry about synchronization issues as much with async/await on Node since that's a single-threaded environment, but async/await with something like C# isn't any less complicated in that regard than goroutines and channels.

Generally Go code isn't written like typical multithreaded code with synchronization unless you're really trying to optimize something.


async/await and goroutines are not the same thing here.

When you talk about async/await, you're talking about asynchrony; when you talk about goroutines, you're talking about parallelism (which is a selling point of Go).

If the thread responsible for executing the async/await code blocks, then everything that relies on that async/await blocks too. So if you want to use async/await comfortably, the entire call stack must be built for it, and that can be a huge challenge for a language's ecosystem.

Goroutines address that problem by spawning a new system thread so the rest of the program isn't affected too much by the block. Go also wraps IO on top of an internal async API so basic IO operations don't actually block for long.


Not necessarily, .NET and C++ task models (as done on Windows) rely on thread pools.

When a thread responsible for async/await needs to block, the task is parked and another task gets scheduled on the now-free thread.


The effort to add async/await to Rust is ongoing, with an experimental RFC [1]. There is also a working implementation using procedural macros [2].

[1] https://github.com/rust-lang/rfcs/pull/2033 [2] https://github.com/alexcrichton/futures-await


If you want a compiled systems language with async/await you should take a look at Nim. Implemented fully using macros so you can even extend it (Nim's metaprogramming features are awesome).


Code generation sucks... especially when it breaks. The more you use it, the more you try to avoid it.


Well, I think code generation can suck, but it does not need to.

I like how gRPC/protobuf work. Generating the code, saving it in a repo, and using it is fine for RPC-like stuff.

However, generating code with the compiler can actually suck--like having your database layer (jOOQ-like) generated while compiling, so the code does not get checked in anywhere.


How come on every thread showing an example of how to do something in Rust there's a staunch C++ defender thinking the whole world should be using their language? C++ is a complex beast that due to its age has quite a number of disadvantages, and no two codebases I've seen are the same - a measure of a language is its simplicity, and C++ subjectively isn't simple. If you like C++ that's fine; that doesn't mean you can't do it in Rust as well.

I think it's good that other languages are evolving that are safer, make it easier to do concurrency, have an idiomatic style and still are performant whilst removing legacy cruft. It's the natural evolution of things. Every C++ team I've been on debates every feature usage (e.g. auto/lambdas/etc) where one team's good code is another's bad idea due to sheer amount of legacy features vs new ways, what language constructs are good vs completely bad ideas (everyone has completely different opinions), how should I import libraries, what build tool to use, etc.


Because some of us actually like both languages, Rust and C++, and dislike the misconception some have about what C++ is capable of, because they only know how to write "C with a C++ compiler" kind of thing.

Any complex language that gets wide market adoption evolves into a "complex beast that due to its age has quite a number of disadvantages and no two codebases I've seen are the same".

Even something like C.


> there's a staunch C++ defender thinking the whole world should be using their language?

That's in no way a fair characterization of the parent poster.


> Every C++ team I've been on debates every feature usage (e.g. auto/lambdas/etc) where one team's good code is another's bad idea due to sheer amount of legacy features vs new ways, what language constructs are good vs completely bad ideas (everyone has completely different opinions), how should I import libraries, what build tool to use, etc.

well yeah, and if you then ask these people to debate instead the use of $LANG instead of C++ you will get 100 times more debate.


The commenter to which you’re replying is arguing that the cognitive overhead in developing a microservice even in C++ isn’t so bad compared to, say, Go.

It doesn’t seem to be an advertisement for C++ or a condemnation of Rust.


> Why? Even writing a microservice in C++ is dead simple these days

Apologies to the commenter then. I misread it as a "why bother? do this instead" response to the original blog post. All tech has its pros and cons and trying other things can be good too.


That does seem a bit painful to me. Maybe it will get better with some Rust macro DSL though...


> That does seem a bit painful to me.

It's because it's written using a low-level library. Here are the first two sections (the HTTP handling part) in Rocket:

    #![feature(plugin, custom_derive)]
    #![plugin(rocket_codegen)]

    extern crate rocket;

    use rocket::request::Form;

    #[derive(FromForm, Debug)]
    struct NewMessage {
        username: String,
        message: String,
    }

    #[derive(FromForm, Debug)]
    struct TimeRange {
        before: Option<i64>,
        after: Option<i64>,
    }

    #[post("/", data="<message>")]
    fn message_create(message: Form<NewMessage>) -> String {
        format!("{:?}", message)
    }

    #[get("/?<times>")]
    fn message_query(times: TimeRange) -> String {
        format!("{:?}", times)
    }


    fn main() {
        rocket::ignite()
            .mount("/", routes![message_create, message_query])
            .launch();
    }


This looks exactly like the Python code I'm currently writing. That's what I would call a Flask-like API. Nice seeing this in Rust!

A Python equivalent using the typing module:

    @app.get("/<times>")
    def message_query(times: TimeRange) -> str:
        return f"{times!r}"


That code looks awful, sorry but rust syntax is horrible. I want to like rust but after a couple months of trying I hate the syntax and the over complicated nature of the language.


Disclaimer: I'm not in the 'rust community' (nor would I want to be!). I'm not an evangelist for it, and I'm not very good at it. I like the language though. Anyway...

I really think people need to get over syntax. I don't find rust particularly good looking either, but I get over that because I like the semantics and it sits in a nice place in the ecosystem of programming languages. Likewise I don't like how C# uses PascalCase everywhere, or puts opening braces on their own lines - yet when I program in C#, I adhere to those conventions.

Syntax is incredibly subjective, and also superficial. If you think the semantics of a language suck or are not a fit for what you are doing - that's a reasonable conversation. But you really shouldn't limit your language choice based on non-alphanumeric characters or whatever your objection is.


> Syntax is incredibly subjective, and also superficial. If you think the semantics of a language suck or are not a fit for what you are doing - that's a reasonable conversation. But you really shouldn't limit your language choice based on non-alphanumeric characters or whatever your objection is.

I like the semantics of Rust, I actually use Rust, and plan on continuing to do so. I still hate the syntax and module system though.


It might not be for you. But that's not very damning criticism.


Rust has an overly complicated module system compared to any other language I've used. Rust has a million different string types, you can't use inline assembly unless you use a non-stable compiler, etc, etc. On paper rust sounds perfect until you use it.

edit:

And no I'm not talking about the borrow-checker, I like that and it took very little time to figure out.


> Rust has a million different string types

Your negative is my positive: I can transform a &[u8] to a &str without any allocations, for instance.
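
For example (a tiny sketch; `str::from_utf8` just validates the bytes and borrows them):

    // Zero-copy: no allocation, the &str borrows the same bytes.
    fn as_text(bytes: &[u8]) -> Option<&str> {
        std::str::from_utf8(bytes).ok()
    }

    fn main() {
        assert_eq!(as_text(b"hello"), Some("hello"));
    }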


We’re simplifying the module system this year, incidentally.


Steve do you have any details to share on this? Is there a related RFC? The module system is, oddly enough, one of my favorite things about Rust -- so I'd like to keep abreast of any changes to it.


You and me are the only two that like it ;) let me reply to your sibling with the details, it’ll be a few minutes to type it all out. It’s more of a “reduce confusion of paths” than anything else at this stage.


I'd also like to add my voice to those who like the current system. For me at least, the flexibility it affords outweighed the initial difficulty I encountered learning it.


Count me in on that. I love Rust's module system as it is today. :-)


Cool, I'm excited about the results. Any timeframe? What are the main ideas?


So, we wanted to do something more sweeping, but in the RFC discussions, some of the bigger ideas were too controversial, so we had to pare them back.

Timeframe: fairly soon. You can already try out most of it on nightly. There's still some final details to work through though.

Before we get into details, none of these things are breaking changes; in Rust 2015, we will add lints to nudge you in this direction, and in Rust 2018, those will move to deny by default. This means if you truly love the old system, you can still use it, by allowing instead of denying the warnings.

Main ideas:

* absolute paths begin with the crate name, or "crate" if they're paths in the same crate.

One of the core issues that these changes address is that, if you don't develop the right mental model around defining items and use, counter-intuitive results can happen. For example:

  extern crate futures;

  mod submodule {
      // this works!
      use futures::Future;

      // so why doesn't this work?
      fn my_poll() -> futures::Poll { ... }
  }
std is even worse, as you don't even write the `use` or `extern crate` lines:

  fn main() {
      // this works
      let five = std::sync::Arc::new(5);
  }
  
  mod submodule {
      fn function() {
          // ... so why doesn't this work
          let five = std::sync::Arc::new(5);
      }
  }
Quoting the RFC:

> In other words, while there are simple and consistent rules defining the module system, their consequences can feel inconsistent, counterintuitive and mysterious.

With the changes, the code looks like this:

  extern crate futures;

  mod submodule {
      use futures::Future;

      fn my_poll() -> futures::Poll { ... }
  }

  fn main() {
      let five = std::sync::Arc::new(5);
  }
  
  mod submodule {
      fn function() {
          let five = std::sync::Arc::new(5);
      }
  }
Nice and consistent.

That being said, there's also some discussion here that hasn't been totally sorted. Using the crate name in this way has some technical problems, and so we might make it "extern::crate_name", so that it would be

  mod submodule {
      use extern::futures::Future;
This is a bit verbose though, so we're not sure that's what we want. See the end of this post for that discussion.

* "extern crate" goes away.

Speaking of the code above, why do we have to write `extern crate futures` anyway? It's already in your Cargo.toml. Cargo already passes --extern futures=/path/to/futures.rlib to rustc. In the end, it just feels like boilerplate. Again, there's that inconsistency between std and futures in the code above. Removing the extern crate line makes it more consistent, and removes boilerplate. 99% of the time, people put the line in the crate root anyway, and half of the 1% who don't get confused when it doesn't work when they do this.

* The "crate" keyword can be used like pub(crate) can today, for making something crate-visible but not public externally

This feels superficial, but ends up also being a much easier mental model. Here's the problem: you see "pub struct Foo;". Is Foo part of your public API, or not? Only if it's in a public module itself! pub(crate) is longer than just crate, and is often the thing you actually want when you use 'pub' inside something that's not public. So let's encourage the right thing, and one that's easier to tell at a glance.
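
Roughly, with the nightly feature flag listed at the end of this post, that reads like this (my own toy example, not code from the RFC):

  #![feature(crate_visibility_modifier)]

  // `crate` is just a shorter pub(crate): visible anywhere in this
  // crate, but clearly not part of the public API.
  crate struct Config;

  // Still exported exactly as before.
  pub struct Client;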

* mod.rs is no longer needed; foo.rs and foo/ work together, rather than foo/mod.rs

There's tons of awkwardness here. Most people use foo.rs until they give it a submodule, then they have to move it to foo/mod.rs. This just feels like meaningless change for no reason. Instead of "mod foo; means look in foo.rs or foo/mod.rs", it becomes "mod foo; means foo.rs". Much more straightforward. Same with a "mod bar" inside foo.rs: it becomes foo/bar.rs (well, as it is today, but you can see how this is more consistent overall. If it had submodules, it might be foo/bar/mod.rs!)
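
So a layout under the new rules would look roughly like this (file names are illustrative):

  src/lib.rs       <- contains `mod foo;`
  src/foo.rs       <- the foo module itself; contains `mod bar;`
  src/foo/bar.rs   <- the foo::bar submodule (no foo/mod.rs anywhere)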

Also, if you have a bunch of `mod.rs` files open in your editor, you have no idea what modules they correspond to, as they all say `mod.rs`. Now they'll say the file name instead.

----------------------------

That's the quick summary. I've left out some details. If you want to try this yourself, grab a nightly and add this:

  #![feature(
      crate_in_paths, 
      decl_macro, 
      extern_in_paths,
      crate_visibility_modifier,
  )]
Note this includes the verbose "use extern" stuff.

If you'd like to read the details yourself: https://github.com/rust-lang/rfcs/blob/master/text/2126-path... and https://internals.rust-lang.org/t/the-great-module-adventure... ; the former is the RFC that was accepted, the latter is the discussion about the extern issue, with a few different variants.


I've got to admit Steve, you really spiked my blood pressure this morning with a statement like "we're changing the module system." -- I'm outraged that you would link to this well reasoned, well written RFC that has such a clean and simple migration story! ;-)

I'm actually most hyped about `#![feature(crate_visibility_modifier)]` to be honest. I know it's essentially just an alias for pub(crate), but I'm all about typing less parens in my item definitions! I didn't know about the other `pub(...)` modifiers for the longest time, but they've been so useful for things like games programming, where the entire point of the exercise boils down to "cross cutting concerns" and "eh you've got a &mut Player anyways, just reach in there and poke at the state!"

The `../mod.rs` change is also quite nice. I mean, at the end of the day it'll only save me a `mv` command and a refresh of my directory listing in vim, but sometimes those small context switches can have a surprisingly large impact on flow; since now I'm thinking about filesystems and module trees rather than the problem at hand.


Hehe, I'm glad you're feeling positive about it. It took a lot of blood and tears to reach this point, honestly.

> I mean, at the end of the day it'll only save me a `mv` command

Yeah, as you say, it feels minor, but hopefully, a lot of tiny ergonomic changes will end up feeling significantly better. It's also why the epoch concept is important; it gives us a way to talk about how all these little changes every six weeks build into something much bigger and nicer.


Why not move the version specifier from Cargo.toml to the extern statement, à la QML?

    import QtQuick 2.7
This has multiple benefits:

* You only need to go to at most 1 file to add an import.

* Tools like cargo-script don't need special comment syntax for inline dependency specification.

* The source code functionality arguably depends on the version of the libraries included as well as the name, so it keeps them together.

This seems pretty obvious though so I'm guessing there's a reason it wasn't done?


You’re forgetting the difference between cargo.toml and cargo.lock. In order to solve this, cargo would have to parse your source code.

Another way to think of it is, Cargo.toml contains all metadata about the build, and this is fundamentally metadata.


> Also, if you have a bunch of `mod.rs` files open in your editor, you have no idea what modules they correspond to, as they all say `mod.rs`. Now they'll say the file name instead.

This is probably the most important reason to make the change tbh. It doesn't seem like a big thing but it's one of those ergonomic papercuts that will make the user experience subtly better once it's fixed.


Sounds great! One thing I've always had trouble with is detecting unused dependencies. If a project grows fast, it's easy to leave some unused dependencies in Cargo.toml. At least, matching them with their `extern crate` counterpart helps detect unused ones, by relying on rustc for the check.


Great!

While that would make you unnecessarily build the dependency the first time, it at least wouldn't be in your final binary, since everything would be unused. That said, we could still warn about it anyway, even without extern crate.


The foo/mod.rs being replaced with foo.rs and foo/bar.rs is great news. This is how GNU Guile does its module system, and it's always felt so nice.


The two different string types are the result of the borrow checker. Rust's string handling would be unusable without the difference between &str and String.


Rust is incredibly usable for a systems programming language that doesn't have a garbage collector.

Often I'm writing code that's as high-level as in other languages. Though by its nature it also has aspects that you don't even have to think about in other languages, like Fn vs FnOnce vs FnMut when working with closures. So it's undoubtedly going to be more difficult than other languages.

For example, I think most Rust users would agree that it could be confusing when a fn returns a Cow<str> vs OsString vs String. But it's straightforward to convert those into String even if you don't care what the differences are.

I think it's fair to simply not have an appetite for a certain language's set of idiosyncrasies.
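
For example, a quick sketch of collapsing them into an owned String (the `expect` is just for brevity):

    use std::borrow::Cow;
    use std::ffi::OsString;

    fn main() {
        let c: Cow<'static, str> = Cow::Borrowed("borrowed");
        let os = OsString::from("os string");

        // Both collapse into an owned String when you need one.
        let _from_cow: String = c.into_owned();
        let _from_os: String = os.into_string().expect("valid UTF-8");
    }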


> That code looks awful, sorry but rust syntax is horrible.

Compared to what? Compared to Python, JS, Go, etc, yes; those are different tools for projects with different requirements, ones that allow for a leaner syntax. Compared to other languages in its category (non-garbage-collected, manual memory management), Rust looks alright, IMHO.


Which is why those languages should be used for microservices, not Rust. The majority of web apps and microservice projects don't need the fine-grained control of Rust. Rust is for replacing C and C++; would you write your microservice in C or C++? The way Mozilla or the Redox project is using Rust is a great example of where Rust should be used: in kernels, parsers, JITs and other low-level libraries. I'm not saying you can't write a microservice in Rust, but you're going to be much more productive using a language like go, node, python, java, etc.


> Which is why those languages should be used for microservices, not Rust.

I think generalizations such as this one don't make much sense. But I agree with the rest of what you say.


You're starting to tell people what they "should" do and it's starting to smell of sour grapes because you didn't particularly like Rust nor achieved a productive level of familiarity. Which makes you a poor candidate for evaluating the strengths/weaknesses of Rust beyond your tastes.

Like someone hijacking Haskell discussion because they didn't grasp functional programming and don't see how anyone else could.

I'm writing most of my new code in Rust that I would've written in Node or Go. Sometimes it makes more sense to use something else. So what?

Maybe it's time to leave the theater so others can enjoy the show. Your "me no likey" posts have kinda overstayed their welcome without advancing the discussion.


> You're starting to tell people what they "should" do and it's starting to smell of sour grapes because you didn't particularly like Rust nor achieved a productive level of familiarity. Which makes you a poor candidate for evaluating the strengths/weaknesses of Rust beyond your tastes.

I wrote a working Mach-O parser in rust and I plan on continuing the symbolic execution engine I started in rust, so I'm not a complete beginner to rust. Also, there is such a thing as the best tool for the job. And for web services I don't think rust is that tool of choice.

> Like someone hijacking Haskell discussion because they didn't grasp functional programming and don't see how anyone else could.

I understand why rust exists and it makes sense: a safe, low-level systems programming language. What I don't get is why one would use rust for the vast majority of microservices. There are easier, more productive tools available.

> I'm writing most of my new code in Rust that I would've written in Node or Go. Sometimes it makes more sense to use something else. So what?

So you acknowledge that there are better tools for building microservices but your defense for using rust is "so what"? I mean that's fine for personal projects but when you're putting things in production you should be using the best tools available.

> Maybe it's time to leave the theater so others can enjoy the show. Your "me no likey" posts have kinda overstayed their welcome without advancing the discussion.

Lol, so only positive opinions of rust are allowed here? But seriously, steveklabnik responded to one of my posts with so much useful and awesome information that your "I need to leave" post is ridiculous.


I am with you there; I like Rust, but where microservices are concerned I would always pick a GC'd language with a JIT/AOT compiler.

The language is a tiny piece of the end-to-end development experience.

Now, for playing around on an ESP32? C++ or Rust (when available).


Well yeah, that was the point: "While there exist a number of high-level, Flask or Django like frameworks that abstract away most of the fun about this, we will opt for using the slightly lower-level hyper library to handle HTTP..."

You should check out Rocket or Actix-Web or Gotham for something a bit higher-level. Also know that it's Rust we're talking about - it's an expressive language, but it's also a systems language; if you want something where you can toss in a breakpoint and start introspecting or experimenting, you'll probably want Ruby or Elixir.


How about Actix? From the README at https://github.com/actix/actix-web#example:

    extern crate actix_web;
    use actix_web::*;

    fn index(req: HttpRequest) -> String {
        format!("Hello {}!", &req.match_info()["name"])
    }

    fn main() {
        HttpServer::new(
            || Application::new()
                .resource("/{name}", |r| r.f(index)))
            .bind("127.0.0.1:8080").unwrap()
            .run();
    }


AFAIK actix was pretty high in the techempower rankings this year, top 10 or 20 in most.


I made a very simple web service with Rocket (see https://github.com/MrBuddyCasino/alexa-auth), which is better but still - I'd go for Kotlin or Go instead if performance requirements allow it.


Are you sure rocket is actually faster than the Go/JVM equivalent?

I'm a big fan of Rust, and I don't deny that it's possible for it to be faster, but my understanding is that Rocket's async story isn't there yet.

You're also using reqwest::Client inside your handlers, which is synchronous.

I'd just find it hard to believe that could beat an async JVM or Go implementation.


You're right, my statement was slightly misleading. Rocket is certainly not there yet, so it would be pointless to use it to get better performance right now. Actix however is already extremely fast and shows the potential of Rust, which _in the future_ I suppose might be a driver to pick Rust over other alternatives.

Since language ergonomics and productivity are inferior to e.g. Kotlin's, it would be pointless to pick it were it not for technical advantages such as req/s, memory usage or latency.


Yeah, I have seen actix, it looks great. The recent techempower benches have been particularly revealing of Rust's potential in this area.


Why Kotlin but not Java?


Kotlin is what Java would look like if it was invented today.

Type inference, explicit non-nullability, first-class functions, function types, extension methods, etc.

Everyone I know who's used Kotlin would not willingly go back to Java.


Java is fine, too. If I have the choice I prefer Kotlin these days. Same as if I have the money I go by taxi, not by bus.


- JVM is a good platform? Aka if you can think about it, it's probably been done in JVM and/or Java.

- Java's GC is highly customizable?

- It's one of top contenders in most web related benchmarks (see: https://www.techempower.com/benchmarks/#section=data-r15&hw=...)


Kotlin runs on the JVM too, and easily interops with Java libraries and frameworks. So all three of your points apply equally to Kotlin as they do to Java.


I have been looking at Rust as a replacement for a C microserver I made.

I have a thing that listens as root and serves as a user dictated by their id.

so it's

    as root:
       on connect fork:
           forked process:
              read request to determine ID
              drop privileges to UID and GID of ID
              process request
Because of the root component I want it to do as little as possible while being root, which C is good for. But it's also probably riddled with holes, only some of which I'm aware of.
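
For what it's worth, the privilege-drop step itself translates fairly directly; a rough sketch using the `libc` crate (error handling kept minimal):

    extern crate libc;

    // Drop to the target user. Order matters: the group has to be changed
    // while we still have the privileges to do so.
    fn drop_privileges(uid: libc::uid_t, gid: libc::gid_t) -> Result<(), &'static str> {
        unsafe {
            if libc::setgid(gid) != 0 {
                return Err("setgid failed");
            }
            if libc::setuid(uid) != 0 {
                return Err("setuid failed");
            }
        }
        Ok(())
    }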


Is it public? Can I take a look?


There's a version of some vintage at https://github.com/Lerc/userserv (it's terrible, don't judge me)

It has node bits there but they aren't necessary for the basic static serving. If it gets a WebSocket connection it punts it on to node.


https://github.com/Lerc/userserv/blob/master/authenticationt...

This looks like something right out of a CTF. I'm pretty sure you can overflow that buffer and smash the stack. It's also vulnerable to a path traversal attack. What happens if the filename is '../../../etc/passwd'?

You should also pass -fstack-protector in your Makefile.


Had a quick glance and your code is littered with unchecked function calls and potential overflows.

Also: Cookie:../../../<filename>

Where <filename> is a file starting with a value that's interpreted as a valid uid by atoi(). You're saved by a NULL pointer deref when the unchecked getpwuid() fails if the resulting uid is >0 but invalid (unless you're running it on a system where NULL is mapped to readable memory).


Hey! I said don't judge me :-)

The reality is that I only wrote as much as I needed to go back to working on the project I needed it for. It works for the 'Everything is fine' case, which is what I needed to go back to developing the client side. Even a hint of malicious intent could probably bring it to its knees.

But therein lies the rub. Is it worth hardening it or should I just go to Rust where most of those things just won't pop up?


So... did you use HTTP because the client side required it or it made the client side much easier, or was that just what came to mind? Because this seems to be exactly the type of thing I would generally use the default OpenSSH installation on the box for, with pre-shared keys, and possibly even setting a specific shell on the specified public key on the user side on the server to prevent random shell access.

There are some really interesting advanced features of OpenSSH that most people will never have need for, but you can come up with some really interesting solutions. For example, you could also use a single remote account that allows SSH access and has a separate public key for each user that sets an environment var for the desired target user, and restrict the command run to sudo with that environment variable determining which user to run as, and make sure sudo is configured for the allowed users.

A microservice isn't a bad idea, it's just interesting how many ways there usually are to accomplish what seems like odd, specific custom workflow in most UNIX environments.


Client side was all browser. https://github.com/Lerc/notanos also incomplete. I go back and add things to it from time to time. It's my long term toy project.


Shouldn't RUST_LOG use "microservice_rs" instead of "microservice"? The "_rs" is part of the crate name.

Another thing: in the make_post_response stub function, I think it's better to use StatusCode::NotImplemented instead of NotFound (when trying this code I thought the match did not work correctly, as everything returned NotFound).
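
i.e. something along these lines for the stub (a sketch in the hyper 0.11 style the post uses, not the post's exact signature):

    extern crate hyper;

    use hyper::StatusCode;
    use hyper::server::Response;

    // A stub that says "not implemented yet" rather than "not found".
    fn make_post_response_stub() -> Response {
        Response::new().with_status(StatusCode::NotImplemented)
    }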


The amount of ad network traffic rolling off of this page is staggering.

Made more amazing by the fact that only one ad shows on the entire page.


Excellent guide, something that I'll need in the near future.

I wonder why the author chose hyper instead of iron though...


I'm pretty sure Iron is no longer maintained.


Yea unfortunately Iron isn't actively maintained anymore.


Or Rocket/Gotham. Personally, I preferred working with Rocket far more than Iron, and that despite the fact that Rocket is nightly-only.


The plan is for Rocket to run on stable by the end of 2018. v0.4 is getting closer and the two biggest features remaining to be implemented for it are connection pooling (which I have an open PR that's being worked on) and CSRF support.


Can you cite this claim? I've taken a look at the list of unstable features that Rocket uses, and at times it almost seems like a deliberate ploy to use every nightly feature imaginable. And the feature that it really does actually need for its API, procedural macros, isn't guaranteed to land this year AFAIK (of course, I'd love to be wrong).


Sergio has said it on IRC multiple times.

Procedural macros are not only landing this year, but in the first half of the year. See the roadmap.


I've seen that macros 2.0 are landing this year, but it's unclear to me whether "macros 2.0" only refers to `macro!` or also to procedural macros.


Macros 2.0 means both Macros By Example 2.0 and Procedural Macros 2.0.

:confetti_ball:


> Nevertheless, I believe taking this route and going slightly lower level with Hyper gave you some nice insights into how you can leverage Rust to write a safe and performant webservice.

Using hyper directly definitely gives you a better idea how everything fits together and it is much more helpful for understanding how Futures work.


Diesel is a synchronous library, so the post's database functions make a blocking call and then return a future which, of course, doesn't make them asynchronous. They'd still block the event loop.

You could fix this by passing a `&futures_cpupool::CpuPool` into your database functions and wrapping the post's function bodies in `pool.spawn_fn(|| { ... })` so that they execute on a different thread and return a future of their result.

To have a global IO pool that you pass into fns, you can store it in the Service struct:

    struct Microservice { io_pool: CpuPool }
Now you can access it in your service's handler (the call() fn) with `self.io_pool`.
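
A minimal sketch of the `spawn_fn` wrapping (names are illustrative, not the post's actual handlers):

    extern crate futures;
    extern crate futures_cpupool;

    use futures::Future;
    use futures_cpupool::{CpuFuture, CpuPool};

    // Run the blocking (Diesel-style) call on a pool thread and hand back
    // a future, so the event loop stays free.
    fn blocking_query(pool: &CpuPool) -> CpuFuture<u64, ()> {
        pool.spawn_fn(|| {
            // Pretend this is the synchronous database call.
            std::thread::sleep(std::time::Duration::from_millis(50));
            Ok(42)
        })
    }

    fn main() {
        let pool = CpuPool::new(4);
        println!("{}", blocking_query(&pool).wait().unwrap());
    }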


Or use pg_async[1] that integrates with Futures/Tokio/Hyper.

[1] https://crates.io/crates/pg_async


I've never touched Rust or Diesel... but what is the point of a synchronous library that makes blocking calls _and returns futures_?

(I mean - why futures, if the call was blocking and presumably have a real result to return?)


Diesel doesn't return futures; the author of the blog post returned a future, and the person you replied to is pointing out that that's pointless because Diesel is synchronous.


Wild guess here, but it might make it compatible with frameworks or middleware that expects futures.



