I haven't looked too deeply into Rust yet, but was able to understand this coming from Elixir. Pattern matching makes for a beautiful solution. This is a similar solution in Elixir:
    fizzbuzz = fn(x) ->
      case {rem(x, 3) == 0, rem(x, 5) == 0} do
        {true, false} -> IO.puts "fizz"
        {false, true} -> IO.puts "buzz"
        {true, true} -> IO.puts "fizzbuzz"
        _ -> IO.puts x
      end
    end

    Enum.each Range.new(1, num), fizzbuzz
Since functions are also pattern matched in Elixir (and Erlang!) it could also be done without using case and handled purely as functions.
Python doesn't have pattern matching but the code is basically the same. I guess better cases for showing off the feature are ones where the patterns aren't just True/False tuples.
    for i in range (1, 101):
        fbsign = (i % 3 == 0, i % 5 == 0)
        if fbsign == (1, 1): print("Fizzbuzz")
        elif fbsign == (1, 0): print("Fizz")
        elif fbsign == (0, 1): print("Buzz")
        else: print(i)
Well, here's an exhaustively checking version. Not statically still of course.
    for x in range(1, 101):
        print([
            # rows chosen by x % 5 == 0, columns by x % 3 == 0
            [x,      "Fizz"],
            ["Buzz", "FizzBuzz"],
        ][x % 5 == 0][x % 3 == 0])
(I'll note this was harder to get right than if-checks and pattern matching, so I won't be replacing such logic with matrices in my Python program any time soon...)
Some people are saying this is extremely non-idiomatic Python. I think most of the problem is not following the style guides. Here's a PEP 8-compliant solution that is a bit more idiomatic, and almost as compact.
In Python, one way to approximate pattern matching is with dictionaries of functions.
    fizz_buzz = {(True, True):   lambda x: "Fizzbuzz",
                 (True, False):  lambda x: "Fizz",
                 (False, True):  lambda x: "Buzz",
                 (False, False): lambda x: x}

    for i in range(1, 30):
        fbsign = (i % 3 == 0, i % 5 == 0)
        print(fizz_buzz[fbsign](i))
I see it as overthinking simple stuff. Maybe I think about performance too much, but constructing a dictionary and then defining functions to do a simple dispatch is imo just over-designing things. Too much abstraction for expressing a simple concept. Performance aside, you need to go find the dictionary once you see code like that, while with a chain of if-else statements it's right there in front of your eyes.
As to the 0 and 1 thing... Python has a C-ish boolean (basically an integer type). Using True and False as numbers is completely standard, as PEP 285 indicates. I tend to agree that maybe using True/False here is a bit more natural, but it really is a type-purity nitpick in my view.
Could you point out which part of PEP 8 the code in my original post violates? While I don't really like a lot of the currently popular style, I see a lot of value in following the PEPs and long-established conventions.
I contemplated using `match` in the article, but decided that there was enough to think about and that it was long enough already. Thanks for pointing it out, though—it certainly is a great thing!
Weird to see this mix of a very imperative for-range iterative loop with a very functional pattern match, which makes it look similar to an SML or OCaml solution to FizzBuzz. I guess this is the definition of multi-paradigm right here.
Will Rust's type checker warn you of a non-exhaustive pattern match?
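For reference, here is a minimal sketch (in current Rust syntax, not the article's code): this match covers every case, so it compiles, and as the comment notes, removing the final arm turns it into a hard compile error rather than a warning.

    fn main() {
        for i in 1u32..101 {
            match (i % 3, i % 5) {
                (0, 0) => println!("FizzBuzz"),
                (0, _) => println!("Fizz"),
                (_, 0) => println!("Buzz"),
                // Removing this final catch-all arm makes the match non-exhaustive,
                // and rustc rejects the program with a hard error, not just a warning.
                (_, _) => println!("{}", i),
            }
        }
    }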
Rust was first conceived by an avid OCamler, and it was originally implemented in OCaml too. Although the pot has been stirred quite a bit since those early days, the influence still remains, including the expression-heavy programming style, pattern matching, `let`s, HM inference, and `var: T` declaration syntax. Whilst Rust is quite procedural (you rarely use recursion), it often feels quite functional because of those things.
It wasn't strictly HM, as it had extensions for the subtyping that lifetimes require. It was based on HM, however.
The new bespoke scheme gives approximately the same results as HM but is drastically simpler. For all I know this could inhibit Rust's future ability to do even more powerful things with types, but AIUI this scheme has the advantage of being actually decidable given the extensions to HM that we would require in the current language.
I ain't a type theorist though, so take this as hearsay. :)
Strictly speaking, I think almost no extant languages, and certainly no mainstream ones, use pure HM, but many take it as a starting point. Certainly, HM has no notion of ML modules, or type classes, or record types, or lifetimes. Nevertheless many languages using those things use HM as a starting point.
I'm curious (and a bit skeptical) of your claim that the scheme is "drastically simpler" than HM. HM is a beautifully simple design, which can be expressed (abstractly) in just a couple of lines.
"This scheme simplifies the code of the type inferencer dramatically and (I think) helps to meet our intutions (as I will explain). It is however somewhat less flexible than the existing inference scheme, though all of rustc and all the libraries compile without any changes."
Pattern matching is one of the ways to get "two-mod" code where the modulus operator is used two times. For example, string concatenation and assignment operators do the same in this Python code:
    for i in range(1, 101):
        x = ""
        if i % 3 == 0:
            x += "Fizz"
        if i % 5 == 0:
            x += "Buzz"
        if x == "":
            x += str(i)
        print(x)
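For comparison, roughly the same "two-mod" concatenation approach in Rust (a sketch only, in current syntax): building the text up piece by piece needs an owned, growable String.

    fn main() {
        for i in 1u32..101 {
            let mut x = String::new();
            if i % 3 == 0 { x.push_str("Fizz"); }
            if i % 5 == 0 { x.push_str("Buzz"); }
            if x.is_empty() { x = i.to_string(); }
            println!("{}", x);
        }
    }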
If anybody is randomly curious it can be fun to solve FizzBuzz in Haskell the same way that this Python code does it, but (to be more idiomatically Haskell) storing the FizzBuzz success in a Maybe (or some other variable). If you define the appropriate `~>`, and higher-precedence `|~`, and higher-precedence `|~~>`, you can write the above function as:
    fizzbuzz x =
        mod x 3 == 0 ~> "Fizz"
        |~ mod x 5 == 0 ~> "Buzz"
        |~~> show x
It's interesting because it's sort of a "follow-through guards" situation; the (|~) operator can at least be given the type signature (Monoid m) => Maybe m -> Maybe m -> Maybe m.
I guess beauty is in the eye of the programmer. I'd choose Python's or Ruby's FizzBuzz. It's beautiful that everyone can immediately understand those. This one, not so much. As a little experiment, I've deliberately avoided learning Rust to see if I can understand its idioms without reading any docs. I can sort of guess at what's going on here by reverse engineering what should happen with FizzBuzz, but it's not at all intuitive. For example, as an outsider, I'd expect it to be (0, 1) instead of (0, 0) since it's matching both the 0th and 1st patterns. Whereas (0, _) would be "0th pattern but not the 1st," or something, even though that really wouldn't make much sense because "0" would refer to which pattern it's matching, rather than the position of the argument determining which pattern it's matching. Etc.
If Rust is the most robust way to solve a problem, it should naturally catch on. It seems pretty promising in that regard.
EDIT: As a counter to my comment, my argument would be equally applicable to Lisp, and Lisp is beautiful. So my argument is probably mistaken.
Maybe someone has to learn a language before judging whether it's beautiful.
    def fizzbuzz(x)
      case [x % 3 == 0, x % 5 == 0]
      when [true, false] then puts "fizz"
      when [false, true] then puts "buzz"
      when [true, true] then puts "fizzbuzz"
      else puts x
      end
    end
The one and only time I was asked FizzBuzz in an interview, I wrote it this way, so it's not all that contrived (in my opinion, anyway!)
It must just be a matter of familiarity. Rust's pattern matching is quite similar to other languages with pattern matching.
For example in OCaml the match statement would be:
    match (i mod 3, i mod 5) with
    | (0, 0) -> Printf.printf "Fizzbuzz"
    | (0, _) -> Printf.printf "Fizz"
    | (_, 0) -> Printf.printf "Buzz"
    | (_, _) -> Printf.printf "%d" i
Other than the irrelevant syntax bits like 'with', the semantics are identical: in-order matching with no fall-through, and _ for unnamed, unused bindings.
To me with very limited experience in it, Rust really feels like OCaml with a skin that C programmers will understand.
Oh, okay! Cool! I didn't realize it'd be valuable to anyone. I'll try to put together something for you, and I'll take it seriously so that it isn't biased one way or the other. I have some stuff coming up, but after seeing some incredibly neat stuff written in Rust, I'm planning on doing a project myself, and I'll email you with a raw braindump of my first experiences with the language, along with a list of previous languages I've learned as well as my experience level with each. Thank you for maintaining Rust's docs!
To be fair, you can bind to any name. So, binding to `a` instead of `_` would work as well. You could then use that bound value in the corresponding expression.
However, I would have expected rustc to complain about unused variables in
    fn main() {
        for i in range(1i, 101) {
            match (i % 3, i % 5) {
                (0, 0) => println!("Fizzbuzz"),
                (0, a) => println!("Fizz"),
                (b, 0) => println!("Buzz"),
                c => println!("{}", i),
            }
        }
    }
but neither the playpen nor yesterday's snapshot complains. And if you want to suppress warnings about unused variables, you prefix the variable with an underscore, or just use the underscore on its own, which has become the common way to say "I don't care what value gets bound to this name."
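For illustration, a minimal sketch (in current syntax rather than that snapshot) of the same match with the unused bindings silenced in the ways just described:

    fn main() {
        for i in 1i32..101 {
            match (i % 3, i % 5) {
                (0, 0) => println!("Fizzbuzz"),
                (0, _) => println!("Fizz"),   // bare underscore: nothing is bound at all
                (_n, 0) => println!("Buzz"),  // underscore prefix: bound, but no unused-variable warning
                _ => println!("{}", i),
            }
        }
    }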
That's a very useful feature. Maybe I'll go ahead and learn Rust now. If it has features like pattern matching, which seems about ten times more useful than the classic switch statement, then it probably has a lot of other insights worth learning.
If you were to start a hypothetical project written in Rust, what would it be? I'm looking for something to cut my teeth on.
I would suggest you port over a project that you are already familiar with. It's easier to learn a new syntax when you don't have to grapple with implementation as well. And you get to have an objective comparison of the same project implemented 2 different ways.
> As a little experiment, I've deliberately avoided learning Rust to see if I can understand its idioms without reading any docs.
A result of applying this approach to languages in general would be only knowing a couple of fairly similar languages. Which is bad. Really, really bad. Language influences our way of thinking in a non-trivial way, knowing only one kind of language is limiting.
Even worse, you're going to stay constrained forever to one family of languages that you (probably) didn't even choose yourself. In the current world it's OK if you were introduced to a C-like language as your first, but what if it was Pascal or Scheme?
In short: DON'T DO THIS. Learn more languages, the more FOREIGN (i.e. you can't understand anything without docs) the BETTER. The 'intuitive' languages only let you express the same solution again and again, while breaking AWAY from your intuitions and learning 'non-intuitive' languages lets you see and implement DIFFERENT solutions.
> As a little experiment, I've deliberately avoided learning Rust to see if I can understand its idioms without reading any docs.
As an experiment in what? Whether Rust code makes some sense to you depends to an extent on what languages you already know (I guess ML languages would help). Same as with Ruby and Python.
I fail to see how a simple switch statement doesn't read just as easily. I mean, sure I have to know the mod 15 trick, but... not exactly hard.
    function fizzBuzz(i) {
        switch (i % 15) {
            case 0: return "fizzbuzz";
            case 5:
            case 10: return "buzz";
            case 3:
            case 6:
            case 9:
            case 12: return "fizz";
            default: return i.toString();
        }
    }

    [1,2,3,4,5,6,7,8,9,10,11,12,13,14,15].map(fizzBuzz)
I actually agree it is better. But... my point was that the switch statement was already pretty readable. If there are gains, they feel pretty small in these examples.
And to be clear, I like pattern matching. A lot. I just don't feel this really shows it off that well.
For what it's worth, String in Rust is similar to StringBuffer in other languages. You can append to a String; you can't append to a slice, which always represents a fixed view.
A slice has storage that is borrowed from somewhere else, but it itself does not have its own storage.
When you type `"foo"`, you are creating "static" storage (in the binary) and the slice is borrowed from that fixed-position location in memory.
Mostly, your functions produce Strings and consume slices.
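A minimal sketch of that distinction (the function name `shout` is made up for illustration): the function borrows a slice and produces an owned String, and only the String can be appended to.

    fn shout(name: &str) -> String {      // consume a slice, produce a String
        let mut owned = name.to_string(); // copy the borrowed data into owned, growable storage
        owned.push_str("!");              // appending works on a String; a slice has no such method
        owned
    }

    fn main() {
        let greeting: &'static str = "hello"; // borrowed from fixed "static" storage in the binary
        println!("{}", shout(greeting));
    }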
I don't think a common ancestor is how you want to do this. Instead, I'd encapsulate the common behavior in a trait (a.k.a. an interface, in other languages), and then write functions that accept any parameters that implement that trait. In Rust, you can do this even on "built-in" types like strings. For an example, see the `print_me` function below, which operates on both string types using a trait that I've defined myself.
    trait WhatIsThis {
        fn what_is_this(self);
    }

    impl WhatIsThis for String {
        fn what_is_this(self) {
            println!("'{:s}' is a string!", self);
        }
    }

    impl WhatIsThis for &'static str {
        fn what_is_this(self) {
            println!("'{:s}' is a string slice!", self);
        }
    }

    fn print_me<T>(me: T) where T: WhatIsThis {
        me.what_is_this();
    }

    fn main() {
        print_me("Blah blah blah");
        print_me("Yada yada yada".to_string());
    }
I think this is an anti-pattern. I'm not a fan of the slicing syntax in general, which seems to exist only to paper over the missing traits that wycats mentions.
Also, the location of storage is slightly more visible (in general) in Rust than in other languages.
I have personally found this to be pretty clarifying, because as much a we may like to abstract over it, the location of storage often worms its way into the programming model even in HLLs.
And that indeed is the real distinction between what Rust has and what other languages tend to have—the fact that &str doesn’t have its own storage. The equivalent to string types in other languages would be more like SendStr.
That functions produce Strings and consume slices is a good way of expressing it.
That was informative... and it reinforced my perception of Rust as something to look into if I ever have to do something that absolutely necessitates the use of something low level like C++/Assembly. But for everything else that I can get away with (and that is a lot so far) I'll stick with Go because it's so much faster and shorter to write.
I think it's, in general, useful to differentiate the speed of code-writing as a new developer and the speed of code-writing as an experienced developer.
The reality is that people have very little time to try out new languages, and have to rely on anecdotes about the long-term cognitive costs of things like this.
My personal experience is that many of the seemingly more-onerous things about Rust end up falling away once the rhythm of programming sets in.
This is likely similar to how the error-handling approach of Go looks onerous at first, but seems to be something that doesn't slow people down too much in practice.
This is not obviously true to me: Rust provides more tools for building abstractions, even just "handle errors less manually" abstractions, so the two are possibly not that different (this certainly applies to the 'shorter' claim).
I guess it may be true for the things Go is suited/designed for; time and experience will tell.
I'm pretty happy with the speed at which I write Rust code, but I can definitely churn out Go code more quickly. (Probably on the order of how quickly I can write Python, although refactoring Go code is much faster.) I'm not sure exactly why, but my guess is that there are fewer abstractions to deal with (and fewer opportunities to make abstractions). I have written several medium Go applications (near or above 10 KLOC, which ideally, I would never hit in a dynamic language), and I'm pretty happy with how the code turned out in all but one of them. (But that one is a window manager.) I haven't yet written a similarly sized Rust application, though.
I do write Rust code more quickly than I write Haskell code, though. :-)
I was referring to "development using C++", which encompasses the language, compilers, build tools, code editors, ecosystem, basically everything that is involved when you're doing work and distributing result binaries.
There are IDEs that have their proprietary project formats that are incompatible.
Having header files makes refactoring by hand much more difficult than necessary, yet refactoring tools for C++ are mostly not possible.
Regardless of IDEs, I'd still need to learn how to use cmake and other systems to build and use libraries. It's a large pain compared to `pip install ...`.
> ...need to learn how to use cmake and other systems to build and use libraries.
On OSes for which this is true you'd be forced to compile python yourself as well, because it's also just a dependency (of pip, for one!) written in a compiled language.
Edit: My point is: usually it's as easy as 'yum install', but on the rare occasion you will need to compile, I admit. However, that could happen with pip too; don't tell me its repos are always completely up to date. And in those cases it won't be quite as simple as 'pip install'.
Now, of course, no one would program that way, but I think it does help visualize what really happens.
The obvious cost from delaying the printing is that you have to branch a second time, later in the code, to consume the value. I wonder how feasible it would be to introduce some kind of compiler transform that could invert the control flow, essentially pasting the surrounding code into the inner branches, to make this abstraction cost-free.
"may not compile" would be a better title than "may not work". To a programmer familiar with compiled languages, "may not work" implies that the code will compile but produce the wrong result/a crash/undefined behavior. Rust aims to catch mistakes at compile time, so the headline is quite sensational under this interpretation.
> There is a trade‐off here for them; as a general rule, such languages have immutable strings
This is a bit disingenuous. I understand what he's aiming at, and he also mentions StringBuilder later on, but saying that a GC necessitates immutable strings is simply not true.
As a counterexample: PHP has mutable strings and uses copy-on-write in situations where it "feels" that conflicts could occur. (Granted, PHP's rules on how it handles its variables are a bit arbitrary and magical, and PHP didn't have a GC until version 5.3... but the argument still stands.)
I think that "{true} if {cond} else {false}" is quite an unnatural and confusing construct, especially when you attempt to nest it. Although I'm not really familiar with Python, I thought that example was concatenating 'FizzBuzz' with the value of some nested ternary expression, and only realised the order was inverted when I tried to parse the inner one.
The vast majority of conditionals in languages I know follow the {cond} {true} {false} order: IIf({cond}, {true}, {false}) in VB, SQL, and spreadsheets; if({cond}) {true} else {false} and ternary {cond}?{true}:{false} in C and C-derived languages; (if {cond} {true} {false}) in the Lisp family; if {cond} then {true} else {false} in ALGOL/Pascal, etc. There's probably a reason for this order, as seeing a condition in the middle of an expression feels surprising and unexpected.
Python follows the same order in its if statements, but it also has a one-line conditional expression, which is what the parent uses. Many Python programmers enjoy one-liners, but I think once you start adding else clauses to them they become unreadable.
The OP's characterization is reasonably accurate in my experience. I've run into several Python programmers who didn't actually know about Python's special "if expression" syntax.
Moreover, the code you've presented here certainly isn't idiomatic, which counts for something.
sure, but when you're implicitly comparing code segments (by placing them next to each other), you should at least make the effort to make them more the same, instead of pointing out that one language is missing a feature used in the other language, especially when this claim is false.
the formatting can of course be improved:
    for i in range(1, 101):
        print('FizzBuzz' if i % 15 == 0 else
              'Buzz' if i % 5 == 0 else
              'Fizz' if i % 3 == 0 else
              i)
there's also a suspicious "return" at the end of the second code segment which mysteriously appeared some time after the first one; looks like the author was trying a little too hard to differentiate python and rust.
I disagree. I think code comparisons should be done using idiomatic code. I personally would not consider chaining `if` expressions like you've done here idiomatic Python.
    let result = if i % 15 == 0 {
        "FizzBuzz"
    } else if i % 5 == 0 {
        "Buzz"
    } else if i % 3 == 0 {
        "Fizz"
    } else {
        i
    };
in rust? either this is good, readable code or this is poorly written, unintelligible code. you cannot make the argument that sometimes it is readable and sometimes not based on the presence of braces.
I've seen it quite frequently and kind of like it, because it doesn't introduce any state that could leak out or get mutated from somewhere else. The ternary operator doesn't make as much sense in Python as in other languages, though, since there is no const keyword; otherwise that's what the ternary operator is usually used for.
I’ve certainly written `a if b else c if d else e` before, and it reads perfectly naturally—but you do want to be careful doing such things. They’re very easy to overuse.
I deliberately didn’t go about omitting the number 15 or the string FizzBuzz, because that would have distracted from the key points I was making about Rust. It is not possible to make it as efficient under those constraints—you end up needing either more than one print call, or to use an owned string, where I was able to end up with a solution that didn’t require any heap memory at all.
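To make the trade-off concrete, here is a rough sketch (not the article's code; `fizzbuzz_owned` is a made-up name, written in current syntax) of what happens once the combined case has to be assembled at runtime rather than matched against a literal: you end up producing an owned, heap-allocated String.

    fn fizzbuzz_owned(i: u32) -> String {
        let fizz = if i % 3 == 0 { "Fizz" } else { "" };
        let buzz = if i % 5 == 0 { "Buzz" } else { "" };
        if fizz.is_empty() && buzz.is_empty() {
            i.to_string()                  // owned String: heap allocation
        } else {
            format!("{}{}", fizz, buzz)    // owned String: heap allocation
        }
    }

    fn main() {
        for i in 1..101 {
            println!("{}", fizzbuzz_owned(i));
        }
    }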
yeah. I'm being too nitpicky. I guess part of the "beauty" is that 3*5==15 but I realized that you'd need more complexity after hitting the reply button.
Scala guy here. This kind of thing is useful - I'm happy to pay the costs of garbage collection so I don't need it for ownership, but for separating out async operations from sync operations, or database-transactional operations from non-database operations, it's great to be able to represent that difference in the type system (and without a huge syntactic overhead). But then if you want to abstract over types like MaybeOwned, you need higher-kinded types to be able to work with them effectively. Has Rust got any further towards implementing those?
In the contrived Python example, FizzBuzzItem is only called if the result is a number (not in any of the modulo 0 cases) - is that intended? I can see that it works, but it's breaking the analogy for me with the Rust code.
You know Rust team, it almost might be worth specializing this exact error message about mismatched string lifetimes to include a URL to this post, if not mismatched lifetimes in general.
We have unique diagnostic codes for each error, and I have plans (and an in-progress PR) that points to a web page with a much longer "this is what this error looks like, here are some strategies with how to fix it" in the works.
The feature list in Rust really has caught my eye. The biggest one in particular was type inference.
The reason type inference was such a big one was because, if you use it right, annoying situations like "Two types of strings? What is this?" go the hell away. You have three types: static built-in and binary strings, and a third that only makes the guarantee that the datatype can do all the things a string ought to be able to do, and from there the compiler works out the practical implementations.
This article has done a great job in killing my enthusiasm for the language.
I guess its implementation of type inference only goes as far as Go's, judging by that const keyword.
Maybe I was being a bit naive in what I was expecting, hell maybe what I'm expecting isn't reasonably possible. bleh.
Type inference doesn't exactly paper over the differences between types automatically. It just infers types, and doesn't complain as long as all the types line up.
Consider doing something similar in Haskell, setting a variable to either be a string or Text:
    GHCi, version 7.4.1: http://www.haskell.org/ghc/  :? for help
    Prelude> import qualified Data.Text as T
    Prelude T> let x = (if True then "foo" else T.empty)

    <interactive>:3:32:
        Couldn't match expected type `[Char]' with actual type `T.Text'
        In the expression: T.empty
        In the expression: (if True then "foo" else T.empty)
        In an equation for `x': x = (if True then "foo" else T.empty)
Sure, Haskell can sometimes auto-infer very complex types, and has more extensive type inference than Rust does. But it's not magic, and will not do everything for you.
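For comparison, a minimal sketch of the analogous situation in Rust (assuming current syntax): the two arms of an `if` must agree on a single type, so mixing a &str and a String is a type error unless one side is converted explicitly.

    fn main() {
        let flag = true;
        // let x = if flag { "foo" } else { String::new() };          // rejected: the arms are &str vs String
        let x = if flag { "foo".to_string() } else { String::new() }; // fine: both arms are String
        println!("{}", x);
    }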
What you're asking for is not type inference, but something else. Perhaps what you really want is weak typing (automatic type conversions), or message sending (which cannot be statically dispatched).
Rust has type inference similar to Haskell: type information can flow "backwards". It is very different to Go and C++ where types of locals are 'inferred' from their initialiser, and nothing else.
E.g.
    fn main() {
        let mut v;
        if true {
            v = vec![];
            v.push("foo");
        }
    }
is a valid Rust program: the compiler can infer that `v` must have type `Vec<&str>` based on how it is used. I don't think it's possible to syntactically write a Go program at all similar to this (a variable has to be either initialised or have a type), and the C++ equivalent (using `auto v;`) is rejected, as `auto` variables need an initialiser.
The analogous thing in C++, although not exactly the same, is to use a `make_vector` wrapper, like
template<typename...Args>
inline auto make_vector(Args&&...args) {
using T = typename std::common_type<Args...>::type;
return std::vector<T>{{std::forward<Args>(args)...}};
}
...
auto v = make_vector("asd", "dsa", std::string("asdsa"));
It will obviously not deduce types after the vector is declared, but it's as close as one gets to type deduction based on the vector's content.
There is one instance in C++ where information does flow backwards in a sense: disambiguating template overloads. For example,
    using fn_type = std::vector<int> (&)(int&&, int&&, int&&);
    auto v = static_cast<fn_type>(make_vector)(1, 2, 3);
In this case, the static_cast information flows "back" to the type deduction of `make_vector` to deduce what Args&& is. This is not very useful, just a curiosity.
Doesn't type inference stop at function boundaries, though? I'll grant you that idiomatic Haskell uses type annotations for function signatures (unlike idiomatic OCaml), but it is optional (which is convenient in a REPL).
The type inference does mean that you can care less about what specific type you’re working with, and that it is rare that you will need to write types out (except in signatures—the type inference is deliberately only local), but the distinctions are certainly still there, and due to the nature of the language must be.
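To illustrate that point about signatures versus locals, a minimal sketch (current syntax; `double_all` is a made-up example): the signature is spelled out, but every local is inferred, including "backwards" from later uses.

    // Hypothetical example: explicit signature, inferred locals.
    fn double_all(input: &[i32]) -> Vec<i32> {
        let mut out = Vec::new();   // element type not written here...
        for x in input {
            out.push(x * 2);        // ...but inferred from this use: Vec<i32>
        }
        out
    }

    fn main() {
        println!("{:?}", double_all(&[1, 2, 3]));
    }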
Really, this is showing one of the trickier parts of Rust, perhaps to balance the claims of excessive bullishness about Rust that I have heard levelled at me! Don't let it dim your enthusiasm too far; Rust is still very much worth trying out in practice.