The precise domain modeling is the real paradigm shift.
The whole point of static typing is to inform the compiler about your intent so that it can provide guarantees about correctness. F# makes it easy to define lots of small types that precisely model state so that you can give more responsibility to the compiler to verify your program.
I cannot overstate how important this is to maintaining correct software over time. If you guard the boundaries where you touch non-F# code, you need far fewer unit tests and probably even fewer runtime checks, because you already know that if the types are right, so is the logic. And when you do need to write unit tests, the functional style makes them very easy to write.
I have personally struggled with domain complexity in C# that I was able to model precisely in F# and have it work perfectly on the first try.
I would say it's the same with Rust and Haskell. And I agree, the superior type system of these languages is really a game changer for me and it's very hard to go back to languages missing these features.
I think a lot of it could be personality-based. With C, Lisp and Forth you bash out code and run it, see if it works and then make changes. You get much faster visual feedback.
With Haskell, you need to put in a lot more up-front thought. Then stuff doesn't compile and you spend ages working out why. You can go for hours without having anything running. The cause of the errors can be quite abstract and obtuse and may seem like it has nothing to do with the actual problem you are trying to solve.
GHC Haskell has features to defer type errors to runtime (e.g. -fdefer-type-errors) and a really good REPL. You can do interactive development pretty well this way.
The way I write PHP isn't that different from how I write Haskell, especially with Phan running on save. PHP actually requires a bit more thinking in advance because I can't refactor it quite as easily.
What could also be taken into account is that I used to be up there preaching the static typing gospel. And then I gradually got fed up with the ceremonies, since they didn't pull their own weight and couldn't keep up.
It's all compromises. The claim is that in return for following rigid rules, they guarantee this and that. Sort of the same deal the state has pushed down our throats since forever. And for a while it seems reasonable, until you pull hard enough to notice the chains around your ankles.
I have spent some time loving all 3 of those languages, and also, most recently, F#.
I really do think that an ML-style type system is the better way to do domain modeling. The compiler support you get in F#, such as ensuring that your match expressions are complete, is nice. But what really makes me enjoy working that way is that (the non-OO bits of) F#'s type system make it very easy to create domain models whose intent and inner workings are just obvious.
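For example, a minimal sketch of that completeness checking (the PaymentMethod type is made up for illustration):

type PaymentMethod =
    | Cash
    | Card of last4: string
    | Invoice of dueDays: int

// If a new case is added to PaymentMethod later, this match produces an
// "incomplete pattern matches" warning until the new case is handled.
let describe payment =
    match payment with
    | Cash -> "paid in cash"
    | Card last4 -> sprintf "card ending in %s" last4
    | Invoice dueDays -> sprintf "invoice due in %d days" dueDays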
You're right, it's not correct to cast the differences as "missing features" - it's just different ways of doing things. And each has its own advantages and disadvantages. F#'s set of tradeoffs is just the one that most suits my tastes when I'm building LOB applications.
One of the features that makes F# great for organization is the sequential ordering of compilation units, which makes it easier to understand code dependencies and makes it awkward to create mutually recursive types (requiring them to be in the same code file, joined with `and`, or declared in advance in signature files).
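For reference, a minimal sketch of the same-file `and` form (the Employee/Department pair is purely illustrative):

// Mutually recursive types live in the same file, joined with `and`.
type Employee =
    { Name: string
      Department: Department }

and Department =
    { Title: string
      Staff: Employee list }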
This might sound like an unwanted restriction, but 9 times out of 10 having mutually recursive types is a sign of a code smell. You can almost always model your problem better with a slight indirection through an interface or function; it also helps avoid the need for mutation and makes testing simpler.
I disliked the module/file order restriction when I started, but I have come to appreciate it, because every time it felt like a thorn in my side it turned out that the thorn was prodding me towards a better organizational structure.
I've even found that apart from enforcing better design, it can also trigger it. As an illustration, I've had a function 'f' in module 'C' that I decided made more sense in module 'A'. When I moved the function, however, module 'A' would no longer compile because 'f' depended on module 'B'. No problem, I thought, I'll just move module 'A' below module 'B'. Whoops, now module 'B' won't compile; I didn't realize it was dependent on module 'A'. If I resist the urge to just revert everything and return the function 'f' to module 'C', and instead investigate "why does 'f' feel like it belongs in module 'A' and yet placing it there introduces circularity...", often I'll discover a beneficial refactoring that I probably would not have thought of otherwise.
It's a mouthful and you'll still have to deal with the standard library, or other third-party libraries, which produce and consume Option<T>, but you can address that with some aliases and conversion operators if you want to go all-in:
type 't voption = ValueOption<'t>
type VOption<'t> = ValueOption<'t>

let inline (!) opt =
    if System.Object.ReferenceEquals(opt, null) then ValueNone
    else match opt with None -> ValueNone | Some x -> ValueSome x

let inline (?) vopt =
    match vopt with ValueNone -> None | ValueSome x -> Some x;;

// usage
let x = !(List.tryFind (fun x -> x = 0) [1]);;
// [<Struct>] val x : int voption = ValueNone
There is Mechanic (https://github.com/fsprojects/Mechanic), an OSS project that's meant to take away some of the pain, though I don't have any experience using it and I'm not aware of how useful it is in practice.
We found it quite nice for properties of objects that appear over time. Think of things like an Order that might or might not have delivery details yet. In C# you end up making classes with nullable delivery timestamps, delivery person, etc. One or two properties isn't that bad, but it gets onerous when you start to have constraints like "these four properties are either all null or all populated". In F# it is trivial to set up a new constructor for Delivery that includes all of these properties. There is no unspoken agreement; the constraint lives in the model.
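Roughly, a made-up sketch of the shape described above:

type Delivery =
    { DeliveredAt: System.DateTime
      DeliveredBy: string
      TrackingNote: string }

type Order =
    { Id: int
      // Either the whole Delivery is present or none of it is; no
      // "these four nullable columns must be set together" convention.
      Delivery: Delivery option }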
I see, although it looks like you're really after a state-machine there. Which I admit aren't the easiest to create in C#, and immutability and null-safety definitely make it easier...
Although personally I'd rather have the different states encapsulated in separate classes than a single type that encapsulates all possible states and enforces them through the constructor.
Also, it defines them as nested classes and marks the outer class as `sealed`, since they're closed types, which prevents the type from being extended with new cases.
C# can't do this because the compiler rejects putting `abstract` and `sealed` on the same type.
Why do you struggle with this in C#, especially given that you are familiar with the F# style? I don't know C#, but in Java I'd write this as:
public class Order {
    final Optional<DeliveryDetails> delivery;
    public Order(Optional<DeliveryDetails> delivery) {
        this.delivery = delivery;
    }
}

public class DeliveryDetails {
    final long deliveryTimeMs;
    ...
    public DeliveryDetails(long deliveryTimeMs, ...) {
        this.deliveryTimeMs = deliveryTimeMs;
        ...
    }
}
My IDE writes most of these lines for me. I believe C# will have similar or more succinct constructs.
The IDE will maybe write a small part of it, but you'll have to keep reading it forever.
And it is obviously modelled wrong, because it is possible to have a delivered order without delivery details; nothing enforces it.
Compare it with how I would write it in F#:
type OrderId = OrderId of string
type DeliveryTime = DeliveryTime of int64
type DeliveryDetails = { deliveryTime: DeliveryTime; ... }
type Order = { id: OrderId; ... }
type DeliveredOrder = { id: OrderId; deliveryDetails: DeliveryDetails; ... }
Probably it is even better to define delivery time using units of measure and specifying it as ms.
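Something like this (the `ms` measure here is just an assumed example):

[<Measure>] type ms

type DeliveryTime = DeliveryTime of int64<ms>

let t = DeliveryTime 500L<ms>
// let bad = DeliveryTime 500L   // does not compile: plain int64 lacks the ms measure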
What is the difference with this approach?
That an Order cannot ever have DeliveryDetails, while a DeliveredOrder must have DeliveryDetails.
As a bonus you can't just pass any string as an order id (for example a description); you need to pass an actual order id. The same is true for the deliveryTime, with the added advantage that you won't be able to perform operations on it with a different unit of measure.
In Java or C# you would kill yourself trying to do something similar, and moreover you won't have all the constraints specified here or the immutability automatically enforced.
they said "an order cannot ever have DeliveryDetails"
So, if in a nominative subtyping situation like you suggest, a DeliveredOrder is-an Order, then we can see that some subset of Orders CAN have DeliveryDetails. Any method receiving an Order could receive a DeliveredOrder.
No, most of the time that is wrong.
If you have a DeliveredOrder you want to make sure that it is not delivered again, so the delivery function should accept only an Order, not a DeliveredOrder.
If in some different system you need a domain object that covers both an Order and a DeliveredOrder, then in F# you simply use a union type:
type Order = UndeliveredOrder of UndeliveredOrder | DeliveredOrder of DeliveredOrder
And in this way you can write a function that accepts either an UndeliveredOrder or a DeliveredOrder.
Perhaps, perhaps not. In the abstract there's no way to evaluate it. It depends on what it means to be an Order and what it means to be a DeliveredOrder, what assumptions are made by code that receives an Order, etc.
> i agree it will take much more time to implement that in c#.
If I am reading the code correctly, here is the Java version (with Lombok [1]):
@Data class OrderId { final String id; }
@Data class DeliveryTime { final long time; }
@Data class DeliveryDetails { final DeliveryTime deliveryTime; }
@Data class Order { final OrderId id; }
@Data class DeliveredOrder {
    final OrderId id;
    final DeliveryDetails deliveryDetails;
}
I can imagine C# also having similar expressive powers.
[1] http://jnb.ociweb.com/jnb/jnbJan2010.html. Lombok reduces the drudgery of writing some of the code in Java. Modern JVM languages like Scala and Kotlin have native constructs to express this.
If you are not writing plain Java but are using Lombok, then yes, this simple case seems well covered.
I still can't see how Lombok will help with exhaustive pattern matching in case the order is a union type of several order types, as explained in my other comment, or with avoiding mixing up ms and seconds when using a unit of measure in F# for deliveryTime.
Also I'm curious how you would change just one field of an immutable object with tens of properties in Lombok, and whether the resulting Java code is as efficient as F# with its immutable data structures that use copy-on-write semantics.
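(For context, the F# side of that is the copy-and-update expression; the Customer record below is made up.)

type Customer =
    { Id: int
      Name: string
      Address: string
      Age: int }

// Copies the record, replacing only Address; the other field
// references are shared with the original value.
let moveCustomer customer newAddress =
    { customer with Address = newAddress }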
I agree that the exhaustiveness check enforced by the compiler is something I will miss in Java.
Persistent data structures have been implemented in Java if that's what you mean by efficient mutations to immutable structures. I can't imagine such structures being hard in any language hosted on the JVM or CLR.
F#, and ML-family languages, surely have their killer features. I am only contesting the claim that the GP made that modeling a domain is a struggle in C# when compared to F#.
Imagine if someone came on this thread and claimed that they struggle to write effectful code in F#, which they had been writing in Haskell. Of course, you have counter-evidence of that in all the F# programs you have written so far! I feel the same about the claim that domain modeling can't be done in C#.
That's true, you can recreate it in some sense, but you can't get what F# can guarantee. A big promise of discriminated unions is the ability to make invalid states unrepresentable. Matching on the different type constructors is a fantastic way to only express coherent states.
Terminology nitpick: Pattern matching is done on value constructors (or just "constructors", but at the intersection of FP and OOP that could be confusing).
"type constructor" means something like `List` (as opposed to `List[Int]`) – a generic type that hasn't been applied to any type argument(s) yet, and will "construct" a type (like `List[int]`) when you apply it.
I don't think I can release the exact code, but my case was like this.
I was writing a little program to help glue some things together in our build/release pipeline. This tool would be deployed to the build server and get invoked by the build agent. (This could have been a script, but the complexity got to be too much to keep organized)
The tool had two halves:
- The frontend whose job was to gather up all the 'input' from CommandLine and Env vars, do some parsing, then spit out proper types/objects.
- The backend that would interpret this data and make decisions, make some API calls and maybe copy some files.
Because of the way our software is built, we have 5 or 6 different 'flavors' of our app that needed special treatment during build. The complexity of branching on whether it was a build step or a release step, the different flavors of our app, and the need to deal with input data that may or may not be there got the best of me, and I spent weeks making tweaks to deal with NREs at runtime because I hadn't handled some weird case.
So I trashed the tool and rebuilt the front end in F#. I spent a little time making a very accurate type representation of the data model, including defining a lot of stuff as optional and introducing a lot of discriminated unions to represent possible branches. Then I essentially just filled in the blanks (match cases) and fixed the compile errors until every case was covered and I was done. No bugs.
You see, the big, big win of F# is that the default path doesn't let you cheat yourself.
You MUST handle every switch/match case.
You MUST fully construct your records.
If your function may fail, you MUST use Option to express None/Null
And you MUST handle every option type as potentially None and write handling logic
When you define your data model, just be honest about what data needs to be where, and the compiler will keep you on the straight and narrow.
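A small sketch of what those MUSTs look like in practice (the types and names are made up):

type BuildInput =
    { Flavor: string
      ReleaseTag: string option }

// The compiler warns if either the Some or the None branch is missing.
let describeTag input =
    match input.ReleaseTag with
    | Some tag -> sprintf "release build tagged %s" tag
    | None -> "untagged CI build"

// let bad = { Flavor = "retail" }   // error: the ReleaseTag field must be assigned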
I love the trend of languages having easy online interpreters. tryhaskell.org is also very nice (though not quite `ghci`; for example, it doesn’t support `:t`).
The great thing about this article is that it treats C# with respect, even as it lambasts it :) Good examples of real-world situations, no exaggerations. F# for Fun and Profit often uses straw-man implementations of C# to compare against, which I think does harm to the argument.
The only thing I don't like is the DI story presented here and many other F# guides. Having gone down that rabbit hole and using F# in production for 5 years now, I have gone back to classes and interfaces in almost all cases.
Functions as a DI mechanism suffer from a few things. First, it's hard to search for implementations; they could be defined anywhere. With interfaces, the implementations are a hotkey away. Second, functions only replace single-function interfaces. Even with a purist view of ISP, there are still plenty of occasions where a service would expect to call multiple functions on a contained service. Passing all those in as multiple parameters gets unwieldy in real-world scenarios. Third, F# has no way to inject a dependency into a whole module, so you have to go function by function. Injection into classes is much easier to manage. Finally, since none of the services or interfaces are named, they don't work with IoC containers.
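To make the contrast concrete, a rough sketch with made-up names:

// Function-style injection: every dependency is its own parameter,
// so the signature grows with each new dependency.
let notifyShipped (loadEmail: int -> string) (sendEmail: string -> string -> unit) orderId =
    let address = loadEmail orderId
    sendEmail address "Your order has shipped"

// Interface-style injection: one named contract groups the dependencies,
// and implementations are easy to find from the interface name.
type INotificationDeps =
    abstract LoadEmail: orderId: int -> string
    abstract SendEmail: address: string * body: string -> unit

let notifyShipped' (deps: INotificationDeps) orderId =
    let address = deps.LoadEmail orderId
    deps.SendEmail(address, "Your order has shipped")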
The good side is that there is nothing in F# preventing you from using classes and interfaces in your design. Unsurprisingly this is my preferred way to go about it.
I kinda wish the function based DI wasn't brought up so much and so early. It's a big mental leap that I think turns off some people coming from other languages. And for limited benefit. Or negative benefit in plenty of cases.
Can the F# compiler not convert a lambda to a single-function interface instance? Honest question. I'm mostly a Java guy, and I know that is how Java works under the hood because the JVM does not have the concept of delegates. But it seems like a fairly simple and safe transformation, even when your underlying target does have delegates.
No, while implementing such a feature wouldn't be difficult, one of F#'s imperatives is to ensure type conversions are explicit everywhere. Plus, given dotnet does have delegates, and F# has anonymous classes that would give a one-line workaround, this feature would be of limited use.
I've definitely structured F# modules that way. The module exposes one function that "injects" the dependency into multiple other functions, and then returns them as a blob.
I'm not sure if that's "the F# way" (I suspect it isn't), and maybe I'll look back on it some day and cringe, but it's working pretty well for some of my (admittedly small) side projects I'm tinkering with right now.
Yes, that's what I do too. "Service" modules contain all the functions, with all dependencies explicitly passed into them, and at the bottom of the module is an interface with the dependencies assumed injected and a factory function that takes all the dependencies and returns an object with that interface.
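Roughly this layout, with made-up names (a sketch, not the one true way to do it):

type Order = { Id: int; Cancelled: bool }

module OrderService =

    // Every function takes its dependencies explicitly.
    let getOrder (loadOrder: int -> Order) id =
        loadOrder id

    let cancelOrder (loadOrder: int -> Order) (saveOrder: Order -> unit) id =
        let order = loadOrder id
        saveOrder { order with Cancelled = true }

    // At the bottom: an interface with the dependencies assumed injected,
    // plus a factory that bakes them in via an object expression.
    type IOrderService =
        abstract GetOrder: int -> Order
        abstract CancelOrder: int -> unit

    let create loadOrder saveOrder =
        { new IOrderService with
            member this.GetOrder id = getOrder loadOrder id
            member this.CancelOrder id = cancelOrder loadOrder saveOrder id }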
In general I use a suave functional styled api on the top, OO-style SOA in the middle, and mostly functional style at the bottom layer, something of an FP-OOP-FP sandwich. I jokingly call it sandwich oriented programming.
Not the OP but IME with a lot of web/api development work this seems to be the case more than not.
I do realize that most web work is really more functional than OOP, but classes do make it very easy to bundle dependencies without adding noise to the implementation/signatures.
I have a seasonal project that spins up around this time every year. Last year, I chose F# for a (relatively small) component of the project and I was not disappointed. I don't have access to the customer's systems, and usually with Python there's a bit of back and forth where I have to fix bugs that only the customer is able to trigger. My F# program worked flawlessly the first time and has caused no complaints. I'm excited about writing more F# this year.
The FSSF site needs updating (which is happening). This explosion of options is quite a detractor, and they're aware of it as they are doing a redesign.
Yeah, the fact that Mono is inheriting all BCL code from .NETCore nowadays, but is still not .NETCore (because it has its own runtime) is a bit confusing.
That being said, the best/easiest way to use .NET in the Mac is just install "Visual Studio for Mac" (the Community edition is completely free), which will under the hood install all you need (Mono, .NET Core, etc).
I don't know F#, but I do know C#, and it feels like the author is trying to make C# look more difficult/verbose than it needs to be to make F# look better. Here is an alternative version of the "Everything is a function" section's IPasswordPolicy thing: https://ideone.com/3Savt9
I even went a little overboard with the function that takes a list of functions and returns a single composed function -- I could've also written a one-off that uses "&&" like the F# version:
bool isValidCustom(string pw) { return f1(pw) && f2(pw) && ...; }
Am I missing something? Why did the author make the C# version so complex??
Well you're right of course. But my only thought here is that Dustin's code looks like idiomatic C#, but yours does not.
You can do function composition in C# but I've virtually (heh) never seen it. Classes and interfaces are the default abstractions in C# land even if others are available.
I haven't read the article in depth, but the author's example is written according to SOLID principles, where composition is decoupled via interfaces and dependency injection. I think the author is trying to show the difference between highly decoupled code examples in C# and F#. In F# you compose with functions; in C# you compose with object instances.
I've been diving into F# in 2018. I came from the C# world, and it really made me a better developer. I really recommend the language. I think it has a great balance between functional programming and OO when OO is really needed.
I really do think it's easier to write better .NET programs with F# than C#. I think this article points out some great stuff.
That said, it's quite hard to actually get to use it in production at the places I've worked. My colleagues are scared of it (which is pretty reasonable, considering C# is a good, stable language and my team is confident that what we develop with it will make the customers happy).
One thing that makes me hesitant to really push for adoption at my company is the tooling. My experience hasn't been great. It works, but it has been pretty unreliable, and some documentation was still focused on the pre-.NET Core era of F#. I know I've been spoiled, but things like debugging and refactoring tools can't touch the Visual Studio experience with C#.
There are some decent options; JetBrains Rider and VS Code are a couple of them.
Small things, like filling in a record instance as you type, are inconvenient.
type Customer = {
    Id: int
    Name: string
    Age: int
}
While typing an instance of the record
let c1 : Customer = {Id = 1;
gives you a bunch of compiler warnings. I know I'm not done, and the compiler knows there are more properties. But without looking at the record type it's hard to see what order things come in.
I'm not sure if this is just an OCaml thing, but C-like languages are better at predicting what you want to create. It's weird that with the very predictable Hindley–Milner type system it can't provide the developer with good information.
I know it's a small thing, but when working with large projects I don't want to look everything up; I want the compiler to have my back.
I really think Microsoft should help the F# Software Foundation more, seeing what an amazing language F# is. It really could use some more love from MS. Perhaps they should help the community be more confident that they can solve the business problems they run into.
I hope Microsoft would say: "Hey, this is how you create The Boring Line Of Business App, with all the bells and whistles you normally need", including how to deal with dependencies in large projects. (No, a couple of (amazing) conference videos by Mark Seemann do not give me enough confidence that after 6 months of development it will still be manageable.)
Until the tooling is there, I'll just keep enjoying the language and learning how to become a better developer with it.
"That said, it's quite hard to actually get to use it in production at the places I've worked. My college's are scared of it (that is pretty reasonable, considering c# is a good stable language and my team is confident that what we develop will make the customers happy)."
That's my problem too. I would like to give F# a go but at work it seems close to impossible to implement. Most people don't see the need and if there is a problem it will be blamed on F#. It's hard to win there if management doesn't fully support it.
I've previously used OCaml at my last job and use Scala at my current one. It's interesting that F# (just judging from this blog post) looks more similar to OCaml than to Scala - like they've more aggressively pulled out syntax and embraced partial application etc.
I'd always assumed F# was to C# what Scala is to Java - and I think that probably does represent their design goals, so I wonder what the different considerations were that led to them being relatively quite far apart.
But F# tries to stay very close to its functional roots, and describes itself as "functional first".
By contrast, Odersky has always been clear that Scala tries to be a pragmatic mix of things from OOP and FP.
I also, personally, feel like F# has a tendency to be a bit more conservative about adding language features. For example, Don Syme has been very resistant to adding typeclasses to F#, because of how it would interact poorly with the rest of .NET. I love me some ad-hoc polymorphism, but, as someone who is currently working in a mixed Scala/Java codebase and frequently stumbles over the incompatibilities between Scala and Java, I've come to appreciate that decision in hindsight.
Right - I'm pretty familiar with OCaml, so the distinction is pretty clear on my end. I'm just a bit suspicious of this one thing in the comment I replied to.
"No matter if you are already a functional developer from a different community (Haskell, Clojure, Scala, etc.) or you are a complete newbie to functional programming (like I was 3 years ago) I think F# can equally impress you" -> "For this task and for the rest of this blog post I'll be comparing F# with C# in order to show some of the benefits."
F# is neat, and I see why it's useful if you have a big .NET program already, but for anyone else I don't see why I'd pick it over a more mature (and less Microsoft-centric) functional language like OCaml, SML, or Erlang.
Does F# have any unique features that other functional languages lack, or is .NET integration its killer feature?
.NET integration is probably the biggest killer feature, because that unlocks official support for most important platforms that you'll need to use. Using AWS or Azure or GCP because your business is moving stuff to a cloud provider? You have access to fully-supported SDKs maintained by teams who do that stuff for a living. And so on. .NET also has a spectacular standard library, and with .NET Core, runs _very_ well in any environment.
But from a language standpoint, here are three unique features:
* Type Providers, which let you generate types based on a data source, and tie compilation to the use of that data being correct
* Active Patterns, which are similar to Haskell's View Patterns, and let you tie some arbitrary functionality that ultimately returns an Option into a pattern for neat pattern matching (see the sketch just after this list)
* Computation Expressions, which let you express, compose, sequence, etc. monadic and monoidal computations in a convenient syntax that's super easy for newcomers to grok. There is also an RFC and WIP implementation that expands these to support applicative constructs
There's more (Units of Measure, universal quantification via Interfaces, etc.) but these three tend to be something people like a lot.
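To give a taste of active patterns, here is a common illustrative sketch (a partial pattern wrapping Int32.TryParse, nothing specific to the article):

// (|Int|_|) is a partial active pattern: it either produces a value or doesn't match.
let (|Int|_|) (s: string) =
    match System.Int32.TryParse s with
    | true, n -> Some n
    | _ -> None

let classify input =
    match input with
    | Int n when n < 0 -> "a negative number"
    | Int n -> sprintf "the number %d" n
    | _ -> "not a number"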
Thanks, I completely forgot to talk about type providers, active patterns and units of measure, but then the post was already so long, lol. I think I'll add other good blog posts which cover all these topics to my final notes!
As others have said, F#'s interesting language features are computation expressions, active patterns, units of measure and type providers. The library, platform and ecosystem benefits are gravy. Though subjective, the syntax is clean too, being somewhere between an ML and Python.
Something that no one has mentioned yet is that F# is now among the fastest functional first programming languages. At least according to (take with a grain of salt) benchmarks like [1] and https://www.techempower.com/benchmarks/
As someone said on HN a while ago: F# works better on Linux than OCaml does on Windows. I recently was trying to decide which new language to learn, and reduced the choices to F# and OCaml. The catch? I need something that works on both Windows and Linux. I first attempted OCaml, and gave up soon after starting. I could tell that fighting Windows would be a constant battle. So I went with F#.
And then of course, parallel computation is an issue with OCaml.
SML: Kind of lacks libraries. I did learn some SML in the past and loved it, but I want a language that will let me be about as productive as Python is. SML lacks a strong ecosystem.
The .Net ecosystem is definitely a killer feature. Everything you want, and it 'just works', as well as everything being seamlessly integrated with each other. This is only possible with the backing of a huge amount of funding, and there are probably millions of man hours spent on it at this point.
Why is the .NET integration a feature rather than an antifeature? Don't null and subclassing tend to poke holes in F#'s type system's ability to detect mistakes? And would it not be better to compile to native code than to require the Mono VM or whatnot?
Regarding F#'s real world usage, today I came across an end-to-end F# stack, SAFE stack.[0]
It looks very intriguing and I would be interested to know if anyone here has experience with it and has thoughts on it.
Yep, almost all of my side stuff and an ever-increasing proportion of my work stuff is in SAFE. It's actually built on Dustin's Giraffe library, which itself is built on top of Asp.Net Core. Cross platform, great ergonomics due to Giraffe, and I get to share client side/server side domain models, validations, etc due to the shared F# heritage.
That "famous slide" really rings true for me. F# allowed me to forget (or at least not worry about) a TON of OO design patterns. Data goes in, transformation happens, data comes out.
I do like that F# is very practical about OO though.[0] I feel that the language often strikes a balance of programming paradigms that enables me to be very productive.
I really wish Unity3D would relinquish control of my Visual Studio Solution files and have a much nicer path to including its libraries as a dependency manually so I could start using some of the other languages in the CLR ecosystem.
I've managed to get it to work before by very carefully searching for the DLLs in the Unity install directory and compiling my own DLLs into a Plugins folder in my Unity project, but it's very brittle. It's not obvious which set of DLLs to import (there are lots of different versions of each, for various runtime platforms), and I upgrade to new Unity versions frequently, so the location is a moving target. And Heaven help you if you want to depend on something that is only released as a script bundle in a Unity Package (like every VR SDK out there right now). And given how Unity's braindead YAML metafile system works, you're pretty sure to break your component references in your scenes and prefabs with even the most minor of rearchitecturings.
Unity having its own build pipeline is a huge impediment. It all but forces you to write custom scripts that do some of the most basic things, as if we live in the Node hellscape of Gulp/Grunt/Webpack. And none of it is documented well enough to get it right without hours of ping-pong with test runs.
Note: the REPL doesn't _quite_ work fully cross-platform with .NET Core yet. But the work is ongoing and something we're (fairly) close to releasing. When it's done, you can simply "reference a package" in a script or interactive session, and it will resolve whatever that dependency graph is and let you use it as if you were editing source code in a project in an IDE.
Like I mentioned in another thread: > Been learning F# the last two months. Wanted to learn OCaml first. But F# is the sweet spot, beautiful functional syntax. Can leverage the now-open .NET ecosystem + pretty fast for doing the financial stuff that I wanna do.
Honest question: As someone who does web development and is looking to learn a new programming language in 2018, would you recommend F# over Elixir (and why)?
This has been previously mentioned but F# gets all of the goodies that come with being a .NET language. Elixir comes from Ruby land and thus has a framework called Phoenix(?) which also has its fans.
I personally opted for F# because of the domain modelling tools and the offloading of various caveats and edge cases onto the compiler's stack rather than the meatspace stack. Elixir has many fans for good reason, no doubt, but I'll defer to someone with more experience working with it to weigh in.
Elixir obviously is Erlang under the hood, but its syntax was inspired by Ruby's simplicity. José Valim, the creator, was also a Ruby contributor IIRC.
The Elixir web world gets a lot more love than the F# one. F# always touts the advantage of living in the .NET world, but it means most things are built with C# in mind, not F#. There are F# frameworks, and they are nice, but the community is quite small. But the advantage is the .NET world has a vast amount of libraries for doing most things.
OTP and Elixir are a better story than the F# equivalents. There are no lightweight isolated processes in .NET.
Elixir also has macros (a very sharp double-edged sword) which can make frameworks nice to use.
F# has stronger typing, which is nice, and often has the trait that if it compiles, it works. Though the usual downsides of dynamic typing in Elixir are not so bad.
But the best thing is to try both and make up your own mind
I'm a C# and Typescript guy, and recently spent a day with each of Elixir and F#.
I loved them both!
Elixir has a great 'getting started' site, which was really well put together, and similarly the 'F# for Fun and Profit' site was great too, so I feel like I had equally good resources to introduce me to them both.
I'd love to spend more time with both of them, but in general I'm more comfortable with static typing, so if I progress further with either, it'll be F#.
I only wish that Microsoft would start treating F# as a first-class citizen alongside C# :/
Honestly I was super psyched to get into F# after finishing Dan Grossman's programming languages class, but I abandoned it pretty quickly because setup was so difficult on my Mac.
1) Multiple versions of .Net available (.Net Core, Mono...) and different tutorials and documentation would apply only to one or the other.
2) I went with .Net Core because the community seemed to have decided on that, but there is no easy way to do a cross-platform GUI in F# using .Net Core.
3) I just couldn't find a simple tutorial taking me from beginner to intermediate using a cross-platform toolchain and development environment. I stumbled around with Visual Studio Code for a while, ran into some compatibility issues with hundreds of lines of error messages...
I'm sure it would have been fantastic if I had had an F# person help me get set up and started, but the setup curve was too steep to be enjoyable (actually, setting up is never enjoyable. I just want to get to the good part of coding stuff!).
F# seems like such an amazing language every time I look at it — a modern version of OCaml, with a clean syntax, some novel ideas, and fewer legacy warts.
However, I wish that it had, like OCaml, a native AOT compiler that didn't come with the baggage of the .NET runtime. Some people might consider this a benefit, not baggage, of course. But it's the same reason I'm put off by Scala.
From what I understand, to even run a compiled F# program you have to have Mono installed? It does look like you can get AOT compilation with Mono [1], but the limitations (e.g. no generics, apparently) seem too onerous.
After being deeply burned by the .NET framework, I consider OCaml significantly superior to F# just because it doesn't depend on .NET or Microsoft.
And yes, somehow the OCaml guys have managed to implement generics and AOT compilation to optimized machine code long before the .NET framework existed, but now this is something "truly hard" on .NET.
My problem with OCaml is that while the semantics are top notch, everything else is stuck in the past.
The syntax is quirky, there's a bunch of legacy baggage (does anyone use the OO stuff?), there's no SMP support (though I know this is being worked on), the toolchain feels antiquated (the REPL still, to this day, doesn't come with Readline support built in; rlwrap is required!), etc. Not to mention the lack of modern libraries, frameworks, package management, etc.
Reason and BuckleScript are nice, though. Reason cleans up a lot of my complaints, while still not being quite as elegant as F#. But there's this sense that nobody really wants to make any groundbreaking effort here; there doesn't seem to be any push to make Reason a first-class, new syntax for OCaml, so it's stuck being a web-oriented veneer for new users, while old "industrial" users like Jane Street are content to continue with their thing.
OCaml's object and class systems are excellent; I prefer them to just about anything else. They rarely get used for the simple reason that algebraic types, functors, and first-class modules are better suited to modeling almost any kind of domain logic.
Also, I like OCaml's extremely simple and direct syntax. I'm fine with F#'s implicit `in`, though it requires whitespace-sensitivity. I think ReasonML went in the wrong direction, cluttering it up with curly braces and ubiquitous tuple-like syntax for function arguments.
OCaml actually has excellent tooling — among the best — and some (but not many) great libraries. What it lacks most is great documentation...
>OCaml actually has excellent tooling — among the best
I don't think even the core OCaml devs would claim this. Rust (and Java and Microsoft) toolchains are examples of excellent tooling. OCaml, not so much.
That said, the tooling situation today in OCaml is much better than it was a few years ago, which is real progress.
Yes and no. OCaml already has one of the fastest production compilers in the world, acceptable error messages (better than Haskell's, anyway), a great REPL (utop), a high-quality and well-documented build system (dune), a powerful package manager (opam), and a very good language server for IDE integration (merlin).
What it's sorely missing right now is higher quality documentation output. Currently, it's hard to navigate the generated documentation (e.g., no search bar), it's not held in a centralized online location, and it doesn't do a great job dealing with complex module/functor hierarchies (especially in the presence of destructive substitution).
Unfortunately, much of what I described above doesn't come out of the box. To fix this, the OCaml community is seeking to emulate Rust's cargo tool via the development of the ocaml-platform: http://ocamllabs.io/doc/platform.html
An aside: I didn't know Java had good tooling. I know it recently gained a good REPL. But what is the official package manager, and where is the centralized repository for packages?
I think modern OCaml's toolchain is quite nice — Dune, OPAM, Utop, OCaml/ReasonML support for VSCode. I am a newcomer to OCaml world, but so far it's been great for me.
It's part of .NET Core, so you don't need Mono installed. And you can build a self-contained .NET Core application so you don't need the runtime installed to run it.
AOT is possible via something like CoreRT, but it's still a bit beta and not straightforward. From my experience the only limitation with CoreRT and F# is certain reflection-heavy functionality. No problem with generics.
Well, if not Mono, then the .NET Core runtime package, right? It's not self-contained. I read this [1], but it's unclear to me how I build a self-contained binary, or what options are available for AOT.
If you build with 'msbuild publish -r debian8.x64' or any of the other valid 'runtime identifiers', the output of the build is an xcopy-deployable folder that contains your binary and all its dependencies, including the runtime. I use this at work to make Debian packages that are standalone.
AOT not supporting generics? That's a pretty bad reading/summary of the webpage you link to (not to mention that page may be a bit outdated, and more cases are supported by AOT now; take into account this is the mode that iOS apps developed with Xamarin need to use).
In dotnet land generics are JITted on demand at runtime when you first execute a specific instance of that generic type.
So List<int> doesn't necessarily get JITted even though List<long> has been. This poses a bit of a problem for AOT compilation.
You would essentially have to either
A) Trace all possible execution paths and determine every needed generic instantiation and precompile and ship them all. This ain't easy, and it might be impossible. Also expect massive binaries due to all those types.
Note that since Windows 8, .NET Store apps are fully AOT-compiled to native code; there is no JIT.
Windows 8.x used the cloud compiler with MDIL deployments (based on Bartok MDIL from Singularity), doing on-device linking.
Windows 10 makes use of .NET Native toolchain.
The only restriction is reflection: all classes that are accessed via reflection need to be explicitly mentioned in an rd.xml file, otherwise the linker might prune them.
It does, if the AOT compiler is advanced enough to examine all code and determine that `int` is a possible type being used with List<T>. Mono has had a lot of advancements on these techniques over the years, as it's the only way for them to let Xamarin devs deploy apps on the iPhone.
In regular .NET you can instantiate generic types with a type parameter provided at runtime.
Of course it's not something you should ever do in regular code, but it can be useful for deserialization and stuff like that, so you can create e.g. a list of arbitrary types received over the wire.
Does the AOT compiler just crash if asked to compile a project that uses reflection?
Right, maybe the best way to figure out its completeness is just testing it. AOT has received a lot of love these days, the runtime accepts many flags when dealing with it, e.g. "fullaot".
Do HN users have any F# references, books, or guides they recommend? I've always been intrigued by F# but haven't found a good project / use case for it, but I suspect that's because I need to know more about it first.
I think it gives a good overview of the language and it's easy to read. Considering that after a day of work I'm not that sharp anymore, it was a good read for me.
Also mentioned in the article, but Scott Wlaschin's book really shows the strengths of the language. Most of his stuff is great actually; his website (F# for Fun and Profit) is not the most coherent read, but I love using it as a reference. I also recommend his conference talks, which cover quite a wide range of topics.
I'm genuinely sorry that you had a bad interview experience, feel free to email me more details so that I can make sure the feedback is routed to the right people.
Over several years now, I've been very happy with the engineering quality and culture; I feel lucky to work with the people here.
While I really like the concept of functional programming and F# is definitely on my list of practical useful languages to learn, this article is clearly written by someone who doesn't know C# very well.
Take the "To hell with interfaces" example.
public interface ISortAlgorithm
{
    List<int> Sort(List<int> values);
}

public class QuickSort : ISortAlgorithm
{
    public List<int> Sort(List<int> values)
    {
        // Do QuickSort
        return values;
    }
}

public class MergeSort : ISortAlgorithm
{
    public List<int> Sort(List<int> values)
    {
        // Do MergeSort
        return values;
    }
}

public void DoSomething(ISortAlgorithm sortAlgorithm, List<int> values)
{
    var sorted = sortAlgorithm.Sort(values);
}

public void Main()
{
    var values = new List<int> { 9, 1, 5, 7 };
    DoSomething(new QuickSort(), values);
}
No, no, no, no, nope.
public static class MergeSort
{
    public static List<int> Sort(List<int> values)
    {
        // Do MergeSort
        return values;
    }
}

public void DoSomething(Func<List<int>, List<int>> sortAlgorithm, List<int> values)
{
    var sorted = sortAlgorithm(values);
}

public void Main()
{
    var values = new List<int> { 9, 1, 5, 7 };
    DoSomething(MergeSort.Sort, values);
}
No interface necessary.
Or immutability:
"public struct Customer
{
    public string Name { get; }
    public string Address { get; }
    public Customer(string name, string address)
    {
        Name = name;
        Address = address;
    }
}

So far so good, but unless someone knows C# very well one could have easily gotten this wrong."
Really?? This is beginner stuff.
I 100% agree that null sucks, functions should not need to be in classes and immutability is great, but this kind of strawman doesn't help his point.
You are right about the first example (although the F# version looks nicer =D)
But that struct example is less than great.
Just last night I had a readonly C# struct like that, that I used as a dictionary key and wondered why my perf tests took a nose dive when the Dictionary hit a few dozen entries.
My boneheaded mistake was that I forgot to override GetHashCode and Object.Equals. Once fixed, my perf was looking decent, but even then I was allocating 32 bytes of memory on every lookup.
I forgot I needed to also implement IEquatable<> to prevent boxing on calls to Object.Equals.
My little 2-item struct is 3 lines of definition and 7 lines of boilerplate to get right, and my GetHashCode impl is still suboptimal.
F# records do all of that for you. Every time. Automatically.
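For comparison, a small sketch of the same scenario with an F# struct record (CacheKey is made up):

[<Struct>]
type CacheKey = { TenantId: int; UserId: int }

// Structural Equals, GetHashCode and IEquatable<CacheKey> are generated
// by the compiler, so dictionary lookups don't box and don't misbehave.
let demo () =
    let cache = System.Collections.Generic.Dictionary<CacheKey, string>()
    cache.[{ TenantId = 1; UserId = 42 }] <- "hit"
    cache.[{ TenantId = 1; UserId = 42 }]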
I primarily use F# over C#, and I would prefer the interface version over the functions in many cases. An interface makes it a bit more explicit and self-explanatory what is expected to be passed. A Func<List<int>, List<int>> is not very meaningful and potentially ambiguous. Obviously, anywhere you expect more than one function, an interface is more desirable because it avoids the need to pass around multiple objects.
One thing that makes it a bit nicer in F# than in C# is that you can define object expressions wherever an interface is expected, so there's no need to declare a new type in the static scope somewhere (similar is possible in Java).
DoSomething
    { new ISortAlgorithm with
        member this.Sort values = ... }
    values
In terms of how they work, there's not really much difference. If something expects a function and you need to capture some local variable within that function, the framework will create a new type for that closure anyway.
Yes, my point was more that they're pretty much the same thing. A delegate and an interface with a single method are essentially isomorphic in their behaviour.
The delegate obviously breaks down when you have more than one method. For that interfaces work just fine. Even in the case of using a delegate where you capture variables, the C# compiler creates a new type, instantiates it and passes the method in place of the delegate. It's hardly any different to deriving from an interface.
I just tend to use interfaces more out of habit even for single methods. Creating an object expression in F# is pretty much the same outcome as creating a closure in C#.
EDIT: I guess the other problem is that F# functions are not directly compatible with Func<_> and Action<_> types, so when calling a C# function expecting these, you need to wrap the call in an anonymous function. C# interfaces and F# classes are compatible though.
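For example (a sketch; the Task.Run call just stands in for any .NET API that wants a Func):

open System
open System.Threading.Tasks

let twice x = x * 2

// A lambda literal is converted to a delegate automatically at a .NET method call site:
let t : Task<int> = Task.Run(fun () -> twice 21)

// A named F# function value is not; it needs explicit wrapping (or a wrapping lambda):
let f : Func<int, int> = Func<int, int>(twice)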
In the first example the named interface would be easier for me to read in a large project than Func<List<int>, List<int>> and with actual code there for implementation the two lines to define the interface would end up being a tiny fraction of the total lines of code. An interface would also provide better documentation opportunities for someone trying to write a mock or new implementation.
The code in the article isn't outlandish or particularly verbose.
Your sort solution limits which algorithms can be used, because you can't pass in a Func which might need an additional parameter anymore.
This is why most C# developers would use an interface for this. With an interface you add extra parameters or dependencies to the constructor of the concrete implementation. If this example wasn't sort algorithms but password hashing algorithms this would be clear.
So yeah... it proves my point: writing good C# is not that easy, and it requires a lot of experience and mistakes before someone knows how to write SOLID code in C#. Thanks for helping my point!
Technically it's passing 'a list -> 'a list, and only becoming an int list because that's the final input value. The F# functions are closer to the C# functions Func<List<IComparable>, List<IComparable>>.
Or to be even mooooooore pedantic, the F# functions would probably be written to be seq<'a> -> seq<'a>, so that would be Func<IEnumerable<T>, IEnumerable<T>> where T : IComparable<T> :)
I think I just tried to say that I didn't specifically intend to write very verbose C# to make a point. I tried to write good real-world production C# and then compare it to how I have seen people do DI in real-world F#, which is partial application in this case. There are other ways of doing it too, but this is the most common pattern I've seen.
There's a thread every few months about 'Why You Should Learn X', and I feel overwhelmed by it. I liked the one about Rust and got into the Rust boat. Now I have no time and energy to learn F#. That's both frustrating and debilitating. Personal Opinions.
I heard F# supports this newish thing called dependent typing which allows the type checker to not only check for type correctness but logic correctness as well.
I've never worked with dependent typing but is it true? Does the F# really take away the need for all unit testing on a project?
F# does not support dependent typing. And in general, because it has to interop smoothly with .NET code written in C# and VB.NET, F# is not nearly as hardcore on functional purity as something like Haskell or Agda, and correspondingly cannot guarantee as much safety through its type system.
It is a wonderfully pragmatic language though and does guide the user towards more reliable software patterns than C# does. It sort of bridges the gap to be something you can actually use in production today, whereas the dependently typed languages are still in the realm of experimentation and toy projects for now.
Dependent typing generally doesn't play well with type inference (leading to things being all kinds of undecidable). I doubt it will ever make its way into F#. GADTs might; they provide a subset of the nice things of DT, but are much, much easier to implement.
No, if you are a functional developer coming from Haskell, you will not be impressed by F#. Why add silly things like this to your sales pitch if F# is your first functional language and you're still pretty new to it?
There are a lot of claims about why F# is better and not much data to back them up. I'd really like to learn F#, but are there proven benefits to code quality / shipped defects / etc? For example, looking for research like: https://quorumlanguage.com/evidence.html
I don't touch anything that runs on CLR or a JVM: waiting for the world to compile is unacceptable. For functional programming, I'll just continue learning ANSI common Lisp which compiles straight to machine code, thank you very much.
It's slow and bloated. For someone who grew up coding intros and cracking protections in machine code on C=64 and the Amiga, such bloat simply won't fly. It's an abomination. I hate wasting my life away waiting for slow software.
If it makes you feel better, the JVM and dotnet communities agree and have been working on strategies to reduce the bloat a lot. They also offer a ton of other resources to help tune things to your liking.
I'm sure you can hand write a piece of C or Common Lisp that is very fast. But I'm sure you would also acknowledge that people that are good at JVM/Dotnet could probably write a pretty fast piece of code in them as well. So at a certain point you have to start weighing other things, like productivity, safety, tooling, libraries, etc.
I find if I really dislike something, I try learning it, and at the end I find that I may not prefer it, but I always learn something from the experience. Think of it as an exercise in sharpening your debate arguments against proponents of these platforms.
I had a semester of Java and AWT programming at the university. The professor was really good. My final project got a good grade. But after it was over, I knew I'd never touch Java again: any language where I needed to instantiate three different objects just to output a single message to the screen is crap.
Available libraries or the lack thereof have never been criteria by which I judge a language. Chalk that one up to competing against other coders on the scene. If someone disassembled one's intro code and found that one was using pieces of someone else's code, one instantaneously lost face and would never be able to live it down. It was plagiarism. 35 years later those same people still get derided and laughed at. It's considered extremely lame, because it means one wasn't capable of writing one's own routines. This is elitist in the extreme, and those are the stakes for making it to the elite programming circle that is the scene. I'm perfectly fine with that, under the motto "it's lonely at the top". And that's as it should be. IT doesn't need more programmers producing so much bloated garbage; it needs the best programmers producing small, light and fast software that's a joy for its users to use.
So for me, having been tempered by such an elitist, competitive environment, library availability is never important. I understand that open source is different and I understand the advantage of not duplicating code, so if it's already been written and if it is good, clean code I'll use it, but I've no qualms about rolling my own.
I would never allow library availability to influence my choice of a programming language: the language must stand on its own, including how easy it is for the system administrator to install and upgrade my application during its software management lifecycle.
As programmers we should never make it easy or convenient for ourselves at the expense of complicating users' life or wasting their time. Never ever is that excusable.
This point of view seems to have a lot to do with programming as craft, and very little to do with programming as engineering.
Nothing wrong with that, but most developers are more concerned with the latter. Writing your own (compression routines|HTML parser|stepper motor driver|user interface controls|game engine) that's fast, cleanly designed, and bug-free is an act of craftsmanship. But if it takes a week and working around the quirks of one off the shelf takes an hour, and the resulting program meets the same requirements either way, it's the wrong engineering choice.
"But if it takes a week and working around the quirks of one off the shelf takes an hour, and the resulting program meets the same requirements either way, it's the wrong engineering choice."
(Un)fortunately, that's not how the world works: it never takes just an hour to get the work done even with the off-the-shelf solution, and software which gets put together this way usually ends up needing to be babysat, and the long-term costs of maintaining it are exorbitant. Curiously, the quality of the software ends up being extremely poor not because of using readily available components, but because of the mentality of the people springing for such solutions.
And somehow I have the impression that you and I have different definitions of "engineer": a programmer is not and never has been one. An engineer (which is what I do) gathers requirements, writes a technical specification based on those requirements, designs the software architecture which meets the specifications put forth, proceeds to write said software using scientific theories and principles (the actual process of engineering), and finally writes comprehensive documentation for that software.
Someone I interviewed some years back did their interview in F#. I didn't know F# and tried to make that abundantly clear to them, but they persisted nonetheless. I had messed around with OCaml back in college and so at least had a passing familiarity with functional language concepts, but I really had no idea if what the candidate produced could be considered quality production code. That didn't help their chances of getting the job.
Maybe they want to do F# in their job, in which case they successfully filtered you out. Although it might have been smarter for them to ask about F# in an earlier round, or pre interview.
Safer threading with immutability
Safer programming with null-safety
Safer logic with precise domain modeling