F# is a beautiful language. You always want to use the right tool for the job but honestly F# is so right for so many jobs.
I know a lot of people don't want to hear this but these types of languages, functional first, are the future of our industry. (In the sense that in the 2000s Java-like languages were the future of our industry). I might be reaching here, but in my opinion, these are the right languages for the Cloud and that's why they are getting so popular.
I would agree if I felt that the industry were moving towards "better" software - as in, as an industry, we said "Wow, we need to seriously take a step back and start writing systems more reliably, more securely, etc".
I do not think we're going in this direction, necessarily.
Agreed. Objects for everything results in code that is riddled with hidden mutable state and side-effects (state is not encapsulated). It's impossible to reason globally about such systems, even before threads are added. Objects can be used as a reasonable module system, which is why many still advocate them, although IMHO better module systems do exist (e.g. in OCaml).
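To make the "objects as a module system" point concrete, here's a minimal F# sketch (the names are mine, purely illustrative): a module groups related functions under one name with no hidden instance state, which is the job classes are often pressed into.

    // A module as a namespace for pure functions - no private fields that
    // methods silently read and write, so nothing hidden to reason about.
    module Pricing =
        let applyDiscount (rate: decimal) (price: decimal) = price * (1m - rate)
        let total (prices: decimal list) = prices |> List.sum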
I honestly wish this were true, but I doubt it. F# is different enough to have a big learning curve, and yet the benefits aren't immediately tangible; it's just a bunch of small things that add up - but try selling that to someone. Meanwhile mainstream languages are picking up features from it (e.g. C# will eventually have record types and pattern matching). I think languages like this will influence the mainstream, but I don't see them being mainstream in the future.
The problem is that F# leads to shorter code by being incrementally better all over the place - so for every example you show, people will go "oh, that's cute, but I can do something similar in C#", but when you sum up those "cute tricks" you end up with 1/3 the code, much less noise, and code that is easier to maintain. It's a hard sell because there's no one big thing that is 3x better than C#.
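To give a flavour of those "cute tricks", a toy sketch of my own (not from the article): one line buys an immutable record with structural equality, and a pattern match replaces a chain of if/else - each small on its own, as said above.

    // One line: an immutable type with structural equality built in.
    type Customer = { Name: string; Age: int }

    // Pattern matching with guards instead of nested if/else.
    let describe c =
        match c with
        | { Age = a } when a < 18 -> "minor"
        | { Name = "admin" } -> "administrator"
        | _ -> "regular customer"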
When you don't know a lot, learning something different is actually easier than when you're already proficient in C#. You expect an intern to take x amount of time before they can be productive, so whether they spend it learning how to do it in F# or C# won't change x much. But when you have a senior who knows how to do something in C#, they will not want to invest an amount of time similar to x, or even half of it, because they can get it done with what they know.
Like I said to OP, F# doesn't have that "hard sell"; it's just a bunch of incremental improvements that end up being a big deal together, but each one on its own is unimpressive.
As a developer with 10 years of C# experience and two weeks (and counting) of F# experience, I'm already finding it easier and more enjoyable to do a lot of things in F#. Results may vary.
Sure, but for the majority of companies in the world with an IT department, software isn't the business; it's a cost center supporting their actual business.
Companies like Facebook aren't where the majority of us work.
So when selling languages to management, "look, Facebook does it" usually doesn't help at all; what one needs to show is how adoption will bring those IT costs down.
From my research and experiments, I think that ML languages help a lot with "getting the specs right". That means you have more confidence that the code does what you want it to do.
I can see a lot of value in that for non-tech companies where correctness is crucial, like finance, insurance, health...
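A common illustration of "getting the specs right" (my own sketch, with hypothetical domain names): the spec is encoded in the type itself, so invalid combinations don't compile.

    // The spec "a payment is either pending, or settled with a date" IS the type;
    // a "settled but no date" state simply cannot be constructed.
    type Payment =
        | Pending of amount: decimal
        | Settled of amount: decimal * settledOn: System.DateTime

    // The compiler forces every case to be handled.
    let status payment =
        match payment with
        | Pending a -> sprintf "pending: %M" a
        | Settled (a, d) -> sprintf "settled %M on %O" a d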
I love ML languages; my first was Caml Light, back before OCaml was born.
I share your opinion too, and would go even further: for me personally, IT projects should be held accountable just like those in many other industries.
However, my experience in enterprise consulting, with applications written in Excel or in languages that allow for "replaceable programmers", is that the business doesn't care if the software has the same quality as 1 € shop items, as long as it generates the desired output.
I am asserting that most of us across the world work in boring companies whose main business is not software development, that don't use such languages, and never will, yes.
You had only been asserting "don't use such languages"; you hadn't really got to "and never will". And I don't know whether I disagree, but you've certainly not made a case for it.
> No, you are the ones asserting "don't use such languages".
Oops, did you read that as imperative? It was just meant to be present tense. I didn't mean that you were recommending people not use these languages, just saying jobs aren't presently available in most companies. I missed the ambiguity when I wrote it; my bad.
My point was that the fact that you are not currently being allowed to use these languages doesn't imply that new languages won't be added to the list of "Java, C#, JavaScript, C++, SQL" - there was a time when none of those were on the list. It may be that there is something in particular that will prevent the addition of "these languages" but it hasn't been spoken to.
Future of our industry? Such languages are certainly popular and well suited to certain types of projects and solutions, particularly systems in computing, and in information and data science.
However, for complex representational systems, including both transactional and continuous systems in business, commerce and industry, other languages and paradigms can remain better suited.
> However, for complex representational systems, including both transactional and continuous systems in business, commerce and industry, other languages and paradigms can remain better suited.
Hello. Respectfully, what is a complex representational system? Why are other languages/paradigms better suited over a language like F#?
> Respectfully, what is a complex representational system?
A database is one example.
A database is also global mutable state for the programs that write to it, and that significantly compromises the advantages of using functional programming languages for such programs. You might start with some nice pure functional code, but the database adds massive side effects.
Actually, the benefits of pure functional programming are even more clearly exposed if you're doing complex IO operations like disk access and mutability. The ability to capture the effectful operations (read a table, write a row) as statically-typed action containers is invaluable because it lets you sequence the effects in a correct order with very little effort.
This is the #1 misconception I see surrounding pure functional programming - that once you run into mutable state, you run into problems.
It is entirely the opposite. PFP exposes the murky aspects of mutability, provides abstractions around it, and ensures that you're not tripping over yourself in regards to mutability.
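A minimal sketch of the "effects as values" idea in F# (entirely illustrative - and note F# doesn't enforce purity the way Haskell does, so here it's a convention rather than a compiler guarantee):

    // The effectful operations are plain data...
    type DbOp =
        | ReadTable of name: string
        | WriteRow of table: string * row: string

    // ...so a "program" is just a list of values that can be inspected and
    // reordered; the actual side effects happen only inside the interpreter.
    let run ops =
        for op in ops do
            match op with
            | ReadTable t -> printfn "SELECT * FROM %s" t
            | WriteRow (t, r) -> printfn "INSERT INTO %s VALUES (%s)" t r

    run [ ReadTable "customers"; WriteRow ("orders", "42, 'book'") ]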
Maybe we should get rid of databases (or limit them to reporting).
Back in the day they were mandatory, since memory was expensive: you simply had to access stuff from disk and needed something that made that reasonably efficient. Now memory is so cheap that for many business applications it would be feasible to keep everything in memory. SSDs with GB/s-level read speeds would make it possible to restore the data back to memory in reasonable time in case the cluster goes down.
I think the whole database thing also caused major issues on the object oriented side and actually stopped people from really using OO (or getting benefits out of it). Instead of building intelligent and smart objects it is easy to end up with objects that are just containers for data.
There have probably been thoughts on how to do this; one old project I remember is Prevayler[1].
SQL and ORM are a superb fit for a great many applications, incidentally a large share of real world applications. The four-letter abbrev. CRUD comes to mind, as well as reporting.
Most kinds of NoSQL remove some benefits of the relational model while not really giving you the benefits of object persistence. I never felt a great need to use it, since quite a few of these databases also had all sorts of issues that didn't exactly inspire trust for serious applications -- I'm not working on software where that's an OK thing to have (though for others it might be an acceptable tradeoff).
The real-deal persistent object databases are a completely different beast from both NoSQL and relational models (only some NoSQL concepts, like explicit / application-level indexing, carry over). Two important things to note about them: 1) there are not many of them; 2) using them properly already requires proper OO technique, and failing that will produce an unmaintainable mess. Certain kinds of applications benefit greatly from them, and there they tend to beat an ORM on developer experience, efficiency and application performance alike -- forcing a huge impedance mismatch down the throat of an ORM pretty much always means the underlying relational DB is used in very anti-patternish ways, resulting in poor performance regardless of how good the DB actually is.
The idea is not to query things from the persistent store. Instead you would keep all the data in memory; the persistent store is just there in case your server goes down. In the OO case, there would be one big object graph which you then navigate using the functionality available in your programming language. For example, take a list of things and pass it through a filter function to pick what you want.
My feeling is that in traditional business apps you don't have much need for these random searches over huge amounts of data. It's more about taking hold of one piece of the graph and then navigating to related objects (dummy example: search for a customer, then start looking at that customer's orders).
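The dummy example in code, as a rough F# sketch (types and names invented here) - the "query" is just ordinary list functions over the in-memory graph:

    type Order = { Id: int; Total: decimal }
    type Customer = { Name: string; Orders: Order list }

    // Search for a customer, then navigate to that customer's orders -
    // no SQL involved, just walking the object graph in memory.
    let bigOrdersFor name customers =
        customers
        |> List.tryFind (fun c -> c.Name = name)
        |> Option.map (fun c -> c.Orders |> List.filter (fun o -> o.Total > 100m))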
I'm not familiar with Smalltalk, but I'm sure there were ideas about persistence other than just an ORM and a relational database (though it could be that those ideas have been proven bad over the years). I just find it a bit strange that we have been building these things pretty much the same way for the last 20+ years. Meanwhile the hardware has changed a lot: even with a quite small corporate hardware budget you are looking at half a terabyte of RAM and tens of cores in a single server.
It's the basis of most Zope apps, including Plone, but also many other applications; there's not much talk about it because it just works. Like in the comment I made below: if it's a good fit, the ZODB is quite literally worth its weight in gold.
The database itself doesn't do that much (apart from giving you transparent object/application persistence, transactions and MVCC :), as one would expect.
The Zope people did an excellent job of modularising it (or, in other words: it is by design extremely modular), so there are a bunch of packages commonly used around it (e.g. BTrees, zodburi, often either ZEO or RelStorage, and stuff like zope.container).
(Historically it's also very interesting - development started in 1997! The revision history is an interesting read, too, many familiar names pop up, including GvR)
a. Transactional systems [1], based around actions that need to be recorded, typically for commercial or legal reasons, like software used to record your purchases, payments, reservations, participation, interactions, obligations, results, commitments, plans, and significant events.
b. Non-transactional continuous systems, like software controlling an elevator, warehouse conveyor system, or a vehicle engine management system.
The Tezos project is writing a new safety-focused blockchain implementation in OCaml. Would you classify that as a representational system or otherwise?
I don't know anything about Tezos or blockchain implementations, sorry, but I suspect that it is primarily focused on the processing of data, rather than the modeling of the concepts in a domain – a specific area of activity or knowledge.
As such it is more of a Type 2B project - "Information, data science and analysis projects, systems often focused on the processing of data." (http://aryehoffman.com/entry/project-types)
Most mainstream languages don't even have sum types, but have to encode them via other means (enums, tags, inheritance). They are not necessarily the best choice for modelling data, just your preferred choice.
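For anyone who hasn't met sum types: in F# the whole thing is one declaration (sketch below), whereas the encodings above - an enum tag plus nullable fields, or an inheritance hierarchy - spread the same information across several classes and leave invalid combinations representable.

    // A sum type: a Shape is exactly one of these, with exactly this data.
    type Shape =
        | Circle of radius: float
        | Rectangle of width: float * height: float

    let area shape =
        match shape with
        | Circle r -> System.Math.PI * r * r
        | Rectangle (w, h) -> w * h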
From the description, 2(b) sounds like a strict superset of 1(a). In fact, it sounds like the most generic possible description of an IT system. I mean, what computer system is not focused on the processing of data?
Anyway, I asked you about a blockchain technology because it fits perfectly into your description of a representational technology. In fact it's a breakthrough in the space, and I'm finding it hard to believe that an IT professional doesn't have even a basic idea about it.
There's not enough detail here to really comment. I can say that Haskell has the best software transactional memory implementation out there, thanks to purity and controlled side-effects. Haskell's explicit handling of state also helps provenance and keeping state managed / in the database.
> I know a lot of people don't want to hear this but these types of languages, functional first, are the future of our industry.
Mainstream languages will incorporate functional features and remain popular, and they will not be superseded by pure functional languages. Java and C# are already doing this.
They will still remain generally more painful and more exposed to their own issues and choices (see the sketch after this list):
* mutable by default
* OO by default
* null by default
* structural equality a pain to implement
* immutable types a pain to implement
* verbose syntax / failing at the DRY principle
* statement based rather than expression based
* large codebase following those idioms
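To show what the first few bullets mean in practice, a small F# sketch (my own example): a single record declaration is immutable, structurally comparable and non-nullable by default, each of which takes deliberate extra work under the mainstream defaults.

    type Point = { X: int; Y: int }      // immutable by default

    let a = { X = 1; Y = 2 }
    let b = { X = 1; Y = 2 }
    printfn "%b" (a = b)                 // true: structural equality for free

    // let p : Point = null              // does not compile: no null by default
    let moved = { a with X = 3 }         // expression-based update; 'a' is untouched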
I don't think they'll stop being popular, but I think a bigger share of people will eventually "get it": there are alternative approaches that are sound, with the same or greater potential, and a thriving ecosystem.
I am aware of them; my comment was more a kind of heads-up, because I have been in too many meetings about technology adoption whose presenters thought an endless list of features was the best approach.
A really reduced codebase size, and features like type providers which increase safety and save you from writing boilerplate code, do help the bottom line, especially in maintenance.
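For instance, a type provider sketch using FSharp.Data (assuming the FSharp.Data NuGet package; the sample JSON is mine): the schema is inferred at compile time, so there is no hand-written DTO, and a typo in a field name is a compile error rather than a runtime surprise.

    open FSharp.Data   // requires the FSharp.Data package

    // The provider infers the schema from the sample at compile time.
    type Person = JsonProvider<"""{ "name": "Ada", "age": 36 }""">

    let p = Person.Parse("""{ "name": "Grace", "age": 45 }""")
    printfn "%s is %d" p.Name p.Age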
Also, if you are using vanilla C# for math computation, you can save CPU/energy by translating it to F#; it tends to run significantly faster, which also impacts the bottom line in some contexts.
The other thing is the footprint of the codebase: despite almost 15 years of using C#, after only two years of F# I consistently write code that is 1/3 the size of the equivalent C#, while staying idiomatic enough that any F# developer would grasp it.
There are routes a C# aficionado could take to be less verbose, but those solutions won't be considered idiomatic by 99% of C# developers.
At the end of the day, codebase size matters a lot.
I disagree. It is the frontend that made me want to learn functional programming in earnest. Redux and ImmutableJs were the catalysts.
I'm currently learning a couple of LISP variants (Clojure and LFE). With ClojureScript, Elm, PureScript, BuckleScript, and even plain ole ES6, functional on the frontend is more compelling than ever.
I disagree too. Frontend is getting more complex every day, and I find that functional programming languages help a lot with large projects. Frontend SPAs are crying for help with the exploding complexity :)
Good point on the complexity of today's web apps. I'm not really into web front-end, and my comment was more about native frontend development, where you need to add/remove things from the screen, run animations - you know, the sort of things that are inherently non-functional. Sorry for not being clearer.
So, if we only want to call a couple of animations and submit a form, I agree that a small script with three jQuery-style functions can do the job.
But that's rarely the case nowadays, in my experience. Almost every frontend project I came across professionally became a full application at some point.
There's the need to deal with concurrent user interactions, online data requests to the server, non-traditional form behaviors, different routes...
I think there's a typo in the createQty function: shouldn't it return (uint16 0) if n is less than 0, not greater?
Otherwise, great article. This is exactly how I learned to program with Scheme: model the domain carefully, slowly build up helper functions, and compose at the end.
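For reference, a guess at the intended guard - hedged, since I don't have the article's exact signature in front of me:

    // Presumably the clamp should fire on negative input, not positive:
    let createQty (n: int) : uint16 =
        if n < 0 then uint16 0 else uint16 n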
It'd be especially nice if database schemas mapped to F# types more readily. JSON DBs kinda do it, but with no defined schema and no joins. It seems like it wouldn't be terribly difficult to augment regular SQL databases with something akin to the F# type system to define table schemas, rather than just flat data in columns. Then "SQL" could be extended to include `match` expressions.
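Roughly the shape of that idea (the extended-SQL part below is pure invention on my part, just to show what I mean):

    // An F# sum type that today has to be flattened into nullable columns:
    type Contact =
        | Email of address: string
        | Phone of number: string

    // Hypothetical extended SQL, if tables could carry such a schema directly:
    //   SELECT match contact with Email a -> a | Phone n -> n FROM customers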
I've taken the approach of converting my models to XML, mostly because they map 1:1 from F#-land to XML-land, and I can use SQL Server's XML indexes, but most importantly, I can use XSLT to evolve my persisted models upon changes to the F# code.
Then using a bit of F# Quotations, I can convert most match queries to XPath queries and query SQL Server directly (still work-in-progress).
But running into performance problems, so it's not 100% done.
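The quotation-to-XPath step might look something like this minimal sketch (hugely simplified - it handles exactly one query shape, and the real translation is the hard part):

    open Microsoft.FSharp.Quotations
    open Microsoft.FSharp.Quotations.Patterns

    type Person = { Name: string }

    // Translate <@ fun (p: Person) -> p.Name = "foo" @> into //*[Name='foo']
    let toXPath (q: Expr) =
        match q with
        | Lambda (_, Call (None, mi, [ PropertyGet (_, prop, []); Value (v, _) ]))
            when mi.Name = "op_Equality" ->
            sprintf "//*[%s='%O']" prop.Name v
        | _ -> failwith "unsupported query shape"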
Yeah, I did the first part a couple years ago too. I also put the whole thing behind a VCS so we'd have version-controlled data. But, yeah, it didn't scale well enough and I ended up dropping that for mongodb, and later for postgres.
Here is a version of basket add-to with a recursive local function for doing the insert, avoiding the clumsy mapcar and "did we insert or not" check copied from the F# code.
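Sketched here in F# with invented names (the "mapcar" mention suggests the original was Lisp), the recursive-insert shape is: bump the quantity if the item is already in the basket, otherwise append it - one recursive pass, no separate "inserted?" flag.

    let addToBasket item qty basket =
        // Recursive local function: rebuilds the list up to the matching
        // line, or appends a new line when the end is reached.
        let rec insert lines =
            match lines with
            | [] -> [ (item, qty) ]
            | (i, q) :: rest when i = item -> (i, q + qty) :: rest
            | line :: rest -> line :: insert rest
        insert basket

    // addToBasket "apple" 2 [ ("pear", 1) ]  =  [ ("pear", 1); ("apple", 2) ]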