This is a nicely written essay and, I think, completely wrong. (One person's experience here, just as a disclaimer.)
I've interviewed a lot of functional candidates in a decade-long stint of functional programming professionally. My interview approach is always the same. All practical exercises, no leetcode here. You can do the exercises in the language you're most comfortable in.
If I had to pick a language that predicted you'll do poorly on a practical interview exercise, I would pick F# every time. For some reason the candidates just do not do well. Now I can think of some confounders - maybe the types of people who would apply to a job with some dynamic programming requirements and people who are good at F# just have no overlap. I've thought about that.
But it seems like this line
> Pragmatists don't make technology decisions on the basis of what is better. They prefer the safety of the herd.
Implies that if someone just saw some code in F# and realized what people can do in it, they would be super impressed. I have not found that to be the case. If that's a general problem and not just a quirk of my personal experience, that has to be fixed first.
That's weird. I use F# because of its pragmatism. Every other language tacks on feature after feature. I would say my F# code is boring, which is what I love about F#. And it's especially boring compared to other functional languages. The other pragmatic functional languages are Erlang and Elixir.
I would consider most popular languages, like C#, Java, and Python, decidedly unpragmatic as languages. There are way too many hoops to jump through to concisely describe the problem domain. In F#, I define some types that describe the domain, write some functions, and move on. It's that easy.
I think F# programmers lack that range of techniques because they get comfortable in the eager-execution, type-safe world and stay there with no particular reason to learn dynamic programming techniques. There is also the effect that F# allows less advanced functional programmers to be productive, so in a random sample of currently active functional programmers, the F# programmer is less likely to be advanced.
Scala developers were referred to as Java refugees, Swift developers as Objective-C refugees, and F# developers as C# refugees. A weird side effect of Microsoft doing a better job with C# is that there's less of a push to F#. Plus F#, by virtue of being in DevDiv, had its core value proposition (OCaml on .NET) undermined by the WinDev-vs-DevDiv internal battles that tried and failed to kill .NET.
I have been programming for 20 years, and yet despite having used dynamic languages I don’t actually know what it means to leverage dynamic programming techniques. For instance, I’ve never encountered a JavaScript codebase that I have thought couldn’t benefit from just being statically typed with Typescript. I get the impression that dynamic programming, besides the odd untyped line of code, is best used only for extremely specific cases?
The problem is the word dynamic is overloaded, and I'm not at all sure which one your parent comment meant.
"Dynamic programming" traditionally has nothing to do with dynamic languages but is instead a class of algorithms that are "dynamic" in the sense that they represent time.[0] This might be what your parent was referring to because these algorithms lend themselves well to Haskell's lazy evaluation, and they reference F# as being eager.
That said, they also talk about F# as being type safe, so they could also be referring to dynamic programming languages. The grandparent was definitely referring to this one, but "dynamic programming techniques" sounds much more like the algorithmic meaning.
To be clear, I wasn't referring to 'dynamic programming' but, as you say, the use of a dynamic language, or programming without types, mimicking what I assumed the original poster I replied to meant.
My guess is that interviewees wishing to return to the typed world they are comfortable with would first try to type the JSON they are working with. Given that the JSON is messy, this could be an unbounded amount of work that is unlikely to pay off within the span of an interview.
Ok that is very confusing because "dynamic programming" is a very specific thing, and also super popular in leetcode questions. Maybe half the questions on leetcode.com involve dynamic programming.
It has absolutely nothing to do with dynamically typed programming. It's also a really terrible name for what is essentially caching.
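If it helps, here's a minimal illustration of that caching in F# (memoized Fibonacci, just a sketch):

    open System.Collections.Generic

    // "Dynamic programming": cache each subproblem's answer so it's computed once.
    let fib =
        let cache = Dictionary<int, bigint>()
        let rec go n =
            if n < 2 then bigint n
            else
                match cache.TryGetValue n with
                | true, v -> v
                | _ ->
                    let v = go (n - 1) + go (n - 2)
                    cache.[n] <- v
                    v
        go

Without the cache this is exponential; with it, each n is solved exactly once. Nothing about the types is dynamic.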
By untyped I assume you mean dynamic languages? In some contexts it’s not convenient to lug around a type checker (embedded languages, for example). Other times, if doing macro-heavy programming (Lisp, Forth), it’s hard to build a type system that can properly type check the code or resolve the implicit types in a reasonable amount of time.
In the context of JSON you can work on it without types from a typed language. It’s just that, as a force of habit, coders may choose to spend time adding types to things when they shouldn’t.
> For instance, I’ve never encountered a JavaScript codebase that I have thought couldn’t benefit from just being statically typed with Typescript
That's the type bias. If you look at a non-typed codebase, it always feels like it will be better with types. But if you had a chance to go back in time and start the same codebase in Typescript, it would actually come out way worse than what you have today.
Types can be great when used sparingly, but with Typescript everyone seems to fall into a trap of constantly creating and then solving "type puzzles" instead of building what matters. If you're doing Typescript, your chances of becoming a product engineer are slim.
There is much naïveté among the strongly typed herd. When tasked to create glue code translating between two opposing type systems, which is a very common data engineering task, reaching for a strongly typed language is never the best option for code complexity and speed of development. Yet the hammer will often succeed if you hit hard enough and club the nail back to shape when you invariably bend it.
> There is much naïveté among the strongly typed herd.
Is exactly the reverse also true? Let me try: "There is much naïveté among the weakly typed herd." For every person who thinks Python or Ruby can be used for everything, there is another person who thinks the same for C++ or Rust.
Also, the example that you gave is incredibly specific:
> When tasked to create glue code translating between two opposing type systems, which is a very common data engineering task
Can you provide a concrete example? And what is "data engineering"? I never heard that term before this post.
I'm a data engineer. It's a fairly new role, so it's not well defined yet, but most data engineers write data pipelines to ingest data into a data warehouse and then transform it for the business to use.
I'm not sure why using a static language would make translating data types difficult, but I add as many typehints as possible to my Python so I rarely do anything with dynamic types. I guess they're saying for small tasks where you're working with lots of types, when using a static language most of your code will be type definitions, so a dynamic language will let you focus on writing the transformation code.
Thanks for the reply. Your definition of data engineer makes sense. From my experience, I would not call it a new role. People were doing similar things 25 years ago when building the first generation of "data warehouses". (Remember that term from the late 1990s!?)
I am surprised that you are using Python for data transformation. Isn't it too slow for huge data sets? (If you are using C/C++ libraries like Pandas/NumPy, then ignore this question.) When I have huge amounts of data, I always want to use something like C/C++/Rust/C#/Java to do the heavy lifting because it is so much faster than Python.
Yes, it's definitely a new word for an old concept, same as the term data scientist for data analyst or statistician.
I find Python is fast enough for small to medium datasets. I've normally worked with data that needs to be loaded each morning or sometimes hourly, so whether the transformation takes 1 minute or 10 minutes it doesn't matter. The better way is of course to dump the data into a data warehouse as soon as possible and then use SQL for everything, so I only use Python for things that SQL isn't suited for, like making HTTP requests.
Using a static language to manipulate complex types, particularly those sourced from a different type system (say complex nested Avro, SQL, or even complex JSON) is much more awkward when the types cannot be normalized into the language automatically as can be done with dynamic languages. Static languages require more a priori knowledge of data types, and are very awkward at handling collections with diverse type membership. Data has many forms in reality -- dynamic languages are much more effective at manipulating data on its own terms.
You realize every single thing that dynamically-typed languages can do with data types, statically-typed languages can do too? Except when it matters, they can also choose to do things dynamically-typed languages can't.
Lots of people assume static typing means creating domain types for the semantics of every single thing, and then complain that those types contain far more information than they need. Well, stop doing that. Create types that actually contain the information you need. Or use the existing ones. If you're deserializing JSON data, it turns out that the deserialization library already has a type for arbitrary JSON. Just use it, if all you're doing is translating that data to another format. Saying "this data is JSON I didn't bother to understand the internal content of" is a perfectly fine level to work at.
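For instance, in F# with System.Text.Json, a translation pass can stay at the arbitrary-JSON level the whole time; the field names below are hypothetical:

    open System.Text.Json.Nodes

    // Work at the "arbitrary JSON" level: no domain types, just JsonNode.
    let translate (input: string) : string =
        let node = JsonNode.Parse input
        // Copy one hypothetical field under a new name; everything else
        // passes through untouched, content not understood and not needing to be.
        let name = node.["name"].GetValue<string>()
        node.["displayName"] <- (JsonValue.Create name :> JsonNode)
        node.ToJsonString()

That's exactly the "JSON I didn't bother to understand" level of work, in a statically typed language.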
About monkeypatching, perhaps we have different definitions. From time to time, I need to modify a Java class from a dependency that I do not own/control. I copy the decompiled class into my project with the same package name. I make changes, then run. To me, this is monkeypatching for Java. Do you agree? If not, how is it different? I would like to learn. Honestly, I discovered that Java technique years ago by accident.
Another technique: While the JVM is running with a debugger attached, it is possible to inject a new version of a class. IDEs usually make this seamless. It also works when remote debugging. Do you consider this monkeypatching also?
> You can’t do monkeypatching or dynamically modify the inheritance chain of an object in a statically typed language.
There's no theoretical reason you can't. No languages that I know of provide that combination of features, because monkey-patching is a terrible idea for software engineering... But there's no theoretical reason you couldn't make it happen.
I think you've conflated static typing with a static language. They're not the same thing and can be analyzed separately.
So how would a statically typed language support conditionally adding methods at runtime? Let's say the code adds a method with name and parameters specified by user input at runtime. How could this possibly be checked at compile time?
You could add methods that nothing could call, sure. It would be like replacing the value with an instance of an anonymous subclass with additional methods. Not useful, but fully possible. Ok, it would be slightly useful if those methods were available to other things patched in at the same time. So yeah, exactly like introducing an anonymous subclass.
But monkey-patching is also often used to alter behaviors of existing things, and that could be done without needing new types.
You would need another feature in addition: the ability to change the runtime type tag of a value. Then monkey-patching would be changing the type of a value to a subclass that has overridden methods as you request. The subclasses could be named, but it wouldn't have much value. As you could repeatedly override methods on the same value, the names wouldn't be of much use, so you might as well make the subclass anonymous.
In another dimension, you could use that feature in combination with something rather like Ruby's metaclasses to change definitions globally in a statically-typed language.
I can't think of a language that works this way currently out there, but there's nothing impossible about the design. It's just that no one wants it.
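That said, F#'s object expressions already give you the anonymous-subclass half at the point of use, just without the retroactive patching. A sketch (the Greeter class is made up):

    // An anonymous subclass overriding a method, statically typed throughout.
    type Greeter() =
        abstract Greet: string -> string
        default _.Greet name = sprintf "Hello, %s" name

    let patched =
        { new Greeter() with
            member _.Greet name = sprintf "Yo, %s" name }

    printfn "%s" (patched.Greet "world")  // prints "Yo, world"

What's missing is exactly the piece described above: changing the type tag of an already-existing value.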
In a dynamic language, everything is only defined at runtime.
Given that, a sketch of a statically-typed system would be something like... At the time a definition is added to the environment, you type check against known definitions. Future code can change implementations, as long as types remain compatible. (Probably invariantly, unless you want to include covariant/invariant annotations in your type system...)
This doesn't change that much about a correct program in a dynamic language, except that it may provide some additional ordering requirements in code execution - all the various method definitions must be loaded before code using them is loaded. That's a bit more strict than the current requirement that the methods must be loaded before code using them is run. But the difference would be pretty tractable to code around.
And in exchange, you'd get immediate feedback on typos. Or even more complex cases, like failing to generate some method you had expected to create dynamically.
Ok, I can actually see some appeal here, though it's got nothing to do with monkey-patching.
I love using "mixed" dynamic/static typed languages in these scenarios... you can do that data manipulation without types, but benefit from types everywhere else... my two favourite "mixed" languages are Groovy on the JVM, and Dart elsewhere... Dart now has a very good type system, but still supports `dynamic` which makes it as easy as Python to manipulate data.
A major problem with doing data transformation in statically typed languages is that it's easy to introduce issues during serialization and deserialization. If you have an object

    class MyDTO {
        public string Name;
        public string Value;
    }

    var myObjs = DeserializeFromFile<MyDTO>(filepath);
    SerializeToFile(myObjs, filePath2);

and the file at filepath contains a field (say extraProperty) that MyDTO doesn't declare, then filePath2 would end up without the extraProperty field.
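One common mitigation is to capture unknown fields explicitly so the round trip is lossless. A sketch with System.Text.Json (the Extra property name is my own):

    open System.Collections.Generic
    open System.Text.Json
    open System.Text.Json.Serialization

    type MyDTO() =
        member val Name = "" with get, set
        member val Value = "" with get, set
        // Any fields not declared above land here and survive re-serialization.
        [<JsonExtensionData>]
        member val Extra = Dictionary<string, JsonElement>() with get, set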
You can also write code like
    function PrintFullname(person) {
        WriteLine(person.FirstName + " " + person.LastName);
    }
And it will just work so long as the object has those properties. In a statically typed language, you’d have to have a version for each object type or be sure to have a thoughtful common interface between them, which is hard.
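(To be fair, F#'s statically resolved type parameters do cover this structural case; a sketch using F# 7's member-constraint syntax:)

    // Works for any type exposing string FirstName/LastName properties,
    // checked at compile time, no shared interface required.
    let inline printFullName<'a when 'a: (member FirstName: string) and 'a: (member LastName: string)> (person: 'a) =
        printfn "%s %s" person.FirstName person.LastName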
All that being said, I generally prefer type safe static languages because that type system has saved my bacon on numerous occasions (it’s great at telling me I just changed something I use elsewhere).
You can write code in a statically typed language that treats the data as strings. The domain modelling is optional, just choose the level of detail that you need:
1. String
2. JSON
3. MyDTO
If you do choose 3, then you can avoid serde errors using property-based testing.
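A sketch of that last point, assuming FsCheck and a serializer that understands F# records (System.Text.Json on .NET 6+ does):

    open System.Text.Json
    open FsCheck

    type MyDTO = { Name: string; Value: string }

    // Property: serializing then deserializing any MyDTO yields the original.
    let roundTrips (dto: MyDTO) =
        JsonSerializer.Deserialize<MyDTO>(JsonSerializer.Serialize dto) = dto

    Check.Quick roundTrips

FsCheck generates the DTO values for you, so lossy round-trips like the extraProperty case above get flushed out without hand-written fixtures.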
"Most" (I mean "all", but meh - I'm sure there's some obscure exception somewhere) parsers will have the ability to swap between a strict DTO interpretation of some data, and the raw underlying data which is generally going to be something like a map of maps that resolves to strings at the leaf nodes. Both have their uses. The same can also be done easily enough by hand as well, if necessary.
If you are truly interested in understanding my point of view -- a great way to do it would be to learn how to use this Clojure DSL: https://github.com/redplanetlabs/specter
You could also think about why Nathan Marz may have bothered to create it.
As for data engineering, I think ChatGPT could tell you a lot, and its training data goes up to 2021.
As someone who tried very hard to incorporate specter into their speech-to-text pipeline, I feel compelled to point out, it gave me a lot of NullPointerExceptions while learning. I don't think it's a great example of the value of dynamically-typed langs.
In retrospect, Marz's hope that specter might get incorporated in clj core was wildly optimistic (even if the core team wasn't hostile to outsider contributions), because it feels like he built it to his own satisfaction, and never got around to removing the sharp edges that newcomers cut themselves on.
It's a shame, because I think specter is a cool idea, and would love to see a language based on its ideas.
They keep trying to kill .NET, just check how much WinDev keeps doubling down on COM and pushing subpar frameworks like C++/WinRT.
One would expect that by now, out-of-process COM would be supported across all OS extension points; instead it is still pretty much in-process COM, with the related .NET restrictions.
Then there is the whole issue that since Longhorn, most OS APIs are based on COM (or WinRT), not always with .NET bindings, and even VB 6 had better ways to use COM than .NET in its current state (.NET Core lost COM tooling).
Doesn't look to me like they're trying to kill .NET at all. Maybe F# in particular isn't getting the love and attention it deserves, but they'd have to be mental to be actively trying to kill off something as popular as .NET.
Kill in the sense that from WinDev point of view, the less .NET ships on Windows the better.
In case you aren't aware, WinRT basically marks the turning point: the ideas started with Longhorn being rewritten into COM.
With WinRT, they basically went back to the drawing board of Ext-VOS, a COM based runtime for all Microsoft languages, hence why .NET on WinRT/UWP isn't quite the same as classical .NET and is AOT compiled, with classes being mapped into WinRT types (which is basically COM, with IInspectable in addition to IUnknown and .NET metadata instead of COM type libraries).
Mostly driven by Steven Sinofsky and his point of view on managed code.
This didn't turn out as expected, but still the idea going forward is to make WinRT additions to classical COM usable in Win32 outside the UWP application identity model.
"Turning to the past to power Windows’ future: An in-depth look at WinRT"
Beyond WinRT, .NET (Core) has supported all the raw COM, including COM component hosting since at least .NET Core 3.0. It's Windows-Only, of course, to use that, but that should go without saying. It's mostly backwards compatible with the old .NET Fx 1.0 ways of doing COM and a lot of the old code still "just works". .NET has proven that with .NET 5+ and all the many ways it (again) supports raw Win32 fun in the classic WinForms ways. (And all the ways that even .NET Fx 1.0 code has a compatibility path into .NET 5.)
It would have been nice if Windows had more strongly embraced .NET, but WinRT is still closer to .NET in spirit than old raw COM anyway.
.NET Core doesn't do COM type libraries like the Framework does, you are supposed to manually write IDL files like in the old days.
Additionally, the CCW/RCW infrastructure is considered outdated, and you are supposed to use the new, more boilerplate-heavy COM APIs introduced for COM and CsWinRT support.
Lots of changes, with more work, for little value.
This seems to be implying that F# programmers do 'poorly' because the language 'protects' them. But isn't that good? Why have a language that purposely tries to trip you up (dynamic), where you're a 'good' programmer only if you don't get tripped? This seems to be rewarding people who are good at using a bad language. If using F# means you don't learn the techniques for coping with other bad languages, that doesn't make the programmer bad; it just means F# is more seamless.
Indeed, I’m suggesting an alternative explanation for the given observations, based on the absence of a strong selection bias. I’m of the strong opinion that F# is a great language and that people of different levels of skill can be productive in it, as opposed to a C++/Lisp combo where only the most careful programmers get to keep both of their feet.
F# is a slice through the language design space that optimizes for developer productivity; other languages with different design choices are indeed better for other things.
I think there is a benefit to learning ‘bad’ languages, as they teach you about the different design trade-offs that are available. A person with dynamic-language experience would have known that the given JSON task was tractable without types, and would not have started the task with a ‘type all the things’ mentality.
I disagree on almost every count: it's a badly written essay, but it makes a valid point.
> I've interviewed a lot of functional candidates in a decade-long stint of functional programming professionally. My interview approach is always the same. All practical exercises, no leetcode here.
Does this mean that you're measuring, like, how fast someone can deploy a CRUD webapp in the given language? I can imagine F# would do poorly on that kind of metric; it's optimized for maintainability and doesn't take unprincipled shortcuts, whereas something like Ruby lets you type one line and it will blat out a bunch of defaults via unmaintainable magic.
> Implies that if someone just saw some code in F# and realized what people can do in it, they would be super impressed. I have not found that to be the case. If that's a general problem and not just a quirk of my personal experience, that has to be fixed first.
I don't think it's a general problem; almost everyone who spends any significant time working in F# returns to C# reluctantly if at all. It's already a "better" language in the sense of how nice it is to program in and how productive a programmer feels when doing so. But those aren't the metrics that matter.
That's interesting, because I've tried to use F# several times, and never really felt comfortable. I've written in a bunch of different languages and F# is probably the most disappointing because of how much I want to like it.
I feel like F# deceptively presents itself as simple, when in reality it is closer to C# in complexity. I've written in languages that are actually simple (tcl) and it is a joy. I've written in languages that are unashamedly complex (Wolfram language) and that is also fun. But F# occupies that weird middle ground where it seems easy to do what you want, but for some reason you trip over your own shoelaces every time you take a step.
I wrote F# for a long time, and there were definite phases to learning and becoming comfortable with it. For example, it's often pitched as a functional language, but in reality it's a functional-first hybrid language on the .NET framework; to be efficient with it is to embrace this and write imperatively when you need to.
Purely syntax-wise, it's pretty nice, though having a single-pass compiler makes it feel a bit dated compared to other languages.
Forcing linear dependence of files and definitions is considered a feature of F#. In codebases that allow out of order definitions, things get wild real quick.
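A tiny illustration of what the compiler enforces (the functions are made up):

    // Definitions must appear before use, top of file downward.
    let double x = x * 2
    let quadruple x = double (double x)  // ok: double is already in scope

    // Mutual recursion must be opted into explicitly with `and`.
    let rec isEven n = n = 0 || isOdd (n - 1)
    and isOdd n = n <> 0 && isEven (n - 1)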
I found it rather annoying that you cannot organise your code to be readable from top to bottom in a file, going from the big picture to finer details, which I think is much easier for humans.
That is a matter of habit and not something that is "easier for humans", I think. I've written enough OCaml, which also enforces linear dependence of modules and definitions, that I often find it jarring to read code the other way around and I'd be lost without an IDE that lets me jump to definitions.
I'm not particularly attached to order of definitions within a module, but I definitely like the linear ordering of modules. Without the compiler enforcing that you always end up with cross dependencies between modules, which I think makes it much harder for humans to read the code.
I've had similar conversations about variable shadowing. I find it natural and it bothers me when I can't do it, especially in functional languages, but I have a friend who really doesn't like it and finds it jarring. I think his dislike stems from having learned Erlang before any language that has shadowing, because in Erlang when you reuse a variable name it'll be checking for equality with the value you're "assigning" it to.
I have not found that to be an issue at all. Most F# modules are just types at the top and then functions. This generally does mean that F# modules start with the big picture (i.e., the types) and then move to the details (i.e., functions). If a module is doing something more complicated than that, then I think it's a problem of the module's design likely being too broad.
> In codebases that allow out of order definitions, things get wild real quick.
As I understand it, Java allows this. You can effectively have circular dependencies. If true: I have worked on multiple million-plus-line Java projects in my career, and none of them got "wild real quick". Also, C++ has forward type declarations to effectively allow the same. Again, there are ginormous projects like Google Chrome and Firefox written in C++.
It can be an issue in C#. I've seen some mad circular dependencies between files. The file hierarchy doesn't always reflect the namespace hierarchy, which sometimes hardly reflects the actual code dependencies at all, and untangling the messes that result from that can be a big deal.
I'm not sure a project gets into that state "real quick", but it remains something that projects can do over time, sometimes without realizing it, especially when a DI abstraction makes it even less obvious how circular a project's dependencies really are.
I'm not sure why those things are necessarily a problem, particularly with IDEs, or at least editors, that can parse a language's symbols. We're not programming in the dark ages. Do you have a specific example of why these are an issue?
I think wild real quick might be a bit extreme, but cyclomatic complexity[1] is a source of unnecessary complexity in software. While not a problem per se, all unnecessary complexity adds up, leading to the eventual death of the project.[2]
Curious about what one of your typical "practical exercises" looks like?
I'm not a great fan of f# (it's not rational - but every time I've tried to dip my toes, something in the syntax has felt... Tedious? In a way that for example StandardML does not). But it still seems like an eminently powerful and pragmatic language, so I'm surprised by your experience.
Fascinating that fsharpers trip up on ad-hoc JSON wrangling. To be fair, I used to agonize over nested lists/association lists in Lisp, rather than doing the "professional" thing and YOLO-assuming that three levels down, behind :user, there's a fifth element :email...
I'd love to know what dynamic programming exercises you're asking interviewees to complete in the timespan of an interview that wouldn't show up on LeetCode.
I think they meant it in the literal sense; "programming requirements that are dynamic", not dynamic programming algorithms you'd use for the knapsack problem :)
It involves a lot of preparation on my part. The setup for the candidate is usually a small service that does some trivial thing in our business domain. The interview problem is usually making a script to manipulate the data, or serving an API endpoint that calls my API and transforms the data to match a certain output shape.
Interesting. So is actually executing the script, server, or API call without errors part of the interview? Is that what makes them "practical" or is it because the data structures and algorithms are related to your business domain?
Not trying to nitpick, your comment just piqued my curiosity because you made the point of distinguishing your exercises from leetcode and also stated that those who chose F# were generally poor performers.
It is, but I’m very forgiving of scripts/services that don’t run right the first time if it’s clear the logic is on the right track. If your logic is sound and you’re stuck on an esoteric error, I usually count that the same as completing the exercise. (There have been cases where the person shows no debugging ability at all, which I do treat as a problem. But if you’re reading the error and there’s just not enough time for a fix, eh you were close enough.)
jacamera is on point. Perhaps it's not "leetcode" but it's a seemingly one-dimensional, time-constrained quiz on a specific skill, parsing messy JSON (your words), with you as the sole judge. Personally from experience, when I approach a new API, I recognize it's likely to take a few iterations, based on how that API's data integrates with the rest of my program, to determine the best way to deserialize the API's responses into objects. If I've only got 45 minutes, yea, I'm just going to map it quick and dirty and it's going to look ugly.
Your observation about F# may be valid, but this does appear to be a test of one specific use case for a language, not how productive people are when building entire applications with them.
Why did you take from my comment that parsing messy JSON is the only part of the exercise? It’s just the part that we seem to get stuck on with F# candidates, not the only thing I ask about.
Parsing realistic (messy) JSON, usually. I would have expected F# to shine at that due to type providers, so it was doubly surprising to me. The F# candidates I've seen spend most of the interview manipulating the data.
This is likely because people don't have enough real-world experience with functional programming paradigm techniques like 'parse, don't validate' and modelling only the parts needed to handle dynamic input, e.g. as shown here https://lexi-lambda.github.io/blog/2020/01/19/no-dynamic-typ...
It's unfortunate that they decided to pick F# on such problems when they didn't have the (mental) toolkit to tackle them, but I think it speaks more to people being really eager to communicate their enthusiasm for FP. I wouldn't try to ascribe any higher meaning to it.
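To make 'parse, don't validate' concrete, here is a minimal sketch with System.Text.Json: model only the fields you need and turn loose input into a domain value once, up front (field names hypothetical):

    open System.Text.Json

    type Contact = { Name: string; Email: string }

    // Parse straight into the narrow domain type; ignore everything else.
    let parseContact (json: string) : Result<Contact, string> =
        use doc = JsonDocument.Parse json
        let root = doc.RootElement
        match root.TryGetProperty "name", root.TryGetProperty "email" with
        | (true, name), (true, email) ->
            Ok { Name = name.GetString(); Email = email.GetString() }
        | _ -> Error "missing name/email"

After this point the rest of the program only ever sees Contact, and the messiness of the input is dealt with in one place.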
I've seen the opposite, and have seen some pretty big, fast, professional systems being built in it. Admittedly most of them are closed source, and the community contribution allowed to be made is small. Most of the developers using it just market themselves as C# developers when they change jobs, because of some of the above opinions they see online. I've seen internal speed comparisons against other similarly tiered languages where F# shines with less code too, particularly around math (C# is just starting to catch up here). I've even seen dynamic problems, to your point, solved in it in a professional setting, speeding up very frequently run calculations for a large customer base.
However, judging by the comments in this thread so far, technical capability and efficiency of the tool aren't the only concern. The herd effect can sadly make people nervous, creating a barrier to trying it, especially if their career is on the line. What other people think of it, and their preconceived notions, can matter.
This is, from a distance, fascinating anecdata. I would expect F# itself to excel in this area (type oriented development being a huge selling point), and people who’ve chosen it to be attracted to those parts of F# which should excel for it. I’m often surprised when my at-a-distance expectations don’t meet reality, but not so often astonished by it.
How messy is the JSON exactly? Type providers are awesome when the schema is consistent.
But tbh, from what little I know, I'd be expecting you to expect me to solve the issue from first principles. So using a "and then magic" technique might be something interviewees shy from.
This sentiment is surprising. Doesn't Python crash a lot at run time, specifically because some 'dynamic types' clash? They become a real type at some point, at compile time or at run time. Why wait for the crash to figure it out?
Isn't ad-hoc data wrangling a headline feature of F#? Why are so many people here against F# for reading/manipulating ad-hoc data? That's what it is good for.
I find I spend a lot of time building records for JSON data. Type providers are nice but I've found them to be a little untrustworthy.
Normally now I ask ChatGPT to create an F# record from the JSON, then go through and check everything line by line, redo the types to something sane, and use that.
But if it were a quick script to do a one-off, then a type provider would work well.
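For reference, the type-provider route looks roughly like this; FSharp.Data's JsonProvider infers the record shape from a sample at compile time (the sample JSON here is made up):

    open FSharp.Data

    type Person = JsonProvider<"""{ "name": "Ada", "email": "ada@example.com" }""">

    let p = Person.Parse """{ "name": "Grace", "email": "grace@example.com" }"""
    printfn "%s <%s>" p.Name p.Email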
Type providers don't do well with messy JSON and are finicky at the best of times; they're the last thing I'd rely on during an interview.
Depends on what you mean by messy? Non-conforming JSON? A custom FParsec parser might be able to sensibly extract the data. If it is conforming to JSON then you'd use normal F# code to work with the standard JSON parsers.
> Implies that if someone just saw some code in F# and realized what people can do in it, they would be super impressed. I have not found that to be the case. If that's a general problem and not just a quirk of my personal experience, that has to be fixed first.
Maybe I am mistaken here, but it seems to me that the line you are quoting from the essay implies the opposite of what you said.
> If I had to pick a language that predicted you'll do poorly on a practical interview exercise, I would pick F# every time
As someone that has just spent a while learning F# and really enjoys it, this makes me sad. I hope that I don't have some trait that drew me to F# that also causes me to be less competent.
Don't be sad. This is definitely a skewed perspective from a single person, which makes it worse than useless as a way to gauge a random person's competence.
I'm sure you could find a person who would claim this same exact thing for any language if you searched around a bit.
To derive a useful conclusion you would need to control for all of the factors involved in the interviews and objectively measure their performance compared to other people's, anything less than that is completely useless at best.
I took this post about F# talent to imply that F# has language features that protect against or handle 'some concept', and when the programmer was asked to do something else, they had trouble because they wouldn't normally have had to deal with it in F#. It isn't necessarily a matter of the F# programmer's skill level when dealing with other languages means needing more code to cover the gaps, and knowing those gaps.
You can be a great Java programmer, but being told to go do something in assembly doesn't reflect on your Java skills.
I don’t think you have any good reason to think you’re inherently less competent.
The most important thing is to always be learning. You’ll never be done, embrace that and rejoice in it. Most people more or less stop learning in early adulthood: don’t do that, and you’ll eventually be ahead of the pack.
Are you implying this is an F# problem? What language do candidates do well in?
Is F# too approachable due to its closeness to C#, popular with C# professionals, and therefore approached not so much as a functional programming language but as a C# extension?
I've seen Elixir, Python, Javascript, Java, Scala, Clojure, C#, Go, and more; not sure who is at the top, but F# is 100% at the bottom (just in my personal experience.) I wouldn't chalk it up to being too approachable.
I did think F# had a pretty steep learning curve, so not that approachable. But how did it compare to Scala/Clojure? Those seem to have a similar problem with approachability and adoption.
Just curious, what kind of career did you have that enabled you to have a long stint as a functional programmer? The fact that you don't use leetcode also makes it sound like you're not at a FAANG, which makes it more intriguing.
I wonder if that is because people who pick obscure languages are more interested in looking cool than getting the job done.
Like in an interview setting, even if they say you can choose any language, it is probably a safer bet to go with something the interviewer has probably seen before.