ReSharper has proven to be a great way for me to start learning and integrating new language features. I code in my original style and ReSharper suggests changes based on new features like pattern matching. At first I didn't see the benefit, but this is a hugely powerful feature once you start to wrap your mind around it.
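For anyone who hasn't seen it, the kind of rewrite ReSharper suggests looks roughly like this (a minimal sketch; the method and class names are made up):

```csharp
using System;

static class PatternDemo
{
    // Old style: an explicit type check followed by a separate cast.
    public static string DescribeOld(object o)
    {
        if (o is string)
        {
            var s = (string)o;
            return $"string of length {s.Length}";
        }
        return "something else";
    }

    // The suggested rewrite: the type pattern tests and casts in one step.
    public static string DescribeNew(object o) =>
        o is string s ? $"string of length {s.Length}" : "something else";
}
```

Both methods behave identically; the pattern just removes the redundant cast.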
I have clocked way more hours in Rider than VS. I can flawlessly run and debug our decades-old monolith with 40-50 projects and millions of LOC, containing C#, VB, Web Forms, WPF, files with 80k lines...
And all of our .NET Core services on macOS, with all my unix-y tooling.
There is “VS for Mac”, but it’s not really VS. It’s Xamarin Studio rebranded.
Rider works great on my Linux box at home as well, where the alternatives are MonoDevelop and OmniSharp + a text editor.
> There is “VS for Mac”, but it’s not really VS. It’s Xamarin Studio rebranded.
This was only true for the first releases; they have been integrating common code from VS, and this is one of the reasons why VS moved away from COM plugins to .NET ones.
I have actually also switched more or less to only using Ubuntu, but have been using VS Code. It has become quite powerful in my book and I like that it is the same environment I use across React Native and C#. But I may have to have a little look at Rider then.
My only complaint is I can’t ignore auto-gen’d code like EF migrations in dotCover, so my unit test coverage looks terrible. It doesn’t help that I write terrible unit tests, but still!
I run a little powershell script on build that adds the [System.Diagnostics.CodeAnalysis.ExcludeFromCodeCoverage] attribute to generated classes in certain folders.
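The script itself isn't shown, but the end result on each generated class looks something like this (the attribute is real; the class name is just a stand-in for a generated EF migration):

```csharp
using System.Diagnostics.CodeAnalysis;

// The attribute the script injects; dotCover and other coverage tools
// skip anything carrying it, so generated code stops dragging down the numbers.
[ExcludeFromCodeCoverage]
public class AddCustomerTableMigration // stand-in for a generated EF migration
{
    public void Up() { /* generated schema changes would go here */ }
}
```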
I switched to Rider a few months ago but I still feel like I struggle with some things.
For some reason I can't change the position of the debugger when running an ASP.NET Core app and make it go back, so I always have to re-run whatever request I happen to be debugging. Does that happen to you?
It also seems like ReSharper had more tips and tricks going on when compared to Rider, but I can’t imagine that would be true.
As a long-time ReSharper user, I've been meaning to give Rider a close look for some time now. Unfortunately the last project I was on was heavy into WPF, and that was about the only box VS didn't tick.
Any tips, tricks, or pitfalls to avoid when making the switch from VS to Rider?
I used VS for years before switching to Rider and honestly... just jump in, there's not much to know. If you've used ReSharper the features should seem pretty familiar.
It is snappier than VS+R# (and I can't bear to give up all the R# features), it supports emacs key bindings, it has (IMO anyway) much better support for ancillary stuff like directly issuing DB calls or interacting with source control, and I'm just used to the IntelliJ platform after using it for years.
I've moved my laptops/desktops over to Ubuntu from Windows.
Rider was what I thought I'd give a go, with the idea that if it was really terrible I'd fire up a VM with Windows and VS.
As someone who's used Visual Studio since VS 2002, and ReSharper since (probably) 2006 - moving to Rider was pretty much run and go.
There are a few minor differences, but the only one that bugged me was the regional formatting options for the debugger. I still haven't figured out how to set it to use ISO 8601 or something like it for formatting dates/times; instead it picks something based on my OS configuration.
As an aside, if you got Rider as part of your ReSharper subscription, try it out. It is basically a drop-in replacement for Visual Studio, but faster and with many, many little quality-of-life improvements you never knew you needed :)
I've recently switched to Rider full time from VS, so far it's been a really good experience & has fixed the main issues that I was experiencing with VS.
I've found it quite problematic to reliably connect to private NuGet feeds though which is a bit of a pain.
The spell checker is probably the single greatest feature in Rider. It works really well on composite type names. E.g. 'CustomerPreferences' would not get a squiggle under it for spelling, but 'CusomerPreferences' would. Performance is arguably better in most circumstances, but it doesn't support some of the bleeding edge .NET Core functionality as well as VS2019 does.
It also doesn't support some dried-up crusty .NET functionality like Web Forms (it compiles it fine, but doesn't have the code generation tools that VS has).
It's faster, yes, but seems to be a bit more memory-hungry out of the gate than VS+ReSharper.
There is one HUGE thing I miss however - the Package Manager console. There's still a few things I have to do in there and thus have to switch back to VS.
VS is 32-bit, so instead of getting more memory when it needs it, it slows to a crawl and eventually crashes. I think the early releases of each VS version are a bloodbath. VS 2019 just crawled with a small solution simply because there were mixed C# and F# projects.
I should remember to always upgrade after 2 or 3 minor revisions instead of being a guinea pig for MS.
On the other hand, some time ago I hit a bug in Rider where the F# code worked perfectly in VS but didn't work at all in Rider, and to this day Rider highlights errors in perfectly valid C# code even though the build succeeds, while ReSharper considers that code fine. And the biggest current pain is that I cannot debug my tests in VS, but they work perfectly in Rider. So for now I'm sticking with both of them until Rider irons out some small wrinkles.
Tests not running might be an x86/x64 issue. NUnit will certainly find the tests but not run them until you change the default architecture (ReSharper will run them happily in VS).
Definitely more memory hungry. But I see it as a good deal. I feed it all the memory it wants, it gives me stability, xplat, and an embarrassment of riches as far as features and conveniences. It's earned that memory.
For our 32 project solution, Rider is so awkwardly slow at startup and if you pull from master it just dies. Also it just loves to notify you about everything.
I still prefer VS.
How can an IntelliJ IDEA derivative written in Java be faster than Visual Studio, which is native code? Both IDEA and PyCharm work much slower than Visual Studio on my machine.
At least for me, Rider is definitely faster than Visual Studio, both in startup time (easily human-measurable) and operation (also easily human-measurable, by the lack of intermittent hangups and laptop fans constantly whining).
I'd also say that Rider is more stable - I've only ever seen issues with preview versions; the production versions are solid, which can't be said for Visual Studio.
I was a Visual Studio fanboi for many years before discovering Rider, so wouldn't be easily swayed - please, try it and see for yourself!
Because VS has limitations on the amount of memory it can use, and although they have been working on this, there is still functionality that is highly single-threaded, whereas Rider tends to offload most of the functionality from the UI thread into background processes.
The memory limitations can be partially kicked down the road by setting /LARGEADDRESSAWARE on devenv.exe. Of course it's still beyond frustrating when you have a large solution with many component projects. I personally hit the 4 GB ceiling constantly.
I see, this means Rider is faster than the VS on many-core machines with huge amounts of memory. That's why it isn't going to be fast for me - I only use laptops, with 4 GiB RAM and dual cores each.
That's not why VS is still 32-bit. Apparently folks at MS don't think the extra address space is worth it and think it would adversely affect performance [1].
In my own experience migrating servers from 32- to 64-bit on Linux 12 years ago, 64-bit was around 10-20% slower than the 32-bit build. Don't know how it'd fare today (no longer at that job). Even though performance was demonstrably slower, 64-bit it was, due to an edict from the CEO.
For what it's worth, neither of the services I was responsible for benefited from the extra address space. Memory usage for both services peaked at around 750 MB.
Rico has a point to a degree and every VS version since then reduced memory footprint and improved performance (mostly due to not calculating stuff up-front which may never be needed and doing it on demand).
Microsoft has also pushed since VS 2008 (!) for plug-ins and extensions to use an out-of-process model. I think ReSharper got the message in summer 2019 that it might be a good idea – and incidentally, a lot of their performance comparison of Rider vs. VS comes down to "Rider uses ReSharper in another process, so the IDE doesn't slow down when ReSharper does things", while they themselves still pursued the slower model in VS. Heck, until a few releases ago ReSharper was still using the old synchronous COM APIs for querying the solution after loading, leading to severe pauses when opening a solution or re-loading it after switching source control states.
Vanilla VS is plenty fast and not that memory-hungry. In my experience it's extensions that try to sync with IDE state where a lot of the waiting comes in and in-process extensions where a lot of the memory usage comes from.
I worked in Visual Studio full-time for many years, now I'm coding in PhpStorm (IntelliJ IDEA) since July 2019, and while overall it's really powerful, I still kind of miss VS. One notable problem I have with PhpStorm is that oftentimes when I'm trying to find files or text in files, I press the respective keyboard shortcut and start typing immediately, in WebStorm the first few letters often end up being typed into the currently opened file instead of the "Find..." dialog, which never happened to me in VS. A minor issue, but it occurs every single day.
A similar thing makes me go nuts in Windows as well - in Windows 7 I pressed the WinKey and started typing immediately to find an application, never had a problem, but Windows 10 often misses the first few keystrokes after the Start menu opens. I can't believe this hasn't been fixed for years.
Windows 10 start menu is so bad I cut it out from every Windows machine I have any authority over. Try Keypirinha, it's very snappy and supports fuzzy search.
Why is it bad? It lists installed apps alphabetically, almost the same way all the previous Windows versions did and it has a search field to find an app quickly (AFAIK it can also search through other things but I always disable indexing and integrated web search functionality). How often do you even have to use it? I've just pinned everything I use regularly to the task bar and only use the start menu on rare occasions.
ReSharper is known to be quite slow, slowing down VS by a lot; how much depends on the project. Besides, newer versions of VS and its refactoring features have come a long way. For me, that has made ReSharper obsolete.
Having switched from VS to JetBrains Rider a few months ago, I cannot recommend it enough. Way faster and less clunky than VS without Resharper, especially if you're targeting Unity. They're so far ahead I simply don't see any way for Microsoft to catch up any time soon.
Have you attempted to use VS recently without it? Newer versions of VS have come a long way to match its refactoring features. What feature specifically are you missing?
I don't understand why upcoming programming languages completely ignore the IDE experience or at most throw some basic code completion (which works half of the time) and syntax highlighting at you (not even semantic highlighting).
I'll take a "limited" language with excellent IDE support over a "powerful" language that you use notepad with.
I'd argue that Kotlin is still an "upcoming programming language" - and I think one of the reasons for its success is how well integrated it is into the IDE (especially if you use IntelliJ).
Heads up for C# devs: you should switch all your code bases to use (x is null) and !(x is null) instead of ==. The is operator can't be overloaded and always compiles to IL ceq, whereas == can be overloaded in custom types. Of course it would also be nice if everything was moved to nullable reference types [0], but that's a non-trivial amount of work. Note that most C# 8 features can be enabled manually through the csproj files, including nullable reference types, even if you're on .NET Framework.
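A small sketch of the difference (the type is contrived, invented for illustration, but the overload mechanics are real):

```csharp
using System;

// A type with a deliberately odd == overload, to show why `is null` is safer.
class Sneaky
{
    // Contrived but legal: this overload claims nothing is ever equal, even null.
    public static bool operator ==(Sneaky a, Sneaky b) => false;
    public static bool operator !=(Sneaky a, Sneaky b) => true;
    public override bool Equals(object o) => false;
    public override int GetHashCode() => 0;
}

static class NullCheckDemo
{
    // Goes through the user-defined operator, which lies about null here.
    public static bool ViaOperator(Sneaky x) => x == null;

    // `is null` can't be intercepted: it is always a raw reference comparison.
    public static bool ViaPattern(Sneaky x) => x is null;
}
```

With a null argument, `ViaOperator` returns false (the overload runs) while `ViaPattern` correctly returns true.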
C# compiler doesn’t regard “x is object” or “x is null” as null checking statements, so you get warnings if you’re using nullable references.
Also, I don’t understand the scare over operator overloading. It’s not common and it works fine with nulls too as long as it’s implemented correctly. If it’s buggy, you’re screwed for other cases anyway, it isn’t much helpful to try to fix null checks only.
I find this advice overhyped because of these reasons.
For performance reasons as well. An overloaded == operator turns a simple ceq into a function call, involving a new stack frame push and pop. If you do this on a hot path, switching to (is null) and (is object) can provide a significant speedup and will likely reduce gc pressure.
> The is operator can't be overloaded, and always compiles to IL eq, whereas == can be overloaded in custom types.
That's exactly the reason to use the opposite. In certain C# environments (like Unity game engine) the == is overloaded in a very meaningful way (when C# objects are used as representations of unmanaged objects, and they appear to "equal null" if the represented object is no longer available in the system), and using methods other than == to compare to null (such as casting to boolean) can introduce hard to detect bugs.
Thanks. Just learned that object/tuple deconstruction works with the is keyword as well (which makes sense). That makes argument checking so much less of a burden.
That also explains the switch in parameter checking from == null to is null for the default code analyzers/refactors/code hints.
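For reference, deconstruction with `is` looks like this (a sketch; the `Point` type is made up, but `Deconstruct` is the real hook that C# 8 positional patterns use):

```csharp
using System;

// `Deconstruct` is what makes positional patterns (and tuple-style
// deconstruction) work with `is`.
readonly struct Point
{
    public int X { get; }
    public int Y { get; }
    public Point(int x, int y) { X = x; Y = y; }
    public void Deconstruct(out int x, out int y) { x = X; y = Y; }
}

static class PointDemo
{
    // The positional pattern calls Deconstruct and matches on the parts.
    public static bool IsOnAxis(object o) =>
        o is Point(0, _) || o is Point(_, 0);
}
```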
I wrote up how I'd do this in scala, just to compare and contrast
val shape: Option[Shape] = ???
shape.map {
case Square(0) | Circle(0) => 0
case Triangle(b, h) if b == 0 || h == 0 => 0
case Rectangle(l, h) if l == 0 || h == 0 => 0
case Square(side) => side * side
case Circle(radius) => radius * radius * math.Pi
case Triangle(base, height) => base * height / 2
case Rectangle(length, height) => length * height
}
what I noticed
* scala doesn't have a way for cases to fall through, so the first cases have to each declare that the result is 0. it would be cool if we could use something in place of '=>' to make the case fall through.
* C# doesn't have structural matching, so the 'when' keyword is used more often.
* scala's pattern matching is exhaustive so if we assume the shapes are in a sealed type hierarchy then we don't need a default case.
* it's idiomatic to use 'Option' instead of null in scala, but there are lots of libraries in c# that offer option monads, so it's more a point of what's idiomatic than what's possible.
Just FYI, the C# can be rewritten as a switch expression, which is more comparable to the Scala:
return shape switch
{
Square s when s.Side == 0 => 0,
Circle c when c.Radius == 0 => 0,
Triangle t when t.Base == 0 || t.Height == 0 => 0,
Rectangle r when r.Length == 0 || r.Height == 0 => 0,
Square s => (s.Side * s.Side),
Circle c => (c.Radius * c.Radius * Math.PI),
Triangle t => (t.Base * t.Height / 2),
Rectangle r => (r.Length * r.Height),
_ => throw new ArgumentException(message: "shape is not a recognized shape", paramName: nameof(shape))
};
C# 8 also has nullable reference types, i.e. comprehensive nullability analysis, which is comparable though not identical to an option type. I really wish for exhaustive pattern matching and ADTs in C#, though.
It always feels a bit half baked when the class library isn’t designed with it though. It would be as if generics were added but not System.Collections.Generic. So while I long thought it would be the perfect addition to C#, I’m not so enthusiastic any more. It would be great (F# is!) but not as good as it would be in a language and platform where it was there all along.
In F# this is already an issue when you want to use Option/Result but as soon as you do anything with the BCL you need to handle exceptions instead, and convert to/from error cases of ADTs.
Nullable reference types also feel pretty half baked with EF at the moment. If you have a non-nullable column that you want to represent with a non-nullable property in your EF model, pretty much the only option (the last time I looked) was to configure EF to use a nullable private field and access it through a non-nullable public property. It's a lot of extra work.
>We will also aim to be done with null-annotating our core libraries when .NET 5 (November 2020) comes around – and we are currently on track to do so. [1] [2]
> It would be as if generics were added but not System.Collections.Generic
I started with C# in the 1.0 days before generics. The lack of strongly typed collections was very annoying with all of the casts involved. I used to use a template engine/code generator to create strongly typed collections. Can't remember the name of the tool... Probably been since 2002 or 2003 since I used it.
I’m surprised there isn’t an F# feature that automatically wraps a call with a try/catch wrapper that returns an Option<T,TException> - something like that could be done in a library, right?
I'm a bit curious about the constant need for HKTs. I had a fairly manufactured answer to the following question, which would have saved a couple keystrokes with HKTs, but I don't yet get why it is constantly so high on the request list. It seems like something that is cool in theory but would rarely be used in real-world code. https://stackoverflow.com/q/21170493/171121
Wait till you get to work in some complex code bases and you'll understand why jamming all kinds of idioms into one language can become a disaster. It's a tool after all, if you know how to use it properly you don't paint yourself into a corner. However, too much flexibility can lead to many problems in larger teams. Good luck!
I don't know, what's easier to understand? Sorting a list with an IComparer instance that you have to implement in a concrete data type somewhere, or just tossing it a lambda expression? Just because it's new syntax doesn't mean it's automatically more difficult to understand the language.
I've got nothing against functional programming; I embrace it. However, I'm stating again: shoving everything under one umbrella is bound to create a complex monster. Functional programming is actually easier to understand in F# than in C#; the idioms do translate, but clunkily. Do yourselves a favor and spend some time outside C# and you'll come back illuminated.
Not saying C# is bad, though that's what you all seem to have understood.
Yes, it's all subjective, I know. But stepping out of the garden is what I'd like people to take from this. Saying C# is the most beautiful language (like some commenter states) is true only if you haven't dabbled in many languages. And once you do, you feel stupid for having had this conviction in the first place. I've used C# throughout my career, and I don't diss the language or the ecosystem, but other things are to be considered as well, and cargo-culting is a thing.
Java proves that limiting language features for "simplicity" just pushes the complexity somewhere else and generally much worse.
C++ is usually taken as a language with too many features but really it just has a few very flexible features and people abused those features to do all kinds of metaprogramming. As C++ has been adding more native metaprogramming idioms it has actually been getting simpler to code in.
I appreciated Smalltalk for a while but it is simple only on the surface and once you peel below that it's extremely complicated. Simple syntax doesn't necessarily lead to simple designs.
Lisp's limited syntax allows everyone to create their own "language" and that's arguably worse than the fixed set of statements that exist in other languages. I'd even argue that Lisp is inhuman because it's brutal compared to natural languages.
This is probably why neither language is more than an intellectual curiosity. COBOL, Fortran and BASIC have also all "stood the test of time".
>I appreciated Smalltalk for a while but it is simple only on the surface and once you peel below that it's extremely complicated.
Not really. Definitely not compared to C# or Java. Smalltalk simply has more system-level code accessible to the user.
I've worked with Java environments that tried to replicate the visual programming features of Smalltalk. They were about 10X the size of a modern Smalltalk distribution (Squeak or Pharo), had at least 100 times slower startup time, and you still needed an external IDE to get anything "serious" done with them.
I agree with your sentiment. But writing good code is also avoiding playing "who knows more language features" between the coder and the reviewer.
I have seen a function which incorporated all the new language features of that language version just because the dev could. A simple for loop would have done the same job. The reviewer failed totally here. Readability, simplicity, and consistency in the code base matter when reviewing.
Preventing this is a duty of the code reviewer and the team.
There may be a point where the total economic cost of newbie training exceeds the benefits of improving efficiency for the experienced. C++ arguably did that. Another problem is that people will use different languages if the learning curve grows too high.
Yes, I find that a good argument, especially if you plan on using this feature a lot; it can save you from shooting yourself in the foot. It also is a better idiom in F#, nicer to grok, but that's subjective.
However, only direct experience will make you reconsider.
I’ve seen a discriminated union implementation in C# the other day and was repulsed.
I use OneOf<T...> in C# a lot now. If you’re thinking of the same one I am the only ugly thing about it is the code-generation required to generate all of the variations from OneOf<T0,T1> to OneOf<T0,T1,T2,T3,T4,T5,T6>.
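For those who haven't used it, this is the general shape of the idea. Rather than reproducing the OneOf package's API, here's a minimal hand-rolled two-case version (names are my own; the real library generates these shapes up to OneOf&lt;T0,...,T6&gt;, hence the code generation complaint):

```csharp
using System;

// A minimal two-case union in the spirit of OneOf<T0, T1>.
readonly struct Either<TLeft, TRight>
{
    private readonly TLeft _left;
    private readonly TRight _right;
    private readonly bool _isRight;

    private Either(TLeft l, TRight r, bool isRight)
    {
        _left = l; _right = r; _isRight = isRight;
    }

    public static Either<TLeft, TRight> Left(TLeft l) =>
        new Either<TLeft, TRight>(l, default, false);

    public static Either<TLeft, TRight> Right(TRight r) =>
        new Either<TLeft, TRight>(default, r, true);

    // Exhaustive by construction: the caller must handle both cases.
    public T Match<T>(Func<TLeft, T> onLeft, Func<TRight, T> onRight) =>
        _isRight ? onRight(_right) : onLeft(_left);
}
```

The appeal is that `Match` forces you to handle every case at compile time, which plain class hierarchies don't.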
From the features they are porting into the language I think this is the case; albeit C# will always be the more verbose language and the features will feel somewhat clunky at times IMO. Pattern matching, async yield return, async/await, etc all were in F# in some form beforehand with features like records and DU's probably being investigated as well. When I read a new C# language version announcement it does feel like I'm reading a subset of the F# feature list.
I used Scala for a bit and then came back to C# and felt like they were porting over all the Scala features, but I think the F# explanation may make more sense.
Well, C# 7.2 and 7.3 were moving it more toward Rust, so that F# statement is a bit limited. They steal the best ideas from everywhere and integrate them into the multi-paradigm language C# actually is.
In 2020, pattern matching is just an elementary feature everyone wants to have in every general-purpose language. Like async/await. Or LINQ. That is just 101 for languages from now on.
There are also linters [0] available for checking that your cases are exhaustive, which can even be configured to emit the lack of exhaustive checking as an error, rather than just a message or warning.
Is `match` a reserved word in C#? I don't use it that much, but have used scala and F# pattern matching and they all use the term `match`, which seems more intuitive.
It's not reserved (and that's probably why it's not used). Most teams working on languages with a decently long history are very reluctant to add new keywords, because it will break preexisting code.
`match` is a very common variable name, not only in relation to regexes.
There are of course ways to do contextual parsing so that you can introduce new keywords (I believe this was done in C# with the LINQ extensions), but it complicates things for all eternity, so you want to avoid it.
C# isn't too hesitant to introduce new keywords because it supports contextual keywords.[0] Many C# keywords are also valid identifiers. In this case, using `match` in place of `where` probably wouldn't have introduced any incompatibilities.
In fact, `where` isn't a reserved keyword, either--you can have an identifier named `where`.[1]
To an existing C# programmer, `where` makes a lot of sense, since it's used for matching elsewhere (e.g., LINQ).
C# has never added new reserved keywords. All keywords added after 1.0 are contextual and can still be used as identifiers. With some, like nameof, they even get their special meaning only if there's nothing else of that name that could be called. They take backwards compatibility of existing code quite seriously. In fact, the only instance I can remember where C# had a breaking change was in C# 5, with how the foreach loop variable interacted with closures. The probability of code existing that relied on the old behaviour is probably really low, though.
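You can see this for yourself: all of the following compile, because these words are only keywords in context (a trivial sketch; the class name is made up):

```csharp
using System;

static class ContextualKeywordDemo
{
    public static int Sum()
    {
        // All legal identifiers: none of these are reserved words in C#.
        int where = 1;
        int match = 2;
        int async = 3;

        // They only act as keywords in context, e.g. `where` in a generic
        // constraint: void M<T>(T t) where T : class { }
        return where + match + async;
    }
}
```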
There have been breaking API changes--.NET Core and the upcoming .NET 5 being good examples--but they're generally handled well. I can't recall any major breaking language changes, though.
They don't let backwards compatibility keep them from introducing new features and syntaxes, which is quite nice. Most languages that want to avoid breaking backwards compatibility seem too hesitant to introduce new language features.
"where" is for match guards. Do you mean use "match" instead of "switch"? Given that the statement construct is already called "switch", it makes sense that the expression form doesn't try to use a different keyword for what is largely the same thing.
Class hierarchies are not a desirable feature of object-oriented languages, they're an unfortunate side effect of their historical development. (For example, early versions of Java did not actually have the interface keyword at all.)
There are several very good reasons that the inherited wisdom about OOP includes such phrases as "prefer composition and delegation over inheritance".
There are some cases where class hierarchies are actually good, but they're far rarer than most would suspect. The Liskov Substitution Principle is less of a guide as to how to use inheritance as much as it is a guide as to when you should not be using inheritance at all.
And to address your point: the only time objects with class hierarchies and subclasses should be accepted as arguments to methods is precisely when the LSP holds. Otherwise, you're gonna have a bad time.
These are all valid points and it shows my lack of understanding (same with the other comments), even though I use C# on a semi-regular basis I have a lot to learn!
I don't envy any newcomers to the language though, there is so much to take in.
My general advice to anyone working with C# is the same advice given by the authors of Smalltalk to developers.
1) Always prefer composition and delegation to inheritance.
2) Always prefer interface inheritance to implementation inheritance even if in the short term it results in code clones.
3) If you intend to have multiple subclasses that adhere to the Liskov Substitution Principle, it is always better to have IFoo and X implements IFoo, Y implements IFoo... than it is to have abstract class Foo where X, Y extend Foo. You can still do that, behind the scenes, but your public interface should whenever possible be nearly entirely interfaces and concrete classes.
4) Inheriting the body/behavior of methods is seductive but ultimately only a poor excuse for not delegating that will create maintenance headaches down the line.
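Points 1 and 3 in practice might look like this (a sketch with illustrative names: expose interfaces publicly, and share behaviour by delegating to a wrapped instance rather than inheriting a method body):

```csharp
using System;

// The public surface is an interface, not an abstract base class.
interface IShape
{
    double Area();
}

class Circle : IShape
{
    public double Radius { get; }
    public Circle(double radius) => Radius = radius;
    public double Area() => Math.PI * Radius * Radius;
}

// Composition and delegation: the decorator holds an IShape and forwards
// to it, instead of inheriting an implementation from a base class.
class ScaledShape : IShape
{
    private readonly IShape _inner;
    private readonly double _factor;

    public ScaledShape(IShape inner, double factor)
    {
        _inner = inner;
        _factor = factor;
    }

    // Scaling lengths by f scales areas by f^2.
    public double Area() => _inner.Area() * _factor * _factor;
}
```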
"Is the ability to build class hierarchies not the ultimate reason to use C#, an Object-Oriented language?"
The ultimate reason to use C# is because it's a modern managed language with an excellent base library, excellent tooling and excellent debugging experience.
Object-oriented hierarchy is not a value in itself. Often it's an antipattern. I'd say 80% of the time you are much better off cleanly separating your program into "data" and "algorithms", and manipulating data as far as possible in immutable fashion. I.e., when mutating an array, don't overwrite elements; rather, copy the values to a new array. This will be easier to understand, and likely faster due to how memory and caches work on modern platforms. Etc.
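As a sketch of that copy-don't-mutate style (names are made up):

```csharp
using System;
using System.Linq;

static class ImmutableDemo
{
    // Instead of mutating in place, produce a new array with the change;
    // the source array is left untouched.
    public static int[] WithValueAt(int[] source, int index, int value)
    {
        var copy = source.ToArray(); // shallow copy of the elements
        copy[index] = value;
        return copy;
    }
}
```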
"Which one of the two is idiomatic?"
Idiomatic is a weak argument when talking about programming. Either the problem is trivial, hence all you have left to discuss are trivialities, or you don't understand the problem and are therefore discussing things of very little consequence.
Writing maintainable, understandable code is important. But "idiomatic" gives the idea that there is some higher level ultimate truth on what is always the best formulation for each and every problem.
If you are writing boilerplate code, then yes, a best pattern will emerge eventually. But then it should be self-evident which is the best way to move forward. Hence, 'idiomatic' once again loses its value.
> Writing maintainable, understandable code is important. But "idiomatic" gives the idea that there is some higher level ultimate truth on what is always the best formulation for each and every problem.
This is the sentence that really rings true. Idiomatic is often dogmatic, especially in the OO language world. As far as I am aware, there's very little research backing up the GoF doctrine but it's often rattled off as "what you should be doing" and idiomatic.
As someone who 'saw the light' on this later in my career I have since moved to using functional programming (in C#, because the project I work on started life as OO and is a never ending huge web-app); Mathematics has a lot more to say about correctness than the imperative/OOP world of idioms, and I find I can trust my code much more than in the past.
I needed to create my own Base Class Library to make it happen [1], which has been a labour of love, but has fundamentally changed our multi-million line code-base for the better. But, obviously it has to fight against the baked-in mentality of devs in the OO/C# space by providing a non-idiomatic solution.
The next step on the evolution will be category theoretic approaches (I believe), languages like Statebox are already starting down this path. How languages like C#, that still have the legacy of the OO world baked into its framework and grammar, will cope with this evolution long-term remains to be seen. However, right now I think they're just about getting it right. And C# especially is still one of the easiest languages to be productive in, with a world class tool chain.
No, the ultimate reason to use C# is to solve problems and build useful applications!
But you are correct that method overriding and pattern matching partially solve the same problem in different ways. C# is not really a pure OO "one way to do it" language anymore, it is a multi-paradigm language, for better or worse. Arguably it have been since anonymous function was added.
Method overriding and pattern matching actually solve the same problem in precisely opposite ways.
In the functional programming community, this is referred to as "the expression problem".
To understand this problem properly you must first accept the idea that every object (or class of objects) in an OO context is essentially an interpreter defined by how it responds to sequential messages (methods).
The crux of the expression problem is the tension between adding new operations over a type (i.e. new methods on the base class of a hierarchy), which requires implementing that method for every existing subclass, and adding new types (new subclasses) over which existing operations apply, which requires implementing every existing operation for the new type (subclass).
There are some "clever" approaches to "solving" this problem, but they only really make sense for types that look and act like ASTs. Despite many complex programs essentially being interpreters for some internal "language", this isn't as useful as one might hope, and the machinery necessary to "solve" the problem relies on a lot of compiler and language extensions.
If you want an example of (uh) "solving" the expression problem, the canonical starting point is Wouter Swierstra's "Data Types a la Carte" paper.
I'm super embarrassed as a dotNet developer, at times 'engineer', that I don't know this..
I can't count how many times I've run into this kind of issue when trying to explain to a peer/intern.. (only now do I know I probably messed up a bit) And HN links like this just remind me that I would love to take a 'busman's holiday' just to catch up on the framework.
This is a huge point in favor of sending employees, and me (please send me), to those release conferences. 'New dotNet Core coming?': I want to be there, but I live in Europe and can't take holidays, so the stream gets saved; I'll watch it later (I rarely do). :(
Patterns and higher-level issues aside, knowing the framework is also damn important, and often overlooked as a given.
Really knowing the framework, all the way down to compile time, offers some incredible results. I have been fortunate to work with some people who really knew the framework versions we were on, and it was always so much fun to put bets on how much time that team would save ours and other teams with even just a light refactor.
Most important is that developers know the framework, not because of the big time savings, but so that nobody is re-writing the wheel (intentionally did not say 'reinventing').
I wish I had the time to always dig through the release notes of the latest framework abilities, not to mention 'Core'.. I might bring this up as a task for the team, but does anyone genuinely have good suggestions on how to drill into fellow teammates that it's important to check up on framework functionality..
It really feels kinda insulting to say: 'maybe google it to see if there is a similar issue that could shine a light?'
God bless HN.
Edit: ReSharper is my best colleague.. I'll copy an important note from below:
>Note : I am not fighting against tools, just that sometimes, me included, the tools make us not go and search why resharper is screaming: 'you're an idiot'!
Keeping a code base up to date is a cultural thing. Keep up to pace yourself. Suggest new features during code review. Encourage curiosity. Hire developers who are interested in technology, from diverse backgrounds. Be enthusiastic and evangelize about new things, it will inspire others even if you don’t realize it.
I'm guessing you use Visual Studio and don't have ReSharper? If so, I have 2 recommendations to make:
1. Get ReSharper! When it sees code that could benefit from language features, it will suggest it
2. Get JetBrains Rider - seriously, try it - it's an alternative IDE with all the ReSharper stuff built-in. I personally find it a better experience than Visual Studio, and it makes my laptop fans whine far less!
1) I do have resharper, I made a comment here about how it makes me lazy because it does it for me, and how I get nervous even jumping on a VS without it
2) Rider though.. It's new to me.. I'll certainly give it a go.
But one of my main points was : 'Hey, we work with this language, we should know it'
Get me ?
Note : I am not fighting against tools, just that sometimes, me included, the tools make us not go and search why resharper is screaming: 'you're an idiot'!
Visual Studio also does code suggestions for when you can replace a pattern with a newer language construct. I suspect ReSharper is better at it, but I haven't tried VS2019 yet.
I'm surprised at the no. of recommendations for ReSharper even now. I thought most of the new features of ReSharper were already included as part of VS2019. Any specific feature that's missing?
Resharper still has a bunch more refactoring and navigation tools; whether the missing ones matter probably depends on which of them you actually use. The single feature I miss all the time is smart completion, which suggests matching locals/properties based on the expected type of arguments or missing expressions while typing. Everything else I tend to use (navigating to files, types and members, plus basic refactorings; my most-used are extract local and extract method anyway) is all there and wouldn't need Resharper.
C# 8 also has "switch expressions" (1), meaning that all those "return" keywords fall away,
so this is valid (although useless) C# 8:
public static bool Invert(bool value) =>
    value switch
    {
        true => false,
        false => true
    };
I worry though that the "switch" syntax is becoming a big complex monster, and can be used to make utterly unreadable code, rather than simplifying as promised. We'll probably see both in practice.
It might have been nicer if they could have used a new keyword e.g. "match" to carry the new syntax, rather than overloading "switch", but that would not be backwards compatible with C# 1-7, as "match" was not a reserved word. (2)
Hopefully the switch syntax that supports these various cases doesn't get too tricky.
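Both sides of that worry show up once patterns are combined. A small, hypothetical example (the method and its cases are invented) mixing tuple patterns, discards, and a `when` guard in one switch expression; fine at this size, but easy to grow into a wall of inscrutable cases:

```csharp
static class PointDescriber
{
    // Tuple patterns, discards, and a guard combined in one switch expression.
    // Cases are tried top to bottom; the first match wins.
    public static string Describe(int x, int y) => (x, y) switch
    {
        (0, 0) => "origin",
        (_, 0) => "on the x-axis",
        (0, _) => "on the y-axis",
        var (a, b) when a == b => "on the diagonal",
        _ => "somewhere else"
    };
}
```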
You should try it out sometime, regardless of coercion. There isn't much of a barrier to entry anymore. Even installing VS2019 is a pleasant experience now.
I have yet to meet a developer who gave C# a legitimate try and then decided it was entirely not for them. That said, I only personally know developers who are working in the realm of B2B application development, so perhaps there are other incompatible use cases that I am blind to from my current perspective.
Yeah, this is basically the boat I'm in. Python dominates. You either write it in Python, or maybe get away with writing it in something else and then wrapping it in Python.
I don't mind Python, but now that I've gotten a taste of ADTs and non-nullable types, writing Python feels like a kludge.
Python (the language) feels more like a less-capable version of JavaScript now - with all of the disadvantages of lacking AOT type-safety (I know static type-checks are now available with 3.6 - but remember that most of the world is still using Python2...).
I've always seen switch statements as an anti-pattern in C#. These examples, however, do demonstrate neat time-saving techniques.
Does anybody have examples of real-world use cases that take advantage of this that couldn't (or shouldn't) be solved in a more OO way (Visitor, Strategy, Template Method, etc.)?
Replacing a simple switch with a strategy pattern can make code far more complex and hard to understand. Switches are simple to code, simple to read, and easy to change.
Because the gods of OO decreed as such and now the logic must be distributed across a dozen classes/files.
Notice the language: "that couldn't (or shouldn't) be solved in a more OO way". Their goal is to create OO code, not code that is simple to read and easy to change.
Because it's a switch statement, not an expression, and a lot of people tend to use it for doing stuff more complex than a single line. Since it is a statement, nothing forces you to return a value from each case, and you may forget the break keyword and mistakenly execute the next case as well.
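The statement-versus-expression difference can be sketched with a hypothetical status-code lookup (the method and case values are invented for illustration). The statement form lets a forgotten case silently fall back to a default; the expression form must yield a value for every matched input, and the compiler warns when the cases aren't exhaustive:

```csharp
static class HttpStatusText
{
    // Switch *statement*: nothing forces every case to produce a value,
    // so a forgotten case silently keeps the default.
    public static string FromStatement(int code)
    {
        string text = "unknown";
        switch (code)
        {
            case 200: text = "OK"; break;
            case 404: text = "Not Found"; break;
            // 500 forgotten: callers silently get "unknown"
        }
        return text;
    }

    // Switch *expression*: every matched input must yield a value.
    public static string FromExpression(int code) => code switch
    {
        200 => "OK",
        404 => "Not Found",
        500 => "Server Error",
        _   => "unknown"
    };
}
```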
I try to use switches only for enums or things with a 1 to 1 simple mapping. Enums to strings for example. Although even there my preference is to use an attribute on the enum.
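The attribute approach mentioned above can be sketched like this, assuming the standard `DescriptionAttribute` from `System.ComponentModel` (the enum and helper names are invented). The label lives on the enum member itself, so there is no parallel switch to keep in sync:

```csharp
using System;
using System.ComponentModel;
using System.Reflection;

enum Status
{
    [Description("In progress")] Active,
    [Description("Closed out")]  Done
}

static class EnumText
{
    // Reads the [Description] attribute off the enum member via reflection,
    // falling back to the member name if no attribute is present.
    public static string Describe(Enum value)
    {
        var member = value.GetType().GetField(value.ToString());
        var attr = member?.GetCustomAttribute<DescriptionAttribute>();
        return attr?.Description ?? value.ToString();
    }
}
```

In practice the reflection result is often cached in a dictionary, since attribute lookup is relatively slow.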
Well sure, I’ve been using “is” alongside switch statements for a while now but admittedly doesn’t come up too often. Let’s say you have a ControlTemplate that’s so deep in the stack you have to pull symbols for it yet most of the properties are inaccessible in the current scope. The only thing it will spit out which is accessible at runtime is the type. The requirement is to process the data correctly (e.g. transform it into a List<T> or custom object). You could either write GetType() all over the place and attempt to access this mega-object at runtime or just use a switch statement to determine the resulting type with “is”. The situation I was in where this became preferable was an application where the user picks the controls themselves (a form builder with fancy controls) which has the ability to add 3rd party controls from different libraries. It reminded me of throwing darts blindfolded- case target is bullseye: try { throw.dart; } catch(Exception ouch) { LogThrows(“you probably don’t want to catch this”, ouch); } break;
At work our team just finished implementing a filtering library for our web services that's loosely based on OData. The query string value is parsed into an expression tree, then all of the operations on the expression tree are implemented using pattern matching. You could of course implement this with the visitor pattern, but in comparing the two we decided to use pattern matching because it's significantly less verbose and easier to understand.
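A minimal sketch of that approach, with an invented mini filter AST (all type and member names here are hypothetical, not the actual library): each node kind becomes one arm of a recursive match, instead of one Visit method per node type spread across visitor classes:

```csharp
using System;

// Hypothetical filter AST. "row" looks up a field's value on the record
// being tested against the parsed query.
abstract class FilterNode { }
class Eq : FilterNode { public string Field; public string Value; }
class AndNode : FilterNode { public FilterNode Left, Right; }
class OrNode : FilterNode { public FilterNode Left, Right; }

static class Filter
{
    // Recursive pattern matching over the tree: the whole evaluator is one
    // function, rather than a visitor interface plus one class per operation.
    public static bool Matches(FilterNode node, Func<string, string> row) => node switch
    {
        Eq e      => row(e.Field) == e.Value,
        AndNode a => Matches(a.Left, row) && Matches(a.Right, row),
        OrNode o  => Matches(o.Left, row) || Matches(o.Right, row),
        _         => throw new ArgumentException("unhandled node type")
    };
}
```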
It should be avoided, but there are legitimate use-cases. Deserialization, for example, might yield a few different types. Especially if you don't control the endpoint and the data back is a mess.
When it is used, it's very nice to have help from the language. I much prefer that to some kind of purist, "they shouldn't do that so we won't help". I certainly miss having type guards when I work in Java.
As long as you use virtual dispatch, an indirected jump table, or some form of run-time tagging, you are writing type-sensitive code whether you have dedicated syntax and types for it or not.
"Type-sensitive" code is encouraged when using the functional style, where you treat objects as dumb "bags of data" that have no inherent functionality of their own. It's still discouraged when following an object-oriented "smart objects" pattern.
How is that particularly functional? In Haskell the same example would be approached at type-level using type classes, something extremely similar to using Interfaces.
I guess in a dynamic functional language you'd probably avoid doing this type of contract-oriented stuff, and instead pass a callback at the last point rather than checking the type of what you got passed.
I worked in Scala for a while and it was pretty common to use these kind of pattern-match statements for a kind of dispatch mechanism. Not even necessarily with types, it might be something like "do one thing when field a is null and field b is not, another thing when it's the other way around, another thing when they're both null, and still another when neither is." The nice thing about it is the compiler can assert that your match is exhaustive.
> do one thing when field a is null and field b is not
And that would have been a good example, contrary to the one in the docs.
Pattern matching for checking fields of things of the same type is a good use. Pattern matching for checking your parameter's type? Bad.
> The nice thing about it is the compiler can assert that your match is exhaustive
If you pass a new Shape object for which you forgot to implement a new case in the pattern matcher, you just get the default case, which is wrong, but the compiler can't know that.
Now have the function argument ask for something that implements the IArea interface, and if you don't implement GetArea() in your new Shape class, the compiler will know.
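A minimal sketch of that interface alternative (IArea and the shape names are hypothetical): a new shape that forgets the method fails to compile, whereas a pattern match with a default case would have silently swallowed it:

```csharp
using System;

interface IArea
{
    double GetArea();
}

class Circle : IArea
{
    public double Radius;
    public double GetArea() => Math.PI * Radius * Radius;
}

static class AreaReport
{
    // Accepts anything implementing IArea; no type checks needed.
    public static string Format(IArea shape) => $"area = {shape.GetArea():F2}";
}

// A new shape that forgets GetArea simply fails to compile:
// class Triangle : IArea { }   // error CS0535: does not implement 'IArea.GetArea()'
```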
Well I think it is pretty common to have a combination of these. You're right that if you configure a default matcher the compiler can no longer tell if the match is as exhaustive as you intended, but I think you can often just avoid that altogether.
I'd say this is probably a bad example of usage for the feature. I use pattern matching in a few places in my own code-base, and it's almost always on interfaces that have a large set of extension methods applied. It's kind of like an entity-component architecture, except the components are added at compile-time, rather than run-time.
Unfortunately, real-life use scenarios are verbose and involved and require a lot of background context that detracts from just demonstrating the syntax. This is just demonstrating the syntax. Standard patterns of software design still apply.
I personally hate when official documentation uses badly applied approaches, because it will invariably be quoted as a source for some "best practices" argument at some point.
That's clearly a use case for interfaces and type-level programming.
They could have made a better and shorter case with error handling logic.
Yes, it's much easier to write and maintain (up to a point) than an object hierarchy and potentially multiple levels of virtual functions because all the logic is in one place and obvious instead of distributed through x number of files. Of course once you get rid of the OO you'll probably be switching on an enum or something instead.
It's less abstract, simpler, and potentially much faster; it's only considered bad by people who think writing OO software is the goal and not a tool to use.
That's not what he meant. It's not the casing of the names, it's the switching on specific types, rather than building interfaces and using generic programming.
One thing that I like about my lib is that it serializes to json pretty well, and I added some more special-case variants such as accumulative errors (AccResult) and some applicative lifting operations.
Hey I wanted to thank you!
I stumbled across your repo a few years ago, when I was looking at more powerful ways to use the C# type system.
I ended up pulling a heavily inspired version of your code into a utility library I made (for a zero dependency package).
A new construct to save a line of code? Whatever happened to "every proposed language feature starts at -100 points, and must become positive to even possibly be considered."?
Given that probably 90% of code is written by C or D class programmers, every new feature adds cognitive load that makes it less likely that the median programmer (who almost certainly works offshore and who likely doesn't have enough English skills to process the tutorials) will write bug-free code.
Once upon a time MS understood that highly educated university graduates are only a small percentage of the programmer market and kept that in mind.
Looking at the new language changes, that's been forgotten in order to please the programming elite.
Spare a thought for the poor souls who curse every time Microsoft adds a new blade to the Swiss Army knife of C# because they know they'll be the ones mopping up the blood of the countless programmers who've cut themselves on the new feature.
I feel for you! But people misusing language features is not really the fault of the language per se, it is more a failure of process, culture, tests and code reviews. I don't believe dumbing down a language leads to fewer bugs, it will just lead to more verbose bugs.
From PHP 8 union types to C# pattern matching, I find it beautiful how languages tend to have more and more similar feature sets. It's like we're reaching a consensus on which language constructs are beneficial.
I suspect that it's not an issue of respect so much as an issue of governance. C# remains a commercial language, wholly owned by a single corporate entity. That, I think, allows its maintainers some luxuries that community languages don't enjoy. Chief among them is a whole team of full-time language designers and implementers who can sustain a sort of deep concentration 40-ish hours a week for years on end. I'm pretty sure it's closing in on a decade that there's been talk about how to implement pattern matching in C#, and I'm absolutely sure that a lot of blood, sweat, and tears went into it. But that kind of effort is more manageable, on a personal level, if you're getting paid to do it and if most of the thornier bits of the process are happening in a private-ish space and among a small group of people you know well.
By contrast, with a community language where these decisions are made in public, by committee, and perhaps by a team of volunteers, it seems like things are always just a bit more strained. I'm sure some of the more famous PEPs took a huge personal toll on GvR, and there's no doubt that they caused a lot of high emotions. I see similarly troublesome patterns in Nim and Scala, where it would seem that "trying to avoid too much conflict with people who are forcefully communicating strongly held opinions over a medium like the Internet where it's difficult to modulate emotions" can be a real factor in the decision of what language features to include and how. And, in that kind of environment, it's probably particularly difficult to keep the Internet, with all its . . . Internetiness, at bay for long enough to really make sure you've got all the details dialed in right. Much easier, I imagine, to go for the punt and get the whole business over with.
Technically it's owned by the .NET Foundation now, though admittedly it's composed in large part of MSFT people. Also, language design happens on GitHub, and the community is encouraged to get involved.
As long as Microsoft finances the .NET/C# team, neither runtime/libraries nor the language will include anything which does not fit in their strategies/vision. The .NET Foundation will not change anything here. And that is good. The .NET Foundation ensures that the product is legally usable on Linux/Macs/other non Windows platforms and additionally helps the ecosystem.
Yes. Actually that fact is more interesting than it looks.
It's all of our good fortune that it's in YC's interests to fund HN just as it is, with the moderators' primary job being to keep it interesting and hopefully keep the community happy. If HN were a startup, we would have to play the growth-hacking game. If it were the media property of some larger corporation, some manager would eventually put a monetization squeeze on it. Either of those tactics would ruin this place, whether or not they succeeded.
YC doesn't need us to do such things because a happy HN is the most valuable-to-YC HN. It's basically an accident—a dual accident of how YC's business works and how things historically developed—that we ended up in that spot. It's a special position to be in, and our first responsibility is to preserve it. Hence our motto, Move slowly and preserve things.
It's not that late to the game, considering its paradigm. How many other large object-oriented imperative languages have it? Not C++ or Objective-C or Java. Kotlin has destructuring, but not full pattern matching.