Interestingly, the feature they said that about (async rewriting in catch/finally blocks) is almost exactly the feature that has been floating around the clojure-sphere as an example of something tremendously complex that was made easier by being more imperative. It's hard to explain how much of a rat's nest something can be without being trapped in it yourself, but "people thought it was impossible to do _at all_, and we were doing it in a purely functional way until we refactored" sounds pretty nuts.
async in a catch/finally is exactly the feature I'm looking forward to most - specifically the ability to define what to do in a failure situation in a concise asynchronous fashion without having to fall back to callbacks or spinning up a background thread.
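A minimal sketch of what that enables in C# 6 (the method and helper names here are hypothetical):

```csharp
// C# 6: await is legal inside catch and finally blocks.
// Previously you had to capture the exception, leave the catch block,
// and only then await - or fall back to callbacks.
public async Task SaveAsync(Payload payload)
{
    try
    {
        await WriteAsync(payload);
    }
    catch (IOException ex)
    {
        await LogFailureAsync(ex);      // concise async failure handling
    }
    finally
    {
        await ReleaseResourcesAsync();  // async cleanup, no background thread
    }
}
```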
I love all the little space-saving features C# has been adding. Null checking is something that every developer has to do constantly; making it a one-character thing is super cool while still maintaining readability.
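For example, the null-conditional operator collapses a chain of null checks into one expression (a sketch, assuming some nested types):

```csharp
// Before: every link in the chain needs an explicit check.
string city = null;
if (order != null && order.Customer != null && order.Customer.Address != null)
{
    city = order.Customer.Address.City;
}

// C# 6: the whole expression evaluates to null if any link is null.
string city2 = order?.Customer?.Address?.City;
```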
Lambdas, null conditionals, LINQ, compile-time generics, etc, none of these features on their own are all that amazing, but when you add them all up it just makes for a more pleasant experience.
Most of my time is spent in Java, building an Android app right now, and C#, building services, and I constantly feel like I'm doing battle with Java's syntax (it's interesting to me that the new Android Studio compresses lambda-like classes, like runnable, to look like lambdas).
I would say that both LINQ and lambdas are amazing on their own (and even better together). Both of these completely change how you write code and make it much more compact and readable.
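A small before/after illustrating the point:

```csharp
// Imperative version:
var evens = new List<int>();
foreach (var n in numbers)
{
    if (n % 2 == 0)
        evens.Add(n);
}

// LINQ + lambda: same result, one readable line.
var evens2 = numbers.Where(n => n % 2 == 0).ToList();
```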
I feel a little disappointed by LINQ. On the one hand, it is an awesome feature. On the other, I feel there is a more general theory of what LINQ does that could have been integrated into the language core, and then it wouldn't be necessary.
In other words, LINQ should have been a library, not a language feature. (In my ideal world. It is possible that someone with better info could see such a move as a terrible mistake.)
AFAIK LINQ pretty much is a library, since it's just extension methods. I don't think there's any other specific language feature needed to support it in itself - is there something else you had in mind?
I do agree though that LINQ seems to hint at and start evolving C# into a different semi-functional kind of language, and what we're now stuck with is a language with one foot in both worlds. Hopefully another language will come along one day and chisel out those ideas into a purer form. (And I don't mean a pure functional language like, for instance, F#.)
I mean that it is basically a DSL with special syntax, and that it would be nice to be able to write such things in libraries (without needing it to be part of the C# spec.)
There is obviously a relationship between code like
.Select(blah).From(blah).Where(blah)
and
SELECT blah FROM blah WHERE blah,
and I think it would be worthwhile to look into a language feature that allowed a reliable translation between these kinds of things. Maybe I want to write
LET a = blah IN blah
from
.Let(a, blah).In(blah)
or some such thing. Is it just syntactic sugar? Is it a pattern that can be abstracted well?
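C# does already do a limited version of this translation for query expressions: the compiler rewrites them into method calls purely by pattern, so any type with matching Select/Where/etc. methods participates. A sketch:

```csharp
// Query syntax...
var names = from c in customers
            where c.Age > 30
            select c.Name;

// ...is rewritten by the compiler into plain method calls:
var names2 = customers.Where(c => c.Age > 30).Select(c => c.Name);
```

The translation is fixed to the query keywords, though, which is exactly why a general "define your own DSL syntax" mechanism would be interesting.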
Well, each LINQ "operator" is just an extension method on the IEnumerable interface, and they're chained together. So nothing prevents you from writing your own operators.
Most of the features that came along around the time of LINQ (var, extension methods, anonymous types, lambdas, etc.) were added specifically to support the LINQ kind of syntax, but LINQ in itself is not so much part of the language as it is of the .NET libraries.
Btw, I personally think the "dot notation" syntax is superior to the "query expression". It's easy to follow from start of the line to the end and you get great autocompletion along the way.
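For instance, a custom "operator" is just another extension method on IEnumerable&lt;T&gt; (a sketch; EveryNth is a made-up name):

```csharp
public static class MyOperators
{
    // Hypothetical operator: yields every nth element of the sequence.
    public static IEnumerable<T> EveryNth<T>(this IEnumerable<T> source, int n)
    {
        int i = 0;
        foreach (var item in source)
        {
            if (i++ % n == 0)
                yield return item;
        }
    }
}

// Chains like any built-in operator:
// numbers.Where(x => x > 0).EveryNth(3).ToList();
```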
I liked chaining expressions for things that were done in memory and query style for deferred. My brain handled the code a bit better when things were written that way.
F#'s not a pure functional language, it's a multi-paradigm language that has very good support for the .NET object model and imperative programming, even if it is "functional-first".
Hmm, perhaps it would work for LINQ to objects, where the LINQ statement just becomes a delegate that is executed at runtime. I can't see how it would work for LINQ to anything else (SQL etc..), where the compiler only emits an Expression which is then consumed to generate SQL/Amazon queries/whatever.
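The split is visible in the parameter types: LINQ to Objects takes delegates, while query providers take expression trees that the compiler builds from the same lambda syntax. A sketch (needs System and System.Linq.Expressions):

```csharp
// Compiled to IL - a black box at runtime:
Func<int, bool> asDelegate = x => x > 5;

// Compiled to a data structure describing the lambda, which a
// provider can walk and translate into SQL, a web query, etc.:
Expression<Func<int, bool>> asTree = x => x > 5;
// asTree.Body is a BinaryExpression whose NodeType is GreaterThan.
```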
The null-conditional operator is going to be a really nice thing to have, and something that's been requested for a long while. That said, most of the other stuff seems like small nice-to-haves and cleanup.
I kinda wish they'd put more thought and energy into some tidy way of doing error checking, preconditions, etc. Locating and avoiding bugs is still one of the main concerns in any project, and not enough is done to help with that. There are extensions, but they are all a little clunky or require too much effort.
If they did that correctly, I think it could be a game changer on par with LINQ and Rx.
pre/post conditions can be handled via https://github.com/ghuntley/conditions or via code contracts. Conditions runs on all platforms (mono/windows/xamarin mobile) and does runtime inspection vs Code Contracts which unfortunately runs only on Windows but does have the added advantage of being able to specify behaviour on interfaces and validation being performed at compile time. There was a project w/GSOC back in 2010 that tried to bring Code Contracts across http://monocodecontracts.blogspot.com.au/ which got merged w/ Mono 3.0 but the corresponding tooling in MonoDevelop is missing so it is useless. See http://www.mono-project.com/docs/about-mono/compatibility/ for more information and please bump this ticket https://bugzilla.xamarin.com/show_bug.cgi?id=8400 in support.
Code Contracts, the library, has always been free. The static checker, the really important piece, was only available in expensive editions for commercial use.
The other problem w/ Code Contracts is inheritance. A call to base.DoStuff(arg); might fail, even though the child class handles it and doesn't need the contract to be satisfied by the parent.
Is there a difference between design-by-contract and the condition modifications allowed by the Liskov substitution principle? They seem related but I'm not well-enough versed in the differences.
"...the subtype must meet a number of behavioral conditions. These are detailed in a terminology resembling that of design by contract methodology, leading to some restrictions on how contracts can interact with inheritance:
Preconditions cannot be strengthened in a subtype.
Postconditions cannot be weakened in a subtype.
Invariants of the supertype must be preserved in a subtype."
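The inheritance rule the quote describes, sketched with the contracts written as plain runtime checks (class and method names hypothetical):

```csharp
class FileStore
{
    // Precondition: path must be non-empty.
    public virtual void Save(string path)
    {
        if (string.IsNullOrEmpty(path))
            throw new ArgumentException("path must be non-empty", nameof(path));
        // ... write to disk ...
    }
}

class CachingFileStore : FileStore
{
    // OK per LSP: the precondition is weakened - an empty path is now
    // tolerated (falls back to the cache). Strengthening it instead
    // (e.g. also requiring an absolute path) would break callers that
    // only hold a FileStore reference.
    public override void Save(string path)
    {
        // ... save to cache, then delegate when path is usable ...
    }
}
```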
I see. This is new; it used to be that although the APIs were available, only Premium, Ultimate and Test had the tooling to generate verifiable builds.
I'm happy C# is adopting more from F#, but only half joking, where are all the comments complaining that new syntax and more operators make code harder to read? Every time there is an article about F#, half the comments are people saying how it has too many operators. Add a few more to C# and every comment is praise.
F# tries to make things too concise. I think it would be more approachable having curly brackets and maybe less inference (or bring great tooling and actual killer features). Love the immutability and non-null default!
The ideal language for me is somewhere between C# and F#. C# is moving there (but F# isn't).
If you want "actual killer features", they'd by definition be unreadable, since you've never seen them before. I used to think F# was unreadable, then I realized it was those killer features with no "curly equivalent" that were tripping me up. Once I learned them, it became easier to read.
Before I knew what lambdas were JS was hard to read too, and before I learned inheritance both Java and C# were confusing messes.
And what would F# need to move to? You can type out the types now, and you can do all the OO of C#. The only thing I can think you mean is to add curly braces around functions and classes, which wouldn't make it objectively easier to read, just more familiar to some C# developers. Unless you mean adding higher-kinded types, but that would move it closer to Haskell.
One thing about the null-checking stuff that I started thinking about... wouldn't it be cleaner and less error-prone if accessing subproperties of null references always returned null instead of throwing an exception? I.e.:
var someVar = class1.class2.SomeProp;
would return null, even if class1 was null. I have a hard time coming up with cases where you actually would want to force an exception. I realize, however, the reason for the new operator: they might not want to change how existing code works...
I think this has to do with types. In what you propose the returned type for primitive types (int, ...) would always be `Nullable<T>` and you would have to cast them to get the type `T`.
In many cases you can guarantee that the sub-property is not null and you can safely access the value. I think the proposed operator `?.` is the perfect solution and gives you the flexibility to decide what you can guarantee and how you want to access the sub-property.
I don't think you should involve nullables; the compiler would simply rewrite it to check whether each link in the chain is null and then assign null.
But value types are a good point: either it could then set it to default(T) or not set it at all, but it kinda has to be default(T), because null makes most sense for reference types. It might give some unexpected behavior if variables end up set to 0 because of a null reference.
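For what it's worth, the shipped design sidesteps the default(T) problem by making `?.` on a chain ending in a value type produce a Nullable&lt;T&gt;:

```csharp
string s = null;

// Result type is int?, not int: null if s is null, otherwise s.Length.
int? len = s?.Length;

// To get a plain int you must pick the fallback value explicitly:
int len2 = s?.Length ?? 0;
```

So the "0 because of a null reference" behavior only happens when the caller explicitly asks for it with `?? 0`.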
hmm, yeah, that's a really good point. I guess this could sorta be resolved if you had non-nullable reference types.
You get the feeling, though, that many of these problems come down to null largely being a language defect in itself, which should have been addressed differently.
After async in a catch/finally, this is the other thing I'm most excited about. Being able to watch/unwind the result of a lambda will make working with UIs developed in a functional reactive programming manner with Rx so much easier.
> Thanks; this shows how rusty my C# skills have become.
C# has improved fairly rapidly. I haven't done it professionally in some time now and I feel like I don't have a good handle on the full language anymore.
I'm somewhat concerned the language is getting too "big", but I'm not sure it's at that point yet. They keep adding solid features.
I haven't touched Java since school yet it feels like nothing major has changed in terms of writing code (outside of tools of course).
I haven't coded in C# for 2 years and I feel like I need some time to catch up. Seeing the difference between the languages makes me feel like people who only do Java are going to be hopelessly left behind.
I hate to break it to you, but there is literally nothing in this world inspired by Swift (yet). The language is simply too new and too derivative. Quite "innovative" for Apple dev world, though.
Syntax like `${x}` inside of string literals has been around a lot longer than Swift and is about as close to `\{p}` as Swift's syntax is. I would be surprised if Swift had anything to do with it.
Jesus Christ, if you people are going to argue about something as silly as which language had a minor feature first, maybe spend some time learning about programming languages first? Python and Ruby have had equally sophisticated string interpolation since before Scala existed. Perl has had similar string interpolation features for at least 20 years. And it wouldn't surprise me if lisp did it before Perl.
The actual "discussion" was not about who invented string interpolation (I was talking about shell/`sh` string interpolation btw. which predates Python, Perl, and Ruby). It was about who they "copied" the syntax from since using a backslash as opposed to a dollar or a hash sign isn't quite as common. And none of the languages, including Scala, mentioned here uses that exact syntax. It's just an attempt to trace back syntax decisions to where the authors of the language might have seen it. Fun and games - lighten up. ;)
You were downvoted - it seems - just because. Or maybe someone felt offended by your mention of a fictional character. Whatever.
More to the point: the feature known as string interpolation is really, really old. And popular. Just go here: http://rosettacode.org/wiki/String_interpolation_%28included... and see for yourself. TCL is from 1988, for example, which predates Scala quite a bit. Oh, and it also predates Swift.
Yeah, I'd like people who comment on language features to study PL design history a tiny little bit more.
My example was from shell string interpolation/`sh`, which predates TCL by more than a decade. Not sure where your "I'd like people to study PL design history" comes from. It sounds kind of pretentious tbh.
We were looking for popular languages that use a syntax similar to `\{x}` for string interpolation. TCL uses a dollar sign, so pretty much as far from the C# syntax as possible (inside of the limited syntactical range out there). At least this page does not list any language that uses the same syntax: http://rosettacode.org/wiki/String_interpolation_%28included.... But maybe your vast knowledge of PL design history can provide information on one that does?
Ignorance doesn't annoy me--if anything, it's exciting, because there's so much people can learn. What annoys me is that people buy into so much hype around languages that are mediocre at best.
Good point. But my assumption would be that backslash was chosen because it's a reserved sequence that didn't have a meaning yet. E.g. they could have started suddenly treating "It costs $x, where x is the number of rides" as a compile time error when it previously was a perfectly valid string literal. And not everyone wants to introduce yet another string literal syntax like ES6 does with backticks...
Actually I doubt that. Haven't read the specifics, but for me that sounds like a compile time feature.
C# already has a ton of those, the most prominent so far are extension methods.
Foo.Blergh()
can be implemented by a class that offers
static void Blergh(this Foo foo) {}
and the compiler will just replace the call above (which looks like a specific thing Foo can do) with
SupportingClass.Blergh(thatFooInstance)
I'd say the nameof() implementation is most likely similar and will just replace the name of the local symbol with the symbol itself, during the compilation.
> a compile time feature. C# already has a ton of those, the most prominent so far are extension methods.
Yes, nameof is compile-time. Similarly, the ?. generates code (a "? :" or if statement) and so does the string interpolation (a call to string.Format).
I've even sketched an outline of a talk on the C# language features that are "just" sugar, e.g. walking through a statement like
var evens = numbers.Where(x => x % 2 == 0);
and showing what it looks like without the compile-time sugar features like the lambda, extension method use of "Where", "var" inference, etc.
Of interest, I suppose, to just-beyond-beginner devs.
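For example, that one statement desugars roughly into the following (a sketch of what the compiler emits, not the exact generated code):

```csharp
// Sugared:
var evens = numbers.Where(x => x % 2 == 0);

// Roughly what the compiler produces:
// - "var" becomes the inferred type
// - the lambda becomes a delegate (backed by a compiler-generated method)
// - the extension-method call becomes a plain static call
IEnumerable<int> evens2 = Enumerable.Where<int>(
    numbers,
    delegate (int x) { return x % 2 == 0; });
```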
Did you document that somewhere? I might be interested to share that internally. We're a C# shop, but quite a number of my coworkers aren't exactly up to speed, and a "Let's talk about 3.5 and upwards" session is planned in a couple of weeks.
It's resolved at compile time. It's exactly equivalent to the literal "x", but enables refactoring tools to pick up renames, compilers to error if the name is typoed, etc.
There are many edge cases you need to consider, so a seemingly simple language feature can become hard to get right quickly.
I highly recommend reading through https://roslyn.codeplex.com/discussions/570551 and the other four revisions of the nameof() spec, they do a very good job at explaining the tradeoffs of each design.
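A quick sketch of the compile-time substitution (Customer is a placeholder type):

```csharp
void Process(Customer customer)
{
    if (customer == null)
        // Compiles down to the literal string "customer" - but renaming
        // the parameter updates this too, and a typo is a compile error.
        throw new ArgumentNullException(nameof(customer));
}
```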
You're mistaken, kinda. The syntax wasn't originally "using static"; it was just "using", but with a qualified class name instead of a namespace name. The example includes:
using System.Math;
System.Math is a static class, not a namespace. It doesn't import a single static method, it imports the entire class.
However, in the italicized note, you can see they changed their minds after releasing the preview. It's going to be "using static", but they didn't update the example. It will still operate on the entire class and not an individual method though.
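Sketch of the feature as described, with the announced "static" keyword added to the preview's example:

```csharp
using static System.Math;  // imports all static members of the Math class

class Circle
{
    // Sqrt and PI now resolve without the "Math." qualifier.
    public static double Radius(double area) => Sqrt(area / PI);
}
```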
Eh, a lot of this looks cool, but doesn't solve any really hard problems.
It's easy to mess up a null check, but it's equally easy to find the bug and fix it.
It's easy to mess up a string format, but it's equally easy to find the bug and fix it.
It's easy to mess up a chain of Rx event streams and when you do--you're out of luck buddy. Have fun debugging that.
Async rewriting for error handling does solve a really difficult problem, but there are much deeper problems with using C# for the kind of imperative threading code where that problem comes up. I wouldn't write that in C#, that's certain.
Like I said, doesn't solve any hard problems. If having to type an extra few if-statements is the hardest programming problem you have to solve all day, you have a damn easy job.
Don't get me wrong: I like that syntax change--I'm all for shorter, cleaner code. But I've moved on from C# because it was ineffective for solving hard problems like multithreading, scaling, networking, crypto. Eliminating a few if statements doesn't fix that.
I love the human style Microsoft has taken on in the past few years.