As languages go, C# may be the ultimate "middle ground" because it really excels at providing a middling implementation of [your-favorite-language-feature].
That is, if you're a functional programming enthusiast, you can do a lot of functional programming in C#. Sure, it's often more painful than Haskell or Lisp or F#, but it's workable.
If you're a dynamic programming enthusiast, C# has a perfectly workable (but ultimately unsatisfying) implementation of dynamic.
Systems programmer? Mark your code as "unsafe" and twiddle bits to your heart's content! It's not as quick or necessarily easy as C, but it's there!
Even its OOP, which is C#'s "official" paradigm, is probably disappointing to Smalltalk guys, but hey, it's better than Java, right?!
It's not glamorous or flashy; it's a workhorse language. And at being a workhorse language, I'd say it really is one of "the best".
Then why use C# over F#? F# has a more flexible dynamic system (provide your own implementation), better type inference, and even more unsafe access. So you lose nothing but goto and a bit of loop flexibility.
In theory, a better JIT could provide as-good-as-C output, but in practice you end up screwing around and ending up with less-than-optimal machine code. But F# actually provides an advantage here: inlining. The F# compiler can inline any function, whereas the JIT is somewhat limited in what it will inline. This makes a big difference, as the CLR JIT does a much better job optimizing inside one function than it does cross-function.
I guess one thing I left out in my post that I should have added was: it's easy for people to learn.
I bet just about anyone with much programming experience can spend 5 minutes looking at C# and understand most of the language. It's familiar enough for anyone from a C/C++/Java/JavaScript/PHP background that it's not intimidating, even at first blush. I bet C# is more comfy for a lot of Ruby and Python folks, too.
I really, really, really like F#. The main problem with F# isn't a language or syntax problem (in fact, it's not a problem at all). F# (or other functional-first languages) isn't just different syntax-wise, it's different thinking-wise. A "good" F# program is generally written in a fundamentally different way than a "good" C# program. This is the declarative vs. imperative split. Sometimes it's really hard to convince people to change the way they think about how to go about solving a problem.
Sorry, but you're really attacking a straw man here.
Neither C# itself nor the OP's article sell C# as being a prime language for functional, dynamic, or systems programming. Rather, it has inherited some convenient little bits of each paradigm to make the language that much nicer to write.
For example, don't want to redeclare that "List<List<LongTypeName>>" you're copying over because it has a scope of 2 lines? Just dynamically type it by calling it a "var" and be done with it. Boom: Cleaner code and zero performance hit because the compiler infers the type such that the resulting IL is identical at runtime.
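For instance, a minimal sketch (LongTypeName is just a stand-in for whatever mouthful you were copying):

using System.Collections.Generic;

class LongTypeName { }

class Demo
{
    static void Main()
    {
        // The compiler infers List<List<LongTypeName>> for "copy"; the
        // emitted IL is identical to the fully spelled-out declaration.
        var copy = new List<List<LongTypeName>>();
        List<List<LongTypeName>> spelledOut = new List<List<LongTypeName>>();
    }
}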
Sure, given features can be misused to produce ugly or "unsatisfying" code, but that's true of any language and is ultimately the programmer's fault. Just because said features are provided, doesn't mean you need to use them. You should ultimately be using the right tool for the job, after all. C# gives you the power of having a wider choice of tools, and I love it.
Whoa, I'm not attacking anything. I like C# a lot, but I think I'm realistic about it -- it's nobody's perfect language, except in the case of "what's a widely-adopted, multi-paradigm language that I can do typical OO stuff in, but also all that cool dynamic and FP stuff they talk about on hackernews all the time? Also, my coworkers need to be able to understand it?" That's why at the end of my reply I say I think it's one of the best "workhorse" languages.
Anyway, I'm sure others will point this out, but var is NOT dynamic typing. It's type inference, which you seem to know, judging by the last sentence in that paragraph. Dynamic typing doesn't just mean that no type is declared -- it means that no type is inferred (by the compiler).
Right, sorry about that, I interpreted your post as being more snarky than it actually was in retrospect. I just love how quickly I get work done with C#, so apologies for getting defensive :)
As for the var feature, you're absolutely right that I'm confusing two things there, because as another commenter pointed out, I use it mainly as a "save myself typing when it doesn't matter" feature.
"var" is not dynamic typing. It's just a way to enable type inference - it's still as static as can be. It also has a very limited useful scope in C#, as type inference isn't supported in most areas.
I've long suspected that when programmers ask for dynamic typing, 95% of the time - or more - they are really trying to save FINGER typing, and not data typing.
This is perfectly fine and good - bookkeeping is one of those things that computers are good at.
Miguel, I'm confused then - your experience with Mono certainly means you have a better understanding, but I thought I understood it pretty well, too.
Places lacking type inference:
- Parameter definitions
- Method return types
- Generic type parameters (sometimes)
- Lambda expressions (due to the lack of syntax to indicate quoted code)
- Field declarations
The only place type inference works for declarations is in local variable declarations and lambda parameter types if the delegate type is known.
Perhaps the definition of "most" is arguable, but C# has many places that need you to unnecessarily specify types, and there's no good theoretical reasons for any of it, right? (Except the whole syntax sharing for expression trees versus anonymous functions, which is debatable.)
"var" is a shorthand to not declare the type when it can be inferred, in most cases to avoid Foo foo = new Foo(). The point is that it avoid repetitive typing, while still preserving type safety and everything that comes with it: compiler checking that right types are used in the right places.
[Digression While it is possible to use in var foo = GetIt() the practice is discouraged because it makes the type of "foo" opaque to the casual reader]
For a parameter definition, its use clearly makes no sense. What is the type for the following method?
void Demo (var x) {}
"var" in the above example would not really add much value, neither would the following example, where inference will oscillate from useless to fragile, depending on who you ask:
void Demo (var x) { var j = x.Parent; }
For method return types the reason is simple, it serves no useful purpose, in fact, it can be quite damaging as the public API contract can change during routine work. Consider a method:
var Demo (PointF f)
{
if (f.X < 0) return f.X;
return 1;
}
The above method signature can oscillate easily between float and double on the day that someone replaces the 1 with 1.2. Reading a diff or a patch file won't catch the fact that you have accidentally changed the type of the function.
If this is the kind of code that you need to write (both cases above), then by all means, use the right tool and replace "var" with "dynamic". I would argue that using "dynamic" for the sake of not declaring the type there is a poor practice, but I am not about to lecture you on poor coding choices.
Generic type parameters: your comment makes no sense.
Lambda expressions: makes no sense, the parameters are already inferred. Not sure what value (var x) adds over the already existing (x) syntax.
Which brings us to the very case I quoted "field declarations".
The ones that you skipped where it is supported: local variable declarations, for statements, using statements, const locals and fixed statements.
Honest question: Are you just playing devil's advocate? It appears that the reasoning for C#'s design is that it targets LINQ as a goal, and the new features (like var and extension methods) were implemented just for the LINQ case, not considering those features by themselves.
In fact, the lack of type inference for fields was said to be due to technical limitations in the C# compiler implementation, not for any language reason. [1] Sure, any product will have limitations due to schedules/resources, but that doesn't change the fact that they're unfortunate limitations that could be fixed. C# hasn't addressed these issues, though they've had 2 or 3 releases since they were introduced.
>For a parameter definition, its use clearly makes no sense. What is the type for the following method?
>void Demo (var x) {}
In that case, x has no constraints (it's unused), so it's generic. Pretty simple. An unused parameter isn't that useful outside of a few cases. Usually you'll use the variable, and constraints can be inferred. Or you'll find a constraint that limits the type to a concrete type. Here are some examples:
void dub (var x) { Tuple.Create(x,x); } // x is T
void uri (var x) { new Uri(x); } // x is string
void cmp (var x, var y) { Nullable.Compare(x, y); } // x and y are Nullable<T> where T : struct
The example of "void Demo (var x) { var j = x.Parent; }" is probably one place you'd want it to fail, because using member access to infer types can get complicated, at least within C#'s type system. You'd want a static duck typing system in this case like "anonymous interfaces" or something to that effect.
Arguing against return type inference is weird: C# does that in anonymous functions - certainly you don't think we should have to annotate types there!
As for type signatures changing, if you want to keep public contracts, then write them out! No one is saying you need to always infer everywhere, just that it's a great aid. Also, this particular example just demonstrates why C#'s type coercion is a bad idea, albeit understandable, given their desire to follow C style.
Generic type parameters. What do you mean it makes no sense? I can't write, e.g. "new List() { 1 }" - it complains List needs a type parameter. Another example: "Func<var> f = () => 1". Nope, I'm required to explicitly provide the type. What if C# didn't have syntax support for Nullable<T>? You'd have to do e.g. "x = new Nullable<int>(1)" or create a helper function like Tuple and have e.g. "x = Nullable.Create(1)". Generic type parameter inference only happens on methods.
Also note that C# can't infer generic type parameter constraints - you've got to annotate them all by hand. Makes sense, I guess, because it doesn't infer any generic type parameter definitions, either.
Lambda expressions can't be type inferred. "var x = () => 1" does not work (CS0815). This is because expression trees and anonymous functions share the same syntax. And if that were changed, you'd still need to "promote" the func somehow, due to C#'s weird nominal typing for delegates.
Fixed statements can't be type inferred either: error CS0821: Implicitly-typed local variables cannot be fixed.
The things you mentioned, for (and foreach), using -- those are all local variable declarations. So we have it working for local variables, for some generic type parameters, lambda parameter definitions (when the lambda type is known) and lambda return types.
It's a nice start, and it makes C# far more enjoyable than some other languages like Java. But it's still quite limited, without solid reasons.
You're right, the var keyword isn't dynamic typing. The dynamic keyword is, though. With dynamic, C# 4.0 can do mixins, method missing, etc.: http://amirrajan.net/Blog/dynamic-c-sharp
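A toy sketch of the method-missing style (Bag is a made-up class here, built on the real System.Dynamic.DynamicObject base):

using System;
using System.Dynamic;

// Any member access on this object is routed through TryGetMember at
// runtime instead of failing to compile -- Ruby-style method missing.
class Bag : DynamicObject
{
    public override bool TryGetMember(GetMemberBinder binder, out object result)
    {
        result = "you asked for " + binder.Name;  // handle the "missing" member
        return true;
    }
}

class Program
{
    static void Main()
    {
        dynamic bag = new Bag();
        Console.WriteLine(bag.Anything);  // resolved at runtime: "you asked for Anything"
    }
}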
He's not attacking anything; the two of you are in (violent) agreement :). He's saying that C# provides all those little bits its "inherited", so it's an acceptably-good language for pretty much anyone, no matter your background.
Just an aside: I don't think you can really say "dynamic programming enthusiast" to mean "dynamically-typed language enthusiast", since "dynamic programming" already means something quite different. It threw me a bit, at least.
Mostly because it looks like Java and at first glance also shares Java's mostly static type system (which is not strictly true).
The gist of it is: you can send arbitrary messages in Smalltalk (and ObjC) and the object itself deals with them somehow (with most of that being implemented by runtime anyway, but that is different issue). In C# you call methods defined by interface.
Also, you cannot modify arbitrary parts of standard library and runtime, but that is probably for the better (doing that is pretty common in ST world).
There are libraries that add both of these features at once (either by defining new (meta)metaclasses or by simulating the same effect through simple copying of methods from base classes).
On the other hand, Smalltalkers are quite keen on adding new methods to almost any existing system class, so mixins are often not as useful. Also, Smalltalk development often relies on IDEs that do pretty extensive code generation behind the scenes, and I think that this image-based development, with no real source code to speak of and lots of reflection-based development tools, is a more important aspect of Smalltalk than any OO features.
So, yes, this leveraging of the dynamic metadata provider technically satisfies the requirements for a mixin, but you're still not applying this mixin to a POCO, therefore I don't think it's fair to say the language/spec supports it natively.
That's what I need clarification on. The dynamic keyword wasn't provided by any 3rd party library (it's part of the C# language specification)... how are System.DynamicObject and IDynamicMetaObjectProvider different from System.Object? All POCOs inherit System.Object. Do you consider System.Object as 3rd party? True, by default POCOs don't inherit from System.DynamicObject. Does that make it not 'natively supported'?
...aren't they completely different tools? One uses mixins to keep the class hierarchy shallow, with fewer levels of inheritance. OTOH, multiple inheritance tends to produce an overly convoluted class hierarchy most of the time. It's true that in a language like Python people use multiple inheritance to "simulate mixins", but it's to the same good effect of keeping the class hierarchy short and flat; in C++ at least, I've never seen MI used to simplify things, as opposed to mixins, which always seem well used...
1. "Cutting edge", first thing listed is asynchronous programming. Okay, don't get me wrong, I like C#, I like the Task Parallel Library, but Grand Central Dispatch in OS X/iOS is beautifully simple and incredibly powerful. It does everything I need it to do.
2. "Powerful features" - OOP. Really? And Java/Objective-C don't support OOP/encapsulation? You can't do dependency injection in Java/Objective-C? lol.
3. "Advanced runtime" - garbage collection. I'd rather use ARC in Objective-C instead of depending on a garbage collector.
4. "Reliability" – type safety. Really? And Java/Objective-C aren't type safe? I seriously doubt that anyone considers Java or Objective-C to not be reliable languages when it comes to type safety. This is almost laughable, IMO.
5. "Easy to adopt" - easy to learn. Ok, I'll give you this one over Objective-C. But frankly, I expect a good programmer worth his salt to be able to learn another platform, and I don't consider iOS or Android to be conceptually more difficult than Windows.
6. "Fast execution. C# on iOS is powered by the LLVM optimizing compiler." Uh, I think I'd rather use Objective-C compiled with LLVM. And the bit about "performance of a low-level language"? I don't consider Objective-C to be a "low-level language".
7. "Native access" - sweet, I can use some fragile interface that breaks when Apple decides to deprecate half of their stuff. Can't wait! I love waiting for libraries to catch up.
8. "Portability" - this is a decent point, but let's be realistic - we're talking "minimally portable" here. Write-once run-anywhere is a pipe dream, IMO (at least at this point). There are always platform-specific considerations that need to be taken, and they are not always compatible with other platforms. But even still, this point should have been the main focus of the article. Points 1-7 are minor compared to this.
This just doesn't feel like a serious discussion of C#'s potential for mobile development.
Edit: just for the record, I'm not necessarily trying to refute all or some of the OP's points. I think the original article was poorly written, and the case for C# was poorly made. Hence my "let's play this game" comment.
> 3. "Advanced runtime" - garbage collection. I'd rather use ARC in Objective-C instead of depending on a garbage collector.
I'd rather eat a burrito than a salad. This is not a very good technical critique of the nutritional benefits of salad vs. burritos.
> "Reliability" – type safety. Really? And Java/Objective-C aren't type safe? I seriously doubt that anyone considers Java or Objective-C to not be reliable languages when it comes to type safety. This is almost laughable, IMO.
I like Objective-C, but objectively speaking, Objective-C is horrible for type-safety. The id type is used in a lot of places (e.g. every collection class), and you can magically convert anything to anything else without a peep from the compiler through the id type. There is no easy way to say "This is an array of Foo" and have that checked.
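By contrast, in a checked language the "array of Foo" guarantee is one line (a toy sketch with a made-up Foo type):

using System.Collections.Generic;

class Foo { }

class Demo
{
    static void Main()
    {
        var foos = new List<Foo>();   // "this is a list of Foo", enforced at compile time
        foos.Add(new Foo());
        // foos.Add("not a Foo");     // compile-time error CS1503 -- no silent id-style cast
    }
}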
A cursory glance at the Objective-C tag on Stack Overflow will show you a lot of programmers coming from safer languages like Java or C# who are confused by Objective-C's lack of type safety.
In practice this is not generally a huge problem once you get into the "Objective-C way of thinking," but static type checking really wasn't a huge consideration in the design of the language. It was designed with the Smalltalk philosophy of "a bunch of things sending messages to each other and responding with messages of their own," which heavily de-emphasizes the importance of concrete types in favor of interfaces.
> "Fast execution. C# on iOS is powered by the LLVM optimizing compiler." Uh, I think I'd rather use Objective-C compiled with LLVM.
This is another "I like burritos" argument.
> "Portability" - this is a decent point, but let's be realistic - we're talking "minimally portable" here. Write-once run-anywhere is a pipe dream, IMO (at least at this point).
While there will always be platform-specific debugging and often some platform-specific code required, it is indeed quite possible and even common to write portable programs. You probably use many of them. Ever used Clang? GCC? Emacs? Ruby? The Bash shell? None of these programs have a completely different codebase for different platforms. They might have some platform-specific code, but having a common, portable base is hardly a pipe dream.
> I'd rather eat a burrito than a salad. This is not a very good technical critique of the nutritional benefits of salad vs. burritos.
I put about as much effort into my critique as the original article's author did in arguing for C#'s superiority.
> I like Objective-C, but objectively speaking, Objective-C is horrible for type-safety. The id type is used in a lot of places (e.g. every collection class), and you can magically convert anything to anything else without a peep from the compiler through the id type. There is no easy way to say "This is an array of Foo" and have that checked.
This is a problem that has bit me in the ass maybe once or twice since I started developing in Obj-C several years ago. You could magically cast things to whatever you wanted in C++, and that was never a serious problem for me back when I was developing in C++ either. As an avid JavaScript developer as well, I've had no problem with a lack of type safety. It's just not the huge demon (for me) that it's made out to be - at least in my experience.
> They might have some platform-specific code, but having a common, portable base is hardly a pipe dream.
70% code reuse across iOS and Android (see http://news.ycombinator.com/item?id=4998661) is hardly what I'd call "portable". Does the Bash shell do better than 70% code reuse across Linux / BSD / etc? I certainly hope it does.
> I put about as much effort into my critique as the original article's author did in arguing for C#'s superiority.
Not really. He stated an actual fact: Proper garbage collection allows for simpler memory management than retain counting. I agree that he didn't do enough to illustrate this, but he did go further than just "I like this thing." It is indeed true that there are a number of awkward situations where you can end up with immortalized objects with retain counting that would not have been so under garbage collection (see for example the "__block id uglyHack = self" idiom in Objective-C).
> 70% code reuse across iOS and Android (see http://news.ycombinator.com/item?id=4998661) is hardly what I'd call "portable". Does the Bash shell do better than 70% code reuse across Linux / BSD / etc? I certainly hope it does.
I would rather rewrite 30% of my code than 100%. If you want to say it's imperfect and could be better, that's fine, but dismissing it entirely seems unreasonable to me.
RE: GC vs reference counting, I've found that writing code using ARC feels a lot like writing code using GC - 95% of the time, I never notice it. Still, I'd prefer to not have the overhead of GC.
> I would rather rewrite 30% of my code than 100%.
I agree with you on this. I'm not dismissing it - in my original post, I said the focus of the original article should have been the potential for reuse across platforms - but it was glossed over in a disappointing 3 sentences.
Chris Sells added reference counting to the open source "Rotor" release years ago. This was to test whether GC really did carry more overhead than deterministic finalization.
I'm all for Objective-C, but your arguments were weak in the first place, and you're beating a dead horse here.
I mean: "I put about as much effort into my critique as the original article's author did in arguing for C#'s superiority." -- Why even write your critique then? At least the original article's author had a reason, it is marketing copy.
>This is a problem that has bit me in the ass maybe once or twice since I started developing in Obj-C several years ago
The general consensus is hardly that type safety in large applications is something to not be concerned with that only bites you "once or twice" when you are a novice. So I don't see the reason for the anecdotal reference.
1) Nat says: "Both of these apps are using 100% _native widgets_ for their user interface, and I think it's fair to say that both of them are fairly UI-heavy.". Which means it's a worst case scenario. For a less UI-heavy app, or one with a custom canvas based GUI etc, the percentage would be far higher.
2) 70% is far better than rewriting everything (or 0% re-use) if you use Objective-C. You could use C++ or C to write a common base, but then you're not using a single language anymore, whereas with C# you are.
3) 70% in itself is nothing to sneer at when discussing portability. Especially considering your own argument that "write once, deploy everywhere" is a pipe dream, 70% is quite a high level of reuse -- basically meaning you merely re-do the UI parts to tune them to each platform.
> Edit: just for the record, I'm not necessarily trying to refute all or some of the OP's points. I think the original article was poorly written, and the case for C# was poorly made. Hence my "let's play this game" comment.
RE: "This is horribly mistaken in more than two ways."
Not all code is created equal. That 30% that needs to be rewritten for each device could be difficult to tune for each platform. Native widgets in iOS do not work the same as native widgets in Android. Unless there is some magical abstraction layer (which doesn't appear to exist), you're potentially rewriting the most difficult code for each platform.
But again, please see my original post. In my original post, I said the focus of the original article should have been the potential for reuse across platforms - but it was glossed over in a disappointing 3 sentences.
>Not all code is created equal. That 30% that needs to be rewritten for each device could be difficult to tune for each platform. Native widgets in iOS do not work the same as native widgets in Android. Unless there is some magical abstraction layer (which doesn't appear to exist), you're potentially rewriting the most difficult code for each platform.
Again, far better than writing ALL the code for each platform.
And in a lot of cases the GUI code is not the "most difficult", it's just the one that cannot be easily written once for all platforms (and still look native).
For example, if you make a music application, the sound engine, FFT, filters etc would be the difficult part (but it can be written just once) and the GUI layer is quite easy compared.
Eh, disagree. It's confusing for a number of reasons, firstly because instead of choosing the common functional names -- map, filter, reduce, flatten, etc. -- they chose to go with SQL-like naming. Just an annoyance. Secondly, Expressions are a clever hack on top of the language, but they are not intuitive for the person using them and require you to know too much about the particular implementation you are using (for example, if using Microsoft's ORM, string concatenation inside an Expression is disallowed). It's not their fault; C# isn't really functional, and Expressions allow them to make it look functional and have some laziness built in, but there are still enough gotchas to make using it a pain.
Is there any other quotation-based system that _doesn't_ require you to understand the implementation? Or any language/runtime/library? SQL is different from one DB to the next, and a lot of C APIs are not implemented identically on all OSes.
As to the names, I'm not sure it's safe to assume that "Map" would result in better usability than "Select", for MS's target audience.
It's nice. I lived and breathed LINQ for many years. It's certainly useful; though I've been able to live without it fairly comfortably. It's just not a deal-breaker for me.
I like MonoTouch so far but I must admit these are all valid points. There are more downsides Xamarin doesn't like to advertise, like problems with AOT compiler you can't anticipate until they bite you in the ass, the-GC-won't-collect-some-stuff problems that basically force you to always dispose native objects manually, and horribly buggy IDE that MonoDevelop is.
I don't think I have your email address, but would love to share with you a tool I have been prototyping to identify the hard cycles in Objective-C that are causing the above leak problem.
In short, you should not need to manually Dispose, but you might need to use weak references from children to parents.
For #1, there's a bit more to asynchronous programming in C# these days than the TPL. Take a look at the new async/await keywords in C# 5. Lets you write asynchronous code without having to deal with callbacks.
GCD is very similar to writing against the TPL ... so you have the choice of writing async/await code, which makes the execution flow very easy to grok; or if that doesn't make sense in a given scenario you can just fire off a task. Best of both worlds :)
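For anyone who hasn't seen it, a minimal sketch (the URL is a placeholder, and .NET 4.5's HttpClient is assumed):

using System;
using System.Net.Http;
using System.Threading.Tasks;

class Program
{
    // async/await lets the download read top-to-bottom with no callbacks;
    // the method yields to its caller at the await and resumes when done.
    static async Task<int> GetLengthAsync(string url)
    {
        using (var client = new HttpClient())
        {
            string body = await client.GetStringAsync(url);
            return body.Length;
        }
    }

    static void Main()
    {
        Console.WriteLine(GetLengthAsync("http://example.com").Result);
    }
}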
3. So you like ARC? Do you prefer it over endless retain/release calls?
a. because it reduces the chances of memory leaks in your code? Because in this case you negate your "every programmer worth his salt" point in #5, which we've all seen thousands of times when people explain why C++ is really the best language to be writing all software.
b. because it greatly reduces the repeated boilerplate code? Which is pretty much how I define Objective-C in a single word: tedium. It's been a few years, but all I remember from my brief foray is comparing every tutorial to how much shorter everything would be if it were written in another language. Pretty much any language other than C++, not just C#.
If you think ARC is a great step forward then maybe you are honest enough to admit that Objective-C could make several other giant leaps, and it might even end up looking a lot more like C#.
> because it greatly reduces the repeated boilerplate code? Which is pretty much how I define Objective-C in a single word: tedium. It's been a few years, but all I remember from my brief foray is comparing every tutorial to how much shorter everything would be if it were written in another language. Pretty much any language other than C++, not just C#.
Do you mean "shorter" in the sense of fewer characters, or "shorter" in the sense of fewer tokens? I've found both to be true to some degree (the state of text processing in Cocoa is positively barbaric), but what people usually complain about is the former, and I can't agree that it's a problem.
I can understand why too much typing gets tedious, but I think as a criticism it's a red herring, because it's essentially the complaint that Objective-C tends to be descriptive. That's normally considered a good thing! With autocomplete, typing Objective-C isn't harder than typing code in any other language, and the descriptive names help when reading unfamiliar code. I can come across a method call in somebody else's Objective-C codebase and instantly understand what each argument is and does without having to jump around and look at definitions.
If you actually mean you have to do more, though, I think that's one of those things that varies a lot from program to program. It's not really inherent to the language.
Obviously a bit one-sided (it's from Xamarin, after all) but I'm a big believer in C# and what it offers compared to what else is out there. I think the biggest weakness is dealing with another single-vendor-of-failure (in this case, Xamarin themselves) in order to cross-compile to Android/iPhone.
> Their data shows that C# popularity grew by 2.3 percent in 2012, more than any other programming language during the same period.
As many pointed out when that story came out, that datapoint is highly suspicious. There are plenty of smaller languages that have surely grown much more than that (it's easier to grow more when you are small).
Also, the actual popularity is what really matters, not the increase. If C# increased 2.3% from 1000 to 1023, but say Java was at 1,000,000 and stayed there at 0% growth, then the conclusion is obvious. (Not saying those are the numbers, the point is that actual popularity trumps a few percent of growth in a year.)
> What accounts for the growth of C# in 2012? Well, the launch of Windows 8 has probably played a role — C# remains the dominant language of third-party application development on Windows devices.
Doubtful. Windows 8's launch has been a failure, even compared to Vista according to the latest figures. And is C# even the "dominant language" for Windows 8? It seems JavaScript might be just as important if not more so for Metro apps.
C# is a dominant language for indie games: Unity, and almost all games using the Unity engine; Magicka, Terraria, Breath of Death, Weapon of Choice, and Sol Survivor, which use the XNA framework; Eufloria, which was originally written in C# prior to the cross-platform release, AI War, which uses C# and Mono, and Bastion, which uses MonoGame.
The vast majority of the code in most apps is not core code, it's display code. That's STILL entirely device specific.
I think the C# advocates oversell how much of it is reusable.
If you STARTED with a great pile of C# code, say, running on the desktop or server, then wanted to port that to an iOS device, then you have a point.
One big problem with non-native code is that example code is almost always written in native code, and it can be hard or impossible to get new features working once you try to translate across the language barrier (mobile gets REALLY finicky about when X or Y is called, especially for things like animations).
Both of these apps are using 100% _native widgets_ for their user interface, and I think it's fair to say that both of them are fairly UI-heavy. And yet they average >70% code sharing.
It should also be pointed out that these apps were written from scratch. In fact, Jon Lipsky didn't even know C# before he started writing TouchDraw (the first app I linked).
Not all code is created equal. That 30% that needs to be rewritten for each device could be difficult to tune for each platform. Personally, I'd like to see some more information on what exactly was reused.
Those charts need 3 wedges: Shared non-interop layer code, interop layer code, and non-shared code. If 50% of that green wedge is interop layer code, you see why this isn't good.
Additionally, you really want "Time spent per area", which is very different from LOC in some cases.
But thanks for showing real data for named projects. Mine is from some LOC counts on customer projects (and not directly sharable), and from Mono____ advocates in the Atlanta iOS group.
Xamarin's examples for MonoDroid are (IMHO) pretty solid, to the point where I'll go to them first instead of the Android docs (even though I'm writing Java[0]).
It's true you have to re-write the UI code on every platform, but it's still the same language on all platforms, so less of a barrier vs. a total ground-up rewrite. Frankly, I see that as a plus, else you end up with badly cloned iOS-like UIs everywhere.
[0] Should note that I'd happily be using MonoDroid if I had a say in the matter.
As I said, you have to write separate UI on each platform, and you use the native toolkits (there's no fancy abstract cross-platform UI kit in Mono), so there's little chance of looking much out of place as long as you follow local conventions. It bewilders me to see native Android apps going out of their way to clone iOS widgets and stylings. Not as common lately though; you see it more in usage of PhoneGap and similar, since there's nothing stopping you from shipping an iOS-like CSS on Android.
PS: In the case of Android, I'd flip through [0], and check out Google's apps on a proper device to get the lay of the land.
>One big problem with non-native code is that example code is almost always written in native code, and can be hard/impossible to get new features working once you try to translate across the language barrier
I have not found this to be true for ObjC -> C# translation. In fact, it's the opposite: translating code is pretty straightforward, and CoreAnimation calls are no exception.
"Async support as a first-class feature" - that's actually a flaw. C#'s async implementation would be better served by having e.g. a generic monad system that allows async to be done in a library. Instead, it's another baked-in compiler feature, like C#'s duck typing and LINQ's query operators.
Calling it "cutting-edge" is an exaggeration, too.
It's not a flaw. Arguably it's worse than having a generic mechanism, but it's certainly not worse than having to use explicit callbacks, as in essentially all other popular languages (and previous versions of C#).
Please don't actually do this - or if you do, use the C# code only for the backend, and write the UI code natively for each platform. Nothing is worse than a 'not quite right', 'seems almost the same but isn't' UI.
You have misunderstood. iOS and Android apps built using C# have access to 100% of the native API of the underlying platform. So all of the UI is fully native. Take a look at our API documentation on iOS, for example:
If you have read the article, you would have known Xamarin products expose native iOS and Android APIs. MonoTouch apps feel exactly the same as UIKit apps because they are UIKit apps.
From my perspective, the MonoTouch UIKit bindings are more convenient than the original Objective-C library because C# is nicer (think events, typed arrays and dictionaries, etc.).
One of the major advantages of Xamarin's approach is that the Xamarin frameworks expose access to all of the native APIs and standard user interface controls of the underlying platform. The apps are fully native, they are just written with C#.
As I understand it the mobile SDKs Xamarin releases use the native UI widgets for each platform - allowing you to get native look and feel with one code base.
(I may be wrong on this, but I thought this was one of their major selling points. However, never having used the SDKs I don't know how well it works in practice)
What is he comparing to? Arguments #2-#5 & #7 seem to apply to Java as well, and #6 just quotes an extremely lopsided benchmark exploiting one specific feature of their C# implementation. Most of the arguments would apply to Objective-C too, I presume (though I haven't really used that outside of toying around).
I'd personally expect we'll see a lot more JS for mobile apps: a terrible language compared to C#, but a very well understood UI layer (HTML).
I don't think each point is meant to show that it's better in that respect than every other competitor. I think it's meant to be sort of like those feature grids on software packaging, where some of the competitors may have some combination of the features, but only Our Product has the complete set.
Reading this article was a waste of my time: a programmer whose language dominates inevitably shows up here trying to convince others to learn the same thing...
And then he's not ashamed to post an article saying that C# is the best language for developing mobile... he must have little shame. I'm sure you still don't program in native Android code, HTML5 + CSS3, and the other top mobile development languages.
While a bit of an aside, personally I find LINQ to have been a horrendous misstep in the evolution of the language. While the argument that it makes code more concise and easy to maintain holds in the small, once a project grows it becomes a terrible cancer -- everything becomes an amorphous blob of stuff, unintuitive, performance-disaster LINQ filters applied everywhere to bend it into shape.
Yes, but it's not specific to LINQ or MS or C#. If you give a tool to mediocre developers to make things easier they will use it right away. If you don't supervise its usage (how it fits into the architectural goals of the project you are on) you will end up with a mess.
Anonymous methods, unnamed classes, and try/catch blocks are all examples of these kinds of tools that, when used improperly, will kill the system's performance, code readability, maintainability and extensibility.
We are specifically talking about C#. LINQ was specifically held as one of the improvements to the language. I am not quite sure what your point is relative to that.
There are shockingly few examples where LINQ is superior to alternatives. LINQ is the brute force technique of avoiding proper collections/algorithms.
I don't understand. Are you saying that map, filter, reduce et al. on lazy sequences in functional languages are inferior to loops, or that Python list comprehensions and generators are inferior to loops?
That's basically LINQ to objects.
Maybe you're referring to LINQ-to-whatever (LINQ to SQL, etc.), a LINQ generalization where you can quote your expressions, rewrite the AST and emit something else (a very constrained kind of macros, if you will).
MS is probably to blame here, but people keep conflating the two.
In the vast majority of cases it is nothing more than syntactical sugar around loops. And no, I'm not saying that loops are superior, but rather I'm saying that loops are usually a terrible solution, but LINQ has a way of essentially hiding those egregious violators.
Take a block of code with LINQ in it and rewrite it minus LINQ but logically performing the same operations that the sugar is resolving to. To most developers it would offend the sensibilities of construction, but somehow in LINQ-land all seems fine.
95% of the time, what looks like a failed "loops 101" test actually doesn't matter at all. Maybe your list comparison function takes O(n^2) time... if your list never gets above 10-20 objects, who cares? Yes, if you're running LINQ over a list of 1,000,000 items, then you'll want to be careful what you're calling. But how often does that really happen in mobile apps?
What LINQ does is make your code more concise and readable. It makes it easier to find bugs and easier to understand by someone else (even if someone else is you, 6 months down the road).
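A contrived sketch of what I mean (Person is a made-up type):

using System;
using System.Collections.Generic;
using System.Linq;

class Person { public string Name; public int Age; }

class Demo
{
    // LINQ: the intent reads in one expression.
    static IEnumerable<string> AdultNamesLinq(List<Person> people)
    {
        return people.Where(p => p.Age >= 18)
                     .OrderBy(p => p.Name)
                     .Select(p => p.Name);
    }

    // Roughly equivalent hand-rolled loop: same asymptotic cost, more ceremony.
    static IEnumerable<string> AdultNamesLoop(List<Person> people)
    {
        var adults = new List<Person>();
        foreach (var p in people)
            if (p.Age >= 18)
                adults.Add(p);
        adults.Sort((a, b) => string.Compare(a.Name, b.Name, StringComparison.Ordinal));
        var names = new List<string>();
        foreach (var p in adults)
            names.Add(p.Name);
        return names;
    }

    static void Main()
    {
        var people = new List<Person> {
            new Person { Name = "Zoe", Age = 30 },
            new Person { Name = "Ann", Age = 12 }
        };
        Console.WriteLine(string.Join(", ", AdultNamesLinq(people))); // Zoe
    }
}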
Depends on what kind of mobile apps you are writing. Most of the apps I've worked on tend to deal with video / pixel-level image conversions. In these cases it's really common to, say, convert an RGB pixel array to YUV or similar, and a LINQ expression on one of those arrays would quite simply kill the app. Which I think is the best counterargument to the OP: C# might be fine until you decide to do something beyond the normal, performance-wise. Then you are back to square one learning the natively supported tools. IMO you are much better off learning the native environments from the ground up.
This is really the crux of it. The counterargument essentially seems to be that such broad abstractions are fine in the small or when you have enough hardware, however that in no way carries over to smartphones where you want to do the most with the least, and even where you have a kick-ass processor and multi-GB RAM, you still want to reserve battery. The primary reason Android seems to need so much more hardware than iOS can be attributed to this. Even when you have powerful hardware, this can kill you in the large (which was the original failure of Windows Vista -- people forget that Microsoft did a complete revert after thousands of man years of work)
Most developers I know seem to prefer the loop version. I'm not sure what you mean by "sensibilities of construction", but I'm a big advocate of LINQ. If your concern is that certain linq calls iterate an entire sequence, that could apply to all kinds of scenarios that have nothing to do with LINQ like string concatenation. You should always be aware of the performance characteristics of methods you are calling. LINQ is not special in that regard.
> I'm saying that loops are usually a terrible solution, but LINQ has a way of essentially hiding those egregious violators
So when you want to sort, search or transform a list, you prefer a technique which doesn't iterate over the items in the list at all? That makes no sense.
I prefer a technique where algorithms and computer science come into play. e.g. dictionaries, hashes, binary trees, structured data based upon the actual uses, etc. Having a giant array of memory objects and then treating every bit of code like it's some edge condition is not appropriate.
I agree that Dictionaries and the like are under-used.
But I don't agree that LINQ is bad because someone on your team didn't know how to use a Dictionary and thought that iteration would be fast on large in-memory lists.
In many cases there are efficient LINQ constructs, e.g. lazy sequences, and First() not enumerating past the matching item. Hand-coding these is prone to doing it worse than LINQ would. Teach your team how to use .ToDictionary() if need be.
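To make those concrete, a quick sketch:

using System;
using System.Linq;

class Demo
{
    static void Main()
    {
        var items = Enumerable.Range(0, 1000000);

        // First() stops enumerating at the first match -- it does not
        // walk the rest of the million elements.
        int firstBig = items.First(n => n > 42);
        Console.WriteLine(firstBig); // 43

        // For repeated lookups, build a dictionary once (O(n)) instead of
        // scanning a list per lookup (O(n) every single time).
        var people = new[] { new { Id = 1, Name = "Ada" }, new { Id = 2, Name = "Bob" } };
        var byId = people.ToDictionary(p => p.Id, p => p.Name);
        Console.WriteLine(byId[2]); // O(1) lookup: "Bob"
    }
}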
The point is relevant in relation to the fact that you held LINQ out as a misstep, but the reasons you cite can be applied to many features of many different languages.
Yes, but rather than actually arguing with his point, you are showing how it could also be used to criticize some other language features not in an attempt to prove him wrong, but rather to pander to anybody reading the argument who likes those language features.
Personally, I like LINQ a lot. I don't use it very much in my code (and usually it's for things like light filter or ordering), but it has these little things that save a lot of time.
Be careful what you ask for -- you have to admit that it is possible to create some horrendous "code" with LINQ. Also, I have a hunch that the OP is thinking of L2S or Linq2Entities or somesuch.
Well, horrendous code can be created with anything: LINQ, C#, Objective-C... But yes, such easy-to-use tools like LINQ are easily overused and you end up with real monstrous code.
We need to separate LINQ, the C# language feature, from the library implementation, LINQ to SQL or Entity Framework. The latter does tend to cause issues because it offers a rather leaky abstraction over your database and poor performance. But it doesn't have to be this way. Case in point: BLToolkit, a lesser known database access library that uses LINQ. It is light and almost as fast as native access. Very close to Dapper in performance, or pretty much the same if you compile your expression trees, but of course you have type safety. Of course it doesn't have all the bells and whistles of L2S or EF, but you gain speed and simplicity.
This times a million. The amount of shit I have to deal with where people have used it in criminal ways is unreal.
Fine examples:
.ToList() materialization multiple times in a call graph (see the sketch at the end of this list). Ok, so the thread is eating 200MB of RAM?!?!?!
Generic method hell.
complete lack of understanding regarding IEnumerable, ICollection, IList etc.
O(log N) that doesn't go bang until you stick a production dataset in it.
The specification pattern - an abomination is the only way I can describe it. Sounds great until you have 2500 specifications and are stacking them 20 deep and your ORM decides it has had enough of clauses (EF4 bug).
My favourite: junior developer turns up and says "I wrote this awesome linq query - come and look!". Get there: all of the above.
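To illustrate that first one (a contrived sketch, but the real-world version looks depressingly similar):

using System;
using System.Linq;

class Demo
{
    static void Main()
    {
        var source = Enumerable.Range(-5, 10);

        // Anti-pattern: every ToList() materializes a full copy, so one
        // call graph ends up holding several copies of the data at once.
        var a = source.ToList();
        var b = a.Where(n => n > 0).ToList();
        var c = b.Select(n => n * 2).ToList();

        // Better: stay lazy and materialize once at the end, if at all.
        var result = source.Where(n => n > 0).Select(n => n * 2).ToList();

        Console.WriteLine(c.Count == result.Count); // True
    }
}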
These are great concrete examples that are very LINQ-specific. I cringed reading a few of them. The .ToList() materialization can easily be avoided by using a little skill, but the realities of late binding that don't show up until you hit production -- this can catch anybody.
> I find LINQ to have been a horrendous misstep ... grows it becomes a terrible cancer -- everything becomes an amorphous blob of stuff, unintuitive, performance-disaster LINQ filters
I couldn't agree less. Any language construct can be abused, but LINQ is not particularly susceptible to abuse, unless you try very hard.
Can you expand on what exactly you mean by LINQ? Do you mean the .NET library support (extension methods on IEnumerable/IQueryable)? Do you mean the language features added to C# (anonymous types, expression trees, nicer lambdas, query syntax)? The language features, with the possible exception of query syntax, seem like hugely beneficial additions, to me.
This is a fundamental place where the worlds of C# and Java differ. Where the Java world would say, "developers might misuse this, better not have it in the language," the C# world says, "developers could really use this, better put it in the language."
But it doesn't appear (anecdotally I admit) that people feel that way. Seems the "whole world" (speaking loosely) is turning against Java because of this very philosophy.
Because Java has been so conservative, people actively hate its verbosity, boilerplate-ness, and lack of language features (anonymous functions, first-class functions, etc.).
So Java has, for many years, helped huge teams of mediocre developers avoid certain kinds of self-inflicted wounds by being conservative in terms of language features. And the result seems to be that Java is increasingly scorned.
I know if I had to replace my C# work with Java, I'd feel incredibly frustrated at the lack of language power. In fact, I wouldn't choose to do it -- it would have to be a hell of a project or opportunity to pull me into it.
(Luckily for me, there are many better options, like Clojure or Scala on the JVM, or Haskell, Python, Ruby, etc. off the JVM).
I work on a pretty huge C# code base. Use of LINQ is the least of its problems. If anything, I'd say one problem is people not knowing about things like lambda expressions, LINQ, generics or whatever. It can really make code harder to parse when it's written in ways that don't take advantage of the full power of the platform.
If you're doing that, it's crap code; you should have defined a few named object types. Maybe your "Dictionary<string, object>" is actually, e.g., a "PropertyMap". This is not a problem in the C# language.
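Something like this (PropertyMap being the hypothetical name above):

using System.Collections.Generic;

// Name the shape once; every signature that takes one becomes readable.
class PropertyMap : Dictionary<string, object> { }

class Demo
{
    static void Apply(PropertyMap properties)
    {
        // ... consume well-named properties instead of a raw dictionary
    }

    static void Main()
    {
        Apply(new PropertyMap { { "Width", 100 }, { "Title", "Hello" } });
    }
}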
Not at all - you just missed a few facts that need considering. Let's rip it to bits some more:
It's in System.Windows.Forms, isn't it? Don't really want that dependency and associated resolution being dragged into a web app, otherwise the compiler has to load the entire assembly's metadata.
Also, it requires full trust.
Oh and finally it isn't serializable.
Which is why we end up with SerializableDictionary<K, V>, which is even longer and is just an adaptor for Dictionary<K,V> that implements serialization.
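For anyone who hasn't had the pleasure, the usual community workaround looks roughly like this (one common shape of it, not an official API; XmlSerializer refuses plain IDictionary types outright):

using System.Collections.Generic;
using System.Xml;
using System.Xml.Schema;
using System.Xml.Serialization;

// Dictionary<K,V> doesn't implement IXmlSerializable, so you wrap it.
public class SerializableDictionary<TKey, TValue>
    : Dictionary<TKey, TValue>, IXmlSerializable
{
    public XmlSchema GetSchema() { return null; }

    public void WriteXml(XmlWriter writer)
    {
        var keySer = new XmlSerializer(typeof(TKey));
        var valSer = new XmlSerializer(typeof(TValue));
        foreach (var pair in this)
        {
            // Each entry becomes <item><key.../><value.../></item>.
            writer.WriteStartElement("item");
            keySer.Serialize(writer, pair.Key);
            valSer.Serialize(writer, pair.Value);
            writer.WriteEndElement();
        }
    }

    public void ReadXml(XmlReader reader)
    {
        var keySer = new XmlSerializer(typeof(TKey));
        var valSer = new XmlSerializer(typeof(TValue));
        bool empty = reader.IsEmptyElement;
        reader.ReadStartElement();
        if (empty) return;
        while (reader.NodeType != XmlNodeType.EndElement)
        {
            reader.ReadStartElement("item");
            var key = (TKey)keySer.Deserialize(reader);
            var value = (TValue)valSer.Deserialize(reader);
            reader.ReadEndElement();
            this[key] = value;
        }
        reader.ReadEndElement();
    }
}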
That's why it all sucks.
And I haven't even included ConcurrentDictionary thread safety yet.
In my experience, that kind of stuff is usually a problem mainly because of C#'s verbosity from lack of type inference. Although, with 4 unnamed elements there, it might start making sense to create a new type, and then it's just "List<List<MyRecord>>".
Some of the code was certainly written before those features were available, especially some C# 4 features. I'm sure someone might look at my code someday and wonder why I didn't use async and await. I'm talking about code written today, though.
You don't even have to use LINQ at all with C#; it's an extension library and not one that needs to be referenced or is even included as a default namespace. LINQ is very convenient to use on in-memory objects where there are no noticeable performance hits. Even LINQ-to-SQL produces somewhat optimized SQL. How else would you bend data into shape without an ORM layer? Writing custom stored procedures, adding a whole bunch of new service methods, or manipulating data tables directly?
This 'cancer' is simply a data layer; if you use proper design patterns and separate your layers correctly, you can surgically remove this tumour and replace it with another ORM.
LINQ is indeed optional, but this discussion is about the evolution of the language, with LINQ being held as an improvement. And because it is new and shiny, C# code across the globe quickly became infected with it.
Your comment on "where there are no noticeable performance hits" strikes a chord, because when first used there are no noticeable performance hits. But then that application grows and scales, and suddenly it is death by a million paper cuts, thousands of grossly inefficient set operations devastating performance. That's aside from the fact that LINQ is often a short circuit that saves you from having to think about the appropriate algorithms or object methods for the likely uses.
LINQ is just another tool in the toolbox. Like any tool it has the potential for misuse. Ultimately the skill of the person wielding the tool will decide whether the outcome is good or bad. It sounds like you've mostly seen the bad. In the right hands, LINQ can make things more readable and concise.
Take another example, anonymous functions in Javascript. I've seen plenty of tremendously horrible JS code bases that were just a deep series of nested anonymous functions. Almost impossible to debug or maintain. However, when used appropriately the anonymous function is very powerful and can make certain things much easier.