Every time I try to get a quick feel for .NET on Linux, I allocate an hour to it, and that hour is always spent wandering around various guides on microsoft.com, never getting anything done.
There are terms like .NET Core, .NET Platform, .NET Framework, and .NET SDK mixed up on nearly every page, multiple versions of "Getting started" and "Quick start" guides, and massive navigation menus and options on every page, so you never know whether you're looking at the latest and greatest or at some abandoned dark corner of a web property, as often happens with corporate sites.
Yo! If you want .NET to be a massive hit on non-Windows platforms, move it off microsoft.com to a small site and have a single version of "getting started" (and only one "guide"). Don't ever mention things that exist but don't work on Linux.
Like this: https://golang.org/doc/
Also, have a simple downloadable .tar.gz which expands into /bin + /lib + /examples. I loved C# back in my Windows days and I moved to Linux to escape Microsoft complexities and over-reliance on complex IDEs and tools, scattered like shrapnel all over my c:/
I will not run apt-get against your repo without knowing ahead of time what I'm getting and where it will all go, so let me play with the tarball first.
To be fair, the new https://docs.microsoft.com site is much better than the old ones, but I know what you mean. I still occasionally get confused by all the .NET nomenclature, and I wrote a book on the topic.
I gave a talk at the London .NET user group earlier this year on why it's all so hard to understand. Maybe it will help clarify some things, though it was given before the 2.0 announcement: https://unop.uk/on-asp-net-core-and-moving-targets/
Yo! Great point! In fact, the current .NET Core site was hugely inspired by the Go one (IMO).
Just take a look: http://dot.net/
There's also a great 20-lesson in-browser interactive tutorial (with F# and VB versions too!): https://www.microsoft.com/net/tutorials/csharp/getting-start...
I'd also like it to stay on this cool dot.net domain, though; microsoft.com might scare people.
My experience has largely been the same: the whole Mono, Xamarin, Visual Studio, etc. pile of tooling (whatever it all is) is a tragedy of good technology made useless. The licensing, tooling, and runtimes are such a mess that I give up before I begin. There needs to be one canonical way to do things on every platform; this "band-aid" approach might've worked when you first open-sourced it, but it surely no longer does the job.
I'd love to use .NET but the overhead of getting started isn't worth it when I can use any number of other truly free languages/platforms that are much easier to understand. And are truly cross platform because they have been so for years.
For reference, languages I use regularly or on occasion that don't suffer from the issues .NET has to any degree: Ruby, Swift, Elixir, Java, Scala, JavaScript, and Elm.
CLR and .NET seem very awesome but so far have turned me off in a big way. Please fix <3
I've been putting food on the table as a .net specialist for fifteen years. My impression is that .Net 4 is a band-aid of Windows-dependent implementations and that .Net Core is some solid tech, using industry best practices for streams, collections, GC and so on. How do you think they made it portable?
I started to get pissed off at MS at around the time MVC 3 came out. That was not the direction I would have taken. Oh the bloat. Asp.Net Core Mvc is a dream. You start out with nothing, basically. Invent your own conventions.
I'm happy to not touch .Net 4 again. Love Core.
Edit: as parent said though: Microsoft, your tooling has gone from best-of-breed to just-another-messed-up-ide. Please remove everything you copied from Resharper from Visual Studio. It just doesn't work. Try renaming a file. I have never had anything but near-fatal studio errors such as "the refactoring of the file name just didn't work, man, crashing soon...". I want my Resharper back. Can you disable your stuff? I mean, all of it? And please don't ask me to develop server applications on a javascript client such as VS Code. Nut gunna do it.
Having knowledge about legacy (Microsoft) tech is not bad if you're good. Many companies I know run their business on VB.Net/WebForms/Knockout/ELK/MSSQL. We do at my current gig. You have to be its mother to love that stack.
It takes about 5 minutes to set up the dotnet CLI tools and get a "Hello World".
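To be concrete, here's roughly what you end up with (a sketch: `dotnet new console` generates approximately this Program.cs, and `dotnet run` builds and runs it):

    // Generated (approximately) by `dotnet new console`; run with `dotnet run`.
    using System;

    namespace HelloWorld
    {
        class Program
        {
            static void Main(string[] args)
            {
                Console.WriteLine("Hello World!");
            }
        }
    }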
If you don't even want to spend that, you can install Visual Studio and the .NET Core stuff will just work out of the box, as is customary for VS.
Versioning and documentation is a mess, but neither you nor the grandparent seem to have actually made the minimal time investment necessary to even encounter those problems.
> Versioning and documentation is a mess, but neither you nor the grandparent seem to have actually made the minimal time investment necessary to even encounter those problems.
This is Microsoft's fault, not these guys, and it has pissed me off so many times over the years.
I always use the analogy that Microsoft builds these gigantic, beautiful mansions, but then to get to them you have to find the secret path that's covered in weeds.
A slightly different topic, but very much along the same theme:
What a mess! I want SSMS; do I even have it installed anymore? How do I clean this up without spending 6 hours (because I know something's going to go wrong during the uninstall)?
Yeah, the problem isn't "hello world." It never is. It's building something with the tooling that works cross-platform, or that can be deployed to the platforms I want to target. That's much more than "hello world," and it's the important part.
Perhaps you could make a useful comment next time instead of something plainly not?
Well, you literally said that it was useless and that you gave up before you began. Given that, I'd say that revelation's comment was pretty useful - explaining how you could quickly and easily get up and running with .NET. Perhaps you could tone down the hyperbole next time?
I don't disagree with your point at all. Hopefully https://dot.net/core will become that. As @runfaster2000 says, the team has taken the point on board and is working on it.
However, FWIW, I put together a C# on Linux workshop for DevConf.cz earlier in the year that might be useful for some folks: https://github.com/martinwoodward/csharpworkshop - it also includes links to the docs for building from source etc. if you want to go really deep into the details.
One thing that would have seriously helped me a few months ago, when I did a little project to learn some C# and write a command-line tool using .NET Core, would be a simple list of "using" statements that I'm likely to see in C# books and other documentation but that are not available in Core (or that need configuration or settings changes to make work). (Or maybe the other way around: a list of all "using" statements that work out of the box.)
The command line tool I was trying to write was to read my Safari reading list on my Mac, and construct an HTML page that contains the same information that I could put on my website, where I could then access it from my Surface Pro. Apple provides a way to export the reading list in XML, so that was my input.
I don't remember what it was now, but my first approach used some XML stuff that I got out of examples from some recent C# book, and it worked fine in Visual Studio Community Edition on my Windows gaming machine. In Core, though, on my Mac (and on my Surface Pro and Windows gaming machine) it failed to build. It was not finding something I was trying to include via "using".
I was not able to figure out if that thing is simply not part of Core, or if some build setting somewhere has to be changed to make it available.
(I eventually changed my approach and dealt with the XML through LINQ instead of at a lower level, so that I no longer needed whatever it was whose "using" was giving me trouble, and successfully got access to my Reading List from my Surface Pro).
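For what it's worth, the LINQ-to-XML route on Core looks roughly like this (a sketch; the file name and element names below are made up for illustration, since the real Safari export format differs):

    using System;
    using System.Linq;
    using System.Xml.Linq;   // XDocument and friends work out of the box on .NET Core

    class ReadingListExport
    {
        static void Main()
        {
            // Hypothetical element names, just to show the shape of the code.
            var doc = XDocument.Load("ReadingList.xml");

            var links = doc.Descendants("item")
                           .Select(x => new
                           {
                               Title = (string)x.Element("title"),
                               Url = (string)x.Element("url")
                           });

            foreach (var link in links)
                Console.WriteLine($"<a href=\"{link.Url}\">{link.Title}</a>");
        }
    }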
Speaking of names and versions -- while working on Cloud Foundry Buildpacks we frequently turned our brains into pretzels trying to keep track of which components were versioned how. We even had custom logic to parse a YAML file where we maintained a mapping to keep things straight.
Especially since some components embedded different versions of other components.
I don't know if the situation has improved since I rotated off, but a bog-ordinary semver scheme would've saved a world of pain.
Failing that, a single page with components (SDK? Runtime?) and available version numbers that gets updated.
Because we mostly wound up working out what was what from forensic readings of scattered blog posts, GitHub releases, and, I think, comments on GitHub issues.
In any case, I am sure the team would be glad to give you feedback on their experiences since then -- my email is in my profile if you want me to pass anything along.
The thank-yous keep rolling in. I also remembered I can point you at the Slack instance where they live -- https://slack.cloudfoundry.org/, in the #buildpacks channel.
I know you are on the .NET team, but we have critical issues with Microsoft (we are MSDN subscribers) that are not being addressed. The most critical issue is the impossibility of passing automated tests and signing a network device driver because of a Windows 10 bug! If you can help route this inside the organization, that would be great. More info about the issue here: https://social.msdn.microsoft.com/Forums/SharePoint/en-US/b1...
Worked like a charm; I had the site up and running in 20 mins, which is kind of "production" ready, with supervisor and a reverse proxy. Or just use Docker? You're also overthinking the terms a bit; you don't need to know them by heart.
Yeah, the Microsoft docs are always a pain to navigate. A lot of seemingly duplicate content. Not that Apple's docs were any better for many years (the Swift docs are much better).
In contrast to those two, I've found Android's docs pretty good in the 3 years I've been referencing them.
100% hear you on this. I work at Microsoft and recently started looking at our Getting Started experience (initially focused on our websites). As @runfaster2000 already mentioned, we are working on some changes/redesign to address exactly what you talked about. I sent an email to the address in your profile to see if you want to chat more and see if some of our ideas would solve your frustrations.
Whenever I look at drastic improvements in performance, I remember that a while ago I managed to speed up a custom parser at work, by about 30x. When I emailed my colleagues, I just said "You may consider this to be work of a genius. Or you may think I was stupid when writing the original implementation. I'll let you pick the narrative that you fancy."
Interpretations are fun when there's a baseline :-)
Reminds me of when I replaced a read-write lock around generating a new object instance with an atomic reference instead. I personally put that one into the latter of your two cases... But the performance improvement certainly looked heroic!
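In C# terms, the kind of change I mean looks roughly like this (a sketch, not the actual code from that project; the type names are made up):

    using System.Threading;

    class Holder
    {
        private Widget _instance;   // shared reference, created lazily

        public Widget Get()
        {
            // Lock-free read of the shared reference.
            var current = Volatile.Read(ref _instance);
            if (current != null)
                return current;

            // Build a candidate and publish it atomically. If another thread won
            // the race, discard ours and use theirs; no read-write lock needed.
            var candidate = new Widget();
            var original = Interlocked.CompareExchange(ref _instance, candidate, null);
            return original ?? candidate;
        }
    }

    class Widget { /* whatever the lock used to guard */ }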
> This is another great example of a developer caring a lot about a particular area of .NET and helping to make it better for their own needs and for everyone else that might be using it.
This is a great, succinct, non-ideological explanation for why open-source projects where anyone can contribute tend to be better. For a given component/function, there might be only a single person in the entire world who needs that optimized badly enough to actually do it themselves, but once they do, everyone benefits. A closed-source team has to prioritize their development efforts, which means niche improvements will probably never make it in. Multiply this by a thousand different niches, and the product is going to be slower.
We (TechEmpower) had this in mind when we created our framework benchmarks project a few years back. Performance improvements in platforms and frameworks have the potential of very broad impact. With our project, we wanted to provide some inspiration for doing that kind of performance tuning. We had found ourselves in many conversations about how many real-world CRUD web applications take multiple seconds to render a page with a form. We realized that if, just as an example, the JSON serializer or template engine were substantially faster, many real-world applications that use those components would see notable improvements to their user experience.
We haven't spent much time with those benchmarks. We looked at a couple of them and believe that there are better ways to write them in C# and get better results. That's not FUD, just our findings.
This is a great community activity. Clearly, the community is more than capable of performance enhancements, based on the improvements they have made in the product.
If people start improving the C# benchmarks, please file an issue on dotnet/core to get feedback and some cred. We may do another blog post on that if there is some gravity around the activity.
> Please don't implement your own custom "arena" or "memory pool" or "free list" - they will not be accepted.
> ...
> We ask that contributed programs not only give the correct result, but also use the same algorithm to calculate that result.
So there might not be too much room to improve. There could be some room to improve for things like "custom ... memory pool" since .NET Core has ArrayPool [2] built-in. But I can't tell if the spirit of that rule is "don't implement pooling" vs. "you can only allocate memory in the standard ways provided by the runtime."
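For reference, the built-in pool looks like this (just illustrating the System.Buffers API; whether the benchmarks game rules would consider it a "custom" pool is a separate question):

    using System.Buffers;

    class PoolExample
    {
        static void Process()
        {
            // Rent a buffer of at least 4096 bytes from the shared pool
            // (the returned array may be larger than requested).
            byte[] buffer = ArrayPool<byte>.Shared.Rent(4096);
            try
            {
                // ... use the buffer ...
            }
            finally
            {
                // Hand it back so it can be reused instead of becoming garbage.
                ArrayPool<byte>.Shared.Return(buffer);
            }
        }
    }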
It may come down mostly to Java being able to use 32-bit references on a 64-bit JVM in certain cases. Many of those programs are very heavy on references. C# can gain a bit in some cases due to value types, but it doesn't always help enough.
I played around with the binary-tree code. It's definitely because of Task (disabling the >= 17 heuristic doubles the memory usage). This should come way down with .Net Core 2.0 and ValueTask.
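Roughly, the point of ValueTask is that a method which often completes synchronously doesn't have to allocate a Task per call; something like this sketch (names made up):

    using System.Threading.Tasks;

    class CachedReader
    {
        private byte[] _cached;

        // ValueTask<int> avoids allocating a Task object on the hot path
        // where the result is already available synchronously.
        public ValueTask<int> ReadLengthAsync()
        {
            if (_cached != null)
                return new ValueTask<int>(_cached.Length);   // no allocation

            return new ValueTask<int>(ReadSlowAsync());      // wraps a real Task
        }

        private async Task<int> ReadSlowAsync()
        {
            await Task.Delay(10);          // stand-in for real I/O
            _cached = new byte[256];
            return _cached.Length;
        }
    }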
Worth noting it's not hand-written assembly, but rather C# code written in a very specific way in order to encourage the compiler to apply more aggressive optimizations (inlining, bounds-check removal, etc.).
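For example, one common pattern (a simplified illustration, not code from the actual changes): comparing the loop index directly against the array's own Length lets the JIT prove the index is always in range and drop the per-element bounds check.

    static int Sum(int[] values)
    {
        int sum = 0;

        // Using values.Length as the loop bound allows the JIT to elide
        // the bounds check on values[i].
        for (int i = 0; i < values.Length; i++)
            sum += values[i];

        return sum;
    }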
I hope most or all of these will find their way into the full framework. It's a bit odd that this is so rarely mentioned. I still haven't quite grasped the relation between core and the full framework when it comes to fixes like this being merged into the full framework.
“We expect that many of these improvements will be brought to the .NET Framework over the next few releases, too”
We still need to ensure that all the changes are behaviorally compatible. Other than that, we intend to bring these same performance investments to the .NET Framework as well.
> I still haven't quite grasped the relation between core and the full framework when it comes to fixes like this being merged into the full framework.
In the beginning the sources of the two were more closely tied, and changes were (automatically?) copied over. Now it seems like it's more ad hoc, done as and when they decide it's needed. See the mentions of 'TFS Mirror' in this thread for a bit more info: https://github.com/dotnet/coreclr/issues/972#issuecomment-25...
If you take a look at these issues, it's clear that they're happy to let the source code become very different, i.e. hard or impossible for changes to be automatically ported across.
Just look at how much .NET Framework (Desktop) code they removed from CoreCLR earlier this year!
FWIW (I work on the .NET team and I wrote the mirror we use to keep TFS and GitHub in sync): when we started the project we maintained a mirror which kept our GitHub repository and internal TFS branch in sync. After a while, we decided that trying to maintain this was more trouble than it was worth (the internal TFS branch used a completely different build system, and there were other interactions between code which had been open-sourced and code that hadn't which meant our internal branch was on the floor every few days); in addition, it meant that we were carrying around a bunch of effectively dead code in the source tree.
We still do mirror some code (mainly the JIT) into TFS to make it easier to share code with the Desktop in an automated fashion. However, for the rest of the code (e.g. the VM and BCL), if there are improvements we want to bring back, an engineer will just port them manually.
.NET Core is not a rewrite in any way. It's the same runtime with the same GC and JIT, the same language compilers, and mostly the same standard library. It has just been stripped down by removing deprecated features (such as code sandboxing / partial trust) and made to support non-Windows platforms.
You might have confused it with ASP.NET Core, the web development framework, which is a full rewrite.
Skimming, but it doesn't look like these optimizations are relevant to .NET standard (which is a formal specification of a set of APIs - not a particular implementation.)
Correct. Performance improvements are out of the scope of the standard. Much like how the HTML spec doesn't tell Google and Microsoft how fast their browsers need to work.
Although, if I understood this correctly, these performance improvements will only take effect if you compile using .Net Core 2.0 and run on the .Net Core 2.0 runtime?
I did not realize .Net Core had diverged this much from .Net Framework.
The real problem for me is compatibility of NuGet packages with .NET Core. I have so many which are just not compatible, and many which have a slightly different Core variant with weird gotchas.
As such I've stuck with .NET 4.5 for now. On the positive side Mono seems to have got a lot better and I have a bunch of stuff running on Linux with surprisingly few problems "out of the box".
It's pretty tricky for the runtime to figure out that there are no side-effects, and that s.Min is not going to change, if the implementation behind it actually walks some nodes. Among other things, it would require it to prove that none of the walked nodes ever mutate anywhere else (don't forget that this includes backdoors like reflection).
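In other words, something like this (a contrived sketch, assuming s is the SortedSet<T> from the article's example):

    using System.Collections.Generic;

    static int SumOfMins(SortedSet<int> s, int iterations)
    {
        int total = 0;

        // The JIT can't safely hoist s.Min out of the loop: the property walks
        // tree nodes, and proving that nothing (another thread, reflection, a
        // callback) ever mutates the set is beyond what it can see.
        for (int i = 0; i < iterations; i++)
            total += s.Min;

        // A human who knows the set isn't changing can hoist it manually:
        //   int min = s.Min;
        //   for (int i = 0; i < iterations; i++) total += min;

        return total;
    }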
This is mentioned in the post, in case you missed it:
> Further, normally such testing is best done with a tool like BenchmarkDotNet; I’ve not done so for this post simply to make it easy for you to copy-and-paste the samples out into a console app and try them.
We love BenchmarkDotNet and use it (and other perf tools) quite a lot internally.
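For anyone who hasn't seen it, the shape of a BenchmarkDotNet benchmark is roughly this (a minimal sketch; the methods being measured are arbitrary):

    using BenchmarkDotNet.Attributes;
    using BenchmarkDotNet.Running;

    public class StringBenchmarks
    {
        private readonly string[] _parts = { "a", "b", "c", "d" };

        [Benchmark]
        public string Concat() => string.Concat(_parts);

        [Benchmark]
        public string Join() => string.Join("", _parts);
    }

    public class Program
    {
        // BenchmarkDotNet takes care of warmup, iteration counts, and statistics,
        // which is why it beats hand-rolled Stopwatch loops for this kind of thing.
        public static void Main() => BenchmarkRunner.Run<StringBenchmarks>();
    }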
Owning and managing a software business, we develop using multiple technologies:
Mobile: Objective-C, Swift, Android/Java, Xamarin/.NET, PhoneGap
Web: ASP.NET MVC, PHP, Java
Etc...
Using ASP.NET MVC requires 3rd-party UI libraries, and it depends on the size of the team and its experience in .NET; we usually assign .NET developers with at least 4 years of experience in .NET and front-end (currently MVVM JS libraries).
If you're starting in .NET you have a learning curve, but this is shrinking as the technologies improve.
> For example, SortedSet<T>‘s ctor was originally written in a relatively simple way that didn’t scale well due to (I assume accidentally) employing an O(N^2) algorithm for handling duplicates.
Dude. Be fair. You left out the part where he says:
> In other cases, operations have been made faster by changing the algorithmic complexity of an operation. It’s often best when writing software to first write a simple implementation, one that’s easily maintained and easily proven correct. However, such implementations often don’t exhibit the best possible performance, and it’s not until a specific scenario comes along that drives a need to improve performance does that happen. For example, SortedSet<T>‘s ctor was originally written in a relatively simple way that didn’t scale well due to (I assume accidentally) employing an O(N^2) algorithm for handling duplicates.
You'll never get anything done if you want to get it perfect the first time round. Or as they say, first make it work, then make it right, then make it fast.
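As a contrived illustration of how easily that happens (not the actual SortedSet<T> code), compare a duplicate check via a linear scan with one via a hash set:

    using System.Collections.Generic;

    static class Dedup
    {
        // Simple and obviously correct, but List.Contains is a linear scan,
        // so this is O(N^2) over the input.
        public static List<int> Slow(IEnumerable<int> items)
        {
            var result = new List<int>();
            foreach (var item in items)
                if (!result.Contains(item))
                    result.Add(item);
            return result;
        }

        // Same result; HashSet.Add makes each membership test O(1) on average,
        // bringing the whole thing down to O(N).
        public static List<int> Fast(IEnumerable<int> items)
        {
            var seen = new HashSet<int>();
            var result = new List<int>();
            foreach (var item in items)
                if (seen.Add(item))
                    result.Add(item);
            return result;
        }
    }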
Always interesting timing by technology companies. Here's a post to developers by Microsoft the day AFTER Apple's State of the Union presentation to developers.
Microsoft touting performance improvements the day after Apple amps up performance on pretty much every aspect of the Apple developer infrastructure (Xcode, Swift, APIs, processor and GPU utilization, etc.).
I was personally involved in the publishing of this post. The timing of this post and the Apple event didn't even register with us as interesting. We're an engineering team and have zero interest in cross-company tactics like that. Now, if there is an LLVM release you want to tell me about ... ;)
I have no way of knowing for sure, but I doubt a blog post that long, published on an official MS blog (i.e. probably requiring sign-off), could be written that quickly!