* I seem to install Json.net in every .NET project I create (JSON serializer - thanks, James Newton-King!)
* NLog for logging. It's fantastic.
* Dapper is also excellent (micro ORM), and anything written by Marc Gravell is generally top-notch - see protobuf-net, MiniProfiler, StackExchange.Redis.
* AutoMapper, to avoid writing boilerplate code to convert between similarly shaped types (e.g. DTOs to DB models).
* RestSharp makes talking to RESTful APIs easier.
* Microsoft's Unity is a great IoC framework, although most people seem to use Ninject.
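For anyone new to the stack, basic Json.net usage looks something like this (a minimal sketch; the `Person` type is just an illustration):

```csharp
using Newtonsoft.Json;

// Round-trip a simple object through JSON
var json = JsonConvert.SerializeObject(new Person { Name = "Ada", Age = 36 });
var back = JsonConvert.DeserializeObject<Person>(json);
// back.Name == "Ada", back.Age == 36

public class Person
{
    public string Name { get; set; }
    public int Age { get; set; }
}
```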
Excellent list. Another addition I would make is another contribution from Stack Exchange, this time from Kevin Montrose. It's a high-performance JSON serializer/deserializer called Jil. https://github.com/kevin-montrose/Jil
I've been using it on most of my newer projects this year and I have to say I much prefer it over the (still excellent) Json.net.
> * Microsoft's Unity is a great IoC framework, although most people seem to use Ninject
Autofac is my favorite. Once built, containers are immutable, there is deterministic disposal of components in 'lifetime scopes', and delegate factories make it much easier to instantiate classes with a mixture of services and data.
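To illustrate the lifetime-scope point, a minimal sketch (real Autofac calls; `IRepository`/`SqlRepository` are made-up placeholder types):

```csharp
using System;
using Autofac;

var builder = new ContainerBuilder();
builder.RegisterType<SqlRepository>().As<IRepository>().InstancePerLifetimeScope();
var container = builder.Build(); // the container is immutable once built

using (var scope = container.BeginLifetimeScope())
{
    var repo = scope.Resolve<IRepository>();
    // use repo...
} // repo is deterministically disposed here, along with the scope

public interface IRepository { }

public class SqlRepository : IRepository, IDisposable
{
    public static bool Disposed;
    public void Dispose() => Disposed = true;
}
```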
Dapper is a really awesome library when you want to write complex queries (full of table hints, output clauses, merges and other advanced stuff) but don't want to write an enormous amount of code for adding parameters and reading result sets.
I've also used BLToolkit, and it is pretty fast too, plus it has LINQ support to build strongly-typed queries that are sometimes better than raw SQL queries. But when it comes to table hints, LINQ and other stuff doesn't help. So, it always depends :)
Damn, thanks for pointing me in that direction. It's exactly what I've been looking for.
Technically, I implemented some of that stuff myself because I needed a REST client that would work with PCLs and be a bit 'nicer' to use than just juggling strings/routes everywhere, so using annotations and wrapping Microsoft's HttpClient was an obvious and relatively easy way out.
Anyway, I'll probably just phase out my own solution and start using the library.
[EDIT]
After checking it out, the RefitStubs.cs file seems a little clunky, but the library as a whole looks solid enough. I'll look further into it.
RefitStubs is necessary because on certain platforms (iOS, WinRT) you cannot JIT, and therefore can't use something like Castle Core to generate a class.
On the other hand, I wonder if it wouldn't be easier to ship a T4 template? I know for sure that would integrate with VS (both the IDE and build process) without any issues, not sure about Xamarin Studio...
I'd add ImageResizer for resizing images (and, in MVC, resizing them in a RESTful way...). You can also cache them locally, or cache (resized) external images. You can also store them in the cloud and add effects to the images.
I use NLog, but I access it through the Common.Logging facade. This is helpful when I need to bolt my code into an environment where NLog kind of sucks, like Xamarin.Android.
I think you're asking about Dapper (ORM), which could be compared to Entity Framework. The two major things to think about when comparing are simplicity and speed. Dapper is simple to set up and use. It uses standard SQL and it's very fast. On the other hand, this is not a fair comparison: EF has many more features and options. Use the right tool for the job depending on requirements.
Elmah and NLog are not apples to apples either. Elmah makes dealing with "unhandled" exceptions very simple. NLog is for application logging (Trace, Info) and exception management (a try/catch scenario).
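In other words, the NLog side of that split is everyday application logging (real NLog calls; the messages are arbitrary):

```csharp
using System;
using NLog;

var logger = LogManager.GetCurrentClassLogger();

logger.Info("Starting import");
try
{
    throw new InvalidOperationException("demo failure");
}
catch (Exception ex)
{
    // Logs the message plus the full exception and stack trace
    logger.Error(ex, "Import failed");
}
```

Where a log entry ends up (file, console, database) is then purely a configuration concern, via NLog.config or programmatic targets.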
In fact it is not uncommon for me to use both Dapper and EF on the same project. Usually I'll use Dapper for the query/reporting side and EF for the domain model/update side. (Used to use nHibernate, but it seems to have really stagnated unfortunately)
Even then, for query building I found EF (or any LINQ provider for SQL) to be an easier solution than string concatenation. Well, at least for relatively simple queries with pagination and stuff. For filters, Dynamic.Linq would be helpful here.
On the other side, if you need to work extensively with stored procedures or you have complicated or custom SQL to run, Dapper (or Insight.Database - https://github.com/jonwagner/Insight.Database, a library which I found really nice to use) will be your best friend.
It's true. What I'd really like is a micro-ORM like Dapper with support for using LINQ to dynamically build queries. But I suppose this is essentially EF with AsNoTracking, which I have yet to play around with and test performance-wise.
AsNoTracking does indeed make materialization faster, but EF always has a cost associated with it, both on initial use (setting up the context etc., which is noticeable) and on every query after that (expression tree parsing etc., not as noticeable, especially for recurring queries).
I hope it's one of the pain points they will address and solve during their rewrite for EF7 at some point.
That being said, my opinion is that unless you have really strict performance and/or latency requirements, EF should be good enough. At least for simpler queries and in 'longer-running' applications :)
My testing showed that AsNoTracking is faster than Dapper. But indeed the tests were purely focused on materialization and not query parsing: LINQ query parsing takes a lot of time, so with lots of predicates etc. you'll see much slower performance with EF compared to other ORMs with query systems other than LINQ (except NHibernate; that's slow regardless).
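For reference, the pattern being measured is roughly this (an EF6-style sketch; `BlogContext` and `Tag` are illustrative names):

```csharp
using System.Collections.Generic;
using System.Data.Entity;
using System.Linq;

public class Tag
{
    public long Id { get; set; }
    public int PostCount { get; set; }
}

public class BlogContext : DbContext
{
    public DbSet<Tag> Tags { get; set; }
}

public static class TagQueries
{
    // Read-only query: entities skip the change tracker, so materialization is cheaper
    public static List<Tag> GetActiveTags(BlogContext db) =>
        db.Tags.AsNoTracking()
               .Where(t => t.PostCount > 0)
               .ToList();
}
```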
Entity Framework isn't really similar to any of the things I listed. I tried to only include components you might use as part of a larger system, rather than technology stack choices.
Dapper distills converting database rows to objects down to a single call. It's a micro ORM, which means it doesn't try to abstract SQL away the way a monolithic ORM (NHibernate, Entity Framework) would. Example:
public TagModel GetTag(long id)
{
    using (var conn = m_connector.GetConnection())
    {
        return conn.Query<TagModel>("SELECT TOP 1 * FROM Tags WHERE Id = @Id", new { Id = id }).FirstOrDefault();
    }
}
AutoMapper does not really compare to EF. AutoMapper is used to automatically copy data between two objects of different classes (like a.UtilisateurName = b.Utilisateur.Name). It uses reflection for that. It will try to link properties automatically by their names. The two classes do not have to have the same shape: it works if one of the classes is a flattened version of the other.
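A sketch of that flattening convention in action (current AutoMapper API; the types just mirror the `Utilisateur` example above):

```csharp
using AutoMapper;

var config = new MapperConfiguration(cfg => cfg.CreateMap<Commande, CommandeDto>());
var mapper = config.CreateMapper();

var dto = mapper.Map<CommandeDto>(new Commande
{
    Utilisateur = new Utilisateur { Name = "Marc" }
});
// dto.UtilisateurName == "Marc": "Utilisateur" + "Name" matched by naming convention

public class Utilisateur { public string Name { get; set; } }
public class Commande { public Utilisateur Utilisateur { get; set; } }
public class CommandeDto { public string UtilisateurName { get; set; } }
```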
* AutoFixture: great for the "Arrange" phase of unit tests. Populates object graphs with test data, highly customizable. There's a bit of a learning curve. Pairs nicely with xUnit but can be used with any TDD library - https://github.com/AutoFixture/AutoFixture
* Shouldly: a different approach to unit test assertions using "Should" that I find more readable. It also has friendlier error messages when assertions fail - https://github.com/shouldly/shouldly
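A taste of the readability difference (these are real Shouldly extension methods):

```csharp
using System.Linq;
using Shouldly;

var total = new[] { 1, 2, 3 }.Sum();

// Reads like a sentence; on failure the message echoes the expression being tested
total.ShouldBe(6);
"hello world".ShouldContain("world");
new[] { "a", "b" }.ShouldNotBeEmpty();
```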
Shameless plug of my own library:
* Regextra: aims to solve some common regex related scenarios. Features a Passphrase Regex Builder to generate a pattern for passphrase criteria, and also supports named templates and a few utility methods - https://github.com/amageed/Regextra
I've been considering abandoning AutoFixture for FsCheck.
Partially because that's what AutoFixture's maintainer seems to have done recently. Mostly because FsCheck adds a number of additional features, such as automatically reducing failing test cases down to the simplest failing example it can find, that make it very powerful for more complex cases.
One downside, though, is that the C# interfaces are somewhat neglected. It's really a lot nicer to use from F#.
* Hyperletter: https://github.com/Jiddler/Hyperletter - a very easy way to do inter-process communication. Great for writing distributed software over HTTP, but can be just as useful locally.
This is a port of the Akka framework from Java to the .NET platform. It's an actor model framework which works incredibly well for building highly concurrent and distributed applications. They just hit version 1.0, are out of beta, and are backed by a commercial company, Petabridge.
I'm a bit torn, because ETW is probably the way forward for us, but it doesn't have a good ecosystem currently and offers nothing like log context... though some higher-order function magic might help.
Serilog, to me, is a much more modern logging solution. It allows us to pipe logs to table storage/SQL Server/Loggly/Logentries/DocDB/Azure Event Hub... it's the backbone of our app - really just awesome.
Would you be interested in sharing your setup, or pointers toward a good solution with Serilog? I found that anything other than a basic setup requires a good chunk of effort to get going. For example, configuration (all code) and IoC.
I can honestly say I had not heard about AutoFixture until today. Haha.
They look similar. The only difference I can really see is that I prefer my syntax a little better for type fixtures:
.For<string>("whatever");
vs
fixture.TypeMappings[typeof(string)] = s => "whatever";
I also allow a seed to be set to ensure "consistent random data".
But there are some features of AutoFixture that I definitely want to look at.
I created Fibber more for mocking data sources when I was creating Web API services, and AutoPoco was just too much configuration. Unit testing was not the main purpose of the library.
I find NancyFX really interesting, but I can't really see a reason to stray from Web API 2 besides Mono support. Any specific reasons you prefer Nancy?
I've been wondering this, too. I've been using MVC since version 2 and am very comfortable with it and Web API, but a startup I've been doing side-work for is using Nancy. Nancy does seem lighter weight, and includes TinyIoC for dependency injection out-of-the-box, but I too don't see a real killer use case for Nancy.
It's nice to serve web APIs and server-side pages from the same class. I like the content negotiation in Nancy.
Also the built-in testing features are nice, it is easy to test views too.
With .NET going open source, I'd like to see what is out there for web services. Are there any Linux-compatible .NET web servers/routers that I can use that have a similar feature set and performance to the Spray/Scala stack? C# or F#, but preferably F#. I've tried a couple (Suave, Nancy), but some simple benchmarks showed they were about half as scalable as Spray.
Were those simple benchmarks running on Mono? .NET is certainly going open source, but it hasn't arrived yet. It might be worth rerunning those benchmarks once .NET proper is available for *nix.
Yes they were on the latest version of Mono, which in my experience outside of web programming is just as fast as the JVM. I think the gap has more to do with the libraries than the VM, but I'd be willing to wait it out to see. I'm not migrating soon anyway...although it would be nice to know how far out it is.
Benchmark: C# (Mono) - 1052.041381 ms, Java - 597.9874 ms
My machine: C# (Windows) - ~475 ms, Java - ~552 ms
Even considering that Mono is using maximum precision available (so instead of floats it's using doubles), when I swapped all floats to doubles it still ran in ~525 ms on my box.
So while Mono might be 'getting there', sometimes it's still lagging behind. It's just something to keep in mind :)
+1 on this - getting ASP.NET working really well on Linux/Mac, with first-party framework and VM, is the main goal for all of the in-flight .NET Core CLR open source stuff. That + console apps are literally all that's going to be supported, at least initially. Microsoft cares a lot about making this good.
In this case, "half as scalable" really doesn't clarify what you mean. If I can throw twice as many servers at the problem and it runs as fast, then to me it's just as scalable.
* Do you mean half the performance per connection?
* Throughput falls off at half the connections?
* Will start trailing off at half the number of servers?
In my mind there's a cost to development time and architectural thinking, as well as the developer cost to maintain/adapt/enhance. If you can throw more runtime resources at a problem, and that costs you a fraction of a man-year of labor over the next five years vs. taking longer to develop or having fewer developers skilled enough to make modifications, these are real concerns. It really depends on context.
I've been using TopShelf in production settings for a few years now and have been more than pleased. We have yet to discover any issues related to using the library. And new developers find it extremely easy to pick up.
I've just rolled out a service using TopShelf having not used it previously. It really does simplify things. The best bit for me is being able to just hit F5 and having the thing run as a console application. No need to manually attach to processes for debugging.
This technique lets you debug your service as if it were a console app - just pass in -a on the command line.
In your Main method:
if (args.Length > 0 && args[0] == "-a")
{
    // Running as an application
    MyService ms = new MyService();
    ms.DoStart();
    Console.WriteLine("Press Enter to Stop");
    Console.ReadLine();
    ms.DoStop();
}
else
{
    // Running as a service
    ServiceBase[] servicesToRun = { new MyService() };
    ServiceBase.Run(servicesToRun);
}
And in the MyService class that derives from ServiceBase:
protected override void OnStart(string[] args)
{
    DoStart();
}

protected override void OnStop()
{
    DoStop();
}

internal void DoStart()
{
    // Start your service thread here
}

internal void DoStop()
{
    // Tell your service thread to stop here
}
It's definitely superior to hand rolling Windows services. We have something like 6 or 7 TopShelf-based services, a couple of which are self-hosted WebAPI endpoints. No problems or gotchas that I can remember, aside from slightly marginal documentation last time I looked.
I think my problem was using the Custom Service style, rather than inheriting from ServiceControl. There are a number of ways to interact with the HostConfiguration.Service<T>() method and its overloads, but there's only one tiny section in the documentation.
TBH, I think you might want to deprecate that whole way of using TopShelf. Using ServiceControl is much much easier and better documented. I think if one wants to keep TopShelf out of their core code, it's easier to create a separate "ServiceWrapper" project that has a ServiceControl subclass and calls into your other code, rather than using the HostControl.Service() style.
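For comparison, the ServiceControl style boils down to something like this (real Topshelf API; the `Worker` type and the service names are made up):

```csharp
using Topshelf;

public class Worker : ServiceControl
{
    public bool Start(HostControl hostControl)
    {
        // start your worker thread/timer here
        return true;
    }

    public bool Stop(HostControl hostControl)
    {
        // signal the worker to stop here
        return true;
    }
}

public static class Program
{
    public static void Main()
    {
        HostFactory.Run(x =>
        {
            x.Service<Worker>();          // Worker needs a parameterless constructor
            x.RunAsLocalSystem();
            x.SetServiceName("MyWorker");
            x.SetDisplayName("My Worker Service");
        });
    }
}
```

Run the exe with no arguments for the hit-F5 console experience, or with `install`/`uninstall` to register it as a Windows service.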
I've recently discovered Linq2Db. Works great for CRUD in conjunction with Dapper. https://github.com/linq2db/linq2db
It also comes with handy T4 templates to generate models.
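A minimal sketch of the hand-written flavor (linq2db's `DataConnection` API; the `Tag` mapping and the "MyDb" connection name are illustrative):

```csharp
using System.Collections.Generic;
using System.Linq;
using LinqToDB;
using LinqToDB.Data;
using LinqToDB.Mapping;

[Table("Tags")]
public class Tag
{
    [PrimaryKey] public long Id { get; set; }
    [Column] public string Name { get; set; }
    [Column] public int PostCount { get; set; }
}

public static class TagRepository
{
    public static List<Tag> GetPopular(int minPosts)
    {
        // "MyDb" refers to a connection string in app.config
        using (var db = new DataConnection("MyDb"))
        {
            return db.GetTable<Tag>()
                     .Where(t => t.PostCount >= minPosts)
                     .OrderByDescending(t => t.PostCount)
                     .ToList();
        }
    }
}
```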
This is an extensive list of (non-MS) OSS libs and tools for .NET, complete with descriptions. The URL might suggest this is the .NET source, but it's actually a set of text documents with URLs to OSS libs/tools.
One of my recent favorite libraries is DynamicData [1], which is Rx for collections. It allows you to use LINQ operators to create lots of different collections from a single "observable cache". An example from the GitHub page:
var list = new ObservableCollectionExtended<TradeProxy>();
var myoperation = somedynamicdatasource
    .Filter(trade => trade.Status == TradeStatus.Live)
    .Transform(trade => new TradeProxy(trade))
    .Sort(SortExpressionComparer<TradeProxy>.Descending(t => t.Timestamp))
    .ObserveOnDispatcher()
    .Bind(list)
    .DisposeMany()
    .Subscribe();
This takes the collection at somedynamicdatasource, transforms the contents to TradeProxy objects, sorts it, and binds it to list. Any time that an object is added or removed in somedynamicdatasource, list will be updated to match. This library has over 50 operators including dynamic filters (changing the filter causes the bound collection to be reevaluated), boolean operators (only update the resulting collection if the object is in both source collections or only one), etc.
I have to say that for someone starting to transition to .NET, this is a great post. Posts like these are what I expect from the Ruby/Python/Node/PHP communities, and the lack of them, along with the lack of good learning opportunities, had made me hesitant to use .NET in the past.
Thanks a lot; you've just made me feel a whole lot better about the .NET community as a whole.
Last summer and into the fall, I had plans for Formo 2.0, which included a lot more flexibility and customization capabilities (json config, http context config, environment variables, etc). When I learned about aspnet5's configuration I essentially stopped development on Formo. The big reason being that they're building in such a way that I imagined Formo would be used. (Which makes me quite happy)
For that reason, the project is largely "finished", but it's still stable and alive.
As someone who spends a lot of time on file processing, I appreciate the link. Although I have to selfishly admit that had I stumbled onto this myself, I would have avoided it just because of SourceForge.
How well does it handle the oddball cases? Not that you could call CSV a rigorously-defined spec, but we have a vendor that sends us a file where they sometimes don't escape double-quotes correctly.
Hangfire looks great. I'm currently running all my jobs through Azure Scheduler. Would anyone know what the pros & cons are between these two job schedulers? Are there certain requirements or architectures that favor one or the other?
Perhaps this is obvious, but if you aren't running your own server, your web app (and Hangfire along with it) can stop running during a period of inactivity. Obviously this is not a problem with Azure Scheduler.
I use Hangfire and then simply have Azure Scheduler ping my site regularly.