The impact of the GC is negligible for most workloads. First just try it, and if you run into problems, profile and then optimize if needed.
AFAIK the CLR is GC-only. You can emulate RAII, but the GC will still run and reclaim the objects from the heap.
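For example, the closest thing to RAII is IDisposable plus the using statement: the cleanup is deterministic, but the memory itself is still reclaimed by the GC later. A minimal sketch (TempResource is just a made-up name):

```csharp
using System;

// A made-up resource type: Dispose() plays the role of a C++ destructor.
class TempResource : IDisposable
{
    public TempResource() => Console.WriteLine("acquired");
    public void Dispose() => Console.WriteLine("released deterministically");
}

class Program
{
    static void Main()
    {
        using (var r = new TempResource())
        {
            // work with r
        } // Dispose() runs here, at the end of the scope,
          // but the object's memory is only freed whenever the GC decides to run.
    }
}
```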
In the CLR, allocation is blazing fast: just increment the heap head pointer. You don't have to deal with the problem traditional allocators face: finding a large enough free hole.
This has a cost: when objects die, the free memory on the heap becomes fragmented. It has to be compacted, and this is where the GC pause comes in, since references to the moved objects need to be updated.
Basically the cost of finding a free hole in memory at allocation was moved here.
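To make that concrete, here is a toy bump-pointer allocator (a sketch of the idea only, not the actual CLR implementation):

```csharp
// Allocation is just "check capacity, bump the pointer" - no free-list search.
class BumpAllocator
{
    private readonly byte[] _heap = new byte[1024 * 1024];
    private int _next; // the "heap head pointer"

    public int Allocate(int size)
    {
        if (_next + size > _heap.Length)
            throw new OutOfMemoryException(); // a real GC would collect and compact here
        int offset = _next;
        _next += size; // the entire cost of allocation: one increment
        return offset;
    }
}
```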
This was done intentionally: studies found that in most real-world applications the majority of objects are short lived, so cheap allocation pays off, and a generational GC can reclaim those objects cheaply once they die (most of the time only the youngest generation is scanned for unreachable objects, which limits the GC pause time).
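You can observe the generations with the real System.GC API; a surviving object gets promoted on each collection (the exact output depends on the runtime, so treat this as an illustration):

```csharp
using System;

class Program
{
    static void Main()
    {
        var survivor = new object();
        Console.WriteLine(GC.GetGeneration(survivor)); // 0: freshly allocated

        GC.Collect(); // force a collection; the still-live object gets promoted
        Console.WriteLine(GC.GetGeneration(survivor)); // typically 1

        GC.Collect();
        Console.WriteLine(GC.GetGeneration(survivor)); // typically 2, the oldest generation
    }
}
```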
The bottom line is: premature optimization is the root of all evil :D If you need hacks, you either chose a bad tool for the problem, or, more often, there is some problem in your application, maybe even at the architecture level.
If you need real-time responses, use unmanaged code for that part. You can take advantage of the interop capabilities of the CLR (quite easy and handy, provided you are careful around the edges; Mono had a nice, thorough doc on the topic), moving that code into an unmanaged part of the application, maybe even running it on a separate OS thread (this is just a guess, I have never resorted to this kind of hack), but usually the GC will not pose a big problem.
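A minimal P/Invoke sketch of what such interop looks like (this one assumes Windows and calls a real kernel32 function; on Mono/Linux you would target libc or your own native library instead):

```csharp
using System;
using System.Runtime.InteropServices;

class Program
{
    // Declaration of an unmanaged Win32 function; the CLR marshals the call for us.
    [DllImport("kernel32.dll")]
    static extern uint GetCurrentProcessId();

    static void Main()
    {
        Console.WriteLine($"Unmanaged call says our PID is {GetCurrentProcessId()}");
    }
}
```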
EDIT:
Also, you can create value types in the CLR (struct in C#), which are typically stack allocated when used as locals (though not always: as a field of a class, or when boxed, they live on the heap). They behave somewhat like RAII objects, but only some framework types are value types, and a common rule of thumb is not to make them larger than about twice the word size of the processor you are mainly targeting, because they are passed by value, which means a lot of copying. Boxing can also cause problems, so they are best used as immutable types, which limits their usefulness in certain scenarios; otherwise, implicit boxing can lead to seriously unexpected behaviour. (If I remember correctly, assigning a struct to an interface-typed variable boxes it, so if the struct is mutable, changes made through that variable do not affect the original.)
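A small example of that boxing pitfall (Counter and IIncrementable are made-up names for illustration):

```csharp
using System;

interface IIncrementable { void Increment(); int Value { get; } }

struct Counter : IIncrementable
{
    public int Value { get; private set; }
    public void Increment() => Value++;
}

class Program
{
    static void Main()
    {
        var c = new Counter();
        IIncrementable boxed = c;       // boxing: a copy of the struct goes to the heap
        boxed.Increment();
        Console.WriteLine(c.Value);     // 0 - the original struct is unchanged
        Console.WriteLine(boxed.Value); // 1 - only the boxed copy was mutated
    }
}
```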
On the other hand, they can be passed by reference (the ref, in, and out keywords in C#), which makes them first-class citizens, unlike in Java. Generics also work as expected with them, with no unnecessary boxing taking place.
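For instance, passing a struct with ref lets the callee mutate the caller's variable directly, with no copy and no boxing involved (BigValue and Bump are illustrative names, not framework types):

```csharp
using System;

struct BigValue { public long A, B, C, D; } // larger than you'd want to copy around

class Program
{
    static void Bump(ref BigValue v) => v.A++; // operates on the caller's variable in place

    static void Main()
    {
        var value = default(BigValue);
        Bump(ref value);
        Console.WriteLine(value.A); // 1 - the original was modified, no copy was made
    }
}
```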
I suggest reading the great book CLR via C# by Jeffrey Richter if you are interested.