Well, we do have a good set of generic containers now. Access types and Unchecked_Conversion have always been a code smell, IMO. And Ada is still foremost a real-time language, so garbage collection just creates scheduling problems.
I think that is a fine answer for limiting the applicability of Ada. I'm puzzled that people suggest it for bigger systems. I was on a team using it for a large soft real-time simulation, and memory management was a constant source of errors.
I'm surprised that more complex, large-scale software projects are not written in Ada. The way I understand it, the usual argument is that they are not complicated enough to need Ada.
For example, Ada makes it easy to structure one's code in a strict tree hierarchy: whenever you with a package, put pragma Elaborate_All (...) on it. This works well with any Ada compiler I've tried and has been in the Ada standard since 1995. It makes it impossible to create a monolith by mistake.
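A minimal sketch of what that looks like in practice (Config and its Read_Float function are invented names). The pragma forces Config's body, and everything Config itself depends on, to be elaborated before this unit; if two packages ever did this to each other, the binder would reject the program with an elaboration circularity, which is what rules out accidental cycles:

    with Config;
    pragma Elaborate_All (Config);  --  Config (and its dependencies) elaborate first

    package Flight_Control is
       --  Safe to call at elaboration time: Config's body is guaranteed
       --  to have been elaborated already.
       Max_Altitude : constant Float := Config.Read_Float ("max_altitude");
    end Flight_Control;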
Strange that you had memory management problems in an Ada application. With all the focus on safety and security within the Ada community, it makes me wonder how the software engineers were using the language in that project.
My understanding is that many safety critical systems in Ada don’t allow for dynamic memory allocation or use of Unchecked_Deallocation. That’s fine for systems where much is known at compile time. We were building software that could simulate thousands of entities. There was a lot of dynamic allocation. As soon as someone calls Unchecked_Deallocation, all bets are off with regard to safety.
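For anyone unfamiliar with Ada, a small sketch of why Unchecked_Deallocation voids the guarantees: after Free, a second access variable still designates the freed object, and dereferencing it is erroneous (it may crash, or silently read reused memory):

    with Ada.Unchecked_Deallocation;

    procedure Dangling is
       type Int_Ptr is access Integer;
       procedure Free is new Ada.Unchecked_Deallocation (Integer, Int_Ptr);

       P1 : Int_Ptr := new Integer'(42);
       P2 : Int_Ptr := P1;   --  second reference to the same object
    begin
       Free (P1);            --  P1 is set to null, but P2 now dangles
       --  Dereferencing P2 here would be erroneous: anything can happen.
    end Dangling;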
Many safety-critical systems in Ada ban dynamic allocation and Unchecked_Deallocation by adding "pragma Restrictions (No_Allocators);" and "pragma Restrictions (No_Dependence => Ada.Unchecked_Deallocation);" at the top of the file where the main subprogram (the application entry point) is located.

The tradition when these pragmas are in effect is to define the entities used in the application in arrays. The sizes of the arrays need not be fixed at compile time; they can be determined at application startup (run time), which means they can be specified in configuration files and vary depending on the hardware the application is installed on. Just because the entities/objects are located at indexes in an array doesn't mean they need to know about it; they can still point to other objects using access-to-object variables (references).

The problem with dynamic allocation is the risk of memory fragmentation: the performance of the application may "mysteriously" degrade over time. One also runs the risk of running out of heap memory unless the application checks, for example, that at least 5% of the device's memory is still free before each heap allocation.
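A minimal sketch of that pattern, assuming the standard restriction names and a pool size taken from the command line (a real application would read a configuration file instead):

    pragma Restrictions (No_Allocators);
    pragma Restrictions (No_Dependence => Ada.Unchecked_Deallocation);

    with Ada.Command_Line;

    procedure Simulation_Main is

       type Entity is record
          X, Y  : Float   := 0.0;
          Alive : Boolean := False;
       end record;

       --  Pool size determined at startup, not at compile time.
       Max_Entities : constant Positive :=
         Positive'Value (Ada.Command_Line.Argument (1));

       --  The whole pool is allocated once, on the stack of the main
       --  subprogram; no heap allocation is involved.
       Pool : array (1 .. Max_Entities) of Entity;

    begin
       Pool (1) := (X => 1.0, Y => 2.0, Alive => True);
    end Simulation_Main;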
Also note that one can run into memory leak problems in automatically garbage-collected languages too. I've personally had to track down memory leaks in both C# and JavaScript applications. Thankfully this rarely happens, but it shows that even when working in a garbage-collected language a developer needs to be aware of potential memory issues and think carefully about architecture.
Glad to hear you were successful in the project (with 100s of developers)!
> I'm confused when people don't use it for bigger systems. I like that the compiler detects most of my errors before I run the program.
That is more likely due to the type system than to manual memory management. People say the same thing about Haskell, after all, which also has powerful types but is garbage collected.
There are two kinds of code errors to worry about: wrong answers (2+2=5) and divergence (a fancy name for crashing, i.e. 2+2=segmentation fault). In a jet engine controller, wrong answers and segfaults can both cause fatalities, so you had better not use GC. Ada is made for that.
In (say) a compiler, bugs leading to wrong answers (incorrect code emitted) could cause fatalities, but if the compiler segfaults from running out of memory, that's only annoying (the developer must find a workaround, use a bigger computer, or whatever). So it is fine to write a compiler in a GC'd language even if its memory footprint and timing characteristics are hard to verify. If you wrote a compiler in Ada, you'd spend a bunch of time on manual memory management for little benefit.
In fact, the most serious formally verified compiler (compcert.inria.fr) is written in Coq, which you can think of as an ultra-precise dialect of OCaml and which is GC'd (Coq in this case generates OCaml code that uses the OCaml runtime; it can also generate Haskell, etc.).
My experience with bigger systems is that when you have 100 developers passing references around, it becomes hard to manage who is responsible for deallocating an object. This leads to dangling pointers and painful debugging. Moving to languages with automatic memory management made such projects a lot more reliable.
Thanks for clarifying how the dangling pointers may arise. Not everyone agrees with me, but these are my thoughts/recommendations when using Ada.

Which thread/task has ownership of a variable is paramount. Whenever one defines a variable, it must be crystal clear which thread/task owns it, i.e. has the right to read or write its value. What I recommend is the Actor Model (https://en.wikipedia.org/wiki/Actor_model). Synchronization between two tasks can happen either through shared variables or through message passing. Last time I checked, academia was inconclusive as to which is the best (least error-prone) way for threads/tasks to communicate. What seems simplest to me is message passing.

I first heard of the Actor Model and message passing about 10 years ago through Erlang, a language where these ideas are fundamental. So a task owns a variable. If another task wishes to change the value of that variable, it must send a message to the owning task and request the change. If another task wishes to know the value, it must ask the owning task. Since then, other languages like Rust and Pony have picked up on this too. Rust has taken it further by making it possible for one task to lend ownership temporarily to another task, checked by the borrow checker.
To implement the Actor Model in Ada, one puts all variables in the bodies of the application's tasks, which makes them invisible to other tasks (see the sketch below). What you then need to keep in mind while developing is that a task should never send an access-to-object variable to another task. If there is a real need to do that, you need Ada/SPARK or Rust to get proper ownership checking. Btw, CodePeer (a static code analysis tool for Ada) finds race conditions, detects deadlocks, and warns if a variable may be read or written by more than one task.
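Here is a minimal sketch of that style (all names invented): the counter lives in the task body, so no other task can touch it directly, and every read or write goes through an entry, which is effectively message passing:

    with Ada.Text_IO;

    procedure Actor_Demo is

       task Counter_Owner is
          entry Increment;                  --  ask the owner to change the value
          entry Get (Value : out Natural);  --  ask the owner for the value
          entry Shutdown;
       end Counter_Owner;

       task body Counter_Owner is
          Count : Natural := 0;  --  owned by this task, invisible to all others
       begin
          loop
             select
                accept Increment do
                   Count := Count + 1;
                end Increment;
             or
                accept Get (Value : out Natural) do
                   Value := Count;
                end Get;
             or
                accept Shutdown;
                exit;
             end select;
          end loop;
       end Counter_Owner;

       Result : Natural := 0;

    begin
       Counter_Owner.Increment;
       Counter_Owner.Increment;
       Counter_Owner.Get (Result);
       Ada.Text_IO.Put_Line ("Count is" & Natural'Image (Result));
       Counter_Owner.Shutdown;
    end Actor_Demo;

Note that the entry parameters are plain values, never access types, so ownership of Count never leaks out of the task.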
If one sticks to vanilla Ada (not SPARK), one could build a tool on top of libadalang that parses all the Ada source code and checks that no task entry has a parameter of an access-to-object type, to catch cases where a developer sends an access-to-object variable to another task by mistake. Such a tool does not exist yet, but libadalang exists precisely to allow the creation of custom rule checks over one's Ada code.
Rust surfaces this information as part of the type and lifetime system. There's no ambiguity there: the part of the program that "owns" a data object automatically deallocates it when it's done with it, unless ownership has been transferred elsewhere. This works exactly like the usual C++ RAII, but generalized to the whole language. Even the standard drop() function follows these semantics: it simply takes ownership of its argument and lets it go out of scope.