Summary After Four Months with Ada (pyjarrett.github.io)
286 points by thindil on Aug 29, 2021 | 175 comments



Interesting things about Ada:

- the language designers have a document where they explain their choices, the Ada Rationale: http://www.ada-auth.org/standards/rationale12.html

- there is an annotated version of the standard, and it's freely available, just like the standard itself: http://www.ada-auth.org/arm.html

- A language test suite was developed at the same time as the language: https://en.wikipedia.org/wiki/Ada_Conformity_Assessment_Test...

The last point was also, in retrospect, not so good for the language ecosystem: buyers wanted a 100% on this test suite before considering your Ada compiler, which meant compiler developers had hundreds of compiler bugs to fix and no paying customers.


> The last point was also, in retrospect, not so good for the language ecosystem: buyers wanted a 100% on this test suite before considering your Ada compiler, which meant compiler developers had hundreds of compiler bugs to fix and no paying customers.

You mean, they couldn't sell a half-finished product while claiming "100% conformity"? Awesome.


Compiler validation is done by a third party, so you cannot claim validation without actually passing the test suite.

All compilers have bugs, and releases happen (at least for GCC, which I know a bit better than the others) with plenty of known open bugs.

Having a test suite is of course a good thing; however, if you take the top 20 programming languages in use, how many of them have an accepted, wide-coverage, publicly available language test suite?

You said "half-finished product", but what about compilers for every language other than Ada, most of which don't have a test suite? How do buyers decide to use the Intel C Compiler, for example?

Remember, all of this was in the early 1980s.

Having this test suite was well ahead of its time, as were many things about Ada. What I'm trying to say is that, at the time, it didn't help the language ecosystem grow.


The purpose of Ada was not to sell compilers, nor to serve as a means for an acquihire, nor IPO, nor VC money. Passing the test suite is just about the most important thing an Ada compiler must do. The requirements of the language originate here: http://iment.com/maida/computer/requirements/strawman.htm#si...

> Every user level aspect of the language will be formally specified. None will be left to the translator implementer, operating system or object machine.

> There will be a traceable operational and/or technological requirement for each primitive data, operation and control structure in the language.

> The language will not reveal minor differences in computer architecture.

> Machine dependent code cannot appear interspersed in-line in source language programs.

> The semantics of the language will be determinable from the description.

> Constructs of the language will have only one reasonable interpretation (i.e., psychologically unambiguous).

> There will be an axiomatic definition of the language. It will be mathematically complete in the Turing sense.

> The language will not sacrifice clarity for efficiency.

> The language will not contain special features for rare cases.

> All defaults in a program will be specifically provided in the language specification or in the program.

> There will be a minimal number of defaults.

… > Any translator for the language must implement the entire language (NO subsets).

> Any translator for the language must implement only the language as specified (translator implementer may not expand the language specification).

> Translator does not have to run on all object machines. Self-hosting is not required.

These are the requirements Ada was designed to fulfill. If not passing the test suite is OK with you, you probably have no use for Ada and would be better served by another language.

This is the best you can hope for in sensitive applications like aviation. I would feel better knowing the software flying me was extensively validated.


Especially given the markets Ada was used in.


And it never caught on outside limited contexts because dev environments were thousands of dollars per seat.


The purpose of the language was to serve those limited contexts for which no suitable alternative existed. It’s not meant to be Python.


Ada was not meant to fill a gap where suitable alternatives didn't exist. It was meant to unify DoD projects under one language. DoD leadership was concerned about the variety of languages in use in software systems; Ada was designed to become a single language to replace the others.


I may be at fault for citing Wikipedia, but I think this supports the idea that it was developed to fill a gap:

> In the 1970s the US Department of Defense (DoD) became concerned by the number of different programming languages being used for its embedded computer system projects, many of which were obsolete or hardware-dependent, and none of which supported safe modular programming. In 1975, a working group, the High Order Language Working Group (HOLWG), was formed with the intent to reduce this number by finding or creating a programming language generally suitable for the department's and the UK Ministry of Defence's requirements. After many iterations beginning with an original Straw man proposal the eventual programming language was named Ada. The total number of high-level programming languages in use for such projects fell from over 450 in 1983 to 37 by 1996.

> The HOLWG working group crafted the Steelman language requirements, a series of documents stating the requirements they felt a programming language should satisfy. Many existing languages were formally reviewed, but the team concluded in 1977 that no existing language met the specifications.

https://en.m.wikipedia.org/wiki/Ada_(programming_language)

(I admit to being a little too tired to examine a primary source this Sunday.)


The previous mess of languages didn't provide the features that were specified as requirements for Ada either - effectively, the previous languages were heavily "first few generations of programming languages": often arcane, and with limitations tied to platforms that were no longer dominant. JOVIAL, which was one of the major ones (even standardized), is essentially a slightly updated ALGOL 58 (adding features like structures). Then you have tons of various assemblers. And TACPOL.

There was a gap for a modern, safety- and reliability-oriented language - one which could also be used to unify the mess that existed before.


It's still thousands of dollars per seat.


This isn't true. You can use GNAT (part of GCC) from the Free Software Foundation for free. I've worked in Ada for months without paying anyone anything.

There are companies that offer paid support, but that's different.


> compiler developers had hundreds of compiler bugs to fix and no paying customers.

From what I understand, a principal rationale of Ada was to have a "100% reliable" toolchain and compile-time assurance. So this is actually a reasonable posture.

It seems that Ada can be challenging. In the 1980s, there was a project to rewrite the nation's air traffic control system in Ada (I knew a couple of engineers working on it).

I don’t think it ended well.


Canada's national air traffic control system is implemented in ~1 million lines of almost pure Ada, handling everything from the radars to the user interface.

If non-C-family isn't a dealbreaker, I would seriously consider Ada for a large project. It has very strong typing and modularity, both well suited to large-scale development, and a wide range of available toolchains, but without the kitchen sink. (But I'm not qualified to make such decisions.)

Besides, something like implementation language is almost never why a very large software project fails. (Though if you're already failing, then e.g. implementing your cloud service in COBOL is probably not helping.) It's usually from mismanagement, or a failure to define or understand the problem appropriately.


That ATC project failed for many reasons, mostly mismanagement.

Flying In Place: The FAA's Air Control Fiasco https://www.bloomberg.com/news/articles/1993-04-25/flying-in...

Meanwhile in Europe, Eurocontrol's CFMU (Central Flow Management Unit) is written in Ada and successfully routes thousands of flights a day.


If anyone is interested, here is "an experience report" (slides) about the use of Ada in Eurocontrol's CFMU: https://slideplayer.com/slide/11313411/


The concluding slide from this report:

> The Ada language is one of the factors which has helped to build, maintain and enhance the CFMU mission critical sophisticated applications.


glorious, thanks


And I'd say outside the US the majority of ATC radars after '95 are probably running some mix of Ada and C.


> Meanwhile in Europe, Eurocontrol's CFMU (Central Flow Management Unit) is written in Ada and successfully routes thousands of flights a day.

Was this a new system, or a replacement of old code? If the latter, could that have been a complicating factor?


> I don’t think it ended well.

https://www2.seas.gwu.edu/~mfeldman/ada-project-summary.html would suggest otherwise


The US version flamed out, in spectacular fashion.

The European one, however, worked.

Another poster put up links ([0], [1]).

[0] https://web.archive.org/web/20160601100527/http://www.bloomb... (Unpaywalled)

[1] https://www.skybrary.aero/index.php/Central_Flow_Management_...

It wasn't a ding at Ada. However, big projects (regardless of the toolset) need to be well-managed.


Sure wasn’t a ding against Ada.

It appeared to be mismanagement and misprioritization of the huge ATC wish list, and a failure of systems engineering to factor limited system resources into its priority list.

A different US contractor (that I can’t recall) apparently did ATC better and faster but for a set of different customers (also in Ada).

It must have been when "you don't get fired for hiring IBM" was the mantra.


I hate to defend IBM, but they did have a track record of building out massive systems like this.

I’d guess the 80s political climate and more general incompetence at the agency and political level was a bigger issue than the vendors and integrators.


The program's name was AAS, the Advanced Automation System. It was designed to be a full end-to-end system replacement for the HOST mainframe systems that were the FAA's backbone. This included terminal, en-route, oceanic, and every other system that was managing aircraft. AAS is discussed as a failure because it had significant mission and requirements creep. FAA kept on piling on requirements and IBM kept saying yes.

The FAA also had issues managing it because it was so large. It was a multi-billion-dollar program with millions of SLOC in the early 90s. Simply put, this was not a cat video website. The tool suite was on AIX 3.2.5 running on RS/6000s, using CMVC for source code management.

This was all designed in the mid-80s, with implementation starting in the late 80s through the mid 90s. AAS was originally designed after the mass firings of the ATC specialists and the shutdown of PATCO, the original ATC union. AAS was actually an entire suite of programs: AAS was the overall program and funding vehicle. IBM Federal deployed a few of the sub-programs, like ISSS, TCCC, and others, successfully to the FAA.

It was originally developed by IBM Federal Systems and deployed on then-new RS/6000s. AAS failed because of mission creep and the fact it was entirely too large. The program ended after IBM Federal Systems was sold off to Loral and then merged into Lockheed Martin. The head of the ATC group at the time, Bob Stevens, discovered that LM would make more money if FAA cancelled the project and split it up into much smaller programs. Bob called up the FAA Administrator and made the deal to cancel the program since the scope creep was too much. FAA had already sent out a couple of cure letters to get things back on track, but this time the FAA agreed that it was time to cut bait on it.

Initially, what was the en-route portion of AAS was re-done as the Display System Replacement (DSR), which replaced the radar and data positions in the en-route facilities. The concepts of AAS were then rolled into NextGen, which is the FAA's family of programs to modernize its ATC systems.

Ada was used (originally OC Systems's Ada83 and Ada95 PowerAda compilers) for everything in AAS. An Ada-based middleware called FlightDeck was developed to implement all the underlying libraries and subsystems that you need for highly available applications in AAS, including clustering, failover, heartbeats, monotonic time, and multiple levels of redundancy. That middleware is still being used today for the systems that IBM Federal, Lockheed Martin, and now Leidos are maintaining and developing, like ERAM. After LM bought out what was IBM Federal, they sold it off in 2015 to Leidos because LM wanted to focus more on hardware.

(Source: I work on the later programs that Ada is used on and listened to all the stories told to me by the people who worked on AAS.)


Large projects usually fail because of politics, resources, etc., not because of the programming language used. There are successful and failed large-scale projects in almost any programming language.


I think every language should have a design rationale. I'm working on the spec for my programming language, and I intend the rationale to be >50% of the size of the spec. It's great because it allows me to justify every facet of the language as part of a cohesive whole.


I think this is a good idea.

I'm also designing a language, and I have the rationale sort of written, but scattered amongst many design documents. I should put it all in one place like you.


I worked with Ada a lot during the 90s (and even maintained the Linux port of GNAT early on). This document does not address one thing that I am curious about. What does the modern, idiomatic Ada programmer do about memory management? Are library implementers still required to use Unchecked_Deallocation? I don't see how a language where Unchecked_Deallocation is used can be considered safe at a time when we have modern choices with automatic memory management.

Edit: I guess another way to put this is, "Can a non-trivial project be developed in modern Ada without using Unchecked_Deallocation?" Because I don't see how I can consider a library safe if it is allowed to call Unchecked_Deallocation.


Yes you can, and most people doing Ada in the embedded world follow mostly the same guidelines:

1. Don't use pointers. Or use not-null pointers. But preferably don't use them. With in/out parameter modes you don't need to pass pointers or references around, and the addition of official data structures (the standard containers) in Ada 2005 makes most uses of pointers obsolete.

2. Use controlled types when you're allocating something you intend to free soon. Limited_Controlled types are even safer, if you're prepared to deal with a limited type :-)

3. More recently, use SPARK, with its safe-memory semantics. Hard work, but so is pleasing the borrow checker...

Edit: Of course, one of the impetuses for manual memory management is having functions return objects of unknown size (at the call site). In Ada this isn't a necessity, at least with GNAT's secondary stack.
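
For illustration, a minimal sketch of that last point (the function name is made up): a function can return an object whose size is only known at run time, with no pointers involved; with GNAT the result travels on the secondary stack.

    -- Returns a String whose length is only known at run time.
    function Repeat (C : Character; Count : Natural) return String is
       Result : constant String (1 .. Count) := (others => C);
    begin
       return Result;   -- no allocation visible to the caller
    end Repeat;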

I've programmed Ada almost every day of the work week for 13 years now, and I can count on one hand the problems related to memory management; on opening that code, you knew you were in for a treat. 'Smells like tiny cache syndrome': we just remove those 90's blotches from our humongous codebases when we face them. I'm almost willing to forbid access types except in pools (so, not null), with an open-door policy: 'explain it to me, so I can update our coding rules with that case if we find it worth the risk'. Forced software design review by linter.


You might be interested in this thread from a year ago, where we discussed memory management in Ada. [0]

The Ada folks sometimes seem a bit quiet on memory-management questions, a bit like the way Forth folks rarely talk about the real-world performance of Forth interpreters. On the plus side, borrow checking is coming to SPARK. [1][2]

[0] https://news.ycombinator.com/item?id=24361992

[1] https://www.adacore.com/papers/safe-dynamic-memory-managemen...

[2] https://arxiv.org/abs/1805.05576


One thing to keep in mind with Ada is that the "typical" Ada programmer lives in the embedded world, where memory needs to be deterministic. It's a very different perspective from, say, the desktop-application world, where dynamically allocating memory is just a natural part of programming.


I get that. I just keep seeing Ada recommended for projects other than embedded systems and I don’t see how it can work.


I found it a lot easier than I thought it would be. Ada has RAII (controlled types) which you can use for reference counting and resource management like in C++.

Septum uses a lot of dynamically allocated memory, but it's all hidden by smart pointers or Unbounded_String (built-in reference-counted COW strings). I use it routinely to search codebases in the tens of millions of lines, and it's fast enough right now (single-second search times), due to task parallelism I did in Ada, that I haven't even bothered to tune it or do any fancy indexing yet.
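
To illustrate the Unbounded_String point, a minimal standalone sketch: the string grows as needed, and its storage is released automatically, so no Unchecked_Deallocation appears in user code.

    with Ada.Strings.Unbounded; use Ada.Strings.Unbounded;

    procedure Demo is
       S : Unbounded_String := To_Unbounded_String ("search");
    begin
       Append (S, " results");   -- reallocation handled internally
    end Demo;                    -- storage released here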


Presumably that means you're vulnerable to issues with reference cycles?


In theory, yes, I'm vulnerable to reference cycles, but I haven't run into this in practice. The GNAT smart pointer types support weak pointers if I need them.


I'm not especially knowledgeable about Ada, but looking at the docs for GtkAda, it seems they're able to do reference-counting with good ergonomics.

https://docs.adacore.com/live/wave/gtkada/html/gtkada_ug/mem...


> What does the modern, idiomatic Ada programmer do about memory management?

Author here. I've actually not had to do any heavy direct memory management yet. There's an RAII-based (i.e. controlled type) reference-counted pointer implementation in GNAT I've used, and the container libraries are mature enough to avoid the issue by just using built-in data structures.

A few other smart pointer implementations exist, floating around in various libraries in Alire. I suspect a default one will appear in Alire at some point which parallels Rust's Arc, Rc and Box, or C++'s std::shared_ptr, std::weak_ptr (for breaking cycles) and std::unique_ptr. I have a partial implementation myself, but I've been working on other things.


Off the top of my head there are two ways:

- Lexically scoped access types have storage pools that are destroyed when the type goes out of scope. This is similar to region-based memory management as in Cyclone[0]. (A small sketch follows the links below.)

- Limited controlled types are a lot like linear types and prevent unwanted aliasing and ensure deallocation happens correctly, as described in this paper[1].

[0]: https://en.m.wikipedia.org/wiki/Cyclone_(programming_languag...

[1]: https://dl.acm.org/doi/10.1145/165354.165362
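
For illustration of the first point, a minimal sketch (names are made up): giving the access type a Storage_Size creates a pool bound to the type's scope, and (at least with GNAT) the whole pool is reclaimed when the scope is left, so no per-object deallocation is needed.

    procedure Build_List is
       type Node;
       type Node_Access is access Node;
       for Node_Access'Storage_Size use 16 * 1024;   -- scope-bound pool
       type Node is record
          Value : Integer;
          Next  : Node_Access;
       end record;
       Head : Node_Access := null;
    begin
       for I in 1 .. 100 loop
          Head := new Node'(Value => I, Next => Head);
       end loop;
    end Build_List;   -- the entire pool is reclaimed here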


I think I understand these. I think these solutions work where it is very clear what the scope of the object should be.

But what about cases in large projects with many developers where a library/package allocates an object on the heap and passes it back to a caller? Sometimes it is not clear who is responsible for deallocating the object. This is the challenge we ran into building large projects in C, C++, and Ada. These days, I use Java, C#, Rust, or Go, because I know there will be no dangling pointers.


>a library/package allocates an object on the heap and passes it back to a caller

Limited controlled types handle this case.

Limited types prevent assignment (i.e. aliasing). Controlled types introduce destructors so everything is closed even when an exception is thrown.
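
For illustration, a minimal sketch (package and field names are made up): a limited controlled type can't be copied, and Finalize runs when the handle goes out of scope, even if an exception propagates.

    with Ada.Finalization;

    package File_Handles is
       type Handle is new Ada.Finalization.Limited_Controlled with private;
    private
       type Handle is new Ada.Finalization.Limited_Controlled with record
          Fd : Integer := -1;            -- the resource being guarded
       end record;
       overriding procedure Finalize (Object : in out Handle);
    end File_Handles;

    package body File_Handles is
       overriding procedure Finalize (Object : in out Handle) is
       begin
          if Object.Fd /= -1 then
             null;                       -- release the resource here
             Object.Fd := -1;
          end if;
       end Finalize;
    end File_Handles;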


I see how limited controlled types allow for reference counting. I don't see how they can be used to implement circular graphs and situations where it is not clear who needs an object to continue to exist.


IIRC a circular graph would typically be done via pool-specific access types.

    -- Forward declaration.
    Type Element(<>);
    -- Assuming there's a Graph.Pool implementation of the base Storage_Pool object.
    Type Pointer is access Element
       with Storage_Pool => Graph.Pool;
    Subtype Handle is not null Pointer;
    Type Children is array(Positive range <>) of Handle;
    Type Element(Parent : Handle; Child_Count: Natural) is record
       Data : Integer; -- Or whatever your actual data would be.
       Link : Children( 1..Child_Count );
    end record;


I haven't written enough Ada to know the answer (I've only researched the memory management scheme as part of implementing a language with compile-time ownership tracking).

I think the Ada answer would be to keep everything that has a lifetime, anything that's a resource, in its own module behind a strict limited controlled interface. But that's a very vague answer.


Maybe the semantic problem you are trying to solve should be not solved but avoided.

Much like "spaghetti code" is not only considered poor style but isn't even supported by modern languages, "spaghetti data" should also be considered a bad pattern, which more advanced languages force you to avoid.


"Are library implementers still required to use Unchecked_Deallocation?"

Unchecked_Deallocation has more in it than just returning storage to the system; it also triggers finalization:

http://www.ada-auth.org/standards/rm12_w_tc1/html/RM-13-11-2...

"when X is not equal to null first performs finalization of the object designated by X (and any coextensions of the object ...)"

So even if your compiler target has automatic memory management (eg: JVM) you will still use Unchecked_Deallocation if you want to control when Finalization occurs.
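
A minimal sketch of what that looks like in practice (type names are made up): Unchecked_Deallocation is a generic that you instantiate per access type; calling it finalizes the designated object (if it needs finalization), frees the storage, and nulls the pointer.

    with Ada.Unchecked_Deallocation;

    procedure Demo is
       type Buffer is array (1 .. 1024) of Character;
       type Buffer_Access is access Buffer;
       procedure Free is
          new Ada.Unchecked_Deallocation (Buffer, Buffer_Access);
       B : Buffer_Access := new Buffer;
    begin
       Free (B);   -- B is null afterwards
    end Demo;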


Well, we do have a good set of generic containers now. Access types and Unchecked_Conversion have always been a code smell, IMO. And Ada is still first and foremost a real-time language, so garbage collection just creates scheduling problems.


I think that is a fine answer for limiting the applicability of Ada. I'm confused when people suggest it for bigger systems. I was on a team using it for a large soft real-time simulation. Memory management was a constant source of errors.


I'm confused that more complex large-scale software projects are not written in Ada. The way I understand it, they are not considered complicated enough to need Ada.

For example, Ada makes it easy to structure one's code in a strict tree hierarchy (whenever you "with" a package, put pragma Elaborate_All (..) on it; this works well with any Ada compiler I've tried and has been in the Ada standard since 1995). It makes creating a monolith by mistake impossible.
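
For illustration, a minimal sketch (the package names and constant are invented): putting pragma Elaborate_All on every "with" forces the dependency, and everything it depends on, to be elaborated first, so the binder rejects elaboration-order cycles.

    with Config;
    pragma Elaborate_All (Config);   -- Config and its dependencies are
                                     -- elaborated before this unit

    package Server is
       Port : constant Integer := Config.Default_Port;
    end Server;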

Strange that you had memory management problems in an Ada application. With all the focus on safety and security within the Ada community, it makes me wonder how the software engineers were using the language in that project.


My understanding is that many safety critical systems in Ada don’t allow for dynamic memory allocation or use of Unchecked_Deallocation. That’s fine for systems where much is known at compile time. We were building software that could simulate thousands of entities. There was a lot of dynamic allocation. As soon as someone calls Unchecked_Deallocation, all bets are off with regard to safety.


Many safety-critical systems in Ada ban the usage of dynamic allocation and Unchecked_Deallocation by adding "pragma Restrictions (No_Heap);" and "pragma Restrictions (No_Dependence => Ada.Unchecked_Deallocation);" at the top of the file where the main subprogram is located (the application entry point). The tradition when these pragmas are in effect is to define the entities used in the application in arrays. The sizes of the arrays need not be defined at compile time but can be determined at application startup (run time), which means the sizes can be specified in configuration files and vary depending on the hardware the application is installed on. Just because the entities/objects are located at indexes in an array doesn't mean they need to know about it; they can still point to other objects using access-to-object type variables (references).

The problem with dynamic allocation is the risk of memory fragmentation, so the performance of the application may "mysteriously" degrade over time. One also runs the risk of running out of heap memory, unless the application checks, for example, that there is at least 5% memory left on the device for the heap allocation to be successful.
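
For illustration, a minimal sketch along those lines (the main subprogram and entity type are made up; the restrictions shown are the standard No_Allocators and No_Dependence ones): configuration pragmas go ahead of the main subprogram, and entities live in an array rather than on the heap.

    pragma Restrictions (No_Allocators);
    pragma Restrictions (No_Dependence => Ada.Unchecked_Deallocation);

    procedure Sim_Main is
       Max_Entities : constant := 10_000;   -- in practice this bound could
                                            -- come from a config file read
                                            -- at startup
       type Entity is record
          X, Y : Float := 0.0;
       end record;
       Entities : array (1 .. Max_Entities) of Entity;
    begin
       null;   -- simulation loop would go here
    end Sim_Main;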

Also note that one can run into memory leak problems using automatically garbage collected languages. I've personally needed to track down memory leaks in both C# and JavaScript applications. Thankfully this rarely happens. It indicates that even when working in a garbage collected language, a developer needs to be aware of potential memory issues and think carefully about architecture.

Glad to hear you were successful in the project (with 100s of developers)!


I'm confused when people don't use it for bigger systems. I like that the compiler detects most of my errors before I run the program.


> I'm confused when people don't use it for bigger systems. I like that the compiler detects most of my errors before I run the program.

That is more likely due to the type system than manual memory management. People after all say the same thing about Haskell, which also has powerful types, but is garbage collected.

There are two kinds of code errors to worry about: wrong answers (2+2=5), and divergence (a fancy name for crashing, i.e. 2+2=segmentation fault). In a jet engine controller, wrong answers and segfaults both potentially cause fatalities, so you better not use GC. Ada is made for that.

In (say) a compiler, bugs leading to wrong answers (incorrect code emitted) might cause potential fatalities, but if the compiler segfaults from running out of memory, that's only annoying (the developer must find a workaround, use a bigger computer, or whatever). So it is fine to write a compiler in a GC'd language even if its memory footprint and timing characteristics are hard to verify. If you wrote a compiler in Ada you'd spend a bunch of time with manual memory management, for little benefit.

In fact the most serious formally verified compiler (compcert.inria.fr) is written in Coq, which you can think of as an ultra precise dialect of OCaml and which is GC'd (Coq in this case generates OCaml code that uses the OCaml runtime. It can also generate Haskell etc.).


My experience with bigger systems is that when you have 100 developers passing references around it becomes hard to manage who is responsible for deallocation of an object. This leads to dangling pointers and debugging. Going to languages with automatic memory management made such projects a lot more reliable.


Thanks for clarifying how the dangling pointers may arise. Not everyone agrees with me, but these are my thoughts/recommendations when using Ada.

Which thread/task has ownership of a variable is paramount. Whenever one defines a variable, it must be crystal clear which thread/task owns it, i.e. has the right to read or write a value to the variable. What I recommend is the Actor Model (https://en.wikipedia.org/wiki/Actor_model). Synchronization between two tasks can be either through shared variables or message passing. Last time I checked, academia is inconclusive as to which is the best (least error-prone) way for threads/tasks to communicate. What seems simplest to me is message passing.

Ten years ago, the first time I heard of the Actor Model and message passing was through Erlang, a language where these ideas are fundamental. So a task owns a variable. If another task wishes to change the value of that variable, it must send a message to the owning task and request it to change the value. If another task wishes to know the value, it must ask the owning task what the value is. Since the time I heard of Erlang, other languages like Rust and Pony have picked up on this too. Rust has taken this further by making it possible for one task to temporarily borrow ownership from another task, checked by the borrow checker.

To implement the Actor Model in Ada, one puts all variables in the bodies of the tasks that are in the application. That makes them not visible from other tasks. So what you need to keep in mind when developing is that a task should never send an access-to-object type variable to another task. If there is a need to do that, you need to use Ada/SPARK or Rust to get proper ownership checking. Btw, CodePeer (a static code analysis tool for Ada) finds race conditions, has deadlock detection, and warns if there are variables that may be read or written by more than one task.
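
For illustration, a minimal sketch of that style (names are invented): the counter variable lives only in the task body, and other tasks can touch it only through a rendezvous on an entry, i.e. by sending a message.

    procedure Actor_Demo is
       task Counter is
          entry Increment;
          entry Get (Value : out Natural);
       end Counter;

       task body Counter is
          Count : Natural := 0;   -- owned exclusively by this task
       begin
          loop
             select
                accept Increment do
                   Count := Count + 1;
                end Increment;
             or
                accept Get (Value : out Natural) do
                   Value := Count;
                end Get;
             or
                terminate;
             end select;
          end loop;
       end Counter;

       V : Natural;
    begin
       Counter.Increment;   -- "message" asking the owner to update
       Counter.Get (V);     -- "message" asking the owner for the value
    end Actor_Demo;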

If one sticks to vanilla Ada (not SPARK), one could develop an application based on libadalang that parses all the Ada source code and checks that all task entries have input arguments that do not contain any access-to-object types (to find instances where a developer has sent an access-to-object variable to another task by mistake). Such a tool does not exist, but libadalang exists to allow the creation of custom rule checking on one's Ada code.


Rust surfaces this information as part of the type and lifetime system. There's no ambiguity there: the part of the program that "owns" any data object will automatically deallocate the object if it's done with it and has not transferred ownership elsewhere. This works exactly like the usual C++ RAII, but it's generalized to the whole language. Even the standard .drop() operation follows these semantics.


Not having written Ada, but having seen many HN naysayers saying "Why do we need Rust when Ada can do this" and having tried to come up with a well-researched rebuttal, my sense is that there are three options:

1. Your project is static enough that it doesn't need dynamic memory allocation. The control system for a modern passenger jet is a pretty complex project, but you can allocate an object per engine, an object per flap, a struct per landing gear, an object per pilot display, etc. at design time. Even the navigational computers on these aircraft, which take in unbounded structured data about airports and navigational points, often have hard-coded limits. Quoting https://www.mitre.org/sites/default/files/pdf/12_1324.pdf :

> one widely used FMC [flight management computer] model ... has issues at airports with over 100 arrival and departure procedures. Some airport examples where 100 arrival and departure procedures are exceeded include Cairo, Amsterdam, Madrid, Paris (Le Bourget, Orly and Charles de Gaulle), Mumbai, New Delhi and Beijing. A FMC service bulletin issued states that the aircraft may lose FMC applications in flight with over 100 procedures and flight plan up-linking. Further, another manufacturer has a FMC model with a limit of 99 total procedures and a limit of 8 waypoints per procedure. A third FMC manufacturer has a model with a limit on the amount of arrivals, departure and approaches as shown below: Early models – limit of 70 departures, 70 arrivals, and 29 approaches at an airport. Later models – limit of 130 departures, 130 arrivals, and 39 approaches at an airport.

Counterintuitively, this is culturally acceptable for such "high assurance" applications, but WhatsApp draws confusion from the press when it increases the maximum number of participants in a group chat to the "oddly specific number" of 256 (https://i.redd.it/gk7dicv6hsvy.jpg).

2. You use a third-party GC/refcounting library that calls Unchecked_Deallocation internally.

3. There's a 2018 proposal (linked in another reply) for "ownership types", which specifically references Rust's lifetime model as an inspiration.

Or, in other words, if your goal is to write a (say) safe implementation of the DOM that can render real-world web pages while efficiently using memory on a general-purpose computer, Ada is probably the wrong tool for the job. Which is fine! It's a great tool for other jobs.


Hi Geofft! Ada developer here. When I read and study the Ada reference manual for the 1995 standard, I get the impression that the Ada language designers were not thinking of third-party garbage collection or reference counting as the primary way of achieving memory safety; they were thinking of arena pools/storage pools. When one defines an access-to-object type in the 1995 standard, one can specify in which storage pool the allocated objects end up. The Ada standard does not talk about stack and heap but about the stack and storage pools. I get the impression that the idea is for an Ada application to have a number of storage pools (with statically determined sizes?), allocate objects inside of these, and, when one is finished with the objects in a pool, deallocate them all at once by emptying the pool, which can then be reused. That is more efficient and less error-prone than deallocating each object separately.

You are right in point number 2 that the door for garbage collection is open in the Ada standard. No Ada compiler vendor has implemented a GC, but you are right about it being considered. The idea of arena pools/storage pools can also be seen in the Ada language (also the 1995 standard) in the ability to define an access-to-object type locally inside a function/subprogram: memory for it is heap-allocated (the size is specified by the access-to-object type definition), and when the access-to-object type goes out of scope the memory is deallocated (without the use of Unchecked_Deallocation). So there should be a fourth point on arena/storage pools on your list, and it may be better suited for "a (say) safe implementation of the DOM that can render real-world web pages while efficiently using memory on a general-purpose computer".

To use storage pools in Ada I would recommend Deepend (https://sourceforge.net/projects/deepend/). Deepend is a storage pool with subpool capabilities for Ada 2012, Ada 2005, and Ada 95. Memory allocations can be associated with subpools and subpools can be deallocated as a whole which provides a safer alternative than managing deletions of individual objects. It also is likely to be more deterministic and efficient than garbage collection.


> if your goal is to write a (say) safe implementation of the DOM that can render real-world web pages while efficiently using memory on a general-purpose computer, Ada is probably the wrong tool for the job.

Author here. Ada is actually perfect for the use case you listed. As a C++ programmer focusing on performance, the applicability of Ada to "modern problems" was one of the reasons I was playing around with it. Between customizable "storage pools" (Ada's allocators), the ease of interfacing with C, control of type alignment and layout, compiler intrinsics, built-in concurrency types (protected objects for coordinating access, and tasks for splitting work), and a bunch of other things, I have all the tools I need to do this. It's pretty close to the C++ feature set with the face of Pascal, and C++ programmers should feel comfortable working in Ada after only a few months.


I don't think C++ was the standard intended here for "right tool for the job", but Rust. The quoted example project is a reference to Servo, if I'm not mistaken.


I figured this was what he was referring to, but C++ is more familiar to a lot of people right now, which is why I couched my answer in those terms. Ada would still be a good language to do that work in.


>I guess another way to put this is, "Can a non-trivial project be developed in modern Ada without using Unchecked_Deallocation?" Because I don't see how I can consider a library safe if it is allowed to call Unchecked_Deallocation.

Yes, there are very few times I've had to manually deallocate; there's a really good video on memory-management with Ada: http://video.fosdem.org/2016/aw1124/memory-management-with-a...


I went to a college that taught CS in Ada, and I never took CS because I tested out of 101, but I ended up fixing attempts at the problem sets for about half my dorm section (this was okay as long as they documented it).

Well anyway, I found it to be very easy to pick up just by reading, hard to make semantic errors in, and easy to modify without breaking things. The pace of learning for the entire cohort, mostly non-CS students, was, I think, much faster than e.g. C++, and possibly more complete than e.g. Python. It's pedantic and exact, but it uses real English-language keywords and modifiers.

I went to a different school later where I did the same thing but it was taught in C++. It seemed like many students would get by purely by repeating patterns that they didn’t understand. And I started to catch myself in that trap as well, realizing that I had a lot to learn.

But I wasn’t going to be a programmer; I was going to be a theoretical physicist cogitating on the deep mysteries of the universe. Fast forward…. D’ohhhh.


Does "real english language keywords" actually make it easier to learn programming? AIUI, this was popular in the past simply because many computing systems used bespoke, pre-ASCII character sets, sometimes with few symbols available and no distinction between uppercase and lowercase (hence why Ada is case insensitive as well!). Typing stuff like DIVIDE A INTO X can come in handy when your machine literally doesn't have a / character.


Pretty sure it does. Line-noise languages (e.g. math, C++, Rust, Perl, K) optimize for expert use, the trade-off being that they're difficult to parse and understand for newcomers and infrequent users. There is of course also a huge difference between languages where the meaning of symbols/notation is largely context-independent (e.g. the three types of braces in C) and those where this is not the case (e.g. C++, math).


I agree C++ is worse about the context dependence than C but they're both pretty bad about this. Notice that C has two different operators named * and two named & for example.


perl: there's more than one way to obfuscate it

C: there's more than one way to segfault it


looks at vote count

I see I have enraged the C programmers.

My apologies for being terrible at C memory management, though in my defense I am sufficiently self aware of this to realise that means it's safer for me to stick to perl as a weapon of choice.


I think science is divided on this question. There's that one study showing that many languages with C-like syntax are as easy to learn to read as languages with tokens picked completely at random[1].

But that talks about learning the language. What about once you're proficient?

I vaguely remember reading about a study that concluded more terse symbol-based syntax is better because the programmer can use visual organisation of the code to greater extent because the syntax occupies less of the screen. But I could also be making it up because I can't find it now.

There's also the interesting (but slightly irrelevant) result that abbreviated identifier names may be just as effective as ones spelled out![2]

[1]: https://dl.acm.org/doi/abs/10.1145/2534973

[2]: https://link.springer.com/article/10.1007/s11334-007-0031-2


Ada intentionally prioritizes readability over write-ability, hence English-language keywords and the like. It’s designed for large, long-running systems that must be maintained over years or decades.

The thinking is that over their lifetime, such systems will be read more than written, by rotating teams of programmers who need to get up to speed with a complex codebase, repeatedly. In maintenance mode, modifications will be less frequent than reads.


"Real English language keywords" for readability was probably the most sensible choice back in the early 1980s when Ada was first standardized. They're rather less popular today, of course. The interesting question is, does using, e.g. BEGIN and END instead of curly brackets really make it easier for novices who are learning to code today. It's an interesting topic and one that isn't often seen here.


"begin" and "end" are probably the most arguable, but in the context of everything else being a real-word keyword I think they make sense.

Ada was not my first language, and I was a little dubious about the keywords at first, but I quickly changed my mind. Now I wish every language did it (obviously, not literally every). It's really nice for a couple reasons.

One is just googling. You pretty much have a built-in shared lexicon, so there's no trying to remember the right name for the problematic symbol, or figuring out the best way to describe it.

There's also no trying to remember what symbol does the thing you want, which is really nice even when just coming back to a language feature you haven't used recently enough to be fresh on.

I have no idea how much these things would matter in a more structured environment. I pick up words way faster than symbols and my memory isn't so great, so I think it would have helped me even there. But I don't know how universal that is.

Though given the number of times I had to help the actual CS majors with basic syntax stuff, even after the 101 course... I can't help but suspect it would be useful.


I always liked Ada, going back to the first MIL-STD-1815 version of it (from 1980). It looked very complicated to me at the time, so I didn't try implementing it (I thought C++ would take a couple of months to implement and did that instead).

A couple things D took from Ada were the in, out, and inout parameters, and embedding _ in numeric literals like 1_000_000.

The latter is so simple and so useful.

I still have my copy of MIL-STD-1815 on the shelf. No, you can't have it.


The 1_000_000 thing exists in perl as well (I think also via theft from Ada) and frankly I can't understand why everybody hasn't stolen it.


C++ has (albeit only since version 14) the single quote as digit separator, which is (IMHO) a better choice: https://en.cppreference.com/w/cpp/language/integer_literal


> which is (IMHO) a better choice

I know C++ added it 14 years after D :-) but I'm curious why you say ' is a better choice? I never read the papers proposing the feature, but the _ hasn't caused any problems for us, and I like it better myself.


I use the quote separator for digit grouping on any occasion, i.e. also outside of computer code. There, we usually have non-monospaced fonts, for which the use of underscores is glaringly uglier compared to use of single quotes. I don't know if you took this design decision from somewhere else or not, but it was an inspired decision nevertheless.

Also, I'm glad and honoured to have a comment like yours in my history. Sorry for late reply.


Pretty much every currently updated language does these days.


When I met Walter at an OSCON in 2008 we were both annoyed that more languages hadn't already stolen it.

"Achieved what perl had managed in 1988 and D in 2000" is not honestly a point in favour of languages who worked it out somewhere in the past decade.


What an odd way of looking at it. It's not a contest. It's a diffusion of ideas that are helpful.


I agree that it's helpful, and you'll notice I've been firmly on the side of everybody stealing it - which is why "pretty much every language has that", rather than "oh, cool, that's a really nice feature and it's interesting to learn where it diffused from", was such an oddly dismissive response to Walter and me enthusing about it.


This wasn't remotely true in 2000 when D was being created. AFAIK D was the first after Ada, and certainly popularized it as I made many presentations including it.

C is still holding out :-)


Perl 5.000 was released in 1994 (see https://perl.bot/p/kw3gbl for a demonstration that had the syntax)

Edit: That bot is far too easy to be curious with, and unless I got my testing wrong, perl 2 (from 1988) seems to have been where it was added: https://perl.bot/p/jzyi91

All assistance convincing the rest of the world they should also adopt the feature is very much welcome though :D


Thanks for the info! I stand corrected.


Great article! I've written lots of Ada, and I actually had no idea Ada had such a thing as "Expression functions". I do understand the reasons why many people seem to dislike Ada so much, compared to C it's fussy and verbose. However using Ada resolves so many of the issues people have with coding in C, and C++. I'd recommend anyone who regularly writes programs in C to see what Ada has to offer. SPARK is also worth taking a look at. AdaCore's documentation on what SPARK has to offer to people working with MISRA C makes a great case for the language's use: https://learn.adacore.com/courses/SPARK_for_the_MISRA_C_Deve...


>> I actually had no idea Ada had such a thing as "Expression functions".

The John Barnes book mentioned in the article covers some of the differences between the Ada language versions.

Expression functions were added in Ada 2012 and directly support preconditions, postconditions, and various aspects in SPARK 2014.
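
For anyone curious, a minimal sketch of what an expression function with a contract looks like (the names are made up):

    package Math_Utils is
       --  Ada 2012 expression function with a precondition aspect.
       function Square (X : Integer) return Integer is (X * X)
          with Pre => abs X <= 46_340;   -- keeps X * X within Integer'Last
    end Math_Utils;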


Having separate module interface and module body files, as in Ada or Modula-2/3, is a great idea that sadly a lot of people are burnt out on because C and C++ do this in a very unprincipled way.

Having a terse little file where I can scan the interface of a module, rather than having to scroll through the implementation and see which declarations have `public` in front of them, is a great way to quickly refresh your mental model of a module's API.

And it allows you to have both interface docstrings and implementation docstrings, which a documentation generator could use to compile an API guide for clients and a developer's guide for people working on the internals.
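
For readers who haven't seen it, a minimal sketch of the Ada version of this split (names invented, and no bounds checking for brevity): the spec is the terse file you scan, and the body holds the implementation.

    --  stacks.ads: the interface clients see
    package Stacks is
       procedure Push (Item : Integer);
       function Pop return Integer;
    end Stacks;

    --  stacks.adb: the implementation, hidden from clients
    package body Stacks is
       Data : array (1 .. 100) of Integer;
       Top  : Natural := 0;

       procedure Push (Item : Integer) is
       begin
          Top := Top + 1;
          Data (Top) := Item;
       end Push;

       function Pop return Integer is
       begin
          Top := Top - 1;
          return Data (Top + 1);
       end Pop;
    end Stacks;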


> Having a terse little file where I can scan the interface of a module, rather than having to scroll through the implementation and see which declarations have `public` in front of them, is a great way to quickly refresh your mental model of a module's API.

That information is trivial to extract, there is no reason to force the developer to maintain it and keep it in sync.

> And it allows you to have both interface docstrings and implementation docstrings, which a documentation generator could use to compile an API guide for clients and a developer's guide for people working on the internals.

You could call these "docstring" and "comment".


>> Having a terse little file where I can scan the interface of a module, rather than having to scroll through the implementation and see which declarations have `public` in front of them, is a great way to quickly refresh your mental model of a module's API.

> That information is trivial to extract, there is no reason to force the developer to maintain it and keep it in sync.

That's true, in theory. In reality though, when is that information extracted? What do you use to extract it? Where is it saved once extracted? How easy is it to review it? Do you need to use an IDE to do that?

Any less than satisfactory answer to these questions will make this worse in practice than having the developer maintain it.


> That's true, in theory. In reality though, when is that information extracted? What do you use to extract it? Where is it saved once extracted? How easy is it to review it? Do you need to use an IDE to do that?

You could use your build system to extract it to a website (eg. https://docs.rs/)


  > That's true, in theory. In reality though, when is that information extracted? What do you use to extract it? Where is it saved once extracted? How easy is it to review it? Do you need to use an IDE to do that?
This is part of the GNU Ada toolchain.

I don't write Ada, but I have looked into it. I strongly dislike having to write an entire separate type/interface file that repeats the type definitions from the implementation.

This information exists -- a tool should be able to extract the signatures and spit out the interface file automatically (IE a header for C/C++)!

In Ada, this tool is called "gnatchop"

https://learn.adacore.com/courses/GNAT_Toolchain_Intro/chapt...

Instead of writing an ".ads" and ".adb" file (like ".h" and ".c"), you just write an ".ada" file and feed it to "gnatchop", it creates the two files for you and you're ready to compile.

  gnatchop example.ada # (example.adb + example.ads created)
  gprbuild p_main # (builds from example.adb)
Another neat thing Ada can do is interop with C++! It has C interop, but can ALSO support C++.

https://gcc.gnu.org/onlinedocs/gnat_ugn/Interfacing-with-C_0...

https://docs.adacore.com/gnat_rm-docs/html/gnat_rm/gnat_rm/i...

It can take C/C++ headers and auto-generate the interop code you need as well

  $ g++ -c -fdump-ada-spec -C /usr/include/time.h
  $ gcc -c *.ads
https://gcc.gnu.org/onlinedocs/gnat_ugn/Running-the-Binding-...

https://gcc.gnu.org/onlinedocs/gnat_ugn/Generating-Bindings-...


I can understand your hesitancy, but I think in practice I strongly disagree.

A big part of why I like Ada so much is the fact it lets me hold such a strong mental model of the program. I can specify quite a bit about how it should all work, and the compiler holds me to it.

Most or all of that sort of information is tied up in the .ads file. If I want to refer to the model, I can check the .ads file, even if my project doesn't compile yet. Everything I need to know is there, from the very first line of code.

Most importantly, if I'm working in the .ads file, I'm changing the model. Changes here are Important. If I unknowingly make a change here, I've lost my understanding of the model. I really don't want that to be possible.

Meanwhile the .adb is more the implementation. If I'm changing the .adb, I'm just altering the details, but the overall model stays the same. Maybe what I'm doing in the .adb tells me I really do need to change the .ads because the model has a problem, but that doesn't mean I should just go make the easiest little change to the model that makes the .adb work.

Frankly, I think that extra little bit of friction in having two files that need to be in sync makes it easier to write better programs. Something as huge as changing the model should have something that helps cue me in that I'm doing something Big.


The compiler should be able to provide it


You don't want to extract it. The intention is that the specification (the thing you code against, you do write those down, right?) is separate from implementation so that you can provide multiple implementations.

If you rely on extracting the interface from the implementation then you have to have another mechanism to compare two implementations to see if they provide the same interface. That's kind of an insane way to do things from the Ada perspective. You've made things harder for yourself and less certain for the users of that interface.

Put the public bits into a package specification file so that anyone can know as the user or the implementer what is expected. Swap out implementations as needed and have high confidence that (short of logic errors in the implementation) it will at least provide the same interface because, well, it wouldn't compile otherwise.

Also, the specification files are a bit like C or C++ headers. You can write a program predicated on their correctness without actually needing an implementation to verify against. The *.ads files tell you: "These functions, procedures, and types exist. I promise, so you can go about your business even though an implementation may not be available yet."


I have never used Ada. But I really like the idea of having implementation documentation separate from interface documentation, and treating it like real documentation with its own searchable/linkable HTML or PDF reference document.

A good IDE can then fold/collapse the in-code documentation, and the programmer can have it up in a separate window along side.

This could be an interesting model for literate programming. Instead of it being "linear", like reading a novel, it would be like reading a translation of an ancient text, with the original source material on one side of the page and both the translation and detailed reference notes on the other side. And of course there would be a hypertext component to the documentation, which would allow you to build a "table of contents" and jump around the codebase.


> This could be an interesting model for literate programming. Instead of it being "linear", like reading a novel

It's unclear what you mean by "linear" here. Surely your "translation of an ancient text" is a linear read following the "ancient text" it translates even if it has forwards and backwards references?

Knuth's original conception of literate programming was non-linear in terms of code, you'd write some text, write some bits of code, possibly add a reference to an other snippet, write some more text, write some more bits of code, and tangle then stitches the source back together by following references.

More "modern" literate programming is non-linear in terms of narrative, making the "comments" / "docstrings" the main content but then having the code execute "normally" ignoring said comments.

Jeremy Ashkenas's tools (e.g. underscore, backbone, …) are all written and published in that style even though JavaScript is hardly conducive to it, and shown in exactly the "original source material on one side and translation and detailed notes on the other" form you seem to talk about. That is what Ashkenas called "annotated source": https://backbonejs.org/docs/backbone.html, https://underscorejs.org/docs/underscore-esm.html.

Recent revisions of underscore have been modularised and show individual segments you can look between instead: https://underscorejs.org/docs/modules/index-all.html maybe that's what you're thinking of when you talk about it being non-linear?

It's missing some of the bits e.g. the symbols themselves are not hyperlinked and there is no glossary, but because in the modularized version each function is the sole export of its module it's easy to jump between functions. Not that I'm convinced this makes for a great experience as it requires keeping a lot in memory, but there you go.


I meant "linear" as in you start at point A and read until point B. That is, pieces of information are presented and organized as a sequence of one item after another. I am envisioning a system where the programmer has code in one window and the explanation of the code in another. Like a book with text on the left and annotations on the right.


> I meant "linear" as in you start at point A and read until point B. That is, pieces of information are presented and organized as a sequence of one item after another.

That is the definition of the word, it’s not actually helpful in understanding what you’re thinking about.

> I am envisioning a system where the programmer has code in one window and the explanation of the code in another. Like a book with text on the left and annotations on the right.

So… literally what i posted.


> Having a terse little file where I can scan the interface of a module, rather than having to scroll through the implementation and see which declarations have `public` in front of them, is a great way to quickly refresh your mental model of a module's API.

There's also another structured approach to the interface/implementation distinction: leave it up to the IDE to offer an interface explorer. This is the approach used by, say, Java.

I'm not sure that either approach is outright better than the other; it's a trade-off.


I tend to prefer things being in the code itself as opposed to being added dynamically by the IDE. For example, I think type annotations should be in the code rather than only shown by the IDE.

This is because it lets me read code in extra-IDE settings: browsing GitHub, or in a patch file, or in a book. Or I can write code on paper.

Another benefit is that you can design a program entirely by writing the module interface files, and typechecking them against each other without an actual module body file.

Then, as you start actually implementing the program, you can implement one module at a time, typechecking it against the module interfaces of its dependencies, without said dependencies having any actual code in them. So you can write the actual implementation in whatever order makes sense.


Free Pascal has units (aka modules) with the interface and implementation in the same file but in separate sections, e.g.

    unit Foo;
    interface
    
    type Weekday = (Mon, Tue, Wed, Thu, Fri);
    
    procedure DoThisAt(Day: Weekday);
    
    implementation
    
    procedure DoThisAt(Day: Weekday);
    begin
      // stuff
    end;
    
    end.
This helps keep things together and up to date (the Lazarus IDE can automatically sync the implementation section with the interface section, so there's no need to type things twice manually), and you can still scan the interface section to see the API without wading through the implementation section (which is still just a scroll away if you want it).

(FWIW this is an old feature taken from Turbo Pascal which itself took it from UCSD Pascal)


In C++ you can’t really even separate them if you want to define templates, because (unless I am mistaken) template instantiation can only be done at compile time rather than link time. It’s sad to not be able to cleanly separate the interface from the implementation.


Yes, I think the module interface file should be for the user rather than the compiler.

The compiler would parse both the module interface and module body files, merge them and check for consistency, and produce both an object code file for the code in that module and a binary module interface that contains an efficiently serialized form of the interface, the bodies of generic functions, and maybe the table of monomorphic instances for separate compilation.

Then the build system makes sure to import the relevant binary interface files when building a project.


Just like C++20 modules allow.

You only need to export the public parts of the templates.


If my memory isn’t failing me, the Sun C++ compiler I used in 1994 did template expansion and compilation at link-time. However it was rather annoying in use, having to wait a long time to get errors arising from instantiation.


You can now when using C++20 modules.


"Interface files" that are intended as a form of documentation should be generated automatically instead.

As for interface vs. implementation docs, nothing precludes having both in a single file either.


> Having separate module interface and module body files, as in Ada or Modula-2/3, is a great idea that sadly a lot of people are burnt out on because C and C++ do this in a very unprincipled way.

> Having a terse little file where I can scan the interface of a module, rather than having to scroll through the implementation and see which declarations have `public` in front of them, is a great way to quickly refresh your mental model of a module's API.

Then there's the other way of doing it: imagine a language with a database/browser (e.g. Smalltalk), where the implementation is just linked and can be accessed in the same or another window.

This sort of system could also have documentation-comments attached to the interface (e.g. for usage), and the implementation (e.g. for maintenance logging, rationale, etc).


Coming from a Pascal/Delphi background I too find the structured separation useful (unlike the unstructured one in C/C++), though obviously it has the cost of typing the declarations twice.

Then for more modern languages where there's no separation, some IDEs can auto-generate the interface declarations along with the associated comments. E.g. Xcode does it for Swift and it's kind of OK.


> though obviously it has a cost of typing the declarations twice

In Lazarus you can have the IDE do the syncing for you (Ctrl+Shift+C). Modern Delphi might have something similar (if not exactly the same thing).


Doesn't OCaml have something like this too?


F# does, so I assume OCaml does too.

"Wait F# does?"

Yeah, but you won't have seen it because almost nobody uses it to the point where some tooling even has trouble understanding what it is. :(


Indeed, and that was inspired by Modula-2 actually: https://dev.to/yawaramin/ocaml-interface-files-hero-or-menac...


Well written. Always interesting to see new eyes in modern guise on old ideas. I studied and worked at the University of York in the 1980s, but not on their Ada compiler project. I recall it had an astronomical number of passes, and was said to have been formally rejected as "not compliant" until they removed a compile-time warning which said something like:

congratulations you have used the most obscure part of the ada specifications


An interesting read, but there's an error:

> Unfortunately, the top tiers of [SPARK] analyses are paid only, but you can get data/information flow analyses, as well as guarantee of no runtime errors for free.

That's not right. Straight from AdaCore's Yannick Moy, a year ago: [0]

> SPARK as included in GNAT Community allows you to go up to platinum level, with the 3 provers included (Alt-Ergo, CVC4 and Z3)

[0] https://old.reddit.com/r/ada/comments/hwgbwa/survey_on_the_f...


Thanks for pointing this out. Fixed.


Speedily done, thank you.


I would hesitate to call Ada "obscure", though. Maybe "once popular" -- especially since DoD spent billions writing Ada.


If you have ever written in Oracle's PL/SQL[0], Ada will come across as strangely familiar to you (because the designers of PL/SQL modelled its syntax on that of Ada). For me, it was almost like a blast from the past, since I wrote lots of PL/SQL almost 20 years ago.

[0]: https://en.wikipedia.org/wiki/PL/SQL


The similarities are only skin deep though. It's like comparing Javascript to Go because they both have C-style braces. Plus PL/SQL is an abomination of a language whereas Ada can be a real pleasure.

A much closer comparison would be Pascal. Which is another awesome language (I was gutted when the home computing industry moved away from Pascal and towards C).


> Pascal and towards C

As was I. Object Pascal was, for its time, one of my all-time favourite languages - it was so effortlessly boring it just got out of the way.


What is it about SQL embedded languages that made them so hard to get right? I had to port a stored procedure from MySQL to PL/pgSQL once and it was nuts how much better the PL/pgSQL one was once it was finished.


What makes pl/sql an abomination of a language?


It's not. I wrote a lot of PL/SQL at my first real software job and I miss it dearly. I treasure every opportunity I get to write a little PL/pgSQL, which is very similar. Use it for the right purposes and it's a joy to write.

There are certainly things it can't do, and things it can do but only poorly, but that's true of anything.


> Use it for the right purposes and it's a joy to write.

I did use it for the right purposes. I was writing applications for Oracle Middleware. You don't get any more "right purposes" than that. And no, it really isn't a joy. Almost everyone I've ever met or spoken to hated PL/SQL (you being literally the one exception). I get that enjoyment is subjective but in this case it really feels like you're the anomalous data point.

I've not used PL/pgSQL so can't comment on how similar it is to Oracle PL/SQL, but PostgreSQL is a much nicer RDBMS to manage and write SQL for than Oracle is. So it wouldn't surprise me if PL/pgSQL had some quality-of-life improvements.


I've done a fair amount of PL/SQL programming as well and did not see it as a big problem. When processing a lot of data I definitely prefer it to doing the same in Java/Python/... Executing queries with JDBC, mapping resultsets to Java objects, and writing back changes is cumbersome and performs poorly.

Working with PL/SQL packages is like programming in Pascal/Modula-2 but with first-class SQL support built in. A bit like LINQ in C#.

For example, I don't see how cursor FOR loops in PL/SQL, as documented here: https://www.oracletutorial.com/plsql-tutorial/plsql-cursor-f... are less elegant than doing something similar in Java or Python.


It's an attempt at bringing procedural workflows to SQL but the two paradigms aren't really compatible. It's much easier working with a scripting language like Perl or Python and embedding SQL queries. Use a relational language for the relational logic and a procedural language for the procedural logic.


Granted, PL/SQL isn't as flexible as Perl or Python - it is not a fully-fledged application/system language (I don't think it was meant to be). Having said that, it does have the ability to define types, reference column/row types, use typical data structures like arrays/hashes/records (which are like structs), control flow, etc. But the main point is that all of that runs within the database engine - this is useful when you must deal with complicated conditional logic but want to avoid round-trips and handle all of it in one go. Generally traditional relational SQL is the preferred approach, and it can cater to conditional scenarios as well, but sometimes SQL statements combined with conditional logic written procedurally in one place are simpler to understand and more performant.

To be clear, I'm not advocating this over the traditional approach of programs running discrete queries and handling the logic themselves - just saying that the approach taken by PL/SQL has its merits.


> Granted PL/SQL isn't as flexible as Perl or Python

Which, incidentally, can be used as procedural languages in Postgres (as well as TCL) as part of the standard distribution.

And there are third-party extensions for javascript, ruby, scheme, r, java, lua, … (though some of them may not be maintained anymore, the pg13 documentation doesn't list scheme and ruby anymore).


I get the reason for PL/SQL and completely understand the advantages of stored procedures (I used to write code for Oracle Middleware so have had several years of experience in specifically this domain). I'm just answering the question of why the language sucks to write code for.

I'm not suggesting PL/SQL should be a fully fledged scripting language either (neither Perl nor Python are systems languages by the way). I'm just saying the two paradigms PL/SQL aims to leverage don't combine well so the end result is always going to be ugly.


Ada is like the exact opposite of "move fast and break things". Ada is more like: think long and hard before writing your first line of code.


>think long and hard before writing your first line of code.

Which is no longer allowed in modern software development influenced by Silicon Valley.


Ironically, this is the #1 thing people whine about regarding government research: "eeew, it takes too loooooonnnggg." Yeah, there's a good reason for that.


Most programmers fall into one of two categories in terms of how they think about program structure. Either

1) They think the fundamental elements of code are classes and methods on classes, or

2) They think the fundamental elements of code are functions.

Sometimes people in either group also know there's something called a module or package, but the idea is only loosely understood.

I was one of these people. If you're happy to stay that way, stop reading my comment now.

Ada is extremely well designed. One of the reasons is that the language design cleanly separates concepts normally bunched together into ideas like "everything is a class".

In many languages, if you want implementation hiding you get a whole bunch of other things with it. You can't use just implementation hiding without automatically opting into all parts of class-based programming, like inheritance, subclassing, etc.

Then you go "wait, what? Aren't inheritance and subclassing sort of the same thing?"

Yeah, no. Not fundamentally. It's only the same thing in popular programming languages because everything is the same thing in popular languages. Everything is a class and if you need anything you get the whole class concept, whether or not you wanted it.

This is problematic especially for beginner programmers, because they learn that implementation hiding is good. So they use it. But that opens up a huge toolbox of additional tools, not all of which are appropriate. And beginners being beginners, when they see all those tools they simply use them -- sometimes out of desperation. This leads to messy code.

Ada is different. In Ada, you can pull out just the concepts you need, and it won't automatically opt you in to everything else.

A beginner that uses implementation hiding in Ada won't suddenly have an open toolbox full of subclassing. Their toolbox still contains just implementation hiding and whatever else they intentionally put in it. That leads to better code.

Just for that experience, Ada is worth learning, in my opinion.

Here are some quotes from TFA that allude to this.

> Packages are namespaces for functions and types, unlike other languages where types can “contain” functions and types.

Organising code into subcomponents with implementation hiding is separated in Ada from class-based programming. You can do both, but you can also choose to do only one of them.

> Function overloading acts as a key design element

You can have polymorphism without opting into inheritance. (Further, you can have inheritance without opting into class-based programming.)

> Everything in a package is related, there’s no syntactical split between “free function”, “class function”, or “member function” (method).

You can have methods on types without opting into class-based programming.

> What most C-family languages call “functions”, Ada calls “subprograms”. Ada distinguishes between those which return a value and are truly “functions” and those which do not return a value, and are “procedures.”

A procedure is fundamentally different from a function from a reasoning-about-the-code perspective. You can have either without automatically opting into the other (as is the case when everything is a method.)

> Examples are “accesses” (sort of like pointers), “accesibility” (similar to a scope for borrowing), “tagged types” (classes), “derived types” (unrelated to OOP), and “subprogram”.

Using different words for different concepts -- instead of bundling them into the same generic idea -- increases the richness of your mental vocabulary which also increases the nuance your thoughts are able to express.
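To make that concrete, here's a rough sketch of implementation hiding plus overloading in Ada (my own hypothetical package, not from TFA), with no tagged types, inheritance, or dispatching anywhere in sight:

    package Counters is
       type Counter is private;  --  clients never see the representation

       function Make return Counter;
       procedure Bump (C : in out Counter);                 --  overloaded on
       procedure Bump (C : in out Counter; By : Positive);  --  the parameter list
       function Value (C : Counter) return Natural;

    private
       type Counter is record    --  hidden, but no classes involved
          Count : Natural := 0;
       end record;
    end Counters;

    package body Counters is
       function Make return Counter is
       begin
          return (Count => 0);
       end Make;

       procedure Bump (C : in out Counter) is
       begin
          C.Count := C.Count + 1;
       end Bump;

       procedure Bump (C : in out Counter; By : Positive) is
       begin
          C.Count := C.Count + By;
       end Bump;

       function Value (C : Counter) return Natural is (C.Count);
    end Counters;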

----

If you read my comment all the way down here, you might be interested in the Rust beginner's tutorial adapted to Ada, one of my more popular articles (which tells you something about my popularity...) https://two-wrongs.com/guessing-game-ada-style.html


> A procedure is fundamentally different from a function from a reasoning-about-the-code perspective.

They really are not, unless the language puts specific limitations or features on one or the other.

And odds are the average "procedure" should not be one, because it only exists for its side effects but then provides no feedback about them.


1. In older versions of the Ada standard, functions can't have side effects, in the sense that they are not allowed to return more than one value (by setting a parameter's mode to out). Procedures have always been allowed to have side effects.

2. Functions can be expression functions, which can be written directly in a package specification or used in expressions. Ordinary procedure bodies can only be written in a package's body.

3. Procedures can be marked as no-return: that kind of procedure never ends in the normal way, only, for example, by raising an exception.

4. Procedures can be null procedures, i.e. empty. That's not the same as abstract subprograms in other programming languages.

Also, the difference is, in my opinion, more visible in SPARK, the safe subset of Ada, where the rule forbidding side effects in functions still exists.
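As a rough illustration of points 2-4 (hypothetical package, Ada 2012 aspect syntax):

    package Machines is
       --  2. Expression function: its body is an expression and can
       --     live directly in the specification.
       function Double (X : Integer) return Integer is (X * 2);

       --  4. Null procedure: an explicitly empty body.
       procedure Log (Msg : String) is null;

       --  3. A procedure that never returns normally.
       procedure Fail (Msg : String) with No_Return;
    end Machines;

    package body Machines is
       procedure Fail (Msg : String) is
       begin
          raise Program_Error with Msg;
       end Fail;
    end Machines;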


https://www.adaic.org/resources/add_content/standards/05rat/...

Interesting history of how there was supposed to be a difference, but the idea was dropped and then later revived by SPARK.


GP is not talking solely about Ada though; they are asserting that there is a fundamental difference, embodied solely in the presence or absence of a return value, which is missing from e.g. "everything is a method".

Differentiating between pure and impure functions might be useful[0], but while not strictly orthogonal the presence of a return value doesn't tell you anything about that. Even ignoring the error signalling, read(2) has a return value (the data being read) and also has side-effects.

[0] though it's debatable that this distinction is really useful in and of itself


The general discussion is about Ada ...

GP said it depends on what the language does with that concept, and then I linked some extra info about what Ada and SPARK do with the concept.

As for whether it's useful ... it's absolutely useful.

Side effects break basic blocks. The smaller the basic blocks, the less optimization can be done.

Side effects also require special handling when using theorem provers. If you can specify that something should be free of side-effects, it's less work to requalify the system after making changes.

The purpose of SPARK is to be formally proven, which is why it implements the feature.


Obviously what you're saying is completely correct.

Maybe we can agree that they should be different, but that in languages where all code is a method it is dangerous to assume they are different?


The more common distinction between functions that return values and procedures that do not return values is not important at all.

On the other hand the distinction between "pure functions" and other kinds of functions or procedures is quite important.

In many early programming languages, the "functions" had to be what now are called "pure functions".

Later many languages relaxed the restrictions for "functions" allowing them to have side effects, but then the more important distinction was lost.

Only relatively recently have many languages again begun to provide means to specify that a function is a pure function.


As a pedant myself, I find this rather pedantic. Language use evolves, and insisting it is only correct when words are used with their original meaning won't win you much. If you distinguish between procedures and functions, or functions and effects, many will get what you mean, and it is more elegant IMSubjectiveO.

Same goes for im/mutability, i think val/var is more elegant than let/let mut but that is only my opinion on aesthetics. I like good design but still think function is more important than form.


The "pure" functions have different properties with respect to program optimization and program verification, in comparison with the functions or procedures with side effects.

That is the reason why this distinction matters. The distinction between mutable and immutable variables matters for the same reason.

Otherwise, the difference between a "procedure" and a "function" with side effects where you ignore the return value has no practical consequences.

There are contexts where you do not care about the optimizations that can be done by the compiler or about program verification, but there are also contexts where you care.


> Maybe we can agree on that they should be different

Why would we? And how should they be different, and what useful properties would that provide?

> in languages where all code is a method it is dangerous to assume they are different?

Why would you assume undifferentiated functions are different in the first place, and what would the difference be?


> Why would we? And how should they be different, and what useful properties would that provide?

Functions should be similar to mathematical functions and come with guarantees like purity, referential transparency, and totality.

Another reason they should be different is they've got different names! Might seem trite to say that, but what's the point in having the same concept with different titles?

Pure functions are very powerful concepts, especially when it comes to composition. If the consumer of a function can't rely on that, then the consumer must investigate the internals to know what it does. Once you're several layers deep into your composition that becomes exponentially harder to do.

So, there is value to having a concept (with a suitable name, like 'function') that indicates to the consumer what it is they're dealing with. Some languages, like Haskell, bake this in; others like JS really don't - making the whole process of dealing with 'functions' (really procedures) much harder once an application is more than a toy project.


> Functions should be similar to mathematical functions and come with guarantees like purity, referential transparency, and totality.

Essentially none of that is embodied in Ada's distinction between functions and procedures though, despite that being "fundamentally different […] from a reasoning-about-the-code perspective" according to the comment I replied to.

The only thing Ada tells you is "this definitely has side-effects", because there's no other reason to have a subprogram without a return value, but that's the least useful thing you can be made aware of.

> Another reason they should be different is they've got different names! Might seem trite to say that, but what's the point in having the same concept with different titles?

Because you're very confused and making a distinction between things which are not different?

> Pure functions are very powerful concepts

That's debatable, but even then it's not what function means in Ada.

> especially when it comes to composition, if the consumer of a function can't rely on that, then the consumer must investigate the internals to know what it does.

Knowing what a function does is probably a good idea either way. You can call `map` with the same parameters as `filter` but the result will be rather different.


Initially, Ada had the restriction that the functions must be pure.

The restriction was lifted only much later, so what you say is correct for modern Ada, not for the original Ada.

While gcc and other C and C++ compilers have non-standard extensions to specify whether a function is pure or not, I assume that an Ada compiler can recognize a pure function just by the fact that all its parameters are "in", so no extra keyword is needed.

What is needed is that pure functions be easily recognized by compilers and other tools, and also by programmers. Not only is there no need to restrict all functions to being pure; that would actually be undesirable.

AFAIK, unlike in C/C++, where pointers can make this task quite complex, in modern Ada it is still easy to recognize the pure functions.
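As a small illustration (hypothetical declaration, SPARK-style aspect syntax): besides the parameter modes, SPARK also lets you state explicitly that a subprogram touches no global state, which makes purity visible to tools and readers alike.

    --  All parameters are of mode "in", and Global => null states that
    --  no package-level variables are read or written, so analysis tools
    --  (and readers) can treat the function as pure.
    function Area (Width, Height : Natural) return Natural
      with Global => null;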


> “There’s some quirky behavior, like tab indenting to where it thinks the indent should be and not actually inserting a tab.”

No sorry, inserting a tab is the actual quirky behavior.


Ada was one of the languages used to teach basic concepts of programming at the university programme I attended. Lisp and Ada were the two main languages we used, to contrast the differences.


I learned SML. At the time the CS department here had the philosophy that the introductory programming course should be in a language students were very unlikely to already know, thus putting the self-taught programmers on a somewhat level playing field with those who've never programmed a computer before.

They did have to stream that intro course anyway, because irrespective of familiar syntax some people "get it" and some don't. I remember the bottom stream was named "Strugglers", but I don't remember what the stream for people it came to as second nature was called. Most people moved up and down a bunch: e.g. maybe they were fine right up until currying, and then went overnight from "attend one lecture a week, complete the assigned work in ten minutes, hand it in and we're done" to "I need both classes and the extra one-on-one tutorial to even understand the assignment, let alone try to do it".

Today I believe they teach Java, or possibly Python, bowing to industry pressure to churn out people who can cut code on day one, as if that's the purpose of a three year degree. Interestingly the same university's IT department just hired me based on the rationale that although I don't actually know most of the technologies they're using, I clearly can just acquire everything needed as I go. So they don't believe this bullshit, but many people hiring their graduates do, and 18 year olds pick institutions with the best hiring rates.


You wouldn't have trained at the Computer Science University of Bayonne (France) would you?

Because we also used Ada as our first language there.


Chalmers University in Sweden used Ada as its intro language for a while as well (late 90s).


KULeuven (Belgium) used Ada in the mid-90s for some of the programming-in-the-large CS courses. (So not as a first programming language. That was Scheme or Pascal). It (Ada) got replaced by Java later.


University of Stuttgart, as well.

One of the language designers was the department head for programming languages and compiler construction.


Universidad Politécnica de Madrid, too.


I'm personally amazed by their productivity

- in 4 months of learning the language they've completed 7 projects

- they've read a 700-page book in a month

- they've made an ebook on Ada


There were a few studies on Ada (avionics software) and productivity when the language first came out, and Ada scored very well compared to the state of the art at that time. People learned that using static typing and sub-typing moved a lot of defects from runtime to compile time. (That was certainly my experience; getting it to compile took hours, but runtime errors were greatly diminished.) Looking back, the "spec and body", IN and OUT parameters, and such were just so much more powerful than Fortran or Assembly or even JOVIAL. C was too uncontrolled and wild.


Interesting review... the last time I looked at Ada was back in the 90s, when it was being promoted as the new/replacement C, was the "chosen" DoD language, etc.


> An interesting aside is that if Github locations are to be believed, the Ada community is predominantly European

That's a little ironic, given Ada was originally a US government project.


Might be that the US has more of the commercial closed-source side, which obviously doesn't show in GitHub stats, and for whatever reason there is more academic/open-source interest in Europe?


Ada was the first language I learned through my Computer Science courses during my University years (Bayonne, France).


Same for me at University of Nantes (France).

Unfortunately, I was too narrow-minded to understand that it was a really interesting language and not just a useless academic hobby of my teachers.

At least I do now, but I'm sad I was not more attentive at the time.


I wonder how much more robust the software we all interact with daily would be, had Ada become the standard systems programming language instead of C and C++.


As much as I like Ada, there were a couple of projects at an old office that were great demonstrations that you can write C code in any language. Among other things, they absolutely failed to understand the type system, so they had a lot of manual range checks instead of letting the compiler and automatically generated runtime checks do the work for them. They didn't understand how to loop over arrays, passing in size information as a parameter instead of using the arrays' own bounds. It was gross code, like someone had simply translated C or Fortran to Ada without consideration of the target language's abilities.

It would be like someone choosing Rust and then making everything unsafe. Or Haskell (I saw this in a tutorial once, it was hilarious and disgusting at the same time) and using strings (data) for dispatch instead of translating them into types and actually exercising the type system.

All that is to say, thoughtless programmers outnumber thoughtful ones. The language can only take you so far.


ZERO-NINE! Good to see you're still hacking on stuff :)


That's a very welcome blast from the past! :)

Feel free to send an email if you want to catch up.


Sent to your gmail!


The link to the tutorial site gives a 404 error


Thanks for mentioning this. Fixed.


I came here to read about Cardano, then I realized that this was about the programming language.


Both are named after Ada Lovelace, mathematician and first computer programmer.


Why all the hate?


Because it's an irrelevant comment. If the topic is not what you expected, keep scrolling. No need to let everyone know.


Same thought. I think the article is on top because of the ADA coin.


I hoped it was about Ian Dury and the Blockheads.

https://www.youtube.com/watch?v=8EMR9DXU_NQ



