Hacker News | csb6's comments

> I would love to get a list of language features that are "free to use with GNAT" and those that are "AdaCore license required."

All Ada language features are present in the free/open source version of the compiler. The proprietary version of GNAT is, I think, just updated more frequently and comes with commercial support - they periodically copy their changes into the main GCC source tree.

They have proprietary tools for some kinds of static analysis, but those wouldn’t be considered language features. GNATprove (the theorem prover tool for verifying SPARK programs) is open source.


The last time I bought a license from AdaCore it was to get the unit testing, static analysis, and dynamic analysis tools IIRC, but that was about 8-9 years ago. We also paid for training from AdaCore and their main guy came out.

How would you rate that experience?

AdaCore produces a good product and their technical team is top notch.

I think AdaCore stopped supporting GNAT Community in 2022 and recommended using Alire instead, no?

https://blog.adacore.com/a-new-era-for-ada-spark-open-source...


GNAT Community was just a version of the GNAT toolchain/IDE provided by AdaCore. GNAT is still open source and still updated as part of GCC, it is now just recommended to install it using the Alire package manager. Builds of GNAT are also provided on some distros since it is part of GCC.

Alire is just a package and toolchain manager that AdaCore wrote in the style of Cargo. It still runs GNAT under the hood.

I don't think AdaCore was involved in the creation of alire:

https://zaguan.unizar.es/record/79726/files/texto_completo.p...

AdaCore does directly contribute to GNAT which is important.

GNAT was chosen for alire since it is free software: "This work presents a working prototype tailored to the Ada compiler available to open source enthusiasts, GNAT."

Other Ada implementations typically use their own build systems which are different from GNAT's and so they would probably need changes to work with alire.


Alire isn't written by AdaCore; it's an independent project, though AdaCore has donated to them.

At this point it is becoming the de facto method of acquiring the toolchain and building Ada projects.


The employees knew that they would probably get fired for protesting/breaking/entering at work. The idea is to make what they perceive as Microsoft’s involvement in the Gaza crisis a topic of national discussion and criticism. They felt that doing this was more important than their jobs. Smugly saying “actually, you’re asking to be fired if you break into an office and do a sit-in at work” is missing the point of direct action like this, which is meant to be disruptive and understood by the protesters to have likely consequences.

Incredibly stupid even if you deny climate change (which a rational person would not given the strong evidence for it). The scientific equivalent of cutting off one's nose to spite one's face.


Sounds like a petulant child. Wholly unnecessary to get his point across.


Try saying the same things over and over to adults for years


Alignment of incentives. I'm sure the personal humiliation of being yelled at by Jobs was a reasonably strong incentive, but I'm certain the perception that failing to deliver would have him personally sending you to the dole queue asap was an even stronger incentive.

Compare to most corporations, where the only way to get fired is to fail at office politics, and failure to deliver (or delivering the lowest-quality crap that can be passed off) is just business as usual.


Yeah, with an attitude like that it's no wonder his products were such failures.


> Sounds like a petulant child. Wholly unnecessary to get his point across.

See from the replies to this how well you got your point across.


Yes, Steve Jobs was a jerk.

Alas, humans don't come fully customisable. You get to pick from the packages on offer. And it seems like, for Apple, Steve Jobs's good parts only came as part of a package that also included his bad parts.


To me it sounds like the symptom (emotion) of someone who deeply cares.

These things need to be well-placed to be effective. Sounds like it was.


> The major problem of both human-readable and binary formats is not the serialized form, but the understanding of the schema (structure) of the data, which more often than not, is completely undocumented. Human-readable formats are worse in this regard, because they justify it by "it's obvious what this is".

Exactly - CSV files, for instance, handle escaping and quoting inconsistently depending on the creator. Even supposedly machine-readable text formats like RSS do not always have consistent schemas [0]. Unix programs seem particularly prone to using underspecified text formats as output.

[0] https://www.xml.com/pub/a/2002/12/18/dive-into-xml.html
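As a small illustration of the CSV point, Python's own csv module can emit different quoting conventions for the same row; a reader that assumes one convention can misparse a file produced under another. A minimal sketch:

```python
import csv
import io

# The same row serialized under two quoting policies.
row = ['hello, world', 'plain']

minimal = io.StringIO()
csv.writer(minimal, quoting=csv.QUOTE_MINIMAL).writerow(row)

all_quoted = io.StringIO()
csv.writer(all_quoted, quoting=csv.QUOTE_ALL).writerow(row)

# QUOTE_MINIMAL only quotes the field containing the delimiter;
# QUOTE_ALL quotes every field.
print(minimal.getvalue().strip())     # "hello, world",plain
print(all_quoted.getvalue().strip())  # "hello, world","plain"

# A naive consumer that splits on commas breaks on the quoted field:
print(minimal.getvalue().strip().split(','))
```

A parser hardcoded for one dialect (or one that splits on commas) silently produces garbage on the other, which is exactly the kind of undocumented-schema problem being described.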


Typically ships need to be certified by a classification society (an organization that inspects ships and makes sure they meet technical safety standards) in order to operate commercially. The OceanGate sub was not certified by any of these authorities because certification was viewed as “red tape” in the way of their “innovations”, and they would have almost certainly failed to be certified (a bad look if you are trying to convince people it is safe).

I think the rules/laws around commercial deep sea sub companies were unclear because most deep sea subs are research vessels or private projects (e.g. James Cameron’s sub), not tourist operations.


Because the law requires them to release reports on certain dates, so they do so and then make corrections as more data comes in.


Thanks.


Do you believe people living in Gaza are “pampered”?


As a counterpoint, Dijkstra [0] makes a distinction between what he calls “postulational” and “operational” (more like what TFA is describing) formal methods. (Sidenote: I think people nowadays would use “denotational” instead of “postulational” (e.g. denotational vs. operational semantics), but Dijkstra wrote this in the 1980s)

Instead of thinking of a program as a set of potential execution traces, he advocated thinking of programs as formulas (e.g. pre/postconditions) that can be used to construct proofs without having to simulate any executions. His idea with weakest preconditions was to derive code from the desired logical properties, not to prove that a priori written code satisfies those properties, reckoning that his way would be less work and result in more elegant programs.
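As a rough illustration of the weakest-precondition idea (a toy Python encoding, not Dijkstra's calculus itself; the helper name `wp_assign` is made up), the rule for assignment is wp(x := e, Q) = Q with x replaced by e:

```python
# Predicates are represented as functions over a state dict.
def wp_assign(var, expr, post):
    """Weakest precondition of `var := expr(state)` w.r.t. postcondition `post`:
    substitute the assigned expression for the variable inside the postcondition."""
    return lambda state: post({**state, var: expr(state)})

# Derive the precondition guaranteeing x > 3 after executing `x := x + 1`.
post = lambda s: s["x"] > 3
pre = wp_assign("x", lambda s: s["x"] + 1, post)

print(pre({"x": 3}))  # True:  3 + 1 > 3
print(pre({"x": 2}))  # False: 2 + 1 > 3 does not hold
```

Working backwards from the postcondition like this, with no execution simulated, is the "postulational" flavor: the derived precondition here is effectively x > 2.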

I am just an amateur but it seems like formal methods as a whole is often conflated with model checking and other operational formal methods.

[0] https://www.cs.utexas.edu/~EWD/transcriptions/EWD10xx/EWD101...


While there is a proof assistant for checking TLA+ proofs and a couple of model checkers for subsets of TLA+, TLA+ itself is no more and no less than a formal language for expressing mathematical formulas that describe dynamical systems.

There is no notion of code, program, or execution in TLA+ any more than the formula y(t) = y0 + v0t - 0.5gt^2 has the notion of a ball, a vacuum, or of Earth. When the author talks about "sets of behaviours" (really, classes of behaviours, but that's getting too technical) it is precisely in the same sense that the formula `x > 3` denotes the set of all integers > 3 in a logic over the integers.

What's interesting about TLA+ (or, really, about TLA, the temporal logic at the core of TLA+) is how change over time is represented. Rather than with an explicit time variable, TLA intrinsically represents the evolution of the values of variables over time in a way that makes abstraction-refinement relations (the relation between more and less detailed descriptions of a dynamical system) particularly easy to express.

Dijkstra's pre/post conditions can be expressed in TLA+ just as anything that could be stated precisely could be expressed in any sufficiently rich mathematical language.

We can choose to interpret some TLA+ formulas as "programs" just as we can choose to interpret some formulas as describing the motion of a baseball in a vacuum, but TLA+ allows us to express things that are more abstract than programs, which can be very useful (e.g. no program can express the Quicksort algorithm in its full generality, as that algorithm is too abstract to be a program, but it is nevertheless very useful to show that a particular program is a particular implementation of that algorithm).


P.S.

A model checker doesn't imply anything about the kind of the formal specification used. It is a program that checks whether all assignments of free variables in a formula are satisfying assignments, or "models", of the formula (a model is a satisfying assignment of the free variables in a formula, i.e. an assignment that makes the formula true).

It is distinct from an automated proof finder in that it doesn't (necessarily) find a deductive proof in the relevant logic that the formula is always true, but rather operates in the semantic universe of the logic.

For example, the proposition `A ⇒ A` is true for any A in a simple boolean logic. We can determine that either via a deductive proof, which consists of a sequence of application of the deductive rules of the logic, or by examining the relevant truth table; the latter method would be a form of model checking.

Neither the use of deductive proofs nor of model checking implies anything about the nature of the logic. They are just two different ways of determining the truth of a proposition.
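The truth-table method can be sketched as a toy propositional "model checker" (the helper name `model_check` is made up for illustration): enumerate every assignment of the free variables and confirm the formula holds in all of them, rather than constructing a deductive proof.

```python
from itertools import product

def model_check(formula, variables):
    """Check validity by brute force over the truth table: the formula must
    be true under every assignment of the free variables."""
    return all(formula(dict(zip(variables, values)))
               for values in product([False, True], repeat=len(variables)))

# A => A, encoded as (not A) or A, is valid:
implies_self = lambda env: (not env["A"]) or env["A"]
print(model_check(implies_self, ["A"]))  # True

# A => B is not valid: the assignment A=True, B=False falsifies it.
implies_ab = lambda env: (not env["A"]) or env["B"]
print(model_check(implies_ab, ["A", "B"]))  # False
```

For boolean logic the semantic universe is finite, so exhaustive checking is trivial; real model checkers apply the same idea to vastly larger (symbolically represented) state spaces.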


> A model checker doesn't imply anything about the kind of the formal specification used.

That is true; I was just contrasting the way TFA explained specifications as being definitions of groups of acceptable execution traces (perhaps checked using a model checker) with Dijkstra’s approach of characterizing the acceptable behavior without simulating any executions.

Both are of course valid approaches to demonstrate the correctness of a specification.


> I was just contrasting the way TFA explained specifications as being definitions of groups of acceptable execution traces (perhaps checked using a model checker) with Dijkstra’s approach of characterizing the acceptable behavior without simulating any executions.

I think that's a misunderstanding, as TLA+ doesn't require simulating anything (even the model checker doesn't work like that; it can check a formula that corresponds to an uncountable infinity of behaviours, each of them infinite, in a second [1]), and Dijkstra's approach is separate from the logical semantics.

It is true that a TLA formula denotes a set (really, a class) of something called "behaviours" (which could be interpreted as execution traces when a formula is interpreted to describe a program) just as the statement x > 3 denotes the set of all integers greater than 3 (in a logic over the integers), but in neither case is enumerating or "simulating" anything required, either in principle or in practice. It's just simple logical semantics, and Dijkstra's approach is easily expressed in a logic with such simple semantics.

In fact, TLA serves as a generalisation of Dijkstra's approach, as that is lacking when dealing with concurrency. While in sequential programs it is easy to specify invariants as pre/post conditions, it is hard to do that in concurrent/distributed programs.

[1]: You can ask the TLC model checker to simulate behaviours if you want (it can be useful sometimes), but checking mode, not simulation mode, is the default. Obviously, in simulation mode TLC doesn't work as a model checker. Technically speaking, every TLA formula denotes an uncountable infinity of behaviours of infinite lengths, where between any two states in the behaviour there is an infinite number of steps (it is in this regard that TLA is different from other temporal logics like LTL), and each state in every behaviour is an assignment to an infinite number of variables (in fact, that uncountable infinity is so large that it doesn't even correspond to a set in the set theory sense but to something called a class). The large infinities are what makes TLA so suitable for expressing abstractions and refinement and interactions between components.

This may sound strange, but it's actually how most simple logics work. When I said before that `x > 3` in a logic over the integers denotes the set of integers greater than 3, I wasn't entirely precise. What it really denotes is: "x is in the set of integers greater than three and any other imaginable variable could be any integer".


> TLA+ doesn't require simulating anything

I was referring to the way the Python REPL examples/tree diagrams and sections on execution histories were used to explain things in the article, not to TLA+. I was contrasting how the author explained specifications versus how someone like Dijkstra would, not anything about TLA+. I don’t know enough about TLA+ to say anything interesting, but I appreciate your explanations of its approach!


You can read more about the Temporal Logic of Actions in my post here: https://pron.github.io/posts/tlaplus_part3


>His idea with weakest preconditions was to derive code from the desired logical properties, not to prove that a priori written code satisfies those properties, reckoning that his way would be less work and result in more elegant programs.

I love the idea. Are you aware of any such spec-based code generators in widespread use today?


The most developed one I know of is Atelier B [0], which uses the B-Method and has a C code generator. I believe this was the tool used to develop several driverless train control systems in the Paris Metro and other subways.

There is a free version but unfortunately it is not open source.

[0] https://www.atelierb.eu/en/


With an HTML body the link will be displayed as content and so will be directly clickable. But if the body is JSON then the client has to somehow generate a UI for the user, which requires some kind of interpretation of the data, so I don’t understand that case.
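To illustrate the contrast with a hypothetical HAL-style payload (the `_links` convention and the `/orders/42` URLs are made up for this sketch): in JSON, the link is just data until the client applies some out-of-band convention, whereas an HTML body like `<a href="/orders/42/cancel">cancel</a>` carries its own rendering.

```python
import json

# Hypothetical HAL-style JSON response.
body = json.dumps({
    "name": "order-42",
    "_links": {
        "self": {"href": "/orders/42"},
        "cancel": {"href": "/orders/42/cancel"},
    },
})

# The client must already know the "_links" convention to find and
# present the link; nothing in the bytes makes it clickable.
doc = json.loads(body)
cancel_url = doc["_links"]["cancel"]["href"]
print(cancel_url)  # /orders/42/cancel
```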

