SeL4 is verified on RISC-V (microkerneldude.wordpress.com)
244 points by ingve on June 10, 2020 | 80 comments



Just a reminder that the government of Australia has continued to cut funding for CSIRO, the national science agency of Australia that invented/funded seL4 (along with WiFi and other things).

https://www.smh.com.au/national/australia-risks-brain-drain-...

https://www.csiro.au/en/News/News-releases/2020/seL4-develop...


SeL4 was developed by NICTA, which was a fairly recent research organisation (opened in 2003) funded by the Australian federal government, but distinct from CSIRO. Indeed, NICTA has since been defunded by the government and the remnants absorbed into the "data61" unit of CSIRO: https://en.wikipedia.org/wiki/NICTA

re: CSIRO and WiFi, there is some controversy around that: https://en.wikipedia.org/wiki/CSIRO#802.11_patent


Much of the NICTA/CSIRO/Data61 thing was internal politics: CSIRO doesn't like the federal government funding other research organisations.


There's no controversy in that linked section of the Wikipedia article. It seems to assume the patent was valid. The only controversy is over whether free and fair licenses were available, as they had promised the standards body.


We hackers can't continue to depend on the life-and-death high-school popularity contests that are electoral politics to fund the development of a better future. We need to find ways to fund public-interest hacking, like the hacking that produced seL4, that don't come from taxes.


I don't think these kinds of long-term, far-seeing projects would survive in the life-and-death contest of startups and the private sector either.

I dunno what the alternative is, other than political activism, pushing parties to support progressive tax policies, and educating our peers and family members. It's not only research on the line here.


The key is that money is, for most people, a negative need, not a positive one. The authors of seL4 didn't write it because they expected to get superrich — but they could have been prevented from doing it by needing a job to pay the rent. Remember that Bram Cohen wrote BitTorrent while sleeping on friends' couches and living off credit card balance transfers. YC was founded with the idea that three months of "ramen money" would be enough to get a lot of ideas off the ground.

I think there are a lot of things we can do:

1. Promote free software, peer-to-peer networking, cryptocurrencies, and privacy software like Tor. Don't forget, governments burn libraries. Free hardware like RISC-V will become extremely important once we have matter compilers.

2. Lobby against patent, copyright, trade secret, noncompete enforceability, and other legislation that makes the contents of employees' minds the property of their employers and legally imposes censorship. We don't need to eliminate these entirely, but the more we can reduce their scope, the better off we are. California's pioneering legislation in this area was probably a significant factor in the 1980s move of the computer world's center of gravity from Boston to Silicon Valley; China's nonenforcement was probably a significant factor in its 2000s move from California to China.

3. Remind people that the freedom to tinker is a human-rights issue, a government transparency and accountability issue, and a consumer-protection issue. We need all the allies we can get.

4. Reduce people's dependency on employers and employment for what they need to survive, through programs like public libraries, public healthcare, retirement, public education, universal basic income, churches, widespread solar panel deployment, squatters' rights and easier adverse possession, homesteading, soup kitchens, the Rainbow Gathering, ashrams, food banks, police reform, volunteer mental health counseling, decent public housing like Britain's 1960s council housing, BeWelcome, Hospitality Club, and open borders. Our current society already produces so much that scarcity of basic goods need not imperil anybody's survival. (There are of course cases on the margin like funding Gilead's development of new drugs, and perhaps synthesizing some drugs that are especially difficult, but that's no reason for people to die of homelessness-induced hypothermia on the streets.)

5. Organize programs like GSoC, Patreon, and Kickstarter that can raise funds for the people working on public goods such as free software. Alex Tabarrok's "dominant assurance contracts" might provide an incentive structure for this that improves on Kickstarter's (see the sketch after this list).

6. Organize into collectives such as monasteries, Google, or universities — organizations like these, imperfect though they are, have often been very effective at protecting their members from the societal pressures of the "life-and-death contest of startups and the private sector", not to mention law enforcement, with constructs such as academic freedom, tenure, "Googliness", and 20% time. Today's Google, like today's universities, is unfortunately not as strong as it once was — hierarchical command relationships make such organizations vulnerable to political takeovers. But a university is fundamentally a faculty senate organized around a library, and Google was at one time fundamentally a group of hackers organized around a search engine. Such things can be destroyed, but they can also be created. They can survive by receiving donations, as some monasteries do; by providing services to outside entities, as universities often have; or by earning rent from an endowment, as state land-grant universities and other monasteries do.
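
To make the dominant-assurance-contract idea in item 5 concrete, here's a toy payoff sketch in Haskell (the numbers and names are my own invention, not Tabarrok's formalization): if pledges reach the threshold, the project is funded; if not, every pledger gets their money back plus a bonus paid by the entrepreneur, which makes pledging the safe move either way.

    -- Toy model of a dominant assurance contract (illustrative only).
    data Outcome = Funded   { paid   :: Double }
                 | Refunded { refund :: Double, bonus :: Double }
                 deriving Show

    settle :: Double    -- funding threshold
           -> Double    -- bonus rate paid out on failure, e.g. 0.05
           -> [Double]  -- everyone's pledges
           -> [Outcome]
    settle threshold bonusRate pledges
      | sum pledges >= threshold = [ Funded p                   | p <- pledges ]
      | otherwise                = [ Refunded p (bonusRate * p) | p <- pledges ]

With a threshold of 10000 and a 5% bonus rate, pledges of [4000, 3000, 2000] fall short, so each pledger is refunded with a 5% bonus; raise the last pledge to 3000 and everyone is simply Funded.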

The housing issue is particularly bad because, in many places, legally housing a person costs hundreds of thousands of dollars, more than an average employee can earn in many years. I wrote a bit in https://news.ycombinator.com/item?id=23264786 about the underlying economics of the situation.


“China's nonenforcement was probably a significant factor in its 2000s move from California to China.” So one of the most anti-human and arbitrarily ruled organisations is the best place to develop software, because it is “free”? Are you joking?

You need to be free as a person for free software to be meaningful. Imagine GNU operating in China and running software on campus (eg https://www.gnu.org/education/teaching-my-mit-classes-with-o...). Imagine ...


> Free hardware like RISC-V will become extremely important once we have matter compilers.

Could you share what you see here?


That's why long-term projects like a manned mission to Mars, or converting to all-electric vehicles, should be government-led.


They have been. Musk's businesses don't make sense without the incredible amount of government subsidies they get.


There are a lot of ways open source gets funded: donation models (Patreon etc), internships (like GSoC), employment (like with RedHat), consulting, grants (like NLnet), bounties (like Bountysource). Personally I like the Snowdrift model.

https://snowdrift.coop/ https://github.com/fossjobs/fossjobs/wiki/resources


donation models (Patreon etc) [...] bounties (like Bountysource)

Is there a significant number of people that can live off this? When I look at the monthly income of larger projects on OpenCollective or developers on Patreon, they are doing very well if they make several hundred dollars per month. Not something anyone can live off in a western country.


I've been living on US$500 a month for years. In a Western country.


People who receive money over the internet don't have to live in an expensive location (big cities).


>taxes.

Research into developing jet engines is a pretty stable income stream in the US that has persisted for decades. Taxes can fund what you want, you just need to create the political will for governments to value it and fund it.


Taxes won't fund what I want. Taxes fund what is wanted by the people with the power to create political will. That is, as I said, a matter of a high-school-style popularity contest, except that the stakes are life and death. Do you really think the voters whose political will elected Donald Trump and Scott Morrison — or the mass media who directed that political will — should be in charge of whether we get secure operating systems, public libraries, or ICBMs? Should that political will determine whose research gets funded? The idea is absurd.


I think the point is that the F-35 seems to get funded no matter how the popularity contest goes. It is not completely beyond imagination that seL4 could be funded similarly, since both serve a military need. In fact, DARPA has already funded some seL4 work.


General Dynamics bought OK Labs.


Do you really think voters are in charge?


No he doesn't because he already said that the mass media is directing the voters. Read his comment again.


People very rarely donate to open-source projects or buy open-source products.

Money has to come into open source from somewhere and in the end it has to be via taxes or commercial interests.

Maybe Bitcoin will save us? Haha. No.


I've watched from close quarters as some friends experienced burnout trying to make a living off their OSS projects. The projects in question are fairly popular and in broad use by large corporations, but efforts to monetize were ultimately unsuccessful.


I see this as largely not a problem with OSS per se. It just happens that a lot of people have expectations around software that they don't have elsewhere.

For instance, let's say that I have this very clever idea that would improve on the good old-fashioned shovel. I could try to monetize this idea, but probably wouldn't have much success. Now, if I could figure out how to apply the idea to a grader or bulldozer or boring machine, I might have more success.

My best chance might be if I happen to already own an equipment manufacturer. But to create a new manufacturing company just to produce my new-fangled backhoe is a daunting task. And it turns out my one idea isn't enough. My backhoe needs hydraulics, power, tires, and a bunch of other stuff.

Most people wouldn't expect to be able to quit their day job to focus on a new shovel idea. But they somehow think that people should line up to fund a new web framework. Or a novel NoSQL approach.

It's hard to monetize any idea. This is not limited to OSS.


Note that software is not an abstract idea; it's an idea manifested.

And, really, people can still have a business selling cookies. But open source is unique in producing projects that have many users, including commercial entities, yet still leave the author in abject poverty.


Sadly, so true.

Making money from open source is somewhere between aspiring-Hollywood-actress and fool's-gold levels of wishful thinking.


That's true for software in general. However, you're talking about high-assurance security. That takes math skills, patience, a strong commitment to security, and a lot of time. Most FOSS developers will not build high-assurance security regardless of the funding model you find.

This is why I favor creating systems that, with user-provided specs, automate proofs, analyses, and testing. The specs should be easy for non-math types to understand. Any failure is fixed or becomes a runtime check with logging. We might get more adoption of something like that than we'd ever get from expecting the masses to contribute to an Isabelle/HOL project or whatever.
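
A minimal sketch of what I mean, in Haskell (everything here is invented for illustration): the user writes the spec as a named predicate in plain terms, and any obligation that can't be discharged statically becomes a runtime check with logging.

    import System.IO (hPutStrLn, stderr)

    -- A user-provided spec: a named predicate over a function's input and output.
    data Spec a b = Spec { specName :: String, holds :: a -> b -> Bool }

    -- Wrap a function so that any spec violation is logged at runtime;
    -- this is the fallback when a proof or analysis can't discharge the check.
    checked :: Spec a b -> (a -> b) -> a -> IO b
    checked spec f x = do
      let y = f x
      if holds spec x y
        then pure y
        else do hPutStrLn stderr ("spec violated: " ++ specName spec)
                pure y

    -- A spec a non-math type can read: "the result is never negative".
    nonNegative :: Spec Int Int
    nonNegative = Spec "result >= 0" (\_ y -> y >= 0)

For example, checked nonNegative (subtract 5) 3 returns -2 but logs "spec violated: result >= 0" rather than failing silently.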


You're absolutely right, but that's just one manifestation of our society having problems with long-term goals. I'd say it's insecurity that draws people to short-sighted goals and leaders.


Humans are not good at long-term thinking. Any proposed solution that depends on them getting good at long-term thinking, en masse, is doomed to failure. Let's figure out how we can leverage the few humans capable of thinking slightly further out than the immediate present — 8 to 32 years, say — into sustained progress, while protecting that progress from destruction by the senseless masses who call it "weird", "scary", "pedantic", "odd", "creepy", "arrogant", "Satanic", "entitled", "problematic", "elitist", "gross", "unnatural", and "witchcraft".


A la enlightened dictatorship? I don't think that politics (as in collective decision-making) should be disregarded. The inertia of the masses can feel like a drag, but I think that for every hacker who devotes himself to some specific long-term goal, there should be a hacker who devotes himself to politics and undoing that seemingly paralysing inertia.


I am not in favor of any kind of dictatorship, although obviously a dark dictatorship is even worse than an enlightened one. Here in Argentina Perón's supposedly enlightened dictatorship — supported by the masses — destroyed the universities' academic freedom, and therefore the universities themselves, and squandered our chances of nuclear-energy leadership on a politically-favored con man who claimed to have built a usable fusion reactor.


We need a popular non-free license which would require big companies to pay fees to the creators, but allow small companies and individuals to use the software for free. Along with some company to handle payments and police usage, so developers won't have to bother with that.


There are slogans about this ("free software is a matter of freedom, not price", "software should not have owners") but the issue in practice seems to be that software licensed that way is not very good at attracting contributions or broad support. Also, it's difficult to write a license that has the effect of requiring payment without also introducing the other problems that non-free licenses have: you depend on software that you don't have the right to fix, for example.


CSIRO didn't invent WiFi. CSIRO invented, and holds, one of many patents used in WiFi. https://arstechnica.com/tech-policy/2012/04/how-the-aussie-g...

That said, I recall concluding that CSIRO was unfairly treated in the patent pool issue, despite the title of the linked article.


I'm really looking forward to (hoping for) fully-open hardware platforms to build upon, with fully-open toolchains and software stacks atop them, and to get rid of a lot of crud along the way. RISC-V is one of the inspiring potential pieces of that.


Quote from [1]:

RISC-V has some nice work and support behind it, but that software/firmware/optimized-OS ecosystem does not develop itself overnight. And x86 and ARM are still proprietary ISAs, but they also have mature software/firmware/optimized-OS ecosystems.

Now just because the Power9/Power ISA is opened up does not mean that IBM's specific hardware implementation of that now-open ISA is open, and ditto for any MIPS or RISC-V ISA running on others' custom hardware implementations. Folks, it's the underlying hardware implementation that gets the real work done; an ISA is nothing more than an execution template, and any specific hardware implementation engineered to execute it can be more or less efficient.

----------

[1] https://www.nextplatform.com/2019/08/20/big-blue-open-source...


Why cite another source, when:

* this article acknowledges that while open-source hardware requires an open-source ISA, this requirement is not sufficient

* this article mentions open-source hardware that implements the open-source RISC-V ISA, and the type of applications and research that this hardware is used for (e.g. eliminating Spectre-like timing attacks). It also mentions how such research is pretty much impossible to publish for non-open-source hardware using non-open-source ISAs.


> this article mentions open-source hardware that implements the open-source RISC-V ISA, and the type of applications and research that this hardware is used for (e.g. eliminating Spectre-like timing attacks). It also mentions how such research is pretty much impossible to publish for non-open-source hardware using non-open-source ISAs.

Yes, that's the main point IMHO. It's firstly about making research easier and decoupled from particular companies, allowing you to then push those companies to adopt the research. It also slightly increases the chances for more competition in the CPU market by making entry easier for new companies. Lastly, it's useful for countries which want to become less dependent on the US.

The idea that an open-source ISA will somehow lead to open-source hardware which will replace well-established hardware is a bit too optimistic in my opinion. But it can do a lot of good even without that.


Personally, I have no doubt RISC-V will largely replace proprietary ISAs and chips in some market segments. What I doubt is that your desktop, laptop, and phone are those segments.


> This means that it is proved to be bug-free relative to a specification formulated in a mathematical logic

But, but, but ... has the specification been proved to be bug-free?

As in does the specification meet eg a set of security constraints?


Yes, the seL4 specification comes with proofs of strong integrity and information flow security guarantees, and with integrity and authority confinement proofs. It also comes with a configuration that is proved to be a static separation kernel.

The implementation, including the code emitted by gcc, implements this specification, and is guaranteed to be free of buffer overflows, pointer errors, memory leaks, arithmetic overflows, and undefined behavior.

[1] https://ts.data61.csiro.au/projects/seL4-verification/


The sentence right after that one is, "And by now it has proofs about further security properties (which show that the specification has the right properties)".

In particular, they formalized notions of confidentiality, integrity, and availability, and proved the specification (hence the source code, hence the binary) satisfies these notions.

Now you can ask whether these notions were formalized correctly. But that's still a LOT better than asking it for binary/source/specification.


First, read the other comments.

Then:

> has the specification been proved to be bug-free?

Yes for seL4, but no for a bunch of other systems advertised as "having been proven to be correct", which often sadly only means "implemented as specified" but not "implemented as specified and proven to have the right properties".


I think this is a very relevant question and I am sad to see that it is being downvoted.

Remember that we have had two major flaws in the formally verified TLS 1.3 so far because the spec was not complete.


Perhaps you are a brain in a vat being deceived into thinking that the specification exists. Perhaps any memories you have of reading the specification are false and implanted. Perhaps this very comment doesn't actually exist, you are merely hallucinating the memory of having read it.

Nothing can be known with absolute certainty. But a specification is much smaller than an implementation and misunderstandings of specifications are much less common than bugs in code. Don't let perfect be the enemy of good.


More importantly, a good specification is one that is triple-checked: by human inspection, by having the program conform to it, and by having it satisfy a bunch of nontrivial theorems that a human would expect of programs conforming to the spec. seL4's integrity, confidentiality, and worst-case-execution-bounds proofs are an example of this... it's quite unlikely for a program to simultaneously provably satisfy a specification, for the specification to provably satisfy nontrivial (first-order) properties, and for those properties to nonetheless be defined incorrectly. Obviously there can still be holes in your model (for example, not considering certain side channels), but that's a pretty different situation from accidentally proving a different thing than you thought you were.
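
To illustrate the "theorems about the spec" leg at toy scale (seL4 does this with machine-checked Isabelle/HOL proofs; this little Haskell sketch, with invented names, just shows the shape):

    -- A tiny executable "specification" of an access-control decision:
    -- a two-level, Bell-LaPadula-style read rule.
    data Level = Low | High deriving (Eq, Ord, Show, Enum, Bounded)

    mayRead :: Level -> Level -> Bool   -- subject clearance, object label
    mayRead subject object = object <= subject

    -- Properties a human would expect of the spec itself. With two levels
    -- the state space is tiny, so we can check them exhaustively; seL4
    -- proves analogues of such properties for the real kernel spec.
    prop_noReadUp :: Bool   -- a Low subject can never read a High object
    prop_noReadUp = not (mayRead Low High)

    prop_monotone :: Bool   -- raising clearance never revokes access
    prop_monotone = and [ not (mayRead Low o) || mayRead High o
                        | o <- [minBound .. maxBound] ]

If mayRead were accidentally written backwards, prop_noReadUp would fail even though programs could still dutifully "conform" to the backwards spec; that's exactly the class of error the extra theorems catch.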


Philosophy of that sort offers little insight into correctness in formal software development, just as it offers little insight into the study of mathematics.

In practice, bugs can creep in at various stages of a formal development process, but they're much less common than with non-formal software development methodologies, as you say.

See page 46 of this case study (giant PDF warning): https://www.adacore.com/uploads/downloads/Tokeneer_Report.pd...


> Philosophy of that sort offers little insight into correctness in formal software development, just as it offers little insight into the study of mathematics.

I disagree. Knowing the limits of what is possible is important, so that we don't waste our time looking for the impossible.


What if our ability to know/reason about what is possible has a hard limit lower than the limits of what is truly possible?


Such a limit would be worth knowing about. But I see no reason to assume that limit exists in the absence of evidence, particularly as our tools and reasoning are improving all the time.


We can be pretty sure there is a limit. We're physical creatures in a (roughly) deterministic world, so ultimately we're bound by Rice's theorem just as computers are.

Can the human brain in principle be simulated by a (deterministic) algorithm? Seems to me the answer is obviously yes, as we are 'merely' extremely complex machines whose operation is governed by deterministic physical laws. If you agree, you are forced to agree that we are bound by Rice's theorem.

Whether it's ever likely to be a practical problem, is another matter.

edit: We already know there are mathematically interesting numbers so enormous they cannot be represented in our universe, let alone computed by humans or our machines. That's not really the kind of practical limit we're interested in, but it still shows a limit of a sort. TREE(3) for instance.


My perspective on results like Rice's theorem is that they tell us that the Turing machine formalism can express incomprehensible programs. I don't see that as a convincing argument that there are important or useful programs that we cannot comprehend; rather I see it as a demonstration the Turing machine model is broad, and an argument that we should look for stricter models that still admit all the programs we care about (probably based on some form of typed lambda calculus).


> My perspective on results like Rice's theorem is that they tell us that the Turing machine formalism can express incomprehensible programs.

I'm afraid I don't know what you're saying here.

> I don't see that as a convincing argument that there are important or useful programs that we cannot comprehend

I don't see that there's any way to contest it. The only way to deny that we are bound by Rice's theorem, is to deny that the human mind is algorithmic. This presumably requires the denial of determinism, which is fairly absurd.

If you're not following my argument, I'd be happy to rephrase it.

> an argument that we should look for stricter models that still admit all the programs we care about (probably based on some form of typed lambda calculus).

That offers no escape. We're still bound by Rice's theorem. Perhaps it will never be a practical issue, but the fact remains.


> If you're not following my argument, I'd be happy to rephrase it.

I'm not following what it is that you think Rice's theorem tells us.

Rice's theorem says that nontrivial properties of Turing machine programs are undecidable, i.e. for a given property, there will be Turing-machine programs for which we can't tell whether they have that property or not.

That doesn't tell us anything about what is possible for programs, or put any limits on programs that we do understand. It just says that in the Turing machine model there must exist programs that we don't understand. Fine. Who cares?
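
(For anyone following along, the diagonal trick underlying all this is short enough to sketch as code; the 'halts' below is assumed for the sake of contradiction and obviously cannot actually be written.)

    type Program = String   -- a program's source text

    -- Suppose, for contradiction, a total halting decider existed:
    halts :: Program -> Program -> Bool   -- does program p halt on input x?
    halts = undefined                     -- assumed; cannot really exist

    loopForever :: ()
    loopForever = loopForever

    -- diag, given a program text p, halts exactly when p does NOT halt on itself:
    diag :: Program -> ()
    diag p = if halts p p then loopForever else ()

    -- Apply diag to its own source text d: if halts d d answers True, then
    -- diag d loops forever; if it answers False, diag d halts. Either answer
    -- is wrong, so no total halts exists. Rice's theorem extends the same
    -- trick from halting to every nontrivial semantic property.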


Sounds like we're completely agreed.


seL4 used Haskell to create a model, which then served as the specification for the formal verification process [1][2].

[1] https://dl.acm.org/doi/pdf/10.1145/1159842.1159850

[2] https://www.sigops.org/s/conferences/sosp/2009/papers/klein-...
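
For flavour, this is the kind of thing an executable Haskell model buys you (a made-up toy, nothing like the real seL4 model): kernel behaviour written as ordinary pure functions that you can run and test, and later translate into Isabelle/HOL to prove that the C implementation refines it.

    import qualified Data.Map as Map

    -- Toy "capability space": which rights a thread holds on which object.
    newtype CapRef = CapRef Int deriving (Eq, Ord, Show)
    data Rights = Rights { canRead :: Bool, canWrite :: Bool } deriving Show
    type CSpace = Map.Map CapRef Rights

    -- An executable specification of a write attempt: it is permitted
    -- only if the thread holds an explicit write capability.
    mayWrite :: CSpace -> CapRef -> Bool
    mayWrite cspace ref = maybe False canWrite (Map.lookup ref cspace)

Because the model is ordinary Haskell, it can be exercised long before any C exists, which is roughly the workflow those papers describe.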


Sure...what else ;)


Very interesting and useful.

However, does anyone have experience with obtaining RISC-V server hardware at a commercially useful scale, i.e. more than one unit? Back when ARM servers were all the rage, there were lots of announcements, but almost never products one could order. Is RISC-V any better in that regard?


I don't think RISC-V servers are anywhere near that level of development.


Competitive RISC-V general purpose servers are probably about 10 years off. Special applications will happen first.

I'd note that ARM servers are finally becoming competitive in specific use-cases. The AWS Graviton2 CPU is useful for many apps.


From what I gather, the future should look like this:

- You choose the RISC-V design first.

- From that design, you choose the fab that will produce it. Each fab has its own price and trustworthiness.


The 3 big fabs that you'd like to use are fully booked until 2023 or so. So you kind of forgot the "wait 4 years" part.


Maybe, but that would mean building our own hardware. I don't think anyone here would be comfortable with that; we are not big enough for such an endeavour to make any sense at all. We'd rather buy off-the-shelf hardware, preferably from a selection of vendors...


Sure, we will fab your design. How many units do you need? We can manufacture 2,000,000 for you. Oh, you don't need that many? Is 250,000 low enough?


I do not think there is anything to obtain yet. The only two out-of-order designs that I am aware of are Berkley's BOOM and Alibaba's Xuantie 910. And then you also need a complete and standardized platform, i.e. chipsets, firmware interfaces, etc. There are also a few standard extensions, including "Packed-SIMD", "Vector" and "Transactional Memory", that are still in a draft state.


> Berkeley


Buying a handful of units at list price isn't business scale.


Yes. I just meant that when we tried to obtain a few ARM servers, the only quantity we ever got was "1 sample machine". Thus we couldn't even really try it out ourselves, let alone have customers do meaningful tests on a handful of machines.

Of course in "a few datacenters full of"-like quantities there was just no chance of getting anything. And of course, for making our own hardware (you can get ARM chips, just not full systems) we don't have the expertise, size and will.


Sure, but you need a handful of units to evaluate them properly. Availability of units is a huge concern and (imo) the largest barrier to broader adoption of non-x86 servers; i.e., no sane person would bet their operations on a single server vendor that can't reliably supply compatible units when needed. At least, not willingly.


Your product is worthless if nobody is using it.


The linked paper on limitations of mainstream ISAs for Spectre mitigation is super interesting.


Is there an OS built around seL4?


Not anything you would call general purpose yet; seL4 has been used in several embedded systems though.

The closest to a real desktop OS would be Sculpt, which is based on Genode, which in turn can run on top of several different flavors of L4 (plus a few non-L4 kernels). Sculpt however is still closer to a fancy hypervisor than a full OS. It can run several unix-like command line tools and has a Qt based GUI, but you will run VMs in it to get any useful work done.


Genode.


Though it's not built around seL4 specifically. It supports a wide range of mostly L4-based microkernels, as well as some other kernels, e.g. Linux.

Still, it's a seL4 OS if you want it to be one.


I still don't understand why seL4 wasn't chosen as the microkernel for Fuchsia, and why they chose to use their self-developed Zircon instead.


They didn't start from scratch; Zircon is based on LK, which they already used in Android.


Say what now?

[Digging]

https://github.com/littlekernel/lk/wiki/Introduction:

> LK is the Android bootloader and is also used in Android Trusted Execution Environment - "Trusty TEE" Operating System.

> Newer Android phones have some chance of LK running all the time alongside Linux.

TIL!


I think seL4 didn't yet have multicore support when Zircon was forked from LK; could be wrong.


Thanks. That seems reasonable.



