Rosetta 2 creator leaves Apple to work on Lean full-time (linkedin.com)
401 points by ladberg 4 days ago | 134 comments





This is me! Didn’t expect to see this on here, but I’m looking forward to working with everyone else at the Lean FRO and the wider Lean community to help make Lean even better.

My background is in mathematics and I’ve had an interest in interactive theorem provers since before I was ever a professional software engineer, so it’s a bit of a dream come true to be able to pursue this full-time.


Thank you! My work laptop is an M4 MacBook Pro, so I really appreciate the beauty of Rosetta. Thank you for the effort!

I just checked your LinkedIn and realized you have been at Apple since 2009 (with a one-year detour to Mozilla). You also graduated from Waterloo as a Pure Math graduate student (I absolutely love Waterloo, the best Math/CS school IMO in my country - at the age of 40+ I'd go without a doubt if they accepted me).

May I ask, what is the path that led you to the Rosetta 2 project? I even checked your graduate paper ( https://uwspace.uwaterloo.ca/items/4bc518ca-a846-43ce-92f0-8... ), but it doesn't look like it's related to compiler theory.

(I myself studied Mathematics back in the day, but I was not a good student and I studied Statistics, which I joked was NOT part of Mathematics, so I didn't take any serious Algebra classes and understand nothing of your paper)


> May I ask, what is the path that led you to the Rosetta 2 project?

The member of senior management who was best poised to suggest who should work on it already knew me and thought I would be the best choice. Getting opportunities in large companies is a combination of nurturing relationships and luck.


Btw, follow-up question, and don't take this the wrong way at all, but what is impressive is that someone with a mathematical background worked on something that seems to be one of the pinnacles of software engineering: a translator working at the binary level that creates executables interacting directly with the OS. Did you also double-major in CS back in school? Or did you pick up the knowledge afterwards? Yeah, it seems like a long list: operating systems, compilers, computer architecture, UNIX/macOS system calls and internals...

... not to mention all the performance considerations and optimizations, also requiring a strong sense of algorithms and computational complexity. Wow!

Yeah, it seems like most mathematicians (and physicists) I know who go into tech don't get past learning a couple of programming languages and don't have an interest in learning the depths of how a computer works. Very impressive!


I had an interest in programming at an early age. My dad would always bring home the old computer magazines from the IT department at work and I would pore over them. I got a bit obsessed with MIT AI lab myths in books like Levy’s Hackers. In a stroke of luck, I found a copy of SICP at the local bookstore in middle school and kept struggling through it.

I originally wasn’t going to go to university, but my parents suggested I go for CS. I transferred into Pure Math in my first term after the intro Java programming course asked us to implement tic-tac-toe without using arrays.

Basically all of the low-level programming and systems stuff was learned on the job, but it helped that my first job at Apple was working on WebKit’s interpreter (and later JIT), coming out of a Google Summer of Code doing the same thing. One of my coworkers on that project was an alumnus of the original Rosetta from Transitive, and he later ended up managing the group doing the transition to Apple silicon on the SWE side (I was part of HW Technologies). An interesting example of how things loop back in the industry.


I don't think I have ever heard of the SICP book, as my school split all the different subjects it covers into separate courses. But I looked it up and wow, it goes into a lot of advanced subjects (recursion, trees, Big O and computational complexity, data abstraction, objects, and it ends with compilers). That is some survey, and it's amazing you were able to work through it in middle school - I was just learning algebra and the history of the early Americans back then. Incredible.

It now makes sense that you were able to work on Rosetta 2 mainly on your own!


Can relate to this too. I tried to read SICP but dropped it after a couple of chapters because I couldn't wrap my head around some of the exercises and the ' symbol.

Eventually figured out I'm more into comp arch, assembly language and such instead of the more Math part of CS.


Yeah, higher-level math has got to be one of the hardest subjects a person can study. I find it quite challenging too :)

I can relate with introductory CS courses. I switched to Statistics after the first course turned out to be a Java class and the prof complained that back in his time no one gave introductory programming classes, as they expected the students to teach themselves.

I kinda agreed with him.


Thank you for the information! I'm sure your skills are well trusted.

Waterloo really is the best CS school in the world.

I have never been there; what do you consider to be its speciality compared to, say, MIT and Berkeley?

Didn’t go there but worked with a lot of great folks who did. The main thing I think is that they require their students to get industry experience before they graduate.

I don’t remember the actual requirement, unfortunately.


Thanks, is it something different from co-op? I checked their graduation requirements and it is indeed interesting:

> Students admitted at the 1A level (except for those in Business Administration and Computer Science double degree), will normally have eight academic terms and six work terms.

Six work terms seem like a LOT. Most co-ops I know of are just a few summers.


Belgium, London, ABBA, Canada, or San Dimas? Why better than others?

lol, pretty sure it is Waterloo Canada / Ontario. People like to “idolize” their Alma Mater.

Rosetta 2 is easily one of the most technically impressive things I've seen in my life. I've done some fairly intense work applying binary translation (DynamoRIO) and Rosetta 2 still feels totally magical to me.

Thanks. It means a lot coming from someone with experience in our niche field.

Were you really _the_ creator of Rosetta 2? How big was the team, and what was your role in it?

Rosetta 2 is amazing, I'm genuinely surprised this is the work of just one person!


I was the only person working on it for ~2 years, and I wrote the majority of the code in the first version that shipped. That said, I’m definitely glad that I eventually found someone else (and later a whole team) to work on it with me, and it wouldn’t have been as successful without that.

When people think of a binary translator, they usually just think of the ISA aspects, as opposed to the complicated interactions with the OS etc. that can consume just as much (or even more) engineering effort overall.


As someone frustrated in a team of 10+ that is struggling to ship even seemingly trivial things due to processes and overheads and inefficiencies, I would really appreciate some insight into how you organize the work to allow a single developer to achieve this.

How do you communicate with the rest of the organization? What is the lifecycle and release process like? Do you write requirements and specs for others (like validation or integration) to base their work on? Basically, what does the day to day work look like?


Well, the first thing to realize about scaling codebases with developers is that an N developer team will usually produce a codebase that requires N developers to maintain. So by starting small and staying small until you reach a certain critical mass of fundamental decisions, you can avoid some of the problems that you get from having too many developers too early. You can easily also fall into the reverse trap: a historical core with pieces that fit too well together, but most of the developers on the team don’t intuitively understand the reasons behind all of the past decisions (because they weren’t there when they happened). This can lead to poorly affixed additions to a system in response to new features or requirements.

As far as Rosetta in particular was concerned, I think I was just in the right environment to consistently be in a flow state. I have had fleeting moments of depression upon the realization that I will probably never be this productive for an extended period of time ever again.


Thank you for what you did with Rosetta 2. It is outstanding.

On your last point, I’ve felt something like that myself and I hold onto hope that it isn’t true for myself (and now for you in your future endeavors). But even if it is true, you achieved something superhuman in your niche and the vast majority of people throughout the history of time have no idea what that is like. Tasting Heaven cannot last too long while on Earth. Maybe AI will bring us a little bit closer to that Heaven.


Thanks for sharing. Do you have an estimate of the LoC? I know it's a BS indicator, but I'm just curious. I'd imagine it's something difficult but not too large.

>How do you communicate with the rest of the organization?

I wonder if Apple's renowned secrecy is a help with this. If nobody outside your small team knows what you are doing then it is hard for them to stick their oar in.


For the record I was interning on Cameron's team while he worked on Rosetta 2 and didn't even know myself what he worked on (the rest of the team and I were working on something else). I only found out later after it was released!

Apple is like this, I have seen plenty of instances where you have one person carrying a team of 5 or more on their back. I always wonder how they manage to compensate them when it’s clear they are getting 10x more done. Hopefully they get paid 10x, but something tells me that isn’t true.

Maybe getting interesting work is a better perk than $$, especially when Apple is already paying top dollar?

I'd imagine a lot of people are willing to do things for free.


When I was consulting I saw that everywhere. A team of ten people would have one or two primary contributors and often one person who had a negative impact on productivity.

That's super impressive. I remember being astonished that the x86 executable of Python running through Rosetta 2 on my M1 was just a factor of 2 slower than the native version.

QEMU was something like a factor of 5-10x slower than native, IIRC.


QEMU probably had to account for differences in memory models. A fork with that stuff removed might be able to easily catch up.

QEMU loses a bit from being a generic translator instead of being specialized for x86->ARM like Rosetta 2, Box64 or FEXEmu. It does a lot of spilling for example even though x86 has a lot fewer registers than aarch64.

Flags are also tricky, though they're pretty well optimized. In the end the main issue with them is also the spilling, but QEMU's generic architecture makes it expensive to handle consecutive jump instructions for example.
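
To make the flag point concrete, here is a toy Python sketch of deferred ("lazy") flag evaluation, the general idea several emulators use so that flags are not recomputed after every ALU instruction but only when a later instruction actually reads them. All names here are made up for illustration; this is not QEMU's or Rosetta 2's actual code.

    # Remember the last flag-setting operation and its operands, and
    # materialize individual flags only on demand (e.g. at a conditional jump).
    class LazyFlags:
        def __init__(self):
            self.op = None   # last flag-setting operation ("add", "sub", ...)
            self.src = 0     # operands are remembered instead of the flags
            self.dst = 0
            self.result = 0

        def record(self, op, src, dst, result):
            # Called for every ALU instruction: cheap, no flag computation yet.
            self.op, self.src, self.dst = op, src, dst
            self.result = result & 0xFFFFFFFF

        def zf(self):
            # Zero flag, computed only when a branch reads it.
            return self.result == 0

        def cf(self):
            # Carry flag depends on which operation produced the result.
            if self.op == "add":
                return self.dst + self.src > 0xFFFFFFFF
            if self.op == "sub":
                return self.dst < self.src
            return False

    flags = LazyFlags()
    flags.record("add", 0x1, 0xFFFFFFFF, 0xFFFFFFFF + 0x1)  # 32-bit ADD that wraps
    print(flags.zf(), flags.cf())  # True True: result wrapped to zero with carry

With hardware support for x86-style flag semantics, most of this bookkeeping disappears, which is part of why a specialized translator can come out ahead.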


I found this blog post reverse engineering Rosetta 2 translated code: https://dougallj.wordpress.com/2022/11/09/why-is-rosetta-2-f...

Interesting. Yeah, being able to use Arm flags always is probably a big thing, since they even added hardware support for that.

It's a huge achievement for a single person to have written most of that.


> It's a huge achievement for a single person to have written most of that.

QEMU was mostly Fabrice Bellard by himself at the beginning, and plenty of emulators are single-person projects.

It’s a field which lends itself well to single-person development. How to properly architect a compiler/interpreter/emulator has been studied to death and everyone mostly uses the same core principles, so there is little guesswork as to how to start (provided you have taken the time to study the field). If you are ready to do the work, you can reach a working translator from hard work alone. Then, the interesting work of optimising it starts.

Don’t get me wrong, Rosetta 2 is a very impressive achievement because the performance is really good. I tip my metaphorical hat to whoever did it. My post is more in the spirit of: you can do something in the same ballpark too if that’s your kick.


That is fascinating that this amazing system was the work of largely one person. You mentioned that interacting with the OS was super difficult. What were the most enjoyable aspects of building Rosetta?

I am also amazed that this was the work of largely one person. Having seamless and performant Rosetta 2 was a major factor why the Apple transition from Intel to Apple Silicon was viable in the first place!

It's a shame that Apple's stated intent is to throw the project away after a while. Personally, I really hope it sticks around forever, though I'm not optimistic.

Rosetta 2 can't go away until Apple is ready to also retire Game Porting Toolkit. At most, they might drop support for running regular x86 macOS applications while keeping it around for Linux VMs and Windows applications, but that would be pretty weird.

In principle, the Linux Rosetta binaries should remain usable well into the future. Even if Apple discontinues support for Rosetta in their VMs, there's very little (beyond a simple, easily removed runtime check) preventing them from being used standalone.

AFAIK Linux Rosetta does not work standalone but uses some channel to exchange x86 and ARM binary code between the Linux guest and the macOS host. The actual translation happens in macOS.

You'd think so, but no. With a patch to remove the runtime check, Rosetta works on Asahi Linux, with no macOS kernel present at all.

The kernel could drop support.

> game porting toolkit

I don't understand why Apple even bothers these days, I wouldn't be surprised if Apple's gaming market is a quarter of what the Linux gaming market currently is (thanks to Valve and their work on proton and by extension wine)...


Because people want to use their fancy new hardware to play games? Linux market share wouldn’t be increasing so fast if Valve hadn’t done the work, so why shouldn’t Apple do the same?

> so why shouldn't Apple do the same?

If Apple truly cared, they would stop blocking older games from being run on newer versions of OSX...


They don't really block games though. It's more like they don't want to maintain the roads the games need to run on. Transitioning to ARM wasn't possible if they had to support two x86 ABIs and an extra 32-bit ARM ABI. Throw in another migration and you have an untestable number of legacy combinations.

I suspect this was a project spearheaded by some clever geeks deep in the company and promoted upwards by management. Not a top-down initiative.

Where did Apple state that Rosetta 2 was to be deprecated?

I think they're assuming from the past that this will happen. When Apple moved from PowerPC to x86 there was Rosetta 1. It got deprecated as well.

The first Rosetta was based on licensed technology, used at a time when Apple was still pinching pennies.

It made financial sense to stop paying the licensing fee to include it in each new version of the OS as quickly as possible.

There is no financial incentive to remove the current version of Rosetta, since it was developed in-house.


Thanks didn’t know that.

It was interesting tech, licensed by Silicon Graphics, Apple, IBM, and Sun. IBM ended up buying out the company that brought it to market.

https://www.wikipedia.org/wiki/QuickTransit


I think it is different this time. A lot of developers use Rosetta 2 for Linux to run x86-64 Linux Docker containers under macOS (including me). They'll be upset if Apple discontinues Rosetta 2 for Linux. By contrast, once the PPC-to-Intel transition was under way, Rosetta was only used for running old software, and as time went by that software became increasingly outdated and use of it declined. While I think Rosetta 2 for macOS usage will likely decline over time too, I think Rosetta 2 for Linux usage is going to be much more stable and Apple will likely maintain it for a lot longer. Maybe if we eventually see a gradual migration of data centres from x86-64 to ARM, Rosetta 2 for Linux usage might begin to also decline, and then Apple may actually kill it. But, if such a migration happens, it is going to take a decade or more for us to get there.

I just pointed out what happened in the past. I have no clue whether Apple will deprecate it or what reason they would put forward for doing so. I personally like the fact that I can run both ARM and x86 binaries. But judging by Apple, I think that if they don't have their own reason to support Linux (they also use it for their services), they will remove it. That said, deprecated doesn't mean it will be removed anytime soon. Apple keeps APIs and frameworks as long as they don't interfere with something else.

What's the advantage of running x86-64 Linux Docker containers over running ARM Linux Docker containers? Aren't most distributions and packages available for both platforms?

Microsoft SQL Server is only available as an x86-64 Docker container binary. They actually had a native(?) arm64 container under the name "azure-sql-edge", which was (and still is) super useful as you can run it "natively" in an arm64 QEMU Linux VM, for example, but alas that version was not long-lived, as Microsoft decided to stop developing it, which feels like a huge step backwards.

https://techcommunity.microsoft.com/blog/sqlserver/azure-sql...

There's probably other closed-source Linux software being distributed as amd64-only binaries (Rosetta 2 for Linux VMs isn't limited to Docker containers).


Lots and lots of Docker images are only linux/amd64 compatible. Without Rosetta 2 I wouldn't be able to do my job, especially in a team with a mix of Mac and Linux workstations and most images being built as linux/amd64 only.

The advantage is the fact that they exist. Not every Docker container is built for multiple platforms.

Some images are still only available for amd64, like Oracle databases. Even if there is an arm64 image of a recent version of the app, it may not exist for older versions that you want to test against.

It is my experience that it is easier to create good quality things as an individual than as a team. Especially for the core of a product. Also look at Asahi.

However, to really finish/polish a product you need a larger group of people. To get the UI just right, to get the documentation right, to advocate the product, to support it.

It is easily possible to have 10 people working on a team with only a single core person. Then find someone to act as product manager, so that as the core person you can focus on the core of the product while still setting the direction, without having to chase all the other work.

It is possible, but not easy to set up in most organisations. You need a lot of individual credit/authority and/or the business case needs to be very evident.


Do you have book recommendations in regards to disassembly, syscalls, x86/64 assembler etc?

What do I need to know to be able to build something as advanced as Rosetta?

I am assuming that you reimplemented the syscalls for each host/guest system as a reliable abstraction layer to test against. But so many things are way beyond my level of understanding.

Did you build your own assembly-level debugger? What kind of tools did you use along the way? Were reversing tools useful at all (like Ghidra, Binary Ninja, etc.)?


"Virtual Machines: Versatile Platforms for Systems and Processes" by Jim Smith and Ravi Nair is a great book on the topic.

Thank you. Other than papers, I think this is one of the rare books that talk extensively about dynamic recompilation. I was hoping to learn more about the PPC M68K emulator (early version interpreter style and later version dynamic recompilation style) and definitely will read it.

Out of curiosity, if you’ve been at a FAANG since at least 2009, have you ever retired or taken a “sabbatical” for a year or two? You would have made enough money to retire and live passively on amounts similar to your annual compensation, and taxed far better.

Just curious how the decisions formed. It's totally fine if FAANG, or specifically Apple, was fulfilling for you; I also wonder if it's financial fear to an irrational extent, just because I see that on Blind a lot.


I did go to Mozilla Research to work on Servo/Rust for a bit in 2015, which didn’t turn out to be the best decision.

I always assumed that I would stick around at Apple until some singular event that would motivate me to quit, and that would be it. I have been so lucky at Apple to have been in the right place at the right time for several projects: relatively early iPhone team, original iPad team, involved in the GCC -> Clang transition, involved in the 64-bit ARM transition, involved in early Apple Watch development, first engineer working full-time on the Apple silicon transition for the Mac, etc. Obviously I was doing something right if I kept getting these chances, but if I went to another FAANG I wouldn’t have the same history, and I don’t think it would be the same experience.

My projected path to parting ways with Apple didn’t really take place, since I’m now working at a non-profit dedicated to developing an interactive theorem prover and left Apple without any animosity in either direction.


It would be incredible if you could write a book someday about all those experiences. I would very happily buy that book.

Thanks for all that incredible work and your insights here.


Wow that's really a ton of memorable experience. I hope you write a book or some blog posts or do an interview.

Sorry to hijack the discussion, but do you see any chance of consolidating the theoretical framework of real numbers with practical calculations on floats? That is, if I prove the correctness of some theorem for real numbers, ideally I would just use that as the algorithm to compute things with floats.

Also, I was shocked to learn that the simple general comparison of (the equality of) two real numbers is not decidable, which is very logical if you think about it but an enormous hindrance for practical applications. Is there any workaround for that?


You can use floats to accelerate interval arithmetic (which is "exact" in the sense of constructive real numbers) but that requires setting the correct rounding modes, and being aware of quirks in existing hardware floating point implementations, some of which may e.g. introduce non-exact outputs in several of the least significant digits, or even flush "small" (for unclear definitions of "small", not always restricted to FP-denormal numbers) results to zero.

Equality is not computable in the general case, but apartness can be stated exactly. For some practical cases, one may also be able to prove that two real numbers are indeed equal.
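
As a rough illustration of the interval-arithmetic point, here is a minimal Python sketch of my own (not tied to any particular library); it widens bounds with nextafter as a stand-in for actually switching the FPU rounding mode:

    import math  # math.nextafter requires Python 3.9+

    def around(v):
        # Widen a float literal into an interval that certainly contains the
        # intended real value (the literal itself is already rounded).
        return (math.nextafter(v, -math.inf), math.nextafter(v, math.inf))

    def add(a, b):
        # Round the lower bound down and the upper bound up so the result
        # still encloses the exact real sum.
        return (math.nextafter(a[0] + b[0], -math.inf),
                math.nextafter(a[1] + b[1], math.inf))

    x = around(0.1)
    y = add(x, around(0.2))
    print(y)  # an interval guaranteed to contain the real number 0.1 + 0.2

Two reals represented this way can be shown to be apart as soon as their enclosing intervals separate at some precision, but no finite amount of refinement can ever confirm equality, which is exactly the undecidability mentioned above.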


This is exciting!

Given your experience with Rosetta 2 and your deep understanding of code translation and optimization, what specific areas in Lean’s code generation pipeline do you see as ‘low-hanging fruit’ for improvement?

Additionally, which unique features or capabilities of Lean do you find most promising or exciting to leverage in pushing the boundaries of efficient and high-quality code generation?


What was the tipping point that made you want to work on Lean?

I don't think there was a single tipping point, just a growing accumulation of factors:

- the release of Lean 4 slightly over a year ago, which impressed me both as a proof assistant and a programming language

- the rapid progress in formalization of mathematics from 2017 onward, almost all of which was happening in Lean

- the growing relevance of formal reasoning in the wake of improvements in AI

- seeing Lean's potential (a lot of which is not yet realized) for SW verification (especially of SW itself written in Lean)

- the establishment of the Lean FRO at the right time, intersecting all of the above


How does Lean compare with Coq? (I am not familiar with Lean but am familiar with Coq)

Mario Carneiro’s MS Thesis has a good overview of the type theory and how it compares to Coq: https://github.com/digama0/lean-type-theory/releases/downloa...

Thank you!

Can Lean do what TLA+ does - model check thorny concurrency problems?

Surprised you didn't go into something AI-adjacent

I don't know what his reasons are but it makes sense to me. Yes, there are incredible results coming out of the AI world but the methods aren't necessarily that interesting (i.e. intellectually stimulating) and it can be frustrating working in a field with this much noise.

I don't want to come across as too harsh but having studied machine learning since 2015 I find the most recent crop of people excited about working on AI are deep in Dunning-Kruger. I think I conflate this a bit with the fascination of results over process (I suppose that befuddlement is what led me to physics over engineering) but working in ML research for so long it's hard to gin up a perspective that these things are actually teleologically useful, and not just randomly good enough most of the time to keep up the illusion.

What do you mean by "things that are actually teleologically useful"?

Fellow physicist here by the way


Like useful in an intentional way: purpose-built and achieves success via accurate, parsimonious models. The telos here being the stated goal of a structurally sound agent that can emulate a human being, as opposed to the accidental, max-entropy implementations we have today.

Sounds like an arbitrary telos, especially in a world where one of the most useful inventions in human existence has been turning dead dinosaurs into flying metal containers to transport ourselves great distances in.

Every goal is equally arbitrary, I'm speaking to the assumed ideology of the AI fanatics.

Is a guide dog teleologically useful?

Not if you’re taste testing ceviche

I see, so humans are also not usefully intelligent in an intentional way, because they also follow the 2nd law of thermodynamics and maximize entropy and aren't deterministic?

Pure, refined “but humans also”.

What do you mean by "Pure, refined"?

You're right that "but humans also" is better than my "and humans also"


Not OP, but I'm assuming he means that they are maddeningly black-boxy, if you want to know how the sausage is made.

I feel that way sometimes too.

But then I think about how maddeningly unpredictable human thought and perception is, with phenomena like optical illusions, cognitive biases, and limited working memory. Yet it still produces incredibly powerful results.

Not saying ML is anywhere near humans yet, despite all the recent advances, but perhaps a fully explainable AI system, with precise logic, 100% predictable, isn’t actually needed to get most of what we need out of AI. And given the “analog” nature of the universe maybe it’s not even possible to have something perfect.


> But then I think about how maddeningly unpredictable human thought and perception is, with phenomena like optical illusions, cognitive biases, a limited working memory.

I agree with your general point (I think), but I think that "unpredictable" is really the wrong word here. Optical illusions, cognitive biases and limited working memory are mostly extremely predictable, and make perfect sense if you look at the role that evolution played in developing the human mind. E.g. many optical illusions are due to the fact that the brain needs to recreate a 3-D model from a 2-D image, and it has to do this by doing what is statistically most likely in the world we live in (or, really, the world of African savannahs where humans first evolved and walked upright). Thus, it's possible to "trick" this system by creating a 2-D image from a 3-D set of objects that is statistically unlikely in the natural world.

FWIW Stephen Pinker's book "How the Mind Works" has a lot of good examples of optical illusions and cognitive biases and the theorized evolutionary bases for these things.


Lean is AI adjacent.

Only because the AI people find it interesting. It's not really AI in itself.

If you’re interested in applications of AI to mathematics, you’re faced with the problem of what to do when the ratio of plausible proofs to humans that can check them radically changes. There are definitely some in the AI world who feel that the existing highly social construct of informal mathematical proof will remain intact, just with humans replaced by agents, but amongst mathematicians there is a growing realization that formalization is the best way to deal with this epistemological crisis.

It helps that work done in Lean (on Mathlib and other developments) is reaching an inflection point just as these questions become practically relevant from AI.


It's not AI in itself, but it's one of the best possibilities for enabling AI systems to generate mathematical proofs that can be automatically verified to be correct, which is needed at the scale they can potentially operate.

Of course it has many non-AI uses too.


Proof automation definitely counts as AI. Not all AI is based on machine learning or statistical methods, GOFAI is a thing too.

If you want to have superhuman performance like AlphaZero series you need a verifier (valuation network) to tell you if you are on the right track. Lean (proof checker) in general can act as a trusted critic.

They do have AI on their roadmap, though: https://lean-fro.org/about/roadmap-y2/

Seems more like applying Lean to AI development, no?

Partially, I guess, but also: "We will seek to provide tooling, data, and other support that enables AI organizations and researchers to advance Lean’s contribution at the intersection of AI, math, and science."

It's not ML, but it is AI

This is *VERY* AI-adjacent... the next batch of AI algos will need to integrate reasoning through theorem provers to go next level

In a previous discussion, the name of Gary Davidian was mentioned; he also — initially single-handedly — did amazing work on architecture changes at Apple. There’s an interview with him in the Computer History Museum archive.

https://news.ycombinator.com/item?id=28914208

https://youtu.be/MVEKt_H3FsI?si=BbRRV51ql1V6DD4r


From the wiki, it looks like Davidian's emulator perhaps used interpretation, as the wiki says Eric's used dynamic recompilation, and Connectix's was even faster, so maybe more optimized.

I tried to find the source code of any of them, without success.


All this stuff is or was proprietary, closed-source tech, and what's more, it was tech that gave certain companies strong competitive advantage at particular points in time -- so they had strong incentives to make sure it did not leak.

(I see posters in this thread who do not know what I thought were well-documented parts of the story, so I am trying to spell out the context here.)

Some large, reputable companies have histories of stealing others' code, ideas, implementation methods, algorithms, etc. and passing them off as their own. IBM, Microsoft, Sun, Apple, Google, Oracle, Digital Research, Lotus -- all were dominant players, all were so accused. Most either backed down, re-wrote, or re-implemented to avoid being sued.

Microsoft more than almost anyone, and it only thrived because it was able to pay other companies off, or simply wait for them to go broke.

Sometimes, how code works can be deduced simply by studying what it does. I worked out how Rsync worked because someone asked me to explain what it did in detail.

PowerQuest's PartitionMagic was amazing, black-magic tech when it came out. I didn't review v1 because I did not believe what the packaging said; when I reviewed v2, a reader wrote in accusing my employers of doing an elaborate April Fool's joke and pointed out that my name is an anagram of APRIL VENOM.

(If I ever branch out into fiction, that's my pseudonym.)

Now, the revolutionary functionality of PartitionMagic is just an option in one screen of some installation programs. It's valueless now. Once people saw it working, they could work out how it was done, and then do it, and it ceased to have value.

Very fast emulation is not such a thing. Setting aside sheer Moore's Law/Dennard scaling brute horsepower, efficient emulation during the short window of processor architecture transitions is a massive commercial asset.

Apple has done it 3 times between 4 architectures.

68000 -> PowerPC
PowerPC -> x86
x86-64 -> Arm64

Nobody else has ever done so many.

IBM bought Transitive for QuickTransit, but it's not clear how it used it. Its major architecture change was IBM i. Originally OS/400 on AS/400, a derivative of the System 36 minicomputer, it successfully moved this to POWER servers. However, there is a translation layer in the architecture, so it didn't need Transitive for that.

But IBM has bought many radical tech companies and not used the tech. E.g. Rembo, an amazing Linux-based boot-time network-boot fleet deployment tool that it never really commercialised.

Microsoft bought Connectix for VirtualPC, kept the disk formats and management UI and threw away everything else, because Intel and AMD bundled the core virtualisation tech.

I know a little of the binary translation tech because the man who wrote it flew across the Atlantic for me to interview him.

All thrown away, but today, it's valueless anyway.

At the time, though, very valuable.


Thanks for sharing the context! I'm mostly curious as I hope to walk along the path of my predecessors. I have a bit of a fetish for programming in assembly language, so emulation/VMs is one of the interesting fields -- and after writing a couple of simple interpreter-style emulators it's natural to go down the dynamic-recompilation rabbit hole.

Apple caught my eye exactly because of what you said: it went through 3 big transitions in a relatively short period of time (about 25 years), so everyone involved is still active in the profession. And I'm sure they used dynamic translation extensively. I agree that all of it is going to be, or was, thrown away, but it is interesting to know how they did it.

BTW, April VENOM is a pretty good pseudonym, somehow reminding me of "Raul Bloodworth", the pen name of CSM from The X-Files.


IMHO the most interesting thing happening in recent years in this area is Emu68:

https://github.com/michalsc/Emu68


Interesting. Didn't know about it. Now let me grab the earliest version and see if I can read the code...thank you!

The LinkedIn back button is weird. Instead of going back to HN when you press back, it goes to the LinkedIn homepage.

It's not weird, it's just disgusting. The back button should go back.

What is Lean FRO?


Yeah that really doesn’t help.

FWIW, not having done a proof since about 1982, it doesn't help me either.

we aim to tackle the challenges of scalability, usability, and proof automation in the Lean proof assistant https://lean-lang.org/

Yep. Truly a mystery.


~~Nice addition of a link that doesn't exist in the actual "quoted" source. If it did exist it would certainly be helpful though, so thanks for adding it I guess.~~

EDIT: Apparently their website design is just so poor that the clickable links are identical to the non-clickable plain text. That link is a clickable word, if you happen to guess that you can click on some of the apparent plain text.


Lean is a currently-niche programming language / proof-assistant. A proof assistant is basically a tool to construct mathematical proofs, which verifies that the proofs are correct like how a compiler verifies types in your programs.

IIUC, a regular programming language with a certain set of restrictions already doubles as a proof assistant, as discovered by Curry & Howard. By restrictions, I mean something like how Rust forces you to follow certain rules compared to Java.
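
For a flavour of what that looks like in practice, here is a tiny Lean 4 example of my own: under Curry-Howard the proposition is a type, and the proof is just a program of that type, which Lean checks like any other term.

    -- A proof that ∧ commutes, written as an ordinary function:
    theorem and_swap (p q : Prop) : p ∧ q → q ∧ p :=
      fun h => ⟨h.right, h.left⟩

    -- The same checker also verifies plain computational facts:
    example : 2 + 2 = 4 := rfl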


Background about the organization: https://en.m.wikipedia.org/wiki/Convergent_Research

Their proof assistant / programming language: https://en.m.wikipedia.org/wiki/Lean_(proof_assistant)



There are a lot of broken links in the docs. Like most of the feature links.

There's a completely new language reference in the process of being written: https://lean-lang.org/doc/reference/latest/ (by David Thrane Christiansen, co-author of The Little Typer, and Lean FRO member)

Some links here seem to be broken at the moment — and David's currently on vacation so they likely won't be fixed until January — but if you see for example https://lean-lang.org/basic-types/strings/ it's supposed to be https://lean-lang.org/doc/reference/latest/basic-types/strin...


That answers the Lean part, FRO stands for Focused Research Organization

That doesn't say much... Research on what? It looks like Lean is a programming language, but everything else is pretty abstract to me.

"we aim to tackle the challenges of scalability, usability, and proof (Mathematics) automation in the Lean proof assistant."

How is the Lean non-profit getting funded to be able to afford such great devs? How does that work in general?

I am pretty sure “lean” is that codeine cough syrup rappers drink


The audio on this is about the worst I’ve ever heard on YouTube. I fast forwarded and at least he stops playing that loud music over his quiet voice, but damn.

He gets off topic a lot (bullies, amphetamine salts??) and spends the entire time talking to the commenters not the video recording.

Surely, there’s a better video out there than this.


Apple just seems to be bleeding talent left and right. I wonder what's going on over there to cause people to leave when the job market is as uncertain as it is right now.

The 20th million doesn't hit as hard as the 19th, and when you make 2x your salary on the dividends from your stock, you start to wonder why not just do something more interesting.

Sometimes people just want to work on cool stuff and have the luxury of being able to do that. Rosetta 2 is shipped and done.

You could have posted this in 1985 and been right. Talented people have options.

Citation needed?

I mean, there will always be long tenured people leaving, even without offers on the table

Some jobs get old eventually




