Hacker News
Keeping POWER relevant in the open source world (czanik.hu)
110 points by pabs3 on Jan 22, 2022 | 72 comments



IBM has had this argument internally for eons.

When I was there, two decades ago, people were making internal presentations saying the same thing: "without affordable volume machines that random people can buy and use to develop, this ecosystem will fail"

Then, like now, there were even organizations/companies people thought might eventually take care of doing this that weren't IBM (Spoiler alert: They didn't)

Given that it has been at least two decades, I'm not sure whether that makes the doomsayers wrong or right.

But I am sure of one thing: IBM didn't care to do it then, and doesn't seem likely to start caring to do it now.


Warning: rant (sorta)

I remember discussing with an IBM representative at one of the last CeBIT shows (was it 2016?) that the community who did open source development on non-x86 were gravitating away from POWER/PowerPC towards ARM. The affordable hardware was a major selling point.

His offered "solution" was that IBM granted free access to its POWER cloud. But that is of course not interesting for someone who runs a home lab and works on Linux distros.

And even so, the blog post still does not aim where it needs to aim. Raptor systems are far from "affordable" for your average CS undergraduate or hobbyist, and the Power Pi's $250 target price is still off by almost an order of magnitude. The $250+ market is served by second-hand POWER servers, which are OK for home labs. Unfortunately, Ubuntu and IBM in their infinite wisdom decided to drop POWER8 support from upcoming releases, which will drive home labs and the open source community even further away from Power.

https://www.phoronix.com/scan.php?page=news_item&px=Ubuntu-2...


>His offered "solution" was that IBM granted free access to its POWER cloud.

Yeah, the mainframe community asked IBM to open up some newer (>24-bit) systems, but IBM told them they wanted to provide the community with new "actual" software... out of that came that stupid program called "Master the Mainframe" or "IBM Z Xplore". It's unbelievable that they are so tone-deaf even to the most loyal IBM fans... IBM is the perfect example of a marketing/management, anti-engineer company (as of today); they really deserve to have a slow death.


I think Debian still supports POWER8 (and has unofficial ports for 32-bit PowerPC and for big-endian POWER).


The problem about Ubuntu dropping POWER8 support isn't that hardware owners won't find another distro to run.

The problem is that home lab owners who develop on their second-hand POWER8 hardware can no longer have the same software environment as the POWER9 systems running latest Ubuntu that they deploy to. And Ubuntu is very popular in the Cloud.


Yeah, IBM is crazy with z/OS too. Give out free z/OS developer images so everyone interested could play with it, and maybe it sparks interest in mainframes again? No: pay ~$500 every year and you need a USB dongle, and "Master the Mainframe" is only open two months a year... it's pure stupidity.


PA Semi was stopped by an acqui-hire. I don't like counterfactuals, but that was a huge transfer of expertise from Power to ARM.


I mean, this is true, but it doesn't really change a lot of what I said. It was always something :)

If your ecosystem is stopped in its tracks because someone acquired a small company, it is not a particularly robust ecosystem.

From what I was told (and I was very young and inexperienced when I worked at IBM, so take it with a grain of salt), the baseline issue was always the same on the IBM side: the chip groups controlled most of the budget for the chips, and didn't see any way that spending hundreds of millions to make developer-friendly hardware would have positive ROI. Most of their revenue was not derived from the customers they thought this might ever attract.

It's similar to why they told Apple to go pound sand repeatedly until Apple switched to Intel - Apple did not make up a really meaningful part of the business of Power chips from where they sat, so they didn't really care about giving Apple what it wanted.

The software folks involved thought this was mostly insane.


This is starting to feel a lot like the DEC Alpha, at least from the perspective of the 3rd party motherboards.

The ubiquity of x86 and ARM does not leave much room for the Power architecture in the general market.


Or is it the Power architecture that is not bringing much to the table for the price? (Except radiation-hardened chips, where money is no object.)


At the other end of the scale, if anyone wants to play you can run a little openpower CPU on an FPGA with completely open source. https://github.com/antonblanchard/microwatt

It's capable of running Linux, some example docs are https://shenki.github.io/boot-linux-on-microwatt/ The toolchain consists of "apt install gcc-powerpc64le-linux-gnu" on Debian, no funny downloads. And if you want to target a Lattice ECP5 board even the FPGA tools are also all open, thanks to Yosys and friends.

All the openpower ISA spec PDFs are available for perusing on the openpower site.


Thanks for sharing. I've looked at getting a PowerPC dev board, but didn't find anything at a reasonable price. I never made the connection that Microwatt plus an FPGA would get me there.


We've now got a walkthrough up for getting Microwatt going on an OrangeCrab ECP5 board. https://codeconstruct.com.au/docs/microwatt-orangecrab/


Don't forget that POWER10 is no longer RYF-certifiable. The RYF certification may be questionable, but it is an established bar: POWER9 was able to cross it, and POWER10 probably won't.

https://www.talospace.com/2020/08/power10-sounds-really-grea...

https://twitter.com/RaptorCompSys/status/1435510763402244105...


RYF allows various "tricks" of hiding blobs, so it is not what I'd use as the benchmark here. Raptor's standards are likely higher than what RYF would require...


I don't know whether the firmware on embedded controllers in currently RYF-certified laptops is open source, but that is the only kind of "trick" I can see that could currently be exploited to get RYF certification.

I've heard people saying things like "even a Windows device could be RYF-certified if it was in ROM", but I see nothing even close to this when I look at currently RYF-certified devices. People are probably influenced by this: https://puri.sm/posts/librem5-solving-the-first-fsf-ryf-hurd... where a Purism engineer describes a "trick" to store memory controller training firmware in ROM and run it on a "secondary processor" so the Librem 5 could get RYF-certified; but to this day the Librem 5 is still not certified, and probably won't be because of this specific issue.

Also, there are people who reasonably argue that RYF is silly because it accepts software without available source code as long as it is in ROM and only runs on a secondary processor. I currently have no counter-argument to this, and I think it would be great if the FSF explained it clearly. Nevertheless, there are some reasonable points in favor of that stance:

  - AFAIK, code is only accepted in this form for "secondary processors", which can't take over the system or compromise it,

  - having things in ROM forces manufacturers to maximally simplify it,

  - having things in ROM forces manufacturers to implement more features in software that can be checked and

  - having things in ROM forces manufacturers to be extra careful when implementing it.


I'm not saying that RYF is entirely useless, just that "new version requires firmware blobs, old one didn't" is a clearer and stronger statement than mentioning RYF IMHO.


Hmmm... I understand.

I mentioned RYF certification because the Talos II and Talos II Lite are the only RYF-certified modern systems available. Also, I bet the FSF is very strict when certifying systems, so we don't have to rely on the word of the vendor alone.


POWER10 could probably game RYF certification. Raptor is really much stricter than everyone on that topic. If you make bits of firmware non-upgradeable, I believe you can pass them as RYF-certified even if they are blobs - they are counted as part of the hardware.

Raptor didn't accept anything like that; the closest is the minimal (it's a RISC, after all) microcode mask ROM in the POWER9 core, which AFAIK effectively implements some hairy and rarely used instructions as multiple standard Power instructions. And it's such a small thing that I am not really sure it's there...


FSF RYF certification carries very little weight.

If you care about not having backdoors in your hardware, RYF is not very relevant because it allows closed source hardware and firmware blobs.



Any reference for the RYF certification?


They're referring to the FSF's "Respects Your Freedom" programme, a frankly absurd specification that encourages vendors to lock closed-source firmware into ROM rather than actually make it free. RYF doesn't care about the hardware designs themselves being open (no computer would qualify in that case), so it treats closed-source firmware in ROM as identical to closed-source hardware, which is fine. But store that same firmware in some sort of RAM, or flashable ROM, and all of a sudden it's closed-source firmware, which is bad.

The end result is that the systems that are the most locked down are the ones that "Respect Your Freedom".



Ah thanks a lot. I asked because I had not heard of that abbreviation.


My experience using Void Linux on a Raptor POWER9 system has been excellent. The software support has just gotten better and better. It would be a shame to lose this momentum.

Affordability and availability are major pain points. I've shared my excitement about OpenPOWER with many and their excitement evaporates when they are told what a modest system costs.

I am a strong believer in FOSS and fully open, owner-controlled hardware. I think this is far more important than it appears at first glance. It sounds like tinfoil-hat stuff, but the reality is we're backed into a corner. The less control owners/users have, the more vulnerable they are.

--edit typo s/that/than/..


Fedora on POWER9 for my daily driver, personally (a dual-8 Raptor Talos II). I like the architecture, having been an AIX admin since the 3.2.5 days (still have a personal POWER6 with it) and a long time Power Mac bigot, and I like Raptor's commitment to openness. My HTPC is a Blackbird.

It shouldn't be a privilege to own a fully user-auditable computer. Cost continues to be a problem, which is why I hope Microwatt gets into more things. But the cost is primarily from the motherboard, which has no economies of scale (CPU prices are actually fairly competitive), and other than the processor everything else is off-the-shelf. Fedora, Void and FreeBSD work well in my personal experience, and there are other choices.


Unrelated, but thank you for your blog! I check in occasionally, mostly looking for a reason to justify buying one of the Raptor machines at work. I really enjoy the articles on qemu/virtualization.


Hey, thanks! Very kind of you to say.


My main workstation, too, is a Raptor Power9, running Void since November 2019, and I'm very happy with it, although as the article mentions, "My experience with firmware is that open source does not mean necessarily better, rather the opposite". OpenBMC (the firmware for the little ARM-based embedded computer that boots the big one) has a number of rough edges.

From time to time I submit patches to software to make it work (or work better) on this machine, last week it was these two trivial ones:

https://github.com/ciao-lang/ciao/pull/45

https://github.com/janet-lang/janet/pull/915

Typically that’s all that’s needed, in some cases it’s just more of it:

https://github.com/oneapi-src/oneDNN/pull/767

But for most things, since it's Linux, it's just those small changes (one of my customers runs the software I write on AIX, and that's a little more work):

https://github.com/IntelRealSense/librealsense/pull/5586

https://sourceforge.net/p/speed-dreams/tickets/1022/

For Rust programs, many packages used i8 for C strings (to interface with OpenGL, for instance), which broke when running on ppc64 and had to be changed to c_char. But that's not a problem these days, I guess because many people now test on ARM.

https://github.com/femtovg/femtovg/pull/5

Before Power10 was done, IBM actually asked us Raptor users for proposals for useful machine code instructions to add to it. I replied that I'd like to have hardware UTF-8 de-/encoding, but they wanted a more detailed proposal and I never got around to writing it. I'm not even sure that this would be worthwhile, but I see UTF-8 de-/encoding everywhere in the code I write and would like it to approach memory read/write speeds.

I was very disappointed to learn that they had gone more proprietary with Power10 so I would not have been able to use those instructions anyway. What a pity! This machine still covers my needs so I’m not planning to replace it any time soon.


I don't manage the system I work on, so I don't see OpenBMC, but "a number of rough edges" sounds a lot better than the proprietary BMCs it's been my misfortune to use, which appear basically to be unsupported.

As far as rust goes, I've just asked around for rust expertise to try to get a feel for whether it's worth persisting with trying to make the stuff work that a user wants, since I know nothing about it. (The first issue I found was exactly i8 in a current crate.) That is rather the exception in building free software for ppc64le, though, in my experience of packaging HPC-type stuff. The real problems are with Mellanox/NVIDIA proprietary stuff for GPU support.


Interesting, never heard about them asking about ISA changes. Where was that done?

There are some lovely new instructions in Power10 but until they get the firmware source out fully I won't use them in the Firefox JIT.


It was through the IRC channel. I sent a direct message as response and exchanged a couple of sentences, then they gave me their IBM email address to submit the detailed proposal. My understanding was that it would have to be justified and for that I would have to show that it could be more efficient than an implementation based on the existing SIMD instructions which I’m not familiar with. I suspect that the kind of instructions that could actually be put there might not perform better than that.

I regret not sending at least an amateurish proposal, I still think it would be a good idea to have no-cost UTF-8 en-/decoding. Not just for text, but for general variable-length encoding of other kinds of data.


Very interesting. Which channel specifically?

VSX is really where the new development is happening, but it's become quite complete. The PC-direct instructions first made available in P9 also really closed a gap (beforehand you had to do bl with a weird flag to get PC in LR without trashing the history table).


The IRC channel was #talos-workstation on Freenode, now on Libera.Chat.

Do you know of some minimal example of calling VSX instructions from C?

Ideally just one .c file and one Makefile or README with the exact GCC command to compile it, plus a pointer to documentation describing each instruction. I’ve seen assembler inserted into C source code with GCC, but I’ve never done it and I assume there are some non-obvious details to take into account.


Sorry, didn't see this until now (out all day). Here is a very stupid example that uses `xxbrd` to byteswap a 64-bit quantity.

  #include <errno.h>
  #include <stdio.h>
  #include <stdint.h>
  #include <stdlib.h>
  
  int main(int argc, char **argv) {
        uint64_t v = 0;
        double o;
  
        if (argc != 2) { 
                fprintf(stderr, "usage: %s quantity\n", argv[0]);
                return 1;
        }       
  
        v = strtoull(argv[1], NULL, 0);
        if (errno == EINVAL || errno == ERANGE) {
                perror("strtoull");
                return 1;
        }       
  
        __asm__(
                "xxbrd %0, %1\n"
                :"=f"(o)
                :"f"(*(double *)&v)
        );      
        fprintf(stderr, "0x%lx\n", *(uint64_t *)&o);
        return 0;
  }
  
  % gcc -o xxbrd xxbrd.c
  % ./xxbrd 0x123456789abcdef
  0xefcdab8967452301


I don't know what you want of the VSX, but if you want vectorized code, what do you expect to gain over letting the compiler do it on your C? If you want examples, there are the kernels in OpenBLAS and FFTW (and BLIS, but that seems to be broken on POWER9).

There's an IBM web page somewhere with three(?) alternatives for using VSX, one of which is just using SSE intrinsics -- I don't know how well that works -- and another is a library that's now in Fedora, whose name I forget.

That said, it's obviously not competitive with AVX2 or, presumably, SVE, unless you can win on parallelization (or plain clock speed, which you probably can't).


What I’d like to do is a quick proof-of-concept to see whether whatever instructions are available in my CPU can be leveraged for UTF-8 en-/decoding.

For instance, does it work any better than my C implementation? https://github.com/Sentido-Labs/cedro/blob/master/src/cedro....

Maybe the compiler already compiles that to an optimal SIMD version, I don't know. That's what I would like to find out. And if the VSX instructions are not a good fit for this task, which instructions would be needed? Can I come up with a combination of logic gates that does that? Maybe not; there might be no way of implementing any significant part of the algorithm without branches or lookup tables.

The thing is that I need to start somewhere, and for that classichasclass’ example is exactly what I need.


Just keep in mind that the FPRs and vector registers are now aliased together (in VMX-only CPUs this wasn't necessarily the case). What is particularly stupid about my example is that it may have to spill to memory to move the uint64_t (a GPR) into the VSX register (an FPR) and then move it back because PowerPC famously had no direct GPR-FPR moves for quite a while. Since I didn't specify -mcpu=power8 (or higher), gcc doesn't issue the new instructions and I'm not sure it would know how to.

A better way would be to explicitly use the newer mtvsrd (mtfprd) and mfvsrd (mffprd) instructions and avoid the spill. So here's a revision 2.

  #include <errno.h>
  #include <stdio.h>
  #include <stdint.h>
  #include <stdlib.h>

  int main(int argc, char **argv) {
        uint64_t v = 0;
        double t;

        if (argc != 2) {
                fprintf(stderr, "usage: %s quantity\n", argv[0]);
                return 1;
        }

        v = strtoull(argv[1], NULL, 0);
        if (errno == EINVAL || errno == ERANGE) {
                perror("strtoull");
                return 1;
        }

        __asm__(
                "mtfprd %1, %2\n"       /* GPR -> FPR, no memory spill */
                "xxbrd %x1, %x1\n"      /* byteswap the doubleword in the VSR */
                "mffprd %0, %1\n"       /* FPR -> GPR */
                :"=r"(v), "=&f"(t)
                :"r"(v)
        );
        fprintf(stdout, "0x%lx\n", v);
        return 0;
  }
(Here the %x output modifier gives the VSX register number of the FPR scratch, and you need at least -mcpu=power9 for the assembler to accept xxbrd.)
If v is already in a register, then it can just stay there.


> "Old Macs are big-endian, just as network processors from NXP. Some Power developers still want big-endian systems to keep the dream alive. But support for big-endian systems is mostly gone from Linux distributions, and when it comes to developing common utilities or even programming languages, most developers are no more even aware that a world exists outside of little-endian. As much as I love the PowerPC laptop project, I see it now as a dead end: producing hardware for an ever shrinking software ecosystem."

Note that PowerPC itself is bi-endian and always has been since the PPC601, configurable via a special register. POWER gained this capability in POWER3 when it subsumed the PowerPC ISA. Old PowerPC Macs are explicitly big-endian because they needed to maintain compatibility with everything written for the earlier MC680x0-based Macs whose CPUs were big-endian-only. Windows NT ran on PowerPC in little-endian mode!

https://www.cs.umd.edu/~meesh/cmsc411/website/projects/outer...

https://catfox.life/2018/11/03/clearing-confusion-regarding-...


Big-endian POWER isn't bug-for-bug compatible with buggy JavaScript usage of typed arrays that assumes little-endianness, and thus browsers/Node.js/Deno on POWER will be exposed to bugs that don't affect little-endian x86-64/ARM.

After so many years of endianness bugs in C/C++ code, it's perplexing that the web standards committee voted to add typed arrays to JavaScript in a way that exposes platform byte order to JavaScript programmers, who can't generally be expected to have low-level C/C++/asm experience with memory-layout issues:

  function endianness () {
    let u32arr = new Uint32Array([0x11223344]);
    let u8arr = new Uint8Array(u32arr.buffer);
    if (u8arr[0] === 0x44)
        return 'Little Endian';
    else if (u8arr[0] === 0x11)
        return 'Big Endian';
    else
        return 'WTF (What a Terrible Failure)';
  }
EDIT: my old Power Mac was big-endian, but I just read that POWER has an endianness toggle. So in little-endian mode it ought to run endian-buggy JS with bug-for-bug compatibility.


Spoiler alert: it does (typing this in Firefox 96 on a little-endian POWER9). In TenFourFox, which ran exclusively big, we had code to byteswap typed arrays to make them look little-endian to scripts. This partially worked (enough for many `asm.js` scripts to run).


The most compelling argument for POWER was to make sure there is competition to amd64/x86 and we do not develop a monoculture. I think ARM and RISC-V have filled that hole nicely and taken over for POWER. Without readily available and affordable hardware I do not see much of a future for POWER in open source. It seems entirely dependent on IBM to keep it going unless this changes.


I'm not sure I agree about RISC-V. ARM is now indisputably in the same performance ballpark as x86_64 (M1 most obviously but there are others), as is Power, but RISC-V has a ways to go before it can compete on the same turf. It hasn't even really competed with ARM in embedded, despite its advantages there (though it has all but killed the zombie corpse of neo-MIPS).

I do agree that without other processor makers, however, the Power ecosystem is crucially overdependent on IBM policy. This is not a new problem but it hasn't gotten any better with OpenPOWER.


I did not mean to imply that I thought RISC-V was a challenger to x86 and ARM right now, just that it is accessible to those who want it, which lets them make sure software runs on it. Even that, I do not think is really true yet, but it seems like the direction things are heading.

With POWER, I just do not see the motivation, outside of IBM, for doing any work with it anymore.


One of the problems with the systems seems to be IBM support, at least for people with Sierra/Summit-like nodes who are averse to trying to talk to Livermore and Oak Ridge. That includes dropping support for what seemed to be the major selling point for our system before it got going. (Admittedly it probably only needs someone who needs it with some get-up-and-go to do the work, but still.) It is different with the IBM GCC maintainers, who you don't talk to through IBM, of course.


> in most parts software support for Power 9 is now in par with x86 and ARM.

I very much doubt this.

> we need affordable Power hardware quickly to keep and expand the momentum.

Possibly, but the POWER architecture would get (a lot?) more traction if the entities behind it made VMs with OS distros easily available, so FOSS projects could ensure their software builds and works on them. This is much less of a challenge than making cheap hardware available.

Travis CI - the way it used to be, anyway, before they started asking for money even from FOSS developers - is a good example of how this could be offered. It was very easy to set up for your FOSS repository (on GitHub at least, though IBM would need to be more welcoming than that), and from that point it wasn't that hard.

For some larger projects, direct interaction may be even more effective, e.g. getting in touch with the maintainers of Linux and BSD distributions, or with the Document Foundation for LibreOffice support (maybe it's already supported?), etc.

---

If this happens in parallel with (or before) reasonably cheap hardware becoming available, then I would expect more "buy-in", figuratively and literally.


IBM had the opportunity to be the royalty-free ISA that RISC-V now is, or even to release the cores' IP. They squandered it.


The ISA is open and royalty free.

There are at least three open cores, all IBM funded or designed:

https://github.com/antonblanchard/microwatt

https://github.com/openpower-cores/a2o

https://github.com/openpower-cores/a2i

The last two are older, but they work. Microwatt is improving by leaps and bounds.


I'm aware. The problem is that the ecosystem was opened up too late: the RISC-V ship has already sailed, and the opportunity to disrupt datacenters and phones was missed.


"squandered [..] the opportunity of being the royalty-free ISA"

Last time I checked, IBM was a business. I don't think losing an "opportunity" to give away the ISA for free sounds like a loss to them.



Anything to back up this assertion? Like some key dates/moments?


Can someone tell me where I can read more on what Power exactly is?


Power is a processor architecture produced by IBM. It's been pretty popular at times - server-side it was up there with Sun SPARC in popularity, before Linux on x86 ate the server market for lunch. Client-side, variants of it were in every console of one generation - the Wii, PS3, and Xbox 360. As pointed out by another poster, the PowerPC variant used to be in Apple hardware too.

Variations of standard Power chips also run IBM's midrange systems (IBM i, the former AS/400). For quite a long time they beat Intel in raw performance on some benchmarks. The problem is it was 10x the price.

The workstations and servers were always crazy expensive compared to x86 hardware and that's one reason they remained pretty niche.

Within IBM, Power systems were most frequently used to run IBM's Unix - AIX.

https://en.wikipedia.org/wiki/IBM_Power_microprocessors


https://www.geeksforgeeks.org/powerpc-architecture/

GeeksforGeeks. Otherwise try the wiki. https://en.m.wikipedia.org/wiki/PowerPC

Used in old Apple machines before being replaced by Intel. And in one console.


PowerPC is a derivative of the original POWER architecture by IBM.

https://en.m.wikipedia.org/wiki/IBM_POWER_instruction_set_ar...


Also used in the Xbox 360, and an exotic variant was in the PlayStation 3.


And GameCube, and Wii/Wii U.


What was interesting about Power is that it could use SDRAM at half the processor speed, which made processing data from memory faster, unlike a traditional PC where you read data in bursts.


At least Linux on POWER can go little-endian! Being big-endian seems pretty lonely in 2022 - AIX, Solaris, and stuff on System Z? https://community.ibm.com/community/user/ibmz-and-linuxone/b...


I counted the other big-endian processors (probably by looking at GCC support) a while ago, but I don't remember what they were - probably only embedded processors.


Squeezed out by ARM, frankly, since it's captured the imagination as the next server platform.


Can someone who's using a PowerPC workstation for day-to-day tasks tell me where it's better than x86_64 or ARM?


If I had a "different" ISA to market, I'd think the very first priority would be to have excellent free compilers for it, be they FOSS or not. Write our own and give them away if we have to. Remove every barrier to people using your product that you can.

IBM doesn't think that way. They're probably using "the longhair hippies can't use this hardware" as a selling point.


TIL LLVM and gcc are not excellent free compilers and useless for longhair hippies. What do your hippies use?


I know nothing of the support for the POWER arch out there now... if you do, would you say it could be improved? Is "corporate whim" stifling open support for it? Last I messed with POWER I couldn't find docs for what I needed, and so just stuck with Intel, SPARC, and Alpha. (The last two were still viable market presences then, to give an idea of how long ago this was.)


... so you "know nothing", but started the discussion by making a strong claim to the contrary, seriously?

The article already lays out the field: the infrastructure groundwork is done (compilers, Linux distros, ...), in no small part thanks to IBM funding such efforts. The main limit is hardware availability and overall mindshare, not infrastructure. If you have a reason to develop for POWER, it's easy and reasonably well documented in my experience; the problem for the ecosystem is that few people have such reasons. (I.e. you're either an enterprise supplier like SAP, or a bunch of nerds at a uni or with a Raptor box at home - little in between.)


>Last I messed with POWER i couldn't find docs

You can use google nowadays.


The IBM XL compilers have had a free to download community version for a little while now [1]. They also appear to be adopting LLVM [2].

1. https://www.ibm.com/products/xl-cpp-linux-compiler-power

2. https://community.ibm.com/community/user/power/blogs/si-yuan...


In some ways, #2 is sad to me. Don't get me wrong, I have worked a ton on LLVM and love it.

But at least when I was there, IBM's interprocedural middle-end (TPO) was some of the nicest and best-structured C++ compiler code I had seen in a long time. It was well written, well commented, and well architected.


It may have changed since I left, but my understanding as of a few years ago was that IBM was replacing the front-end with Clang but continuing to use TPO (at least for now).



