
The basics are here: https://box86.org/ It is an emulator but:

> Because box86 uses the native versions of some “system” libraries, like libc, libm, SDL, and OpenGL, it’s easy to integrate and use with most applications, and performance can be surprisingly high in some cases.

Wine can also be compiled/run as native.


> Wine can also be compiled/run as native.

I'm not sure you can run Wine natively to run x86 Windows programs on RISC-V because Wine is not an emulator. There is an ARM port of Wine, but that can only run Windows ARM programs, not x86.

Instead box64 is running the x86_64 Wine https://github.com/ptitSeb/box64/blob/main/docs/X64WINE.md


It should be theoretically possible to build Wine so that it provides the x86_64 API while compiling it to ARM/RISCV. Your link doesn't make it clear if that's what's being done or not.

(Although I suspect providing the API of one architecture while building for another is far easier said than done. Toolchains tend to be uncooperative about such shenanigans, for starters.)


Box64's documentation just covers installing the x64 Wine builds from the WineHQ repos, because most ARM repos don't host x64 software. It's even possible to run Steam with its x64 Proton to run Windows games. At least on ARM; not sure about RISC-V.

Wine's own documentation says it requires an emulator: https://wiki.winehq.org/Emulation

> As Wine Is Not an Emulator, all those applications can't run on other architectures with Wine alone.

Or do you mean providing the x86_64 Windows API as a native RISC-V/ARM library to the emulator layer? That would require some deeper integration with the emulator, but that's what box64/box86 already does with some Linux libraries: intercept the API calls and replace them with native libraries. Not sure if it does it for Wine.


> but that's what box64/box86 already does with some Linux libraries: intercept the API calls and replace them with native libraries. Not sure if it does it for Wine

Yeah, that's what I meant. It's simple in principle, after all: turn an AMD64 call into an ARM/RISCV call and pass it to native code.

Doing that for Wine would be pretty tricky (way more surface area to cover, possible differences between certain Win32 arch-specific structs and so forth) so I bet that's not how it works out of the box, but I couldn't tell for sure by skimming through the box64 repo.
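The interception idea itself can be sketched in a few lines. A toy illustration (assuming a Linux host with libm available; this is not box64's actual mechanism, and `guest_cos` is a hypothetical name): instead of emulating the x86 machine code of a library function, the emulator recognizes the call and forwards it to the host's native build of the same library.

```python
# Toy sketch of library-call forwarding, the technique box86/box64 use
# for libraries like libc/libm. Here Python's FFI stands in for the
# emulator: the "guest" call is dispatched to the host's native libm
# instead of being emulated instruction by instruction.
import ctypes
import ctypes.util

# Load the host's native math library (assumes a Unix-like host).
libm = ctypes.CDLL(ctypes.util.find_library("m"))
libm.cos.restype = ctypes.c_double
libm.cos.argtypes = [ctypes.c_double]

def guest_cos(x: float) -> float:
    # A real emulator would marshal guest registers/stack into
    # host calling-convention arguments here, and convert back.
    return libm.cos(x)
```

The hard part in practice is exactly that marshaling step: calling conventions, struct layouts, and pointer sizes can differ between guest and host ABIs, which is the "surface area" concern above.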


As demonstrated by Microsoft themselves in Windows 11: https://learn.microsoft.com/en-us/windows/arm/arm64ec


Almost certainly done to avoid the cost of regulatory compliance.


There are a lot of words for crunchy potato slices, and people get very angry about which...


The models are here: https://huggingface.co/LumiOpen


They successfully trained LLMs on LUMI, which has AMD Instinct MI250X GPUs. That perhaps hints at one reason AMD is interested.


It makes sense, then, for AMD to buy them out.

If they've trained LLMs on LUMI, which has a lot of Instinct GPUs, there is a high chance they've had to work through and solve a lot of the gaps in AMD's software support.

They may have already figured much of this out and kept it proprietary, and buying them out is a quick way for AMD to get access to those solutions.

I suspect AMD is trying to fast-track its software stack, and this acquisition lets them do just that.


I am curious whether the models are any good, though. The landscape is so fragmented that I had never heard of Poro.


Poro ("reindeer" in Finnish) is developed specifically for use in Finnish. General-purpose models like GPT struggle with less widely used languages. Unfortunately, this sale likely means that development will cease.


Reindeer is a great name, and gives me an idea - next time I create an Azure OpenAI resource (depending on model availability and data residency requirements, sometimes you need to create more than one) I'm going to start going through Santa's reindeer names.


GPT-4 or even 3.5 is quite good at Finnish. Was there ever a benchmark against closed-source models?


So AMD wants to understand how they did it.


Then a seagull flies overhead ;)


If you make a poll with only two options, you can force people into two camps and declare a "winner". I would bet that most of us have more nuanced positions, where we may be swayed by arguments from either the "tankies" or the "evil earth-destroying capitalists", depending on context.


By all means, do the poll as best you see fit. I'm non-binary when it comes to polling.


What about being able to use direct voice when reporting related work? e.g. "Secondname invented a yabayaba". I have seen "[1] invented a yabayaba", but doesn't it look kind of odd?


Well there is still StarLogo. Looks like it's even been rebooted: https://www.slnova.org/



I've had Google Photos eat ~12 months' worth of pictures; I don't really trust anyone to keep my data safe.

Logseq has been fine for me over several years, but it also makes it extremely easy to auto-commit to git.


Logseq's git auto-commit is a great insurance policy and should make recovery a breeze.


Ah, those appear to be entirely multi-device-sync problems. (I use logseq with git-autocommit for storage and backup - but since the multi-node sync stuff wasn't available for self hosting anyway, I've never tried it, and thus dodged the problem entirely. Obviously for a lot of people multi-device use is the entire point, but for some of us, logseq is "just an editor"...)


Yes, but as far as I understand, this is only really usefully possible with FlashAttention. (The main idea is that you have to use the log-sum-exp trick when computing the softmax, but since you can't know the max activation up front, you have to rescale the partial sums whenever a larger max appears.)
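The rescaling trick above can be shown in a few lines of plain Python. This is a minimal one-pass sketch of the "online softmax" idea (not the actual FlashAttention kernel, which tiles this over GPU blocks): keep a running max `m` and a running sum `s` of `exp(x - m)`, and whenever a new max arrives, rescale the old sum by `exp(old_m - new_m)`.

```python
import math

def online_softmax(xs):
    """Numerically stable softmax in a single streaming pass.

    Maintains a running max m and a running sum s of exp(x_i - m).
    When a new element raises the max, the accumulated sum is
    rescaled by exp(old_m - new_m) so all terms stay consistent
    with the new reference max.
    """
    m = float("-inf")  # running max seen so far
    s = 0.0            # running sum of exp(x_i - m)
    for x in xs:
        new_m = max(m, x)
        s = s * math.exp(m - new_m) + math.exp(x - new_m)
        m = new_m
    return [math.exp(x - m) / s for x in xs]
```

A second pass over the inputs produces the final probabilities; FlashAttention fuses the same bookkeeping into the attention computation so the full score matrix never has to be materialized.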

