Writing GUI applications on the Raspberry Pi without a desktop environment (2019) (avikdas.com)
205 points by raytopia 6 days ago | 51 comments





AvaloniaUI can also use the framebuffer to run on the Raspberry Pi, see https://docs.avaloniaui.net/docs/guides/platforms/rpi/runnin...

Wow, I would never have discovered this, and I work full time with .NET and have experience with Avalonia! It's just that I normally don't look for Pi-related stuff there, instead heading straight for Python as the clear de facto Pi language with its libraries. Really cool to see this kind of niche being carved out by .NET and Avalonia! Too bad it's generally easier to get I/O boards to work out of the box with Python, where high-level drivers and libraries are often already written. .NET of course also has its ways to interface, but you'll likely end up working at a lower level due to the lack of drivers. There _are_ drivers, but not as many, and you're more likely to end up with more generic GPIO pin reading/writing libraries.

Well, there are a lot of efforts to make C# / .NET suitable for Raspberry Pi projects.

NativeAOT is on the way, so that you can compile small binaries without any dependencies.

If that does not work, you can compile your project as "ReadyToRun" with "Trimming", which will tree-shake all unneeded runtime code and produce an acceptably small binary.
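
For example, publishing a self-contained, trimmed ReadyToRun build for a 64-bit Pi looks roughly like this (a sketch; exact flags depend on your SDK version and project):

  dotnet publish -c Release -r linux-arm64 --self-contained -p:PublishReadyToRun=true -p:PublishTrimmed=true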

One way to overcome the problem of missing drivers is to set up a Python API (e.g. Flask) and trigger things through the Avalonia UI. Or you could wrap a binary execution via CliWrap, an excellent .NET library for running external processes.

I once wrote a little eventhub[1] to run on the Raspberry Pi; it is just an experiment, but it worked OK.

There is also .NET IoT[2], which targets exactly these platforms.

1: https://github.com/sandreas/eventhub/tree/main/eventhub

2: https://dotnet.microsoft.com/en-us/apps/iot


I've replaced an industrial Windows workstation with a Pi + Avalonia in a factory. Way more compact, and you don't have to care about Windows anymore. Is the Pi industrial grade? No, but we have a spare Pi ready and an SSD with a preinstalled OS, so you can fix everything in a matter of minutes.

I did have to rewrite the software, because the original was WinForms, but it was a pretty simple application.


I’ve used this in production. It’s great.

I've been trying to set up an HDMI pass-through pipeline from an HDMI input to an HDMI output with an OrangePi 5 Plus. I could talk for a long time (now) about the issues with vendor-supplied kernels and unsupported hardware. Having not done any embedded work before, I was completely naive until I had the hardware in hand.

Right now, my best-of-breed idea is to run Weston, have a full-screen Qt application, and use DMA buffers so I can do some zero-copy processing. Rockchip has its own MPP and RGA libraries that are tied into the Mali GPU, and I'm not smart enough about the current level of driver/userspace support to know whether I can avoid leveraging these libraries.

Rockchip and the ARM ecosystem are such a mess.

If anyone has any pointers, experience, approaches, code, etc, I would love to see it.


Not sure what kind of processing you need to do on the video stream, but have you considered giving `ffmpeg` a try if you just need plain pass-through from video input to output? `ffmpeg` might be built with support for the Mali libraries you mention on the OS you are using. If you are able to run `weston`, `ffmpeg` should be able to output directly to the DRM card through SDL2 (assuming it was built with it).

If the HDMI-USB capture card that outputs `mjpeg` exposes a `/dev/video` node, then it might be as simple as running:

`SDL_VIDEODRIVER=kmsdrm ffmpeg -f video4linux2 -input_format mjpeg -i /dev/video0 -f opengl "hdmi output"`

An alternative: if you can get a Raspberry Pi 3 or even a 2, and a distro where `omxplayer` can still be installed, you can use `omxplayer` to display your mjpeg stream on the output of your choice. Just make sure the `kms`/`fkms` dtoverlay is not loaded, because `omxplayer` works directly with DispmanX/the GPU (BTW, not compatible with the Pi 4 and above, much less the 5), which contains a hardware `mjpeg` decoder, so for the most part bytes are sent directly to the GPU.

Hope some of this info can be of help.


Looks helpful! I assume ffmpeg needs to be built with SDL for this to work? I couldn't get it to work with my current minimal compile, and I don't think the board I'm working on has SDL, so I might need to install that and recompile.

That's correct, `ffmpeg` needs to be built with SDL (really SDL2, which is what all recent versions use). When `ffmpeg` is built and the dev files for SDL2 are present, ffmpeg's build configuration picks it up automatically and links against the library unless instructed otherwise by a configuration flag. When you run `ffmpeg`, the first lines usually show the configuration it was built with, so they might contain hints, but if you want to confirm what it links against you can do a quick:

$ ldd `which ffmpeg`

And you should get the list of dynamic libraries your build is linked against. If SDL2 is indeed included, you should see a line starting with "libSDL2-2...".

If I remember correctly, you should be able to output to the framebuffer even if there is no support for SDL2; you just have to change the previous command from `-f opengl "hdmi output"` to `-pix_fmt bgra -f fbdev /dev/fb0`.

You can also use any other framebuffer device if present and you'd prefer it (e.g. /dev/fb1, /dev/fb2). Also, you might need something different from `bgra` on your board, but usually `ffmpeg` will drop a hint as to what.
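
Putting those pieces together, the fbdev variant of the earlier command would be:

  ffmpeg -f video4linux2 -input_format mjpeg -i /dev/video0 -pix_fmt bgra -f fbdev /dev/fb0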


In general DRM/KMS can be quite confusing, as there seems to be little userland documentation available. I assume you get the DMA buffers from the HDMI input somehow? If so, you should be able to use drmModeAddFB2WithModifiers to create a DRM framebuffer from them. Then attach that to a DRM plane, place that on a CRTC, and then schedule a page flip after modesetting a video mode.

The advantage would be that you can run directly, without starting into any kind of visual environment first. But it's a huge mess to get going: I wrote quite a bit of Pi4/5 code recently to get a zero-copy HEVC/H264 decoder working and it was quite a challenge. Maybe code like https://github.com/dvdhrm/docs/tree/master/drm-howto can help?
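
To make that flow concrete, here is a minimal C sketch of importing a dmabuf and scanning it out on a plane (not from the howto above; the plane/CRTC IDs, size, stride, and format are assumed to come from your own modesetting and capture code, and error handling is mostly omitted):

  #include <stdint.h>
  #include <xf86drm.h>
  #include <xf86drmMode.h>
  #include <drm_fourcc.h>

  /* Hypothetical helper: wrap a dmabuf from the capture device in a DRM
     framebuffer and put it on the given plane. */
  int show_dmabuf(int drm_fd, int dmabuf_fd, uint32_t w, uint32_t h,
                  uint32_t stride, uint32_t plane_id, uint32_t crtc_id)
  {
      uint32_t handle;
      if (drmPrimeFDToHandle(drm_fd, dmabuf_fd, &handle))
          return -1;

      uint32_t handles[4] = { handle }, pitches[4] = { stride }, offsets[4] = { 0 };
      uint64_t mods[4] = { DRM_FORMAT_MOD_LINEAR };
      uint32_t fb_id;
      if (drmModeAddFB2WithModifiers(drm_fd, w, h, DRM_FORMAT_XRGB8888,
                                     handles, pitches, offsets, mods,
                                     &fb_id, DRM_MODE_FB_MODIFIERS))
          return -1;

      /* Source coordinates are 16.16 fixed point; destination is in pixels. */
      return drmModeSetPlane(drm_fd, plane_id, crtc_id, fb_id, 0,
                             0, 0, w, h,
                             0, 0, w << 16, h << 16);
  }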


The HDMI receive device on the OrangePi 5 Plus is in a semi-functional state. Collabora is in the process of upstreaming code so the RK3588 will work with the mainline Linux kernel.

Until that happens, working driver code is in a very transient state.

To get going and sidestep that problem, I've purchased HDMI-to-USB capture cards that use MacroSilicon chips. I have some thought of using a cheaper CPU in the future with a daughterboard based on this project, which uses MacroSilicon chips: https://github.com/YuzukiHD/YuzukiLOHCC-PRO. That made it potentially not a waste of time to dig into.

The MacroSilicon HDMI-to-USB capture cards output MJPEG, which Rockchip's MPP library has a decoder for.

So the thought is: (1) allocate a DMA buffer, (2) set that DMA buffer as the MJPEG decoder target, (3) get the decoded data to display (sounds like I may need to encode again?), plus a parallel processing pipeline.

I'll dig into the stuff you've sent over. Very helpful, thanks for the pointers!

I've thought about switching to Pi4/5 for this. Based on your experience, would you recommend that platform?


> I've thought about switching to Pi4/5 for this. Based on your experience, would you recommend that platform?

Their kernel fork is well maintained, and if there is a reproducible problem it usually gets fixed quite quickly. Overall I'm pretty happy. KMS/DRM was a bit wonky, as there was a transition phase where they used a hacky mix of KMS and the old proprietary Broadcom APIs (FakeKMS). But those days are over, and so far KMS/DRM works pretty well for what I'm using it for.


Not the same thing, but there is this project that does digital RGB to HDMI using a Pi: https://github.com/hoglet67/RGBtoHDMI. I believe they use custom firmware on the Pi and a CPLD, but you could probably eliminate that doing HDMI to HDMI.

Fascinating, thanks for pointing this project out!

I know there is at least one ffmpeg fork with Rockchip mpp and rga support, although I haven’t tested it myself yet: https://github.com/nyanmisaka/ffmpeg-rockchip

I have tested the mpp SDK a bit and the code is easy to work with, with examples for encode and decode, both sync and async.


They don't have an MJPEG decoder yet, which is a blocker for hardware acceleration, but I'm going to try to patch the library with the author and get it added. Thanks for pointing it out!

You can also run Qt directly on the console framebuffer, without Wayland/X.
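
For example, by selecting the platform plugin at startup (assuming your Qt build includes it):

  QT_QPA_PLATFORM=linuxfb ./myapp

(or `eglfs` instead of `linuxfb` for GPU-accelerated output).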

I might end up doing that. When I was first digging into it, the Qt documentation seemed confusing. But after sinking 10-20 hours into everything, it's starting to click a lot more.

Thanks for the pointer!


LVGL is pretty much the IoT industry standard for 32-bit architectures like Renesas RX, Arm Cortex-M, ARC, TI MSP, Atmel...

https://lvgl.io/


This is a great library, but as far as I understand it is aimed at bare-metal or low-resource embedded operating systems, not at Linux. The OP apparently runs Linux. Could he also use LVGL on Linux and write to the FB device?

Oh, yeah, you're right. It seemed like OP was trying to prove he could "bare bones" it because the article was about how to avoid everything but a framebuffer, so I thought I'd offer this up... LVGL is as bare-bones as it gets!

Yes, you can.

Is this an officially supported feature, so that I could e.g. develop on Linux and directly use my code on an embedded device with no changes?

Officially supported? Yes. With no changes? Largely. You usually configure and compile it with your application for the platform you're targeting, so the vast majority of your code should be the same, but it might require some tweaking of any hardware-specific initialization.
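
As a rough illustration, the desktop-side setup against the Linux framebuffer can be as small as this (a sketch based on the LVGL v9 fbdev driver; it needs LV_USE_LINUX_FBDEV enabled in lv_conf.h, and API details differ across versions):

  #include <unistd.h>
  #include "lvgl/lvgl.h"

  int main(void)
  {
      lv_init();

      /* Create a display backed by the Linux framebuffer device. */
      lv_display_t *disp = lv_linux_fbdev_create();
      lv_linux_fbdev_set_file(disp, "/dev/fb0");

      /* A trivial widget; the same code runs against any other backend. */
      lv_obj_t *label = lv_label_create(lv_screen_active());
      lv_label_set_text(label, "Hello framebuffer");
      lv_obj_center(label);

      for (;;) {
          lv_timer_handler();  /* let LVGL process timers and redraws */
          usleep(5000);
      }
  }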

You can also run it under X using an SDL backend.

One thing to note, though, is that, unless they added it in 9.1, certain mouse control schemes we've come to expect from desktop applications aren't supported. Since it was made primarily for touch screens, it only supports mouse click (tap) and long press, so it doesn't support things like mouse-wheel scrolling in a scroll box or right click, for example. But tapping, sliding, and flinging controls and screens like it's a smartphone work really well.

I made a first pass at implementing support for more desktop mouse functionality a while back, but the maintainer wanted a more generalized solution than what I had done, and I haven't had a chance to go back and play with it again. It's pretty easy and fun to hack on the code base. It might not fit the bill for all UI problems, as it's pretty low level, but it's remarkably robust and featureful, and I had a great time using it for a number of applications on a custom desktop OS, in a similar vein to the Linux framebuffer.


Thanks, that was helpful. I will take a closer look at it and see what the functionality differences are compared to Qt Widgets.

PyGame is also great for scripting quick UIs on the framebuffer. You don't want to write big apps in it, but for ease of setup and use it's sweet.
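
For instance, since PyGame sits on SDL, you can usually point it straight at the display hardware from a tty (assuming an SDL2 build with the KMS/DRM backend available):

  SDL_VIDEODRIVER=kmsdrm python3 ui.py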

One problem with Raspberry Pi displays is that not all of them provide a vsync signal in SPI mode. That leads to high CPU usage (due to a very high frame rate) and is generally inefficient. Choose your display carefully.

I've successfully used Qt5 (it would probably work with 6 too) for framebuffer/OpenGL.

QML is nice, animations were much smoother than I expected.


QML's main use case, commercially at least, is fullscreen embedded applications. Of course it works great for that in particular.

The way I did it was to run Weston without any icons or anything, using systemd to start my app and also having systemd reopen my app if it closes. Worked well enough.
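
For reference, a minimal unit for that restart-on-exit pattern might look something like this (names and paths are placeholders; the weston.service dependency assumes the compositor also runs as a unit):

  [Unit]
  Description=Kiosk application
  Requires=weston.service
  After=weston.service

  [Service]
  ExecStart=/usr/local/bin/myapp
  Restart=always
  RestartSec=2

  [Install]
  WantedBy=multi-user.target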

I haven't personally used it, but I have seen cage [0] being recommended a few times for similar use-cases.

[0] - https://github.com/cage-kiosk/cage


StumpWM is very nice if you're in X11.

No WM at all is less nice, but it can work. A bit over a decade ago, we shipped a kiosk/appliance that software-wise had only a kernel, X, and Firefox, all started directly from /etc/inittab, something like this:

  id:5:initdefault:
  x1:5:respawn:/etc/rc-x11
  ff:5:respawn:/etc/rc-firefox
...where /etc/rc-* are a few lines of shell that set up environment variables and end with "exec chroot --userspec=... / /bin/firefox ..." - this way X and Firefox run under the same PIDs that sysvinit knows about, so they get restarted after a crash.

We have a bunch of examples of GUIs that run inside containers (with different backends and frameworks, without a compositor, etc.): https://github.com/toradex/vscode-torizon-templates

Don't forget about the Kivy framework. It's a Python framework that renders OpenGL-accelerated UIs, also headlessly on the Raspberry Pi if needed. https://kivy.org

More importantly, Kivy also has Vulkan compatibility, which is the only thing you can run on the Pi 5 since it has no OpenGL support.

Can you really use it without a desktop environment, though? It would be cool if one could launch it in full kiosk mode from the headless tty.


I have a GUI touch application deployed on Raspberry Pi 3 machines, running on EGL without X11. To do so you set the window provider to `egl_rpi`. My recollection is that I had to do a custom build of Kivy to do this at the time. I'm pretty sure you can do this with a Raspberry Pi 4, but I don't know for sure about a Raspberry Pi 5.
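
If memory serves, the provider can be selected through Kivy's environment variable, something like (treat this as a sketch):

  KIVY_WINDOW=egl_rpi python3 main.py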

I have a project with Raspberry Pis up to the 3B+ (https://just-functional.com). egl_rpi is not supported on the 4, but KMS/DRM is, still headless.

> which is the only thing you can run on the Pi 5 since it has no OpenGL support.

https://mesamatrix.net: v3d is conformant up to OpenGL 3.1.


> which is the only thing you can run on the Pi 5 since it has no OpenGL support.

The Pi has always supported OpenGL ES though.


Years ago I played with TekUI on an Allwinner A10 board, and it worked fine, also displaying GUIs on the framebuffer without any underlying X environment.

http://tekui.neoscientists.org/

Only caveats: it's quite old and doesn't seem maintained anymore, although it still compiles fine, and it's Lua only, but being written in C it shouldn't be too hard to port it to other languages.


conceivably interesting to you if you want to do this: my library yeso https://gitlab.com/kragen/bubbleos/blob/master/yeso supports drawing on the framebuffer or in an x window; for c programs you choose which at link time, but the luajit and python bindings dynamically load one shared library or the other according to whether they're running under x-windows or not

yeso is a very small and simple library, so you have to do more things from scratch than with libraries with more comprehensive functionality, but being able to test your app in a window before running it on the framebuffer could be useful

yeso's input handling on the linux framebuffer is not as complete as its x-windows handling, but it's good enough that yeso programs like ./tetris-fb (with wasd) and ./mand.lua do work on the framebuffer console. the terminal emulator admu-shell-fb mostly works, but it has the problem that things like control-z or control-c suspend or kill the terminal emulator instead of what you're running in it :)

(i haven't actually tried yeso on the pi, but if it doesn't work there i'll fix it)


I've really tried a few times to go down this rabbit hole and get something working and this thread has like 7 projects I've never come across before.

Yet another alternative for drawing directly on the framebuffer is DirectFB 2: https://directfb2.github.io/

It supports Vulkan, OpenGL, Cairo and other technologies.


I've been doing some R&D for building a Home Assistant dashboard with a nice touchscreen display I bought to go on the wall at home, with the ultimate goal of expanding it into many home-rolled displays.

Having slimmed down Raspberry Pi OS Lite as much as I can, running Wayfire with a Chromium kiosk is just too much for the Pi 3B+ I'm using: once I add a streaming camera to the dashboard, it can't cope. My goal is to have a responsive touch-screen display for Home Assistant using something in the form factor of a Pi Zero 2W, so that I can put the SBC _inside_ the display and build a wooden picture frame to house it all, so it doesn't stick out like a sore thumb.

I'm not sure what kind of API HA has for the frontend, but my first thought was to build a native application with a Go backend (I write Go for my day job) and use something like Wails[0] for the frontend, completely cutting out the heavyweight Chromium browser.

I have Pi 4s and Pi 5s, but I really want to use the smallest amount of compute (and power) I can to achieve this, even if it means writing the UI myself. I've tried looking for a lighter-weight browser that I can simply run the HA dashboard in, to no avail.

[0] https://wails.io/


I've been looking at the Radxa zero3w/zero3e.

Looks like this guy got Chromium to work? https://www.youtube.com/watch?v=XAnN1A_sye0


I've been considering that board, but a read of the forums suggests it has lots of issues.

I might try it anyway but I'm not sure what their long term support would be like compared to Pi if I do get it to work.


Have you looked at DietPi as a low-resource OS? It's my default choice for an SBC like the Pi. They have versions for x86 too.

I haven't, but tbh the OS is already using next to zero resources; Chromium is the real issue. Of course I could just throw a more powerful Pi or SBC at it, but that's not my goal.

MPV can play videos directly to the framebuffer, with hardware acceleration too.
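
For example (assuming an mpv build with the DRM video output compiled in):

  mpv --vo=drm --hwdec=auto video.mp4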

I like this write-up. It is timely, too.


