Bhulapi's comments

I think trying to use anything from LVGL in this project would reduce to essentially just using LVGL. It's more of a project to try to build most of the components from "scratch", i.e. to use as few external libraries as possible.

Added MIT license

My first experience with programming was with QuickBasic. You just brought back some memories; I wish I still had all of those old programs around.

As far as I know, "framebuffer" can mean a lot of things depending on hardware and implementation, but it originally referred to actual memory holding the pixel values that would eventually be written to the screen. In Linux this is abstracted by the framebuffer device, which is hardware independent (you can actually have several fbdev devices, which, if I'm not mistaken, usually end up referring to different monitors). What's convenient about the implementation is that these devices still behave like ordinary memory devices, so you can read and write them as you would any other memory. Some more info: https://www.kernel.org/doc/html/latest/fb/framebuffer.html
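
For illustration, here's a minimal C sketch of what "read/write it like any other memory" looks like in practice: mmap /dev/fb0 and fill it with a solid color. It assumes /dev/fb0 exists, a 32 bpp XRGB mode, and that the user may open the device (e.g. is in the video group); most error handling is trimmed.

  /* Minimal sketch: treat /dev/fb0 as plain memory and fill it with a color.
   * Assumes a 32 bpp XRGB mode; most error handling omitted for brevity. */
  #include <fcntl.h>
  #include <linux/fb.h>
  #include <stdint.h>
  #include <sys/ioctl.h>
  #include <sys/mman.h>
  #include <unistd.h>

  int main(void) {
      int fd = open("/dev/fb0", O_RDWR);
      if (fd < 0) return 1;

      struct fb_var_screeninfo var;
      struct fb_fix_screeninfo fix;
      ioctl(fd, FBIOGET_VSCREENINFO, &var);   /* resolution, bits per pixel */
      ioctl(fd, FBIOGET_FSCREENINFO, &fix);   /* stride and mapped length */

      uint8_t *fb = mmap(NULL, fix.smem_len, PROT_READ | PROT_WRITE,
                         MAP_SHARED, fd, 0);
      if (fb == MAP_FAILED) return 1;

      /* Write pixel values directly, as with any other mapped memory. */
      for (uint32_t y = 0; y < var.yres; y++) {
          uint32_t *row = (uint32_t *)(fb + y * fix.line_length);
          for (uint32_t x = 0; x < var.xres; x++)
              row[x] = 0x00336699;            /* assuming XRGB8888 */
      }

      munmap(fb, fix.smem_len);
      close(fd);
      return 0;
  }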

I'll preface this by saying that I may have some misconceptions. Other people much more knowledgeable than I am have posted summaries of how modern graphics hardware works on HN before.

My understanding is that modern hardware is significantly more complicated at the lowest levels and (at least generally) no longer has a dedicated framebuffer (at least in the same sense that old hardware did).

My understanding of the memory access provided by fbdev is that it's an extremely simple API; in other words, an outdated abstraction that's still useful, so it's kept around.

An example of this complexity is video streams that use hardware-accelerated decoding. Those often won't show up in screenshots, or if they do, they might be out of sync or otherwise not quite what you saw on screen, because the driver is attempting to construct a single cohesive snapshot for you where one never actually existed in the first place.

If I got anything wrong, please let me know. My familiarity generally stops at the level of the various cross-platform APIs (OpenGL, Vulkan, OpenCL, etc.).


> My understanding is that modern hardware is significantly more complicated at the lowest levels and (at least generally) no longer has a dedicated framebuffer (at least in the same sense that old hardware did).

Modern hardware still generally can be put into a default VGA-compatible[1] mode that does use a dedicated framebuffer. This mode is used by the BIOS and during boot until the model-specific GPU driver takes over.

[1]: https://en.wikipedia.org/wiki/Video_Graphics_Array#Use


> My understanding of the memory access provided by fbdev is that it's an extremely simple API.

Maybe some fbdev drivers are like that, but most of them are not. They use VGA/VESA interfaces to get real video memory and write into it. A text console also uses VGA video memory, writing character data into it.

I still wonder whether there is any way to use VGA to its full potential, like loading sprites into off-screen video memory and copying them into their right place on the screen. VGA allowed you to copy eight 4-bit pixels by copying a single byte, for example. Were these things just dropped for a nice abstraction, or are there maybe some ioctls to switch modes for reads/writes into video memory? I don't know and was never interested enough to do the research.

> In other words an outdated abstraction that's still useful so it's kept around.

Yes, it is kind of like that, but the outdated abstraction is implemented on the video card; the kernel just gives access to it.

In Linux, fbdev is more of a fallback device for when drivers for a specific video card are not available. fbdevs are used to provide a text console with more than 80x25 characters. Video acceleration or OpenGL can work on fbdev only as a software implementation.


Most GPU drivers these days are DRM drivers, which implement fbdev support only for backwards compatibility [0]; the fbdev API is mostly "fake" at this point, emulated on top of DRM.

DRM/KMS with dumb buffers is the preferred API if you want to do software rendering and modesetting. You can find several examples of this online if you search for drm_mode_create_dumb.

[0] https://web.git.kernel.org/pub/scm/linux/kernel/git/torvalds...
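
For anyone searching for that, here's a rough sketch of the buffer-creation step using raw ioctls (no libdrm). The /dev/dri/card0 node, the buffer size, and the header paths are assumptions, and the actual modesetting (picking a connector/CRTC and attaching the buffer via DRM_IOCTL_MODE_ADDFB and DRM_IOCTL_MODE_SETCRTC) is left out.

  /* Sketch of the dumb-buffer path: ask the DRM device for a CPU-mappable
   * buffer and mmap it. Modesetting is omitted; this only shows buffer
   * creation and mapping. Usually requires membership in the video group. */
  #include <fcntl.h>
  #include <stdint.h>
  #include <string.h>
  #include <sys/ioctl.h>
  #include <sys/mman.h>
  #include <unistd.h>
  #include <drm/drm.h>
  #include <drm/drm_mode.h>

  int main(void) {
      int fd = open("/dev/dri/card0", O_RDWR);
      if (fd < 0) return 1;

      struct drm_mode_create_dumb create = {0};
      create.width  = 1920;    /* would normally come from the chosen mode */
      create.height = 1080;
      create.bpp    = 32;
      if (ioctl(fd, DRM_IOCTL_MODE_CREATE_DUMB, &create) < 0) return 1;

      struct drm_mode_map_dumb map = {0};
      map.handle = create.handle;
      if (ioctl(fd, DRM_IOCTL_MODE_MAP_DUMB, &map) < 0) return 1;

      uint8_t *pixels = mmap(NULL, create.size, PROT_READ | PROT_WRITE,
                             MAP_SHARED, fd, map.offset);
      if (pixels == MAP_FAILED) return 1;

      memset(pixels, 0xff, create.size);   /* software rendering goes here */

      munmap(pixels, create.size);
      struct drm_mode_destroy_dumb destroy = { .handle = create.handle };
      ioctl(fd, DRM_IOCTL_MODE_DESTROY_DUMB, &destroy);
      close(fd);
      return 0;
  }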


> contain pixel values

the frame

> eventually be written to the screen

the buffer


Other way around

The buffer sits between your pixels and the actual analog or digital output; it handles the conversion between the two formats.

Writing urandom to the framebuffer is a joy in and of itself. You actually reminded me to have users add themselves to the video and input groups (which usually does require root privileges), but that way they can then run the library without sudo.

IDK about the video group, but being a member of the input group is a bit of a security concern, since it allows the user to monitor all keyboard input system-wide and even inject their own input events. No big deal if you're playing with a raspberry pi, but not something you'd want to do on your workstation.

Interesting! I got curious, looked this case up, and found this generalization to square waves with arbitrary duty cycles: https://www.researchgate.net/publication/376689187_Limit_Cyc...

