Not the OP, but if, as they mentioned, they did it from Termux: you can install an X server and a full desktop browser like Firefox, either directly or inside a chroot/proot "container" via, for example, proot-distro [0].
There's also an app called UserLAnd, which seems to spin up some kind of VM. I don't know whether Android supports KVM or whether it's really a userspace chroot type of deal.
I toyed with it briefly; the performance was about what I expected given the hardware I was using, which is to say, not a terrible amount of overhead.
I'm using VictoriaMetrics (VM) to store basic weather data, like temperature and humidity. My initial setup was based on Prometheus; however, it seemed very hard to set a high data retention value (the default was something like 15 days, if I recall correctly).
I would actually like to store all recorded values permanently, and I could partially achieve this with VM, which let me set a much higher retention period, like 100 years. Still not 'forever' as I would have liked, but I guess my flimsy weather data setup and I will have other things than the retention threshold to worry about in 100 years.
It would be nice to learn why an infinite retention period is not allowed.
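For what it's worth, 'forever' would not be much data for a setup like this. A back-of-the-envelope estimate in Python (the one-sample-per-minute rate and the roughly one byte per compressed sample are assumptions, not measurements of any particular TSDB):

```python
# Rough storage estimate for permanent retention of a tiny weather setup.
# Assumptions: 2 series (temperature, humidity), one sample per minute,
# ~1 byte per sample after compression. Adjust to your actual setup.
series = 2
samples_per_year = 60 * 24 * 365
bytes_per_sample = 1.0

for years in (10, 100, 1000):
    total_bytes = series * samples_per_year * bytes_per_sample * years
    print(f"{years:5d} years ~ {total_bytes / 1e6:.0f} MB")
```

Even with these assumptions off by an order of magnitude, a century of data stays in the low gigabytes, so whatever the reason for the cap is, it's presumably not storage cost.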
Mimir [1] is what we use where I work. We are very happy with it, and we have very long retention. Previously, our Prometheus setup was extremely slow if you queried anything beyond the current day, but Mimir partitions the data, which makes it extremely fast to query even long time periods. We also used Thanos for a while, but Mimir apparently worked better.
I have done Mimir deployments and I am generally very happy with it. It's very cost-efficient. It does require someone who knows enough to admin it, though.
I didn't pick Thanos, because I really like the horizontally scaled blob-store architecture the Grafana crew put together.
They're out of the tech posters' zeitgeist, but AFAIK they are each still maintained and fulfilling people's needs; there's just not as much commentary or front-of-mind share.
Usually performance and storage concerns. You can set effectively infinite retention on Prometheus, but after a long enough period you're going to like querying it even less.
Most TSDBs aren't built for use cases like "query hourly/daily data over many years". Many use cases don't look further back than 30 days because they're focused on things like app or device uptime and performance metrics, and those that do run on longer time frames record (or keep, or consolidate down to) far fewer data points to keep performance usable.
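To make the "consolidate down to far fewer data points" part concrete, here's a toy sketch with pandas. It's purely illustrative; real TSDBs do this with their own downsampling or recording-rule machinery:

```python
# Toy example: roll per-minute samples up into hourly averages so that
# queries over long ranges touch far fewer points.
import numpy as np
import pandas as pd

idx = pd.date_range("2024-01-01", periods=60 * 24 * 7, freq="min")  # one week of per-minute data
raw = pd.Series(20 + np.random.randn(len(idx)), index=idx, name="temperature")

hourly = raw.resample("1h").mean()
print(len(raw), "raw points ->", len(hourly), "hourly points")  # 10080 -> 168
```

A query over a year of hourly rollups touches about 8,760 points instead of over half a million raw ones, which is the whole trick.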
As a fellow user of an Ubuntu flavour, one additional difference worth noting is the LTS support schedule, which is shorter for the flavours than for Ubuntu proper. For example, Xubuntu 24.04 is supported until 2027, while Ubuntu 24.04 is supported until 2029.
Interestingly, as I understand it, when a flavour release such as Xubuntu goes out of support, it does not stop getting the updates that are not specific to that flavour, so in a way you still have some indirect support, but it feels a bit like a gamble.
Good point. My strategy is to buy a new laptop (ThinkPad X1 Carbon) every one or two years and install the latest LTS of Xubuntu, so three years of support is long enough for me.
Over the years, I have developed notes and scripts to quickly configure a newly installed Xubuntu system on a new computer, so that everything works the same way as on my old computer. Since I stick with the same brand of laptop (ThinkPad X1 Carbon), I do not feel any difference after the configuration, except that the computer becomes more powerful. I do not want to spend my time adapting to a new system or a new computer.
Buying a new laptop so frequently may sound a bit expensive. It is not, though, if you spend as much time on your laptop as I do. A more powerful laptop means that I can finish my work (e.g., numerical experiments) in (much) less time. In this sense, my life is prolonged. This is the only case I know of where an ordinary person can effectively trade an affordable amount of money for a longer life, as I often tell my students.
If you're ever feeling adventurous, I would suggest trying out Debian with XFCE instead of Xubuntu. I recently migrated, and even though the installer isn't as pretty, I find both the installation and the distribution itself to be much more stable and lightweight without sacrificing any important functionality.
One nice benefit of optical media is that it's read-only at the hardware level by default. This makes it easy to ensure the install media does not get corrupted by overwrites or malware.
No, it's not read-only. It's just that the writes are somewhat random and under the control of God and physics rather than of human design.
(Speaking as someone with a big pile of CD-Rs in the attic, most of which have some form of corruption on them.)
I'd love to see a standard like M-Disc in mainstream use. The problem is that optical has not kept up with magnetic. M-Disc is about $100 for 100GB. In contrast, I bought a 20TB HDD for roughly $200-300, i.e. about $10/TB, making it around 100x cheaper. It's as cheap to buy a new HDD every year and make a full copy for a century as it is to buy M-Disc once.
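Spelling out that arithmetic (with the prices quoted above, which will obviously drift over time):

```python
# Rough $/TB comparison using the prices quoted in the comment above.
mdisc_usd_per_tb = 100 / 0.1   # $100 per 100 GB disc -> $1000/TB
hdd_usd_per_tb = 250 / 20      # midpoint of $200-300 for a 20 TB HDD -> $12.50/TB

print(mdisc_usd_per_tb / hdd_usd_per_tb)  # 80.0, i.e. the "~100x" ballpark
```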
I don't think that's fundamental so much as economies of scale. Optical should be cheaper per byte, more stable, and write-once, but the CD was invented in 1982, the DVD in the nineties, and we've made only limited progress since then. HDDs were on a rapid growth curve until SSDs came in. Today, SSDs are on the growth curve, and I expect they will eventually be cheaper than magnetic or optical.
Optical made advancements beyond the DVD; however, they caught on only in a limited manner. There is Blu-ray, now at 128GB with four layers. But given the amount of data we generate and consume, long-term storage is less of a concern at the consumer level, i.e. there is almost always more where that came from. Content has, simply put, been commoditized.
I never said they didn't, and indeed I cited 100GB optical media. I said they made _limited_ progress.
In 1982, a 20MB HDD was considered large, while a CD held 640MB. That gave optical an almost insurmountable advantage of more than 30x.
By the late nineties, a DVD was 4.7GB, while typical HDDs were maybe 500MB-2GB, giving a more modest advantage to optical.
In 2024, an HDD is maybe 200 times bigger than optical (20TB versus 100GB), while an SSD is maybe 10x bigger (1TB versus 100GB).
Prices are also worth looking at. 100GB media is maybe $10/disc. I remember buying CD-Rs and DVD-Rs in stacks of 20-100, at maybe 10 cents to $2 per disc, depending on type, quantity, and year. The cost per byte for optical media has hardly changed in two decades.
For some cars, getting into reverse requires pressing down on the shifter and then shifting to the same position as 1st. I think this scheme would not work in such cases.
That's a good list of reasons; however, it seems the biggest concerns arise if you run a dedicated DNS service on your network.
For a simple home network setup, as long as naming conflicts can be managed, mDNS looks quite handy (see the sketch below).
On a side note, I find .local best suited for the purpose, since from a language perspective it's easier on international users than .localhost.
The newly proposed .internal comes close, but .local still looks more semantically flexible to me, or maybe that's a cognitive bias of mine.
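As for the "quite handy" part, here is a minimal sketch of what resolving an mDNS name looks like from a script. It assumes the OS forwards .local lookups to mDNS (e.g. Avahi with nss-mdns on Linux, Bonjour on macOS/Windows), and printer.local is just a made-up hostname:

```python
# Resolve a .local hostname via the system resolver, which hands the lookup
# off to mDNS when an mDNS resolver is installed -- no DNS server needed.
import socket

addresses = {res[4][0] for res in socket.getaddrinfo("printer.local", None)}
print(addresses)
```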
They list under hardware requirements: "a powerful graphics card with at least 6 GB VRAM is recommended. Otherwise generating images will take very long."
Does anyone have any idea what "very long" would mean on a 4GB VRAM card?
"Tested on a NVIDIA GeForce RTX 3050 under Ubuntu with 4GB VRAM. (...) lowered the canvas to 2Kx2K and it seems to just about be okay. My test prompt (...) produces a picture of rocks. (...) I get a nice scene (...) Both take about two minutes."
My very rough feeling, from playing around with Stable Diffusion, is that it takes about 4x as long if it runs out of GPU memory and needs to shuttle data back and forth to system memory. There are a lot of variables though: on my 3070 with 8GB of VRAM, I can get very impressive 512x512 images in about 10 seconds with somewhat low sample counts, or I can set a higher resolution and sample count with 2x upscaling and get a really sharp image in around 2 minutes.
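In case it helps anyone estimate, below is a rough sketch of the usual low-VRAM switches using the Hugging Face diffusers library, which may or may not be what the tool above uses under the hood; the model ID and prompt are just placeholders:

```python
# Sketch of common memory-saving options for Stable Diffusion on small GPUs.
# Requires `diffusers`, `transformers`, `accelerate`, and a CUDA-capable torch.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # example model ID
    torch_dtype=torch.float16,         # half precision roughly halves VRAM use
)
pipe.enable_attention_slicing()        # compute attention in slices: a bit slower, less VRAM
pipe.enable_sequential_cpu_offload()   # stream weights from system RAM; fits small cards but is slow

image = pipe("a pile of rocks", height=512, width=512, num_inference_steps=25).images[0]
image.save("rocks.png")
```

Offloading like this is typically what turns "out of memory" into "works, but takes minutes" on 4GB cards.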
After more than 15 years as an average, default-settings Ubuntu user, I also had to move; in my case I went to openSUSE Tumbleweed (TW).
While there were a lot of Ubuntu changes over the years that I managed to adapt to, this time I realised it was too much work on my side as a user:
- for snaps, since I have data spread across different mounts, I had to apply some bindfs kludge to work around the hardcoded paths that snaps get access to.
- for GNOME, I got tired of having to install all kinds of third-party extensions for basic UI features like right-click actions.
Debian or a derivative would have been my initial choice too, but I went with openSUSE because it has strong developer backing and a very responsive update schedule. Tumbleweed is a rolling release, which means it has quite new packages, but also, if an update breaks something, you can quickly roll it back with snapper.
One downside to consider is that it is not a Debian derivative, so while most guides on the Internet can be 'ported' to openSUSE, that is a bit more involved. That said, they also have quite good docs, so many common scenarios are covered.
PS: I know you can get the same GNOME on openSUSE, but the upside is that on a rolling release you have access to more recent alternative DEs, which can replace GNOME.
[0] https://github.com/termux/proot-distro