Hacker News | new | past | comments | ask | show | jobs | submit | _peeley's comments

I'm surprised that this is being dropped while the "Send to Kindle"[0] feature remains supported. I would imagine the email servers (and whatever other behind-the-scenes cruft is required to relay files to individual Kindle devices) are a much bigger maintenance burden and "piracy" enabler than transferring via USB.

I'm a huge user of the Send to Kindle feature via my Calibre library too, so this has me pretty bummed and pessimistic for the future. I guess if the worst comes to pass, I can just look into jailbreaking or getting any of the zillion other Android-based eReaders from AliExpress.

[0] https://www.amazon.com/sendtokindle/email


send to kindle requires that you connect your kindle to the cloud, which gives it a chance to sync up all the data the device has collected while it has been offline.

it seems pretty clear that's what's really important to them - they want all that sweet sweet telemetry, and couldn't care less whether you're actually buying the books or not.


That, and they'd rather the small % of users who use the Kindle for piracy keep doing that than move to another ecosystem.


Article mentions Calibre will continue to work:

> You can continue to use Calibre to send Kindle books to your Kindle

> Send to Kindle will continue to work


Sure, I read that in TFA too. My point is that if USB transfers of Kindle eBooks are being sunsetted, I would estimate that Send to Kindle's days are also numbered.


The "Send to Kindle" feature has a hard limit of 50 MiB if done via email, or 200 MiB if done via amazon.com/sendtokindle.

My complaint about this feature is mostly that the only supported proper ebook format is now EPUB, and I frequently run into the E999 error. Sometimes I can work around it by converting the EPUB to MOBI and back, but sometimes it just keeps failing, which is frustrating.

(I run Calibre headless in Docker on a Linux box, so connecting the Kindle over USB to transfer is a chore.)


I've been quite happy with my Kobo and the choice to avoid the Kindle/Amazon walled garden.


Do you mind specifying the title of the paper? It appears there are quite a few papers[1][2][3] published about Therac-25 by an author named Leveson.

[1] http://sunnyday.mit.edu/papers/therac.pdf

[2] https://ieeexplore.ieee.org/document/274940

[3] https://ieeexplore.ieee.org/document/8102762


Gladly! It's the second of these, "An investigation of the Therac-25 accidents" (1993) w/ DOI 10.1109/MC.1993.274940.

[1] is a later version that was an appendix to her book Safeware (which I have not read), and [3] is a nice second read that follows up on [2] many years later, but it isn't quite the relentless engineering detective story that makes the original so poignant.



This is the one :-). If I recall correctly, the journal scan had some additional diagrams that some uploads have omitted, so while a PDF isn't the most ergonomic thing, it's probably the safe bet.


I'll take the opportunity to ask if there's a good way to read PDFs on a phone. It's so terrible!


Probably a Nobel to whoever solves this.


How ironic that, at the time, she was in a professorship endowed by Boeing.


Safeware is good. I read it back in the day. Several good failure analyses.


Thank you!



Very exciting! I'm particularly pleased to see the invisible encryption stuff mentioned.

One of the biggest pain points I had when setting up a self-hosted Matrix instance and getting all my devices signed in was the crypto stuff. At least in the client I use, Element, I was bombarded with tons of popups with vague "Upgrade your encryption!" prompts upon logging in the first time. The copywriting on the "Security & Privacy" page was less than helpful in illuminating what I was actually "upgrading" or setting up, since specific technical terms (e.g. recovery key/security phrase/security key) were all used more or less interchangeably. If that kind of confusion can be reduced or swept under the rug for end-users, it'd be a huge improvement to the user experience.


Yup. One of the biggest lessons of E2EE in Matrix is that the complexity is 95% user experience. However, in Element X, we've been determined to get it right - although there is still some temporary UX in there while full-blown Invisible Crypto rolls out (as it requires a breaking change to stop encrypting/decrypting with unsigned devices - the equivalent of a browser refusing to talk TLS to self-signed certs).

If you haven't seen MSC4161 (https://github.com/matrix-org/matrix-spec-proposals/blob/and...), I highly recommend it as evidence of how we've made a serious effort to fix the terminology and copy - not just for Element X but across all Matrix clients.


Standardized terminology is an awesome step. I'd also love to see some sort of standardized file format for setting up the right keys on different devices. In the past I've had annoying issues getting all the messages to decrypt on multiple devices, especially when I wasn't using the same client on every device. Honestly, though, I suspect I was doing something wrong.


There's already a standardised export format for message keys (although EX doesn't let you load/save it yet, mainly because online backup already solves most use cases): https://spec.matrix.org/v1.12/client-server-api/#key-export-.... If you enable backup on your clients, then EX at least will merge the missing keys to/from the backup. Meanwhile, the original problem of missing keys was probably unfortunately just due to bugs - although, as per https://matrix.org/blog/2024/10/29/matrix-2.0-is-here/#4-inv..., we've done a huge amount of work to improve this, and missing keys should now be really unusual (at least when due to bugs, rather than permissions or data loss or similar).

Separately, talking of standardised key formats: one of the team did a skunkworks hack last Friday to experiment with a standardised file format for user public keys - a kind of basic key transparency ledger for Matrix, to help with bulk verification within orgs.


Interesting post. I've used Laravel for a few years now for work and personal projects, and I've really enjoyed it. I tried to test out Rails to explore other MVC web frameworks, and I just couldn't vibe with it. I think the major areas in which I was incompatible with Rails were:

- Lack of dependency injection/inversion of control. I find it interesting the author lists this as an advantage. With Rails, I was always a little anxious not knowing where things were defined or being implemented.

- Validation happens on models, not requests. With Laravel, I really appreciate being able to validate pretty much any data coming into the application regardless of whether or not it ends up in the database. With Rails, I tried to look for something similar to FormRequest and its validation rules, but I couldn't find many solutions. I think it might just be one of those things that's not the "Rails way".

- Perhaps more of a Ruby issue than a Rails issue, but the dynamism of the language - especially in its type system - was a bit of a drawback for me. I really appreciate PHP 8 and newer versions of Laravel for their support in type hinting and static analysis; being able to mouseover anything and know pretty confidently what I'm working with is a huge boon in my productivity.
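To make the second point concrete, here's a rough sketch of the request-level idea in plain Python (not Laravel itself, and not its API - the `validate` helper and the `"required"`/`"max:N"` rule strings are made up to mirror what FormRequest rules feel like): validation runs against the raw incoming data, before any model or database code is involved.

```python
# Hypothetical sketch of request-level validation, loosely modeled on
# Laravel-style rule strings. Nothing here touches a model or a database.
def validate(data: dict, rules: dict) -> dict:
    """Return a dict of field -> list of failed rules (empty if all pass)."""
    errors: dict = {}
    for field, checks in rules.items():
        value = data.get(field)
        for check in checks:
            if check == "required" and value is None:
                errors.setdefault(field, []).append("required")
            elif check.startswith("max:") and value is not None:
                limit = int(check.split(":", 1)[1])
                if len(str(value)) > limit:
                    errors.setdefault(field, []).append(check)
    return errors

rules = {"title": ["required", "max:255"], "email": ["required"]}
print(validate({"title": "hi"}, rules))  # {'email': ['required']}
```

The appeal is that the same mechanism applies to any inbound payload - search filters, webhook bodies, whatever - regardless of whether the data ever ends up persisted.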

I definitely agree with the author on a lot of the Laravel tooling stuff. I've learned to just kind of ignore most of the offerings outside of the core framework. I'm sure it's all great, but there's always a bit of churn in Laravel as the author mentioned so I'd rather save myself the future heartbreak.


Pretty much my experience as a frontend guy who had to fill in after a team reduction and start working on the Rails side of things. I always felt like I just had to "know" what was going to happen. Unit tests, mocks, etc. all came back "ok", but there was always something in production that would act up. I never had this happen with the Java and PHP backends that I had to touch up when there wasn't much talent available. Before I was let go from the Rails client, I suggested that they invest in property-based testing for all code, as it was quite clear there were cases that weren't being thought of that existed somewhere in the application (undocumented, manually added, etc.).


> Lack of dependency injection/inversion of control. I find it interesting the author lists this as an advantage. With Rails, I was always a little anxious not knowing where things were defined or being implemented.

This is why I switched from Rails to Symfony, even though I hadn't had experience with those kinds of systems before. I took to it rather well.


Can you explain a bit what you mean by

> - Lack of dependency injection/inversion of control. I find it interesting the author lists this as an advantage. With Rails, I was always a little anxious not knowing where things were defined or being implemented.

Rails itself doesn't have a framework/library for DI/IoC, but you can use constructors. I understand that a lot of Rails devs won't and will just use whatever they need.


Ah, I think you're referencing the sidenotes. Sometimes the website doesn't render properly on mobile (I've tried getting click-to-expand working, but it's tricky), so try reading in desktop mode.

Regarding sidenote #1, I actually very deliberately did not mention the language or company ;) Here's the full text of the sidenote, if you're still unable to get it rendering:

> I'm not going to name the language itself, because this post would just turn into a flame war over that language specifically, and I definitely don't want to cast shade on any language/community in particular. I'm also kind of hoping that the most annoying people read this and think, "Ah, of course he's talking about that language over there! This criticism obviously doesn't apply to my perfect and favorite language!" Regardless, I feel that the thesis and content of this post applies pretty evenly to most functional programming languages.


Pulling from this published source[0]:

> As I said, the problem is a classic one; it was formulated during the war, and efforts to solve it so sapped the energies and minds of Allied analysts that the suggestion was made that the problem be dropped over Germany, as the ultimate instrument of intellectual sabotage.

[0]: https://academic.oup.com/jrsssb/article-pdf/41/2/164/4909740...


Oh yeah, I had just heard about Talos Linux the other day in this blog post[0], and it seems super interesting. If I was all-in on Kubernetes, I'd probably consider it strongly. Unfortunately, though, there's other stuff that I want to run on the machines outside of the k8s cluster (like the BIND server I mentioned in the post).

[0] https://xeiaso.net/blog/2024/homelab-v2/


Same, I originally had a bunch of RasPis in my lab running differing versions of Raspbian until I got tired of the configuration drift and finally Nixified all of them. Writing a single Nix Flake and being able to build declarative SD card installation images for all of them makes managing a bunch of different machines an absolute dream (tutorial here[0], for those interested).

The only issue is remotely deploying Nix configs. The only first-party tool, nixops, is all but abandoned and unsupported. The community driven tools like morph and deploy-rs seem promising, but they vary in terms of Flakes support and how much activity/longevity they seem to have.

[0] https://blog.janissary.xyz/posts/nixos-install-custom-image
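For anyone curious what that looks like, the flake skeleton is roughly this (a hedged sketch from memory - the hostname and local module file are illustrative, and the nixpkgs pin and sd-card module path may need adjusting for your setup):

```nix
{
  inputs.nixpkgs.url = "github:NixOS/nixpkgs/nixos-24.05";

  outputs = { self, nixpkgs }: {
    # One nixosConfiguration per Pi; "pi-dns" is a made-up hostname.
    nixosConfigurations.pi-dns = nixpkgs.lib.nixosSystem {
      system = "aarch64-linux";
      modules = [
        # Pulls in the SD-image build machinery from nixpkgs.
        "${nixpkgs}/nixos/modules/installer/sd-card/sd-image-aarch64.nix"
        ./hosts/pi-dns.nix   # per-machine config (services, users, etc.)
      ];
    };
  };
}
```

Then something like `nix build .#nixosConfigurations.pi-dns.config.system.build.sdImage` should produce a flashable image for that machine.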


I can vouch for deploy-rs. Used it for years without issues. Flake support is built in and activity is pretty good.

Disclaimer: I am a relatively active contributor


I'm really happy with nixinate, if you haven't tried it. It basically does the bare minimum, so there's no real concern over continued development.


I agree. For most people just starting out, it's a lot more worthwhile to get a single cheapo repurposed desktop or a single Raspberry Pi to run PiHole or something on and then expand from there. My homelab[0] started as a single Pi running PiHole and has expanded to four machines running everything I need from Jellyfin to Calibre to DNS, etc.

That being said, when I finally got around to rackmounting and upgrading some of the other hardware in my lab, this "beginner"'s guide was really helpful.

[0] https://blog.janissary.xyz/posts/homelab-0


Please, do not use a Raspberry Pi for a homelab unless you are 100% sure your workload is OK with it. I just sold mine after ~2 years of it sitting in a box in a closet. It's just too weak, too useless. I value the power socket slot more than the RPi. If ARM is important, especially Apple's M-series, the lowest-end Mac Mini is not that expensive. The RPi is close to zero in performance - performance-wise, it could be just some unnoticeable VM in Proxmox or another hypervisor.


I remember when I was diving deep into Docker for the first time a few years ago, I would have really appreciated seeing something like this. I wrote something kind of similar in a blog post [0], but that was only a semi-confident note to self that took quite a bit of digging through READMEs and GitHub issues. All the different container runtimes/engines/interfaces are really enough to make your head spin.

[0] https://blog.janissary.xyz/posts/docker-gripes , see `Conclusion` section

