Hacker News | sm_ts's comments

Depending on your definition of "serious", I did: https://github.com/64kramsystem/catacomb_ii-64k. I essentially don't do technical writing anymore (and I had the impression that this topic isn't generally considered interesting); however, my considerations are:

1. there are three levels of refactoring: removing the extensive (unbearable, to be honest) boilerplate that C2Rust introduces; converting the design from "C with Rust syntax" to safe Rust; and converting the design from unidiomatic Rust to idiomatic Rust

2. as another poster pointed out, for non-trivial projects, writing refactoring tooling (to remove the C2Rust boilerplate) is a must in order to perform step 1

3. the difficulty of the design refactoring (step 3) depends on the source code design; the code I worked with was relatively hard to refactor, as it was old(-school), with lots of globals in particular; the difficulty was caused by the typical freedoms that C gives and Rust doesn't (in other words, the very obvious design differences between C and Rust); somebody did a C-to-Rust port of (I think) Zstd, which is a modern codebase, and I think much easier to work with (also because it has fewer, or possibly no, external dependencies)

4. regarding code understanding: if one performs the translation in the three steps mentioned in point 1, at the end of step 2 one effectively has a safe Rust codebase, "just" an unidiomatic one

5. in terms of quantity of changes (but not time spent), it's possible to perform the bulk of step 3 with rather local thinking (understanding); of course, most of the time is still spent on major design changes

6. aside from a few steps, I was able to perform the conversion in self-contained steps, which is very good news for this type of work. Even better, it's possible (though it's a niche case) to port an SDL project by using the C library and the Rust one at the same time!

7. however, I can imagine projects like Wolfenstein 3D being very hard to port, since it's hard to port memory allocators and similar constructs

8. most important of all: just converting to Rust will quickly (even immediately) surface bugs in the source; I found approximately four bugs in the source code, including one by Carmack!

All in all, I find this tool great, but somebody needs to work on refactoring tools, and C2Rust's output must be improved in order to be usable by the general public.
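As a toy illustration of the three levels above (this is not taken from the actual port; C2Rust's real output is far noisier, but the flavor is accurate):

```rust
// Level 0: roughly what C2Rust emits for a simple C sum loop —
// raw pointers, `mut` everywhere, and redundant casts.
unsafe fn sum_c2rust(mut p: *const i32, mut n: i32) -> i32 {
    let mut total: i32 = 0 as i32;
    let mut i: i32 = 0 as i32;
    while i < n {
        total += *p.offset(i as isize);
        i += 1;
    }
    return total;
}

// Levels 1-2: boilerplate stripped and the design made safe —
// the pointer/length pair becomes a slice.
fn sum_safe(values: &[i32]) -> i32 {
    let mut total = 0;
    for i in 0..values.len() {
        total += values[i];
    }
    total
}

// Level 3: idiomatic Rust — the index loop becomes an iterator expression.
fn sum_idiomatic(values: &[i32]) -> i32 {
    values.iter().sum()
}
```

In a real port the same progression applies to much hairier constructs (globals, function pointers, manual buffers), but the mechanics are the same.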


By serious I just meant any real-world codebase at all. A full game, even if an old, smaller one, is way more than I expected anyone to have done!

Definitely will thumb through the git history to get an idea of the refactoring efforts.

Thanks a bunch!


I write a blog that at its peak had a not-huge-but-not-small-either userbase, and a massive TIL.

Blog and TIL are two radically different things.

I write the TIL for myself, and it's open, but it's pretty much useless to anybody else. This is because TILs reflect the writer's mental structure, which is very individual; the topic has been discussed on HN before.

Regarding the blog, I didn't/don't publicize it at all, but it actually got noticed by some BigCo.

It's important to ask oneself what's the purpose of having it discovered. Fame and glory :)? Career? And/or just helping people?

In the case of my blog, I didn't care about it being discovered. However, it did help people; if one cares about writing quality posts, people will find them and use them as references, in a virtuous cycle, although there is a limit - blogs do "age" with time, even if some articles stay popular.

Discoverability will come from the most popular (useful) posts being used as references around the web.

If the goal is being popular for the sake of being popular... well, then one gets into the SEO topic. I don't personally advise this, but to each their own :)

Having a more or less popular blog doesn't necessarily help with one's career. It can help as part of a portfolio, but prospective employers will either ignore it or take just a peek, unless they already know it - in that case, it's definitely a big help.


I had been using VFIO and maintaining a guide for a few years*, then I realized that VFIO kinda works, but it's not a reliable technology, and I moved (back) to dual boot.

There are very significant pain points, specifically:

1. if one reserves the video card for VFIO, it won't have any power management; this means that it will run hot while doing nothing; in order to work around this:

1a. one first has to battle with X, which has an option to not take over a card - but it takes it over nonetheless

1b. then one can give exclusive access to the graphics card driver, which can be switched out/in when starting/stopping the VM; this works, but unfortunately not reliably

2. the points above apply to Nvidia; AMD is worse, as it didn't support soft GPU reset until very recently (I think it was added in kernel 5.19 or so)

2a. this means that one starts the VM, then stops it, and most of the time the card will hang

2b. the resizable BAR functionality is not supported by VFIO (at least, as of last year it wasn't), which means one loses additional performance (I could live with it, as the loss is not significant, but performance losses compound)

The problem is that all the points above are outside the user's control; the problems happen at the driver level (if, say, there is no reset support, one can't add it out of thin air).

If one uses Nvidia and is OK with the card running hot all the time, then VFIO definitely works wonders. But this led me to abandon VFIO, as I don't want that (and the alternative of a mostly-malfunctioning driver wasn't appealing either).
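For reference, the driver switch mentioned in 1b boils down to writing the PCI address into a few sysfs files. A minimal sketch (the device address and driver names are illustrative; the sysfs root is a parameter only so the logic can be exercised against a fake tree; on a real system this needs root):

```rust
use std::fs;
use std::io;
use std::path::Path;

// Rebind a PCI device from one driver to another (e.g. nvidia <-> vfio-pci)
// by writing to sysfs. `sysfs` is normally Path::new("/sys").
fn switch_driver(sysfs: &Path, pci_addr: &str, from: &str, to: &str) -> io::Result<()> {
    // Detach the device from its current driver.
    fs::write(sysfs.join(format!("bus/pci/drivers/{from}/unbind")), pci_addr)?;
    // Mark the target driver as the one allowed to claim the device...
    fs::write(sysfs.join(format!("bus/pci/devices/{pci_addr}/driver_override")), to)?;
    // ...then ask the kernel to re-probe it.
    fs::write(sysfs.join("bus/pci/drivers_probe"), pci_addr)?;
    Ok(())
}
```

The writes themselves are trivial; the unreliability I mention is in what the drivers do afterwards, which is exactly the part a user can't fix.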

Big shame! I loved VFIO :)

* https://github.com/64kramsystem/vga-passthrough


I can only share a part, since the majority of my scripts reveal much about my system structure (I try to open whatever I can, though; the tedious part of open-sourcing a script is making it generic/configurable):

https://github.com/64kramsystem/openscripts

Missed the previous cheatsheet post :) I have a massive collection, whose entries are large enough to be books rather than cheatsheets (still, I access and use them as cheatsheets):

https://github.com/64kramsystem/personal_notes/tree/master/t...


I keep a relatively large amount of notes (1), which are fundamental to my learning.

My notes are essentially books in markdown format, which I can open with the editor/IDE I use when working on any project.

My opinions are:

- the vast majority of the effort is spent on cataloguing knowledge when adding new notes (that is, keeping each book consistently structured); this is something that no tool can do for you, and as a consequence, any tool will probably do equally well.

- a consequence of the cataloguing effort is that the brain remembers the stored topics better.

- searching is where the rest of the effort goes; I've found that as long as the books are consistently structured, and one puts in a bit of effort to make concepts easily findable, a textual search does well. A full-text search tool may help in some cases, but I rarely find the need

- there are interesting differences between doing a Google search and searching a stored concept: 1. the stored concept is already processed; 2. the search follows my brain's organization, not a search engine's

- I do only very basic cross-referencing; my method will probably be inadequate if this is a requirement
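The textual search described above needs nothing more than a toy grep over the notes directory; a sketch (the `.md` "books" layout matches my repo, but the code is generic):

```rust
use std::fs;
use std::path::Path;

// Case-insensitive line search across all .md files in a notes directory,
// returning "path:line: content" entries — a plain textual search, no index.
fn search_notes(dir: &Path, query: &str) -> Vec<String> {
    let needle = query.to_lowercase();
    let mut hits = Vec::new();
    for entry in fs::read_dir(dir).into_iter().flatten().flatten() {
        let path = entry.path();
        if path.extension().map_or(false, |ext| ext == "md") {
            if let Ok(text) = fs::read_to_string(&path) {
                for (n, line) in text.lines().enumerate() {
                    if line.to_lowercase().contains(&needle) {
                        hits.push(format!("{}:{}: {}", path.display(), n + 1, line));
                    }
                }
            }
        }
    }
    hits
}
```

In practice the editor's project-wide search does the same job, which is why I rarely feel the need for a dedicated tool.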

For things that require rote memorization (say, System-V x64 calling conventions), I use Anki.

I take notes almost only for computer/science related stuff. If I had to catalogue diverse topics, I'd probably just use subdirectories.

(1) https://github.com/64kramsystem/personal_notes/tree/master/t...


Last year I humorously told my 7yo kid that Santa Claus had died of COVID. Already before COVID, though, we had the Christmas tradition of watching Futurama's "Xmas Story" episode, pretending that Santa had been replaced by a murderous robot who kills people on Christmas day.

It's all a matter of perspective. In our case, we've replaced mythology with humor and subversion - I prefer the latter to the former, and the kid fully embraced it (making the Futurama episode our tradition was their idea) :)


The following is my personal experience (so I can't comment on the general nature of blog comments).

I have a "strictly technical" SWE blog with approximately 4k to 8k users per month, which I have been maintaining for 3-4 years.

I use Disqus for comments. I have virtually no spam (if I've had any, it's been so little that I don't remember it). The comments are generally good quality (some even improved the articles), possibly due to the nature of the blog, but they're few.

If spam and low-quality comments are a consequence of open comment systems, I'd still stick with a closed system like Disqus, as I prefer fewer but more motivated comments.

On a funny note, it took me a while to find out that Disqus had introduced Taboola ads at some point, because I use ad blockers. The moment I found out, I was so horrified that I thought somebody had hacked the blog, and panicked; it took me a bit to figure out what had actually happened :)


If you don't mind sharing, how did you obtain that many users in such a short period?


Sure! I actually thought it was a relatively small number :) I reached 8k users around 1-1.5 years ago, so the timeline is even shorter.

I did not plan for exposure (audience size); however, looking at the stats, I think there is a clear explanation.

My articles are often more or less deep dives into mainstream topics; I believe this approach has two consequences:

1. the articles get exposure because the topics are common, and frequently searched by developers;

2. by being deep dives, I think they slowly get used as references and linked by other sites.

I think this is a specific approach with pros and cons.

The pros are that it slowly grows a good audience over time, and that it tends to have a stable minimum (since the references stay in place). Also, repeated deep dives in a given field will get attention from well-known people working in it, which is very significant.

The downside is that this type of article is a pain to write (and I'm not sure I'll continue).

I had at least one article that exploded in popularity; however, while that's nice to see, it's the type of article that doesn't provide any value in the long term (on the other hand, the short term is also important; I got interviewed because of it).

I think these numbers are generally achievable if one focuses on at least one subject and dives into it. My blog is intentionally very scattered - if I had focused, say, on databases, I would certainly have multiplied the users, but that's not my end goal.

All the best! :) The blog is https://saveriomiroddi.github.io, by the way :)


I wrote long ago about writing completion scripts in any language: https://saveriomiroddi.github.io/Using-scripts-in-any-langua....

Although it's not ideal (one ends up duplicating the argument-parsing logic of the program), it's actually not complicated - just some boilerplate.
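The core of the approach is making the program answer completion requests itself; bash's `complete -C` passes the command name, the word being completed, and the previous word as extra arguments, and reads candidates from stdout, one per line. A sketch (the `mytool` name, its `--complete` flag, and the subcommand list are all hypothetical):

```rust
// Hypothetical `mytool` CLI acting as its own completion helper.
// Bash side (illustrative):  complete -C 'mytool --complete' mytool
// With -C, argv becomes: [mytool, --complete, command_name, current_word, previous_word],
// so the word being completed would be read from std::env::args().nth(3)
// in main(), and each candidate printed on its own line.

const SUBCOMMANDS: &[&str] = &["build", "clean", "run", "test"];

// Return the subcommands matching the word currently being completed.
fn completions(current_word: &str) -> Vec<&'static str> {
    SUBCOMMANDS
        .iter()
        .copied()
        .filter(|cmd| cmd.starts_with(current_word))
        .collect()
}
```

The hardcoded subcommand list is exactly the duplication mentioned above: the real argument parser already knows these names, and the completion path has to mirror them.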


> has anyone here read Rust in Action[1]?

I did, and I didn't like it much, but YMMV.

First: sadly, as is common practice in the Rust book world, this one devotes a quarter of its content to an utterly useless Rust guide. This is a marketing device (scammy, in my opinion, but that's arguable) to delude readers into thinking they can read a single book to learn Rust _and_ apply it in a certain context - but it's not possible to meaningfully learn Rust in 100 pages (not even in 400...). Those 100 pages would have been much better spent on-topic.

Rust language sections are also added to various chapters, which, again, are redundant. Chapter 10 is entirely dedicated to multithreaded Rust programming, which is not systems programming.

Ultimately, it depends on what one exactly wants to learn and what they intend to do with it:

- if one wants to learn O/S programming, this is definitely not an O/S programming book; only the last two chapters are.

- if one wants to learn interfacing with system components (e.g. the network stack), especially with the intention of just reading without actually applying, this can be a fun book.

I personally don't think that the latter is systems programming, and I find the book misleading.


Absolutely "not necessarily". SWE requires an extensive amount of learning (if you're lucky :)).

Unless one has extraordinary mnemonic capabilities, note taking is a way to structure and store concepts.

Even if this weren't the case, personal notes would still be a more efficient/effective medium than googling, for two reasons: 1. in one's notes, one stores the processed version of a concept, so it's safer/more stable/improved/cleaner/adapted, etc.; 2. personal notes follow the mind of whoever writes them, so concepts are faster to find.

