
> you have seen it all, go outside

Or "you've seen it all. Bored? Click here to let your friends know you're looking for something to do/see who else is bored". Or "Bored? X needs volunteers!" Or some other positive suggestion to try to prevent a "eh guess I'll doomscroll something else" reaction.


I’ve not been happy with the state of desktop ebook readers for a while, so I recently built a simple web-based ebook reader. It’s designed to be a quick and easy way to read books while also providing decent layout and typography.

Although it’s a website, books and reading histories are saved in the browser’s local storage and it doesn’t track anything.

Here’s the link: https://www.minimalreader.xyz


I feel like I haven't seen an anime in years that's been in the same league as Akira or the great Miyazakis (Spirited Away, Princess Mononoke, Totoro). Yes, I'd be brutal and include Miyazaki's later works in that list, from Howl's Moving Castle to The Boy and the Heron. I've seen lots of incredible animation, sure, but nothing with the same cinematic depth.

What am I missing? What should an old fart who's becoming convinced things were better in the old days put in front of himself?


Just a directory of feeds is of limited use: you don't know the signal-to-noise ratio of each feed for you.

You subscribe to tens or hundreds of feeds and, boom, you have another problem: how do you prioritize which feeds to read?

With https://linklonk.com I'm trying to solve both problems: discovering feeds to follow and prioritizing content from all feeds.

You start with content you liked: submit links you liked and you will get connected to all the feeds that included them.

For example, there are a bunch of feeds that included this link https://simonwillison.net/2024/Feb/21/gemini-pro-video/

Those are:

- https://simonwillison.net/atom/everything/ - the original blog

- https://kagi.com/api/v1/smallweb/feed/ - a feed of "small web" links, I didn't know it existed, but one of the users must have submitted this feed.

- https://hnrss.org/newest?points=1000&count=100 - HN links that got more than 1000 points

- https://lobste.rs/rss - submissions to Lobste.rs

- https://lobste.rs/t/ai.rss - submissions to Lobste.rs with "ai" tag.

The point is, if you upvote this link on LinkLonk (https://linklonk.com/item/481037215144673280), you automatically get subscribed to all of these feeds. This is a way to discover new feeds through content you liked.

Now, being connected to hundreds or thousands of feeds might seem crazy. But we have a solution to that which also relies on what content you "liked". LinkLonk knows how often you liked content from each feed you are connected to (which is essentially the signal-to-noise ratio). So it ranks new content based on that. If you like 50% of posts from https://simonwillison.net/atom/everything/ then new posts from Simon Willison will be shown above other links from, say, https://lobste.rs/rss.

The more you like, the better the ranking of fresh content becomes.

In this world you don't have to actively manage which feeds you're subscribed to; you only rate content.
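The like-ratio ranking described above can be sketched roughly like this. This is a toy illustration, not LinkLonk's actual code; the `Ranker` class, the smoothing prior, and all the numbers are made up for the example:

```python
# Toy sketch of like-ratio feed ranking: each feed's score is the
# fraction of its items you liked, and fresh items are ordered by
# the score of the feed they came from.
from collections import defaultdict

class Ranker:
    def __init__(self):
        self.likes = defaultdict(int)   # feed -> items you liked
        self.seen = defaultdict(int)    # feed -> items shown to you

    def rate(self, feed, liked):
        self.seen[feed] += 1
        if liked:
            self.likes[feed] += 1

    def score(self, feed):
        # Signal-to-noise ratio of the feed, with a small prior so
        # brand-new feeds aren't pinned to exactly 0 or 1.
        return (self.likes[feed] + 1) / (self.seen[feed] + 2)

    def rank(self, fresh_items):
        # fresh_items: list of (feed, item) pairs
        return sorted(fresh_items, key=lambda fi: self.score(fi[0]), reverse=True)

r = Ranker()
for _ in range(5):
    r.rate("simonwillison.net", liked=True)   # you like ~everything here
for _ in range(5):
    r.rate("lobste.rs", liked=False)          # mostly noise for you
ranked = r.rank([("lobste.rs", "post A"), ("simonwillison.net", "post B")])
print([item for _, item in ranked])           # post B comes first
```

The real system presumably also has to handle recency and overlap between feeds; this only shows the core "rank by how often you liked this source" idea.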


I developed a bad case of perfectionism-procrastination after working for a toxic boss.

It didn't matter how polished our product was, he'd find a way to tear it apart. When he'd have a bad day, he'd start picking apart a random team's product. "Unbelievable!" he'd say in Slack, dropping a screen recording of the app that showed something we were supposed to be embarrassed about. It could be that the app took 3 seconds to load and show fresh data from his hotel WiFi, or it could be as simple as the UI not matching some directive he gave to the UI designers who failed to update the designs or tell us about the change. He would rant about how disappointing we were. At his worst, he fired some people on the spot for a problem that wasn't even their fault.

I quickly learned that the only way to avoid that pain was to not ship anything. The people he liked most were the ones who were operating in hypotheticals: The people who made UI designs in Figma, or the architects who drew nice diagrams about how things would work, or the people who wrote long design documents to hand to other teams. They never shipped anything for him to critique, so he thought they were the geniuses of the company. As long as they could avoid having to actually implement anything, they continued to be favorites.

It took me longer than I like to admit to shake that habit when I finally escaped. I found myself delaying shipment, pivoting from design doc to design doc, and trying to operate in that hypothetical space as long as I could. Fortunately I learned to get over it, but it was scary how much that single job could shape a large part of my personality.



Smart RSS reader that, right now, ingests about 1000 articles a day and picks out 300 for me to skim. Since I helped write this paper

https://arxiv.org/abs/cs/0312018

I was always asking "Why is RSS failing? Why do failing RSS readers keep using the same failing interface that keeps failing?" and thought that text classification was ready in 2004 for content-based recommendation, then I wrote

https://ontology2.com/essays/ClassifyingHackerNewsArticles/

a few years ago, after Twitter went south I felt like I had to do something, so I did. Even though my old logistic regression classifier works well, I have one based on MiniLM that outperforms it, and the same embedding makes short work of classification be it "cluster together articles about Ukraine, sports, deep learning, etc." over the last four months or "cluster together the four articles written about the same event in the last four days".

I am looking towards applying it to: images, sorting 5000+ search results on a topic, workflow systems (would this article be interesting to my wife, my son, hacker news?), and commercially interesting problems (is this person a good sales prospect?)
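A toy version of the clustering idea mentioned above (cosine similarity over embedding vectors) might look like the following. The 3-d vectors are stand-ins for real MiniLM sentence embeddings, and the greedy algorithm and threshold are invented for illustration:

```python
# Greedy threshold clustering by cosine similarity: each article joins
# the first cluster whose representative is similar enough, otherwise
# it starts a new cluster. Toy 3-d vectors stand in for real MiniLM
# sentence embeddings (which are 384-dimensional).
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def cluster(embeddings, threshold=0.9):
    clusters = []  # list of lists of indices into embeddings
    for i, v in enumerate(embeddings):
        for c in clusters:
            if cosine(embeddings[c[0]], v) >= threshold:
                c.append(i)
                break
        else:
            clusters.append([i])
    return clusters

# Two "same event" articles pointing roughly the same way, one unrelated.
vecs = [(1.0, 0.1, 0.0), (0.9, 0.2, 0.0), (0.0, 0.1, 1.0)]
print(cluster(vecs))  # [[0, 1], [2]]
```

Grouping "four articles about the same event in four days" versus "articles about Ukraine over four months" is then mostly a matter of picking the threshold and the time window.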


A sufficient quantity of servers in dudes' basements is indistinguishable from a cloud.

Like this? http://www.casetronic.com/corporates/40-t1160.html

Not ARM-based though, but they do have a variant that can host 4 pico-itx boards: http://www.casetronic.com/corporates/42-t1040.html . I gather that one may be easier to convert to fit an ARM board, or RISC-V for that matter.


I think this title needs a '(2016)' appended to it.

Between print stylesheets and paged media, CSS has become one of my favorite ways to typeset documents where the layout of individual pages matters greatly. (LaTeX remains my preferred choice for documents where text flows between pages.)

I recently wrote up my experience typesetting my resume in HTML/CSS: https://jack.wrenn.fyi/blog/pdf-resume-from-html/


Carpet-based localisation!

I need a location tracking system for my office robot. Our carpet happens to have a pseudo-random grid pattern of four colors. I'm creating a map of this pattern; then at runtime I fuse odometry with carpet colour (detected via camera) in a particle filter to determine and track the robot's location.

Initial results are looking good :)

Some links:

How I detect carpet colour: https://nbviewer.jupyter.org/github/tim-fan/carpet_color_cla...

The particle filter: https://github.com/tim-fan/carpet_localisation

ROS package with gazebo simulation: https://github.com/tim-fan/carpet_localisation_ros

Current state of my carpet map (about 1/4 of the full office): https://raw.githubusercontent.com/tim-fan/carpet_localisatio...


Mapping shadows across the earth in real time based on location and time of day.

https://shademap.app/

Things I learned along the way:

- How slippy maps work and Leaflet.js

- Most elevation maps use data collected back in 2000! [1]

- You can perform calculations on the GPU without knowing GLSL [2]

[1] https://en.wikipedia.org/wiki/Shuttle_Radar_Topography_Missi...

[2] https://gpu.rocks/


I've seen references to this sensor before and find it a bit concerning that there's no information about how to properly use the CO2 sensor.

This sensor uses a SenseAir S8 which, like most CO2 sensors, ships with an automatic baseline calibration algorithm enabled [0]; that algorithm expects to see pure, undiluted fresh air at least once every 8 days. The only way to disable it is explicitly, through the MODBUS interface [1].

Leaving it enabled makes perfect sense in a business or businesslike environment because these environments will be completely unoccupied overnight and have air conditioning, which usually does a daily fresh-air purge, ensuring that the sensor will have regular exposure to fresh air.

However, in a residential environment, the auto baseline calibration often doesn't make sense, especially in winter. When the windows are closed and/or people or pets are around, it's very rare for the sensor to see uncontaminated fresh air, so it will see, say, 500ppm of CO2 and assume that's fresh air when it really isn't. I have measured this and it's a real problem.

In a residential environment, unless you're sure you have good, frequent exposure to pure fresh air, you're better off doing a fixed calibration once a year or so.

AirGradient also seems to be a hardware-only design. The ESPHome project [2] has great software support for a variety of sensors (including the SenseAir S8, so it should be compatible with the AirGradient hardware) as well as a very well-documented hardware project [3]. After trying my own Arduino-based software and then ESP-IDF, I find esphome much more pleasant to work with.

[0]: https://rmtplusstoragesenseair.blob.core.windows.net/docs/pu...

[1]: https://rmtplusstoragesenseair.blob.core.windows.net/docs/De...

[2]: https://esphome.io/

[3]: https://github.com/nkitanov/iaq_board


> but did rub off the rustlang community the wrong way?

I find it interesting that people bring up my time contributing to Rust as a negative thing, largely due to my former business partner misrepresenting it and falsely claiming I was kicked out of the project. To be clear, I don't think you're doing it maliciously, but it's quite weird to contribute so much of my time to an open source project and then have that used against me, as if working on a set of open source projects nearly full time for a year as a volunteer were a terrible thing to do. It's a large part of why I now avoid doing work without compensation. If people are going to value my work so little, then I'm at least going to get paid for it.

I left Rust of my own accord because I wasn't enjoying it anymore and I'd determined that it was extremely unlikely that it would turn into a career, which was part of why I'd persisted long past the point that I was enjoying it. It became harder and harder to accomplish anything of substantial value as it moved towards stability. I had strong opinions on many of the topics, and to get anything done as an outsider I had to make strong arguments and be incredibly persistent, which rubbed some people the wrong way. It was also very rough at that time being an outsider and trying to have a significant influence on the project, especially when I disagreed in many areas with the core developers. The way things were done drastically changed later on for the better. I left the project for the same reason a few people didn't like my involvement in it: I was getting burned out dealing with them and they were getting burned out dealing with me. It certainly went both ways, and the vast majority of the people involved in the project didn't have issues with me. Out of thousands of people, there were only a couple that I truly didn't get along with and literally only one person where that persists today (and believe me, I'm not the only person who doesn't click with them).

I've certainly evolved how I communicate with people online since then. I still take serious issue with people bending the truth and being dishonest / misleading, which can make arguments very heated if people aren't trying to debate based on the facts. There's a tiny minority of people that I absolutely don't get along with because they'll keep bending the truth and I'll keep pointing out that they're doing it, which they can interpret as an insult. In the context of a debate over the design of a project where the stakes are high, I'll choose not to be very diplomatic when the alternative is letting someone walk all over me with false claims. It's too tiring refuting things over and over and having facts treated as subjective things rather than being able to agree upon a set of facts and argue things based on their merits. I think the world would be a better place if people didn't tolerate this so much. I was no good at playing politics and choosing my battles carefully which played a big part in it too.

The objective truth is that I decided to leave the Rust project and community, and I removed myself as a contributor from the repository. If I recall correctly, I think someone misinterpreted what happened and posted a thread on /r/rust incredibly angry because they thought I was kicked out of the project. The people who saw the thread but weren't aware of the details assumed that it actually happened and then had a massive fight with each other about whether something that didn't happen was justified or not. The reality is that it didn't happen in the first place.

I also seriously doubt that I would be kicked out of any project for occasionally being a bit abrasive in arguments. It would be a bit ridiculous for a project to ban people from contributing for having that kind of personality or not being neurotypical. It's possible that they would have asked me to start being less abrasive in debates, sure, but they hadn't. I definitely don't think I was always fun to work with, particularly once things had soured with Mozilla, but I don't think it's entirely fair to put all the blame on me for that. I was upset about what had happened overall and that definitely influenced how I participated.

My experiences with Rust and other projects are what led to me making sure that I'd own and control the projects that I'd be heavily working on in the future so I wouldn't need to spend so much of my time debating and playing politics. When I co-founded Copperhead, I made sure that it was explicitly agreed that my open source work would remain under my control despite the company sponsoring it. It was explicit that I would own and control the OS development project. It's worth noting that there were 3 co-founders, and 2 of us believed in open source and the company building value around it rather than by selling it. Unfortunately, the 3rd co-founder left early on before shares were even divided up, and I ended up owning the company 50/50 with a narcissistic sociopath who ended up totally screwing me over. Internally, there was conflict and dysfunction long before it became public. I wanted to be free of that company for a long time, but I couldn't leave because I couldn't just abandon the people using the project and it had become too tied to the company. Eventually, my business partner decided to throw away all the agreements and just try to take over the project with threats / ultimatums. I don't think it was at all rational for him to do that. It wasn't at all in his best interest even from an entirely selfish point of view. There's absolutely no way I was going to turn over ownership / control of my project to someone that by then I considered highly untrustworthy and downright dangerous. Unfortunately, they had set up everything to be able to completely screw me over by tricking me at various points and being very strategic about how the domain, infrastructure, etc. was set up. It ended up not mattering at all that I owned 50% of the shares because they just ignored my rights as a shareholder and banked on me not wanting to spend a huge amount of money fighting them in court.

GrapheneOS is the direct continuation of my work on this, which began before Copperhead became involved in it. It had existed before it was CopperheadOS. I've learned a lot of lessons from the experience there. One of the biggest mistakes was being tricked into not being a director early on, but that was also before the stakes had become so high. It also really shouldn't have mattered to the extent that it did if the Copperhead lawyer had been at all competent and truly looked after the interest of the company instead of acting solely on behalf of my business partner. Anyway, I'd rather not be directly involved in businesses at all. I've had almost nothing but bad experiences with governments, businesses, etc. including things far worse than the stuff with Copperhead.


I have a hobby project whose goal was to follow a similar learning path. One recommendation: if you work on your own server, don't forget the software side. perf (http://brendangregg.com/perf.html) is a godsend, not just for the kernel side but for your own software as well. As part of my build I was always checking the command below:

    perf stat -e task-clock,cycles,instructions,cache-references,cache-misses,branches,branch-misses,faults,minor-faults,cs,migrations -r 3 nice taskset 0x01 ./myApplication -j XXX

Additions I have benefited from:

* I use the latest trimmed kernel, with no encryption, no extra devices, etc.

* You might want to check RT kernels; the finance & trading folks are always a good guide, if you can find their material

* I removed all the boilerplate app stack from Linux (or build a small one); I'm even considering getting rid of the network stack now for my personal use

* Disable hyper-threading: I had short-lived workers and this didn't help in my case, so you might want to validate first which setting is best suited to your needs

* Check your CPU's capabilities (e.g. with AVX2 & quad-channel memory I get great improvements) and test them to make sure

* A system like this quickly gets hot. Watch temps: even short-running tests might give you happy results, but over longer runs the temps easily hit the wall, and the BIOS will not give a fuck but only throttle


As someone with a PineTime Dev Kit on their desk in front of them, I would say that “just not sealed/glued shut” is not the entire story. Sure, you could glue it shut. But you would be stuck with somewhat bog standard software that is neither exciting nor updateable unless you hook the device up via Serial Wire Debug (SWD) to flash it – which requires yet another piece of kit to achieve. PINE64 are not joking around when they state that: “The PineTime Dev Kit [is] aimed solely for development purpose only, this is not for end user[s] who [are] looking for [a] ready to wear Smart Watch. More specifically, [we] only intend for these units to find their way into the hands of developer[s] with extensive embedded OS experience and an interest in Smart Watch development.” [1].

That being said, I have had a lot of fun learning embedded systems over the holidays and highly recommend getting a dev kit if you want a rewarding hobby that is likely to contribute to this watch coming out “for real” with a lot of fun software some time next year. Admittedly my reading list is very Rust biased, so feel free to ignore parts of it. But I highly recommend the embedded Rust “Discovery Book” [2] and “The Embedded Rust Book” [3]. Also, anything written by Lup Yuen Lee (李立源) so far has had the highest quality of all writing related to PineTime development. The only downside is that it is on Medium (yuck!), but do start with the one where he breaks his PineTime open for the first time and go from there [4]. There is also of course the PineTime sub forum [5]. Lastly, if you are new to embedded systems (such as myself) it may also be worth getting the development board that corresponds to what is inside the PineTime [6]. Happy reading and hacking!

[1]: https://store.pine64.org/?product=pinetime-dev-kit

[2]: https://docs.rust-embedded.org/discovery

[3]: https://docs.rust-embedded.org/book

[4]: https://medium.com/swlh/sneak-peek-of-pinetime-smart-watch-a...

[5]: https://forum.pine64.org/forumdisplay.php?fid=134

[6]: https://www.nordicsemi.com/Software-and-Tools/Development-Ki...


Thanks for answering.

> The specification as-written is really, really bad

We get a pretty wide range of feedback on the spec fwiw: some people seem to really like it. Others say that it’s “really, really bad” which doesn’t exactly give us much to go on...

> Synapse is kind of... and Dendrite is stalled

We have no choice but to improve Synapse currently, and while it is still quite a resource hog, it's improved by at least 3-5x over the last year. Dendrite instead has become more of an R&D project for future homeserver shapes, but it's not entirely stalled.

> The matrix.org server causes de-facto centralisation

The server has been less than 50% of the visible network for several years now - and ironically the datacenter perf issues (unrelated to Synapse) we had over the last few months have shifted that balance further - it’s about 35% and dropping. Ideally we will turn it off entirely once we have decentralised accounts.

> lack of e2ee by default is iffy

Our main project right now is to fix this. Cross-signing is mid-flight; E2E search is done; Pantalaimon (E2E compat for dumb clients) is done; remaining key distribution bugs are in flight. We're aiming to turn it on by default in Jan.

> “I couldn’t figure out how the state resolution works”

State resolution is the main technical novelty in Matrix, and yeah - it's hard, much like git's merge resolution is not exactly easy either. Unfortunately it comes with the territory; if you want to have consistent room state while replicating it over a Byzantine network of servers to stop hijacks and other abuse, you have a relatively hard problem to solve. We got it wrong the first time; the current version gets it right (as far as we know).

The spec (which is deliberately formal and terse) is https://matrix.org/docs/spec/rooms/v2

However, there are supporting documents linked from the spec to help clarify: the original spec proposal at https://github.com/matrix-org/matrix-doc/blob/erikj/state_re... and the guide at https://matrix.uhoreg.ca/stateres/reloaded.html etc

Ironically I think that state res is one of the best documented and understood bits of Matrix now (which is just as well, given how important it is).


Is there any good book for explaining all of these startup evaluation, fundraising, etc. terms, how they work, what to ask about, etc.?

I think this is the main way to go if you care about air quality because unfortunately most consumer air quality monitors can be wildly inaccurate (i.e. detect nothing at all) even in day-to-day scenarios like making toast [0].

It's not limited to particulate matter either. You can get devices with reasonably accurate basics like temperature and humidity but as soon as you get into the actual air quality stuff like CO2, things start to fall apart. There are tons of devices that use wildly inaccurate TVOC sensors or fake their CO2 measurements (they estimate it based on H2 instead).

If you want anything remotely close to accuracy, I strongly recommend buying something that actually tells you which sensors are inside it. Get datasheets for the sensors and check that their specs are reasonable for what you want. For example, if you want to actually measure CO2, make sure to buy something with an NDIR sensor like the Senseair S8 inside.

I wanted something I could plug into my home Prometheus/InfluxDB/Grafana setup so I bought [1] from Taobao. It lists all the sensors it uses, which are fairly good for the price. The device has a pretty simple TCP API that gives you JSON. Everything is Chinese but the measurements themselves are labelled in English and Google Translate works pretty well on the documentation.

0: https://www.sciencedaily.com/releases/2018/08/180822091022.h...

1: https://item.taobao.com/item.htm?id=550317428831


> useless code of conduct

# Code of Conduct

This is not a community project. This is my project. I know that will disappoint some people, but I do this for fun in my own spare time. If it stops being fun, I will stop working on it, which will pretty much kill the project. There are millions of projects in the world and the only reason they continue (if they actually do) is because the maintainers stubbornly stick at it.

With that in mind, here is the code of conduct: If it is fun for me then it is good. If it is not fun for me, then it is not good.

Things I find fun include: Bug reports that explain what you saw and what you expected to see. Suggestions for features that would make your life better. Stories of how the software so far has already made your life better. Entertaining stories of how you used the software (bonus points if it includes pictures of cats). Offers to volunteer to improve something (super bonus points if you actually improve it). Questions about how the software works. Offers to write documentation (super, executive class bonus points if you actually write some). Answering questions that other people ask (bonus points if you get the answers right).

Things I don't find fun: Drama. That is all.

To some extent, I will accept drama in exchange for money. But it has to be a lot of money. Think FANG level money. If you don't have FANG level money that you are willing to give me in exchange for drama, just don't do it -- even if you think it is the most important thing in the world.

There is no other code of conduct. I may arbitrarily declare some things fun for me and some things not fun. Please pay attention when I declare one way or the other and act accordingly.

Thank you.


I used to work at Tumblr, the entirety of their user content is stored in a single multi-petabyte AWS S3 bucket, in a single AWS account, no backup, no MFA delete, no object versioning. It is all one fat finger away from oblivion.

The most important operation in QNX is MsgSend, which works like an interprocess subroutine call. It sends a byte array to another process and waits for a byte array reply and a status code. All I/O and network requests do a MsgSend. The C/C++ libraries handle that and simulate POSIX semantics. The design of the OS is optimized to make MsgSend fast.

A MsgSend is to another service process, hopefully waiting on a MsgReceive. For the case where the service process is idle, waiting on a MsgReceive, there is a fast path where the sending thread is blocked, the receiving thread is unblocked, and control is immediately transferred without a trip through the scheduler. The receiving process inherits the sender's priority and CPU quantum. When the service process does a MsgReply, control is transferred back in a similar way.

This fast path offers some big advantages. There's no scheduling delay; the control transfer happens immediately, almost like a coroutine. There's no CPU switch, so the data that's being sent is in the cache the service process will need. This minimizes the penalty for data copying; the message being copied is usually in the highest level cache.

Inheriting the sender's priority avoids priority inversions, where a high-priority process calls a lower-priority one and stalls. QNX is a real-time system, and priorities are taken very seriously. MsgSend/Receive is priority based; higher priorities preempt lower ones. This gives QNX the unusual property that file system and network access are also priority based. I've run hard real time programs while doing compiles and web browsing on the same machine. The real-time code wasn't slowed by that. (Sadly, with the latest release, QNX is discontinuing support for self-hosted development. QNX is mostly being used for auto dashboards and mobile devices now, so everybody is cross-developing. The IDE is Eclipse, by the way.)

Inheriting the sender's CPU quantum (time left before another task at the same priority gets to run) means that calling a server neither puts you at the end of the line for CPU nor puts you at the head of the line. It's just like a subroutine call for scheduling purposes.

MsgReceive returns an ID for replying to the message; that's used in the MsgReply. So one server can serve many clients. You can have multiple threads in MsgReceive/process/MsgReply loops, so you can have multiple servers running in parallel for concurrency.

This isn't that hard to implement. It's not a secret; it's in the QNX documentation. But few OSs work that way. Most OSs (Linux-domain messaging, System V messaging) have unidirectional message passing, so when the caller sends, the receiver is unblocked, and the sender continues to run. The sender then typically reads from a channel for a reply, which blocks it. This approach means several trips through the CPU scheduler and behaves badly under heavy CPU load. Most of those systems don't support the many-one or many-many case.

Somebody really should write a microkernel like this in Rust. The actual QNX kernel occupies only about 60K bytes on an IA-32 machine, plus a process called "proc" which does various privileged functions but runs as a user process. So it's not a huge job.

All drivers are user processes. There is no such thing as a kernel driver in QNX. Boot images can contain user processes to be started at boot time, which is how initial drivers get loaded. Almost everything is an optional component, including the file system. Code is ROMable, and for small embedded devices, all the code may be in ROM. On the other hand, QNX can be configured as a web server or a desktop system, although this is rarely done.

There's no paging or swapping. This is real-time, and there may not even be a disk. (Paging can be supported within a process, and that's done for gcc, but not much else.) This makes for a nicely responsive desktop system.
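The send/receive/reply rendezvous can be mimicked in user space with a toy model of the calling pattern. This is an illustration only, not QNX code; it cannot reproduce the scheduler fast path or priority inheritance, which are exactly the parts that need kernel support:

```python
# Toy model of QNX-style synchronous message passing: msg_send blocks the
# client until the server has done msg_receive + msg_reply, so a call
# behaves like an interprocess subroutine call. No priority inheritance
# here; that part needs kernel support.
import queue
import threading

class Channel:
    def __init__(self):
        self._inbox = queue.Queue()

    def msg_send(self, msg):
        reply_box = queue.Queue(maxsize=1)
        self._inbox.put((msg, reply_box))   # hand the server a reply path
        return reply_box.get()              # block until msg_reply

    def msg_receive(self):
        return self._inbox.get()            # block until a client sends

    @staticmethod
    def msg_reply(rcvid, status, msg):
        rcvid.put((status, msg))            # unblock the waiting client

chan = Channel()

def server():
    # One receive/process/reply loop; a real server runs this forever,
    # possibly in several threads for concurrency. The reply box plays
    # the role of QNX's rcvid.
    msg, rcvid = chan.msg_receive()
    Channel.msg_reply(rcvid, 0, msg.upper())

t = threading.Thread(target=server)
t.start()
status, reply = chan.msg_send("read /etc/hosts")
t.join()
print(status, reply)  # 0 READ /ETC/HOSTS
```

What this can't show is the interesting part: in QNX the kernel transfers control directly from sender to receiver without a scheduler pass, which is why the copied message is still hot in cache.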


Yet more proof of the following:

1. It's reasonable to claim that amd64 (x86_64) is more secure than x86. x86_64 has a larger address space, thus higher ASLR entropy. The exploit needs 10 minutes to crack ASLR on x86, but 70 minutes on amd64. If alert systems have been deployed on the server (the attack needs to keep crashing systemd-journald in the process), that buys time. In other cases, it makes exploitation infeasible.

2. CFLAGS hardening works; in addition to ASLR, it's the last line of defense for all C programs. As long as there are still C programs running, patching all memory corruption bugs is impossible. Using mitigation techniques and sandbox-based isolation are the only two ways to limit the damage. All hardening flags should be turned on by all distributions unless there is a special reason not to. Fedora has turned on "-fstack-clash-protection" since Fedora 28 (https://fedoraproject.org/wiki/Changes/HardeningFlags28).

If you are releasing a C program on Linux, please consider the following:

    -D_FORTIFY_SOURCE=2         glibc hardening

    -Wp,-D_GLIBCXX_ASSERTIONS   libstdc++ hardening

    -fstack-protector-strong    stack smash protection

    -fstack-clash-protection    stack clash protection

    -fPIE -pie                  better ASLR protection

    -Wl,-z,noexecstack          don't allow code on stack

    -Wl,-z,relro                ELF hardening

    -Wl,-z,now                  ELF hardening

Major Linux distributions, including Fedora, Debian, Arch Linux, and openSUSE, are already doing it. Similarly, Firefox and Chromium use many of these flags too. Unfortunately, Debian did not use `-fstack-clash-protection` and got hit by the exploit, because that flag was only added in GCC 8.

For a more comprehensive review, check

* Recommended compiler and linker flags for GCC:

https://developers.redhat.com/blog/2018/03/21/compiler-and-l...

* Debian Hardening

https://wiki.debian.org/Hardening
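The ASLR-entropy argument in point 1 can be made concrete with a back-of-envelope sketch. The entropy bits and per-attempt cost below are illustrative guesses, not the figures from this exploit:

```python
# Expected brute-force cost of guessing an ASLR'd address: on average
# you need half of 2^bits attempts, and each attempt costs one
# crash-and-respawn cycle of the target.
def expected_minutes(entropy_bits, seconds_per_crash):
    attempts = 2 ** entropy_bits / 2          # mean tries to hit the target
    return attempts * seconds_per_crash / 60

# Hypothetical numbers: fewer usable entropy bits on x86 than on amd64
# for this style of attack, at an assumed 0.3 s per crash cycle.
for bits in (8, 12):
    print(f"{bits} bits -> {expected_minutes(bits, 0.3):.1f} min")
```

Every extra bit of entropy doubles the expected attack time, which is why the same bug takes so much longer to exploit on amd64 and why a noisy, crash-heavy attack gives monitoring a chance to fire.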


You can also use associated consts which feels nicer in my opinion:

    #[repr(C)]
    #[derive(PartialEq, Eq, Clone, Copy)]
    struct Foo(i32);
    
    impl Foo {
        const BAR: Foo = Foo(0);
        const BAZ: Foo = Foo(1);
    }
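For context, a small usage sketch of the pattern above (the struct definition is repeated so the snippet compiles on its own):

```rust
#[repr(C)]
#[derive(PartialEq, Eq, Clone, Copy)]
struct Foo(i32);

impl Foo {
    const BAR: Foo = Foo(0);
    const BAZ: Foo = Foo(1);
}

fn main() {
    // Associated consts are referenced through the type, much like
    // enum variants, and compare directly thanks to derive(PartialEq).
    let x = Foo::BAR;
    assert!(x == Foo::BAR);
    assert!(Foo::BAR != Foo::BAZ);
    // The inner value is still accessible, which matters for FFI.
    println!("{}", Foo::BAZ.0);
}
```

Unlike a Rust enum, any `i32` is a valid `Foo`, so a value received over FFI that isn't `BAR` or `BAZ` is still well-defined rather than undefined behavior.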

Congratulations, they beat you into submission.

You now have accepted a fundamentally different world where anything you like, anything you say, anyone you are with or hope to be with, anything you hope to do, have done, didn't do, every mistake or misstep or misstatement or misunderstanding or fuckup, is recorded, analyzed, classified, and mined. You're being constantly thought about, by the machines, who, if you are lucky, are only interested in making a buck off you, and if you are not lucky, have targeted you for increased scrutiny, security checks, auditing, social classification, digitized karma, and eventually, all of this will translate to a significantly different experience through life. How will it manifest? Maybe it'll be something big like being denied a loan for a car or a house. Maybe it'll be a landlord turning you down for an apartment. Maybe it'll be a constant drip of ads trying to trick you into buying something. Or maybe one day beaker53 will say something bad about the government, or get involved with a terror group, or it will accidentally look like you got involved with a terror group. Or maybe they'll just come annoy you while you're sitting down to tune your guitar with an ad on how to make yourself a better guitar player, if only you did this or that or the other thing. Or maybe they'll pester you because your friends did something or didn't do something or should do something, or how you'll look better in relation to them if you did do something.

Speak for yourself. I'm sick of being watched and being "thought about" by all these damn machines. FFS leave me alone, like it was just 15 years ago. Just 15 years ago.


> once async/await lands, I think I'll be much happier

The current futures proposal is on track for stabilization[1]. You can use futures with async/await today in nightly, and what you use will probably be exactly what lands in stable soon.

The biggest missing piece is documentation, but that's starting to improve. My go-to for examples is the futures test suite[2]. And if you want more features, futures-preview 0.3-alpha adds a bunch of useful future combinators[3]. futures-preview now just wraps nightly's std::future, which is very nice.

Except for the missing documentation, Rust's futures are looking great. You can have a play today if you feel keen to jump in. But, as you said, it will take some time for the web development ecosystem to mature around the futures API.
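To give a feel for the model (using today's stabilized std::future API rather than the 2018 nightly syntax, and a deliberately naive hand-rolled executor instead of a real runtime):

```rust
use std::future::Future;
use std::pin::Pin;
use std::sync::Arc;
use std::task::{Context, Poll, Wake, Waker};

// A waker that does nothing; fine here because we just busy-poll.
struct NoopWaker;
impl Wake for NoopWaker {
    fn wake(self: Arc<Self>) {}
}

// An async fn compiles to a state machine implementing Future.
async fn add(a: i32, b: i32) -> i32 {
    a + b
}

// Minimal "executor": repeatedly poll the future until it's Ready.
// A real executor would park the thread and rely on the waker.
fn block_on<F: Future>(mut fut: F) -> F::Output {
    let waker = Waker::from(Arc::new(NoopWaker));
    let mut cx = Context::from_waker(&waker);
    // Safe here because `fut` never moves off this stack frame.
    let mut fut = unsafe { Pin::new_unchecked(&mut fut) };
    loop {
        if let Poll::Ready(v) = fut.as_mut().poll(&mut cx) {
            return v;
        }
    }
}

fn main() {
    println!("{}", block_on(add(2, 3)));
}
```

This is only a sketch of the polling contract; in practice you'd use a runtime's executor rather than spinning like this.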

[1] https://github.com/rust-lang/rfcs/pull/2592

[2] https://github.com/rust-lang-nursery/futures-rs/blob/master/...

[3] https://crates.io/crates/futures-preview/0.3.0-alpha.10


>There's no point in the word any more.

There is a point, and your confusion is a frequent one. Let me clarify.

When an economist like the author uses the word 'monopoly', it's shorthand for 'monopoly power'. It's obvious from the context of the discussion what he means. The confusion arises when people know the definition of pure monopoly (a single supplier) but don't know what monopoly power is.

Between pure monopoly and perfectly functioning markets there is a large area with varying degrees of monopoly power.

There are different ways to quantify monopoly power. One is the Lerner index, a measure of market power. In perfect markets the Lerner index would be zero. In practice it rarely is. https://en.wikipedia.org/wiki/Lerner_index
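To make the Lerner index concrete, it's defined as L = (P - MC) / P, where P is price and MC is marginal cost; a quick illustration:

```python
# Lerner index: L = (P - MC) / P.
# Under perfect competition P = MC, so L = 0; the closer L gets to 1,
# the more pricing power the firm has.
def lerner_index(price, marginal_cost):
    return (price - marginal_cost) / price

# A firm charging 10 with marginal cost 6 has substantial market power.
print(lerner_index(10, 6))  # 0.4
# A perfectly competitive firm prices at marginal cost.
print(lerner_index(5, 5))   # 0.0
```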

The internet economy generates large companies with monopoly power through network externalities. Dominating companies create barriers to entry for newcomers. If the value comes from being connected to others, the platform that connects gets most of its value from the number of customers and economies of scale, not from being technically better. (Of course, after you have the economies of scale, you can hire the best people to keep up.)

Classical natural monopolies like railway systems, telephone networks, and electric networks are often regulated to preserve markets. Platform economies have similar attributes.


I'm using i3-gaps and polybar. My GTK theme is Arc [0] and my icon set is Paper [1].

My dotfiles are really disorganized, but here's my polybar config which I copied from some reddit post:

https://github.com/veggiedefender/dotfiles/blob/master/polyb...

You'll need fontawesome 4 and material icons

[0] https://github.com/horst3180/arc-theme

[1] https://snwh.org/paper


The yellow light provides a period when drivers know the light is going to go red soon. All else being equal, this light should stay on just long enough for drivers in properly maintained cars, driving at legal speeds, to see it and either come to a stop or pass the intersection before the light turns red. Typical formulas account for the allowed speed of cars, the expected braking capability of a car that's allowed on the road, the gradient (downhill it's hard to stop quickly!), and other factors, plus the size of the junction (a huge junction can't change as quickly because it takes time to cross it).
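One widely used version of such a formula is the ITE yellow-change interval, which combines driver reaction time with braking distance; a rough sketch (parameter values here are illustrative defaults, not a traffic-engineering recommendation):

```python
# ITE-style yellow-change interval:
#   y = t_r + v / (2*a + 2*G*g)
# t_r: perception-reaction time (s), v: approach speed (m/s),
# a: comfortable deceleration (m/s^2), G: grade as a decimal
# (negative downhill, which lengthens the interval), g: gravity.
def yellow_interval(speed_ms, reaction_s=1.0, decel=3.0, grade=0.0, g=9.81):
    return reaction_s + speed_ms / (2 * decel + 2 * grade * g)

# 40 mph is about 17.9 m/s; on flat ground this works out to
# roughly 4 seconds of yellow.
print(round(yellow_interval(17.9), 1))
```

Shortening the interval below what this kind of formula yields is exactly the revenue temptation described below: drivers who were legally committed to the junction get ticketed.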

That is, Sally is doing 40mph in a 40mph zone, she sees the light turn yellow, and she's very close to the junction so she just keeps going, everything is fine. Bob is doing 25mph in a 55mph zone, he's some distance from the lights, they turn yellow, Bob stops normally as the lights turn red.

With red light cameras providing revenue, there is an incentive to cut that time too short. Now Sally finds the light turned red earlier and she got a ticket. So next time Sally won't make that mistake: as soon as the light goes yellow she slams on the anchors, and somebody goes straight into the back of her. There's a traffic accident even though we invented these lights to reduce accidents. Oops.

Now a _good_ government would resist the temptation. They would set the formula to reduce crashes. But money is very tempting. Shaving a second off the time, getting $1M of extra revenue and oops smashing up a thousand people's cars, that's a good deal so long as you aren't paying for all those car repairs...


1. Navigate to your profile folder [1].

2. cd into the chrome directory, or create it if it isn't there

3. Inside chrome, open userChrome.css or create it if it isn't there

4. Paste the following:

  @namespace url("http://www.mozilla.org/keymaster/gatekeeper/there.is.only.xul");
  
  /* to hide the native tabs */
  #TabsToolbar {
      visibility: collapse;
  }
  
  /* to hide the sidebar header */
  #sidebar-header {
      visibility: collapse;
  }

5. Restart Firefox
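Steps 1 through 4 can be done in one shot from a shell; the profile path below is a placeholder (find your real one via about:profiles, per [1]):

```shell
# Adjust to your actual profile directory (see about:profiles).
PROFILE="$HOME/.mozilla/firefox/xxxxxxxx.default"

# Create the chrome directory if it doesn't exist (step 2).
mkdir -p "$PROFILE/chrome"

# Append the rules to userChrome.css, creating it if needed (steps 3-4).
cat >> "$PROFILE/chrome/userChrome.css" <<'EOF'
@namespace url("http://www.mozilla.org/keymaster/gatekeeper/there.is.only.xul");

/* to hide the native tabs */
#TabsToolbar {
    visibility: collapse;
}

/* to hide the sidebar header */
#sidebar-header {
    visibility: collapse;
}
EOF
```

Note that on recent Firefox versions you may also need to set `toolkit.legacyUserProfileCustomizations.stylesheets` to true in about:config for userChrome.css to load at all.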

---

1. http://kb.mozillazine.org/Profile_folder_-_Firefox#Navigatin...

