kybernetikos's comments

There are ways that people are set up psychologically that can lead to "discipline problems". A common companion of ADHD is oppositional defiant disorder (ODD) for example.

I've seen people with an instinct to resist everything when under pressure or stressed. I also know people who don't seem to have the instinct for understanding authority and hierarchy that most people do ("if I have to listen to the teacher, why doesn't the teacher have to listen to me?").

I'm sure having people with these behaviours can be adaptive and beneficial for a society in some situations (I'd bet they do better in the Milgram experiment, for example), but in much of modern life it can make everything feel like an unnecessary fight. For people dealing with such individuals, I think the key is to understand that it isn't directed at you personally. I'm not sure what the best advice is for those who have these instincts, but I'd imagine it's a combination of learning to use them appropriately and not being too hard on yourself when it goes wrong.


You may be interested in Tabletop Simulator from Berserk Games, which is on Steam and has VR mode.

I found this piece on scurvy and forgotten knowledge fascinating: https://idlewords.com/2010/03/scott_and_scurvy.htm

I don't agree with this at all. Pairing across a big skill gap isn't really pairing; it's closer to mentoring or hands-on training.

In pair programming as I learned it, and as I have occasionally experienced it, two people challenge each other to be their best selves while handing off the tasks that break flow, so that the pair as a whole stays in constant flow. When it works, it is a fantastic, productive, intense experience, and I'd agree it is more than 2x as productive. I don't believe a mismatched pair can reach this state at all.

If this is the experience people have in mind, then it's not surprising that they think that those who think it's only for training juniors haven't actually tried it very much.


Exactly this. And I did mean equal skill when I said "more than 2x", implying that you get more done together than you would working separately.

One interesting thing is that skill levels aren't really comparable or equatable. I find pairing is productive where skills only partially overlap, so that authority over parts of the implementation flows between participants depending on the area you're in.

I have some examples: I recently paired with a colleague for about 2-3 weeks nearly every day. I'm confident that what took us a month would have been near impossible working on our own.


Fine-tuning an LLM on the output of another LLM is exactly how DeepSeek made its progress. The way they got around the problem you describe was by doing it in a domain that can be checked for correctness relatively easily, so candidate training data for fine-tuning could be automatically filtered out when it was wrong.
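The filtering idea above can be sketched in a few lines. This is an illustrative toy, not DeepSeek's actual pipeline: `candidates` stands in for samples drawn from a teacher model, and the verifier here is a trivial "does the expression evaluate to 4" check in place of a real unit-test or answer-matching harness.

```python
# Toy sketch of rejection-filtering model-generated training data:
# keep a candidate only if an automatic checker confirms it is correct.

def check(candidate: str) -> bool:
    # Domain-specific verifier, e.g. run unit tests or compare a known answer.
    # Here: does the arithmetic expression evaluate to 4?
    try:
        return eval(candidate) == 4
    except Exception:
        return False  # malformed output is simply discarded

# Stand-in for samples from a teacher model (illustrative only).
candidates = ["2 + 2", "2 * 3", "2 +"]

training_data = [c for c in candidates if check(c)]
print(training_data)  # → ['2 + 2']
```

The point is that the filter only needs to *verify* answers, which is far easier than generating them, so wrong teacher outputs never reach the fine-tuning set.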


The key requirement for solving this problem is ensuring that third-party libraries get a subset of the permissions of the code that calls them. E.g. my photo editor might need read and write access to my photo folder, but the third-party code that parses JPEGs to extract their tags needs only read access, and shouldn't have the ability to encrypt my photos and make ransom demands.
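The shape of that idea can be sketched in plain Python: the editor holds full filesystem access but hands the tag-parsing "library" a read-only wrapper. All names here (`ReadOnlyDir`, `parse_tags`) are made up for illustration, and note that in Python this is only cooperative: nothing stops the library importing `os` directly, which is exactly why real enforcement needs runtime support.

```python
# Capability-style narrowing: the caller passes down an object that can
# only read, so the dependency cannot write, delete, or encrypt anything
# through it.
from pathlib import Path
import tempfile

class ReadOnlyDir:
    """A capability permitting only reads of files under one directory."""
    def __init__(self, root: Path):
        self._root = root

    def read_bytes(self, name: str) -> bytes:
        return (self._root / name).read_bytes()
    # Deliberately no write or delete methods.

def parse_tags(photos: ReadOnlyDir, name: str) -> int:
    # Stand-in for a real JPEG tag parser; it can read, and nothing more.
    return len(photos.read_bytes(name))

with tempfile.TemporaryDirectory() as d:
    (Path(d) / "cat.jpg").write_bytes(b"\xff\xd8fake-jpeg")
    print(parse_tags(ReadOnlyDir(Path(d)), "cat.jpg"))  # byte length of the file
```

A runtime with real capability support would make the narrowed handle the *only* way the library can touch the filesystem, rather than a polite convention.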

Deno took a step in a good direction, but it was an opportunity to go much further and meaningfully address the problem, so I was a bit disappointed that it only enforces restrictions at the process level.


There are kind of two different things being addressed here. The article is talking about doing this at the granularity of preventing imported library code from having the same capabilities as its caller, which requires support from the language runtime; but the comment being responded to claimed there is no way in 2025 to run a program and keep it from accessing the network or the filesystem.

That is simply not true. There are many ways to do that, several already mentioned: SELinux, seccomp profiles, AppArmor, Linux containers (whether OCI, bubblewrap, snap, AppImage, or systemd-run), pledge, and jails.

These are different concerns. One is software developers wanting to code into their programs upper limits on what imported dependencies can do. That is poorly supported and mostly not possible outside of research systems. The other is end users and system administrators setting limits on what resources running processes can access and what system calls they can make. That is widely supported, with many ways to do it; the main reason it is perceived as complicated is that software usually assumes it can do anything, doesn't declare what it needs, and figuring it out as an end user becomes an endless game of whack-a-mole with broken installs.
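The second concern can be illustrated with the closest POSIX analogue available from the Python standard library (resource limits rather than seccomp or AppArmor, so this is a crude stand-in, and it assumes a Unix-like system): the parent caps a child process's file-size limit before exec, so any attempt by the child, or by any library the child loads, to write file data is stopped by the kernel. The restricted program needs no cooperation and no code changes.

```python
# Restrict a child process from outside: with RLIMIT_FSIZE set to 0,
# any write to a regular file is refused by the kernel (SIGXFSZ),
# regardless of what the child's code or its dependencies try to do.
import os, resource, subprocess, sys, tempfile

target = os.path.join(tempfile.gettempdir(), "rlimit_demo.txt")

def cap_file_size():
    # Runs in the child between fork and exec.
    resource.setrlimit(resource.RLIMIT_FSIZE, (0, 0))

proc = subprocess.run(
    [sys.executable, "-c",
     f"import os; fd = os.open({target!r}, os.O_WRONLY | os.O_CREAT); "
     "os.write(fd, b'data')"],
    preexec_fn=cap_file_size,
)
print(proc.returncode != 0)  # the write was blocked by the kernel
```

The per-system-call and per-path controls named above (seccomp, AppArmor, SELinux) generalise this same pattern: the policy is attached to the process from outside, not written into the program.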


> every field of every type ends up being optional in practice.

This also means that you can't write a client without loads of branches, which hurts performance.

I find it odd that gRPC has a reputation for high performance. It's at best good performance, given a bunch of assumptions about how schemas will be maintained and evolved.
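The branching cost is easy to see in a sketch. With proto3 semantics, an omitted message field and omitted scalars all decode to defaults, so a correct client becomes a ladder of presence checks. The message shape below (`user`/`name`/`age`) is invented for illustration, using a plain dict in place of a generated protobuf class:

```python
# What "every field is optional in practice" does to client code:
# each access needs a branch, because absent and default are conflated.
from typing import Optional

def describe(msg: dict) -> str:
    user: Optional[dict] = msg.get("user")
    if user is None:            # field omitted by an older server?
        return "anonymous"
    name = user.get("name") or "unknown"  # "" and missing look identical
    age = user.get("age", 0)
    if age == 0:                # 0 and missing are indistinguishable too
        return f"{name} (age unknown)"
    return f"{name}, {age}"

print(describe({}))                                    # → anonymous
print(describe({"user": {"name": "ada", "age": 36}}))  # → ada, 36
```

Every one of those branches sits on the hot path of message handling, and the ambiguity between "unset" and "default value" is the schema-evolution assumption doing the damage.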


Even in 2025, gRPC is still awful for streaming to browsers. I was doing browser streaming via a variety of methods back in 2006, and it wasn't as if we were the only ones doing it then.


I'm not sure it is. Even if you think the electorate is educated and competent, it still makes sense to delegate specific decisions to a smaller set of individuals who are given the time and resources to get into the detail. It simply scales better.


My Boox Note has lasted years: it's been to the beach and on hikes, visited multiple countries, and commuted with me most work days. No sign of fragility. In that time I've had two Kindles and one Kobo break.


Is it a color Boox? Mine was color. I think those are more fragile.


Colour ones weren't around when I bought it. I had been considering an upgrade but it sounds like maybe I should wait.

