Very interesting article, as were the previous articles, thanks. On the other hand, reading it made me feel slightly depressed because my work currently is so boring compared to this and I don't have any energy left for hobby projects at the end of the day or week.
I was in exactly the same boat and started feeling burnout symptoms, so I quit my job last month for an extended leave so I can rest while also doing some hobby projects I've neglected over the years. Easily the best decision I've made in years. Of course I'm aware not everyone has the privilege of being able to just quit working, but fortunately I had enough savings, plus I made sure my expenses are quite low, so I can afford it.
Started doing 50% this month, and for the first time in years I actually feel like I get stuff done outside work. Like, not just chores and a tiny bit of fun-coding once in a blue moon, but really spending time on coding for fun, getting something done around the apartment, just going for a walk for two hours. All things I wanted to do or try but always felt too tired for, or something more important popped up. For example, just repainting part of a wall in one room got postponed for almost a year, and now that I did it, it didn't even take half an hour and actually felt good.
I'm still working three days a week, because it's not like I hated my job, but it was quite demanding at times.
I wrestled with the idea of just reducing hours for way too long. If you do too, just do it.
It’s really a problem that work tends to expand to fill all the cracks completely. You’d think the 8-hour workday would leave 8 hours for sleep and another 8 for recreation and other necessities, but somehow in my life there’s really only about 6 hours for sleep and 2-3 hours for everything else. Some of that is kids, but it’s not the entire explanation and honestly I’m not too sure where it all goes sometimes.
For me it's the kid. Kid doesn't really take a huge chunk of time, but he takes all the time between 5pm and 6:30pm, 50 mins between 6:30 and 8:00pm, 10 mins between 9:00 and 10:00, and many times 30 mins between 8:00pm and 9:00pm. So in reality I don't even have a large enough chunk of time for gaming, let alone studying and working on side projects. And occasionally work spills into the night.
Wow, that's really a lot... do you happen to live in Florida and work in the defense industry? I saw someone with 5 kids posting hiring posts a couple of years ago, so just asking.
not me! But yeah, we’re out there. The thing is, with a bunch of kids, you still only ever have one or two tiny needy ones. So it’s less of a lift than you’d think.
> plus I made sure my expenses are quite low so I can afford it.
Reaching a point where you have more money than time is surprisingly easy to do without becoming financially stable in any sense. From my experience the key discipline has to be making my own food at home; it sets a baseline for my thinking on consumption.
I felt much the same way. I wish I had more creative energy at the weekend for these kinds of things... But then I remember that I actually have a much more creative job than most people, and my creative energies are spent 9-6, Monday to Friday (I work in movies). I've found MUCH more peace and pleasure in my "consumption hobbies" like watching TV, reading, and playing linear story games that don't require me to be particularly creative, and I've learned to stop being so hard on myself for not having a hobby where I'm actively producing something, like writing a book or making a game. Weirdly, when I have a quiet period at work and am not so creative for 2-3 months, I get the urge to pick up creative hobbies again. I've come to realise that I have a measurable amount of bandwidth to create and a measurable amount of bandwidth to consume, and they seem to be seesawed together.
When entities are culled due to Potentially Visible Sets (PVS), a player moving fast enough near a vis brush may appear to pop into existence out of thin air. This occurs because of latency. A simple fix would be for the QVM code to extrapolate the player’s position into future frames to determine whether the entity should be culled.
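The suggested fix can be sketched roughly like this. This is a hedged illustration, not id's QVM code: the helper names and wiring are assumptions, and in the real game the visibility test would go through something like trap_InPVS().

```c
/* Sketch of the fix suggested above: before culling an entity via the
 * PVS check, also test the player's position extrapolated by the
 * client's latency, so a fast-moving player near a vis brush doesn't
 * see entities pop into existence. Illustrative only. */

typedef float vec3_t[3];

/* Project the player's origin forward by the estimated latency. */
static void extrapolate_origin(const vec3_t origin, const vec3_t velocity,
                               float latency_sec, vec3_t out)
{
    for (int i = 0; i < 3; i++)
        out[i] = origin[i] + velocity[i] * latency_sec;
}

/* Cull only if the entity is invisible from BOTH the current viewpoint
 * and the extrapolated one (visibility results passed in by the caller). */
static int should_cull(int visible_now, int visible_future)
{
    return !visible_now && !visible_future;
}
```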
The original Q3 netcode lacked lag compensation for hitscan weapons. As a result, players often had to "lead" their crosshair to register proper hits. This issue was later addressed by the Unlagged mod, which introduced lag compensation.
Input commands had a back buffer to handle lost input frames, but instead of applying them with a delay, the inputs were applied to entities instantly. Consequently, other players' entities sometimes appeared "choppy" when the respective player had a poor connection.
That being said, they got almost everything right:
* Snapshots with entities: Efficiently delivering game state updates to clients.
* Lerping entities: Smoothing movement on the client side using a snapshot buffer.
* Delta encoding: Transmitting only changes in entity states within snapshots.
* Huffman encoding: Compressing snapshots for reduced bandwidth usage.
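As a rough illustration of the delta-encoding bullet, here is a minimal sketch: compare an entity's fields against the last acknowledged state and send a change bitmask followed by only the fields that differ. The struct and wire layout are illustrative, not Q3's actual entityState_t / MSG_WriteDeltaEntity format.

```c
#include <string.h>

/* Toy entity state: 7 int fields, so a 1-byte change mask suffices. */
typedef struct { int origin[3]; int angles[3]; int weapon; } ent_state_t;

/* Writes a 1-byte bitmask then each changed 4-byte field into buf;
 * returns the number of bytes used. An unchanged entity costs 1 byte. */
static int delta_encode(const ent_state_t *from, const ent_state_t *to,
                        unsigned char *buf)
{
    const int *f = (const int *)from, *t = (const int *)to;
    int nfields = (int)(sizeof(ent_state_t) / sizeof(int));
    unsigned char mask = 0;
    int n = 1;                       /* reserve byte 0 for the mask */
    for (int i = 0; i < nfields; i++) {
        if (f[i] != t[i]) {
            mask |= (unsigned char)(1u << i);
            memcpy(buf + n, &t[i], sizeof(int));
            n += (int)sizeof(int);
        }
    }
    buf[0] = mask;
    return n;
}
```

The real code deltas against the last snapshot the client acknowledged, so a lost packet just means the next delta is computed from an older baseline rather than resending everything.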
I can't speak for CPMA or QL, but I can speak a little bit on Unlagged:
- With Unlagged, the server stores a history buffer (g_activeSnapshots[]) of player states. When processing an action such as a shot, it uses the timestamp sent by the client (cmd.serverTime) to interpolate or rewind states for collision checks (G_EvaluateTrajectory()). This lets the server rewind player positions to the moment a client action occurred (e.g. firing a shot).
- Unlagged reduces choppy movement by having clients use a snapshot buffer (cg.snap, cg.nextSnap) to interpolate positions with CG_InterpolateEntityPosition(). Extrapolation uses velocity vectors from the last known snapshot (currentState.pos.trDelta).
- Unlagged tries to compensate for high latency by rewinding entities to their positions at the time of the shot (ClientThink_real() and G_RewindEntities()). Uses a circular buffer (via lagometerSnapshot_t) to store past states.
- Position corrections (cg.predictedPlayerState) are interpolated over several frames using VectorLerp() to fix prediction mismatches between client and server.
- Commands are stored in a circular buffer (ucmds[]) and replayed when missing packets are detected (ClientThink_cmd()). This mitigates packet loss by buffering input commands for later execution.
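The rewind idea in the first bullet can be sketched as follows. This is a hedged illustration of the technique, not Unlagged's actual g_activeSnapshots[] code: the server keeps a small ring buffer of timestamped positions per player and, when a shot arrives stamped with the client's cmd.serverTime, rewinds each other player to (an interpolation of) where it stood at that time before running the hit trace.

```c
#define HISTORY 32

typedef struct { int time_ms; float org[3]; } pos_record_t;
typedef struct { pos_record_t hist[HISTORY]; int head; } pos_history_t;

/* Record a player's position each server frame. */
static void history_push(pos_history_t *h, int time_ms, const float org[3])
{
    pos_record_t *r = &h->hist[h->head++ % HISTORY];
    r->time_ms = time_ms;
    for (int i = 0; i < 3; i++) r->org[i] = org[i];
}

/* Linearly interpolate between the stored positions that bracket
 * shot_time_ms; falls back to the nearest record at the buffer's edge. */
static void history_rewind(const pos_history_t *h, int shot_time_ms,
                           float out[3])
{
    const pos_record_t *before = 0, *after = 0;
    for (int i = 0; i < HISTORY; i++) {
        const pos_record_t *r = &h->hist[i];
        if (r->time_ms <= shot_time_ms &&
            (!before || r->time_ms > before->time_ms))
            before = r;
        if (r->time_ms >= shot_time_ms &&
            (!after || r->time_ms < after->time_ms))
            after = r;
    }
    if (!before || !after || before->time_ms == after->time_ms) {
        const pos_record_t *r = before ? before : after;
        for (int i = 0; i < 3; i++) out[i] = r->org[i];
        return;
    }
    float t = (float)(shot_time_ms - before->time_ms) /
              (float)(after->time_ms - before->time_ms);
    for (int i = 0; i < 3; i++)
        out[i] = before->org[i] + t * (after->org[i] - before->org[i]);
}
```

After rewinding, the server runs the normal trace against the rewound positions and then restores everyone, which is why a laggy client can shoot directly at what it sees.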
Also: the server calculated projectile trajectories independently of client-side predictions. This was buggy in certain versions; we (OSP) noticed it in earlier Quake 3 releases, and it may have been fixed since.
It resulted in visible "misfires" where projectiles didn't behave as expected.
Almost 12 years ago, I wrote my first web server in C. It was for a router/firewall device running Linux From Scratch (LFS); I can't recall the kernel version (2.6.x?).
Reading this just took me back and made me realize that even though we have progressed so far with tech stacks and new technologies (LLMs), there is so much joy in just writing plain C code. To me it feels like working on a classic '68 Mustang and getting your hands dirty. What a joy!
The net code for the original Q3A client worked well for LAN but was sensitive to latency for remote play. One of the exciting changes in Quake Live was the updated net code for better remote play. Internet connections also got better in general over time.
Eh. While not great given bandwidth availability at the time, most hosted Quake servers had excellent ping and latency for most people domestically. The main issues were honestly people hosting from their local machines across the world, which obviously had very poor results.
I am a former Quake 3 champion, have a lot of experience dealing with Quake 3 servers :)
What is a former q3 champion doing these days? I'm genuinely curious. Since you are on hacker news I assume you do something with software. Any skills that you picked up back then that are useful today?
I’ll credit Quake for helping/exposing me to networking, filesystems, and shaders at a young age - one of the few games that truly encouraged (à la Quake Console) making modifications and setting booleans to configure settings. That was enough at the time to encourage me to teach myself JavaScript, PHP, and SQL - as I had some web projects in mind that I needed to execute on for the Quake clan I founded. This later became freelance, which later became a side gig through school. I was also grateful for the various sponsorships and earnings at the time so I could keep my equipment up to date and pay for storage.
Fast forward a bit, and I’ve mostly worked in SecEng at a number of great companies, most of them popular FinTech and FAANGs. I’m extremely fortunate and have been able to work at all my favorite companies. I’d like to think that, at minimum, Quake prepared me professionally to be meticulous in my craft, study human behavior patterns, and know when to take risks.
Lastly, I can (hopefully) better answer your question in a few years as I am just now working through switching careers completely to become a Motion Designer. There’s something extremely appealing to me to be able to work with 3D + Engineering, while still being able to solve problems AND feel creative at the same time.
..maybe this desire stems from Quake, maybe not. Either way, Quake has a permanent location in my heart.
I think early pc shooters (doom, quake, unreal, etc) had a whole legion of kids learn the basics of computing. typing commands in a console. Understanding latency, what servers meant. Installing mods and drivers. It was a fantastic, motivating introduction to system administration.
UT99 did it for me. The game had a built in IRC client. That exposed me to the community outside of the game itself. And then once I heard the game ran better under Linux… my fate was sealed.
I have been playing Quake since Quake World and dial up. For Quake Live I was ranked highly for a while in RA. It's true that if you were close enough to the server you could get a better experience, but in western Canada servers were limited. In the past though, even with a good ping, rails were much more likely to hit on LAN. The changes to the net code in Quake Live were a welcomed change for myself.
As a note, I recently played for the first time in over 10 years. I got banned until 2029 from the main RA3 server most people play on because I achieved 60%+ accuracy in a match and people thought I was cheating.
I also got the aim bot achievement from Steam as a result, which only 3.6% of other players have.
Back in the day I had a very tiny sponsorship with HyperGlide where I tested their products before the launch. They are credited for making the first hard surface mouse pad. I only received mouse pads, but I was happy to receive this.
What’s the story around the Unlagged mod? Was that a mod you remember? It was mentioned in the blog post; curious whether it was popular and whether it eventually got iD's attention to implement something similar officially?
One thing to keep in mind is that a lot of mods were frowned upon or flat out not allowed at all, especially, as you can imagine, in competitive play. Once PunkBuster finally got released for Q3, there was also some hesitation among ranked players about making any non-typical modifications to the game, out of fear of being banned. That said, aimbots were a huge issue in Q3, and rarely did I witness PunkBuster do its job.
iD tends to be very tight-lipped about things, from my limited interactions with them, even in person at QuakeCons. But I do know they were all hanging out on the same forums and in the same places the players were, so my hunch is they were aware of it.
PunkBuster was a PITA, but there were popular mods like DeFRaG, CPMA (Pro Mode), and RA3. There were also full-on games built on it, like Urban Terror by Sandstorm.
Most of the popular mods were rolled in as game modes though.
I mostly lived in modded Q3A - held a few world records in DeFRaG from time to time, but none that were important. I miss those days, they were the best of times :D
It's interesting that it's latency-predictive with corrections and doesn't use anything fancy like operational transformations (OT). I guess it's actually simpler: the shared state isn't a collaboratively edited document but needs an independent, ultimate source of truth; it's faster to develop and probably more performant as a shared server.
Yeah, OT would be way overkill in an FPS, and anyway I don't think OT was even out of academia in Quake 3's time. And there can't be any resolution of diverging client states; the server must be the sole source of truth, lest you get cheating and exploits.
This reminds me of the term "isochronous," which I first started hearing when FireWire came out. They allude to that in their justification for using UDP.
I'm pretty sure that isochronous data transfer is a fairly important part of the current USB/Thunderbolt spec.
Could be related to the 'disappearance' of RakNet, which at some point in the past was *the* go-to networking middleware for games (because it implemented a flexible guaranteed-vs-nonguaranteed and ordered-vs-unordered protocol on top of UDP) until the author sold the library to Facebook and essentially vanished (and Facebook letting the library rot).
There was the q3 client and the q3 server. It was point-to-point but could be hosted on the internet. There was no "middleware" except for locally-hosted aimbots for cheating by rewriting the cheater's client packets using a socks proxy... obviously not sportsman-like nor fun for anyone else.
PS: I had 7 ms ping* to a Q3 server at Stanford but I still sucked, unlike one of my college roommates who stuck to Q II and the railgun and was quite good.
* Verio/NTT 768kbps SDSL for $70/month in 2000-2001
Funny how a 7 ms ping was phenomenal 25 years ago, but now it's the minimum expectation (in WEU). I expect my ping to always be 7 or less (in CS:GO). I don't play on MM servers if the ping is more than 10.
Sort of, but not precisely. In practice, it really depended on the slowest link's maximum single-channel bandwidth*, oversubscription ratio of the backhaul(s), router equipment and configuration like QoS/packet prioritization... and then it also depended on internet traffic at the particular time of day.
In my case, I was 3-4 hops away and 34 mi / 55 km straight line distance, 110 / 177 driving, and most importantly roughly around 142 / 230 of cable distance approximately by mapping paths near highways in Google Earth. I doubt the network path CalREN/CENIC was used because it never showed up in hops in traceroute (although there was nothing preventing intermediaries from encapsulating and transiting flows across other protocols and networks), but it definitely went through PAIX.
* Per technology, zero-distance minimum delay is a function of the single maximum channel bit rate and data size + lower layer encapsulating protocol(s) overhead which was probably UDP + IP + 1 or more lower layers such as Ethernet, ATM, ISDN/frame relay BRI/PRI, DSL, or POTS modems. With a 1 Gbps link using a billion 1 Hz|baud channels, it's impossible to have a single bit packet latency lower than 1 second.
Depends on the location of you and the server. Obviously it can't go faster than the speed of light, but it can go much slower, since it doesn't travel as the crow (or photon) flies but rather in a zigzag path, and it has to traverse several photon/electron translation hops (aka routers and switches), and there's typically some packet loss and bufferbloat to contend with as well. The speed of light in fiber is slower than in a vacuum, to be fair, but the latency you experience is marred more not by the raw speed of a photon in fiber, which is still quite fast (certainly faster than you or I can run), but by all the other reasons you don't get anywhere near that theoretical maximum.
From me to Australia should be ~37 milliseconds going by the speed of light, but it's closer to 175 milliseconds (meaning a ping of ~350). Never mind the latency that being on wifi adds on top of that.
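For a back-of-the-envelope check of numbers like these, one-way propagation delay is just path length divided by propagation speed: roughly 300,000 km/s in vacuum and roughly 200,000 km/s in fiber (refractive index ~1.5). The distances below are illustrative round numbers, not measured routes.

```c
/* One-way propagation delay in milliseconds for a path of path_km
 * kilometres at a given propagation speed in km/s. */
static double one_way_ms(double path_km, double speed_km_per_s)
{
    return path_km / speed_km_per_s * 1000.0;
}
```

For an assumed ~11,000 km great-circle path, one_way_ms(11000, 300000) gives about 37 ms in vacuum and one_way_ms(11000, 200000) gives 55 ms in fiber; the gap up to a measured ~175 ms one-way comes from indirect routing, hops, and queuing.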
There are many problems with this article; it's for laymen, so it simplifies things, but it's also factually incorrect. Fiber usually runs next to roads or railways, which usually do not zigzag. Modern routers/switches have a forwarding delay of micro- to nanoseconds. The beam in a single-mode fiber does not bounce like a pinball; it doesn't bounce at all, hence the name.
Ping is largely a product of distance and the propagation speed (~200,000 km/s in fiber). It's not the distance a bird would fly, but it can sometimes be close to it. And the internet is a collection of separate networks that are not fully connected, so even if your target is in the next building, your traffic might go through another, bigger city, since that is where the ISPs peer.
You're still missing many other significant factors besides distance. Many conditions affect latency, but the minimum theoretical value is mostly dominated by the single-channel bandwidth of the slowest path technology. The other factors that reduce performance include:
- Network conditions
- High port/traffic oversubscription ratio
- QoS/packet service classification, i.e., discriminatory tweaks that stop, slow, or speed up certain kinds of traffic contrary to the principles of net neutrality
- Packet forwarding rate compared to physical link speed
- Network gear, client, and server tuning and (mis)configuration
- Signal booster/repeater latency
- And too many more to enumerate exhaustively
As such, troubleshooting and optimizing point-to-point local and internet-spanning configurations is best done empirically through repeated experimentation, sometimes assisted by netadmin tools when there is access to intermediary infrastructure.
I wasn't enumerating all sources of latency. I wrote "largely" because beyond a certain distance, all the other factors aren't really relevant in a normally functioning network (one without extreme congestion).
Due to how most FPS games are implemented you are actually seeing other entities in their past state.
What happens is the game buffers two "snapshots", which contain a list of entities (players, weapons, throwables, etc.), and it linearly interpolates the entities between the two states over a certain period of time (typically the snapshot interval).
The server might have a "tick rate" of 20, meaning a snapshot is created for a client every 50ms (1000ms / 20 tick rate).
The client will throw that snapshot onto a buffer and wait for a second one to be received. Once the client is satisfied with the number of snapshots in the buffer, it will render entities and position them between the two snapshots. Clients translate the entities from the position of the first snapshot to the position of the second snapshot.
Therefore even with 5ms ping, you might actually be seeing entities at 55ms in the past.
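The interpolation described above can be sketched like this. The field names echo the idea of cg.snap/cg.nextSnap, but the types are illustrative, not any engine's actual structs.

```c
typedef struct { int server_time_ms; float org[3]; } snap_ent_t;

/* Render an entity at a position blended between two buffered snapshots,
 * based on where render_time_ms falls in the interval between them. */
static void lerp_entity(const snap_ent_t *prev, const snap_ent_t *next,
                        int render_time_ms, float out[3])
{
    float t = (float)(render_time_ms - prev->server_time_ms) /
              (float)(next->server_time_ms - prev->server_time_ms);
    if (t < 0.0f) t = 0.0f;      /* clamp: never extrapolate here */
    if (t > 1.0f) t = 1.0f;
    for (int i = 0; i < 3; i++)
        out[i] = prev->org[i] + t * (next->org[i] - prev->org[i]);
}
```

With a 20-tick server the two snapshots are 50 ms apart, so even at 5 ms ping the rendered position trails the server by roughly one snapshot interval, which is where the "55 ms in the past" figure comes from.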
Maybe they mean solid open-source netcode? Call of Duty famously used Quake 3 net code for a while. That engine was a great core to build from, since Carmack made such solid stuff.
Yeah I know, I work for Sledgehammer Games on cod. It's not that it's been swapped out, it's been rewritten and rewritten many times over the years. What I meant to imply (but didn't succeed in saying) was that for systems that were in q3a, if you knew the q3a code, then you would feel a sense of familiarity with those parts of the cod code.
I occasionally do some random spelunking for fun and while there's practically no code left, there are still a few small remnants like the odd comment or function signature (or partial signature) that are still there, and some that even date back to Quake 1!
In another article about the network abstraction layer "NetChannel" [1], the author describes that ACK happens by sending "Sequence" numbers in the headers.
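The sequence-number scheme can be sketched minimally as follows: every packet header carries the sender's outgoing sequence plus the highest sequence it has seen from the peer, which implicitly acknowledges everything up to that number. The struct here is illustrative, not Q3's actual netchan_t layout.

```c
typedef struct {
    int outgoing_sequence;   /* stamped on packets we send          */
    int incoming_sequence;   /* highest sequence received from peer */
} netchan_t;

/* Returns the sequence to stamp on the next outgoing packet. */
static int netchan_next_send(netchan_t *ch)
{
    return ++ch->outgoing_sequence;
}

/* Returns 1 if the packet is new (accept), 0 if old/duplicate (drop). */
static int netchan_process(netchan_t *ch, int packet_sequence)
{
    if (packet_sequence <= ch->incoming_sequence)
        return 0;            /* out of order or duplicate: discard */
    ch->incoming_sequence = packet_sequence;
    return 1;
}
```

Because every snapshot is a delta against an acknowledged baseline, a sender that sees its peer's acknowledged sequence fall behind simply deltas from an older state; nothing needs to be retransmitted.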
The grammar is pretty bad, but good for a presumably non-native English speaker. Overall it's a good article, especially when you consider that it's free. Game network code, predictive motion, and rollback or input-delay-based approaches are all fascinating to me. Dealing with network latency in a way that feels real-time and doesn't break the game is a hard problem.
Agreed. I wrote this a long time ago without anybody to help proofread.
Sometimes I want to take it down out of shame but at the same time it is a good reminder of where I am coming from and a motivation to keep on improving.