Counter-Strike Global Offsets: reliable remote code execution (secret.club)
186 points by stefan_ on May 14, 2021 | 91 comments



Another example of why app-level security is so important. Why shouldn’t games allow arbitrary code execution? It only matters because the access space for programs is still so broad.

People complain when applications on Mac request permission to access files, but that makes such a huge difference. It’s time for kernel level permissions to be standard on desktops.


> It’s time for kernel level permissions to be standard on desktops.

And then the games industry starts deploying vulnerability-as-a-service kernel modules in order to bypass all of those controls.

https://mobile.twitter.com/TheWack0lian/status/7793978407622...


That ship has long since sailed. All the major anti-cheat systems do this now, and have for years.


And still cheaters plague most public servers of the games I played.

On the other hand: how can you spot a cheater whose cheat obviously runs at kernel level, with anti-cheat software running above that?


By realizing that a purely technical solution is not enough.

- make it easy to report suspected cheaters

- include footage from the cheater, not prerendered but as client state

- create a way for experienced players to judge, from multiple angles, with proper randomization and incentives

- require an ID for creating an account; ban persons, not accounts


"include footage from the cheater, not prerendered but as client state"

Don't you need that deeper system integration to achieve that in a way it can't be negated by the cheat?


You can record (clientside and server side) the inputs from multiple players and play them back deterministically; if the suspected cheater's inputs don't line up with how the game played out for the majority of the server, that's a pretty clear smoking gun.
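
To make that concrete, here's a rough sketch (all types and the hashing are made-up placeholders) of re-running a deterministic, fixed-timestep simulation from recorded inputs and comparing it against the per-tick state hashes the server saw live:

    // Rough sketch with made-up types: re-run a deterministic, fixed-timestep
    // simulation from the recorded inputs and check that it reproduces the
    // per-tick state hashes the server recorded during the live match.
    #include <cstddef>
    #include <cstdint>
    #include <vector>

    struct TickInput { std::uint32_t buttons; float yaw, pitch; };

    struct GameState {
        // Placeholder; a real engine would hash positions, health, RNG state, ...
        std::uint64_t h = 0;
        void Step(const TickInput& in) { h = h * 1099511628211ULL ^ in.buttons; }
        std::uint64_t Hash() const { return h; }
    };

    bool ReplayMatches(GameState state,
                       const std::vector<TickInput>& recorded_inputs,
                       const std::vector<std::uint64_t>& live_hashes) {
        for (std::size_t tick = 0; tick < recorded_inputs.size(); ++tick) {
            state.Step(recorded_inputs[tick]);
            if (state.Hash() != live_hashes[tick])
                return false;  // inputs don't explain what the server observed live
        }
        return true;
    }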

I think it is ok to be judicious with timeouts, kicks and even bans, so long as you provide a reasonable, accessible and well-scaled dispute mechanism.


> You can record (clientside and server side) the inputs from multiple players and play them back deterministically;

That requires a deterministic simulation, and also trusting the client's inputs. Cheats that just send damage/kill messages are stopped by a server just checking "is it possible for X to do Y". Stuff like this is often explainable by unfortunate lag spikes on the "cheater's" side (and most instances of someone rotating 90 degrees and headshotting you in one shot are really just bad networking combined with an edge case in prediction code).
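
A server-side check of that sort is cheap to do; here's a minimal sketch (all names and constants are made up):

    // Minimal sketch (made-up names and constants) of a server-side
    // plausibility check: reject a claimed hit if the shooter is firing
    // faster than the weapon allows, is out of range, or has no line of sight.
    #include <cmath>

    struct Vec3 { float x, y, z; };

    float Distance(const Vec3& a, const Vec3& b) {
        const float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
        return std::sqrt(dx * dx + dy * dy + dz * dz);
    }

    // Placeholder; a real server would ray-trace against the map geometry.
    bool HasLineOfSight(const Vec3& /*from*/, const Vec3& /*to*/) { return true; }

    bool HitIsPlausible(const Vec3& shooter, const Vec3& target,
                        float weapon_max_range, float min_shot_interval,
                        float seconds_since_last_shot) {
        if (seconds_since_last_shot < min_shot_interval) return false;  // firing too fast
        if (Distance(shooter, target) > weapon_max_range) return false; // out of range
        return HasLineOfSight(shooter, target);                         // shooting through walls?
    }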

Meanwhile modern cheats are emulating input devices and living as kernel extensions. A game that looks at client inputs is doomed to fail.

> well-scaled dispute mechanism.

The primary issue with dispute mechanisms is social engineering. If you have a cheat forum with 300 banned people, and one figures out what to say to be unbanned, they share the results with the others and everyone gets unbanned. A stream of 300 tickets in a game like Fortnite, particularly if they're written in multiple languages and spread across a few days, would go unnoticed.

It's a tough problem. I'm not claiming kernel-level anti-cheat is a _good_ thing, but it seems to be better than any of the other options available right now.


Forgot that last point: a good dispute mechanism is absolutely needed.


Cheaters are (for the most part) not a problem in games using the more "modern" kernel-mode anti-cheats. There are always going to be some cheaters in PC games, but for games using solutions like EAC, the majority are banned incredibly quickly. It's always a game of cat and mouse. The _only_ solution to cheating is hardware control (see consoles or game streaming); everything else is a sliding scale.


Yeah, and it sucks. Why does the games industry think it can own our machines for the sake of some ineffective anti-cheat solution?


Because it is effective. Fall Guys is an example of a recent game that was absolutely riddled with cheaters on PC. The problem practically disappeared overnight when they introduced EAC.


Because, really, the cheaters are just that bad.


There are other viable options for dealing with cheaters, such as (but not limited to) community servers with robust moderation controls and modding capabilities, letting communities experiment with bespoke solutions out of band from the developer's release cycles.


This couldn't be more horrifying.


Yeah, the way macOS handles permissions can be annoying to some, but the way it places an additional barrier between code execution and the ability to read/write files beyond the application's own files is generally welcome, imo.

If you accept the admin prompt in Windows, that code can just do anything. In macOS, it'll run the code the first time but prompt again if it starts trying to access the core file system.

You may, as a developer, expect that some simple app you've downloaded should be able to run in a sandbox quite simply. A random app seemingly unnecessarily trying to access your user/system files is worth a second thought.


I haven't dug very much into it (mainly because I couldn't find much detailed/technical information), but Windows 10 has this "core isolation" feature, which I believe tries to achieve system/app isolation by virtualizing everything.

It used to be a little buggy, but I now have it enabled all the time and don't notice any performance issues - on a powerful laptop with reasonable usage.

If anyone has more technical information about the feature I'd be very interested to see more about it and what it exactly does - is it really effective or just a lure?


It runs the kernel under a hypervisor. Technical details: https://www.microsoft.com/security/blog/2020/07/08/introduci...


> Why shouldn’t games allow arbitrary code execution?

Well, for one thing: to maintain the integrity of online games, protect the personal data the game itself has access to, and help prevent theft of in-game assets... so I think there’s no strong win here.

Yes, it would be good if CS:GO exploits couldn’t reach out and hit your Bitcoin wallet or what have you, but you can’t really just give up entirely either. There are, by design, going to be things the game can access, or at least indirectly access, that can be sensitive. (At least in the context of online games like CS:GO.)

> People complain when applications on Mac request permission to access files, but that makes such a huge difference. It’s time for kernel level permissions to be standard on desktops.

I agree that improving security is good. However, the way macOS has implemented security has a heavy focus on not trusting the apps themselves and in some cases not even the user. While it is a reasonable principle on some levels, I suspect this mentality actually has more to do with greater control over the ecosystem and the security benefits are a nice bonus. Apple isn’t really alone in this either.

For this particular issue, the issue isn’t with trusting CS:GO itself; it’s with isolating the game so that it doesn’t have more privileges than necessary. That’s actually a less difficult problem to improve on, with less annoying implications. A Linux solution might be shipping an SELinux profile or Firejail configuration with your app to help lock it down. In this case you still trust the app, and good developers can improve their security easily. No annoying prompts, though it would not protect against malicious developers at all.
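
For concreteness, a minimal Firejail profile along those lines might look like this (the directives are real Firejail options, but the whitelisted path is made up and a real profile would need per-game tuning):

    # Hypothetical profile a developer could ship alongside their game
    caps.drop all
    noroot
    nonewprivs
    seccomp
    private-dev
    private-tmp
    # only the game's own data directory is visible in $HOME
    whitelist ${HOME}/.local/share/mygame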

I still have a lot of mixed feelings, because I think concepts like secure boot and the principle of least privilege are good ideas. However, they have been met with scorn in their introduction to the desktop because traditionally, on desktop platforms, you want to trust the software vendor anyways. The trust boundaries are changing, consolidating trust mostly with the OS vendor and device vendor. I realize this long-held tradition of trusting desktop applications has been crumbling under pressure, and fewer people than ever trust the software they depend on.

But I still think that macOS is basically neglecting the demographic that absolutely wants granular and intensive security, but wants more control over how it is enforced and against what. In general, that demographic is being forced into the same pigeonhole as everyone else whether they like it or not, much to the detriment of their ability to hack, repair and otherwise use the device, maybe in ways the vendor objects to. Apple famously believes in a world free of porn. Why should the company that makes my phone be hindering the types of content I can consume on it so stringently?

I am also feeling more strongly lately that we’re headed towards less and less sustainable conundrums as the grip tightens. One key insight is that back in the day, you generally trusted software vendors because they mostly had incentive to not betray that trust, and you used websites because you wanted to, not because you felt you had no other options. Even if we make security a priority and drop as many privileges as possible, will that solve the problem? We hate the software we use. Everyone wonders why the web is so slow and disliked; it’s because every time you scroll, Twitter spends 100ms blocking the main thread on a modern multicore processor doing god knows what while the browser desperately tries to hide this fact, even when you don’t scroll enough to cause any virtualized viewport changes. Is it ads, telemetry? Who knows. It’s a mess. Nearly every webapp and probably many mobile apps do things we hate. And they all seem to pull this elaborate bait-and-switch on users where it’s nice and user-friendly and developer-friendly until they no longer need your help to thrive.

Users should have ways to run apps they don’t trust that will help protect their security and personal information. Security by default is a laudable goal, too. But if the future is going to be OSes trying to protect us against apps we hate, that hate us, and that we feel compelled to use due to social pressure, legal requirements or business needs, I think we’re really just headed to a generally dark place.

I’m not saying I don’t understand the legitimate motivations here. It just also is convenient that part of this model of security necessarily takes control away from the user, and it seems like the endgame is that we hate everything and everything hates us. It is possible we’ve run into a situation where a technical solution is simply not enough to untangle the mess. I have no idea what is.

For a long time I was on the side of sympathizing with OS vendors. They were just doing what the security paranoids wanted, right? Well, kind of. But the implications of what it really means for computing feel bleaker every time the subject comes up.

And yet are we safe? The threats just keep adapting. I’m sure at one point many thought “as soon as those pesky buffer overflows are gone, computers will be safe.” It’s good that the security baseline on a technical level keeps improving, but I really think we need to find a way to have healthier relationships between end users, device vendors, and software vendors...


TLDR, but I think ancestor was talking about games being permitted to execute arbitrary code, as opposed to privileged users being able to inject code into a game process…


> TLDR,

It's not like I can make you read my entire post, but I find it irritating that you went out of your way to let me know that you didn't actually read my post, but still felt compelled to respond anyway.

> but I think ancestor was talking about games being permitted to execute arbitrary code, as opposed to privileged users being able to inject code into a game process…

I never said anything about privileged users being able to inject code into a game process.

I am retorting that the idea that RCE exploits would be OK if we just sandboxed everything is not true. It would improve things, but online games like CS:GO necessarily have access to some things that would be considered sensitive, and thus we still have to care about RCEs.

I went on further to suggest that Apple-style ecosystem control is unnecessary to solve this particular problem. App vendors themselves could opt into various sandboxing measures to limit their own privileges to improve their own security posture. Apple's control protects against untrustworthy applications, and in this case, CS:GO is not an untrusted application.

And then, finally, in around 600 words, I began talking about how I have grown to dislike the walled garden approach because we're building a world where we all hate and distrust all of our software vendors and they hate and distrust their users. Building an ecosystem that is "secure," but that nobody actually likes or wants to partake in.


Remember “Don't be snarky”.

I appreciated your comment. Maybe saying RCE is “ok” was a bit of hyperbole; it depends on the game. In MMOs people could steal credentials, etc.

SELinux, AppArmor, and MAC are what I mean. Windows has mandatory integrity control, but it’s nowhere near as comprehensive as Linux, macOS, or iOS.

Windows does have some opt-in isolation controls as well, like you suggested, but their recently released security features like App Guard are based around virtualization, which I think won’t work for the average program due to performance and overhead.


Try writing less next time.


Glad they published the process used to discover the exploit. Interesting to follow along with how the weakness was found and the relative simplicity of the actual exploit code.

Can't help but feel like this entire class of problem should be avoided with modern tooling. Even in a relatively unsafe language like C++, a static analyzer could have flagged an unchecked array access.
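
For illustration (nothing to do with the actual CS:GO code), this is the kind of pattern a bounds check or analyzer is meant to catch:

    #include <array>
    #include <cstdio>

    void ReadEntry(const std::array<int, 16>& table, int attacker_controlled_index) {
        // Unchecked: table[attacker_controlled_index] silently reads out of
        // bounds if the index is attacker-controlled and never validated (UB).
        // Checked: validate explicitly...
        if (attacker_controlled_index < 0 ||
            attacker_controlled_index >= static_cast<int>(table.size())) {
            std::puts("rejected out-of-range index");
            return;
        }
        // ...or use .at(), which throws std::out_of_range instead of corrupting memory.
        std::printf("value = %d\n", table.at(attacker_controlled_index));
    }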


How can people contact big corporations and get no response? Are the messages not being read?

Or is there a weird culture of fear where you’d rather silently try to fix it without acknowledging that it exists, because acknowledging a problem means taking some legal responsibility? It wouldn’t be the first instance of US law having weird effects on human behavior but it does seem a bit far fetched.


I think game development companies are forced to take a very different approach from normal companies. Game developers face frequent abuse from customers due to the passionate feelings involved with such an interactive medium; programmers will get hate mail (generally vaguely directed, at least) and female artists will get stalked pretty often - it's honestly a pretty toxic community.

Look, for example, at what happened to Hello Games after the release of No Man's Sky - lots of people felt entitled to send death threats and demands to the developers - the UK authorities were regularly involved in threat assessment[1]. If some LoL players[2] are willing to swat the other team after losing a match, then they're quite willing to go to extreme lengths if their favorite character gets nerfed.

I think this frequently toxic community interaction really impacts how game development studios interact with press and the public. The folks reporting legitimate bugs might just end up being buried in an avalanche of "BUG: My DPS damage is too low" that overwhelms an already tetchy CS department.

1. https://www.theguardian.com/games/2018/jul/20/no-mans-sky-ne...

2. I'd just like to reinforce that most of the people in a community can be fine - it's the crazies that start a lot of these problems, most folks won't swat you for beating them at LoL.


This is definitely true in general, but it’s worth noting that Valve pays HackerOne to validate submitted issues before they have to look at them. If it can’t be reproduced as a security issue, it will likely not get through.


For Valve at least, my experience is that that's pretty common. With the way they structure their company, everyone works on what they want to. The result is that nobody chooses to work on the boring stuff, so it doesn't happen.

I used to run some of the official servers for CS:GO and L4D2 on behalf of Valve. Mostly it was great but communication was always a challenge.


> Or is there a weird culture of fear where you’d rather silently try to fix it without acknowledging that it exists, because acknowledging a problem means taking some legal responsibility?

That is probably one aspect of it, for sure. I think the gaming industry itself might have problems with sharing because 0days in games could really mean no more purchases, at least for a while, and even then you have lost momentum from your marketing. Also, for non-0days, just bugs in general, look at No Man's Sky or even more recently Cyberpunk 2077 - so much social backlash.


> How can people contact big corporations and get no response?

In general, the bigger the company, the more bureaucracy and layers of indirection you have to get through to reach someone who even knows what you're talking about. This is regardless of whether the message is "bad security bug in your product" or "want to buy a million of your product".


> How can people contact big corporations and get no response? Are the messages not being read?

Customer support is seen as an expense. So automate as much of their job as possible and then... stop paying for customer support.


Valve is a big company, but it has very small teams for CS:GO and Dota relative to the industry.


Valve should be kicked off HackerOne. They seem to be abusing the service to trick researchers into submitting vulnerabilities without providing any sort of compensation. Does anyone here work at HackerOne?


More likely is nobody at Valve cares enough to actually monitor or respond. There's plenty here about their bizarre corporate structure which really falls flat at critical times.


I would go with this assumption. Best to assume neglect over malice.

This behavior was seen before, too, with the devs behind the new Gmod in Source 2 (the Alyx engine). They spent months trying to get in touch about access, and it came down to an employee who ended up getting fired for neglecting their responsibilities. Now everything seems to be working out and open tooling is being developed.


Does it really make a difference, with respect to whether they should be kicked off HackerOne? Incompetence and disorganization is no excuse for what is essentially (if not literally) wage theft.


My experience with Valve on HackerOne was when I discovered an easy way to lag (and, if exploited, crash) a Dota 2 server. It was nothing technically fancy, but something that made it easy for griefers to ruin the game for everyone. I think even spectators of the match could trigger the functionality.

They turned me down for any reward - I don't remember the exact wording in their policy, but I think it was a generic exclusion of DoS and DDoS vectors. I thought I should've still been eligible, as my exploit was simple and only required a low input frequency. The bureaucratic process of the whole thing scared me off doing anything more, and I was happy enough that they would fix the vulnerability and I could get back to enjoying the game.

I think it's five years later and the problem still exists, and regularly reduces the quality of my games.


It would be awful if after 5 years someone independently rediscovered this vulnerability and tweeted reproduction steps.


At some point it is more responsible to make the exploit public and spark public pressure for a fix, than to let Valve ignore exploits while each day is a roll of the dice on a bad actor discovering them.

If they don't feel the pain, again and again for things like this then there will never be any impetus to change.

This isn't your responsibility of course. I'm just saying that disclosure at this point would be anything but irresponsible. Maybe not in the middle of a big tournament, I suppose.


No experience with Valve, but they are not alone in being cheap:

I submitted the bug where I proved you could make predictions about a password in Microsoft just by using Ctrl + arrow keys.

It wouldn't have been much anyway, but I wouldn't have been surprised if they'd sent me some swag or something - instead I was surprised by how short the thank-you mail was ;-)

(They said it wasn't a security issue but at least it was fixed in the next release :-P)

Edit: I later found a reliable way to run the encryption tools the correct way with the tooling in Azure Information Protection that still leaves the files unencrypted (so simple it can happen by accident - that's how I found it - and useful for data exfiltration with plausible deniability). Besides that, the integration of Information Protection in SharePoint is so extremely broken that, depending on how you log in, SharePoint will easily serve you the files unprotected.

Between not finding the correct way to report it and the very "meh" feeling from my first find, I only tried to report those once or twice and then gave up.


If you check their hacktivity you'll see that the previous negative press spurred them into action:

https://hackerone.com/valve/hacktivity?type=team


These researchers could have earned plenty from making cheats instead. Would make sense for Valve to pay these types to fix their software instead of breaking it.


They pay, eventually. They're particularly slow for game client exploits. Much quicker for server-side issues.


Yeah, it is certainly a wide spectrum. Server-side fixed in days to months, but these game client bugs have been sitting for years.

I had one server bug in HackerOne’s “mediation” for over a month, after 4-6 months of no reply, which did nothing until I mentioned it to a Valve employee on another report they had actually responded to.

Disappointing for a program that has paid $1m+ in its lifetime.


I like Valve's approach. Who is the real bad guy here?

Why should they prioritize people who break their hard work and coerce them into paying for protection?

I might be biased; I've always wanted to work at Valve Software since HL1.


> Why should they prioritize people who break their hard work and coerce them into paying for protection?

Because they voluntarily joined responsible disclosure programs and promised rewards?

Their hard work wasn't good enough to survive in the extremely hostile world out there. The fact is online gaming is a form of distributed computing and so people are exposed to network attacks. By failing to prioritize security they are putting their customers at risk. They can either start taking this seriously or watch people make money off of the vulnerabilities in their hard work.


> I like Valve's approach. Who is the real bad guy here?

Valve are, leaving millions of users at risk, despite having been informed of an issue.


Because that's what they signed up for?


See it from Valve's perspective.

They've had pre-launch source code leaked by hackers and gameplay ruined by hackers; they probably don't like hackers.

I wouldn't be surprised if they signed up purely out of spite to tarpit and frustrate hackers.


That is such a bizarre take.

Valve's stance on hackers won't change the fact they exist and are out there doing their thing. Valve's stance can now be the deciding factor between exploits being disclosed to them for a reward, or sold on black markets as cheats.

If you don't want people "breaking your hard work", don't release software. However, being on hackerone can help alleviate the negatives.


Sounds like a great idea: now, instead of issues getting fixed, they get sold and not reported.


> they probably don't like hackers

Many of their breadwinners were made by 'hackers'.


Meanwhile they spend billions on VR nobody wants, controllers nobody wants, a Linux OS nobody uses, smart TV integration nobody cares about, etc.

CS:GO at this point is a cheater's game.

Because Valve's servers aren't reliable at all, some people choose to play on third-party servers, and not surprisingly a lot of players ended up with the third-party client mining bitcoin on their computers [0]. Many blame the service, but it's obvious to me that it's Valve's responsibility.

0: https://www.theverge.com/2013/5/2/4292672/esea-gaming-networ...


The Valve Index and Half-Life: Alyx were huge accomplishments that did more for VR as a medium than anything before (and probably after) it. Also Valve's efforts towards making Linux a gaming platform were immensely successful. Proton is amazing and most people that use it agree.

Valve is not investing their resources in stuff you care about, but they are not objectively wasting their time. They are not just building another online game (which everyone is doing), but innovating in and developing areas that no one else can get away with wasting money on.

I am thankful for all the things you list, some of which have not turned out great, but that is imho just a sign that they are truly doing bold/economically dangerous things, for the good of the medium.


It's a matter of opportunity cost and risk management. Maybe those efforts were great for "VR as a medium" or for Linux enthusiasts, but not for the CS community, which was foundational to Steam's success.


This is a bit of a selfish view. I loved their controller, I love Steam Link allowing me to seamlessly play from any room (with whatever controller I want), I use Linux, and I think VR is something everyone should try at some point. I know that you maybe couldn't care less about these things, but don't exaggerate the argument by saying "nobody".

Besides, CS started with 3rd-party servers and they have always been there. Many people have never transitioned to the official matchmaking, and I don't think that's necessarily a bad thing.

I just think that your comment is a bit too salty by HN's standards.


Good for you, mate, that you enjoyed all that. I'm just pointing out that they do a poor job on CS security, anti-cheat, and communication in contrast to the company's other efforts like these, and it's CS that made Steam possible, not that other stuff.

3rd party servers are different from 3rd party services.


I think it's great that they take a server-side approach to anti-cheating. The more intrusive methods of other vendors reliably need administrative access to the user's system, and there's no way I'll allow anything game-related to run with root privileges. CS:GO is one of the best games on the market in this regard. The CS playerbase regularly complains about opponents cheating when it's much more likely that the other player is just much better.


That's sadly true indeed.


I use Linux and appreciate that Steam has support for it. Fact: their Linux OS is the primary reason why I use Steam.


I don't know about you, but I use Linux. I was pleasantly surprised to find CS:GO running smoothly on it.


Valve doesn't even fix VAC bypasses you can find on GitHub.


Valve as a company needs fundamental change. It’s pure luck that CS and Dota 2 are still #1 and #2 on Steam. I don’t even know if I would call Valve a game developer anymore. They are mostly a service provider who happens to own some profitable IP.


> Valve as a company needs fundamental change.

Counterpoint: No, they really don't.

> They are mostly a service provider who happens to own some profitable IP.

They are a staggeringly profitable service provider who occasionally has their employees work on games as a hobby. They just are not a game dev company anymore, and that's okay.


Luck...? Dota 2 is an extremely well-managed esport. Have you seen literally any of their productions on YouTube?


They're a technology company. Most of their games in the past ten years have been about showcasing tech, e.g. Source, Steam, trading/market, VR...


Gamers don't care about invasive anti-cheat because they already use Microsoft Windows. They have nothing to lose.


Fun writeup, thanks. FYI to the author, on mobile there is a horizontal overflow due to one image being too wide (the dereference illustration). I really like the blog’s style otherwise.

There is one part I’m not clear about. Presumably the vector to exploit this is a malicious server, not a proxy. So if you control the server, why do you need to set two Content-Link headers to trick the parser into thinking it’s empty? Could you use a legit file and a fake content header with extra (empty) bytes? Or does that have too many side effects due to the client actually parsing the file rather than ignoring it?


The idea is that you don't want to fill the memory chunk with real data from the server but keep the contents that happen to be in there from previous allocations. Then, when the proxy later retrieves the contents of the memory chunk, they get essentially a random block of old heap contents. That's valuable because those will contain (vtable) pointers that reveal where in memory the game was loaded - most executables nowadays use ASLR, so a prerequisite to an exploit is having some sort of information leak like this that can tell you the base address.
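
Roughly, once you can read such a leaked pointer, defeating ASLR is just arithmetic; a sketch with a made-up offset (in practice you'd get it from reversing the binary):

    #include <cstdint>

    // If the leaked heap garbage contains a vtable pointer, and reversing the
    // binary tells you the fixed offset of that vtable within the module, the
    // randomized load address falls out by subtraction. Offset is made up here.
    constexpr std::uintptr_t kVtableOffsetInModule = 0xBEEF10;

    std::uintptr_t ModuleBaseFromLeak(std::uintptr_t leaked_vtable_ptr) {
        return leaked_vtable_ptr - kVtableOffsetInModule;
    }
    // Every other function/gadget address is then base + its known static offset.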


after reading the comments, I think the takeaways for games are

1. sandboxing - restrict read/write/execute folder/file access at kernel level

2. sandboxing - prevent games from executing arbitrary code; only the game binary itself is allowed to be executed (see the sketch after this list)

3. anti-cheat software should not be installed on the client's computer

4. implement anti-cheat on the server side instead of relying on client-side anti-cheat software
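
As a very rough illustration of point 2, a game or its launcher could at least drop the ability to spawn other programs before entering its main loop. This sketch uses libseccomp and only covers the "launch other binaries" slice, not code injected into the process itself:

    // Sketch using libseccomp (link with -lseccomp): block execve/execveat so a
    // compromised game process can't launch other programs. It does nothing
    // about in-process shellcode; it's one narrow layer among several.
    #include <cerrno>
    #include <seccomp.h>

    bool DenyExec() {
        scmp_filter_ctx ctx = seccomp_init(SCMP_ACT_ALLOW);  // allow everything else
        if (ctx == nullptr) return false;
        const bool ok =
            seccomp_rule_add(ctx, SCMP_ACT_ERRNO(EPERM), SCMP_SYS(execve), 0) == 0 &&
            seccomp_rule_add(ctx, SCMP_ACT_ERRNO(EPERM), SCMP_SYS(execveat), 0) == 0 &&
            seccomp_load(ctx) == 0;
        seccomp_release(ctx);
        return ok;
    }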


For 3 and 4, good luck. The verdict is in, and server-side anti-cheat has decidedly lost.

The issue here is that cheat prevention is defense in depth. Drop a couple layers and your cheater rate rapidly rises. Doesn't take much from there to a dead game.


Valve's handling of security reports is abhorrent, and I absolutely believe they should be kicked from any platform that supports good-faith reporting of bugs until they decide they're ready to play.

Sup Gaben?


Backend dev here. How can one learn these types of skills? Recommendations?


Corelan Cybersecurity Research[1] is a good starting point. There haven't been any new articles in ~5 years, but some of the techniques in the linked article are covered. The /r/netsec subreddit is also a surprisingly good source of security info. A big part as well is just experimenting on your own. Vulnerability research is a very "creative" field, so being able to think outside the box is just as important as having technical knowledge.

[1] https://www.corelan.be/



Script kiddie at best.

If you want to learn from the best, and it has to be YouTube, go watch Gynvael's channel.


I have always been afraid of this. Never playing a community server again.


Is this the same exploit that was reported to Valve in like 2018?


No, the site has a timeline that says it was reported in January, but Valve didn't fix it until they got publicly shamed for not fixing the 2 year old RCE.


Weird, when I read the article earlier the page content stopped after the 'convars as a gadget' section. That did seem like an abrupt ending


Burying the lead here!

> in over 4 months, we did not even receive an acknowledgment by a Valve representative. After public pressure, when it became apparent that Valve had also ignored other Security Researchers with similar impact, Valve finally fixed numerous security issues

Also, can we all agree "DD/MM/YYYY" is the worst possible date format?


MM/DD/YYYY is clearly worse. Not only is it only used in a tiny number of places, making it more likely to cause confusion, but it doesn’t go consistently from smaller unit to bigger unit or vice-versa.

YYYY-MM-DD is ideal, of course, due to easy sorting and no ambiguity over the order of MM and DD, but I’d take DD/MM/YYYY over MM/DD/YYYY any day.
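
The sorting point is literal: a plain lexicographic string sort of zero-padded ISO dates is already chronological. A tiny example:

    #include <algorithm>
    #include <string>
    #include <vector>

    int main() {
        std::vector<std::string> dates = {"2021-05-14", "2019-12-31", "2021-01-02"};
        std::sort(dates.begin(), dates.end());  // plain string sort
        // dates is now {"2019-12-31", "2021-01-02", "2021-05-14"} - chronological.
    }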


+1 on ISO 8601, but MM/DD/YYYY does have a certain logic to it. The month essentially tells you what time of year it is. "Want to go on vacation on July 10th?" carries less mental overhead than "..on the 10th of July", unless it's currently July (i.e., you just say "..on the 10th"), because "in July" is more meaningful than "on the 10th of any random month".


Is it? Either we're asking to pick a date (in which case it's just "in July" or "mid July" or "in a couple of months"), or you're asking about a specific date which works for you, in which case both components are equally important.

I could make similarly silly arguments in favour of dd/mm/yy - if you're writing it down on a paper list and dates aren't yet final, it's easier to cross out the first item (you'll have space in the margins to put in the new value) than the middle item, and for that kind of short-term projection, the day is more likely to change.

The real reason we find a system preferable is just familiarity, no matter what post-facto justifications people try bringing up.


Preface: Obviously this is all nitpicking, for fun; you should use whichever system you like.

> Is it?

Yes, I'll stand by this one. Imagine it's January 1st, and I ask you to "pack for a camping trip some day in July this year" vs "…on the 10th of some month this year." While technically the latter narrows it to fewer days, the former is much more actionable because there's a lot less variability.

In my mind, it's similar to putting the main point up front when you're writing an email, and then clarifying, instead of giving all the context and working up to the request. The latter feels logical, like you're telling a story, but the former is so much easier to follow, because you don't have to re-interpret the context when you get to the punch line.

Here's a quick example: "I can't wait until you can come try it out. Such a smooth ride, tons of acceleration, and great handling. You're going to love riding my new tricycle." From a literary standpoint, it's a nice subversion of expectations, which makes it more likely to stick in your mind. But from a functional perspective — that is, communicate the ideas in an efficient, reliable, boring way — this would be much more effective, "I can't wait until you can come try out my new tricycle. Such a smooth ride, tons of acceleration, and great handling. You're going to love riding it!"

Back to dates. If you asked me, "Are you free on July 10th?" I would assume you mean the current year. When you ask, "Are you free on July 10th, 2025?" I do the same thing, then hear the year and have to forget that and start over.

The real thing to optimize for is useful information up front. Can I start drawing conclusions as soon as you start giving information, or do I need to wait until the end? The programming equivalent is something like short-circuiting conditionals. I personally know I won't be able to take vacation in July this year, so I don't need to pay attention to the date once I've heard the month; but if the date came first, then I'd have to remember it in case the month turned out to be June, September, etc.

It's not a lot of overhead, and the date usually comes all at once in a row so it's not a very big difference either way, but I do think there's an objective difference.

> The real reason we find a system preferable is just familiarity

I grew up with mm/dd/yyyy. It's the most familiar/comfortable and, due to practice, the easiest for me to parse. I still think yyyy-mm-dd is better, for the same reasons as above (and because it's an international standard, and I like standards).


I suspect that's at least partly because it's what you're used to. "July 10th" makes me pause for a second, as I think you're suggesting 'sometime in July', then you add a specific date to it.


The only problem with YYYY-MM-DD is that most of my non-digital use cases don't require the year, and often not the month. Truncating information from the front isn't as intuitive.


> Also, can we all agree "DD/MM/YYYY" is the worst possible date format?

No, we cannot. Both DD/MM/YYYY and YYYY/MM/DD are consistent and usable (the latter preferred by developers because of its natural sortability).

The one format we can all agree is the worst possible is MM/DD/YYYY.


> Also, can we all agree "DD/MM/YYYY" is the worst possible date format?

No, we can't. That format is properly ordered, unlike MM/DD/YYYY which makes absolutely no sense.


I’m ready for us all to standardize on the DATEINT format.


I read the whole article to find out how Valve responded and it was EXACTLY what I expected. They've done this before and they'll do it again. Toxic



