It seems to me this accounting is missing the significant financial contribution many companies make to open source by way of paying employees to contribute, either part time or full time.
Many of the arguments here are premised on the idea that the only valid model for open source funding is a donation model. But if a company believes they get more benefit by contributing via employment instead of patronage, I don’t think it’s fair to say they’re exploiting open source.
(Disclaimer: I work at a BigCo where my role sometimes involves open source and standards bodies)
Indeed, but most of these projects are really big & popular projects that the organization depends upon. New/Upcoming projects will still be left out due to this.
But we should also note that individual donations can make a huge difference too, if a proper system is created around them.
I'm really happy to see this. As fumbled as the launch was, I played it on PC and found the world and its characters to have more depth than I'd personally experienced in a video game before.
What broke my heart was to feel viscerally how much time and love the developers had poured into the game, dedicating years of their lives, only to have it all overshadowed by a disastrous, rushed launch. I for one am glad to see the game getting a shot at redemption, and I'm glad more people are getting to experience what IMO is a masterpiece of digital storytelling.
There's a kernel of a good game in there. But the big issue in my opinion is that outside of the main story missions, the world feels completely dead.
Night City is huge and you can tell that there was a lot of passion and care put into the design, at least from an art perspective. But the player has no incentive to explore it.
I see three main reasons for this:
1. You don't stumble upon content organically. Every event is premarked on your map and will remain frozen in time until you get there. And I mean every event. Even firefights between different NPC factions. They don't happen randomly/organically. They happen at premarked spots on your map. You're a tourist with an itinerary in an amusement park as opposed to a person navigating a living, breathing city.
2. There is zero reason to go anywhere without a quest marker. You aren't rewarded for peeking into random corners, for going off the beaten path, like you are in a game like Skyrim. If you do, you are met with an eerie sort of emptiness. Just lots of RNG NPCs, and a feeling that you were never meant to go there.
3. If you follow the markers to the side content, it's pretty hit or miss. There are a few extended, multimission side quests that are legitimately interesting. But the regular side missions (called "gigs") all sort of blend into one. They sort of remind me of COD missions: Exposition is thrown at you like a firehose with a monologue, then you kill a dozen NPCs, and only after the smoke settles do you realize that one of them happened to be the big baddy. And then the mission just sort of ends. And none of the characters involved ever come up again beyond a passing mention in conversation (usually). Sure, the story is there, but it doesn't feel like it has any relation to the gameplay.
So what you're left with is a city that is nothing more than a backdrop for a fairly linear main quest, which, while decent, isn't all that remarkable on its own.
So much was left on the table. I hope CD Projekt make a sequel where they take the time to really flesh out the world. They could have a classic on their hands if they do.
This for me is one of the reasons why Disco Elysium is such a masterpiece. There is no corner of the game without a secret, yet it never feels like the gameplay is forced into those corners; the mystery appears naturally and implicitly in everything.
Very different type of game compared to CP77, but it has become my gold standard of RPG storytelling.
Yeah, I almost completely agree. I feel almost no need to do any side quests (some of which are impossibly difficult until you've reached the endgame). The thing that bugged me more than anything was that the controls just weren't very tight. Driving felt a bit too floaty, and the gun controls on PS5 were lacking (compared to, say, GTA5). There are glimmers of greatness; I just think they bit off more than they could chew.
> I hope CD Projekt make a sequel where they take the time to really flesh out the world.
Sounds more like they should go back to making more expansions (they had planned three, it's been cut down to only one) to really flesh out this game, rather than dumping it and starting from scratch. Seems like a waste otherwise.
What the world needs is random NPC events and non-quest things to find in the world.
- Overhear conversations about someone going to do a clandestine deal, then rob that person and the person they are trading with for way more money than a typical NPC and a neat item.
- Random gang-v-gang or gang-v-police turf war battles should pop up. NPCs should notice when you're 'helping' and not attack you.
- NPCs should occasionally try to rob you or sell you sketchy stuff.
- NPCs should be doing interesting things, but all their animations are just 'walk around' or 'stand still'. You should be able to follow an NPC to get a peek at their life, so they don't seem like arbitrary robots.
- Hidden caches with actually-useful or amusing items to encourage exploration
- Build out spaces that aren't used in quests, so exploring doesn't feel like you've just given yourself spoilers.
Couldn't all of that be added via expansions, DLC, or patches (what's the difference, anyway?) without changing the underlying engine? It's just strange that a game that was worked on for so long (including after its release) is seemingly unsalvageable. I would think they could expand more.
All of that should have been integral to the initial version of the game. I'd balk at a game that lets you play singleplayer but you have to pay extra if you want the world to feel real.
Sure, but 1) you'd have to pay for a sequel anyway and 2) there's still the possibility of free DLC/patches.
My main question is: does anyone know whether 2077 is just so technically broken, systemically, that it's impossible to patch all of that in? Was it improperly designed from the ground up to support its ambitions? Because if not, throwing away all the work done on the existing game in order to start a new one seems like a waste and another potential boondoggle.
I'm very sure that CDPR never wanted to create a GTA full of random crap happening on the street with no story connections.
I think a lot of you misunderstood what kind of games CDPR makes. You also didn't get randomly mugged on the streets of Novigrad just to be hunted by the guards.
The open world was an albatross they should have ditched in favor of the main quest; the campaign is cinematic and awesome, the combat is bomb, but the world is... well, you traverse it, and that's about it.
To be honest, I was so amazed by the feel of the world that I completed the main quest (and all side quests) by either walking or driving there. I rarely used any form of fast travel to skip the in-between.
I think the architecture, the roads, and the districts are very well designed. The quests take you to interesting places and show interesting perspectives on life in such a city.
What the game lacks to some degree is more emergent, observable behavior from the NPCs moving throughout the city. There should be more things happening, be it quarreling couples in 100 variations, a car crash being locked down by police, or emergent behavior between police, gangs, emergency services, hookers, and onlookers. Create situations that emerge from the systems of the city.
I played the same way. Well, I took a motorcycle because the traffic was always way too slow. Sometimes I'd stop to Judge Dredd some random Crims if it was on the way. The world looked pretty, but the traffic and pedestrian AI was just so weirdly silly that it wasn't exactly an immersive experience. Relaxing to blast through traffic at 120kph on an Akira bike though.
The world is full of stories small and big - from environmental storytelling to chunks of short gigs, each showing a slice of the world and the people in it.
It's kind of weird to skip the part that this game is best at (just like Witcher 3 was).
> I'm really happy to see this. As fumbled as the launch was, I played it on PC and found the world and its characters to have more depth than I'd personally experienced in a video game before.
I think it pales in comparison to many other games. Including The Witcher 3, another CD Projekt RED creation.
However, I also agree that it got unnecessary hate. Sure, it was a bad decision to launch it on the previous console generation, and there were some bugs, but I didn't encounter any show-stopping bugs playing on day 1, _on Linux_ even!
Yeah I gave it a second try this past summer after buying it at launch, playing three hours, then getting frustrated at the crashes and general gameplay. Once I got past the prologue, I think the game really started to get interesting. I sort of had the impression it was a "cyberpunk" GTA before, but the depth of the story as it progressed gradually won me over.
I tried it, expecting it to be awful. I actually really enjoyed it. It's not an FPS and they probably over-hyped it but the world was very engaging and the story I believed. To be honest, I'd have preferred it without the guns, but maybe that is just me…
Those first 15 hours or so were amazing - the rest of it less so, but still great. If they had just kept up the prologue's level of storytelling, it would have been mind-blowing. Still a great game.
Hey folks, I've been working with wycats and my colleague Chirag Patel on Starbeam for the last few months. We prefer to work in the open, but this repo is still a work in progress and is definitely not ready for public consumption, other than very very very early adopters. We were planning to reveal more in June when things were more buttoned up, and certainly didn't expect to see this on the HN front page!
That said, we'd be happy to answer any questions people have about Starbeam, with the understanding this repo is still very WIP.
A little more context: I've been working on the design of this reactivity system in some form since roughly 2018 as part of the Ember framework.
The original design of the reactivity system made its way into Ember Octane as the "auto-tracking" system, and was fairly exhaustively documented (as originally designed) by @pzuraq in his excellent series on reactivity[1]. He also gave a talk summarizing the ideas at EmberConf 2020[2] after Octane landed.
Unfortunately, there's no good way to use the auto-tracking system without the Ember templating engine and all of the baggage that implies. But there's nothing about the reactivity system that is fundamentally tethered to Ember or its templating system in any way!
For various practical reasons, Tom, Chirag and I had a need to build reusable chunks of reactive code[3] that work across many frameworks. We liked the foundation of the auto-tracking system enough to extract its ideas into a new library, decoupling the auto-tracking reactivity system from Ember.
PS. In case you're wondering, I expect Ember to ultimately migrate to Starbeam, once it's in solid production shape and the dust is shaken off.
[3]: I would have called them "components", but that would make it seem like they have something to do with creating reactive output DOM, which is not what I mean.
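To make the extracted idea concrete, here's a minimal, hypothetical sketch of tag-based autotracking. `Cell` and `Memo` are made-up names for illustration, not Starbeam's or Ember's actual API: reads register dependencies with whatever computation is currently running, writes bump a global revision counter, and a memo recomputes only when a consumed tag has moved past its last run.

```javascript
// Minimal, hypothetical sketch of tag-based autotracking (illustrative only;
// Cell/Memo are made-up names, not Starbeam's actual API).
// - Reading a Cell registers it with the currently running computation.
// - Writing a Cell bumps a global revision counter ("dirties its tag").
// - A Memo recomputes only if a dependency's tag moved past its last run.

let currentTracker = null; // collects dependencies while a Memo runs
let revision = 0;          // global monotonic revision counter

class Cell {
  constructor(value) {
    this.value = value;
    this.tag = revision;
  }
  get() {
    if (currentTracker !== null) currentTracker.add(this); // record dependency
    return this.value;
  }
  set(value) {
    this.value = value;
    this.tag = ++revision; // invalidates any Memo that consumed this cell
  }
}

class Memo {
  constructor(fn) {
    this.fn = fn;
    this.deps = null;     // Set of Cells consumed during the last run
    this.computedAt = -1; // revision at which we last computed
    this.last = undefined;
  }
  get() {
    const stale =
      this.deps === null ||
      [...this.deps].some((cell) => cell.tag > this.computedAt);
    if (stale) {
      const prev = currentTracker;
      currentTracker = new Set();
      try {
        this.last = this.fn();
        this.deps = currentTracker;
        this.computedAt = revision;
      } finally {
        currentTracker = prev;
      }
    }
    return this.last;
  }
}

// Usage: the memo caches until a dependency changes.
const name = new Cell('world');
const greeting = new Memo(() => `Hello, ${name.get()}!`);
console.log(greeting.get()); // "Hello, world!"
name.set('HN');
console.log(greeting.get()); // "Hello, HN!"
```

Note that nothing here mentions templates or the DOM, which is the point: the dependency-tracking core stands alone, and any renderer can sit on top of it.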
Like @tomdale, I'm surprised to see this on the front page of Hacker News!
I have been working on more detailed documentation of the overall model. It's also still in flux (and not ready for release), but you can check it out at https://wycats.github.io/starbeam-docs/.
When I joined LinkedIn 4 years ago, the JavaScript infrastructure was a mess. We had a tool called `just` that integrated with the existing Java infrastructure, including Node version and dependency management, but it was a bit of a hack that predictably became a core part of the foundation.
Worse, onboarding took longer than it needed to, because every new hire had to learn the alternate "just" universe. I can't begin to count how much time was wasted because people subconsciously fell back to the "normal" npm or Yarn commands they were used to, mysteriously putting their project into a bad state, costing them and support engineers valuable time.
We've been able to replace that legacy infrastructure with Volta and Yarn, and working on JavaScript projects finally feels like a first class citizen.
What I appreciate about Volta is two things:
1. It treats the _entire_ toolchain (including npm or Yarn) as important, not just the Node version.
2. It's fast. Like, really fast. Tools like nvm never previously felt slow to me, but now that I've used Volta for the better part of a year, going back feels like they're running in molasses.
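For readers who haven't used Volta: the toolchain pinning works roughly like this. The version numbers below are just examples, but `volta pin` and the resulting package.json `"volta"` key are the real mechanism.

```shell
# Pin the project's entire toolchain, not just Node. Run once in the repo;
# Volta writes the versions into package.json, so every contributor with
# Volta installed gets exactly these versions when they cd in.
volta pin node@16.14.2
volta pin yarn@1.22.18

# package.json now carries the pinned toolchain:
#   "volta": {
#     "node": "16.14.2",
#     "yarn": "1.22.18"
#   }
```

Because the pin lives in package.json, it travels with the repo, which is what eliminates the "wrong Node version" class of onboarding bugs described above.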
It’s strange this article effectively pins the attack on Iran, but doesn’t mention Stuxnet/Olympic Games, a malware attack on Iran that destroyed nearly 1,000 of their centrifuges[1].
The article didn't just talk about the Saudi attack. It gave an overview of the history of similar attacks. Stuxnet was arguably the most significant of those, and it's especially relevant since they're blaming Iran for this attack.
Personally, I don't want Iran to develop its nuclear progam; however, there are legitimate concerns that such cyber attacks set precedent, or even could expose security flaws, that might instigate retaliatory attacks towards the US and its allies.
I came to the comments to see if anyone else was bothered by this glaring omission. The lede is that this cyberattack didn't just delete files; it caused real-world physical damage.
> The attack was a dangerous escalation in international cyberwarfare, as faceless enemies demonstrated both the drive and the ability to inflict serious physical damage.
Can you reasonably call this an "escalation" after Stuxnet? Maybe? But not even mentioning Stuxnet? At best, that seems like poor reporting.
I think there is an important distinction here. See:
> It was meant to sabotage the firm’s operations and trigger an explosion.
This seems quite different than Stuxnet, which was very carefully created for a specific purpose of damaging centrifuges at an Iranian nuclear enrichment facility in a very quiet manner, not causing explosions.
It's strange because one is seen as a retaliation and the other as preemptive aggression.
Retaliation is more tolerable than preemptive aggression to many people.
It's very important in this context, because the article goes out of its way to point the finger at Iran, but it fails to establish the context: Iran was not only attacked at its nuclear facilities, but also suffered a cyberattack on its oil facilities via the "Flame" malware [0].
Which if this was an Iranian attack makes it a retaliation.
saudi is a longtime ally of the us, each offering material support to the other in conflicts that have involved iran, its allies, or its interests.
it also seems unlikely to me that saudi intelligence and other forms of support were not utilized in the planning, development and deployment of stuxnet (among other things).
What about UK? NZ? Kurds? US has many allies, but that doesn’t mean they’re involved in stuxnet. In fact, operationally you’re more likely to fail or leak the more hands you have in the pie. Saudis aren’t known for their technological prowess as much as Israel, so none of this makes sense to me.
there's more than the context of merely being an ally, it's being a "close" ally in the region; one with a history of direct and indirect conflict.
nobody knows if saudi was involved, and i'm not saying they definitely were, but it seems rather unlikely that there was no assistance at least with regard to intelligence and other ancillary-ish things.
even so, the notion of "retaliation" in the scope of clandestine conflict and power struggle is very nebulous, as geopolitics has many layers. if nothing else, saudi is almost certainly a much more ideal target of opportunity than the US, due to its being a nearby direct competitor for regional influence. and even if saudi really had zero involvement with stuxnet (seems unlikely to me), they've definitely had allied involvement with other activities that worked against iran or its interests.
So that's a lot of hypothesizing with no evidence. But all of this raises the question: why is it strange that Stuxnet wasn't mentioned in the article? It still seems to me you haven't answered that. If you want to extend into hypothetical land, then all kinds of attacks on industrial controls and break-ins should be mentioned. But they're not, because the article is focused on the individual event.
The article was already long enough as-is (I think it ran in the print edition, too). The fact that we know Israel did something similar to Iran's centrifuges is a bit far afield, since they have no apparent beef with Saudi chemical refineries.
With several countries probably able and willing to kill people with cyber-attacks, it's probably not long before an attack succeeds, blurring the distinction between cyber and "real" war.
> The article was already long enough as-is (I think it ran in the print edition, too). The fact that we know Israel did something similar to Iran's centrifuges is a bit far afield, since they have no apparent beef with Saudi chemical refineries.
From the Iranian perspective, it might be easy for them to group the USA with Israel and Saudi Arabia as quasi-enemies and not have a cared-for distinction. Especially since policies and actions by all 3 countries have been, at best, not aligned with Iranian interests and - at worst - belligerent to Iran. I really would disagree that the Stuxnet incident is irrelevant if Iran is indeed responsible.
I was just in Medellín for JSConf Colombia a week ago. It was my first time in Colombia and I didn’t know exactly what to expect.
I was blown away by how friendly and engaged the developers I met were. I can say unequivocally I have never had the audience burst out in spontaneous cheering and applause at the end of a technical talk with such enthusiasm as they did in Medellín.
The conference venue was a beautiful, modern building next to the botanical gardens. The coffee was both delicious and strong. But most of all, the people were whip-smart and hungry to make their dent in the world. I can’t say enough positive things about Colombia.
> I can’t say enough positive things about Colombia.
You can't say enough positive things about the small, elite privileged slice of Colombia that you interacted with. It's a bit disturbing that you appear to have no notion of how different the reality for the masses is in comparison to your own thin slice of experience. I'd be very careful before extrapolating.
Hmm, your comment is actually kind of funny, because the cool-looking Ruta N building he mentions, in which the conference was held, is located right in the center of a slum, where it was intentionally built (like most big public works in Medellín) to make the city more inclusive by mixing people from different backgrounds in these spaces. It's a bit disturbing that you speak with such authority when you have no notion of what you are talking about. I'd be very careful before extrapolating.
The building (part of the Ruta-N complex) was designed by http://alejandroecheverri-valencia.co It was the first LEED Gold-certified public building in Colombia (in 2016), so it's cutting edge rather than typical.
It's worth noting that Alejandro Echeverri also teaches (edit: has taught) at Syracuse University.
It sounds like you experienced the lives of the privileged, mobile elite of Colombia. That's valid, as far as it goes, but any boosterism should be tempered with at least a mention of the country's massive inequality (sadly not that atypical for Latin America).
Wealth inequality in Colombia is high, but actually lower than wealth inequality in the US by most measures.
If someone posted about a positive experience at a tech conference in New York, I wouldn't expect to see a follow up reminding us about poor people in Tennessee.
I'm sure there are lots of different measures that give lots of different results. My point was just that the US and Colombia aren't radically different in terms of economic inequality.
The US is not a high bar for comparison of wealth inequality. In fact, wealth inequality in the US is a problem, and by any measure Colombia is either about the same or much worse.
The goal is places like Denmark.
South America in general has serious inequality problems.
You say that as though boosterism has a negative effect on the non-mobile-elite of Colombia, when I would think that any economic activity would be a boost for the whole economy there.
For folks who want to play around with this, we've got an interactive playground at https://try.glimmerjs.com/ that lets you add components, templates and helper functions.
In a nod to The Net, you can click the π symbol in the bottom right corner where you'll get a debug view of the disassembled binary bytecode.
I don't think the binary AST proposal changes the accessibility status quo. In my mind, the best analogy is to gzip, Brotli, etc.
If you had to have a complicated toolchain to produce gzipped output to get the performance boost, that would create a performance gap between beginners and more experienced developers.
But today, almost every CDN worth its salt will automatically gzip your content because it's a stateless, static transformation that can be done on-demand and is easily cached. I don't see how going from JavaScript -> binary AST is any different.
I actually think gzip serves as a good example of this issue: this comment alone is daunting to a beginner programmer, and it really shouldn't be. This Chrome/CDN thing could ALSO be auto-gzipping for you, so that a beginner throwing files on a random server wouldn't need to know whether it supports gzip or not. I think we really take for granted the amount of stuff completely unrelated to programming we've now had to learn. If our goal is to make the web fast by default, I think we should aim for solutions that work by default.
It's definitely the case that once a technology (such as gzip) gets popular enough, it can reach "by default" status: express can auto-gzip, and you can imagine express auto-binary-AST-ing. It's slightly more complicated, because you still need to rely on a convention for where the binary AST lives if you want to get around the dual script tag issue for older browsers that don't support binary AST yet (or, I suppose, a header specifying that the browser supports binary AST results for JS files?). Similarly, at some point CDNs may also do this for you, but that assumes you know what a CDN is and can afford one. The goal I'm after is that it would be nice to have improvements that work by default on day 1, not after they've disseminated enough. Additionally, I think it's really dangerous to create performance-targeted standards this high in the stack (gzip makes pretty much everything faster; binary AST speeds up one kind of file and introduces a "third" script target for the browser). The Chrome/CDN solution means that Firefox/CDN might try caching at a different level of compilation, meaning we get actual real-world comparisons for a year before settling on a standard (if one is necessary at all).
Edit: another thing to take into account is that it now becomes very difficult to add new syntax features to JavaScript, if it's no longer just the browser that needs to support them, but also the version of the Binary AST compiler that your CDN is using.
The process of getting content on to the web has historically been pretty daunting, and is IMO much easier now than the bad old days when a .com domain cost $99/year and hosting files involved figuring out how to use an FTP client.
In comparison, services like Now from Zeit, Netlify, Surge, heck, even RunKit, make this stuff so much easier in comparison now. As long as the performance optimizations are something that can happen automatically with tools like these, and are reasonable to use yourself even if you want to configure your own server, I think that's a net win.
I do agree with you though that we ought to fight tooth and nail to keep the web as approachable a platform for new developers as it was when we were new to it.
On balance, I'm more comfortable with services abstracting this stuff, since new developers are likely to use those services anyway. That's particularly true if the alternative is giving Google even more centralized power, and worse, access to more information that proxying all of those AST files would allow them to snoop on.