
Kind of weird to believe “slop” didn’t exist on the internet in mass quantities before AI.


Cool, an open source project in a language no one can use. Say what you will about V, at least it exists.


You can use Jai. You just have to ask. It’s in closed beta, but afaik “closed” pretty much means “open to anyone who wants to be a beta tester”.


I have asked several times and got no answer.


How do you ask / apply?


[flagged]


Blow is a smart guy who thinks he's far smarter and far more interesting than he really is. Smart people who don't seek out feedback, and especially those who haven't worked in a team environment, tend to be pretty bad at estimating the limit of their capabilities.

Does Blow have a mentor? Friends who call his BS out?


He does seek feedback; that is why there is a closed beta. I would say he is quick to flip the bozo bit and easily irritable, but that is also a consequence of having an audience that is large enough to be annoying but too small to just ignore.


A good number of your bullet points are factually incorrect and inflammatory. But to reply in the style of Blow: what have _you_ released in the past ten years to talk all this smack?


I'd say my work has gone into the hands of a few million people and made their lives (mostly) better, along with what all of my coworkers did.

But most importantly, I'm not out there telling people that they suck, that they're bad programmers. I'm not the one out there telling people that GC is bad, memory ownership is bad, but also you're stupid if you can't free() memory properly. I'm not the one out there rejecting decades of PL theory, calling academics "believers" and "wrong" [0]. I'm not the one throwing shade at every other language and then, when my language is criticized, falling back to "uuuuh I make Jai for myself, you don't have to use it".

Anyways, if you want a real systems language that's available today and that actually works, Odin exists.

[0] - https://twitter.com/Jonathan_Blow/status/1363913308865138689


Don't they get tired of being wrong, too? I've used software from these types of people and their stuff crashes and segfaults as much as or more than their equivalents, certainly more than the Rust equivalents I use.


Please don't adopt this line of reasoning. An argument doesn't have to stand on the arguer's personal achievements and public works. Just because you're not a chef doesn't mean you can't call one out if they put rat poison in your food.


They did not provide an argument, but a barrage of accusations without evidence. I think in that case, it is fair to ask: Who are you, anyway?


I still don't see why it matters who the person is. Why not ask for evidence instead?


I didn't ask for evidence because I know for a fact that he is lying, or misinterpreting things to an extent that can be considered lying.

Getting into the beta has nothing to do with being a good or bad programmer. It's not about Blow's ego or criticizing the language, it's about keeping a modicum of respect about a decade's worth of work and not being an ass.

If the ad hominems didn't disqualify his whole post, the smell of his butt-hurt would.

PS. As a factual correction: documentation is provided with every compiler version, and it currently stands at ~32K lines of very well-commented example code touching almost every feature of the language.


Can you only apply via Twitch chat or is there some other venue where you can genuflect before his greatness?


No, you cannot apply via Twitch chat, and again, there's no genuflecting. Stop listening to the assholes with axes to grind. You can ask about it in Twitch chat and probably get an answer, but I think you must show more interest than "I heard about this thing on HN, I want to try it NOW".


My favorite part of your reply was the part where you said "There's no genuflecting" and then "You must genuflect at least a little."

I applied via email and mentioned that my team is one of the biggest users of the fanmade language Zig.


Can you clarify? How is showing interest "genuflecting"?

I understand how open-source has lowered the general threshold for having access to other people's work, but the entitlement your post exudes is a bit sickening.


Right, I forgot to mention dealing with the most cult-of-personality community that has ever existed.


I fail to see where I said anything about Blow's personality. :) I can despise the guy but still be impressed with his work ethic and accomplishments.

If only righteous people were allowed to make things, the world would be much poorer.


These aren't the kinds of things where evidence is readily available. If you have a reputation to stake on certain claims, that makes me more inclined to give you the benefit of the doubt.


Why is it that you want to tone police me (when I was actually trying to be facetiously funny) instead of the parent, with his virulent ad hominems towards two industry veterans who have probably done more for the field of computer science than most of us commenting in this thread combined (at least until Walter drops in)?


Hmm, seems like he was right about the ego part, then.


If only the opposition were this concerned about the harmful effects of far more dangerous drugs like alcohol.


SO is a toxic cesspool, especially for people new to programming or just looking to learn. If ChatGPT and other AI tools can get the job done without resorting to asking anything from the SO “community”, that’s a victory.


The problem is that if you solve the problem of not needing the community by training an AI on community content, the community will leave and then you're out of training data. AI is impressive, but I'm unconvinced it can answer truly novel programming questions.


Don't worry, between GitHub and VS Code, there's plenty of training data for Microsoft and OpenAI.


ChatGPT/OpenAssistant/etc users are giving the models an immense amount of feedback.


ChatGPT cannot do anything, especially not in programming.

All it can provide is what the answer would sound or look like.

Be prepared for an absolute avalanche of bugs and security holes no one alive would've made. I make my living from debugging so I welcome the new job security but it's a massive net negative for society, no question about that.


> All it can provide is what the answer would sound or look like.

This includes providing correct answers, because by definition, a correct answer sounds exactly like a correct answer would.


A broken clock is right twice a day.

The answer still has zero information value, even if it is accidentally the correct one.


I asked it to write a simple function to copy text to the clipboard. It unnecessarily created an async function and forgot to pass the text down as a variable.

Next, it tried to use AlpineJS to create tooltips with NextJS, which is not really supported, and it couldn't fix the resulting bugs.

For sure, it's better than nothing, but you've got to be at least somewhat competent before you start copy-pasting code from it blindly.
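
For reference, a minimal sketch of what a working version might look like, assuming a browser environment and the standard Clipboard API (with the text actually passed in as a parameter, the part it forgot):

    // Minimal sketch: copy a string to the clipboard in a browser.
    // navigator.clipboard.writeText is asynchronous by nature, so its
    // promise is returned directly; older browsers fall back to the
    // legacy textarea + execCommand path.
    function copyToClipboard(text: string): Promise<void> {
      if (navigator.clipboard && navigator.clipboard.writeText) {
        return navigator.clipboard.writeText(text);
      }
      // Fallback: temporary textarea plus the deprecated execCommand.
      const el = document.createElement("textarea");
      el.value = text;
      document.body.appendChild(el);
      el.select();
      document.execCommand("copy");
      document.body.removeChild(el);
      return Promise.resolve();
    }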


> especially for people new to programming or just looking to learn

Yes, StackOverflow is not for beginners at all. It's for fairly narrow technical questions.

Maybe AI can assist in asking a good question or breaking a big question down into narrow, on-topic ones.


It has also helped me hundreds of times.


Because you asked a question? Or because someone else did?


For me, both. I search for something generic, and either an official manual, a bug report, or a StackOverflow answer pops up. Or I have a specific narrow question, I'm unable to google the answer, I ask, and a more knowledgeable person appears and ties all the loose ends together in a way I had never thought of.


The latter. But I suppose it's typical. A given problem usually emerges for thousands of people, so it's not surprising I was never the first to ask.


Does the distinction actually matter?


Still five months away. Looks like a solid enough improvement, though.


I don't know how a language that sets itself up to be a better C turns into this monster of horrible syntax and patterns that's nothing like C at all.


Hey, it's unreleased so we can project our personal vision of "ideal C" onto it.

Then when those design choices get hardened into a 1.0, it will become just-another-language and we will inevitably feel let down and move on to the next attempt at a Perfect Programming Language. So it goes.


Wonder if Backblaze will follow suit. They're part of the bandwidth alliance with Cloudflare and that's great, but if you're using B2 for personal use and want to get data from a private bucket, you're still paying $0.01/GB egress after the first gigabyte.


There goes the competitive pricing...


Right, because the existing owners were giving you that price as a favor, and now the evil stockholders will demand that they chase away all of their customers by raising prices.


Typically existing owners have been spending those sweet, sweet VC bucks on "growth hacking", and after an exit event the business switches to actually trying to make some money instead of pretending MAUs are better than money for whatever reason they like this week.

In the case of Backblaze they've only raised $5.3m over three rounds so that's unlikely. There wouldn't be much VC cash to hack growth with after 14 years.


Yes. Everything you said is correct. Often startups sell at a loss, this is not news.


The PS4 Pro had heat and noise issues so it looks like they've taken that feedback very seriously. Happy to see liquid metal there, too, instead of the usual blob of below-average grey goo you'd typically see in a mass market consumer product.


You can tell someone hasn't watched the Ratchet & Clank demo from the other day[0]. The SSDs in these consoles go far beyond short load times. They can radically change how games are made and designed.

[0] https://www.youtube.com/watch?v=VsnG-3-r6-Q


Ratchet & Clank is the only demo with new gameplay based on the SSD.

But it's overused, giving each game world less weight. The technological imperative, preoccupied with whether they could, etc. In contrast, artificial constraints add depth to gameplay, e.g. limited movement speed/duration ("sprint").

All that said, keeping processors fed with data is a central problem of CS. The innovations here are not just the SSD itself, but elimination of bottlenecks in the architecture (e.g. direct placement in GPU RAM).

Typically, storage is 1000x slower than RAM. On PS5, it's 50x. That has got to be a revolution in algorithmic space-time tradeoffs... which has got to be reflected in gameplay, somehow, somewhen.


Because execs will only see demos after they're loaded, and there's no way some 55-year-old is going to spend more than 5 minutes playing some Star Wars game (and therefore will never trigger any world loading), games that do something special with the SSD will not be financed by anyone other than Sony. The technology is basically DOA for third parties. The stuff that third parties are saying is basically moot.

Conversely, when Nintendo makes new hardware, it's the same deal: they're the only ones financing games that use the balance board or gesture controls or Labo or whatever. They just put up a lot more money and have internal studios with more autonomy. SSDs are a Sony problem, not a game design/engineering problem.


That portal transition effect likely hides the load screen - it's still there, nearly half a second, it's just shorter. This is comparable to current load times with an SSD on PC!


The difference with PCs is that since the hardware is standard, developers can now create gameplay that depends on those capabilities.

Until all (or most) PCs are equipped with high-performance NVMe SSDs, those kinds of features won't be possible anywhere other than on consoles.

Also, the PS5's architecture is optimized end-to-end for faster loading times; it's more than just faster storage.


Or a PC game can just slap a minimum RAM requirement on and be done with it.

I’m impressed with the tech but it seems like the end goal was to keep console manufacturing costs down. Now it’s being sold as a gameplay-enabling feature and reason to upgrade. The upgrade only looks impressive because the PS4 by now is so old.

Relying on cheap SSD storage instead of expensive RAM, and relieving CPU effort via the storage streaming chip is a cool trick. But that tech alone enables absolutely zero gameplay experiences.

It hasn’t been proven to us whether or not a typical gaming PC’s increased memory just overcomes the need for this tech. If I have a PC with 32GB of RAM and my GPU has 8GB of its own RAM I’m not convinced that a PS5 with 16GB of shared RAM will do anything that the PC setup can’t.

Desktop computers eclipsed the performance of current-gen consoles so long ago that I remain skeptical: my prediction is that a decent mid-range gaming computer is completely capable of playing any PS5 game.


Both worlds win here:

Finally the gamer gets low/no loading, which is nice to have.

But it also becomes much easier for game developers.

I'm still looking forward to it; after all, it is a huge improvement over the current gen, independently of how long it took and how old the PS4 is.

And I'm not 100% sure this doesn't affect PC gaming. After all, DirectStorage will hopefully fix the small SSD issues you also have on PC right now.


Most tests I've seen of real-time game asset loading between the various types of SSDs on PC are incredibly inconsistent; for most games the difference is hardly noticeable. I'd be excited if I were proven wrong, but this really feels like the typical console hype ramp-up to Black Friday that South Park portrays so well...


This is not great logic, as those games are not designed to take advantage of the faster SSD storage the way new games will be with the launch of the new consoles. Most games are built with an HDD in mind, and thus the SSD is not the bottleneck.


All AAA games need to be cross platform to maximize revenue with a long tail, so they target the lowest common denominator for hardware requirements.

No one is going to design gameplay for a special hardware constraint unless the gameplay can degrade to lowest common denominator. Which of course, makes needing the special hardware optional.

There are few exceptions to this rule. Some platforms pay for exclusivity, effectively covering lost revenue from other platform streams. And Nintendo alone makes a profit on hardware, so they can produce platform exclusives to drive additional revenue from hardware sales.

Special SSD pipelines, while PC gamers are still using 7200rpm HDDs, are about as appetizing to game devs as waggle controls or Kinect sensor games.

The new consoles include these SSDs not to make something possible now, but to remain relevant in ten years time when PCs may have caught up.

This is the game industry's equivalent of supporting IE 11.


This is simply not true. That's like saying no game on PC is possible because not everyone has a good enough graphics card. Just have a minimum spec for required storage speed and you're golden.


I imagine it will not take long for NVMe to be part of the required or recommended specs for gaming.


Gaming PCs are standardized, in practice. People who have insufficient hardware don't play game X, or they upgrade.


So imagine this tech in a simulated war MMO, where each server of 64 players represents a part of a battlefield and you can instantly transition to a new server representing the next part of the battlefield when you walk there. All of WW2 would be a collection of servers that represent parts of Europe.

It takes a little imagination, but when you look at the ingenuity of something like F-Zero on SNES (2.5D graphics), it's pretty clear that your typical tech really can be used in amazing ways.


A fast SSD won't solve the network latency of connecting to a server and downloading all its data.


Assuming the maps and assets are static, the only thing that needs to be downloaded is the current state of the game and its entities, which is a handful of megabytes. I agree that an SSD is completely irrelevant here, but so is the network. The hard problem here is how you sync state between the different servers in real time so that transitioning between them is seamless (and how you handle reconciliation after an eventual net split).
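
As a rough illustration of the scale involved (all names below are hypothetical, not from any actual engine), the dynamic state handed off per player is tiny compared to the static assets already sitting on disk:

    // Hypothetical sketch: the dynamic state handed to a neighboring server
    // when a player crosses a region boundary. Maps and assets are assumed
    // to already be on disk, so only this travels over the network.
    interface PlayerHandoff {
      playerId: string;
      position: [number, number, number];
      velocity: [number, number, number];
      health: number;
      inventory: { itemId: string; count: number }[];
    }

    // Serializing even hundreds of these is kilobytes to a few megabytes.
    function handoffPayload(players: PlayerHandoff[]): string {
      return JSON.stringify({ timestamp: Date.now(), players });
    }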


From what I know, World of Warcraft does something like that on a conceptual level. Obviously, syncing game state in a non-action game like WoW is probably a little easier than in something like a full-blown action FPS.

https://wow.gamepedia.com/Sharding_(term)#Behavior


Planetside 2 is a great and current example of multiple massive 100+ player battles occurring simultaneously in the same server/map.

Obviously its graphical detail is nowhere near what was shown here, but the precedent exists, and the only thing that would be different is the geometry and texture res.


So we’ll have to wait for 5g!


Wow, I'm not going to believe that's realtime until it's in my hands. I started the video thinking "what's so special about this", and when they started going interdimensional I realized this is either very cool or just hype.


Insomniac Games is pretty good about wysiwyg


That seems to be progressing on rails, so how does the SSD help here? Maybe you can postpone preloading the assets until just before the transition, and thus need to free up memory slightly later for the next scene's asset decode/load buffers. Though nothing here looks like two or more scenes' worth of assets couldn't be in memory at the same time.


That is impressive if each of those worlds is loaded as you go, but I feel like both Microsoft and Sony were holding off on this next generation until graphics tech got better.

We are still nowhere near photo-realistic, cinema-quality gameplay (and yes, I realize that such animation has to be scaled down a bit because it can disturb people if it's too real, but we're not even at that point yet).

Real-time ray tracing was going to be Nvidia's whole push into a new generation of 3D graphics, but that turned out to be not all that impressive (and came with huge performance costs).


Things like this have been possible for almost a decade now, BioShock Infinite being a prime example.


Prey did the same thing 14 years ago... I don't see what is groundbreaking about this particular gameplay segment at all.


It's been forever since I played the first Prey, but I don't remember anything like this aside from perhaps the single usable portal in the game? It was pretty cool at the time, and then Valve explored the concept properly in Portal 1 & 2.

Having said that, the original Prey looks pretty rough these days. And I don't see how a single portal is "the same thing" at all.

In fact, the only thing I can think of to compare it to is the ending of Portal 2, where you shoot a portal to the moon, but even that was just a single portal to a fairly low-detail environment.

Are you referring to Prey's usage of that portal? Or something else.

FYI, the new Prey game released a few years ago is amazing. It has basically nothing to do with the original because they apparently just wanted to use their rights to the name on a flagship title. But it's amazing if you're into System Shock/BioShock type games.


It just means fast asset loading, so if the SSD becomes RAM one day, is there anything interesting at all? The same goes for ray tracing: things look more realistic with control of light...

Nothing special here, no need for the hype.

edit:

You know what really makes a difference?

You playing a VR open-world game where every single AI NPC does things in every possible way, leading to different gameplay and outcomes. And you as a character can just grab anything you want and throw it at any monster that you just CREATED in the game itself.

Or you can control that monster you created in your own way, time-travelling to another open-world game to say hello to your friend playing in his house, back and forth.

Together with some AI NPC friends you made in that game, you live in that dimensional space forever, for as long as you want, even after your human body dies; your consciousness stays in electronic form.

Referencing https://en.wikipedia.org/wiki/Sword_Art_Online:_Alicization


"SSD as RAM" is a good way to reason about it.

It means you have a lot more RAM; by that logic, the PS5 has 825GB of RAM. RAM in the terabytes, soon.

Would a couple of orders of magnitude make a difference? If software is designed with terabytes of RAM in mind?


"SSD as RAM" is a bad way to think about it. What you need to realize is that the standard for games has been to treat RAM as storage, because the hard drive was too slow to use for loading data on the fly. SSDs mean games can use storage as storage, but they still have to fit the working set in RAM.


And that's really what the hype is about in terms of better game experiences on these new systems — we should be able to have larger working sets because you don't need to waste RAM as storage.

To get specific about what this enables, I think we will see many more indie games with great looking graphics. The combination of high res asset scans, automatic resolution scaling, automatic texture compression, generally less tight performance budgets that don't need teams to do optimization work (next gen consoles), and a financial model around tools to take advantage of all of this (Unreal + Quixel as the leader here) should make this next generation of games pretty awesome.

