
This is categorically untrue. Publishing material generated like this is going to be generally better than human generated content. It takes less time, can be systematically tested for rigor, and you can specifically engineer against the pitfalls of bias and prejudice.

A system like this is multilayered: prompts step through the whole problem-solving process, weighing the information presented, assuring quality and factuality, and attaching the necessary citations and documentation to claims.

Accuracy isn't the problem; the way AI is used creates the problem. ChatGPT and most chat-based models are single-pass, query/response interactions. Sometimes you get a second pass from a moderation system that filters out offensive or illegal content. Without additional testing and prompt engineering, you'll run into hallucinations, inefficient formulations, random "technically correct but not very useful" generations, and so forth. Raw ChatGPT output shouldn't be published without significant editing and the same quality review process any human-written text should go through.

What Storm accomplishes is an algorithmic, methodical series of problem-solving steps, each of which can be tested, verified, and validated. The output is synthesized in a particular form, intended as a factual reference article. Presumably you could insert debiasing passes and checks for narrative or political statements, ensure quotations are attributed and cited, and rephrase anything the AI generates as a neutral, academic statement of fact with no stylistic or artistic flourishes.
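To make the "testable at each step" point concrete, here's a minimal sketch of that kind of staged pipeline. This is purely hypothetical illustration: the stage names, prompts, `run_pipeline`, and `stub_model` are mine, not Storm's actual implementation; a real system would call an LLM where the stub is.

```python
def run_pipeline(topic, model, stages):
    """Thread a draft through each stage; every stage is a plain function
    from draft to draft, so each can be tested and validated in isolation."""
    draft = model(f"Write a factual outline about {topic}.")
    for stage in stages:
        draft = stage(draft, model)
    return draft

# Hypothetical stages, mirroring the steps described above.
def check_factuality(draft, model):
    return model(f"Flag and correct unsupported claims:\n{draft}")

def attach_citations(draft, model):
    return model(f"Attach a citation to every quotation and claim:\n{draft}")

def neutralize_tone(draft, model):
    return model(f"Rewrite in a neutral, academic register:\n{draft}")

# A stub "model" that just echoes the instruction line, so the control
# flow itself can be verified without any real LLM behind it.
def stub_model(prompt):
    return prompt.splitlines()[0]

article = run_pipeline("photosynthesis", stub_model,
                       [check_factuality, attach_citations, neutralize_tone])
print(article)  # the last stage's instruction, proving all passes ran in order
```

Because each stage is independent, you can unit-test the factuality check or the citation pass against fixed inputs without running the whole chain, which is exactly what single-pass chat interactions don't give you.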

This is significantly different from the almost superficial interactions you get with chatbots, unless you specifically engineer your prompts and cycle through similar problem-solving methods.

Tasks like this are well within the value add domain of current AI capabilities.

Compared to the absolute trash of SEO-optimized blog posts, the agenda-driven, ulterior-motive-laden rants and rambles on social media, and the "I'm oh-so-cleverly influencing the narrative" articles humans post to Wikipedia, content like this is a clear winner on quality, in my opinion.

AI isn't at the point where it will spit out well-grounded novel answers to questions like "what's the cure for cancer?", but it can absolutely produce a principled, legible explanation of a phenomenon or a collection of facts about a thing.




> Publishing material generated like this is going to be generally better than human generated content.

I think the opposite. Publishing material generated like this is going to dilute our knowledge base and result in worse content.

But we're both speculating. Only time will tell. I hope that you're right and I'm wrong.



