coder-3's comments

I love this! My suggestion is to not close panels when going back up levels but instead put them in a parallel branch.


The problem with AI-generated content is not necessarily that it's bad; rather, that it's not novel information. To learn something, you must not already know it. If it's AI-generated, the AI already knows it.


How much work do individual humans do that could be considered genuinely, truly novel? I'd put the answer at "almost none."


That's true to some extent, but training on synthetic content is big these days:

https://importai.substack.com/p/import-ai-369-conscious-mach...


We might also say the same thing about spelling and grammar checkers. The difference will be in the quality of oversight of the tool. The "AI-generated drivel" has minimal oversight.

Example: I have a huge number of perplexity.ai search/research threads, but the ones I share with my colleagues are a product of selection bias. Some of my threads are quite useless, much like a web search that was a dud. Those do not get shared.

Likewise, if I use an LLM to draft passages or even act as something like an overgrown thesaurus, I do find I have to make large changes. But some of the material stays intact. Is it AI, or not AI? It's a bit of both. Sometimes my editing is heavy-handed, other times less so, but in all cases I checked the output.


How does this technically work? Is it just a natural language shortcut for prepending text to your context window, or does it pull information in as needed, as inferred from the prompt? E.g. the meeting-note formatting "memory" gets retrieved when you prompt it to summarise meeting notes.
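For illustration, here is a rough sketch of the two mechanisms I have in mind; everything in it (names, the keyword-overlap "retrieval") is made up for the example, not the actual implementation:

  # Toy sketch of two ways a "memory" feature could work (purely illustrative).
  MEMORIES = [
      "Format meeting notes as: Attendees, Decisions, Action items.",
      "I prefer terse answers in British English.",
  ]

  def prepend_all(user_prompt):
      # Option 1: a shortcut that simply prepends every stored memory to the context.
      return "\n".join(MEMORIES) + "\n\n" + user_prompt

  def retrieve_relevant(user_prompt):
      # Option 2: pull in only the memories inferred to be relevant to the prompt
      # (keyword overlap here as a crude stand-in for embedding similarity).
      words = set(user_prompt.lower().split())
      relevant = [m for m in MEMORIES if words & set(m.lower().split())]
      return "\n".join(relevant) + "\n\n" + user_prompt

  # Only the meeting-note memory would be prepended here.
  print(retrieve_relevant("Summarise these meeting notes: ..."))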


You don't think AGI is feasible? GPT is already useful. Scaling reliably and predictably yields increases in capabilities, and as its capabilities increase it becomes more general. Multimodal models and the use of tools further increase generality. And that's within the current transformer architecture paradigm; once we start reasonably speculating, there are a lot of avenues to further increase capabilities, e.g. a better architecture than transformers, better architectures in general, better/more GPUs, better/more data, etc. Even if capabilities plateau, there are other options, like specialised fine-tuned models for particular domains such as medicine/law/education.

I find it harder to imagine a future where AGI (even if it's not superintelligent) does not have a huge and fundamental impact.


It's not about feasibility or level of intelligence per se - I expect AI to be able to pass a Turing test long before an AI actually "wakes up" to a level of intelligence that establishes an actual conscious self-identity comparable to a human's.

For all intents and purposes, the glorified software of the near future will appear to be people, but they will not be, and they will continue to have issues that simply don't make sense unless they were just really good at acting - the article today about the AI that can fix logic errors but not "see" them is a perfect example.

This isn't the generation that would wake up anyway. We are seeing the creation of the worker class of AI, the manager class, the AI made to manage AI - they may have better chances, but it's likely going to be the next generation before we need to be concerned or can actually expect a true AGI. But again - even an AI capable of original and innovative thinking, with an appearance of self-identity, doesn't guarantee that the AI is an AGI.

I'm not sure we could ever truly know for certain.


This is exactly what the previous poster was talking about: these definitions are so circular and hand-wavy.

AI means "artificial intelligence", but since everyone started bastardizing the term for the sake of hype to mean anything related to LLMs and machine learning, we now use "AGI" instead to actually mean proper artificial intelligence. And now you're trying to say that AI + applying it generally = AGI. That's not what these things are supposed to mean, people just hear them thrown around so much that they forget what the actual definitions are.

AGI means a computer that can actually think and reason and have original thoughts like humans, and no I don't think it's feasible.


Intelligence is the gathering and application of knowledge and skills.

Computers have been gathering and applying information since their inception. A calculator is a form of intelligence. I agree "AI" is used as a buzzword with sci-fi connotations, but if we're being pedantic about words, then I hold to my stated opinion that literally anything that isn't biological and can compute is "artificial" and "intelligent".

> AGI means a computer that can actually think and reason and have original thoughts like humans, and no I don't think it's feasible.

Why not? Conceptually, there's no physical reason why this isn't possible. Computers can simulate neurons. With enough computers we could simulate enough neurons to make a simulation of a whole brain. We just don't yet have that total computational power, or the organization/structure to implement it. But brains aren't magic that's incapable of being reproduced.
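To make the "simulate neurons" point concrete, here's a toy leaky integrate-and-fire neuron. It's a deliberately crude sketch - real whole-brain simulation would need vastly more fidelity and scale - but it shows the dynamics are ordinary computation:

  # Toy leaky integrate-and-fire neuron: the membrane potential leaks toward zero,
  # integrates each input, and emits a spike (then resets) when it crosses threshold.
  def simulate_neuron(inputs, threshold=1.0, leak=0.9):
      potential = 0.0
      spikes = []
      for current in inputs:
          potential = potential * leak + current  # leak, then integrate the input
          if potential >= threshold:
              spikes.append(1)   # fire...
              potential = 0.0    # ...and reset
          else:
              spikes.append(0)
      return spikes

  print(simulate_neuron([0.3, 0.3, 0.3, 0.3, 0.0, 0.9, 0.9]))  # -> [0, 0, 0, 1, 0, 0, 1]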


The battle, yes. The war, no.


This is good. If Sama managed to overthrow the board, I'd be worried about the future of AI safety.


Why? Now they'll just start their own company that is 100% for-profit, and OpenAI may eventually fade into nothingness as money chases the new venture and possibly even Microsoft and employees flee. There's a rumor that Microsoft is already interested in funding Sam's new venture.


They won't be able to catch up to and surpass OpenAI, at least not for a few years. I'm in the camp that we ought to solve alignment before we get to AGI, and also in the camp that thinks this is unlikely. Therefore, based on my assumptions, the pace of AI progress slowing and a point won by the safety side is a good thing.


I get the sense that there was not a lot of time to do this before Sam became too difficult to remove and appropriated the brand and assets.

From the reading on this I did this evening (which was much more than I probably should have), I saw it suggested that this might have been a limited window in time when the board was small enough to pull this off. It had 9 members not long ago before being reduced to 6 (4 of whom collaborated). Allegedly, Sam was looking to grow the board back up.


It's good because Sam was pushing for regulatory capture to harm startups and open-source efforts.


Raw IQ doesn't mean you'll have good epistemology. His brain lets him reach deeper abstractions than most, but that doesn't mean what he comes up with is grounded in base reality. If you're worried about being off-base, try:

- Contact with reality (feedback). Predict, try, did it work?

- Having a good reasoning framework. For some the ordering is 1. religious texts, 2. what others they respect think, 3. reasoning for understanding. This is not the worst, but not the best. Perhaps a better one would be 1. science/rationality, 2. religious/philosophical/spiritual texts, 3. first-principles thinking.

- Humility: you don't know anything unless you have great reasoning and empirical evidence to back up what you're saying. Even then, when faced with complexity (i.e. unless you are dealing with the most atomic/simple concept), you're still almost certainly wrong.


Unfortunately I don't know anything that guarantees a good epistemology. I am quite certain he'd insist he's doing all those things... including humility.


No, burnout is too much exposure to certain stressors.


The calming effect comes from L-theanine, which you can buy separately and add to your coffee.


This might not be an issue for the same (somewhat inscrutable) reason that GPT-4 has quasi-perfect grammar.

