
Why should the state providing a basic safety net preclude individual ambition?

Should we not move toward a society in which no one goes hungry?


The false dichotomy is that you need big government to have a society that does not go hungry.

Great. Another acronym to put on your resume with 5 years of experience.

Hmm, without evidence this is just another fanciful fiction, and we know the human mind can create an endless series of speculative fictions.

Thoughtful essay, but writing this at age 29-30 feels kind of like a five-year-old telling me they don't like spinach.

This is still too early in life to come to hard realizations about experienced wisdom.

Give it some time and tell me about it again in your 50s.


Is it though? At any age we have some kind of experienced wisdom; it's what guides us moving forward. Of course, my experienced wisdom at 30 is different from what I had at 20, but it's not a replacement of the former; it's built on top of it.

When you do the same thing year after year, eventually you realize that you are not learning anything new anymore, and the only way to change that is to change what you are doing, or how you are doing it. A five-year-old doesn't dislike spinach because they have had enough of it, but because they are unfamiliar with it. Clearly that is not the case here. I changed the way I travel every few years. In my teens I was doing bike rides. In my late teens I discovered that I could afford trains in Eastern Europe. In my late 20s I found that I could get work in remote places. And in my 30s I got married. I never stopped moving to new places; I just changed the way I got there and the reason to be there, and the main motivator for change was that with the old way I wasn't learning anything new anymore.

This is another silly piece against AI tools, one that doesn't offer useful or insightful suggestions on how to adapt, or an informed study of the areas of concern. It capitalizes on the natural worries we have on HN around critical thinking being lost when AI takes over our jobs. In general it's rather like concerns about the web in the pre-internet age, or about SEO in the digital-marketing age.

OSINT only exists because of internet capabilities and Google search; i.e., someone had to learn how to use those new tools just a few years ago and apply critical thinking.

AI tools and models are rapidly evolving, with more in-depth capabilities appearing in the models. All this means the tools are hardly set in stone, and the workflows will evolve with them. It's still up to human oversight to evolve with the tools; the skill of humans overseeing AI is something that will develop too.


The article is all about that oversight. It ends with a ten-point checklist with items such as "Did I treat GenAI as a thought partner—not a source of truth?".

So weak! No matter how good a model gets, it will always present information with confidence regardless of whether or not it's correct. Anyone who has spent five minutes with the tools knows this.

I’ve read enough pseudo-intellectual Internet comments that I tend to subconsciously apply a slight negative bias to posts that appear to try too hard to project an air of authority via confidence. It isn’t always the best heuristic, as it leaves out the small set of competent and well-marketed people. But it certainly deflates my expectations around LLM output.

OSINT (not a term I was particularly familiar with, personally) actually goes back quite a ways[1]. Software certainly makes it easier to aggregate the information and find the signal in the noise, but bad security practices do far more to make that information accessible.

[1] https://www.tandfonline.com/doi/full/10.1080/16161262.2023.2...


Back in the 1990s my boss went to a conference where there was a talk on OSINT.

She was interested in the then-new concept of "open source" so went to the talk, only to find it had nothing to do with software development.


So the same folks who complained about the lack of AI in Alexa are complaining now? Cloud-level processing is needed for the gen AI features.


Were there real people complaining about that? Or just investors?


Yup, I can’t wait to have a nice LLM capability on my Alexa devices! Maybe I’m the only one on this thread :)


Is it though?


I was thinking about this Lex Fridman podcast with Marcus Hutter. Also, Joscha Bach defined intelligence as the ability to accurately model reality. Is lossless compression itself intelligence, or a best-fit model? Is there a difference? https://www.youtube.com/watch?v=E1AxVXt2Gv4


Incidentally, François Chollet, creator of ARC-AGI, argues in this 2020 Lex Fridman podcast that intelligence is not compression: https://youtu.be/-V-vOXLyKGw


Compression works on finite data, while AI models need to stay open to new data. So they should be less greedy.


It's the ability to find a simple model that predicts a complex reality with high accuracy and low latency. So we need to consider those four axes (model simplicity, reality complexity, accuracy, latency), and AI will be a region in that space.

Edit: In fact, there is a simple test for intelligence: can you read a function in C and tell how a change in the input changes the output? For complex algorithms you have to build an internal model, because how else are you going to run qsort on a million items in your head? That's also how you'd tell whether a student is faking it or really understands. A harder test would be the opposite: from a few input/output examples, come up with the algorithm.
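
As a minimal sketch of the kind of function such a test might use (the function and numbers here are my illustration, not the commenter's):

  #include <stdio.h>

  /* Hypothetical test: how does f(n) change when the input doubles?
     Answering without tracing every iteration requires an internal
     model of the loop, which is the point of the test. */
  int f(int n) {
      int steps = 0;
      while (n != 1) {                        /* Collatz-style iteration */
          n = (n % 2 == 0) ? n / 2 : 3 * n + 1;
          steps++;
      }
      return steps;
  }

  int main(void) {
      printf("%d %d %d\n", f(6), f(12), f(24));   /* prints: 8 9 10 */
      return 0;
  }

Someone with a real model of the function sees at once that doubling an input just prepends one halving step, so f(2n) == f(n) + 1; someone faking understanding has to trace every iteration by hand.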


For a quicker tie-in than watching a whole podcast: the practical result of Hutter's stance is the Hutter Prize[1], which in some ways shares goals with ARC-AGI but treats compression itself as the goalpost toward intelligence.

[1] http://prize.hutter1.net/
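
The intuition behind compression-as-goalpost, in a minimal sketch (my illustration, not from the prize itself): under an optimal entropy coder, a symbol the model assigns probability p costs about -log2(p) bits, so a model that predicts the data better compresses it smaller.

  #include <stdio.h>
  #include <math.h>

  /* Shannon code length: bits an optimal coder spends on a symbol
     the model predicted with probability p. */
  double bits_for(double p) { return -log2(p); }

  int main(void) {
      /* The same symbol under three models of increasing skill:
         better prediction -> shorter code -> smaller archive. */
      printf("p=0.50: %.3f bits\n", bits_for(0.50));   /* 1.000 */
      printf("p=0.90: %.3f bits\n", bits_for(0.90));   /* 0.152 */
      printf("p=0.99: %.3f bits\n", bits_for(0.99));   /* 0.014 */
      return 0;
  }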


Youth is wasted on the young, as they say.

One good thing about getting old is that you worry less about what people think. These kinds of feelings are powerfully emotive and urgent while young, but, given our position in this world, they are ultimately just navel-gazing.


This is such a great example of why pursuing knowledge for knowledge's sake in pure science leads to advancement in areas we might not expect…


The same concept appears in LLMs, as referenced in this video by Chris Olah at Anthropic:

https://www.reddit.com/r/OpenAI/comments/1grxo1c/anthropics_...

also see: https://distill.pub/2021/multimodal-neurons/


The authors of the second piece specifically said this was not the same thing: the fact that these neurons weakly fire for loosely associated concepts is very different from (and ultimately shallower than) concept neurons:

  Looking to neuroscience, they might sound like “grandmother neurons,” but their associative nature distinguishes them from how many neuroscientists interpret that term. The term “concept neurons” has sometimes been used to describe biological neurons with similar properties, but this framing might encourage people to overinterpret these artificial neurons. Instead, the authors generally think of these neurons as being something like the visual version of a topic feature, activating for features we might expect to be similar in a word embedding.
The "turtle+PhD" artificial neuron is a good example of this distinction: it is just pulling together loosely-related concepts of turtles and academia into one loose neuron, without actually being a coherent concept.

