I'm wondering: are there any simulator games in which you have to maintain the stability of a nationwide power grid? This whole situation has me interested in the logistics of power distribution now.
I don't think I will ever use Lisp, but I love the way this site documents different examples with all kinds of hardware. I wish there were more sites like this for other microcontroller/SoC languages and ecosystems like Lua, mPython, Arduino-derived boards, etc.
As I sat down at my desk to start the work day and finish a dreaded task, my mind of course immediately drifted away, and I felt the urge to check some news on Reddit or HN. I opened HN and this was the first title.
I am disappointed your comment did not get more responses, because I'm very interested in deconstructing this argument I've heard over and over again ("it just predicts the next words in the sentence").
Meanwhile, explanations of how GPT-style LLMs work describe a layering of structures: the first levels encode some understanding of syntax, grammar, etc., and as more transformer layers are added, contextual and logical meanings are eventually encoded as well.
I really want to see a developed conversation about this.
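To make that "layering" idea a bit more concrete, here's a rough, hedged sketch in PyTorch (the sizes and names are mine, and real GPT-style models are decoder-only, use positional information and causal masking, and are vastly larger):

    # Toy stack of transformer layers: the intuition is that early layers tend
    # to pick up local/syntactic patterns, later layers more contextual meaning.
    import torch
    import torch.nn as nn

    vocab_size, d_model, n_layers = 1000, 64, 4

    embed = nn.Embedding(vocab_size, d_model)
    layers = nn.ModuleList(
        nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        for _ in range(n_layers)
    )
    to_logits = nn.Linear(d_model, vocab_size)

    tokens = torch.randint(0, vocab_size, (1, 8))   # a toy "sentence" of 8 token ids
    x = embed(tokens)
    for layer in layers:
        x = layer(x)                                # each pass refines every token's representation
    next_word_logits = to_logits(x[:, -1])          # "predicting the next word" is just this final projection

So "it just predicts the next word" is literally true of that last line, but everything interesting happens in the stack above it.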
What are we humans even doing when we zoom out? We're processing the current inputs to determine what is best to do in the present, the near future, or even the far future. Sometimes, in a more relaxed space (say, a "brainstorming" meeting), we relax our prediction capabilities to the point where our ideas come from a hallucination realm if no boundaries are imposed.
LLMs mimic these things in the spoken language space quite well.
I didn't want to spend time reading that wall of text, so I wanted a summarized version of it. I asked ChatGPT (4) to do it for me, and it told me that Omegle is not going to close.
My intuition told me otherwise, so the irony is that I spent more time than needed on this.
PEBCAK. I went to omegle.com and was able to infer exactly what the author meant. It took me approx. 2 minutes to do so. Using ChatGPT to read short things like this is akin to only reading the headline of an article -- you tell me whether that usually makes much sense.
I inferred this information by seeing the number of comments on HN (a relevant parameter in estimating the importance of an event, imo), the time period 2009-2023, and then, immediately after clicking, seeing a tombstone.
This took me under 2-3 seconds.
The rest of the info, which I wasn't going to spend time on, was the motives etc. It's not the first postmortem I've seen; I was just curious as to why, while skipping all of the boilerplate. Omegle is not important to me, but I know it was quite a phenomenon.
My point was that GPT failed blatantly at inferring this same easy task.
Sorry, but I have to ask: in what kinds of contexts did you become experienced? Is LSD in at least a gray area anywhere on earth? In the country I come from it is an illegal drug, but it being a neuroplasticity enhancer is what attracted my attention. I've also read about microdosing, which can enhance focus or creativity.
Can it be prescribed for treating depression or other mental deficits, or are the approvals not at that stage yet?
Recreational dosing in all kinds of settings and doses (e.g. microdosing in almost any setting, at very different doses, etc.). I have accurately tested my LSD, so I know the exact dose I'm taking. I've also done a bit of research across various scientific articles.
It's unfortunately not really legal though where I'm living.
AFAIK there are a few psychiatrists/researchers around the world, in places where it's approved for clinical studies, with whom you could try it in a clinical setting. But unfortunately you're often still on your own when trying it. I tried microdosing it to treat ADHD, but I'm not really sure if it helped there, and I'm back on the usual medications. YMMV though, as LSD has mild stimulant properties as well.
I may try a combination/mix of LSD and other ADHD medications in the near future though.
It being a neuroplasticity enhancer doesn't necessarily mean that it's enhancing in a good/healthy way. I noticed that my thoughts/imagination have become a bit more "visual" (not sure if that's really the right term), but I'm not sure if it really made me more creative. I think for me it mostly manifested my mind further, and I'm a bit more in tune with nature etc. (much more conscious of all the "ecocrimes" humanity is committing).
This may not always be a good thing if e.g. you're prone to conspiracy theories.
It can also mean that it's promoting growth between areas of the brain that shouldn't be connected (e.g. leading to psychosis). It's still a powerful substance which should be handled carefully.
But in general I think it will still take some time until this becomes a clinically approved substance. I think it will likely be approved for stuff like depression, PTSD, or generally disorders originating mostly in the frontal lobe.
I have certainly noticed the antidepressive properties of it (although I never had a real depression while taking it).
For now, this is just funny, I laughed. But with the advent of all these new open-source LLMs, it will get worse.
If you thought people were gullible for falling for fake comments from bots, just wait for the near future. Post-factual/Post-Truth era truly begins. The internet that used to be a source of information is no more, only islands of truth will remain (like, hopefully, Wikipedia, but even that is questionable). The rest will be a cesspool of meaningless information. Just as sailors of the sea are experts at navigating the waters, so we'll have to learn to surf the web once again.
The funniest thing for me is how stupidly lazy these jerks that employ GPT for such things are. The printed book example really made me lol.
The simplest thing they could've done is use a service like QuillBot to rephrase, just as I did here to rephrase my comment:
-----------------------
I chuckled. For now, this is just hilarious. However, it will grow worse when more new open-source LLMs emerge.
Just wait till the near future if you thought people were naive enough to believe counterfeit comments from bots. The post-factual/post-truth age has arrived. Only isolated truths will remain when the internet ceases to be a reliable source of knowledge (like, ideally, Wikipedia, but even that is debatable). The remaining material will be an ocean of useless data. We'll have to relearn how to navigate the web, just as seafarers are specialists at navigating the seas.
The most amusing thing to me is how exceptionally sloppy these idiots are that use GPT for such things.
The thing is, the internet is already a cesspool of meaningless, misleading, and outright malicious information. I guess the earlier we all collectively realize it, the better.
The internet is currently quite useful; it could yet become a lot less so. Don't let your cynicism blind you to the fact that we do have a lot to lose.
At least in my case, I didn't mean to dismiss the internet or to say that we have nothing to lose. There is some good information out there, but it is important that we do not believe everything (or even most of what) we read and see.
I am more optimistic here. While LLMs allow you to produce tons of garbage, they also provide the tools to filter through that garbage, something we didn't have before. LLMs allow us to view content in a way that we decide, not the content creator. That's extremely powerful and lets us sidestep a lot of the old methods used to manipulate us.
The risk is more in the LLMs themselves, as whoever gets to control them gets to decide how people are going to experience the world. For the time being I might still double-check all the answers I get from ChatGPT, but over time the LLMs will get better and I'll get lazier, thus making the LLMs the primary lens through which one views the world.
> The risk is more in the LLMs themselves, as whoever gets to control them gets to decide how people are going to experience the world. For the time being I might still double-check all the answers I get from ChatGPT, but over time the LLMs will get better and I'll get lazier, thus making the LLMs the primary lens through which one views the world.
You've underlined the major risk these LLMs pose for humanity. For a brief time in the history of the human race, after information was democratized, most of us (at least educated people) had to use our own critical faculties to understand the world we live in. Now, that capacity will be outsourced to custom LLMs, most of them derived from others pre-trained with some ideological biases built in. The informational Dark Ages of the technological era.
If they provide the tools to filter through the garbage, it'll probably be standardized in some way as an interface to the web. So just as HTML and its satellite technologies limit and standardize the representational aspect of information on the web, I think this AI interface will severely limit the knowledge/wisdom aspect you can derive from information on the web. It's a hard thing to put my finger on; I hope you can understand what I'm saying.
Reflexively then, good comments are good, no matter what produced them. Or is a quality comment impugned by knowing it came from an LLM? Does it cheapen what it means to be human if other humans think highly of an LLM's attempts at English? Is it at all impressive that ChatGPT is able to spell words correctly, given that it's a computer? What does that mean for the spelling bee industry?
Predicting whether a text was written by an LLM or not is not trivial. What was the latest number from OpenAI? 30%? As LLMs get better, it seems like we won't be able to distinguish real text from fake text. Your LLM will be able to summarize it, but it will still be 99% spam.
You don't need to predict whether it was written by an LLM; whether it's a human or a machine makes no difference to the validity of a text. You just need to be able to extract the actual information out of it and cross-check it against other sources.
The summary that an LLM can provide is not just of one text, but of all the texts about the topic it has access to. Thus you never need to access the actual texts themselves, just whatever the LLM condenses out of them.
"just" need to "extract the actual information out of it and cross check it against other sources".
How do you determine the trustworthiness of those other sources when an ever increasing portion are also LLM generated?
All the "you just need to" responses are predicted on being able to police the LLM output based upon your own expertise (e.g., much talk about code generation being like working with junior devs, and so being able to replace all your juniors and just have super productive seniors).
Question: how does one become an expert? Yep, it's right there: experts are made through experience.
So if LLMs replace all the low experience roles, how exactly do new experts emerge?
You're trusting the LLM a lot more than you should. It's entirely possible to skew those too. (Even ignoring the philosophical question of what an "unskewed" LLM would even be.) I'm actually impressed by OpenAI's efforts to do so. I also deplore them and think it's an atrocity, but I'm still impressed. The "As an AI language model" bit is just the obvious way they're skewed. I wouldn't trust an LLM any farther than I can throw it to accurately summarize anything important.
For HN and forums in general, I think this will mean disabling APIs and having strict captchas for posting.
Beyond HN, I think this will translate into video content and reviews becoming more trustworthy, even if it's just a person reading an LLM-produced script. You will at least know they cared enough to put a human in the loop. That and reputation: more and more credit will be assigned based on reputation, number of followers, etc. And that'll hold until each of these systems gets cracked somehow (fake followers, plausible generated videos, etc.).
Banal is banal, whether written by a human or not.
But GPT text is inherently deceptive, even when factually flawless— because we humans never evaluate a message merely on its factuality. We read between the lines. The same way insects are confused and fly in spirals around light, we will be flying spirals around GPT text based on our assumptions about its nature or the nature of the human whom we presume to have written it.
Bachelor's degrees have mostly been a signal for a long time. The problem is that we have credential inflation, so now you need a master's or PhD to send that same signal to employers. As a result, you have fewer people going to college, but a greater percentage of people who go to college are getting advanced degrees.
LLMs check the answers? How do they check the answers? By what appears most frequently in the training corpus - that's the "answer".
So, how well curated are the texts that make up the training corpus? Is it just what's generally available on the internet? How much do you think that text accurately reflects reality? "Truth is determined by the most frequent posters" seems like really bad epistemology.
> For now, this is just funny, I laughed. But with the advent of all these new open-source LLMs, it will get worse. If you thought people were gullible for falling for fake comments from bots, just wait for the near future. Post-factual/Post-Truth era truly begins. The internet that used to be a source of information is no more, only islands of truth will remain (like, hopefully, Wikipedia, but even that is questionable). The rest will be a cesspool of meaningless information. Just as sailors of the sea are experts at navigating the waters, so we'll have to learn to surf the web once again.
I'm not sure what rock you've been living under, but this has been the internet for probably longer than a decade by now; the only difference is the volume. Even back before LLMs, or before Facebook, you couldn't take any "fact" at face value when found via the internet. And before that, the same people who fall for it now on the internet fell for it when watching TV or reading newspapers. People who are not interested in truth because it doesn't fit their world-view, will never be interested in the truth, no matter what medium it comes via.
I am aware of that. I like to think that millennials/gen-z at least knew a little about how to sift through the fake information, and the gullible people were the elders.
But now, with such obscene amounts of fake info at every corner, I think the internet and all sources of information (even printed! - because print at least would require significant effort) will lose credibility. Science will be the last bastion, and even that can easily be influenced by money.
> People who are not interested in truth because it doesn't fit their world-view, will never be interested in the truth, no matter what medium it comes via.
Yes, the claim is self-referential in the sense that it describes a certain attitude towards truth and how that attitude can affect one’s openness to new information. Specifically, the claim suggests that individuals who are not interested in truth because it conflicts with their existing beliefs are unlikely to change their minds even when presented with evidence or information that contradicts their views. This can create a self-reinforcing cycle where the individual becomes increasingly resistant to new ideas and perspectives.
The claim is: "People who are not interested in truth because it doesn't fit their world-view, will never be interested in the truth, no matter what medium it comes via."
It is not a suggestion, and it does not say "it is unlikely"; it is an unequivocal assertion of fact.
> This can create a self-reinforcing cycle where the individual becomes increasingly resistant to new ideas and perspectives.
That's my point (about the thinking underlying the comment in question).
It's interesting how humans privilege themselves when applying epistemology - other people's claims must be actually true, but for one's own claims "close enough" is typically an adequate bar. And it is typically only the other person who needs to improve their thinking.
The thing about gippie is that it will never shut up, it lists things in bullet-point fashion, and it uses a lot of filler words: 'however', 'additionally', 'currently', 'also', 'that', etc.
I feel I can start to tell when someone uses gippie, because I use it a lot. I imagine a future where I use gippie to write an email and the receiver uses gippie to summarize and respond. There's also a future evolution of the 'typo', where gippie hallucinates some nonsensical answer: "Oh my bad, my bot's trippin', LOL."
There will be self-verifiable truths, like provable theorems in axiomatic mathematics. There will be enforceable contracts, like Elon Musk's purchase of Twitter. There will be quarterly investor reports and earnings calls from public companies that avoid lying at risk of shareholder and SEC lawsuits. There will be documents timestamped with hashes and Bitcoin. The bots will need karma points as well.
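The hash-timestamping part, at least, is already trivial to do today. A minimal, hedged sketch in Python (the document content is made up, and the "publish the digest" step is left to whatever anchor you trust - a Bitcoin transaction, a public append-only log, etc.):

    # Hash-based timestamping sketch: publish only the digest somewhere public
    # and append-only; anyone holding the original document can later recompute
    # the hash and verify the document existed, unchanged, at that point in time.
    import hashlib

    document = b"Quarterly investor report, Q3: revenue up 4%."  # made-up example content
    digest = hashlib.sha256(document).hexdigest()
    print(digest)  # this 64-hex-character fingerprint is what gets anchored, not the document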