Hacker News | markwkw's comments

You trim, yes, but AI content surely invades (all?) areas of written material. People are increasingly using AI to assist their writing, even if it's just for slight editing or word choice suggestions.

Even AP doesn't ban the use of LLMs; its standards only prohibit directly publishing AI-generated content. I'm sure its writers leverage LLMs in some ways in their workflow, though, and they would probably continue to do so even if AP attempted to ban LLMs (human incentives).


If the AI-generated content is filtered for quality or is corrected, then it will still be good data. The phenomenon of model degradation only occurs when there is no outside influence on the generated data.


I think this is extremely important with AI-generated content, but it seems to be given less and less thought as people start to "trust" AI as it seeps further into the public consciousness. It needs to be reviewed, filtered, and fixed where appropriate. After that, it isn't any different from reviewing data on your own and wording it in a way that fits the piece you're writing. Unfortunately, there's so much trust in AI now that people will go ahead and publish content without even reading it for the correct tense!


The same problem exists if you blindly trust any source without verifying it. There is a huge amount of endlessly recycled, incorrect blog spam out there in every domain. Not only that, but this problem has always existed for second-hand information, so it's not as if we were starting from some pristine state of perfect truthfulness. We have the tools we need to deal with the situation, and they were developed hundreds of years ago, empiricism being chief among them. Nullius in verba[0]

[0] https://en.wikipedia.org/wiki/Nullius_in_verba


If tail events aren't produced by these models, no amount of human filtering will get them back. People would not just need to filter or adjust AI generated content, but create novel content of their own.


Mechanical engineering interviews seem to do the same as software: "Engineers always ask about beam bending, stress strain curves, and conservation of work. Know the theory and any technical questions are easy."

Basically an equivalent of simple algorithmic questions. Not "real", because it's impossible to share enough context of a real problem in an interview to make it practical. Short, testing principles, but most importantly basic thinking and problem-solving faculties.


> Mechanical engineering interviews seem to do the same as software:

I've been an engineer in the past (physics undergrad -> aerospace job -> grad school/ML). I have never seen or heard of an engineer being expected to solve math equations on a whiteboard during an interview. It is expected that you already know these things. Honestly, it is expected that you have a reference to these equations and that you'll have memorized the ones you use most.

As an example, I got a call from Raytheon about a job when I was finishing my undergrad. I was supposedly the only undergrad being interviewed, but the first interview was a phone interview. I got asked an optics question and said to the interviewer, "Do you mind if I grab my book? I have it right next to me and I bookmarked that equation thinking you might ask, and I'm blanking on the coefficients" (explaining the form of the equation while opening the book). He was super cool with that and at the end of the interview said I was on his short list.

I see no problem with this method. We live in the age of the internet. You shouldn't be memorizing a bunch of stuff purposefully, you should be memorizing by accident (aka through routine usage). You should know the abstractions and core concepts but the details are not worth knowing off the top of your head (obviously you should have known at some point) unless you are actively using them.


I've had a coding interview (screen, not whiteboard) fail where the main criticism was that one routine detail I took a while to get right could have been googled faster. In hindsight I still doubt that, given all the semi-related tangents you end up following from Google, but that was their expectation: look up the right piece of example code and recognize the missing bit (or get it right immediately).

For a proper engineering question (as in not software), I'd expect the expected answer to be naming the reference book where you'd look up the formula. Last thing you want is someone overconfident in their from memory version of physics.


> Last thing you want is someone overconfident in their from memory version of physics.

Honestly, having been in both worlds, there's not too much of a difference. Physics is harder, but coding gives you more things to juggle in your brain. So I really do not think it is an issue to offload infrequent "equations"[0] to a book/Google/whatever.

[0] And equations could be taken out of quotes considering that math and code are the same thing.


I had a senior engineer chastise me once for NOT using the lookup tables.

"How do you know your memory was infallible at that moment? Would you stake other people's lives on that memory?"

So what you did on that phone interview was probably the biggest green-flag they'd seen all day.


We live in the age of ChatGPT. It might actually be time to assess how candidates use it during interviews. What prompts they write, how they refine their prompts, how they use the answers, whether they take them at face value, etc.


Sure, and we live in the age of calculators. Just because we have calculators doesn't mean we should ban them on math tests. It means you adapt and test for the more important stuff: remove the rote, mundane aspects and focus on the abstractions and the nuance.

You still can't get GPT to understand and give nuanced responses without significant prompt engineering (usually requiring someone who understands the nuance of the specific problem). So... I'm not concerned. If you're getting GPT to pass your interviews, then you should change your interviews. LLMs are useful tools, but compression machines aren't nuanced thinking machines, even if they can masquerade as such in fun examples.

Essentially ask yourself this: why in my example was the engineer not only okay with me grabbing my book but happy? Understand that and you'll understand my point.

Edit: I see you're the founder of Archipelago AI. I happen to be an ML researcher. We both know that there's lots of snake oil in this field. Are you telling me you can't frequently sniff that out? Rabbit? Devin? Humane Pin? I have receipts for calling several of these out at launch. (I haven't looked at more than your profile; should I look at your company?)


I'm actually not talking about interviewees (ab)using ChatGPT to pass interviews and interviewers trying to catch that or work around that. I'm talking about testing candidates' use of ChatGPT as one of the skills they have.

> I see you're the founder of Archipelago AI.

I don't know where you got that from, but I'm not.


> I'm talking about testing candidates' use of ChatGPT as one of the skills they have.

The same way? I guess I'm confused about why this is any different. You ask them? Assuming you have expertise in this, you do that. If you don't, you ask them to demonstrate it and try to listen while they explain. I'll give you a strong hint here: people who know their shit talk about nuance. They might be shy and not give it to you right away, or might think they're "showing off" or something else, but it is not too hard to get experts to excitedly talk about things they're experts in. Look for that.

> I don't know where you got that from, but I'm not.

Oops, somehow I clicked esafak's profile instead. My bad.


You might as well ask how they use book libraries and web search.


I'm a chemist by education, so all my college friends are chemists.

Being asked a theoretical chemistry question at a job interview would be...odd.

You can be asked about your proficiency with some lab equipment, your experience with various procedures and what not.

But the very thought of being asked theoretical questions is beyond ridiculous.


Why, don't they get imposters? You sure run into people who can't code in coding interviews.


Because to be a chemist you need to graduate in chemistry.

What would be the point of asking theoretical questions?

There's just no way in hell people can remember even 10% of what they studied in college. Book knowledge isn't really the goal; the point is to teach you how to learn and master the topics.


Because to actually have those types of conversations you have to have legitimate experience. To be a bit flippant, here's a relevant xkcd[0]. To be less so: "in-groups" are pretty good at detecting others in their group. I mean, can you talk to another <insert anything where you have domain expertise, including hobbies> and not figure out who's also a domain expert? It's because people in the in-group understand the nuance of the subject matter.

[0] https://xkcd.com/451/


Doesn’t that comic more closely hew to the idea that some fields are complete bullshit?


That's one interpretation. But that interpretation is still dependent upon intra-group recognition. The joke relies on the intra-group recognition __being__ the act of bullshitting.


Hmm… I have a twist on this. Chemistry is a really big field.

My degree is in computational/theoretical chemistry. Even before I went into software engineering, it would have been really odd for me to be asked questions about wet chemistry.

Admittedly it would have been odd to be quizzed on theory out of the blue as well.

What would not have been odd was to give a job talk and be asked questions based on that talk; in my case this would have included aspects of theory relevant to the simulation work and analysis I presented.


And software and computing isn’t a big field? Ever heard of EE?


You can easily demonstrate that an LLM does know a certain fact X AND demonstrate that the LLM will deny that they know fact X (or be flaky about it, randomly denying and divulging the fact).

There are two explanations:
A. They lack self-reflection.
B. They know they know fact X, but avoid acknowledging it for ... reasons?

I find the argument for A quite compelling


> demonstrate that the LLM will deny that they know fact X (or be flaky about it, randomly denying and divulging the fact)

No, the sampling algorithm you used to query the LLM does that. Not the model itself.

e.g. https://arxiv.org/pdf/2306.03341.pdf

> B. They know they know fact X, but avoid acknowledging for ... reasons?

That reason being that the sampling algorithm didn't successfully sample the answer.
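
To make the "sampling, not the model" point concrete, here's a toy sketch (made-up logits, not a real model, and not the probing method from the linked paper): the same next-token distribution can look confident or flaky depending purely on how it is sampled.

    import math, random

    # Hypothetical next-token logits for the answer to some factual question.
    logits = {"1969": 2.0, "1968": 1.2, "I don't know": 1.0}

    def softmax(d, temperature=1.0):
        exps = {k: math.exp(v / temperature) for k, v in d.items()}
        z = sum(exps.values())
        return {k: v / z for k, v in exps.items()}

    def sample(probs):
        r, acc = random.random(), 0.0
        for tok, p in probs.items():
            acc += p
            if r <= acc:
                return tok
        return tok  # guard against floating point rounding

    print("greedy:", max(logits, key=logits.get))            # always "1969"
    print("sampled:", [sample(softmax(logits)) for _ in range(10)])

Greedy decoding surfaces the fact every time; temperature-1.0 sampling occasionally returns the wrong year or a refusal, which from the outside looks like the model randomly "not knowing".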


They will say "it's just a bad LLM", don't bother


I wish they had tried to solve the 1989 4x4 crossword puzzle optimization with a modern solver, but with a small memory limit (~8 MB) and perhaps a severely underclocked CPU, to showcase the algorithm improvements.
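
For a sense of what's involved, here's a toy illustration (not the article's 1989 formulation, not a real solver, and with a placeholder word list instead of a dictionary): filling a dense 4x4 grid is a small backtracking search where every partial column must remain a valid word prefix.

    from collections import defaultdict

    WORDS = ["card", "area", "rear", "dart", "cart", "dare"]  # toy word list
    N = 4

    prefixes = defaultdict(list)
    for w in WORDS:
        if len(w) == N:
            for i in range(N + 1):
                prefixes[w[:i]].append(w)

    def solve(rows):
        if len(rows) == N:
            return rows
        for cand in prefixes[""]:
            # the new row must keep every column a valid prefix of some word
            if all("".join(r[c] for r in rows) + cand[c] in prefixes for c in range(N)):
                result = solve(rows + [cand])
                if result:
                    return result
        return None

    print(solve([]))  # e.g. ['card', 'area', 'rear', 'dart']

In the real problem the dictionary and the search bookkeeping are presumably where an ~8 MB cap would start to bite.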


It's kind of funny because comparable hardware would be hard to find nowadays.

Even the ESP32, which can be purchased for something in the neighborhood of $2, runs at 600 MIPS (technically DMIPS, but all that means is they're not benchmarked for floating-point operations: https://en.wikipedia.org/wiki/ESP32), although I am not sure that it can run the full, exact same instruction set.


I heard about some competition like this: they made a boolean satisfiability problem. Then they ran a "race" -- an old solving algorithm running on modern hardware, versus a new algorithm on old hardware. The new solver won, even with a massive speed handicap!


I think there is something special. Some special coupling of emotional states between us (manifests as empathy, feeling of community, feeling of connection). Not saying that this connection cannot be replicated or faked, this remains an open question.

Perhaps building an AI vastly smarter than us will conflict with replicating this coupling. Perhaps we will prefer artwork from an AI with which we do feel this type of connection. Perhaps this will make us build an AI that is really close to us in terms of experiencing feelings and emotional states, even if it means it's less "smart". This wouldn't be a dull outcome...


I sometimes wonder how people managed in the 19th century. Many went from villages to increasingly crowded cities, with no savings and minimal or no education. Industrial production was ramping up, but with crazy demands on workers, long hours, and no safety standards. Couldn't get a mortgage and wait for a nice corporation to build an apartment for you. Couldn't look for a place in the suburbs (how would you commute?). Somehow our great-great-great-grandparents managed. Seems astounding.

On the other hand, in the coming times of global depopulation, what will the housing situation look like?


They were just less sheltered. I mean many people in countries across the world live in worse conditions.

To be honest, most people in America have become so comfortable that they just look for problems to make their lives seem harder than they really are nowadays.

I would be willing to bet people were also happier back then too.


If you're genuinely curious, I'd highly recommend the book "The Road to Wigan Pier" to get an understanding of what working-class life was like in the late 19th and early 20th centuries.


> I sometimes wonder how people managed in the 19th century.

In very harsh conditions, judging by the history of the 19th century, with all its wars, financial crises, social upheavals and whatnot.


Drunkenly


I don't know. Look at the hundreds of thousands of pages of regulations, the many layers of regions and municipalities, the meddling politicians, and the agencies and bureaus involved in the US market (or the EU market).

If you wrote down all the rules of Google's app store, including all their sweetheart deals (I'm sure there are many), and accounted for all the departments within Google that influence the store, it would be massively smaller and conceptually far simpler. The scale of bureaucracy, inconsistency, bad incentives and accumulated cruft is just not comparable.


Try finding justice on a company-owned platform like Android's app store, or YouTube. I see a lot of stories titled "YouTube shut down my account for no reason", and I never see stories titled "The government shut down my business for no reason".

At least with the government you are backed by the rule of law. With companies it's always "tough luck, you agreed to the EULA".

PS: and never forget that government agencies gave us the Internet, so all those layers and layers of bureaucracy did give us something great and it isn't all bad even if you want to see it that way.


Except that in the App Store and Play Store case, those rules do not matter and they are not bound by them; they can decide to cut you off from the mobile market at any point, for no reason.


Sonny: Yes! Rapidly and repeatedly. [then add the actual quote:] Can you?

Human: ...

Oh how the tables have turned.


"Cool. Create a 4x4 matrix of symphonies, with 'vivaciousness' on the X axis and 'dramatic themes' on the Y axis, then listen to them all and tell me which one's best."

Sonny: "...oh god"


These two effects are not similar in scale. Fertility across the globe is falling by a lot. Africa went from 6.7 to 4.2 in a span of 50 years. Asia went from 6 to 1.9.

Everywhere but the least developed countries, child mortality is low, such that the replacement fertility level is estimated at 2.1 (vs 2.0 if everyone survived until reproductive age).

We used to have 5-6 kids, with one dying if you were a bit unlucky (statistically speaking). Now we have 2 or less.
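
The arithmetic behind those replacement figures is simple enough to sketch; the numbers below are rough assumptions for a low-mortality country, not official estimates.

    # Each woman needs, on average, one daughter who survives to reproduce.
    frac_female_at_birth = 0.488   # ~105 boys born per 100 girls
    survival_to_repro    = 0.98    # assumed share of girls reaching reproductive age

    print(round(1 / (frac_female_at_birth * survival_to_repro), 2))  # ~2.09, the usual "2.1"
    print(round(1 / frac_female_at_birth, 2))                        # ~2.05 with no child mortality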


Sure, it's not the full cause of the fall in birth rates.

Somehow it had just never occurred to me that, of course, any family that has an "ideal size" is going to have fewer births if all the children survive than it will if one or more of them die. Now I'm trying to figure out what terms I would even look up to find out how much of the effect could be attributed to that.

In other words, how much of the decline in birth rate is due to parents reaching their ideal family size without the death of any children? I'm reasonably sure it's small, but are we talking <1% small or 10% small?

Edit: typo


Isn't the utility of the cycler that it can be a massive object with a lot of shielding for humans and permanently installed life support systems? Like a big space station for whole crews to live safely and comfortably while in transit.

You (incrementally) build up a large cycler in parts, each part accelerated to the orbit once.

Once the cycler is large, it seems infeasible to burn fuel to periodically adjust the orbit, since it's so massive. Unless propellant-less options like light sails can be used over long periods.
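
A rocket-equation sketch gives a feel for the cost; every number here is an illustrative assumption (cycler mass, burn size, engine), not a real mission figure.

    import math

    m_cycler = 5.0e6        # kg, assumed dry mass of a large shielded cycler
    delta_v  = 50.0         # m/s, assumed periodic correction burn
    isp, g0  = 350.0, 9.81  # assumed chemical engine, standard gravity

    ve = isp * g0
    m_prop = m_cycler * (math.exp(delta_v / ve) - 1)  # Tsiolkovsky, solved for propellant
    print(f"{m_prop / 1000:.0f} tonnes of propellant per burn")  # ~73 t for these numbers

Even a modest 50 m/s trim costs tens of tonnes of propellant at this scale, which is why a trajectory that needs near-zero maintenance delta-v (or a propellant-less option) matters so much.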


The benefit of a cycler is the near zero delta V required to maintain it, and the fact that it flies by two interesting orbits, like Mars and Earth.

The consequence of this varies, but includes castles as you suggest.


Yes, it could be essentially a cruise ship for the journey, and starships are used just to ferry passengers and supplies to/from the cycler at each destination. With life support closure you might not even need much in supplies.

