
Do y'all get mosquitos on the roof? My back patio is screened in and some are still sneaking through. Would love to know the wired-ethernet-certainty type of approach to dealing with this.


There are sometimes mosquitoes, but there's usually a pretty nice breeze on the roof which makes it hard for mosquitoes to navigate there.


I now have the pleasure of giving exercises to candidates where they are explicitly allowed to use any AI or autocomplete they want, but it's one of those tricky real-world problems where you'll only get yourself into trouble if you blindly follow the model's suggestions. It really separates the builders from the bureaucrats, far more effectively than seeing who can whiteboard or leetcode.


It's kind of a trap. We allow people in interviews to do the same, and some of them waste more time accepting wrong LLM completions and then fixing them than if they'd just written the code themselves.


I've been doing this inadvertently for years by making tasks as realistic as possible, explicitly based on the code the candidate will be working on.

As it happens, this meant that when candidates started throwing AI at the task, instead of performing the magic it usually does when you have it build a todo app or solve some done-to-death, irrelevant leetcode problem, it flailed and left the candidate feeling embarrassed.

I really hope AI sounds the death knell of fucking stupid interview problems like leetcode. Alas, many companies are instead knee-jerking and "banning" AI from interview use (even Claude, hilariously).


> but it's one of those tricky real-world problems where you'll only get yourself into trouble if you blindly follow the model's suggestions.

What's the goal of this? What are you looking for?


I presume people who can code, as opposed to people who can only prompt an LLM.

In the real world, you hit problems that the LLM doesn't know what to do with. When that happens, are you stuck, or can you write the code?


I'd be seeing whether the candidate actually understands what the LLM is spitting out and pushes back when it doesn't make sense, vs. being one of the "infinite monkeys on infinite typewriters"


IF (and it's a big IF) LLMs are the future of coding, that doesn't mean humans don't do anything; the role has just changed from author to editor. Maybe you don't need to write the implementation, but you sure better know how to read and understand it.


That's really interesting... can you give more details about the problem you are using?

This sounds like there will be a race between these kinds of booby-trap tests and AIs learning them.


Long-tail problems are not repeated much in the training data, so getting an LLM to remember them can be difficult.


Some code challenge platforms let you see how often someone pasted things in. That's been interesting.


Interesting, care to elaborate? Or is this a carefully guarded secret?


Not sharing what our coding questions are, but we also allow LLMs now. Interviewee's choice to do so.

In quite a few interviews in the last year I have come away convinced that the candidate would have performed far better relying exclusively on their own knowledge/experience. Fumbling with windows/tabs, not quite reading what they were copying; when I asked why they chose something, some would fold immediately and opt for something far better or more sensible, implying they would have known what to do had they bothered to actually think for a moment.

I put down "no hire" for all of them of course.


IMO you're better off investing in tooling that works with or without LLMs:
- extremely clean, succinct code
- autogenerated interfaces from OpenAPI spec
- exhaustive e2e testing

Once that is set up, you can treat your agents like (sleep-deprived) junior devs.
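To make the OpenAPI point concrete, here's a rough sketch of the kind of types a generator like openapi-typescript emits, plus a thin hand-written wrapper (endpoint and field names are made up for illustration):

    // Hypothetical types generated from the OpenAPI spec for GET /users/{id}.
    export interface User {
      id: string;
      name: string;
      roles: string[];
    }

    // Thin client over the generated type. Regenerate the types when the
    // spec changes and tsc flags every call site that broke -- the same
    // feedback loop works for humans and agents alike.
    export async function getUser(baseUrl: string, id: string): Promise<User> {
      const res = await fetch(`${baseUrl}/users/${encodeURIComponent(id)}`);
      if (!res.ok) throw new Error(`GET /users/${id} failed: ${res.status}`);
      return (await res.json()) as User;
    }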


Been using gRPC with JSON transcoding to REST on a greenfield project. All auto-generated clients across 3 languages. Added a frontend wrapper that pre-flights auth requests so the UI can dynamically display what users are allowed to do.
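For anyone who hasn't seen it, the transcoding is driven by google.api.http annotations in the proto; a minimal sketch with a made-up service (this annotation is what grpc-gateway / Envoy / Google's transcoder consume):

    syntax = "proto3";

    import "google/api/annotations.proto";

    message GetUserRequest { string id = 1; }
    message User {
      string id = 1;
      string display_name = 2;
    }

    service UserService {
      // Served natively over gRPC and transcoded to REST as GET /v1/users/{id}.
      rpc GetUser(GetUserRequest) returns (User) {
        option (google.api.http) = { get: "/v1/users/{id}" };
      }
    }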

Claude Code has been an absolute beast when I tell it to study examples of existing APIs and create new ones, while keeping the generated code out of its context.


Autogenerated interfaces from an OpenAPI spec are so key - agents are extremely good at creating React code based on these interfaces (+ TypeScript + tests + lints... for extra feedback loops, etc.)
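A trivial sketch of why (hypothetical generated module and component): UI code typed against the generated interfaces stops compiling the moment the spec and the frontend drift apart, which is exactly the cheap feedback an agent can iterate against.

    import { User } from "./generated/api"; // hypothetical generated module

    // If the spec renames or drops a field, this fails to compile --
    // automatic feedback for an agent (or a human) to fix.
    export function UserBadge({ user }: { user: User }) {
      return <span title={user.id}>{user.name}</span>;
    }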


https://www.dailybot.com/ I think we're re-inventing early 2010s development trends with extra steps.


this is literally something my team already does - you don't need AI, but this does fit with my running theory that AI makes it easier for people to see stupid processes they should change


I think of the sync paid tier as analogous to a Patreon membership, combined with paying someone a tiny amount to manage my data for me. The fact that it's all markdown makes me confident I could take my files and go play elsewhere at any time, but I enjoy knowing my money helps keep Obsidian going.


It depends on what you are trying to get out of a novel. If you merely require repetitions on a theme in a comfortable format, Lester Dent style 'crank it out' writing has been dominant in the marketplace for >100 years already (https://myweb.uiowa.edu/jwolcott/Doc/pulp_plot.htm).

Can an AI novel add something new to the conversation of literature? That's less clear to me because it is so hard to get any model I work with to truly stand by its convictions.


I've been using YouTube to re-discover a lot of fun movies from the 80s-00s that I never saw when I was a kid. It's quite nice to tune in and out while working.


Maybe not for books, but my taste in music benefitted tremendously from streaming and discovery algorithms.


> They are not suggesting new, very interesting melodies. They are finding you the tweaked versions of the songs you already like and, even on your first listen you can predict the melody that’s to come

This seems like the complaint of somebody who hasn't been using Spotify very long. After a decade plus, I feel like my algorithm is a rich compost pile of all of my previous phases of music. Spotify is excellent at letting me broaden my horizons or jump down a rabbit hole from a random starting point, like a song I hear in a public space or commercial or something sent by a friend. Maybe the OP should keep their ears open to more sources of randomness from the outside world?


I feel the opposite: my Spotify recs (after at least 8 years with an account) tend to get stuck on whatever I've been listening to recently. I've had to consistently go further afield to find any new (to me) music. Even their "new releases for you" falls short of recommending releases from artists I follow. How much less capable could it be?


Release Radar is consistently the worst feature of Spotify. It misses entire new albums from artists I listen to regularly, and seems to have a quota of songs to fill so after the first two or three it's no longer aligned with my interests. I can forgive it not being coherent since it's supposed to include multiple genres together, but I can't forgive it going way off from what I like just to hit 30 songs.


Not even Release Radar, but the "New Releases for You" list should probably have new releases by the artists I follow (as a basic minimum).


Huh, I don't even have that section on my Spotify. I have a "New music you need to hear this week" at the very bottom (none of it is anything I need to hear this week), but it's just generic "new music in X genre" playlists.


This is my experience as well... I have a very broad music taste but with some main themes. I find Spotify's algorithm (11 years of Premium) to regularly surface things I'll like, whether new music from artists I already know, music correlating strongly with known tastes, or every once in a while something that seems out of distribution but I like it anyway!

It probably helps that the strongest areas of my taste are relatively small or niche genres, like Scottish trad and Celtic (folk) rock. In those niches, similar-but-different is often distinctively different in actual experience. Sure, there's covers of the same song from time to time, but I actually do like enough of those not to be bothered, if they bring something new.


Economists like Paul Krugman are using a SURVEY of people, asking them what they spent (on a NON-FIXED basket of goods).

Let's say that you used to eat 10 units of grains ($1 each) and 5 units of beef ($5 each), for total spending of $35.

Let's now say that prices doubled (grains $2, beef $10).

You can't afford $70, so you adjust your consumption patterns and replace 2 units of beef with 2 units of grain (cutting your beef consumption from 5 units to 3).
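Working through the arithmetic (assuming the dropped beef is swapped one-for-one for grain):

    before: 10 x $1 + 5 x $5  = $35
    after:  12 x $2 + 3 x $10 = $54

Measured spending rises only ~54%, even though every price doubled.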

We have now replicated the graph that Krugman so proudly showed off.

"Food at home" prices are "only" up ~25%.

Paraphrased and copy-pasted from here: https://twitter.com/MorlockP/status/1761782081351196814


> Economists like Paul Krugman are using a SURVEY of people asking them what they spent (for a NON FIXED basket of goods).

1. What survey are you talking about? It's certainly not mentioned in the OP, and I don't have a twitter account so I can't see the prior tweets.

2. It feels incredibly suspect to paint "economists" as using this particular methodology, when "inflation" is usually synonymous with "CPI", and that uses a fixed basket. In fact your linked tweet even admits this.

