Hacker News | qsort's comments

> if an interviewer says "I don't think that will be a good idea" just take the hint that it won't be what they expect and change it.

I'd like to underline this just in case the author reads the thread. He really does seem great and I wish him all the best, but reading between the lines is a useful skill regardless of this specific situation. He says he doesn't speak English well, which might have played a role in the misunderstanding, but "I don't think that will be a good idea" is not a suggestion, it's an order.


If the interviewer were better, they would instead ask: if no one else on the dev team knows the type system, how do you handle that situation?

Which gets at the real risk, maybe valid, that someone super smart who isn't the best communicator will go and write a bunch of code that no one else in the company can reason about. Maybe you get a really nice explanation of the type system, or they acknowledge it's an esoteric approach they used anyway because you said anything goes and it's cool.


Good leadership starts with clearly communicating expectations. If your boss cannot say "I want you to use this tool" or "use whatever you see fit", but instead hints at some possible drawbacks of certain tools, he is bad at his job.

There are multiple ways to read that suggestion. It can also be read as the interviewer saying he does not believe in the technical depth of the candidate, which can be taken as a challenge.

It would also have been better to give a choice of a few selected languages, as that means the interviewer can be much better prepared.


what is this good leadership you speak of, where did you encounter that?


military


Yeah, as an interviewer, if the type-heavy solution wasn't what I wanted to look at, I would've asked the candidate to pretend they don't understand the type system and adjust the solution accordingly.

Actually though if they wanted to test for debugging ability, presenting some real code with defects would have worked a lot better than this.


That last part is key. Way better than FizzBuzz from scratch: the best interview results I've seen come from giving a candidate a pre-made solution or architecture document that technically works but has glaring issues, and just talking about their opinion of it, with coding or pseudocoding optional for presenting solutions.


>There are multiple ways to read that suggestion. It can also be read as the interviewer saying he does not believe in the technical depth of the candidate, which can be taken as a challenge.

There are not really multiple ways to read "I don't think that's a good idea" in an interview. If your "technical depth" leads you to an inappropriate solution, it doesn't matter if it's right. Just because this solution worked for FizzBuzz and all their rules does not mean it's maintainable. It's obvious to most senior people that such a weird solution is brittle and overkill, even if they can't come up with a rule to break it. But here's a simple one: Output FizzBuzz if all the rules are passed, except if there is a database entry matching the number. Good luck solving that with some bullshit type theory, and good luck to anyone who comes behind this guy to add that rule to his Rube Goldberg code.

It is not uncommon for self-taught people to develop weird fixations on particular niche tech. Even if their understanding is thorough, they don't have the experience to know when their tool is appropriate. For example, I worked with one mostly self-taught guy who wanted to rewrite everything in a niche compiled language, including things that really should be done in scripting languages like Python or shell. Thankfully nobody else let him do that. But deep down I think he didn't accept that he was wrong, and on multiple occasions he expressed that he thought the average person on the team was woefully inexperienced. In fact, it was him that was woefully inexperienced, because he didn't understand how inappropriate and even flat-out wrong his solutions were. He wrote a fair amount of seemingly sophisticated code and sounded smart until you really got down to details and realized how nonsensical it all was. The worst part is if such a person becomes a manager and YOU have to be the one to fight for the right solution with them. I think that guy talked his way into management after I left the company, and I pity anyone who has to work under him.


The way you figure these things out is by talking about them. Plenty of people like to solve challenges in weird ways and might even see the suggestion that it is inappropriate as a challenge.

>There are not really multiple ways to read "I don't think that's a good idea" in an interview. If your "technical depth" leads you to an inappropriate solution, it doesn't matter if it's right. Just because this solution worked for FizzBuzz and all their rules does not mean it's maintainable. It's obvious to most senior people that such a weird solution is brittle and overkill, even if they can't come up with a rule to break it. But here's a simple one: Output FizzBuzz if all the rules are passed, except if there is a database entry matching the number. Good luck solving that with some bullshit type theory, and good luck to anyone who comes behind this guy to add that rule to his Rube Goldberg code.

"What set of tools would you choose to build <normal software project>? And why would you choose them over alternatives?" or "We here at X make use of Y a lot, have you worked with Y or alternatives? What did you think about Y or alternatives?". Both are infinitely more telling about the candidate.

Someone's choice for a contrived joke problem will not reflect their choices for a real software project.

>For example, I worked with one mostly self-taught guy who wanted to rewrite everything in a niche compiled language, including things that really should be done in scripting languages like Python or shell. Thankfully nobody else let him do that. But deep down I think he didn't accept that he was wrong, and on multiple occasions he expressed that he thought the average person on the team was woefully inexperienced. In fact, it was him that was woefully inexperienced, because he didn't understand how inappropriate and even flat-out wrong his solutions were.

I worked with people who liked to solve silly problems in silly ways, but when it came to real projects always preferred mature languages and libraries which focused on long term support, stability and maintainability.

The problem with the interview is that instead of talking about the subjects, they themselves want to rely on subtle hints about the candidate, which may not mean anything.


>Plenty of people like to solve challenges in weird ways and might even see the suggestion that it is inappropriate as a challenge.

That's absolutely not the way to approach an interview. "Let me try to do things the exact way you clearly don't want, just to see if we can." Is the candidate going to try to do their next job this way? What are you going to do when their egotistical attempts to solve problems in clever ways despite your advice and even instructions bites YOU?

>"What set of tools would you choose to build <normal software project>? And why would you choose them over alternatives?" or "We here at X make use of Y a lot, have you worked with Y or alternatives? What did you think about Y or alternatives?". Both are infinitely more telling about the candidate.

I hate coding interviews but I am sympathetic to people who want to do them because I've met people who are such good bullshitters that they could fool most people. If you're talking about someone with no working experience and who claims to be self-taught, you really have to make them write code.

>Someone's choice for a contrived joke problem will not reflect their choices for a real software project.

The interviewers tried to tell him to approach it like an industrial-grade solution, not a weird academic exercise. He was in the mood to do an academic exercise, and that's what he did. The interviewers seriously don't know what he will do in the workplace. That's why they're trying to make him write some code in such a way. Self-taught people are more of a risk in that they often overcomplicate (or oversimplify) things.

>I worked with people who liked to solve silly problems in silly ways, but when it came to real projects always preferred mature languages and libraries which focused on long term support, stability and maintainability.

Good for you? I'm not talking about silly problems. I'm talking about someone who wanted to rewrite our build system in a compiled language, and our Python unit test driver in a different compiled language. He wanted to use inappropriate "fun" languages at work. I'm not categorically against using interesting new languages and tools, but when there is ZERO benefit to doing so and nobody else knows said languages, it is not to be done.

>The problem with the interview is that instead of talking about the subjects, they themselves want to rely on subtle hints about the candidate. Which may not mean anything.

The whole point of the interview is to get hints about the candidate. There are times when interviewers read too much into what the candidate does or says, but this isn't one of them. The candidate wanted to show off his knowledge of type theory despite pretty obvious hints that the interviewers didn't want that. That means he has bad social skills or else he has an ego issue. The fact he blogged about it in such a way to brag about his solution suggests he does have an ego problem. There's also a healthy chance that the whole story is fiction, just to advertise himself as a self-taught "genius" who is turned down for being "too good" lol.


I strongly disagree. I’ll take the opportunity to wildly diverge when the chance naturally presents itself. It’s how I can show the interviewer that I’m a creative person who comes up with interesting alternatives and can also be fun to work with.

For example, I was asked to do FizzBuzz once. I laughed, said we’ve both done this dozens of times, and would they like to have fun with it? We ended up building this wild thing with recursive Python generators and itertools and a state machine or something. I don’t remember the details a decade later, except that the interviewer thought it was hilarious, and I taught them some Python (“wait, that part there, does that actually work?!”, and they paused me to test it on their laptop).
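I don't remember the exact code, but for flavor, a sketch of what a generators-and-itertools FizzBuzz might look like (this is a reconstruction from memory, not the actual interview code):

```python
from itertools import count, cycle, islice

def fizzbuzz():
    # Two independent word cycles, zipped against an infinite counter;
    # the empty-string trick makes "Fizz" + "Buzz" fall out for free.
    fizzes = cycle(["", "", "Fizz"])
    buzzes = cycle(["", "", "", "", "Buzz"])
    for n, f, b in zip(count(1), fizzes, buzzes):
        yield f + b or str(n)

print(list(islice(fizzbuzz(), 15)))
# -> ['1', '2', 'Fizz', '4', 'Buzz', 'Fizz', '7', '8', 'Fizz', 'Buzz',
#     '11', 'Fizz', '13', '14', 'FizzBuzz']
```

No modulo anywhere, which is exactly the kind of "wait, does that actually work?!" moment I mean.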

I got the job.

As a candidate, you’re interviewing them, too. If the person is a martinet who can’t deviate from the script even slightly, and you have other options, do you really want to work with them? That sounds joyless.


Sometimes going off script is OK, but that is not what they wanted here, it seems. Besides, I guess that if this interview ever happened, the "solution" might not be the reason they weren't hired. But the numerous hints the candidate got suggest that they wanted a normal solution. Asking someone to deliver a solution within some reasonable constraints is not "joyless", and disregarding those constraints on purpose over and over is not "smart."


>The whole point of the interview is to get hints about the candidate.

Which it didn't accomplish at all, because the interviewers refused to do the single most important thing: actually talking with their candidate about these things. Instead they are relying on psychoanalysis to divine some secret meaning in his actions.

I am not arguing that your interview shouldn't try to figure out the personality or professional approach of a person, but that this particular interview made it near impossible to do so. Simply because they refused to talk about the things they wanted to know.


You make it sound so much more mysterious than it is. The candidate rejected polite social cues to not go down the path he did. He came up with an oddball, brittle, and undebuggable solution. They said what they wanted for this fairly simple problem. It's not a guess at some secret, it's a simple observation that the candidate is not right for the job.


> The interviewers tried to tell him to approach it like an industrial-grade solution, not a weird academic exercise.

Wrong. They put in silly rules like "Max of 30 lines" and "Mutating array operations are forbidden". These do not describe industrial-grade rules. They describe an academic, esoteric challenge. And then when he provides them with it, they punish him for his creativity by adding in bullshit rules retroactively e.g. "Hardcoding matrices is forbidden."

You just sound upset that he's able to walk the walk but you WANT him to be just a bullshitter.


> I hate coding interviews but I am sympathetic to people who want to do them because I've met people who are such good bullshitters that they could fool most people. If you're talking about someone with no working experience and who claims to be self-taught, you really have to make them write code.

The choice is not limited to made up toy problems vs not testing coding skills at all. You can give them real problems to solve.

> The interviewers tried to tell him to approach it like an industrial-grade solution, not a weird academic exercise.

Hahaha, and how exactly do you write an 'industrial-grade' FizzBuzz? ;)

Obligatory: https://github.com/EnterpriseQualityCoding/FizzBuzzEnterpris...

> The whole point of the interview is to get hints about the candidate. There are times when interviewers read too much into what the candidate does or says, but this isn't one of them.

Oh but it is. If I were to ask you, the expert, to draw a blue line with red ink and then attempt to draw conclusions from your behaviour based on that question, could I ever get a valid assessment of you? If a test is faulty, so are its results. Garbage in, garbage out.

Interviewing is no trivial task. It is an attempt to test how well someone will do a thing without having them actually do it. By definition, that is impossible. Still, we can try to get good enough results by minimising the number of differences between our test environment (interview) and production (the job). That will involve:

a) making the interview environment resemble the job as much as possible (no hazing, minimal pressure on the candidate, writing code in an IDE instead of on a whiteboard, etc.)

b) presenting the candidates with coding tasks that match what the company does on a daily basis (take a suitable bug you had in your codebase, touch it up a bit with more issues, have them fix it; pair program with them to add a new feature to your codebase, etc.)

Some concrete examples:

- https://rachelbythebay.com/w/2022/03/19/time/ (it suffices to read the first two paragraphs)

- https://quuxplusone.github.io/blog/2022/01/06/memcached-inte...

- https://blog.jez.io/bugsquash/


> I wish all the best to him, but reading between the lines is a useful skill regardless of this specific situation.

I disagree. This is not a fair ask, especially for a programming position. Programming, and maths in particular, put a lot of emphasis on attention to detail.

If he can write it in X, and there's no rule against it, and the job gets done well, then there is no issue. Arguing it any further is unproductive. He's applying for software development, not public relations.

> "I don't think that will be a good idea" is not a suggestion, it's an order.

Then it should be a rule. "Reading between the lines" sounds like an excuse to me for bullshit criteria. It should be written, it should be explicit, and it should be known. If the interviewer is uncomfortable writing it down as a rule then that tells me they KNOW that it's too silly or pedantic. This whole idea of unwritten rules is a double standard designed to weed out neurodivergent or autistic individuals who are more than capable of fulfilling job requirements and, to me, seems like a potentially illegal form of discrimination that violates disability civil rights laws.


If a company is trying to hire people in other countries/cultures to save a buck, they should be mindful that communication may be a bit more of an issue, and try harder, too. Otherwise they'll have the same issue after hiring, not just during interviews.

It's not all up to the interviewee to decipher everything. Both should be trying a bit to get to the same understanding, prior to setting off to work.

Anyway, the company will be wasting money when the communication works out poorly, so it's ultimately up to them.


That's part of good senior/manager skills, unfortunately. These sorts of subtle hints happen all the time, and it looks bad if folks keep missing them. E.g. the British are famous for them; you'll rarely experience straight talk to the bone.


They're 18 rules in and now they're going to "hint" that something isn't okay when the candidate directly asked yes or no? Fuck that.

And the reason for the hesitance was a worry that it would have trouble with future rules, which turned out to be completely unfounded.


Yeah, it's an English thing. A literal translation in Dutch would mean "explain yourself here", and if you'd change tack because of the question alone, it would seem like you really lack self-confidence.

But having lived in the UK for a bit, I can say that there it is most likely an order.


In the UK, it’s not at all cut and dried - depends more on the personality of the person saying it. Could be lazy assertion of control, an attempt to help or an implied challenge.

I also think the interview setup and management were poor.


> and if you'd change tack because of the question alone, it would seem like you really lack self-confidence.

Or because the candidate realized that they've messed up, and by dropping the issue can at least salvage the next XY minutes of the interview by not going down the wrong rabbit hole.

"Could you tell me more about this?" and "Are you sure about this?" are invitations to provide the rationale for your answers. "I'm not sure that's a good idea" is a very unsubtle but polite way of hinting that you have gone way off the map.

As an interviewer, I want my candidates to succeed. I want them to put their best foot forward. I've asked my question over a hundred times, and I've seen many ways that people have solved it, correctly or no. If I'm giving them this suggestion, it means that I know that they are going down one of the many, many wrong garden paths.


I know, I have now spent enough time in the Anglosphere to understand this cultural code. But I do want to emphasize that, yes, if you only know English as a language, that subtext is not obvious.

In France, professors would literally say "You are wrong." as an invitation to explain yourself better. There are only 500km between London and Paris, but the culture behind these words is the complete opposite.


That phrase in English has a strong connotation of the recipient screwing up somehow though, so I’d probably say “that’s not what I was looking for in this case. Try something else.”


> Yeah, it's an English thing. A literal translation in Dutch would mean "explain yourself here",

That's just patently untrue. The literal translation is "ik denk niet dat dat een goed idee is" and the better translation would be "dat is niet de bedoeling".

If I got told in an interview "dat is niet de bedoeling" I'd be damn sure to rework my solution because they're clearly trying to coax me towards whatever they're looking for. And in a way it is actually a nice thing of them, because they could just say nothing and fail me out of that round of interviews.


Interviewer was wrong. Interviewee learned that interviewer is happy to advance incorrect ideas and stifle innovation.


Smells like poor management to me.

"We'd like you to explore this path and show how you would deal with problems that occur there" is much easier to interpret than the passive aggressive tone of "I don't think .."

I would also go with my idea and see how the manager reacted: there is only so much micromanagement I'm willing to tolerate at work. Interviews go both ways.


For anything serious your best bet is using an SSH client like Prompt or Blink, or an RDP client (Microsoft's app is decent).

You might have some luck with something like iSH, but it's pretty janky if you have to do real work.

Juno is a decent workflow for Python/Jupyter and has local execution, but it's far from a complete IDE.


I have read novels written by engineers. More often than not, they are unreadable. They fail to understand why people read (or write). They are an engineer's idea of what a novel should look like.

In the most loving way possible and as a software engineer myself, this is an engineer's idea of what a deck of cards should look like.


> I have read novels written by engineers. More often than not, they are unreadable. They fail to understand why people read (or write). They are an engineer's idea of what a novel should look like.

On the other hand: Greg Egan ([1], [2]) is a mathematician and novelist. His novels are sometimes considered barely readable (if you are not part of his audience). Then again, he does seem to understand why people read his novels - he knows his audience. :-D Readers praise his novels for the worldbuilding (though commonly the story is considered to be on the weaker side in many of his novels).

So I do imagine that such engineers who write novels perfectly know why people read - they just write for a different audience. :-)

[1] https://www.gregegan.net/

[2] https://en.wikipedia.org/wiki/Greg_Egan


What, like There Is No Antimemetics Division?

I think you read novels by bad writers, who happened to be engineers.


Having followed qntm's work since Fine Structure (2006), he's really improved since then. He got a lot better at character writing with Ra (2011).

I think it's largely a matter of intentional practice. I think that some sci-fi writers don't prioritize honing that skill, since they just want to build a cool world and the story is a way to demonstrate that.


I have never heard of that book, but a quick Google search reveals that the author is in the process of "releasing version 2". Refer to the previous comment for my opinion on that matter.


In addition to the content licensing, it was released as serial web fiction presented in a wiki. Serial fiction usually benefits from significant editing to fit the structure of a novel.


Well, it's a fantastic book.


It's being rewritten for content licensing reasons.


All it needs is some artsy marketingy touch to sneak it past normies' defenses, and it'll shine :).


A configurable deck of cards. It's not like you see these in every store.

That being said, I love the page. The layout, use of colors and the content.


They aren't that uncommon outside of physical stores, where board game nerds gather - off the top of my head I can think of at least three similar multi-game decks: Everdeck [0], Badger Deck [1], and Singularity Deck [2], while a cursory search nets me another dozen or so [3].

There's also the Decktet [4], of course, even if it takes a very different approach than the rest (for one thing, it does away with traditional suits - it is a very interesting deck, and one I really like).

[0] https://www.drivethrucards.com/product/291492/The-Everdeck [1] https://www.drivethrucards.com/product/130446/The-Badger-Dec... [2] https://www.drivethrucards.com/product/189681/The-Singularit... [3] https://boardgamegeek.com/geeklist/252876/playing-card-game-... [4] https://www.decktet.com/


Just bought the Decktet and the book based on your recommendation. It looks neat. I've recently gotten into card games with my kid and it's a lot of fun. I'm hoping these games are half decent as well. Have you played many?

On another note, I've really enjoyed finding one-player games that use a standard 52-card deck as well. There's a ton of interesting games out there.


Right, but a card game with specific cards already chosen and designed for me, which lets the cognitive burden fall on the game strategy itself, is much more useful for the majority of people, and typically costs $5-10. This multi-deck would be a nightmare with children too.


Parent referred to this as if it were a novel written by an engineer, which then, as usual, wouldn't be a good one. The off-the-shelf card games are a standard novel. These cards are not your average novel; you'd expect the audience to be people who are into this unusual, and for some even unpleasant, genre and style.


Nevil Shute was an engineer and he wrote some well respected novels like the nuclear war story On the Beach (made into a movie twice), A Town like Alice (made into a film and TV series), and No Highway (sometimes called No Highway in the Sky after the title of the film based on it), which actually deals with aircraft engineering and the problems involved in designing early jet airliners.

https://en.wikipedia.org/wiki/Nevil_Shute


It seems like these are kind of a tool for a "game engineer" to prototype a game, though. Not just a deck to keep around the house for general play.


I don't see the point. If/when that happens, programmers start doing the thing that comes next. The thread was about skills, ASM skills aren't irrelevant for doing C, C skills aren't irrelevant for doing Python and so on.

One-trick ponies aren't good programmers in the first place.


Much of what comes next isn't programming as we know it.

Already today, see SaaS products for content management, CMS and no-code frontend.

There is zero programming; what one ends up doing is configuring SaaS products to connect among themselves, plug in data sources, have AI algorithms process marketing data, export a generated UI into Vercel/Netlify, and that is about it for 90% of customers.


For all those SaaS products today, you see legions of consultants, system integration projects, a cottage industry for customizations...

The skills aren't irrelevant.


True, but it isn't programming.


I work a lot with databases and I've seen... stuff. It's not as bad as you might think if you know what you are doing. Most RDBMSs support recursive CTEs; it feels like writing Prolog with a slightly sadistic syntax. For something like AoC the most difficult part is probably parsing the input.
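To give a taste of that Prolog-ish flavor, here is a minimal sketch (a made-up toy edge table, run through Python's stdlib sqlite3 so it's self-contained) of Datalog's classic reachability query as a recursive CTE:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE edge(src INTEGER, dst INTEGER);
    INSERT INTO edge VALUES (1, 2), (2, 3), (3, 4), (9, 10);
""")

# Base case: the start node. Recursive case: join the CTE back against
# the edge table -- structurally a two-clause Prolog/Datalog predicate.
reachable = con.execute("""
    WITH RECURSIVE reach(node) AS (
        SELECT 1
        UNION
        SELECT e.dst FROM edge e JOIN reach r ON e.src = r.node
    )
    SELECT node FROM reach ORDER BY node
""").fetchall()

print([n for (n,) in reachable])  # -> [1, 2, 3, 4]; 9 and 10 unreachable
```

The `UNION` (as opposed to `UNION ALL`) deduplicates along the way, which is what keeps cyclic graphs from looping forever.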


Speaking of parsing, back around y2k we were building an app that used XML everywhere, which was the style at the time, and our DBA wanted to write an xml parser in SQL (the api would involve sending XML to the database). That got vetoed.

IMO, this kind of thing is what AoC is good for - you get to play with weird/obscure stuff without affecting your day job code.


I did something with JSON back before there was reasonable native support - it's certainly not robust, but it handled a few syntax variants for a use case where we had an extra attribute column that serialized JSON, and wanted to surface one of the fields as a proper column on the table.

https://blog.tracefunc.com/2011/11/19/parsing-json-in-sql-ht...


Funnily enough, I’m actively working on rewriting a stored procedure which parses an XML snippet as one of its arguments.

Luckily it’s not a handwritten XML parser though: https://learn.microsoft.com/en-us/sql/t-sql/functions/openxm...


Just around the same time I was working at a place that used Oracle's web app extension, with CGI endpoints written completely in PL/SQL. I did end up writing an XML parser/serializer for it.


I do AoC in SQL; I wish that were true. With Postgres, you have lots of regex/string manipulation functions that make parsing easy.

For me, the biggest problem was memory. Recursive CTEs are meant to generate tables, so if you are doing some maze traversal, you have to keep every step in memory until you are done.


It is closer to Datalog, I think - or can you express cut? CTEs are fairly restricted compared to logic programming languages though, at least in Postgres. In particular, relations cannot be mutually recursive, and your rules may only be linearly recursive in themselves (i.e. they can contain only one instance of themselves on the right-hand side). Postgres is overly restrictive in the latter and requires at most one recursive reference over all subqueries in the UNION, even though it would be safe to only restrict the number of recursive calls for each subquery (each corresponding to a separate Datalog rule for the same relation). It is possible to work around that restriction using a local WITH expression (a hack, really), but then you are also on your own, since it disables all checks and allows you to write rules which use actual nonlinear recursion and will give incorrect result sets when evaluated.

I really would like Postgres to have proper support for writing Datalog queries, and with better and more efficient incremental evaluation algorithms as opposed to the iterative semi-naive algorithm that is only supported now.


Haven’t written SQL in a while (and I used to write a lot) but I think SQL Server recursive CTEs are fairly unbounded so it’s just a Postgres limitation unfortunately.

(I’m a fan of MS SQL but it’s Microsoft and also hard to financially justify for many companies. But if you ever get to use it, it is a very solid RDBMS, even if the rest of your stack is open source.)


The cost of MSSQL is largely controlled by how the system is designed and the complexity of the business.

The model I am most familiar with is a 10-20 employee B2B SaaS startup running one "big" instance on a single vm in the cloud somewhere. If this is approximately all you require, then the cost should not be a dominating factor in your decision.

I think "because Microsoft" is also really poor justification if we are being serious about the technological capabilities and pursuing high quality business outcomes. If your business is fundamentally about open source advocacy and you are operating as a non profit, I totally get it. But, this is probably not your business model.


MS SQL is one fascinating piece of software, much closer to the big commercial offerings such as Oracle and DB2, yet much more user friendly and convenient.

This sentiment against it really only comes from people who have not used it, or who have never touched the enterprise version, which is a very mature ecosystem with a lot of features that have been available for ages now.

Most of my career I’ve been dealing with DBs, and MSSQL is the easiest to admin, perhaps because it is also tightly integrated with all the scripting on the platform. It also runs on Linux, and does so better than the rest can say for running on Windows.


A company I worked for uses the Syteline ERP, which heavily relies on SQL Server. But the DBA was constantly complaining about how slow Syteline's SQL was. One major issue was long-running transactions taking 10 minutes, locking rows/tables for too long and using a lot of memory. You would think very expensive ERP systems would have decent SQL.


> Most RDBMSs support recursive CTEs, it feels like writing Prolog with a slightly sadistic syntax.

Which makes sense as both are declarative logic-based languages. IMHO, SQL and Prolog fundamentally have much in common.


I did a semester at the University of Edinburgh and took database systems and logic programming at the same time, and I definitely felt the synergy between them.


Parsing is most difficult for probably the first third of the problems. When you get to day 19 or so, the input is still just a grid or a bunch of ints, just like day 1, but the algorithms required are considerably more challenging than the parsing part. (I've done all 25 problems in all years.)


Thanks for that comment.

I laughed aloud at "It's not as bad as you might think if you know what you are doing."

... because that pretty much describes all human activity :-)


I'm mostly in the camp that notebooks aren't that great for software development; they thrive as an "Excel for coders" of sorts. But take a look at nbdev from fast.ai.

The literate programming aspect is very nice and I wish it was explored more.


I believe it's more frustration directed at the mismatch between marketing and reality, combined with the general, well-deserved, growing hatred for SV culture and, more broadly, software engineers. The sentiment would be completely different if the entire industry marketed itself as the helpful tools these things are, rather than the second coming of Christ they aren't. This distinction is hard to make on "fast food" forums like this one.

If you aren't a coder, it's hard to find much utility in "Google, but it burns a tree whenever you make an API call, and everything it tells you might be wrong". I for one have never used it for anything else. It just hasn't ever come up.

It's great at cheating on homework, kids love GPTs. It's great at cheating in general, in interviews for instance. Or at ruining Christmas, after this year's LLM debacle it's unclear if we'll have another edition of Advent of Code. None of this is the technology's fault, of course, you could say the same about the Internet, phones or what have you, but it's hardly a point in favor either.

And if you are a coder, models like Claude actually do help you, but you have to monitor their output and thoroughly test whatever comes out of them, a far cry from the promises of complete automation and insane productivity gains.

If you are only a consumer of this technology, like the vast majority of us here, there isn't that much of an upside in being an early adopter. I'll sit and wait, slowly integrating new technology in my workflow if and when it makes sense to do so.

Happy new year, I guess.


> there isn't that much of an upside in being an early adopter.

Other than, y'know, using the new tools. As a programmer-heavy forum, we focus a lot on LLMs' (lack of) correctness. There's more than a little annoyance when things are wrong, like being asked to grab the red blanket and then getting into an argument over whether it's orange, instead of focusing on what was important: someone needed the blanket because they were cold.

Most of the non-tech people who use ChatGPT that I've talked to absolutely love it, because they don't feel it judges them for asking stupid questions, and they have conversations with it about absolutely everything in their lives, down to which outfit to wear to the party. There are wrong answers to that question as well, but they're far more subjective, and just having another opinion in the room is invaluable. It's just a computer and won't get hurt if you totally ignore its recommendations, and even better, it won't gloat (unless you ask it to) if you tell it later that it was right and you were wrong.

Some people have found upsides for themselves in their lives, even at this nascent stage. No one's forcing you to use one, but your job isn't going to be taken by AI; it's going to be taken by someone who can outperform you because they're using AI.


Yikes.

Clearly said, yet the general sentiment awakens in me a feeling more of gothic horror than bright futurism. I am struck with wonder and worry at the question of how rapidly this stuff will infiltrate the global tech supply chain, and at the eventual consequences of misguided trust.

To my eye, too much current AI and related tech are just exaggerated versions of magic 8-balls, Ouija boards, horoscopes, or Weizenbaum's ELIZA. The fundamental problem is people personifying these toys and letting their guard down. Human instincts take over and people effectively social engineer themselves, putting trust in plausible fictions.

It's not just LLMs though. It's been a long time coming, the way modern tech platforms have been exaggerating their capability with smoke and mirrors UX tricks, where a gleaming facade promises more reality and truth than it actually delivers. Individual users and user populations are left to soak up the errors and omissions and convince themselves everything is working as it should.

Someday, maybe, anthropologists will look back on us and recognize something like cargo cults. When we kept going through the motions of Search and Retrieval even though real information was no longer coming in for a landing.


Ruby Storm player spotted? :)


Hah! Nah, this is an M:tG Arena deck for Historic. Besides I'm too rogue for Ruby Storm :)


According to this thread: https://old.reddit.com/r/adventofcode/comments/1hnk1c5/resul...

o1 got 20 out of 25 (or 19 out of 24, depending on how you want to count). The experimental setup is unclear (it's not obvious how much it was prompted), but it seems consistent with the leaderboard, where the problems solvable with LLMs had clear times flat-out impossible for humans.

An agent-type setup using Claude got 14 out of 25 (or, again, 13/24)

https://github.com/JasonSteving99/agent-of-code/tree/main


I have to wonder why o1 didn't work. That post is unfortunately light on details that seem pretty important.


I was thinking 20/25 is pretty great! At least 5 of the problems were pretty tricky and easy to fail due to small errors.


Several problems this year were dynamic programming already. This year was much easier than 2023, but Day 21 was hard.

