Time for my yearly tradition of making it to day 4 in Haskell and feeling very smart before completely giving up on the first hard problem that needs regular arrays :)
I know exactly what you mean. I'm going to try AoC this year again with Haskell, but this time around I'm just going to cheat and use it as an imperative language.
{-# LANGUAGE BlockArguments #-}
{-# LANGUAGE Strict #-}
import Control.Monad (foldM)
import Control.Monad.State (runState)

-- run a State action from an explicit starting state
stateful s' f = runState f s'
-- imperative-style loop: thread the state through each element
forEach xs state' f = foldM f state' xs

solve input = forEach input initial_state $ \st current_value -> stateful st do
...
I tried that last year, and got worn down by how often I'd Google how to do something and get an answer explaining how recursion works from first principles, as opposed to just pointing me at the standard library function that did what I needed.
I want to try again this year. Do you have any advice for how to get going with OCaml as someone who has programmed long enough to understand how recursion, linked lists, and all that jazz work, and just wants to figure out how to be productive in the language?
Also you can use F# notebooks with .NET Interactive and the Polyglot Notebooks Visual Studio Code extension, which is great for something like Advent of Code.
I'm not an OCaml expert either. I've used Haskell professionally a little bit and also know OCaml. I find OCaml more practical because you don't have to fight with "pure" functional programming when you really want an imperative algorithm. For example, OCaml provides mutable arrays even though it encourages functional style whenever possible.
One downside of OCaml is that its standard library is not very powerful, and there are multiple popular third-party replacements: Jane Street's Base/Core, Containers, and Batteries Included, to name a few. So you have to make a choice.
Having said that, I think you can live with the built-in standard library to solve most of the AoC problems.
I did OCaml the past two years. General-purpose productivity in OCaml is easily accessible, but you need opam packages. It's not a deeply batteries-included language at all. With the exception of eio, my goto set of base packages are: https://github.com/cdaringe/protohacks/blob/main/dune-projec...
containers is my preferred “add batteries” stdlib extension. Also, the OCaml Discord chat and Discuss forums are very active and welcoming to questions.
AoC is one of my favorite events of the year! I find the puzzles generally approachable, but interesting enough to spend time on. I also like that there's a definitively right answer, which motivates me in an interesting way. I've developed a base class over the years that handles input parsing, so I can focus more on the solutions themselves.
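For the curious, the skeleton is roughly this kind of thing (a simplified sketch; the class and file layout here are just illustrative):

from pathlib import Path

class Puzzle:
    """Base class handling the boring parts: reading and splitting the input."""

    def __init__(self, day: int, input_dir: str = "inputs"):
        self.raw = Path(input_dir, f"{day:02}.txt").read_text()
        self.lines = self.raw.splitlines()

    def parse(self):
        """Override per day; defaults to the raw lines."""
        return self.lines

    def part1(self, data):
        raise NotImplementedError

    def part2(self, data):
        raise NotImplementedError

    def run(self):
        data = self.parse()
        print("part 1:", self.part1(data))
        print("part 2:", self.part2(data))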
Additionally I've been solving for a number of years, but for the past 2 years, I've done a daily explanation of the solution. I use interesting parts of the Python stdlib and walk readers through common algorithms. I've found it _incredibly_ rewarding and plan on doing it again this year.
I just skimmed through your repo, and I loved the way that you used AoC as an opportunity to share many of the batteries that are included in the Python stdlib. In one particular case, I wanted to also share some more fun corners of the collections module / common algorithms and sent you a pull request.
Every time I do the AoC puzzles I wish for two things:
1) The site should ask for the language used to solve the problem and then use this as semi-scientific data for comparing "time-to-solution" for various languages. The sheer volume of data and the relatively high difficulty of cheating on novel problems would make this data set interesting and maybe even useful.
2) I wish all the puzzles had a "hardcore" mode where a naive approach would take a million years of compute, or exabytes of memory. Have multi-gigabyte inputs[1] such that the time-to-solution isn't just how fast you can bang out some simple parsers and loops, but the runtime performance would materially affect your time score.
[1] A method for this would be to use an input-file-generator that is small to download but can generate huge puzzle inputs.
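For illustration, a minimal sketch of what such a generator could look like (the format and the numbers here are made up):

import random

def generate_input(seed: int, groups: int = 10_000_000, path: str = "input.txt") -> None:
    """Deterministically expand a small seed into a huge puzzle input.

    Writes blank-line-separated groups of integers, so everyone with the
    same seed gets the same multi-gigabyte file without downloading it."""
    rng = random.Random(seed)
    with open(path, "w") as f:
        for _ in range(groups):
            for _ in range(rng.randint(1, 20)):
                f.write(f"{rng.randint(1, 100_000)}\n")
            f.write("\n")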
Part 2 is quite often what you wish for in 2), isn't it? At least for the later puzzles, part 1 is often something that can be solved naively, while part 2 needs some cleverness to not blow up.
Yeah, usually part 2 is. Bear in mind that tonight's puzzle is a warmup, it's day 1 after all!
I do find there's usually one or two spots where you can eke by with threads and throwing compute power at the problem. But those are rare; usually you have to actually solve it.
> site should ask for the language used to solve the problem and then use this as semi-scientific data for comparing "time-to-solution" for various languages.
I think this would just show you the average time zone of the language's users.
Today 4 minutes gets you top 1000. There’s more than enough users starting when the problem gets released that an analysis could be done by checking for solutions submitted within the first hour.
I generally find time-to-solution uninteresting; how closely the solution models/mimics the problem statement is the far more interesting thing.
I imagine that there's a wide breadth of languages showing up in the top results, but I would guess it skews towards imperative languages. Yet the functional-language solutions will almost read like the problem statement, which is what I find wonderful.
> The site should ask for the language used to solve the problem and then use this as semi-scientific data for comparing "time-to-solution" for various languages.
I can't believe any valid causal inference about languages could be drawn from this, because the skill of the individual programmer at solving these types of puzzles (recognizing the patterns, knowing the number theory, etc.) would surely dominate any impact of the language itself.
I suppose you could conclude something along the lines of "newer programmers use Python more often than Haskell", but I doubt there would be many surprises there.
It would work if language to programmer mapping was largely random with respect to skill level, or randomly assigned languages to use rather than giving the programmers a choice, but then you'd also need to give six months notice or something to learn the assigned language if they didn't already know it.
I did several of these last year and found them deeply unpleasant. I like puzzles but some puzzles are just...bad. Like the ones that require you to just brute force a solution, or write lots of detailed code to capture the requirements...that shouldn't be what this is. Just my opinion, of course, but like I said I found last year an unpleasant chore that felt more like work. Hope it's not like that this year.
Each year does vary slightly in theme and focus, so it's possible you'll have more fun this time if you give it another chance. Or not. But remember that it's never too late to give up. :)
My favorite so far is 2019, where several of the problems had you write and extend an emulator for the imaginary (and wacky) "IntCode" computer.
2019 was excellent. Each day, the puzzle would have you adding new features (new instructions, addressing modes, input/output handling) to the emulator. The early days were fairly simple, but things got progressively more complicated. On one day the puzzle input was code that played a game of Breakout. Your task was to implement an emulated joystick that would play perfectly. The solution was the final score. On another day the puzzle required several virtual machines running in parallel feeding the output of one into the input of the next to get the final answer.
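For anyone curious, here is a rough sketch (in Python) of the day-2 subset of that machine, just add, multiply, and halt, before the later days layer on addressing modes and I/O:

def run_intcode(program: list[int]) -> list[int]:
    """Minimal IntCode interpreter for the day-2 subset:
    opcode 1 = add, 2 = multiply, 99 = halt; operands are positions."""
    mem = program[:]          # work on a copy so the caller keeps the original
    ip = 0                    # instruction pointer
    while mem[ip] != 99:
        op, a, b, dst = mem[ip:ip + 4]
        if op == 1:
            mem[dst] = mem[a] + mem[b]
        elif op == 2:
            mem[dst] = mem[a] * mem[b]
        else:
            raise ValueError(f"unknown opcode {op} at position {ip}")
        ip += 4
    return mem

# e.g. run_intcode([1, 0, 0, 0, 99]) -> [2, 0, 0, 0, 99]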
Reddit r/adventofcode is also fun. People post not only solutions in various languages, but visualizations of solutions (e.g., an animation of that game of Breakout). Sometimes people post fun things, like a solution to a puzzle in Apple II BASIC and a video of it running on an actual Apple II.
One of the things I really loved about the IntCode problems was best exhibited by that Breakout game. One way of solving the problem was to create a joystick, as you said, but if you had built an IntCode inspector/debugger, you could also dig into the game code itself (your puzzle input) and find out where the blocks were in memory, and how much each one was worth, and just sum it up directly.
It reminded me of Wastl's other big puzzle collection (the Synacor Challenge), which also had a big emphasis on implementing a VM to host further puzzles on top of.
I felt similar last year. I was trying to do them in Zig and Rust, and it felt like most of the challenge was just writing custom data parsers for poorly formatted data which felt too much like my day job.
I think it helps to do it in a language you're not familiar with, or with some artificial limitation on how to solve it (no third party libraries for instance).
It takes more time, but felt less like a grind for me.
Last year I used Python without any imports and, whenever I felt like it, tried to golf it as much as I could. It turned out easier than I expected; even without any libraries, Python is pretty powerful for this kind of task: https://gitlab.com/dos1/AoC21
There were only like two or three days that felt frustrating, but mostly because of the problem being poorly specified.
I'm using a language with easier data parsing this year and so far I'm having a much better time (I'm also more time poor and these exercises fit in pretty nicely to a bit of after dinner free time).
Fwiw there are a few recurring formats, and the old hands have a bunch of helpers for those and other things. Sometimes there's something completely bespoke, but the most common are space-separated lines of things, usually ints, commonly fixed-size.
There was one about piecing together data from multiple underwater sensors regarding the locations of mines or something, with the data being rotated in various orientations (in right-angle increments), so you had to correlate the overlapping data items. I think it was day 19. I wrote quite a lot of code to brute force it.
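(For context, the crux of that rotation handling is enumerating the 24 axis-aligned orientations; here is a minimal sketch of one way to do it, not the code from that solution:)

from itertools import permutations, product

def rotations_24():
    """Yield the 24 proper (determinant +1) axis-aligned rotation matrices,
    each as a 3x3 tuple of rows, by filtering all signed axis permutations."""
    for perm in permutations(range(3)):
        for signs in product((1, -1), repeat=3):
            rows = []
            for axis, sign in zip(perm, signs):
                row = [0, 0, 0]
                row[axis] = sign
                rows.append(tuple(row))
            # keep only proper rotations (determinant +1), which leaves 24 of the 48
            det = sum(
                s * rows[0][p[0]] * rows[1][p[1]] * rows[2][p[2]]
                for p, s in [((0, 1, 2), 1), ((1, 2, 0), 1), ((2, 0, 1), 1),
                             ((0, 2, 1), -1), ((1, 0, 2), -1), ((2, 1, 0), -1)]
            )
            if det == 1:
                yield tuple(rows)

def apply(rot, point):
    """Rotate a 3D point by one of the matrices above."""
    return tuple(sum(r[i] * point[i] for i in range(3)) for r in rot)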
I recently went through the 2021 puzzles and I don't recall having this experience.
I also think the choice of language matters a lot. I did it in prolog a few years ago and it was super concise because you just dealt with pure logic at that point.
I can see it being more verbose in Java :) But seriously, I find these puzzles hard, but not in a chore kind of way.
The idea is that you read and understand a small piece of code (full of useful techniques) and make a small change to demonstrate understanding.
Games that require you to write the code are limited to rehashing the same old tired algorithms... reverse a string and other sequence techniques, edit distance and dp variants, optimization by binary search and evaluation, etc., the standard leetcode stuff. Basically, useless wankery you will never use. The competitive programming standards.
If you don't have to write it, just understand it, the game can cover some very interesting new algorithmic terrain. It becomes part book, part game. Like Hacker's Delight: The Game.
I like this idea! Sounds more relaxing like I could passively work on it while on the train, and also see code examples that I wouldn’t normally be exposed to in my day job.
Seems like it might help me get better at code reviews too. Granted, I don’t like doing code reviews (mostly cus it’s usually pretty uninteresting code). But I do like puzzles! I’ll check it out :)
I wonder how the people on the leaderboard manage it? The first person today solved part 1 after 39 seconds, and was done with part 2 after 53 seconds!
I felt I was quite fast, being done with part 1 after 3 minutes and part 2 after 4 minutes (rank ~1000). I even tried to skim-read the explanation to be fast, had a script to download the input and run it, etc., but still no match!
> 1. Have everything in place at the start (a script to grab the inputs helps).
Definitely. A lot of it is always the same (opening files, iterating over input, operating on arrays/lists, etc.), something like Peter Norvig has in Python.
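Roughly in that spirit, a sketch of the kind of helpers meant here (not Norvig's actual code; the inputs/ file layout is just an example):

import re
from pathlib import Path

def day_input(day: int, parser=str.strip, sep="\n"):
    """Read inputs/{day}.txt, split it on sep, and apply parser to each chunk."""
    text = Path(f"inputs/{day}.txt").read_text()
    return [parser(chunk) for chunk in text.rstrip("\n").split(sep)]

def ints(text: str) -> list[int]:
    """Pull every (possibly negative) integer out of a piece of text."""
    return [int(x) for x in re.findall(r"-?\d+", text)]

# e.g. rows = day_input(1, parser=ints)   # one list of ints per line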
“Have solved many similar problems in the past so you can parse these problem statements quickly”
This sounds like an advent of code specific thing. The problem itself is trivial, no need to have solved anything similar in the past, but the description is so verbose (because it’s a cute Santa-related story) that it becomes hard to parse the problem we’re trying to solve, quickly.
I don’t think this is common in any other competitive programming environment.
Feel I have to disagree on this one. Firstly, most programming competitions I've been to have this kind of problem description as well, just check Kattis problems for instance.
But secondly, while the problems in the beginning are trivial, you still have a huge advantage if you can immediately start programming without having to think, or having to look up a function in the stdlib. (Spoilers:) I probably wasted a minute today thinking about building a list of elves and creating a new element whenever I saw an empty line, while a pro probably immediately did a split on "\n\n", summed, and was done.
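(Something along these lines, assuming the puzzle input is already saved as input.txt:)

# sum each blank-line-separated group of numbers, then take the largest
text = open("input.txt").read()
groups = [sum(map(int, block.split())) for block in text.split("\n\n")]
print(max(groups))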
I have most of it in place, but I guess it's 4&5 I need to improve. I feel like my problem, when comparing to the video posted of today's winner, is that I'm "too safe". I should gamble a bit more, read only parts of it at the chance of misunderstanding, and go with my first thought of implementation even though it may not work. Because the moment I start to consider things the others are already done, heh.
And I guess I could do better on 2, but on this day I only needed 1 run. I use Kotlin, and guess I lose a second or two on it compiling before running. I see today's winner just did everything in browser console as a repl. So that's probably a good way to minimize the loop. Pretty annoying for the later problems, though.
There's a session token in your browser cookie for the site, you have to pass that to the server. Honestly, though, I'm not going to be in the top 100 so I don't bother anymore. I open the input and hit Cmd-s and save it to my inputs directory. I've made it into the top 100 precisely one time in the 4 years (this is my 5th) I've participated, and I honestly don't know why I was in the top that time. It wasn't a terribly hard problem (for the folks that were reliably making it in the top 100) if you'd done the earlier parts since it was a simple search (increment register 0 value, check that it's not in a cycle, and count the number of executed instruction if it halts). You did need to complete day 16 first though, and it was closer to Christmas itself so maybe that was why.
If you do use a download script make sure you check whether or not you've downloaded it before. He's also requested that you put in a user agent that provides some contact info (rather than being generic). The server has been getting slammed by people hitting it too often and he's considering banning some, and wants a way to reach out to people who may have a misbehaving script.
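For illustration, a sketch of a fetcher that caches and identifies itself; the cookie value and contact string are obviously yours to fill in:

from pathlib import Path
import urllib.request

SESSION = "paste-your-session-cookie-here"
CONTACT = "github.com/yourname/aoc by you@example.com"  # so the admin can reach you

def fetch_input(year: int, day: int) -> str:
    """Download a day's input once and serve it from a local cache afterwards."""
    cached = Path(f"inputs/{year}/{day}.txt")
    if cached.exists():                      # never re-hit the server for the same input
        return cached.read_text()
    req = urllib.request.Request(
        f"https://adventofcode.com/{year}/day/{day}/input",
        headers={"Cookie": f"session={SESSION}", "User-Agent": CONTACT},
    )
    with urllib.request.urlopen(req) as resp:
        text = resp.read().decode()
    cached.parent.mkdir(parents=True, exist_ok=True)
    cached.write_text(text)
    return text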
For the easier tasks, my "fun" is doing it as quickly as possible first ;) Then I rewrite it with a goal in mind, for instance entirely functional with no mutations, as a single expression or whatever.
First day was a pretty fun start. Just heard about this for the first time this year and going to give it a go.
Tangential question - why does the HN crowd hate these types of questions in interviews but apparently likes to do them on their own time? Different folks answering to different posts? Honestly curious.
- You can use whatever language or paradigm you like.
- You don't have to explain anything to anyone.
- You are not ashamed of using hacks or shortcuts.
- It's a learning exercise if you are using a new language.
- You can google for things like "I know this problem needs A* but I forgot how to implement it", which shows you know the important part, how to solve the problem, but forgot the useless part, the algorithm itself, which you can easily google (a from-memory A* sketch follows at the end of this comment).
- There's a single, objective answer to the problem and it gives you hints about it (too low, too high).
If interviews were like this, where I could do problems at my own pace at home and then discuss my solution in the next interview, they would be way less stressful and more interesting. The interviewer would even know how long I took to solve the problem.
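(Since A* came up above: a from-memory sketch of it on a 2D grid, purely illustrative; the grid format and names are my own:)

import heapq

def a_star(grid, start, goal):
    """A* over a grid of step costs; returns the cheapest cost from start to goal.

    grid is a list of lists of non-negative ints, start/goal are (row, col).
    Uses Manhattan distance as the admissible heuristic."""
    def h(pos):
        return abs(pos[0] - goal[0]) + abs(pos[1] - goal[1])

    rows, cols = len(grid), len(grid[0])
    best = {start: 0}
    frontier = [(h(start), 0, start)]        # (estimated total, cost so far, node)
    while frontier:
        _, cost, (r, c) = heapq.heappop(frontier)
        if (r, c) == goal:
            return cost
        if cost > best.get((r, c), float("inf")):
            continue                          # stale queue entry
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols:
                new_cost = cost + grid[nr][nc]
                if new_cost < best.get((nr, nc), float("inf")):
                    best[(nr, nc)] = new_cost
                    heapq.heappush(frontier, (new_cost + h((nr, nc)), new_cost, (nr, nc)))
    return None  # goal unreachable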
Why would it be optional in an interview? You want a job, I want to know that you've seen software development before.
(Now for context, the interview questions I typically would recommend are an order of magnitude easier than AoC's typical bar, in the language of the candidate's choice, ideally on a real computer and not a whiteboard. Tonight's question though (a warmup) wouldn't be a bad one. It's straightforward, no trickery, nothing clever, and directly applicable to real-world coding. And it would screen out >50% of the candidates I'd give it to.)
Personally, I just think it's not a good way to assess. Such challenges are mostly about coming up with "creative" solutions. I think that pressure can be good for creativity, but it can also be its death.
However, motivation is important for creativity. Competing online with other like-minded people is good motivation, whereas a job is something you literally need in order to survive. It's not really comparable imo.
> Such challenges are mostly about coming up with "creative" solutions.
No, or you're strawmanning the type of interview that is being suggested here. No, the challenge is about solving the problem; it doesn't take "creativity" to find the maximum in a list of integers, and the problems a SWE is going to encounter in the real world far dwarf such a question in terms of difficulty or "creativity" required to solve.
Some of the rougher AoC problems are harder, yes, a bit more so. But a.) I qualified that that's not in context here in the original comment, and b.) even there, it often isn't creativity that's required, it is knowing your basic principles, e.g., what data structures exist and when to apply them. But this sort of stuff is anathema to HN; heaven forbid that SWE requires skills.
> I think that pressure can be good for creativity but it can also be its death. However, motivation is important for creativity. Competing online with other like-minded people is a good motivation whereas a job is something you need literally in order to survive. It's not really comparable imo.
This is mostly a non sequitur. There's no eliminating stress completely from an interview, but depending on what tech I have handy, allowing the candidate a full REPL, access to reference material, and coding in an IDE and language of their choice are all there to reduce stress and let the interview focus on what the candidate is hopefully good at.
I'm not attempting to compare competing online with an interview: you're not competing online in one of my interviews. The sort of problems, however, as an example of what might be asked in a coding question is the scope of what is being compared here. The point is: a. they're not the infamous sewer lid question b. they don't require "one clever insight" to solve and c. they in many ways mirror real-world issues. (E.g., parsing an input, taking an english description and turning it into relatively simple code.)
I disagree with you. It does take creativity and actually it is the same in math. Many proofs have a little "trick" that required in my opinion creative or out of the box thinking to discover. That is the same for many leet code problems, and I'm already accounting for knowing data structures and algorithms, most people learn those at university.
I just want to mention though, I am not trying to criticize "your interviews". It seems like you put quite a bit of thought into it which i think is great and every interviewer should.
It's like a game of HORSE for basketball players or a home run derby for baseball players. A game that's related to your profession and might be fun, but a terrible way to measure ability in the actual job.
Because they are fun! But in an interview environment it can feel like being tested on whether you are cool under pressure and less like you are being assessed for merit.
Parsing a list of lists of integers and finding the maximum, in an hour, in the coding environment of your choice: this shouldn't be "under pressure".
(We do not ask in interviews questions of the same nature as the last third of AoC. If anything, most of our interview questions are below AoC day 1 in difficulty. And yet they screen absurd numbers of candidates.)
That is more time and an easier problem than I’ve seen, but that’s fine. The point is if you asked some one to tie their shoes while scrutinized they’d probably make a mistake, and the only solution is practice, but that practice is principally not for the purpose of improving your skill but for reducing the impact of being acutely observed.
I don’t think this is a problem per se, I think it’s very egalitarian, but it’s also not difficult to see why people might be uncomfortable or complain.
Then people should give examples of the questions they're actually encountering in interviews when they want to discuss why; the industry norm is not rougher than AoC.
While I've definitely encountered tougher-than-this-AoC questions in interviews, I find I have a hard time maintaining that bar: I have to push to maintain even the bar of this day's AoC question (e.g., "write min()"), as it turns away a sufficient number of candidates that hiring managers start looking to lower the bar, which is absurd.
(But that said, when I encounter tougher problems, they're not tougher than about mid-AoC, which is still well within reason, to me. They're still not the infamous sewer lid. Nobody is asking the sewer lid question anymore, and collectively we need to stop propagating that myth. It was banned at Google well over a decade ago…)
(Ultimately, it seems to me to be an economics problem: we're not attracting good devs, which means we're not offering good devs something enticing enough; we're left with the subset for which our offer is enticing, but that isn't what we're looking for. But I can't solve that problem.)
How much external help do you guys seek out for these puzzles?
Last year I solved two puzzles by looking up the answer on reddit, and other than that I tried to solve them as is. But I feel like solving a puzzle is sometimes a bit like reinventing parts of math and computer science if you didn't happen to learn those from school or a book.
Elixir ended up being a great choice last year. I contemplated using ocaml, sbcl, or guile this go round but ultimately decided to see how far I can get with awk. I'm feeling confident I can get by with awk using getline as needed, we'll see how long I endure...
Yes! I did two years in awk (https://github.com/tomberek/advent) and loved the ability to dig in and truly grok the language. The performance wasn’t bad, but the multidimensional array support and ergonomics were…meh. I’m contemplating Elixir.
You reminded me of Pascal. I used it at school in 1994 (on an IBM RS/6000 I think). I was planning to do the AoC in Shen (I already know Scheme) but you make me want to do it also in Pascal. Thanks for sharing your solutions.
The presence of files with the "lpi" and "lps" extensions means you are using an IDE? Moreover, I thought the extension for Pascal source code was "pas", but I see "lpr". Why?
I really like how the Advent of Code is structured. Not only does it make you think about how to solve the problem, it pushes you to architect your code defensively so that when you get the new requirements in part 2, you have the least amount of code to change. I remember being bitten in a problem involving a keypad: I didn't think the keypad shape could change!
I personally hate it. I wish they could just give a clear specification of what they want instead of putting it in the format of a story where some things are vague.
To me, often the "dumb" solution is the modular one, because I can't design the entire solution at once, but I can parse this string into this format, and then I can find out whether this sequence is contained in the other, and then I can map some chars to a number, and then ah..! the solution is just calling these 3 functions in the right order. Part 2 is just swapping one of the functions out with the other, great.
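(A toy sketch of what I mean; the function names and the problem itself are made up for illustration:)

def parse(raw: str) -> list[list[int]]:
    """Turn the raw text into blank-line-separated groups of ints."""
    return [[int(n) for n in block.split()] for block in raw.split("\n\n")]

def score_part1(groups: list[list[int]]) -> int:
    return max(sum(g) for g in groups)

def score_part2(groups: list[list[int]]) -> int:
    return sum(sorted(sum(g) for g in groups)[-3:])

def solve(raw: str, score) -> int:
    # part 2 is just swapping the scoring function out:
    # solve(text, score_part1) vs solve(text, score_part2)
    return score(parse(raw))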
Now this is simplified, I know, but I'm not convinced by your proposed duality.
I wonder how helpful GPT-3 and Copilot will be for this :) Not necessarily to solve the whole puzzle but rather to give hints when I'm stuck with a particular task.
Last years I started around the 14th and Copilot auto completed the solutions for all the old problems from basically just the file name. I created a file called 'advent_of_code/1.py' typed 'def main()' and Copilot did everything else.
Last year when I was doing AoC in Go, I was pleasantly surprised how good Copilot is and it felt like magic. Given most of the time I use Terraform, YAML, JSON, jq, and Bash these days, I can't really appreciate Copilot, but last year I did!
I ended up turning off Copilot last year since it was literally giving me complete correct solutions with no effort on my part. Kind of took the fun out of it.
I can't say it gave me complete solutions, but it really guessed my intent most of the time and saved me typing time so that I could come close to the winners who predominantly use Python.
I'm using Elixir and Livebook this year to go through it. Using Livebook is so nice, as you can even add and run `ExUnit` tests within one. So, it's just a single document in which to work, with syntax highlighting, formatting, modules, tests, etc.
I may also do it in F# and .NET Interactive notebooks using the Polyglot Notebooks Visual Studio Code extension, but I suspect the solutions will look very similar.
Last year I completed it in Python, but I'm pretty familiar with the language. This year I want to try in a new language - mostly either zig or rust. Eager to see how far I can get!
Last year I went through 8 of them. I wonder what this year's theme is. Hopefully it's somewhat closer to CPU design (because I'm reading The Soul of a New Machine).
Not CPU design, but does involve a simple VM, have you checked out 2019? About half (1/3rd?) of the challenges involve a VM for "intcode". First few challenges get it up and running, then concurrency is added, and then challenges are presented inside it.
One of my dreams is to consume small pieces of challenges on some topic, each slightly more involved than the previous, and then voilà! At the end I get to learn, say, 50% of a CS course.
Books are good but they don't force me to write code. I'm a bad learner :D
No, I'm not doing it. At least, I'm not doing it in real time! What a time waster - my family hated me! Sorry, offline holidays and online programming contest don't mix well! Enjoy your life, solve the problems after the holidays!
You have to have a login (Google, Github, Reddit, Twitter) to get inputs, and part 2 is only unlocked after completing (correctly) part 1 for any given day. You don't have to share your identity with anyone but the host of the server.