The Hardest Logic Puzzles (conceptispuzzles.com)
105 points by ColinWright on July 27, 2013 | 56 comments



What about making Conway's Game of Life in Conway's Game of Life?

http://www.youtube.com/watch?v=xP5-iIeKXE8


The logic is mind-blowing. That's similar to what I imagine programming nanodevices will be like in the future.


This is brilliant. It's not even a "logic puzzle" as much as it is an engineering feat.

And it's not like software engineering, where you're defining the nature of the system as a whole, and then instantiating it, but more like a physical engineering discipline, in which you're juxtaposing already-existing materials against each other such that their pre-existing properties interact with each other to produce the behavior you want.


Conway's Game of Life is not a strategy game. I built one using JavaScript:

http://jules.boussekeyt.org/game-of-life/

(instructions: click on some squares and press "Start")
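
For anyone curious about the mechanics, the whole game is a single update rule. Here's a minimal sketch of one generation step in Python (just a rough sketch of the rule, not how the page linked above does it):

    from collections import Counter

    def step(live):
        """Advance one Game of Life generation.

        `live` is a set of (x, y) coordinates of live cells; the return value
        is the set of live cells in the next generation.
        """
        # Count how many live neighbours every relevant cell has.
        neighbour_counts = Counter(
            (x + dx, y + dy)
            for x, y in live
            for dx in (-1, 0, 1)
            for dy in (-1, 0, 1)
            if (dx, dy) != (0, 0)
        )
        return {
            cell
            for cell, n in neighbour_counts.items()
            # A live cell survives with 2 or 3 neighbours;
            # a dead cell is born with exactly 3.
            if n == 3 or (n == 2 and cell in live)
        }

    # A "blinker" oscillates with period 2.
    blinker = {(0, 1), (1, 1), (2, 1)}
    assert step(step(blinker)) == blinker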


It certainly has a strategy element, seeing as the whole game is built upon the premise that particles act according to a certain strategy.


So I just got sucked into that Killer Sudoku puzzle - and ugh, now I have a burning desire to print it out on a big A2 spread to work on it.

After the first couple of "gimme" squares, the interface is completely worthless for tracking the information you actually need to remember. There's no good way to express "This square is three more than that square, and also can only be one of these possibilities", which makes it a pain to correlate that with similar information about other squares, but at the same time (at least at this, admittedly early stage) it still feels like you can always make another logical step to another piece of information.

I don't even want to look at the others until I've either finished or given up on this one.


#2 is easy, by using double negatives and asking the same question to each god (asking different questions does you no good):

"Is the other non-random god capable of lying?"

The truth-telling god will always answer: "yes" (da || ja)

The lying god will always answer: "yes" (da || ja) [the truthful answer is 'no', but this god tells only lies, therefore the answer is 'yes']

The random god will answer: "yes || no" ((da || ja) || (ja || da))

This means you will only ever get the following combinations of answers, no matter what A, B, C order you ask in:

da, da, da; ja, ja, ja;

da, da, ja; ja, ja, da;

ja, ja, da; da, da, ja;

da, ja, da; ja, da, ja;

ja, da, ja; da, ja, da;

'Yes' will always appear 2 or 3 times; 'no' can appear only once or not at all. Now that you have the cipher, you can decode everything.
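
Here's a quick enumeration of those triples in Python (a small sketch, assuming Random simply utters either word at random):

    from itertools import permutations, product

    def answer(god, da_means_yes, random_word):
        """Word uttered in reply to: 'Is the other non-random god capable of lying?'"""
        if god == "Random":
            return random_word                 # anything goes
        # For True, the other non-random god is False, a liar: truthful answer "yes".
        # For False, the other non-random god is True: truthful answer "no", so he lies: "yes".
        return "da" if da_means_yes else "ja"

    for gods in permutations(["True", "False", "Random"]):
        for da_means_yes, random_word in product([True, False], ["da", "ja"]):
            triple = [answer(g, da_means_yes, random_word) for g in gods]
            # Whenever one word appears exactly once, its speaker is Random
            # and that lone word is the word for "no".
            print(gods, "da means yes" if da_means_yes else "ja means yes", triple)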

(edit: formatting)


That's an interesting insight, and it may be useful as part of the solution, but it doesn't solve the puzzle. The question is how to determine the identity of the three gods with three questions.

Personally, I think it's interesting that they are gods. Does this mean that they can answer questions about what WILL happen? E.g. you could ask a god what the next god will answer. If the answer is correct, that's the true or random god. If the answer is wrong, then it's the false or random god. But then we get recursive rather quickly if we ask the question twice in a row. :)


I suspect they are mentioned to be gods to suggest that they may be asked extremely complicated questions, which they will interpret from a purely logical standpoint - i.e. not human.


Starting from this premise, imagine this sequence of questions (it's not the answer, btw):

1. What is the answer to the next question? A: 1, 0

2. What was the answer to the previous question? B: 1, 0

3. What was the answer to the previous question? C: 1, 0

000-impossible: no false god

001-C is the false god, but can't distinguish the other two

010-impossible: no true god.

011-C is the true god, but you can't distinguish between the false and random god

100-A is the false god, but can't distinguish the other two

101-impossible: no true god.

110-A is the true god, but you can't distinguish between the false and random god

111-impossible: no false god

This is interesting: we have eliminated 4 possibilities, 000, 111, 101 and 010. Not only that, now we have a whole new universe to converse about - we can ask the gods about these patterns!

The remaining possibilities show some symmetry: 001 and 100, 011 and 110.

So, thinking aloud, start with these questions:

1. Will the result of my questioning be either 001 or 100?

2. Will the result of my questioning be either 011 or 110?

The answers won't give us much information because they can be anything. But we can "break symmetry" and ask them like this instead:

1. Will the result of my questioning be either 001 or 011?

2. Will the result of my questioning be either 100 or 110?

...and I feel like I'm very close but I'm missing something.


How can you then distinguish between the two non-randomly-answering gods? They will answer the same, and the answer from the randomly-answering god appears to contain no useful information.


So I started to work it out, only to discover that the link about the problem in the article gives it away - counterfactuals can be used, so I might as well read that instead of anything I come up with. And thus I just wasted a bunch of time trying to figure out which specific one worked (now erased) ... but my hunch was correct! (Although hunches are hardly proofs, tsk tsk)

Also, I want to thank you again for helping me out with awk/sed/grep/command line stuff, from a year ago!


What's a counterfactual? Can you give a simple explanation of how that solves the problem?


A counterfactual is a statement or question of the form "If X ... would Y?". Y would happen if X were true. Or: would Y be true, given that X were true? [0]

In the case of this puzzle, a counterfactual can be used to embed a question X within your question Y. By doing so (and exploiting the phrasing of the problem), you force the gods to answer in a manner that reveals their identity as True, False, or Random.

Explaining the actual process through which the counterfactual is used to solve the problem is a bit tedious but the Wikipedia article on the puzzle has a pretty straightforward explanation of the reasoning[1].

[0] http://en.wikipedia.org/wiki/Counterfactual_conditional

[1] http://en.wikipedia.org/wiki/The_Hardest_Logic_Puzzle_Ever


You're welcome!


Oh, I must not have read the instructions thoroughly enough - I thought the solution was looking for an answer, not for the identities. My bad.

I'm guessing the same principle should apply though, I'll comment an update if I don't fall asleep from having been up all night. Thanks for pointing that out, though.


I haven't looked at the answers yet, but here's what I came up with in pseudocode:

    A: Would one of the other Gods, who is not Random, say "da" \
        if I asked him if God C was Random?
    (DA) {
        :God A or B is Random.

        C: Does "da" mean "yes"?
        (DA) {
            :God C is True.
        } (JA) {
            :God C is False.
        }
        
        C: Would you say "da" if asked whether God A is Random?
        (DA) {
            :God A is Random
            :God B is !God C.
        } (JA) {
            :God B is Random.
            :God A is !God C.
        }
    } (JA) {

        :God A or C is Random.
        B: Does "da" mean "yes"?
        (DA) {
            :God B is True.
        } (JA) {
            :God B is False.
        }
        
        B: Would you say "da" if asked whether God A is Random?
        (DA) {
            :God A is Random
            :God C is !God B.
        } (JA) {
            :God C is Random.
            :God A is !God B.
        }
    }
I found it helpful to build the logic assuming they would answer "yes" and "no" in English. Once that was done it was fairly simple to modify it to handle the da/ja.
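
The da/ja step leans on the usual embedded-question trick. Here's a small brute-force check (my own sketch, not part of the puzzle statement) that a non-Random god asked "Would you say 'da' if I asked you Q?" utters "da" exactly when Q is true, regardless of whether he lies or of what "da" means:

    from itertools import product

    def word_for(tells_truth, da_means_yes, statement_is_true):
        """The word a non-Random god utters when asked whether a statement is true."""
        says_yes = statement_is_true if tells_truth else not statement_is_true
        return "da" if says_yes == da_means_yes else "ja"

    def nested(tells_truth, da_means_yes, q):
        """Reply to: 'Would you say "da" if I asked you whether Q?'"""
        would_say_da = word_for(tells_truth, da_means_yes, q) == "da"
        return word_for(tells_truth, da_means_yes, would_say_da)

    for tells_truth, da_means_yes, q in product([True, False], repeat=3):
        # The uttered word is "da" exactly when Q is true.
        assert (nested(tells_truth, da_means_yes, q) == "da") == q
    print("embedded-question trick holds in all 8 cases")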


vg vf cbffvoyr gb qrgrezvar gur vqragvgvrf bs gur guerr tbqf jvgubhg xabjvat juvpu jnl ebhaq wn naq qn ner. (r.t. guvax nobhg ubj gb qb guvf jvgu whfg gur Gehr naq Snyfr tbqf, vtabevat Enaqbz)

gur dhrfgvbaf lbh ner nfxvat va trareny fubhyq punatr qrcraqvat ba gur bofreirq qn|wn nafjref gb gur cerivbhf dhrfgvbaf. n fgengrtl pna or n gerr bs dhrfgvbaf, abg n frdhrapr svkrq va nqinapr.

gur uneq ovg vf vfbyngvat gur enaqbz tbq sebz gur bgure gjb tbqf. abgr gung gur enaqbz tbq qbrf abg enaqbzyl rzvg lrf be ab (rapbqrq nf wn<->qn), vg enaqbzyl nafjref gur tvira dhrfgvba rvgure gehgushyyl be snyfryl. fb lbh pna senzr n fhvgnoyl pbagbegrq dhrfgvba gung sbeprf gur fnzr erfcbafr sebz gur enaqbz tbq va obgu gehgul-zbqr naq snyfrl-zbqr, gung qvfgvathvfurf vg sebz gur aba-enaqbz tbqf...


Apropos of nothing in particular ...

Not bothering to use ROT13, I just dropped that into my generic substitution cipher decoder and it spat out the answer almost immediately - quite pleased with that.


Interesting. You cycle through substitutions and match against a dictionary to detect a hit?


Nope, I use the shotgun stochastic hill-climbing algorithm I wrote for something else, with the ballistic option disabled.

In short:

    While True:
        Generate a random key
        "Decode"
        Score
        Start timer
        While timer not expired:

            Perturb the key
            Score
            If better:
                Keep the new key
                Reset timer
        Print decrypt
In this case the first output was completely readable.
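
Fleshed out in Python, that skeleton looks roughly like this (a sketch of the idea only; the helper names and the abstract `score` function are illustrative, not the actual tool):

    import random
    import string
    import time

    ALPHABET = string.ascii_lowercase

    def decode(ciphertext, key):
        """Apply a substitution key (a permutation of the alphabet) to lowercase ciphertext."""
        return ciphertext.translate(str.maketrans(ALPHABET, key))

    def perturb(key):
        """Swap two letters of the key."""
        i, j = random.sample(range(len(key)), 2)
        k = list(key)
        k[i], k[j] = k[j], k[i]
        return "".join(k)

    def crack(ciphertext, score, patience=2.0):
        """Shotgun hill-climbing: restart from random keys, keep any improvement."""
        while True:                                # in practice, stop after enough restarts
            key = "".join(random.sample(ALPHABET, len(ALPHABET)))
            best_key, best_score = key, score(decode(ciphertext, key))
            deadline = time.time() + patience      # start timer
            while time.time() < deadline:
                candidate = perturb(best_key)
                s = score(decode(ciphertext, candidate))
                if s > best_score:
                    best_key, best_score = candidate, s
                    deadline = time.time() + patience   # reset timer
            print(best_score, decode(ciphertext, best_key))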


I have no idea how you'd make a scoring system that worked well with a hill-climbing algorithm. You could check each word for a match in the dictionary, but with a substitution cipher you could essentially turn one word into any other word of the same length (as long as the pattern of repeated letters matches). Without a way to check for "close" words, you wouldn't be able to climb the hill. Would "pello" score a 0.7 and "hello" a 1? I can't think of a fast way to get that number.

I'd like to try my own hand at this, but the way I'm thinking of making the fitness function doesn't seem like it would be smooth enough.

What did you do?


Take a frequency chart of all trigrams in English, and take the dot product of the frequencies in the decrypt with the frequencies in English.

So, let E(t) be the frequency of occurrence of the trigram t in English, d(t) be how often it occurs in the decrypted text, and compute:

  Score = sum( [ E(t)*d(t) for t in decrypt ] )
If common trigrams turn up frequently in the decrypt, this will be "large". If uncommon trigrams turn up frequently, this will be "small".
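
Concretely, something like this (a rough sketch; `english_freqs` stands for a table you'd build from any large chunk of English text):

    from collections import Counter

    def trigram_freqs(text):
        """Relative frequency of each letter trigram in `text`."""
        letters = "".join(c for c in text.lower() if c.isalpha())
        grams = Counter(letters[i:i + 3] for i in range(len(letters) - 2))
        total = sum(grams.values()) or 1
        return {g: n / total for g, n in grams.items()}

    def score(decrypt, english_freqs):
        """Dot product of the decrypt's trigram frequencies with the English ones.

        Common English trigrams appearing often in the decrypt push this up;
        trigrams English never uses contribute nothing at all.
        """
        return sum(english_freqs.get(g, 0.0) * f
                   for g, f in trigram_freqs(decrypt).items())

    # english_freqs = trigram_freqs(open("big_english_corpus.txt").read())
    # print(score(candidate_plaintext, english_freqs))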


...

that is a really good idea...

Turns out there's a fair amount of data to use as well.

http://www.cse.chalmers.se/edu/year/2010/course/TDA351/ass1/...

I was thinking too rigidly. Thanks!


Norvig has made n-gram data available:

http://norvig.com/ngrams/


Why not use a residual sum of squares here?

https://en.wikipedia.org/wiki/Residual_sum_of_squares


I don't understand your question.

My (admittedly naive) understanding of using RSS is this. I have a model of the data, which in this case will be expected frequencies of my n-grams. Then I compute the actual frequencies of the n-grams, take the difference, square the difference, and add up all the squares. A small number is then indicative of a good fit.

I don't see why this would be better. Not least, this has the problem of all those n-grams that don't turn up in my decrypt. I still have to square their expected frequency and add them into the sum. That contrasts with the method I'm using, where I just take the n-grams that do appear, multiply by the expected frequency, and add that into a running total.

So I guess I don't understand your suggestion.


Please, if you're going to give a solution, rot13 it or link to it or something. Don't blurt it out in the first line of your answer.


#2 is really only difficult because it's easy to misinterpret the rules. Random doesn't randomly answer yes or no, he randomly decides whether to answer truthfully or falsely. So for example if you ask the recursive question "are you answering this current question truthfully", he will answer yes either way. Or rather, his word for yes. Once that rule is clear, the puzzle is pretty straightforward.


I am not sure whether that info has much bearing on the puzzle itself as such. Even in the case when random randomly answers either "yes" or "no", the solution is equally straightforward and simple.


Regarding #2,

I read Boolos' paper and found that the way the solution was explained wasn't quite to my taste (it didn't discuss how one would go about thinking of a solution and proceeding toward it step by step, logically). On top of that, the idea of abstraction (putting the complexity away into a neat, separate function and then forgetting about its internal details) could be applied nicely to the solution in a number of places, and I thought that would make the solution much easier to understand. So I took an approach where I keep reducing the problem to a simpler one by abstracting away some details and complications.

Do have a look at my attempt at explaining the solution, especially the process of arriving at the solution (for "The Hardest Logic Puzzle Ever"): http://blog.sujeet.me/2013/07/solving-the-hardest-logic-puzz...

Feedback's welcome!


The Martin Gardner one doesn't really seem to fit, since it is trivial to solve using brute force. Even if using a computer is considered cheating, there are shortcuts you can use to keep from having to try everything. For example, having a "1" digit gets you nowhere, a "0" kills you, and a "5" plus any even digit also kills you (and the 5 will persist at the end if you don't have an even digit, so it will probably kill you next round).

This led me to wonder, does persistence ever max out? It seems likely to me that it does.


I don't think it will max out. I brute forced it up to 10 million and I got the following numbers: (5, 679), (6, 6788), (7, 68889), (8, 2677889). Sure, the numbers grow exponentially, but there's no reason to suspect that a glass ceiling exists.
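
A brute force along those lines is only a few lines of Python (my sketch of it; it prints a (persistence, smallest number reaching it) pair each time it finds a new record):

    def persistence(n):
        """How many times the digits can be multiplied together before one digit remains."""
        steps = 0
        while n >= 10:
            product = 1
            for d in str(n):
                product *= int(d)
            n = product
            steps += 1
        return steps

    best = 0
    for n in range(10, 10_000_000):
        p = persistence(n)
        if p > best:
            best = p
            print((p, n))   # ends with (5, 679), (6, 6788), (7, 68889), (8, 2677889)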


Interestingly, Wikipedia disagrees.

For a radix of 10, there is thought to be no number with a multiplicative persistence > 11: this is known to be true for numbers up to 10 to the power of 50.

http://en.wikipedia.org/wiki/Persistence_of_a_number

I guess the problem is that when you multiply lots of digits together you become increasingly likely to end up with a 0 digit somewhere.


Thanks for the link. I dug a little more and came across [1], which mentions a contribution by Erdős to the effect that persistence might not be bounded. I guess I might spend some time trying to find a number with persistence 12 :) [1] http://web.archive.org/web/20050214141815/http://www.wschnei...


How about xkcd's Blue Eyes puzzle?

https://xkcd.com/blue_eyes.html


I'm surprised "no one leaves"

The guru really doesn't seem to provide any new information; everyone already knew she could see someone with blue eyes.

Nor does it seem anyone can act on the information, nor does it seem everyone's inaction can be taken as new information.

If the guru said "I see someone with red eyes", then everyone would try to leave (or only the guy with red eyes would try to leave, if the guru wasn't lying). That's the only circumstance I can see where someone could take action.

As much as that page insists that there is no word trickery going on, I'm inclined to think I've misunderstood the situation.


The guru does provide very indirect meta-information. Everyone's inaction cannot be taken as information on the first day, but inaction over time can.

Consider the case with 100 brown and 1 blue person (+ guru). As soon as the guru speaks, the blue person knows their eye colour and leaves.

Consider the case with 100 brown and 2 blue people. After the guru speaks, both of the blue people still don't know their own eye colour - the guru could have been thinking of the other one. So both blue people stay on the island on the first night. The other one staying on the island is information to each of the blue people: because the other blue one didn't leave on the first night, they must also be unsure - and therefore they must have blue eyes themselves. Both leave on the second night.
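
You can even mechanise the induction. A tiny sketch (mine, not from the puzzle page), where leaves_on(k) is the night on which k blue-eyed islanders all leave:

    def leaves_on(k):
        """Night on which k blue-eyed islanders all leave, after the guru speaks.

        Base case: a lone blue-eyed islander sees no blue eyes, so the guru can
        only mean them - they leave the first night.
        Inductive step: each of the k sees k - 1 blue-eyed people and waits for
        night leaves_on(k - 1); when it passes with nobody gone, they know they
        are blue-eyed too and leave the following night.
        """
        if k == 1:
            return 1
        return leaves_on(k - 1) + 1

    assert leaves_on(1) == 1
    assert leaves_on(2) == 2
    assert leaves_on(100) == 100   # the xkcd scenario: all 100 leave on night 100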

See also: http://img.spikedmath.com/comics/445-three-logicians-walk-in...


Exactly this; ProblemFactory has shown how the guru provides directly actionable information in the simplest case of the problem (one blue-eyed islander) and has also highlighted the inductive step that leads to a proof for any number of blue-eyed islanders.

Reducing to the simplest scenario and building up (as ProblemFactory has done) makes the result much more intuitive -- as it does in most proofs, really.

Also, the logicians comic is wonderful :)


I found this difficult to pinpoint, too, until I read about common knowledge. The example here is very relevant: http://en.wikipedia.org/wiki/Common_knowledge_(logic)

At first, the islanders do not know that the other islanders are making the same logical judgments.

The proof relies on the fact that each islander knows what the other islanders have deduced by each day.

By making an announcement to everyone at once, the new information that the guru provides is the common knowledge that (i) there is a blue eyed person, (ii) the islanders know there is a blue eyed person, (iii) the islanders know the other islanders know that there is a blue eyed person, and so on.

Note that if the guru went to each islander one by one and told them individually that there is a blue-eyed person, they do not learn anything new about the other islanders and nobody would leave.


In the version of this I've seen before, everyone has blue eyes (except maybe the guru). On day one, somebody with blue eyes looks around and knows there's at least one other person with blue eyes, so maybe the guru was talking about that person. On day two no one has left, so they know there are at least two people with blue eyes, or else the blue-eyed person would have figured it out on day one. Of course they can still see at least two people with blue eyes, so no problem. Etc. Day N hits and they come to the inescapable conclusion that they all have blue eyes, and they all leave.

I'm not sure how that translates to two eye colors though. I'd guess basically the same, but I'd need to think some more.


CAUTION THIS IS PROBABLY A SPOILER:

Well, the simplest explanation is that the proof involves thinking about hypothetical worlds and each agent modelling everybody's behaviour in each hypothetical world, including what hypothetical-hypothetical worlds can arise, and hypothetical³ ones, and so on. The Guru's statement exists in every world, no matter how hypothetical it is, while the subjective observation that there are blue-eyed people disappears in some of them.


What makes a difficult sudoku problem difficult?

Is it possible that a brilliant, experienced solver would find the right "tricks" to solve the puzzle? Or is the sudoku such that it can only be solved by some flavor of exhaustive search over the space of potential solutions?


The difference between an "easy" and a "hard" sudoku (and other problems of that ilk) is essentially how many steps ahead you need to think in order to derive useful information.

An "easy" sudoku is one solvable by mechanical application of the rules of the game - if there's only number that can be placed in a cell, fill it in, and similarly if there's only one cell in a given row, column, or box that a given number fits in, fill it in there.

As things get harder, other logical steps are required. For example, if there are two cells in a line, and both of them can only accept the numbers 5 and 9, you can conclusively say that no other cell in that line can be a 5 or a 9. That's looking one step ahead, and (depending on how often it's necessary) makes for a medium-difficulty puzzle.
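
As a concrete illustration of that step, here's a rough sketch (not from any particular solver) of that "naked pair" elimination over one row of candidate sets:

    def eliminate_naked_pairs(row):
        """`row` is a list of candidate sets, one per cell; prune using naked pairs.

        If two cells in the row can only be {5, 9}, no other cell in the row can
        be 5 or 9, so those digits are removed from every other cell.
        """
        for cell in row:
            if len(cell) == 2 and row.count(cell) == 2:
                for j, other in enumerate(row):
                    if other != cell:
                        row[j] = other - cell
        return row

    row = [{5, 9}, {1, 5, 7}, {5, 9}, {2, 5, 9}, {3}, {4, 9}, {6}, {1, 8}, {7, 8}]
    print(eliminate_naked_pairs(row))
    # The two {5, 9} cells stay; 5 and 9 vanish from every other cell's candidates.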

As you move up difficulties things get more complex, and thinking several steps ahead becomes necessary if you want to make meaningful headway.

--

I would expect the "hardest" sudoku would not be one that's only solvable by brute force - that's a rather uninteresting puzzle, for a start. Instead, the hardest (in my opinion) would be one that requires making the trickiest insights in order to make headway without resorting to guess-and-check style exhaustive search.


According to http://www.nature.com/srep/2012/121011/srep00725/full/srep00..., there is a nice correlation between perceived difficulty and an objectively computable number.

So, a sudoku seems to be extremely hard iff η > 3.

And no, you don't need to do an exhaustive search.


Difficulty rating systems I've seen essentially take a sudoku and solve it, starting with easy human steps and working their way up to more complicated logical deductions.

Chokepoints that can only be found in a large set, or having to solve the puzzle with a complicated deduction like an X-wing or Y-wing, raise the number, with guessing as a true last resort.


The more backtracking you have to do, the harder it is for a human to solve / more likely that your weak flesh brain will encounter a stack overflow.


I got the sense a lot of people consider a sudoku puzzle unfair if there isn't a discrete set of logical steps you can take to solve it, sans guessing.

After all, if you're looking for a hard Sudoku to solve in that vein, just look at the minimally unique Sudokus. Those require a metric ton of backtracking.


Hardest? Are you kidding me? A computer can solve these quite easily.

If you want really hard logic puzzles, get puzzle books from Peter Winkler (such as Mind-Benders or Connoisseur's Collection). These sometimes even contain unsolved puzzles as well.


You can find an alternative list of top 10 hardest Sudoku puzzles here (with C source code to find them):

https://github.com/hpenedones/sudoku

Basically, I have a solver and I count the number of backtracking decisions that the solver has to make.
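
The counting part is roughly this shape (a sketch of the idea only, not the actual C code in the repo):

    def legal(grid, r, c, d):
        """True if digit d can go at (r, c) without clashing in its row, column, or box."""
        for (rr, cc), v in grid.items():
            if v == d and (rr == r or cc == c or
                           (rr // 3 == r // 3 and cc // 3 == c // 3)):
                return False
        return True

    def solve(grid, stats):
        """Backtracking solver over a dict grid {(row, col): digit, 0 = empty}."""
        empty = next(((r, c) for (r, c), v in grid.items() if v == 0), None)
        if empty is None:
            return True                          # solved
        r, c = empty
        candidates = [d for d in range(1, 10) if legal(grid, r, c, d)]
        if len(candidates) > 1:
            stats["guesses"] += 1                # a genuine decision, not a forced move
        for d in candidates:
            grid[(r, c)] = d
            if solve(grid, stats):
                return True
            grid[(r, c)] = 0                     # undo and backtrack
        return False

    def difficulty(grid):
        """Rough difficulty score: how many decision points the solver needed."""
        stats = {"guesses": 0}
        solve(grid, stats)
        return stats["guesses"]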


Just out of interest, how would you propose a computer solve number 2?

Or 4?

Or 9?


You can solve 2 with a computer by enumerating all possible questions. The gods can be in 3! = 6 configurations, and either da means yes or da means no. So there are 12 configurations in total. A question you can pose to a god will, as far as I can tell, always be of the form "are we in configuration A or configuration B or configuration C ...". So there are 2^12 questions you can pose. You can enumerate all of those and find a question strategy that works.
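
The enumeration itself is tiny (a sketch of what I mean; a question is just the subset of configurations in which its honest answer is "yes"):

    from itertools import permutations

    # A configuration: which god is A, B and C, plus what "da" means.
    configurations = [
        (gods, da_means_yes)
        for gods in permutations(["True", "False", "Random"])
        for da_means_yes in (True, False)
    ]
    assert len(configurations) == 12

    # A question of the form "are we in one of these configurations?" is just a
    # subset of the 12, so there are 2**12 = 4096 candidate questions to try.
    print(2 ** len(configurations))   # 4096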


How does the traditional "double-ask" question:

    If I were to ask you if you are the
    truth-telling god, would you say "da"?
fit in your scheme?

In particular, I think you'll find the solution to 2 does not fit in your scheme. At least, not as far as I understand it. I'd like to see your scheme for enumerating possible questions fleshed out in more detail, but I don't think I believe you.

And you haven't answered about 4. Or 9.


If I'm not mistaken, a question of the form "If I asked you Q, would you say A?" is the same as asking "Q xor (A means yes) xor (you are speaking the truth)". That is a direct question, which is part of the enumeration.

> And you haven't answered about 4. Or 9.

Umm...yes? I didn't intend to. But if you insist, 4 is not a logic puzzle, and 9 is easily solved in principle but possibly prohibitively long to calculate in practice. Compared to humans, computers are relatively less bad at finding a guaranteed winning move than at playing Go well, so you can't necessarily conclude that because humans play Go better than computers, they would also be able to solve that puzzle faster than computers.


COMBIN3! - (visual logic puzzles) - Online demo: http://combin3.com/demo/




