There are a couple of strategies for understanding the real numbers. One is to write down a definition of the real numbers, for example using rational numbers and Dedekind cuts, hoping that what you're describing is really what you mean. The other is to write down the properties of the real numbers as you understand them as "axioms", and go from there. An important property of the real numbers that always comes up (either as a consequence of Dedekind cuts or as an axiom itself) is the least upper bound property -- every non-empty set that has an upper bound has a least upper bound. That's what gives you the "completeness" of the real numbers, from which you can prove facts like Cauchy completeness (Cauchy sequences always converge), the Heine-Borel theorem (a subset of the reals is compact if and only if it is closed and bounded), and Cantor's intersection theorem (a nested sequence of non-empty compact sets has non-empty intersection).
The diagonalization argument is an intuitive tool, IMHO. It is great if it convinces you, but it's difficult to make rigorous in a way that everyone accepts, because it relies on a decimal expansion for every real number (and on handling the 0.4999... = 0.5000... double-representation issue). One way to avoid that is to prove a little fact: a union of finitely many intervals can be rewritten as a union of finitely many disjoint intervals, and the total length of those disjoint intervals is at most the sum of the lengths of the original intervals. (Prove it by induction on the number of intervals.)
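Here's a minimal sketch of that little fact in Python -- just the merge-and-measure bookkeeping, not a proof; representing intervals as (left, right) pairs is my own choice:

```python
# Rewrite a finite union of intervals as a union of disjoint intervals.
# The disjoint version's total length never exceeds the sum of the
# original lengths (overlaps can only shrink it).
def merge(intervals):
    out = []
    for lo, hi in sorted(intervals):
        if out and lo <= out[-1][1]:    # overlaps or touches previous piece
            out[-1] = (out[-1][0], max(out[-1][1], hi))
        else:
            out.append((lo, hi))
    return out

def total_length(intervals):
    return sum(hi - lo for lo, hi in intervals)

raw = [(0.1, 0.4), (0.3, 0.5), (0.8, 0.9)]
print(merge(raw))                       # [(0.1, 0.5), (0.8, 0.9)]
assert total_length(merge(raw)) <= total_length(raw)
```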
THEOREM: [0, 1] is uncountable. Proof: By way of contradiction, suppose f is a surjection from the positive integers onto [0, 1]. Let U_i be the open interval of length 1/2^i centered on f(i). The union V_n = U_1 ∪ U_2 ∪ ... ∪ U_n has total length at most 1/2 + 1/4 + ... + 1/2^n = 1 - 1/2^n < 1, so it can't contain [0, 1]. Another way to state that is that K_n = [0, 1] - V_n is non-empty. K_n is also compact, as it's closed (the complement of the open set V_n, intersected with [0, 1]) and bounded (a subset of [0, 1]). The K_n are nested, so by Cantor's intersection theorem there is some x in all of the K_n, which means x is in [0, 1] but in none of the U_i; in particular, x can't be f(i) for any i. That contradicts our assumption that f is surjective.
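And here's a sketch of the covering argument itself for the first n stages, reusing merge() from above, with fractions.Fraction for exact arithmetic; the periodic "enumeration" f is a hypothetical stand-in (any sequence works):

```python
# Find a point of [0, 1] outside U_1, ..., U_n, where U_i is the open
# interval of length 1/2^i centered on f(i).  The merged cover has total
# length < 1, so a gap exists; walk left to right until we fall into one.
from fractions import Fraction

def merge(intervals):                    # the little fact from above
    out = []
    for lo, hi in sorted(intervals):
        if out and lo <= out[-1][1]:
            out[-1] = (out[-1][0], max(out[-1][1], hi))
        else:
            out.append((lo, hi))
    return out

def uncovered_point(f, n):
    U = [(f(i) - Fraction(1, 2)**i / 2, f(i) + Fraction(1, 2)**i / 2)
         for i in range(1, n + 1)]
    x = Fraction(0)
    for lo, hi in merge(U):
        if x < lo:                       # x sits in a gap before this piece
            return x
        x = hi                           # right endpoint of OPEN pieces: not covered
    return x                             # past every piece; length < 1 keeps x <= 1

f = lambda i: Fraction((2 * i) % 7, 7)   # toy stand-in for the "enumeration"
print(uncovered_point(f, 10))            # 1/256: missed by all of U_1, ..., U_10
```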
Through the right lens, this is precisely the idea of the diagonalization argument, with our intervals of length 2^-n (centered at points in the sequence) replacing the intervals of length 10^-n (not centered at points in the sequence) implicit in the "diagonal" construction.
Then use 1/3 instead of 1/2 for a combined length of 1/2 -- in fact, the total length of the intervals can be made as small as you like. This hints at the fact that any countable subset of the real numbers has Lebesgue measure zero.
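To spell the scaling out: for any eps > 0, give U_i length eps/2^i instead; the total length is then eps*(1/2 + 1/4 + 1/8 + ...) = eps. A set that can be covered by open intervals of arbitrarily small total length is exactly a set of Lebesgue measure zero, so every countable set of reals qualifies.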
Even using 1/2, the set that remains is non-empty, thanks to the Cantor intersection theorem. The lengths now sum to 1, so the remainder may have measure zero; and since the centers f(i) supposedly run through all of [0, 1], the union of the U_i is dense, so the remainder has no "interior" (i.e., contains no open interval). The converse is not true, though: removing intervals whose lengths sum to less than 1 does not guarantee that the remainder contains an interval. This is the consideration that allows you to create what are called "fat Cantor sets" -- the middle-thirds Cantor set has Lebesgue measure zero, but by removing proportionally smaller intervals you can get other, homeomorphic sets that have positive measure.
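Here's a minimal numeric sketch of that construction, using the standard Smith-Volterra-Cantor choice of removing middles of length 1/4^n (Fraction keeps the arithmetic exact):

```python
# Fat Cantor ("Smith-Volterra-Cantor") bookkeeping: at step n, remove an
# open middle interval of length 1/4^n from each remaining piece.  The
# removed lengths sum to 1/2, so the leftover set has measure 1/2, yet --
# like the middle-thirds set -- it contains no interval.
from fractions import Fraction

def remove_middles(pieces, n):
    gap = Fraction(1, 4)**n
    out = []
    for lo, hi in pieces:
        mid = (lo + hi) / 2
        out.append((lo, mid - gap / 2))
        out.append((mid + gap / 2, hi))
    return out

pieces = [(Fraction(0), Fraction(1))]
for n in range(1, 11):
    pieces = remove_middles(pieces, n)

print(sum(hi - lo for lo, hi in pieces))   # 1025/2048, i.e. 1/2 + 1/2^11
```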
Then let U_i be the interval of length 1/3^i centered on f(i), so that the total length is 1/2, far less than 1.
Even though the supposed "surjection" is infinite, it's still the case that every x in [0, 1] would be in one of the U_n (the one whose index f maps to x), and therefore in V_m for every m >= n. But every K_n clearly has measure > 0 and is therefore non-empty, and since the K_n are nested compact sets, Cantor's intersection theorem gives at least one special point x_omega that is in all of the K_n -- and hence in none of the U_n.
The "intuitive" problem (not logical problem) with PP's proof is that it relies on measure and completeness, which is far more technologically complex than the decimal diagonizalization argument.
Here is an intuitive "rebuttal": the same proof strategy seemingly proves that the rationals are uncountable! (This is of course technically false, because the rationals are not complete -- Cantor's intersection theorem fails there -- and a countable set has measure 0 in the first place. But understanding this is much more complicated than imagining a 2-D infinite spreadsheet of decimal numbers between 0 and 1.)
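To see concretely where it collapses in Q, here's a toy check (the enumeration by denominator is just one convenient choice): every rational is the center of its own U_i, so nothing rational survives all the K_n; only completeness of the reals conjures the surviving point, which is therefore irrational.

```python
# In an enumeration f of Q ∩ [0, 1], every rational q IS some f(i), so q is
# the center of U_i and gets ejected from K_i onward.  Within Q the nested
# K_n therefore have EMPTY intersection; no contradiction, no proof.
from fractions import Fraction

def rationals_01():
    d = 1
    while True:                       # enumerate by denominator; repeats are harmless
        for num in range(d + 1):
            yield Fraction(num, d)
        d += 1

sample = {Fraction(num, d) for d in range(1, 21) for num in range(d + 1)}
survivors = set(sample)
for i, center in zip(range(1, 301), rationals_01()):
    r = Fraction(1, 2)**i / 2
    survivors = {q for q in survivors if not (center - r < q < center + r)}

print(len(survivors))                 # 0 -- every sample rational got covered
```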
If I let U_i be the interval of length 1/10^i centered on f(i), then what I'm saying is: pick a different decimal digit at position i to avoid this particular real.
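For comparison, here's the digit-picking version itself as a sketch (assuming each listed real comes as a decimal digit string; choosing only 4 or 5 dodges the 0.4999... = 0.5000... ambiguity):

```python
# Classic diagonalization: digit i of x differs from digit i of f(i).
# Geometrically, that keeps x out of a width-10^-i neighborhood of f(i).
def diagonal(rows):
    # rows[i] holds the decimal digits of f(i+1); pick 4 unless the
    # diagonal digit is already 4, then pick 5
    return "".join("5" if row[i] == "4" else "4" for i, row in enumerate(rows))

rows = ["3141592", "7182818", "1415926", "5772156",
        "4669201", "2685442", "6180339"]
print("0." + diagonal(rows))   # 0.4444454 -- differs from every f(i) at digit i
```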