It's hackish, but you can do Unicode in LaTeX with the inputenc and ucs packages. If you do:
\usepackage{ucs}
\usepackage[utf8x]{inputenc}
then the inputenc package does a first pass over the document, handing each non-ASCII character off to ucs, which replaces it with the LaTeX commands needed to generate the appropriate glyph, if supported. So ö gets replaced by \"{o}, א gets replaced by \hebalef, etc. It doesn't work for everything, but it handles a good portion of common languages.
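For reference, a minimal document using this approach might look something like the sketch below (the ucs documentation recommends loading ucs before inputenc with the utf8x option; actual character coverage depends on your TeX installation):

```latex
\documentclass{article}
\usepackage{ucs}                 % Unicode character tables
\usepackage[utf8x]{inputenc}     % utf8x input handling, backed by ucs
\begin{document}
% Typed directly as UTF-8 in the source; ucs maps each character
% to the equivalent LaTeX command (here, \"{o}) behind the scenes.
Köln is spelled with an umlaut.
\end{document}
```

Characters outside the built-in Latin accents need the extra language packages discussed further down in this thread.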
It doesn't always work out of the box, though. The Hebrew example gave me:

! Undefined control sequence.
\u-default-1488 #1->\hebalef
And the Japanese example above gave me:
Package ucs Error: Unknown Unicode character 12371 = U+3053
Either way, there's not much point in using plain LaTeX when XeLaTeX is available, apart from compatibility. So until sites like this one start moving to XeTeX/XeLaTeX, we'll still often be stuck without (true) Unicode support.
I agree, ScribTeX needs support for XeTeX and other LaTeX compilers. As the creator of the site, I guess I'm in a unique position to make this happen! It's been on my todo list for a while now and isn't without its difficulties, but I'll use this conversation as renewed motivation to get it done.
Ah yeah, for anything beyond the built-in special characters (like umlauts), you also need the relevant language-specific packages that actually define the macros, which is a bit of additional fun. ;-) In the Hebrew case, you need the ivritex package for \hebalef to be defined.
But yeah, XeLaTeX makes that all much simpler. The LaTeX solution is mostly good if you're already using LaTeX for some reason, but need to include non-ASCII text from a limited range of languages (e.g. you're writing a Hebrew document, or an English document with a Greek literature quotation).
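For comparison, a hedged sketch of the XeLaTeX route: fontspec lets you pick any installed system font, and the text is then just typed directly as UTF-8, with no per-character macros. (DejaVu Sans is only an example font name here; it covers Latin and Hebrew but not CJK, so Japanese text would need a font with the right coverage.)

```latex
\documentclass{article}
\usepackage{fontspec}       % XeLaTeX/LuaLaTeX font selection
\setmainfont{DejaVu Sans}   % example; any installed Unicode font works
\begin{document}
% UTF-8 input works natively; no inputenc, ucs, or ivritex needed.
Köln and א, typed directly in the source.
\end{document}
```

Compiled with xelatex rather than latex/pdflatex, of course.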