If there's no 16-bit layer in 64-bit Windows, how come 16-bit installers run? (msdn.com)
138 points by luu on Nov 3, 2013 | 79 comments



My favorite comment in the thread is from John Ludlow, who explained why you would want to package a 32-bit program with a 16-bit installer:

"The installer has to be the lowest common denominator in terms of bitness. This is because it needs to be able to check whether the application can run on that system and give an intelligent "um, you can't install Program X because your computer is 16-bit and doesn't support it" message rather than falling over in a big heap."

Exactly right. And the kind of detail that is easy to forget about.


That's really something the OS should be able to do. It ought to be able to examine the dependencies of the app and throw up a helpful alert if they're not met.

Of course, since the common case is newer apps on older OSes, that requires thinking ahead before such a problem actually happens, and that's tough. Even if you implement it, it's hard to get right because it's hard to really test. Apple does this, but the facility was broken in several Mac OS X releases, making it far less useful than it should have been.


In the case of 16-bit Windows, this would have been particularly difficult to foresee. Even when Windows 3.0 shipped, it was still planned that OS/2 would be the 32-bit successor to Windows, so there was no 32-bit Windows target even on the roadmap.

(According to the book Showstopper!, when Microsoft ditched OS/2, they created Win32 by taking the OS/2 API and fixing any divergences from Win16 to make it look as compatible as possible.)


If they had designed the Win32 PE .exe format differently, they could have included a 16-bit NE stub to throw up a message box with the message, in the same way that .exe files already include a 16-bit MS-DOS MZ stub with an equivalent message.
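(You can see the nesting yourself: the MZ header sits at offset 0, and its e_lfanew field at offset 0x3C points at the PE signature; everything in between is the 16-bit DOS stub. A rough C sketch, assuming a little-endian host, error handling elided:)

    #include <stdio.h>
    #include <stdint.h>

    /* Dump the MZ/PE header nesting of a Windows executable.
       Offsets per the PE/COFF spec: "MZ" at 0, e_lfanew at 0x3C,
       "PE\0\0" at e_lfanew. */
    int main(int argc, char **argv)
    {
        if (argc < 2) return 1;
        FILE *f = fopen(argv[1], "rb");
        if (!f) return 1;

        unsigned char mz[2];
        fread(mz, 1, 2, f);
        printf("DOS magic: %c%c\n", mz[0], mz[1]);

        uint32_t e_lfanew = 0;
        fseek(f, 0x3C, SEEK_SET);
        fread(&e_lfanew, 4, 1, f);         /* little-endian read */

        unsigned char pe[4] = {0};
        fseek(f, (long)e_lfanew, SEEK_SET);
        fread(pe, 1, 4, f);
        printf("PE signature at 0x%lX: %c%c\n",
               (unsigned long)e_lfanew, pe[0], pe[1]);

        fclose(f);
        return 0;
    }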


I believe this is the case. If you try to run a 16-bit executable on x64 Windows, you will receive a "not compatible" error MessageBox essentially saying WOW cannot help you.


> It ought to be able to examine the dependencies of the app and throw up a helpful alert if they're not met.

Very hard, since it would require, say, a version of Windows that shipped only with unmanaged executables (e.g. XP) to foresee, and properly fail an assertion on, managed executables. Or any number of other changes bigger than "new API."

Nothing so smart need be done, though, I think. Just run the program, and let it crash if the dependencies aren't met. Give the crash-reporter a heuristic that detects that a program crash was dependency-based, and have it present a different UX in that instance. (This is basically the Erlang philosophy.)


I've always had the issue the other way round, but I guess it depends on the market.


For extra fun, give a runnable file (any EXE, CMD, or BAT file) a name containing "install" or "setup" (I think a prefix also works). You'll get a "do you want to run this setup program" dialog unless you have UAC turned off.

This is hard-coded in some horrible place. It cost me a day once. I mentioned it to a guy next to me at work (random sample ex-MS dev) and he said the same thing.

The Windows kernel is (for the most part) a pretty good piece of engineering. The stuff on top of it, not so much.


Of course it's hardcoded. In Vista and above you have to include a manifest file telling the OS whether you require elevation or not. In order to support old applications written pre-Vista, and to protect users against developers who don't care to learn how installers are supposed to work, Windows will force UAC if the file name sounds like something a setup program would call itself, particularly when most installers will install or modify things that require Admin privileges -- such as writing to Program Files.

This seems perfectly sane and reasonable to me. What's the alternative? That manifest-less installers should fail because a user forgot to manually elevate a program by shift-right clicking it and selecting "Run as Administrator", all because they upgraded their OS and now their installers won't run any more?
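(For reference, the opt-out is a few lines of manifest embedded in the EXE; something like this, using the standard Vista+ schema:)

    <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
    <assembly xmlns="urn:schemas-microsoft-com:asm.v1" manifestVersion="1.0">
      <trustInfo xmlns="urn:schemas-microsoft-com:asm.v3">
        <security>
          <requestedPrivileges>
            <!-- "asInvoker" = never auto-elevate; "requireAdministrator"
                 = always prompt. Having no manifest at all is what makes
                 the setup/install filename heuristic kick in. -->
            <requestedExecutionLevel level="asInvoker" uiAccess="false"/>
          </requestedPrivileges>
        </security>
      </trustInfo>
    </assembly>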


I think the objection is not "old installers shouldn't be elevated" but that making the OS decide this based on the filename of the EXE is a disappointingly kludgey (but sadly unsurprising) way to do the job.

One alternative might be to have a security framework that offered the UAC prompt when a program actually attempted to do something that required elevation. I had thought that was the way UAC actually worked.

I wouldn't be surprised, given all the details, to learn that this "check the filename" approach really was the most sensible and appropriate thing to do at the time. We could call it a matter of "path dependency" if we're feeling charitable or a matter of "painting oneself into a corner" if we're not.

What I find so fascinating and irritating at the same time, every time I hear about some bit of Windows internals like this, is how often MS seems to need an exit from a corner they've painted themselves into.


>One alternative might be to have a security framework that offered the UAC prompt when a program actually attempted to do something that required elevation. I had thought that was the way UAC actually worked.

This is how it works in most cases, but it's different for access to directories inside the "Program Files" folder: because writing there was common behaviour in the past, there is a feature called UAC virtualisation:

http://msdn.microsoft.com/en-us/library/bb756960.aspx

>Prior to Windows Vista, many applications were typically run by administrators. As a result, applications could freely read and write system files and registry keys. If standard users ran these applications, they would fail due to insufficient access. Windows Vista improves application compatibility for standard users by redirecting writes (and subsequent file or registry operations) to a per-user location within the user’s profile. For example, if an application attempts to write to C:\Program Files\Contoso\Settings.ini, and the user does not have permissions to write to that directory, the write will get redirected to C:\Users\Username\AppData\Local\VirtualStore\Program Files\contoso\settings.ini. For the registry, if an application attempts to write to HKEY_LOCAL_MACHINE\Software\Contoso\ it will automatically get redirected to HKEY_CURRENT_USER\Software\Classes\VirtualStore\MACHINE\Software\Contoso or HKEY_USERS\UserSID_Classes\VirtualStore\Machine\Software\Contoso.
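(You can watch it happen: from a manifest-less 32-bit process running as a standard user, a write like the following appears to succeed, but the file actually lands under VirtualStore. Minimal sketch reusing the hypothetical Contoso path from the quote, and assuming that directory exists:)

    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        /* With UAC virtualisation active, this write is silently
           redirected to %LOCALAPPDATA%\VirtualStore\Program Files\
           Contoso\Settings.ini instead of failing. */
        HANDLE h = CreateFileA("C:\\Program Files\\Contoso\\Settings.ini",
                               GENERIC_WRITE, 0, NULL, CREATE_ALWAYS,
                               FILE_ATTRIBUTE_NORMAL, NULL);
        if (h == INVALID_HANDLE_VALUE) {
            printf("denied (no virtualisation): %lu\n", GetLastError());
            return 1;
        }
        DWORD written;
        WriteFile(h, "virtualised?\r\n", 14, &written, NULL);
        CloseHandle(h);
        return 0;
    }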


One of the problems with it is that there's no "don't run as administrator" right-click option. Developers can add a manifest, and there are some obscure sysadmin tools (not even installed with Windows) to suppress it, but ordinary users don't have an out--changing the filename doesn't help, since the resources in the file still identify it as "setup.exe".

This is one of the things that makes user separation hard on windows, as anyone who wants to do anything vaguely useful with the computer still needs Administrator access.


It's a problem that simply naming a file 'install' or 'setup' would cause extra, unwanted behavior.

I guess it's similar to naming files PRN or NUL. You learn not to do that. It doesn't make it okay.


It's not unwanted. Not automatically elevating would break older installers written in a pre-Vista era, or written by developers who don't bother embedding a manifest telling Windows that it requires elevation. Imagine being a user upgrading from XP to Vista/7 and finding that all their old programs no longer install as they fail to write to Program Files -- something that you could get away with in older versions of Windows where people used admin privileges for everything. And yet that very problem would go away for almost everyone, everywhere, by simply elevating programs named setup or install. Seems like a perfectly sane way to make things backwards compatible.


It is unwanted, broken and stupid to hardcode policy behaviour based on the filename of executables.

I lost a few hours to this because it's so quirky and easy to forget. It happened during the compilation of a certain library, which created an install.exe program to move files around the filesystem. That stupid UAC dialog wasn't even kicking in, and the copying just died.


Unwanted.

1) Write a little, one-off initialization script (I forget what it did -- probably copied a few files from point A to point B). Expect this task to take about five minutes.

2) Four hours later you discover that merely calling that stupid little script 'setup.cmd' caused all kinds of UAC havoc to ensue. Bang head against desk.

So, now I know. But it sure was counter-intuitive, and (I suggest) more than a little ham-handed on Microsoft's part.

Don't get me started about significant trailing spaces in environment variables (another little gift of non-intuitive behavior in the MS environment).


> 1) Write a little, one-off initialization script (I forget what it did -- probably copied a few files from point A to point B). Expect this task to take about five minutes.

The number of users who run legacy installers and expect them to work exceeds the number of users who write "little, one-off installation scripts" by several orders of magnitude.


And break Cygwin/MSYS/<*nix compat layer> that provides install(1).


This is a good example of the straitjacket of backwards compatibility. Not too long, not too short, a fun display of both people who know what they're talking about and people just complaining.

Also a good example of inside-the-box thinking. The reason you need to run a 16-bit installer is that the software isn't free or open source, so a simple recompile with a newer version of debhelper or whatever isn't going to bring it up to modern packaging standards.


> The reason you need to run a 16-bit installer is that the software isn't free or open source, so a simple recompile with a newer version of debhelper or whatever isn't going to bring it up to modern packaging standards.

On the other hand, the old Windows software in question will most likely actually run. Debhelper is irrelevant: confronted with a package of comparable age on Linux, you'll be digging up the source code and trying to make it build against all its ancient dependencies, not merely tweaking the package.


You make your Linux binary executables future-proof by statically linking them. It rarely comes up, but I've had no problems with 15-year-old a.out binaries on 64-bit systems.
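(i.e. build with something like this; myapp.c is a stand-in name:)

    gcc -static -o myapp myapp.c
    file myapp        # should report "statically linked"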


As soon as they use, for example, utmp, they will break. Worse, they might trash your utmp. The utmp format was changed in the transition from libc5 to glibc 2, to support IPv6. You might not care much about utmp nowadays, but it's an example of something which changed.

Also, your 15 year old statically compiled binary will contain every little bug any of the linked in libraries had those 15 years ago.
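(The utmp case is a good illustration of why: a static binary freezes the record layout into the executable. A glibc-era reader looks like this, and this exact struct utmp layout is what an old static binary would have baked in:)

    #include <stdio.h>
    #include <utmp.h>

    int main(void)
    {
        struct utmp *u;
        setutent();                         /* open/rewind the utmp file */
        while ((u = getutent()) != NULL)
            if (u->ut_type == USER_PROCESS) /* logged-in users only */
                printf("%-8s %s\n", u->ut_user, u->ut_line);
        endutent();
        return 0;
    }

When the record format changed between libc5 and glibc 2, old static binaries kept reading (and writing!) the old layout, which is how they could trash the file.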


And this is the trade-off you make for not having the source, but it doesn't negate static linking as a method of keeping things running longer. Whether you should keep things running that long without having the source is another issue.


Okay, I can agree, it certainly beats "not running at all" 8)


Yeah, I am having the same issue trying to compile gcc on an emulated old MIPS. :-/


Actually, no: I've had a lot more luck getting old Linux software working than old Windows software, as the Windows software inevitably has DLL Hell written all over it.


> DLL Hell

An issue that was relevant on the Windows 3.x operating systems and, on an unusually bad day, Windows 95/98. It's irrelevant to what we are talking about here and also kind of a red flag for Linux fanboyism.


Not in my experience: The lack of versioned libraries is still a problem and trying to dismiss it by calling me a fanboy is not logically sound.


"The reason you need to run a 16 bit installer" is to display a nice "Tough luck, you need a 32-bit computer to install this" message to the user, instead of an ugly Windows error message.

This was important at the time. And the whole point of backward compatibility is to allow old applications (written, compiled and packaged at that time) to work.

Even if the software is open-source and has been repackaged in the meantime, the old version you found on a long-forgotten CD rotting away in your basement doesn't have that. The Windows philosophy is to make sure that this one still works (which is a crazy goal IMO, but an interesting one).


MS has gone to great lengths to reach that goal. There is even special code in Windows to ensure the DOS version of SimCity works: http://ianmurdock.com/platforms/on-the-importance-of-backwar...


Crazy? Well, that was one of MS's most important selling points.

I wish I could run any old software under current Ubuntu, e.g. Gwibber.


The reason for this hack is that 32-bit software used 16-bit installers long after the switch to 32-bit. [0] This isn't a matter of supporting Windows 3.11-era software, more like Windows 9x or even XP.

[0]: Like this guy: http://www.tomshardware.com/forum/107534-13-trying-install-g....


>> The Windows philosophy is to make sure that this one still works (which is a crazy goal IMO, but an interesting one)

It's a requirement. Imagine how many people would have never upgraded to Windows 95 if their Windows 3.x programs didn't work.

Each program Microsoft 'fixes' means additional sales.


There's no point in thinking outside the box when the box was built twenty years ago and your entire job is to make the stuff inside it keep working for another OS release.


"Windows: A 64-bit extension to a 32-bit patch for a 16-bit GUI shell running on top of an 8-bit operating system written for a 4-bit processor by a 2-bit company who cannot stand 1 bit of competition."


Unfortunately, no longer really true since they switched from the DOS based Windows to the Windows NT branch. (At least I think so.)

The snide remark about competition has also become somewhat hollow. Microsoft would love to go back to the monopoly days of the late nineties. But those days are gone.


Modern Windows (NT stream) carries a lot of baggage from Win16, however; Win32 wasn't a green-field effort.


In fact a merge happened; the Windows kernel is now mostly NT-based but has DOS-era artifacts in it.


This is made up. The design of NT was such that you could plug in "subsystems" such as Windows, POSIX, OS/2 etc. The Windows subsystem has had the most development, sure, but there is no "DOS" in the kernel.


"Merge" is overstating it. It is a NT kernel with some very small and inconsequential "fixups" done to it to make certain DOS constructs work.


When I had access, I decided one afternoon to read the source to the CMD interpreter.

It was about as bad as I thought it would be.

About an hour into using my Mac Pro's shell, I wanted to weep in relief. Microsoft does not get it. (Or if they do, it gets perverted into something like PowerShell, which I recently wasted a week on).


What is Mac Pro's shell? You mean your shell on OS X? And what did you find so bad about PowerShell?


The "terminal" on MacOS X. It has:

- tabs

- cut and paste that actually works

- various ways to focus windows

- hey, search!

... and a bunch of other stuff. While CMD.EXE has its feet firmly rooted in 1982 or so.

Yeah, I know you can buy more whizzy shells. But MS really has no excuse for letting the NT console system rot as much as it has. Given that their programmers use it /all the time/, I find it hard to believe that someone hasn't written something better. (PowerShell overshot the mark, and wound up being unusable).


So all your complaints are against the terminal emulator, which everyone agrees is bad. Use something like ConEmu. Here's Scott Hanselman on ConEmu - http://www.hanselman.com/blog/ConEmuTheWindowsTerminalConsol...

> PowerShell overshot the mark, and wound up being unusable

You still haven't given any specific points against PowerShell, yet you continue to pile on it.


ConEmu can't fully replicate the CSRSS terminal behavior. It breaks pretty grossly when using advanced console functionality that works correctly in CSRSS's own terminal.


Aside from the lack of good cut-and-paste behaviour, one of the biggest problems with cmd.exe is that you cannot resize the window horizontally beyond its set size by dragging the window edge. I do this when I see wrapped text in the output of a command. And if you get wrapped text in a window smaller than its maximum size, enlarging the window doesn't reflow the text! Oh, I see the same problem happens in PowerShell too. :-(


PowerShell and the command prompt use the same underlying console subsystem, which handles character-mode programs running and displaying their output in a window. It's a core part of Windows, rather than something supplied by each program.

The console system doesn't provide a TTY, as in Unix, but instead uses a CGA/MDA text mode metaphor: a matrix of (character,colour) pairs. This leaves no information about the text that's been printed, per se - it's more like the text equivalent of a bitmap. No carriage returns, no tabs, just an arrangement of characters that happened to have been put in the right places. If you widen the screen, there's not much Windows can do except for leave the new area blank, because it has no idea at all which line breaks are significant and which were just wrapped. (And indeed, if you resize the buffer in the console window's properties, this blanking is exactly what it does.)

There's sort-of nothing wrong with this system, in that it makes sense for a certain style of text-mode GUI console program that was at one point exceedingly popular in MS-DOS - but unfortunately, it's not so hot for the TTY style of console program that everybody uses these days :)

(Fingers crossed for a much-needed shakeup, but this nonsense has stuck around for so long - I don't think that console properties dialog has changed one pixel since Windows NT 4 - that it's probably a bit late now. Meanwhile, one solution might be to run cmd.exe with a better class of terminal emulator - emacs will do this.)
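(The cell-matrix model is right there in the API: WriteConsoleOutput blits a rectangle of CHAR_INFO cells into the buffer, with no notion of line breaks or flowed text. A minimal sketch:)

    #include <windows.h>

    int main(void)
    {
        HANDLE con = GetStdHandle(STD_OUTPUT_HANDLE);
        CHAR_INFO cells[5];
        const wchar_t *text = L"hello";
        for (int i = 0; i < 5; i++) {
            cells[i].Char.UnicodeChar = text[i];
            cells[i].Attributes = FOREGROUND_GREEN;  /* colour per cell */
        }
        COORD size = {5, 1};              /* our source buffer: 5x1 cells */
        COORD origin = {0, 0};
        SMALL_RECT where = {0, 0, 4, 0};  /* screen-buffer target region */
        WriteConsoleOutputW(con, cells, size, origin, &where);
        return 0;
    }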


You are mixing up the shell and the terminal. (But you're probably right that neither of them is good on Windows.)


I think CONHOST/CSRSS is to blame for this, not CMD.


As far as I can tell, they're joined at the hip.

You /could/ redo one without redoing the other. No problem. But the result would be poor. :-/


So it isn't a shell. It's more like xterm, then, right?


Yes. In fact it's called Terminal.app; it's just a terminal emulator, but one that doesn't particularly suck. The shell is ordinary bash (or ksh or zsh or whatever; I forget what the default is).


The PowerShell interpreter is made for the PowerShell ISE. They just can't touch the CMD interpreter for the same reasons you found: someone depends on that awful host for console apps and God knows what else.

So all the work went into the ISE instead of fixing CMD. The PowerShell console, and running powershell from cmd, are just compatibility layers to make it more familiar.


This is how jokes become urban legends.


Ah, in fact my favorite problem with Win9x was how Caldera was able to keep suing MS based on the fact that it was based on DOS. I mentioned it in my blog article on the MS OS/2 2.0 fiasco, and it should be obvious why.


People speculate in the comments that Microsoft could provide a 16-bit emulator in Windows for DOS applications. In fact they do have an x86 emulator in Windows, but not for DOS.

It's used to run int 10h calls (the VGA BIOS) during early initialization, funnily enough even on UEFI.


I don't understand why they gave up on DOS compatibility for 64-bit Windows... the effort could not have been all that much for them (either with an emulator or x86 tricks). The benefit is huge: they can continue the lock-in of ancient DOS programs. For example, I can report that OrCAD for DOS works better in 32-bit Windows XP than it ever did in MS-DOS.

Likewise, I don't understand why vm86 mode cannot run directly from 64-bit mode on x86. This seems very much like an architectural mistake to me.


As I remember, when an x86-64 chip is running in 64-bit (long) mode, it can run 32-bit code, but virtual-8086 mode, which is needed to run real-mode 16-bit code, was removed. To run DOS they'd basically have to have a full emulator for a 286, like a limited version of Bochs.


> > Why can't you just run the 16-bit emulator from 32-bit Windows in the 32-bit emulator on 64-bit Windows?

> [Because that emulator uses v86-mode, which is not supported by 64-bit processors. -Raymond]

-- http://blogs.msdn.com/b/oldnewthing/archive/2013/10/31/10461...


I don't know what mechanisms they use, but VMware's various products run 16-bit code just fine on x86-64 chips. Are you saying they added a full 286 emulator to their virtualization products?


Essentially yes. But I guess they simply kept around their virtualization code from pre-hardware-virtualization times. A 286 emulator shouldn't require much maintenance :-)


Mmmm, I think it's just not profitable for them to do this any more.


That's interesting, do you mean early initialisation of Windows or the VDM?


This has nothing to do with VDM; the emulator is only accessible from kernel mode (it's in HAL.dll) and is not meant to be used by usermode applications. See [1,2].

[1] http://x86asm.net/articles/calling-bios-from-driver-in-windo...

[2] http://www.geoffchappell.com/studies/windows/km/hal/api/x86b...


There's also a similar emulator in X that allows drivers for PC-specific graphics cards, which use x86 initialisation code in their video BIOS, to work on non-x86 platforms like SPARC.


We use that same emulator (x86emu) in coreboot to isolate the VGABIOS from real hardware (except the video card, of course)


It's worth pointing out that, while 64-bit Windows has dropped its 16-bit layer, 64-bit Wine has not. You can happily run your original 16-bit version of Chip's Challenge on an out of the box 64-bit Linux/Wine install these days.


All this is exactly why I've stopped buying PCs and using Windows. Silly inconsistencies like the shell not supporting file paths longer than 260 characters (I know that some Win32 APIs, and NT itself, allow paths of up to 32,768 or so Unicode characters) -- it's all those patches, inconsistencies, the limited shell (cmd.exe), and drive letters which drive me crazy. With Mac OS X and Linux all that is gone, and the world seems a better place now.
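(To be fair, the 260 limit is just MAX_PATH in the shell and ANSI layers; the NT limit is reachable from Win32 by prefixing an absolute path with \\?\ and using the wide APIs. Sketch with a made-up path:)

    #include <windows.h>

    int main(void)
    {
        /* The "\\?\" prefix turns off MAX_PATH (260) parsing and hands
           the path to NT nearly verbatim, allowing ~32,767 UTF-16
           characters. The path below is hypothetical. */
        HANDLE h = CreateFileW(
            L"\\\\?\\C:\\some\\very\\deeply\\nested\\path\\file.txt",
            GENERIC_READ, FILE_SHARE_READ, NULL,
            OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);
        if (h != INVALID_HANDLE_VALUE)
            CloseHandle(h);
        return 0;
    }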


Don't laugh, but there is "PowerShell." :) (laughs)


The idea to do something like that is just plain... ugly. It's almost like guessing MIME type based on the file content (hint: IE). I wonder how long it will take until someone finds a security exploit of this "feature"?


> I wonder how long it will take until someone finds a security exploit of this "feature"

I'm not sure what exactly crafting a 16-bit installer file to exploit some bug in the bundled installshield to get it to execute code of your choosing (when the user tries to run your installer) would achieve.

You've got the user to run an installer that you've made. They've just double-clicked on your program. Forget crafting dodgy installer data files, you can run whatever code you like just by... running it.

Moreover, given the user is trying to install your program, they'll be happy to elevate its privileges, too (through UAC). So your arbitrary code is running as admin - what more could a security flaw in the bundled installshield get you? (And if the user doesn't think your program's an installer, they're not going to be any less confused by the appcompat installshield's UAC prompt than if your program just tries to elevate by UAC by itself. It's not like the bundled one is setuid root (or the windows equivalent)).

See also: http://blogs.msdn.com/b/oldnewthing/archive/2006/05/08/59235...


> So your arbitrary code is running as admin - what more could a security flaw in the bundled installshield get you?

In the days of antivirus products and executable whitelists, the "elevation of privilege" may be gaining arbitrary execution as any user. Of course trustedinstaller.exe or whatever, which has a valid Microsoft signature, can run -- why shouldn't it? No need to ship that file back home for further analysis either (as AV products like to do with unknown files); it's a standard OS thing. Therefore my rootkit isn't burned either.


> It's almost like guessing MIME type based on the file content (hint: IE).

Hint: Linux? It does something similar pretty much on the OS level.


That's on your file system.

The HTTP protocol specifies that, if the server says a document is 'text/plain', the client must not attempt to second-guess it based on content.

IE second-guesses it based on content, so you cannot serve up something that looks like HTML as text/plain to get it to display.

Substantially worse, IMO.

That said, it'd be better to actually save content types in extended attributes or something. And make all applications magically save and respect these attributes.


Actually MS broke the HTTP protocol on that one, and it led to some very interesting exploits. A malicious user could upload a GIF to some site which allowed image uploads. The image would appear normal to a casual observer, except that it was specially crafted so that IE thought it was actually a JavaScript file (because IE guessed the file type from its contents and didn't obey the server-side MIME type) and executed it -> XSS.

I don't think Linux does anything remotely like this. It does allow you to use "file" utility to guess the contents of the file, but it doesn't act on it.
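(This class of bug is why an opt-out header was eventually added; IE8 and later honour it, and servers hosting user uploads should send it:)

    HTTP/1.1 200 OK
    Content-Type: text/plain
    X-Content-Type-Options: nosniff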


What the fuck are you talking about?


#! shebang I assume?


This is a lot more involved than magic numbers, and magic numbers are configurable.
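(e.g. Linux's binfmt_misc lets root register an interpreter keyed on a file's magic bytes. The classic example from the kernel docs, assuming Wine lives at /usr/bin/wine, fittingly registers it for "MZ" executables:)

    # run anything starting with the DOS "MZ" magic through Wine
    echo ':DOSWin:M::MZ::/usr/bin/wine:' \
        > /proc/sys/fs/binfmt_misc/register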


So we are still a long way off from having sensible Linux-style repositories, then...



