Microsoft, Once Infested with Security Flaws, Does an About-Face (nytimes.com)
158 points by hackuser on Nov 18, 2015 | 176 comments



This is a weird story, since professional security people would have told you the same thing back in 2007.

Windows wasn't originally designed to be secure. Even NT, which is a serious multi-user kernel, was a product of 1990s C programming style. And while that's true of the Unices of the time as well, none of them had Microsoft's absurd user base, and so none of them had the same terrible malware incentives.

This all came to a head around 2001-2003, when the Internet worm phenomenon got so bad that Microsoft was routinely on the front page of CNN, and serious talk of congressional action began.

From what I understand, there was a dramatic top-down response, led by Gates and Ballmer, requiring software security training for developers, giving product managers the power to slip release dates to ensure bugs were caught, and funding what I believe is probably the largest 3rd-party software pentesting program in the industry. Several well-known software security firms (my old firm, Matasano, not really among them) were basically bootstrapped out of Microsoft contracts.

Today, Google probably does a better job on software security than Microsoft does, but it's hard to come up with another rival. Tellingly, Google's security efforts were also a top-down reaction to a major security incident.


I joined Microsoft in 2003, maybe two weeks after SQL Slammer, to do internal security work. "Dramatic" is a pretty good way to describe the situation. There were some very hostile debates, even over very small bugs, that got escalated to the executive level, where the answer was more or less "what part of the policy is unclear? Fix it." It's an impressive transformation and I wish, say, Apple were that good. I had hopes for Apple when someone I knew from the Windows group went there, but from the outside it doesn't seem like the company really prioritizes security to the same degree as MS.


Not that weird. Illustrative perhaps. So you and I would have agreed in 2007 that Windows was much better at security than they had been, but we are both pretty tightly connected to the technology market.

Today, 8 years later, my Mom and Dad think Windows is a "secure" system as they haven't had any issues for long enough that their opinion of it has changed.

The final leg of this journey will be when Windows + Windows Defender is all you need to keep your system secure. Basically once there isn't a market for add-on security products because the base product is "good enough."

I'm curious why you mention Google though; their security record on Android is a lot worse than either Windows Phone's or iOS's. In many ways I feel like they are exactly Windows in 2003 with regard to "it's secure if you use our APIs" kind of security. Would love to hear your thoughts on that.


> I'm curious why you mention Google though; their security record on Android is a lot worse than either Windows Phone's or iOS's. In many ways I feel like they are exactly Windows in 2003 with regard to "it's secure if you use our APIs" kind of security.

Chrome (OS)'s security model is a lot better, and compared to Android it was designed more in-house. Android was an acquisition and has more legacy design baggage (though of course the vast majority of the code has been written by Google at this point).


Android's security is actually fantastic. The problem is Google's inability to distribute security updates. In 6.0 I now get monthly security updates, and there is even a "security patch level" shown in the status, like "November 2015".

The latest patches even made it back to Android 4.1 devices as security updates. But that is neither here nor there; the fact that we still have 4.1 devices in use is a problem.


Android's system security design is inferior to that of iOS.

But, iOS's superiority (a) derives in significant part from Apple's total control over the hardware platform†, and (b) comes at the cost of a lot of user control tradeoffs that nerds like us tend to hate.

Really, to suggest that Android's security is at parity with Apple's, you'd have to be arguing that Apple does a terrible job at exploiting their inherent advantages of control over hardware and control over what's allowed to run on the platform. Apple does not do a terrible job at those things.

Yes, Google controls some of their hardware, but they have an ongoing support requirement for a lot of hardware they have no control over at all, and will have that requirement forever, which limits their options.


On the other hand, I'm unaware of any automated analysis of applications on the iTunes App Store, dynamic or static. Doing this properly isn't in Apple's DNA. For example, when XcodeGhost apps infected some hundreds of millions of users, it took Apple days to take down the affected apps, seemingly waiting for third party reports instead of simply scanning the entire store for the XcodeGhost signature themselves.


That's true. A friend of mine is building a company to address that now:

https://sourcedna.com/


My 4.3, 4.4 and 5.0 devices from Samsung and Asus (operator-free) have yet to receive said updates.


Google internally, especially after the Chinese misadventure, has pretty great security. It isn't necessarily in Android, but their cloud services are great, and internal security is great.


In Android, Google's stuck between a rock and a hard place because so much of the distribution is handled by the downstream vendors. Were I Google, in setting up the distribution agreements I'd be much more strict on the requirements for getting Google services on the phone, like reasonable minimum security support periods. I'd also make it easier for vendors to agree to such things, by promising support on their end for a reasonable amount of time, and preferably a stable kernel-level ABI, so that they don't need to do as much regression testing when a kernel-level patch needs to be released.

Android development feels a lot like current web development practice, where things are released and not a lot of thought is given to backward compatibility and backporting security fixes. Google needs to tighten those reins a bit, either by pushing handset makers forward faster and/or by giving reasonable support for older devices themselves.


I largely agree, and it is the same problem Microsoft faced when downstream OEMs put their OS load on their hardware, adding in their own drivers and "features". Even today companies like Lenovo get smacked for adding things like Superfish. But the underlying OS has gotten much more resilient. Anyway, that is why, to me, Google seems a bit like Microsoft in 2003.


> Tellingly, Google's security efforts were also a top-down reaction to a major security incident.

What was that? All I can think of is when the Chinese stole their source code, but the response to that would presumably be more about managing who has access to what internally than improving the security of their user-facing products.

edit: to be clear, I'm thinking of the time they had code stolen by a Chinese employee in their China office, presumably at the request of the govt.


Yes, I think that was the first event that pushed Google to focus much more on security. The second one was of course in the summer of Snowden, when Google found out NSA had full access to its network. Since then it has taken quite a few measures to improve security and now it treats its own network as the "untrusted Internet".

https://www.usenix.org/conference/lisa13/enterprise-architec...

Unfortunately, other than the default full disk encryption it's pushing on Android 6+ devices, I'm not really seeing Google push client-side encryption anymore. I wonder if it even wants the E2E email extension to be fully developed anymore. And even though it should be quite trivial for Google to adopt Signal's text and voice encryption in Hangouts, I doubt it has any intention of ever doing that.


Cite a source that demonstrates that NSA had full access to Google's network, please.


Not the OP, but stuff like this showed up all over the Internet just after news of the Snowden leak (http://www.theverge.com/2013/6/6/4403868/nsa-fbi-mine-data-a...):

> The US National Security Agency and Federal Bureau of Investigation have been harvesting data such as audio, video, photographs, emails, and documents from the internal servers of nine major technology companies, according to a leaked 41-slide security presentation obtained by The Washington Post and The Guardian.

Google is in that list. Granted, this did not demonstrate full access to all of Google's internal communications, but the category of "audio, video, photographs, emails, and documents" is broad and damaging enough that it doesn't really matter if NSA had full access or not.

And yes, I know that Google and all the other major companies vigorously denied any back doors, but as people were saying at the time on this very forum they didn't have any other realistic or legal choices. The President of the United States himself was saying things like: "You can't have 100% security, and also then have 100% privacy and zero inconvenience", which, if you were a smart enough CEO, was a very good hint about what to do and say in the heat of the moment.


[1] shows a full packet capture of a google-internal RPC transaction. As a xoogler familiar with the product in question, I can tell you that that packet had no business being on an external link; That was only sent datacenter to datacenter. I was in a conference war-room shortly after this dropped, and the universal reaction was "Fuck."

[1] http://apps.washingtonpost.com/g/page/world/what-yahoo-and-g...


Not the GP, but [0]. The NSA didn't have access to all of Google's data. The NSA had reportedly tapped the inter-datacenter fiber (and the data on those lines was unencrypted).

[0] http://arstechnica.com/tech-policy/2013/10/new-docs-show-nsa...


I generally agree with you about claims regarding full (root) access to Google's servers, but in this case it's a weaker claim about the network. One might quibble whether tapping without injection counts as full access, but that's a reasonable claim without too much hyperbole. Maybe the NSA didn't have hooks into every switch, but Google's network design also meant a lot of data was flowing beyond the boundaries of any one physical site.


I'm just curious but why do you think Google is better at security than Microsoft? Number of exploits on Android vs Windows? Amount of funding on security research?


I actually agree with you (that Microsoft's internal processes are more security-oriented than Google's version of the same).

That being said: a lot of Android's security issues are not always Google's fault. Some of them are in generic Linux libraries, some are in OEM-added components (Samsung), some are unique to an app ecosystem (i.e. on Windows, Win32 user applications normally have the full permissions of that user, whereas on Android an APK running as a user has a limited set of permissions, and exceeding those permissions is an "exploit", which is an additional class of security issues), and many are simply malware being placed on an app store.

Android's biggest issue isn't really that it is poorly designed (in particular with SELinux as standard). Android's biggest issue is how horribly the OS update mechanism works, so relatively minor security issues may not be fixed for years.


> A lot of Android's security issues are not always Google's fault. Some of them are in generic Linux libraries, some are in OEM-added components (Samsung), some are unique to an app ecosystem (i.e. on Windows, Win32 user applications normally have the full permissions of that user, whereas on Android an APK running as a user has a limited set of permissions, and exceeding those permissions is an "exploit", which is an additional class of security issues), and many are simply malware being placed on an app store.

I think these mostly are Google's fault. They chose those Linux libraries and implemented them; they certainly have the resources to modify them, to otherwise secure them, or to develop their own. They are producing an OS not for a laboratory but for the real world where there are OEM components, app ecosystems, and malware. Dealing with those issues is an essential part of what an OS does. If you build a ship that works fine in calm weather but sinks when a storm hits, the problem is not the weather.


Apple has many of the same issues, and uses a huge amount of bug-riddled open source. The big difference between Android and iOS is that iOS, which is the same on every device it runs on, is locked down to a far greater extent than Android is. This is a good thing for security and a bad thing for end-user control; Google and Apple just made two different tradeoffs here.


I'm curious what the general consensus is if we made it [pun] more apples to apples by comparing the current state of a Nexus 5X/6P on Android 6.x vs an iPhone 6s on iOS 9.x. Android, today, seems to be riddled with legacy a la Microsoft, wherein XP is still at large; it seems a similar problem exists with Android 3/4/5 still in use...

But is the cutting edge comparison really that much more skewed in Apple's favor?

I'd say that prior to the recent improvement in Android's patch cycles, iOS had an edge, but outside of the walled garden I'm still very curious about the quality of the two with specific regard to the SDLC and the resulting output.


> professional security people would have told you the same thing back in 2007

Did they really turn it around that fast, in one iteration of Windows (XP was released in 2001, Vista in 2007)? I would think that fixing bugs would be necessary but not nearly sufficient, and they would have had to re-architect and re-develop major parts of the system. And Microsoft needed to do that while maintaining the backward compatibility that is a major selling point and pushing products out the door quickly enough to generate revenue.

It sounds like a nightmare, and not achievable in 4 years. My impression was that many bugs were fixed but there wasn't a major redesign, which always made me doubt how secure Windows could be.


Vista was a major security redesign, not just bug fixes. UAC was one of the biggest security improvements in the OS, because it meant that users were no longer running as admin by default.


UAC is not a security feature/boundary (even according to MSFT themselves). It was aimed at getting ISVs to stop writing software that assumes admin privileges.

Vista had A LOT of security improvements overall, but UAC shouldn't be considered one of them (similar to PatchGuard for x64 Windows - it's sometimes cited as a security feature, but it's not).


I'm sure they didn't like it because security is, while necessary, a pita


> UAC was one of the biggest security improvements

I assumed it was just an interface stapled onto the old system, and something underneath that allowed changing permissions without logging off.

It might have had a big effect, but it doesn't sound like a significant change in the system. But maybe my assumptions are wrong ...


The entire concept of UAC and programs not running with admin rights by default was introduced in Vista. It was such a big change that it pretty much ruined the reputation of the OS, single-handedly. I remember that one of the major complaints about Vista was the number and intrusiveness of UAC prompts, which occurred because programs were doing things like keeping settings in C:\Program Files rather than the user's application data folder.

Windows 7 has the same permissions model as Vista, but has a much better reputation, because by the time 7 was released, applications had been updated to not require as much in the way of permissions. From the user's perspective, this meant that 7 "worked properly", even though little had changed in terms of the security model between Vista and 7.


I've used Linux for many years, but I've always defended Vista. The other complaint about Vista was excessive resource use. I tell people that when XP came out, Intel released the Pentium 4. When Vista came out, Intel released the Core 2 Duo.

I'm conflating processor date with date-you-could-buy-a-computer-with-that-chip, but still: Pentium 4 is a world away from Core. Windows Vista does a lot more than XP.

The complaints went away because people gradually bought computers with Core architectures.


The real problem with Vista and resource use came from the OEMs. Details here: http://blog.seattlepi.com/microsoft/2008/02/27/full-text-mic...

Microsoft has a program where they certify that hardware will work acceptably with their operating systems. This is important in the run-up to a new version of Windows, because during the period when everyone knows that Windows X+1 is coming soon but all you can buy off the shelf are machines with Windows X, people want to be confident that they'll be able to upgrade to Windows X+1 when it's available.

For Vista, if your machine passed this certification, you got the right to ship it with a shiny sticker that said "Vista Ready." Seeing the "Vista Ready" sticker on that XP laptop on the shelf at Best Buy told customers that they could buy it without fear. It was certified future-proof.

However, Vista required more resources than XP did to run with acceptable performance. That meant that OEMs had a ton of ultra-cheap XP machines in the pipeline that could not with a straight face be called "Vista Ready." And this was a problem, because they didn't want to get stuck with warehouses full of machines nobody would buy because they couldn't run Vista.

So what they pushed for, and what MS eventually gave them, was a new certification: "Vista Capable." Unlike "Vista Ready," "Vista Capable" didn't mean Vista would run well on the hardware in question. It just meant you could install Vista on it, with no guarantees as to how it would actually run. And the shiny "Vista Capable" sticker looked pretty much exactly like the shiny "Vista Ready" sticker, so if you weren't looking carefully, it was easy to confuse the two.

The OEMs loved this solution, because they could now slap "Vista Capable" stickers on all those crappy XP machines and unload them on people. But, entirely predictably, then Vista came out and suddenly tons of people were installing it on hardware that was nowhere near beefy enough for it. So for umpty-ump millions of people who bought that crappy hardware, the experience of using Vista was painful and awkward and slooooow. Eventually even the cheapest machines were powerful enough to meet Vista's requirements -- but by that point Vista's reputation had been sealed. As the saying goes, you never get a second chance to make a first impression.

But the OEMs did get all that inventory out of their warehouses. So, you know, mission accomplished.


This, exactly.

Vista is now my go-to example for when people managing platforms say "just break badly written applications, their incompetence is their problem, not ours." Microsoft took a staggering PR hit when all sorts of crappy, poorly-written applications broke under Vista, because their users all blamed Microsoft for it. People don't know their applications are crappy under the hood; all they know is that they used to work, and now they don't.


UAC in Vista essentially introduced the idea of a "split token". Administrators get two access tokens: a full access token and a filtered access token. The filtered access token is essentially a standard user token and is used by default. Only when elevating (when UAC prompts) is the full access token used (which is the filtered access token plus various admin-level privileges).

Each token has different privileges on what it allows/doesn't allow. One notable change in Vista was the introduction of SeTimeZonePrivilege, since in XP you had to be an administrator in order to change the time zone.
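If you're curious which of the two tokens your own process got, here's a minimal sketch (Python with ctypes; the TOKEN_ELEVATION_TYPE constants are from the SDK headers, so treat it as illustrative rather than production code) that asks the token for its elevation type:

    import ctypes
    from ctypes import wintypes

    advapi32 = ctypes.WinDLL('advapi32', use_last_error=True)
    kernel32 = ctypes.WinDLL('kernel32', use_last_error=True)

    kernel32.GetCurrentProcess.restype = wintypes.HANDLE
    advapi32.OpenProcessToken.argtypes = [wintypes.HANDLE, wintypes.DWORD,
                                          ctypes.POINTER(wintypes.HANDLE)]
    advapi32.GetTokenInformation.argtypes = [wintypes.HANDLE, ctypes.c_int,
                                             ctypes.c_void_p, wintypes.DWORD,
                                             ctypes.POINTER(wintypes.DWORD)]

    TOKEN_QUERY = 0x0008
    TokenElevationType = 18   # TOKEN_INFORMATION_CLASS value

    # Open the current process's access token.
    token = wintypes.HANDLE()
    advapi32.OpenProcessToken(kernel32.GetCurrentProcess(), TOKEN_QUERY,
                              ctypes.byref(token))

    # 1 = no split token (UAC off, or built-in Administrator),
    # 2 = the full admin token (process is already elevated),
    # 3 = the filtered standard-user token admins get by default under UAC.
    elevation = wintypes.DWORD()
    returned = wintypes.DWORD()
    advapi32.GetTokenInformation(token, TokenElevationType,
                                 ctypes.byref(elevation),
                                 ctypes.sizeof(elevation), ctypes.byref(returned))

    print({1: 'default (no split token)', 2: 'elevated (full token)',
           3: 'limited (filtered token)'}.get(elevation.value, elevation.value))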


In XP, you could always change permissions without logging off, by using the runas command or "Run as..." in the context menu. I think you could also set up shortcuts to run programs as Administrator.

So you could run as a non-privileged user, and you'd only need to log in as Administrator for some very specific tasks.

However, there were many applications that didn't play nice with that model and required you to run as Administrator. By changing the default, MS had to do a big push to make software makers abide by the rules (which had been in place since XP (edit: in the mainstream branch) but were often ignored).


You could do that under XP, but needed a separate user account for administrator. It was also more of a hassle, as you always had to enter the password when starting a program as Administrator. I ran with such a setup on my desktop, and remember that to install programs I usually logged out and logged in as Administrator again.

Vista introduced a model that was almost as good, but made it more user-friendly and the default.
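For completeness: under the Vista model a program can still explicitly ask for the full token, and the shell's "runas" verb is what pops the consent prompt instead of a separate Administrator logon. A rough Python/ctypes sketch (illustrative only):

    import ctypes

    SW_SHOWNORMAL = 1
    # Ask the shell to launch the target elevated; on Vista and later the
    # "runas" verb triggers the UAC consent (or credential) prompt.
    ret = ctypes.windll.shell32.ShellExecuteW(
        None, u'runas', u'notepad.exe', None, None, SW_SHOWNORMAL)
    print('launched' if ret > 32 else 'failed or cancelled (code %d)' % ret)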


And judging by the pageviews on my blogpost on how to disable it, users didn't really like it very much...


Which was caused in part by badly behaving programs (writing user data to the Program Files directory) that triggered the UAC prompt, if I'm not mistaken.


They improved the system in the next release, and since Win7 UAC generally works perfectly even with the "badly behaving" software. Sensitive directories are now mirrored elsewhere in the filesystem, so that when your program wants to read from, say, Program Files directly, what it really reads is C:/Users/[username]/AppData/Local/VirtualStore/...


I've never experienced this in writing my own software. If I try to read or write to a directory I'm not allowed to, I simply get access denied.

    try:
        f = open('C:/windows/test.txt', 'w')   # open for writing, which is what hits the permission check
        f.write('test')
    except Exception as ex:
        print ex

IOError: [Errno 13] Permission denied: 'C:/windows/test.txt'


Try the same with C:/test.txt. Open Explorer and see if the file actually appears in C:/.


So that's what the policy "Virtualize file and registry write failures to per-user locations" does... uh, good to know :) thanks!

In my particular legacy-apps' field, I believe most problems are due to interactions with the DCOM subsystem though.


Yeah, I only learned about it after I discovered what "~" is being resolved to in my Emacs installation on Windows...


That was already in Vista, if I remember correctly.


Apple has always been exceptional in this regard as well. It usually drives people up a wall when this is pointed out, as the fanboys like to trumpet it a bit too loudly, but it's true. In-the-wild exploits are rare, and the company moves quickly to squash them and to prevent the entire category of exploit from biting them a second time.


There are a lot of great people at Apple and the security model of iOS is an achievement --- in a lot of practical ways better than that of Android. But I do not know a lot of people who would argue that Apple has a better security program than Google does. Google's team is better funded and better staffed, and has a much broader charter than Apple's.


> in a lot of practical ways better than that of Android.

Umm no - the update situation is better on iOS, but fundamentally iOS has bigger problems - https://twit.tv/shows/security-now/episodes/532?autostart=fa... . That problem is not easily fixable due to the way ObjC works. Android gets code access control for free with Java. There have always been jailbreaks for most iOS versions, and it's not like they haven't had other security issues. The ability to fix them quickly is certainly an advantage, but there is nothing in iOS that is fundamentally more secure than anything else on the market.

Frankly I think Apple's security is a combination of happenstance and restrictive policies - I don't think they care (yet) about the processes, infrastructure and people required to do what Google and Microsoft do. (No offense to the good security people at Apple - this isn't about them, this is about having the organization-wide security focus that MS needed to turn Windows around.)


I don't understand what your argument is. Untethered jailbreaks on iOS are worth gigantic amounts of money because they are not easy to come by.


But they have existed for every version of iOS nonetheless. The relative difficulty may well be due to other reasons - closed source, locked-down hardware, etc. It says nothing about software security.


Rooting your phone when you're not allowed to is so common on Android it might as well be a non-event.


Some recent figures: (http://betanews.com/2015/06/26/android-is-the-biggest-target...)

>"There was significant growth in Android malware, which currently consists of 97 percent of all mobile malware developed. In 2014 alone, there were 1,268 known families of Android malware, which is an increase of 464 from 2013 and 1,030 from 2012", it said.

Apple’s iOS, on the other hand, went through last year basically unscratched. The report said that there were just four iOS targeted attacks in 2014, and the majority of those were designed to infiltrate jailbroken devices.


I would not read too much into 'reports' by companies trying to sell you security products.

If you want to talk impacts - both iOS and Android have been similarly impacted - big-name apps getting into the App Store that were compiled by a hacked Xcode, ad SDKs using forbidden APIs, etc. Likewise most Android malware is due to rooting and sideloading apps from questionable sources.


So Android getting about 100x as much malware as iOS is not significant? That's from all reporting I've seen, not just that one. Just because iOS has problems too doesn't make the numbers the same.


It would be significant if the statement was "There are 100x more infected Android phones than iOS phones."

Remember that Android is a lot of things - there are Nexus phones, there are OHA OEM phones (the majority of them), there are Chinese no-name phones that use open source Android, etc. So if most Chinese users run an AOSP build provided by their phone maker and they all sideload apps and get infected - that's different. Even considering all this, nobody is making the above statement.

Just having malware written for an OS means nothing. It only suggests that it is targeted more due to market share. If people jailbreak their iPhones and install random apps from untrusted sources, there is hardly anything Apple's security can do to prevent it. Same goes for Android. Nothing in that reflects the security of the underlying platform.


Orders of magnitude more iOS users have been infected by malware (via XcodeGhost) than Google-flavored Android users, despite the latter platform having multiple times more users. The reports you're pointing to list malware in Chinese app stores on non-Google-flavored devices.


"That problem is unfixable easily due to the way ObjC works"

Can you explain that?


You should listen to the podcast for details but the gist of it is that "The Objective-C model of object-oriented programming is based on message passing to object instances. In Objective-C one does not call a method; one sends a message." So let's say you have an app that uses a runtime. The runtime in turn may use private/internal calls that your app is not supposed to use. Well there is no reliable way to prevent it because as long as you can construct a message and know the string/name of the target you can call it and there is no easy way for static analysis to detect such behavior.

Some apps were exploiting this to get a list of running apps and things like that.


This is extraordinarily silly. In modern systems security, real security boundaries aren't enforced at the language level. No amount of ObjC message-sending trickery is going to change your UID.


Yeah, I have been around long enough to know you can't change to UID zero by message passing. That is just preposterous to assume. I was talking at the runtime level - I even cited an app that was calling a runtime method to get a list of running apps. Essentially they have no reliable runtime permission model - they rely on obscurity and static scanning to prevent you from passing messages to some receivers that they don't want you to.

I would have thought you would research it a bit before asserting silliness - but oh well.


Apple is slowly migrating a massive amount of system features out of private frameworks and into background daemons protected by entitlements or privacy prompts. The end goal is that all sensitive data or hardware features are completely inaccessible from inside the sandbox, neither by private API, nor IOKit, nor syscall, nor direct filesystem access.

Retrieving the application list is a particularly poor example as there used to be a public API that did exactly that: CFPreferencesCopyApplicationList


The runtime is trivial to bypass on Android as well: Reflection, NDK, etc. It's not intended to enforce a security policy.

The "receivers that they don't want you to" on iOS is not about security, but correctness, binary compatibility, and app store guidelines. iOS's security model is not defeated by bypassing the ObjC runtime.


No it isn't - if your app did not ask for say a permission to connect to Internet or get a list of apps - there is no way to do that using reflection or NDK or whatever.


I don't know about the latest version of iOS, but your statement was certainly wrong just 2 years ago.

See https://www.usenix.org/system/files/conference/usenixsecurit... for details of how to write an app that bypasses App Store review but has security holes that allow it to access APIs at runtime that it was not supposed to have access to, with no notification.


Yes, that was exactly my point. People keep repeating the marketing mantra that iOS security is fundamentally better, but it has been ordinary, although the closed system helps it somewhat and they did seem to get the fingerprint security right. And I was referring to Android's permissions model when I said no, you can't bypass it.


You may think it's "marketing mantra" if you're unaware of the technical differences. But compare, say, Apple's Secure Enclave with Host Card Emulation. Apple's design is just more secure. http://www.tomshardware.com/news/host-card-emulation-secure-...

I certainly don't understand characterizing iOS's security model as "ordinary." For example, it encrypts using a separate coprocessor running an entirely separate OS, that is protected against even an iOS kernel exploit. That's definitely not an ordinary design!


And on iOS, if your app does not receive permission to access your location or contacts or camera or Internet, there's no way to do that by using objc_msgSend or whatever.

On both platforms, these security policies are enforced at the process boundary, not by the runtime.


Java's access controls are trivial to bypass on Android. Neither the Android nor the iOS runtime is there to enforce a security policy.

You should read Apple's iOS Security Whitepaper: http://www.apple.com/business/docs/iOS_Security_Guide.pdf See for example the data protection classes: a very thoughtful design, with no analog in Android, and that certainly could not have come about by "happenstance."

Heck, Android doesn't even encrypt your data by default! That alone makes iOS "fundamentally more secure."


> Java's access controls are trivial to bypass on Android.

You keep repeating that, but I am certain you don't understand what you are talking about. Go download the Android SDK and emulator, write an app that does that, and post it on GitHub. We will talk about it then.

Also - I'll leave this here - http://www.macrumors.com/2015/10/19/apple-to-remove-hundreds...


> write an app that does that

Here's a sample of how to invoke `Activity.savedDialogKeyFor`, which is private:

    // requires: import java.lang.reflect.Method;
    Method privateMethod =
        Activity.class.getDeclaredMethod("savedDialogKeyFor", int.class);
    privateMethod.setAccessible(true);   // bypass the 'private' access check
    String result = (String)privateMethod.invoke(this, 42);
    System.out.println("Got result: " + result);
Worked perfectly on Android Marshmallow emulator. As I said, it's trivial.

> Also - I'll leave this here

What's your point? There was no security exploit here, and no security policy can realistically prevent networked apps from sharing data like your email address. That falls to the review process.

What's remarkable here is how little data this malware was actually able to capture. Certainly less than on Android, where users routinely grant excessive permissions, like giving Netflix access to your phone.


You're confusing accessing private methods with violating the Android permissions model. Two totally separate things.

Edit: Also, my larger point was that iOS security is not fundamentally better than anything else. The closed nature, restrictive policies, etc. help, but fundamentally it's nothing outstanding. It was a response to tptacek claiming the opposite.


I am not confused. Please reread the thread.

You asserted that iOS is "unfixable" because the ObjC runtime cannot prevent apps from using "private/internal calls that your app is not supposed to use," whereas "Android gets code access control for free with Java."

But as I showed, Java access controls are easily bypassed, so they do not provide any security. This is by design: security is enforced at the process boundary, not by the runtime.

My hope is that you now appreciate that neither the ObjC nor Android Java runtimes are a security risk, because they are not responsible for enforcing any security policy.

> Also my larger point was the iOS security is not fundamentally better than anything else

iOS security is fundamentally better. You can read the whitepaper to understand the ways: data protection classes, the Secure Enclave, and lots more.

But here's a damning fact: iOS encrypts your data by default, Android does not. That by itself makes iOS fundamentally more secure.


You should really read this Usenix paper - https://www.usenix.org/system/files/conference/usenixsecurit... .

What you are not understanding, or ignoring, is that iOS apps (over 250) that were App Store approved were able to retrieve personal user data, including email addresses, by reverse engineering the names of the private APIs and using message passing. Android sure has private APIs, and you can access those, but you're still restricted to the permissions you asked for. For example, you need to declare the android.permission.GET_ACCOUNTS permission to get the user's primary email. Not on iOS apparently, where they rely on manual review to ensure you are not calling private APIs - which fails, as can be seen in the Chinese ad SDK fiasco I posted.

So no, the Android runtime isn't a security risk the way iOS private APIs are - your app gets a broad set of permissions on iOS by default, and you can do clever trickery to call private APIs to collect personal info and who knows what else without the user knowing. Android needs your app to ask for that permission first (and at runtime on M) - you aren't calling a private method on Android without declaring the necessary permission to get what you want without user interaction.


> You should really read this Usenix paper

I have read it. It describes an attack on the app review process, i.e. a trojan. Their apps require the user to grant privileges. For example, their GreetingCard app requests access to the user's address book, and the user has to grant it.

> iOS apps (over 250) that were App Store approved were able to retrieve personal user data including email addresses

This is not true. Here's the blog: https://sourcedna.com/blog/20151018/ios-apps-using-private-a...

The data they collected was list of installed apps, serial numbers, and some sort of AppleID numeric identifier. In particular, they did not (could not) collect email addresses.

It's bad that the SDK was collecting this stuff, but this data is fairly innocuous. Last I checked, Android provides information like the list of installed apps and various serial numbers without requiring elevated permission.

If you think it's possible to get the user's email address through an iOS private API, I challenge you to tell me what that private API is.

> For example you need to declare android.permission.GET_ACCOUNTS permission to get the user's primary email. Not on iOS apparently

This is wrong. On iOS, the only way to access the user's email is through the Address Book framework, which prompts the user at the time of access.

> your app gets a broad set of permissions on iOS by default

This is completely false. iOS has a comprehensive on-demand permissions model, which is widely recognized as better than the install-time permission model on Android. This is why Android is switching to iOS style on-demand permissions in Marshmallow.

> you aren't calling a private method on Android without declaring the necessary permission

Please stop confusing private methods with elevated permissions. You CAN call private methods without elevated permissions, as my code above demonstrates.


>The data they collected was list of installed apps, serial numbers, and some sort of AppleID numeric identifier. In particular, they did not (could not) collect email addresses.

[Edited for unnecessary stuff]

Oh, the article you linked has Apple's response, which is quoted verbatim below - it references user email addresses. Specifically.

“We’ve identified a group of apps that are using a third-party advertising SDK, developed by Youmi, a mobile advertising provider, that uses private APIs to gather private information, such as user email addresses.."

> Please stop confusing private methods with elevated permissions. You CAN call private methods without elevated permissions, as my code above demonstrates.

What I wrote was that you are not going to be able to call an Android API via private invocation and succeed if the API requires a specific permission and your app hasn't declared it.

All of this only goes to prove that Apple's security in iOS is not extraordinary as you claim - it is fallible like every other platform, with the exception of fingerprints, which are currently believed to be secure - but that's now the case with Android as well; in M they are using ARM TrustZone with no app access.


Maybe there's a private API on iOS that leaks the user's email addresses without the proper permissions. Maybe there's one on Android too. Neither OS has a runtime that will prevent malicious apps from exploiting such an API.

> What I wrote was you are not going to be able to call an Android API via private invocation and succeed if the API requires a specific permission and your app hasn't declared it

Just like on iOS, with the difference that it happens at call time and not installation time.

> All of this only goes to prove that Apple's security in iOS is not extraordinary as you claim

It shows the exact opposite! Notice how ridiculously weak these results are. On one of the most high-profile targets today, an app may (unconfirmed) be able to determine the user's email address and send it to a server. On a trojan app that the user deliberately installed, and then deliberately granted access to Twitter, it can post a tweet without the user's confirmation, if the user has not updated the OS. Fetch the smelling salts!

Meanwhile, millions of Android phones are part of botnets, like NotCompatible.C, at one point reaching 1.5% of mobile devices in the USA. A Chrome 0-day came out last week, allowing full control remotely of fully-patched Android phones. These aren't research papers showing theoretical attacks, this is real life.

Yes, iOS has extraordinary security, and its competition only makes it look better.


A Safari zero-day was just sold to governments (Nov 2nd). But yeah, continue to assert otherwise if that makes you feel better. I am sure you have some explanation for that, and for all the previous jailbreaks for iOS, and how they show iOS security being extraordinary! Android phones with botnets? Yeah, you can believe that so hard it will make it a fact soon!

/jeez why do I bother with Apple fanboys?


It would be, if Symbian hadn't already had most of those features.


The pwn2own contestants never have any problems pwning Macs, but iOS's security record is hugely impressive.


I can't help but think this is more a result of low volume and extremely tight control over their ecosystem rather than intentionally prioritizing security.


The MacOS X ecosystem is 25 years old now, counting NextStep, and has an installed base of close to a hundred million systems, most of them unsophisticated personal computer users running a full Unix operating system with internet access. That is a prime target, considering there have been malicious exploits taking advantage of Z-series mainframes in recent years. (One of the Pirate Bay founders got popped for looting the mainframe at a tax accounting firm.)

I think it's more to do with the development culture inside Apple. The features in Swift designed to improve secure coding show they're actively thinking about security and how to achieve it, and have been for a while.

The only regular security headlines you see about the Mac are from the Pwn2Own contest and the like, where researchers trot out vicious exploits that are then dutifully squashed by Apple in the next update, never to be seen in the wild. (And there's a reason Apple makes it a PITA to install Flash and Java these days, and includes their own very nice .pdf reader.)


First, there is Mac malware.

Second, retail-level malware is a numbers game. Malware isn't cross-platform. A malware author chooses their target based on how remunerative the target is. Windows remains more remunerative than OS X.

There is no fundamental difference between the security models of modern Windows and OS X that accounts for the disparity in malware infections.

(I'm a Mac user, and have been since ~2001.)


I don't know. My father-in-law's Windows PC is routinely bogged down with malware/adware. It's not the same kind of security holes that used to be rampant, but it's still too easy for malicious software to cause trouble.


Though Microsoft has improved, Windows computers still often come laden with crapware, unlike Apple's machines or, I think, most Chromebooks. You then end up with stuff like Superfish if you're not lucky.


Sorry about repeating myself on this subject: buy laptops from the Microsoft Store - no crapware installed ("Signature" editions).


I buy the PCs for him and personally remove the crapware. He just clicks on random stuff. If it tells him to buy something, he buys it.



Like Android, you mean.

Any system that allows free rein to OEMs will be riddled with crapware.


Apple


Microsoft, after much R&D work, deployed two technologies in Windows 7 that improved the security situation considerably. The first was the Static Driver Verifier.[1] That's the formal proof-of-correctness system that checks the source code for a driver for termination, bad pointers, incorrect API calls, and anything that could result in a kernel crash. It's a symbolic path tracer - it symbolically executes all paths through the code. All drivers must pass that verifier. Before this was deployed, about half of Windows crashes were due to drivers. Now, very few are.

The other technology was a classifier for panic dumps. When Windows crashes and reports data to Microsoft, that data goes into a classifier system which tries to cluster similar crashes together. So, when there's a crash bug, the reports of similar crashes are all looked at by the same person at the same time, which tends to get it fixed.

Linux lacks either technology, which is a problem.

[1] https://msdn.microsoft.com/en-us/library/windows/hardware/ff...


"Still, episodes of online hacking have become even more startling, including the theft of personal data from millions of Target customers and terabytes of private emails from Sony Pictures Entertainment (and both companies use some Microsoft products)."

So somewhere in Sony and Target's organisations there are one or more Windows computers?

This is just lazy reporting NYT. Do better.


Agreed - I bet they use Google search every day and their execs have iPhones too.


Of mild interest, in the Sony hack: "The hack, which was launched Nov. 24, only affected computers with Microsoft Corp's (MSFT.O) Windows software, so Sony employees using Apple Inc (AAPL.O) Macs, including many in the marketing department, had not been affected." IMHO Microsoft still has a fairly ho-hum attitude. If you want to hack a company like Sony, the easiest way is to target employees still using old systems like XP, and Microsoft could have mopped a lot of that up by offering free upgrades to 10 when they offered them to users of 7 and 8, but nah.


By the time of the Sony hack, any machine still using XP is a machine that would be using XP even if Microsoft did exactly what you suggest. There's been no lack of opportunities to upgrade and the cost of a Windows license is generally trivial next to the labor expense, training expense, and monetized risk of "my critical software doesn't work" of the upgrade of these systems.


Though many comments here speak of engineering flaws, to me it was a cultural flaw. The most outstanding anecdote I have to illustrate this is when I told my manager that "I can't <note that I say "can't", not "won't"> run that internal test tool (the insect farm thing, for those that were in DevDiv around 2003-ish) that runs 24/7 with complete network access, because it requires <but did not need> admin privileges."

That nearly got me fired. You read that right: when I point out that a sloppily written application that someone wanted the entire developer division to run was insecure, my manager basically told me to run it or else. If the dev can't even be bothered to not write to PROGRAM_FILES (which is the only reason it needed admin privileges), what other holes does it have? Well, I'm not about to find out on my dev box that's hooked to the corpnet. Running on an internal-only alpha version of the early .NET runtime to boot; what could possibly go wrong? (And as it turned out, nothing went wrong, but still...)

And this was after Valentine's mail was sent. SQL Slammer had already happened. What, you thought the whole company just jumped on the security bandwagon? Yeah, I thought a new day had dawned, too. You can make 'em quit blindly using strcpy, but you won't change their minds with an email even after Valentine asks the whole company to come in and take Slammer support calls.


"Microsoft was once the epitome of evrything that is wrong with security in technology."

Certainly they have improved over the last decade, but who hasn't? Not to mention they have boatloads of cash to throw at the problem.

But the fact^W opinion remains that Windows is still the easiest target of any OS. A user can configure any OS to be less secure, and other OSes can become as popular a target as Windows, but there's something about Windows that makes it a far greater liability than all the rest.

It's closed source.

How are you ever going to assess the quality of this software in terms of security? By reading the New York Times?

Boatloads of cash also buys PR.


That's an interesting opinion... what makes you think it's the easiest OS to target? Do you have any data to back up the claim that a modern Windows OS is less secure than its major competitors (OS X and, in some circumstances, Linux)?

My feeling would be that Microsoft has done a lot in the security line and has also given a lot back to the security community (their SDL documentation, which is freely available, for example) and that they are one of the better examples of security in the software industry these days.


The "security line" is not simply a question of "doing a lot" and "giving a lot back", ex post facto, or setting an "example" in the "security industry".

It also has to do with design goals and priorities. Layer upon layer of cruft, with an OS weighing in at multiple GB, is not a confidence builder in the "security line". It also includes default configurations.

There are reasons that so many Windows instances have been and are now part of botnets. There are reasons why the security updates have increased in quantity and frequency over the years and appear to be neverending.

Some of those reasons have to do with design and priorities. Others with default configurations that Redmond assumes no user will ever change.

No amount of PR can change reality (e.g., massive botnets of Windows users), although it might change people's perception of reality.

Also, I never said "major competitors". I said "other OS". For example, the OS I use is probably not a "major competitor". It is much smaller and open source. That is what is important to me.


Sure design goals, well I'd argue that Windows has had "improving security" as a design goal for some time now, and that this has had measurable impacts on the security of their products.

For example, take SQL Server: compare the number of RCE issues that it's had with, say, Oracle's database server, from another well-funded company with loads of "PR" money. You'll find that SQL Server has had many fewer security issues than the competition, and I would suggest this is evidence of Microsoft's improved attention to security...

MS default configurations are really very good. I'd compare to your OS of choice, but you don't choose to disclose it :)

So on the server-side I'd say that when I test modern default installs of windows based products they tend to have a good security posture out of the box.

Security updates, well everyone has a load of those; are you suggesting that MS is worse than their competition? Counting OS vulnerabilities is notoriously difficult, so it's hard to get an apples-to-apples comparison here.

Botnets, well there are botnets on Linux for sure, and OS X has had its share of malware too, as has Android.

If you like a small open source OS then that's fine, but it doesn't necessarily make another entirely different OS have bad security.

Now I know there's a reasonable chance you're thinking I'm an MS "fanboy" or similar at this point, but I'm not. I use OS X, Linux and Windows (as well as some iOS and Android) where they work best for me.


That reason is exactly why MS has improved their security over the years. One of the things they've done is make automatic updating a mandatory feature of the OS. People can't just lazily turn updates off anymore because they can't be bothered with a 45-second break for their computer to maintain itself. Were these people running Linux, a lot of them would be doing the same thing, with the same results. Windows has a lot of botnetted computers because Windows runs the vast majority of computer systems out there.

The neverending security updates is part of the difficult balance MS has to take between compatibility and security. Fixing a security problem that breaks a buggy program written 20 years ago by a company that no longer exists suddenly becomes a support issue, because there are a lot of people who don't want to hear that they have to upgrade their copy of PrintShop.

MS releases security updates in part because they audit their code, and are making strides to get rid of a lot of the cruft. Windows 10 pulled a lot more services out of kernel space and into user space, for example. They're doing so while being conscientious of user needs, instead of telling the user to just code the fix for older programs themselves.

In your small OS, who do you go to for support if something breaks? Who will you go to for support when a program from today breaks ten years from now? These are responsibilities many open source programmers will slough onto the end user, while they're working on the Latest and Greatest PulseConsoleSystemAudioKitD.


>But the fact remains Windows is still the easiest target of any OS.

How is that a fact?


> But the fact^W opinion remains Windows is still the easiest target of any OS.

Windows isn't the easiest to target; but it is the most profitable, simply because it has the highest proportion of users.


OK, I'll bite. What do you think is the easiest?

Keep in mind what I said about configuration. Distinguish configuration from source code. Proper configuration is within the user's control and can be anticipatory and preventative.

Whereas poor quality code in a closed source program is outside the control of the user to fix and usually requires knowledge of someone exploiting it before it will be fixed. This is, unfortunately, after the fact.

Proactive versus reactive.


>How are you ever going to assess the quality of this software in terms of security?

I am not aware of a single third party that has reviewed all of the code that goes into a Linux distribution. Do you know of one?


Not sure what Linux has to do with my comment.

Are you assuming I use a Linux "distribution"?

Sometimes I have done so, but only occasionally when I need to check something on Linux.

Anyway, I am missing your point.


You're not sure what the most popular open-source OS has to do with your comment about open source OSes?


Nope. Maybe you can explain?

My comment was about closed source versus open.

Popularity is only relevant to the extent someone would argue Windows is not the easiest target but rather the most frequent one, due to its popularity, i.e., userbase size.

There's more to open source than just Linux.


Well, I simply highlighted the difference between theory and practice. The average user does not have the money to audit open source software. And even if you get someone to bankroll the cash, you will need to re-do the audit for every single check-in since the audit.

You made a point about Windows being impossible to audit, but in practice you're in pretty much the same boat when it comes to Linux.


Again, you mention Linux. I do not use it. How is it relevant to my comment?

And then there's this mythical "average user". But what does that have to do with me and my own solutions?

I know only one user: myself. I know what works for me. I live in a tty. Do I need a Windows GUI? No.

Finally, I also know that what one can do, another can do. But that is their decision and I am not trying to convince anyone to do what I do.

Windows is a massive, complex truckload of legacy source code that keeps growing with every edition; it has a lot of flaws and the number grows every year; it is not "open source" in the sense of public source code repositories and enabling users to compile from source. This is not opinion. It's fact. These facts do contribute to the state of Windows "security". Bravo for fixing flaws in recent years. But no points for having them to begin with: poor quality control.


>Again, you mention Linux. I do not use it. How is it relevant to my comment?

Um, because you compare like to like. If you are comparing millions of lines of code to 10,000 lines of code, then obviously it's easier to audit. Your point about auditing code makes no sense unless you compare the task of auditing equal amounts of source code.

>Windows is a massive, complex truckload of legacy source code that keeps growing with every edition

Please enlighten us how you got access to the source code, which parts you evaluated, what methods you used to evaluate it, and why you think those methods are accurate and scientifically valid.

Unless you do those things, you cannot claim to be fact-based. It's fine to have an opinion. Many non-technical users who don't understand the NT OS design confuse the implementation flaws of user-mode code, kernel code, and third-party code, and are unable to differentiate them from NT design flaws. Sure, from a responsibility standpoint, I'm right there with them - if you ship it, you should own up to the flaws regardless of where they come from. I think that MS in the past made some super bone-headed decisions (possibly driven by commercial reasons) that screwed them security-wise because the 'default install' of Windows was insecure out of the box.

> But no points for having them to begin with: poor quality control.

How do you know this?

As an aside, I find it ironic for you to lament about "complex truckload of legacy source code" while using a TTY which itself is the exact same thing. Ah ! C'est la vie


"How do you know this?"

As a user, I don't. It's closed source. That's the point. What users have is only circumstantial evidence. And then there is the marketing and PR, such as the NYT article.

One of my original two comments was "What would we find?" There is nothing to suggest I have read the source code.

Unless and until Windows becomes an open source project, such as the ones that are routinely discussed in this forum, where users can remove code they do not want, no amount of "updates" or PR from Redmond is going to "fix" Windows to my satisfaction. As I said, I am not expecting that to happen, ever.

There is a comment in these threads from a former Microsoft employee that confirms my suspicions about poor quality control. Are you still in disbelief?

As for your aside, I agree. There's legacy code in both. But I suspect it is far less code overall. And, in my opinion, it's in some cases of higher quality than what I am getting with Windows (there are certainly exceptions: Dave Cutler's work on the NT kernel being one). Of course, I do not have the Windows source code, so I can only speculate about what is in there.

More importantly, the size of the software is much smaller and I can modify and recompile it.

I can see to some extent what has been added and changed over the years. I can continue to learn from the source and the people who wrote it, instead of from a marketing department.

Living in a tty is "the exact same thing" as using Windows?

Is that an example of "comparing like to like"?

I am in VGA textmode. I am not using a graphic layer.

The amount of code needed to implement the tty, which is available to me to read, edit, compile and redistribute, is, I speculate, much smaller and less complex than the code used to implement the Windows GUI.

Pure speculation of course.


As long as you're claiming that your POV is an opinion, or informed speculation at best, I have absolutely no issues with what you're saying, and do not wish to engage in further argument. We probably agree on most things.


I remember my first experience with Linux back in the early '90s -- once connected to the Internet, that Red Hat box was rooted almost immediately.

From my perspective, Windows doesn't seem to be less secure but it has a greater share of users who do stupid things.


Maybe Red Hat's configuration was at fault?

I have seen popular Linux distributions where network interfaces are enabled and programs are listening by default.

I, the user, never asked for that.

This is one reason I do not use Linux distributions.

Too many assumptions about what the user wants.


You have to remember this was the '90s, and it was a different time back then. I think there's a tendency to compare Windows in the '90s with how Linux is now. This was the same era as Mac OS 9, where a single application could still crash the entire system.

Exploiting common faults in Linux system software was pretty easy back then too.


> How are you ever going to assess the quality of this software in terms of security?

If it's very important to you, you can obtain a license that includes source code:

https://www.microsoft.com/en-us/sharedsource/


Or I could just use an open source alternative that does not require jumping through such hoops.

One where I can edit and compile the source, run it and redistribute it, too. All for free. But I guess all that is also possible under this shared source program you mention?


I don't think you need to sarcastically explain the benefits of Free Software in this forum.


I agree. Which is why I do not understand how anyone can claim Windows is not "closed source" in the sense of the opposite of "open source", as that term is commonly understood in this forum. Maybe they were being sarcastic?


Would anyone agree that complexity provides a foundation for insecurity, while simplicity makes audits easier? Large software with many parts has more potential for flaws. Small software with few parts has less potential for flaws, and the flaws it does have are easier to find and fix. Implausible? Well, I happen to believe this.

If Microsoft ever released the Windows source code, what would we find? Simplicity?

How easy would it be to audit?

Bias disclosure: I like small software. Windows and nearly all other software released by Microsoft is large, or packaged in such a way as to necessitate a large download/install.


> If Microsoft ever released the Windows source code, what would we find? Simplicity?

Depends on where you look. Much of the NT kernel is "simple", but it's not easy stuff to get right. There's a bunch of legacy code in the Win32 layers, especially dealing with user input, that is just frightening (comments like "This stupid hack makes the utterly broken Compaq XYZ-3000 keyboard not crash the system"). The COM stuff is just complex and arcane and top-heavy with architecture astronautics. The build system is, or was, a soul-destroying, radioactive and rotting cesspool of Perl; doing Windows builds sucked real hard.

So it's a mix of really quite good code, and really quite awful code (that they're dealing with, I think), and code that makes you want to quit, every day.

(Soapbox: You should never have code in your project that you are scared of touching. Never. If you do, get rid of it and replace it. Don't layer over it, don't give it to some intern to maintain, just face the problem and deal with it, or it will be the most costly code in your product).


If the code were ever released in a form that I could compile myself, and I could omit the parts I did not want... then I might be interested in Windows.

Given that MS is a very successful company that got to where it is today based on closed source and copyright, I am not expecting that to happen, ever.

I appreciate your candor.


> If Microsoft ever released the Windows source code, what would we find? Simplicity?

Windows source code has been available for a long time under the Shared Source initiative. Some governments (including Russia), large companies, universities, partners and maybe even MVPs have had access to some versions. Email source@microsoft.com

https://www.microsoft.com/en-us/sharedsource/


All software is large. If you break it into smaller pieces, it's still large. If you look into how iOS devices are jailbroken, it's usually a combination of bugs from different pieces of smaller software that makes it possible.


If you like small code, simplicity, and a code-audit culture, look no further than OpenBSD.


A well-deserved endorsement from a Microsoft employee. (Assuming your HN profile is up-to-date.)

The kernel I start with is indeed derived from the same one that project started with.


My profile is up-to-date, although maybe I should add the caveat that my opinions are merely my own. :)


Of course.

Though I'm sure you are not the only Microsoft employee who has used OpenBSD.

At one point, after the Danger acquisition, Microsoft HR was advertising a position for a NetBSD developer.

Are there any rules about using a non-Windows OS in the office? Even if it increases your capabilities and productivity?


No, there aren't hard rules against it. Practically, however, it's not very useful to use a non-Windows OS depending on which team/product you're working with.

For instance, Macbooks are really common on some apps teams. In contrast, I work on the OS itself, so tools such as WinDBG and Hyper-V are essential to getting work done.


At least part of the secret is formal verification. Over the last decade, MSR made some big advances in software verification technology. These resulted in more than just academic papers: they were used to find tons of real bugs in Microsoft source code and in external code (the latter as part of the driver development kit). There was a point at which all of the biggest names in software verification were either academics or at MSR, or both.

SLAM, Z3, DART were all tools that came out of MSR and have been incredibly influential on the whole field of software verification.
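
To give a flavor of what that enables, here is a minimal, hypothetical sketch using Z3's Python bindings (the z3-solver package), not SLAM or DART themselves: it asks the solver for inputs where the classic midpoint computation (a + b) / 2 on 32-bit signed integers silently overflows.

    # pip install z3-solver -- hypothetical illustration, not an MSR example
    from z3 import BitVec, Solver, sat

    a, b = BitVec("a", 32), BitVec("b", 32)
    mid = (a + b) / 2   # 32-bit two's-complement: the addition wraps silently

    s = Solver()
    # Both inputs non-negative, yet the "midpoint" comes out negative.
    s.add(a >= 0, b >= 0, mid < 0)

    if s.check() == sat:
        m = s.model()
        print("overflow witness:", m[a], m[b])

Roughly speaking, tools like SLAM scale this kind of query up to check that drivers obey kernel API usage rules.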


I wouldn't say that they are "best in class". I recently (~3 months ago) reported a pretty important security flaw in outlook.com (including its Office 365 version). They only acknowledged the issue a week or so ago (and it isn't fixed yet!). I like Microsoft, but their awful responsiveness doesn't make me want to use their products, or to work for them again.


> company’s co-founder, Bill Gates, once ordered all of Microsoft engineers to stop writing new code for a month

Source?


Brian Valentine actually dictated it, but it did happen. Brian's mail was something along the lines of, "I'm tired of reading about the latest security vulnerability in the NYT, so...". And I think it was six weeks, not a month.

Source: me, who worked there (in VS, not Windows) at the time.

EDIT: oh, yeah, forgot about the Gates mail. References are buried in this link: http://www.microsoft.com/security/sdl/story/#chapter-1

I stand by the B. Valentine version, just can't find a link.


This is the closest reference I could find. http://www.cnet.com/news/gates-security-is-top-priority/

I don't have Microsoft source though.


Man, this is a well-known and often-repeated fact, but goddamn am I having trouble searching for sources and references on this one. Can anyone help out?


Not sure 10 years is an about face?


It takes Saturn 15 years to do an "about face" so that it's on the opposite side of the Sun. Perhaps Microsoft's code base, when printed out, is as massive as a planet.


Plus they are talking about caring about security and putting plans in place. Are they claiming they never cared and never had plans to make the situation better?

This is just like reporting on government, where an idea a President merely mentions gets reported as "President dramatically reverses his entire 20-year belief system."

Cynically (and probably accurately), it's a clever way to use a user feature (security) as a pretext to gather data so as to "protect" them. Sound familiar?


The "about face" was when they decided that security really mattered (as opposed to the previous month, when shipping features really mattered, and security was an afterthought).


Right. They started being serious about security in 2003, it just took another 10-12 years for most people to get off of 2001's XP to actually start seeing the benefits. :)


When you have a code base as massive as Microsoft's, it is.


"Microsoft’s latest version of its operating system, Windows 10, has a feature called Windows Hello that allows people to log in to a PC with a scan of their finger, iris or face instead of using a password — weak versions of which are a common cause of data breaches."

Is that more secure in practice?

http://hackaday.com/2015/11/10/your-unhashable-fingerprints-...


Too bad it's doing the opposite on the privacy front, trying to collect more data than ever about Windows users, by default.


Set telemetry to Basic, in which case it doesn't collect any data about Windows users.

This whole debate has gone overboard through an inability to recognize the difference between telemetry and "spying". For example, between counting cars (important for city planning) and number-plate recognition.
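
If you want to check what is actually configured, a rough sketch, assuming the documented AllowTelemetry group-policy value (0 = Security, 1 = Basic, 2 = Enhanced, 3 = Full):

    # Hypothetical sketch: read the telemetry *policy* level on Windows 10.
    # If the policy key is absent, the level chosen in the Settings app applies.
    import winreg

    KEY = r"SOFTWARE\Policies\Microsoft\Windows\DataCollection"
    try:
        with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY) as k:
            level, _ = winreg.QueryValueEx(k, "AllowTelemetry")
            print("AllowTelemetry policy:", level)
    except FileNotFoundError:
        print("No telemetry policy set; the Settings-app default applies.")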


This is the salient point. What good is platform security when the platform is keylogging and shipping telemetry to a third party not in your control?


Which third party is this?


Not to mention the usability front. Windows is ugly and confusing. It didn't used to be so ugly and was less confusing in the past.


I think Windows 10 is quite nice. Windows 8 is a POS.


Windows 8 was great if you went straight to the desktop and never looked back. There were some terrific upgrades to the classic desktop (even the task manager got real-time graphs and proper hierarchies for the first time).

I always pictured a really talented dev team somewhere in Redmond plugging away thanklessly on things that the powers that be thought no-one cared about (like security and usability) until one day someone decreed from on high that the whole frontend would need to "beat the iPad", and it all turned to poop.

This is speculation of course, but drawn from multiple similar experiences.


Very true. But every time I pressed the Windows key to open an application, I was reminded of the horrors that lurked outside of the desktop.


Aye, but it got me really good at just typing the program name I wanted, instead of having to go poking through that pain in the ass menu.


Once I had my most-used stuff either pinned or as a desktop shortcut, 8.1 was no biggie.


For $3 you could get StartIsBack and you would never know the Start menu had changed since Windows 7, or the nature of the desktop vs. the tiles screen. That made Windows 8 work just fine for me.


For free, you can use ClassicShell. I actually forget I'm on Windows 8.1 at home vs Windows 7 at work as they're almost identical.


Or upgrade to 10 for free, too?


"All software is large."

FALSE.

But this statement does not surprise me. It is this distorted view of programs that is a large part of the "security" problem, in my opinion.


Since this subthread turned into an off-topic flamewar, we detached it from https://news.ycombinator.com/item?id=10588972.


Saying it is false and calling views "distorted" does nothing to further the conversation. If you're interested in furthering conversation on the subject try providing reasoning behind your statements instead of condescension.


Perhaps you could explain how the statement "All software is large" "furthers the conversation"? How am I supposed to respond to that?

You think that false statement is a respectable "response" to a comment on the benefits of small software?

Small software existed in the past (when memory and storage were more limited) and it exists in the present. The very idea of small programs is a foundational one in the field of computing.

"All software is large" is not a "view" that is in agreement with reality. As such, it is distorted.

I am using small software every day. In fact I am writing this comment with a small program. When I write software, it is small, at least relative to anything from Microsoft.

I am too dumb to write large software.


The point is small software just does less; when you need to do more, you either build large software or you put together a bunch of small software, which is effectively, from a security standpoint, the same thing.

Small programs that don't do anything are secure but nobody cares.

And you've probably written software larger than Notepad, a very popular small Microsoft program.


In contrast to some others, this is a comment that reflects the truth.

The only points I would contest are (1) that "a bunch" is effectively the "same thing" as large software, and (2) that "nobody cares". On the first: a "bunch" can vary in number and quality. My userland is a single "multi-call" binary and quite small. The sum total of source code is not so large that I cannot manage it. It's keeping tabs on the kernel code that presents the challenge.

If "nobody" cared, then you would not be seeing a comment such as mine because there would be nobody to author it.

Moreover there would be no reasonably small kernel source that users like me could use. Some people care enough to maintain that kernel and to keep it relatively small.

Maybe that group of people is like the software: small. Suits me just fine.
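
For anyone unfamiliar with the term, the "multi-call binary" idea mentioned above is just one program that dispatches on the name it was invoked as (BusyBox is the well-known example). A toy sketch in Python, purely illustrative:

    #!/usr/bin/env python3
    # Toy multi-call "binary": symlink this file as "echo" and "cat",
    # and it behaves as whichever applet name it was invoked under.
    import os
    import sys

    def do_echo(args):
        print(" ".join(args))

    def do_cat(args):
        for path in args:
            with open(path, "rb") as f:
                sys.stdout.buffer.write(f.read())

    APPLETS = {"echo": do_echo, "cat": do_cat}

    def main():
        name = os.path.basename(sys.argv[0])
        applet = APPLETS.get(name)
        if applet is None:
            sys.exit("unknown applet: " + name)
        applet(sys.argv[1:])

    if __name__ == "__main__":
        main()

BusyBox does the same thing in C, with all the applets compiled into one small binary.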


> In fact I am writing this comment with a small program.

Which small program is that?


Are you saying you are unaware of any program that can post text to a web server that is "small"?

If all you are aware of is large software, then your statement that "All software is large" makes more sense.

"All software is not as large as Microsoft's software."

Would you agree with that?


I'm also interested in the answer to their question instead of these diversions.


[flagged]


Windows is actually very modular; the kernel is designed in a micro-kernel style, even though it all runs in ring 0. There are versions of Windows with no GUI.

Every other comparable operating system is in the same order of magnitude in the size of the code. The attack surface for any large OS is going to be about the same.


"There are versions of Windows with no GUI."

Do they run in VGA textmode?

I have used a program called "window" and today I use tmux but neither of those come from Microsoft.

How do I obtain one of these versions of Microsoft Windows that runs in VGA textmode?

"The attack surface for any large OS..."

Well yes, because they are all large. But what if the OS is small?

Let me guess, now you are going to tell me that all OS, e.g., kernel plus userland, are the same size?


Of course, not all OS kernels plus userlands are the same size. However, we aren't talking about "smaller", we are talking about "small". Small meaning something you can write and/or fully understand. If you wrote your kernel and userland, or at least understand every part of them, then congratulations, you made your point. But if you don't, your OS/userland is smaller, but it isn't small.

And most developers could probably write an OS and userland by themselves given enough time, but it would be a long time before it was practically useful or secure.


Alright, I think we have reached agreement.

Software that is "smaller" than Microsoft Windows.

It exists.

I prefer such "smaller" software and use it every day.

For me, it is "practically useful". I regularly see others stating it is useful for them, too.

Have no idea what "secure" means in the abstract, but maybe smaller software and the way it is used can be "more secure" (or less secure) than Windows.

"Fully understand" is a very high watermark to reach with any system. But given the choice between a larger system that is opaque and one that is smaller and open source I believe I can (partially) understand the smaller one better.


My point is that all software is large; even Linux combined with its userland is large, larger than Windows was a while back. What you prefer is pretty immaterial to the discussion; your choice is perfectly valid, even though I would personally find it wasteful.


I do not use Linux on a day-to-day basis, as I have said multiple times in these threads, nor do I consider the popular Linux distributions a good example of small(er) software. They keep getting larger.

Some have said the larger Linux distributions aim to be a "replacement" for Windows. I would tend to agree. I have little interest in Windows nor the "desktop" metaphor.

But, at least with an open source kernel such as Linux (aside from the binary blobs), unlike closed source Microsoft Windows kernel, a user can reduce the size.

One can also use an initrd with her own small(er) programs or a multi-call binary instead of a GNU userland put together by a third-party organization.

I have never compiled my own reduced-size Windows kernel, or any other part of Windows, given that it is a _closed source kernel_. That I am even having to state this seems ridiculous. Most everyone reading this forum knows it. This whole "discussion" is surreal.

Funny that you use the word "wasteful". That is exactly how I view large(r) software. It wastes valuable resources that, on my systems, are in limited and often short supply.

The reason I mentioned what I prefer was to disclose bias. Certainly everyone has one.

The reason I mentioned what I use was only to provide example, to illustrate that such small(er) software exists. Again, the idea I even have to state this, to someone on this forum, seems surreal.

There are alternatives to Windows and to Linux distributions, at least in my case. I know for fact I am not the only one using small(er) software, but I am not comfortable speaking for others. What software they choose to use is their business, not mine.


No matter how small a Linux (or other) kernel you compile, as long as it's runnable on a modern system, it's vastly more complicated than almost any other human-made machine. A minimal kernel is still more complicated than your car, a jumbo jet, etc., and usually well beyond the ability to be certain it has no security flaws.

Elements of your userland might be easy to understand but combine them and you have yet another highly complex machine.

Programmers 30 years ago would be totally amazed at the size of your "small" software.

As for wasteful, I have vastly more computing power than I need -- data is huge, but most software, even Windows, is comparatively small. You can run versions of Windows on machines that cost just a few dollars. I can virtualize and run multiple operating systems at once on a single machine without breaking a sweat. I also have two high-resolution 24" monitors, which I'm not going to waste on VGA text mode.


Maybe you have a very poor understanding of your systems, or you have given up trying to understand them.

Or maybe you think that, because any attempt at understanding systems is in your opinion "difficult", other users should "give up" and leave everything to some company like Microsoft.

Or maybe there is some other explanation. I don't know.

I am not sure why it matters to you how someone else chooses to use their own hardware, or what software they choose to use.

Not sure I really want to know. But I think you are wasting your time.

Your systems are larger and more complicated than they have to be. You have admitted it. You also admit your systems are full of flaws, and it does not sound like you are expending any effort to find and fix them.

Now you are telling me the size of your monitors. Who cares? If you see all software as "large" and "complex", then that is how you see it. So what? What does this have to do with anything?

I started these threads because Microsoft ran an article in the NYT about greater "security". But it's still closed source. Microsoft does not let users unconditionally review nor trim the size of the code.

If you think Windows is the best option for you, then go right ahead and keep using it to the exclusion of anything else. I'm not trying to persuade you to do anything. Do whatever you want. I wish you all the best.

But spare me the comments. I do not follow your reasoning, or whatever point you are trying to make, and I'm not interested.


Why didn't you just answer the question? Which small software did you use to post your comment? Why be so vague?


What difference does it make what small software program I am using?

You do not believe such a program exists?

Or maybe you want to critique the program? That would be typical commenter behavior, but it is totally irrelevant to the discussion.

The topic is Microsoft software and the relationship between code size and probability of security flaws.


> What difference does it make what small software program I am using?

You brought it up and now you don't want to say; I find that odd. It's hard to have a conversation if you're going to be so... cryptic.

> Or maybe you want to critique the program?

Maybe I want a significant example of a "small program". At this point, neither term has been well defined. You called the program used to make your comment "small", so tell me what it is.

> The topic is Microsoft software and the relationship between code size and probability of security flaws.

The greater the code size the greater the probability of bugs. And security flaws can be exploited utilizing the combined flaws across different independent programs or included libraries.

My point is the only programs so small that security issues are a non-issue are either not interesting or combined with other programs to do something useful.


"The greater the code size the greater the probability of bugs."

Right. I think we are done here.

Microsoft Windows is larger than any OS I know of, and only keeps growing.

To answer your question, I am using a text-only browser based on the links project, with my own minor modifications.

I still have no idea why you and the other commenter are asking. It seems quite clear that neither of you has any legitimate interest in such programs; otherwise you would not be challenging my statement of such a simple fact as the quoted sentence above, as it applies to Microsoft.

I strongly disagree that "small" programs are not useful. As I have previously stated, I use them every day. I use programs such as sed and netcat to interact with the www even more than I use a browser.
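
To make that concrete, here is roughly what the netcat-style workflow amounts to, sketched in Python rather than a shell pipeline purely for illustration (example.com is a placeholder host; plain HTTP only, HTTPS would additionally need the ssl module):

    # Speak HTTP to a server "by hand" over a plain TCP socket, no browser.
    import socket

    HOST = "example.com"   # placeholder
    request = (
        "GET / HTTP/1.1\r\n"
        "Host: " + HOST + "\r\n"
        "Connection: close\r\n"
        "\r\n"
    )

    with socket.create_connection((HOST, 80)) as s:
        s.sendall(request.encode("ascii"))
        response = b""
        while True:
            chunk = s.recv(4096)
            if not chunk:
                break
            response += chunk

    print(response.decode("latin-1")[:400])   # headers plus the start of the body

In practice netcat does the TCP part and sed does the text munging; the point is only that the moving parts are small enough to see.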


So Microsoft hired a PR firm.



