Think twice before encrypting your HFS+ volumes on High Sierra (bombich.com)
212 points by tobiasrenger on Oct 2, 2017 | 73 comments



A warning before FS conversion would be nice, given such a system isn't mountable by any previous OS version. This might be an example of where thorough QA seems to have gone downhill, if Apple Support thinks this behavior was "impossible".


I made a CCC backup of my Sierra (10.12.6) boot volume before upgrading it to High Sierra. When that didn't go smoothly (Hackintosh reasons, not High Sierra's fault), I was able to boot the backup, and 10.12.6 turned out to mount my now-APFS High Sierra volume just fine. So "any previous" isn't strictly true.


Great idea! That's exactly what the author of the article says at the bottom.

(Edit: reworded.)


Why discuss anything in the article at all if we have responses like yours to look forward to?


The HN guidelines explicitly ask to not leave comments like this.


Seriously regretting the upgrade. Am I the only one having random graphics issues, such as rendering bugs, latency, and freezing, on a multi-monitor setup on a Late 2013 MBP? Lesson learned: always wait a few months after a major version bump.


I generally wait until version n + 1 is at least in beta before installing version n. There is rarely a sufficiently compelling new feature to persuade me to risk the stability of my system sooner than that.

(Still pining for Snow Leopard :-)


Initially I was regretting upgrading as well. Crazy high CPU usage from 'kernel_task', which in turn made my machine unusable. It would completely lock up.

I ran 'kextstat' via the terminal to check which kernel extensions I had that might be causing the issues. The only third-party extensions I had were VirtualBox and Avira. Uninstalled Avira and now things are buttery smooth.


The window manager now runs on Metal 2, which should give a nice jump in responsiveness, but it is likely responsible for the graphics glitches people have been reporting, especially with older machines.

As a counter-anecdote, I have a rMBP 2012, and I haven't encountered any major bugs (no kernel panics, hangs, or lag), though I have seen a few minor glitches (the Safari tab bar didn't draw properly, but I only saw this once). On balance I'm happy with the upgrade, though: my system feels more stable and faster as a whole, and it fixed a few issues from Sierra.


You’re not the only one I’ve heard of, but I haven’t worked out a pattern yet. I’d agree with your conclusion though: unless you have two machines, wait until the .2 release.

Of course make sure you report the issues to Apple.


MacBook Pro (2014) with two external 4K screens here: Got glitches (some areas of the screen sometimes look like rendered memory corruption).

When I scale the external monitors’ resolution from 1920x1080 Retina to 2560x1440 Retina per monitor, macOS lags heavily and freezes. Sometimes, after a few minutes, it recovers and seems responsive. Then I drag a window and it becomes unresponsive again to the point where it won’t recover and only a hard reboot helps.

Upside: With Metal 2 the fan stays quieter. I can use two 4K screens and won’t get deaf from a jet turbine on my desk. Before High Sierra I connected at most one 4K screen and occasionally disconnected it because the built-in dedicated graphics card got really hot and caused the fan to run at high speeds.


Lots of people are having problems. Check out the replies here:

https://twitter.com/stroughtonsmith/status/91239022851777331...


Always wait a few months. I go out of my way to download the current OS to my Apple App Store account in case I need to re-install, and always maintain a Carbon Copy Cloner backup.

Edit: SuperDuper is also a fair alternative to CCC (I don't work for either; I forgot Bombich owns CCC).


> Am I the only one having random graphics issues

Something was changed in the rendering and CALayer is often nil during viewDidLoad. I assume it will break lots of apps that render custom UI elements.


Having precisely the same problem on precisely the same hardware.


Those issues haven't been a problem for me and my Late 2013 MBP.

VMWare Fusion + Bootcamp, however...


For Fusion and Bootcamp, especially if you converted to APFS, you'll need VMware Fusion 10. Then on top of that you'll have to remove the VM that points to the bootcamp partition and recreate that bootcamp VM.

Some have reported success with that scenario. Unfortunately not everybody has been that lucky; so far it is unclear why.


I haven't converted my disk yet, but that's good to know.


Keep an eye on this thread [0] at the VMware Fusion forum.

It seems to depend on whether you installed Windows 10 via BIOS or EFI. For BIOS-installed configurations you would have to disable SIP in order to use the bootcamp partition as a VM.

Hopefully VMware finds a better solution.

[0] https://communities.vmware.com/thread/572924


One more recommendation is to use the "try before you buy" option [0]. You have 30 days to test.

If it works out for you, buy a license and assign it to the install; the time limit will be removed (so no need to reinstall).

[0] http://www.vmware.com/products/fusion/fusion-evaluation


Same series of machine here. It restarted itself at 8am this morning while the lid was closed and it was on its side. That’s the only problem I’ve seen.


If these are bad enough, they are sometimes fixed by a fresh install (which has the effect of resetting settings to High Sierra defaults).


I restarted into MacOS for the first time since March, updated to High Sierra... wiggled around a little bit, and still feel this operating system is pretty much abandonware at this point.

Restarted back into bootcamp Windows 10 minutes later.

I'll be back next March to poke around some more.

I can't see MacOS existing in 1-2 years in its current form. Apple will unify everything and move to ARM.

I almost see this as a fact at this point.


Another reason is that read/write speeds on APFS encrypted volumes are 30-40% slower than non-encrypted volumes[1].

[1] https://news.ycombinator.com/item?id=15333036


As the speed test was using a beta of High Sierra / APFS, I performed some tests of my own. I wrote /dev/zero to the disk, and read Xcode_9_GM_seed.xip from the disk, using different block sizes with `dd`. I ran the read tests a few times, as there was a lot of variation in the results. I've excluded the first read result for each FS. The test was run on a Samsung Evo Pro 850 500GB SSD inside a Sharkoon USB-C case connected over USB-C to a MacBook Pro 2016 w/ Touchbar.

    Speed in MB/s
                HFS+    HFS+ Encrypted  APFS        APFS Encrypted
    1M write    498     473             490  (-2%)  444 (-6%)
    1M read     6896    6815            6860 (-1%)  6827 (0%)
    4k write    486     477             485  (-0%)  445 (-7%)
    4k read     2734    2709            2599 (-5%)  2545 (-6%)
While there is still some performance degradation between HFS+ and APFS, it's within single digits now.


Following the comments on the test method, I've run the 'Blackmagic Disk Speed Test'. The results are the following.

    Speed in MB/s
            HFS+    HFS+ Encrypted  APFS    APFS Encrypted
    read    513     513             508     506
    write   477     460             478     432
The earlier results of 7 GB/s are quite impossible with the Samsung Evo Pro 850 -- those max out around 500 MB/s. So my earlier test results are invalid. The new test results mostly show that both file systems can saturate the disk's raw throughput.

-- update

On my internal Apple SSD (PCI-Express) I've created a partition for testing. I've kept "Blackmagic Disk Speed Test" running for a few minutes and took an average of the tests. The results are in favor of APFS, beating HFS+ by 3-17% in throughput.

    Speed in MB/s
            HFS+    HFS+ Encrypted  APFS        APFS Encrypted
    read    2400    2240            2700 (+13%) 2400 (+7%)
    write   1900    1930            2000 (+5%)  1980 (+3%)


dd is not an acceptable benchmark. You need to eliminate the effects of buffering/caching to get accurate results.


/dev/zero is also not an acceptable source of data, since \0\0\0\0... is unnaturally easy to compress. A large real file (e.g. an installation ISO) would have been a better idea.
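For example, a rough sketch along these lines avoids both problems (untested; F_NOCACHE is the macOS fcntl for bypassing the buffer cache, hardcoded here because Python's fcntl module may not export it, and the test file path is just a placeholder):

  # Rough uncached benchmark sketch for macOS: incompressible random data
  # instead of /dev/zero, and F_NOCACHE so the buffer cache can't inflate reads.
  import fcntl, os, time

  F_NOCACHE = 48          # macOS fcntl command to disable caching on an fd
  BLOCK = 1024 * 1024     # 1 MiB per I/O
  COUNT = 1024            # 1 GiB total
  path = "testfile.bin"   # placeholder: a scratch file on the volume under test

  # Write incompressible data so a compressing/encrypting layer can't cheat.
  with open(path, "wb") as f:
      for _ in range(COUNT):
          f.write(os.urandom(BLOCK))
      f.flush()
      os.fsync(f.fileno())

  # (Running `sudo purge` between the write and the read, or remounting the
  # volume, guards against pages the write step left in the cache.)

  # Read it back with caching disabled and time it.
  fd = os.open(path, os.O_RDONLY)
  fcntl.fcntl(fd, F_NOCACHE, 1)
  start = time.monotonic()
  while os.read(fd, BLOCK):
      pass
  os.close(fd)
  print("read: %.0f MB/s" % (COUNT * BLOCK / (time.monotonic() - start) / 1e6))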


  Speed in MB/s
             HFS+     HFS+ Encrypted  APFS     APFS Encrypted
  1M WRITE   1375     1373            1372     933
  1M READ    2446     2340            2162     1304
  4K WRITE   852      797             502      378
  4K READ    2106     1486            2156     1001

https://news.ycombinator.com/item?id=15333754


This was the High Sierra beta, though.


That's really strange. Why would encryption affect APFS so badly but not HFS+?


Lots of reasons.

If crypto is part of the syscall path, then you definitely pay with latency. If you can push IO to a worker thread then you can release the user call faster. This also requires tuning the read-ahead cache more aggressively.

It could also be a change in crypto algorithm, unaligned page IO, or any other overlooked IO latency problems.


At a high level, I believe APFS encrypts files individually (and uniquely) instead of the whole drive. Have to do some more reading :)


Not instead of, but optionally in addition to (https://developer.apple.com/library/content/documentation/Fi...)

On Mac OS, it seems you only get full disk encryption, though (https://arstechnica.com/gadgets/2017/09/macos-10-13-high-sie...)


Appreciate the clarification. Both are better than just one.


Just tested this. Was consistently getting 2000+ MB/s writes before, now getting ~1600 MB/s. Reads are inconsistent, but most results are also 20-40% slower than before.


Just curious, at these speeds, what use case would really affect you, or people you can imagine?

30-40% in a stress test is significant, but if I'm not editing 4K videos, where would I notice this?

Even amongst us programmers who use horribly inefficient IDEs and make horribly inefficient Electron apps, disk speed isn't really a noticeable bottleneck right now?

Even a convoluted local environment to simulate a server environment with a ton of microservices wouldn't really be affected much by this


Disk speed is always a bottleneck, much more than CPU generally. Almost everything you do leads to, or is dependent on, IO speeds. If your disk speed drops by 40%, for example, every application will be at least 40% slower to load. That's a major hit. That horribly inefficient Electron app is now slow to load, not just slow to run.

Or, to put this another way: Do you have an SSD? Why?


The filesystem is cached in RAM so it’s too simplistic to talk about applications like that.


Yeah, not always true, and this is more about noticeable performance. The answer to why I have an SSD is speed and throughput, and the point of calling out this file system change is that there are diminishing returns to productivity in real-world use.

A benchmark is just a repeated process that is not reflective of real-world use, except in very narrowly tailored applications.


Also, AFAIK, if you use your disk for Time Machine backups, it cannot be APFS: https://support.apple.com/en-gb/guide/mac-help/disks-you-can...


Even if you stick with Sierra, precisely to avoid this sort of thing, you might fall foul of the disappearing password hint problem: https://apple.stackexchange.com/questions/299558/


So, I went to my local Apple Store because I could not format my boot volume to APFS on my MBP-15(non touch-bar). I had backed up and already wiped my drive. Clean slate.

The Apple Store genius flipped out and was like "why are you doing this?", called me a "fan boy" and just was in general disbelief that somebody would be upgrading a boot volume. He sort of dropped comments throughout our interaction about just how bad an idea this was. It made me feel weird about making the decision.


> APFS volumes cannot be reverted to HFS

Is that actually true? I’d heard that Apple had reversion instructions for those running the High Sierra betas on a system with a Fusion Drive, which they dropped support for in the final release.


The instructions are backup, reformat, restore.


Hah, good to know. I guess that’s what you get for installing a beta OS on a machine you care about.


Converting between different file systems in place is an interesting technical problem.

Let's say you have a disk that is formatted with filesystem S (for "Source"), and you want to change it to filesystem D (for "Destination").

Let's assume for now that S is not encrypted. Let's also assume that we have a fair amount of scratch storage available somewhere not on the disk we are trying to convert. (I'll address removing this requirement later).

One approach is to make something that, given a list of files (including metadata) and a blank disk, can create a D filesystem on that disk and then copy the files to it.

You also need something that given a file on S can give you a list of the block numbers that hold that file's data.

That's an S to D copier, not an S to D in place converter, but it is useful for testing your understanding of D. Once you are confident that you know how D works, you can then make a new version of your D copier.

The new version doesn't actually write anything out to the blank disk. Instead, it writes instructions for making the disk out to a file in scratch space. For blocks that would hold metadata (e.g., inode blocks on Unix filesystem), it records in the scratch space what their content would be and where they would go on D.

For blocks that would hold file data, you write into the scratch area the list of where that data resides on S, and where that data would go on D.

What you end up with, then, in the scratch space is essentially a set of block by block instructions for copying S to a blank disk and ending up with a D filesystem, where instructions are either "copy block X from S to block Y of D" or "Fill block Y of D with the following literal data".

So to do an in place conversion, just follow those instructions, but with S and D the same, right?

NO!!!!! That would likely totally trash things, because it is quite possible that a source block in one instruction may have already been overwritten as a destination in an earlier instruction.

What you first need is something that goes through the whole list of block copy/fill instructions and figures out an order they can be done in that is safe.

It is possible that there is no such order. In that case, you also need some free space that can be used for temporary storage. Given some free space you can always find an order for the block copy/fill that works.

The free space can either be on another device, or if S is not full you can find and use its free space.
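To make the ordering step concrete, here's a rough sketch of the idea in Python (just an illustration of the approach, not anything real; alloc_scratch_block is a made-up callback that hands back a free block to use as temporary space):

  from collections import defaultdict

  def schedule(instructions, alloc_scratch_block):
      """instructions: list of ("copy", src_block, dst_block) or ("fill", data, dst_block).
      Returns a reordered (and possibly expanded) list that is safe to run top to bottom:
      no step overwrites a block that a later step still needs to read."""
      # Drop no-op copies (src == dst); they would otherwise look like 1-cycles.
      pending = [i for i in instructions if not (i[0] == "copy" and i[1] == i[2])]
      ordered = []
      while pending:
          # Count how many pending copies still need to *read* each block.
          still_read = defaultdict(int)
          for i in pending:
              if i[0] == "copy":
                  still_read[i[1]] += 1
          # Safe to run now: any instruction whose destination nobody still reads.
          ready = next((i for i in pending if still_read[i[2]] == 0), None)
          if ready is not None:
              pending.remove(ready)
              ordered.append(ready)
              continue
          # Everything is blocked, so there is a cycle.  Park one contended block
          # in scratch space and point its readers at the scratch copy instead.
          victim = pending[0][2]            # a destination some pending copy still reads
          scratch = alloc_scratch_block()   # a real tool would recycle these
          ordered.append(("copy", victim, scratch))
          pending = [("copy", scratch, i[2]) if i[0] == "copy" and i[1] == victim else i
                     for i in pending]
      return ordered

For a simple two-block swap, schedule([("copy", 10, 20), ("copy", 20, 10)], alloc) comes back as: save block 20 to scratch, copy 10 over 20, then copy the scratch block over 10.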

Anyway, once you do all this, you end up with a simple list of data move/fill steps that will convert S to D in place. You just have to figure out how to actually execute them. The tricky case is when S is your system disk so you cannot un-mount it. Your OS might get very unhappy if you start converting it in place, and you also have to worry that it might write things to it in the middle of the conversion that could corrupt it.

Probably best would be to do the conversion booted from something like a Linux live CD.

I said earlier that you needed some separate scratch space to store the information for the new filesystem. If there is free space on S you can probably use that.

This comment is getting long, so I'm going to cut it off here. In a reply to it I'll toss in a couple of implementation tricks that can be useful, especially if D is a proprietary filesystem that you do not have a complete spec for and so cannot make your thing that figures out where to write D metadata and file data.


(continuation of above comment)

If you do not have a good spec for D, a kludge is to make a virtual disk driver. Make a blank D volume on that virtual disk. Then you can copy everything from S to D, except you replace the actual data for each file with blocks of the form: <signature> <block # on S> <some constant fill pattern>.

For each block written that does NOT match that pattern, your virtual disk driver actually does save the data somewhere. For blocks that do match that pattern, it just records what block on S they came from.

From the information gathered by the virtual disk driver you have enough information to construct a D file system from the S filesystem.
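A minimal sketch of that interception (the signature and fill pattern are made up, and a real driver would of course sit at the block-device layer rather than being a Python class):

  import struct

  SIGNATURE = b"S2D!"   # made-up marker written in place of real file data
  FILL = b"\xAB"        # made-up constant fill byte

  def make_marker_block(src_block_no, block_size=4096):
      """What the copier writes instead of one block of a file's real data."""
      body = SIGNATURE + struct.pack(">Q", src_block_no)
      return body + FILL * (block_size - len(body))

  class RecordingVirtualDisk:
      """Write path of the virtual disk: marker blocks are turned into a
      'block Y of D comes from block X of S' note; everything else (D's own
      metadata) is stored literally."""
      def __init__(self, block_size=4096):
          self.block_size = block_size
          self.metadata_blocks = {}   # dst block # -> literal bytes
          self.data_map = {}          # dst block # -> src block # on S

      def write_block(self, dst_block_no, data):
          prefix = len(SIGNATURE) + 8
          if data.startswith(SIGNATURE) and data[prefix:] == FILL * (self.block_size - prefix):
              (src_block_no,) = struct.unpack(">Q", data[len(SIGNATURE):prefix])
              self.data_map[dst_block_no] = src_block_no
          else:
              self.metadata_blocks[dst_block_no] = bytes(data)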

If you do not have an operating system that understands both S and D (common in dual boot scenarios), you can still do the above, but first you have to have a step where, from the OS that understands S, you run something that makes a pseudo dump of S. For example, a modified version of tar. This modified tar would act like normal tar, except instead of copying the actual file data, it would just make a list of disk blocks.

Then you switch to the OS that understands D, and you can use your virtual disk driver and the output from the modified tar to make the virtual D filesystem and gather the necessary information.

Note that this only has a chance if D does not care about the actual contents of files. If D does care, for example maybe it stores a checksum of the file in the metadata, then we are screwed because we do not have the original data in this scenario. Whether or not this can be worked around depends on just how much we know about D.

When you are ready to do the actual conversion, you face the question of where to run it. If S is not your boot volume, you might be able to get away with dismounting it and doing the conversion under your normal operating system.

When we were playing around with something like the above in the late '90s (part of a project considering making a PartitionMagic competitor, although we didn't get much past the experimenting stage before management decided the market was too small, PartitionMagic was too dominant in it, and their patent was scary and so cancelled the project) we were leaning toward using a Linux live CD (for everything, not just the actual copying).

You probably want the code that actually goes through the block copy/fill list and executes it to have some sort of fault tolerance that allows it to pick up where it left off if something interrupts it such as a power failure.


I think it is easier. Split your source disk into:

- directory blocks that describe the directory structure and, for each file, where to find this data on disk.

- data blocks that contain actual file data.

- free space.

Chances are you don’t need to touch the data blocks, and you have the free space as scratch space. All you have to do is to write a new set of directory blocks for the new disk format, and then delete the set of old directory blocks. If the on-disk format is flexible enough w.r.t. the location of directory and data blocks and you’re somewhat lucky, you can mostly write the new set of directory blocks in existing free space on the source disk.

That would lead to an intermediate state where the source disk still has the old format, but has one extra file that contains most of the new directory structure.

The final, risky part then is fairly small: overwrite the disk’s root blocks so that the system sees the disk as having the new file format with the new directory structure.

If the disk formats use different disk blocks as first starting points, you can even have a disk that can be used in either format (IIRC, that was possible with FAT and HFS. FAT has its master directory block in sector 0, HFS has it in sector 2)


It's possible to upgrade an ext4 filesystem to btrfs, and it's done the way you're describing. The end result is a btrfs filesystem containing the same file hierarchy, with a bonus: an additional raw file whose content is a disk image of the ext4 filesystem (pointing to the same blocks as the files), which allows a rollback.


Yeah, that would be a good approach if you have enough free space.

You could also have a preprocessing step where you rearrange files on the source disk so that the free space is arranged optimally for holding the directory structure for the new filesystem.

Speaking of rearranging files...another fun technical problem is doing a (mostly) portable disk defragmenter. Given one filesystem/OS dependent function,

  block_addr * blocks_for_file(char * pathname)
which, given a file, returns a list of the addresses of the disk blocks that contain the file, and a filesystem/OS-dependent function that can set the metadata (ownership, permissions, etc.) for one file to be the same as another file's, you can write a defragmenter where everything else just uses portable functions, such as C stdio. It won't be quite as good as a defragmenter that knows filesystem internals and does low-level writing directly to the disk, because the (mostly) portable defragmenter cannot easily move directory blocks around in a predictable way. Still, you can get most files defragmented.

You can first build a map from files to disk blocks with the blocks_for_file function. You can also map the free space by filling the free space with files and using blocks_for_file to map those. When filling the free space to map it, you should fill it with many one block files rather than a few large files, for reasons that will soon become apparent.
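The free-space mapping step might look something like this sketch (blocks_for_file is the one OS/filesystem-specific helper; everything else is portable, and the block size is an assumption):

  import os

  BLOCK = 4096   # assumed filesystem block size

  def map_free_space(filler_dir, blocks_for_file):
      """Fill the free space with one-block files until the disk is full.
      Returns {block_no: filler_path}, so a later step can vacate an exact
      block simply by deleting the filler file that sits on it."""
      os.makedirs(filler_dir, exist_ok=True)
      free = {}
      i = 0
      while True:
          p = os.path.join(filler_dir, "filler_%d" % i)
          try:
              with open(p, "wb") as f:
                  f.write(b"\0" * BLOCK)
                  f.flush()
                  os.fsync(f.fileno())
          except OSError:       # ENOSPC: the disk is now full, so free space is mapped
              try:
                  os.remove(p)  # clean up the partial file, if any
              except OSError:
                  pass
              break
          blks = blocks_for_file(p)
          if blks:
              free[blks[0]] = p   # a one-block file pins down one known free block
          i += 1
      return free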

Then suppose you want to move fragmented file X to blocks B, B+1, B+2, ..., B+N-1, where N is the size of X in blocks. First you find all files that overlap [B, B+N-1]. For each file Y that you need to move out of the way you do as follows (remember, earlier we filled up all the free space with small files, so right now the disk is full):

1. Delete just enough of our free space filler files to free up enough space for Y.

2. Copy Y to Y.new, including copying ownership and permissions.

3. Delete Y.

4. Rename Y.new to Y

5. Fill up the space that was freed in step #3 with one block files.

After you have done this with each file that overlapped [B, B+N-1], all of those files now reside in the original free space, and [B, B+N-1] is covered with small files you created. You can now delete those files, leaving the disk with just enough free space to let you make a copy of X, which will end up in that free space and be consecutive. You can then delete the original X, and fill the newly free space with more one block files, and we are back where we started, but with X defragmented.

Repeat for every other file you need to defragment.

What I like about this approach is that your defragmenter doesn't have to know anything about how the underlying filesystem works.

Going the other way, defragmenting when you do know intimate details of the filesystem, there is also a very cool albeit probably impractical approach to defragmenting.

First, you figure out where everything currently is and where the free space is.

Then, you have to peek at the filesystem's in-memory data structures to find all the state that it keeps at runtime concerning allocation.

You need to know enough, and grab enough runtime data, that you can do all of the following:

1. Accurately predict where a file copy operation will place the new file,

2. Accurately predict how that will change the internal state,

3. Accurately predict how file deletion will change the internal state.

These all have to be good enough that, given the state of the filesystem (on disk and in memory), you can look at a proposed series of copies and deletes and correctly predict what the final layout of the disk will be.

When you can do that, you can in theory then write a defragmenter that looks at the state of the disk and the in-memory state, decides how it would like the disk to be arranged, and then writes out a shell script that accomplishes that entirely with a long list of cp, rm, chown, and chmod commands (or the equivalent if you are not on a Unix or Unix-like system).


That portable defragmenter probably could run into problems because writing a single block file can take more than one block on disk (the file system may need extra directory block(s)). Instead of deleting the files, setting them to zero length might work better.

”and then writes out a shell script that accomplishes that”

You better use a different device for that.

Firstly, you don’t know how long that script will be, and its length will affect what commands to use.

Secondly, if you just make it long enough up front, it may occupy space that, at the end of its run, must store other data.

Here’s another even more impractical idea: if all you know is that your disk does first fit, can you write a program that, after whatever amount of disk I/O, defragments a drive? I think you can (ignoring blocks used to store directory information)


Kind of tangentially, the defragmenter stuff reminds me of another interesting thing you can do given low level information about layout and access. I'm not sure how much of the following is still applicable on modern filesystems and hardware. It was pretty effective back at the end of the 20th century and early 21st century.

Consider a program like Photoshop or a web browser. If you watch the I/O it does while starting, there are a lot of cases where it opens some file, reads a few k, then goes and reads from a bunch of other files, and eventually comes back and reads more from that first file.

It often happens that the data it reads from that first file is actually consecutive on the disk, but because it read it in two separate reads separated by many reads from other files it has to do a seek when it comes back for that second part.

This typically happens for many different files during a launch. Font files, dynamic libraries, and databases, for example.

If you record the I/O sequences during many launches of a given program, you also find that they are mostly the same. There might be a few differences due to it making temp files, or due to differences in the documents you are opening on each launch, but there is also a lot of commonality.

At this point you can get clever. Make something that can tweak the I/O requests made during application launch. When a program starts launching, your tweak thingy can check to see if you have a log of a previous launch. If you do it can load that and then for each I/O during the launch it can predict if data beyond the extent of that particular read is also going to be needed. If so, it can add another read to grab that data, reading it into a temp buffer somewhere.

That might seem pointless, because the launching program is still going to come back and try to do a read from that same part of the file later. Yes, it will...but now the data from your early read might still be in the system's file cache, saving a seek.

I'm simplifying a bit. What you would do in practice is analyze the logs of the prior launches, identify which requests caused seeks, and then taking into account the size of the system's file cache and whatever you know about how the system cache works, figure out which reads should be extended and which should not be (because they don't incur extra seeks, or because their data won't hang around in the cache long enough).

Basically, you are preloading the cache based on your knowledge of what I/Os will be upcoming.

On Windows 98 doing this could knock something like 30% off the launch time for Microsoft Office programs, Netscape Navigator, and Photoshop.

I was curious once if this would work on Linux, probably around 2000 or so. I made some logs of Netscape launching by simply using strace to record all the opens and reads that occurred during a few launches.

I then identified several small files that had multiple reads during launch separated by reads of other files, and then make a shell script that just did something like this:

  # read each file fully once so its blocks end up in the buffer cache
  cat file_1 > /dev/null
  cat file_2 > /dev/null
  ...
  cat file_N > /dev/null
  exec /path/to/netscape $*
where file_1, ..., file_N were some of the files that had multiple interleaved reads during launch. I made no attempt to just read the parts that were needed (which could have been done with dd) as I just wanted a quick test to see if there was a hint that things could be sped up.

Launching netscape via my shell script turned out to be something like 10-15% faster than normal, if I recall correctly. I was surprised at how well it worked considering that doing it this way (do all the cache preloading up front) should result in fewer cache hits than the "preload the cache for each file the first time that file is accessed during launch" method.
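If I redid that test today, a slightly fancier sketch would preload only the regions a logged launch actually touched, instead of cat'ing whole files (the one-record-per-line "path offset length" log format here is just something I'm inventing for the example, distilled from the strace output):

  import os

  def preload_from_log(log_path):
      """Pre-read just the byte ranges a previous (strace'd) launch touched,
      so they are sitting in the page cache when the real launch asks for them."""
      with open(log_path) as log:
          for line in log:
              path, offset, length = line.split()
              try:
                  fd = os.open(path, os.O_RDONLY)
              except OSError:
                  continue        # the file has moved or vanished since the logged run
              try:
                  os.pread(fd, int(length), int(offset))   # discard the data; caching is the point
              finally:
                  os.close(fd)

The wrapper script would just run that before exec'ing the application, same as before.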


Mac OS X did that in version ??? to speed up system booting. See http://osxbook.com/book/bonus/misc/optimizations/#ONE, which claims it halved boot time.


Don't think twice. Keep backups and encrypt your laptops if you care about any of your data and your laptop travels beyond your desk and a locked safe.


Glad I upgraded to Sierra just before High Sierra came out. Always good to be a release behind, and upgrade to the latest if there's a reason to, or 6 months or a year after all these wrinkles are ironed out.

APFS looks very promising, but it's hard to let Apple use me as a live beta tester.


> Always good to be a release behind

You at least patch your system right? Being behind did Equifax no favors.


Apple maintain security updates for at least 2 previous revisions of macOS. As long as you keep your system updated you’re as secure as the current version.


There are unfortunately many security fixes that are not backported. A quick Google search turns up numerous incidents: https://www.google.com/search?q=apple+security+kernel+not+ba...

There are also numerous incidents mentioned by Google Project Zero, if you read through their macOS disclosures.

It’s definitely unsafe from a security standpoint to run anything other than the latest release.


I wasn't aware of this.

Still, running something like Little Snitch, sitting behind a router, and using some browser plugins can go a long way. Until Sierra, I had never had an upgrade that went smoothly; something critical I used always broke.

Unfortunately Apple's OS updates are as bad as a zero-day exploit in how they can knock one's setup inoperable, sometimes for many weeks or months until there's a way to fix, or revert.

The software vendors I use seemed to be out ahead of the High Sierra train, which was encouraging.


> Unfortunately Apple's OS updates are as bad as a zero-day exploit in how they can knock one's setup inoperable, sometimes for many weeks or months until there's a way to fix, or revert.

You do back things up, right? Disk cloning through Carbon Copy Cloner is super fast and restores are painless.


In my mind I don't own a macbook as much as I own a warranty and an immediately bootable backup.

Apple's backups were the reason I switched to Mac from Windows a decade ago. Can't be beat.

I run Time Machine on one HD at the office and Carbon Copy Cloner on another HD at home, all through the USB hub.

It's not a backup if it doesn't exist in more than 1 place.

I was referring to the case where half your apps don't work and the other half benefit greatly from the new version.


Ouch. I always upgrade within a couple of months though so I guess I've never noticed this. Certainly APFS has made my 5 year old Retina MacBook Pro feel faster when compiling so I'm not complaining too much.


Yup.

Being a major release behind (and staying patched) is something I picked up from using Windows for far too long.

Once you have an environment you rely on more than you care about having the latest, it makes a case for having a personal device versus a work device.


Tell that to all the Debian stable folks.


I just wish Apple did LTS releases (for both OS X and iOS). Yearly major releases are a major hassle.


iOS developers don't get that luxury for long.

Usually the way they do it: the App Store requires apps to work with some new feature on the latest iOS, or nothing gets approved. For the IDE to access these latest features, Xcode needs to be upgraded to the latest version. For Xcode to upgrade to the latest version, your MacOS has to be the latest version. And sometimes, to upgrade to the latest MacOS version, your computer has to be a newer model.

Apple stock goes up. Apple Store employees counter your criticism about the new gimped device by pointing to the large sales orders.

You question your sanity as the Apple distortion field warps around you. "Maybe this is a good device....."


> Apple stock goes up. Apple Store employees counter your criticism about the new gimped device by pointing to the large sales orders.

The number of sales orders caused by iOS developers whose hardware is too old to upgrade to the new major OS is going to be so small as to be meaningless.


Doesn't really matter now, does it?

Speculators gonna speculate, and salespeople gonna say non-objective salespeople things.


Haha.. iOS development is a very fair point - I keep a Mac Mini updated for that purpose, but still run into snags when trying to run xcode.

I never do major MacOS upgrades without doing a full carbon copy clone disk image to restore if things go sideways.


I upgraded one of my systems from El Capitan to Sierra this morning. I had a moment of panic when Sierra didn't show up on the Purchased tab of the Mac App Store, although I did in fact have a working copy of "Install macOS Sierra" in my Applications folder.

I tend to like these refinement updates, like Snow Leopard and Mountain Lion, but I have to say that APFS gives me pause. If Apple doesn't have enough confidence in it to deploy on anything other than SSDs, then it doesn't sound like it's fully baked.


It's not a matter of "confidence"...it's just designed with the assumption you're using it on an SSD. It won't work as well on a spinning drive.


Sierra was the first smooth upgrade I've experienced on Apple. Every previous version was fraught with core items breaking that couldn't be fixed, and required restoring backups. I find it's not the new features that mess things up, but Apple's desire to break existing functionality.



