You also have no idea how http://files.dryga.com/boxes/osx-mavericks-0.1.0.box was built (which is my main bugbear with Docker and Vagrant images), which kext hacks were used, whether the image has been tampered with, and so on.
Edit: okay, this was a bit harsh. It looks like the creator of the images only uses them on a Mac OS host, so no kext hacks are needed, unlike when building a Hackintosh. I'd still rather have a script of some kind that takes a dmg installer and builds this image. It looks like https://github.com/radeksimko/vagrant-osx will do something like that - linked from Andrew's repo, so thanks!
Can a vanilla OS X DMG / ISO be installed with this text file modification? (last time I looked, you had to replace one of VirtualBox's DLLs on Windows, for example)
Sorry, but I can choose "Mac OS X" right from the "machine type" (It then lists a number of OS X versions from Snow Leopard up to El Capitan) when creating a new machine on VirtualBox 5.x on Linux, and I'm fairly sure it was the same on 4.x too. How is this "unsupported"? Don't know about VMware though.
Edit: I must add that there are no Guest Additions available for OS X, so if you go on and create the machine, it won't be able to benefit from decent graphics integration, shared clipboard, drag & drop support - but this is IMHO far from OS X not being supported.
It's really sad that the only way to get macOS running in a virtual machine is with hacks and tweaks. I'm working on a cross platform desktop app written in Qt and while it should theoretically work on macOS I haven't had a chance to actually try it, because I don't own any Apple computers. Apple is shooting themselves in the foot.
How is that exactly? A huge majority of developers in your situation go out and buy a Mac: another hardware sale in Apple's pocket. This is part of their business strategy: developers are forced to participate, and by and large they do just that.
It hurts you far more than it does Apple by refusing to participate. If you believe you're "hurting Apple" by not developing for their platform, think again. Even if your software is free to download, it's not Apple that suffers - it's your (potential) users; and they will blame you, not Apple, for the lack of support.
Apple wins this battle every time. Welcome to the ecosystem. :)
So I'm one of those developers that got forced into buying a Mac Mini in order to test my software on MacOS. On one hand, you're correct that Apple "won" because I was pushed into spending $500 on their hardware. On the other hand, the parent is very much correct. Candidly, I'm still angry at Apple for making their ecosystem a pain in the ass to work with.

Hell, I would have paid good money for a MacOS virtualization license, because the current situation screws up my workflow. Every other OS I work with I can virtualize on my laptop, which means I can work remotely without internet, such as on airplanes or in cheap hotels. For MacOS, I have to ssh into the box, but even that is a royal pain in the ass because a lot of the ecosystem flat out doesn't work without the desktop open. Environment variables are set differently. Some programs like MATLAB will not run. (Yes, you can run MATLAB on the console through an ssh session, but you have to VNC into the box first to make sure the desktop is running.)

I hate it, which means that I flat out tell my customers not to use Apple hardware if at all possible. So, yes, they got $500 out of me, but I actively work to ensure they get less business in the long run, because their ecosystem is a wreck. Just letting us virtualize MacOS would go a long way toward fixing that problem.
Yeah, sure, it's possible. It's also against their EULA, and they could break it (just like Hackintoshing) at any point in time, and then bam: half of your customer base is receiving untested software because you no longer have access to your virtualized OSX until you fix it (just like every time an OSX/Xcode update is rolled out). Is that worth saving the $500? For most of us, it's not.
Sure you can mitigate it by not doing updates, having a test machine to receive the updates and verify that it still works, yadda yadda.
I managed CI infrastructure for iOS/Android developers a few years ago, and the hoops we had to jump through with Xcode, EULA-breaking virtualization, etc. were something I'd never experienced before. Hopefully it's different now, but we used to have to run farms of Mac Minis because we couldn't have Jenkins/Bamboo trigger more than one Xcode test/build run at a time. So we'd have these $1500 Mac Minis sitting around doing ONE job at a time whilst the Android builds could easily do 1-10 things simultaneously.
Everything felt like Apple was fighting our developer experience. It was even worse for our business model, as we developed 3rd party applications for other companies, which brought its own headaches with signing, TestFlight, getting demo/beta apps to clients before release to the App Store, etc.
> but we used to have to run farms of mac minis because we couldn't have jenkins/bamboo trigger more than one xcode test/build run at a time.
As someone who is interested in OSX testing like this - i.e., not arguing with you - would it be possible to have a Mac Mini use VirtualBox with real OSX installations running in parallel?
From what I've heard in this thread, VirtualBox etc. support OSX just fine, on OSX machines. Hypothetically, you could parallelize on a single machine via VirtualBox, no?
I understand Apple is still making it a PITA though. Not disagreeing, just learning why you were only running a single instance per box. :)
I last worked on this stuff 2-5 years ago so I'm not completely clear; I'm probably very wrong on this nowadays, so hopefully someone can give you better info. IIRC you can run two OSX guests on an OSX host with Apple hardware. Anything that isn't using OSX as the base OS is EULA-violating. I see a lot of people doing it via ESXi. I'm not sure if the EULA has changed and that's OK now, or people are just doing it. Thankfully I'm out of that side of CI/CD and plan to never deal with it again.

Our initial goal was to have one OSX system running multiple Xcode builds/tests. I think at some point we may have gotten it working by having different targets set up, but I don't recall the specifics.

Sorry I can't be of more help. We worked with 3-4 other app developers and it seemed like all of us were doing completely different workarounds to reach the same goal. I'd love to know how King or any of the other huge (by number of apps) developers deal with these things. It doesn't seem like it's talked about much.
Mac OS X is mainly a desktop OS, and those of us who have known Apple since the Mac Classic days don't have any issue with their desktop culture, which wasn't any different from the Atari and Amiga ones.
That's the point. I don't want to have to ssh into it. However, since I can't legally virtualize MacOS on my laptop I'm forced to in order to make sure that my software works fine on MacOS. That's why I'm resentful toward Apple. They make it really difficult for people to develop cross platform applications that work on MacOS and the only response I tend to get is a smug, "Eh, just buy a Mac Book."
I'm just guessing here, but I don't think that Apple is all that interested in making it easier to develop cross-platform applications. From their point of view, the best applications are OS-specific ones. So if you want a Mac application, you're best coding on a Mac, with Mac tools, and within that hardware eco-system. From this point of view, the same argument applies with Windows, Linux, etc... (albeit with the caveat that hardware is more varied).
Apple isn't trying to make that way of working easier... they are trying to make life easier for macOS-specific devs. Apple is looking for well integrated applications, and as I'm sure you know, that's really hard to do well cross-platform as you often end up coding to the lowest-common interface.
Apple's interest in making cross-platform applications easier waxes and wanes with their market share. If I remember correctly, they were quite keen on Java and making it work well at one point.
They still heavily use Java on their backend systems, but that's not what we're talking about here...
You're right - they did have a Java-Cocoa bridge early in OS X, but deprecated it quickly when it proved difficult to translate Obj-C semantics into Java. I remember trying it, but it wasn't really that easy to use. Even in this case though, the idea was to write Mac-specific applications, just in a different language. Most of the application's magic was still in the ObjC/Cocoa layers, which weren't cross-platform (or were they? I can't remember if there was a Windows port at some point).
While this would have made it easier to write cross-platform applications, realistically, that was never the goal. I think the goal of the Java-Cocoa bridge was to offer a backup plan in case too many devs didn't like Obj-C. At the time, Objective C was a novel language for many people. Once Obj-C got enough mindshare and it looked like it got enough of a buy-in from developers, Apple ditched Java quickly. I'm sure the licensing issues from Sun in the early 2000's didn't help matters here.
Also, it's only fairly recently that Apple turned the Mac Java port back over to Oracle. For the longest time, Java developers on Macs were always a version or two behind because they were on the OS X release schedule, rather than the Oracle/Sun release schedule.
The Java-Cocoa bridge was only an early bet, as they weren't sure if Mac developers were keen on jumping into Objective-C.

They dropped it the moment they realized most of them were happy to use Objective-C, and somehow they were the first company to turn the tide back from JIT to AOT compilation.
Yes, back in the NeXT days there was a Windows version of the whole Objective-C dev environment for Windows, and also an initial port of Cocoa to Windows when it was still called Rhapsody.
As for fairly recent, it was almost 10 years ago. :)
Not only Apple; that was the culture of 80s and 90s home computers, as you mention.
Which again is hardly different from UNIX culture, which doesn't care what the hardware is capable of as long as it runs UNIX. And given that the UNIX culture never cared for the desktop experience (Xlib and Motif, really?!), you get a clash of cultures.
> only response I tend to get is a smug, "Eh, just buy a Mac Book."
Of course.
Back in the day we had to buy Commodore 64, Spectrum, Spectrum +, Spectrum +2A, Spectrum +3, Atari ST, Atari Falcon, Amiga 500, Amiga 1200, PC, .... and Mac.
So those of us that enjoy a packaged experience of OS + Software + Dev Tools are more than used to it.
"The hardware doesn't matter, just give me a CLI and a 2D frame buffer" isn't the culture of desktop computers.
6502 on the Commodore, z80 on the ZX's, 68k on the Ataris, Amigas, and Macs, x86 on the PC (and different hardware architectures, even when they had the same family of CPU)...compared to x86_64 on a PC architecture for just about anything now, with the biggest hurdle being legal, not technical.
That's not an "Of course" situation. It's a "bang your head against the wall in frustration" situation. The hardware is commodity hardware, so the software runs on the same hardware that everyone owns, but you're legally required to pretend that it doesn't.
...and? I think I see the point that you're trying to make, but I don't think that it's pertinent to the discussion. A tablet is still a "personal computer" (solely as a descriptive term) without the keyboard, and regardless of Apple marketing. But it's not a "personal computer" in the common-usage sense of that term, which doesn't include (most) tablets, smartphones, PDAs, programmable calculators, etc.
But, we digress. Tell me again how it's reasonable that hardware that will run OSes 1, 2, and 3 shouldn't be allowed to run OS 1, because the hardware doesn't have a fruit sticker on the side.
Old hardware was inherently incompatible software-wise, due to widely varying CPU and system architectures. ARM and MIPS hardware today somewhat mirrors that on a device-by-device level; every model has different interconnections between its components and often different mixes of components connected in. Luckily, we've got OSes that provide relatively robust hardware abstraction layers, so that the same software package from my 2010 device is highly likely to run on my 2016 device. I think that strengthens my argument: when even widely varied hardware isn't much of a barrier to software interoperability, why should we be satisfied with legal limitations?
And anyhow, the point was software development. Going into the definition of what is and isn't a PC is a bunch of BS semantics. Let's drill down to the devices that are pertinent to the discussion: The set of devices that are technically capable of native (as opposed to web) OSX and iOS development, and the set of devices that are legally capable of the same. Those are what the discussion is about.
No doubt. However, in the past, we physically couldn't do what we can do now. At the moment, this is a EULA problem. If Apple changed a single line, that'd solve a ton of problems for developers and they can still have their packaged experience.
> Apple wins this battle every time. Welcome to the ecosystem. :)
Yes, that's true, but we can also start being sincere about the fact that the whole story is BS. I've been recently assigned a mac by my university (never used one before) and I have to say that the ecosystem is at least unhealthy.
Maybe Mac users don't really care, but having to install third-party software for common tasks like reversing scrolling, keeping the system from going to sleep, or writing to NTFS is, in my opinion, a symptom of too strong a top-down relationship between the OS and the user. Same goes for the whole "OSX only on Apple hardware" story.

So no, Apple isn't shooting themselves in the foot, but surely they are disrespecting the end user. You can excuse that by saying that it is business, but I would reply that it is still true, and by avoiding talking about it we are implicitly doing Apple a favor by hiding the issues with what would otherwise be a great piece of software. We should say more often that these practices are bad for the consumer.
Reverse scrolling? It's an option on the touchpad / mouse settings to scroll naturally/unnaturally isn't it?
Going to sleep - it's under power options in Settings.
Also writing to NTFS is not a problem they care about. When's the last time you wrote to ext4 or HFS+ on Windows natively without extra tools?
It's exactly the same here - they support HFS, HFS+ and FAT32 and VFAT. Windows supports NTFS and FAT32 and VFAT. Do we get upset with Microsoft because they don't support HFS+?
> Reverse scrolling? It's an option on the touchpad / mouse settings to scroll naturally/unnaturally
Yes, but do you want different settings for your trackpad and mouse at the same time? Then you need third-party software. OK, it's not as bad as I made it sound.
> Going to sleep - it's under power options in Settings
But I don't want to open the settings and change the behavior of the whole system if I right now need to have it awake for 10 minutes. Sure I could launch `caffeinate` on a terminal, but a clickable applet would be better. But of course it's either paid or buggy.
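For what it's worth, the bundled tool already covers the "awake for 10 minutes right now" case without touching system-wide settings; a macOS-only command sketch (so untested outside a Mac):

```
# Prevent idle sleep for exactly 10 minutes (caffeinate ships with
# OS X 10.8+), without changing anything in the Energy Saver pane.
caffeinate -i -t 600

# Or scope it to a task, so the machine stays awake only while it runs:
caffeinate -i make build
```

It's still command-line-only, of course; the point about a decent clickable applet stands.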
> writing to NTFS is not a problem they care about.
Which is to say, they don't care about something very basic that their users might need and that wouldn't take much implementation effort.

In fact I think you can simply `mount -t <ntfs_or_something>` your drive or add a line to `/etc/fstab`, but you won't have it in Finder.
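To be concrete, the unsupported route is roughly one line of config; a sketch (`UNTITLED` is a placeholder volume label, and note that Apple's NTFS write support is experimental and known to be risky):

```
# /etc/fstab (macOS): mount the NTFS volume labelled UNTITLED read-write.
# "nobrowse" is precisely what keeps it out of Finder.
LABEL=UNTITLED none ntfs rw,auto,nobrowse
```

Which is consistent with the complaint: the plumbing is there, but the user-visible integration isn't.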
> When's the last time you wrote to ext4 or HFS+ on Windows natively without extra tools?
This just makes Windows a worse OS. If you compare it to a real OS, like your favorite Linux variant, you can.
> Do we get upset with Microsoft because they don't support HFS+?
Actually I do. This is exactly what I'm talking about: it's just a business practice to enforce their own standards. The implementation wouldn't be so hard: they could just add a pop-up upon installation that says "This software is not guaranteed to work, use at your own risk".
In the end, it is clear that an OS which supports more filesystems (especially the most widespread ones) is better than the same OS without that feature, and I say we should complain that we can't get it.
So if I'm right, you're complaining that MacOS and Windows aren't Linux?
I think you are expecting the wrong thing from the OS. How do you keep Windows awake for 10 minutes only? Proper NTFS support on Linux is only recent (and I don't think it supports all the excellent features of NTFS).
If an OS vendor adds in support for something that is only partial or offers messages regarding "this isn't really supported", either a) nobody would use it, or b) people would use it and complain. It is safer to release something complete, or not at all. And since "typical" Mac users aren't trying to mount ext4, ReiserFS, XFS or NTFS on their machines I can see why they would not support it.
For comparison, when's the last time you used COM or DCOM on Linux or Mac? When's the last time you ran a Cocoa app on Linux? Does it sicken you that RedHat and SUSE won't pull their finger out and support these technologies? Why doesn't RedHat release a driver for SQL Server ODBC support independent of Microsoft? They must be attempting to force their own Linux-centric monopoly!
You can see how foolish this sounds. It is based on unrealistic expectations.
> You can change the way the scroll works from your Mac Settings, same goes for the power options.
It only works for a single device. So if you want to keep natural scrolling on the trackpad and use regular scrolling on an external mouse, you can't. For some reason the two checkboxes for this (even though they're under different panes) are linked.
It is that market of small utilities that puts money in the pockets of developers that care about the desktop experience, and that isn't exclusive to Apple systems.
This simply means that Apple would rather outsource the job and make some small developer happy and a few users (like me) unhappy, rather than improve their highly praised interface.
By the way, there is an ecosystem of excellent small open source utilities for Linux that are, in my opinion, superior to the ones on OSX, or at least to the ones I work with and am familiar with. Of course open source produces many useless tools, but I've also found unusable applets for the Mac, so...
No, it means there are some developers out there that are happy to earn their income and pay their bills developing software for desktop users, something that doesn't really exist in GNU/Linux.
Living from donations isn't really a sustainable thing.
Thanks. The problem is that opening a terminal is always a pain since spotlight sends me to the one already open. I know that I should set up a shortcut to open a terminal...
> How is that exactly? A huge majority of developers in your situation go out and buy a Mac
CI. You have to jump through inordinate hoops to have a CI system that tests on OS X. In the cloud there's Travis CI that supports OS X, or there's the GitLab runner that can drive Parallels or VirtualBox (incl. snapshot and rollback).
This is a big pain point for me right now. Due to the licensing we have two mac minis in the datacentre running VMware fusion with a few VMs on each. But the hardware is not suited to the role, performs abysmally and is a pain to manage.
We're currently moving our Linux, BSD and Windows CI from VMs and bare metal to OpenStack. It would be great if OpenStack could (legally) run MacOS X on the Linux hosts, or if we could have it use the OpenStack API to interact with the VMs via VMware, so we can use Ansible to maintain the VMs across the board.
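As a data point, the GitLab-runner route mentioned above is driven from the runner's `config.toml`; a sketch under the assumption of a prepared base VM named `osx-base` (the runner name, URL and token are made-up placeholders):

```toml
[[runners]]
  name     = "osx-virtualbox"              # hypothetical
  url      = "https://gitlab.example.com/" # hypothetical
  token    = "REDACTED"
  executor = "virtualbox"
  [runners.virtualbox]
    base_name         = "osx-base"  # VM to clone for each build
    disable_snapshots = false       # keep the snapshot/rollback behaviour
```

The host still has to be a Mac for licensing reasons, which is exactly the pain point.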
> A huge majority of developers in your situation go out and buy a Mac; another hardware purchase in their pocket
Yeah, not happening. I have some users who sometimes complain about something broken in iOS on Safari, I google, and it turns out to be a known Safari bug. I just tell them that it is their device which is broken, which they agree with because it works fine on their computers. And it works fine for other users with Android phones.
The people this hurts are the Apple end users. There are a few bugs they just have to deal with because it's impossible for me to test a fix. Tough luck for them.
This is however for a free product. Anyone is of course welcome to buy me Apple hardware, but I don't see that happening either.
I think the only way it hurts Apple is that no one runs OSX Server anywhere. I'm willing to bet Apple doesn't have data centres filled with MacPros, but instead has proprietary internal hardware for servers running OSX.
OSX is terrible to run your services on compared to any Linux/BSD variant out there with built-in package management. Apple is not tapping into that particular market, but considering how much they make off their consumer hardware I doubt it matters.
Or simply use one of the BSDs on commodity hardware for internal server needs.
At this point in time OSX/MacOS/iOS and Android/ChromeOS is pretty much the same thing. A basically proprietary platform (yeah yeah, i know about AOSP. But good luck using most apps without having Google's services installed) living on top of a vestigial _nix.
> At this point in time OSX/MacOS/iOS and Android/ChromeOS is pretty much the same thing. A basically proprietary platform (yeah yeah, i know about AOSP. But good luck using most apps without having Google's services installed) living on top of a vestigial _nix.
At least Google's stuff generally works from a Linux desktop; Apple's just refuses (including iCloud: apparently I am required to use a Mac or Windows machine to view pictures my mom posts; Apple seem to think that Linux browsers are incapable of displaying pictures and text).
> Apple wins this battle every time. Welcome to the ecosystem. :)
Haha, joke's on them - I simply won't test[1] my web apps against Safari!
1. But our QA does :(, so I guess they won. Fortunately Apple decided to create first-party WebDriver support as of Safari 10, so they at least recognize that helping developers test on Safari is important. Hopefully someday we'll get macOS disk images to test Safari à la modern.ie
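If it helps anyone, that first-party support is the bundled `safaridriver` binary; a macOS-only command sketch (the port number is arbitrary, and depending on the Safari version you may first need to allow automation, via `safaridriver --enable` or Safari's Develop menu):

```
# Start Apple's bundled WebDriver endpoint for Safari (Safari 10+).
/usr/bin/safaridriver -p 4444 &
# Any WebDriver client can now target http://localhost:4444/
```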
Recently I familiarized myself with Qt, which is often presented as the core Windows/Linux framework; half of Linux apps are Qt. Years ago I also knew GTK+ 2.x very well; the other half are GTK. From a developer's perspective, Cocoa + AppKit/UIKit are orders of magnitude more powerful and concept-rich, and that is easily seen in the UX once you get basic skills in them (i.e., stop believing "it cannot do that", because it mostly can).
They're the only vendor for the Mac platform and one of two competitors for mainstream desktop OSes. Tell me they don't have market power. It's just...self limited to a niche of a bigger market.
Microsoft also happens to be the only vendor of Windows. They get to dictate what goes in the OS, what is left out, what technologies they'll support, and what APIs and development platforms will exist in the future (MFC, C++, .NET). Is this a problem?
Also my Hotpoint dishwasher at home has a circuit board that I can only buy from Hotpoint. Does this mean they are abusing their position? Should I expect to be able to run something else on my dishwasher?
My car is a Volkswagen. I cannot take engine management software from a Chrysler and put it into my Volkswagen. Yet Volkswagen is a very large competitor in the "car" market. Does this mean that Volkswagen is abusing its position regarding the engine management software in my car?
If I am right, you are saying that Apple has a monopoly of power inside its own operating system. Yes, I think that's correct!
Where I work, my company has a monopoly on the features and timescale in the application software that we develop. I don't think anyone would ever call that a monopoly in the true sense of the word. Would you let me dictate what features go in your software?
Why would Apple let anyone else tell them what should go in their software?
Depending on what your software does, Darwin might be a decent option to create a VM for tests. I assume this is what Travis-CI does for their "macOS" support.
For me "all the Unix goodness" is a proper package manager, an environment I can modify as much as I want and nowadays too an environment that doesn't have yearly big updates, but a rolling release.
My desktop has stayed exactly the same[0] for almost seven years already. My configs follow me from job to job and from computer to computer. I don't want or need changes, just an editor and a proper *nix kind of operating system and no changes ever how things look or work.
And if I'm not using any CPU for my window manager, that's because I need it for other things: I have to compile pretty big Rust/Clojure/Scala projects daily...
That pretty much limits you to Debian unstable... Or Windows, if forcibly upgrading counts as "rolling."
But wouldn't a rolling release be the exact opposite of what you'd want if you want no changes, ever? And if that's the case, just don't upgrade macOS until Xcode forces you to, that usually gives you a few years. I stayed on Snow Leopard for years until Apple put a gun to my head, but Mavericks and Yosemite gave me no problems; everything worked just like it did before. (I still miss Snow Leopard, though...)
Debian Unstable isn't the only rolling release distro. Another popular one I can think of is Arch Linux.
Typically, changes in rolling releases are easier to deal with. At any given time the change is smaller, so you aren't overwhelmed by a large number of changes all over the place, as is sometimes the case with versioned releases; and the changes are fresher, so it's easier to feed back to the developers who made them.
Exactly. And when your installation is quite minimal, there's a very small chance the updates will mess anything up. My Arch installation has been running without problems for years because I don't have any massive desktop environments installed, and all the development services I need are launched through Docker.
Paradoxically, I find it easier to port applications to Windows than MacOS. Due to the efforts from the Cygwin and MSYS groups, just about every build script ports naturally to Windows. As long as I use MinGW, the resulting application is native and can be distributed.
On MacOS, there's a bunch of quirks that make it harder to port. Environment variables are different enough to require duplication in scripts (DYLD_LIBRARY_PATH instead of LD_LIBRARY_PATH, unless we need DYLD_FALLBACK_LIBRARY_PATH.) Certain utilities like gdb will not run because System Integrity Protection disallows them. In theory, we can sign gdb through a somewhat circuitous series of steps, except that this doesn't work if we want to ssh into the box. This requires turning off SIP, modifying some files, and then turning SIP back on. Technically, it's all possible, but it's one series of aggravations after another. Developing on a platform should not be this difficult.
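The library-path duplication can at least be reduced to one platform switch; a minimal sketch (the `$HOME/mylibs` path and the `case` structure are my own illustration, not taken from any particular project's scripts):

```shell
#!/bin/sh
# Pick the dynamic-linker search variable per platform: macOS uses
# DYLD_FALLBACK_LIBRARY_PATH (gentler than DYLD_LIBRARY_PATH, which
# overrides system lookups), everything else uses LD_LIBRARY_PATH.
LIBDIR="$HOME/mylibs"   # hypothetical install location
case "$(uname -s)" in
  Darwin)
    export DYLD_FALLBACK_LIBRARY_PATH="$LIBDIR${DYLD_FALLBACK_LIBRARY_PATH:+:$DYLD_FALLBACK_LIBRARY_PATH}"
    echo "DYLD_FALLBACK_LIBRARY_PATH=$DYLD_FALLBACK_LIBRARY_PATH"
    ;;
  *)
    export LD_LIBRARY_PATH="$LIBDIR${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"
    echo "LD_LIBRARY_PATH=$LD_LIBRARY_PATH"
    ;;
esac
```

It doesn't help with SIP and the gdb signing dance, though.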
> On MacOS, there's a bunch of quirks that make it harder to port. Environment variables are different enough to require duplication in scripts (DYLD_LIBRARY_PATH instead of LD_LIBRARY_PATH, unless we need DYLD_FALLBACK_LIBRARY_PATH.)
I use CMake and it's a walk in the park. I just install the dependencies using Homebrew. CMake adds all the necessary include/library flags and builds the application bundle. It's even much easier than Linux, where building a package on the next Ubuntu LTS is always a bit of a drag (updated dependencies, changed library paths, etc.).
Windows, in contrast, has been hell. Some of the dependencies of one particular project only compile with Visual Studio on Windows, not Cygwin/MinGW. So, the only reasonable way to compile the application is to get VC++-compiled versions of zlib, libxml, etc. Since nothing has canonical UNIX paths, you end up specifying a lot of paths manually in CMake.

And then you have to deal with things such as VC's standard library not having getopt, etc.
For me, macOS is the most comfortable development environment by far. Except that they should put out a Mac Pro with replaceable GPUs again. I use CUDA a lot, so the only practical option is Linux.
I use Homebrew and CMake as well, but it certainly has not been as smooth: my CMake scripts contain a series of fixes for MacOS. I also like CPack, but there's been less consistency on MacOS. Bundles don't work for multilanguage libraries, since things like MEX files can't be packaged as a framework. PackageMaker has been deprecated since Yosemite. OSXX11 is really X11-specific. DragNDrop sort of works, but it can't run any post-install scripts to do things like set environment variables; and, again, the process for doing so changes from release to release and depending on whether or not we're on the console. Conversely, the WiX generator on Windows allows me to package everything as I need it. Though less nice, NSIS also works well.
Certainly, that's not to say that everything is rosy and cheery on Windows. I just did a cold build from scratch on a new Windows license, and it honestly went as well as it did because I know exactly which tools to get. Mostly, it's that after banging my head against both the Windows and MacOS ecosystems, I can build a new development system vastly faster on Windows and have fewer headaches.
As a quick note, thank god for the Homebrew crew as that does make things much nicer.
As a final note, some of these headaches would immediately disappear if I could virtualize MacOS since it would fix the console login vs desktop login problems. Also, I could actually fix problems when working on an airplane.
> Windows, in contrast, has been hell. Some of the dependencies of one particular project only compile with Visual Studio on Windows, not Cygwin/MingW.
Seems like the issue is with the particular project you are trying to compile and you are blaming it on the OS.
Also, Visual Studio has its own package manager with tons of libraries and tools.
I am not blaming Windows. It's just that Windows is a mismatch for projects that come from a UNIX tradition. I work in a field where most software is UNIX-first (ML/NLP), so Windows was virtually never a serious option as a development environment.
Of course, much of this changes now with Windows Subsystem for Linux.
If everything is using cmake, and you don't need any inline assembly or fortran for scientific code, then maybe. But you end up in a parallel universe where everything that touches MSVC is just needlessly different than every other platform. If mingw gcc is working for you, then it's much easier for dev skills to transfer between building for Windows and building for unix. If you do need inline assembly and/or Fortran (LAPACK anyone?) then it's easiest to pretend visual studio doesn't exist.
Actually, Fortran works best if you combine Visual Studio with the Intel Fortran compiler. You can freely call C from Fortran and vice versa; furthermore, Intel's versions of BLAS and LAPACK are seriously faster than the reference BLAS and LAPACK you would build yourself on Windows.
It is also possible to link gfortran-compiled DLLs with C and C++ in Visual Studio. A few years ago I even called a gfortran DLL from C# using P/Invoke with no particular problem.
Of course you don't build the reference BLAS if you care about performance. But OpenBLAS is quite competitive, and open source. A non-free compiler is a big impediment to build reproducibility and accessibility for new contributors for a large open source project.
> A non-free compiler is a big impediment to build reproducibility and accessibility for new contributors for a large open source project.
Agreed, that's why I mentioned that you can use Fortran code compiled as a DLL with gfortran (open source) from Visual Studio in C, C++ and C# projects.
Well, and that's the trick. Originally, I shied away from VC because I link into LAPACK. Even if I hook into a precompiled library from a vendor, it still uses some Fortran name-mangling scheme. To figure that scheme out, both autotools and CMake compile a small Fortran program and check, which means that I need a Fortran compiler. Technically, I could set a macro to the name mangling by hand since there are only four possibilities, but it's a little irritating. I suppose that I could use VC for the C++ pieces and gfortran for the Fortran pieces, but it's always been easier just to use MinGW.
Or Intel will sell you their Fortran compiler. I guess you can get academic or open source license deals too, but it ends up feeling like you need a law degree to determine whether or not you can redistribute the results.
Kind of annoying that Intel's Fortran compiler on Windows is the only commonly-used one that doesn't at least allow you to ask for gfortran-compatible mangling. Hopefully if the PGI LLVM Fortran front-end ever gets released it'll be another option (fingers crossed that it uses gfortran compatible mangling so life is simpler).
The linked KB article only talks about VMware Fusion not ESXi :)
I am aware of the claim that ESXi running on Mac hardware is supposed to be legal.
However, if you install Boot Camp on your hardware, install Windows in the Boot Camp partition, and then use VMware Workstation/Player to try to run OS X / macOS, you can't: that is not legal.
This has always confused me.
apple hardware -> ESXi -> macOS == OK
apple hardware -> macOS -> Fusion -> macOS == OK
apple hardware -> Windows -> Workstation -> macOS == not OK
Sure just like games' EULAs say "no copying!!1" but the law perfectly allows for copying for personal use (e.g. backup purposes) and everyone does it anyway.
Bringing machine 'default' up with 'virtualbox' provider...
==> default: Box 'AndrewDryga/vagrant-box-osx' could not be found. Attempting to find and install...
default: Box Provider: virtualbox
default: Box Version: >= 0
==> default: Loading metadata for box 'AndrewDryga/vagrant-box-osx'
default: URL: https://atlas.hashicorp.com/AndrewDryga/vagrant-box-osx
==> default: Adding box 'AndrewDryga/vagrant-box-osx' (v0.2.1) for provider: virtualbox
default: Downloading: https://atlas.hashicorp.com/AndrewDryga/boxes/vagrant-box-os...
An error occurred while downloading the remote file. The error
message, if any, is reproduced below. Please fix this error and try
again.
The requested URL returned error: 500 Internal Server Error
One thing which always bugs me about OSX vms is there is still no way of running OpenGL or Metal applications on them. Not even qemu-system-ppc emulates a suitable GPU, even though there are a bunch of emulators for 7th-generation game consoles which have come quite far in emulating gpus with similar style architectures which could probably be adapted.
> One thing which always bugs me about OSX vms is there is still no way of running OpenGL or Metal applications on them
This is more or less true for most virtualization solutions (with the exception of KVM on Linux, where apparently you can pass your video card through to the VM). For example, VirtualBox emulates OpenGL 2 and DirectX 9 for Windows guests. VMware is a bit better and can emulate OpenGL 3; I didn't check whether this works with a macOS guest...
I wish Apple would port their DE/WM to Linux and Free it with a license that says something along the lines of "you can install this if you paid for it". I'd easily pay $100 for the Apple DE/WM suite sitting over my distro.
I guess this is the closest you can get (aside from using Elementary OS).
> While I'm around, anyone know the keyboard shortcut to UNminimize a window?
Are you referencing when you minimize a window and want to get back to it without using the mouse or trackpad to click it from the bar? If so I have a suggestion.
Let's assume I minimized a firefox window. Now I want that back.
1. ⌘-tab (hold ⌘ to leave selectable windows open)
2. Use left/right arrow to get to firefox
3. Press down.
4. Select window to UNminimize and press enter.
This is convoluted and probably not what you are looking for, but it's the best I have. Would love to hear from others. Also, apologies if I misread your question.
Another method is to ⌘-tab to the minimized app, and while still pressing ⌘, hold down the Option key and release ⌘.
Not very elegant and it won't work if the app has a mix of minimized and non-minimized windows, but I usually don't have a mix. If that's the case, you can still use notyourwork's method above.
I liked how some WMs handle windows of the same application. I can Alt-Tab to a window of an app, and then — without releasing the Alt key — Alt-` to a different window of the same app.
On Kwin, for example, going from Alt-Tab to Alt-` smoothly filters out all the non-same-app windows.
Alt-Tab => Alt-` is also much easier than ⌘-tab => ⌘-Option => Option
Yeah, in my case I have a mix of open and minimized windows in a given app (Preview, iTerm, Emacs...), hence the issue.
MacOS is not keyboard-only-friendly. It took me a year before someone pointed out the option to let "Tab" cycle through ALL controls in a dialog >_< (FYI: System Preferences -> Keyboard -> Shortcuts, then at the bottom, the "Full Keyboard Access" radio buttons)
I'd want the exact opposite (OS X with the ability to run i3 on it). I'm used to i3 (at home and school) and to the default Windows/Linux shortcuts (like Ctrl + x/c/v), and since I'm using a Mac at work I have issues going back and forth.
Doesn't look like hosting it alone would. In the readme, there's a section about licensing:
"Apple's EULA states that you can install your copy on your actual Apple-hardware, plus up to two VMs running on your Apple-hardware. So using this box on another hardware is may be illegal and you should do it on your own risk."
You also have no idea how http://files.dryga.com/boxes/osx-mavericks-0.1.0.box was built (which is my main bugbear with Docker and Vagrant images), which kext hacks were used, whether the image has been tampered with etc etc.
Edit: okay, this was a bit harsh. It looks like the creator of the images only uses them on a Mac OS host - so no kext hacks needed like if you were building a Hackintosh. I'd still rather a script of some kind that took a dmg installer and built this image. It looks like https://github.com/radeksimko/vagrant-osx will do something like that - linked from Andrew's repo, so thanks!