> Apparently my vi command muscle memory hasn't faded.
Back in the day I did most of my dev work on Solaris. I then spent 4 years as CTO at a startup that was pretty much only Windows.
When I subsequently went back to working at a unix shop I was initially struggling with vi as I tried to read some of the C++ code. I couldn't remember commands, was having to refer to the man pages every few mins. It was torture.
A couple of days in, I was writing up some notes in vi when someone walked past my desk and started chatting. When we finished talking I looked down at the monitor and I'd written more than I had when I was concentrating, nicely formatted, the works. Turns out "my hands" had remembered a load of what I thought I had forgotten.
For the next few days I had to keep finding ways to distract myself so that I could work efficiently. Eventually it all came to the foreground but it was the most bizarre experience while it was happening.
This always happens to me with passwords. I have trouble remembering some of them. Yet when I'm in front of the login page and actually need to get in, I either have to lose focus and let my unconscious do the work, or I won't get it right.
Heh, a few weeks ago I realised that I'd become unsure about my email password (almost never have to log in). So I logged out and tried to log back in, trying every possible combination, keeping track of them on paper. I had to give up, turns out that I was way out. The next day I wanted to check my emails and typed in the password immediately.
There is the (internet) story of an engineer that could only log in when he was sitting down. Standing up he could not login.
The problem was he had cleaned his keyboard a couple of days earlier and put some keys back wrong. When he was sitting down he logged in by touch typing. Standing up he looked at his keyboard when he typed.
I was trying to share my banking passwords with my wife ("in case of emergency" kind of stuff), and I was forced to admit that I had no idea what my bank password was, even though I log into it several times a week. I had to open a word processor and type my username and password for her all at once. The act of typing my username allowed the password to pop into my memory.
I tend to think it's a good thing that I don't actually know my bank passwords - a password manager is much simpler. Having strong passwords for all different banks is a piece of cake when you don't have to remember them.
This is nice. Some criminals torture you:
"Give us your password so we can take your money".
"I cannot give it to you! I can only enter it in a stress-free environment when not thinking about it."
(Applies also to those government warrants forcing you to divulge information.)
Yes! The same has happened to me. I'm sitting there trying to remember my password, but I can't. But if I just start typing it, I get it right. I hadn't remembered my password, I'd only remembered the motions.
When I used to play piano as a kid, this happened to me all the time.
Sit me in front of the piano and I'll mindlessly bang out the song I've been practicing. Make me think about it in the middle and I'll forget what comes next.
When I was a kid, I could not start reciting the alphabet anywhere but from 'A'. In the alphabet song, the latter bits go really slowly compared to the first bits, so for the longest time I thought "P" was the middle of the alphabet, because it comes about halfway through the song. It came as some surprise to me that "M" is the 13th letter.
I’m not using SDL or any other library to hide those platform differences.
Once he starts working on the Linux port he'll regret that. Every developer who starts with their own platform-specific code ends up using SDL2 anyway. Don't make that mistake.
Furthermore, SDL2 is absolutely fantastic and just works. You may say you don't want to use it because it "just" abstracts away keyboard/mouse/controller input and window and GL context creation, but these things are damn hard to get right.
As a current SDL2 user: it is not fantastic and it does not just work.
- On Windows, DPI scaling is broken (ask for a 1280x720 window, get 1600x900)
- On Mac, mouse locking is broken
- On Linux, my Xbox gamepads don't work at all
- Required several hacks before it could be used as a CMake submodule
This stuff is under active development, which is shocking for such an established and widespread library. I'm glad it exists, but it doesn't "just work".
Can't really blame it though, it has a very boring and nearly impossible task.
Thanks, yeah I tried something equivalent to this (SetProcessDpiAwareness, as mentioned by another commenter). It broke some other SDL stuff though. I'm just gonna have to screw around with it when release time comes.
I have no idea about DPI on Windows, but in my experience mouse locking on OS X worked, and my Xbox pad works for sure (checked just a month ago or so). I know about some issues with force feedback, but I'm not sure those are even SDL issues.
Tested on El Capitan. IIRC, SDL_SetRelativeMouseMode didn't work at all, the cursor remained visible. After some workarounds, there are still issues with the cursor not re-centering correctly, menu bar remaining visible, etc.
It's probably because I'm on a random Git commit rather than a stable release, but I had to do that due to another issue. It's surprising how many things are in flux, given how long it's been around.
Really? I use Linux full time at home and I've never had problems with 360 controllers either when developing with SDL2 or playing games built with it. Are you talking about original Xbox and/or Xbox One controllers?
Aftermarket Xbox 360 controllers. I've seen other Linux SDL games work with 360 controllers, so I assume the problem is on my end. I just wanted to contest the notion that SDL "just works". It still takes a lot of fiddling unfortunately.
It has its own quirks, but I was mostly impressed with SFML when I tried it. SDL always seemed vaguely badly designed -- it worked well enough that I never seriously considered replacing it, but I can't say I ever really enjoyed using it.
Yeah I tried that before and it solved the problem, but then it broke some other aspects of SDL. Can't quite remember, I think fullscreen got messed up.
There's code in SDL specifically for high DPI, but it's confusing and practically undocumented.
It's just something I'll have to mess around with a while, and then remember not to breathe on anything once it's working. :)
Unfortunately, most game developers who decide against using SDL2 do so based on their experience of Windows development, where it's pretty much standard to have your own code for everything (except development tools and some middleware). E.g. they think their own code is going to take fewer lines and be cleaner than SDL.
As a result, these people usually don't even consider any alternatives, since all of them look even worse from their standpoint.
I would use SDL in Linux ports of things because it is the closest to a reasonable native API on Linux (which says more about Linux than SDL actually). But even having done so I would then use native APIs in Windows, OSX, etc.
If your standard of quality is high enough, it won't really be possible to reach it using a blanket API like SDL everywhere.
It's reasonable to keep native code when you've already spent months or years working with it. Or if you're a huge company with hundreds of programmers that wants to have its own everything.
Though for a relatively new game with its own engine and a small team, the maintenance cost of your own cross-platform code is going to be high. Even on Windows there are tons of small problems that are already solved within SDL. It's really not fun to debug problems on XP, Vista, and some never-updated systems.
PS: Also, as far as I'm aware, SDL2 is currently used by all Valve games on all platforms. I'm pretty sure they wouldn't be using it if it wasn't working well.
So that means you prefer SDL2 on Linux because it is the least awful. Can you elaborate on why you would not use it on e.g. Windows? Which parts of it do you consider inappropriate to use cross-platform?
From what I can gather, not with V3 and above.
I think starting V4 they will go render-agnostic so they can use Vulkan and Metal.
Besides, there is more to a decent engine than rendering. Managing controls in an agnostic way is nice too. I am actually working on adding better control events so it can wrap Windows, Windows Phone, Apple TV and external controllers more easily.
> "From what I can gather, not with V3 and above. I think starting V4 they will go render-agnostic so they can use Vulkan and Metal."
That would make sense, last time I touched it was back in 2012.
> "Besides, there is more than rendering for a decent engine."
For sure, but Cocos2d-x is a complete software framework and SDL is meant to be a library. Although I would grant that it would make sense if the author would choose to use a game engine for his game.
I wish the author had told me more about it than just this. Can somebody comment on how it compares to recent VS editions these days? About 5 years ago I also looked into using OSX as my main OS. As I've always used non-command-line graphical text editors and IDEs for most coding, that made XCode the go-to environment, but I just couldn't deal with it even though I tried. I don't remember all the details, but in general it just felt inferior to VS on pretty much all fronts, with no advantages of any kind (for C++). Again, IIRC, it did annoying things like opening the same document in multiple windows, one for editing and one for debugging or so? Anyway, what's the state today?
- I don't spend a lot of time setting up the IDE, so it's important the defaults are sensible. For instance, it might be possible to change this, but XCode's code completion seems to be less useful than VC2015's. I really need it to be near immediate, and I also need it to match strings that appear in the middle of a function name instead of just at the start. Especially since NS libraries have strange and long names for things. As I'm writing this, I just found out you can have tabs in XCode. Why isn't that a default?
- XCode crashes maybe once or twice a week for me. VC doesn't.
- Unspecific weird things happen in XCode way more than VC. For instance I was unable to see variable values in XCode for a while. Eats up my time looking it up. Hasn't happened to me on VC yet.
- XCode is highly integrated with the Apple environment. You can build stuff for the app store and send it right there.
- XCode has a less than complete Git integration. You need a bit more detail than what it gives you. I use SourceTree anyway, but it might matter for some people.
- Compile times are hard to compare, as I'm doing different things on the two environments. VC2015 is definitely a lot faster than a few years ago though.
> XCode crashes maybe once or twice a week for me. VC doesn't.
Only once per week? Lucky you. Sometimes XCode has crashed for me several times per hour. Sometimes it works for a while. Sigh Apple [1]. The fact XCode uses clang as a compiler absolutely rocks.
That said, Visual Studio 2012 just crashed earlier. I guess it was some bug in window splitting or something; I did some unusual things with that just before the crash. VS2012/2015 seems to be generally stable. Visual Studio's C++ IDE context operations, like finding references, just don't work well at all. They find so many totally irrelevant items.
Excited to try VS2015 with officially supported Clang for Windows applications.
[1]: In my personal experience, Apple's recent software quality seems to be shoddy. For example, USB3 mass storage is stable only for a few minutes before a forced USB stack reset on my El Capitan Macbook Pro retina 13" 2015. Sigh. Impossible to do things like run virtual machines off USB storage on a relatively new $2k machine... at least without booting into Linux or Windows. Things like these make me seriously consider no longer using OSX on my primary machine.
I have the same MBP, and while the right port will drop anything more intensive than a flash drive, the left one has no such issues. There are a lot of reports floating around of the same "solution" working.
XCode reference finding doesn't work fantastically for me. When I look for a symbol it puts in some strange characters that might be wildcards, but doesn't change the search mode, which means I end up doing string search most of the time.
VS is okay, not perfect. But I'd say it's better than searching for strings when you really want a type or function.
I'm having a terrible time in VS2015 so far. When debugging I get very frequent hangs of the whole environment. I haven't managed to find a solution to the problem yet.
Xcode compilation has been much faster for me. A Qt program I develop at work compiles in 2 minutes with Xcode and over 6 with VS 2013. Clang and GCC compile much faster for me than VS 2013.
I use both VS and Xcode daily. Once you get over the initial "gee this thing looks like iTunes" shock and get used to a few small annoyances, Xcode is quite OK to work with. Keyboard shortcuts and source file navigation are completely different from anything else, but once you get used to them, they work well.
Where Xcode is better than Visual Studio for C/C++ dev (IMHO of course):
- C++ compiling and linking is easily 5x..10x faster out of the box than Visual Studio thanks to clang
- the static analyzer has a really nice 'arrow'-visualization of the steps that lead to the warning
- clang provides more useful error messages
- compiler warnings and errors are directly overlaid into the text editor view
- built-in support for clang address sanitizer (just a checkbox to tick)
- support for iOS development is really slick
- better out-of-the-box support for command line builds either through xcodebuild or the gcc-compatible toolchain
- Xcode comes with a lot of profiling and analysis tools where Visual Studio has only slowly caught up (but VS2015 seems to be mostly on par).
Where Xcode falls behind compared to VS:
- Xcode has that strange 'Scheme' feature for build configuration
- the debugger's variable inspection has usability issues
- working on source files with a couple thousand lines of code feels laggy
- before El Capitan, the whole UI felt slow on a Retina MBP, but I guess that's because of general optimizations in the OS
- it crashes or freezes about once or twice a week on me
- probably a number of smaller annoyances which I have learned to ignore
I usually don't touch any of the UI builder tools in both IDEs, only straight C and C++ stuff so I can't comment on the more platform-specific features.
I find it interesting that most of your points in favor of Xcode are just because of the clang backend, while the points for Visual Studio are more about the IDE itself.
Would be interesting to hear your opinion again, once clang is fully supported through Visual Studio.
Yes, I noticed this too while writing the points down. It really comes down to the clang back-end. I'm really looking forward to the clang integration in VS, and also really love the work MS is doing to fix Android native development.
One strange thing I see with clang running on Windows (in the form of the emscripten fastcomp backend) is that clang runs a lot slower on Windows than on OSX or Unix. So maybe it is some underlying IO problem? I'm not sure, but I hope this can be fixed.
This matches my own experience quite closely. Good list.
There is a lot of head scratching with XCode project configuration (for me at least). It's very unintuitive.
clang (though XCode) is much, much faster compiling my projects than cl.exe (through VS) even though I'm using precompiled headers in the Windows build and no PCH on the clang build. VS2015 is supposedly much faster but I still have a few compilation issues I need to work through before I can switch to it.
The debugging experience on Visual Studio is much more pleasant.
Visual Studio feels a lot more "enterprise-y" than Xcode and offers a lot more advanced features and tools on pretty much all fronts, especially in the UI (e.g.: Xcode doesn't even have file tabs, but something I'd describe as big "whole-UI-tabs"). But...
Xcode has LLVM. This compiler and its tools (i.e. the analyzer, debugger, etc.) just make VS's compiler look like it's from the 90s. Really.
-> Have you ever heard of LLVM's "address sanitizer"? Forget the days of endless debugging! This little helper has revolutionized my debugging productivity and solved so many subtle little bugs for me...
So in the end you'll lose a lot of nice UI gimmicks and additional tools, but the compiler suite makes up for that.
And even if you don't need those LLVM features, you still get a unix environment, which makes working on many fronts a lot easier. E.g. I'm primarily working on different kinds of web servers: to test everything I can just install whatever I need... brew, curl, netcat, wrk, ... And it will just work. And let's not forget all those "standard" unix tools like find, xargs, grep, etc.!
What does the modern Xcode tooling offer in regard to that "90s" compiler's parallel debugger, parallel watches, thread control, DirectX debugging, visualizers, code navigation, and extensible security analyzers?
> Xcode doesn't even have file tabs, but something I'd describe as big "whole-UI-tabs"
I think this is an intentional interaction design. When you are always working with hundreds or thousands of source files, tabs kind of lose their meaning. The fuzzy-search quick open panel and project-wide find become your main dependency for quickly jumping around your codebase (with the benefit that you don't need to touch the mouse).
LLVM may not remain a feature that exists in only one column for much longer. It's a brave new .NET world... LLILC is a thing (a .NET frontend for LLVM). LLVM has a role in Android and iOS work in VS already, and I think that role will only expand further.
There's a lot of attention on .NET Native. Given that LLILC went from nothing to being able to build and JIT Roslyn in 6 months...
I'm primarily a Python guy and with the 2/3 split, I've had my eye on .Net Core to migrate my business platform to. Pretty much been looking to dump Python for anything over a line count of 500 and keep it that way.
I'm open to any speculation as to where it's going because I'm finding the .Net platform to be more attractive than ever. A few years ago it was looking pretty sad but MS really turned it around and I'm interested in a permanent migration.
I really can't find much that I enjoy using that approaches Python's broad use cases better than C# on .Net. With .Net Native and Xamarin, I'm very seriously considering the plunge.
I use both on a regular basis, VS for my job and XCode for hobby work.
I prefer VS because it performs better when editing and navigating the code base (it is actually faster to run VS inside VirtualBox than XCode natively).
Also VS has code navigation and editing features that XCode lacks. For instance if you want to do a find/replace in VS you can double click on a word do Ctrl+H and the word you selected populates the search box. You can then populate the replace box with what you have in the clipboard or just type in what you want.
In XCode you need to copy the word to replace into the clipboard, do a Cmd+F, paste the word into the search box and then type in the replace box. This is much slower.
In VS you can setup bookmarks in your code with Ctrl+F2 and jump between bookmarks by pressing F2. This is great when you need to have present multiple parts of the code base to accomplish some task.
I don't know how to do this in XCode.
In VS you have a stack of source code windows that you can easily move about via Ctrl+W+n where n is the stack depth.
This is incredibly useful to navigate between multiple files.
Again, I don't know how to do this without using the mouse in XCode.
Then there are other issues with XCode mentioned by other comments like the incredibly confusing build settings and the instability of the IDE.
I often wonder if Apple actually uses it to develop their software.
I haven't used VS in many years, but I like what you describe. I don't know if this will be helpful, but these are my alternatives in Xcode:
> For instance if you want to do a find/replace in VS you can double click on a word do Ctrl+H and the word you selected populates the search box.
Your workflow sounds nicer than Xcode's here. While I don't often use find+replace, I do use project level find constantly. My muscle memory shortcut for this is:
Cmd+C, Cmd+Shift+F, Cmd+V, Return
I'm pretty sure I use this hundreds of times a day. Especially when analysing unfamiliar code to trace its execution paths.
> Ctrl+F2 code bookmarks
These sound really cool. A bit like numbered unit groups in Starcraft.
The only similar mechanism in Xcode is quick open. Cmd+Shift+O then start fuzzy-typing the name of a file, method or declaration and hit return to jump to it in the editor (option+return for assistant editor). I have actually changed my quick open shortcut to Cmd+Shift+D because it's easier to trigger one-handed.
> In VS you have a stack of source code windows that you can easily move about via Ctrl+W+n
Unfortunately, while a stack of source locations is maintained in Xcode, you can't jump to a direct location within the stack.
What I do here is use Ctrl+Cmd+Left and Ctrl+Cmd+Right to navigate back and forward in the history stack for an editor window. So, for example, if you Cmd+Click a symbol to jump to its declaration, you can press Ctrl+Cmd+Left to go back.
The one thing I like about Xcode's version of this is that it keeps track of source locations rather than files or windows. So you actually navigate back/forward within the same file (if you were jumping around within the file) as well as between files.
In regards to the find and replace, you can actually rename a variable or function by placing your cursor in it and hitting Cmd+Ctrl+E (that may not be the right shortcut, but I know there is one). No danger of changing something you didn't want to change.
As far as the file stack goes, I'm not really sure what that does; but Xcode has other navigation options, like tabs, and files can be switched with the fuzzy finder.
You are absolutely right about the instability—that's the reason I switched away from Mac/iOS development. Too many weird bugs, in Xcode and in Swift. I haven't found it too slow on an SSD but when I used an HDD the speed was horrible.
> ( it is actually faster to run in it inside VirtualBox than XCode natively ).
Is there anything special you're doing with VBox? In a Windows VM, Eclipse is mostly usable, but disk accesses (or something) have incredible latency. It makes opening a new tab take a few seconds.
>In regards to the find and replace, you can actually rename a variable or function by placing your cursor in it and hitting Cmd+Ctrl+E (that may not be the right shortcut, but I know there is one).
This only performs a rename in the current file though, not across all files in the current project. (At least in Xcode 5, maybe this has been changed in more recent releases.)
I've been using XCode daily since version 3 and I still miss VS. XCode (IMO) is a festival of UX badness: can't change positions of navigator windows, can't change fonts/colors outside of source/console, the 'assistant' window (or whatever it's called this week) frequently opens up unexpected or unrelated files, refuses to stop trying to automatically balance square brackets, spews truncated errors and warnings over the top of code then shows a tiny version in a mouseover for about 3 seconds before hiding it, truncates said warnings in the issue navigator, tiny fonts almost everywhere, severe weirdness with variable inspection and no integrated 'watch' in the debugger - which likes to show disassembly even when you ask it not to.
Lack of refactoring support for Swift is currently baking my noodle, as is the lack of a supported plugin API (with which I could solve many of my own problems). Very crashy.
All that being said, it has a lot of good stuff. The analysis and profiling tools are great, there's a bunch of really powerful tools for games and 3D, view debugging is a thing now, the UI builder is super powerful (once you get used to it) and the unit/performance/UI testing tools are (IMO) pretty awesome. Lately integration with all things App Store (provisioning, entitlements, etc) has got a lot better and most of my pain points there have gone away.
It's hugely subjective though, some people would disagree with much of this. Like any IDE, you end up in a love/hate relationship. The main difference being that if you hate VS you can hit up the extensibility API and make your pain go away, whereas Apple don't really care if you hate them.*
*I'm aware of Alcatraz and it's neat, I'm just not quite sure I want to introduce unsupported hackery into my production environment.
Does turning off the "Automatically balance brackets in Objective-C" preference not work? I like the feature, so I've never tried turning it off, but it sounds like it'd solve your problem.
You can also change the assistant editor to manual control: I frequently alternate between automatic, UI, and manual control of the right pane. There are some great keyboard controls to make it easier.
I don't use (need?) 'watch' in my debugging, but I thought the UI exposed a command for doing so (is that what you mean by integrated?). I think right clicking a value/variable has 'watch'. And I'm sure that LLDB has commands for it.
> Lack of refactoring support for Swift is currently baking my noodle,
Yeah, I hear the refactoring tools are great, but they don't work with C++ either, which is what most of my source-base is.
>as is the lack of a supported plugin API (with which I could solve many of my own problems).
I don't know if this would solve your problem, but you can tell Xcode to compile (or maybe handle is a better word) any type of file with a script. So if you want to compile your Haskell source with Xcode, for example, just go to the target settings, click on the "Build Rules" tab, and add a rule that says "Process files with names matching: " *.hs, and Using "Custom Script:" with a pointer to a script that runs the input file through ghc, or whatever you need.
Adding a third option to the mix - JetBrains recently added a C++ IDE to their catalogue called CLion[1]. While VS and XCode have quite a head start over it, CLion is cross-platform and comes with native support for CMake projects. JetBrains also make the (de facto, from my experience) Java IDE - IntelliJ - which shares a core engine with CLion.
A bugbear of mine for XCode is the absence of C++ refactoring tools, which CLion certainly has.
If you do mainly Objective-C, JetBrains AppCode is pretty good. Refactoring tools are awesome, even though they are still working on Swift support. Code navigation et al. are on par with other JetBrains products. Debugging works pretty well.
You still have to revert to XCode for that crap that is Interface Builder. JetBrains tried to write an IB clone inside AppCode, but they abandoned it.
AppCode lets me code 80% of the time without having to use that horror that XCode is. Alcatraz helps a bit in alleviating the other 20%, especially the XVim plugin for Xcode.
> You still have to revert to XCode for that crap that is Interface Builder
As an alternative opinion, I use Interface Builder every day to visually create my interfaces, add layout constraints to it, hook up actions to buttons and define the navigation flow of the app. I very much like the fact that I have small view controllers.
I don't use VS but I use Xcode every day. Downsides: incredibly laggy, buggy, can crash a lot depending on the version. Upsides: actually pretty thoughtful in terms of UI and UX, at least insofar as a fully-fledged IDE can be. I could never wrap my head around VS's stupid toolbars. Debugging, when it works, is really nice; there's more interactivity and inline code interaction than I remember VS having. (VS probably has more power features in this arena, though.) The profiling tools are top-notch; you can even capture frames when debugging an iOS OpenGL app. Fuzzy search is a huge help — is VS still missing this feature? You can very easily access the docs for any property or method with an Alt-click, as well as the headers with a Cmd-click. Adding external code and projects, as well as hooking up all the dependencies, is pretty easy once you've done it a few times. So yeah, not great by any stretch, but pretty good when it works.
Laggy and buggy? That sounds like Visual Studio to me and I use that 8 hours a day. However, it's a damn good environment for C# and F# development, and Intellisense is pretty awesome in 2015.
In VS, Control-; is a pretty good way to navigate. I turn off all the toolbars anyway.
If you're having issues, I would suggest it's something to do with your particular setup. VS is a RAM hog, but if you have enough RAM, it is very stable. Certainly doesn't lock-up or bomb-out-to-desktop as often as Xcode, NetBeans, or Eclipse on me.
I run VS with all of the toolbars removed and drive it almost completely from keyboard shortcuts. Occasionally, I use the menu bar, but it's usually only for first-time-project-settings-tweaks.
I can only speak for Xcode 5, but after I read an article similar to http://mattorb.com/xcode-behaviors-for-fun-and-profit/ that explained how to effectively use tabs and "behaviors", I am mostly a happy camper. I also replaced its clang/libclang.dylib with a more recent build, so I could have C++14 support.
I'm way more productive in Xcode. I've also used it more in the last years so that obviously influences my statement.
I find the UI in VS2013 very "unstable" - I constantly manage to drag stuff away and hide windows I need to use. I miss the .h/.cpp side by side view when I'm in VS.
It's very easy to set up a color scheme in Xcode that looks nice. I've spent hours in VS trying to get something that is OK. And then it randomly resets to the default about once a week.
Debugging C++ template code actually works in Xcode. Running a debug build of the STL under Win32 is extremely slow in my experience.
I almost never have crashes in Xcode, but I think this probably depends on your code base and project setup a lot.
LLVM compiler errors are much easier to understand. That said, VS has found bugs that LLVM doesn't see. So compiling on both helps keep the code base in a good state.
Our game code compiles in half the time on Xcode/LLVM compared to VS2013.
- it's incredibly unstable. I've had it crash 8 times within an hour.
- I can't do a find-in-files without freezing the ide for 30 minutes. (Visual studio can handle the same code base in seconds)
- a lot of ui design decisions are "different" for seemingly no reason. Most devs can jump between vs/IntelliJ/eclipse easily, but almost every Xcode design decision is just weird. Like why do compile errors show up where my file navigator is supposed to be? Why are important tabs just shown with tiny incomprehensible flat icons? It's all arbitrary, but it seems like every ide has settled on conventions that Xcode breaks to no obvious advantage
Dunno much about VC but the Xcode editor compared to Jetbrains IDEs (IntelliJ) is horrible, ABSOLUTELY HORRIBLE. I'm just talking about the code editor (storyboard editor and other tools are fine).
I'm missing Jetbrains code completion so hard and all those useful shortcuts like CMD+E, CMD+W, CMD+ALT+V, CMD+ALT+M, SHIFT+F6, CMD+SHIFT+UP/DOWN, CTRL+N, CMD+1..9, CMD+F9, ALT+SHIFT+F9/F10.
I can attest that Visual Studio and Xcode can both feel laggy, especially with respect to C++ development and static analysis (Intellisense or whatever they call it). However, this may not be so much Microsoft's or Apple's fault as much as that IDE tools carry high overhead as they have to manage a lot of symbols from every header file referenced, including parameter lookups, etc.
On top of that, I think Xcode (like Eclipse) compiles your code as you type, leaving you no surprises until you need to link...
Xcode checks your code as you type (90% of the time, sometimes early errors, e.g. in a header, make it give up completely on a file). Compiling still is a separate step. Current versions of VS do the same, though, even for C++.
Visual Studio compiles as you type. It has to, to work with e.g. 'auto'. VS actually has two compilers: one that compiles as you type (and tries to guess what you mean more), and one for the real compile cycle.
Just the title screen, not a complete port. Nor would 1FPS be probably considered "working" :)
He put the OS glue in place in 1 week and that sounds about right for this sort of effort, given some prior experience with writing portable code. The bulk of effort was spent earlier on abstracting principal code from the platform specifics, and sounds like he did all the right things there. Good stuff.
He is extremely productive, to be fair - "Shining Rock Software" is only him, the entire game, pretty successful on Steam, is just coded and maintained by him.
It's an absorbing SimCity-esque game. As long as you don't go in expecting Civ/SimCity replay value, it's definitely worth it if you like those types of games, or at the very least worth putting on your wishlist for the Steam winter sale.
The level of quality for a one-person team is phenomenal.
It's not working. It just rendered. Like the author said, it's mostly just moving the core C++ codebase in, then futzing with Xcode to get it to build. If you can use *nix you can use OSX.
I'm curious about his opinions on OSX after he's gotten the game to run with sound and above 1FPS.
If you have written your game with a good API layer that shields you from platform specifics, I can't imagine why it would take that long to port... It's mostly SMOP.
That's what most PC developers think until they hit their first Console port.
Correct data organization is critical for getting this right. The PS3 was particularly brutal in this respect since it forced you to segment your computations into 256 KB chunks (to fit in the SPU).
Fair point, I've never done any console dev. On the PC, it's really only code related to the way you might alloc memory, files, and some other io related tasks...
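To make the "API layer" idea concrete, here's a minimal hedged sketch of what such an abstraction might look like (the interface and class names are invented, not from any shipping engine):

```cpp
#include <cassert>
#include <chrono>
#include <fstream>
#include <sstream>
#include <string>

// Hypothetical platform layer: game code only ever talks to the
// Platform interface; each OS supplies one implementation file.
struct Platform {
    virtual ~Platform() = default;
    virtual double now_seconds() = 0;                                // timers
    virtual std::string read_text_file(const std::string& path) = 0; // file I/O
};

// One concrete backend, written with the portable standard library for
// illustration; a real Win32/OSX backend would use
// QueryPerformanceCounter / mach_absolute_time and native file APIs.
struct StdPlatform : Platform {
    double now_seconds() override {
        using namespace std::chrono;
        return duration<double>(steady_clock::now().time_since_epoch()).count();
    }
    std::string read_text_file(const std::string& path) override {
        std::ifstream f(path);
        std::ostringstream ss;
        ss << f.rdbuf();           // empty result if the file is missing
        return ss.str();
    }
};
```

Porting then means writing one new backend rather than touching game code, which is roughly what the article's author describes doing for memory, file I/O, timers, and threading.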
I recently went through a very similar process porting my screensaver [1] from Windows to Mac without using a library like SDL. Here are some additional difficulties I encountered during this process:
OpenGL on multiple monitors - this was much more difficult to do on MacOS. I had to create a separate window for each monitor, create a rendering context for each window, make sure my graphics code was issuing the drawing commands to the proper context, then have each context queue/batch "pending" rendering commands and issue them all at once at the end of a frame on a by-context basis. Whereas on Windows you can pretty much create a window that spans multiple monitors and draw to it with a single rendering context.
Input - I used DirectInput on Windows and wrangled a suitable implementation using HID Utilities on Mac, which was not easy given my lack of previous USB programming experience. A major annoyance was the lack of a device "guid" that you can get via HID Utilities to uniquely identify an input device - I had to manually construct one using (among other things) the USB port # that the device was plugged into. Not ideal.
SSE intrinsics - my experience was that Microsoft's compiler was MUCH better at generating code from SSE/SSE2 intrinsics than clang - my Windows SSE-optimized functions ran significantly faster than my "pure" C++ implementations, whereas the Mac versions ran a bit slower! My next thought was to take this particular bit of code gen responsibility away from clang and write inline assembly versions of these functions, but I took a look at the clang inline assembly syntax and decided to skip that effort. (I did write an inline assembly version using the MS syntax and squeezed an additional 15% perf over the MS intrinsic code.)
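For readers unfamiliar with the intrinsics being discussed, here is a minimal, self-contained sketch of an SSE2-accelerated function (the function and names are invented for illustration, not taken from the screensaver's code; requires an x86 target):

```cpp
#include <cassert>
#include <emmintrin.h>  // SSE2 intrinsics

// Sum an array of floats four lanes at a time with SSE2, with a scalar
// loop for the tail. This is the style of code whose generated assembly
// differs noticeably between MSVC and clang.
float sum_sse(const float* a, int n) {
    __m128 acc = _mm_setzero_ps();
    int i = 0;
    for (; i + 4 <= n; i += 4)
        acc = _mm_add_ps(acc, _mm_loadu_ps(a + i));  // 4 adds per iteration
    float lanes[4];
    _mm_storeu_ps(lanes, acc);
    float s = lanes[0] + lanes[1] + lanes[2] + lanes[3];
    for (; i < n; ++i)  // scalar tail for n not divisible by 4
        s += a[i];
    return s;
}
```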
Pretty much everything else (porting audio from DirectSound to OpenAL, issuing HTTP requests, kludging up a GUI, etc.) was pretty straightforward/did not have any nasty surprises.
> SSE intrinsics - my experience was that Microsoft's compiler was MUCH better at generating code from SSE/SSE2 intrinsics than clang
I don't know whether this is still the case – or something like tweaking the target CPUs would help – but assuming it is, did you report it to either the open-source Clang project or Apple? The developers have seemed to be quite responsive to reports like this.
Does anyone know whether he did the graphics too? It's not world-class art, but it looks really good, so I'd be very impressed if that was the same guy too.
It's not a bad game, especially for its price, but remember that this is a town, not city building game. That means that there is a hard limit to the size of the town - any bigger, and the agents (citizens) will literally starve to death en route to their next destination because it is too far from their home.
If you want to expand further, all you're doing is making exact, self-contained replicas of the same town in other places on the map. There's not much variety because each town needs the same resources, and each map has those same resources.
Like many games of this genre, the game has a reverse difficulty curve. This is especially true here because of the focus on survival. That means that the first few winters will be spent micromanaging every single resource to ensure everyone has sufficient materials, but after that initial period is over, it's impossible to fail because the town basically runs itself.
All true. But there's trading, which can be quite entertaining. And there's Colonial Charter[1], which is an excellent mod, but it can be daunting because of the sheer volume of changes and new stuff.
Indeed, for me it's the only recent city builder that captures the feeling of the original Settlers game(s): the feeling where I just want to see what my little city is up to, see the people scurrying about their little lives.
The game is relatively deep, and has tons of room for clever tactics. Also, it was developed by a single person, which is pretty impressive for a game like this.
It has mods now like Colonial Charter[0], which I've never used but have heard great things about. I'm on OS X and have run it in Wine (Wineskin specifically, IIRC) but I too haven't played in at least a year. If he releases an OS X version I'll be sure to buy it though. It really is a ton of fun.
For me the most interesting part (and I'd like an answer to it) is why the game runs at 1 FPS on OS X, whereas on a Windows machine with the same graphics card it runs just fine.
I have never seen such a huge difference. I'm playing and making games, and most of them run at most 10% faster on Windows, and this is due to the graphics driver.
However, there is one thing that is a major difference: the system timer. The Windows sleep() call has a granularity of 10ms, while the OSX one has 1ms. So, if you (or your game library) use sleep to return unspent CPU time to the OS, it's easy to write code that runs fast on one OS and very slowly on the other.
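To illustrate the workaround (a hypothetical sketch, not the poster's actual code): sleep coarsely while far from the frame deadline, then spin-wait the final stretch so the OS timer granularity can't overshoot the budget.

```cpp
#include <cassert>
#include <chrono>
#include <thread>

// Frame limiter that tolerates a coarse OS sleep(): sleep only while
// safely above a granularity margin, then busy-wait the remainder so a
// 10ms-granularity sleep cannot blow the frame budget.
// The 2ms margin is a guess, not a measured value.
void wait_until_frame_end(std::chrono::steady_clock::time_point frame_end) {
    using namespace std::chrono;
    const auto margin = milliseconds(2);  // assumed worst-case sleep overshoot
    for (;;) {
        const auto remaining = frame_end - steady_clock::now();
        if (remaining <= steady_clock::duration::zero())
            break;  // deadline reached
        if (remaining > margin)
            std::this_thread::sleep_for(remaining - margin);  // coarse wait
        // within the margin: loop without sleeping until the deadline
        // (burns a little CPU, but is immune to timer granularity)
    }
}
```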
Most developers treat OSX version as a checkbox on their "required features list" and only care whether the game works or not when porting. They do not want to spend time looking at the intricacies of OSX and just tell you that it's graphics driver's fault.
I develop games on Linux and OSX and later port to Windows before releasing (because that's where the players are), so I have to write specific code for Windows to make sure it runs faster. For example, my latest Steam game was really lagging and slow on Windows until I figured out the timer problem I wrote about above.
Other developers do it the other way around, and OSX performance isn't top priority.
OpenGL is not the same everywhere even on the same GPU.
Every driver can negotiate its capabilities with the client app.
Apple has a software renderer that takes hold if the client asks for an extension the GPU doesn't have; you have to request a render surface in a specific way to get an error back when you ask for an unsupported feature.
At least that was the case when I last worked on an OSX app.
"Since the OpenGL framework normally provides a software renderer as a fallback in addition to whatever hardware renderer it chooses, you need to prevent OpenGL from choosing the software renderer as an option."
Also note that (1) this might not be Banished's problem, and (2) this may be an outdated document.
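Following the Apple guidance quoted above, the usual trick is to demand a hardware-accelerated pixel format with recovery disabled, so context creation fails instead of silently falling back to software. A hedged sketch using the CGL attribute list (macOS-only; attribute choices are illustrative and the details may have changed since that document was written):

```c
#include <OpenGL/OpenGL.h>  /* CGL; builds only on macOS */

/* Request hardware acceleration explicitly and disable the software
   fallback, so pixel format selection errors out rather than silently
   switching to the software renderer. */
CGLPixelFormatAttribute attrs[] = {
    kCGLPFAAccelerated,      /* hardware renderer only */
    kCGLPFANoRecovery,       /* never fall back to software */
    kCGLPFADoubleBuffer,
    kCGLPFAColorSize, (CGLPixelFormatAttribute)24,
    kCGLPFADepthSize, (CGLPixelFormatAttribute)24,
    (CGLPixelFormatAttribute)0
};
CGLPixelFormatObj pix = NULL;
GLint npix = 0;
if (CGLChoosePixelFormat(attrs, &pix, &npix) != kCGLNoError || npix == 0) {
    /* requested capabilities unavailable in hardware: degrade settings
       gracefully rather than render in software at 1 FPS */
}
```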
This is a good hypothesis. A difference in a few FPS could be attributed to less efficient drivers on one platform, but dropping to 1 FPS sounds to me like he's hitting a software rendering path on one platform and is fully HW accelerated on the other. I admit, it's been over a decade since I wrote OpenGL drivers, so perhaps things have changed drastically and my intuition is out of date.
1 FPS sounds like software rendering. He probably accidentally requested some OpenGL feature which is not hardware accelerated in his GPU, so the whole pipeline transparently switched to software rendering to be able to fulfill his request.
OpenGL on OS X has platform specific quirks which can require using different methodology. Linux developers of Witcher 2 (from VirtualProgramming) stumbled on various parallelism issues when trying to port their OS X version of eON wrapper to Linux. See https://github.com/virtual-programming/witcher2-linux/issues...
To put it shortly - you can't just assume "it's all OpenGL so same code will work the same".
It should be something elementary the author still hasn't implemented properly during the port. He says that he has "specific startup code" in "two different sets of the same GL code" in "copy-pasted-slightly-edited" form and that he "started writing things specific to OSX. Memory management, file I/O, timers, date handling, threading, etc."
Basically, there's a lot that could possibly go wrong during his first days of porting, and after all that he just saw "the title screen" and observes 1 FPS rendering. So it can be anything, it's too early to know, I hope the author posts later what was missing.
The existing OSX drivers are surely good enough for games, so it's a problem with how they're used, and surely not the case that "the platform doesn't allow it" or that "the drivers" are at fault.
Also note that it's not about DirectX vs OpenGL. He has OpenGL on Windows too (see the first paragraph here for the quote).
Drivers. On MacOS they tend to be a bit behind the Windows versions, also MacOS itself is reasonably GPU-heavy. Add to that Apple's tendency to lag behind the latest (as in from the last five years) OpenGL standards and you've got a recipe for slow OpenGL.
Though interesting that it's an Iris Pro, MacOS used to have the best Intel drivers as Apple pulled them in-house. ATI and NVidia have always treated MacOS as a secondary target.
If it was a 10% performance hit, I could blame the driver but comparing 60 FPS (I presume) with 1 FPS when people are playing the game in Mac OS X using WINE; that sounds like something is probably not quite right in his code.
I also found that even with a GPU-heavy fullscreen application, having another (smaller) window on top doesn't noticeably degrade rendering performance. It seems the window manager is doing some clever things there with regard to compositing.
This makes no sense since OSX is certainly capable of the same level of graphic performance as any other OS (some would say it has even more powerful low-level tools now that Metal is available), so there must be something else going on in his setup that is causing a problem. As others have noted, possibly the drivers, but more likely he's not using the Mac's API to guarantee GPU rendering vs software rendering, or other fairly small settings that can lead to a big difference.
I don't have the links with me, but I've read somewhere that this happens because Windows uses DirectX, whereas OS X uses OpenGL.
I assumed this is because Game Engines are built usually targeting Windows/DirectX. Some say that DirectX is more mature and powerful, although this might be subjective. And so the games perform better on Windows.
FYI, with the advent of Vulkan, maybe the subjective opinion that DirectX is better than OpenGL will die down, as Vulkan is supposed to be very good, kind of like a rewrite of OpenGL.
And with DirectX 12 on Windows 10, Microsoft has done a lot of good stuff.
Vulkan is still in development. It is based on the same concepts as Apple's Metal, AMD's Mantle and Microsoft's DX12, i.e. low-level access to the GPU.
Which is pointless if you are limited to a MS only walled garden. Also, rushing it out ahead just to be first isn't a plus if it has deficiencies that could be fixed before the release. Vulkan is developed with reasonable pace and it's good that they aren't trying to rush it out before it's ready.
> Professional game developers don't have any issues with walled gardens.
That's nonsense. Tell them that doing double work is a good thing (especially on a limited budget). No sane developer likes walled gardens and lock-in, because it always translates into complications (not caused by real technical reasons), and doing the same work multiple times to address the stupidity of vendors who push said lock-in.
> That is how the industry has survived
Saying that industry survives on lock-in is like saying that technology survives on the lack of progress. I.e. it's a completely backwards thinking.
It is how many in the industry make a living, by doing consulting as experts in porting games between platforms.
Arguing about the beauty of FOSS and OpenGL, and the dismay of them being ignored by professional game developers, only reveals lack of knowledge how the industry works.
Once upon a time I learned the hard way that being too focused on that made me lose sight of how the industry actually works, back when I still cared about game development and had the privilege to visit a few well-known studios. One of them apparently owns a black console that is selling quite well.
Professional game developers don't care about FOSS, 3D standards or whatever.
The only thing that matters is getting their vision of game out in the hands of their fans, regardless of what the systems their fans might have available.
There are plenty of companies selling middleware and consulting services for porting activities.
Gatekeepers help prevent a flood of low-quality games and copycats like the one that caused the 1983 crash.
Industry makes a living on technology progress. If someone makes a living on pushing lock-in (MS), they are doing a disservice to the whole industry. It has nothing to do with FOSS - it's about progress in general (lock-in is the opposite of it).
You didn't disprove what I said above. Duplication of work costs more money, and no one likes to waste money, especially when reasons for that duplication aren't even technical, but are caused by crooked vendors who force that extra expense on others with lock-in.
> Professional game developers don't care
They care about their budgets. Your idea that duplication of work is welcomed is complete nonsense.
> Again, you don't seem to understand how the industry works.
Economics works as usual here, and the gaming industry is no exception. If someone forces duplication of work on others, that increases costs, which end up being passed on to some party. And for the end user it can translate into lower availability, slower time to market, higher prices, and so on. So far you haven't demonstrated that it somehow magically comes for free.
TL;DR: lock-in taxes the whole industry and slows down progress.
My whole point was to say how the industry works and that professional game developers, important word here professional not indie, don't care one little second about the point of view you are expressing.
I do not intend to demonstrate that something, whatever it might be, magically comes for free.
> My whole point was to say how the industry works and that professional game developers, important word here professional not indie, don't care one little second about the point of view you are expressing.
You made several mistakes in that statement.
1. You claimed that only publisher funded developers are professionals, while those who are self funded or backed by other means (like investors or crowdfunding), i.e. independent (=indie) developers are not professionals. That's an insult to many truly professional people. There is no dependency on publisher funding to be a professional.
2. You assumed that publisher funded production doesn't care about this issue. Do you think they don't have to balance their budgets? Just because they are publisher funded doesn't mean they have infinite resources and doesn't mean that those publishers are happy about extra costs.
I.e. everyone cares about it and nobody normal likes it. The only ones who like lock-in are those crooked vendors who push it on everyone else. Also, if someone doesn't care about the industry progressing - they can't be called professionals.
Those are mistakes on your point of view, not mine.
Of course there are production and development costs, like in any other business, however the industry doesn't get crazy about FOSS and stuff like that.
My point of view stems from having had the opportunity to get a glimpse of how the AAA game development industry works.
Have you ever been there, instead of advocating the "everyone cares about costs and free software" mantra?!
Just go attend a GDC, ask around how many devs care about your point of view.
Based on my experience attending them, I bet the answer will be very few.
The term AAA is ambiguous. Please define it. If you mean publisher funded (a common meaning), then see above. If you mean big budget, then your remark about independent developers is invalid as well (there are independent studios with big-budget games). Anyway, I don't see how any of that is related to professionalism. Funding method or budget size has nothing to do with it.
> Of course there are production and development costs, like in any other business
Yes, overcoming lock-in and duplication of effort add extra costs. That's exactly what I was saying above. It equally affects big and small budget projects, as well as publisher funded and independent studios. Saying they don't care about extra costs is simply ignoring the reality.
> How the AAA game development industry works.
Still ambiguous, but let's assume you mean AAA = publisher funded (since you contrasted it with independent studios before).
Simple example - most legacy publishers don't even release games for multiple APIs (such as OpenGL), because of costs. I.e. they are hostages of lock-in. That exactly demonstrates the issue above, and the fact that it has a direct impact.
So saying that no one cares about it (or that no one is impacted by this tax on the industry) is completely wrong.
> Simple example - most legacy publishers don't even release games for multiple APIs (such as OpenGL), because of costs. I.e. they are hostages of lock-in. That exactly demonstrates the issue above, and the fact that it has a direct impact.
It is not how it works in the industry.
They focus on one platform because game programming is more than the graphics API; the hardware architecture and OS are also part of the whole equation, as is what it means to be able to extract every single byte and ms for a few extra FPS.
The talks done by Naughty Dog are a good example of how much it matters to be an expert on a specific platform.
Then they leave the ports to other game studios that specialize in porting to specific platforms, which is another way how money flows inside the industry.
There is a whole industry specialized in game ports since the days of Atari ruled the world.
A publisher that targets PC, XBOX, PS4 and Nintendo has already by definition supported 4 graphical APIs, not counting the additional OS and hardware differences.
You can shout at windmills about how bad lock-in and duplication of effort are, like Don Quixote, but no one will care until you change your speech to the language and mentality that reign in the game industry.
What matters is IP, licenses and getting the games into the hands of users.
The technology used comes a few bullet points down in the priority list.
> They focus on one platform, because game programming is more than the graphics API, the hardware architecture and OS are also part of the whole equation, and what means being able to extract every single byte and ms for a few extra FPS.
Not according to experts who actually work on cross platform games.
> A publisher that targets PC, XBOX, PS4 and Nintendo has already by definition supported 4 graphical APIs
That's exactly the point. You can't claim they are happy about spending 4x more on supporting their engine on each system while having a very limited ability to share code. It's always extra cost. They do it because the vendors of those walled gardens limit developers' choice and artificially force incompatible APIs on them.
> you can shout to the windmills how much bad lock-in and duplication of efforts are
They are bad and everyone knows it.
> no one will care
Those who care more, work on breaking that lock-in. See what Oxide Games developers have to say about this lock-in idiocy, and don't claim they aren't professionals.
At the same time they strongly criticized DX12 for being MS only and said it's necessary to have a cross platform solution (i.e. Vulkan). Grasp the simple fact that no one likes lock-in except for crooks.
>They focus on one platform, because game programming is more than the graphics API, the hardware architecture and OS are also part of the whole equation, and what means being able to extract every single byte and ms for a few extra FPS.
^ That is how I know you're BSing. I can assure you (while reading between the lines, you seem very concerned with promoting Microsoft) that if you think game devs are extracting "every single byte and ms for a few FPS", given the buggy, unoptimized nature of many games, you are quite the comedian.
The comment about C++ templates is baffling and I wish the author would elaborate. The behavior he describes that clang doesn't support is... how templates are specified to work. They're near-useless without that property.
Most of these had to do with templates that expected the code inside them not to be compiled until they were instantiated. The Microsoft compiler has that behavior, while clang does not.
TL;DR: If a type, variable... depends on template parameters, code using it is checked when the template is instantiated with concrete template arguments. Otherwise it is checked when the template is defined.
I don't quite understand why you'd call templates near-useless if two-phase name lookup didn't exist.
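A small sketch of the two phases being discussed (the type and function names here are invented):

```cpp
#include <cassert>

struct Holder { int value; };

template <typename T>
int twice_value(T t) {
    // Phase two: t.value depends on T, so it is only checked when the
    // template is instantiated with a concrete type.
    //
    // Phase one: non-dependent code, e.g. a call to an undeclared free
    // function here, is checked by clang/gcc when the template is
    // *defined*, even if the template is never instantiated. Older MSVC
    // deferred that check to instantiation time, which is the behavior
    // difference the article's author appears to have hit.
    return t.value * 2;
}
```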
I went the other way: last year I converted an OS X app to Windows. I hadn't used Windows for six years, and had forgotten most things.
It took two weeks to get the code compiling and running. That turned out to be the easy part. Getting the application performing well, feeling "native", and getting the bug count down took another six months.
I love Banished and I'd like to see a completed OS X port. But I'm not expecting this to be done, like, tomorrow!
It is a nice starting point for queries about files on the system as a whole rather than in a particular folder. i.e. Show me all of the PDF files I've used recently. Or find a file with name x but I have no idea where it is. `find / -name foo` vs `find . -name foo`
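For comparison, the folder-scoped variant of such a search can also be written portably; a hedged C++17 sketch (the function name is invented):

```cpp
#include <cassert>
#include <filesystem>
#include <string>
#include <vector>

namespace fs = std::filesystem;

// Roughly `find <root> -name <name>`: walk the tree under root and
// collect every entry whose filename matches exactly.
std::vector<fs::path> find_by_name(const fs::path& root, const std::string& name) {
    std::vector<fs::path> hits;
    std::error_code ec;  // swallow permission errors instead of throwing
    for (auto it = fs::recursive_directory_iterator(
             root, fs::directory_options::skip_permission_denied, ec);
         !ec && it != fs::recursive_directory_iterator(); it.increment(ec)) {
        if (it->path().filename() == name)
            hits.push_back(it->path());
    }
    return hits;
}
```

A system-wide query like Spotlight's avoids this walk entirely by consulting a prebuilt index, which is why it is the nicer starting point for "somewhere on the system" searches.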
The man's commitment to his game, considering that he works alone, is incredible. Without using terms like "10xer" and "rockstar", he's got a remarkable level of perseverance and dedication: he already launched the game and at this point is working on features that many consider boring and a grind, all to make a polished, finished product. The fact that he documented pretty much everything on his blog is great if you need motivation or are just curious about how to make a game from zero.
Yeah, I guess the author meant more like "very similar to Unixes (unices?) I've used before", which quite possibly means 'various Linux distros'. Aside from official certification, OSX is still sufficiently different to make this mistake forgivable, and not just in terms of the UI. The filesystem organisation is unusual, to say the least.
However, just like C, any certified implementation is free to add extra behaviors and there are certain parts that are actually implementation specific, like how signal handlers behave in certain situations.
I do it this way, but it's based on my particular skill set:
First make the iOS version. Then, port it over to Java. Then, port it over to C# or maybe ActionScript3/Flash.
This way, I can recursively update previous versions as the 'best solution' to interesting problems becomes clear by the end of the 2nd or 3rd port. This gives the Objective-C/iOS version the attention it needs, and I can use the rapid application development features for each new port.
We develop our game on Mac OS X and port to Win32 and Linux. Using CMake, SDL2, and C++11, there is very little code that actually needs to be rewritten. The Windows build process is just a Python script that pulls, runs the CMake configure, compiles, and zips the latest build.
The code that is completely different on the platforms is stuff like HTTPS requests, open file dialog, create/delete folders.
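A hedged sketch of what such a CMake setup might look like (target names and file layout invented; SDL2 discovery details vary by platform and SDL2 version):

```cmake
cmake_minimum_required(VERSION 3.1)
project(mygame CXX)

set(CMAKE_CXX_STANDARD 11)
set(CMAKE_CXX_STANDARD_REQUIRED ON)

find_package(SDL2 REQUIRED)

# Shared game code plus one platform file per OS for the parts that
# genuinely differ (HTTPS requests, file dialogs, folder management).
add_executable(mygame
    src/main.cpp
    $<$<PLATFORM_ID:Windows>:src/platform_win32.cpp>
    $<$<PLATFORM_ID:Darwin>:src/platform_osx.cpp>
    $<$<PLATFORM_ID:Linux>:src/platform_linux.cpp>
)
target_include_directories(mygame PRIVATE ${SDL2_INCLUDE_DIRS})
target_link_libraries(mygame PRIVATE ${SDL2_LIBRARIES})
```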
The fact that it’s running at 1 FPS is a little disheartening – I know the GPU is fast enough. I’ve got a Windows machine with an Intel Iris Pro 5000 that runs Banished just fine, which is the same graphics hardware in my MacBook Pro. I’ve got my suspicions as to what’s going on but I have a bunch of testing ahead of me to make sure I fix the issue properly.
Did the author buy a MacBook Pro just for this purpose? I'd assume this is his personal laptop, but his "Using a Mac" section sounds like he's not a Mac user even in his free time.
The first thing the author talks about is their unfamiliarity with the Mac environment, and the game wasn't made for OS X in the first instance, which would suggest they did buy a Mac purely for an OS X port.
> Where would one have to hide to retain this level of ignorance for so long?
This comment breaks the HN guidelines by being both uncivil and unsubstantive (i.e. it adds no information). Please only post comments that are both civil and substantive.
I'm sure he'd heard that OS X had Unix underpinnings but if this was the first time he'd needed to actually touch a Mac, it was probably a relief to see how smooth it actually is. “Good deal, I can do that.”
If he's worked in the Windows world, the most likely comparison was probably Cygwin or Microsoft's old Services for Unix, both of which are much clumsier experiences, obviously taped onto a different base.
Well I sold my macbook pro because I wasn't able to build my ogre project properly. For years. Also there was some OIS (not iOS, OIS) input issue. It comes from apple force feeding Cocoa into opengl apps, or something like that, which can only be remedied by using some SDL hack.
Anyway, I don't really care anymore, I bought a thinkpad instead. Cocoa is just something I just can't even.
My experience has been pretty different. I'm not a professional developer though.
I thought so too. But in my opinion, an OS has a larger responsibility to improve inter-compatibility, instead of breaking things so developers stay loyal and you end up with Apple-exclusive software. I really wonder about the value Objective-C brings, especially since NeXTSTEP did not work out as a project. You can't always put the fault on library devs.
What boggles my mind is that OSX is a Unix underneath, so I don't understand why it would do anything differently and force developers to learn new habits. That's not how you attract devs. Apple has made a habit of breaking backward compatibility, something neither Linux nor Windows tends to do.
I don't think OS manufacturers should differentiate themselves from the competition by making even their development tools work differently. The only objective of that is to have developers stay loyal to Apple because they can't have their app running on both Windows and Mac. Not to mention I had to redo everything at each new Xcode version.
So in the end, having my project run on both Xcode and MSVC was too much time lost, so I just sold that aging laptop. Apple is just so special, and I guess I was not good enough for it.