What we learned from rewriting our robotic control software in Swift (sunsetlakesoftware.com)
94 points by ingve on Nov 3, 2015 | 83 comments



This quote is the big takeaway for me:

"We use Macs to drive these robots, with control software that was written in Objective-C. We made the decision to rewrite this software in Swift late last year, but only after a careful survey of the bugs that had impacted our customers over the years. That survey revealed that most of these shipped bugs would have been prevented or detected early in Swift-based code."

Apparently they were able to categorize their shipped bugs and prevent those types of bugs from happening again with Swift. That's a very professional process.


   That's a very professional process.
Sad, but true. Not about the Swift part (I have no opinion there) but about the analysis and review of shipped bugs.

It should really be "that's an industry-standard practice", but a huge amount of software development seems to be done without any systematic attempt to learn from mistakes.


This is probably a heretical opinion, but here goes anyway: while it's definitely great to take the time to do a full analysis of common classes of bugs, I don't think it is necessary to do so in order to have a good sense for what common issues a language like Swift solves. I don't need to evaluate my log of bugs to know that I have been bitten tons of times by having a null object or pointer at runtime when a compiler could have made sure I was checking. Languages like Swift and Rust (and Haskell and Ocaml and others) have protections for this and other no-brainer bug classes that I really don't think you need a big bug log analysis to see the advantage of.
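
For example (a minimal sketch of my own, not anything from the article - the Robot type is invented), Swift's optionals force the nil check at compile time:

    struct Robot {
        let name: String
    }

    // The parameter is Optional, so the compiler refuses to let us use it
    // without unwrapping; forgetting the check is a compile error, not a
    // runtime crash.
    func connect(to robot: Robot?) {
        guard let robot = robot else {
            print("no robot connected")
            return
        }
        print("connected to \(robot.name)")
    }

    connect(to: Robot(name: "arm-1"))
    connect(to: nil)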


> I don't think it is necessary to do so in order to have a good sense for what common issues a language like Swift solves.

That's true but when you're running a business this type of analysis makes a lot of sense. As experienced software developers we're probably all pretty wary of the "big rewrite", knowing full well that this often doesn't end up saving time in the long run but rather eats up a ton of time without delivering any new features or functionality. This analysis likely helped make the case that going forward they could expect substantial time savings, not to mention happier customers.

In any case, great article overall, very interesting.


Sure, you can see what Swift solves, based on the experience of not writing in Swift. That doesn't tell you what new kinds of bugs Swift enables, though. It's going to take years of using it to find out. You can somewhat speed that up by doing a full analysis of the bugs you get using the new language.


>Sure, you can see what Swift solves, based on the experience of not writing in Swift.

That's actually not true. You can see the trivial stuff it solves from the promo materials - you don't know the full extent of what's realistically possible to encode in the type system.

I had this thought yesterday writing C++ code - I was doing some refactoring and I copy-pasted the same identifier twice instead of writing data to two separate buffers. The case where this mattered was not covered by unit testing because it's an edge case in a tightly coupled part - it's a major chore to test that kind of code for little gain.

My first instinct was "just put a unit test there and forget about it", but on the other hand, while the buffer API is the same, the two buffers are semantically different. I could have made them two distinct types (e.g. with a template argument tag) and required the untyped input buffer to be wrapped in a type explicitly stating the data's use case - this would have made the bug obvious and the compiler would catch any discrepancy.

My point is there are plenty of non-obvious ways to prevent bugs through a strong type system.
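
To give a concrete flavour of that (a Swift sketch of the same idea rather than the original C++, with invented names):

    // Phantom type tags: the two buffers share an API but are distinct types,
    // so passing the wrong one no longer compiles.
    enum RequestData {}
    enum ResponseData {}

    struct Buffer<Tag> {
        private(set) var bytes: [UInt8] = []
        mutating func write(_ newBytes: [UInt8]) { bytes += newBytes }
    }

    func encodeRequest(into buffer: inout Buffer<RequestData>) { buffer.write([0x01]) }
    func encodeResponse(into buffer: inout Buffer<ResponseData>) { buffer.write([0x02]) }

    var request = Buffer<RequestData>()
    var response = Buffer<ResponseData>()
    encodeRequest(into: &request)
    encodeResponse(into: &response)
    // encodeResponse(into: &request)  // copy-pasted identifier: does not compile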


Well, you can start with all of the things Objective-C has that Swift does not, whose absence is likely to cause bugs.

For example, static analysis now returns nothing where it caught problems before. Not calling super.viewWillAppear() in a subclass does not raise the appropriate warning in Swift.
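
Concretely, something like this compiles with no complaint from the Swift compiler, whereas the Clang static analyzer would flag the missing super call in the Objective-C equivalent (UIViewController is the real UIKit class; the subclass here is just an illustration):

    import UIKit

    class DetailViewController: UIViewController {
        override func viewWillAppear(_ animated: Bool) {
            // Oops: no call to super.viewWillAppear(animated),
            // and Swift raises no warning about it.
            title = "Detail"
        }
    }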


Always liked

> Weinberg's Second Law: If builders built buildings the way programmers wrote programs, then the first woodpecker that came along would destroy civilization

It succinctly describes the "process" everywhere I've ever worked.


I've always sort of disliked that one, because there are very good reasons that it is difficult for programmers to write programs the way builders/engineers build physical infrastructure.

However, there are not any good reasons to avoid improving your process and output iteratively via feedback.


I always took it as describing the general way we throw piles of stuff on top of other stuff until eventually it's so unstable a woodpecker would knock it all over, rather than as an accurate indictment of software practices, which are of course different to building: software is part science, part art and part prayer.


> software is part science, part art and part prayer

I've never heard that phrase before—is it yours? I really like it.


As far as I know, I've not read it anywhere :).

Google thinks it's original to me, so I've not accidentally stolen it unknowingly.


It's all yours, my friend.


>there are very good reasons that it is difficult for programmers to write programs the way builders/engineers build physical infrastructure

Mostly we just haven't been doing it as long.


>> write programs the way builders/engineers build physical infrastructure

> Mostly we just haven't been doing it as long.

This is a popular analogy (among non-programmers) but I have more recently been countering it like this:

It's generally predictable how long it will take to build a house, because many have been built, and they're all more or less the same. Sure they have different layouts, number of windows, etc, but essentially it's the same materials, same tools, and same methods.

Most programs written are not similar in the same way two houses are similar. They are more comparable to the way a house is different from an airport terminal, or a water treatment plant is different from a golf course.

Once you've built many different types of programs, you get a bit better at estimating, but each new type of program poses brand new challenges. A builder used to building small houses from wood is going to be pretty bad at estimating how long it will take to build a missile silo from concrete, and even after building both, will still not be able to come up with a very good estimate for the time to build a railway line between two cities.

Also keep in mind that typically, programmers don't build the same (or even similar) program more than once: unlike builders, we have copy+paste.


builders/engineers are not as perfect as some people seem to think.


Or inverted: There are very good reasons why it is difficult for builders/architects to build physical things the way developers write programs.


Yes that is equally true, but with different implications.


Yes, definitely! I should have said:

That's a very professional process, and it should also be an inspirational process.

Just because a lot of software development is haphazard doesn't mean that it has to be! Who cares if you use Swift or Ruby or C or Lisp, ignore the details - we should see the good, mature, "grown-up" things that other people do and copy & modify them!

Jack Ganssle (an embedded systems guru) has a great interview about being "a grown-up engineer" here: http://embedded.fm/episodes/2014/5/27/53-being-a-grownup-eng...

It applies to all software and hardware systems, not just embedded systems.


> That's a very professional process.

There's nothing professional about choosing the wrong platform for your application and then jumping into a new proprietary language in the hope of fixing things with a rewrite.


Have you looked at the medical devices in a hospital, recently?


> Have you looked at the medical devices in a hospital, recently?

Are you trying to imply they are Apple devices and/or becoming Apple devices? Because that's entirely the opposite of what's going on at the several local hospitals (admittedly all under the same organization) near me.


They are implying that medical devices are notoriously bad at being proprietary/closed. Hospitals would be much better off if the devices could talk to each other but it isn't feasible as-is because device manufacturers want lock-in. The current situation leads to issues like alarm fatigue where nurses effectively ignore problems due to too many devices. If they were centralized then there could be one management system that intelligently alerts the nurse of any issues.

We have a major effort at Hopkins going on right now to get around this. See this article for more info: http://hub.jhu.edu/2015/10/19/hopkins-microsoft-patient-safe...


Let's not forget that Apple announced back in June that Swift would be open sourced by the end of 2015; it might end up being a holiday present for developers.


There's also nothing professional about irrational hatred of a platform, or having the need to complain about another's choice of platform.


What's irrational about disliking an environment hostile to developers? When you do robotics, you want full control over the software, so the natural fit is some Linux variant, not OS X or iOS.


Hostile to FOSS developers you mean.

Many of us don't have any issues with commercial vendors.


Many of you have no issues with the lack of quality in your work.


Actually, I am of the opinion that all types of software should be subject to the same legal penalties and quality control that other industries have in place.


I could say that far more about FOSS developers. Many of them operate under the idea that they bear no responsibility for the software they write, and as such, have no impetus toward quality.


Yeah, no. I've never bought the "hostile to developers" argument. Try again.

"When you you do robotics, you want full control over the software"

And OS X gives me this.


Maybe you don't realize that "software" includes the kernel. Is it even possible to do a real-time fork of it like people did with RTLinux, RTAI and Xenomai?

Of course it's not, because OS X gives you shit and you invested so much in it already that you have to convince yourself that it tastes good.


I've moved to Swift as well. It's much better, for much the same reasons as stated in the article.

In particular, the stronger typing is useful. Code that uses the Any type (or Object if you're in C#) tends to look horrible and have runtime errors, or else what class a value really is has to occur to you as you're coding, which sadly doesn't happen 100% of the time for me.
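
A tiny illustration of what I mean (made-up values):

    // With Any, the element's real type lives only in your head and mistakes
    // only show up at runtime (or, worse, silently do nothing).
    let mixed: [Any] = [1, "two", 3.0]
    if let s = mixed[0] as? String {
        print(s.uppercased())        // never runs: mixed[0] is actually an Int
    }

    // With a concrete element type, the same mistake is a compile error.
    let words: [String] = ["one", "two"]
    print(words[0].uppercased())
    // print(words[0] + 1)           // does not compile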

I also find it's easier to not do [obj msg] everywhere. It just seems unnatural to have to go to the beginning of a token and put a [ in front. Much easier with a more ordinary syntax that's just obj.msg. ObjC had some of this, but mixing the two made it even weirder.

I did come across a compiler bug though, and that tends to take a long time to be sure of (don't want to write a bug report and find out it was your own code all along). I guess it's just a question of time before that sort of thing is stable.

Another pet peeve of mine is separating classes into .h and .m. I have to do this in c++ code as well, and I don't like it. Much better just to let the language take care of it for me. The interface is pretty clear anyway, so why separate it and have another file to keep consistent? I guess it's mainly a legacy issue.

One thing that needs to be done is that Xcode needs to be able to refactor Swift. Hopefully it's around the corner.


I honestly don't get the hate toward header files. They're not great, but are they really that bad?


Yes, they are.

1) They add fragility. It's all too easy to take a dependency on code without explicitly stating it: I'm using objects from "b.h" by including "a.h", which includes "b.h", but I don't realize it.

2) They bring extraneous data into the code. A #include says more about the structure of your files than it does about the logical relationships between objects. Where to get things for the build is a job for the build config/system, not your business logic.

3) They're easy to use poorly. From your first circular include to the time you have to sort out a crawling build brought on by a bloated include tree, you'll swear there's gotta be a better way.

They are good for some people though; there's a small cottage industry that's built around resolving the myriad irritations headers bring.


I guess you never used module based languages.

Why write things twice, when compilers are able to export text definitions from modules and IDEs can show them as well?


Maybe I am misunderstanding, but I thought the point of headers is to avoid writing things twice (or more).


In the header file you write the function interfaces and class definitions.

In the implementation file you have to repeat those again, plus the actual implementation.

In many languages with module support, you just mark whatever symbol public and that's it.
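
In Swift, for instance, there is a single definition and visibility is just an annotation on it (hypothetical example):

    // One place to write the declaration; `public` is all it takes to export it
    // from the module, and internal members simply never leave it.
    public struct Motor {
        public let stepsPerRevolution: Int
        var currentStep = 0   // internal: invisible to users of the module

        public init(stepsPerRevolution: Int) {
            self.stepsPerRevolution = stepsPerRevolution
        }

        public mutating func step() {
            currentStep = (currentStep + 1) % stepsPerRevolution
        }
    }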


OK, you meant specifically the source/header split on classes in languages like C++. Yeah, there is definitely redundancy there that gets eliminated by modules.

I also think I may have been conflating "header" with "include file" to a greater degree than is appropriate.


I have. And while nice, I honestly don't see what's so bad about writing headers. I like being able to export a set of headers and a binary library.


Why write them if tools can do it automatically for us?


Because the tool may include more information than is needed or wanted. The tool may also not be able to detect (without being told) that some information should be included.


I fail to see what one can do differently.

Public types are visible, which are the only ones that matter to the module user.

Many languages with modules support public and private comments as well.


They're not bad. Just redundant.


And redundancy _is_ bad.


Redundancy is not always bad. It allows concurrent evolution in your code, specialization, and info hiding, because each copy is in a more specific context and you know just what you want from it there.

Related: http://yosefk.com/blog/low-level-is-easy.html

In Objective-C writing headers by hand gives full manual control of what your library's users get to know about your class, including comments. Internal things just don't appear in the first place.

C++, of course, makes it impossible to hide anything from your users because they need to allocate your class themselves, you need to put all your private ivars and methods in the public class definition, and you can't add ivars to the class later. But that's its problem.


> C++, of course, makes it impossible to hide anything from your users because they need to allocate your class themselves, you need to put all your private ivars and methods in the public class definition, and you can't add ivars to the class later.

On the other hand, actual practice: http://www.gotw.ca/gotw/024.htm


Oh, but Objective-C does the right thing using the standard and simplest way to write it, and you can subclass other people's classes and everything!

http://stackoverflow.com/questions/12522053/what-is-a-non-fr...


Depends. Other times we call it "backup". Or "failover".


I'll let you in on a secret: I don't trust any of the code I write. I've been programming for decades, and I still keep making the same stupid mistakes.

I'd love to work with this guy. Overconfidence is not a virtue in developers.


Unfortunately, it is in many developer job interviews.


The nature of job interviews means that you have to be like that. (Though I have heard from Sedes on here that the culture is more humble there.)


I did work with him for a little while. He's good.


This is not a fair apples to apples comparison. I am certain that had they rewritten the application in Objective-C from the ground up, taking into account all they learned from the mistakes of the past, they would have ended up with similar or even better gains (by developing in a familiar environment).

It is natural, almost unavoidable in fact, that an application developed and continuously enhanced over many years will end up in a bit of a mess, unless the approach is very disciplined - which it was not in this case (inadequate unit tests for one example).

This was not a weakness or failure of Objective-C, but of circumstances and approach.


It's not apples to apples, but it sounds like both quality improvement and code size reduction were gained by leveraging the Swift type system instead of relying on runtime type checks in NSArray-based code. Would that kind of thing be fixable in an Objective-C rewrite without writing more code?


Honestly, the syntax is a lot simpler, and rewriting in Objective-C would in some cases probably get bulkier if you are transitioning to typed collections:

    NSMutableDictionary<NSString *, NSString *> *dictionary = [[NSMutableDictionary<NSString *, NSString *> alloc] init];
compared to the Swift version:

    var dictionary = [String:String]()


>the syntax is a lot simpler

and the flexibility is less, too. The expressions are far from equivalent. The Objective-C expression specifies what kind of dictionary interface and implementation will be there, whereas the Swift version says only that there will be Swift's standard dictionary. The ability to manage the implementation may, for example, start to matter once you're off the beaten path - a large number of items, high concurrency, very specific requirements on life cycle, memory or performance, or a very specific type/set of keys you want or have to exploit. In the mainstream case, though, the general standard implementation is probably just fine. Everybody chooses their own poison.


you could just say

    [NSMutableDictionary dictionary]
instead of -alloc -init on the annotated class.


That's true. The RHS isn't the most obtrusive part compared to the type declaration:

    NSMutableDictionary<NSString *, NSString *> *
Imagine having to pass the typed dictionary into a method: now you've got to add the type declaration to the method signature in addition to where it's initialized. It's brutal, but also worth doing if you stick to ObjC.


Exactly what I think every time I read a "we re-wrote it in X and it's much better thanks to X".

We do it about every 3-5 years, and it seems to coincide with new CTOs.


This article points to the Lyft article where they also claim that rewriting in Swift reduced their code base:

"Over the years, the original version of Lyft had ballooned to 75,000 lines of code. By the time the company was done recreating it in Swift, they had something that performed the same tasks in less than a third of that."

If you're interested in learning Swift, I have a small project where I collect all the good Swift URLs that I find:

http://www.h4labs.com/dev/ios/swift.html

Also, note that it's easy to start using Swift in existing Objective-C code without needing to rewrite. It takes a few minutes to set up a project to use a bridging file. If you've got a lot of Objective-C, you can extend those classes with Swift by using extensions, then migrate a few methods at a time.

    extension MyObjectiveC_Class {
        func aSwiftFunc() { // Can be called from ObjC
            // ...
        }
    }


> The Swift version of our Objective-C application, with the same user interface and features, is only 39% as large

This makes a lot of sense. In Objective-C, chaining too many method calls together gets ugly and you end up breaking it into multiple lines of code for your own sanity. Take the code below; autocompletion is guaranteed to break on you in the Objective-C version.

Objective-C:

    [self.view convertPoint:apoint toView:[[UIApplication sharedApplication] keyWindow]];
Swift:

    view.convertPoint(apoint, toView: UIApplication.sharedApplication().keyWindow)
Objective-C, refactored:

    UIApplication *application = [UIApplication sharedApplication];
    UIWindow *window = [application keyWindow];
    [self.view convertPoint:apoint toView:window];


Chances are the window you want is the window the view is in (which is probably the same as your app's window); UIView has a handy property for that.

  [self.view convertPoint:apoint toView:self.view.window];


      [self.view convertPoint:apoint toView:[UIApplication sharedApplication].keyWindow];
No reason to skip the other places where dot notation would be useful.


OK, my example was a bad one because keyWindow is a property. But I find myself breaking up lines of code over and over again just because it can't figure out how to autocomplete amid the bracket hell. And I leave it broken apart, because if I don't now, a later refactoring likely will anyway.


I think Swift is great as long as you are squarely in the Apple ecosystem, which was true for this use case... but the fact that they were using Macs to drive robots was curious to me. Is there some advantage in robotics to using Apple/OS X? Why not Linux, BSD, or some other embeddable operating system and off-the-shelf hardware?


It is explained here:

> We originally built the software for these systems in a cross-platform manner, targeting Mac, Windows, and Linux using a C++ codebase. About eight years ago, we realized that as a small team we needed to focus on one platform in order to accelerate development and take on our much larger competitors.

> After a lengthy evaluation of the Windows, Mac, and Linux development environments, we chose the Mac and Cocoa (despite none of us having much experience with the Mac before that). We rewrote our entire control software in Cocoa seven years ago, and looking back I feel that it was one of the best decisions our company has made.



Just guessing, but I don't think this system controls the robot with the computer. It may command it, but the closed loop control is probably not happening on the Mac. At that point your interfaces have much looser requirements.

That segmentation is very similar to hobbyist 3d printers.

Looking at the solutions they sell my guess is that the system is expensive enough that adding an iMac rather than some other consumer PC doesn't change cost too much.

At that point they probably picked the platform they were most comfortable developing on. Going back to the first blog post shows that the author prefers Cocoa for developing GUIs, and the bio states that he is CTO of the company.

Again, all speculation, but that's how I'll rationalize it until he tells us differently.


Just a guess, but with Apple, you trade cost-effectiveness for robustness and reliability. By using Macs as the computer and OS X as the UNIX-of-choice, they are able to eliminate entire classes of errors related to drivers, hardware-compatibility, and system configuration. Apple knows its hardware very well and OS X has (mostly) sane defaults, but I agree it is an atypical choice.


They have two systems listed on the site and both are turnkey packages including an iMac, by the looks of it. OS X is mostly nice to program for and Apple makes nice hardware. I suspect they could have written the software using C# and Windows with a custom PC build somewhat easily, but UNIX is a nice and stable base OS, so I can see why they might have chosen OS X.


> Is there some advantage in robotics to using Apple/OSX?

Don't know about robotics, but OS X is architected to be able to handle real-time audio. So I can certainly imagine (but don't explicitly know) that it has better responsiveness guarantees.


My guess is that they run the robotics software on the same machine they do their design work.


Do we really care that much about LoC? I understand that more code means more opportunity for bugs, but I'd be far more concerned about performance, battery usage, code quality metrics, etc, not to mention maintainability and codebase stability, access to talent who understand the language/frameworks/idioms, etc.

Don't get me wrong, I'm sure Swift is a big improvement in a lot of ways, but LoC is a weird measure to focus on, IMO... though, I can't say I blame them. LoC is easy to compute. Turns out measuring actually useful things can be rather hard...


Yes.

  Here is a list of activities that grow at a more-than-linear rate
  as project size increases:
  - communication
  - planning
  - management
  - requirements development
  - system functional design
  - interface design and specification
  - architecture
  - integration
  - defect removal
  - system testing
  - document production
Code Complete section 27.5. Size in this section is referring to LOC.


Smaller code bases tend to make it easier to find things. Also, I find that in my old ObjC code there are a lot of lines to do one thing, often related to getting some object, casting it, and then running some function on it - the kind of thing that should really be one action.
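
For example, the fetch-cast-check-call dance often collapses into one optional-chained expression in Swift (illustrative names only):

    let userInfo: [String: Any] = ["device": "spectrometer"]

    // The literal translation of the old ObjC style: several lines of
    // fetching, casting and checking before the actual call.
    let raw = userInfo["device"]
    if let name = raw as? String {
        print(name.uppercased())
    }

    // The same thing as a single expression.
    print((userInfo["device"] as? String)?.uppercased() ?? "unknown")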


The bulk of the LOC reduction was the elimination of header files, I'm sure, plus the fact that they wrote something a second time, this time knowing what the picture on the box was.

Do I believe Swift is better? Sure.

Do I believe they eliminated some bugs in the original code? Yep.

Do I believe they introduced a whole whack of new bugs they now don't know they have? Most definitely. That's what happens when you re-write software (http://www.joelonsoftware.com/articles/fog0000000069.html).

I've been waiting to write Swift in production until Apple stops making breaking changes to the syntax. Running a converter over a million-line codebase every few months and then checking it in is simply not a good answer. I think we're either there, or very close. That said, I'm definitely, super double not rewriting large swaths of code. There's no value there. New stuff? Totally.


A million LOC is a lot of work to create. Any chance you were exaggerating?


I joined a startup recently and the product was over-engineered by a huge amount. I think I could have rewritten it in around a tenth of the code. The result makes it far harder for new devs to get up to speed with the application, and bug fixes take far more effort as there is so much more code to read through.


Is about 200/3700 words really 'focusing on' LoC? I didn't really feel like the OP stressed that point too hard. I'd also argue that less code is (generally) more maintainable.


Good point, there's a lot more to the article than that.

Unfortunately, it's the only objective measure offered.

Where are the defect rates? Coverage in unit tests? Time to delivery? User satisfaction measures?


Fair! He does mention improved throughput and responsiveness, but I don't see what metrics they're using for that.


I'm sort of interested in what their migration process was like.

Particularly because I have seen so many HN posts with "written in X, converted to Y" success stories. Most companies just can't drop everything and do a code conversion; they need to do it incrementally. It seems with Swift this should be straightforward, but maybe it's not (I have to imagine it's easier than going from X to Golang, which seems to be in vogue)?

IMO, I would have found it far more interesting to hear about the migration than about why Swift is superior.



