Makes total sense. Once you've figured out how coding works, you can do any stack, any framework, any language.
It bothers me how job ads tend to focus on various buzzwords. I'm quite sure you're better off hiring someone with good fundamentals who can learn your stack, rather than someone who is a novice but has some experience in the stack.
I started off writing financial trading code in C++ / C#. It wasn't a huge leap to doing Android and iOS. The principles are the same:
- Some kind of OO. They are all pretty similar for most purposes. C++ is maybe a bit more intricate. Similarly, some kind of functional/lambda.
- Some way to access a database. Plenty of ORMs or ways to do SQL if you need that.
- Some GUI organisational principles. WinForms, WPF, Qt, Android, iOS, whatever. They tend to have things like "only draw on the main thread" and a pile of APIs for organising the screens, buttons, and so forth (see the sketch after this list). But if you've done one, you're not going to need a course to do another.
- StackOverflow will tell you everything that's pure translation. Things like {} dicts in Python.
- Get a brief tutorial for each new language/framework. They're all over the web, and the main point is just to highlight the features.
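To make the "only draw on the main thread" point concrete, here is a minimal sketch in Python with tkinter standing in for any GUI toolkit (the data and names are made up for illustration): a worker thread hands its result to the UI thread through a queue instead of touching widgets directly.

    import queue
    import threading
    import time
    import tkinter as tk

    # tkinter stands in for any toolkit; the pattern is the same on
    # Android (Handler / runOnUiThread) and iOS (DispatchQueue.main).
    root = tk.Tk()
    label = tk.Label(root, text="working...")
    label.pack()

    results = queue.Queue()  # worker -> UI-thread mailbox

    def background_work():
        time.sleep(2)          # pretend this is a slow network call
        results.put("done")    # never touch widgets from this thread

    def poll_results():
        try:
            label.config(text=results.get_nowait())  # safe: main thread only
        except queue.Empty:
            pass
        root.after(100, poll_results)  # check again shortly

    threading.Thread(target=background_work, daemon=True).start()
    root.after(100, poll_results)
    root.mainloop()

The names change per platform (Handler, DispatchQueue.main, Control.Invoke), but once you've internalized the pattern, each new toolkit is a lookup, not a course.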
A technical recruiter, having discovered that the ways of Unix hackers were strange to him, sought an audience with Master Foo to learn more about the Way. Master Foo met the recruiter in the HR offices of a large firm.
The recruiter said, “I have observed that Unix hackers scowl or become annoyed when I ask them how many years of experience they have in a new programming language. Why is this so?”
Master Foo stood, and began to pace across the office floor. The recruiter was puzzled, and asked “What are you doing?”
“I am learning to walk,” replied Master Foo.
“I saw you walk through that door,” the recruiter exclaimed, “and you are not stumbling over your own feet. Obviously you already know how to walk.”
“Yes, but this floor is new to me,” replied Master Foo.
Though I note that in real life the recruiter would never be enlightened. He is beholden to the HR departments, and the HR departments have settled on a system which works for their purposes.
Cute. Although there's some truth in that parable, back in the real world, experience in a specific programming language actually does count for something:
1) Learning the intricacies of a language takes some time before you're truly proficient in it.
2) Different languages demand a different approach to problems, a different style of forming the solution. A programmer with 10 years of experience in Ruby and 0 years of C is less desirable for a C programming position than a programmer with 5 years of experience in C. Though the Ruby programmer will be quick to pick up Python, he/she never had to deal with manual memory management, pointer arithmetic, and all the pitfalls that come with those.
If you're hiring someone to do something like implement a new machine learning algorithm or a new compiler optimisation, they're not going to just pick that stuff up off Stack Overflow, are they? Sometimes you need specialist knowledge.
Most of the time you know when a specialist is needed. Also, the specialist's skill tends to be part of the job description, along with the stuff that is unnecessarily restrictive.
You might come across a ML job advert that says the person needs to know Matlab. Well, what if you're a Numpy or R ML guy? It's quite clear to me that such a person is just as well qualified.
If you're looking for a compiler guy, you want someone who understands those optimisations on a theoretical level, ie abstracted away from the concrete language. The toolset may be completely different for each language, but people with equivalent fundamental knowledge should not be separated just by what tool they happened to use.
Do you really think any random programmer can master any given field of computer science in just a few months of starting a job? You could take a compiler engineer and get them doing cryptography, or a cryptographer and get them to write a compiler? No need for any experience to work in either of those fields? Just some JS or Android experience would be fine and you can use Stack Overflow to work out the rest of the details?
Agreed. I think OP's advice only applies to programming languages and frameworks, but not to concepts like machine learning, cryptography, compilers, algorithm design, distributed systems, etc. For those, actual experience does matter.
That's not correct. Who do you think would be able to implement a machine-learning spam filter in a month? An NLP PhD or a regular Android developer?
You don't need someone to implement the algorithm, you generally need someone to wire together APIs between your application and machine learning libraries. I'd rather take someone who's been hacking around on Django for a year than a freshly minted NLP PhD for adding spam-detection for messages inside a Django app.
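As an illustration of what that "wiring" usually amounts to, here is a minimal sketch assuming scikit-learn is available; the training data and function names are made up, and a real app would train on its own message history:

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.pipeline import make_pipeline

    # Hypothetical training data: (message, is_spam) pairs from your own app.
    messages = ["win a free prize now", "meeting at 3pm?",
                "cheap pills online", "lunch tomorrow?"]
    labels = [1, 0, 1, 0]

    # The "wiring": a stock vectorizer + Naive Bayes pipeline from the library.
    model = make_pipeline(TfidfVectorizer(), MultinomialNB())
    model.fit(messages, labels)

    def looks_like_spam(text):
        # Called from the app (e.g. a Django view) before saving a message.
        return bool(model.predict([text])[0])

    print(looks_like_spam("claim your free prize"))  # likely True

None of that requires inventing an algorithm; it requires knowing the app, picking sane defaults from the library, and feeding it your own data.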
What if they spent the month finding out how some open source implementation worked, including what experts thought of it, and wired it up for your needs?
You know, at the bottom of your stack someone, somewhere, needs to know a subject well enough to implement things from scratch. If what you are doing hasn't been done before, you can't just google it or find an open source library. Even if it has been done before, these things are often barely written down.
Do you think when Google decided to create the V8 JavaScript JIT they hired a couple of Django programmers and asked them to google 'how to write a JIT'? Or do you think they specified a certain level of experience in writing JITs on the job advert?
> You know, at the bottom of your stack someone, somewhere, needs to know a subject well enough to implement things from scratch. If what you are doing hasn't been done before you can't just google or find an open source library.
That really depends on what "from scratch" means. Any real system contains enough detail that nobody in the whole world understands all of it. Luckily, people who understand specific pieces will produce a part for others to use. Often the only thing that hasn't been done before is one particular mashup of existing parts. Then the skill of coding is like the skill of cooking: using judgement to assemble well-known parts into a new whole.
> Do you think when Google decided to create the V8 JavaScript JIT they hired a couple of Django programmers and asked them to google 'how to write a JIT'? Or do you think they specified a certain level of experience in writing JITs on the job advert?
That wasn't the point. I'm not saying that any programmer can do any programming job. But that the selection criterion (language, framework) is in general wrong. If you have a guy who did a JIT for Java, do you disqualify that guy from a JIT for Go job? I don't think so.
Competence in JS is very different from most languages the average dev is exposed to, and decent Android is very far down the Java Super-OO rabbit hole.
What I have noticed in the first paragraph:
"It seems that some schools have caught on and are teaching students the ins and outs of being a mobile developer but for the most part graduates seem to only have a course or two of mobile dev under their belt."
Am I the only one who finds that weird? During my 5 years of college education in "informatics" I never had a "programming for ..." course. We were taught how to tackle problems, algorithmics, patterns and practices, formal languages, AI, distributed mechanisms, and so on. We had to get that hard concrete experience on our own. IMHO having courses like "programming for Android" or "Web/JS programming" is a great way to mass-produce monkey programmers.
Totally agree with the "mass-produced monkey programmer" comment. I've seen it in practice, when interviewing new graduates.
The grads from good (not necessarily great) schools tend to have solid fundamentals, and even without lots of "buzzword" coursework, they come out better. The grads from mediocre schools often have more buzzwords on their resume, but often lack solid fundamentals.
Of course, that isn't written in stone. We have several great new hires from schools that are typical regional universities.
Worse yet, my college is offering a Mobile App Development class under Computer Information Systems (as opposed to Computer Science, which I am in) that uses App Inventor. And they're charging $240 per credit for this 4-credit class!
> IMHO having courses like ... is a great way to mass produce monkey programmers.
Definitely. I took a RoR elective, and I wish there had been a Lisp/FP elective I could have taken. I'm pretty sure I only had one class where first-class functions were even mentioned.
Programming is really hard, much harder than regurgitating facts about AI or distributed mechanisms or 'patterns and practices' while sitting in an exam hall. To do that, all you have to do is memorize your notes and repeat back what your lecturer told you, and you'll pass.
Programming is a barrage of 'problems'; you don't need a course on 'problems' if you actually just programmed.
When did you have that education though? I think it's very useful today.
There are relevant design decisions that go much further than just a touch screen vs a mouse. You don't want to drain your users' battery, or drain their wallet by going over their data limit, so you make different decisions about what part of the workload to keep on the client and what to move to the server.
When I saw this, I thought it was going in a different direction. I personally despise mobile development because so little of it is elegant writing of code. A lot of it has to do with learning the bizarre ins and outs of thoroughly fragmented targets, as well as documentation that's sometimes very lacking. I had assumed any developer who really loves coding (rather than debugging or trial-and-error) would avoid mobile development, leaving the talent pool very lacking.
Personally, I am on iOS, and while iOS has gone through a number of occasionally frustrating framework changes, it has never felt particularly fragmented.
My impression is that this has also changed a lot on Android.
Personally, I also think that, in the beginning, many parts of iOS development felt more modern than OS X's, and that iOS was the platform where things were maturing faster.
I would say that mobile development is like development on most other platforms these days. You have an increasingly large choice of languages, and mobile CPUs are certainly powerful enough that you can build very complex apps, where elegant code is an advantage.
The challenge is to keep things simple from a user perspective.
It really is (still) a wonderful domain to develop in, and I doubt there is more debugging or trial-and-error than on any other platform!
I worked for a company that embedded Linux on a custom device about 15 years ago. That was a horrible debugging, trial-and-error experience. The language was C, chosen as a compromise because of the hardware specs, but not an obvious choice for the applications we built. Certainly not today. Most of the development team was frustrated with the development process, and I actually left the company before the product was put into production.
So I think I understand your line of thought, but I don't recognize it in today's mobile development.
I can speak for Android; I have no iOS experience. Yes, due to peculiarities of Android (OS fragmentation, the multitude of devices, etc.) you can probably expect more duct tape than elsewhere.
On the other hand, due to limited resources, libraries tend to be as they should be: lightweight and modular. None of these mammoth frameworks enterprise devs have to cope with. No AbstractSingletonProxyFactoryBean.
I have similar sentiments for similar reasons, after having to write and support apps (iOS/Android/web) while working in a research lab. However, I realized I was much more likely to enjoy this when playing around with different neural hardware devices, for all the stuff I want to help build on top of them and automate away: having to type with my hands most of the day, or wanting to look up and engage with information without manually querying for it. That's difficult or annoying to do to the degree I want when I'm out and about with a locked-down phone, compared to a laptop or desktop; a neural device under a hat on a winter day would be more socially acceptable, since most people would be unaware I'm doing anything at all.
Though I'm more motivated by what I could theoretically do [faster|more comprehensively|repeatably] with code than by code in and of itself, and occasionally I like acting on that. Having x beamformers targeting different brain regions that I can customize quickly, without interfering with the timing of signal processing, without buying or making more analog hardware, and without wasting CPU or memory beyond my own tolerance: that is cool. Whether I use structs vs. some class-inheritance scheme, this or that implementation of shared pointers, tabs or spaces… about that in and of itself I couldn't care less, and I find myself annoyed when I have to deal with such talk.
Also, most of mobile development seems to be writing another UI for sensors or a SQLite database, or sometimes another game like the plenty already out there. Why would anybody think of that as exciting?
Yeah I did desktop apps for a while (in .NET), but they can be just as repetitive and non-inventive. In my case they were mostly shelf-stacking / syncing apps for internet shops.
I simply don't believe that a domain (understood broadly, i.e. desktop/mobile/web) in and of itself dictates whether projects are original and interesting.
Not until you work on something really specialized, like AI etc.
Good point; I noticed that too. Especially when it's RESTful. People seem to have forgotten about RPC protocols, where it's the operation, not a resource, that is most important.
Personally, my typical web app is not CRUD, it's merely an R. And I don't write those daily. Much more often I write network or system daemons.
REST is a shitty abstraction. If you happen to need to do anything with a resource, you suddenly need to reinvent RPC, except that error reporting will not be standardized and will most probably cram two different levels (transport and procedure execution) together.
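To illustrate the two-levels complaint, a minimal sketch using the Python requests library (the endpoints and error shapes are hypothetical): in the RPC style, a transport failure and a failed procedure arrive through two separate, well-defined channels, while in ad-hoc REST one status code has to carry both meanings at once.

    import requests

    def rpc_call(url, method, params):
        # JSON-RPC style: the two error levels stay separate.
        resp = requests.post(url, json={"method": method, "params": params})
        resp.raise_for_status()            # level 1: transport/HTTP failure
        body = resp.json()
        if "error" in body:                # level 2: the procedure itself failed
            raise RuntimeError(body["error"]["message"])
        return body["result"]

    def rest_delete(url):
        # Ad-hoc REST: one status code carries both levels at once.
        resp = requests.delete(url)
        if resp.status_code == 404:
            # Missing route? Missing resource? Already deleted? You can't
            # tell without parsing whatever error body this API invented.
            raise RuntimeError(resp.text or "not found (for some meaning)")
        resp.raise_for_status()

Every REST API ends up inventing its own version of that error body, which is exactly the non-standardized, reinvented RPC being complained about.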
I used the word "personally" to clarify that I was expressing my own preference and not making an absolute generalization.
What I generally do is write web apps, and much of my enjoyment comes from figuring out how to creatively use other people's libraries and to write very DRY code.
The libraries I use abstract browser fragmentation away from my presentation layer, so I don't have to do endless run-debug-fix loops on the front end.
At this point, the code I write for each new project is 90%+ unique to the project and very thin, so I get to do the fun part of releasing and iterating it.
You're still crapping on other people's stuff. Honestly, what value did your original comment have other than to potentially make you feel better at the expense of others?
As someone who occasionally does some hobby Android and WP coding when time allows, I find it quite interesting to see this type of attitude.
Here in Germany, many IT companies seem to disregard skills that aren't listed as part of a previous job, regardless of what one would be able to show at the interview, assuming HR didn't filter out the CV first.
So no chance of switching tech stacks unless one is able to land in a greenfield project that is taking off, inside the same company, before doing the jump.
Wow, so there are situations where a bad programmer with the right CV keywords would be chosen over a good programmer with the wrong keywords on the CV? I thought we were past that now.
Worse than that is when a developer has side projects that may show real expertise that was neither required nor used in previous jobs. Considering only the skills endorsed by previous jobs when making a hire is a sure way of handicapping your company!
Once an HR drone explicitly told me that my side projects had zero value for the application, even though they matched what was being searched for.
I can see how it happens at the largest corporations, because they have staff to filter applications they don't understand, or let recruiters do the same work.
As long as someone understands the platform you can do this. I've seen way too much iOS code written by smart people new to iOS with no clue how to structure and ship a successful mobile app. This is likely true for most platforms. A smart developer can learn in time but usually you don't get that time so it's better if at least someone has done whatever it is before. I am sure I could master Android in time but if I started building an app on day 1 it would likely be terrible.
The article addresses this. They have experienced developers on-hand to assist and ensure standards are maintained. Getting the time to learn is a management problem, not a developer problem.
I've taught C# and the .NET framework to several teams over the last two years. The good developers pick it up without much problem. But we usually start with several sprints of co-located group programming (for complete teams learning the stack).
Agreed, it's important that the entire team understands when someone is learning a new skill set or language. We try to allocate more time in our sprint not only for the developer who is learning but also for another developer on the team to help ramp them up.
It takes a bit of working with the PM to determine what is a good task that isn't super time sensitive but it's typically a solvable problem.
It seems like everybody sharing their interview experiences says they all use the same Google-wannabe standard: whiteboarding, implementing an advanced data structure from memory, and rapidly solving CompSci homework questions.
I'm no CompSci grad, but I have pretty broad experience in many languages/stacks. I have confidence that I could design and implement a reasonably complex distributed architecture if requirements called for one. Yet I'd look like a complete asshole if asked to whiteboard an R/B tree from memory.
It's kind of sad that a person's worth can be reduced to such a useless metric.
HR has no ability to objectively judge technical talent, so they follow the latest 'flavor of the month' interviewing strategy.
The suits are heavily influenced by sources like Business Insider that repeatedly parrot advice about avoiding 'false positives at all cost.'
Meanwhile the tech specialists who should step in and call BS won't, because they're too arrogant to admit their own hindsight bias: judging new candidates by their current level of ability and specialized knowledge, rather than the level they themselves were at when they were initially hired.
It seems like the industry is dead set on forcing the perception that software development is hard science, as in: everything can and should be described in terms of fundamental data structures and algorithms. The reality is that software development is about 95% art and 5% hard science.
The vast majority of clever algorithms and data structures can be implemented in less than 100 lines of code each. What accounts for the other millions of lines of code in a system such as the Linux kernel?
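For a sense of scale, a classic "clever" structure like union-find with path compression fits in well under 100 lines; here is a minimal Python sketch:

    class UnionFind:
        # Disjoint-set with path compression and union by size.
        def __init__(self, n):
            self.parent = list(range(n))
            self.size = [1] * n

        def find(self, x):
            while self.parent[x] != x:
                self.parent[x] = self.parent[self.parent[x]]  # compress path
                x = self.parent[x]
            return x

        def union(self, a, b):
            ra, rb = self.find(a), self.find(b)
            if ra == rb:
                return
            if self.size[ra] < self.size[rb]:
                ra, rb = rb, ra              # attach the smaller tree
            self.parent[rb] = ra
            self.size[ra] += self.size[rb]

The other millions of lines in something like the Linux kernel are the part the whiteboard interview never asks about.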
Mobile devs might be better versed in the UI layer, since they would be more familiar with it. You could end up not needing a "photoshop-designer".
We've used good developers on iOS projects, and while they would do OK with actual logic, they didn't do well when it came to Core Data or UIKit. It's not an education problem, but an interest issue. Most of them used Androids; the ones with iOS devices really didn't care much about UI.
That's a valid point. Although, we hire developers who are interested in becoming mobile developers. If you want to succeed as an iOS developer, Core Data and UIKit are essential to learn. If the person isn't interested in doing the front end then mobile dev probably isn't for them. That should come up in the interview process.
Small nitpick, but I would be worried about a mobile development place that thinks the field has only been around 8 years. Have they not heard of J2ME, Symbian, Windows CE, etc.?
I do agree with the idea of hiring decent engineers and just training them. My only concern would be that without prior mobile experience they might not know that they don't like it; it can be quite painful getting code to work nicely on a large collection of different hardware.
That is a pretty small nitpick you've got there ;)
It's a bit like saying "don't they know human flight existed before the Wright Bros." or "don't they know cars predate the Model T?"
Of course, but the iPhone (and subsequently Android) presented a gigantic turning point in mobile development, to such a degree that mobile development can largely be categorized into "pre-iPhone" and "post-iPhone".
Pre-iPhone mobile development was a vanishingly small industry compared to today, and it used technologies that have largely disappeared (see: J2ME, Symbian, Windows CE).
So yeah, maybe they could've said "mobile development in the modern context centered around multitouch-centric interfaces using soft keyboards has only been around 8 years", but that's kind of a mouthful, and unless we're trying to write an accurate history of mobile software, not particularly relevant to anything.
I would agree with the pre/post-iPhone shift, but the likes of EA/Glu and other major games companies shifted an awful lot of apps pre-iPhone.
The big shift is that touch became the only game in town (along with all the associated dev wrinkles). Also, the demise of all the content aggregators meant we could just upload our apps to a store and keep 70% (prior to this we got ~15-20%: the operator took 60-70%, the aggregator took the remaining ~30%, and we got a 50/50 split on that).
True, but those who worked with those systems still have relevant mobile knowledge: optimizing for various screen resolutions, working with many OEM implementations that have slight differences (Android), etc.
I think this is mostly true for all successful companies, not just startups. You are looking to hire for the company and not for a particular position. Hire for talent and let them grow with the product/domain/technology.
On the other hand, programmers jumping from one industry to another build no domain knowledge. The result is that domain experts, such as accountants or telecom technicians, specify the requirements, which then get translated 1:1 to software design, no diligence applied. The result is shitty software.