my3681's comments

This, to me, is one of the most interesting topics of our time, and I find it fascinating the similarities (both philosophically and economically) to that of the railroads at the turn of the 20th century.

Much like net neutrality is the topic of the FCC, cargo neutrality on rail lines was the topic of the ICC [1] in their day. The problem of the early 1900s began with a bounty of riches in the highly competitive American Railway sector. Merchants using the railways were demanding "rebates" from railway companies for large shipments threatening to take business elsewhere and driving their shipping costs down. Of course, the same leverage was not available to smaller shippers, making their rates much higher and forcing them to raise the prices of their goods. The system was out of balance.

In 1903, Congress and President Theodore Roosevelt passed the Elkins Act [2], which eliminated rebates but had the unfortunate side effect of increasing price collusion between shippers and the railroads. The Elkins Act did not establish fixed rates, leaving interested parties to make deals that, again, were disadvantageous for smaller businesses. The only test of discrimination was deviation from a published railway schedule [3].

To correct the problems of the Elkins Act, the Hepburn Act was passed in 1906, greatly expanding the power of the ICC to regulate the railroad industry. The result was fixed shipping prices deemed "just and reasonable" by the ICC, increased penalties for non-compliance, and an open and standard accounting system for the railroad companies. (As an aside, the depreciation of railway companies contributed to the Panic of 1907. A good word on that here: [4])

If all this back and forth sounds familiar, it is because we are facing the same problem today shipping bits that we were facing shipping coal and shirts all those years ago. The idea of prioritized delivery is not new or novel; it was dealt with long before Netflix and Verizon began suing one another. What interests me is whether we will see legislation similar to the Elkins and Hepburn Acts for digital goods and services. I doubt very seriously that we will get fixed $/Mbps or $/GB mandates from the FCC, but already we are seeing definitions being made and lines being drawn [5]. There is already language from Wheeler about "responsibility to the 20 percent [without 25 down/3 up]", just as there was strong language from Roosevelt about the railway industry.

2015 will be an interesting year indeed.

[1] http://en.wikipedia.org/wiki/Interstate_Commerce_Commission [2] http://www.theodorerooseveltcenter.org/Learn-About-TR/TR-Enc... [3] http://books.google.com/books?id=g-pCAAAAIAAJ&dq=elkins%20ac... [4] http://books.google.com/books?id=R3koAAAAYAAJ&hl=en [5] http://www.fiercetelecom.com/story/broadband-now-defined-25-...


If you log into the developer center, there is an iOS 8.2 beta available with a new version of Xcode and WatchKit.


I downloaded it; there is no option to create such a project in the Xcode 6.2 beta as of 1 p.m. EST (Nov. 18).

Edit: You cannot create an Apple Watch project. You create a standard iOS project, then add the Apple Watch storyboard/methods


Foreign infrastructure funding has been going on for a while now in the form of toll roads. In particular, there is a Spanish group called Cintra that does a lot of business stateside. See these articles for more information. [1] [2]

[1] http://usatoday30.usatoday.com/news/nation/2006-07-15-u.s.-h... [2] http://www.examiner.com/article/texans-call-for-boycott-of-f...


If I am not mistaken, Apple has been awarded patents in the area of localized haptics using tiny actuators. It may be that no rival solution has emerged. I was also quite upset when the announcement dissolved into a UI metaphor.

http://appleinsider.com/articles/13/02/19/apple-awarded-pate...


If Apple was anywhere close to building it though, I doubt they'd remove all textures and button outlines :(


Imagine trying to teach philology while also teaching students their first language! This is exactly the problem that CS program directors face.

I work as a researcher in the computer science department at the University of Alabama, and I can tell you this is an ongoing conversation/debate not only here, but at many schools across the southeast. Unlike the valley, we have a tremendous shortage of developers, and that pressure ends up being felt at the university level.

To combat this, classes have been opened that expose the students to software engineering principles while not losing the theory that differentiates computer science from programming. This is largely helping, but is still, in many respects, an experiment.

Also at play is the notion that there are people in computer science who fall more into the design disciplines (e.g. HCI, UI, UX people). Right now, we have nothing for these people except double majoring with art/psychology and the occasional HCI class, but this may change soon. There are certainly frontiers in this area yet to be explored!

Either way, it's a big problem worth discussing, so I am happy I see it here.

*Edited for clarity


>Also at play is the notion that there are people in computer science who fall more into the design disciplines (e.g. HCI, UI, UX people). Right now, we have nothing for these people except double majoring with art/psychology and the occasional HCI class, but this may change soon. There are certainly frontiers in this area yet to be explored!

Whenever I read articles about social network-type sites that are doing UI changes it seems like there's pretty extensive use of data analysis based on A/B testing for different changes that are being made. Wouldn't a practitioner still need solid coding skills and a strong base in statistics?
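
For what it's worth, the statistics involved need not be heavy. A minimal sketch of a two-proportion z-test, the kind of check commonly applied to A/B conversion counts (all numbers below are made up, and real teams would likely reach for a stats library rather than hand-rolling this):

```python
from math import sqrt, erf

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Did variant B convert at a different rate than variant A?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)  # pooled conversion rate
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-tailed p-value from the standard normal CDF (via erf)
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical experiment: 120/2400 conversions vs. 156/2400
z, p = two_proportion_z(120, 2400, 156, 2400)
print(f"z = {z:.2f}, p = {p:.3f}")
```

So yes: to read results like these responsibly, a practitioner needs at least enough statistics to know what a p-value does and does not tell them, plus the coding skills to instrument the experiment in the first place.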


"Imagine trying to teach philology, while also teaching them their first language!"

It's a bit worse than that. Imagine selecting that first language based on past popularity (Latin), universality (Esperanto), internal consistency (Lojban), precision and concision (Ithkuil), etc.


On reliable help to professors:

I work as a research engineer (MS EE) at the University of Alabama, and we have experienced a similar problem for our faculty members. The problem has been mitigated to a degree by help from professional staff (like myself) at various "centers" on campus that explore more practical applications of the research being conducted.

We do not receive a stipend for any extra work we do for a professor, but in just a few short years, our input has done wonders for producing more well-rounded students.

As for the rise in PhDs, I can confirm that our engineering college is pressuring faculty to graduate more PhDs. Our dean just held a meeting saying we need to "double our PhD output" just to stay competitive. As with most things, he admitted that it all boils down to money.

More PhD students means more potential funding. More funding means better facilities. Better facilities means better quality students, which means a better reputation. Better reputation leads to more/better grants coming your way and the wheel goes round and round...


I develop apps for both major mobile platforms, and when I speak with designers and artists about Android and iOS, they overwhelmingly prefer iOS. Since they use iOS devices themselves, it is their go-to platform, and almost always the one they want to work with.

Oddly, when I ask them why, the answers are mostly qualitative. It has nothing to do with "reaching the most people" or "getting exposure". It's a simple matter of taste. Designers tend to choose iOS products because they identify with them. They see Apple as an institution that they would enjoy being a part of in some way. Almost all the designers I know who are worth their salt use and love their Macs for working in Photoshop and Illustrator, so it naturally follows that they prefer the iOS platform, regardless of numbers and market share.

Admittedly, my sample size is only around 50 or so, but this has been my experience.


Sounds akin to developers picking the companies they work for based on the programming languages/tech stacks they get to work with.

I can definitely sympathize.


Glad you said that. Apple positioned itself as a design-oriented company, something that designers identify with. Something being "easy" is highly subjective.


As a designer this is exactly why.

No Android phone or interface comes close to the level of what Apple is putting out, both from an experience and a product standpoint. This is also the exact same reason why I purchase all Apple products.

I really don't understand the argument of "You're paying more for a lesser product when you buy Apple". Ok - so the hardware isn't as good/powerful, fine, but that's not why I'm buying an Apple product. I'm buying it because of the UX, because of the way the phone feels in my hand, because of the way the keys feel on the keyboard when I press them down, because of how great the applications look and feel when I use them.

If Android came out with something that looked and felt great, I would certainly consider using it. In fact, I purchased a Samsung Galaxy S3 several years ago when it first came out. The phone was absolute garbage: the plastic bezels still had molding marks around the edges, the unnecessary bloatware (which would have been fine if it were remotely useful) was very poorly designed, and overall the phone felt very light and cheap.


> Ok - so the hardware isn't as good/powerful, fine, but that's not why I'm buying an Apple product.

The funny thing is, that's kind of a myth, particularly in phone land. The A7 (the 5S's SoC) had pretty much class-leading performance when it came out, except in highly parallel tasks, and to a large extent it _still does_.


That's not really surprising, since a large portion of mobile developers probably became mobile developers specifically because of what Apple was doing in iOS.


I'm not knocking this or any particular xxxx-to-mobile platform, but as a native iOS and android developer, I would highly recommend just learning the native languages and frameworks.

First, you have the most control you will ever have over your application. This may not be obvious when a particular tool is new and popular, but it definitely shows up as time progresses. Who knows which tools will keep up with the native frameworks in terms of functionality?

Another reason, more qualitative, is that you come into contact with different design styles, code styles, and paradigms. Objective-C, for instance, is a tricky beast sometimes, but it is an interesting language with a lot to teach, both good and bad. The new things you learn will most definitely carry over into your web programming!

The third reason to code natively, and I cannot stress this enough, is third-party library support. Maven and CocoaPods make dependency management much better on the mobile platforms, and using a third-party tool will most likely render all those advancements useless. There are some amazing libraries for both platforms that you definitely don't want to miss. They will reduce the amount of time you spend coding and improve your baseline quality.


> First, you have the absolute most control you will ever have over your application.

You have 0 control over distribution though, which is pretty important to me.

> There are some amazing libraries for both platforms that you definitely don't want to miss.

I don't doubt that the platforms have good 3rd party support, but it's absolutely dwarfed by that of the web community, and that will only grow larger over time.


> You have 0 control over distribution though, which is pretty important to me.

Simply having to wait a week to get approved for the iOS App Store (and less on Android) doesn't constitute 0 control over distribution. You can still release the app to the store when you want, if you plan ahead, and pull it whenever you like. Less control, yes, but not zero.

Also, how is this different from writing a non-native app? The same rules apply to distributing web apps if you want them to be in the store. True that you can release it as a website whenever you want, but that's not really the same thing.


> Simply having to wait a week to get approved for the iOS App Store (and less on Android) doesn't constitute 0 control over distribution. You can still release the app to the store when you want, if you plan ahead, and pull it whenever you like. Less control, yes, but not zero.

You don't have control over whether your app will be accepted into the store (like the recent Bitcoin apps), and you don't have control over whether the content in your app will be allowed in the store (such as the many Comixology comics that were banned from being sold).

> Also, how is this different from writing a non-native app? The same rules apply to distributing web apps if you want them to be in the store.

But I don't have to put my web apps in the store at all, I just tell my users to visit a URL.


  > I don't doubt that the platforms have good 3rd party support, but it's absolutely dwarfed by
  > that of the web community, and that will only grow larger over time.

You know what is funny? What native offers without even third-party code is leaps and bounds beyond what the web community offers for mobile, and it will stay this way for a long time, if not forever. So far the community has been reinventing the same wheel a thousand times over, and there is no sign of progress. Guys, look, another MVC framework, how cool is that!


So if you want a web app, an Android app, and an iPhone app, you have to code it up three times in three different languages. The reason the web exploded in popularity as a development target is precisely that it made it easier to reach a larger number of customers. Imagine if Facebook had started out as a Windows XP / Linux / OS X native application. It never would have gotten off the ground.

My take on this is that if you absolutely need the performance, go native and pick the platform where you'll get the most customers. If your app is a glorified CRUD wrapper, though, HTML5 will probably suit your needs and you'll only have to code it up once.


I don't buy it. Enterprises definitely buy into that idea, and they do have their employees work with these crippled apps because they have no other choice. But the experience with such a 'CRUD' app is definitely not good enough; it only gets used because it is forced. It depends on how intensively it will be used in reality, but I know quite a few companies that went from BYOD with an intensely used HTML5 Cordova app to just buying Android devices for their employees and building a native app to boost productivity.

If you have to enter/search information all day in an app that is horrible to use (a bit of lag can ruin your day: you tap or swipe, the app responds a bit late, the keyboard pops up for the wrong field or you submit accidentally, and poof, seconds are lost and so is your good mood), you're not going to want to work with crap. It's not 'good enough' even though management might think so.


You don't have to buy it; it's an undeniable reality. Yes, if you're an enterprise and you can afford to write native apps for all platforms, then that's the better choice. If you're a small startup or a single developer, writing the app three times is just not cost effective.


Well, then there are still a few options: 1) it's internal and you can force-feed it to your employees, in which case it might be OK; 2) it's external and you're trying to get a lot of people outside your company to use it.

In both cases I still think you are better/easier/happier/cheaper off building it once: in case 1, giving your employees tablets/phones with the target OS, and in case 2, just waiting until you have enough critical mass to warrant writing a version for the next OS.

Not to mention that, in most cases where we have made HTML5 'hybrid' apps, we noticed that it takes more time to make them uniform and smooth across the plethora of (especially Android) devices than to just write two native versions. And the HTML5 version will just never work well on the large number of 'Alibaba $50' Android devices that surprisingly many people have, resulting in TONS of bad reviews (if public) and/or very frustrated users.


Depends on whether or not quality of the product is a factor in your cost calculations. It should be, and for many startups and solo developers, it absolutely is.


Most likely you have never tried to do what you preach. You will end up writing your app only a little bit faster (if you are lucky) and then spend three times as long on debugging.


I have. I built an HTML5 hybrid mobile app for Android and iOS. It was about 90% JS/HTML/CSS and 10% native code. It worked very well for our purposes. You can judge for yourself; it's called Kona and is in both major app stores.


Eh, yeah. Exactly what I said, then, judging by the Android reviews? Works on some devices, terrible on most; getting that JS/HTML/CSS to work well on 'most Android devices' is infinitely more difficult than doing it natively. Is that worth it? I still bet I could have written both apps natively in a shorter time with much better results. You can take me up on that any time.


> 90% JS/HTML/CSS and 10% native code.

This was going to be my comment on the original comment: it's not like you have to do only HTML5 or only native, or so I'm given to understand... I'd be interested to hear more about the hybrid approach if anyone has links/blog posts.


10/90 is PhoneGap/Cordova; hybrid is viable (IMHO) when you need to show complex documents, which is what HTML was made for. For the parts of your native app where you need to show documents with nicely flowing text, images, charts, etc., HTML is a good option, but then the ratio is usually not 10% native / 90% HTML but the other way around.


Pick a platform, port later if needed. It's what most indie app developers do.


No, the web exploded in popularity as a development target because the labor pool is so diluted that it's easy, quick, and cheap to hire (and fire) any random "webdev" who is good enough for their purposes. And that's why, unless I get desperate, I wouldn't put PHP on my resume and would only list non-webapp projects for other languages I've used professionally but which are also used for webapps (e.g. python)--I don't want to put myself into that (cheap) labor market.


You are forgetting about the modern contender, Xamarin. It's a sweet spot between HTML5 and native: a modern high-level language and very decent performance.


With Xamarin being acquired by Microsoft, I'm really hoping NOT to have to learn native Android / iOS / Windows Phone development. I don't have time for that, and I'm already an expert in C#.


I regret investing in MonoTouch and wish I had just learned native iOS dev from the get-go. You need to be at least comfortable with native iOS dev to be effective with MonoTouch anyway. I have a MonoTouch-written app that I can't do anything with unless I pay for a new version of MonoTouch, as it's no longer compatible with current iOS releases. The constant song and dance between releases of MonoTouch and iOS updates gets frustrating after a while.


I always recommend (and, when it comes to employees, demand) that people first learn everything natively in Objective-C/Java (and C# if you need WP8 as well) before trying to unify things with C#/Xamarin, C++, or JS. That way you have a solid understanding of what is possible and you don't lock yourself in. That said, Xamarin rocks and I hope MS doesn't kill it.


As a C# developer, I am well into a project using Xamarin. I have not run into any roadblocks, and I am very happy with the performance on both iOS and Android. I currently have 80-90% shared code utilizing portable libraries and MvvmCross. The only real learning curve has been at the UI level, but both are rather WPF-ish, especially Android. I highly recommend this path for a C# dev.


The language isn't the issue, the frameworks and libraries are. You're going to have to spend your valuable expert time sorting those out too. That's the bulk of it.

If you're truly an expert at C#, then learning Objective-C isn't much of a challenge. At least learning the amount you need to build apps.


The advantage of using a single language is that, for each app in all three stores, you have a single codebase to change, rather than making three different changes in three different codebases just because each one requires its own language (and its own specific libraries). You surely have to spend the time sorting out the libraries, but it is a one-time problem. After that, you can use them without problems in all your projects and really focus on your idea.

Put all the code behind a good UI/code-behind separation pattern like MVVM, and you are happier than ever.
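
The separation itself is language-agnostic; here is a minimal sketch of the idea in Python, with hypothetical names (a real Xamarin app would use C# with data binding, but the shape is the same):

```python
# View-model: holds state and logic, knows nothing about any UI toolkit.
class CounterViewModel:
    def __init__(self):
        self.count = 0
        self._listeners = []

    def on_change(self, callback):
        """A view registers here to be told when state changes."""
        self._listeners.append(callback)

    def increment(self):
        self.count += 1
        for cb in self._listeners:
            cb(self.count)  # notify whichever view is bound

# A "view" per platform is just a thin binding layer over the shared VM.
class ConsoleView:
    def __init__(self, vm):
        vm.on_change(lambda n: print(f"count: {n}"))

vm = CounterViewModel()
ConsoleView(vm)
vm.increment()  # shared view-model drives the platform-specific view
```

The point is that only the thin view class needs rewriting per platform; the view-model (and everything below it) is the single codebase you change once.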


It's a nice dream. Pity reality is so very harsh on nice dreams.


See my comment above: I am under no delusions that cross-platform frameworks are truly cross-platform at the UI level. But for our app, the UI layer is roughly 50% of the app. What about the core business logic? In a cross-platform framework, that code is truly cross-platform. How do I approach that when doing full native development for Android and iOS? Is there a way to create a cross-platform library?


It's called C++.

And I can tell you, from my own experience, your backend isn't going to be as cross-platform as you dream, because everything from networking on down is done differently on Android and iOS. That isn't to say you can't front-end all of that with a shim in C# or C++ or whatever, but on mobile the bulk of the hard stuff is UI anyway, and there is no cross-platform panacea as of this writing.


As someone who has used several cross-platform frameworks (Titanium, Cordova/PhoneGap, investigating Xamarin), I have a question about going "full native": how do you handle shared core code? Our app has an engine that drives the business logic, and the cross-platform frameworks have been useful because we don't have to deal with modifying the core codebase for different platforms [0]. Is it possible to create platform-agnostic libraries that can be shared?

[0] Despite the many claims of "cross-platform UIs," we have many cases in our UI layer of doing different things on different platforms.


Xamarin is great for sharing code. You just rewrite the GUI layer per platform. It's great, cannot really say it any differently.


And that GUI layer could be just Views in the end, since it seems you can even re-use ViewModels/Controllers.


This is the exact use case for Hexagonal/Clean Architecture: http://blog.groupbuddies.com/posts/20-clean-architecture

It is contentious on HN, but I implore you to draw your own conclusions.
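
To give a flavor of the ports-and-adapters idea: the core defines the interfaces it needs, and each platform plugs in its own implementation. A minimal sketch in Python, with hypothetical names (real projects would likely use their platform's interface mechanism, e.g. protocols in C#):

```python
from abc import ABC, abstractmethod

# Port: the core declares the interface it needs, not how it's implemented.
class OrderRepository(ABC):
    @abstractmethod
    def save(self, order): ...

# Core business logic depends only on the port, never on a platform API.
class CheckoutService:
    def __init__(self, repo: OrderRepository):
        self.repo = repo

    def place_order(self, items):
        total = sum(price for _, price in items)
        order = {"items": items, "total": total}
        self.repo.save(order)
        return order

# Adapter: the swappable infrastructure detail (per platform/environment).
class InMemoryRepository(OrderRepository):
    def __init__(self):
        self.saved = []

    def save(self, order):
        self.saved.append(order)

repo = InMemoryRepository()
service = CheckoutService(repo)
order = service.place_order([("book", 12.0), ("pen", 3.0)])
print(order["total"])  # prints 15.0
```

The core (`CheckoutService` here) never imports anything platform-specific, which is exactly what makes it shareable across targets.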


I have read before that C++ can be used to share logic. An interesting development in iOS 7 is that it has a JavaScript runtime of some sort (I don't know the details); perhaps it will be possible to use JS for cross-platform code.


You're thinking of JavaScriptCore. It allows Objective-C objects to be used inside of JavaScript (and vice versa).

It's actually a pretty cool technology. I gave a talk about it this month at Berlin CocoaHeads, and I'm gonna put some blog posts together about it soon!


I presume the JSCore runtime bridges objects back and forth? Very cool!


>I'm not knocking this or any particular xxxx-to-mobile platform, but as a native iOS and android developer, I would highly recommend just learning the native languages and frameworks.

First off, I agree that native wins over some bundled HTML5 app any day of the week.

But for many reasons, companies still choose PhoneGap or another framework to compile HTML into a native app. The first and foremost reason is that a lot of organizations have plenty of web devs with the skills to build web apps.

Managers would rather leverage the skills they already possess. Believe me, I do web dev as part of my job, and I still try to make the case that native is better for mobile. In the end, it is cheaper to use the developers they already have, with tools those devs are familiar with (HTML/JS), than to hire several new devs to code against different mobile OSes.

Right or wrong, managers and organizations like cheaper and faster.


Reminds me of a story...

This year my dad, a football coach, lugged an overhead projector with a bucket full of transparencies and Vis-à-Vis markers to a football clinic he was asked to speak at. When the younger coaches saw the ancient contraption, word got out on Twitter, and my dad's talk filled past capacity. Many of the youngest coaches had never even seen an overhead projector!

By the time his talk was over, he had drawn (we might say derived) his ideas right in front of them, adapting to the questions that came his way. Most people said they really enjoyed my father's more interactive teaching style, in contrast to the stale PowerPoints of some other presenters. A great reminder that some people (including me!) learn best when things are derived in front of them by a dynamic teacher.


I think a lot could be said here for maintainability. As others have mentioned, it really goes a long way to take the time to refactor the class to inherit the extra/common functionality if you are working on a team or on a project that you know will live a very long time. Swizzling will certainly work, but at what cost to readability, debugging, and reuse? That's normally the question I ask before swizzling or doing fancy, dynamic things.

Sometimes the quickest or even the most elegant solution isn't necessarily the "best" one. "Best" being a subjective term, I would say it depends on what you need from your code over time, and with whom you share it.
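
The trade-off has a close analogue outside Objective-C: monkey-patching vs. subclassing. A small illustrative sketch in Python (hypothetical classes, not anyone's real code):

```python
class Greeter:
    def greet(self):
        return "hello"

# Swizzling-style: replace the method globally at runtime. Quick, but every
# Greeter everywhere now behaves differently, which is hard to trace on a team.
_original = Greeter.greet
Greeter.greet = lambda self: _original(self).upper()
assert Greeter().greet() == "HELLO"
Greeter.greet = _original  # and undoing it safely is your problem too

# Inheritance-style: the change is named, local, and opt-in.
class LoudGreeter(Greeter):
    def greet(self):
        return super().greet().upper()

assert Greeter().greet() == "hello"      # untouched
assert LoudGreeter().greet() == "HELLO"  # changed only where you asked
```

Same behavior either way; the difference is entirely in who else is affected and how easy the change is to find later, which is exactly the maintainability question above.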

