There is quite a lot of black-and-white thinking here, and that is the primary problem.
Look, consumer technology is always a mess - apps vs web vs native vs desktop vs mobile, etc. The truth of it is that we have so many choices because technology is meant to solve individual problems as well as possible.
There will never be one winner.
I think the author assumes that just because tablets and apps are growing (I assume this is still happening), this is the end game. It's not - tablets and apps will certainly be prominent for what they are good at. The app environment sucks for some things. Free flow of information and low-friction information searching is certainly one use case where the web is so much better than apps can ever be in their current form.
Lastly "apps" have always existed. Nintendo games were for all intensive purposes apps in that they were self confined experiences you played on what essentially was a technological black box. In truth if you look historically at how we have been using computing - things are more the same than different - you just have to remove the silly titles we give everything.
There will, however, be many losers, and PCs are going to be one of those losers. The form factor won't die, but the philosophy and freedom will die. It is inevitable, given the priorities people have and the money to be made.
If things continue going the way they are going, the only computers that will give users the freedom to do what we can do now will be locked away in research labs, available only to the lucky few who can land jobs in such places. These computers will be so expensive that only people who are doing funded research will be able to afford them. The next best thing will be computers targeted at developers (perhaps "debug" computers), which will be too expensive for most people to consider buying them but (hopefully) inexpensive enough for hackers to get their hands on them; these will still be loaded with restrictions, but at least the user will be able to write and run code and use a debugger. The only computers that will be anywhere near the price point that most people will consider paying will be those that are restricted to running software that was signed by some large corporation (i.e. only approved programs), with only the macro systems for a few "professional" programs (priced beyond what most people can afford) being available for user programming.
Why would software or hardware companies want it any other way? The next "big thing" will have to be filtered through the app store system, and the little startup that makes that software will be bought out before they can overtake the established players. The big media companies will form partnerships with the companies that control app stores (or just mergers, so that the whole stack is controlled by the same entity) that will be immensely profitable. ESPN will require all of Dell's customers to buy the ESPN app, without the option to get a refund or to remove it, and any company that does not take that deal will be barred from having the ESPN app on any of their products. Governments will love it -- no more uncontrolled cryptography, no more Wikileaks, CALEA-style laws for computers, and a chance to spot political movements before they take hold.
"Free flow of information and low friction information searching certainly is one use case where the web is so so much better than apps can ever be in their current form"
You can just pay $1/mo. for a search engine app, right? It will have an integrated web browser/hook into the installed browser. Not much different from having a Bloomberg terminal on your desk, except that it would be terrible for consumers (but profitable for businesses).
"Lastly "apps" have always existed. Nintendo games..."
Gary McGraw has pushed the idea that the security systems seen in video games ultimately become the security systems on all consumer computing devices. At one time, this app store model was something only video game consoles had; now we see it on tablets, and soon we will see it in other form factors. Video game consoles have highly expensive, hard-to-buy "debug" versions and developer systems; is there any reason to think that such a system could not come to exist for other consumer electronics?
I actually don't think that computers capable of doing stuff other than media consumption will get more expensive.
I actually think the opposite will happen and you can already see the start of this.
Look at things like the Raspberry Pi, ODROID, and Arduino. Very hackable stuff at rock-bottom prices.
With faster internet connectivity comes access to remote virtual machines which can be rented by the hour and make stuff like huge scale data processing accessible to the teenage hacker in his bedroom.
Ultimately the future will require not just people who understand how to use the tech on a basic level, but people who can innovate with it.
If the US doesn't do this, another nation will, and it will be all the richer for it.
Of course there will always be a market for passive-consumption-type devices as well as specialised ones. It's just that these used to be radios, TVs, CD players, calculators, and Nintendo consoles; now they are iPads.
"Ultimately the future will require more people to understand not just how to use the tech on a basic level but people who can innovate with it."
This. While in one sense it is profitable to rope off technology and control it, having the general population embrace the skill set of working with computers is far more profitable overall for everyone.
In 100 years, every single person will have grown up in a world dominated by electronics and the presence of the internet. Being a fact of life makes the barrier to entry a lot lower than it is today, when there are more options.
And all of that will persist for a short time until someone invents something new that disrupts it all, just like the Internet disrupted old media.
That is why I find the legal wrangling more worrying than anything else. Apple and Google can't stop me from inventing and popularizing a new technology, but the government can do a pretty good job of that, especially the second part.
The #1 most-used app on the iPad is... Safari. [http://rjionline.org/news/rji-dpa-spring-2011-ipad-survey-re...] And given the nature of the survey ("Open question without prompts") I expect the reported figure (21%) is much too low, because users don't even know what a browser is, half the time. It wouldn't exactly be a shock if half of the respondents who mentioned the New York Times app were really reading the Times through Safari.
The bulk of the non-browser apps mentioned are ways to read things (NYT, WSJ, USA Today, AP News, WashPo, Kindle app).
That is, the bulk of the non-browser apps mentioned are more-or-less tarted-up browsers. So, for that matter, are the likes of Siri and Google Search.
I don't think it's the browser that's in danger here.
A standard defined by an RFC will be a win for consumers; I'm not sure it'll be a win for companies, though. If WhatsApp can create a monopoly and make money from it, that's a win for them.
Punditry at its worst. I would have called it clueless, but the truth is that nobody knows, and given the evidence, this does not even seem like a plausible prediction.
Sure, you can say "mobile apps grew such and such." But then I could say that, given the growth rate of a newborn child, a 60-year-old adult should be 50m tall.
Agreed. I clicked the article thinking it would be about how regulation is threatening the internet. After 2 sentences I saw it was another article claiming the old web is dead and that we will only use apps in the future... so tired of hearing that line. As many others have stated, walled garden vs. free-flowing information, etc. Drives me nuts.
Peeling back the exterior and getting at the meat of the argument, I think the author is really arguing that the future "makers" of things are having their perception of the internet and computing shaped by their interactions with walled-garden devices and controlled environments. Building off of this, they will go on perpetuating a world built around discrete apps and walled gardens instead of the creativity of current and prior hacker generations. What would be really interesting is if the author did a study looking at past generations and how their initial perception of connectivity shaped what they built with it, e.g. the phone phreakers and hardware hackers of the 70s and 80s laying the foundation for modern computing.
A more concise argument is that the Android tablet that is Junior's first computing experience is significantly less hackable than your Commodore 64, which was more hackable than your DOS machine, which was more hackable than your Windows 95 machine, which was more hackable than your XP machine, et cetera.
Devices provide higher levels of abstraction from the tweaks and tunables of the system, but only now are we seeing devices where there is no easy way to actually write code for them, on them. You can't write an iPhone app on an actual iPhone, and I don't know if you can compile an APK on Android (I know Java IDEs for Android exist, though).
About the only thing you can do is write web pages you can open locally. That still works (see the sketch below). And I don't see walled apps overtaking the expanse of the internet - there is just too much content and experience found only by surfing old crummy HTML documents that you won't get in your walled-garden Facebook app experience. And Android is taking the smartphone market while being relatively open for a platform, with rooting and loading of any APK you want (if you set the option).
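As a minimal sketch of what I mean (just plain HTML and JavaScript, no particular framework assumed), save something like this as a .html file in local storage and open it in the tablet's browser - no toolchain, no app store approval:

    <!DOCTYPE html>
    <html>
    <head><meta charset="utf-8"><title>local hack</title></head>
    <body>
      <button id="go">count</button>
      <p id="out">0</p>
      <script>
        // plain JavaScript: runs in any standards-compliant tablet browser
        var n = 0;
        document.getElementById('go').onclick = function () {
          n += 1;
          document.getElementById('out').textContent = n;
        };
      </script>
    </body>
    </html>

Not much of a development environment, but it is the one loophole every walled garden still ships with.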
Really good (long) summary, zanny. That was what I had garnered from the article's musings. It's something I've thought about a lot myself, actually. How would I give a child the opportunity to learn to hack or program if they wanted to? For me it started with C-64 BASIC because it wasn't compiled, then Visual Basic when I was like 13...
I guess C# or Java could probably be taught very easily, but it feels like starting out learning martial arts by teaching them advanced maneuvers. It feels like the wax-on, wax-off approach really is the wise course, but showing them QBasic or something hardly seems like a step forward.
I'm sure if they have the right interest, they will investigate how these devices work. It isn't as nice as being forced into learning a lot of it by necessity of the tech (think TTY-only devices), but I suspect most who possess the right spark will end up writing web apps instead of bash scripts.
> but only now are we seeing devices where there is no easy way to actually write code for them, on them
Ah, very good point.
Well, today's devices still lack the hacking essentials: pointer/mouse/variant, keyboard, large screen; maybe a decade of advances will give us usable virtual keyboards/pointers/what-have-you and projection displays on all handhelds, and THEN no manufacturer will be able to hold back the onslaught of development environments that penetrate the walled gardens.
Honestly, those are just ease-of-use constraints - people with the right mentality will break through barriers to manipulate the device. A Bluetooth keyboard and mouse are sufficient, and you can plug a tablet into a bigger display (though that is definitely not the new-hacker use case).
The real breakpoint is that you can't build software for the device on the device anymore, with the exception of web apps.
> The real breakpoint is that you can't build software for the device on the device anymore
Right, but that's because the open source programmers don't see the point; if it's already a hassle to find and buy the proper hardware to turn your device into a full-blown ergonomic development environment, they figure it's not worth the effort to even think about software.
But if devices already come with easy-to-use virtual interfaces -- Google Glass seems to solve the big-screen problem, now we just need a virtual keyboard and mouse idea -- it's only a matter of time before veteran hackers say "This is ripe for a community, let's crack this baby open".
You could write your app with a web app, get a quick preview, compile it, and deliver it through a cloud service so you can test locally on the device directly (it already exists): Application Craft.
My thoughts exactly, and I think that is what the author was trying to say. The argument seems to be "children use tablets, therefore no open Internet." Last time I checked children turn into teenagers and then adults, both of which have different habits and drivers. It's like saying, "children watch cartoons, therefore no live action TV and movies."
The problem with cartoons is we outgrew them; that did not happen with PCs. PCs evolved so fast in the span of a couple of decades that they became ingrained in most aspects of our lives, work, and education.
Well, games on the web have never really taken off as a thing; Zynga, I would guess, were the people to do it most successfully.
There is huge value in having a standard, indexed repository of information like the www. If I want to find some specific information I don't want to have to find and download a specific app every time.
This was an interesting read that raises many questions, but much of the analysis is ultimately flawed because the author confuses many concepts.
The Internet is not a human-computer interface. It is true that most native, non-browser apps on mobile and tablets use touch as the main point of interaction. And it is true that many people browse the web via HTTP or HTTPS using a keyboard and a mouse. But who knows how this will change in the future? People change. Web standards converge and diverge constantly. Technology evolves. Browsers today already do unthinkable things. There's no reason to assume that the primary HCI of the web will remain the keyboard and mouse. Nor should we assume that the touch interface on tablets is the be-all and end-all method of interacting with native apps.
"How will our children learn to create digitally?" THAT is the problem, tablets and smartphones are designed to CONSUME, not create: There are some crazy guys who write code in their iPad, but let's face it, 95% of users only consume content in their devices.
The only real insight I picked out of this was that children today are living in a walled garden. Before, we had to know about drivers, installers, executables, the filesystem, etc. Today, a lot of that is hidden. A benefit of that is that the system is a lot more stable and people can stay within the abstraction provided by the OS. A question I'm curious about is whether, in the future, the success of the abstraction will prevent kids from peeking behind the curtain for simple lack of need. Lastly, will that in some way affect the technological capabilities of those kids?
I have an iPad and love it, but I haven't bought a single app for it. Sure, I use some free apps, but what is the single most used app? Safari, for, wait for it, browsing the web.
It will be interesting to see how today's kids get into content generation with less keyboard experience, and I do believe that tablets will take over huge chunks of the internet user experience--it is just too easy to consume content using one.
But the end of the open web of Google and web search? And the rise of the special-purpose app for everything? I don't think so. Not as long as tablets ship with standards-compliant browsers.
I was looking through old computer magazines from the 1980s and thinking about my youth when there were numerous incompatible microcomputer platforms that had some popularity at some point such as
* TRS-80 Models I and III
* VIC 20, C-64, C-128
* Apple II, II+, IIe, ...
* CP/M systems based on the S-100 bus
* IBM PC
and way too many others to mention. Eventually the PC and the Mac won out, except for a few Amiga fanatics and a specialized "workstation" market served by vendors like Sun.
It seems the current state of things isn't nearly as fragmented as that!
Any kid whose parents are hackers, writers, journalists, scientists, accountants or any other profession that actually has to type stuff on their keyboards will still have real computers in their homes in addition to tablets.
For that matter, kids might still find machines with big displays and keyboards useful, since writing an essay or term paper on an iPad probably wouldn't be a very pleasant experience. (Or do kids not get writing assignments in school anymore?)
Regarding the gaming angle, I think there will always be some kind of premium gaming platform for people who aren't satisfied with phone and tablet games. It may well be that the standard gamer demographic continues to shift to an older crowd as we give our kids tablets instead of Wiis or PlayStations and they have to wait until they're older to get access to a real console or high-end PC.
You voted with your startup, by creating apps instead of websites, and with your wallet, by buying apps - you showed that this is what you wanted. That I warned you (http://drupal4hu.com/future/freedom) doesn't matter, but Douglas Rushkoff warned you too and you still didn't listen. So crying over the lost freedom is all that's left, congratulations.
The early web was scattered because there was infrastructure in place but the initiatives were short-lived, diverged, or motivated by personal reasons.
It's the nature of our market - or maybe our culture - for things to agglomerate and monopolize, though. Just notice how it's more likely you're wearing shoes made by a multinational company, not by the shoemaker next door, or even by yourself.
I have kids, and I disagree that things are going to play out as black-and-white as the author suggests. My daughter (2) does stick to the curated appstore/app world on her tablet, but she's just as anxious to get her hands on my laptop and uses it to do different things. Like typing "cat" into google images to see cat pictures. Just like the rest of us ;)
Tablets and similar devices are great at what they do, but I can't think of one use case (aside from extreme mobility) where they exceed desktop and laptop computers.
It seems like you are implying that you want them to die. Can you elaborate on this? I think URIs and DNS are pretty nice if used correctly. How else would you identify things across protocols and the entire web?