Project Ara is a cool idea, but I don't know if it's really practical for a device that needs to compete on size. A single motherboard with chips soldered to it will always be thinner than multiple modules which each require their own case and interconnects between them.
And sure, you might be able to upgrade individual modules, but even the interconnect will become outdated after a few years.
I'd love to be proven wrong though, it's a cool concept.
Agreed. If we look at PCs, I think we can conclude that the vast majority were bought pre-built rather than assembled from separate parts by consumers, and that's an extremely modular system. I've built all my own computers since I was a kid, and while I'm tech-savvy, I'm very far from a hardcore tech guy, especially back then. It was just so easy; installing most things is as simple as plugging a charger into a socket or screwing in a few bolts. Yet, even though you could easily save several hundred dollars and build it to your liking, only a minority of consumers did so over the past 15 years.
This notion that we'd all suddenly want to do that for phones seems a bit unfounded to me.
Especially when you consider that there was zero engineering involved in building your own custom computer from parts (unless you took a deep dive into custom cooling rigs or mini towers). You generally bought a mid-sized tower and put your parts in it. Space was barely a concern and there weren't any space-related tradeoffs.
On mobile devices however, space is everything. Bigger screen? Won't fit in the pocket, or it'll be too heavy, and it'll draw more power. Bigger battery? Less space for the CPU or storage, or a camera module.
Not only are companies (with this being their core competency) generally better equipped to make these decisions, but they're also much better equipped to minimize the tradeoffs. They can engineer a camera module and a bigger battery, not just by engineering a better camera/battery module separately, but by engineering them to fit into a smaller space together. All the soldering-type stuff I like to complain about has, beyond business reasons, genuine engineering reasons, too.
Add to that the fact that smartphones aren't quite a fashion statement (in the way that assembling a modular smartphone would be like choosing what clothes to wear) but are more of a reputational statement (people do like thin, slick-looking phones), and I don't see how modular phones can really take off: few people care that you can customize them for fashion, and most people prefer the non-modular phone that's as thin and slick as you can get precisely because it's all soldered on and sitting in a unibody.
And none of this is unique. We tend to prefer the completely designed experience. We don't want to buy separate pieces and construct our own chairs, tables, or cars.
Modularity feels like taking pages from different books and stitching them into a story: you don't get the same cohesion. Or like going to the Golden Corral and ordering pasta, a steak, sushi, and pie; it doesn't make a good meal no matter how good the individual pieces are. A course properly designed around a set of ingredients or a theme is always better, especially when curated by professionals.
Does that mean we don't want choice, or can't handle it? No. I just think we find modularity, in a very loose sense of the word, in the product offering itself. All the different phone manufacturers ARE the modularisation and personalisation of the smartphone. Hell, Samsung alone has put out 200 phones.
Many vendors do, and quite a few of them experiment. The phablet, for example, was a big experiment that found real market demand. We've seen battery powerhouses. We've seen tiny-bezel phones with a large chin. We've seen phones with an insane number of megapixels on the camera, phones built for selfies, phones with curved displays, phones that integrate with digital covers, etc.
So the choice is really there.
In the end I think the flagship products are so well designed that the need for tradeoffs will diminish. In some ways we're already there. Phones have cameras good enough for everything except professional work. Phones come with storage we used to see on laptops. We have cheapish 'unlimited' data plans. We see insanely high-resolution screens. We see all-day battery life.
In short, when I buy a flagship phone, I don't really think 'damn, I wish I could swap the iPhone camera for more storage, and could if only my phone were modular'. Battery life is still a big point of improvement for me, a real tradeoff, but that's about it. It feels like by the time Ara could pick up steam, the remaining tradeoffs, like battery, will be so small (thanks mostly to ever more efficient chips) that there won't be enough demand for different phone configurations for different occasions, enough that we're willing to pay extra for the separate modules and accept the reduced performance and form factor of a modular system compared to a single, fully designed experience.
At the end of the day phones are becoming able to do everything, and the personalisation of the phone is all in the software (apps, themes, content), the choice of phone (again, lots of experimentation and catering to niche markets by vendors) and things like cases. I don't really see modularity becoming a big part of that.
That having been said, I love that I've been using my PC for 7 years, switching components every now and then. It's a pleasure. It'd be awesome to see how far modular phones could go. It might be one of those things that somehow just works so well that it sticks, even if it's not the best system on paper, like, say, TCP/IP. Looking forward to Ara going live for consumers this year.
I always felt the Atrix and this sort of modular design was the future. Google largely killed it with their push for cloud-synced devices and with Sundar Pichai killing Android's laptop development project. I love the idea of Project Ara, but with Google taking it from Motorola, I fear it's going to be locked down into proprietary services, as Google does with all their products.
If they can get things like processing power to be modular, so you can have your apps and data on your phone, but be able to plug in and use desktop power when plugged into a larger station, that'd be when that sort of solution would really take off. I want everything on my phone, but for that phone to access desktop level UI and processing capability when I'm at my desk.
Meh, the Android laptop stuff was long dead before Pichai. Rubin was even stonewalling extending Android into tablets back in the day by insisting that Android was for phones.
And the Atrix ended up going nowhere because Motorola could not keep their designers in check. The end result was that no two phones could share a dock, as the ports were spaced differently. Their last, "universal" dock basically had two rubberized wires that the user had to fit manually each time.
As for Ara being locked down, you may be right. So far it seems that modules go through a Google storefront, and they have yet to confirm or deny whether trading used modules will be an option. Never mind that it seems Google will be the lone provider of the endoskeletons. I find myself somewhat reminded of the early days of the PC, back when it was the IBM PC. I wonder if there will be a clean-room endo firmware offered...
Sure, the Webtop design had implementation problems, but the concept was solid. I think its biggest deathblow was that it was released before Android supported large screens, so it was gimped down to a version of Linux with only a couple of pre-installed apps, rather than the tablet UI that Webtop could provide in later models, by which point the concept had already been killed.
Yeah, the Google that is making Project Ara is a long way away from the Google that embraced open protocols and standards, and open sourced platforms like Android and Chromium.
I never understood why Chrome OS had to be different from Android. I've also never had a Chrome OS device, maybe that's why I simply don't get it, but can anybody shed light on the differences?
As noted, an Android laptop was in the works. I think an Android laptop with Chrome browser, with extension support, would've been the holy grail. But Sundar Pichai wanted to protect his Chrome OS project, so when he took over Android, he killed that project. (And yes, I can dig up a reference to this, if anyone questions it.)
Chrome OS and Android may seem similar from a distance – they’re technically similar and do similar things, but they have very different goals.
Android was trying to imitate iOS and needed to establish itself and survive in a harsh market environment. It needed to be flashy, appeal to elitists, be affordable enough for the "poorer" market segments, foster an economy around "apps", etc. The result was an unholy monstrosity that, nevertheless, managed to beat iOS (on market share) as it was supposed to.
Chrome OS instead was an attempt at reducing a computer as much as possible to being just an interface to the internet – merely a technical artefact required to interface with the digital world, since humans don't happen to have WiFi built in. Sort of like Google Glass, but envisioned in a world where smartphones did not yet exist. Market concerns and technical viability were secondary. The result was something that, functionality-wise, works as well as current technology allows, but is nevertheless the only laptop out there that truly "just works". If anything ever breaks, you can go to the store, buy another one, and have it work exactly the same as your old one; just type in your login credentials and everything's back to where you left it.
I agree that in a perfect world both these "things" should be achievable in a single product, but reality, being the mess that it is, led to Google developing two different products and now painfully trying to converge them into one as much as possible.
Android is an odd one. Google bought it, and until recently it pretty much acted as a separate fiefdom within Google.
Chrome (OS), on the other hand, was, I think, started as one employee's personal project, likely based on the observation that many of us spend our days mainly inside a web browser, with the rest of the OS sitting idle around it.
I've always felt that in concept, Chrome OS belonged on phones and Android on laptops. Think about it -- Chrome OS really needs an always-connected device, which is a phone, whereas Android runs all apps locally (except apps which have a sole purpose of getting online such as web browsers).
My ideal mobile device is an x86 phone that can dock into a tablet which can then dock onto a laptop, which can then dock onto a stationary docking station with GPU, monitors, storage (I was one of the people who backed the Ubuntu Edge, such a shame that it didn't make the goal).
I think we're coming very close to making this possible on both the hardware and the software front. Core M seems only a few generations of optimizations away from being feasible for smartphones in terms of power consumption (Atom is already there if performance is not a top priority), and Windows 10 scales pretty well between tablet and desktop at the moment, with integration of the Windows Phone functionality into the core OS looking like an eventual possibility. The power-management improvements to Hyper-V in Windows 10 should make it possible to run other OSes (possibly even Android eventually) in parallel with very little overhead or power-consumption penalty.
I would very much like a phone, tablet and laptop that are just different views into the same computing environment (obviously with accommodations for the different interfaces), but I don't see any reason why I would want them to snap together.
It would be sort of handy to be able to borrow larger devices, but plugging in a cable and having the phone "take over" doesn't have a whole lot of disadvantages compared to having them snap together.
In the case of portables like tablets or laptops, it might be preferable for the phone to snap in for easy carry. But for a desktop a cable or dock would be more than good enough.
I'm looking at it from a perspective that using borrowed hardware would be a rare thing, so instead of snapping the phone into the tablet, 99% of the time you leave it in your pocket. The hole in the tablet would usually be a hassle, while cabling a phone to a laptop would only occasionally be a hassle.
Hopefully the technical details of USB 3 make it possible to build a smart cable that lets you plug a phone into an untrusted device (only allowing the phone to push video and pull power). I guess such a mode could also be built into the device, but the cable would be a handy place to put a fuse or whatever. A special cable can also have a very straightforward user interface: if it can't be configured, it can't be configured incorrectly.
I feel like something like the Oculus would be better suited. GPU and storage are the size of a cell phone these days; look at 128 GB SD cards. How many of those would fit in an additional 1 mm of thickness on an average cell phone? 50? 150? It won't work for the monitors, of course. Also, 30"+ monitors are expensive because they require a large amount of silicon to be precision-manufactured, whereas resolution is only really limited by the speed of the electronics (that's why a 1080p cell phone screen is $60 whereas a 30" one is still $300+, and you can barely get higher-res displays in bigger sizes; it's cheaper than CPUs and the like because it's few layers and very large features, so even the worst fab can produce perfectly adequate monitors, but you still need the material).
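As a rough sanity check on that thickness question (the dimensions below are assumptions: roughly a 140 x 70 mm phone face and standard 15 x 11 x 1 mm microSD cards):

    # Back-of-the-envelope: how many microSD cards fit in 1 mm of extra
    # thickness on an average phone? All dimensions are assumptions.
    PHONE_FACE_MM2 = 140 * 70          # ~5" phone footprint (assumed)
    EXTRA_THICKNESS_MM = 1.0
    MICROSD_VOLUME_MM3 = 15 * 11 * 1   # microSD is roughly 15 x 11 x 1 mm

    extra_volume_mm3 = PHONE_FACE_MM2 * EXTRA_THICKNESS_MM     # ~9800 mm^3
    cards = extra_volume_mm3 / MICROSD_VOLUME_MM3
    print(f"~{cards:.0f} cards, ~{cards * 128 / 1024:.1f} TB at 128 GB each")
    # -> ~59 cards, ~7.4 TB (ignoring packaging, controllers and wiring)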
But an Oculus Rift can use a $60 screen to make you think you're in a room with a 600" display (really, check their cinema demo. Wow. Just wow). Or a room with fifty 600" displays, no problem. Nimble VR can track your hands, and their competitors claim they can do it accurately enough to give you a usable virtual keyboard [1].
Why sit down at a docking station with GPU, monitors, and storage when you can take it all with you? Why can't I have five 30" monitors and full computer games while sitting on a train? In an airplane? In a one-square-meter room? Why would employers bother with any computers or peripherals other than a cell phone at work when a cell phone plus an Oculus can provide a better and more useful experience? Why can't I sit down on the morning train, snap my fingers, and have three large monitors with my working environment appear around me, usable? Why can't I do the same in an economy seat on a plane, all the while with my mind convinced it's really sitting in a huge open outdoor space...
Additionally, I wear glasses. Let me tell you, there's no way to have the perfect glasses for every environment. Some part of the monitor is always a bit blurry, just enough to be irritating. The bigger the monitor, the greater the difference in distance between the closest pixel to my eye and the furthest, and that degrades quality. But I can see every pixel on an Oculus Rift display perfectly. So, at least for me, such an environment can actually be higher quality than reality (or at least than real displays).
From my perspective, Atrix-like scaling up is an illusory solution - the engineering work to make it work properly is more than that required to make the cloud work properly, and the latter is better in the long run.
I suppose I need to justify that the cloud is better. It seems like a chicken and egg problem to me: people don't expect all of their data to persist between devices, so most apps (for mobile and desktop platforms) either don't integrate cloud sync, or do it in a haphazard way that requires effort to set up and is usually buggy and/or limited. Because most apps don't integrate cloud sync, people's expectations are set low enough that they don't demand it to work...
And this applies twice over for power users. In part because they're just likely to use more applications and more obscure applications in general, straying further from the beaten path and encountering wilder terrain. In part because they tend to use the command line, and command line interfaces are very portable, which means people still use decades old software practically unmodified, which is great in itself but ensures that it will never be possible to break expectations (e.g. file accesses might take time, and might not succeed) and expect most programs to eventually be updated to adjust to the new expectations. Not even over the span of a decade.
But for heaven's sake, NFS is 30 years old and allowed you to access all your files from any client. It's also completely unsuitable for today's mobile networks (or really anything other than a perfectly reliable wired link), but we've learned a thing or two about syncing protocols since then. Dropbox does a reasonable job at the protocol level, and works so-so with the command line, but is a centralized service (I think an ideal one would be decentralized) and doesn't aim high enough (you only sync one directory, and some applications use Dropbox-specific APIs to get smarter syncing functionality; it's not integrated deeply into a whole operating system). iCloud (+ Continuity) has the right idea in general, but is also centralized and not (yet) sufficiently pervasive.
If this is so hard, why not just do the Atrix thing for now? Because while that can maybe be an okay solution with some more work, it's only a good solution if apps seamlessly switch interfaces when you dock or undock, which in practice is probably going to look very similar to cloud sync (data being transferred between separate applications), just on a single device.
There's the conservation-of-CPUs aspect, but I don't see a single CPU being suitable anytime soon for both maximum speed (when plugged in) and low power (when not). At best you arrive at a compromise that's not ideal for either. And do you really want a laptop dock that's a brick without a phone (i.e. if you leave the phone at a different location, or if you want one person to be able to use the phone while the other uses the laptop), if including a CPU and some minimal storage (enough to get what you need from the cloud...) would not be that much relative increase in price?
Do you really want to hold fast to a world where losing or damaging a device that you take with you everywhere can cause the loss of important data? I found Google's goofy "people in lab coats destroy Chromebooks in amusing ways" ads (conveying that no data is lost) pretty compelling; too bad Chromebooks are still so limited even today.
There's the NSA angle, but the fact that most of today's consumer cloud services do not include client-side encryption and/or are closed source is hardly inevitable or unfixable.
There's the offline experience, but "cloud" can always be a protocol that can operate totally standalone on a LAN if necessary, adding the Internet as a communications channel when available. (Certainly today's cloud services could do better with LAN optimizations - if my phone and laptop are on the same Wi-Fi network, I should never have to wait for the Internet to get content from one on the other. Actually, this is true if they are close together, period. Use Wi-Fi Direct if you have to.)
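A minimal sketch of that LAN-first behaviour; the peer discovery and fetch helpers here are hypothetical placeholders, not any existing API:

    # Sketch of "local network first, Internet second" content fetching.
    # lan_peers and cloud are hypothetical objects standing in for whatever
    # discovery (mDNS, Wi-Fi Direct) and sync service a real protocol uses.
    def fetch(content_id, lan_peers, cloud):
        for peer in lan_peers:
            data = peer.try_fetch(content_id)   # returns None on miss or timeout
            if data is not None:
                return data                     # never touched the Internet
        return cloud.fetch(content_id)          # fall back to the wide area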
The latter requires you to give all of your data to giant corporations that you can't control. It also requires that you sacrifice any freedom or choice in what software you run, where your data is stored, and more.
The cloud is one of the world's worst compromises, and we're all just sitting around waiting for it to bite everyone in the rear.
Today it usually does. As I said, though, since green-field development is required either way, there is nothing preventing the 'cloud' from being client-side encrypted, and there is also nothing preventing it from being open. (Well, other than political inertia, but I'd hate to see that compromise a technically superior solution. :/) Think of ownCloud...
Yeah, as long as companies can profit from mining your data, the wealth of cloud services will be built that way. And consumers' obsession with "free stuff" only encourages it. I've played with ownCloud extensively, though Sandstorm.io is the new hotness that I think has a lot of potential down the line.
Maybe it's just me, but it looks to me like things have been going the other "way", all your documents synced to a "cloud" and providing various interfaces to it (phone/tablet/pc...)
What this looks like is all your data on your phone's hd, and "docking" to other form factors to compute/play with the data locally. I really hope we go this route, but I'm not holding my breath.
I don't think it's about where your documents are; I think it's more about where your local CPU is. Your documents will almost certainly be in the cloud. As it stands, though, you have two radios, two CPUs, two sticks of RAM. I think the idea of convergence is merging your phone, your PC, and any other device with a CPU into one device. The data will still be in the cloud. The benefit: if you want to write something, a keyboard and large monitor are better. If you want a phone, the smaller form factor is better. If you want a better camera, you buy lenses... not a body. Etc.
There's no good reason to have a separate CPU in each of these devices. When you can fit a CPU that's as powerful as a laptop in a phone, why not just have it in the phone, and only have one computer? Then take the CPU out of the laptop, and plug in your phone when you need it.
A mid-to-high end ARM SoC is $20 these days, there's really no reason to NOT have a CPU in anything that has a monitor in it. Especially since the CPU in a phone has to deal with a power/thermal profile that most laptops have a little more leeway on. It's such a false economy.
It's still kind of nice that it should become possible, with pretty much no fussing, to plug a phone into a TV (or any other display with USB input), have it charge and also take over the display.
As you say, the hardware savings aren't very interesting, but simpler, better interop looks like it will be.
ARM's big.LITTLE architecture could feasibly bridge the gap between mobile and portable devices. I recently acquired an ODroid XU and a bunch of peripherals to serve as a modular mobile computer. Right now it complements my laptop and phone, but eventually I could see it replacing both by adding a small "tablet" that's basically a display/speaker/mic, plus a pico projector and a tiny wireless keyboard. All of these components already exist, but the LCD and projector are currently out of my price range for what's essentially a fun experiment. There are even battery banks available for around $99 that can charge USB devices plus many laptop models, and that can be used while charging (something most cheaper USB battery packs don't do).
The biggest issue will probably be keeping the thing powered all day. In the medium term I can even see the possibility of a shift away from ultrabooks and toward more modular laptops, since the giant battery needs to live somewhere. Then you could have a mobile device tethered wirelessly that doesn't need the same level of components (except maybe GPU) and the battery can be reduced further.
There's so much more than a CPU, though. You still need storage, radios (Bluetooth/Wi-Fi/4G), connections for the radios, etc.
Plus, as technology gets better you have to upgrade each device independently. Wouldn't it be nice if every two years you could upgrade your phone AND your laptop for the price of one device?
A phone uses about 5 W, a laptop 50 W, and an ultra-high-end desktop 500 W. That's two orders of magnitude of difference; there's no way the CPU can be the same across all of that.
And then your kid wants to use the computer but can't because you have the only CPU in your phone and you kick yourself for buying into the modular convergence snake oil.
If you were buying in to modularised hardware that worked that way then instead of buying your kid a tablet, they'd have a phone-style device.
It's more like you have a phone with thermal imaging built in, and your kid wants to use thermal imaging for a homework project, so now you can't use your phone.
Whereas with modularised hardware – like an appification of hardware – you'd be able to keep using your phone (maybe to boost the processing capabilities of your desktop while you do a render, or plugged into your VR headset) and hand them the thermal imaging module to use with their own processing device (be that a phone, laptop, or whatever).
I think the focus is more on the fact that there is hardware and a protocol that will enable data transfer at that bandwidth and speed. I completely agree that everything is going into the cloud, but that is usually personalized data. What this would allow is a massive data-transfer pipeline between actual pieces of hardware: for instance, anything from a high-res camera to SSDs or possibly holographic data storage, etc.
I get the 'vision' here but not sure I buy it yet. I'm still wondering if we will have a radio brick that you can use with either a 'tablet' type device, a laptop like device, or a phone like device.
I agree they would be against it; they are, after all, fighting any and all forms of disintermediation. The last thing they want is to be just a seller of 'airtime minutes'.
I no longer buy a SIM for my iPhone when I go abroad, I buy a SIM for my Mifi device: http://en.wikipedia.org/wiki/MiFi - the one I have is a bit smaller than a Nokia 8210 used to be and weighs nothing.
The capability for power distribution is really interesting: 2 A at 5 V, 5 A at 12 V, and 5 A at 20 V.
Assuming some sort of "intelligent" power distribution center, you could plug your monitor into one port, your cpu/memory/ssd into another, and even digital audio (DAC and powered speakers) into a third, with only a single AC power cord needed.
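A toy sketch of what such a distribution centre would have to budget, using those three USB-PD rails (the port names and the 150 W supply figure are made up for illustration):

    # Illustrative power budgeting for a hypothetical USB-PD hub.
    SUPPLY_BUDGET_W = 150  # single AC cord into the hub (assumed)

    ports = {
        "monitor":        20 * 5,  # 20 V @ 5 A = 100 W
        "cpu/memory/ssd": 12 * 5,  # 12 V @ 5 A =  60 W
        "dac + speakers":  5 * 2,  #  5 V @ 2 A =  10 W
    }

    requested = sum(ports.values())
    print(f"requested {requested} W of {SUPPLY_BUDGET_W} W available")
    # A real hub would renegotiate lower PD contracts when oversubscribed,
    # e.g. dropping the CPU box to 12 V @ 3 A.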
If you are looking to use the DP-over-Type-C ability, you'd likely have a monitor with a hub instead, the reason being that Alternate Modes cannot traverse hubs; they're point-to-point only.
Wait, USB-C doesn't let you charge smartphones faster than at 2A? So it won't be any faster than what you can do right now? Why doesn't it let you charge at 10A? Is it because small batteries can't take so much energy so fast? What's the deal with those "charge half of your battery in 15 minutes" statements we see for some of the recent devices?
Type-C works with USB-PD. USB-PD can negotiate up to 100 W of power delivery via 20 V @ 5 A. Increasing the amperage beyond 5 A makes the connection more dangerous and increases the cabling requirements. If your smartphone or tablet implements USB-PD and can handle 100 W, it could charge at 20 V @ 5 A VERY quickly.
Battery charging is asymptotic. In the beginning, a battery accepts quite a bit of charge current and thus charges quickly. As it gets closer to fully charged, it cannot safely take as much current, so you have to back off more and more as you approach full. Bigger batteries, like those in laptops, have multiple cells to attain higher voltages, so the extra current can be spread across the cells. So, in short, no, USB-C won't let you charge phones all that much faster than at 2 A.
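A toy constant-current/constant-voltage charge curve (all cell parameters made up) just to show why the last stretch is slow:

    # Toy CC-CV charge simulation with made-up numbers: ~1C constant current
    # up to ~80%, then a tapering current until 99%.
    capacity_mah = 3000.0
    charge_mah = 0.0
    current_ma = 3000.0
    minute = 0

    while charge_mah < 0.99 * capacity_mah:
        charge_mah += current_ma / 60.0
        minute += 1
        if charge_mah > 0.8 * capacity_mah:
            # the charger backs the current off as the cell approaches full
            current_ma = max(150.0, current_ma * 0.9)
        if minute % 15 == 0:
            print(f"{minute:3d} min: {100 * charge_mah / capacity_mah:5.1f} %")
    # The first ~80% arrives quickly; the last ~20% takes longer than that.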
The Lytro example seems very dubious. Their solution has a lot of additional hardware in it - you wouldn't be able to package it into a Project Ara type device without having some kind of backpack anyway. From their point of view, it'd then be cheaper to also include the CPU, screen etc - a more unified hardware base to build software for.
There are lots of industrial and commercial devices which could be usefully attached to a smartphone or tablet. Medical diagnostic devices, auto diagnostic readers, and electronic test equipment could all be connected. LabVIEW has a smartphone/tablet interface.[1]
None of this is consumer oriented. It's going to be really useful for getting things done with electronics and in industrial and medical environments, but home and office, no. The consumer stuff is going the other way. "One tap to get a ride," says Uber PR.
Modular devices make a lot of theoretical sense. Practically, it is almost impossible to achieve cohesion between large corporate players. Just look at the power tools market.
Yeah, modular smartphones are definitely not the "End-Game for Convergence." Just because you think they're cool does not mean they will dominate the market in the way you want them to.
I screamed like a little kid when I read this. I was working on UniPro 6 years ago and thought it was a great idea then. Glad to see it finally coming to market!
Considering Thunderbolt 2 isn't quite up to snuff for PCIe enclosures at a bidirectional 10 Gbit/s, USB 3.1 isn't going to be enough on bandwidth alone.
Not at all. I'm using a GPU via ExpressCard over a single lane; 10 Gbit/s is plenty. The question is whether it DMAs correctly, and whether it gets actively disabled by Intel again (as with Thunderbolt).
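Some rough effective-bandwidth arithmetic for the links being compared here, using published line rates and encoding overheads (real-world throughput will be lower):

    # Line rate x encoding efficiency, in usable GB/s. Rough figures only.
    links_bits_per_s = {
        "ExpressCard (PCIe 2.0 x1)": 5e9 * 8 / 10,      # 8b/10b    -> ~4 Gbit/s
        "USB 3.1 Gen 2":             10e9 * 128 / 132,  # 128b/132b -> ~9.7 Gbit/s
        "Thunderbolt 2":             20e9,              # two bonded 10 Gbit/s channels
    }
    for name, bps in links_bits_per_s.items():
        print(f"{name:27s} ~{bps / 8 / 1e9:.2f} GB/s")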
Were the connectors really what was holding this back? I would be surprised if it were just that. About the modular phones, super cool idea, but very curious about its feasibility.