There's a bit of a 'reboot', as it were (I forget what it's generally called), that contributes to this. By that I mean the way that microcomputers reset progress in software.
When low-power microcomputers hit the masses, the advances that had happened in software in the minicomputer / mainframe world couldn't follow along. So in the first instance you had computers that could only barely run BASIC programs and where you were programming in assembly close to the bare metal if you really wanted to do much more advanced things with the machine.
Now, the kids growing up on these computers in the 70s and 80s (I'm an 80s kid myself) had no idea about things like GUIs, or the latest in virtualization technology on minicomputers, or all the many problems that had been solved in the big-mainframe world but were yet to hit consumer devices. One factor is that consumer hardware of the time wouldn't have run the software efficiently anyway; another is that there was no Internet where you could just Google anything and find out.
So many of us then grew up in this era and became the software engineers who would go on to write the operating systems and software of the 90s and beyond. We'd never seen or heard of, for example, Lisp Machines or what they could do. That, I think, is why you end up with this weird generational gap, almost like a chasm of knowledge, born in the 80s and 90s.
Whenever I watch an Alan Kay video I'm blown away by how much was possible 'back then'. My mental map of technological progress starts with 8-bit micros in the 1980s, which we all thought were 'cutting edge technology' except that they weren't, in the broadest scheme of things. It's this amazement that I think leads to the feeling of a 'Golden Lost Age'.
Remember when Windows 95 touted preemptive multitasking as a groundbreaking new feature? Or when DMA for hard disks was a 'new' thing? (except that the Mother Of All Demos basically showed the concept, from what I vaguely remember from watching it many years back).
We see the same cycle in mobile computing - phones were once close-to-the-metal devices; now they're running fully multi-tasking, essentially desktop-class operating systems. The difference these days is that we have the Internet and we have the lessons of history actually available to us.
I remember the awkward thrill of seeing a student thesis from the 60s being more thoughtful than the latest vector drawing program from A. And I've toyed with a long list of advanced packages; none of them has that tiny geometry solver in it.
Progress surely isn't an arrow through time. Lots of collateral and inherited subcultures are sucking energy. As you mentioned, a new 'market' is also a potentially huge drawback, but that's such a common thing. People will see the world their way, not the way a PhD who knows the history and the state of the art would (and even researchers don't know it all).
It happens in programming languages too. The web all started as freeform, joyful environments, unlike C++ or Java with their heavyweight specs and standards. But then complexity hits, and suddenly they're bringing in a lot of structure, types, conventions, and so on. It's like a child who cannot enjoy his parents' universe; he needs something compatible with his new mind.
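As a toy illustration of that drift (my own sketch in TypeScript, not tied to any particular codebase or framework): the same little function written in the early freeform style, and then again with the structure and types that tend to get bolted on once complexity hits.

    // Freeform, "joyful" style: anything goes, errors only surface at runtime.
    function totalLoose(items: any): any {
      let sum = 0;
      for (const item of items) {
        sum += item.price * item.qty;
      }
      return sum;
    }

    // The same logic after "complexity hits": explicit types and conventions.
    interface LineItem {
      price: number; // unit price
      qty: number;   // quantity ordered
    }

    function totalTyped(items: LineItem[]): number {
      return items.reduce((sum, item) => sum + item.price * item.qty, 0);
    }

    // totalTyped([{ price: 2.5, qty: 4 }]) === 10
    // totalLoose would happily accept garbage and return NaN at runtime.

Nothing about the typed version is smarter; it just refuses to let the program grow past a certain size without declaring its intentions.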
ps: my latest 'the newest is less than the old' moment was realizing Haskell was specified in 1990. At that time mainstream users were given Windows 3.0 and DOS :)
I used to question this, too, but then again, I also sincerely appreciate having a Chromebook that boots in about 5 seconds ... compared to my phone that takes about a minute to go from off to "useful".
I know it doesn't matter for consumers as much, but it's far easier to centrally administer ChromeOS devices than Android devices, too.
The funny thing about that is that my MacBook Pro boots in a few seconds to a full desktop OS, just as my Windows 10 machine does. The cold-boot time of Android is pretty terrible. Admittedly, the devices it boots on are pretty slow, but still.
That's certainly not a fair assessment. APIs like Metal & Vulkan (and AMD's Mantle, which arguably started the trend on the PC) are the correct technical direction to go in, not an indication of incompetence on AMD's part - if anything they're an indication of trying to fix a broken status quo. You have to understand that drivers are hard to write because they've become an over-complicated mess of hacks, papering over the impedance mismatch between what OpenGL/DirectX of old made graphics hardware look like and what graphics hardware actually evolved into. The hardware just evolved away from the APIs.
See here for a start -- this post is extremely enlightening on the situation:
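To make that impedance mismatch a little more concrete, here's a deliberately toy sketch in TypeScript (my own illustration - this is not actual OpenGL or Vulkan code, just the rough shape of the two models): the old style is a global state machine where the driver has to infer and validate everything at draw time; the new style has the application declare its pipeline up front and record commands explicitly.

    // Toy model only -- real OpenGL/Vulkan are vastly more involved.

    // Old style: a global state machine. The driver tracks every setting,
    // validates the whole combination on every draw, and may patch shaders
    // or re-emit hardware state behind the program's back.
    class OldStyleApi {
      private state: Record<string, unknown> = {};
      set(key: string, value: unknown): void {
        this.state[key] = value; // driver: dirty flags, hidden caching...
      }
      draw(): void {
        // driver: figure out, right now, what the hardware actually needs
        console.log("draw with inferred state:", this.state);
      }
    }

    // New style: an immutable pipeline object is built (and validated) once,
    // then the app records commands explicitly. No guessing at draw time.
    interface PipelineDesc { shader: string; blend: boolean; depthTest: boolean; }

    class Pipeline {
      constructor(readonly desc: PipelineDesc) {} // validated here, once
    }

    class CommandList {
      private commands: string[] = [];
      bindPipeline(p: Pipeline): void {
        this.commands.push(`bind ${p.desc.shader}`);
      }
      draw(vertexCount: number): void {
        this.commands.push(`draw ${vertexCount}`);
      }
      submit(): void {
        console.log(this.commands.join("\n"));
      }
    }

The point is that in the second model the driver's job shrinks to executing what it was explicitly told, which is the direction Mantle/Vulkan/Metal pushed things.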
The latest Oculus SDK (0.7.0.0) on Windows _absolutely_ fixes this mess. I have nothing but praise for the latest software, and this is coming from someone who hadn't been able to use his Rift since around November last year, when the AMD drivers and the Rift's extended mode stopped working together.
Basically, the Rift is now treated as a proper HMD, a category of its own, and not a bastardized monitor. There is no more extended mode. Everything compiled with Oculus SDK 0.6.0.0 and above now works in Direct Mode only. The latency is noticeably better. With the latest AMD/Nvidia drivers, you just plug the headset in and there's no monitor-type configuration at all.
Note this is with Windows 10; I don't have first-hand experience with 8.1 but I think it'd be much the same. It really is that much better.
Also, if you haven't checked out the latest VR Desktop (http://www.vrdesktop.net), you need to do that too. In addition to giving you a giant screen to view your desktop on, you can now play full-screen games in it, watch YouTube 360 videos properly in it, and watch 3D movies in side-by-side or top/bottom encoding. It's really, really awesome :)
> The latest Oculus SDK (0.7.0.0) on Windows _absolutely_ fixes this mess.
Tried to use that to show it off to a friend on his desktop running Windows 10. Didn't work. The demo desk worked - but nothing else did. Also - didn't I read something about 0.7 not being backwards compatible with apps compiled for 0.6?
> AMD drivers and the Rift's extended mode stopped working together.
For me, only extended mode works on my AMD card. And I think this is part of the problem - there appear to be a number of "it works for me"/"it doesn't work for me" threads all over the Oculus Rift forum from people with varying builds, including threads along the lines of "if you follow this incantation and sacrifice a chicken while modifying your registry, the Oculus Rift will work".
> Also, if you haven't checked out the latest VR Desktop
I tried that out - but it crashed for me almost every time I tried using it.
For me the Oculus Rift has pretty much been a fail - I can't tell whether the failure is on my end (i.e. my desktop) or on Oculus's end. Though I really think that if they hope to gain market share, they should hire the guy(s) who have been working on JanusVR and make that a bundled "app". Right now that is the only reason I don't just put it in a box and forget about it. Yes - I understand this is a "development kit", but I would like to know that it's functional before I develop for it. If people have to read out of the Necronomicon to use it, there is no point in developing for it. I feel like others share this opinion, because I think Elite Dangerous implemented support but said that they won't update it until the Oculus Rift has a non-dev-kit version.
I'm just frazzled because I purchased this $300 device expecting it to be somewhat stable - and it seems like something I would get out of a Kickstarter.
People don't quite get it until they've tried it. The most surprising thing is the way that the 3D stereoscopy of the environment, combined with the head tracking in VR, conveys scale. The movie theater actually looks AS BIG AS A MOVIE THEATER SCREEN. It's not "strap this thing on your face and get kind of an illusion of a 3D movie floating in front of you", it's "strap this thing on your face and see a massive screen in front of you that couldn't physically fit in the room you're currently sitting in". Not to mention that you'll ideally get virtual theater surround through headphones that is fixed in space, so that when you turn your head the sounds keep coming from their fixed speaker positions in the virtual room instead of rotating along with you.
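Here's a rough sketch in TypeScript of what 'fixed in space' means for the audio (my own toy math, not any particular SDK's API): the virtual speakers live at fixed positions in the virtual theater, and each frame their positions are rotated by the inverse of the head orientation to get the direction each sound should come from relative to your ears.

    // Toy sketch: world-fixed virtual speakers, head-relative sound direction.
    // Only yaw (turning left/right) is handled, to keep the math simple.
    // Coordinates: x = right, y = forward, viewed from above.

    interface Vec2 { x: number; y: number; }

    // Speakers fixed in the virtual theater: front-left and front-right.
    const speakers: Vec2[] = [
      { x: -1, y: 2 }, // front-left
      { x:  1, y: 2 }, // front-right
    ];

    // Convert a world-space position into head space by rotating it by -yaw.
    // yaw is the head's rotation, counterclockwise (turning left is positive).
    function worldToHead(p: Vec2, yawRad: number): Vec2 {
      const c = Math.cos(-yawRad);
      const s = Math.sin(-yawRad);
      return { x: p.x * c - p.y * s, y: p.x * s + p.y * c };
    }

    // Facing forward: the front-left speaker is ahead and to the left.
    console.log(worldToHead(speakers[0], 0));           // { x: -1, y: 2 }
    // After turning 90 degrees to the left, that same speaker ends up off to
    // your front-right, exactly as it would in a real theater.
    console.log(worldToHead(speakers[0], Math.PI / 2)); // ~ { x: 2, y: 1 }

Head-fixed audio (what you get from ordinary stereo headphones) skips that rotation, which is exactly why it never feels like the sound is in the room with you.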
As for 3D stereoscopy in movies: that's an inherently limited format (limited by the fixed viewer viewpoint and the edges of the screen). 3D stereo and VR are not comparable by any means. About the only thing they have in common is that you use two eyes to view them. Here, though, the VR cinema adds an advantage - 3D stereoscopic content can be shown perfectly, without any cross-talk between the images, which helps with the integrity of the effect. Note the word 'effect': IMHO 3D on a fixed movie screen is strictly a special effect. When used in such a way, it's great. When overused or used improperly, it sucks.
TL;DR: '3D movies' and VR shouldn't be uttered in the same sentence.
This is excellent. I know many people for whom this would be cheap enough to just add on to their next phone contract, who otherwise wouldn't have dipped their toes in it.
As an aside, I really wish Google and Oculus could get together and work out how to allow Cardboard apps to take advantage of the extra hardware from the Gear VR if it's available. Using a Gear VR and then trying Cardboard shows how woefully inadequate Cardboard is; but there are some cool Cardboard apps that I'd really love to be able to try with the much much better tracking of my Gear VR.
You are describing turn-key solutions - services that provide a complete end-to-end consumption method, like Netflix. In content-consumption scenarios (be it articles, movies/TV, or even games), even recent history has shown that it isn't the availability of content that is the limiting factor, it is the accessibility of content. By that I don't mean whether you have authorization to consume the content, but whether you have the capability of enacting your end of the transaction (in this case, pressing play).
The reason VHS and DVD sales are better than digital is that it is still vastly easier to put a thing in a magic box hooked up to the TV and press play on the remote. Even this is fraught with danger (how do I hook it up, how do I get to the right input, where exactly is the button to play the movie on this DVD menu?), which is also why you hear those complaints near-constantly.
There are other factors as well (like people not feeling that they own a file, whereas a DVD is theirs), but again, the barrier to entry for consumers on computers is getting past having to think about any of the moving parts. This is true in a lot of places, like automatic transmissions and ATMs (read up on the history of ATM interfaces; it's fascinating if you're into that).
Currently, on a PC, if you download a file that is a piece of media to consume, most people face several important barriers:
- I have to put the file somewhere, and I don't understand filesystems
- I have to have software that uses the file format, that works on my machine
- I have to know how to use that software
- I have to know how to purchase the file, whatever that is supposed to mean
- I have to like how it's being displayed
- I have to know how to get it from the machine I downloaded it on to my TV or whatever
There are probably others in some scenarios, but you see the point. A lot of the things we tech users take for granted are very, very hard for most people, and they always have been (again, physical tech like cars or elevators follows this). Even giving someone a link barely begins to scratch the surface of this process. So this is, bar none, the reason digital is going to have problems until Netflix or YouTube or whatever can just play everything.
This is also, as a related aside, why a web browser and mobile apps win at breaking down those barriers. Everyone already has them, and it's the "click icon to do thing" model - where the 'icon' might be a bookmark or a simple address you type, but the point is the same. No installs, no understanding of local mechanics and differences. It's also why Apple products are viewed by consumers as more user friendly. There's very little to understand about your machine on a MacBook. You don't have to take my word for it here: watch your average user use a Mac, or read Apple's user interface guides.
I wonder if part of it is that some techies who aspire to management do so because they don't actually like developing software.
As an individual contributor I can't imagine giving up my day-to-day coding for a management position. Management responsibilities, sure. I'd wager that most software developers who are given enough autonomy are doing a lot of micro-project management anyway.
Giving up on researching new technology and playing around with stuff? Not a chance.
I'm in this boat at the moment (a one-man-band contractor of many years finally starting to take on employees). I'm finding that as I delegate more of the routine day-to-day work, I actually have more time to play around with new stuff. Trouble is, in my line of work (corporate .NET stuff) customers aren't interested in anything new and exciting - they just want Windows applications and ASP.NET forms connected to SQL Server databases.
Of course, if you're lucky enough to be in a job (I'm thinking frontend web) where playing with exciting new tech is part of the job, I can absolutely see how the move to management and the loss of overall hands-on time with the code can be a bad thing. Thankfully for me it's been somewhat liberating.
What do they mean by "....with the government and with each other" (emphasis on the 'with each other' part)?
The first part I can understand - companies being compelled to share with the government. However, is this act also allowing these companies to share with each other?
I had a read of the article that was linked to from that Quora answer and I still couldn't glean 'evil' from that -- potential to become evil, yes; actively intending to do evil, not so much.