Oh dear, what were they thinking with that name? First they released the Maxwell-based GTX Titan X, then replaced it with the Pascal-based Nvidia Titan X, which nearly everyone called the Titan XP to disambiguate the confusingly similar names, and now Nvidia goes and uses that universally accepted nickname as the actual product name for a different product.
They should really stick with their standard naming convention and just call it a 1090 or a 1080 Titan or something - all the information is there, you know what generation it's from and you know where it stands in the product lineup. No confusing "which Titan X do you mean".
Nvidia has a history with names. Remember when they used "Tesla" as the name of both a microarchitecture and their HPC line of cards, which still carries it? When Fermi-based Tesla cards came out, everyone was saying "huh, wait, what?"
They were probably thinking that next year they can call the new one the Titan XPP. Historically, their naming conventions for GeForce cards haven't been super consistent either.
Really? From what I can recall, the first number is always the major release and the second number is always the tier within the lineup. It's pretty high up there in terms of naming consistency.
I don't mean that kind of branding, I mean the kind of rebranding where, for example, they take the same chip that's in a GTX 680 and re-release it as a GTX 770.
Damn, the 2016 Titan X was confusing because the 2015 Titan X was named the same, so people nicknamed the 2016 one "Titan XP", and they fucking went ahead and one-upped the confusion by making an actual Titan Xp.
Bravo Nvidia LOL
I had to check whether my Hacker News app had just failed to refresh the post list since April Fools.
Well, at least it's somewhat better than the iPad.
"Here's the new iPad. It's newer than the old one, but good luck trying to tell the difference. So to clear things up, we've named it the "iPad". You can thank us later.".
I have been evaluating buying a 1080 Ti recently; it appears to also have the highest cores per $. Is the 1080 Ti really the most cost-efficient card for general-purpose HPC work on Linux (numerics)? The 1080 is a step down but also competitively priced. Curious about your thoughts, as I couldn't find a guide on this stuff on the web.
edit: The Tesla K20 is also in competition in my view (despite the much higher cost) due to its focus on higher double-precision performance.
We do a lot of work on video encoding.
We have had a K80, Titan X (Maxwell), Titan X (Pascal), 1080, 1080 Ti, and others (including render farms based on GTX 980s).
General thoughts: Don't expect to get _any_ information out of Nvidia unless you are running everything on their hardware compatibility lists (i.e. an approved server case).
Do not mix & match consumer-rated gear with 'professional' gear. (e.g. if you put the K80 in a system with a GTX 1080, the Nvidia drivers restrict the number of available processing cores to 2 per device)
Air-flow: the Teslas run HOT even with a blower attached and/or installed in the recommended case.
NVENC: the Pascal-based cards are dramatically faster AND higher quality than the Kepler-based cards.
For anybody else doing video encoding work: grab an Nvidia Jetson TK1 dev kit. This little board is a MONSTER and can handle everything we throw at it without breaking a sweat.
> Do not mix & match consumer-rated gear with 'professional' gear. (e.g. if you put the K80 in a system with a GTX 1080, the Nvidia drivers restrict the number of available processing cores to 2 per device)
Huh? Not sure what exactly you mean by "number of processing cores".
I use two development boxes on a regular basis with Teslas side-by-side with GeForce cards and they all work just fine.
I was unintentionally vague. I should have said 'output'.
At the bottom of this post I have copy/pasted output from my original tests.
That was not CUDA; the task I was working on specifically (and only) used the NVENC encoder (via ffmpeg). I don't know if the situation has changed, but these were my observations.
All of my tests were done in 2015, so the situation might be different now.
The K80 could output up to 4 "streams" (aka outputs or threads) at once. A 780 Ti can only do 2.
According to nvidia-smi, the K80 "appears" to be 2 GPUs on one card. You can actually designate which GPU you want to process each ffmpeg stream on.
As soon as you had both devices installed in the same PC, the Nvidia drivers limited the output of the K80 so that it too would only output up to 2 streams per GPU.
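For anyone who hasn't scripted NVENC before, here's a minimal sketch of the kind of per-GPU routing described above, driving ffmpeg from Python. The h264_nvenc encoder takes a -gpu option whose index matches nvidia-smi; the filenames here are made up, and it assumes an ffmpeg build with NVENC enabled.

    import subprocess

    # route one encode to each GPU of a dual-GPU card like the K80;
    # h264_nvenc's -gpu option selects the device by nvidia-smi index
    jobs = [("in0.mp4", "out0.mp4", 0),
            ("in1.mp4", "out1.mp4", 1)]
    procs = [
        subprocess.Popen(["ffmpeg", "-y", "-i", src,
                          "-c:v", "h264_nvenc", "-gpu", str(gpu),
                          dst])
        for src, dst, gpu in jobs
    ]
    for p in procs:
        p.wait()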
IIRC, there was even a status message that got displayed when installing the Nvidia binary blob:
paraphrasing from memory from 3 years ago
Warning consumer card detected. Limiting available GPU's
Here is a copy/paste dump of my findings at that time.
(The formatting is screwy with the nvidia-smi output.)
If you're measuring cores per $ or GFLOPS (single precision or double precision) per $, the ordering (decreasing) is the same: 1080 Ti, 1080, 1070. The Tesla wins on double-precision GFLOPS per $. At least according to current lowest Newegg prices. It is all quite close, though.
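Since the prices move around, it's easy to redo this math yourself. A quick sketch with approximate FP32 spec-sheet TFLOPS and placeholder prices (substitute whatever is actually listed when you buy):

    # TFLOPS figures are approximate spec-sheet FP32 numbers; prices are
    # placeholders only - plug in current street prices before trusting the ranking
    cards = {
        "GTX 1070":    (6.5,  380),   # (FP32 TFLOPS, price in USD)
        "GTX 1080":    (8.9,  500),
        "GTX 1080 Ti": (11.3, 700),
        "Titan Xp":    (12.1, 1200),
    }
    for name, (tflops, price) in cards.items():
        print(f"{name:12s} {tflops * 1000 / price:6.1f} GFLOPS per dollar")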
I may be totally wrong here, but from what I remember the Titan line handles up to double-precision floats whereas the GTX line handles only single? Games don't really need double precision, so it's overkill for the GTX line. Can anyone confirm?
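If you'd rather measure than trust spec sheets, here's a rough way to see your own card's FP64-to-FP32 ratio (a sketch assuming PyTorch with CUDA; not a rigorous benchmark):

    import time
    import torch

    def matmul_tflops(dtype, n=4096, iters=20):
        # time repeated n x n matrix multiplies and convert to TFLOPS
        a = torch.randn(n, n, dtype=dtype, device="cuda")
        b = torch.randn(n, n, dtype=dtype, device="cuda")
        torch.cuda.synchronize()
        start = time.time()
        for _ in range(iters):
            a @ b
        torch.cuda.synchronize()
        return iters * 2 * n ** 3 / (time.time() - start) / 1e12

    print("FP32:", matmul_tflops(torch.float32))
    print("FP64:", matmul_tflops(torch.float64))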
On the other hand, video cards meant for games generally can't take sustained load. By that I mean you can't run them at 90-100% load for days on end in, e.g., a render farm; they invariably melt. You're paying for better build quality, and for having enough money for a render farm.
>Currently Mac users are limited to Maxwell GPUs from the company’s 9-series cards, but next week we’ll be able to finally experience Pascal, albeit a $1200 Pascal model, on the Mac.
>We have reached out to Nvidia for a statement about compatibility down the line with lesser 10-series cards, and I’m happy to report that Nvidia states that all Pascal-based GPUs will be Mac-enabled via upcoming drivers. This means that you will be able to use a GTX 1080, for instance, on a Mac system via an eGPU setup, or with a Hackintosh build.
This is one of my biggest feature requests for Apple. I want a tiny little laptop with an integrated GPU when I'm on the road. But when I'm home I also want to be able to run simulations, play games on a big screen & do VR. And for that I want a desktop class GPU when I'm at home. And I want that GPU to be upgradable - CPU speed isn't improving anywhere near as fast as GPU speed, so it makes sense to keep the rest of my system across multiple GPU generations.
The laptops are already there. The RAM fiasco aside, the current laptops are fine little machines. And with Thunderbolt 3 they should have no problem supporting external GPUs.
All that's missing is an official Apple eGPU enclosure and software support! People on the internet have already gotten them working by injecting kexts into the kernel. But first-party support would make the whole thing way better, and way more stable. C'mon Apple! We're so close! Take my money!
They'd probably lean towards selling the eGPU with the GPU included rather than just the shell. Seems more in line with the company. They don't want people using untested hardware, I am guessing. It's bad for brand image or whatever. I figure that's the same reason they don't just sell OS X licenses on their own. They want to know the hardware and software will work well together. Even then, and even with a markup, I think Mac users would be pretty happy with at least having that option.
It makes you wonder if this isn't a coincidence. Nvidia and Apple have pretended the other doesn't exist for several years now (since Apple adopted AMD and dropped Nvidia about 4 years ago). Now, less than a week apart, Apple randomly breaks its iron curtain to tell us that they are making better Mac Pros and that they actually listen to customer feedback, and Nvidia announces Mac support coming to their high-end GPUs. It's a bit too crazy of a coincidence for me.
One of the biggest requests from Mac Pro users was the ability to at least have the option to use Nvidia cards. I had a MacBook Pro (2013, I think) which had a dedicated Nvidia graphics card in it, and then I upgraded to a 2015-model MacBook Pro with the same specs (100% decked out) and noticed a huge drop in graphics performance due to the fact that I was stuck with the AMD graphics card (as Apple had dropped Nvidia as their dedicated graphics card provider during those years). I really regretted upgrading my Mac and wish I could have my older one back with the Nvidia card.
My gut tells me this is more than coincidence. This, and Apple's newfound dedication and commitment to 'pro' users, has me excited (for the time being) about Apple's future.
Finally! This is fantastic news and the first real indication in ~10 months that the 1000/Pascal line will even be functional on a Mac. This is going to make a lot of people very happy.
I'm thinking of MBPs with Nvidia. I'm guessing Vega only comes in big configurations and there won't be a smaller version suitable for notebooks, while mobile Polaris is non-competitive with Pascal on notebooks. Hope that's how it plays out.
Is there something in the Titan Xp I would benefit from for ML/DL/AI compared to the 1080 Ti (other than the extra 1GB)? I am considering getting an 8-core Ryzen with a 1080 Ti (1-2x) and am wondering if the Titan Xp has something that would render the 1080 Ti obsolete for training models.
In terms of architecture and features the 1080 Ti and Titan Xp are the same; the only difference is that the Titan Xp is slightly faster and has 1GB of extra VRAM.
If your workload can be efficiently split between multiple cards, then a $1400 pair of 1080 Tis will vastly outperform a $1200 Titan Xp - about 17% more money for nearly double the throughput.
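To make "efficiently split between multiple cards" concrete, here's a minimal data-parallel sketch (assuming PyTorch; the toy model and batch size are arbitrary). It only pays off when each card gets a big enough slice of the batch to stay busy:

    import torch
    import torch.nn as nn

    # toy model; nn.DataParallel splits each incoming batch across visible GPUs
    model = nn.Sequential(nn.Linear(1024, 512), nn.ReLU(), nn.Linear(512, 10))
    if torch.cuda.device_count() > 1:   # e.g. a pair of 1080 Tis
        model = nn.DataParallel(model)
    model = model.cuda()

    x = torch.randn(256, 1024).cuda()
    out = model(x)                       # batch of 256 split ~128 per card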
I was more curious if Titans had some lower precision data type or better dataset packing than vanilla Pascals, or something similar that would help with ML.
I was about to ask, "What do we expect from GPU companies in the next ten years? Why/how will they dominate computing and innovate in ways that we care about?"
You answered my question. I think now is the time to invest in Nvidia and any other GPU manufacturer, as the ML/DL/AI field is on the precipice of exploding computing in the next 15 years. (15 years happens much faster than you might think.)
The 'problem' with the stock market is that most of these expectations are usually already priced in, i.e. NVDA's market cap today reflects expectations of massive future growth.
That said, NVDA's stock just went 7% down due to an analyst's downgrade (which in the long run is relatively meaningless), so if you'd want to buy NVDA stock and hold it long term now might be a good time.
Disclosure: I am long NVDA, and my stock picking track record is atrocious.
The stock price reflects the market's overall expectation of all future cash flows discounted to the present value - all the way out to infinity.
When you buy a stock, you are implicitly betting that reality will exceed those expectations.
So, you just said that massive growth is priced in, and the fact you hold the stock implies that you are betting on even more massive growth - relative to expectations.
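For anyone who hasn't seen it spelled out, the "discounted to the present value" bit is just a sum; a toy sketch with made-up numbers:

    # present value of expected future cash flows at a given discount rate;
    # the cash flows and 8% rate here are made up purely to show the mechanics
    def present_value(cash_flows, discount_rate):
        return sum(cf / (1 + discount_rate) ** (t + 1)
                   for t, cf in enumerate(cash_flows))

    print(present_value([100, 120, 150, 200], discount_rate=0.08))  # ~461.6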
Correct. The bet here is that 1. we're only seeing the very early stages of what's possible with GPU-based DNNs and 2. while Wall St is certainly considering the future applications, their fantasies are on the conservative side.
The main risks to this thesis are that 1. DL could turn out to be a short-lived trend or 2. a new semiconductor technology emerges that is 10x better suited for DL/ML (Google's new architecture?)
"invest" can be interpreted multiple ways... stock and brain-space...
I think brain-space is the safer investment if we are not talking about getting rich from the stock but enrichening the cyber-sphere from that which can be developed in the ML/DL/AI space...
Regarding the new Nvidia provided mac driver, does this have any influence on Vulkan or modern OpenGL support? Or would that require changes in macOS itself (that would presumably never happen)?
Not sure why anyone would spend double the money when you can get about the same performance using a GTX 1080 Ti. The performance difference looks marginal IMO -- optimizing the code on a GTX 1080 Ti (CUDA and/or shader assembly) would probably yield very satisfactory results and definitely better perf/buck.
Why not more memory? Esp. with the 1080Ti at 11GB and half the price, it would've made sense to push this to at least 16GB or even 24GB to distinguish it.
The problem is that they also need to distinguish it from the higher memory and much more expensive Quadro line. For instance, the new Quadro P6000 comes with 24GB and it'll run you $5500.
Yeah I can see that, but they're running into the other problem now where 12GB is just not that much more than 11GB and certainly not worth the 100% price increase. At 16GB they would at least be offering ~50% more memory.
It doesn't. AMD only has low- and mid-range cards ATM. Vega is supposed to be released in the first half of 2017, and working silicon has already been demoed, so that timeline does look likely. Rumours have it at 12 TFLOPS, so it should be comparable to this card.
It's always been about what you stated. Christ, when me and my buddy ran the Intel gaming DRG lab in 1997 we were testing all games against optimization (this is when SIMD came out, and we proposed stacking cores... but that's a different story). Specifically, Intel wanted to pay game companies to optimize for SIMD, and would give them $1MM for marketing if it could be proven the game (subjectively) ran better on their Celeron processors vs anything from AMD... so they were paying devs to opt for the SIMD instructions and then using that as marketing material... it was a fun job.
aside: That was my personal golden age of gaming... Intel had an OC-48 to SC-5 building... so playing UO on 6 machines simultaneously when everyone else was modeming at 56K made us like gods against lag in that game... I still think fondly of that time.
Hmm, that's interesting. I've noticed that with the new 1000 line, Nvidia has been giving away some games as an incentive. Every single time I've seen it, it's been a Ubisoft game. There was probably a similar deal with Ubisoft this gen. Otherwise why would they be promoting those games so much?
It will more than likely demolish it in terms of raw performance; however, AMD's most recent cards are aimed at being more budget-friendly. An RX 480 will only set you back about $200-250 (compared with this $1200 beast of a card). You'll get more "bang for your buck" by going with an AMD card as opposed to a top-of-the-line model such as this one. That may change later this year when AMD releases their Vega architecture, as it's rumored to aim more at the high-end market (which is currently dominated by Nvidia).
Mind you, Nvidia also has the 1060, which performs either better or worse than a 480 depending on what benchmark you use; it's basically identical in performance in practice, at the same price point.
Are there any signs that machine learning libraries and other GPGPU applications will start using the cross platform OpenCL instead of the proprietary Cuda anytime soon? It's a bit of a shame that so many allow themselves to be locked to one vendor, although it's been a while since I used either of them.
AMD should just do it for TensorFlow. They would get a lot of benefit if they could show higher performance per dollar at least on Linux, and it would take just a small team to implement it.
Looking so forward to the Vega release, gonna be a really good year for a GPU upgrade.
The 1080 Ti release already flooded the market with cheap 980 Tis and 1080s; if AMD can bring the pain, then more of these things will hit the used market, making them even cheaper.
This is a very good explanation/speculation of the NV driver optimization for DX11, where they break up the draw calls between threads because their scheduler is software-based; AMD's is hardware-based and can't do the same. In DX12 this isn't needed, so AMD's hardware scheduler can be better utilized.
And on the other hand, Graphics Core Next was initially designed for low level APIs — they actually started the whole "low level API on desktop" trend with Mantle.
Unless you plan to use Steam, which still has no real support for those drivers. You can find guides that modify LD_PRELOAD to get some stuff working, though.
I'm using Steam with those drivers and nearly everything works fine. The only thing I can remember right now that doesn't is Divinity Original Sin. It is broken and the developer Larian Studios refuses to fix it.
You're in the game and look at a house, then you turn around and look at a tree, so you need the geometry and textures of the tree, but no longer those of the house. Then you look down and a chicken walks into frame, so you now need that; you kill the chicken and suddenly need the dying-chicken animation, etc.
You almost never need all the data for a single frame. That would be way too much work for the render pipeline anyway.
He is saying a video game running at 60 Hz has approx 16 ms per frame. 550 GB/s * 16 ms ~ 9 GB. So if you are running full bandwidth for an entire frame you can access 9 GB of RAM.
What the responder was getting at is that despite being able to access only 9GB per frame, it may still be useful to keep more than 9GB of data in there for other purposes, say if you have data that isn't being read/written every frame but is still used for rendering. So it doesn't necessarily follow that memory beyond that which can be addressed per frame is useless.
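The back-of-the-envelope math, for reference (using the ~550 GB/s figure from the comment above):

    # how much memory can the card even touch in one 60 Hz frame, at full bandwidth?
    bandwidth_gb_per_s = 550.0    # approximate memory bandwidth from the comment above
    frame_time_s = 1.0 / 60.0     # ~16.7 ms per frame at 60 Hz
    print(f"{bandwidth_gb_per_s * frame_time_s:.1f} GB per frame, at best")  # ~9.2 GB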
This. The Titan line makes more sense if, in addition to viewing it as the top end of their gaming line, you view it as the bottom end(ish) of their compute line.