Star-Bandit t1_ix7avv6 wrote
Reply to comment by Star-Bandit in GPU QUESTION by Nerveregenerator
Actually, after going back over the numbers for the two cards (bandwidth, clock speed, etc.), the 1080 Ti might well have the upper hand; I'd have to run some benchmarks myself.
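For a quick sanity check like the one described above, a rough matmul timing is often enough. This is a sketch of one way to do it (my assumption, not from the thread), using PyTorch and falling back to CPU when no CUDA device is present:

```python
import time
import torch

def matmul_benchmark(n=2048, reps=10, device=None):
    """Rough GFLOP/s estimate from repeated large matrix multiplies."""
    device = device or ("cuda" if torch.cuda.is_available() else "cpu")
    a = torch.randn(n, n, device=device)
    b = torch.randn(n, n, device=device)
    torch.matmul(a, b)  # warm-up so lazy CUDA init doesn't skew the timing
    if device == "cuda":
        torch.cuda.synchronize()
    start = time.perf_counter()
    for _ in range(reps):
        torch.matmul(a, b)
    if device == "cuda":
        torch.cuda.synchronize()
    elapsed = time.perf_counter() - start
    # one n x n matmul is roughly 2*n^3 floating-point ops
    return 2 * n**3 * reps / elapsed / 1e9  # GFLOP/s

print(f"{matmul_benchmark():.1f} GFLOP/s")
```

Running the same script on the K80 and the 1080 Ti gives a crude but direct throughput comparison; real model training can still rank the cards differently.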
Star-Bandit t1_ix7anbd wrote
Reply to comment by Nerveregenerator in GPU QUESTION by Nerveregenerator
No, each K80 is about equal to two 1080 Tis. If you look at the cards, each one carries two GPU chips with about 12 GB of RAM per chip, 24 GB of VRAM total per card. The catch is that they run hot: under a training load they can sit around 70°C. It's nice, though, to be able to assign each chip to a different task.
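Since the K80's two chips show up as two separate CUDA devices (e.g. `cuda:0` and `cuda:1`), assigning each one a different task is just a matter of device placement. A minimal PyTorch sketch (my assumption; the thread doesn't name a framework), with a CPU fallback so it runs anywhere:

```python
import torch

def place_model(model, preferred="cuda:0"):
    """Move a model to the requested CUDA device, falling back to CPU."""
    device = preferred if torch.cuda.is_available() else "cpu"
    return model.to(device), device

# hypothetical example: pin two independent jobs to the card's two chips
model_a, dev_a = place_model(torch.nn.Linear(128, 64), "cuda:0")
model_b, dev_b = place_model(torch.nn.Linear(128, 64), "cuda:1")
print(dev_a, dev_b)
```

Alternatively, launching two separate processes with `CUDA_VISIBLE_DEVICES=0` and `CUDA_VISIBLE_DEVICES=1` keeps the jobs fully isolated on one chip each.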
Star-Bandit t1_ix6l9wf wrote
Reply to GPU QUESTION by Nerveregenerator
You might also check out some old server gear. I have a Dell R720 running two Tesla K80s, each of which is essentially the equivalent of two 1080s. While it may not be the latest and greatest, the server ran me $300 and the two cards ran me $160 on eBay.
Star-Bandit t1_ix9toom wrote
Reply to comment by C0demunkee in GPU QUESTION by Nerveregenerator
Interesting, I'll have to look into the M40's specs. Have you had any issues with running out of VRAM? All my models seem to gobble it up, though I've done almost no optimization since I've only recently gotten into ML.
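When models are gobbling up VRAM, the first step is usually just measuring it. A small sketch (assumes PyTorch; my addition, not from the thread) that reports how much memory is in use on a CUDA device, degrading gracefully on CPU-only machines:

```python
import torch

def vram_report(device=0):
    """Report used vs. total VRAM on a CUDA device, or note its absence."""
    if not torch.cuda.is_available():
        return "no CUDA device"
    free, total = torch.cuda.mem_get_info(device)
    used = total - free
    return f"{used / 2**30:.2f} GiB used of {total / 2**30:.2f} GiB"

print(vram_report())
```

Common low-effort fixes once you know where the memory goes: smaller batch sizes, mixed-precision training, and wrapping evaluation code in `torch.no_grad()`.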