It may be that the software is not capable of taking advantage of the higher clock speed beyond a certain point. Just as I thought that having two Titan Xs would render (to final delivery format) faster than one alone, and it didn't pan out (yes, both were set for compute, with one handling GUI plus compute and the other compute only). I understand; I guess the only way to know for sure is to test (with DaVinci Resolve) a Titan X against a GTX 1080 and see how the software behaves in the real world.

The GTX 1080 has a base clock speed of 1607 MHz vs the Titan X's base clock speed of 1000 MHz, so although it has fewer cores, in theory those cores will be running a lot faster. It also uses the more traditional GDDR5X memory, with lower bandwidth than HBM.

Adam Simmons wrote: You also have to take into account the memory speed and the base clock speed of the GPU. The GTX 1000 series, I believe, uses the GP104 chip, which doesn't have many fp64 cores (fp64 is not used by graphics rendering, games, grading software, etc., so performance is not affected).

Remember that this is memory bandwidth between the HBM memory and the SM (streaming multiprocessor) processing units, all housed on the graphics card, not between the graphics card and the host motherboard. It also has fp64 (64-bit floating-point precision) cores at half the number of its fp32 CUDA cores. Its GP100 chip uses HBM gen 2, which gives 1 TB/s of bandwidth. The GTX 1000 series IS using the Pascal architecture, which features the new 16 nm process (down from 28 nm, I believe), giving a smaller die and more performance per watt.

Do you know of any motherboards with PCI Express ports that have 1 TB/s of bandwidth?

I seriously think that we must wait for the "NVIDIA Pascal"; it says "NVIDIA Unveils Pascal GPU: 16 GB of memory, 1 TB/s Bandwidth."
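The core-count-versus-clock and memory-bandwidth arguments above can be put into rough numbers. This is a minimal back-of-the-envelope sketch, not a benchmark: the spec figures (core counts, clocks, memory data rates, PCIe lane speed) come from public spec sheets and are approximate, and theoretical peaks say nothing about how a given application actually scales.

```python
# Rough theoretical-throughput comparison: GTX 1080 vs Titan X (Maxwell).
# All figures are approximate spec-sheet values, not measured performance.

def fp32_tflops(cuda_cores, clock_mhz):
    """Peak FP32 throughput: cores * clock * 2 ops/cycle (fused multiply-add)."""
    return cuda_cores * clock_mhz * 1e6 * 2 / 1e12

def mem_bandwidth_gbs(gbps_per_pin, bus_width_bits):
    """Peak memory bandwidth: per-pin data rate * bus width / 8 bits per byte."""
    return gbps_per_pin * bus_width_bits / 8

# GTX 1080 (GP104): 2560 cores @ 1607 MHz base, 10 Gbps GDDR5X on a 256-bit bus
gtx1080_tflops = fp32_tflops(2560, 1607)   # ~8.2 TFLOPS
gtx1080_bw = mem_bandwidth_gbs(10, 256)    # 320 GB/s

# Titan X (Maxwell): 3072 cores @ 1000 MHz base, 7 Gbps GDDR5 on a 384-bit bus
titanx_tflops = fp32_tflops(3072, 1000)    # ~6.1 TFLOPS
titanx_bw = mem_bandwidth_gbs(7, 384)      # 336 GB/s

# For scale: the 1 TB/s HBM2 figure is on-card bandwidth. A PCIe 3.0 x16 slot
# moves roughly 16 lanes * ~0.985 GB/s ~= 15.8 GB/s between card and host.
pcie3_x16_gbs = 16 * 0.985

print(f"GTX 1080: {gtx1080_tflops:.1f} TFLOPS FP32, {gtx1080_bw:.0f} GB/s memory")
print(f"Titan X : {titanx_tflops:.1f} TFLOPS FP32, {titanx_bw:.0f} GB/s memory")
print(f"PCIe 3.0 x16 host link: ~{pcie3_x16_gbs:.1f} GB/s")
```

So despite having fewer cores, the GTX 1080's higher clock gives it the higher theoretical FP32 peak, while the older Titan X actually edges it slightly on raw memory bandwidth; and either figure dwarfs the card-to-host PCIe link, which is why the 1 TB/s HBM2 number only applies on the card itself.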