Why is a modular PSU more efficient than a non-modular one?
The cables can be detached, but that has nothing to do with the PSU's efficiency rating. If you compare a Bronze modular unit against a Platinum non-modular one, the latter will have the better efficiency.
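If it helps to see the difference in numbers, here's a rough sketch using the published 80 PLUS thresholds at 50% load (115V, non-redundant figures); the 300W load is just a hypothetical example:

```python
# Wall-power draw for the same DC load under 80 PLUS Bronze vs
# Platinum efficiency thresholds at 50% load (115V, non-redundant).
BRONZE_50 = 0.85    # 80 PLUS Bronze: 85% efficient at 50% load
PLATINUM_50 = 0.92  # 80 PLUS Platinum: 92% efficient at 50% load

dc_load_w = 300     # hypothetical system load in watts
for name, eff in (("Bronze", BRONZE_50), ("Platinum", PLATINUM_50)):
    wall_w = dc_load_w / eff
    print(f"{name}: {wall_w:.0f}W at the wall, {wall_w - dc_load_w:.0f}W lost as heat")
```

Modular or not never enters the calculation; only the rating tier does.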
I wouldn't use an SSD as a cache. If you are going the route of X99 CPUs, throw RAM at it and use a RAM disk instead.
The efficiency curve for power supplies is not that sharp, and derating the power supply results in higher reliability. I am getting really tired of replacing improperly derated capacitors in PC power supplies.
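As a back-of-the-envelope illustration of why derating matters for the capacitors, here's a sketch of the usual 10-degree rule of thumb (the numbers are hypothetical, not from any datasheet):

```python
# Rule of thumb: electrolytic capacitor life roughly doubles for every
# 10 degC the part runs below its rated temperature.
def cap_life_hours(rated_life_h, rated_temp_c, actual_temp_c):
    return rated_life_h * 2 ** ((rated_temp_c - actual_temp_c) / 10)

# A 2,000h/105degC part in a lightly loaded, well-derated PSU at 65degC:
print(f"{cap_life_hours(2000, 105, 65):,.0f} hours")  # ~32,000 hours
```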
Using an SSD as a disk cache is all about reliable persistence. Using a RAM drive as a disk read/write cache is not reliable; it's actually pointless when you want to persist data.
Does anybody think that this motherboard is a smart choice?
https://www.supermicro.nl/products/motherboard/Xeon/C600/X10DRX.cfm
> Does anybody think that this motherboard is a smart choice?
> https://www.supermicro.nl/products/motherboard/Xeon/C600/X10DRX.cfm
The PCIe slots are x8. This will slow down most GPUs, which use x16.
> Does anybody think that this motherboard is a smart choice?
> https://www.supermicro.nl/products/motherboard/Xeon/C600/X10DRX.cfm
> PCIe slots are x8. This will slow down most GPUs which have x16.
Slow down? Good luck even installing them.
I'd be surprised if any GPU failed to work with fewer PCIe lanes. Usually an x16 GPU will happily run on even an x1 PCIe link.
The performance penalty is not severe either. In fact, most GPUs don't really need x16 bandwidth at all unless they are running bandwidth-intensive workloads such as GPGPU with a badly optimized host-memory access pattern.
For graphics workloads such as gaming, x8 shows no perceptible difference from x16, according to LinusTechTips' benchmarks (search for "4-way SLI").
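For a rough sense of the raw numbers, here's a sketch of per-direction PCIe bandwidth by generation and lane count (it accounts for line encoding only, so real-world throughput is somewhat lower):

```python
# Per-direction PCIe bandwidth in GB/s: transfer rate per lane times
# line-encoding efficiency times lane count, divided by 8 bits/byte.
GT_PER_LANE = {1: 2.5, 2: 5.0, 3: 8.0}           # GT/s per lane by PCIe gen
ENCODING = {1: 8 / 10, 2: 8 / 10, 3: 128 / 130}  # line-code efficiency

def bandwidth_gb_s(gen, lanes):
    return GT_PER_LANE[gen] * ENCODING[gen] * lanes / 8

for lanes in (16, 8, 4, 1):
    print(f"PCIe 3.0 x{lanes}: {bandwidth_gb_s(3, lanes):.2f} GB/s per direction")
```

Even x8 on PCIe 3.0 is nearly 8 GB/s each way, which is more than most games ever push across the bus.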
> You cannot physically install them in the slots.
There are x16 physical slots with only x8 or even x4 signal connections; they are commonly seen on mobos. In fact, consumer-grade Intel chips have only 16 PCIe lanes, so in a dual-GPU configuration the slots must operate in x8/x8 mode, let alone a 3-way SLI configuration, which usually runs x8/x4/x4, or x4/x4/x4 if an NVMe SSD is used.
There are enthusiast-level mobos with multiple true x16 slots backed by a PLX chip, which is essentially a PCIe switch that allows on-demand bandwidth allocation instead of the fixed or boot-time-selectable allocation implemented on cheaper boards.
Also, keep in mind that you can always hacksaw (pun intended) your PCIe slot into a semi-open slot to accommodate cards of any length if you wish. The presence-detect pin will be missing, but automatic link negotiation will still bring the connection up at the maximum available bandwidth (see the sketch at the end of this post if you want to verify the negotiated width).
I did this on my SuperMicro C226 mobo to fit an Intel Xeon and make it coexist with a GTX 750 Ti.
I did similar things (though non-destructively, with riser cards) on my ASRock X99 to bring the M.2 slot out to U.2 for an SSD, and to bring mSATA/mPCIe to PCIe.
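If you want to double-check what the link actually trained at after a mod like this, here's a Linux-only sketch that reads the standard sysfs attributes (the device address is a hypothetical example; find yours with lspci):

```python
# Read the negotiated PCIe link speed/width for one device from sysfs.
ADDR = "0000:01:00.0"  # hypothetical GPU address; replace with your own
base = f"/sys/bus/pci/devices/{ADDR}"

for attr in ("current_link_speed", "current_link_width",
             "max_link_speed", "max_link_width"):
    with open(f"{base}/{attr}") as f:
        print(attr, "=", f.read().strip())
```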
> You cannot physically install them in the slots.
> I'll be surprised if any GPU doesn't work at lower PCIe lanes. Usually a x16 GPU can happily work on even x1 PCIe link.
> Performance penalty is not severe either. In fact, most GPUs don't really need x16 speed at all unless running some bandwidth intensive operations such as GPGPU with badly optimized host memory accessing pattern.
> When running graphics tasks such as gaming, x8 has no perceptible difference compared to x16, according to LinusTechTips' benchmark (search "4 way SLI").
If you ever fix any of my computer equipment, I will insist that you DON'T bring the hacksaw.
If I had a brand new £10,000 workstation, I would NOT want someone taking a hacksaw to the motherboard to build it.
In fact, I use a Dremel Micro (I regret not getting the Dremel 12V version) with a 405 saw blade attachment.
BTW, my main workstation is not a cheapo by any means, especially considering its water-cooled 22-core Xeon E5, 64GB of registered ECC RAM, and a 1.2TB Intel SSD, plus a pair of Dell premium-line LCD monitors, plus the five-digit dollar sum I spent on software. The money I spent on the RAM and SSD alone could build a top gaming machine.
> You cannot physically install them in the slots.
Yes you can (look closely at the right edge of the x8 slot):
On the motherboard in question, note the large open space below the PCIe slot area. This is to take up any overhang from x16 boards.
There are OEM-version chips out there that are cheaper and more powerful. These are NOT the crappy, unreliable ES/QS ones (which are essentially a silicon lottery).
OEM chips are mass-produced chips that target only big customers like Oracle or Cisco, which build specialized servers in massive quantities.
Some of these chips leak onto eBay through the grey market. There is a lively market that smuggles OEM chips to evade the IP cost imposed on their retail counterparts.
I would suggest a single-socket E5-2679v4 or E5-2696v4, either of which is more powerful than two of your chosen CPU at only a slightly higher price; it also avoids the QPI performance penalty of multi-socket systems.
When you really need more power, just add another one to double the performance.
My E5-2696v4 absolutely rocks. It is only 1/3 the price of its retail counterpart, the E5-2699v4, while all specs are exactly the same except the max turbo frequency, which is actually 0.1GHz higher than the 2699v4's. The specific stepping (silicon revision) I have can turbo to 2.9GHz with all cores loaded, even in AVX mode, while maintaining <110W power dissipation.
Using a single socket instead of dual sockets not only circumvents the QPI performance penalty but also saved me $$$ on a proper server mobo, since I can use a regular X99 board with tons of DIYer-friendly features. What's more, it makes an ITX form factor possible, so now I can own the world's most powerful shoebox computer.
I just want it to be something future-proof: something that I can continue to upgrade further and further in the future.
I'm looking forward to Skylake-EP, the new Purley platform, and its improvements.
Top-of-the-line chips for the LGA3647 platform have already leaked, as have mobos from SuperMicro on Taobao.
The top-of-the-line chip is a 32-core, 2.1GHz E5-2699v5 with a 220W TDP. The E5v5 parts revealed so far have 28 cores at 1.8GHz, but the unreleased beast will have 32 cores at 2.1GHz.
Even so, the 220W TDP makes it no more advanced than the v4 platform: 220W / 32 cores / 2.1GHz = 3.27 W/GHz per core, whereas the current flagship E5v4 has a 145W TDP, which corresponds to 145W / 22 / 2.2GHz = 3.00 W/GHz per core.
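Here is the per-core power-per-clock figure above as a tiny helper, for anyone who wants to plug in other chips (a crude metric, since TDP is not actual power draw):

```python
# Watts per GHz per core, computed from TDP, core count, and base clock.
def watts_per_ghz_per_core(tdp_w, cores, base_ghz):
    return tdp_w / cores / base_ghz

print(f"E5-2699v5 (rumored): {watts_per_ghz_per_core(220, 32, 2.1):.2f} W/GHz per core")  # ~3.27
print(f"E5-2699v4:           {watts_per_ghz_per_core(145, 22, 2.2):.2f} W/GHz per core")  # ~3.00
```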
Also, it is unknown what the all-core turbo frequency of the new chip will be; the v4 generation can turbo up to 2.8GHz with all cores crunching numbers.
For the last 5-10 years, nothing really impressive has been released regarding CPU performance.
Because (as you said in an earlier post) the existing two-socket motherboards have relatively slow inter-socket memory (QPI) links between them, making NUMA-aware/capable software somewhat important if you want to get the best performance out of them.
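For what it's worth, here's a minimal Linux-only sketch of how software can discover the NUMA layout (via standard sysfs paths), which is the first step to keeping threads and their memory on the same socket:

```python
# List NUMA nodes and the CPUs belonging to each, so work can be pinned
# to one node and cross-socket (QPI) memory traffic avoided.
import glob
import os

for node in sorted(glob.glob("/sys/devices/system/node/node[0-9]*")):
    with open(os.path.join(node, "cpulist")) as f:
        print(os.path.basename(node), "-> CPUs", f.read().strip())
```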
Correct. The new UPI might help, but I don't expect OmniPath to be equipped on the low-cost LGA2066 platform. It will be an LGA3647-exclusive feature.
The next platform will be rather "future proof", because the motherboard will take at least two or more generations of Intel server chips, i.e. it will remain current for probably 5 or more years (NOT guaranteed) on the new socket type.
I would think twice. Starting from Skylake, HEDT will no longer be Xeon-compatible. There will be X299 mobos using LGA2066, targeting workstations and enthusiast gamers, and C622 mobos using LGA3647, targeting multi-socket servers and MIC chips.
Since, from then on, cheap grey-market server CPUs can no longer be used on cheap consumer-grade mobos, I would say the platform will be considerably more expensive than the current X99/LGA2011-3 platform. You will either have to pay Intel for a more expensive server mobo or for a retail CPU. Using OEM CPUs on consumer mobos will be a thing of the past.