The only reason there's a video card with 8 GB in the list is currently DaVinci, which is specifically stated to really appreciate a card with 8 GB of VRAM if you want to do 4K color grading. Puget's benchmarks also indicate that the best card for DaVinci is the NVIDIA RTX 2080 Ti, better than any Quadro card. I doubt that's within beanflying's budget, though, and it should be noted that those benchmarks don't appear to take the recent releases into account; the video posted suggests the new AMD cards could mean a significant improvement. Fusion 360 apparently doesn't really care whether it's running on a consumer or a professional card. From a technical point of view it's vastly different from classic CAD applications like AutoCAD, and even those are being modernized. beanflying has also indicated that he'd like to play a game every now and then. All of this suggests that building what you'd call a "gaming rig" is appropriate for the situation. Which card exactly depends on the budget and on how important DaVinci is compared to the other tasks.
There's zero evidence for PCIe 4.0 being an upgrade with real-world benefits. Without any evidence presented, that topic is dismissed. It'd be appreciated if you could dial back the attitude towards other people in this thread. People are spending time and effort helping beanflying make a solid choice, and they may actually know what they're talking about. Let's have some fun rather than endlessly bickering.
As I am fairly badly colourblind and lack a $1k+ monitor, there is little to be gained by looking too hard at grading, but a better monitor is planned after the box. The more I look at it, the RX 580 is the low point, and a new card fits well within the budget. A lot of what I have been doing is filtering the 100-200+ FPS BS on cards with game X down to some real numbers and productivity.
Much as I have set a budget, I only have to justify changing it to myself, and my Cal gear is testament to not being bound by dollars for a result. Does beanflying need a 2060 Super or an RX 5700?
Here's my quick thumbnail cost analysis, exclusive of the usual "sundries" which I'm pretty sure bean has plenty of:
$150 (+/-) - Decent case
$180 - X570 MB
$140 - DDR4 (Corsair Vengeance 32GB DDR4-3200; now sold out), still about average for a decent kit
$120 - Decent PSU
$125 - NVMe SSD, ~0.5 TB
That leaves ~$285 for video and CPU, plus approximately $30-50 more if you go with 16 GB of name-brand DDR4 instead. The 3600X is available right now for US$249.00 shipped from Amazon. The 3700X is listed right now at $329 pre-order from B&H Photo and the 3900X at $499, just as suggested in the press release.
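The arithmetic above works out if you assume a roughly $1,000 target for the core build. A throwaway Python sketch (the $1,000 figure is my inference from the "~$285 left" line, not something stated in the post):

```python
# Rough budget check for the thumbnail cost analysis above.
# The $1000 core-build target is inferred from "~$285 left", not stated.
parts = {
    "case": 150,
    "X570 motherboard": 180,
    "32GB DDR4-3200": 140,
    "PSU": 120,
    "NVMe SSD": 125,
}

spent = sum(parts.values())                        # 715
print(f"left for CPU + video: ${1000 - spent}")    # $285
```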
mnem
Regarding the PC config, I recommend something like this (about $1200 without monitors and keyboard/mouse):

PCPartPicker Part List: https://pcpartpicker.com/list/JPWMRJ
CPU: AMD - Ryzen 7 2700X 3.7 GHz 8-Core Processor ($254.99 @ Newegg)
CPU Cooler: be quiet! - Shadow Rock Slim 67.8 CFM Rifle Bearing CPU Cooler ($49.80 @ OutletPC)
Motherboard: Gigabyte - X570 AORUS ELITE ATX AM4 Motherboard ($199.99 @ Amazon)
Memory: Corsair - Vengeance LPX 16 GB (2 x 8 GB) DDR4-3200 Memory ($69.99 @ Newegg)
Memory: Corsair - Vengeance LPX 16 GB (2 x 8 GB) DDR4-3200 Memory ($69.99 @ Newegg)
Storage: Corsair - MP510 480 GB M.2-2280 Solid State Drive ($64.99 @ Newegg Business)
Storage: Seagate - BarraCuda 4 TB 3.5" 5400RPM Internal Hard Drive ($79.99 @ Newegg)
Video Card: Asus - Radeon RX 580 4 GB Dual Video Card ($159.99 @ Newegg)
Case: Fractal Design - Define R5 (Black) ATX Mid Tower Case ($129.99 @ Newegg Business)
Power Supply: Corsair - RMx (2018) 850 W 80+ Gold Certified Fully Modular ATX Power Supply ($129.99 @ Newegg Business)
Total: $1209.71
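As a sanity check, the listed prices do add up to the quoted total. A quick Python sketch, with prices copied straight from the part list above:

```python
# Verify the PCPartPicker total by summing the listed component prices.
prices = {
    "Ryzen 7 2700X":         254.99,
    "Shadow Rock Slim":       49.80,
    "X570 AORUS ELITE":      199.99,
    "Vengeance LPX 16GB #1":  69.99,
    "Vengeance LPX 16GB #2":  69.99,  # two kits = 32 GB total
    "MP510 480GB NVMe":       64.99,
    "BarraCuda 4TB":          79.99,
    "RX 580 4GB":            159.99,
    "Define R5":             129.99,
    "RMx 850W":              129.99,
}

total = round(sum(prices.values()), 2)
print(f"${total}")  # $1209.71, matching the quoted total
```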
I added a GPU because the CPU in the OP's main configuration doesn't have an iGPU, so a discrete GPU is needed. I put in a normal gaming GPU as a reference; it should work with CAD as long as OpenGL support is available. But if the OP can, it's worth searching for good deals on used FirePro/Quadro models.
Plus, this config is Linux compatible, so if the OP wants to use Linux instead of Windows it will work without any problems.
There's a HUGE difference between THAT and building last year's "budget gaming rig", which is obviously what you're ALL doing. You can see it in where you cut corners; pretty much EVERYTHING that boosts bandwidth and multi-threading is what YOU seem to think is unimportant. B-series MBs? DDR4-3200? SERIOUSLY?
No evidence? There's evidence right in the video y'all linked that PCIe 4.0 is a big thing. The difference between supporting it and not supporting it is an effing $40-80. Even SINGLE NVMe SSD performance is markedly improved, and it brings the capability to run MULTIPLE NVMe SSDs at full bandwidth AT ONCE.
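For context on the raw numbers being argued about here: PCIe 4.0 doubles the per-lane signalling rate, so the ceiling of an x4 NVMe SSD roughly doubles too. A quick sketch using the commonly quoted effective rates after 128b/130b encoding (approximations, not measurements):

```python
# Approximate usable bandwidth per lane after 128b/130b encoding, in GB/s.
per_lane = {"PCIe 3.0": 0.985, "PCIe 4.0": 1.969}

for gen, bw in per_lane.items():
    # x4 is the typical NVMe SSD link width, x16 the typical GPU width.
    print(f"{gen}: x4 \u2248 {bw * 4:.1f} GB/s, x16 \u2248 {bw * 16:.1f} GB/s")
```

Whether any current drive or GPU saturates those ceilings is exactly the point under dispute; the doubling itself is just the spec.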
Now you attempt to be "the reasonable one"? You were, and still are, being deliberately obtuse. Clearly, you DON'T know as much as you think you do. I called you on it. Not sorry if that hurt your feelings. Also not sorry for telling inane feces-flingers like wraper to stop.
Sir,
Can you use Xeons? Yes, you can. Can you use ECC memory? Yes, you can. Can you buy old workstations that were decommissioned by a company? Yes, you can, but if they were decommissioned it's because the company moved to newer, better hardware. Is the value/performance better? It depends on how much you paid.
I deployed exactly this same configuration at my last job, back when I was in Portugal, but with a Ryzen 1700X on an equivalent motherboard with proper VRMs, and an AMD FirePro V7100 for CAD work.
The advantage of extra memory speed is negligible past a certain threshold. Plus, higher memory speed without proper timings is worse than lower speed with tighter timings.
PCIe Gen4, yes, it's a great deal, no denying that, and PCIe Gen5, due to be released next year, even more so. But the only hardware released so far that taps the advantage of PCIe 4.0 is a handful of NVMe SSDs shown this year at Computex.
Current graphics cards don't even tap the full advantage of PCIe Gen3 x16 speed.
For a production environment, water cooling is basically a risk you choose to take. Have you seen any company using water-cooled PCs, whether custom loops or even AIOs? If it fails, that's production time lost plus lost hardware, especially if the AIO has defects (Enermax AIOs with corrosion, Corsair pumps failing, etc.; I can link the recall docs if you want). A custom loop should be drained and cleaned each year; AIOs are good for around 4 years and are made to be thrown away when the pump fails. I've never had a fan on an air cooler fail in the last 10 years, but I have seen AIOs fail from evaporation of the liquid through the rubber tubes:

Quote
Finally, tubes are generally made of either FEP or EPDM rubber. The more rigid tubes tend to be FEP, which has excellent reduction of permeation, but less flexibility during installs. Kinking an FEP tube will result in cracking the inner PTFE coating, which results in permeation and poor cooling ability. EPDM tubes have the opposite set of pros and cons: They won’t really get damaged if bent and are more flexible, but it requires an expensive R&D process to get the compound to a point of resisting permeation. Ultimately, all tubes will exhibit the effects of age and will slowly lose fluid to natural processes. It’s just a matter of how long they last. Most CLCs are rated for use in the 4-6 year range, though it’s around years 4-5 that noise begins to get more noticeable. This is because enough of the fluid has permeated the tubes to allow for more air in the line, which gets sucked through the pump and causes gurgling. Users can mitigate this by mounting the tubes down in a vertical CLC install.

https://www.gamersnexus.net/guides/2926-how-liquid-coolers-work-deep-dive
An air cooler with good fans just needs a blast of compressed air and it's good as new. If you want to use water cooling in your own computer for your own production, go ahead. But in a deployment of 40 machines? No thank you, I will not do it.
Yes. All over the place. The advent of AIOs has transformed the marketplace. I'm seeing them in high-end workstations, ready-made gaming rigs and even training simulators. Anywhere you have high-demand workload and want to keep it cool quietly.
I'm the guy they call to air-drop in and clean up the mess when nobody else is willing to. My days are spent going from one business or datacenter to another, replacing network gear, CPUs, RAM, VRMs and PSUs in places locked down so tightly you need an escort and you put your phone in a locker before you enter.
Even in THOSE places I've been seeing liquid-cooled servers for years now. They use AIOs specifically configured for those servers, but the general config is the same: the pump is still on the CPU, the chassis is designed so you can lift the entire cooler out as an assembly without disturbing the rest of the MB, and the cooler is treated as a consumable supply. You are literally thinking of technology from 10 years ago, not today's.
Cheers,
mnem
I have not read this thread in its entirety, but figured I'd add my personal experience with the PCIe 3.0 bus in some extreme use circumstances.
I am the author of Looking Glass, a program that allows use of a Windows VM with a passthrough GPU inside of Linux by transferring the captured frame between GPUs via system RAM. We are talking about transferring 4K 100+FPS video across the PCIe bus while competing for GPU and CPU time and resources running pro CAD applications and AAA game titles.
3840 x 2160 x 4 = 33,177,600 bytes per frame; x 100 = 3,317,760,000 bytes per second ≈ 3.3 GB/s
We can do this on a PCIe 3.0 bus; while PCIe 4.0 will help in some extremely rare corner cases, it's simply not that huge a deal at this point in time with current workloads. Getting a CPU with more lanes is, IMO, far more useful than a PCIe 4.0 system. If you want to ensure you have enough lanes, go for a CPU with a ton of them like a Threadripper (note that I am aware this alone is beyond the OP's budget).
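The arithmetic above can be checked in a few lines of Python (the PCIe effective-bandwidth figures are the usual published approximations after encoding overhead, not measurements):

```python
# Back-of-the-envelope check of the frame-transfer bandwidth quoted above.
# Assumes uncompressed frames with 4-byte (32-bit BGRA) pixels.
width, height, bytes_per_pixel, fps = 3840, 2160, 4, 100

bytes_per_frame = width * height * bytes_per_pixel   # 33,177,600
bytes_per_second = bytes_per_frame * fps             # 3,317,760,000
print(f"{bytes_per_second / 1e9:.2f} GB/s needed")   # 3.32 GB/s

# Approximate usable bandwidth of a full x16 link (~0.985 GB/s per lane
# on PCIe 3.0 after 128b/130b encoding):
pcie3_x16 = 15.75  # GB/s
print(f"PCIe 3.0 x16 headroom: {pcie3_x16 / (bytes_per_second / 1e9):.1f}x")
```

So even this deliberately extreme workload uses well under a third of a PCIe 3.0 x16 link, which is the point being made.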
It's obvious that benchmarks will show a notable difference, but the end user sitting in front of his computer actually noticing a difference was already unlikely with the move from PCIe 2.0 to PCIe 3.0. Even your example is rather extreme and far from anything a normal or power user encounters.
Just for completeness, fluid (not water) cooling is getting more and more popular in data centres with the advent of fluids like 3M's immersion cooling products.
https://www.3m.com/3M/en_US/novec-us/applications/immersion-cooling/
mnementh, as you like watching Linus, here you go. Nothing more than eye candy.
Yes, I'd really want to see that: a server near the top of a 48U rack starts leaking water onto the ones below it... and yes, a full rack burned...
So I will break my answer into several parts:
(SNIP WOT)
You don't know what I do, and you don't know where I've worked, to assume that you are the only one with access to places so secure you need an escort... But well... I'll really stop now; nothing gets through thick heads.
I hope the OP gets the best config he can with the help of the ones who really know what they are saying... or who say it the right way, without conflict.
@mnementh, I think that you need to take a step back and cool down, I didn't argue for either side but simply stated my personal experiences.
This is not a benchmark; Looking Glass is being used by hundreds of people over on the L1Tech forums and has been featured both in L1Tech videos and on Linus Tech Tips. It is niche, but it sees a ton of good real-world usage across many different hardware platforms, from PCIe 1.0 x4 through to PCIe 4.0 x16. While I appreciate that you are pointing out that it is niche and not common, it is a good example of how well the older buses hold up to a modern workload with this additional overhead thrown on top.
One of my regular clients is CIARA. That is what they make: liquid-cooled high-speed servers. They have several clients with datacenters here in Houston FULL OF THEM. CIARA is NOT the only one; Lenovo is also making them, and I've even seen Dell servers with liquid cooling at some of these locations.
Just because YOU don't believe in it doesn't make it not so. The arrogance of such a statement is simply staggering.
A 2U system with a closed-loop water cooling setup made by Asetek (called the ORION HF). We have fans that cool the rad and the PCI cards. So far, our system with the 6950X runs at 4.3/4.4 GHz (we have 2 profiles loaded on the system; you choose whichever is more stable for your application). It uses the ASUS X99-WS IPMI motherboard, with a special BIOS build made for CIARA by ASUS. It's built more for high-frequency trading, but add in a graphics card or GPU and you're set.
So you're saying it's not worth the $40-80 difference to lay the foundation with next-gen architecture THAT WE CAN ALREADY SEE IS MARKEDLY FASTER rather than continually looking backwards? REALLY?
Cheers,
mnem
I forgot to put something pithy down here.
Not at all, but since the OP is clearly on what I would consider a tight budget, for his limited amount of money that extra $40-80 could mean getting something else that is far more useful to them.
Also, just because it's the latest and greatest doesn't mean you should adopt it the first chance you get.
A-Bit brought out the first ATA-66 motherboards, which had a fatal flaw that randomly corrupted your HDDs, making the bus unusable.
Fujitsu brought out the first budget home 6-10 GB HDDs, every single one of which failed due to a new method of encapsulating the controller IC.
Intel mass-produced and sold Atom CPUs to the enterprise sector for mission-critical infrastructure using flip-chip BGA construction; they are now all failing due to unforeseen issues with the then-new technology.
Samsung brought out the first 1 TB home SATA SSDs, which suffered a fatal performance flaw due to issues with the wear levelling implemented in silicon, rectified in later models.
AMD brought out the Ryzen 7 series of CPUs with a critical bug that exhibits under Linux during multi-threaded workloads, causing a full system halt; it was fixed in later revisions.
These are just a few examples of critical bugs/flaws in new, unproven technology that have bitten early adopters.
I don't know how many here have actually tried AIO water cooling, or what has changed in the tech over the last 5-6 years, but I tried to "eliminate" the noise from non-overclocked X-series and Xeon-class CPUs sitting in boxes next to me. It was one of my worst purchases of the decade: obsolete sh$t with a very creepy noise! In the end, the money was much better spent on a bigger case and traditional "beefy" air coolers.