Author Topic: A networking solution for two PCs, or a quicker way to transfer files between?  (Read 12957 times)


Offline Lizzie_Jo_Computers_11Topic starter

  • Regular Contributor
  • *
  • Posts: 89
  • Country: us
This is another part of my "proofreading my computer design and building a server rack station" topic. Since this was less related to the actual builds, I thought it would be better separated.




Part 1

I am building two computer systems. One is for editing video files and designing 3D models/CAD work, and the other one is for rendering all of the aforementioned projects.

I want to be able to use both computers simultaneously without one slowing the other down.

As an example, I want to finish a project on computer #1, and then have a separate computer (computer #2) do the rendering work. That way I can continue editing on computer #1 with no slowdown in the work process.

What would be the fastest and most efficient way to get this done?

Would setting up a NAS for networking between them be a good idea, or would that increase the slowdown? For instance, if I create a NAS where all assets are stored so that both PCs can access them simultaneously, would the drawback of a possibly slow connection to the assets defeat the purpose of avoiding the original slowdown?

Would there be a quicker way to connect the two PCs without a third NAS PC?

Or a quicker way externally? Something like using external hard drives?


Part 2

If I do any networking whatsoever, and just for general purposes (as my work also relies heavily upon internet usage like downloading/uploading), I need to set up a good router and modem system for a reliable connection.

I do need Wi-Fi, but the LAN/WAN connections would be more of a priority for the two PCs. I do not know much about routers and modems so any help would be greatly appreciated.

I want to thank anyone in advance if they are able to help me solve these two issues. I am kind of stuck with this, as they are not my specialty. Thank you again for any assistance. ;- )
 

Offline shteii01

  • Frequent Contributor
  • **
  • Posts: 266
  • Country: us
Some comments really.

The PC ads usually say that the PC has a gigabit LAN adapter.
The modems usually have a LAN hub/switch built in. But have you noticed that they usually do not mention what kind of hub/switch it is? Is it 100 Mbit? Is it gigabit? So whatever you buy, find out how fast the LAN ports are.

One thing speed-wise for storage: have you considered using SSDs? These days you can get a 120 GB SSD for something like $40. I got some used Micron 128 GB SSDs for $32 or $35. Yes, they will probably fail sooner than an HDD. But if speed is what you need, then budget the money and when an SSD fails, get a new one.
 

Offline Lizzie_Jo_Computers_11Topic starter

  • Regular Contributor
  • *
  • Posts: 89
  • Country: us
Thank you for your reply either way!

Yes, I have considered using SSDs for speed, although I have heard that they won't make a difference when we're talking about a network solution, because the LAN ports cannot utilize the full speed an SSD can supply, unlike a built-in SSD.
 

Online mariush

  • Super Contributor
  • ***
  • Posts: 5024
  • Country: ro
  • .
For 1 ... there are two approaches to this problem.

With a very fast network connection between computers, you can practically create a network share on the render computer (or create an iSCSI drive) and load the files directly from the render computer. You edit your project and, when you're done, you copy the project over to the render machine, load the project on that computer and hit render.

A variation on this would be to create proxy files: for example, for each file you plan to add to your project, you create a 720p or lower resolution file encoded in some very fast but high bitrate codec. You edit your project using those files, you save the project, and then when you move the project to the render machine you relink the project to the 1080p or 4K files you have.
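As an illustration only, generating such proxies could look something like this with ffmpeg (the filenames and the choice of 720p ProRes Proxy are just examples, not a recommendation of a specific codec):
Code: [Select]
# create 720p ProRes Proxy versions of every source clip (example filenames)
mkdir -p proxy
for f in *.mp4; do
  ffmpeg -i "$f" -vf scale=-2:720 -c:v prores_ks -profile:v 0 \
         -c:a pcm_s16le "proxy/${f%.mp4}.mov"
done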

LinusTechTips does something similar... they have a 10gbps network between the editor computers and their storage servers, and when they start working on a project, they copy all the files into a folder and an automated program starts processing them and converts all content, regardless of resolution (1080p or 4K) and codec (AVC, XAVC-S, Motion JPEG or whatever), to Cineform (they wanted DNxHD but it was buggy) at 1080p, which has a high bitrate (probably something like 80-100 mbps) but has the benefit of being decoded in hardware by the video card and of being intra-frame, meaning editors can jump randomly in the timeline and get almost instant seeks to particular frames. With other codecs, when you jump somewhere, the codec often has to decode a certain number of frames before reaching that particular frame.
The 10 gbps network is fast enough that they can stream 100-200 mbps continuously from the storage server almost as if the content were local.  Also, the Cineform format is supported very well by both Premiere and DaVinci Resolve (which they use for noise reduction using dedicated video cards, for color correction, gamma, overlays on green screen etc.)

Regular 10gbps network cards are still relatively expensive, probably about $130-150 each. Gigabyte is planning to release a 10gbps network card that's supposed to be under $100 when it launches (very soon). These work with regular Cat6a cables.
There's a new standard, 802.3bz (or something like that), which makes it possible to create connections at 2.5gbps using plain Cat5e, or 5gbps and 10gbps using Cat6a - in a few months there may be cards that could do up to 5gbps for way less than $100.
Alternatively, you can find on eBay network cards taken from dedicated servers removed from production, which can do 10gbps or even 40gbps, but they have optical ports; they don't have the regular RJ45 ports which would allow you to use Cat6a cables.  You can make a connection between two such cards using a direct attach SFP+ cable, which varies from around $20 to $100 depending on length; for example, a 3m cable would be around $40-50.

So you could have a direct connection between your computer and the render computer at 10gbps, and you could have a separate 1gbps network card to connect to the internet router to have internet and access other machines. It's very easy to force the operating system to route all the traffic between two IPs through one interface (the 10gbps network card) instead of the regular 1gbps network card.
These network cards with optical ports can be found for as little as $20, but there's a catch... some don't have drivers for anything higher than Windows 7 or Windows 8, or the drivers only install on Windows 2008 or Windows 2012 (the server versions of Windows)... but often you can force Windows 7 or Windows 8 to load the server version of the drivers, or you can find "hacked" drivers.  With Windows 10... I don't know.
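For illustration, the simplest way to force that routing is to give the direct link its own small subnet, so the OS picks the 10gbps card automatically for that traffic. Something along these lines would do it (interface names and addresses are made-up examples):
Code: [Select]
:: on the editing machine
netsh interface ip set address "10G Link" static 10.10.10.1 255.255.255.0
:: on the render machine
netsh interface ip set address "10G Link" static 10.10.10.2 255.255.255.0
:: then reach the render box as \\10.10.10.2 and that traffic never touches the 1gbps card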

Here's an example of such cards : link  and here's an example for a cable  that would work to connect 2 such cards

See this playlist for some networking ideas:  https://www.youtube.com/playlist?list=PL8mG-RkN2uTxvguQ0LitLak61lA9jYANW


« Last Edit: January 29, 2017, 07:58:30 am by mariush »
 

Offline ludzinc

  • Supporter
  • ****
  • Posts: 506
  • Country: au
    • My Misadventures In Engineering
Question, rather than a comment.

What is the fastest rate you can transfer data between two machines?

Surely the bottleneck is the hard drive speed (data rate)?

Does that come within cooee of Gbps speed?
 

Offline Lizzie_Jo_Computers_11Topic starter

  • Regular Contributor
  • *
  • Posts: 89
  • Country: us
Mariush: Thank you for the long and detailed ideas. I'm still looking into everything you said, and I am going to watch the video and anything else related. Sorry if I am slow; I am very busy and working with my current, equally slow setup.

I just wanted to say thank you, even beforehand, and throw out a few more things to those who haven't seen my other topic.

I have a budget range (that I'm shooting for, anyway; I could go higher if I wanted to push it) of between $15,000 and $20,000 to spend on the entire workstation. I will post the specifications/components I have in mind so far for the two PCs, but they each fall close to or within the $5,000-$10,000 price range. Basically I'm spending a lot of money on this setup, as it is meant for my career/business, so I'm willing to spend a higher amount on whichever network solution turns out the best for me.

I also would like to point out that I am certainly a beginner when it comes to all of this networking of computers and the like. So as we go along, please continue to be detailed and try to simplify things as much as possible, if you can?

I think I did a fairly good job so far in my last topic in keeping up and learning what the other users were trying to teach me, and I intend to do the same here.
 

Offline BradC

  • Super Contributor
  • ***
  • Posts: 2106
  • Country: au
Question, rather than a comment.

What is the fastest rate you can transfer data between two machines?

Surely the bottleneck is the hard drive speed (data rate)?

Does that come within cooee of Gbps speed?

I have a simple RAID6 of cheap "green" drives here that will do 900MBps, so no, the bottleneck is the network. I currently use a pair of Intel 4-port GigE cards with link aggregation, which gets me about 360MBps between machines.
I have a RAID10 of SSDs that will do 1.2GBps, but I run into PCIe limitations and can't actually do anything with the data. In theory I could saturate a 10G Ethernet connection, but I doubt I could afford a motherboard that had the practical bandwidth to allow it. Anything more than 100MBps is nice to have.
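For anyone curious, a rough sketch of that round-robin setup on Linux (interface names and addressing are examples; the other end has to be configured the same way, or a switch set up for static aggregation):
Code: [Select]
# round-robin bond of two gigabit NICs (sketch only)
ip link add bond0 type bond mode balance-rr
ip link set eth1 down
ip link set eth1 master bond0
ip link set eth2 down
ip link set eth2 master bond0
ip link set bond0 up
ip addr add 10.0.0.1/24 dev bond0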
 
The following users thanked this post: ludzinc

Offline Twoflower

  • Frequent Contributor
  • **
  • Posts: 737
  • Country: de
How about this: wait until you have your rigs and see if and where there is a bottleneck. Use the built-in network (it's usually reasonably good, aside from the 'Killer LAN' NICs for gamers, which cause lots of problems). It's easy to upgrade your network if it turns out to be too slow, at least if your LAN cabling is decent (Cat6a). In that case you need a set of 10G Ethernet cards and a 10GE switch. This way you have probably only 'wasted' the money for a 1GE switch, if you can't use it somewhere else. But you need to check that you have a PCIe slot free and accessible. If your rig uses multiple graphics cards that could be a problem, and in that case there's no real option anyway, except maybe a USB 3.1/Thunderbolt <-> LAN dongle. USB 3.1 is also supposed to do 10Gbit/s. But in my eyes that would be a very ugly solution.

You might be able to save money on the 10GE switch if you do a direct connection between the two machines. But you need to think about the network configuration if one or both machines also needs a connection to a third link (including the internet), as you can create a kind of loop in the network if you additionally use the 1GE links of the machines.

There are also 2.5GE and 5GE options, but I'm not sure if these are already available. And I'm not sure if they will provide much gain for the additional effort and $$$ compared to the 1GE which comes for free.

A NAS is very nice as you have a central point where your data is. Whether this matches your workflow is another question. But you can also use it as one stage of your backup strategy (it does not replace an external, off-line backup) and lots of other stuff. Some items you should check if you go for a NAS: at least an option to go 10GE (see above), a decent CPU, and enough bays for drives. There is no need to populate all bays with drives, but that gives you the opportunity to increase the storage space later.


@BradC
Does link aggregation work if you have only one connection open, like copying one huge file? Or do you need to run multiple transfers to feed the NICs? And with link aggregation the counterpart (including the backbone within the switch) needs to have the same bandwidth, so you need many NICs and cables, which might be as expensive as a 10GE option, plus it needs more PCIe slots in the workstations.
 

Offline BradC

  • Super Contributor
  • ***
  • Posts: 2106
  • Country: au
@BradC
Does link aggregation work if you have only one connection open, like copying one huge file? Or do you need to run multiple transfers to feed the NICs? And with link aggregation the counterpart (including the backbone within the switch) needs to have the same bandwidth, so you need many NICs and cables, which might be as expensive as a 10GE option, plus it needs more PCIe slots in the workstations.

Yeah, with Linux at both ends I use the round-robin aggregator and it all works on one connection. Usually tar or cpio over netcat to minimise CPU usage.  Got a cheap deal on the cards, had spare x8 slots in both machines and already had the patch leads, so it was a quick way of not-quite-quadrupling my throughput. 10G cards would be nice, but I can't justify the $$ just yet. It's a specific task for replicating large amounts of data, so it's all very task-specific.
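The tar-over-netcat part looks roughly like this (hostname, port and paths are made up; netcat option syntax differs between the traditional and OpenBSD variants):
Code: [Select]
# on the receiving machine (traditional netcat syntax):
nc -l -p 9000 | tar -xf - -C /data/replica
# on the sending machine:
tar -cf - /data/project | nc render-box 9000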

 

Offline Twoflower

  • Frequent Contributor
  • **
  • Posts: 737
  • Country: de
Thanks for your answer. That's what I thought. The performance gain depends on the usage model.

Maybe the $$$ for 10GE hurts a bit less if you look at what an alternative would cost, like InfiniBand (currently up to 290Gbps aggregated). Especially the cables or fibres come with a nice price tag; compared to this, 10GE is a bargain. But I fully agree: if it's not blocking you, it's hard to argue for the investment. But if you make money with it and it wastes 10 minutes of your work daily, it might start to add up. And I assume you already have jumbo frames enabled on both ends; that squeezes a bit more out of the link (if the whole link supports it, of course).
 

Offline David Hess

  • Super Contributor
  • ***
  • Posts: 16615
  • Country: us
  • DavidH
@BradC
Does link aggregation work if you have only one connection open, like copying one huge file? Or do you need to run multiple transfers to feed the NICs? And with link aggregation the counterpart (including the backbone within the switch) needs to have the same bandwidth, so you need many NICs and cables, which might be as expensive as a 10GE option, plus it needs more PCIe slots in the workstations.

Windows only recently added support for round-robin link aggregation, but Linux and BSD have had it for a long time.  Older versions of Windows supported round-robin link aggregation in the driver, so it depended on the network card manufacturer.  I used to do this between machines using both ports of a dual-port Fast Ethernet card, and I know people who used all 4 ports of a quad-port Fast Ethernet card.

For technical and I suspect market segmentation (1) reasons, LACP (Link Aggregation Control Protocol) deliberately does not support round-robin link aggregation so if a switch is used, it will have to be configured for static link aggregation.  If you have a small number of machines, then it is possible to do without a switch.

Is it worth doing when 10G Ethernet is available?  I think so because 10G Ethernet is still too expensive for general use and may always be so. (2) Gigabit Ethernet ports and switches are dirt cheap and the cabling requirements are not excessive.  On the other hand, the original poster's budget requirements are more than enough to include a small number of 10G Ethernet ports and maybe a small switch so that is the way to go for simplicity.

If we are talking about 2 workstations and 1 file server, then use 4 10G Ethernet ports with 2 on the server and connect them all without a switch; the server when properly configured can route between the workstations if necessary.  Alternatively treat one of the workstations as a server for the other so only 1 connection and 2 10G Ethernet ports are required; I did this for a long time when Gigabit Ethernet was still very expensive.

(1) Cisco and others would prefer that customers upgrade to a more expensive solution.
(2) I have high hopes for 2.5G and 5G Ethernet.  Even if 10G Ethernet comes down in price, its power and cable requirements will be a problem.  I do not know why (technical reasons?) but 10G Ethernet does not support POE (Power Over Ethernet).
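For illustration only, the switchless layout described above could look roughly like this on a Linux file server with two point-to-point 10G links (interface names and addresses are made up):
Code: [Select]
# on the file server (one direct 10G link to each workstation)
ip addr add 10.1.1.1/24 dev enp3s0f0    # link to workstation A (10.1.1.2)
ip addr add 10.1.2.1/24 dev enp3s0f1    # link to workstation B (10.1.2.2)
sysctl -w net.ipv4.ip_forward=1         # let the server route A <-> B if needed
# on workstation A, send traffic for B's subnet via the server
ip route add 10.1.2.0/24 via 10.1.1.1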
 

Offline David Hess

  • Super Contributor
  • ***
  • Posts: 16615
  • Country: us
  • DavidH
Question, rather than a comment.

What is the fastest rate you can transfer data between two machines?

With Gigabit Ethernet, speed is usually limited by the network connection even if link aggregation is used.  With 10G, other considerations are likely to limit speed.

Quote
Surely the bottleneck is the hard drive speed (data rate)?

Does that come within cooee of Gbps speed?

It depends a little on the access pattern, but in any demanding application a small (and good) RAID will exceed the capability of a network connection very easily, at least with transfers of large files.  With SSDs this is even more the case.  My old 4-drive RAID 5 can stream data at 600MB/sec and is actually limited by the PCIe interface, but my Gigabit Ethernet connection can only support 100MB/sec.  In practice I am almost always limited by factors other than the storage devices.
 
The following users thanked this post: ludzinc

Offline Lizzie_Jo_Computers_11Topic starter

  • Regular Contributor
  • *
  • Posts: 89
  • Country: us
I will be back to post a longer reply in a little while. I just wanted to clear up some things...

The main reason I want a fast way to transfer data between computers is because I will have two separate computers with two different purposes.

It is impossible to build a single computer that is perfect as both a workstation and a rendering computer; this was discussed a lot in my last topic. Two separate builds made more sense, so the ideal outcome here is a solution for getting those project files between the two computers as quickly as possible, so as not to defeat the purpose of having the two computers to begin with.

Thank you again for your replies.
 

Offline Lizzie_Jo_Computers_11Topic starter

  • Regular Contributor
  • *
  • Posts: 89
  • Country: us
Alright, about the Linus video...

I could use some help figuring out (in a much more simplified manner) exactly what the process is from start to finish. What I think I understand is that the codec is very important. If I were to encode my raw footage as Cineform, it would work much faster on the timeline, and the encoding process is much quicker than H.264?

We had talked about this before, but it looks like it may actually come back down to the GPU? Unless you are using effects specifically designed with GPU utilization in mind, Adobe Premiere makes no use of it. Although, seemingly, with the encoding being in Cineform, the GPU actually is being utilized. It would seem it is being used both in the encoding process itself and in Adobe Premiere during editing? Would this still apply to Adobe Premiere Pro CS6?

A post I found elsewhere. Is this an accurate description, like what Linus was saying...?

"Just want to clear some things up here. A proxy and an intermediate codec are not quite the same thing. Neither cineform nor h.264 make good proxy codecs (cineform is rather large and h.264 is rather slow) but that's okay.

Now the difference between a proxy and an intermediate codec is that a proxy is just used for offline editing because it is small in size and fast (due to low resolution or low bitrate). When you lock your cut with proxies you can then relink back to the original footage for online and grading.

Examples of proxy codecs are DV, DNxHD36 or ProRes proxy

An intermediate codec is used to transcode all your original footage and then is used right throughout the post-production process from cutting to grading and export because intermediate codecs are very high quality and very smooth and fast.

Examples of intermediate codecs are ProRes 422, DNxHR, DNxHD185 and cineform.

I recommend not using proxies as your original footage (h.264) will not be easy to grade. I would transcode all your footage into cineform with Adobe Media Encoder you can even use Adobe Prelude to automate this process. The reason I prefer Cineform to ProRes or the DNx flavours is that Cineform is GPU accelerated which is why it encoded so fast on your high spec machine. It will also work like a dream on your timeline even in 4k or on your netbook but yes the files will be very large."

I found this elsewhere; it wasn't directed at my project.
 

Offline BradC

  • Super Contributor
  • ***
  • Posts: 2106
  • Country: au
For technical and I suspect market segmentation (1) reasons, LACP (Link Aggregation Control Protocol) deliberately does not support round-robin link aggregation so if a switch is used, it will have to be configured for static link aggregation.  If you have a small number of machines, then it is possible to do without a switch.

I think it's more technical than artificial. It does not take much of an inequality in latency or response time across the multiple interfaces to make re-assembling TCP awkward. As it is, with 4 GigE ports, even though they are on the same physical card, I need to hugely expand the receive buffer and re-ordering windows to enable useful re-assembly of the packets, otherwise things get messy with congestion backoff and retries. Trying to push that through a switch, and deal with additional data from multiple machines, would be a nightmare. Trying to do it in a vendor-neutral manner would be worse.
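The kind of knobs involved, as a rough Linux sketch (the values here are purely illustrative):
Code: [Select]
# tolerate more out-of-order packets before TCP assumes loss
sysctl -w net.ipv4.tcp_reordering=127
# allow larger receive buffers for re-assembly
sysctl -w net.core.rmem_max=16777216
sysctl -w net.ipv4.tcp_rmem="4096 87380 16777216"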

 

Offline David Hess

  • Super Contributor
  • ***
  • Posts: 16615
  • Country: us
  • DavidH
For technical and I suspect market segmentation (1) reasons, LACP (Link Aggregation Control Protocol) deliberately does not support round-robin link aggregation so if a switch is used, it will have to be configured for static link aggregation.  If you have a small number of machines, then it is possible to do without a switch.

I think it's more technical than artificial. It does not take much of an inequality in latency or response time across the multiple interfaces to make re-assembling TCP awkward. As it is, with 4 GigE ports, even though they are on the same physical card, I need to hugely expand the receive buffer and re-ordering windows to enable useful re-assembly of the packets, otherwise things get messy with congestion backoff and retries. Trying to push that through a switch, and deal with additional data from multiple machines, would be a nightmare. Trying to do it in a vendor-neutral manner would be worse.

That is the technical reason that Cisco gives if you pester them enough, but they also give a bunch of other specious excuses.  Ethernet by design is supposed to deliver packets in order, and round-robin link aggregation through a switch would make this very difficult to guarantee without excessive complexity in the switch.

On the other hand, most (all?) internet protocols do not care except for the extra processing required on the receiving end and lower tier switch vendors have no problem supporting this through static link aggregation and users get it to work satisfactorily despite the higher processing costs.  If a switch is not used, then there is even less of a problem.

All of this is likely irrelevant to the original poster's problem.  He can either afford 10G hardware, or the limited number of machines would allow round-robin link aggregation of multiple 1G interfaces without using a switch at all.  There may even be some suitable low-cost 10G alternative interface like SFP+ Direct Attach or 10GBASE-CR which could be used; it isn't like the workstation and server are likely to be separated by more than 15 meters.

I do believe that the original poster will get something out of using a network connection which is faster than 1G.
 

Offline Lizzie_Jo_Computers_11Topic starter

  • Regular Contributor
  • *
  • Posts: 89
  • Country: us
I want to ask a question that I can't find an answer to.

1. As I have stated, the main goal is to keep the computers from slowing each other down, or disrupting the work flow. Just for my understanding. If I have computer 1 connecting to computer 2, and it is pulling the source files (4K, CAD models, images) from computer 2, and not a local SSD/HDD, would that slow the video editing process down?

2. Again, if I store everything on computer 1, and open the finished project on computer 2 and begin rendering on computer 2, would the fact that computer 2 is pulling the source files and project media from computer 1 slow down the rendering process?

3. If I do not use a RAID array, or a third computer dedicated to the assets, would the fact that the files are being pulled from one of the computers while a process is active still wind up slowing that computer down?

An example... If computer 1 is the storage/editing computer, and computer 2 is simply the rendering computer, would the fact that computer 2 is accessing files located on computer 1's local drives wind up significantly slowing computer 1 down anyway? Does that in any way help to avoid slowing down the workflow in that case?

4. How much slower is a 10Gbps Ethernet connection between two computers (as far as accessing files, not transferring them) than if the files were being accessed for video editing and rendering on a local drive?

Thank you.
« Last Edit: January 30, 2017, 04:24:39 am by Lizzie_Jo_Computers_11 »
 

Offline Jeroen3

  • Super Contributor
  • ***
  • Posts: 4078
  • Country: nl
  • Embedded Engineer
    • jeroen3.nl
You're over-complicating things.
You need to buy a faster workstation and stop messing around with two workstations. This will cost you valuable time.
Except when you have an autonomous conversion server (like LTT) or distributed computing.

The HDD will always be the slowest link in the chain, except when the HDD only has one sequential request to fulfill.
Next the network will be the slowest in the chain, unless you get ad-hoc teamed NICs or 10Gb.
If you can't afford SSDs, then you can consider RAID. It's still slower, but you can do some caching with small SSDs.

If you still want to mess around with two PCs, you can buy two of the same dual-port Intel NIC and team them together with an ad-hoc connection (PC to PC).
The motherboard's on-board NIC can be used for internet. You will have to manually specify adapter metrics (rough sketch below).
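For illustration, specifying the metrics could look like this (interface names are examples; run from an elevated prompt):
Code: [Select]
:: prefer the direct 10G link by giving it a lower interface metric
netsh interface ipv4 set interface "10G Direct" metric=10
netsh interface ipv4 set interface "Onboard 1G" metric=50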
« Last Edit: January 30, 2017, 06:42:03 am by Jeroen3 »
 

Offline Lizzie_Jo_Computers_11Topic starter

  • Regular Contributor
  • *
  • Posts: 89
  • Country: us
^Actually, that is far from accurate. If you have seen my other topic, two PCs are very necessary for my workload. There is no possible way to get a single faster PC, so to speak; the tasks need to be separated. A workstation PC (for video editing) can be seriously affected by also using it as the PC for video rendering.

In short, for a workstation PC, I need an extremely high clock speed with a single CPU motherboard setup.

For a rendering machine, I need more cores to thread rendering and encoding across. I cannot accomplish both things at once to their full potential.

10Gb, as well, has been the train of thought so far. My budget is very flexible at the moment. I'm prepared to invest $15,000-$20,000 in this equipment. It is not for a hobby, it is for my professional business, and I need everything in working order.

I am not an expert (obviously) in networking or linking two computers together, but I don't see any other way to get this done (especially not after the long conversations and debates with other members on here) without two PC builds, and I will have to learn this one way or another. I will re-post my current builds on this topic tomorrow for reference, but just keep in mind that this is absolutely necessary, and I would very likely benefit greatly from two PCs. Thank you for your reply.

« Last Edit: January 30, 2017, 08:23:50 am by Lizzie_Jo_Computers_11 »
 

Offline Jeroen3

  • Super Contributor
  • ***
  • Posts: 4078
  • Country: nl
  • Embedded Engineer
    • jeroen3.nl
Dual CPU motherboards are no option?

Quote
I'm prepared to invest $15,000-$20,000 in this equipment. It is not for a hobby, it is for my professional business, and I need everything in working order.
I'm sure there are companies specialized in building this stuff. For a business, DIY might not be the best choice.
« Last Edit: January 30, 2017, 08:00:33 am by Jeroen3 »
 

Offline Lizzie_Jo_Computers_11Topic starter

  • Regular Contributor
  • *
  • Posts: 89
  • Country: us
That one was my fault, sorry. I meant to imply I can only use a single CPU on the editing computer, not the rendering one. Dual CPUs (if not quad CPUs, but that may be too much) are actually an absolute must for high end video rendering. I've been looking into the Xeon E5 2699 v4 and its relative, the 2696 v4.

There are companies, like Puget Systems, but I would be spending far more than necessary if I were to hire someone else. I'm only just beginning to get into PC building (though I do have some, very little, prior experience), but I've built other assorted electronics and assembled various other projects, so I'm not too far off. I'm also a rather quick learner when I can be taught in specific detail.
 

Offline Marco

  • Super Contributor
  • ***
  • Posts: 6721
  • Country: nl
Infiniband is the cheapest way to get very high bandwidth interconnects, there's a lot of surplus boards and cables out there. Not really the easiest to work with though.
 

Offline slicendice

  • Frequent Contributor
  • **
  • Posts: 365
  • Country: fi
Please read my reply from the other thread and then reply here regarding the networking.
 

Offline Lizzie_Jo_Computers_11Topic starter

  • Regular Contributor
  • *
  • Posts: 89
  • Country: us
Thank you for joining over here, slicendice; your suggestions have been very much appreciated. No one replied on the other topic as to whether it would have been better to make a new one specifically for this, so I went ahead and made it. I think I did forget to mention that over there, though...

I've been reading articles, but the exact definitions of "what I need exactly" to get this network running has still been confusing me, so a few questions?

I need a network card for both computers, right? A 10Gb Ethernet cable, and 10Gb switch? I'm not sure how to hook up the Wi-Fi yet.

The options for what I'm looking to do are either to be able to access the same files on both computers, or, if you think it is more reasonable and this network connection would run much faster and easier, to just copy the entire project to the rendering computer.

I have thought about one PC at a time, but the way I have been looking at it, either way I will need at least two computers so I can continuously work without interruption, rather than being unable to do anything during renders. That is why I've been trying to figure this out. I'll look into any and all options, however; whichever is most productive for my needs.

Thank you again!
 

Offline slicendice

  • Frequent Contributor
  • **
  • Posts: 365
  • Country: fi
Let me get back to you once I have checked with a person who does these kinds of things (networks for different solutions) for a living. You want high transfer speeds and also the ability to upgrade easily in the future, right? I will also check how we can minimize network traffic while maintaining maximum productivity.

While waiting, please let me know how many users/computers will currently be connected to the network at the same time, and also how many might be connected in the future if your business grows. An approximation is good enough, no need for exact numbers. These numbers will also help determine how many file transfers, rendering jobs and Unreal code compilation jobs might be going on at peak hours on the server.

I suspect you will have a project manager/designer, a few code monkeys, a few video editors, 1 audio editor, 1-2 3D asset creators and a few 3D RAW video creators in the future (for the video and asset parts this could be the same person(s)). This is if your business grows. Everybody has to have access to the components their own part of the project depends on.

You need source control, backups, and a lot of stuff we haven't even talked about yet. It would be a shame if your hard-worked creations were lost in cyberspace because of poor data reliability design. Building the network to accomplish all this is the easiest part. Designing the whole thing is the hard part. ;-)
 

Offline madires

  • Super Contributor
  • ***
  • Posts: 7764
  • Country: de
  • A qualified hobbyist ;)
How much throughput do you need for file transfers? High throughput means that you also need some CPU power. I'd suggest a flexible design allowing for a simple upgrade path: add a 10GigE NIC to each PC and connect them back to back with a dedicated IP address range, then set up shared folders across that 10GigE link. Use the on-board GigE for internet access and other LAN stuff like printers. If the file access slows down the PCs too much, you will have to upgrade to a NAS, SAN or a third PC just for file access. It might be cheaper to have two 10GigE ports in the NAS/SAN/PC#3 and connect each workstation PC directly than to buy a switch with 10GigE ports. If you'd like to add more workstation PCs for video editing/rendering, go for a switch. And please get a network professional to install everything.
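A rough sketch of the shared-folder part on Windows (share name, path and the dedicated address range are examples only, re-using the 10.10.10.x range mentioned earlier in the thread):
Code: [Select]
:: on the machine holding the files:
net share Projects=D:\Projects /GRANT:Everyone,FULL
:: on the other machine, map it via the 10GigE link's address:
net use P: \\10.10.10.1\Projects /persistent:yes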
 

Offline slicendice

  • Frequent Contributor
  • **
  • Posts: 365
  • Country: fi
And please get a network professional for installing everything.

I totally agree, worth the extra cost, if the network stuff is a bit confusing.
 

Offline Jeroen3

  • Super Contributor
  • ***
  • Posts: 4078
  • Country: nl
  • Embedded Engineer
    • jeroen3.nl
Quote
There are companies, like Puget systems, but I would be spending far more than necessary if I were to hire someone else.
Only if your solution works first try and does not have compatibility problems.

You seem to be someone who has an hourly rate; what's a few days lost on testing worth to you?
 
The following users thanked this post: newbrain

Offline MarkF

  • Super Contributor
  • ***
  • Posts: 2548
  • Country: us
I've been reading articles, but the exact definitions of "what I need exactly" to get this network running has still been confusing me, so a few questions?

I need a network card for both computers, right? A 10Gb Ethernet cable, and 10Gb switch? I'm not sure how to hook up the Wi-Fi yet.
Yes.  You need a network card for each computer.  However, if you only need to connect those two computers together, you only need a crossover cable between the network cards. A crossover cable is one where the transmit and receive pairs are swapped, allowing the computers to communicate directly.

If the computers have built-in network ports (i.e. slower ones), one of them can connect you to a second network for non-critical things. The Wi-Fi can be part of this slower network.
« Last Edit: January 30, 2017, 11:46:31 am by MarkF »
 

Online mariush

  • Super Contributor
  • ***
  • Posts: 5024
  • Country: ro
  • .
Crossover network cables are only needed with 100mbps network cards.
1gbps network cards (and higher) auto detect the data pairs and establish connection with either type of cable, regular or crossover.
 

Offline David Hess

  • Super Contributor
  • ***
  • Posts: 16615
  • Country: us
  • DavidH
In short, for a workstation PC, I need an extremely high clock speed with a single CPU motherboard setup.

For a rendering machine, I need more cores to thread rendering and encoding across. I cannot accomplish both things at once to their full potential.

It makes sense to me and that is how I would do it.  Use the fast single threaded machine for the user interactive workstation and a more economical higher core count but lower clock speed machine for the massively parallel work.

1. As I have stated, the main goal is to keep the computers from slowing each other down, or disrupting the work flow. Just for my understanding. If I have computer 1 connecting to computer 2, and it is pulling the source files (4K, CAD models, images) from computer 2, and not a local SSD/HDD, would that slow the video editing process down?

2. Again, if I store everything on computer 1, and open the finished project on computer 2 and begin rendering on computer 2, would the fact that computer 2 is pulling the source files and project media from computer 1 slow down the rendering process?

3. If I do not use a RAID array, or a third computer dedicated to the assets, would the fact that the files are being pulled from one of the computers while a process is active still wind up slowing that computer down?

An example... If computer 1 is the storage/editing computer, and computer 2 is simply the rendering computer, would the fact that computer 2 is accessing files located on computer 1's local drives wind up significantly slowing computer 1 down anyway? Does that in any way help to avoid slowing down the workflow in that case?

4. How much slower is a 10Gbps Ethernet connection between two computers (as far as accessing files, not transferring them) than if the files were being accessed for video editing and rendering on a local drive?

Too much depends on the interface card hardware, drivers, and OS for me to answer accurately.  My past experience is that the OS is the largest problem at least when dealing with Windows.

Obviously accessing the storage systems remotely impacts system throughput and latency on both systems but lower latency storage like SSDs or a RAID which can process multiple requests helps a lot.  Latency is a bigger problem than throughput on the workstation because it affects the user experience.  For that reason, I would rather have the fastest storage on the workstation rather than the rendering server.

The compromise I sometimes make is simply to have enough storage space on the processing system so that I can make a complete or almost complete copy of the files it needs so network access during processing is minimized.
 

Offline Lizzie_Jo_Computers_11Topic starter

  • Regular Contributor
  • *
  • Posts: 89
  • Country: us
slicendice:


Thank you for looking into it for me. As for how many computers will be connected to the network: I suppose I'm willing to work with whatever makes sense. At the moment I need at least the two computers connected to each other, and I could work around the others not being directly connected to the network, as they would be less active at first (especially if everyone on one network would greatly impact the performance), but I will give the highest count of possible connections. I think I would be looking at four computers for the time being (I'll get to the future in a moment): the workstation computer and the rendering computer that we have been talking about, plus at least two more general-purpose machines that will be able to share the workload. There will also be at least one other computer in the household, but it will not be used for the business, so access to the network would not be necessary.

In the future, I could see several more computers being added on...

Quote
I suspect you will have a project manager/designer, few code monkeys, a few video editors 1 audio editor and 1-2 3D asset creator and a few 3D RAW video creator in the future (for the video and asset parts this could be the same person(s). This is if your business grows. Everybody has to have access to the components their own part of the project depends on.

Yes, I could see around 10 individual computers in the future. Perhaps not for a few years, but you definitely have the right idea, and being somewhat "ready/future-proofed" for any additional workstations/rendering computers would definitely be preferable.

Thank you again!

As for the last part: yes, please, let's eventually get around to talking about everything. I want to make sure everything is in its place, and I'm more than happy to learn whatever I have to learn.


To everybody generally...

Since I have seen some people saying things like "he", I just thought I would mention that I am actually female. That is my actual name in my username, "Lizzie Jo." I just thought I would say it in case I was confusing anybody. ;- )


Madires:


I think I would have the CPU power covered. On the single CPU, I'm looking at the i7 6950X, and for the dual CPU build, it is looking like two Xeon E5 2699/2696 v4s.

I would also imagine quite a bit. I would prefer file access to be as quick as possible. If we're going for file transfers, I would like whatever speeds are definitely fast enough for transferring 20GB of 4K video files/assets/UE4 projects and the like, unless simply accessing them all from the other computer is more sensible.

MarkF:


Thank you for clearing that up for me!

David Hess:


I'm going to list all of my components like I was saying yesterday (for me at least) now...


Workstation PC:

CPU - Intel i7 6950X

Motherboard - ASUS X99 Deluxe (unless there is a better alternative...)

Memory - 128 GB of DDR4 2333/3000MHz RAM (non-ECC)

Graphics Card - GTX 1080 (one, perhaps two in the future)/ TITAN X? Unless a Quadro would help more?

SSD - Intel 750 series 1.2 TB SSD/Samsung 950/960 Pro (However many I will need) and of course separate storage from the OS and programs drive.

Operating system - Windows 10 Pro/ I don't think I will need Server 2016?


Rendering computer:


CPU - Two Intel Xeon E5 - 2699/2696 v4 CPUs.

Motherboard - ASUS Z10PE D16 WS

Memory - 128 GB of DDR4 2300MHz ECC-compatible RAM. (The motherboard has 16 DIMM slots and can accommodate up to 1TB of ECC memory, I believe, so I will be upgrading to at least 256GB in the future.)

Graphics card - GTX 1080/TITAN X/Quadro M5000? (After watching and reading up more, it turns out that Premiere Pro and the Cineform codec do indeed utilize the GPU for improved performance, so I need to evaluate again which kind of graphics card will help each system individually.)

SSD - Intel 750 series 1.2 TB SSD drive (one for the operating system, and then however many more I might need for the storage array, unless I go for HDDs instead?)

Operating system - Windows 10 Pro. Again, I don't think I would need something like Windows Server 2016, right?


Actually, I really want to be able to build this myself, without hiring someone else to do it. It is indeed for my business, and if worse comes to worst I will always put the business first, but there is a definite need to be able to accomplish building these PCs with my own hands, rather than paying someone to do it for me. I am sure most of the people on here will be able to understand that?

I would like to continue learning about this network solution, and to be able to confidently build it up myself (of course you all have been helping, but I mean when the actual pieces are in my hands).

Which, of course, thank you all again!




 

Offline xani

  • Frequent Contributor
  • **
  • Posts: 400
For technical and I suspect market segmentation (1) reasons, LACP (Link Aggregation Control Protocol) deliberately does not support round-robin link aggregation so if a switch is used, it will have to be configured for static link aggregation.  If you have a small number of machines, then it is possible to do without a switch.

There is a very specific reason why LACP (and its L3 equivalent, ECMP) keeps one session (it is usually by hash of src/dst IP + port, but it varies and is sometimes selectable) on one link: TCP really does not like excessive packet reordering, and most TCP stacks will suffer a performance penalty when it happens.

And it would be pretty easy to introduce reordering by accident; just have slightly different path lengths to a switch, or less than perfect balancing between them.

There are protocols to deal with that, but all the ones I have seen are proprietary. IIRC Brocade's VCS actually measures the latency of each link and adjusts distribution based on that and the link bandwidth. Sadly it is designed to interconnect switches, not end devices.
 

Offline viperidae

  • Frequent Contributor
  • **
  • Posts: 306
  • Country: nz
Google "thunderbolt network"
10gbit pc to pc
 

Offline Lizzie_Jo_Computers_11Topic starter

  • Regular Contributor
  • *
  • Posts: 89
  • Country: us
That is what some other users were saying, but at this point we're talking about future proofing the network system as well. I don't think that will work out in the long run.
 

Offline slicendice

  • Frequent Contributor
  • **
  • Posts: 365
  • Country: fi
Yes, Thunderbolt is only good up to a certain point; after that it is useless. Save the TB ports for other stuff like displays or external backup/archive storage.
 

Offline Jeroen3

  • Super Contributor
  • ***
  • Posts: 4078
  • Country: nl
  • Embedded Engineer
    • jeroen3.nl
If I were you I'd set it up like this.

Main workstation: (build this one)
- Fast single cpu.
- Lots of ram.
- Fancy GPU.
- Raid card with all the storage.
- 10GB network card.
Used as source for workers.

Workers[n]:  (buy this one, image the setup and it's easy to scale)
- Dual CPU.
- "Small" memory
- 10GB network card.
Uses storage from main workstation.

Archive server: (buy this one)
- 10 Gb network card.
- Lots of HDDs.
- Caching SSD.
Used as target for workers.

Advantages: No lag while working on files.
Disadvantages: Workstation has to be on.

This is assuming the rendering saturates the CPU, and not the source or target bandwidth.
« Last Edit: February 01, 2017, 12:21:46 pm by Jeroen3 »
 

Offline Lizzie_Jo_Computers_11Topic starter

  • Regular Contributor
  • *
  • Posts: 89
  • Country: us
I am going to get back to the network research later today, as it seems like it is going to be an important part of this setup. I will have more to say later on, but slicendice, did you ever get in contact with that person you were talking about? Either way, what might our main options be as of right now? Are we looking at a switch, any kind of NAS, or something else?

I think I pretty much have the basic concept of the two computers laid out, with some minor adjustments probably on the way, but a future-proofed networking solution could really be of help.

In my other topic, I mentioned the Cineform codec and its possible inclusion in my workflow, as it could greatly benefit the responsiveness of my timeline, but it is much more space-consuming, so the need for a quick network connection is still very relevant.

Thank you again, everyone! ;- )
 

Offline David Hess

  • Super Contributor
  • ***
  • Posts: 16615
  • Country: us
  • DavidH
I did essentially this whole thought process a few years ago when I upgraded my Pentium 4 to a Phenom 940.  I put the biggest and fastest RAID into the Phenom 940 workstation so my workstation tasks have direct access to it for minimum latency, and my other boxes either access the workstation RAID over Gigabit Ethernet, or I copy the files they need to their local storage or to a file server for them to access.  I don't mind if a distributed task runs with higher latency, but I want the workstation which I am using to operate with the lowest latency possible.

Back then, "big and fast RAID" meant something like an ARECA 1210 hardware RAID controller and four WD Black drives.  Today it might be a different RAID card and SSDs or PCIe SSDs with bulk storage moved off to a networked file server.
 

Offline Lizzie_Jo_Computers_11Topic starter

  • Regular Contributor
  • *
  • Posts: 89
  • Country: us
That seems like a good idea, David Hess. It is very important (the most important thing, actually) that the workstation has quick access to the files that are needed for editing. It is actually not quite as much of a problem if the latency between the RAID solution and the rendering machine is a little bit higher, because the fact that it is a different computer from the workstation already frees the workstation from slowing down while rendering.

I know in the long run I will need a separate storage solution for the mass amount of data that will be collected over time. I intend to have two separate backups (as it should be): one connected to the network, and another one that just stores files separately. Essentially they will hold the same data, however.

I will start looking into RAID cards. Is it effective, though, to use SSDs/PCI-E SSDs in a RAID configuration?
 

Offline David Hess

  • Super Contributor
  • ***
  • Posts: 16615
  • Country: us
  • DavidH
I will start looking into RAID cards. Is it affective, though, to use SSD/PCI-E SSD in a RAID configuration?

RAID is more beneficial with mechanical storage than with SSDs because the latter are so much faster; a good PCIe SSD will max out the PCIe interface anyway.  There is still some advantage with slower SATA SSDs though, and if you build a SATA RAID, then replacement of failed drives is possible.

SATA 1.0 150MB/s
SATA 2.0 300MB/s
SATA 3.0 600MB/s

PCIex4 1.0 1GB/s
PCIex4 2.0 2GB/s
PCIex4 3.0 4GB/s
 

Offline Rerouter

  • Super Contributor
  • ***
  • Posts: 4694
  • Country: au
  • Question Everything... Except This Statement
I would likely approach it with one 10Gbit card in each machine, a direct cable run between them, and use the onboard Ethernet connection on your workstation for internet and the local network (to save on the cost of a switch).

you can then use a trick like this
http://www.speedguide.net/faq/how-to-tell-windows-7-to-use-a-different-default-350
so that it tries the 10gbit link first, so that your file grabs will succeed.

I'm currently only on 1Gbit, but both my RAID5 of 2TB hard drives and my SSD can easily saturate the link (the SSD is about 420MBps, the RAID is about 310MBps), so it's more a case of where the bottleneck is; for anything usual I would say the networking, and above that you're getting into hard-to-manage speeds.

The other thing is, if both machines have plenty of RAM, Windows will cache most of it, meaning for file transfers under half the size of the available RAM you can get some very high transfer rates.
 

Offline gnif

  • Administrator
  • *****
  • Posts: 1676
  • Country: au
I have not read through all the posts, but in an enterprise or office environment we often use a SAN (Storage Area Network) over FC (Fibre Channel) instead of Ethernet, which gives up to 128 gigabit/sec data rates depending on the configuration; that is well above the 10 gigabit/sec you can achieve using Ethernet cards. But with this you obviously have to consider the cost, as an FC setup is not cheap, although it could likely be done for around the same price as a 10 gigabit setup if you were to source second-hand parts.
 

Offline madires

  • Super Contributor
  • ***
  • Posts: 7764
  • Country: de
  • A qualified hobbyist ;)
Just for a better understanding, FC has different link speeds like Ethernet. FC's 32 and 128Gbit/s are quite new. 128Gbit/s FC uses 4 fibers or 4 lambdas similar to 100 Gigabit Ethernet.
 

Offline Jeroen3

  • Super Contributor
  • ***
  • Posts: 4078
  • Country: nl
  • Embedded Engineer
    • jeroen3.nl
I recently watched this video, and the guy made some interesting claims about a whole bunch of disks being faster than a SATA SSD.
It probably isn't even as bad of a concept for you, since this is easy to scale. Internal drives or PCIe drives are not.
 

Offline Monkeh

  • Super Contributor
  • ***
  • Posts: 7992
  • Country: gb
I recently watched this video, and the guy made some interesting claims about a whole bunch of disks being faster than a SATA SSD.

Faster or faster? In terms of latency, absolutely no chance in hell. In terms of raw read or write speed.. sure, no problem. SATA drives are 'slow'.
 

Offline Jeroen3

  • Super Contributor
  • ***
  • Posts: 4078
  • Country: nl
  • Embedded Engineer
    • jeroen3.nl
Obviously latency will be higher. However, since the workload will be several sequential clients (the renderers) and one random client (the editor), it might work fine.
 

Online brucehoult

  • Super Contributor
  • ***
  • Posts: 4035
  • Country: nz
Just how big are your files? And how fast is the renderer going to chew through them?

Every PC now has Gigabit ethernet built in. That does 110 MB/s or so with scp, nfs, 9p etc. You only have to make sure your router has gig ports. Most cheap home ones don't because neither your internet connection nor your WIFI is that fast, but routers with gigE are not expensive. 10 gig is expensive.

If the files you're throwing around are only in the tens of gigs range and you don't mind waiting a minute or two for them to transfer then gigE will do. If your render machine reads them from the editing machine as it goes then gigE is probably more than you need. A render takes more than five minutes, right?

At work where we throw around things like VM disk images and Android build directories our desktop phones have a daisy-chain ethernet port on the back. Most people plug their PC into the phone, and the phone into the wall/floor. The only problem is the phone talks to the switches with gigE but the daisy-chain is only 100 meg! I make sure I get a dedicated gig port for my PC (and get 100+ MB/s speeds to the servers), but most people don't seem to notice or care!
 

Offline gnif

  • Administrator
  • *****
  • Posts: 1676
  • Country: au
I recently watched this video, and the guy made some interesting claims about a whole bunch of disks being faster than a SATA SSD.
It probably isn't even as bad of a concept for you, since this is easy to scale. Internal drives or PCIe drives are not.


I run numerous ZFS arrays and to be completely honest they are insane, you would not believe the performance you can get out of these things, and the redundancy features.

At home I currently have an array of 5x 4TB WD RED NAS disks configured in RAIDZ2; these are not super fast disks, running at 5400RPM. I also have a pair of Intel 120GB SSDs providing the non-volatile log (ZIL) and read cache (L2ARC) storage, simply for performance. In this configuration I am getting 14.5T of total storage available, and the sequential and random read/write performance of a single SSD.

Here is the output of a hdparm sequential read test.
Code: [Select]
/dev/tank/test:
 Timing cached reads:   11510 MB in  1.99 seconds = 5779.83 MB/sec
 Timing buffered disk reads: 1024 MB in  1.27 seconds = 809.14 MB/sec

Also note that this is on an older system with only SATA-II, but since the workload is split between disks it doesn't matter, as the bus never gets saturated. You can't argue with a 14.5T storage array that performs as fast as an SSD, has crazy good redundancy, supports snapshots, and, soon under Linux, encryption too.

ZFS is THE file system to use for any serious storage requirements at the moment. It does not suffer from the issues other solutions have, such as silent data corruption; a weekly scrub will find, repair and report any silent errors that have occurred on the disks, giving you a very early warning that a disk is suffering and about to fail.

The list of reasons to use ZFS over other file systems is too long to list here, suffice to say, I have been working in this industry for 20 years and in my experience ZFS just destroys all the alternatives in performance, features and redundancy.
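For reference, a pool along the lines described above could be created roughly like this (device names are placeholders, not an actual layout):
Code: [Select]
# 5-disk RAIDZ2 pool with SSD log (ZIL) and cache (L2ARC) devices - sketch only
zpool create tank raidz2 sda sdb sdc sdd sde
zpool add tank log mirror sdf1 sdg1     # mirrored SSD partitions for the intent log
zpool add tank cache sdf2 sdg2          # remaining SSD space as read cache
zfs create tank/projects                # dataset for project files
zpool status tank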
 

Offline slicendice

  • Frequent Contributor
  • **
  • Posts: 365
  • Country: fi
@gnif, yes that solution is worth exploring. I would go for such a configuration as it's reliable, fast and highly scalable, plus it has a lot of other stuff built in that makes it easy to manage file integrity in the long run.

Though ZFS has its advantages, any FS would still do. More important is the actual overall configuration, which improves performance and scalability.
 

Offline Lizzie_Jo_Computers_11Topic starter

  • Regular Contributor
  • *
  • Posts: 89
  • Country: us
Firstly, I apologize for having been gone from this conversation for the last two weeks. My work has been non-stop, and I had no time to continue researching the things that everyone was posting, so I couldn't make a worthwhile reply to any of it. I am still dedicated to this topic and the builds I am working on, and of course the people who are helping me figure everything out. Thank you again! ;- )

I am going to watch that video comparison later, and I'm currently researching whatever I can find on a ZFS configuration and what that is like. If it really is a good alternative, I would like someone here to delve into it further and explain why it might be better than the alternatives, etc.

As a side note, the files that I will be transferring will, I imagine, exceed my current load of 30GB per file (full 4K footage could make it even higher), which will also be the same amount of data accessed between the computers while working on a project.

Thank you again, I will be back on later, now that I have somewhat less of a workload.
 

Offline Lizzie_Jo_Computers_11Topic starter

  • Regular Contributor
  • *
  • Posts: 89
  • Country: us
After researching the ZFS configuration, I immediately ran into an obvious problem I should have realized when it was brought up: I will be running Windows on these two systems, and ZFS is not compatible with it.

I looked into workarounds, but the need to run the RAID through a virtual OS and so on is just going to make things slower or less stable, and I don't see where I would get a benefit in that.

I have no current plans to run a Linux system, so I will need to stick to NTFS and other Windows file systems to get this working. I could still use a high quality crossover cable and a network switch to separate storage hanging off one of the computers, but I still want to look into the best possible solution overall. Would the rendering computer accessing storage that is part of the editing computer cause any drawbacks for the editing computer while the rendering computer is still rendering?
 

