Author Topic: A networking solution for two PCs, or a quicker way to transfer files between?  (Read 12953 times)


Offline madires

  • Super Contributor
  • ***
  • Posts: 7764
  • Country: de
  • A qualified hobbyist ;)
How much throughput do you need for file transfers? High throughput means that you also need some CPU power. I'd suggest a flexible design allowing for a simple upgrade path. Add a 10GigE NIC to each PC and connect them back to back with a dedicated IP address range. Set up shared folders across that 10GigE link. Use the on-board GigE for internet access and other LAN stuff like printers. If the file access slows down the PCs too much, you have to upgrade to a NAS, SAN or a third PC just for file access. It might be cheaper to put two 10GigE ports in the NAS/SAN/PC#3 and connect each workstation PC directly than to buy a switch with 10GigE ports. If you'd like to add more workstation PCs for video editing/rendering, go for a switch. And please get a network professional for installing everything.
 

Offline slicendice

  • Frequent Contributor
  • **
  • Posts: 365
  • Country: fi
And please get a network professional for installing everything.

I totally agree; it's worth the extra cost if the network stuff is a bit confusing.
 

Offline Jeroen3

  • Super Contributor
  • ***
  • Posts: 4078
  • Country: nl
  • Embedded Engineer
    • jeroen3.nl
Quote
There are companies, like Puget systems, but I would be spending far more than necessary if I were to hire someone else.
Only if your solution works on the first try and has no compatibility problems.

You seem to be someone who has an hourly rate; what's a few days lost on testing to you?
 
The following users thanked this post: newbrain

Online MarkF

  • Super Contributor
  • ***
  • Posts: 2544
  • Country: us
I've been reading articles, but the exact definitions of "what I need exactly" to get this network running have still been confusing me, so a few questions:

I need a network card for both computers, right? A 10Gb Ethernet cable, and a 10Gb switch? I'm not sure how to hook up the Wi-Fi yet.
Yes.  You need a network card for each computer.  However, if you only need to connect those two computers together, you only need a cross-over cable between the network cards. A cross-over cable is one where the transmit and receive pairs are swapped, allowing the computers to communicate directly.

If the computers have built-in network ports (i.e. the slower on-board ones), one of them can connect you to a second network for non-critical things. The Wi-Fi can be part of this slower network.
« Last Edit: January 30, 2017, 11:46:31 am by MarkF »
 

Offline mariush

  • Super Contributor
  • ***
  • Posts: 5022
  • Country: ro
  • .
Crossover network cables are only needed with 100 Mbps network cards.
1 Gbps network cards (and higher) auto-detect the data pairs (auto MDI-X) and establish a connection with either type of cable, regular or crossover.
 

Online David Hess

  • Super Contributor
  • ***
  • Posts: 16613
  • Country: us
  • DavidH
In short, for a workstation PC, I need an extremely high clock speed with a single CPU motherboard setup.

For a rendering machine, I need more cores to thread rendering and encoding across. I cannot accomplish both things at once to their full potential.

It makes sense to me and that is how I would do it.  Use the fast single threaded machine for the user interactive workstation and a more economical higher core count but lower clock speed machine for the massively parallel work.

1. As I have stated, the main goal is to keep the computers from slowing each other down, or disrupting the work flow. Just for my understanding: if I have computer 1 connecting to computer 2, and it is pulling the source files (4K, CAD models, images) from computer 2, and not a local SSD/HDD, would that slow the video editing process down?

2. Again, if I store everything on computer 1, and open the finished project on computer 2 and begin rendering on computer 2, would the fact that computer 2 is pulling the source files and project media from computer 1 slow down the rendering process?

3. If I do not use a RAID array, or a third computer made for the assets, would the fact that the files are being pulled from one of the computers while a process is active still wind up slowing that computer down?

An example... If computer 1 is the storage/editing computer, and computer 2 is simply the rendering computer, would the fact that computer 2 is accessing files located on computer 1's local drives wind up significantly slowing computer 1 down anyway? Does that in any way help to avoid slowing down the workflow in that case?

4. How much slower is a 10Gbps Ethernet connection between two computers (as far as accessing files, not transferring them) than if the files were being accessed for video editing and rendering on a local drive?

Too much depends on the interface card hardware, drivers, and OS for me to answer accurately.  My past experience is that the OS is the largest problem at least when dealing with Windows.

Obviously, accessing storage remotely impacts throughput and latency on both systems, but lower-latency storage like SSDs, or a RAID that can process multiple requests, helps a lot.  Latency is a bigger problem than throughput on the workstation because it affects the user experience.  For that reason, I would rather have the fastest storage on the workstation than on the rendering server.

The compromise I sometimes make is simply to have enough storage space on the processing system that I can make a complete or almost complete copy of the files it needs, so network access during processing is minimized.
 

Offline Lizzie_Jo_Computers_11 (Topic starter)

  • Regular Contributor
  • *
  • Posts: 89
  • Country: us
slicendice:


Thank you for looking into it for me. As for how many computers will be connected to the network, I'm flexible. At the moment I need at least the two computers connected to each other, and I could work around the others not being directly connected at first, since they will be less active (especially if putting everyone on one network would greatly impact performance), but I will describe the highest possible count of connections. I think I would be looking at four computers for the time being (I'll get to the future in a moment): the workstation computer and the rendering computer that we have been talking about, plus at least two more general-purpose machines that will be able to share the workload. There will also be at least one other computer in the household, but it will not be used for the business, so access to the network would not be necessary for it.

In the future, I could see several more computers being added on...

Quote
I suspect you will have a project manager/designer, a few code monkeys, a few video editors, 1 audio editor, 1-2 3D asset creators and a few 3D RAW video creators in the future (for the video and asset parts this could be the same person(s)). This is if your business grows. Everybody has to have access to the components their own part of the project depends on.

Yes, I could see around 10 individual computers in the future. Perhaps not for a few years, but you definitely have the right idea, and being somewhat "ready/future-proofed" for any additional workstations/rendering computers would definitely be preferable.

Thank you again!

As for the last part, yes, please, let's eventually get around to talking about everything. I want to make sure everything is in its place, and I'm more than happy to learn whatever I have to learn.


To everybody generally...

Since I have seen some people saying things like "he", I just thought I would mention that I am actually female. That is my actual name in my username, "Lizzie Jo." I just thought I would say it in case I was confusing anybody. ;- )


Madires:


I think I would have the CPU power covered. On the single CPU, I'm looking at the i7 6950X, and for the dual CPU build, it is looking like two Xeon E5 2699/2696 v4s.

I would also imagine quite a bit. I would prefer file access to be as quick as possible. If we're going for file transfers, I would like whatever speeds are definitely fast enough for transferring 20GB of 4K video files/assets/UE4 projects and the like. Unless simply accessing them all from the other computer is more sensible.

MarkF:


Thank you for clearing that up for me!

David Hess:


I'm going to list all of my components like I was saying yesterday (for me at least) now...


Workstation PC:

CPU - Intel i7 6950X

Motherboard - ASUS X99 Deluxe (unless there is a better alternative...)

Memory - 128 GB of DDR4 2333/3000MHz RAM (non-ECC)

Graphics Card - GTX 1080 (one, perhaps two in the future)/ TITAN X? Unless a Quadro would help more?

SSD - Intel 750 series 1.2 TB SSD/Samsung 950/960 Pro (however many I will need), and of course storage separate from the OS and programs drive.

Operating system - Windows 10 Pro; I don't think I will need Server 2016?


Rendering computer:


CPU - Two Intel Xeon E5 - 2699/2696 v4 CPUs.

Motherboard - ASUS Z10PE D16 WS

Memory - 128 GB of DDR4 2300MHz ECC-compatible RAM. (The motherboard has 16 DIMM slots and can accommodate up to 1TB of ECC memory, I believe, so I will be upgrading to at least 256GB in the future.)

Graphics card - GTX 1080/TITAN X/Quadro M5000? (After watching and reading up more, it turns out that Premiere Pro and the Cineform codec do indeed utilize the GPU for improved performance, so I need to evaluate again which kind of graphics card will help each system individually.)

SSD - Intel 750 series 1.2 TB SSD drive (one for the operating system, and then however many more I might need for the storage array, unless I go for HDDs instead?)

Operating system - Windows 10 Pro. Again, I don't think I would need something like Windows Server 2016, right?


Actually, I really want to be able to build this for myself, without hiring someone else to do it. It is indeed for my business, and if worse comes to worst I will always put the business first, but there is a definite need to be able to build these PCs with my own hands, rather than paying someone to do it for me. I am sure most of the people on here will be able to understand that?

I would like to continue learning about this network solution, and to be able to confidently build it up myself (of course you all have been helping, but I mean when the actual pieces are in my hands).

Which, of course, thank you all again!




 

Offline xani

  • Frequent Contributor
  • **
  • Posts: 400
For technical and, I suspect, market segmentation (1) reasons, LACP (Link Aggregation Control Protocol) deliberately does not support round-robin link aggregation, so if a switch is used, it will have to be configured for static link aggregation.  If you have a small number of machines, then it is possible to do without a switch.

There is a very specific reason why LACP (and its L3 equivalent, ECMP) keeps one session on one link (usually by a hash of src/dst IP + port, but it varies and is sometimes selectable): TCP really does not like excessive packet reordering, and most TCP stacks will suffer a performance penalty when it happens.

And it would be pretty easy to introduce reordering by accident: just have slightly different path lengths to a switch, or less-than-perfect balancing between the links.

There are protocols to deal with that, but all the ones I have seen are proprietary. IIRC Brocade's VCS actually measures the latency of each link and adjusts the distribution based on that and the link bandwidth. Sadly, it is designed to interconnect switches, not end devices.
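To make the hashing point concrete, here is a minimal sketch (nothing vendor-specific; the interface names, field choice and CRC32 hash are just assumptions for illustration) of how per-flow hashing pins a session to one member link:

Code:
# Minimal sketch of LACP/ECMP-style per-flow link selection.
# Field selection and hash function vary between implementations; the point is
# that one src/dst IP + port tuple always maps to the same member link, so a
# single TCP session is never spread (and reordered) across links.

import zlib

MEMBER_LINKS = ["eth0", "eth1"]  # hypothetical aggregated member links

def pick_link(src_ip: str, dst_ip: str, src_port: int, dst_port: int) -> str:
    key = f"{src_ip}:{src_port}-{dst_ip}:{dst_port}".encode()
    return MEMBER_LINKS[zlib.crc32(key) % len(MEMBER_LINKS)]

# One flow always lands on the same link...
print(pick_link("10.0.0.1", "10.0.0.2", 50000, 445))
# ...while a second flow may hash to the other link, which is why aggregation
# only helps with many parallel flows, not with a single large transfer.
print(pick_link("10.0.0.1", "10.0.0.2", 50001, 445))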
 

Offline viperidae

  • Frequent Contributor
  • **
  • Posts: 306
  • Country: nz
Google "thunderbolt network"
10gbit pc to pc
 

Offline Lizzie_Jo_Computers_11 (Topic starter)

  • Regular Contributor
  • *
  • Posts: 89
  • Country: us
That is what some other users were saying, but at this point we're talking about future-proofing the network system as well. I don't think that will work out in the long run.
 

Offline slicendice

  • Frequent Contributor
  • **
  • Posts: 365
  • Country: fi
Yes, Thunderbolt is only good up to a certain point; after that it is useless. Save the TB ports for other stuff like displays or external backup/archive storage.
 

Offline Jeroen3

  • Super Contributor
  • ***
  • Posts: 4078
  • Country: nl
  • Embedded Engineer
    • jeroen3.nl
If I were you I'd set it up like this.

Main workstation: (build this one)
- Fast single CPU.
- Lots of RAM.
- Fancy GPU.
- RAID card with all the storage.
- 10 Gb network card.
Used as source for workers.

Workers[n]: (buy this one; image the setup and it's easy to scale)
- Dual CPU.
- "Small" memory.
- 10 Gb network card.
Uses storage from main workstation.

Archive server: (buy this one)
- 10 Gb network card.
- Lots of HDDs.
- Caching SSD.
Used as target for workers.

Advantages: No lag while working on files.
Disadvantages: Workstation has to be on.

This is assuming the rendering saturates on CPU, and not on source or target bandwidth.
« Last Edit: February 01, 2017, 12:21:46 pm by Jeroen3 »
 

Offline Lizzie_Jo_Computers_11 (Topic starter)

  • Regular Contributor
  • *
  • Posts: 89
  • Country: us
I am going to get back to the network research later today, as it seems like it is going to be an important part of this setup. I will have more to say later on, but slicendice, did you ever get in contact with that person you were talking about? Either way, what might our main options be as of right now? Are we looking at a switch, any kind of NAS, or something else?

I think I pretty much have the basic concept of the two computers laid out, with some minor adjustments probably on the way, but a future-proofed networking solution could really be of help.

In my other topic, I mentioned the Cineform codec and its possible inclusion in my workflow, as it could greatly benefit the responsiveness of my timeline, but it is much more space-consuming, so the need for a quick connection between the computers is still very relevant.

Thank you again, everyone! ;- )
 

Online David Hess

  • Super Contributor
  • ***
  • Posts: 16613
  • Country: us
  • DavidH
I went through essentially this whole thought process a few years ago when I upgraded my Pentium 4 to a Phenom 940.  I put the biggest and fastest RAID into the Phenom 940 workstation so my workstation tasks have direct access to it for minimum latency, and my other boxes either access the workstation RAID over gigabit Ethernet, or I copy the files they need to their local storage or to a file server for them to access.  I don't mind if a distributed task runs with higher latency, but I want the workstation I am using to operate with the lowest latency possible.

Back then, "big and fast RAID" meant something like an ARECA 1210 hardware RAID controller and four WD Black drives.  Today it might be a different RAID card and SSDs or PCIe SSDs with bulk storage moved off to a networked file server.
 

Offline Lizzie_Jo_Computers_11 (Topic starter)

  • Regular Contributor
  • *
  • Posts: 89
  • Country: us
That seems like a good idea, David Hess. It is very important (the most important actually) that the workstation is able to be quickly connected to the files that are needed for editing. It is actually not quite as much of a problem if the latency between the RAID solution and the rendering machine is a little bit higher, because the fact that it actually is a different computer from the workstation already frees up the workstation from slowing down while rendering.

I know in the long run I will need a separate storage solution for the mass of data that will be collected over time. I intend to have two separate backups (as there should be): one connected to the network, and another one that just stores the files separately. Essentially they will just hold the same data, however.

I will start looking into RAID cards. Is it effective, though, to use SSDs/PCIe SSDs in a RAID configuration?
 

Online David Hess

  • Super Contributor
  • ***
  • Posts: 16613
  • Country: us
  • DavidH
I will start looking into RAID cards. Is it effective, though, to use SSDs/PCIe SSDs in a RAID configuration?

RAID is more beneficial with mechanical storage than with SSDs because the latter are so much faster; a good PCIe SSD will max out the PCIe interface anyway.  There is still some advantage with slower SATA SSDs though, and if you build a SATA RAID, then replacement of failed drives is possible.

SATA 1.0: 150 MB/s
SATA 2.0: 300 MB/s
SATA 3.0: 600 MB/s

PCIe x4 1.0: 1 GB/s
PCIe x4 2.0: 2 GB/s
PCIe x4 3.0: 4 GB/s
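
For a rough sense of how those ceilings compare to the 10GigE link being discussed (assuming ~1.25 GB/s raw for 10GigE, before protocol overhead; the drive figures are the nominal interface limits above, not measured speeds):

Code:
# Rough, back-of-the-envelope comparison of storage interface ceilings against
# a 10GigE link. Assumes ~1.25 GB/s raw for 10GigE; real throughput is lower.

TEN_GIGE_GB_S = 10 / 8  # 10 Gbit/s expressed in GB/s

ceilings_gb_s = {
    "Single SATA 3.0 SSD": 0.6,
    "2x SATA 3.0 SSDs in RAID 0": 1.2,
    "PCIe x4 3.0 SSD": 4.0,
}

for name, gb_s in ceilings_gb_s.items():
    print(f"{name}: {gb_s:.1f} GB/s, about {gb_s / TEN_GIGE_GB_S:.1f}x a 10GigE link")

In other words, a couple of SATA SSDs already roughly match the 10GigE pipe, and a single PCIe SSD outruns it, which is part of why RAID buys less with fast flash.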
 

Offline Rerouter

  • Super Contributor
  • ***
  • Posts: 4694
  • Country: au
  • Question Everything... Except This Statement
I would likely approach it with one 10 Gbit card in each machine, a direct cable run between them, and use the on-board Ethernet connection on your workstation for internet and the local network (to save on the cost of a switch).

You can then use a trick like this:
http://www.speedguide.net/faq/how-to-tell-windows-7-to-use-a-different-default-350
so that it tries the 10 Gbit link first and your file grabs take the fast path.

I'm currently only on 1 Gbit, but both my RAID 5 of 2 TB hard drives and my SSD can easily saturate the link (the SSD is about 420 MB/s, the RAID about 310 MB/s), so it's more a case of asking where the bottleneck is for typical use. I would say the networking, and above that you're getting into hard-to-manage speeds.

The other thing is that if both machines have plenty of RAM, Windows will cache most of the data, meaning that for file transfers under half the size of the available RAM you can get some very high transfer rates.
 

Offline gnif

  • Administrator
  • *****
  • Posts: 1676
  • Country: au
I have not read through all the posts, but in an enterprise or office environment we often use a SAN (storage area network) over FC (Fibre Channel) instead of Ethernet, which gives up to 128 Gbit/s data rates depending on the configuration, well above the 10 Gbit/s you can achieve using Ethernet cards. But with this you obviously have to consider the cost, as an FC setup is not cheap; it could likely be done for around the same price as a 10 gigabit setup if you were to source second-hand parts.
 

Offline madires

  • Super Contributor
  • ***
  • Posts: 7764
  • Country: de
  • A qualified hobbyist ;)
Just for better understanding: FC has different link speeds, like Ethernet. FC's 32 and 128 Gbit/s are quite new; 128 Gbit/s FC uses 4 fibers or 4 lambdas, similar to 100 Gigabit Ethernet.
 

Offline Jeroen3

  • Super Contributor
  • ***
  • Posts: 4078
  • Country: nl
  • Embedded Engineer
    • jeroen3.nl
I recently watched this video, and the guy made some interesting claims about a whole bunch of disks being faster than a SATA SSD.
It probably isn't even that bad a concept for you, since it is easy to scale; internal drives or PCIe drives are not.
 

Offline Monkeh

  • Super Contributor
  • ***
  • Posts: 7992
  • Country: gb
I recently watched this video, and the guy made some interesting claims about a whole bunch of disks being faster than a SATA SSD.

Faster, or faster? In terms of latency, absolutely no chance in hell. In terms of raw read or write speed... sure, no problem. SATA drives are 'slow'.
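
To put rough numbers on that distinction (ballpark, order-of-magnitude figures for illustration, not benchmarks):

Code:
# Ballpark comparison of a stack of spinning disks vs a single SATA SSD.
# Typical figures: an HDD streams ~150 MB/s but manages only ~150 random IOPS;
# a SATA SSD streams ~550 MB/s but handles tens of thousands of random IOPS.
# Striping scales sequential throughput with spindle count; it does not fix latency.

N_HDDS = 8
HDD_SEQ_MB_S, HDD_IOPS = 150, 150
SSD_SEQ_MB_S, SSD_IOPS = 550, 90_000

print(f"{N_HDDS} striped HDDs: ~{N_HDDS * HDD_SEQ_MB_S} MB/s sequential, "
      f"~{N_HDDS * HDD_IOPS} random IOPS")
print(f"1 SATA SSD:     ~{SSD_SEQ_MB_S} MB/s sequential, ~{SSD_IOPS} random IOPS")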
 

Offline Jeroen3

  • Super Contributor
  • ***
  • Posts: 4078
  • Country: nl
  • Embedded Engineer
    • jeroen3.nl
Obviously latency will be higher. However, since the workload will be several sequential clients (the renderers) and one random client (the editor), it might work fine.
 

Offline brucehoult

  • Super Contributor
  • ***
  • Posts: 4032
  • Country: nz
Just how big are your files? And how fast is the renderer going to chew through them?

Every PC now has Gigabit Ethernet built in. That does 110 MB/s or so with scp, NFS, 9P etc. You only have to make sure your router has gig ports. Most cheap home ones don't, because neither your internet connection nor your Wi-Fi is that fast, but routers with GigE are not expensive. 10 gig is expensive.

If the files you're throwing around are only in the tens-of-gigs range and you don't mind waiting a minute or two for them to transfer, then GigE will do. If your render machine reads them from the editing machine as it goes, then GigE is probably more than you need. A render takes more than five minutes, right?
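
As a quick sanity check on those times (assuming the ~110 MB/s GigE figure above and roughly ten times that for a well-behaved 10GigE link; real numbers depend on protocol, drivers and the storage at both ends):

Code:
# Quick transfer-time estimates for the file sizes mentioned in this thread.
# Assumes ~110 MB/s usable on GigE and ~1100 MB/s on 10GigE.

def transfer_minutes(size_gb: float, rate_mb_s: float) -> float:
    return size_gb * 1000 / rate_mb_s / 60

for size_gb in (20, 100):
    print(f"{size_gb} GB: {transfer_minutes(size_gb, 110):.1f} min on GigE, "
          f"{transfer_minutes(size_gb, 1100):.1f} min on 10GigE")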

At work, where we throw around things like VM disk images and Android build directories, our desktop phones have a daisy-chain Ethernet port on the back. Most people plug their PC into the phone, and the phone into the wall/floor. The only problem is the phone talks to the switches with GigE but the daisy-chain is only 100 meg! I make sure I get a dedicated gig port for my PC (and get 100+ MB/s speeds to the servers), but most people don't seem to notice or care!
 

Offline gnif

  • Administrator
  • *****
  • Posts: 1676
  • Country: au
I recently watched this video, and the guy made some interesting claims about a whole bunch of disks being faster than a SATA SSD.
It probably isn't even that bad a concept for you, since it is easy to scale; internal drives or PCIe drives are not.


I run numerous ZFS arrays and, to be completely honest, they are insane; you would not believe the performance you can get out of these things, or the redundancy features.

At home I currently have an array with 5x 4TB WD Red NAS disks configured in RAIDz2; these are not super-fast disks, running at 5400 RPM. I also have a pair of Intel 120 GB SSDs providing the non-volatile log (ZIL) and cache (L2ARC) storage, simply for performance. In this configuration I am getting 14.5T of total storage available, and the sequential and random read/write performance of a single SSD.

Here is the output of an hdparm sequential read test.
Code:
/dev/tank/test:
 Timing cached reads:   11510 MB in  1.99 seconds = 5779.83 MB/sec
 Timing buffered disk reads: 1024 MB in  1.27 seconds = 809.14 MB/sec

Also note that this is on an older system with only SATA-II, but since the workload is split between disks it doesn't matter, as the bus never gets saturated. You can't argue with a 14.5T storage array that performs as fast as an SSD, has crazy good redundancy, supports snapshots and, soon under Linux, encryption too.

ZFS is THE file system to use for any serious storage requirements at the moment. It does not suffer from the issues other solutions have, such as silent data corruption: a weekly scrub will find, repair and report any silent errors that have occurred on the disk, giving you a very early warning that the disk is suffering and about to fail.

The list of reasons to use ZFS over other file systems is too long to give here; suffice it to say, I have been working in this industry for 20 years and in my experience ZFS just destroys all the alternatives in performance, features and redundancy.
 

Offline slicendice

  • Frequent Contributor
  • **
  • Posts: 365
  • Country: fi
@gnif, yes, that solution is worth exploring. I would go for such a configuration as it's reliable, fast and highly scalable, plus it has a lot of other stuff built in that makes it easy to manage file integrity in the long run.

Though ZFS has its advantages, any FS would still do. More important is the actual overall configuration, which improves the performance and scalability.
 

