New work computers (tech refresh), a conversation looking for ideas


abeyer:
Since it sounds like the desktops would largely be accessed remotely anyway, rather than issuing a separate single desktop machine to each user, you might consider desktop virtualization running on a pool of servers (either your own, or cloud based, or some combination of both depending on your needs.)

This can give the flexibility to provision large cpu/gpu/memory-heavy instances when they're needed, smaller ones when they're not, and suspend them entirely when they're not being used.

Halcyon:

--- Quote from: abeyer on January 21, 2024, 11:36:00 pm ---Since it sounds like the desktops would largely be accessed remotely anyway, rather than issuing a separate single desktop machine to each user, you might consider desktop virtualization running on a pool of servers (either your own, or cloud based, or some combination of both depending on your needs.)

This can give the flexibility to provision large cpu/gpu/memory-heavy instances when they're needed, smaller ones when they're not, and suspend them entirely when they're not being used.

--- End quote ---

You beat me to the punch! Depending on what the OP's workflow is, virtualisation is a very viable option and has advantages over end-user equipment: it's scalable, it can be made redundant, and it's centrally managed.

I've started to virtualise our heavy workloads at work, rather than replacing our big, heavy forensic workstations. Having that kind of technology sitting on everyone's desk and basically doing nothing most of the time is wasteful.

jpyeron:

--- Quote from: Halcyon on January 22, 2024, 02:15:30 am ---
--- Quote from: abeyer on January 21, 2024, 11:36:00 pm ---Since it sounds like the desktops would largely be accessed remotely anyway, rather than issuing a separate single desktop machine to each user, you might consider desktop virtualization running on a pool of servers (either your own, or cloud based, or some combination of both depending on your needs.)

This can give the flexibility to provision large cpu/gpu/memory-heavy instances when they're needed, smaller ones when they're not, and suspend them entirely when they're not being used.

--- End quote ---

You beat me to the punch! Depending on what the OP's workflow is, virtualisation is a very viable option and has advantages over end-user equipment: it's scalable, it can be made redundant, and it's centrally managed.

I've started to virtualise our heavy workloads at work, rather than replacing our big, heavy forensic workstations. Having that kind of technology sitting on everyone's desk and basically doing nothing most of the time is wasteful.

--- End quote ---

Thinking hard on that one; it sounds very reasonable. What worries me is what I haven't thought of.

ajb:
Seconding or thirding the virtualization idea.  I am currently typing this message on a workstation VM that I've been piloting and it's been working great. 

We recently set up our first "real" virtualization infrastructure, with a small vSphere cluster.  We're doing this on-prem, partly because we're limited to the not-amazing ISPs that serve our building, partly to take advantage of our 10G+ LAN, and generally because it's easier for a lot of our applications.

A few things I've learned along the way as a relative IT amateur, in case they help (in no particular order):


- The 'right' way to do GPU-intensive desktop work is to use NVIDIA's vGPU system, but this is fuck-off expensive.  IIRC, it would be something like $20k for a couple of supported GPUs and the first year's software licenses (yes, you have to pay both VMware and NVIDIA every year!) for just a handful of users.  It does give a lot of flexibility in terms of scaling users vs GPUs, and easily moving VMs around within a cluster, if you have enough users to warrant those things, but we don't.



- So instead of doing that, we bought a handful of much cheaper Quadros and are using the standard PCI passthrough capabilities provided by our ESXi hosts to expose them to the workstation VMs.  The main downside to this is that your individual workstations are tied to a GPU in a specific host, but this is not a problem for our workflow.  There are barebones servers available with a dozen or so PCIe x16 slots (although IME a lot of CAD applications will work fine with a GPU in an x4 or x2 slot), which is probably where we'd move as our need for that sort of thing grows.  One thing to keep in mind with GPUs is that you will be relying on video encode on the VM for a lot of the higher-performance remote desktop solutions, so there is a higher baseline GPU demand, especially with multiple/larger displays.  You may also need a client with decent hardware decode, depending on the throughput you need.
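A rough way to keep an eye on that extra encode load from inside the guest is just to poll nvidia-smi.  The sketch below is a generic example rather than part of our setup, and it assumes your driver's nvidia-smi supports these --query-gpu fields (check nvidia-smi --help-query-gpu on your version):

--- Code: ---
#!/usr/bin/env python3
# Rough sketch: poll nvidia-smi inside the workstation VM to watch how much GPU
# and encoder capacity a remote session is actually eating.  Assumes nvidia-smi
# is on PATH and supports these --query-gpu fields on your driver version.
import subprocess
import time

FIELDS = "utilization.gpu,encoder.stats.sessionCount,encoder.stats.averageFps"

def sample():
    out = subprocess.run(
        ["nvidia-smi", f"--query-gpu={FIELDS}", "--format=csv,noheader,nounits"],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    # One CSV line per GPU: "<gpu util %>, <encode sessions>, <encode fps>"
    return [line.split(", ") for line in out.splitlines()]

if __name__ == "__main__":
    while True:
        for i, (gpu, sessions, fps) in enumerate(sample()):
            print(f"GPU{i}: {gpu}% util, {sessions} encode session(s), {fps} fps")
        time.sleep(5)
--- End code ---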



- Some applications simply don't perform well with some remote desktop systems.  For example, our Solidworks users are perfectly happy with standard Windows remote desktop, but Altium is absolutely awful with RDP.  Something about the way it round-trips user input and graphics updates, I think -- it gets really laggy and squirrelly.  I tried out Parsec for a while, but even on our 10G LAN it would have terrible drops in image quality that would take minutes to recover, if they recovered at all.  They use a fairly conventional video codec that simply doesn't work well for CAD-type applications.  It also had a few other annoyances, like occasionally needing to log into your remote VM some other way because you needed to refresh your Parsec login on that machine, not locking one remote connection when another is established, etc.  I am currently using HP Anyware (formerly Teradici), which is more expensive but purpose-built for workstation use, and it generally works great.  I'm running 2x 4K displays and it's indistinguishable from running a local machine.  I can also run 2x 4K displays at home via VPN, and even with our crappy upload speeds at the office it's really good -- occasional frame rate drops when large areas of the displays are updating, but it stays crystal clear on every frame instead of turning into a mess of artifacts.



- USB passthrough is another tricky point if you need to support things like debug interfaces or USB instruments.  Even before we got into virtualization, I was messing with this so I could basically bring my workstation with me to the bench with all of the test equipment.  (That's super convenient, by the way, and I highly recommend it.  Even without virtualization, being able to go from my office to the lab and use all of the equipment there without having to log into a different computer, start up whatever applications I was using, etc, was really nice.)  Remote desktop apps will do keyboard/mouse, audio, maybe USB drives, but generally not much else, IME. 

There are a few software options for remote USB out there; I tried several of them and found them to work fine.  A lot of them require buying a 'server' license for a certain number of USB devices, which can add up if you have a lot of users needing to redirect a handful of devices each.  I settled on FlexiHub, which uses a per-connection 'credit' system.  That would be annoying long term, but it's cheaper and more flexible, and for the most part it has worked fairly well.  They have an Android app, so once or twice I've walked around with a J-Link plugged into my phone, and my iPad RDPed into my computer, to do firmware updates on equipment.

More recently, I've been using a Digi AnywhereUSB 8, and it's been absolutely rock solid.  They explicitly support USB hubs, so you can expand the number of connected devices quite easily, and it's handled every device/use case I've tried with it -- including a J-Trace doing streaming Cortex trace at 100 MHz.
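A quick way to sanity-check that a redirected device actually shows up inside the VM is just to enumerate what the guest OS sees.  A minimal pyusb sketch (assumes pyusb plus a libusb backend are installed in the guest; the SEGGER vendor ID is only an example of a device you might be expecting):

--- Code: ---
# Quick sanity check: list what USB devices the guest OS actually sees, e.g.
# after redirecting a J-Link through FlexiHub or attaching it via AnywhereUSB.
# Assumes `pip install pyusb` plus a libusb backend inside the VM.
import usb.core
import usb.util

SEGGER_VID = 0x1366  # SEGGER (J-Link / J-Trace) USB vendor ID -- example only

for dev in usb.core.find(find_all=True):
    name = "(no product string)"
    if dev.iProduct:
        try:
            name = usb.util.get_string(dev, dev.iProduct)
        except (usb.core.USBError, ValueError):
            name = "(could not read descriptor)"
    tag = "  <-- expected device" if dev.idVendor == SEGGER_VID else ""
    print(f"{dev.idVendor:04x}:{dev.idProduct:04x}  {name}{tag}")
--- End code ---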



- If you do any network development, getting direct network connections to VMs for testing also requires consideration, since you can't just plug in an extra USB-Ethernet adapter when needed.  We assigned a set of VLANs for development/testing, and our VM hosts expose each of those VLANs to the dev/test VMs.  For the other end of the equation, we have a handful of 5-port PoE switches configured with an 'all VLANs' uplink port and four ports assigned to those VLANs.  We have PoE (almost) everywhere, so it's very easy to just grab one of the preconfigured switches, plug it in anywhere, and select/enable the corresponding network connection in the VM.  We rely on this a lot for testing and customer support -- since our product is used with third-party control software, we have a set of otherwise-clean VMs prebuilt with different versions of that software that we can easily fire up to troubleshoot if a customer has a problem.
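As a trivial illustration (the VLAN names and target addresses below are placeholders from the documentation IP ranges, not a real plan), a test VM can run a quick reachability check to confirm the intended dev/test network is actually attached before a troubleshooting session starts:

--- Code: ---
# Minimal reachability check a test VM could run to confirm the intended
# dev/test VLAN is actually attached.  VLAN names and targets are placeholders.
import socket

VLAN_TARGETS = {
    "dev-vlan-101": ("192.0.2.1", 80),      # e.g. that VLAN's gateway or a known device
    "test-vlan-102": ("198.51.100.1", 80),
}

def reachable(addr: str, port: int, timeout: float = 2.0) -> bool:
    try:
        with socket.create_connection((addr, port), timeout=timeout):
            return True
    except OSError:
        return False

for name, (addr, port) in VLAN_TARGETS.items():
    status = "OK" if reachable(addr, port) else "unreachable"
    print(f"{name}: {addr}:{port} {status}")
--- End code ---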



- Storage needs consideration here as well.  Hopefully you already have decent network file storage, but you may want a dedicated NAS/storage network specifically for VMs, since they will be hitting the storage a lot harder.  You can of course use internal drives in the VM hosts, but that makes it slower/harder to migrate VMs and implement backups properly.  This doesn't need to be anything insane for a small number of VMs; there are ready-made boxes from Synology, for example, that can do appropriate RAID configurations and bonded 10G network links, and those will work fine.
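For a quick gut check of a candidate datastore or NAS mount, something like the sync-write latency probe below gives a rough feel.  A real benchmark tool like fio is the better answer; the mount path is a placeholder, and statistics.quantiles needs Python 3.8+:

--- Code: ---
# Rough sync-write latency probe for a candidate VM datastore / NAS mount.
# A proper tool like fio is the real answer; this just gives a quick feel.
# TEST_DIR is a placeholder -- point it at the share you want to test.
import os
import statistics
import time

TEST_DIR = "/mnt/vm-datastore"   # placeholder mount point
BLOCK = b"\0" * 4096             # 4 KiB synchronous writes
ITERATIONS = 200

path = os.path.join(TEST_DIR, "latency_probe.tmp")
latencies_ms = []
with open(path, "wb") as f:
    for _ in range(ITERATIONS):
        start = time.perf_counter()
        f.write(BLOCK)
        f.flush()
        os.fsync(f.fileno())     # force the write through to the storage back end
        latencies_ms.append((time.perf_counter() - start) * 1000.0)
os.remove(path)

p95 = statistics.quantiles(latencies_ms, n=20)[18]
print(f"sync 4 KiB writes, {ITERATIONS} iterations:")
print(f"  median {statistics.median(latencies_ms):.2f} ms, p95 {p95:.2f} ms, max {max(latencies_ms):.2f} ms")
--- End code ---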

nightfire:
We have a handful of developers at my workplace, and we are also going this route. Basically, each developer has an HP ProBook as their desk machine (and so they can work from home if needed), and the CPU-intensive stuff like compiling and debugging C++/C# code is done in a VM on a big server. Since the developers rarely compile everything at the same time, this approach works well for us.
But today's servers usually scale by adding more CPU cores rather than faster ones, so your applications and build tools need to be able to take advantage of that.
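As a toy illustration of that point, fanning per-file compile jobs out across however many cores the VM was given is exactly the kind of parallelism the toolchain has to exploit; real projects would just use make -j, Ninja, or MSBuild /m, and the source layout and compiler below are placeholders:

--- Code: ---
# Toy illustration of the "scale with more cores" point: fan per-file compile
# jobs out across whatever the build VM provides.  Real projects would use
# make -j, Ninja, or MSBuild /m -- this only shows the idea.  Assumes gcc is
# installed and that *.c sources live under src/ (placeholder layout).
import glob
import os
import subprocess
from concurrent.futures import ThreadPoolExecutor

JOBS = os.cpu_count() or 1        # use every core the VM was given

def compile_one(source: str) -> str:
    obj = source.rsplit(".", 1)[0] + ".o"
    subprocess.run(["gcc", "-c", "-O2", source, "-o", obj], check=True)
    return obj

sources = glob.glob("src/**/*.c", recursive=True)
# Threads are fine here: each job spends its time waiting on a gcc subprocess.
with ThreadPoolExecutor(max_workers=JOBS) as pool:
    objects = list(pool.map(compile_one, sources))

print(f"compiled {len(objects)} files using {JOBS} parallel jobs")
--- End code ---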

For desktop virtualization, I would also prefer VMware Workstation, especially if you need access to attached devices that have to be mapped into the VM.
