Distributed compilation is quite common in the software world, so it would make sense in the hardware world too.
Compilation isn't that much of a problem; it's the place and route that is the killer, as quite a few of the decisions made during the P+R process have global impacts.
A lot of the implementation stages are still poorly threaded, so they do not map well onto highly parallel systems.
Not only do we have compilers that distribute the job to other computers on the network; there are also tools such as Atlassian Bamboo that offload the entire build process.
As someone who has worked at a hardware company that provided software tools, I can say the mindset is generally very conservative and slow to change. Vivado using TCL is a simple example of that.
Rather than run the IDE in the cloud, maybe see if you can get the backend engine running there instead. I expect it would take quite a bit of hacking unless you can get Xilinx support to take you seriously.
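As a rough sketch of what that could look like (this is not a documented Xilinx flow, and the host name and file names below are just placeholders): run the heavy build in batch mode on the remote machine with something like vivado -mode batch -source build.tcl, scp the bitstream back, and then program the board through the local USB/JTAG cable with a small hardware-manager script:

    # program_board.tcl -- run locally as: vivado -mode batch -source program_board.tcl
    # Assumes the bitstream was already copied back, e.g.:
    #   scp builder:fpga/top.bit .
    open_hw                                   ;# start the hardware manager
    connect_hw_server                         ;# local hw_server owns the USB/JTAG cable
    open_hw_target                            ;# open the cable it finds
    current_hw_device [lindex [get_hw_devices] 0]
    set_property PROGRAM.FILE {top.bit} [current_hw_device]
    program_hw_devices [current_hw_device]    ;# push the bitstream into the FPGA
    close_hw

The hardware manager only needs the bitstream and the cable, so the heavyweight synthesis and place-and-route could live entirely on the remote machine.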
No way to just upgrade the CPU instead of replacing the entire computer?
It sounds like you have very little familiarity with the FPGA tools; they can be run like any other build process using make, and they integrate well with many of the established software development tools. You can have regression tests and builds run automatically, distributed across build farms, but not many projects reach a scale that warrants the investment.
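For example, a minimal non-project build script along these lines (the file names are placeholders and the part is the Zybo's Zynq-7010) can be driven from a make rule or a CI job with vivado -mode batch -source build.tcl:

    # build.tcl -- minimal non-project flow, e.g. invoked from make as:
    #   vivado -mode batch -source build.tcl
    read_verilog top.v                        ;# RTL sources
    read_xdc constraints.xdc                  ;# pin and timing constraints
    synth_design -top top -part xc7z010clg400-1
    opt_design
    place_design
    route_design
    report_timing_summary -file timing.rpt    ;# keep a report for the regression logs
    write_bitstream -force top.bit

The make recipe is just that one vivado command, so the FPGA build slots into the same automation as the rest of the project.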
My computer is not fast enough to run the Xilinx Vivado HLS toolset.
This means that I have to buy a new and more powerful computer.
Doing a rough calculation, I would have to spend between 1000 and 1500 USD to get a decent setup that is future-proof for the years to come.
But I was thinking about a different solution, given that it is 2016 now.
Can the Xilinx Vivado HLS toolset be run from the Cloud, while still having local access to USB, Ethernet and the other ports needed to connect my Xilinx target board (e.g. the Xilinx Zybo FPGA board)?
The tools run well on very modest resources: a virtual machine with just a few GB of RAM allocated will happily build nontrivial systems in realistic amounts of time (less than an hour), and running that natively on a bleeding-edge CPU might only cut the build time in half at best. CPU compute performance has advanced only a short distance in recent years, so any computer from the last 5 years should be up to speed, and anything from the last 10 will still run it OK if given a little more RAM:
http://www.tomshardware.com/reviews/The-500-Gaming-Machine,1147-9.html

If you're having long build times, you'll want to drop back on the utilisation of the device (or move to a bigger device) and slow down the clock rates. Trying to eke out the last few percent can blow out build times by an order of magnitude, only to discover that it won't meet timing. With long implementation times you have a big incentive to test and simulate the design, or, when it needs debugging in hardware, to build only the part of the system being worked on.

This comes down to classic engineering trade-offs: the ease of completing the project with a larger or faster device competes against the cost savings of making the design fit into a smaller one. For low-volume work it's usually biased towards just buying larger devices; having an FPGA at 30% utilisation (even with some of the resource types at 0%!) is not failure, it's just being pragmatic.
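On the clock-rate point above: the target rate is just a constraint, so relaxing it is a one-line change in the XDC (the port and clock names here are made up), and it gives place and route far more slack to work with:

    # constraints.xdc -- a 50 MHz (20 ns) target instead of 100 MHz (10 ns)
    create_clock -period 20.000 -name sys_clk [get_ports sysclk]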
Finally, Xilinx have very simple licensing setups, and they offer great support and options for them. Tying any of it to an online requirement would make me very unhappy, as the tools evolve so rapidly that you often need to keep legacy versions running to support particular designs or IP. As it is now, they can just sit on legacy hardware or in virtual machines with no need to be exposed to the internet.