As I understand the FPGA design cycle, it is heavily based upon simulation, so it is a bad idea, IMHO, not to learn to use the simulator from the very beginning.
In earlier times with Xilinx ISE, the simulator wasn't free: the IDE, synthesis, and place/route tools were free, but the simulator wasn't. As a result, I have never used the simulator.
Now the simulator is free in Vivado, but I still don't use it. I liken simulation to single-stepping through pages of assembly code - a colossal waste of time. Yes, it can be useful for small pieces of a project (perhaps the ALU, see below), but I'm not convinced it brings anything to the dance when the actual circuit fails about a million cycles into booting the OS. And the OS is KNOWN to work...
The newer Integrated Logic Analyzer (ILA) in Vivado, OTOH, can be quite handy because I can trigger it just before the point where I expect a problem. Getting the constraints file right can be a bit of a challenge, but much of it is automated.
I can also arrange for an address breakpoint that switches the CPU from Run mode to Single Step and pauses. From there I can single-step through the area where I expect to find the problem.
There are a lot of ways to debug projects, FPGAs or otherwise, and all have a place. That's why I want a board with lots of LEDs and 7-segment displays.
A long time back I was messing with the design of an ALU for the PDP-11/45. That thing is complicated! So I coded it up on an FPGA and connected the inputs and outputs (data and flags) to a long shift register. That let me shift in test vectors and grab the results. I used another processor (a Blackfin) to handle the data transfer while pulling the vectors from an NFS server. And that's the way you test an ALU. With a Raspberry Pi it might be even easier, since the Pi could handle the transfer and hold the files itself. Handling the transfer on a typical Linux box seems daunting.
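The host side of that kind of test rig is mostly just bit-banging a scan chain, which a Raspberry Pi can do directly from its GPIO header. Below is a minimal sketch, not the original Blackfin code: it assumes the FPGA exposes the shift register through four GPIO-connected signals (a shift clock, serial data in, serial data out, and a latch pulse that applies the vector and captures the ALU result). The pin numbers, chain length, file names, and exact latch/shift protocol are all placeholders for illustration.

```python
# Minimal sketch of driving an ALU test scan chain from a Raspberry Pi.
# All pin assignments and the chain layout are hypothetical.
import RPi.GPIO as GPIO

SCLK, SDI, SDO, LATCH = 17, 27, 22, 23   # hypothetical BCM pin numbers
CHAIN_BITS = 40                          # hypothetical chain length

def setup():
    GPIO.setmode(GPIO.BCM)
    GPIO.setup([SCLK, SDI, LATCH], GPIO.OUT, initial=GPIO.LOW)
    GPIO.setup(SDO, GPIO.IN)

def shift_vector(vector: int) -> int:
    """Shift a test vector into the chain, MSB first, and return the
    previous chain contents (the captured result) read back on SDO."""
    result = 0
    for i in reversed(range(CHAIN_BITS)):
        GPIO.output(SDI, bool((vector >> i) & 1))
        result = (result << 1) | GPIO.input(SDO)
        GPIO.output(SCLK, GPIO.HIGH)   # rising edge shifts one bit
        GPIO.output(SCLK, GPIO.LOW)
    # Latch the new vector onto the ALU inputs and capture its outputs
    GPIO.output(LATCH, GPIO.HIGH)
    GPIO.output(LATCH, GPIO.LOW)
    return result

def run(vector_file: str, result_file: str):
    """Feed every vector from vector_file through the chain and log what
    comes back; the first shift returns stale data and is skipped."""
    setup()
    try:
        with open(vector_file) as vf, open(result_file, "w") as rf:
            first = True
            for line in vf:
                captured = shift_vector(int(line.strip(), 16))
                if not first:
                    rf.write(f"{captured:0{CHAIN_BITS // 4}x}\n")
                first = False
            # one extra shift of zeros to retrieve the last vector's result
            rf.write(f"{shift_vector(0):0{CHAIN_BITS // 4}x}\n")
    finally:
        GPIO.cleanup()

if __name__ == "__main__":
    run("vectors.hex", "results.hex")
```

Since the vectors and results are just hex text files, they could live on the Pi's SD card or come straight off an NFS mount, much like the Blackfin setup did.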