That's mostly correct.
Let's look at the case where an external device is driving a signal 'A' (synchronous to a clock 'CLK') into an FPGA input. Inside the FPGA, we use CLK to clock a flip-flop FF1 whose D input is connected to the IO pad. The output of FF1 is another signal 'B' that goes to a second FF (FF2), also clocked by CLK. The signal has some propagation delay from the IOB to FF1; let's call that t_PD1. The delay from FF1 to FF2 is t_PD2. Each FF's output changes state t_CO after the clock edge; this is the clock-to-out time.
When you apply a PERIOD constraint to the CLK signal, you are telling the FPGA tools what the clock period of CLK is, so that they can verify every signal arrives at a FF at least a setup time before the next clock edge. In our example, the tools would ensure that t_CO (of FF1) + t_PD2 + t_SETUP is always less than t_PERIOD. This is easy enough for signals that stay inside the FPGA, since t_CO is well known.
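The arithmetic the tools perform for this internal path can be sketched as a simple slack calculation. The numbers below are illustrative placeholders, not taken from any datasheet:

```python
# Setup check for the internal path FF1 -> FF2, as described above.
# All values are illustrative assumptions, in nanoseconds.
T_PERIOD = 10.0   # the PERIOD constraint (100 MHz clock)
T_CO     = 0.6    # FF1 clock-to-out
T_PD2    = 7.5    # routing/logic delay from FF1 to FF2
T_SETUP  = 0.5    # FF2 setup time

slack = T_PERIOD - (T_CO + T_PD2 + T_SETUP)
print(f"setup slack: {slack:.2f} ns")  # positive slack means timing is met
```

If the slack comes out negative, place-and-route reports a timing failure on that path.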
An OFFSET IN constraint is the equivalent of specifying t_CO of the device that is driving the signal 'A', plus any propagation delay of the circuit-board trace carrying that signal. Once this is known, the tools can ensure that t_OFFSET + t_PD1 + t_SETUP is less than t_PERIOD, and if this is not feasible the tools let you know by failing timing.
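Turning that inequality around tells you how much external delay (driver clock-to-out plus board trace) the input path can tolerate. Again, the numbers are illustrative assumptions only:

```python
# Largest external delay (driver t_CO + trace delay) the input can absorb
# before FF1 misses setup. Illustrative values in nanoseconds.
T_PERIOD = 10.0   # clock period
T_PD1    = 1.2    # pad -> FF1 delay inside the FPGA
T_SETUP  = 0.5    # FF1 setup time

max_t_offset = T_PERIOD - (T_PD1 + T_SETUP)
print(f"max tolerable OFFSET IN: {max_t_offset:.2f} ns")
```

If your measured or calculated external delay exceeds this budget, the OFFSET IN constraint will fail and you know the board, not the FPGA, is the problem.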
https://www.xilinx.com/support/answers/10020.html

You will need to know the value of t_CO of the device driving the FPGA, and also the board delays. Board delays can be calculated from signal integrity analysis. For the most part, you won't need to specify OFFSET constraints unless you're dealing with rather high-speed signals (like a wide parallel bus), and even there they can be skipped unless your board is marginal. I've done designs at ~200 MHz that didn't need any OFFSET constraints. The rule I follow is to always use the FFs in the IOB, which makes the pad-to-FF delay (t_PD1 in the example) fixed and very small. It is, however, always a good idea to add a PERIOD constraint based on the actual clock frequency. In either case, the constraints are not used to 'adjust' any delays; they are merely constraints that steer place-and-route toward a solution that meets timing.
Now one might ask why a similar thing doesn't occur with the CLK signal. It does, but the CLK signal is assumed to enter the device on a global clock input and be distributed to all device FFs via the global clock tree. If that's true, then the skew between the CLK IOB and the input of each FF is carefully controlled, known to the place-and-route tools, and can therefore be subtracted out of all the measurements. In other words, the constraint is actually (t_CO + t_PD2 + t_SETUP) < (t_PERIOD - t_CLOCKSKEW). In some cases, t_CLOCKSKEW can be made zero by using the Digital Clock Managers (or MMCMs in newer devices) to implement a zero-delay clock buffer.
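The skew-adjusted inequality can be wrapped up as a small check. The delay values are the same illustrative placeholders as before, not real device numbers:

```python
def meets_setup(t_co, t_pd, t_setup, t_period, t_clockskew=0.0):
    """True if (t_co + t_pd + t_setup) < (t_period - t_clockskew), all in ns."""
    return (t_co + t_pd + t_setup) < (t_period - t_clockskew)

# Same illustrative FF1 -> FF2 path at 100 MHz:
print(meets_setup(0.6, 7.5, 0.5, 10.0))                  # skew zeroed by a DCM -> True
print(meets_setup(0.6, 7.5, 0.5, 10.0, t_clockskew=1.5)) # 1.5 ns of skew -> False
```

Note how the same path that comfortably meets timing with a deskewed clock fails once enough uncontrolled clock skew eats into the period.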
Also, when you drive a clock off chip, you should use a DDR FF in the IOB (ODDR2) with C0 and C1 fed by clocks 180 degrees out of phase, D0 = '1' and D1 = '0'. This ensures that the skew between the internal clock net and the IOB pad is both minimized and deterministic. Otherwise, the clock signal ends up taking a random route from somewhere inside the FPGA to the IOB, which changes each time you resynthesize. If you do this, and make sure your data signals go through output FFs in the IOBs, the skew between them will be minimal and deterministic.
Edit: fixed notation