Time is not something we can measure directly, only elapsed time; and between any two events, the elapsed time varies from one location to another. In some extreme cases, even the observed order of two (causally unconnected) events can be reversed.
There are different classes of problems stemming from this. The nastiest one, I believe, is synchronization: consider GPS, or communication with space probes.
There are several clocks, or timekeeping bases, we use here on Earth. The most common is UTC (Coordinated Universal Time), which is based on TAI (International Atomic Time), with leap seconds added or subtracted as needed. (Thus, a minute can be 59 to 61 seconds long, even though almost all minutes are exactly 60 seconds.)
Unix time as used on most computers is based on UTC: it counts exactly 86,400 seconds per UTC day, so leap seconds are invisible in it.
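For example (a minimal sketch in Python; the dates refer to the leap second that was inserted at the end of 2016), the Unix-time difference across a day containing a leap second is still exactly 86,400:

    from datetime import datetime, timezone

    t0 = datetime(2016, 12, 31, tzinfo=timezone.utc).timestamp()
    t1 = datetime(2017, 1, 1, tzinfo=timezone.utc).timestamp()
    print(t1 - t0)  # 86400.0, even though 86401 SI seconds actually elapsed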
Notably, TAI is the average of over 450 atomic clocks all over the Earth, so it does not represent the elapsed time at any single point, but the average elapsed time on the surface of the Earth since a preset epoch; and that epoch itself is purely notional, not physically verifiable or measurable at all.
To convert from UTC to local time, you need two databases: the leap second table (recording when leap seconds were added or removed), and the time zone database, which includes daylight saving rules. You need the leap second table even just to count the exact number of seconds between two UTC date-times.
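A minimal sketch of that last point (in Python, with a deliberately truncated, hypothetical leap second table; a real program would load the full current list, e.g. from IERS Bulletin C):

    from datetime import datetime, timezone

    # Hypothetical, truncated table: the last instants before the two most
    # recent (positive) leap seconds. A real table would also record signs,
    # since negative leap seconds are allowed, even if never yet used.
    LEAP_SECONDS = [
        datetime(2015, 6, 30, 23, 59, 59, tzinfo=timezone.utc),
        datetime(2016, 12, 31, 23, 59, 59, tzinfo=timezone.utc),
    ]

    def si_seconds_between(t0, t1):
        """Exact SI seconds elapsed from UTC datetime t0 to t1 (t0 <= t1)."""
        naive = (t1 - t0).total_seconds()  # datetime pretends days are 86400 s
        leaps = sum(1 for ls in LEAP_SECONDS if t0 <= ls < t1)
        return naive + leaps               # each positive leap second adds one

    t0 = datetime(2016, 12, 1, tzinfo=timezone.utc)
    t1 = datetime(2017, 1, 1, tzinfo=timezone.utc)
    print(si_seconds_between(t0, t1))      # 2678401.0, not 2678400.0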
All this means that our timekeeping is not, and cannot be, exact.
Realizing this helps, because then you start thinking about what kind of precision is actually useful. The timekeeping-related problems then start falling into different categories, depending on how important they are to the problem at hand, becoming just more numerical/physical (as opposed to algorithmic/theoretic) problems to be solved.
In numerical physical simulations with discrete time steps – involving anything from individual atoms to stars and galaxies – one key "trick" is to dynamically adjust the time step size to the scale at which events can be represented at useful precision. (It makes it MUCH harder for us humans to observe the time evolution of the system, though; our brains cannot really deal with time "speeding up" or "slowing down".)
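A minimal sketch of that trick (in Python, with a toy system and illustrative tolerances; real integrators use better error estimators): take one full step and two half steps, compare, and shrink or grow the step accordingly.

    def f(y):
        return -y  # toy system: dy/dt = -y, exact solution exp(-t)

    def euler_step(y, dt):
        return y + dt * f(y)

    def simulate(y, t_end, dt=0.1, tol=1e-4):
        t = 0.0
        while t < t_end:
            dt = min(dt, t_end - t)
            full = euler_step(y, dt)                        # one full step
            half = euler_step(euler_step(y, dt/2), dt/2)    # two half steps
            err = abs(full - half)                          # local error estimate
            if err > tol and dt > 1e-9:
                dt /= 2      # too coarse for useful precision: retry smaller
                continue
            y, t = half, t + dt
            if err < tol / 4:
                dt *= 2      # comfortably accurate: allow a larger step
        return y

    print(simulate(1.0, 5.0))  # close to exp(-5) ≈ 0.00674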
In many cases, from programming to humans waking up from sleep, the exact moment something is scheduled to happen in the future is less important than that moment being suitable. (You want to wake up from non-REM sleep, because waking up from REM sleep leaves you groggy. You don't want your computer to do housekeeping stuff like checking for updates while you're working or watching media full-screen, because it leads to jerkiness or added latencies.)
All this leads to moving away from "do X at time Y" toward "do X after time Y1 but before time Y2": away from specific moments or points in time, toward time intervals.
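A minimal sketch of what that can look like in code (Python; the window bounds and the task are illustrative placeholders), using random jitter within the interval so that many machines don't all fire at the same instant:

    import random
    import time

    def run_within(task, earliest, latest):
        """Run task at a random instant within [earliest, latest] (monotonic clock)."""
        delay = random.uniform(earliest, latest) - time.monotonic()
        if delay > 0:
            time.sleep(delay)
        task()

    now = time.monotonic()
    run_within(lambda: print("housekeeping"), now + 1.0, now + 5.0)

Note the use of time.monotonic() rather than wall-clock time: for measuring and scheduling elapsed time, a monotonic clock sidesteps leap seconds and clock adjustments entirely.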
When you start really working with different aspects of time, it starts changing the way you think about it. (In a good way, I believe.)