SEI/CLI are aliases for the BSET/BCLR instructions operating on bit 7 (the I flag) of SREG. SEI and CLI appear in the instruction set listing, but they are not distinct CPU instructions as such: if you look up the opcode patterns you'll find that they perform exactly the operation you've described.
So in terms of output code you could get the same result. However, I would always use the sei()/cli() macros, as they are the most predictable: your bitmask operation is subject to the compiler's optimization effort, which in a debug build may be set very low.
Whether adding these instructions will fix your program's timing problems (i.e. missing deadlines) is impossible to say for sure without a lot more information. There is a whole branch of theoretical computer science that analyzes real-time systems so that it can be proven a program will always meet its deadlines given X processors, Y tasks, Z execution times, A dependencies, B buffer sizes, etc. It still remains mostly an academic exercise, though.
I just mean to say that this is not a problem that disabling/enabling a few interrupts will fix, nor one that can be explained in one sentence. People often do "best effort" engineering plus lots of testing/verification instead.
In practice, you might disable interrupts temporarily to protect a critical section. For example, when:
- You need an atomic operation: exclusive access to a variable in your program, which is especially important if an interrupt handler also uses that variable.
- You want exclusive access to a peripheral, and without an RTOS providing semaphores you could disable interrupts instead.
- You have a piece of timing-sensitive code. For example: sending a bitstream to a WS2812 LED strip. Most AVR drivers I've seen use assembly for that, with carefully placed NOP instructions to get the timing right. If an interrupt is serviced in the middle of that routine, the bit transmission is ruined.
You may notice that the first two are really countermeasures to prevent timing behaviour from influencing the functional behaviour of a program. By "functional behaviour" I mean things like the reordering of operations. Something to always look out for.
On the other hand, note that disabling interrupts does mean you're disturbing the timing of the interrupt routines themselves. Those also have deadlines by which they need to be served, otherwise a hardware overflow/underflow may occur. For example: a serial UART receive interrupt only has a limited FIFO (often just 1 byte). Obviously that byte needs to be read before the next one arrives, otherwise the program can't keep up with the serial communication. Given a certain baud rate you can calculate how long that window is.
Since UART is an asynchronous protocol, data can arrive at any time. So you must also make sure that your longest critical section is short enough that you are never too late to read a byte from the hardware.
This is why, in practice, people keep interrupt handlers and critical sections as short as possible.