It is the caching and cross-core synchronization wrt. virtual memory that make the most theoretically interesting options not so simple in practice.
In your example, both kernels have exactly the same choices: either the kernel copies the data, or it remaps shared memory with all the TLB shenanigans that involves (hopefully the architecture is sane enough that the data cache can be left alone, with only the TLB being involved). The overhead of message passing is de facto irrelevant: both have to do the same things on the hardware, so it really doesn't matter.
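(To make the two choices concrete, here is a throwaway user-space sketch using POSIX primitives; a kernel does the equivalent internally. Error handling is omitted, the shm name "/xfer_demo" is arbitrary, and a parent/child pair stands in for the two communicating processes. Link with -lrt on older glibc.)

    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <sys/wait.h>
    #include <unistd.h>

    #define LEN 4096

    int main(void) {
        /* Choice 1: copy the data. The payload moves through a kernel
         * buffer (copied in, copied out); no page tables are touched. */
        int pipefd[2];
        pipe(pipefd);

        /* Choice 2: remap shared memory. Both processes map the same
         * physical pages; only page tables (and thus TLB entries) are
         * involved, the payload itself is never copied. */
        int shmfd = shm_open("/xfer_demo", O_CREAT | O_RDWR, 0600);
        ftruncate(shmfd, LEN);
        char *shared = mmap(NULL, LEN, PROT_READ | PROT_WRITE,
                            MAP_SHARED, shmfd, 0);

        if (fork() == 0) {                     /* child = "receiver" */
            char buf[LEN];
            read(pipefd[0], buf, LEN);         /* copied path */
            printf("via copy:  %s\n", buf);
            printf("via remap: %s\n", shared); /* same pages, no copy */
            _exit(0);
        }

        strcpy(shared, "hello from another address space"); /* in place */
        write(pipefd[1], shared, LEN);  /* copied through the kernel;
                                         * also synchronizes the child */
        wait(NULL);
        shm_unlink("/xfer_demo");
        return 0;
    }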
I did not intend the preceding paragraphs as an example of caching and cross-core synchronization, but as an example of issues with isolation.

I probably should have put a dotted horizontal line or something to indicate the change of focus.
(When you consider a message-passing-based microkernel and a monolithic kernel, yes, both can use the same methods. However, whether you can rely on all hardware being able to pass a reference across privilege boundaries (between different processes) while maintaining privilege separation correctly, i.e. isolating processes reliably when doing large data transfers that way, is in question. This means that while in theory one could claim, say, "message passing has less overhead, because it allows passing only a reference to the data, without copying the actual data", it is not generally/universally true in practice. Marco is also right that the statement would be silly/malformed/wrong anyway, because monolithic kernels can do the same. Switching the point of view to isolation, you see my point: isolation is much more complicated in practice than it sounds in theory. It is also why I believe security has to be designed into the very core, and cannot be added on top later.)
_ _ _
Note that overall in this thread, my focal point is that practical details like the hardware architecture, and even the programming language used, affect my opinion on which I would prefer, a monolithic kernel or a microkernel, even if pure performance were my only measure.
Currently, the hardest topics to 'grok' seem to be the interactions between privilege separation and caching mechanisms. We see this in the relatively recent design bugs wrt. speculative execution (Spectre and Meltdown being the obvious examples). If you follow e.g. the Linux kernel mailing list, caching-related bugs and hardware fixes turn up at a steady rate that depends only on the complexity of the caching architecture; the typical problems stem from humans not having understood all the implications of the designed behaviour, rather than from silicon bugs.
_ _ _
If we hark back to the argument between Torvalds and Tanenbaum, one way to look at it (if we are neither of them ourselves) is to see the former arguing from practice, and the latter arguing from theory and models. Neither was wrong, per se, in my opinion, exactly because theory ≠ practice; and the argument itself was only possible between people who assign very different weights to practical implementation vs. theoretical basis.