Latency becomes a problem if you are doing a lot of random memory accesses. However, if you are only doing continuous (sequential) accesses, then using transceivers could give you better performance.
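To make that concrete, here is a minimal sketch (my own illustration, not tied to any particular system) that walks the same buffer once sequentially and once in a pseudo-random order; on a typical DDR system the random walk is much slower because each access pays activation/CAS latency instead of streaming from an already-open row.

```c
/* Minimal sketch: sequential vs. pseudo-random reads over the same buffer.
 * Buffer size and access pattern are arbitrary illustrative choices. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N (64u * 1024 * 1024)   /* 64M words = 256 MiB, well beyond any cache */

static double seconds(void) {
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec + ts.tv_nsec * 1e-9;
}

int main(void) {
    unsigned *buf = malloc((size_t)N * sizeof *buf);
    if (!buf) return 1;
    for (size_t i = 0; i < N; i++) buf[i] = (unsigned)i;

    volatile unsigned sink = 0;

    double t0 = seconds();
    for (size_t i = 0; i < N; i++)          /* continuous (sequential) accesses */
        sink += buf[i];
    double t1 = seconds();

    size_t idx = 0;
    for (size_t i = 0; i < N; i++) {        /* random accesses: LCG-scrambled index */
        idx = (idx * 6364136223846793005ULL + 1442695040888963407ULL) % N;
        sink += buf[idx];
    }
    double t2 = seconds();

    printf("sequential: %.3f s, random: %.3f s\n", t1 - t0, t2 - t1);
    free(buf);
    return 0;
}
```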
A similar idea was used in the late 90s with the Rambus interface on Pentium 4 machines (not actually using transceivers, but replacing the parallel bus with a partly serial protocol using fewer signals). Even though its theoretical bandwidth was higher than SDR's, in practice Rambus systems were a lot slower in most real-world uses because of the added latency, and it was abandoned, at least in PCs.
And yet modern PCs use DDR4 memory with a best-case latency of about 15 cycles (if reading a row that's already open) and a typical latency of 30-40 cycles.
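For scale, here's a quick conversion of those cycle counts into nanoseconds, assuming a DDR4-3200 part with a 1600 MHz command clock (the specific speed grade is my example, not something stated above):

```c
/* Cycles-to-nanoseconds arithmetic for the latencies mentioned above,
 * assuming a DDR4-3200 part: data at 3200 MT/s, command clock at 1600 MHz. */
#include <stdio.h>

int main(void) {
    const double clock_mhz = 1600.0;        /* example DDR4-3200 command clock */
    const int cycles[] = { 15, 30, 40 };
    for (int i = 0; i < 3; i++)
        printf("%2d cycles ~= %.1f ns\n", cycles[i], cycles[i] * 1000.0 / clock_mhz);
    return 0;
}
```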
You've got to remember that Rambus competed in a very different era than what we have today. It was ahead of its time. But it looks like its time is now coming. And the reason I think so is the design of LPDDR and especially GDDR5/6, which both trade larger (in the case of GDDR, huge) latency for extra bandwidth. If you look at an LPDDR2 datasheet (the most recent version of LPDDR with publicly accessible datasheets), you will see that it uses the address/control bus in DDR mode as opposed to SDR (as in "normal" DDR2/3/4), and this way gets by with only 10 pins. As far as I know, more recent versions of LPDDR (3, 4 and 4X) take this further, using even fewer address/control pins, so that a command now takes more than one cycle to clock in. So I think the only thing that prevents desktop-class CPUs from using these high-latency, high-bandwidth memories is the fact that they have to support connectors, which seriously limit the maximum achievable frequencies for signal integrity (SI) reasons.
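A toy illustration of that pin-count/latency trade (the 20-bit command width and pin counts are made up for illustration, not taken from any datasheet): the narrower the command/address bus, the more clock edges it takes to shift a complete command in, so the command transfer itself starts costing latency.

```c
/* Toy model: shifting a fixed-width command word over a narrow CA bus.
 * Fewer pins => more beats (edges) => the command takes longer to clock in.
 * Command width and pin counts below are illustrative only. */
#include <stdio.h>

static void send_command(unsigned cmd, int pins, int beats) {
    printf("%2d CA pins, %d beats:", pins, beats);
    for (int b = beats - 1; b >= 0; b--)                /* MSB-first beats */
        printf(" 0x%03x", (cmd >> (b * pins)) & ((1u << pins) - 1));
    printf("\n");
}

int main(void) {
    unsigned cmd = 0xABCDE;        /* some 20-bit command word */
    send_command(cmd, 10, 2);      /* 10 pins in DDR mode: one clock (two edges) */
    send_command(cmd, 5, 4);       /* 5 pins: two clocks (four edges) */
    return 0;
}
```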
Now here is the thing - those of you who are old enough will recall that this problem has already been solved in the past, with the PCI bus. Back in the old days the "classic" PCI bus had the exact same problem of physical design (a multi-drop parallel bus) limiting available bandwidth. How was that problem solved? Yep, by serializing the protocol and using several multi-gigabit serial lanes instead of a crapton of parallel ones. Meet PCI Express. That bus has proven that you can reach very high speeds even through connectors (16 Gbps per lane for PCI Express 4, 32 Gbps for PCI Express 5).
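A quick back-of-envelope on what those serial lanes add up to (the 128b/130b line-coding factor is the standard one for PCIe 3.0 and later; the x16 width is just the usual slot size):

```c
/* Usable per-direction bandwidth of a multi-lane serial link, PCIe-style:
 * raw line rate scaled by 128b/130b coding, times lane count. */
#include <stdio.h>

static double link_gb_per_s(double gt_per_s, int lanes) {
    double usable_gbps = gt_per_s * 128.0 / 130.0;   /* 128b/130b line coding */
    return usable_gbps * lanes / 8.0;                /* bits -> bytes */
}

int main(void) {
    printf("PCIe 4.0 x16: ~%.0f GB/s per direction\n", link_gb_per_s(16.0, 16));
    printf("PCIe 5.0 x16: ~%.0f GB/s per direction\n", link_gb_per_s(32.0, 16));
    return 0;
}
```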
And this is why I think the future is with more "serialized" memories. Maybe they will never go full serial like PCI Express, but looking at where the bleeding-edge memories (GDDR6) are, it seems fairly logical to think this way. As for random access - it is becoming less and less relevant with every further iteration of DDR/LPDDR/GDDR, because it's relatively easy to work around with cache, while insufficient bandwidth is a brick wall that can't be circumvented to any meaningful degree. A good case study is the current generation of graphics cards - NVidia solved the problem head-on by introducing GDDR6X memory, which gives the extra bandwidth it needs, while AMD has tried to work around it with a massive on-chip cache. Guess what - at 1440p and below that seems to be working for AMD, but at 4K they suffer from insufficient bandwidth and lose to NVidia cards. And I suspect at 8K AMD cards will lose even more badly, because all the tricks and smarts with the cache will only get you so far, and at some point you just have to face the problem head-on as opposed to working around it.
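To see why the cache workaround runs out of steam, here's a rough model (my own sketch; the DRAM bandwidth and hit rates are made-up illustrative numbers, not real figures for any card): if the on-chip cache absorbs a fraction of the traffic, DRAM only has to serve the misses, so effective bandwidth is roughly dram_bw / (1 - hit_rate); once the working set at high resolutions outgrows the cache and the hit rate drops, that multiplier collapses toward the raw DRAM bandwidth.

```c
/* Rough effective-bandwidth model for a GPU with a large on-chip cache:
 * DRAM only serves the misses, so effective_bw ~= dram_bw / (1 - hit_rate),
 * assuming the cache itself is never the bottleneck.
 * All numbers below are illustrative, not measurements. */
#include <stdio.h>

static double effective_bw(double dram_gb_s, double hit_rate) {
    return dram_gb_s / (1.0 - hit_rate);
}

int main(void) {
    const double dram = 512.0;   /* GB/s, example GDDR6 card */
    printf("hit rate 0.70: ~%4.0f GB/s effective\n", effective_bw(dram, 0.70));
    printf("hit rate 0.50: ~%4.0f GB/s effective\n", effective_bw(dram, 0.50));
    printf("hit rate 0.30: ~%4.0f GB/s effective\n", effective_bw(dram, 0.30));
    return 0;
}
```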