Haha. This is right up my (old) alley!
I actually did honest-to-god research on the feasibility of a concept like this while being employed as R&D engineer at one of the top-5 cellphone ODMs.
The most important factor in a concept like this is lag, i.e. the time from when you press a button to when you see something change on the screen. Without any screen mirroring, you're talking about <16.6 ms of lag (i.e. the next frame will show the change on a 60 Hz screen).
First off, there's _no_ existing standard (AirPlay, DLNA, Miracast, IWD) that comes close to that kind of latency.
However, if you do a proprietary protocol (yay, just what the world needs, another protocol) over WiFi, you can get closer. In fact, you can theoretically get down to 4/60th of a second: 3 frames of lookahead for H.264 compression, plus 1 frame for decompression. Transmission times are negligible if you're not using a connection-oriented protocol like TCP as the L4 protocol. (No, really, the amount of data is so small that it goes really quick.)
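To make that concrete, here's a rough Python sketch (purely illustrative; all names are mine, not from any real product or protocol) of the latency arithmetic above, plus the "fire encoded frames off as UDP datagrams instead of over TCP" idea:

```python
import socket

# Latency budget from the numbers above, for a 60 Hz display.
FRAME_MS = 1000 / 60        # one frame = ~16.7 ms
LOOKAHEAD_FRAMES = 3        # H.264 encoder lookahead
DECODE_FRAMES = 1           # decode on the receiving side
budget_ms = (LOOKAHEAD_FRAMES + DECODE_FRAMES) * FRAME_MS
print(f"pipeline latency ~= {budget_ms:.1f} ms")  # 4/60th of a second

# UDP instead of TCP: no handshake, no retransmission stalls, so
# transmission time for a small encoded frame stays negligible.
def send_frame(sock, addr, encoded_frame: bytes):
    # A real protocol would also fragment frames larger than the path
    # MTU and add sequence numbers; this only shows the transport idea.
    sock.sendto(encoded_frame, addr)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send_frame(sock, ("127.0.0.1", 9999), b"\x00" * 1200)
sock.close()
```

The point of the datagram socket is exactly the comment's point: you never wait for an ACK, so the wire time is just serialization plus propagation.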
Ok, 4/60th is longer than 1/60th, but let's say it's good enough.
This still won't work very well. How come? Well, the way you would do this is to have the video encoder "scrape" the framebuffer and encode that. Seems perfectly reasonable, doesn't it? And it'll even work for your home screen, Facebook app, web browser, etc.
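A toy model of that scraping loop (every function name here is made up for illustration; real platforms expose the framebuffer and the hardware encoder through vendor-specific APIs):

```python
# Toy model of framebuffer "scraping": whatever the compositor last
# drew (home screen, browser, apps) is what gets encoded and sent.
WIDTH, HEIGHT = 320, 240            # tiny toy resolution, RGBA pixels

def grab_framebuffer(frame_no: int) -> bytes:
    # Stand-in for reading the display's front buffer each vsync.
    return bytes([frame_no % 256] * 4) * (WIDTH * HEIGHT)

def encode(raw: bytes) -> bytes:
    # Stand-in for a hardware H.264 encoder.
    return raw[:128]  # pretend the frame compresses down to 128 bytes

packets = []
for frame_no in range(3):           # three simulated vsyncs
    raw = grab_framebuffer(frame_no)
    packets.append(encode(raw))

print(len(packets))  # 3 encoded frames ready to send
```

The key property: the encoder only ever sees the composited pixels, which is exactly why it works for normal apps and fails for protected video paths, as described next.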
What it _won't_ work for, though, is probably the main use case for a device like this: _video watching_.
DRMed content doesn't end up in the framebuffer at all (on a reasonable platform at least), so the video encoder would be encoding a giant black screen instead.
Now, you might be saying: "But hey, supposed expert, you're wrong. This works with AirPlay, Miracast, etc." And I'd be inclined to agree with you. Those are _system_ integrations, with signed binaries and legal contracts regulating them. I could (and did) modify the system so that I also had access to the protected buffer where the video ends up. However, this is _way_ outside the realm of possibility for a user-installable application, no matter how hard you've rooted your phone.
So, basically: with _a lot_ of technical know-how about video encoding and how to send data over WiFi with low latency, they might end up with a product that works for most things, except DRMed video. But I don't see it happening anyway. The technical know-how needed to do this is massive, and obtaining some of it more or less requires access to NDA'd documentation from the platform vendor (how do you get the video encoder to encode the framebuffer directly, without having to double-buffer and eat a 1/60th-of-a-second penalty for it?), so the chance of some young entrepreneur pulling it off is minimal at best.
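To spell out that parenthetical: if you can't hand the framebuffer to the encoder in place and have to copy it into a second buffer first, you eat roughly one extra frame of latency on top of the pipeline. A quick back-of-the-envelope check (same assumed 60 Hz numbers as above):

```python
FRAME_MS = 1000 / 60                     # one frame at 60 Hz
pipeline_ms = (3 + 1) * FRAME_MS         # encoder lookahead + decode
copy_penalty_ms = FRAME_MS               # one extra frame to double-buffer
print(f"zero-copy:       {pipeline_ms:.1f} ms")                    # 4/60th s
print(f"double-buffered: {pipeline_ms + copy_penalty_ms:.1f} ms")  # 5/60th s
```

That one-frame difference is exactly the kind of thing you can only eliminate with the vendor's NDA'd encoder documentation.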