Not if the entirety of the firmware (save for the basic bootstrapper, which, if the manufacturer were determined to prevent what we're talking about, could be cryptographically signed, with its contents verified and the signature enforced in hardware) is also encrypted with the same private key as the individual features. Attacking that would require violating copyright law, because the manufacturer could claim copyright on the public key.
I agree you can make things a lot harder, but you also have to factor in that the supplier needs to build a lot of versatility into the system, allowing things like time trials, licence transfer, and so on. That would not be so simple to develop and manage, so a common solution is to approach a third-party company that specialises in this stuff and let it 'protect' the system using its own licensing system. That's where the problems start, because it becomes much harder to keep it all secure.
Those things are important for general-purpose computer systems, of course, but aside from time trials they really aren't terribly relevant for special-purpose devices such as oscilloscopes.
Implementing time trials in the framework I described would be trivial: the various attributes of the trial could be encoded along with the serial number and feature name, and included in the packet that is cryptographically signed.
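To make that concrete, here's a rough sketch of such a signed feature packet. The key is a deliberately tiny textbook RSA pair, and the serial number, feature name, and field layout are all made up for illustration -- a real design would use RSA-2048 or an EC scheme:

```python
import hashlib

# Toy RSA key pair -- deliberately tiny, for illustration only.
# (n, e) is the public key baked into the instrument's firmware;
# d is the manufacturer's private signing key.
p, q = 61, 53
n = p * q                 # 3233
e = 17
d = 413                   # e * d == 1 (mod lcm(p-1, q-1) == 780)

def sign(packet: bytes) -> int:
    """Manufacturer side: 'encrypt' the packet digest with the private key."""
    h = int.from_bytes(hashlib.sha256(packet).digest(), "big") % n
    return pow(h, d, n)

def verify(packet: bytes, sig: int) -> bool:
    """Instrument side: recover the digest with the public key and compare."""
    h = int.from_bytes(hashlib.sha256(packet).digest(), "big") % n
    return pow(sig, e, n) == h

# The packet binds the feature to one serial number and an expiry date,
# so a time-trial licence can't be moved to another unit or extended by
# editing the expiry field.
packet = b"serial=AB1234567|feature=BW100M|expires=2015-06-30"
sig = sign(packet)
print(verify(packet, sig))   # True
```

Changing so much as one byte of the packet changes its digest, so the stored signature no longer matches and the instrument rejects the licence; forging a fresh signature requires d, which never leaves the manufacturer.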
I'm getting old and very rusty on stuff like this, but in the past I've successfully attacked systems (these were not TEqpt systems) that came in an encrypted shell or wrapper which could also detect debugging and check both itself and the protected code for signs of tampering.
Tampering, reverse engineering, etc., are becoming much more difficult with the advent of "system on a chip" technology. A nearly tamperproof architecture is quite trivial with such a system: you store the bootloader and decryption key in PROM inside the SOC (note: not EPROM! It has to be write-once), and the bootloader loads the encrypted code from flash into the SOC's RAM for execution. As long as the decryption key remains undiscovered, the entire system is essentially hack-proof, since hacking it would require gaining access to the chip's internals -- a step that only the most well-heeled organizations might be able to pull off.
Of course, if the decryption key is discovered, it could be used to decrypt the firmware. But even
that doesn't help you if the decryption key is half of an asymmetric key pair, because you'd need the other half in order to encrypt a modified version of the firmware for execution in the SOC.
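A rough sketch of that boot path, again with a toy textbook RSA key. Everything here -- the tiny key, the two-byte magic, the one-byte checksum, the byte-per-block encoding -- is invented to keep the illustration small, but the asymmetry is the point: anyone holding the public exponent can read the firmware, yet producing a modified image that still decrypts cleanly requires the private exponent.

```python
# E and N stand in for the public key burned into the SOC's write-once
# PROM; D is the private exponent, held only by the manufacturer.
p, q = 61, 53
N = p * q      # 3233
E = 17         # public exponent: in PROM, inside the SOC
D = 413        # private exponent: manufacturer-only

MAGIC = b"FW"  # marker the bootloader expects at the front of the image

def manufacturer_encrypt(code: bytes) -> list[int]:
    """'Encrypt' magic + code + checksum with the private key, one byte per block."""
    payload = MAGIC + code + bytes([sum(code) % 256])
    return [pow(b, D, N) for b in payload]

def bootloader_load(blob):
    """Decrypt with the PROM public key; boot only if the image is intact."""
    plain = [pow(c, E, N) for c in blob]
    if any(v > 255 for v in plain):
        return None                      # decrypted to garbage: refuse to boot
    data = bytes(plain)
    if data[:2] != MAGIC or data[-1] != sum(data[2:-1]) % 256:
        return None                      # bad magic or checksum: refuse to boot
    return data[2:-1]                    # would be copied into on-chip RAM

code = b"firmware image"
blob = manufacturer_encrypt(code)
print(bootloader_load(blob) == code)     # True

# Reading the firmware is easy; modifying it is not.  Because RSA is a
# bijection, a changed ciphertext block can never decrypt back to the
# original byte, so it either falls outside the byte range or breaks the
# checksum -- either way, the bootloader refuses it.
tampered = list(blob)
tampered[5] = (tampered[5] + 1) % N      # corrupt one ciphertext block
print(bootloader_load(tampered))         # None
```

That "refuse to boot" branch is what makes discovering the decryption key alone insufficient: you can produce a readable copy of the code, but not a bootable modified one.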
You'd have to replace the SOC itself with your own in order to go any further with the above. At that point, you've probably hit the point of diminishing returns. A company that is selling the device will, of course, be much more concerned about someone learning the techniques they used in their code, but that's what patents are for. And someone who considers such examination of the code to be "wrong" had better think carefully about whether their stance is consistent with their stance on reverse engineering, since they're really the same thing.
In the end, everything depends on just how concerned the manufacturer is about these things. The system I described above
easily takes care of all but the most determined hackers. The more determined a hacker is, the smaller his impact on the marketplace will be, as long as he is unable to share his hacks with others in a way that makes them easy to deploy. Replacing the SOC with one into which someone has programmed their own bootloader is a relatively involved job, something most people here wouldn't bother with.
In any case, the real point of all of this is that a manufacturer concerned about people "hacking" its products to enable features that are otherwise disabled is easily capable of preventing that. It's not like we're talking about some technologically ignorant company here; we're talking about a company that does hardware and software design as its business. It will deploy the kind of measures I'm talking about if it really wants to prevent its customers from easily enabling features. Otherwise, it will do as Rigol has done: make it relatively easy to "hack" the product to enable features, but difficult enough to maintain the illusion that someone who buys a higher-end model of the line is getting something for their money (in reality, they are getting something for their money: support, such as it may be, for the features they purchased). Such easy hackability is not without its business benefits, as has already been pointed out, so insisting that "hacking" such a scope is "wrong" is amusing, to say the least, seeing how the manufacturer wants the scope to be "hackable" in that way.