The criteria for "reclassifying" - well, it's sort of already been done, from the moment they classified the 8088 as a 16 bit CPU. Yes, it works on 16 bits internally, but it's a pretty poor 16 bit CPU because the data bus is only 8 bits wide. That made sense at the time: it could use the same peripheral chips as the 8080/8085, all of which only worked with an 8 bit data bus. That simplified the design and reduced overall cost, but at a performance penalty.
Not every application of a computer requires the maximum possible speed. Otherwise you wouldn't see people plugging 16 MHz Arduinos into 4 GHz PCs and actually doing useful things with them.
The #1 lesson of the last 40 years (if not more) in the computer business is that software lives forever if there is compatible and competitive hardware for it to run on.
You can take a program that ran on that shitty 4.77 MHz 8088 in 1979 and run it without modification and at native speed (not emulated) on a 5 GHz i7, which is going to be something around 20000 to 30000 times faster.
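(Quick sanity check on that figure: 5 GHz against 4.77 MHz is roughly a 1000x clock ratio. On top of that, the 8088 burned anywhere from a few cycles to a couple dozen per instruction, while a modern core retires several instructions per clock - call it another 20-30x of work per cycle, give or take. Multiply the two and you land right in that 20000-30000x range.)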
This is what made Microsoft and Intel two of the richest companies in the world.
To reclassify something even less 16 bit than the 8088 as a 16 bit CPU is madness. So what if a Z80 can manipulate 16 bit numbers directly? How many clock cycles did that take? There was a good article back in the day comparing the Z80's LDDR and LDIR against the small routine that would be needed on another CPU, in this case the 1802. At the same clock speed, the 1802's short program was faster than the single Z80 instruction. I don't recall if there was a similar comparison between a routine to multiply 16 bit numbers and a single instruction.

But to decide the 'bitness' of a CPU based on how many bits it can add/subtract/multiply/divide in a single instruction is just plain silly - internally that one instruction takes many clock cycles, which is very little different from a short program of multiple instructions, each taking 2 or 3 clock cycles. I mean, you can certainly write math routines that do 64 bit math on an 8 bit CPU - that doesn't make it a 64 bit processor. And those more complex built-in instructions didn't really turn the Z80 into a next level CPU. It's just that the 'code' was incorporated into the execution unit.
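To make that concrete, here's a rough sketch in C (mine, not from any period article) of 64 bit addition done with nothing wider than 8 bit operations plus a carry - essentially the ADD/ADC chain a Z80 or 6502 programmer would write in assembly:

    #include <stdint.h>

    /* 64 bit addition using only 8 bit operations and an explicit carry,
     * the software equivalent of an ADD/ADC chain on an 8 bit CPU.
     * Operands are little-endian arrays of 8 bytes. */
    static void add64(uint8_t result[8], const uint8_t a[8], const uint8_t b[8])
    {
        unsigned carry = 0;
        for (int i = 0; i < 8; i++) {
            unsigned sum = (unsigned)a[i] + b[i] + carry; /* 8 bit add with carry in */
            result[i] = (uint8_t)sum;                     /* keep the low 8 bits */
            carry = sum >> 8;                             /* carry out to next byte */
        }
    }

Eight iterations and you've done 64 bit math on an 8 bit CPU. Nobody would call the chip 64 bit for it, and burying the same loop inside the execution unit as microcode shouldn't change the answer either.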
You are confusing the instruction set that software is written for with ONE PARTICULAR IMPLEMENTATION of that instruction set.
That's an easy trap to fall into, because in those days every new faster shinier CPU had a completely new and incompatible instruction set.
There was only ever one implementation of the Z80 instruction set and, yes, it used a ridiculously large number of clock cycles for many operations, which is why I could often beat it for speed with visually inferior 6502 programs.
But it would be trivial today (or even in the late 1980s) to make a chip that ran unchanged Z80 programs at 1 clock cycle per instruction. And in the mid 1990s you could make a chip that ran three or four Z80 instructions per clock cycle.
No one did this, of course, because the Z80 instruction set has only 16 bit addresses, and as such can only conveniently use 64 KB of memory, which was by then far too little to be interesting for people who want or need fast computers. It was also simply a very bad instruction set, especially if you wanted to run a modern programming language such as C or Pascal, which needs stack frames, recursion, structs, etc.
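For a feel of why, here's a completely ordinary C fragment (my own toy example, names hypothetical) of the kind Z80 compilers have always struggled with:

    #include <stdint.h>

    /* Locals and parameters live in a stack frame. On a Z80 the frame
     * pointer is typically IX, and every access to a frame slot becomes
     * an indexed LD r,(IX+d) - around 19 T-states on the original part,
     * versus 4 for a plain register-to-register move. */
    struct point { int16_t x, y; };

    static int16_t manhattan(struct point p, int16_t depth)
    {
        int16_t d = (int16_t)((p.x < 0 ? -p.x : p.x) + (p.y < 0 ? -p.y : p.y));
        if (depth == 0)
            return d;
        /* Recursion forces a fresh frame per call, which the Z80's
         * instruction set gives you almost no help in addressing. */
        struct point half = { (int16_t)(p.x / 2), (int16_t)(p.y / 2) };
        return (int16_t)(d + manhattan(half, (int16_t)(depth - 1)));
    }

Nothing exotic there - a struct, a couple of locals, recursion - yet it's exactly the sort of thing the instruction set gives you no good tools for.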
Even if you're happy with a 64 KB limit today, there are much better instruction sets around than the Z80's, such as AVR or MSP430, both of which probably use about the same number of transistors as a Z80 (and all three have about 32 bytes of registers) and are far more pleasant to program.