Basically, one has to keep two things in mind when talking about CPUs:
1) Marketing decides to name several branches of the product line to designate them as fit for a certain segment. In the early days of the i3/i5/i7 it was simple:
i3 = entry level, i5 = mainstream, i7 = power users. For enthusiasts there were special versions of the i7 with more cores etc.; later these were mostly branded as i9.
In those early days, the i3 was a dual-core CPU, while the i5 and i7 had 4 cores; the i7 had more cache, hyperthreading, and a slightly higher frequency than the others.
2) Usually a complete family of CPUs is generated from the blueprint of the most powerful CPU in the series. In the old days, when the above CPU types (Gen 2 and 3, Sandy Bridge/Ivy Bridge) were manufactured, it was simple. Basically, the design target would be an i7, so we are talking 4 cores and 8 MB of cache.
This setup would have to reach a minimum of 3.4 GHz (i7-3770) to be marketable as an i7 CPU.
During the course of the manufacturing process, however, lots of things can happen that render some portion of the silicon unusable. And here begins the magic of salvaging what is still good and selling it. So, if a problem arises in the area where the shared cache sits, but the rest is still good and usable, sell the chip as an i5, which has less cache than the i7; and to differentiate further between the models, hyperthreading is not enabled.
If a core is unusable, disable it and sell the chip as an i3, and cut the cache in half or less (with Intel, the cache slices are associated with the respective cores, as I remember, so the available cache decreases automatically this way).
If the embedded GPU is broken, sell it as a Xeon CPU for servers, which have a dedicated GPU on the mainboard anyway.
This goes for the first generations of these CPUs; later they beefed up the design, so that the target CPU of the family would have 6 or 8 cores.
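The die-harvesting logic above can be sketched as a simple decision tree. This is purely illustrative, assuming the 4-core Sandy/Ivy Bridge era described above; the function name, the thresholds, and the test criteria are my own made-up simplification, not Intel's actual binning process.

```python
# Hypothetical sketch of the "salvage and sell" binning described above.
# All numbers and names are illustrative, not Intel's real criteria.

def bin_die(working_cores, working_cache_mb, igpu_ok, max_stable_ghz):
    """Classify a tested 4-core die into a marketable SKU."""
    if not igpu_ok:
        return "Xeon"   # broken iGPU: server part, board has its own GPU
    if working_cores == 4 and working_cache_mb >= 8 and max_stable_ghz >= 3.4:
        return "i7"     # fully working die: 4 cores, 8 MB cache, HT enabled
    if working_cores == 4 and working_cache_mb >= 6:
        return "i5"     # defect in the cache area: less cache, HT fused off
    if working_cores >= 2:
        return "i3"     # dead core(s): dual-core, cache halved along with cores
    return "scrap"      # not salvageable at all

print(bin_die(4, 8, True, 3.5))   # -> i7
print(bin_die(4, 6, True, 3.3))   # -> i5
print(bin_die(2, 3, True, 3.1))   # -> i3
print(bin_die(4, 8, False, 3.5))  # -> Xeon
```

The point of the sketch is the ordering: each test peels off the dies that fail one more part of the full i7 specification, so almost every die that comes off the wafer ends up as some sellable product.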