The drop in programming voltage was process driven: they moved to a smaller process, which meant a thinner oxide could store the charge, and a lower voltage was enough to tunnel electrons across the oxide layer. The older parts are, however, slightly more reliable, simply because the "bucket" of charge holds many more electrons and thus leaks away more slowly. In most situations this only shows up after 100 years or so, as most devices will keep data for at least 25 years in practice, provided you use a proper opaque label over the window: not a plain paper one, but one with an integrated aluminium-foil inner layer to block light.
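The retention argument above can be sketched with a deliberately crude model: assume the floating gate loses electrons at a roughly constant rate, so retention time scales with the size of the charge "bucket". All numbers below are invented purely for illustration, not datasheet values.

```python
# Toy back-of-envelope model: the floating gate is a bucket of electrons
# that leaks at a roughly constant rate. Data is gone once the charge
# falls below the sense threshold, so a bigger bucket (older,
# larger-geometry cell) simply takes longer to leak down to that level.

def retention_years(initial_electrons, threshold_electrons, leak_per_year):
    """Years until the stored charge decays below the read threshold."""
    return (initial_electrons - threshold_electrons) / leak_per_year

# Older large-geometry cell: many more electrons stored (made-up figure).
old_cell = retention_years(1_000_000, 100_000, leak_per_year=1_000)
# Shrunken cell: same relative sense threshold, ten times fewer electrons.
new_cell = retention_years(100_000, 10_000, leak_per_year=1_000)
# In this toy model the larger cell retains data roughly ten times longer.
```

Real leakage is more complicated (it depends on temperature, oxide quality and light exposure), but the scaling intuition is the same: more electrons per bit means longer retention.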
Later on, EEPROM was developed as a more reliable method, and it, along with some of the later EPROM variants, has a built-in voltage generator. This is used to produce both the internal reference voltages (internally the chip is not digital but analogue, with a single-bit A/D converter in an EPROM, and a few bits more in modern MLC flash) and the higher voltages required to erase the relevant cells/blocks before programming (for EEPROM) and to program the cells when they are blank.
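That "single-bit A/D converter" remark can be made concrete with a small sketch: reading a cell means comparing its analogue threshold voltage against one or more references. An EPROM sense amp uses one reference (one bit per cell); MLC flash uses several references to resolve more bits per cell. The reference levels below are made up for illustration.

```python
# Sketch of cell sensing as analogue-to-digital conversion. Voltages are
# hypothetical illustration values, not from any real device.

def sense_eprom(cell_vt, vref=3.0):
    """One reference = 1 bit: programmed (high Vt) reads 0, erased reads 1."""
    return 0 if cell_vt > vref else 1

def sense_mlc(cell_vt, vrefs=(1.5, 3.0, 4.5)):
    """Three references split the Vt range into four levels = 2 bits/cell."""
    return sum(cell_vt > v for v in vrefs)  # level 0..3

# An erased EPROM cell (low Vt) reads as 1, a programmed one as 0:
erased = sense_eprom(1.0)      # -> 1
programmed = sense_eprom(4.0)  # -> 0
# The same analogue Vt in an MLC cell resolves to one of four levels:
level = sense_mlc(2.0)         # between the 1st and 2nd reference -> 1
```

The design trade-off follows directly: more levels per cell means more data per die, but the voltage windows between references shrink, so MLC is more sensitive to charge leakage than a single-bit cell.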
In all cases EPROM has a built-in voltage converter to generate the substrate bias, except for the very early types, which needed an external -21 V substrate bias and were very unhappy if it was not present. They did not blow up until it reached around +2 V, at which point the chip would crowbar quite nicely. This was a very big advertising headline for the 2716, promoted with a bold "Single 5V supply voltage only needed" as a selling point.
The good EPROM programmers verify the device after writing, using a first pass at 5.00 V, a second pass at 6.00 V and a third pass at 4.50 V, to confirm the programmed data will read correctly over the full supply-voltage range the device will experience in use. Military-grade parts repeated the test twice more, at -55 °C and at +125 °C, to ensure they would work under all circumstances.
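The multi-voltage verify pass above can be sketched as a simple loop. The hardware-control calls (`set_vcc`, `read_all`) are hypothetical, stubbed out here with a fake device so the structure is runnable; a real programmer would drive its supply and read circuitry instead.

```python
# Hedged sketch of a margin-verify pass, assuming hypothetical programmer
# hardware behind set_vcc()/read_all(). Voltages are the ones described
# above: nominal first, then a high and a low margin.

VERIFY_VOLTAGES = (5.00, 6.00, 4.50)

def verify(device, expected, voltages=VERIFY_VOLTAGES):
    """True only if every byte reads back correctly at every supply voltage."""
    for vcc in voltages:
        device.set_vcc(vcc)       # real hardware would switch Vcc here
        if device.read_all() != expected:
            return False          # a marginal cell failed at this Vcc
    return True

class FakeEprom:
    """Stand-in for real programmer hardware, for illustration only."""
    def __init__(self, data):
        self._data = bytes(data)
    def set_vcc(self, vcc):
        pass                      # no-op in the fake device
    def read_all(self):
        return self._data

image = bytes([0xDE, 0xAD, 0xBE, 0xEF])
ok = verify(FakeEprom(image), image)  # True: all passes match
```

A military-spec flow would wrap the same loop in an outer loop over chamber temperatures (-55 °C and +125 °C) before declaring the part good.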