First of all, the sensor is a tiny current source and the voltage is measured as the drop across a (large) precision resistor. That means you really don't want to connect it directly to an ADC, because the ADC input will change the load seen by the sensor and skew the voltage readings. You need to buffer the output of the sensor. If the sensor's output already made good use of the ADC's input range, a unity-gain buffer would be enough, but in this case you probably want to add gain anyway. So, as far as I'm concerned, the question of whether or not to add gain is answered.
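As a rough sizing example (the numbers here are assumptions for illustration, not from your sensor's datasheet): if the cell puts out something like 10 mV in air and perhaps 50 mV at 100% O2, and your ADC reference is 3.3 V, you'd want a gain on the order of

$$ G = 1 + \frac{R_f}{R_g} \approx \frac{3.3\ \text{V}}{50\ \text{mV}} \approx 66 $$

so, for instance, \$R_f = 65\ \text{k}\Omega\$ and \$R_g = 1\ \text{k}\Omega\$ in a non-inverting stage. Check the actual output range in your sensor's datasheet before picking values.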
Secondly, the ADC in a typical MCU is going to be noisy. You have a very slow-moving signal, so you can average as much as you need or want. In a noisy laboratory setting I have averaged 2048 samples over 100 ms (6/60 of a second, i.e. exactly six 60 Hz power-line cycles) and have been able to get 12 ENOB out of a noisy 10-bit ADC. You could probably average over several seconds.
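A minimal sketch of that kind of averaging, assuming a blocking `read_adc()` that returns a 10-bit result and some MCU-specific delay/timer (both are placeholders for whatever your HAL provides):

```c
#include <stdint.h>

/* Placeholder for the MCU-specific ADC read; returns a 10-bit sample (0..1023). */
extern uint16_t read_adc(void);

/* Average 2048 samples spread over ~100 ms (six 60 Hz line cycles).
 * The sum is at most 2048 * 1023, which fits easily in 32 bits.
 * Returning the sum shifted down by 5 (instead of the plain average)
 * keeps some of the extra resolution gained by oversampling,
 * as a 16-bit result.
 */
uint32_t read_adc_averaged(void)
{
    uint32_t sum = 0;
    for (uint16_t i = 0; i < 2048; i++) {
        sum += read_adc();
        /* pace the loop (e.g. ~49 us per sample) so 2048 reads span ~100 ms */
    }
    return sum >> 5;
}
```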
Third, if you really want 1% accuracy, then I would make some effort to calibrate the final system. The ADC gives you an integer over some fixed range that you need to multiply by some constant to convert to %PPO2. If you have other sensors (with electronics and readouts) that you trust, then you might be able to use them to determine that constant experimentally. That would mean your sensor is as well calibrated as your existing ones. Or calculate the constant based on information from the sensor's datasheet, which is what you were probably going to do anyway.
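In code that calibration step is just a division and a multiply; something like this sketch (names are illustrative, and it assumes the sensor is linear through zero, which is what "multiply by some constant" implies):

```c
#include <stdint.h>

static float counts_per_percent;  /* the calibration constant */

/* Calibrate: with the probe exposed to a known condition, pass in the
 * averaged ADC reading and the %PPO2 reported by a sensor you trust
 * (or the value computed from the datasheet).
 */
void calibrate(uint32_t counts, float trusted_percent_o2)
{
    counts_per_percent = (float)counts / trusted_percent_o2;
}

/* Convert a later averaged reading to %PPO2 using that constant. */
float to_percent_o2(uint32_t counts)
{
    return (float)counts / counts_per_percent;
}
```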