Adding gate resistance will provide some reduction in EMI, but at the cost of greater heat dissipation in the MOSFET and reduced efficiency. The gate resistance slows the rise time of the switch-node waveform, so the MOSFET spends more time in the transition region between the "on" state (very low resistance, Rds(on)) and the "off" state (effectively infinite resistance for practical purposes). During that extended transition time, the finite resistance between drain and source gives you I^2*R loss.
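To get a feel for the trade-off, here is a rough sketch of the extra switching loss from slowed edges, using the common triangular-overlap approximation (loss ≈ ½·V·I·(t_rise + t_fall)·f_sw). All the numbers below are illustrative assumptions, not values from the question:

```python
# Rough estimate of V*I overlap loss during switching transitions.
# All values are hypothetical; substitute your own converter's numbers.
v_in = 12.0       # drain-source voltage being switched (V)
i_load = 3.0      # inductor current at the switching instant (A)
f_sw = 500e3      # switching frequency (Hz)

def switching_loss(t_rise, t_fall):
    # Triangular approximation of the voltage/current overlap
    # during each transition, summed over both edges per cycle.
    return 0.5 * v_in * i_load * (t_rise + t_fall) * f_sw

p_fast = switching_loss(10e-9, 10e-9)   # fast edges, no added gate resistance
p_slow = switching_loss(40e-9, 40e-9)   # edges slowed 4x by a gate resistor

print(f"fast edges: {p_fast:.2f} W, slow edges: {p_slow:.2f} W")
```

With these assumed numbers the slowed edges quadruple the transition loss, which is the efficiency penalty being described.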
I consider adding gate resistance a last resort. There are two very effective approaches to try first.

First, reduce the size of the "antenna" that is doing the radiating. That means making the output loop, containing the MOSFET, inductor, Schottky diode (since you said it's a non-synchronous converter), and output capacitor(s), as tight as possible.

Second, as mentioned above, add an RC snubber between the MOSFET drain and source, or in severe cases add two snubbers, with the second between the anode and cathode of the Schottky diode. The MOSFET snubber, if chosen properly, should critically damp the ringing of the switch-node waveform on the transition when the MOSFET opens, and the diode snubber should critically damp the ringing on the other transition. You really need a scope to measure the frequency of the original ringing (usually around 100-200 MHz), and how that frequency changes when you add some trial snubber capacitor values with zero snubber resistance. Then the calculations in the excellent TI app note mentioned above will allow you to select the proper R and C values for critical damping.
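The two-frequency measurement described above can be turned into numbers with a short calculation. The idea: the ring frequency follows f = 1/(2π√(LC)), so measuring the frequency before and after adding a known trial capacitor lets you solve for the parasitic L and C, and the critical-damping resistor is the tank's characteristic impedance √(L/C). The measured values below are hypothetical placeholders for your own scope readings:

```python
import math

# Hypothetical scope measurements -- substitute your own readings.
f0 = 180e6        # ringing frequency with no added capacitance (Hz)
c_add = 330e-12   # trial capacitor placed drain-to-source (F)
f1 = 90e6         # new ringing frequency with c_add in place (Hz)

# f = 1/(2*pi*sqrt(L*C)), so (f0/f1)^2 = (C_par + C_add)/C_par.
# Solve for the parasitic capacitance of the node.
ratio = (f0 / f1) ** 2
c_par = c_add / (ratio - 1)

# Parasitic inductance from the original ring frequency.
l_par = 1.0 / ((2 * math.pi * f0) ** 2 * c_par)

# A snubber resistor equal to the characteristic impedance damps the
# tank; a snubber cap of a few times C_par keeps dissipation modest.
r_snub = math.sqrt(l_par / c_par)
c_snub = 3 * c_par

print(f"C_par = {c_par*1e12:.0f} pF, L_par = {l_par*1e9:.2f} nH")
print(f"R_snub = {r_snub:.1f} ohm, C_snub = {c_snub*1e12:.0f} pF")
```

With these example numbers the frequency halves, meaning total capacitance quadrupled, so C_par is one third of the trial cap. The exact damping-factor math is in the TI app note; this sketch just shows the extraction step.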
Edit: The other thing some people do is go to a 4-layer PCB and configure two of the layers as ground planes, which effectively "buries" the radiating loop in a Faraday cage. It's a lot simpler and cheaper to add a snubber, though.