How many computing resources does it actually take to serve a search that's not cached? My understanding is that it's a few orders of magnitude more than it takes to issue the query in the first place.
The English dictionary has about 470k words. Obviously, most of those are obscure terms that most people have never heard of. For simplicity, let's pick 65,536 of them for building search phrases - a nice binary number (2^16) that lends itself to an efficient implementation. If we allocate 16 bytes per word (for CPU-friendly, fixed-width access), that's 1 MiB of memory - small enough to fit in the cache of many CPUs.
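A minimal sketch of what that fixed-width word table might look like - the word list file here is just a placeholder (something like a filtered /usr/share/dict/words):

```python
WORD_BYTES = 16          # fixed slot size per word
VOCAB_SIZE = 65_536      # 2**16 words

# Hypothetical source file: one word per line.
with open("wordlist.txt", "r", encoding="utf-8") as f:
    words = [w.strip() for w in f if w.strip()][:VOCAB_SIZE]

# Pack every word into a 16-byte slot so word i lives at offset i * 16.
table = bytearray(VOCAB_SIZE * WORD_BYTES)
for i, w in enumerate(words):
    encoded = w.encode("utf-8")[:WORD_BYTES]
    table[i * WORD_BYTES : i * WORD_BYTES + len(encoded)] = encoded

print(f"table size: {len(table) / 1024:.0f} KiB")   # 1024 KiB == 1 MiB
```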
String just 4 of those words together and the number of possible combinations is 65,536^4 = 2^64, roughly 1.84 x 10^19 - completely impossible to fully cache. If that's not an impressive enough number for you, bump it up to 5 words or more. We can therefore conclude that the vast majority of such searches will make actual database lookups and won't be served from a cache.
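The arithmetic, for anyone who wants to check it:

```python
VOCAB_SIZE = 65_536  # 2**16

for phrase_len in (4, 5, 6):
    combos = VOCAB_SIZE ** phrase_len        # 2**(16 * phrase_len)
    print(f"{phrase_len} words: {combos:.3e} possible phrases")

# 4 words: 1.845e+19 possible phrases
# 5 words: 1.209e+24 possible phrases
# 6 words: 7.923e+28 possible phrases
```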
The actual operation of stringing the words together with spaces in between takes so little time on a modern CPU that it's not worth thinking about. Turning that into an HTTP request is also trivial. Then you actually make the requests, and that part can take advantage of parallelism.
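Something along these lines - just a sketch, with a tiny stand-in word list and a hypothetical search endpoint and query parameter (a real engine would of course rate-limit this):

```python
import random
import urllib.parse
import urllib.request
from concurrent.futures import ThreadPoolExecutor

WORDS = ["alpha", "bravo", "charlie", "delta"]          # stand-in for the 65,536-word table
SEARCH_URL = "https://search.example.com/search?q={}"   # hypothetical endpoint

def random_phrase(n_words: int = 4) -> str:
    """Join n randomly chosen words with spaces."""
    return " ".join(random.choice(WORDS) for _ in range(n_words))

def do_search(phrase: str) -> int:
    """Issue one (almost certainly uncached) search and return the HTTP status."""
    url = SEARCH_URL.format(urllib.parse.quote_plus(phrase))
    with urllib.request.urlopen(url, timeout=10) as resp:
        return resp.status

# Fire off many requests in parallel; the client-side cost per request is tiny.
with ThreadPoolExecutor(max_workers=32) as pool:
    statuses = list(pool.map(do_search, (random_phrase() for _ in range(100))))
```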
All in all, it doesn't take many resources to make a lot of requests. I'd be interested to know how it's possible for the search itself to use far fewer resources than I would expect.