Hmm. I haven't actually used LEX, but I never got the impression that it could be used similarly to modern regexp libraries, or that it aimed at producing particularly efficient code.
Lex is a bit specialised, in that it's really intended for producing the lexical front end of parsing tools. But at its heart it's just a regex => FSM generator. The generated code is very fast and table-based, about as efficient as you're going to get without actually generating object code that directly executes the FSM. More modern implementations such as GNU flex are functionally similar but a bit easier to use. Lex and flex are really at home when you're evaluating several regexes in parallel and finding which one matches, which is what they were designed for.
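To give a flavour of that, here's a minimal flex specification (a sketch only; the token names and patterns are illustrative, not from any real project). Flex compiles all the rules into a single DFA, so every pattern is effectively tried in parallel as the input is scanned; the longest match wins, and where two rules match the same length of input, the one listed earlier takes priority (which is why the keyword rule comes before the general identifier rule):

```lex
%{
/* Illustrative sketch: several patterns evaluated in parallel.
   flex builds one combined DFA; longest match wins, earlier
   rules break ties. */
#include <stdio.h>
%}
%option noyywrap

DIGIT   [0-9]
ID      [a-zA-Z_][a-zA-Z0-9_]*

%%
{DIGIT}+            { printf("NUMBER:  %s\n", yytext); }
"if"|"else"|"while" { printf("KEYWORD: %s\n", yytext); }
{ID}                { printf("IDENT:   %s\n", yytext); }
[ \t\n]+            { /* skip whitespace */ }
.                   { printf("OTHER:   %s\n", yytext); }
%%

int main(void) { return yylex(); }
```

Run flex over this and compile the generated C, and you get a standalone tokenizer; the inner loop is essentially one table lookup per input character, which is where the speed comes from.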
If you're desperate for a solution, you could cobble one together with lex or flex, but, were it me, it's not the route I'd take if I could find something better fitted. As someone else said, there's probably a tool out there that's a better fit, but offhand I don't know of one.
By pure chance, I may well be starting on writing a lex/flex-style tool myself in the near future. If and when I do, I'll consider whether it's practical to produce a sideline to it that's targeted at generating simple standalone recogniser routines from regexes.