I've been thinking about that too, but I sadly don't have the required skills to pull it off. At least not with my current knowledge.
Might be a fun project to learn from.
Few things I've noticed:
1. Logic is VASTLY different
2. gcc uses moveq instead of clr.l
3. stack operations are done in the opposite order when fetching data (and they use different registers)
4. Intermetrics C has a weird tendency to use d6/d7 for extra storage, instead of gcc's ascending-order scheme: d0/d1 first, then push and use the remaining registers in numbered order.
5. Intermetrics panics once functions get large enough and starts using any register free-for-all style instead of pushing and popping like gcc, be it address registers for data or data registers as temporary storage for an address.
6. Maths. DIVISION in particular is very, very different. Intermetrics has a tendency to split up operations, while gcc would rather work on huge numbers. Both use bitshifts and subtraction/addition to get the correct result, but they do it differently. Some things you can work around by writing the code menace style, even though I'm certain no one would ever write it that way.
7. gcc is very premature with everything and will load up registers much earlier than necessary (probably to make the output more readable; optimization flags can slightly alleviate this, but it's still quite severe).
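To make points 2 and 5 a bit more concrete, here's a rough sketch of the kind of thing I mean (register choices and the prologue are made up for illustration, not actual output from either compiler):

```asm
; zeroing a register: same result, different cost on a plain 68000
        clr.l   d0              ; Intermetrics style: 6 cycles
        moveq   #0,d0           ; gcc style: 4 cycles; the 8-bit
                                ; immediate is sign-extended to 32 bits

; gcc-style prologue/epilogue: save a block of callee-saved
; registers with one movem, restore them all in one go at the end
        movem.l d2-d4/a2,-(sp)  ; prologue
        ; ...function body...
        movem.l (sp)+,d2-d4/a2  ; epilogue
        rts
```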
They both adhere to one ABI rule, though: a0/a1 and d0/d1 are used as scratch registers.