Well, on most embedded architectures, come run time, the stack is just a pointer into memory... allocation, what's that? In general, the compiler has no way to control that -- not without severely hindering performance, anyway (checking the stack pointer before or upon entering each and every function -- because, I mean, who even checks their array bounds anyway, come on, really?... </sarcasm>).
Most advanced architectures (anything with an MMU, virtualization, etc.) can support implicit memory protection, but the kernel needs to set it up in the first place, and needs to actually do something about the fault for it to matter (i.e., usually terminating the offending program and popping up a dialog box, or at least resetting). On something like MIPS or ARM, or even just good old-fashioned DOS on a 486, most of the time the user is going to have kernel-mode control and a flat, unrestricted memory space anyway. (It literally wasn't until 2000-2001 (Win2k, WinXP) that mainstream PCs finally started to use a properly isolated user space. And that's PCs, on a family that has supported full memory protection and virtualization since the 386 was introduced in 1985. Virtualization itself had been invented, and implemented, even further back than that, on mainframe architectures.)
Tim