But Linux won't even boot on a plain 68000, never mind actually run anything. Linux is totally built around having a working VM system: it uses everything from copy-on-write address space cloning to memory mapping .so and executable files and loading pages on demand. Every address space is backed by open files (which can be an unstructured device, like /dev/zero). The old, early support for anything else was thrown overboard a loooong time ago; it's not part of the kernel anymore, for good reason. What Linux has was, 20 years ago, a modern, unified, pressure-based VM system. Today it's just baseline commodity design.
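Just to make that concrete, here's a minimal sketch (plain POSIX mmap/fork, nothing kernel-internal) of what "backed by /dev/zero" and copy-on-write actually mean in practice:

    /* Minimal sketch: anonymous demand-paged memory backed by /dev/zero,
     * plus copy-on-write sharing across fork(). Standard POSIX calls only. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("/dev/zero", O_RDWR);
        if (fd < 0) { perror("open"); return 1; }

        /* MAP_PRIVATE on /dev/zero: pages are allocated lazily on first
         * touch and come back zero-filled - classic demand paging. */
        size_t len = 1 << 20;  /* 1 MiB */
        char *p = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_PRIVATE, fd, 0);
        if (p == MAP_FAILED) { perror("mmap"); return 1; }
        close(fd);

        strcpy(p, "parent");

        /* fork() marks the writable pages copy-on-write: a process only
         * gets its own physical copy of a page when one side writes it. */
        pid_t pid = fork();
        if (pid == 0) {
            strcpy(p, "child");               /* triggers the COW fault */
            printf("child sees:  %s\n", p);
            _exit(0);
        }
        wait(NULL);
        printf("parent sees: %s\n", p);       /* still "parent" */

        munmap(p, len);
        return 0;
    }

(MAP_ANONYMOUS does the same thing without the open(), but the /dev/zero form shows the "backed by a file" view more literally.) None of this works without an MMU doing the page-fault handling, which is the whole point.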
It's not my project of course, but personally I'd never build a 68k anything to run Linux on it. A big-ass Sitara BGA package, some DDR RAM, an SD card slot, an Ethernet PHY, a USB ULPI PHY, and done. 2-4 days for the hardware, then 2-4 months to get the software working - and that's what a project like this really is: a medium-sized software project. The hardware, whether a monster Sitara SoC or discrete 68k, is just cookie-cutter stuff. And you don't want to be stuck with a slow-ass 68k for a non-trivial software project. Again, personally, just my view: the only reason for a 68k design would be to run legacy software.
Maybe dig up an old version (1, 1.5, maybe 2) of Minix or something? That would run on such a machine and would be very easy to get going, especially v1. I haven't looked closely at it, but v3 looks like a very different animal: a microkernel, services-based system similar to Plan 9 or GNU Hurd.
Unless you're going to run something entirely without memory management, the system you choose will dictate the MMU requirements - so I'd start at that end, then implement whatever the chosen system needs to function properly. (Minix, IIRC, is somewhat tied to x86 segment registers, but only for specific kernel address-space operations.)
Anyway, just my 2c.
