It can't be that simple. An SDRAM controller is a complex beast, with variable latency depending on what the SDRAM happens to be doing at any given time. It might be fetching some other block of data at the moment you need something, or it might be refreshing, or it might be idle awaiting a read request, or the data you want might already have been read as part of a recent burst, in which case it needs no SDRAM access at all.
In an FPGA, a signal assignment inside a clocked process is deterministic. Somewhat simplified, it means 'this signal takes its value from <wherever> on the very next clock edge, no waiting'.
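As an illustration, a minimal clocked process in VHDL looks something like this (the signal names are invented for the example):

```vhdl
-- Hypothetical clocked process: 'count' takes its new value at
-- every rising edge of 'clk', with fixed, predictable timing.
process(clk)
begin
    if rising_edge(clk) then
        count <= count + 1;
    end if;
end process;
```

The assignment always completes in exactly one clock cycle; there is no notion of the right-hand side "not being ready yet", which is precisely the property an SDRAM read breaks.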
If data has to be fetched from SDRAM, it could be many clock cycles before it's available. A process that uses data from SDRAM needs to be able to issue a request, then stall until the data is available before continuing. A CPU does this stalling for you in hardware, but in an FPGA it's completely up to you to implement that logic.
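One common way to implement that stall is a small state machine: issue the request, then sit in a wait state until the controller signals that the data is valid. A rough VHDL sketch, with all signal, state, and port names assumed for illustration (a real controller's handshake will differ):

```vhdl
-- Hypothetical request/stall state machine. Assumes declarations
-- elsewhere:  type state_t is (IDLE, WAIT_DATA);
--             signal state : state_t := IDLE;
-- plus sdram_req, sdram_addr, sdram_data, data_valid, etc.
process(clk)
begin
    if rising_edge(clk) then
        case state is
            when IDLE =>
                if need_data = '1' then
                    sdram_req  <= '1';        -- issue the read request
                    sdram_addr <= fetch_addr;
                    state      <= WAIT_DATA;  -- then stall...
                end if;
            when WAIT_DATA =>
                sdram_req <= '0';
                if data_valid = '1' then      -- ...until the data arrives
                    result <= sdram_data;
                    state  <= IDLE;           -- now we can continue
                end if;
        end case;
    end if;
end process;
```

The key point is that the machine can spend an unpredictable number of cycles in WAIT_DATA; anything downstream that depends on 'result' has to be held off until the handshake completes.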
Even if you use an off-the-shelf SDRAM controller, you'll still need to interface and handshake with it. It can't make the underlying latencies just go away.