Having used ZFS for over 10 years, both on recycled home hardware and on enterprise configurations, I had often wondered the same. Experience has taught me that scrubbing is crucial for several reasons:
1) The obvious one: it detects minor silent corruption and repairs it.
2) Even if you have ECC RAM and SAS drives, there is still a non-zero rate of errors the hardware itself cannot detect.
3) The scrub exercises all the disks in the array fairly heavily; if a disk is about to fail, this is when it's most likely to happen. Since scrubs are usually scheduled during periods of low activity, it can be handy to have the failure occur when it's most agreeable for a disk to go offline (depending on the mode of failure, read retries can cause quite a long stall, hanging the entire OS on filesystem accesses).
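For reference, starting a scrub and reading its results is done with the standard zpool commands. A minimal sketch; the pool name "tank" here is just a placeholder for your own pool:

```shell
# Kick off a scrub of the pool (runs in the background at low I/O priority)
zpool scrub tank

# Check progress and results; the CKSUM column shows how many
# checksum errors were found (and, given redundancy, repaired)
zpool status -v tank

# If a scrub is hammering the disks at a bad time, it can be stopped
zpool scrub -s tank
```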
I schedule a weekly scrub, and perform a scrub after every unsafe shutdown (i.e., power outages, failures, etc.). I have lost far too much data to silent errors in the past and won't be bitten by this again if I can at all avoid it.
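The weekly schedule is just a cron job. A minimal sketch, again assuming a pool named "tank"; on FreeBSD you can instead enable the stock periodic script:

```shell
# /etc/crontab entry: scrub every Sunday at 03:00, when the box is idle
0 3 * * 0  root  /sbin/zpool scrub tank

# FreeBSD alternative: let periodic(8) handle it via /etc/periodic.conf
#   daily_scrub_zfs_enable="YES"
```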
My recommendations if you need a reliable storage solution are:
1) Use ECC RAM. This is critical for ZFS. Many people argue against it today, but with the idiotic overclocking of binned memory being the norm, your error rate is absurdly high.
2) If it's a home NAS, don't use SATA disks, use SAS. For a home/budget build, I'd take used SAS drives from eBay any day over a brand new SATA disk. If you're worried about buying used, note that enterprise SAS drives have far higher MTBF ratings, and since nobody wants a used SAS disk for an enterprise server, they can be had very cheap; buy extras for additional redundancy.
3) Don't use ZFS on Linux... I know it's much better today than it was in the past (I lost a lot of data years ago to a silent bug in the Linux/BSD compatibility shim for ZFS), but it's still, IMO, not suitable for prime-time use. Go for BSD and use ZFS on the platform it was developed for.