I run the HX1000i in my Asus build; it's slightly more efficient than the RM1000x, being 80 Plus Platinum rather than Gold. Both carry a 10-year warranty and boast "Japanese" caps, though neither specifies the brand.
It works fine, but a couple of the cable runs in my build were tighter than I'd like with the supplied modular cables. It was marginal, but it works. It depends on your particular enclosure, motherboard and how you manage your cables as much as anything.
One thing to check is whether your enclosure directly supports your motherboard. With EATX and EEB sized boards you frequently need to make sure there are no standoffs fitted under areas of the board with no matching mounting hole, or the board risks shorting out. In some cases you'll also need to drill and fit your own standoffs. Despite sharing the same board dimensions, EATX does not use the same mounting hole positions as EEB.
The biggest problem with self-builds at this end of the market is that the "gene pool" of builders rapidly diminishes, so these boards aren't characterised as widely as consumer boards. Your average server farm may run hundreds of servers, but they tend to work on a very limited number of pre-certified and tested platform configurations.
I'd be very reluctant to state categorically that a given build will work faultlessly without direct experience of it. If you follow the board vendor's QVLs then you will at least have some form of comeback, and on that front I'd put more trust in SuperMicro's support than Asus's for this type of board. You only have to look at a few of the Linus Tech Tips videos on their rendering machine a year or so ago to see that even they had an enormous amount of difficulty getting their boards going.
I would also check that any multi-GPU configuration you plan to use is supported.