I work with audio a lot, so here's my 2¢ (or maybe 10).
I use a pro audio interface (MOTU 1248) and don't need a dedicated ADC or DAC for my work. Most modern interface ADCs and DACs handle 32-bit conversion if they cost more than $20, and 32-bit/192k is a pretty standard spec now even for mid-range desktop interfaces with two ins and outs. I personally see no benefit to those higher sample rates, as they can introduce harmonics and other garbage above 20 kHz during a mix, which brings the entire level of the mix up. Some engineers swear the highs are clearer at 192 and 384, but I have no need for it. Tokyo Dawn Labs has an old plugin made specifically for filtering those ultrasonic frequencies out of a mix. 32-bit could be useful though, if you need it.
In the late 90s, when I went from 16-bit to 24-bit, there was a big difference because the signal-to-noise ratio is so much better with 24. No longer did we need to record hot signals. These days I try to record levels around -18 dBFS to -12 dBFS so I have plenty of headroom when mixing. From 24-bit to 32-bit, the noise floor is usually the same; it's the massive headroom over digital 0 that differs. If you hear noise in your 24-bit recording at reasonable listening levels, it's probably interference, or you're recording too low and raising the gain too much, or your monitoring signal chain is noisy, or maybe you have grounding problems (though those typically show up as hum).
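A quick back-of-the-envelope sketch of why 24-bit made hot tracking unnecessary. The ~6 dB-per-bit figure is the standard rule of thumb (real converters land a few dB lower); the -18 dBFS level is the tracking target mentioned above:

```python
import math

# Rule of thumb: each bit of fixed-point PCM buys ~6.02 dB of dynamic range.
def dynamic_range_db(bits: int) -> float:
    return 20 * math.log10(2 ** bits)

print(f"16-bit: ~{dynamic_range_db(16):.0f} dB")   # ~96 dB
print(f"24-bit: ~{dynamic_range_db(24):.0f} dB")   # ~144 dB

# Tracking at -18 dBFS on a 24-bit converter still leaves roughly
# 144 - 18 = 126 dB between the signal and the theoretical noise floor,
# which is more than 16-bit offers even at full scale.
```

That ~30 dB of extra range is exactly why you can leave generous headroom at the top and still never hear the quantization floor.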
I used to use 96k 24-bit back when many plug-ins didn't upsample, because I could hear a difference in the output. These days pretty much everything upsamples to some ridiculous degree to avoid aliasing and other artifacts and then downsamples on the fly, unless you're using old plug-ins that are no longer updated. I can't hear a difference in the output anymore, so I stick to a 48k sample rate as my standard unless a client demands higher. It keeps files smaller, avoids the garbage above 20 kHz, is the standard for video and cinema, and drastically frees up your CPU to run way more plugins. On my modern 10-core CPU I can do mixes with dozens and dozens of tracks and tons of plugins that would have needed accelerator hardware in the past. My heaviest mixes usually don't pass 50% CPU usage. God bless multi-core.
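To illustrate why plugins upsample internally: here's a toy nonlinearity (cubing the signal, a stand-in for any saturation or distortion stage — my own illustration, not any specific plugin) run at 48 kHz with no oversampling. The cubed 15 kHz tone generates a 45 kHz harmonic, which is above Nyquist (24 kHz) and folds straight back into the audible band at 48 - 45 = 3 kHz:

```python
import math

FS = 48_000      # sample rate
F_TONE = 15_000  # test tone frequency
N = 480          # 10 ms window -> both tones land on exact integer DFT bins

# Unit-amplitude 15 kHz sine
x = [math.sin(2 * math.pi * F_TONE * n / FS) for n in range(N)]

# Naive "plugin" with no oversampling: cubing creates a 3rd harmonic at
# 45 kHz, which aliases down to 3 kHz at a 48 kHz sample rate.
y = [s ** 3 for s in x]

def dft_mag(signal, freq_hz):
    """Magnitude of one DFT bin, normalized so a unit sine reads ~1.0."""
    n_samples = len(signal)
    k = round(freq_hz * n_samples / FS)
    re = sum(s * math.cos(2 * math.pi * k * n / n_samples)
             for n, s in enumerate(signal))
    im = sum(s * math.sin(2 * math.pi * k * n / n_samples)
             for n, s in enumerate(signal))
    return 2 * math.hypot(re, im) / n_samples

print(f"alias at 3 kHz: {dft_mag(y, 3_000):.3f}")   # ~0.25 -> audible garbage
print(f"tone at 15 kHz: {dft_mag(y, 15_000):.3f}")  # ~0.75
```

An upsampling plugin would run the same nonlinearity at, say, 4x the rate, where 45 kHz is still below Nyquist, then filter it out before downsampling — which is why the artifacts are gone in modern versions.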
As for bit depth (or bit rate, if you prefer), I find 32-bit float beneficial when recording because of the extra headroom. You can record values past digital 0 without clipping and then bring down the gain when mixing (or surgically with clip volume, or in a third-party app like RX or Audacity). I think the headroom surpasses the decibel levels of sounds we can even make on earth. I find it useful when recording voices and sound effects, because someone moving closer to the mic can cause spikes and ruin a take. If you like to record hot, 32-bit is also your friend for the same reason; you can always mix down to 24 once things are tamed, and most DAWs these days let you mix different bit depths in the same sequence if you really want to.
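A minimal sketch of the float-vs-fixed difference (the +6 dBFS peak and -6 dB fader move are just illustrative numbers):

```python
# A hot peak at roughly +6 dBFS over digital 0 (1.0 = full scale).
hot_peak = 2.0

# 24-bit fixed point: anything above full scale is hard-clipped at the
# converter/file level, and the overshoot is gone for good.
INT24_MAX = 2 ** 23 - 1
clipped = min(round(hot_peak * INT24_MAX), INT24_MAX) / INT24_MAX
print(clipped)               # 1.0 -> flat-topped, unrecoverable

# 32-bit float: values above 1.0 are stored just fine (float32 tops out
# around 3.4e38, on the order of +770 dB over digital 0), so pulling the
# gain down ~6 dB in the mix recovers an unclipped peak.
gain_db = -6.02
recovered = hot_peak * 10 ** (gain_db / 20)
print(round(recovered, 3))   # ~1.0, with the waveform intact
```

That float32 range is where the "louder than any sound on earth" headroom claim comes from — the loudest possible sound in air is around 194 dB SPL, far below what the format can represent.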
In pro studios, 24-bit is the standard because they won't record hot and most engineers won't mix to the ceiling. Some swear by 88.2k because it's exactly double the 44.1k used for CDs, so supposedly there are no rounding errors when downsampling. You may find 32-bit used in the final mixing phase at some studios, so if something goes over, the mastering engineer can tame it instead of dealing with a clipped square wave. It all depends on the workflow. These days people sometimes send stems to mastering engineers, so nothing will be clipping (in theory). If you have the hard drive space and want to avoid clipping problems, use 32-bit float; if not, 24 is fine. There are even free plugins you can drop on your tracks and buses to check for inter-sample peaks that the meters don't register.
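A rough sketch of what those inter-sample-peak meters do under the hood (this naive sinc interpolation is my own illustration, not any particular plugin's algorithm — real true-peak meters use efficient oversampling filters). The classic demo signal is a full-scale sine at a quarter of the sample rate, phase-shifted 45 degrees: every stored sample reads 0 dBFS or less, but the reconstructed waveform peaks about 3 dB higher between the samples:

```python
import math

# Sine at fs/4, phase-shifted 45 degrees: all samples are +/-0.707...,
# then normalized so the sample peak is exactly 1.0 (0 dBFS on a meter).
N = 64
raw = [math.sin(2 * math.pi * 0.25 * n + math.pi / 4) for n in range(N)]
scale = max(abs(s) for s in raw)
x = [s / scale for s in raw]

def _sinc(t: float) -> float:
    return 1.0 if t == 0 else math.sin(math.pi * t) / (math.pi * t)

def true_peak(samples, oversample=8):
    """Crude true-peak estimate: sinc-interpolate between samples.
    Only the middle half of the buffer is scanned, because the truncated
    sinc sum is inaccurate near the edges."""
    n = len(samples)
    peak = 0.0
    for i in range(n // 4 * oversample, 3 * n // 4 * oversample):
        t = i / oversample
        v = sum(s * _sinc(t - k) for k, s in enumerate(samples))
        peak = max(peak, abs(v))
    return peak

sample_peak = max(abs(s) for s in x)
print(f"sample peak: {sample_peak:.3f}")   # 1.000 -> meter says 0 dBFS
print(f"true peak:   {true_peak(x):.2f}")  # ~1.41 -> roughly +3 dBFS
```

So a mix whose sample meters never touch 0 dBFS can still clip a DAC or a lossy encoder — which is the whole point of dropping a true-peak meter on the master bus.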
I personally do all my recording and audio mixes at 48k 32-bit float for A/V projects to avoid clipping, and because Adobe After Effects does not natively export 24-bit. There is no audible difference between 24 and 32; it's merely so I can avoid clipping and don't have to change formats when doing intermediate renders or when working with different clips, audio sources, and music. I keep everything at 32-bit 48k throughout the entire production phase so that no problems arise regardless of what software I have to use. If I have to convert sample rate or bit depth, I do it at the start using iZotope RX 7, because it has one of the best software converters. Some people prefer Weiss Saracon (or a few others), though RX is price-tiered and has more tools.
If you work on A/V for the web, YouTube even accepts 48k 32-bit float linear PCM embedded in video (even if the video stream is compressed as H.264), so you can let them do the final audio compression after upload for the best audio quality and avoid double compression.
So basically, if you have a good "pro-sumer" or professional-level interface, the bit depth you choose will really depend on what you're doing, and it will most likely matter most in software when recording or doing a final mix. All the well-known manufacturers use great preamps, and their ADCs and DACs are top-notch. Even modern onboard motherboard sound handles 32-bit audio playback these days. Use 32-bit for practical reasons, not because of a sales pitch.