In some hypothetical future, everyday engineers could design or extend their own ASICs from open-source blocks that would do the same things at a fraction of the cost, lead time, pin count, and environmental impact. These blocks could be anything from a MOSFET, to an optocoupler, to an op-amp, to a mid-range CPU or even flash memory.
I mean, part of that is already a thing. A lot of companies don't design their own in-house I2C, HDMI, Ethernet, ESD, whatever. You buy/license them from companies like Rambus, Synopsys, Cadence, etc.
The problem ends up being the high up-front cost of tapeout. Even if the design itself is cheap (sometimes these IPs are paid for on a per-die-manufactured basis, so there's little up-front cost on the design side), taping out a chip in a <100 nm technology will have up-front costs ranging from one million USD to many millions of USD, just to have your mask set made. You need huge volume to make up for that.
Of course, you can make chips in older technologies too, and a lot of the time that's what is done for true ASICs (I mean, a customer goes to an ASIC design house and asks for a very specific chip for a single product). But unless you have huge volume, it is just not feasible (and it is high risk!).
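Just to put rough numbers on the volume argument, here is a back-of-the-envelope sketch. The mask-set cost and per-die margin are illustrative assumptions, not quotes from any foundry:

    # Rough break-even: how many dies you need to sell just to recover
    # the mask-set NRE. Both numbers below are assumed for illustration.
    mask_set_cost_usd = 2_000_000   # assumed NRE for a sub-100 nm mask set
    margin_per_die_usd = 0.50       # assumed profit margin per die sold

    break_even_volume = mask_set_cost_usd / margin_per_die_usd
    print(f"Dies needed just to cover the masks: {break_even_volume:,.0f}")
    # -> 4,000,000 dies, before counting design effort, EDA licenses,
    #    test, or packaging

Change either assumption by an order of magnitude and the conclusion doesn't really move: low-volume products can rarely justify a leading-edge mask set.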
To come back to the original question/topic:
"ASIC design" is microelectronics, it's of course part of EE but as a specialization as others have said.
The "divide" is that it's two different activities. Analogies have their limits, but it's a bit like comparing a web front-end software developer with someone developing firwmare in assembly.
Is it software in both cases? Sure. Do you expect one to be able to do the job of the other? Usually not, unless they have managed to master both. Which is not as common.
So it's largely a high-level/low-level kind of divide. Not sure what kind of opinion you are after though?
Yes these are two pretty different activities with a different mindset and different tools, even though it's electronics in both cases.
I think this is a good analogy to use.
A place where I see more and more overlap is anything high-speed/high-frequency, because in a lot of those situations you are constrained by pretty much every single thing in the signal path. So the people working on a 112 Gbit/s SerDes need to understand what the limitations are for each part of the chain. At least the system-level guys do, those who come up with the standards and architectures. Though now that I think about it, once those (usually research) engineers have paved the way, the people following just have a standard to work to and I guess don't really need to know what is going on anymore.
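To give a feel for why the whole path matters at those rates, here is a quick sketch of the numbers behind a 112 Gbit/s lane, assuming PAM4 signaling as used in typical 112G links (that assumption is mine, not from the comment above):

    # Quick numbers for a 112 Gbit/s SerDes lane (assuming PAM4)
    bit_rate = 112e9                         # bits per second
    bits_per_symbol = 2                      # PAM4 carries 2 bits/symbol
    baud_rate = bit_rate / bits_per_symbol   # 56 GBaud
    unit_interval = 1 / baud_rate            # ~17.9 ps per symbol
    nyquist = baud_rate / 2                  # ~28 GHz fundamental

    print(f"Symbol rate:       {baud_rate / 1e9:.0f} GBaud")
    print(f"Unit interval:     {unit_interval * 1e12:.2f} ps")
    print(f"Nyquist frequency: {nyquist / 1e9:.0f} GHz")

With the fundamental up around 28 GHz, the package, bumps, vias, connectors and board traces all eat into the loss and jitter budget, which is why every element in the chain has to be understood.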