and, as you said: "these devices are universally horrible"
Which is exactly why I'm trying to re-invent the wheel. I don't like what I've been seeing so far, and rather than throwing in the towel before I start and just complaining, I've decided to try and tackle this challenge myself.
Well, good luck. From what you are writing, you seem to lack quite a bit of knowledge about how the electronics and haptic rendering in general actually work, but you certainly don't lack enthusiasm.

That's what I meant about doing your research and learning first. Haptic rendering is a fairly complex subject; building electromechanical gadgets is only a small part of it.
"Haptic Workstation used to cost over a half a million dollars"
Youch. I'm aiming to get the consumer price to under $500 a pair. Guess I have quite the challenge ahead of me.
The problem with the Haptic Workstation is twofold - first, it is an extremely niche, made-to-order device that pretty much is a solution looking for a (research) problem. As sold, there is essentially *no* application that it could be used with out of the box - you get an SDK, documentation - and that's it.
The second problem is that this is classical VR hardware - that means that there are enormous margins being charged by the vendors. Until the Oculus Rift came along for $400 or so, it was common to sell much, much worse HMDs for $20k and up; even $80k was not uncommon. The CyberGlove data glove has cost around $10k per hand for the last 20 years. The only thing that has changed over time was the addition of a USB-to-serial converter, because RS232 connectors fell out of fashion... The silly Inertia Cube 3, which is nothing but a 3-DOF inertial tracker comparable to the MEMS gyro/accelerometer/magnetometer combo in every cellphone today, is being sold for $600-$800. The Invensense MPU9250, which does very much the same, costs about $5 in single quantities; a full IMU with a PC interface is doable for about $50-100 with it.
So my point was not to say that it cannot be done cheaper (it certainly can), but that even hardware that is being sold for such exorbitant prices didn't manage to solve the problem using the approach you are attempting.
The problem with touch is that most of the sensation is in your fingertips, not the fingers as such.
I thought so (but I'm still glad to hear it from someone with real experience), and I admit my glove still won't be perfect.
But I have already had ideas that add detail to the fingertip parts, to compensate for what I'm quite sure will be a bit of mushy feedback in the hand/fingers. These ideas include:
- small air pockets that inflate/deflate in the fingertip portion to add localized pressure (speed will be an issue here)
- small vibrations (several different ways to do this, ranging from motors to piezo buzzers) to give textures
None of the above is going to give you a meaningful tactile sensation. Human fingertips are extremely "high-resolution/high DPI" sensors. If you have seen a Braille printer, that is closer to the idea (but still extremely crude). There have been attempts to use ultrasound, various pin cushion-like devices, even small electric currents etc. to simulate various textures and edges, but the work is pretty much in its infancy. Look on Google Scholar, there are quite a few papers published on this (search for "tactile displays").
BTW, forget the vibrating motors. Unless your goal is simulating the tremors of a Parkinson's disease sufferer, it is not at all realistic. Think of it like getting an electric shock when you touch something live - yeah, you got the information that you touched something, but it certainly wasn't delivered in a way your brain expects...
you need strong and fast motors that can actually provide quite a lot of torque quickly
Yes, this is one of my biggest concerns at the moment. Steppers with their own processor (so you don't have to wait for the computer) are the best option I've come up with so far, and you also mention some of the other problems.
You seem to be under the impression that the problem is the speed of the host computer. That is, in fact, not the case. You will certainly have some sort of local controller to drive the motor, but there is no point in offloading the haptic rendering to the motors. It would only make the integration of the system with the graphics engine of your choice incredibly complicated.
The issue with the motor speed is different. Imagine your hand is moving towards an obstacle. What you want to have in the ideal case is a system that is behaving neutrally (potentially compensating for the friction/mass of the assembly, but not applying any other force) until you hit that obstacle. At the moment you hit it (you see yourself doing so), it needs to become rigid instantly so that the movement is stopped in the same way as a real obstacle would stop it.
That is obviously not possible, because every motor requires some non-zero time to develop the torque to overcome any slop/backlash in the mechanical parts and to accelerate/decelerate any moving parts. The more powerful and faster-spinning the motor is, the quicker the reaction can happen and the more rigid the apparatus will feel. Of course, there is always a tradeoff - the bigger the motor, the heavier it will be, the more inertia it will have, and the more energy/torque will be required to quickly get it moving or to stop it. The same applies to any mechanical parts attached to it - they add mass, thus you have more inertia and the motor must work harder.
This is tightly coupled with the update rate of the haptic system (from collision detection to the force rendering) - rates of 1 kHz or even more are common here, otherwise the response times would just be too slow (compare that to the 60-120 fps of a common game engine!).
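To make that concrete: the standard way to render the "rigid wall" described above is penalty-based (impedance) rendering - the device stays transparent in free space, and once the tracked position penetrates the virtual obstacle it pushes back with a spring-damper force, recomputed every millisecond. Below is a minimal sketch of such a 1 kHz loop; the constants and the read_finger_position()/command_motor_force() hooks are purely illustrative assumptions, not anything from a real product.

```cpp
// Minimal sketch of a 1 kHz penalty-based "virtual wall" haptic loop.
// All constants are illustrative; the hardware hooks are stubs you would
// replace with your actual encoder and motor driver.
#include <algorithm>
#include <chrono>
#include <thread>

double read_finger_position() { return 0.0; }     // metres along the joint axis (stub)
void   command_motor_force(double /*newtons*/) {} // motor driver command (stub)

int main() {
    const double wall_pos  = 0.05;    // virtual obstacle 5 cm into the workspace
    const double stiffness = 2000.0;  // N/m - how "hard" the wall feels
    const double damping   = 5.0;     // N*s/m - tames oscillation on contact
    const auto   period    = std::chrono::microseconds(1000); // 1 kHz update

    double prev_pos = read_finger_position();
    auto   next     = std::chrono::steady_clock::now();

    while (true) {
        next += period;
        const double pos = read_finger_position();
        const double vel = (pos - prev_pos) * 1000.0;  // m/s at 1 kHz
        prev_pos = pos;

        double force = 0.0;                       // free space: stay transparent
        const double penetration = pos - wall_pos;
        if (penetration > 0.0)                    // inside the virtual obstacle
            force = -(stiffness * penetration + damping * vel);

        command_motor_force(std::clamp(force, -15.0, 15.0)); // safety clamp
        std::this_thread::sleep_until(next);
    }
}
```

How high you can push that stiffness value before the contact starts to buzz or go unstable is limited by exactly the two things above: the loop rate and how quickly the motor can actually deliver the commanded force.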
This is why small motors like the ones in your pictures are not going to work - they are too weak, so you must gear them down a lot, which will preclude them from reacting quickly enough. It also shows why it is pointless to evaluate the "speed" of your system while driving it from something like a 555 timer - you need a closed loop where you can actually see yourself hitting an obstacle, so that you can perceive the delay before the force feedback kicks in.
A sizeable stepper could potentially work, but then you cannot mount it on the glove - that is why the CyberGrasp system has the "backpack" and tethers.
In short: yes, finding a way to convey force quickly, accurately, safely, sleekly, and affordably will be quite a challenge. I have some less conventional ideas as well, but they are far more complex (using ferrofluids & multiple sliders, etc.), so I want to try with simple steppers first.
Those things don't have sufficient force. You could use hydraulics, but that only moves the problem to the sizing of the required pumps.
google PERCRO and haptics
Thanks! I don't think I've stumbled across them yet (but I have seen that YouTube vid - it's one of the better ones I've seen).
PERCRO in Pisa (http://www.percro.org/) is one of the world leaders in robotics and haptics. Again, you should do more research before you start designing and building. I know it is more fun to just start hacking on it, but you will waste a lot of time solving problems that have already been solved.
Another note:
I'm going to use the fact that this is for gaming to my advantage. Rather than have the glove respond to every little thing in the virtual world, the game will load tactile objects into the glove itself. When the player reaches out to those objects and makes a grasping motion, those objects will snap to the player's hand, and the glove's on-board (dedicated) processor will take over from there until that object is let go.
Forget it. That would be horrible for the user. So I can touch this cube, but then my fingers can go straight through the desk, because you didn't "upload" the desk to the glove?
Do you want to embed a PC in your glove? Do you realize that haptic rendering is tightly integrated with the collision detection/collision response of a typical game engine? A normal game could have hundreds of objects with tens of thousands of polygons that you will need to test for collisions against. Again, you are trying to address the wrong problem here.
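For a sense of the workload: even the cheapest broad-phase step has to look at every object in the scene roughly a thousand times per second, before the narrow phase starts testing actual triangles. The sketch below is purely illustrative (hypothetical types, not any particular engine's API) - the point is that the game engine already has this data and machinery on the host, and a glove-side processor would have to duplicate all of it.

```cpp
// Illustrative broad-phase query: find scene objects whose bounding boxes
// overlap the box around the tracked hand. Types are hypothetical stand-ins
// for whatever the game engine actually provides.
#include <vector>

struct AABB { float min[3], max[3]; };

struct SceneObject {
    AABB bounds;
    // ... the full triangle mesh, transform, etc. live in the engine
};

bool overlaps(const AABB& a, const AABB& b) {
    for (int i = 0; i < 3; ++i)
        if (a.max[i] < b.min[i] || b.max[i] < a.min[i]) return false;
    return true;
}

// Called ~1000 times per second with the AABB around the hand/fingers.
std::vector<const SceneObject*>
broad_phase(const AABB& hand, const std::vector<SceneObject>& scene) {
    std::vector<const SceneObject*> candidates;
    for (const auto& obj : scene)        // hundreds of objects per query...
        if (overlaps(hand, obj.bounds))
            candidates.push_back(&obj);  // ...then per-triangle narrow phase
    return candidates;
}
```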
I suggest that you google "haptic rendering" first; there are quite a few introductory papers and lectures available online that explain the basic concepts and show the requirements for a typical system. Only then does it make sense to actually start designing something. The fact that something is "for gaming" doesn't mean it can ignore basic physics or how a typical VR/game engine works.
What you can embed in the glove/controller is closed-loop control that helps deliver the commanded amount of force in response to the user's finger movement (so that you break neither the finger nor your gears). That needs to be very fast, and going to the host would be way too slow for it. But the calculation of the forces themselves, which fingers to close, etc. has to be done at the host - you would need a very beefy embedded computer with a fast link to the host machine if you wanted to do all the calculations on board.
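As a rough illustration of that split (all names, gains and limits here are assumptions, not a reference design): the host decides what force each finger should feel; the local loop on the glove controller only makes sure that force is tracked quickly and never exceeds a hard safety limit, no matter what the host sends.

```cpp
// Sketch of a fast local force loop on the glove controller. The host sends
// a force setpoint at its own (slower) rate; this routine would be called
// from a 1 kHz timer, track the setpoint with a simple PI controller and
// enforce hard limits. Hardware/comms hooks are stubs.
#include <algorithm>

double read_force_sensor()      { return 0.0; } // N, e.g. strain gauge on the tendon (stub)
void   set_motor_current(double /*amps*/) {}    // motor driver command (stub)
double latest_host_setpoint()   { return 0.0; } // last force command from the host, N (stub)

void local_control_step_1khz() {
    static double integral = 0.0;
    const double kp = 0.8, ki = 50.0;   // illustrative gains
    const double dt = 0.001;            // 1 kHz local loop

    const double max_force   = 10.0;    // never exceed this, whatever the host says
    const double max_current = 1.5;     // protects the motor and the gearbox

    const double setpoint = std::clamp(latest_host_setpoint(), 0.0, max_force);
    const double error    = setpoint - read_force_sensor();

    integral = std::clamp(integral + error * dt, -1.0, 1.0);  // anti-windup
    const double current  = kp * error + ki * integral;

    set_motor_current(std::clamp(current, -max_current, max_current));
}
```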