Hi,
Yes, this is a fantastic idea. I was wondering if you managed to create even a rudimentary prototype (say, using a smartphone camera/app or a laptop with a webcam)? It seems to me the device will have to be cloud-connected to tap into enough computing power to handle the recognition (much like Google Goggles). Will the user be required to be connected to a smartphone, or will the glasses have everything built in?
While I applaud your efforts, I'm concerned that the relatively small amount of money being raised for the challenges ahead, as well as the relatively low price you're asking for these glasses, may not be enough. Since the prototype hardware itself should be very inexpensive, I see most of the work being in software, so it should be possible to build at least a proof of concept for next to nothing. [edit:] But even then... the costs of miniaturizing hardware can add up, and finding a manufacturer to make even a small production run could be prohibitively expensive.
For example, you could strap a smartphone to your head with the camera on and have it recognize objects using cloud-based recognition: have it stream a video feed to a server on your network that does all the grunt work, then relay the results back as text, which the app reads aloud using text-to-speech APIs. [edit:] I see in the video you are using an IP-based camera... Is it broadcasting the image to PC-based software that parses the video for recognizable objects?
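Just to illustrate how cheaply that could be demoed, here's a rough sketch in Python of the phone/laptop side of such a pipeline. To be clear, the server address and the response format (a JSON "label" field) are assumptions on my part, and the recognition server itself is not shown; this is only meant to show how little glue code the proof of concept would need.

    # Rough proof-of-concept sketch: grab webcam frames, send them to a
    # recognition server, and speak whatever label comes back.
    # Assumes a HYPOTHETICAL HTTP endpoint (placeholder address below)
    # that accepts a JPEG and returns JSON like {"label": "coffee mug"}.
    import cv2          # pip install opencv-python
    import requests     # pip install requests
    import pyttsx3      # pip install pyttsx3 (offline text-to-speech)

    SERVER_URL = "http://192.168.1.50:8000/recognize"  # placeholder

    tts = pyttsx3.init()
    cam = cv2.VideoCapture(0)   # default webcam

    try:
        while True:
            ok, frame = cam.read()
            if not ok:
                break
            # Encode the frame as JPEG and ship it off for recognition.
            ok, jpeg = cv2.imencode(".jpg", frame)
            if not ok:
                continue
            resp = requests.post(
                SERVER_URL,
                files={"image": ("frame.jpg", jpeg.tobytes(), "image/jpeg")},
                timeout=5,
            )
            label = resp.json().get("label")
            if label:
                # Read the recognized object aloud to the user.
                tts.say(label)
                tts.runAndWait()
    finally:
        cam.release()

Obviously the real work is whatever sits behind that recognition endpoint, but a hack like this would let you demo the end-to-end concept for essentially zero hardware cost.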
Anyway, it sounds a lot like a real-time video version of Google Goggles. You should talk to the people at Google about your idea; you might be able to leverage their technology, and maybe they have an API that would save you a lot of work in creating a recognition engine yourself. Or, if you have already built one, perhaps Google could use it.
I look forward to seeing what you develop. Good luck with this ambitious but much-needed project.
