| The long road to singularity |
| Picuino:
An interesting point of view: Why general artificial intelligence will not be realized. https://www.nature.com/articles/s41599-020-0494-4 I agree that general intelligence is very difficult to achieve for a mind that does not interact with reality. But we will eventually succeed in making robots that, like humans, learn by interacting with the world. |
| Picuino:
DeepMind AI learns simple physics like a baby Neural network could be a step towards programs for studying how human infants learn. https://www.nature.com/articles/d41586-022-01921-7 |
| CatalinaWOW:
For some of the implications of a machine smarter than us, you need look no further than those who are smarter than most of us. While we can argue all day about how to measure intelligence, it is clear that some people are much brighter than average. Reputedly there have been people with an IQ well into the 200s, and there are dozens in the 150 to 200 range. For all of that, the Newtons, Einsteins and their like haven't dominated society or completely overturned the world order. And these are humans with as much ability to directly affect the world around them as anyone else. An intelligence trapped in a server farm, even if it escapes to the web, has little way to interface with the world. The singularity, if it happens in the sense feared by Kurzweil and others, will come when an intelligence not just higher than ours, but dramatically higher, is combined with the ability to directly interface with the world and the ability to self-reproduce. I am fairly confident that even my grandkids won't live to see that. |
| Picuino:
Some experts believe that artificial intelligence will augment our capabilities and combine with humans in a symbiotic way. This future is presented not as dystopian, but as one in which society can advance much faster in scientific discoveries and technical developments that benefit us all. The reality will probably be that AI brings us both benefits and drawbacks; it is already doing so right now. An interesting book I read recently that deals with the subject: Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy, by Cathy O'Neil. https://www.amazon.com/Weapons-Math-Destruction-Increases-Inequality/dp/0553418831/ref=sr_1_2 |
| Zero999:
--- Quote from: Picuino on July 12, 2022, 10:57:13 am ---Some experts believe that artificial intelligence will augment our capabilities and combine with humans in a symbiotic way. This future is presented not as dystopian, but as one in which society can advance much faster in scientific discoveries and technical developments that benefit us all. The reality will probably be that AI brings us both benefits and drawbacks; it is already doing so right now. An interesting book I read recently that deals with the subject: Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy, by Cathy O'Neil. https://www.amazon.com/Weapons-Math-Destruction-Increases-Inequality/dp/0553418831/ref=sr_1_2 --- End quote --- It appears to be more political than anything else. Statistical models can accurately determine risk, even if the results are politically incorrect. Here in the UK, insurance companies used to give women cheaper car insurance than men, because women are statistically shown to drive more carefully and have fewer accidents, yet this was ruled to be sexist and insurers were forced to charge both sexes the same. The result has been women paying more, not men paying less. In the past, banks in the US have charged higher rates and avoided offering mortgages and insurance in areas where it's not profitable to do so. They have been accused of racism because such areas are dominated by ethnic minorities, but I doubt that was the motive; it seemed to be more about not wanting to do business in areas which aren't profitable. The problem with machine learning is that it's opaque. With a hand-coded algorithm, it's possible to explain to someone why they were refused a mortgage, but this isn't possible when the decision is the product of machine learning. There might be an argument for excluding ethnicity, religion, sex, etc. from the training data, but otherwise I can't see any good coming from tweaking models to give politically correct results. I'm very cynical about a singularity emerging any time soon. As far as I'm aware, the current generation of AI models aren't able to distinguish even simple things like cause and effect. |
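To illustrate the opacity point in the post above: a hand-coded rule can report exactly which criterion triggered a refusal, while a trained model only emits a score. The Python sketch below is purely hypothetical; the features, thresholds and data are invented, and scikit-learn's GradientBoostingClassifier merely stands in for whatever model a lender might actually use.

```python
# Hypothetical sketch: explainable hand-coded rule vs. opaque learned model.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

def rule_based_decision(income, debt, missed_payments):
    """Hand-coded rule: every refusal carries an explicit reason."""
    reasons = []
    if debt / income > 0.4:
        reasons.append("debt-to-income ratio above 40%")
    if missed_payments > 2:
        reasons.append("more than two missed payments")
    return ("refused", reasons) if reasons else ("approved", [])

# Synthetic applicants: columns are income, debt, missed payments (all made up).
rng = np.random.default_rng(0)
X = np.column_stack([
    rng.uniform(20_000, 120_000, 500),
    rng.uniform(0, 60_000, 500),
    rng.integers(0, 6, 500),
])
# Labels derived from the same rule, just so the model has something to fit.
y = ((X[:, 1] / X[:, 0] <= 0.4) & (X[:, 2] <= 2)).astype(int)

model = GradientBoostingClassifier().fit(X, y)

# The rule explains itself; the model only returns a probability.
print(rule_based_decision(45_000, 25_000, 3))
print(model.predict_proba([[45_000, 25_000, 3]])[0, 1])
```

Feature importances or post-hoc tools can approximate an explanation after the fact, but the point in the post stands: the decision path of a boosted ensemble cannot be read off the way the rule above can.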