The long road to singularity
SiliconWizard:
--- Quote from: Miyuki on July 13, 2022, 08:32:25 am ---
--- Quote from: RoGeorge on July 13, 2022, 07:23:02 am ---To make an AI you'll need a pile of data to train the AI. Whatever bias or motivation is in that training pile of data, your AI will manifest it.
For example, look at the disclaimer in this DALL·E mini demo webpage:
--- Quote ---While the capabilities of image generation models are impressive, they may also reinforce or exacerbate societal biases. While the extent and nature of the biases of the DALL·E mini model have yet to be fully documented, given the fact that the model was trained on unfiltered data from the Internet, it may generate images that contain stereotypes against minority groups. Work to analyze the nature and extent of these limitations is ongoing, and will be documented in more detail in the DALL·E mini model card.
--- End quote ---
Source: https://huggingface.co/spaces/dalle-mini/dalle-mini
--- End quote ---
That's a little worrying: who will filter the data, and how?
There can be a blurry line between stereotypes/racism and unpleasant facts, as Zero999 has mentioned and shown many times, for example when applied to things like crime prediction.
--- End quote ---
Yep, definitely. But while actual people can get called on this (especially if they are white and male these days), nobody (at least not in the next few decades) will accuse AI!
AI is progress, it's all based on actual data, right? Machines are supposed to be much more neutral compared to humans! That's the beauty of it all! :-DD
The legal status of "AI" may change over time and get closer to that of humans, so an AI could eventually be accused of racism itself, but I don't see that happening for at least a few more decades.
And of course it will bring a ton of funny and interesting issues.
Zero999:
--- Quote from: SiliconWizard on July 13, 2022, 07:01:40 pm ---
Yep, definitely. But while actual people can get called on this (especially if they are white and male these days), nobody (at least not in the next few decades) will accuse AI!
AI is progress, it's all based on actual data, right? Machines are supposed to be much more neutral compared to humans! That's the beauty of it all! :-DD
The legal status of "AI" may change over time and get closer to that of humans, so an AI could eventually be accused of racism itself, but I don't see that happening for at least a few more decades.
And of course it will bring a ton of funny and interesting issues.
--- End quote ---
The problem is that those who develop and train AI will be accused of sexism or racism if the results it generates hurt people's feelings. If a certain ethnic group is more likely to commit a certain type of crime, then this will show up in the data used to train the AI and thus affect the results. The model is working: it takes all the data given to it and generates an accurate result. It's just that many people don't like to hear the truth. Of course, it won't tell us why said ethnic group is more likely to commit a certain type of crime. Figuring that out is more complicated, and even then the objective truth might offend some.
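To illustrate the mechanism in a neutral, synthetic setting, here is a toy sketch (assuming scikit-learn and NumPy; all data below is made up): a model trained on data in which group membership correlates with the label will reproduce that correlation in its predictions, whether or not the correlation is causal.
--- Code: ---
# Toy "bias in, bias out" demo on synthetic data: if a group indicator
# correlates with the label in the training set, the model learns to
# use it, regardless of whether the link is causal.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)                        # synthetic group flag
# The label rate differs by group in the (synthetic) training data:
y = rng.random(n) < np.where(group == 1, 0.30, 0.10)

X = group.reshape(-1, 1).astype(float)
model = LogisticRegression().fit(X, y)

# The model faithfully reproduces the disparity it was shown:
print(model.predict_proba([[0.0]])[0, 1])  # ~0.10 for group 0
print(model.predict_proba([[1.0]])[0, 1])  # ~0.30 for group 1
--- End code ---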
TimFox:
--- Quote from: Picuino on July 11, 2022, 06:43:09 pm ---I think right now there is a race to build the most powerful AI. Rather, you could say there is a war between the US and China to design the most powerful AI. That will sooner or later lead to intelligence greater than ours. It has already been achieved in some specific fields (chess, Go, Jeopardy!), and general intelligence will be achieved in a few decades, if not years.
It is not something we can stop, and other more "human" goals, such as getting food and energy for everyone, are relegated to the background.
--- End quote ---
At least one of my human classmates was able to beat Watson at Jeopardy!:
https://www.cbsnews.com/news/rep-rush-holt-beats-watson-in-jeopardy-challenge/
When he was elected to the US House of Representatives, he doubled the number of PhD physicists in the chamber.
SiliconWizard:
--- Quote from: Zero999 on July 13, 2022, 08:31:03 pm ---
The problem is that those who develop and train AI will be accused of sexism or racism if the results it generates hurt people's feelings.
--- End quote ---
Call me when that happens. I haven't seen anyone who trains ML systems held liable for any problem caused by AI so far. Which is part of why some people/companies like it so much.
This is a crucial problem with "AI" that we have been discussing here on a regular basis.
And liability is extremely hard to handle this way anyway: ML systems, at least those trained on very large datasets, become almost impossible to predict, and the datasets are so large that nobody can really be held responsible. The data is not "hand-picked" by engineers, and the datasets are largely out of control, especially the very large ones dealing with, say, people's behavior. So this becomes intractable, and any direct liability is very easy to circumvent.
That said, sexism and racism may be hot enough topics these days that we may see some confrontation emerge from this. Which would be uh. Interesting. Deaths caused by some AI-based system? Who cares? Just some casualties, and it still does better than humans, right? But racism? That may very well be pushing it too far. We'll see. :popcorn:
Picuino:
One of the problems of machine learning currently being worked on is precisely the large amount of data needed to train a neural network. One solution is to use pre-trained neural networks, which learn much faster from a small dataset that you can curate manually. GATO is an attempt in this direction. But we are back to the same problem of knowing how to pre-train the network with unbiased data.
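As a minimal sketch of that fine-tuning approach (assuming PyTorch and torchvision are available; the ResNet-18 backbone and the tiny random dataset below are just stand-ins):
--- Code: ---
# Transfer learning sketch: reuse a backbone pretrained on a large
# generic dataset and fine-tune only a small new head on a small,
# hand-curated dataset.
import torch
import torch.nn as nn
from torchvision import models

# Start from a network pretrained on ImageNet.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the pretrained feature extractor; its weights stay fixed.
for p in model.parameters():
    p.requires_grad = False

# Replace the final classifier with a fresh head for our own task
# (3 classes here, purely illustrative).
model.fc = nn.Linear(model.fc.in_features, 3)

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Stand-in for a small curated dataset: 8 RGB images, 224x224.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 3, (8,))

model.train()
for epoch in range(5):
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
    print(f"epoch {epoch}: loss = {loss.item():.3f}")
--- End code ---
Only the small new head is trained, which is why a handful of curated examples can be enough.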
Another current problem with neural networks is that they cannot explain why they have made a particular decision. According to a Scientific American article I read, efforts are being made to solve this problem, although a solution still seems a long way off.
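One family of attempts at such explanations is attribution methods. Here is a minimal vanilla-gradient saliency sketch (again assuming PyTorch and torchvision; the random input is a stand-in for a real image), which highlights the input pixels whose change would most affect the network's decision:
--- Code: ---
# Vanilla gradient saliency: one of the simplest attempts at
# answering "why did the network decide this?".
import torch
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

# Stand-in input image; requires_grad lets us ask how the output
# score depends on every pixel.
image = torch.randn(1, 3, 224, 224, requires_grad=True)

score = model(image).max()   # score of the top predicted class
score.backward()             # d(score)/d(pixel) for every pixel

# Pixels with large gradient magnitude influenced the decision most.
saliency = image.grad.abs().max(dim=1).values   # shape (1, 224, 224)
print(saliency.shape, float(saliency.mean()))
--- End code ---
Such maps give a rough picture of where the network "looked", but they are still far from a human-readable justification.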