
eevBLAB 106 - ChatGPT AI Has Changed EVERYTHING


Neutrion:

--- Quote from: Bicurico on March 16, 2023, 07:53:11 pm ---Call center and front desk receptionists will indeed be replaced by AI.
So what? Disruptive technologies have done this to many trades in the past. Does anyone remember Kodak and analogue photography?

If you are not willing to allow this, then you cannot allow technological progress.

--- End quote ---


First help desk receptionists and factory workers, then teachers and translators, and then slowly but surely everything.
Or have you seen any kind of declaration or specification of which jobs or human qualities are meant to be left untouched? The aim is clearly to excel everywhere it is technologically possible. I don't see any restricted area. (The military is probably working hard on actual Terminators as well.)
People start to feel uneasy, try to calm themselves, and point out areas where the AI is not doing well yet.
Is it a good strategy to wait and hope that some areas might be spared, with no idea which ones they will be? For religious people, maybe.

For those who are not techno-religious, tech is there to solve problems, not just for fun. This thing will definitely create more problems than it solves, especially the way it is being implemented.

Decades ago, especially during the industrial revolution, people had actual problems to solve and were happy to have solutions. From there we got to the point where much of our technology exists to solve the problems created by technology (CO2, pollution, etc.). And now we end up here arguing about how the great new tech may have some positive aspects as well, while it is really difficult to find them.
To me this is a sign that we may have reached the point where technological advancement should no longer be the number one priority for humanity. Kind of the end of the curve (or of the sine wave :) ).
It is like realizing that the wave you were riding is not there anymore, so you have to look for another one.



So yes, technological progress is not a natural law, nor is it an end in itself. And if I don't see any logical way this would make things better, then I think we could survive without progress, or at least without progress in this field. But I clearly see that we won't actually survive if this process continues as expected.

It is amazing to see what a religious taboo technological advancement has become: "Even if we face extinction, we have to 'progress', because that is the way to go!!!"


Remember, in the fifties it seemed obvious to use nuclear bombs even for creating harbours, for mining, etc., because that was the state of the art. And then we got a bit more information about the broader effects.





--- Quote from: Bicurico ---While many jobs will be replaced, though, new ones will be created. And the loss might be compensated by the population decrease in western society.


Why should students learn? For the same reason you learn math when you could just use your CAS-enabled calculator, or learn how to read and write when you could just as well use a text reader or autocorrect. Why learn foreign languages if you can use DeepL?

For me these are tools and you need to learn how to use them. Like FEA software: anyone can run a simulation, but it takes a trained engineer to interpret the results, to make sure the input and output are correct, and to decide what to do based on the results.

If you don't know how to program, you won't build any application using ChatGPT. But if you are learning how to code, ChatGPT will be a great tutor. And if you already know how to code, it will help you find a solution when you are stuck or lazy. It works great for that.

--- End quote ---

You are confusing the current state with the near future. Everything you describe is valid now, but maybe in three years' time you will not need any programming knowledge at all.
No one, not even the developers, is able to tell you what this thing will and will not be capable of.
But we definitely know that such large and fast changes in society almost always end in disaster.


--- Quote from: Bicurico ---Again, the danger is with who owns the AI. Imagine a government uses AI to automatically monitor and interpret what people are doing. What if you are flagged due to an error? How can you clear your name?

Your comparison with China is a good example: it is not the technology that is bad. It is the abuse it can be put to.

--- End quote ---

If the owner is not able to tell you exactly what is going on inside, then short-term private interests can be just as dangerous as government ownership. Especially since we clearly have no clue what it would cause even if we only suddenly got rid of those jobs that are at risk here and now.



--- Quote from: Bicurico ---On the other hand, could an AI-run government finally be free of corruption and lobbying? Wouldn't that be a better government, making decisions for the actual benefit of the people, planning for the long term and not worrying about reelection?

Could we have different AI engines running in parallel? We could elect the one we like the most.

I am really unable to further discuss this, as we cannot really anticipate what the future of AI will bring.

What really worries me is that the world has private companies that have more funds than whole countries.

Regards,
Vitor

--- End quote ---

Yes, an AI-run government to avoid corruption... May I suggest a human-less planet to avoid pollution?

"as we cannot really anticipate what the future of AI will bring"

And this in itself brings complete instability into the system, which makes any planning, whether private or across broader society, impossible. No one can now be sure which jobs cannot be made useless in the near future. What a loss of motivation that must be for any student who would like to prepare for the future.

Governments can be as problematic as private companies, as basically only one government in the world is truly democratic.

bob808:

--- Quote from: Neutrion on March 16, 2023, 04:31:19 pm ---I also watched this:

To watch a smart guy talking about positive solutions.
Well, he is some tech-optimistic Stanford professor, and still.
His idea: well, we have to teach children to think creatively, to work with the AI.  |O
And then? When the AI becomes more creative, what happens then?

Next step: try to teach the children properly to pray to the AI so as not to be eradicated, and to get some extra points and food...

--- End quote ---

I don't think we have the time to polish things like we did until, say, the 2010s at most. Past that point things began snowballing, and we just don't have the time for much. There's a lot happening at the same time, and this will only get worse from now on.

dietert1:
Stanford's Dr. Li Jiang appears to be rather optimistic, and I liked the way he talks about the dreams of children.
After viewing the video I remembered Stanisław Lem's "Solaris", a science fiction story in which he invented an infant alien existence that plays nasty games with its human researchers. And the AI taking control in another famous story by Arthur C. Clarke. Probably the younger generation doesn't know those stories.

Regards, Dieter

magic:
Solaris researched its human researchers, showing them stimuli that they reacted to and watching their reactions :popcorn:

SiliconWizard:
That's so cute!
