Author Topic: eevBLAB 106 - ChatGPT AI Has Changed EVERYTHING  (Read 63722 times)

0 Members and 1 Guest are viewing this topic.

Offline tszaboo

  • Super Contributor
  • ***
  • Posts: 7495
  • Country: nl
  • Current job: ATEX product design
Re: eevBLAB 106 - ChatGPT AI Has Changed EVERYTHING
« Reply #125 on: January 27, 2023, 01:46:01 pm »
Actually, I was thinking about the possible ways this could change our society into something better (I know, right?).
One of the biggest issues IMHO we are facing now is the post-truth society. Fake news and political polarization are a huge issue, and people can live their lives in an echo chamber. The political right and the left both seem to have taken an anti-scientific stance, where they just don't follow scientific evidence. So I asked our robot (which wasn't overloaded tonight, only very slow) if it can do this. It's really reluctant to say that it can detect fake news. And I know the training data ended in September 2021. But I cannot help but wonder if this can be used to fix this issue. Or if it will make things even worse, since the repeated propaganda pieces will train the model with bad data, and it has no way of distinguishing them from real news.
Basically as it has been for the past 3000 years, at least. For an average person with only minor nudges in either direction. With people equally deceiving themselves that “in the past it was better”. :popcorn:
Ok, take it this way. The effort, money and resources required to spread misinformation have been reduced so far, and the return on that effort is so big, that it is always worth doing. As a stable constant, 1-3% of people are narcissists or psychopaths, and their goal will always be something other than cooperation, truth, or finding solutions to problems. So there is a big payout to misinformation: you can literally ruin a country by funding media outlets that will produce 24/7 nonsense, and create a large "virtual following" for them to normalize it.
You can change election results with misinformation, and state-capture entire countries. I am from one of those countries.
One of the biggest issues IMHO we are facing now is the post-truth society.

I don't think there has ever been a time when humanity had more access to truth than it does today. The real issue is that it's mixed up with a lot of distortions. But we have access to more truth than we ever had.
That is why you see the discrepancies on both the left and right of the political spectrum: up until the internet went mainstream, they could sell their bullshit more easily.
And anyway, people don't deal that well with truth.
Well, controversy creates clicks. Then there are search bubbles, and being in groups sharing this nonsense without fact checking.
 

Offline bob808

  • Frequent Contributor
  • **
  • Posts: 281
  • Country: 00
Re: eevBLAB 106 - ChatGPT AI Has Changed EVERYTHING
« Reply #126 on: January 27, 2023, 03:43:01 pm »
That is just a power struggle that will always happen due to our nature and the environment. Humans are going to try to find new ways of getting that power, in any framework you set them in.
 

Offline Neutrion

  • Frequent Contributor
  • **
  • Posts: 305
  • Country: hu
Re: eevBLAB 106 - ChatGPT AI Has Changed EVERYTHING
« Reply #127 on: January 27, 2023, 06:02:28 pm »
Actually, I was thinking about the possible ways this could change our society into something better (I know, right?).
One of the biggest issues IMHO we are facing now is the post-truth society. Fake news and political polarization are a huge issue, and people can live their lives in an echo chamber. The political right and the left both seem to have taken an anti-scientific stance, where they just don't follow scientific evidence. So I asked our robot (which wasn't overloaded tonight, only very slow) if it can do this. It's really reluctant to say that it can detect fake news. And I know the training data ended in September 2021. But I cannot help but wonder if this can be used to fix this issue. Or if it will make things even worse, since the repeated propaganda pieces will train the model with bad data, and it has no way of distinguishing them from real news.

Too funny. The same certain peeps in the middle who are poisoning scientific research with 'science', as it were, will end up pumping money and nefarious efforts into this contraption as well. Think Wiki on steroids.

 :popcorn:
Exactly. And the masses will believe that now the ultimate truth is coming from that machine, and everything else is a lie.
On the other hand, those who have been following the story of the double-slit experiments and their philosophical aspects know that, scientifically, there is actually no such thing as one final objective truth.

I can't even see what the general aim of a strong AI is, apart from "it might have some good uses". As if it were intended as a God or religion substitute for some. Who is going to double-check the results of the ultimate wisdom? Or what is 42?
Quote from: golden_labels
Machine learning solutions permit finding answers to otherwise computationally expensive problems. After understanding how specific ML models work, they may become the next step after current statistics. There is strong coöperation between machine learning researchers and neuropsychologists, boosting knowledge in both branches. Even the current meme smortnets offer philosophical insights, which may be as crucial and influential as those which led to Galileo and Darwin.
Machine learning on some specific tasks and a general AI replicating human thinking or being better at it (or letting people believe it does) are not the same thing.

And we are unleashing even these not-so-strong AIs on the general public, as is happening now. And yes, mostly driven by short-term interests.


I can't even see what the general aim of a strong AI is, apart from "it might have some good uses". As if it were intended as a God or religion substitute for some. Who is going to double-check the results of the ultimate wisdom? Or what is 42? (…)
Quote from: golden_labels
That entire fragment is so all over the place and incoherent that it's hard to understand. But it gives me a strong impression that it's written as if development were a matter of someone's choice.

If you create a complex machine and use its results without double-checking them with humans, then you have created a God or a Totem which you can only believe in. But if you do double-check all the results, what is the point of the machine?
Development (mostly only in a technical form) is happening because human society believes that it is a good thing, and that it makes our life better and happier; it is not a law of nature.



Sometimes it would also be nice to ask what the actual aim of technical advancement is: what do we want to achieve? Is it achievable through technical advancement? Did we get closer to what we wanted in the last... 50 years? Or 150?
Quote from: golden_labels
Ponder on that more, taking advantage of not having to pull a plough to marginally offset your hunger and even being aware of such abstract issues.
But why didn't we want to pull the plough? To suffer less, I guess.
Does the manager who works 10 hours a day and then goes to the gym (to replicate the plough-pulling on some machine :-DD) feel any better?
The answer is: we don't know. He might not even have more free time than those plough-pullers back then.
The belief that technical development is the good thing which is making us happier cannot be backed by science. All we know is that we live longer, that in our current state of mind we don't want to give up certain modern comforts (most of us, at least) the way people did 100 years before, and that we screwed up the planet in about 100 years.
Anything more is just a belief.
Most current science does not back the idea that you need to live in a technically developed environment to be happy. Neurologically, if I remember right, maybe the happiest people on Earth are some monks who spend many hours a day under a waterfall. (Without watching cat videos on their smartphones.)

Actually, I was thinking about the possible ways this could change our society into something better (I know, right?).
One of the biggest issues IMHO we are facing now is the post-truth society. Fake news and political polarization are a huge issue, and people can live their lives in an echo chamber. The political right and the left both seem to have taken an anti-scientific stance, where they just don't follow scientific evidence. So I asked our robot (which wasn't overloaded tonight, only very slow) if it can do this. It's really reluctant to say that it can detect fake news. And I know the training data ended in September 2021. But I cannot help but wonder if this can be used to fix this issue. Or if it will make things even worse, since the repeated propaganda pieces will train the model with bad data, and it has no way of distinguishing them from real news.
Quote from: golden_labels
Basically as it has been for the past 3000 years, at least. For an average person with only minor nudges in either direction. With people equally deceiving themselves that “in the past it was better”. :popcorn:
But the real answer is that we don't know whether people in the past were generally happier or not.
So those who assume that is bullshit, and that life is better now, are also just following their gut feelings.
Possibly this is the main religion today, "Developmentism": around since the Renaissance, but dominant mostly since the end of the Second World War.
But neither philosophy nor science backs this idea. If we actually check the environmental science, we are not developing but degrading, taking the whole system into account.





« Last Edit: January 27, 2023, 06:05:38 pm by Neutrion »
 

Offline bob808

  • Frequent Contributor
  • **
  • Posts: 281
  • Country: 00
Re: eevBLAB 106 - ChatGPT AI Has Changed EVERYTHING
« Reply #128 on: January 27, 2023, 10:55:52 pm »
Don't you think it's objectively and clearly way better to live today than a hundred years ago? Penicillin hadn't even been discovered. People who had lots of wealth had to accept the loss of loved ones to simple infections. All the money they had couldn't save them.
I don't see this as something particularly hard to agree on. There's no question we're way, way, way better off than before.
Even if somehow people were "happier" back then, something extremely subjective anyway, would that extra "happiness" be worth it in balance against all the people who didn't die because of the advances we have today?
 

Offline SiliconWizard

  • Super Contributor
  • ***
  • Posts: 14698
  • Country: fr
Re: eevBLAB 106 - ChatGPT AI Has Changed EVERYTHING
« Reply #129 on: January 27, 2023, 11:48:55 pm »
Don't you think it's objectively and clearly way better to live today than a hundred years ago? Penicillin hadn't even been discovered. People who had lots of wealth had to accept the loss of loved ones to simple infections. All the money they had couldn't save them.

That's mixing fundamentally objective and subjective concepts. It leads nowhere.

I don't see this as something particularly hard to agree on. There's no question we're way, way, way better off than before.

Define better off. The human population has literally exploded, that is an objective measurement. In that regard, it may mean that humans are better off as a species.
But it has grown so much that we are now pondering on how to curb it.

Even if somehow people were "happier" back then, something extremely subjective anyway, would that extra "happiness" be worth it in balance against all the people who didn't die because of the advances we have today?

Would you prefer living a short, but very happy life, or a very long, unhappy, dull life? Note that everyone is very likely to have a completely different answer to this question.
 

Offline bob808

  • Frequent Contributor
  • **
  • Posts: 281
  • Country: 00
Re: eevBLAB 106 - ChatGPT AI Has Changed EVERYTHING
« Reply #130 on: January 28, 2023, 12:00:00 am »
You've kind of taken me by surprise. Also, I did give an example with medicine. Would you accept a higher risk for your loved ones for some totally subjective thing like "more happiness"?
Is the transistor a net negative for humanity's "happiness"? Also, in which part of the world exactly? US? UK? India? Everywhere?
 

Offline EEVblogTopic starter

  • Administrator
  • *****
  • Posts: 37934
  • Country: au
    • EEVblog
Re: eevBLAB 106 - ChatGPT AI Has Changed EVERYTHING
« Reply #131 on: January 28, 2023, 01:59:58 am »
And anyway, people don't deal that well with truth.

Ultimately, people will not accept AI telling the truth.
 
The following users thanked this post: BrianHG

Offline SiliconWizard

  • Super Contributor
  • ***
  • Posts: 14698
  • Country: fr
Re: eevBLAB 106 - ChatGPT AI Has Changed EVERYTHING
« Reply #132 on: January 28, 2023, 02:10:34 am »
And anyway, people don't deal that well with truth.

Ultimately, people will not accept AI telling the truth.

At least, hopefully not.
 

Offline adx

  • Frequent Contributor
  • **
  • Posts: 279
  • Country: nz
Re: eevBLAB 106 - ChatGPT AI Has Changed EVERYTHING
« Reply #133 on: January 28, 2023, 03:49:14 am »
I don't get the "standard of living" argument. It seems to be based on rosy-eyed support for 20th-century "science (and money) begat everything" elitist parochialism, with differing simplistic definitions depending on the patriarchy involved (medical, economists, etc.). Rich people still lose loved ones to simple infection, perhaps even as a result of overprescription of penicillin. Or worry longer on tenterhooks for their loved ones to die, simply by virtue of longer, more reliable individual lives. It's extremely subjective. Animals that were domesticated for companionship and general utility became 'must eats' in mechanised slaughter fests, then became valued for their sentience again. It's extremely variable, and not a single shift to better (or worse).

Having said that, medicine does seem to be a no-brainer for an example of general improvement. I was thinking about this a few days ago, and I'll also add digital cameras and lasers to that list.

But being conned into believing that a 9 to 9 job to not even cover rent is "improved", that's a hard sell. Life, for humans as an apex predator, used to be relatively easy (if not in a carefree sense). Something else has taken over - is it alive?

I wonder what the AI entities will think about this, when determining our future.
 

Offline golden_labels

  • Super Contributor
  • ***
  • Posts: 1262
  • Country: pl
Re: eevBLAB 106 - ChatGPT AI Has Changed EVERYTHING
« Reply #134 on: January 28, 2023, 06:56:42 am »
Machine learning on some specific tasks and a general AI replicating human thinking or being better at it (or letting people believe it does) are not the same thing.

And we are unleashing even these not-so-strong AIs on the general public, as is happening now. And yes, mostly driven by short-term interests.
“Artificial intelligence” is a term popularized in the middle of the 20th century, based on simplistic concepts now long considered outdated. A product of beliefs, mostly rooted in religion, which have not been accepted as part of knowledge for decades. Between the late '50s and the early '90s the term entered popular culture: not only in its most shallow version, devoid of any philosophical depth, but also ignorant of any later advancements. In that form it survived until now, brought back to a wide audience as a marketing tool to invoke purely emotional responses of “modern”, “technology” and “better”.

Among people actually involved or interested in the subject, it is used only in informal (“fuzzy”) or whimsical manner. Neither the archaic notion of “intelligence” nor attempts to follow anthropocentric ideas are now a part of the research. If precision is required, terms more accurately describing the technology are used. Machine learning algorithms are usually the subject. This is what received enormous attention in recent years, when GPU computing allowed smortnets to grow in complexity.

If you create a complex machine and use its results without double-checking them with humans, then you have created a God or a Totem which you can only believe in. But if you do double-check all the results, what is the point of the machine?
Development (mostly only in a technical form) is happening because human society believes that it is a good thing, and that it makes our life better and happier; it is not a law of nature.
Quite the opposite, I would say. Development doesn’t happen “because”. It seems it just happens: a consequence of humans being curious beings that, like every living thing, maximize survival chances. With that last part not being a choice either, if considered on a population scale.

But why didn't we want to pull the plough? To suffer less, I guess. (…)
You are arguing against sarcasm now.
People imagine AI as T1000. What we got so far is glorified T9.
 

Offline tszaboo

  • Super Contributor
  • ***
  • Posts: 7495
  • Country: nl
  • Current job: ATEX product design
Re: eevBLAB 106 - ChatGPT AI Has Changed EVERYTHING
« Reply #135 on: January 28, 2023, 11:53:23 am »
Don't you think it's objectively and clearly way better to live today than a hundred years ago? Penicillin hadn't even been discovered. People who had lots of wealth had to accept the loss of loved ones to simple infections. All the money they had couldn't save them.
I don't see this as something particularly hard to agree on. There's no question we're way, way, way better off than before.
Even if somehow people were "happier" back then, something extremely subjective anyway, would that extra "happiness" be worth it in balance against all the people who didn't die because of the advances we have today?
Ok, fair enough: today is better than any time in history, if you look at it from very far away. How about compared to 15 years ago? People don't talk to each other anymore, everyone is looking at their phone all the time, dating is ruined, inflation is out of control, and many basic things are becoming unaffordable to the middle class. How is this better? Just look at some basic statistics. People are not getting married because dating apps ruined everything, and they are told that career is more important than family. More people than ever suffer from some kind of mental health problem. The democracy index has been in decline since 2008 or so, worldwide. Sure, we invented absolutely mind-boggling things like mRNA vaccines, and then some idiots are taking horse dewormers or saying we should invent some sort of internal disinfectant.
 

Offline bob808

  • Frequent Contributor
  • **
  • Posts: 281
  • Country: 00
Re: eevBLAB 106 - ChatGPT AI Has Changed EVERYTHING
« Reply #136 on: January 28, 2023, 01:15:01 pm »
I think the general bullshit index has stayed constant throughout human history. We just get to see it more with all this turbo-improved access to information we have today. If people knew what kind of crap happened throughout the ages... Most of it has been taken out of (documented) history by the victors.
These days humanity is only getting to know itself better. And as Dave mentioned, humanity doesn't accept it. What irks us is the fact that we know about it. But we all know it has always happened as it does today.
I'm not saying we're not a few steps away from total collapse. I'm just saying that your average Joe is way better off today than in any hundred-year chunk of the past. You get fleas, you can buy a spray. You get a headache, you pop a pill and forget about it. Want a varied diet? Accessible in many parts of the planet.
Also, in this discussion we need to take into account our goals as a species. Why are we doing all that we're doing? To wait for our deaths comfortably? To build something?
I mean, you could maybe argue from the point of view of someone counting his days, with no clear purpose. I mean, sure, I guess. Office work is pretty shitty compared to just staying out in a field. Provided you have food and shelter, I guess staring into the distance might seem more appealing than staring at spreadsheets.
 

Offline Neutrion

  • Frequent Contributor
  • **
  • Posts: 305
  • Country: hu
Re: eevBLAB 106 - ChatGPT AI Has Changed EVERYTHING
« Reply #137 on: January 28, 2023, 02:58:11 pm »
Machine learning on some specific tasks and a general AI replicating human thinking or being better at it (or letting people believe it does) are not the same thing.

And we are unleashing even these not-so-strong AIs on the general public, as is happening now. And yes, mostly driven by short-term interests.

Among people actually involved or interested in the subject, it is used only in informal (“fuzzy”) or whimsical manner. Neither the archaic notion of “intelligence” nor attempts to follow anthropocentric ideas are now a part of the research. If precision is required, terms more accurately describing the technology are used. Machine learning algorithms are usually the subject. This is what received enormous attention in recent years, when GPU computing allowed smortnets to grow in complexity.

Well, this chatbot and the artistic ones are following exactly anthropocentric ideas. And although they may not be strong AI, that doesn't make them less dangerous.
After years of misery the EU has only just started to think about FB algorithms, which are far simpler stuff, and still their effects were not foreseen by society.

Quote from: golden_labels
If you create a complex machine and use its results without double-checking them with humans, then you have created a God or a Totem which you can only believe in. But if you do double-check all the results, what is the point of the machine?
Development (mostly only in a technical form) is happening because human society believes that it is a good thing, and that it makes our life better and happier; it is not a law of nature.
Quite the opposite, I would say. Development doesn’t happen “because”. It seems it just happens: a consequence of humans being curious beings that, like every living thing, maximize survival chances. With that last part not being a choice either, if considered on a population scale.

It may seem so, but it doesn't. If it were like that, we would still be developing bioweapons on a mass scale, just because we can; actually, we could do it much better than before. There was also a huge amount of ridiculous nuclear stuff in the fifties. But society reacted and curbed these things massively.
Most people also don't do jobs they completely disagree with, or see as harmful. Or if they do, they are not really effective at them.
I see this as one of the main reasons China is the best in non-secret mass surveillance technologies.
I don't think that many Western engineers are keen on working on this, having some idea of the implications.
In the Middle Ages many advancements were held back by the worldview of society. When that changed, progress became faster.
If people focused on what makes them really happy, and were a bit more methodical about it, the need for new gadgets could slow down, and development with it. Ideas about the four-day working week already point in this direction.

In the sixties, when environmental problems became obvious, people started to work on renewable energies, and mostly not the people who wanted to make a fast profit.
This is why I said that the timeframe is the problem. If things happen too fast, these good mechanisms won't have time to get through.

Curiosity and experiments do not necessarily lead to the maximisation of survival chances.
Evolutionarily, we may actually have done the opposite during our "great" development age.
Do not give monkeys a hand grenade to experiment with.



And anyway, people don't deal that well with truth.

Ultimately, people will not accept AI telling the truth.

At least, hopefully not.
I am not that optimistic. There are masses of people without critical thinking out there.
Also, it is hard to tell the truth if you can't even define it.

But the great thing about the delayed-choice quantum eraser experiment is that now, not just philosophically on the macro scale but also on the atomic scale, it is obvious that the deterministic, materialistic, or developmentist :) world view won't lead us anywhere. It is like when it was first discovered that the world is round. Kind of last-second information telling us to step on the brake. The message just hasn't got through yet.
« Last Edit: January 28, 2023, 03:00:01 pm by Neutrion »
 

Offline bob808

  • Frequent Contributor
  • **
  • Posts: 281
  • Country: 00
Re: eevBLAB 106 - ChatGPT AI Has Changed EVERYTHING
« Reply #138 on: January 28, 2023, 03:06:23 pm »
I don't think that's settled yet:


 

Offline SiliconWizard

  • Super Contributor
  • ***
  • Posts: 14698
  • Country: fr
Re: eevBLAB 106 - ChatGPT AI Has Changed EVERYTHING
« Reply #139 on: January 29, 2023, 12:09:39 am »
And anyway, people don't deal that well with truth.
Ultimately, people will not accept AI telling the truth.
At least, hopefully not.
I am not that optimistic. There are masses of people without critical thinking out there.
Also, it is hard to tell the truth if you can't even define it.

I'm not either. For most people, the truth is whatever people that have higher authority than them have defined.
 

Offline EEVblogTopic starter

  • Administrator
  • *****
  • Posts: 37934
  • Country: au
    • EEVblog
Re: eevBLAB 106 - ChatGPT AI Has Changed EVERYTHING
« Reply #140 on: January 29, 2023, 01:14:43 pm »
And anyway, people don't deal that well with truth.
Ultimately, people will not accept AI telling the truth.
At least, hopefully not.

Depends on the topic.
I can guarantee that politicians will not allow AI to tell the truth about politics; they will ban it before that happens. The entire business of politics is built on the perception of "truth", not truth itself.
 

Online Bicurico

  • Super Contributor
  • ***
  • Posts: 1725
  • Country: pt
    • VMA's Satellite Blog
Re: eevBLAB 106 - ChatGPT AI Has Changed EVERYTHING
« Reply #141 on: January 29, 2023, 04:49:13 pm »
ChatGPT is one of several AI tools that is/will be available in 2023.

It is a disruptive technology and it will change humanity over time, just like the mobile phone or the internet.

It will replace many jobs: from help desks to phone services, from translators to general content generation.

I have no doubt about that.

As a University teacher, I have scheduled a meeting with all my department colleagues to explain to them what ChatGPT is and what will change this semester: I will actively introduce the students in my class (Programming, in a non-IT course) to ChatGPT, and I am giving my colleagues a head start to prepare for the consequences.

There are 3 possible reactions a University can have to ChatGPT:

1) Simply ignore its existence
2) Forbid its use
3) Embrace it

Those teachers who choose 1 will have all (!) students using ChatGPT to aid them in writing all sorts of essays, reports, etc. They will be scored for a document written by software...
Those who try to forbid its use will be in the same situation as the former group, because they won't be able to detect whether a text was written by ChatGPT. I have generated articles about random technical subjects and then submitted them to plagiarism software, and the texts were considered 100% genuine. The students I asked about ChatGPT all knew about it, and some even tested whether ChatGPT would produce the same text for the same input made by different students (accounts and computers). The results were different texts! This means that you can give the same subject to several groups and they will all hand in different texts, all generated independently by ChatGPT.
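That nondeterminism is expected: these models sample the next token from a probability distribution rather than always taking the most likely one. A minimal sketch of temperature sampling (a generic illustration, not OpenAI's actual implementation; the function name and toy logits are made up):

```python
import math
import random

def sample_with_temperature(logits, temperature=1.0):
    """Pick one index from unnormalized scores (logits).

    Temperature > 0 rescales the scores before the softmax:
    higher values flatten the distribution, so repeated calls
    with identical input can return different tokens.
    """
    scaled = [l / temperature for l in logits]
    peak = max(scaled)  # subtract the max for numerical stability
    weights = [math.exp(l - peak) for l in scaled]
    return random.choices(range(len(logits)), weights=weights)[0]

# Toy example: three candidate "next words" with different scores.
logits = [2.0, 1.0, 0.1]
tokens = ["texts", "words", "answers"]
print(tokens[sample_with_temperature(logits, temperature=0.8)])
```

Run the last two lines a few times and the chosen word varies, which is exactly why two students with the same prompt get different essays.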

Also, I tried ChatGPT on different subjects I know about and the texts created were of surprisingly high quality. I even tried questions of current exams of mine (relative to CAD/CAM/CAE subjects) and the responses would get full scores.

This means that ChatGPT has made it obsolete to ask students to write reports about some subjects. A machine will do it better and much quicker. Period.
Want bibliographic references? Ask ChatGPT and it will provide you with the relevant keywords and tell you what databases to search. With little effort you get the references and you can plant them into the text generated by ChatGPT.

I feel 100% confident I could do a 30-minute presentation about ANY subject in the world if I get two hours of preparation with ChatGPT and Google. This is disruptive! I am no longer a specialist in a given subject; I am a specialist in using ChatGPT and Google to produce the content I need, as well as to filter and process said information.

And this is the key point one needs to focus on: how to use ChatGPT to quickly learn about any subject or to quickly get the start of an essay.

Universities need to change the task of "write an essay about XYZ" to "use ChatGPT to obtain an essay about XYZ and then discuss the outcome, verify the statements and complete them".

As a result, people will take less time to learn something, to get the relevant information.

A different aspect is that ChatGPT presents wrong information on certain subjects like math or logic. It is fundamental to understand how ChatGPT works and where it will fail.

Simple example: "5 machines produce 5 parts in 5 minutes - how long do 100 machines take to produce 100 parts?"

Most people will answer 1 minute, 100 minutes or 500 minutes. The reason is that you need to correlate three things, while the brain is normally used to correlating just two items.

As a result, people might focus on 5 parts in 5 minutes and conclude that 1 part takes 1 minute, forgetting that there were 5 machines.

ChatGPT will give the wrong answer over and over.

Same goes for simple algebra. While the result given looks very good, it is simply wrong.
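For reference, the correct answer is 5 minutes, and the rate arithmetic can be spelled out in a few lines (the helper function below is hypothetical, just to make the reasoning explicit):

```python
def time_to_produce(machines, parts, per_machine_rate):
    """Minutes for `machines` machines to make `parts` parts,
    given each machine's rate in parts per minute."""
    return parts / (machines * per_machine_rate)

# From the premise: 5 machines make 5 parts in 5 minutes,
# so each machine makes 1 part per 5 minutes.
rate = 5 / (5 * 5)  # = 0.2 parts per machine per minute

print(time_to_produce(5, 5, rate))      # 5.0 minutes (sanity check)
print(time_to_produce(100, 100, rate))  # 5.0 minutes, not 1 or 100
```

Scaling machines and parts by the same factor leaves the time unchanged, which is the step people (and the chatbot) tend to skip.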

But all in all, this is an amazing technology!

I expect to be able to call my bank and instead of having to fight through a stupid voice enabled option tree, I can just say what I want:
"Hello, my name is Vitor XYZ, my birth date is xxxx, please unlock my credit card, because I keyd the wrong pin 3 times."
"Sure. Can you please confirm me your VAT number?"
"It's 1234"
"Thank you. I am happy to inform you that your credit card is now unlocked. Anything else?"
"No. Bye"

The same with any other service accessed through phone.

And why should I worry about the content of our web site? I hope they offer us a service, where I specify the subjects and products, so that ChatGPT can generate new content every month.

Next, Microsoft Word will have a function where I start to write a report and, after a few sentences, I click AUTOCOMPLETE and the remaining report is written automatically.

Same for emails.

Hope I gave you some ideas...

Regards,
Vitor
 
The following users thanked this post: EEVblog

Offline bob808

  • Frequent Contributor
  • **
  • Posts: 281
  • Country: 00
Re: eevBLAB 106 - ChatGPT AI Has Changed EVERYTHING
« Reply #142 on: January 29, 2023, 06:08:57 pm »
Yeah, it clearly makes it harder for a teacher, but depending on the subject and the level of involvement, in some cases you could weed out the AI-generated content. But it requires extra attention:
 

Offline adx

  • Frequent Contributor
  • **
  • Posts: 279
  • Country: nz
Re: eevBLAB 106 - ChatGPT AI Has Changed EVERYTHING
« Reply #143 on: January 29, 2023, 11:25:02 pm »
And anyway, people don't deal that good with truth.
Ultimately, people will not accept AI telling the truth.
At least, hopefully not.
I am not that optimistic. There are masses of people without critical thinking out there.
Also it is hard to tell the  truth if you can't even define it.

I'm not either. For most people, the truth is whatever people that have higher authority than them have defined.
'It never ceases to amaze me' just how true that is (not trying to be recursive :) ).

I guess it must be a result of being a social species. To a wolf, "the food is over here" makes sense if you don't really know where it is. But an individual's sense of logic is not what matters for a social species; maybe it takes an instinct for unshakeable belief to really avoid wasted, duplicated effort and ensure the pack functions as one (minus any number of expendable individuals).
 

Offline Marco

  • Super Contributor
  • ***
  • Posts: 6772
  • Country: nl
Re: eevBLAB 106 - ChatGPT AI Has Changed EVERYTHING
« Reply #144 on: January 30, 2023, 01:24:56 am »
Among people actually involved or interested in the subject, it is used only in informal (“fuzzy”) or whimsical manner. Neither the archaic notion of “intelligence” nor attempts to follow anthropocentric ideas are now a part of the research.

A lot of them are not doing research exactly; they are building a portfolio to get into a startup and get rich ASAP. They shy away from the term intelligence not because it's too ill-defined, but because their money maker fundamentally can't get closer to it. There are still general intelligence conferences and researchers, there's just no money in it ... that's how you know it's research :)

Something like DreamCoder is research. Ever more elaborate ANN generators with ever larger networks, ever larger corpora and ever more humans used during the supervised training is engineering.
« Last Edit: January 30, 2023, 01:26:58 am by Marco »
 

Offline madires

  • Super Contributor
  • ***
  • Posts: 7890
  • Country: de
  • A qualified hobbyist ;)
Re: eevBLAB 106 - ChatGPT AI Has Changed EVERYTHING
« Reply #145 on: January 30, 2023, 10:45:50 am »
In case you're looking for a way to detect ChatGPT: DetectGPT: Zero-Shot Machine-Generated Text Detection using Probability Curvature (https://ericmitchell.ai/detectgpt/)
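For what it's worth, the core idea behind DetectGPT is easy to sketch: machine-generated text tends to sit near a local maximum of the model's log-probability, so rewriting bits of it lowers the score more consistently than it does for human text. Here is a toy illustration in Python; note that `log_prob` and `perturb` are crude stand-ins of my own invention (the real method scores text with the suspect LLM and perturbs it with a mask-filling model such as T5):

```python
import random

def log_prob(text):
    # Stand-in scorer: a real implementation would ask a language model for
    # the average token log-probability of `text`. This fake version just
    # penalizes words outside a tiny "common" vocabulary so the demo runs.
    common = {"the", "a", "is", "of", "and", "to", "in"}
    words = text.lower().split()
    return sum(0.0 if w in common else -1.0 for w in words) / max(len(words), 1)

def perturb(text, rng):
    # Stand-in perturbation: replace one random word. The paper instead uses
    # a mask-filling model (T5) to rewrite small random spans of the text.
    fillers = ["perhaps", "zephyr", "gadget", "the", "of"]
    words = text.split()
    words[rng.randrange(len(words))] = rng.choice(fillers)
    return " ".join(words)

def detectgpt_score(text, n_perturbations=50, seed=0):
    # Probability "curvature": original log-prob minus the mean log-prob of
    # perturbed variants. Machine text tends to sit near a local maximum of
    # the model's likelihood, so large positive values suggest generation.
    rng = random.Random(seed)
    original = log_prob(text)
    perturbed = [log_prob(perturb(text, rng)) for _ in range(n_perturbations)]
    return original - sum(perturbed) / len(perturbed)
```

You would then threshold the score: texts well above zero are flagged as likely machine-generated, texts near or below zero as likely human.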
 

Offline Neutrion

  • Frequent Contributor
  • **
  • Posts: 305
  • Country: hu
Re: eevBLAB 106 - ChatGPT AI Has Changed EVERYTHING
« Reply #146 on: January 30, 2023, 02:06:40 pm »
I don't think that's settled yet:




They are talking about going back in time. I did not say that. I said that there is no objective reality; it is observation-dependent. That is actually fully logical on the macro scale, but because we have had precise measuring equipment over the last centuries, we had the illusion that this is an illusion, because we thought we could measure things fully objectively. It seems we may not.


And anyway, people don't deal that good with truth.

Ultimately, people will not accept AI telling the truth.

At least, hopefully not.


Depends on the topic.
I can guarantee that politicians will not allow AI to tell the truth about politics; they will ban it before that happens. The entire business of politics is built on the perception of "truth", not truth itself.

But it doesn't work like that, with the politician going into a big hall with the audience and asking the AI.
Political opinions (of politicians as well) are based on some science, some interpretation, some discussion of it. Now a lot of that interpretation and discussion will go missing, because people will get used to just asking the AI.
Like Bicurio wrote later, which I will respond to as well: let's not read through all that stuff, just use the AI to collect it. But this will lessen the time and effort people spend on the subjects, and the details will be lost. Also, human experts will have to debate with the AI. But after some tests comparing them, the humans will lose on many subjects, and that will be interpreted as the AI being smarter.

And of course the AI can be also biased intentionally by the creator which is also a trap.

Just like now, some "scientific evidence" in some popular topics (I don't want to name them, of course, but I am not talking about the climate crisis, because that one really has been checked by many scientists for a long time now) is just bullshit that anyone with a minimum IQ can debunk, yet it is sold as science, and some politics is based on it. (Also, I believe some politicians truly believe it; they are just like other people.)

ChatGPT is one of several AI tools that is/will be available in 2023.

It is a disruptive technology and it will change humanity over time, just like the mobile phone or the internet.

It will replace many jobs: from help desks to phone services, from translators to general content generation.

I have no doubt about that.

As a University teacher, I have scheduled a meeting with all my department colleagues to explain to them what ChatGPT is and what will change this semester: I will actively introduce the students in my class (Programming, in a non-IT course) to ChatGPT, and I am giving my colleagues a head start to prepare for the consequences.

There are 3 possible reactions a University can have to ChatGPT:

1) Simply ignore its existence
2) Forbid its use
3) Embrace it

Those teachers who choose 1 will have all (!) students using ChatGPT to aid them in writing all sorts of essays, reports, etc. The students will be scored for a document written by software...
Those who try to forbid its use will be in the same situation as the former group, because they won't be able to detect whether a text was written by ChatGPT. I have generated articles about random technical subjects and then submitted them to plagiarism software, and the texts were considered 100% genuine. The students I asked about ChatGPT all knew about it, and some even tested whether ChatGPT would produce the same text for the same input made by different students (accounts and computers). The results were different texts! This means you can give the same subject to several groups and they will all hand in different texts, all generated independently by ChatGPT.

Also, I tried ChatGPT on different subjects I know about and the texts created were of surprisingly high quality. I even tried questions of current exams of mine (relative to CAD/CAM/CAE subjects) and the responses would get full scores.

This means that ChatGPT has made it obsolete to ask students to write reports about some subjects. A machine will do it better and much quicker. Period.
Want bibliographic references? Ask ChatGPT and it will provide you with the relevant keywords and tell you which databases to search. With little effort you get the references, and you can plant them in the text generated by ChatGPT.

I feel 100% confident I could give a 30-minute presentation about ANY subject in the world if I get two hours of preparation with ChatGPT and Google. This is disruptive! I am no longer a specialist in a given subject; I am a specialist in using ChatGPT and Google to produce the content I need, as well as to filter and process said information.

And this is the key point one needs to focus on: how to use ChatGPT to quickly learn about any subject or to quickly get the start of an essay.

Universities need to change the task of "write an essay about XYZ" to "use ChatGPT to obtain an essay about XYZ and then discuss the outcome, verify the statements and complete them".

As a result, people will take less time to learn something, to get the relevant information.

A different aspect is that ChatGPT presents wrong information on certain subjects like math or logic. It is fundamental to understand how ChatGPT works and where it will fail.

Simple example: "5 machines produce 5 parts in 5 minutes - how long do 100 machines take to produce 100 parts?"

Most people will answer 1 minute, 100 minutes or 500 minutes. The reason is that you need to correlate three things, while the brain is normally used to correlating just two.

As a result, people might focus on 5 parts in 5 minutes and conclude that 1 part takes 1 minute, forgetting that there were 5 machines.

ChatGPT will give the wrong answer over and over.

Same goes for simple algebra. While the result given looks very good, it is simply wrong.
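(For reference, the correct answer is 5 minutes: each machine makes one part in 5 minutes, so 100 machines make 100 parts in those same 5 minutes. A quick sanity check in Python; the function is just my own illustration of the reasoning:)

```python
def production_time(machines, parts):
    # Baseline: 5 machines make 5 parts in 5 minutes,
    # i.e. one machine makes one part in 5 minutes.
    minutes_per_part_per_machine = 5 * 5 / 5  # = 5.0
    parts_per_machine = parts / machines
    return parts_per_machine * minutes_per_part_per_machine

print(production_time(100, 100))  # 5.0 minutes, not 100 or 500
```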

But all in all, this is an amazing technology!

I expect to be able to call my bank and instead of having to fight through a stupid voice enabled option tree, I can just say what I want:
"Hello, my name is Vitor XYZ, my birth date is xxxx, please unlock my credit card, because I keyd the wrong pin 3 times."
"Sure. Can you please confirm me your VAT number?"
"It's 1234"
"Thank you. I am happy to inform you that your credit card is now unlocked. Anything else?"
"No. Bye"

The same with any other service accessed through phone.

And why should I worry about the content of our web site? I hope they offer us a service, where I specify the subjects and products, so that ChatGPT can generate new content every month.

Next Microsoft Office Word will have a function where I start to write a report and after a few sentences, I click on AUTOCOMPLETE and the remaining report is written automatically.

Same for emails.

Hope I gave you some ideas...

Regards,
Vitor



Well, then we just have to reorganize the whole university education system across the whole world within a year. Easily done, isn't it?

Also, obtaining something from the AI rather than collecting it yourself means less thinking about the whole subject. If you have your own thoughts about a topic, you clearly have no problem writing a lot about it.
And even if you have a good general idea, it will be harder to weed out the unknown incoherences of the AI-created part.
So this will make students focus less on the subject, and more on how to use the AI.

And when the whole university world has got used to one AI, another one, or a modified one, comes along.

But if university professors didn't have too much to do until now, from now on they can spend most of their time tackling the AI stuff, which will also change constantly, so the detection methods and usage methods will also constantly change. Also, political censorship will weed a lot of things out of the science, because some things will suddenly disappear from the studies and some will appear, and that disproportionately.
"
And why should I worry about the content of our web site? I hope they offer us a service, where I specify the subjects and products, so that ChatGPT can generate new content every month.

Next Microsoft Office Word will have a function where I start to write a report and after a few sentences, I click on AUTOCOMPLETE and the remaining report is written automatically.
"

And do you employ some people to actually double-check what you offer, or do you just trust the AI blindly?
And how does the autocomplete know what YOU wanted to write?

BUT if everything works really well, that means humans are not needed for thinking. Then what else are they needed for?

 
 

Offline Richard_Oh2

  • Newbie
  • Posts: 3
  • Country: gb
Re: eevBLAB 106 - ChatGPT AI Has Changed EVERYTHING
« Reply #147 on: January 30, 2023, 02:49:57 pm »
According to ChatGPT itself;

Quote
As an AI model, I can tell you that it's likely that AI technology like myself could enhance the work of electrical systems engineers in the future rather than replace them entirely. AI can assist in areas such as data analysis, simulation and prediction, freeing up engineers' time to focus on creative and strategic problem-solving. However, the complex nature of electrical systems engineering and the need for human judgment and decision-making means that it is unlikely AI technology would fully replace the work of electrical systems engineers in the near future.

Having used the tool, and other AI tools before it, for many months, I believe that the technology presents more opportunities than threats. For most people, it's unlikely the technology will do anything other than make their jobs easier and more productive, used as our assistant rather than our replacement. It does throw up some challenges though. For example, having used it myself to write a facts and figures article, every single fact it produced was just wrong. It's one thing making up facts for a blog article, but another thing making up instructions for wiring up an electrical component.
But the current technology, GPT-3, is working from old data. Later this year, it will be incorporated into Bing and Google, and then GPT-4 will come along and change everything again. In the end, it may be better at recognising untrue information than we are.

As for education, I believe that students will end up being marked on the research rather than the report, as that's the bit which will show their understanding of the subject.

Either way, look out for a new job role of AI Operator coming to job boards soon.

Anyway, just my ramblings!
 

Offline bob808

  • Frequent Contributor
  • **
  • Posts: 281
  • Country: 00
Re: eevBLAB 106 - ChatGPT AI Has Changed EVERYTHING
« Reply #148 on: January 30, 2023, 03:21:36 pm »
The real problematic truth that will come from AI is that we humans don't have a "soul". No more than it can have one. Because what we call "soul" or "conscience" is just a side effect of the multitude of things happening all at once in our brain. Lots of stuff deep down, lots of stuff on the surface. That creates "the conscience" as a side effect; it's something virtual that doesn't exist materially.
And it's not something that will be told to us by the (general) AI; it's something we will observe while making that (general) AI. That is a very hard pill to swallow, for most people.
 

Offline Neutrion

  • Frequent Contributor
  • **
  • Posts: 305
  • Country: hu
Re: eevBLAB 106 - ChatGPT AI Has Changed EVERYTHING
« Reply #149 on: January 30, 2023, 03:51:43 pm »
According to ChatGPT itself;

Quote
As an AI model, I can tell you that it's likely that AI technology like myself could enhance the work of electrical systems engineers in the future rather than replace them entirely. AI can assist in areas such as data analysis, simulation and prediction, freeing up engineers' time to focus on creative and strategic problem-solving. However, the complex nature of electrical systems engineering and the need for human judgment and decision-making means that it is unlikely AI technology would fully replace the work of electrical systems engineers in the near future.

Having used the tool, and other AI tools before it, for many months, I believe that the technology presents more opportunities than threats. For most people, it's unlikely the technology will do anything other than make their jobs easier and more productive, used as our assistant rather than our replacement. It does throw up some challenges though. For example, having used it myself to write a facts and figures article, every single fact it produced was just wrong. It's one thing making up facts for a blog article, but another thing making up instructions for wiring up an electrical component.
But the current technology, GPT-3, is working from old data. Later this year, it will be incorporated into Bing and Google, and then GPT-4 will come along and change everything again. In the end, it may be better at recognising untrue information than we are.

As for education, I believe that students will end up being marked on the research rather than the report, as that's the bit which will show their understanding of the subject.

Either way, look out for a new job role of AI Operator coming to job boards soon.

Anyway, just my ramblings!

And did the AI lie in what it said, or did it tell the truth? Does it want to tell the truth? What does it depend on, whether you trust the results? Now, in two hours, or in two weeks?
The students have the AI now, so within about a year you have to reorganise the whole education system. Is it good if only the research is valued from now on, and no theoretical paper? If so, why wasn't it like that before?

The real problematic truth that will come from AI is that we humans don't have a "soul". No more than it can have one. Because what we call "soul" or "conscience" is just a side effect of the multitude of things happening all at once in our brain. Lots of stuff deep down, lots of stuff on the surface. That creates "the conscience" as a side effect; it's something virtual that doesn't exist materially.
And it's not something that will be told to us by the (general) AI; it's something we will observe while making that (general) AI. That is a very hard pill to swallow, for most people.


Actually, in Buddhist philosophy anything can have its own consciousness, even a piece of stone, though the term "soul" is not used. Which is also quite logical. So yes, suffering can happen there as well, unless we can scientifically exclude it, which we can't, given the objective-reality-versus-observer paradox.
And since human society has already successfully solved all the other moral problems in society, not to mention the climate crisis, this will be an easy one too.
« Last Edit: January 30, 2023, 03:54:25 pm by Neutrion »
 

