Actually, I was thinking about the possible ways this could change our society into something better (I know, right?).
One of the biggest issues IMHO we are facing now is the post-truth society. Fake news and political polarization are a huge issue, and people can live their lives in an echo chamber. The political right and the left both seem to have taken an anti-scientific stance, where they just don't follow scientific evidence. So I asked our robot (which wasn't overloaded tonight, only very slow) if it can do this. It's really reluctant to say that it can detect fake news. And I know the training data ended in September 2021. But I cannot help but wonder if this can be used to fix this issue. Or if it will make things even worse, since the repeated propaganda pieces will train the model with bad data, and it has no way of distinguishing them from real news.
One of the biggest issues IMHO we are facing now is the post-truth society.
I don't think there has ever been a time when humanity had more access to truth than it does today. The real issue is that it's mixed up with a lot of distortions of it. But we have access to more truth than we ever had.
That is why you see the discrepancies on both the left and right of the political spectrum: up until the internet became mainstream, they could sell their bullshit more easily.
And anyway, people don't deal that well with truth.
Actually, I was thinking about the possible ways this could change our society into something better (I know, right?). (…) The political right and the left both seem to have taken an anti-scientific stance, where they just don't follow scientific evidence. (…)
Too funny. The same certain peeps in the middle who are poisoning scientific research with 'science', as it were, will end up pumping money and nefarious efforts into this contraption as well. Think Wiki on steroids.
I can't even see a clear general aim for strong AI, apart from "it might have some good uses". As if it would be some intended God or religion substitute for some. Who is going to double-check
the results of the ultimate wisdom? Or what is 42?
Quote from: golden_labels
Machine learning solutions permit finding answers to otherwise computationally expensive problems. After understanding how specific ML models work, they may become the next step after current statistics. There is strong coöperation between machine learning researchers and neuropsychologists, boosting knowledge in both branches. Even the current meme smortnets offer philosophical insights, which may be as crucial and influential as those of Galileo and Darwin.
I can't even see a clear general aim for strong AI, apart from "it might have some good uses". As if it would be some intended God or religion substitute for some. Who is going to double-check
the results of the ultimate wisdom? Or what is 42? (…)
That entire fragment is so all over the place and incoherent that it's hard to understand. But it gives me a strong impression that it's written as if development were a matter of someone's choice.
Sometimes it would also be nice to ask: what is the actual aim of technical advancement, what do we want to achieve? Is it achievable through technical advancement? Did we get closer to what we wanted in the last 50 years? Or 150?
Ponder that some more, while taking advantage of not having to pull a plough to marginally offset your hunger, and of even being aware of such abstract issues.
Actually, I was thinking about the possible ways this could change our society into something better (I know, right?). (…) But I cannot help but wonder if this can be used to fix this issue. Or if it will make things even worse, since the repeated propaganda pieces will train the model with bad data, and it has no way of distinguishing them from real news.
Basically as it has been for the past 3000 years, at least. For an average person, with only minor nudges in either direction. With people equally deceiving themselves that "in the past it was better".
Don't you think it's objectively and clearly way better to live today than a hundred years ago? Penicillin hadn't even been discovered. People who had lots of wealth had to accept the loss of loved ones to simple infections. All the money they had couldn't save them.
I don't see this as something particularly hard to agree on. There's no question we're way way way better off than before.
Even if somehow people were "happier" back then (something extremely subjective anyway), would that extra "happiness" be worth it, balanced against all the people who didn't die because of the advances we have today?
And anyway, people don't deal that well with truth.
Ultimately, people will not accept AI telling the truth.
Machine learning on some specific tasks and a general AI replicating human thinking or being better at it (or letting people believe it does) are not the same thing.
And unleashing even these not-so-strong AIs on the general public, as is now happening. And yes, mostly driven by short-term interests.
If you create a complex machine and use its results without double-checking them with humans, then you have created a God or a Totem which you simply have to believe in. But if you don't, and double-check all the results, what is the point of the machine?
Development (mostly only in a technical form) is happening because human society believes that it is a good thing, and that it makes our life better and happier; it is not a law of nature.
But why didn't we want to pull the plough? To suffer less, I guess. (…)
Don't you think it's objectively and clearly way better to live today than a hundred years ago? Penicillin hadn't even been discovered. People who had lots of wealth had to accept the loss of loved ones to simple infections. All the money they had couldn't save them.
I don't see this as something particularly hard to agree on. There's no question we're way way way better off than before.
Even if somehow people were "happier" back then (something extremely subjective anyway), would that extra "happiness" be worth it, balanced against all the people who didn't die because of the advances we have today?
Machine learning on some specific tasks and a general AI replicating human thinking or being better at it (or letting people believe it does) are not the same thing.
And unleashing even these not-so-strong AIs on the general public, as is now happening. And yes, mostly driven by short-term interests.
Among people actually involved or interested in the subject, it is used only in an informal (“fuzzy”) or whimsical manner. Neither the archaic notion of “intelligence” nor attempts to follow anthropocentric ideas are now a part of the research. If precision is required, terms that more accurately describe the technology are used. Machine learning algorithms are usually the subject. This is what has received enormous attention in recent years, when GPU computing allowed smortnets to grow in complexity.
If you create a complex machine and use its results without double-checking them with humans, then you have created a God or a Totem which you simply have to believe in. But if you don't, and double-check all the results, what is the point of the machine?
Development (mostly only in a technical form) is happening because human society believes that it is a good thing, and that it makes our life better and happier; it is not a law of nature.
Quite the opposite, I would say. Development doesn't happen "because". It seems it just happens: a consequence of humans being curious beings that, like everything, maximize survival chances. With that last part not being a choice either, if considered on a population scale.
And anyway, people don't deal that well with truth.
Ultimately, people will not accept AI telling the truth.
At least, hopefully not.
I am not that optimistic. There are masses of people without critical thinking out there.
Also, it is hard to tell the truth if you can't even define it.
And anyway, people don't deal that well with truth.
Ultimately, people will not accept AI telling the truth.
At least, hopefully not.
I am not that optimistic. There are masses of people without critical thinking out there.
Also, it is hard to tell the truth if you can't even define it.
I'm not either. For most people, the truth is whatever those with higher authority than them have defined it to be.
Among people actually involved or interested in the subject, it is used only in an informal (“fuzzy”) or whimsical manner. Neither the archaic notion of “intelligence” nor attempts to follow anthropocentric ideas are now a part of the research.
I don't think that's settled yet:
And anyway, people don't deal that well with truth.
Ultimately, people will not accept AI telling the truth.
At least, hopefully not.
Depends on the topic.
I can guarantee that politicians will not allow AI to tell the truth about politics; they will ban it before that happens. The entire business of politics is built on the perception of "truth", not truth itself.
ChatGPT is one of several AI tools that are or will be available in 2023.
It is a disruptive technology and it will change humanity over time, just like the mobile phone or the internet.
It will replace many jobs: from help desks to phone services, from translators to general content generation.
I have no doubt about that.
As a University teacher, I have scheduled a meeting with all my department colleagues to explain to them what ChatGPT is and what will change this semester: I will actively introduce the students in my class (Programming, in a non-IT course) to ChatGPT, and I am giving my colleagues a head start to prepare for the consequences.
There are 3 possible reactions a University can have to ChatGPT:
1) Simply ignore its existence
2) Forbid its use
3) Embrace it
Those teachers who choose 1 will have all (!) students using ChatGPT to aid them in writing all sorts of essays, reports, etc. They will be scored on a document written by software...
Those who try to forbid its use will be in the same situation as the former group, because they won't be able to detect whether a text was written by ChatGPT. I have generated articles about random technical subjects and then submitted them to plagiarism software, and the texts were considered 100% genuine. The students I asked about ChatGPT all knew about it, and some even tested whether ChatGPT would produce the same text for the same input from different students (accounts and computers). The results were different texts! This means you can give the same subject to several groups and they will all hand in different texts, all generated independently by ChatGPT.
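The different-texts-for-the-same-prompt behaviour comes from sampling: language models draw each next token from a probability distribution instead of always taking the most likely one. Here is a minimal sketch of temperature sampling; the function and the logit values are illustrative toy numbers, not ChatGPT's actual internals:

```python
import math
import random

def sample_token(logits, temperature=1.0):
    # Softmax over temperature-scaled logits, then draw one index.
    scaled = [x / temperature for x in logits]
    m = max(scaled)                        # subtract max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    r = random.random() * sum(exps)
    cum = 0.0
    for i, e in enumerate(exps):
        cum += e
        if r < cum:
            return i
    return len(exps) - 1

# Hypothetical next-token scores at one position in a reply.
logits = [2.0, 1.5, 0.5, 0.1]

# Near-zero temperature: effectively always the top-scoring token,
# so every run produces the same text.
cold = {sample_token(logits, temperature=0.01) for _ in range(50)}

# Temperature 1.0: different runs pick different tokens, which is why
# two students submitting the same prompt get two different essays.
warm = {sample_token(logits, temperature=1.0) for _ in range(500)}
```

With the toy logits above, `cold` collapses to the single top token while `warm` contains several different choices; repeated over a whole essay, those per-token differences compound into completely different texts.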
Also, I tried ChatGPT on different subjects I know about, and the texts created were of surprisingly high quality. I even tried questions from current exams of mine (on CAD/CAM/CAE subjects) and the responses would get full scores.
This means that ChatGPT has made it obsolete to ask students to write reports about some subjects. A machine will do it better and much quicker. Period.
Want bibliographic references? Ask ChatGPT and it will provide you with the relevant keywords and tell you which databases to search. With little effort you get the references and you can insert them into the text generated by ChatGPT.
I feel 100% confident that I could do a 30-minute presentation about ANY subject in the world if I get two hours of preparation with ChatGPT and Google. This is disruptive! I am no longer a specialist in a given subject; I am a specialist in using ChatGPT and Google to produce the content I need, as well as to filter and process said information.
And this is the key point one needs to focus on: how to use ChatGPT to quickly learn about any subject or to quickly get the start of an essay.
Universities need to change the task of "write an essay about XYZ" to "use ChatGPT to obtain an essay about XYZ and then discuss the outcome, verify the statements and complete them".
As a result, people will take less time to learn something and to get to the relevant information.
A different aspect is that ChatGPT presents wrong information on certain subjects like math or logic. It is fundamental to understand how ChatGPT works and where it will fail.
Simple example: "5 machines produce 5 parts in 5 minutes - how long do 100 machines take to produce 100 parts?"
Most people will answer 1 minute, 100 minutes or 500 minutes. The reason is that you need to correlate three things, while the brain is normally used to correlating just two.
As a result, people might focus on "5 parts in 5 minutes" and conclude that 1 part takes 1 minute, forgetting that there were 5 machines.
ChatGPT will give the wrong answer over and over.
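For reference, the correct answer is 5 minutes, and it falls out of a per-machine rate calculation; a quick check in Python:

```python
# Known: 5 machines produce 5 parts in 5 minutes.
machines, parts, minutes = 5, 5, 5

# Each machine therefore produces 1 part in 5 minutes.
parts_per_machine_per_minute = parts / machines / minutes  # 0.2

# 100 machines producing 100 parts:
time_needed = 100 / (100 * parts_per_machine_per_minute)
print(time_needed)  # 5.0, i.e. still 5 minutes
```

The trap is exactly the two-item shortcut described above: "5 parts in 5 minutes, so 1 part per minute" silently drops the third quantity, the number of machines.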
Same goes for simple algebra. While the result given looks very good, it is simply wrong.
But all in all, this is an amazing technology!
I expect to be able to call my bank and instead of having to fight through a stupid voice enabled option tree, I can just say what I want:
"Hello, my name is Vitor XYZ, my birth date is xxxx, please unlock my credit card, because I keyed the wrong PIN 3 times."
"Sure. Can you please confirm your VAT number?"
"It's 1234"
"Thank you. I am happy to inform you that your credit card is now unlocked. Anything else?"
"No. Bye"
The same with any other service accessed through phone.
And why should I worry about the content of our web site? I hope they offer us a service, where I specify the subjects and products, so that ChatGPT can generate new content every month.
Next, Microsoft Word will have a function where I start to write a report and, after a few sentences, I click on AUTOCOMPLETE and the remaining report is written automatically.
Same for emails.
Hope I gave you some ideas...
Regards,
Vitor
According to ChatGPT itself:
Quote
As an AI model, I can tell you that it's likely that AI technology like myself could enhance the work of electrical systems engineers in the future rather than replace them entirely. AI can assist in areas such as data analysis, simulation and prediction, freeing up engineers' time to focus on creative and strategic problem-solving. However, the complex nature of electrical systems engineering and the need for human judgment and decision-making means that it is unlikely AI technology would fully replace the work of electrical systems engineers in the near future.
Having used the tool, and other AI tools before it, for many months, I believe that the technology presents more opportunities than threats. For most people, it's unlikely the technology will do anything other than make their jobs easier and more productive, used as our assistant rather than our replacement. It does throw up some challenges, though. For example, when I used it myself to write a facts-and-figures article, every single fact it produced was just wrong. It's one thing making up facts for a blog article, but another thing making up instructions for wiring up an electrical component.
But the current technology, GPT3, is working from old data. Later this year it will be incorporated into Bing and Google, and then GPT4 will come along and change everything again. In the end, it may be better at recognising untrue information than we are.
As for education, I believe that students will end up being marked on the research rather than the report, as that's the bit which will show their understanding of the subject.
Either way, look out for a new job role of "AI Operator" coming to job boards soon.
Anyway, just my ramblings!
The real problematic truth that will come from AI is that we humans don't have a "soul". Not any more than it can have. Because what we call "soul" or "conscience" is just a side effect of the multitude of things that are happening all at once in our brain. Lots of stuff deep down, lots of stuff on the surface. That creates "the conscience" as a side effect; it's something virtual that doesn't exist materially.
And it's not something that will be told to us by the (general) AI; it's something we will observe while making that (general) AI. That is a very hard pill to swallow, for most people.