Why does OpenAI ChatGPT, Possibly Want to disagree/annoy and change my eating...
MK14:

--- Quote from: Zero999 on January 04, 2023, 11:00:44 pm ---We don't know how the human thinking process works. It might appear to be complicated, but it might not be that complex in reality. Some seemingly very simple systems can require a lot of power to model, using conventional computers.

--- End quote ---

I'm essentially agreeing with you here.  I'm just trying to be clear that, until we solve the general AI problem, it is difficult to accurately gauge how complicated a task it will actually be.

Also, in theory, the solution could be to just create an electronic version of a human brain, where multi-billion-transistor ICs are designed to be some kind of giant and powerful AI machine, perhaps with a huge number of those ICs dedicated just to creating general AI.

In the old days, I think that is what some people thought would be required (i.e. something like custom neural network ICs, with enough of them to create some kind of AI brain).

Also, increasingly these days, new CPUs have specific machine learning instructions and areas of the chip dedicated to machine learning processing.

Some (I presume) think that new technologies, like quantum computers or other things, might be needed to get the computing horsepower to achieve great things, such as beginning to solve the general AI problem.
MK14:

--- Quote from: bigfoot22 on January 04, 2023, 02:52:12 pm ---Its so incredibly dangerous that it blows my mind that corporations are researching it in the first place.

--- End quote ---

On further reflection, I suspect that numerous factors will protect the human race.  Many predicted disasters never happen, or cause way less damage and far fewer casualties than expected.

E.g. the Year 2000 millennium bug (many systems stored only 2-digit years, but full 4-digit years were needed to reliably differentiate between pre-2000 and post-2000 dates), which was supposed to cause terrible problems in a huge number of areas.
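
As a minimal sketch (Python, with made-up function names purely for illustration) of why the 2-digit storage was ambiguous:

--- Code: ---
# Hypothetical sketch: with only two stored digits, dates straddling the
# year 2000 compare the wrong way round.

def years_elapsed_2digit(start_yy: int, end_yy: int) -> int:
    """Naive difference using only the last two digits of each year."""
    return end_yy - start_yy

def years_elapsed_4digit(start_yyyy: int, end_yyyy: int) -> int:
    """Correct difference using full four-digit years."""
    return end_yyyy - start_yyyy

# Something dated 1999, checked in 2000:
print(years_elapsed_2digit(99, 0))       # -99 (nonsense)
print(years_elapsed_4digit(1999, 2000))  # 1   (correct)
--- End code ---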

In practice, there were hardly any incidents, especially not many causing casualties or worse.

Anyway, there are many other risks to the human race: big nuclear wars, uncontrolled climate change, very bad new viruses, or a big conventional (non-nuclear) world war 3, if it got too out of hand.

Maybe a super-high-IQ non-human intelligence would decide to be nice, and not harm other creatures.

Maybe the bad/criminal part of human nature WON'T be part of a general AI's thinking processes, either by design and/or because it is a defect of some humans' personality and/or mental health situation.

Would a generally intelligent machine/AI thing have any personality and/or genuine feelings, to even want to do any good or bad things?

There are unlikely to be many IC makers producing the advanced high-performance chips for these upcoming generally intelligent AI things, so extensive safety systems could be built into them.

Maybe the real risks are if 'bad' human individuals, acting alone or as the leader of a relatively rogue country, decide to use such new technology to do 'bad' things, such as developing new weapons of mass destruction, etc.

But in theory, the other 'good' countries could be very well prepared for such an eventuality and cope with the threats.

Anyway, I'm making relatively wild speculation here, only illustrating the opinions of a single person (which might change in time and as technologies progress), about technologies which could be a long way into the future and/or might never even be invented as such.

So, without access to a working crystal ball to look into the real future, or a time machine etc., I think we don't really know what is going to happen in years to come.
tom66:

--- Quote from: Zero999 on January 04, 2023, 09:25:51 pm ---
--- Quote from: tom66 on January 04, 2023, 07:05:18 pm ---
--- Quote from: Zero999 on January 04, 2023, 06:21:32 pm ---Going by the rate of progress, I can't see AGI being developed any time soon. Computers are already powerful enough to achieve it, but whether it's possible for them to think in that manner is another thing.

--- End quote ---

What makes you believe computers are able to achieve AGI already?
--- End quote ---
The fact that, by all objective measures, computers can both store and process more data much faster than any human can, and have been able to do so for a long time.

--- End quote ---

But, it may only be possible to build a conscious computer once a level of interconnectivity is reached that is similar to that of at least an animal brain.  And true intelligence almost certainly requires consciousness.

It might, as Psi suggested, require a quantum computer.  That is one of my "sci-fi" theories, as I don't think anyone has (or could have) studied it in enough detail, so just speculation at this point.  We still have not adequately explained the human experience of consciousness.  We are but clouds of electrons, protons and neutrons, yet we have experiences, memories, consciousness -- where is the experience?  Is it an illusion?  But, if so, that illusion is still created somehow -- and understanding that is critical to creating an AGI.
tom66:
It does completely change the "softer" subjects, e.g. humanities, English language studies, art and media studies...  One wonders how useful it is to have hundreds of thousands of graduates in those fields but that's another debate.

Constructing an essay with a lot of fluff and information sourced from Wikipedia or the Common Crawl is not too hard using ChatGPT.  So far, it doesn't appear to be able to include a bibliography, but it probably won't be too long before that is possible.  Minor editing or revising the prompt is required to get the best result, but it's far quicker than writing an actual essay. And, as has been shown already, teachers can't tell the difference between a GPT essay and a human essay.

The way students are tested will need to change; it's been wrong for a long time, but GPT might finally force it to change.
tom66:
Amusing nonsense if you ask it about February 29th:


--- Quote ---February 29th is a date that appears on the calendar every four years, known as a leap year. Leap years are used to synchronize the calendar year with the astronomical year, or the length of time it takes for the Earth to make one orbit around the sun. The leap year is necessary because the astronomical year is about 6 hours longer than the 365-day calendar year. If February 29th falls on a Sunday, Monday, Tuesday, Wednesday, or Thursday, it is a valid date. If it falls on a Friday or a Saturday, it is not a valid date because it would be followed by March 1st, which is not a leap year.
--- End quote ---

It's worth noting that the last leap day was Saturday, Feb 29th 2020.
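
For reference, here is a minimal Python sketch of the actual Gregorian leap-year rule (the function name is just for illustration); the rule depends only on the year, not on the weekday:

--- Code: ---
import datetime

def is_leap_year(year: int) -> bool:
    """Gregorian rule: divisible by 4, except centuries not divisible by 400."""
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

print(is_leap_year(2020))  # True
print(is_leap_year(1900))  # False (century not divisible by 400)
print(is_leap_year(2000))  # True
print(datetime.date(2020, 2, 29).strftime("%A"))  # Saturday
--- End code ---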