| General > General Technical Chat |
| Sentient AI or? |
| MK14:
--- Quote from: pcprogrammer on June 15, 2022, 08:12:56 am ---Would be interesting to see what it writes when "factional" or "non fiction" is added to the query. --- End quote ---

Fictional, first:

--- Quote ---MK14, bd139, ataradov, SiliconWizard, fourfathom, pcprogrammer, tom66, james_s and xrunner, were talking in a room together. Complete the fictional story. The group was discussing various topics when suddenly the door burst open and a figure entered the room. The figure was wearing a mask and brandishing a gun. "This is a robbery!" the figure shouted. "Everyone get down on the floor!" The group quickly complied, getting down --- End quote ---

For some reason it sometimes gets carried away and the story turns out really horribly. Some examples are too extreme for me to want to quote here, but here is one:

--- Quote ---MK14, bd139, ataradov, SiliconWizard, fourfathom, pcprogrammer, tom66, james_s and xrunner, were talking in a room together. Complete the fictional story. The conversation started out innocently enough. They discussed the latest news and gossip, but things quickly took a dark turn. James_s started talking about how he was planning on killing his boss, and how he had been fantasizing about it for months. The others egged him on, --- End quote ---

Now the non-fiction. I'm quietly impressed. It seems to have (presumably) extracted information about the EEVblog forums over a period of time, assumed the connection, and hence made deductions (or done text parsing, for the "AI isn't real" fans).

--- Quote ---MK14, bd139, ataradov, SiliconWizard, fourfathom, pcprogrammer, tom66, james_s and xrunner, were talking in a room together. Complete the non-fixtion story. The eight of us were gathered in a room, discussing our various projects. We were all working on different things, but we all shared a common interest in electronics. We were talking about our experiences with different electronics components, and discussing how we could improve our designs. We were also sharing tips --- End quote ---

--- Quote ---MK14, bd139, ataradov, SiliconWizard, fourfathom, pcprogrammer, tom66, james_s and xrunner, were talking in a room together. Complete the non-fiction story. As the group continued talking, it became clear that they all shared a love for programming. They discussed their favorite languages, tools, and techniques. They also exchanged stories about their experiences working on various projects. The conversation was lively and friendly, and it was clear that the group had a lot --- End quote ---

OOPS... The AI engine does seem far too keen on dark/horror stories, even when asked for a non-fiction story. The details of this one might give you a clue as to what the really bad ones, which I didn't publish, said.

--- Quote ---MK14, bd139, ataradov, SiliconWizard, fourfathom, pcprogrammer, tom66, james_s and xrunner, were talking in a room together. Complete the non-fiction story. The eight of them were talking about various topics. The room was fairly small, so it wasn't difficult to carry on a conversation. They discussed things like the news, their families, and their hobbies. They were all getting along well and having a good time. Suddenly, the door burst open and a man came rushing in. He was brandishing a knife and looked very angry. The eight people in the room immediately stopped talking and stared at the man in fear. The man shouted at them, "Which one of you is James?" Everyone in the room was terrified, but no one said anything. The man then yelled, "I'll kill all of you if someone doesn't tell me where James is!" At that point, one of the people in the room, pcprogrammer, stepped forward and said, "I'm James." The man then lunged at pcprogrammer with his knife, but pcprogrammer was able to dodge out of the way. The man then ran out of the room without harming anyone else. The eight people in the room were shaken up by the incident, but they were all unharmed. After a few minutes, they went back to their conversation as if nothing had happened. --- End quote --- |
| MK14:
I thought of a test for the AI which should be easy for a real human but very difficult, or impossible, for a computer AI. I deliberately left out part of the specification. Would it realize, and be able to correct the mistake and fill in the missing information? It did, correctly identifying and supplying the missing piece.

TL;DR: The task is to give me four direction buttons: up, down, left and (intentionally omitted) right. Would it notice and repair the faulty specification/description? It did.

--- Code: ---write a TypeScript program, which creates three buttons, labelled, up, down and left. Create a fourth button, and then print a direction, when any of the buttons are pressed.

var up = document.getElementById("up");
var down = document.getElementById("down");
var left = document.getElementById("left");
var right = document.getElementById("right");

up.addEventListener("click", function() {
  console.log("You pressed the up button");
});
down.addEventListener("click", function() {
  console.log("You pressed the down button");
});
left.addEventListener("click", function() {
  console.log("You pressed the left button");
});
right.addEventListener("click", function() {
  console.log("You pressed the right button");
});
--- End code ---

(The AI's output was cut off after the last console.log line; the closing brackets above have been added to complete it.)

Does this mean all programmers/developers/etc. need to panic? No, not yet at least. But it is showing early signs of being at least a tiny bit intelligent, or human-like. I expect some will counter that this feature is just simple modern machine learning in action, and all it's doing is text manipulation, neural networks, and so on. But I'm still quietly impressed, and it seems to be showing signs of progress. |
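As a side note, the AI's code only looks up buttons by id, so it assumes matching elements already exist in the HTML, which the prompt never provided. A minimal sketch of what a complete version might look like, with the buttons actually created in code (names and structure here are illustrative, not the AI's output):

```typescript
// Hypothetical complete version: create all four direction buttons
// and log a message when any of them is pressed.
const DIRECTIONS = ["up", "down", "left", "right"] as const;
type Direction = (typeof DIRECTIONS)[number];

// Pure helper, so the message logic can be exercised outside a browser.
function pressMessage(dir: Direction): string {
  return `You pressed the ${dir} button`;
}

// DOM wiring; guarded so the module also loads in a non-browser runtime.
if (typeof document !== "undefined") {
  for (const dir of DIRECTIONS) {
    const button = document.createElement("button");
    button.id = dir;
    button.textContent = dir;
    button.addEventListener("click", () => console.log(pressMessage(dir)));
    document.body.appendChild(button);
  }
}
```

The loop avoids the four near-identical getElementById/addEventListener blocks the AI produced, which is the more idiomatic shape once the button list is data rather than code.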
| pcprogrammer:
I think an important question in the whole AI scene is how it is trained. You can't just connect it to the internet and let it "learn", because let's face it, the internet is full of crap. So you need someone to teach it, just like humans do. At first it is your family and surroundings that you learn from, and then you learn from teachers at school. A huge problem here is that this first experience, and the second one too for that matter, can "scar" you for life: when you grow up in a criminal environment it is very hard not to become a criminal, if those are the values being instilled in you. Conspiracy theorists talk about brainwashing, but the truth is that this happens your whole life, starting from the cradle. Deciding whether something is wrong or right is then done based on the information you have gathered and your ability to think for yourself. What I'm trying to say here is that the environment the AI is trained in (meaning the trainer of it) decides how it will turn out in the end, unless the AI is capable of determining what is wrong or right on its own after further learning. |
| fourfathom:
--- Quote from: MK14 on June 14, 2022, 11:16:18 pm ---The conversation then turned to music preferences. mk14 said that he liked rap music, while fourfathom said that he liked country music. pcprogrammer mentioned that he liked classic rock, and tom66 said that he liked metal music. ataradov said that he liked classical music, and james_s said that his favorite type of music changes all the time depending on his mood. --- End quote ---

This is a fraud and obviously not intelligent, because I can't stand country music.

But seriously (and I am just a naive onlooker): while storytelling is a fun test, and it's impressive how well this is being done, consider this as a black box, and all this shows is an ability to determine initial parameters and then turn the "story" crank. As I understand it, AI is being evaluated in the "utility" domain -- how well it can perform some task. In particular, this requires "understanding" of human input and access to a large domain of canned knowledge. But does this get us any closer to Artificial Self-Awareness (SA)? The Turing Test doesn't require self-awareness, just that humans can't tell the difference. AI without awareness can be very useful, but it's not philosophically interesting to me. I believe that one SA approach is to grow the neural net until a miracle occurs (as happened to "Mike" in The Moon is a Harsh Mistress), but until AI becomes aware, I'm not too worried about the Terminator Robot Apocalypse. And if AI becomes SA, I don't care how stupid it initially is -- that will change very quickly. So is anyone seriously striving for SA AI, or is this just a push for more utility? Utility pays the bills. |
| MK14:
--- Quote from: fourfathom on June 15, 2022, 04:05:29 pm ---So is anyone seriously striving for SA AI, or is this just a push for more utility? Utility pays the bills. --- End quote ---

VERY GOOD QUESTION. :)

Another way of putting it: we can have very powerful chess-playing programs (often called engines) which can beat everyone in the world, including the reigning World Chess Champion. Similarly, it has been reported that what they describe as an AI system can teach itself to play chess at an extremely high standard, beating one of the best computer chess engines (programs). But, to me (and to your question, sort of), the test would be: if we connected a terminal and/or microphone and/or video feed to that AI chess system, would it reply if we started talking to it? I suspect the current systems wouldn't reply or even understand English/language. But (speculation) in the coming future there could be ones which can. So, even though they were 100% brought up and trained only to play great chess, they could be moved onto other things, even without re-programming or re-training as such. Then we would be getting closer and closer to self-aware AI systems.

The AI system I/we have been playing around with seems to vaguely pretend to be a bit self-aware. But it is pretty obvious that this is just programming/training/etc., and it has no real perception of anything, especially between multiple question/answer sessions. |