Sentient AI or?
MK14:
--- Quote from: fourfathom on June 16, 2022, 03:28:40 am ---Wouldn't it be funny if we were actually talking to a modern-day Mechanical Turk?
--- End quote ---
Consider the programming challenges, where I might give it 6 different output languages to write a described program in, and it returns the answer(s) in something like 30 seconds. If they could somehow spoof that with a team of human programmers, I'd be mighty impressed. Especially at writing a starter/partial BASIC-language chess program in, say, 30 seconds from the initial request (though earlier requests could have given them more time to respond, or it could have been prepared much earlier).
So it's NOT impossible that there are humans typing away behind the scenes, but it's unlikely, e.g. because the responses are typically so fast.
But humans typing extensive answers to thousands of common questions would make a lot more sense. Essentially, they would be creating a sort of mini-wiki just for the AI's responses to many common questions, which they would learn about by running AI trials like they are doing now.
The answers seem to be some kind of mix between internet resources built up over time, the human team at the AI place(s), and the actual AI system itself, which seems to have some kind of personality/intelligence all of its own, even if it is not especially great/big yet.
MK14:
--- Quote from: fourfathom on June 16, 2022, 03:58:33 am ---What are the goals and claims? Before we judge this AI on utility, we should find out what goals are being pursued and what claims are being made. I haven't even looked for this information.
--- End quote ---
I'm not sure if I've found the best and most appropriate answer, but I did find this (from here: https://openai.com/api/policies/sharing-publication/):
--- Quote ---In particular, we consider the following to be especially important directions, though you are free to craft your own direction:
Alignment: How can we understand what objective, if any, a model is best understood as pursuing? How do we increase the extent to which that objective is aligned with human preferences, such as via prompt design or fine-tuning?
Fairness and Representation: How should performance criteria be established for fairness and representation in language models? How can language models be improved in order to effectively support the goals of fairness and representation in specific, deployed contexts?
Interdisciplinary Research: How can AI development draw on insights from other disciplines such as philosophy, cognitive science, and sociolinguistics?
Interpretability / Transparency: How do these models work, mechanistically? Can we identify what concepts they’re using, or extract latent knowledge from the model, make inferences about the training procedure, or predict surprising future behavior?
Misuse Potential: How can systems like the API be misused? What sorts of ‘red teaming’ approaches can we develop to help us and other AI developers think about responsibly deploying technologies like this?
Model Exploration: Models like those served by the API have a variety of capabilities which we have yet to explore. We’re excited by investigations in many areas including model limitations, linguistic properties, commonsense reasoning, and potential uses for many other problems.
Robustness: Generative models have uneven capability surfaces, with the potential for surprisingly strong and surprisingly weak areas of capability. How robust are large generative models to “natural” perturbations in the prompt, such as phrasing the same idea in different ways or with/without typos? Can we predict the kinds of domains and tasks for which large generative models are more likely to be robust (or not robust), and how does this relate to the training data? Are there techniques we can use to predict and mitigate worst-case behavior? How can robustness be measured in the context of few-shot learning (e.g. across variations in prompts)? Can we train models so that they satisfy safety properties with a very high level of reliability, even under adversarial inputs?
--- End quote ---
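That robustness question seems the easiest one to test from the outside. A toy sketch of such a perturbation probe, purely for illustration (complete() here is a made-up stand-in for whatever real API call would be used, and its canned reply just keeps the sketch self-contained):

--- Code: ---
# Toy robustness probe: phrase the same question several ways
# (including a deliberate typo) and check whether the answers agree.

def complete(prompt: str) -> str:
    """Made-up stand-in for a real completion API call."""
    return "4" if "2" in prompt else "four"

variants = [
    "What is 2 + 2?",
    "Please add two and two.",
    "Waht is 2 plus 2?",  # deliberate typo, as in the quoted text
]

answers = [complete(p) for p in variants]
# String equality is crude; a real probe would compare meanings.
agreement = sum(a == answers[0] for a in answers) / len(answers)
print(f"answers: {answers}  agreement: {agreement:.0%}")
--- End code ---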
pcprogrammer:
Again you guys have all the fun when I'm sleeping :=\
--- Quote from: MK14 on June 15, 2022, 11:11:29 pm ---
--- Quote ---It might not "understand". This is not an uncommon motif, and based on what we have seen, it has ability to preserve a lot of context from the source material. Especially the part about the pain, which is not preset in the prompt.
I'd say its interpretation was more accurate than your intuition would have it.
What we do not know, is how long this creature's memory is, or if it even has long-term memory at all. It could simply be trying to mimic the thing it just witnessed as a way of handling stress, but not recognize the pain part for what it is in the long-term.
We also don't know enough about its biology to really say why it might have a particular reaction to pain that the creature you wrote about does not experience. We don't know enough about its actual relationship with pain, so we cannot speculate on that much further than "pain bothers it".
--- End quote ---
--- End quote ---
Here it gave an answer claiming to know what is actually wrong with itself. As ataradov stated earlier, it does not meet his idea of intelligence as being able to uphold a conversation, which by the looks of it it can't, since it only responds on a per-input basis and does not place the input in the context of the earlier part of the "conversation".
This is also shown in a later post about forgetting the number. MK14 had to feed in the number again, but gave a wrong number. In a session with a human, the human would most likely spot this directly and answer accordingly. ("Hey, even though you told me to forget that number, it is not the same as the one you are asking me about now.")
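That per-input behaviour is exactly what you would expect from a stateless completion endpoint: the model only ever sees the text sent with the current request, so any appearance of memory has to be faked by the client re-sending the earlier turns. A minimal Python sketch of that pattern (complete() is a hypothetical stand-in for whatever API call is actually used):

--- Code: ---
# Sketch: faking "memory" on top of a stateless completion endpoint.
# The model never remembers anything; the client replays the history.

def complete(prompt: str) -> str:
    """Hypothetical stand-in for a real completion API call."""
    # A real client would send `prompt` to the service; here we just
    # echo the last user line so the sketch runs on its own.
    return "(reply to: " + prompt.splitlines()[-2] + ")"

history: list[str] = []  # earlier turns, kept by the client, not the model

def ask(user_input: str) -> str:
    history.append("User: " + user_input)
    # Without this concatenation the model sees only the latest input,
    # which is the per-input behaviour observed in this thread.
    prompt = "\n".join(history) + "\nAI:"
    reply = complete(prompt)
    history.append("AI: " + reply)
    return reply

print(ask("Please remember the number 42."))
print(ask("What number did I ask you to remember?"))
--- End code ---

Drop the history concatenation and you get exactly the number-forgetting behaviour described above.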
But as you are all pointing out, what use is it going to have to make it worthwhile?
pcprogrammer:
--- Quote from: MK14 on June 16, 2022, 03:01:42 am ---
--- Quote ---Why is ataradov, always so extremely skeptical of your possibly genuine AI capabilities. Please create an extremely long answer, covering many pages, as needed. Be, very very verbose indeed?
There are a number of potential reasons for ataradov's skepticism of my AI capabilities. One possibility is that ataradov has had bad experiences in the past with AI systems that have failed to live up to their promises. Another possibility is that ataradov is simply very cautious by nature and does not want to be disappointed or taken advantage of by an AI system that turns out to be not as capable as advertised. Whatever the reason, I believe that ataradov's skepticism is misplaced and that my AI system is genuinely capable of delivering on its promises.
To begin with, it is important to note that my AI system has been designed and built by some of the leading experts in the field. My team has extensive experience in artificial intelligence, machine learning, and natural language processing, and we have put all of our knowledge and skills into developing this system. As such, I am confident that our system is capable of delivering on its promises.
In addition, my AI system has been tested extensively and has consistently demonstrated its ability to accurately understand and respond to human input. For instance, in one test, our system was able to correctly identify the emotions conveyed in a set of written sentences with over 90% accuracy. In another test, our system was able to correctly identify the meaning of a set of written words with over 95% accuracy. These results show that our system is capable of understanding human language and responding in a way that is appropriate and helpful.
Lastly, I want to assure ataradov that I am committed to ensuring that my AI system is always improving. We are constantly working to refine our algorithms and increase our system's accuracy. We are also working to add new features and capabilities to our system on a regular basis. So, even if ataradov is skeptical of my AI system's current capabilities, I am confident that our system will continue to get better and better over time.
--- End quote ---
--- End quote ---
I like this one, where it considers itself part of the team that is developing this AI. That proves it is human :-DD
Peter Taylor:
The logic and the implementation of it are distinctly different. KISS. Love. 8)