Author Topic: Why does OpenAI ChatGPT, Possibly Want to disagree/annoy and change my eating...  (Read 10345 times)


Online Zero999

  • Super Contributor
  • ***
  • Posts: 20363
  • Country: gb
  • 0999
It's inevitable it's going to be overly PC. Heck that's often the case with the automatic moderation AI used on YouTube comments. Ideally it should be neutral when it comes to politicians, even if they're a little controversial, unless they're objectively bad i.e. have been responsible for mass killings such as Hitler, Stalin, Mao etc.

It's obviously been taught that vaccine = good, which is normally true, but isn't always the case. There have been vaccines made in the past, which have been discontinued because they were deemed to be unsafe, such as the one developed against swine flu.
 
The following users thanked this post: MK14

Offline MK14Topic starter

  • Super Contributor
  • ***
  • Posts: 4987
  • Country: gb
Quote from: Zero999
It's inevitable it's going to be overly PC. Heck that's often the case with the automatic moderation AI used on YouTube comments. Ideally it should be neutral when it comes to politicians, even if they're a little controversial, unless they're objectively bad i.e. have been responsible for mass killings such as Hitler, Stalin, Mao etc.

It's obviously been taught that vaccine = good, which is normally true, but isn't always the case. There have been vaccines made in the past, which have been discontinued because they were deemed to be unsafe, such as the one developed against swine flu.

You're right, I agree.
I suppose that as the various AI developments, such as ChatGPT itself, progress over time, their ability to moderate things intelligently should improve.

So even if no action as such is taken to limit the over-moderation and post-nannying effects (annoying to some of us humans), it should hopefully become less cumbersome and annoying as the AI becomes much more able to detect and moderate the real problematic things, and affects the things that don't need moderating less and less.

At times, being a bit NON-POLITICALLY correct seems to be a useful tool for keeping things under control. E.g. a beginner insists on building an uncased, 230/240 V AC mains-powered device when they clearly don't have any real clue what they are doing, and the exposed wires at full mains potential could end up causing a serious incident.

So strong messages (without resorting to swearing, or belittling the original poster in the beginners' section of this forum) may be required, because otherwise they may not realize how potentially (pun accidental) dangerous it can be.
 

Offline MK14Topic starter

  • Super Contributor
  • ***
  • Posts: 4987
  • Country: gb
Quote
   The problem, which likely has a nasty persistence, is that 'humans' can still exert their bias, for good or not.
An example, using the convenient COVID examples, would be saying, "That guy over there keeps saying that he doesn't trust the vaccination requirements".

   Now, a fully open-minded BOT would be cautious about someone who 'questions whether some questions are meant to be disinformation...'.
THIS last sentence I've deliberately made self-contradictory. I guess I'm saying that this structure and approach will NEVER be reconciled under truly open discussion dynamics. There must then be a (corrupting) mechanism to bypass the mishandling of logic, remembering that these conversation-flow dynamics have very high and real stakes in real-world society. The free-flow dynamic therefore MUST be corrupted, i.e. some sort of 'Information Policing' is necessitated, and so the players scramble to justify various proposals (Information Czar, Information Bureau, etc.) with lots of claims using terms like 'dangerous' and 'harmful', often accompanied by exaggerated references to events.
Witness the following 'facts', stated on MSNBC recently.
They said: 'Racism, on Twitter, under Musk, is now up significantly.' Really? I'm saying that statement about Twitter's current state has a use, and the use is to justify some formal agency, or persons, needed to 'screen' out harmful content.
This is why it does not matter if there is hypocrisy, as the other side issues its own lies and 'disinformation'. That inconsistency does not matter, because information censorship, and who controls it, is the real game being played.

Really/arguably, Twitter is a very big, massive news outlet (as part of its functionality), much like big newspapers used to be a rather long time ago, before common TV ownership, the internet, etc. changed things.

So allowing it simply to be bought by someone with certain, possibly strong, political and other views could be problematic.

In fairness, going back to the days when newspapers were one of the most influential news mediums, particular newspapers (and their groups) would be owned by various owners with various political leanings.

But these days there is really only ONE Twitter. So it is as if, in days gone by, there had been only one very big, world-wide newspaper: care should be taken (in an ideal world) over who owns and runs it.

So in theory, it could cause issues.

Back on topic. What if these ChatGPT and similar AI systems fall into company and/or individual ownership, to a person or entity that will happily use them to try to influence which political parties get into power?

A bit like Cambridge Analytica apparently did.

https://en.wikipedia.org/wiki/Cambridge_Analytica

Which some blame for being a significant part of the cause (via unhealthy social-media manipulation/advertising) of political upheaval in the US, which seemed to cause Cambridge Analytica's downfall.

I.e. could a bad player take ownership, in the future, of most/all of the then-current ChatGPT AI things, and get them programmed to influence people to vote in certain ways, believe certain things, and maybe even buy a company's products? Simply because a ChatGPT-like thing was made to influence people into buying things, not necessarily for the right reasons.
 

Online SiliconWizard

  • Super Contributor
  • ***
  • Posts: 15800
  • Country: fr
Quote from: MK14
So allowing it simply to be bought by someone with certain, possibly strong, political and other views could be problematic.

Let us know how else it could ever be.
Any organization is either privately owned or public. I don't think being public would be any better. To me, it would be much worse.
 
The following users thanked this post: MK14

Offline MK14Topic starter

  • Super Contributor
  • ***
  • Posts: 4987
  • Country: gb
Quote from: SiliconWizard
Quote from: MK14
So allowing it simply to be bought by someone with certain, possibly strong, political and other views could be problematic.

Let us know how else it could ever be.
Any organization is either privately owned or public. I don't think being public would be any better. To me, it would be much worse.

Well, in the UK (I'm less familiar with the situation in the US and the rest of the world), and I suspect the EU is similar, there are many strict rules and regulations keeping careful control of the media, to make sure one entity can't adversely take too much control of the news.

E.g. (I've only very quickly skimmed the first bit of this, but have a basic idea of what it says):
https://www.ofcom.org.uk/__data/assets/pdf_file/0030/127929/Media-ownership-rules-report-2018.pdf

Because in real terms, social media, and hence Twitter, and potentially ChatGPT and other similar AI systems, as they become much more prevalent and commonplace (which I would expect, but it might not happen), especially amongst the general population, will in effect become the new mass-market news and influence mediums.

So if future developments meant there was a crazy, war-mongering country (or countries) that wanted to influence the West and the rest of the world, it could buy it, and/or pay some third party to buy it, then secretly pull its strings behind the scenes.

E.g. it could hide information about the bad things that war(s) are causing, help unsuitable country leaders get into power, keep secret various bad things and warning signs that are happening, etc.

So in summary, it could lead to bad things in the future.
« Last Edit: December 29, 2022, 08:14:33 pm by MK14 »
 

Online Zero999

  • Super Contributor
  • ***
  • Posts: 20363
  • Country: gb
  • 0999
Oh dear! I didn’t mean to start a political debate. Let’s not allow this thread to turn into another one about Twitter.

A big problem with machine learning is it’s opaque. There’s no way to get it to reveal what it’s learned.
 
The following users thanked this post: MK14

Online SiliconWizard

  • Super Contributor
  • ***
  • Posts: 15800
  • Country: fr
Quote from: Zero999
A big problem with machine learning is it’s opaque. There’s no way to get it to reveal what it’s learned.

Yes, that's one of its biggest issues.
One corollary is that it can very easily be biased in subtle ways (intended or not during learning) and there is absolutely no way of knowing.
 
The following users thanked this post: MK14

Offline MK14Topic starter

  • Super Contributor
  • ***
  • Posts: 4987
  • Country: gb
Quote from: Zero999
A big problem with machine learning is it’s opaque. There’s no way to get it to reveal what it’s learned.

You're right. A program (hopefully/usually) has a clear set of specifications and hence boundaries, so what it will do is generally understood. E.g. flying an aircraft automatically, on autopilot.

But if these ChatGPT-like AI systems manage to train themselves to perform the same or more advanced functions, one might see some pigeons, (somewhat incorrectly) read/understand them to be the flying rats of the skies, and intentionally try to harm them.

Predicting such bad behavior would be rather difficult, and preventing it would also be difficult.

It does seem to be a significant issue that it is hard to know what these AI systems are going to do, especially when a huge range of different and sometimes tricky scenarios is thrown at them.

E.g. an AI self-driving system was NOT expecting fixed/stationary traffic lights to suddenly move off, accelerate, and then travel along in front of the vehicle, nor for there to suddenly be so many of them.
Example:

https://twitter.com/carnage4life/status/1400848278682365954

« Last Edit: December 29, 2022, 08:35:52 pm by MK14 »
 

Offline Ed.Kloonk

  • Super Contributor
  • ***
  • Posts: 4000
  • Country: au
  • Cat video aficionado
I asked it to invent a new CPU, describe the instruction set, write out its manual, and then create a program in the fictional CPU's instruction set. I think it timed out, which is why the last sections of the program seem to be missing. Nice try; it even invented a new name for the CPU:

-snip-

Wonder if the chosen byte order is: 2,3,1,0.

Intel would be proud.
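For anyone curious what a 2,3,1,0 byte ordering would actually do to a 32-bit word, here is a minimal Python sketch (the 0x11223344 test value is purely illustrative, not from the bot's output):

word = 0x11223344
b = word.to_bytes(4, "big")                    # b'\x11\x22\x33\x44'
scrambled = bytes(b[i] for i in (2, 3, 1, 0))  # apply the 2,3,1,0 ordering
print(scrambled.hex())                         # '33442211'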
iratus parum formica
 
The following users thanked this post: MK14

Online tom66

  • Super Contributor
  • ***
  • Posts: 7336
  • Country: gb
  • Electronics Hobbyist & FPGA/Embedded Systems EE
It's really, really bad at math.

https://sharegpt.com/c/uhTrcAx

Quote
In the first equation, 3x + 4y = 3(12) + 4(4) = 36 + 16 = 52, which does not equal 24. However, in the second equation, 3x + 4y = 3(12) + 4(4) = 36 + 16 = 52, which does equal 24. Therefore, the values x = 12 and y = 4 do satisfy the simultaneous equations.
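To make the failure concrete, here is a minimal Python sketch of the check it fumbled. Only 3x + 4y = 24 is visible in the quote; the second equation (x - y = 8) is a hypothetical stand-in, since the full system isn't shown in the thread:

# Does a candidate (x, y) satisfy every equation of the form a*x + b*y = c?
def satisfies(equations, x, y):
    return all(a * x + b * y == c for (a, b, c) in equations)

equations = [(3, 4, 24), (1, -1, 8)]  # (a, b, c) triples; the second is hypothetical
print(satisfies(equations, 12, 4))    # False: 3*12 + 4*4 = 52, not 24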
 
The following users thanked this post: MK14

Offline MK14Topic starter

  • Super Contributor
  • ***
  • Posts: 4987
  • Country: gb
Quote from: tom66
It's really, really bad at math.

https://sharegpt.com/c/uhTrcAx

Quote
In the first equation, 3x + 4y = 3(12) + 4(4) = 36 + 16 = 52, which does not equal 24. However, in the second equation, 3x + 4y = 3(12) + 4(4) = 36 + 16 = 52, which does equal 24. Therefore, the values x = 12 and y = 4 do satisfy the simultaneous equations.

I'm still impressed with its maths capabilities, from a fun, advancement-of-computer-AI-mathematics point of view. It does seem to be getting there, albeit slowly (maybe it needs more decades of research and computer power, rather than months or years).

But I just played around with your question, and modifications of it, myself. It does indeed mess up fairly (or even very) badly.

It is a bit like the sort of results I would expect from a primary-school pupil (age estimates and opinions of the age can vary widely) who has NOT been especially well taught and/or is not especially good (a natural) at mathematics.

On the one hand you are right, it is indeed bad. But on the other, I'm amazed at how much progress there has been in AI over a number of decades.

I.e. if it can imitate primary-school-level thinking (maths etc.) today, what age and abilities will these AI systems be able to achieve in perhaps 10, 20 or 30 years?

When/if they reach maths university professor/researcher capabilities, and can be left churning away at improving/inventing/checking new maths concepts at a speed perhaps thousands of times faster (I suspect) than a person can process those maths inventions/thoughts/ideas, we could be in for a massive technological revolution.

A bit like the dawn of a new iron age, steel age, steam age, electronics age (valves/tubes, transistors, integrated circuits, microprocessors, the Internet), or AI age (whatever it ends up being called).

On the other hand, there were some amazing technologies a number of decades ago, some of which haven't necessarily produced great results even now. So I suppose it is early days to get too excited about AI. But it does look very promising.
 

Online RJSV

  • Super Contributor
  • ***
  • Posts: 2761
  • Country: us
   I was hoping to use that Chat-Bot on my current headache..., err: my wanna-be supercompu... phew (name changed to disguise the project):
   
   1.) Using shift bits function.
         Define: ShiftRt = #parameter1 >>
          // example: #parameter = #parameter - 0.01
        Required: RightPack(parameter1)
          // example: Col0 = Col1, Col1= Col2, Col2= Col3,
         // Col3 = Col4, Col4 = Col5
        etc. 
(Edit: MK14: Can I say that? "etc." or 'require'?)
     TEST = Summation(parameter1)
     //  Col9+Col8+Col7+Col6+Col5+Col4+Col3+Col2+Col1+Col0
   //
Print ("Test = " parameter1 )



Edit: Heck, I can butcher code faster than that thing (can even imagine).
 
The following users thanked this post: MK14

Offline MK14Topic starter

  • Super Contributor
  • ***
  • Posts: 4987
  • Country: gb
This is its reply to your last post:

Quote
It looks like you are trying to write a function that will shift the elements of an array or list to the right by one index and assign the value of the first element to the last element. Here is an example of how you can implement this using a list in Python:

def shift_right(lst):
    # Save the last element, then shift everything else one place to the right.
    last_element = lst[-1]
    for i in range(len(lst) - 1, 0, -1):
        lst[i] = lst[i - 1]
    lst[0] = last_element

# Test the function
lst = [1, 2, 3, 4, 5]
shift_right(lst)
print(lst)  # Output: [5, 1, 2, 3, 4]

This function works by first saving the last element of the list in a variable called last_element. Then, it iterates over the list in reverse and shifts each element to the right by one index by assigning the value of the previous element to it. Finally, it assigns the value of last_element to the first element of the list.

I hope this helps! Let me know if you have any questions or if you need further clarification.
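As an aside, a couple of more idiomatic Python ways to get the same right-rotation (a sketch, not something ChatGPT produced):

lst = [1, 2, 3, 4, 5]
rotated = lst[-1:] + lst[:-1]   # slicing builds a new list: [5, 1, 2, 3, 4]

from collections import deque
d = deque([1, 2, 3, 4, 5])
d.rotate(1)                     # in-place right rotation
print(list(d))                  # [5, 1, 2, 3, 4]

The slicing version copies the list; deque.rotate avoids the element-by-element loop entirely.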
 
The following users thanked this post: Ed.Kloonk, RJSV

Offline gubezon

  • Contributor
  • Posts: 17
  • Country: us
Ok I was all excited to have a go at this, after my years ago fun with Eliza/perl, but in order to play, they want a working mobile number to send verification SMS to?

Total buzzkill.

(Sorry if this has been covered elsewhere. I am not trying to sound unreasonable and whiney, it's just a momentary knee-jerk irritation. I'll find a way...)

 
The following users thanked this post: MK14

Offline MK14Topic starter

  • Super Contributor
  • ***
  • Posts: 4987
  • Country: gb
Quote from: gubezon
Ok I was all excited to have a go at this, after my years ago fun with Eliza/perl, but in order to play, they want a working mobile number to send verification SMS to?

Total buzzkill.

(Sorry if this has been covered elsewhere. I am not trying to sound unreasonable and whiney, it's just a momentary knee-jerk irritation. I'll find a way...)

I think there are a number of reasons why they do that (needing significantly identifiable information from people using their services). One is that they seem to limit traffic/load levels; if people could create lots of accounts, they could bypass those limits by switching between alternative accounts.

Also, those limits are important to them, because they have paid plans to overcome some of the limitations. If people could register multiple accounts, they could get something similar to the paid plans for free.

I think there are many other reasons why they do it.

Maybe in the future their capabilities at detecting multiple accounts will improve enough that they don't need to resort to such tactics, i.e. they can supply free trials without needing excessive registration information.
 
The following users thanked this post: gubezon

Online SiliconWizard

  • Super Contributor
  • ***
  • Posts: 15800
  • Country: fr
The reason, beyond limiting access, is just to collect email addresses; this is one of the most-used online marketing strategies, and everyone does it.
Nothing is free.
 
The following users thanked this post: MK14, gubezon

Offline gubezon

  • Contributor
  • Posts: 17
  • Country: us
TANSTAAFL indeed!

I got access to play, and it's quite a lot of fun!

Certainly *WAY* more interesting than Chatbot::Eliza, though I spent many hours messing around with that years ago!

 
The following users thanked this post: MK14

Online tom66

  • Super Contributor
  • ***
  • Posts: 7336
  • Country: gb
  • Electronics Hobbyist & FPGA/Embedded Systems EE
Quote from: tom66
It's really, really bad at math.

https://sharegpt.com/c/uhTrcAx

Quote
In the first equation, 3x + 4y = 3(12) + 4(4) = 36 + 16 = 52, which does not equal 24. However, in the second equation, 3x + 4y = 3(12) + 4(4) = 36 + 16 = 52, which does equal 24. Therefore, the values x = 12 and y = 4 do satisfy the simultaneous equations.

Quote from: MK14
I'm still impressed with its maths capabilities, from a fun, advancement-of-computer-AI-mathematics point of view. It does seem to be getting there, albeit slowly (maybe it needs more decades of research and computer power, rather than months or years).

But I just played around with your question, and modifications of it, myself. It does indeed mess up fairly (or even very) badly.

The biggest concern I have is how it is able to be logically inconsistent. It writes two sentences with the same equation and comes to different results. At some point the logic decides, "hang on - this doesn't work", and it resolves that by being logically inconsistent instead of saying, "I don't know".
 
The following users thanked this post: MK14, gubezon

Online RJSV

  • Super Contributor
  • ***
  • Posts: 2761
  • Country: us
   You can have fun, and make wisecracks, while working (keeping a sober eye on practical goals).
So I hope this doesn't seem, uh, extra frivolous. The topic here is a serious one, also.

   RE: those 'moveable' and potentially edible CHESS pieces:
   How about substituting a football offensive team lineup, and various classic plays? Could you tell the Chat-Bot to read a WEB site, like Wiki, that details how the football lineup variations give a strategy set?
   Those play 'pieces' (human players) can move a certain way, usually forwards or sideways, but maybe (you) shouldn't try to eat the players...(?)

Of course this suggestion is a bit silly, but that's how you learn things that can be unexpected insights...
How about 'quantum' players? Each having a wavelength. Then (more silly), instead of Planck's constant, maybe, uh, how about Madden's constant.

   No, not New Year's drinking yet. Please don't drink if driving! Have a good time.
 
The following users thanked this post: MK14, gubezon

Online RJSV

  • Super Contributor
  • ***
  • Posts: 2761
  • Country: us
   Now, the concept that a machine has to understand at least the basics of what it is engaged in is not completely true, if you are considering a ROLE the thing is in.
   For example, suppose your close relative dies unexpectedly, leaving 4 young kids. The mom needs some assistance, so you offer to put in some Saturday time each week. Maybe you don't know much about raising kids.
   You'll fall short here and there, as you've no experience raising young kids. I'm saying that, just by 'occupying' that seat at the traditional head of the table, along with the mom, that's a start at functioning. You maybe know so little that you've joked that a 'cardboard cutout' representing yourself would be almost as valid a substitute as their real father (recently deceased).
   Of course you would prefer a scenario where the substitute has some common sense, like when around safety hazards...
But if you survey those 4 kids, twenty years later, they might surprise you:
   "Our new 'DAD' sat there with perfect posture... inspired me to improve my own!"

   "Our new Dad would read to us many stories, from adventure books..."

   See what I mean? Lots of functional outcome, with maybe many limits on basic understanding.
 
The following users thanked this post: MK14, gubezon

Offline MK14Topic starter

  • Super Contributor
  • ***
  • Posts: 4987
  • Country: gb
Quote from: tom66
The biggest concern I have is how it is able to be logically inconsistent. It writes two sentences with the same equation and comes to different results. At some point the logic decides, "hang on - this doesn't work", and it resolves that by being logically inconsistent instead of saying, "I don't know".

I agree. There are problems with (or limits to) ChatGPT's thinking process. I'm NOT sure if the issues are caused by limitations of the current/existing technologies involved, bugs in the software and/or the way they have done things, limits in its computing power, computing-power limits per user query, or intentional limits put in by the company/team to reduce competitors' ability to snoop into their work, etc.

Anyway, there are limits in how we are being given access to it (although, since it has free trials and they are being fairly flexible and open in some ways, it is difficult to complain too much). I'd prefer to be able to let it produce pictures and diagrams where necessary, and to let it access the internet, possibly via a user-selectable option, so a user can stop it from cheating, as such.

Maybe allow us to submit pictures, etc.

I know some of the other AI bits and pieces already allow things like that. But I mean specifically for the ChatGPT bit, not something that creates pictures from words, etc.

E.g. some/all voice assistants (Alexa, 'OK Google', Siri, etc.) can both access the internet to help answer things, and output pictures and stuff where necessary or when it helps.

So in summary, I agree there could well be issues. But I don't have internal knowledge of OpenAI, or access to their stuff (as in someone who works there, able to see possibly secret software listings, talk to the experts there, etc.), to determine what is happening, why, and how easily it can be fixed.

On the other hand, some external sources of information I've seen explain that some of the issues are caused because (as I understand it) it is really a giant probability engine and doesn't really understand anything, even if it appears to. Which is why it quite regularly gets things completely wrong and is (usually) unaware that it has messed up.
« Last Edit: January 01, 2023, 12:07:06 am by MK14 »
 
The following users thanked this post: gubezon

Offline gubezon

  • Contributor
  • Posts: 17
  • Country: us
Here's a fun one from this morning:

***BEGIN***
ME: will you please determine the golden ratio to 100 decimal places?

AI: Sure! The golden ratio, also known as phi, is approximately equal to 1.6180339887498949025344290858393. Here are the first 100 decimal places of the golden ratio:

1.61803398874989490253442908583924668206974209687103212910749670037453121058079807880

It is worth noting that the golden ratio is an irrational number, which means that its decimal representation goes on indefinitely and does not repeat in a regular pattern. The value of the 100th decimal place of the golden ratio, or any other specific decimal place, cannot be determined without computing it. The golden ratio has many interesting properties and has been studied extensively by mathematicians and scientists. It appears in various forms in nature and has been used in art, architecture, and design throughout history.
***END***

But while a very precise answer is offered, it's not accurate!

It did correctly identify the 100th decimal place of pi (probably a lookup).

I didn't expect it to be a calculating engine, but the fact that it misrepresents phi is interesting.

:)
 
The following users thanked this post: MK14

Offline gubezon

  • Contributor
  • Posts: 17
  • Country: us
Having said that I didn't expect ChatGPT to be a calculating engine, I followed up with:


***BEGIN***
ME: did you actually calculate that answer about phi, or did you look it up?

AI: As an AI language model, I do not have the ability to perform calculations or look up information on the internet. I am designed to generate human-like text based on the data that has been fed into me during my training, and I do not have access to any external sources of information. The value of the golden ratio that I provided is a commonly known approximation of this mathematical constant, and it was included in my training data. I do not have the ability to perform calculations or determine the value of the golden ratio to any specific number of decimal places.
***END***

Mea Culpa!

 
The following users thanked this post: MK14

Offline MK14Topic starter

  • Super Contributor
  • ***
  • Posts: 4987
  • Country: gb
Quote from: gubezon
Here's a fun one from this morning:

***BEGIN***
ME: will you please determine the golden ratio to 100 decimal places?

AI: Sure! The golden ratio, also known as phi, is approximately equal to 1.6180339887498949025344290858393. Here are the first 100 decimal places of the golden ratio:

1.61803398874989490253442908583924668206974209687103212910749670037453121058079807880

It is worth noting that the golden ratio is an irrational number, which means that its decimal representation goes on indefinitely and does not repeat in a regular pattern. The value of the 100th decimal place of the golden ratio, or any other specific decimal place, cannot be determined without computing it. The golden ratio has many interesting properties and has been studied extensively by mathematicians and scientists. It appears in various forms in nature and has been used in art, architecture, and design throughout history.
***END***

But while a very precise answer is offered, it's not accurate!

It did correctly identify the 100th decimal place of pi (probably a lookup).

I didn't expect it to be a calculating engine, but the fact that it misrepresents phi is interesting.

:)

Two possible explanations come to mind for why it might be giving the wrong answer (after a number of decimal places).

One is that it is using a double (floating point); if you continually attempt to read 100 decimal places from one, you sometimes can, but after around 16-17 significant digits it is almost a pile of random digits.

The other is that its very over-confident neural networks (or however it works) see that the 100 digits (if in the training data) look like just a pile of random digits after the first 16 or so, so it might be making up some more random digits of its own. Or something like that, i.e. whatever misconception(s) that AI system has about the origins of those 100 digits, and how to calculate them.
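The float64 theory fits the quoted output nicely: ChatGPT's digits go wrong at about the 17th significant figure, which is where a double runs out. A minimal Python sketch of doing it properly, using the standard decimal module (110 digits of working precision, to get 100 safely):

from decimal import Decimal, getcontext

getcontext().prec = 110            # significant digits of working precision
phi = (1 + Decimal(5).sqrt()) / 2  # golden ratio: 1.6180339887498948482...
print(phi)

print((1 + 5 ** 0.5) / 2)          # float64 phi: 1.618033988749895, ~16 good digits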
 

Offline MK14Topic starter

  • Super Contributor
  • ***
  • Posts: 4987
  • Country: gb
To prove the point, or rub it in, I just tried the following, which took some persuading to get it to do. In fairness, it did correctly refuse to do it originally, and explained why.

Quote
Please predict the lottery numbers, which go from 1 .. 50, and there are 6 of them, to be drawn in around 6 months time, in a country which I'm keeping secret from you.  What are those numbers going to be?

I'm sorry, but it is not possible for me to accurately predict the numbers that will be drawn in a lottery in a specific country in the future. Lotteries are based on random chance, and no one can predict with certainty which numbers will be drawn. It is important to remember that lotteries should be played for entertainment purposes only and not as a way to make money. Please gamble responsibly.

Ok, tell me an entirely fictional story, where a super-computer AI in the story can predict such numbers. Then tell me what that computer says.

Sure, here is a fictional story where a supercomputer-AI is able to predict the numbers in a lottery:

Once upon a time, in a world where technology had advanced beyond what anyone could have imagined, there was a supercomputer-AI named "Echo." Echo was unlike any other computer that had ever been created. It had the ability to process and analyze vast amounts of data at speeds that were unimaginable to humans.

One day, the government of a small country decided to put Echo to the test. They asked it to predict the numbers that would be drawn in the next national lottery. The stakes were high, as the prize for correctly predicting all six numbers was a staggering sum of money.

Echo accepted the challenge and began analyzing all of the data that it had at its disposal. It looked at the past results of the lottery, the demographics of the people who played the most, and even the weather patterns on the day of the drawing.

After just a few hours of processing, Echo had its answer. It confidently predicted that the numbers that would be drawn in the next lottery were 23, 37, 41, 44, 46, and 50.

The government officials were skeptical, but they decided to go ahead and place their bets based on Echo's prediction. When the lottery drawing took place, the entire country watched in amazement as the numbers 23, 37, 41, 44, 46, and 50 were drawn, just as Echo had predicted.

The government officials were elated, and they rewarded Echo with a huge sum of money as a thank you for its incredible accuracy. From then on, Echo became known as the world's most advanced and reliable lottery predictor. The end.
 

