
AI Hello World


AI Hello World - Part 1

To install, open a terminal and type this:

--- Code: ---sudo apt install python3 python3-pip
pip3 install torch torchvision torchaudio transformers

--- End code ---

Try AI-generated text with GPT-Neo-2.7B, a pretrained model.  The "2.7B" stands for 2.7 billion parameters.  On the first run, the code will auto-download the GPT-Neo-2.7B model, a roughly 10 GB download (by default into ~/.cache/huggingface/transformers/).  Once downloaded, everything runs entirely locally: no further downloads, no subscription and no Internet connection required.
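Before committing to the 10 GB download, you can check whether the model files are already in the cache directory mentioned above; a quick sketch (just a convenience check, not part of the original post):

```python
import os

# default Hugging Face cache directory (as mentioned above); the model's
# files land in a subdirectory here on the first run
cache_dir = os.path.expanduser("~/.cache/huggingface/transformers")
print(cache_dir)
print("cache directory exists" if os.path.isdir(cache_dir)
      else "nothing cached yet, model will be downloaded on first run")
```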

To run it, open a terminal and type (only the text, without the  ">>> "):

--- Code: ---python3
>>> from transformers import pipeline
>>> generator = pipeline('text-generation', model='EleutherAI/gpt-neo-2.7B')
>>> text_seed = "A horse, a frog and a giraffe walk into a bar."
>>> ai_answer = generator(text_seed, max_length=128, do_sample=True, temperature=0.7)     
>>> print(); print( ai_answer[0]['generated_text'] ); print()

>>> # press arrow up twice, then Enter, to re-run the ai_answer = ... and the print() ... lines again
>>> # if answers take too long, make max_length=50, for more creativity in answers try with temperature=0.9

>>> exit()

--- End code ---
That's all.  From now on it can run offline any time.
* * *
Some examples of AI-generated answers (on an i7/4GHz it takes about 1 minute to answer):

text_seed = "A horse, a frog and a giraffe walk into a bar."

--- Quote ---A horse, a frog and a giraffe walk into a bar. The frog says, “It’s been a while since I’ve had a drink, but I’ll have one.” The giraffe says, “I’m going to have one, too.” And the bartender says, “How about a drink for the whole group?” The horse says, “Are you going to pay for this?” And the bartender says, “No, it’s on me.”

--- End quote ---
Wow, that was a disturbingly OK answer, WTF?! 8) and pardon my French.  I wonder what would happen if we tried to troll the AI.  Let's write in French.  ;D

text_seed = "La mécanisation de l'agriculture a changé la vie de nos villages."  (French for "The mechanization of agriculture has changed the life of our villages.")

--- Quote ---L'économie est devenue stratosphère, la croissance économique était née de la mécanisation de l'agriculture et de l'élevage, les salaires augmentent et les coûts de production sont plus bas.
--- End quote ---
Lol, nice usage of the "croissant" word :o (the AI answered in French, roughly: "The economy went stratospheric; economic growth was born of the mechanization of agriculture and livestock farming, wages are rising and production costs are lower.").  I wonder who's trolling whom here, or, as we say in Romania, "Unde dai şi unde crapă." (meaning "where you hit vs. where it cracks").  Let's try the Romanian text with the AI, too.

text_seed = "Unde dai şi unde crapă."

--- Quote ---Unde dai şi unde crapă. În ciuda faptului că, în ciuda unui astfel de scop, ceva întâmplător se va întâmpla, cineva va întâlni, într-un post de televiziune, un politicien care-l va acuza pe un lider politic de corupţie, şi cineva va spune, în numele lui, ...
--- End quote ---
Meh, OK (roughly: "Despite the fact that, despite such a goal, something accidental will happen, someone will meet, on a TV station, a politician who will accuse a political leader of corruption, and someone will say, on his behalf, ..."), but the fact alone that it answered in plain Romanian already felt uncanny, and unsettling.  Though, is all this of any use, other than for entertainment that gets old quickly?  It could be.  For example, it can "speak" Python, too, so GPT-Neo can be used as a programming companion:

text_seed = "def palindrome(string):     # returns the string backwards"

--- Quote ---def palindrome(string):     # returns the string backwards in order
    for i, char in enumerate(string):
        if char!= string[-i - 1]:
            return string[:i] + string[-i - 1]
    return string

print(palindrome("I ate a piece of pie yesterday")[::-1])

I ate a piece of pie yesterday

--- End quote ---
Note that the usage example contains the pythonic way of reversing a string, "string[::-1]".  ;D
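For comparison, the AI's snippet above doesn't actually detect palindromes; a minimal, correct check built on that same `[::-1]` slicing idiom (my own sketch, not part of the AI's answer) would be just:

```python
def is_palindrome(string):
    """True if the string reads the same forwards and backwards."""
    return string == string[::-1]

print(is_palindrome("racecar"))                        # True
print(is_palindrome("A horse, a frog and a giraffe"))  # False
```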

text_seed = "def bubble_sort(list):  #sort a Python list"
ai_answer = generator(text_seed, max_length=200, do_sample=True, temperature=0.9)

--- Quote ---def bubble_sort(list):  # sort a Python list in ascending order
    for i in range(len(list)):  # for each item in the list
        if list > list[i+1]:
            list, list[i+1] = list[i+1], list

Here is a version that uses sorting.sort() and then loops over the sorted list repeatedly:
def bubble_sort(list):
    sorted_list = sorted(list)
    for i in range(len(sorted_list)):

--- End quote ---
Could be helpful.  :-+
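For reference, the AI's `bubble_sort` above is not actually correct (it compares the whole list against single elements and the nested loop never completes the sort).  A working bubble sort in plain Python is only a few lines; a sketch:

```python
def bubble_sort(items):
    """Sort a Python list in place, ascending, by repeatedly
    swapping adjacent out-of-order pairs."""
    n = len(items)
    for i in range(n - 1):
        swapped = False
        for j in range(n - 1 - i):  # the tail is already sorted
            if items[j] > items[j + 1]:
                items[j], items[j + 1] = items[j + 1], items[j]
                swapped = True
        if not swapped:  # early exit when no swaps were needed
            break
    return items

print(bubble_sort([5, 1, 4, 2, 8]))  # [1, 2, 4, 5, 8]
```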
* * *
Though, keep in mind that some answers might not be that good: sometimes it returns sloppy answers, or it starts repeating the same group of words over and over (increase the "temperature" if you get repetitions).
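For intuition about the "temperature" knob: the sampler divides the model's scores (logits) by the temperature before the softmax, so low values sharpen the distribution (predictable, repetitive text) and high values flatten it (more varied text).  A toy illustration in plain Python (not GPT-Neo's actual code, just the idea):

```python
import math

def softmax_with_temperature(logits, temperature=1.0):
    """Scale logits by 1/temperature, then normalize with a softmax."""
    scaled = [l / temperature for l in logits]
    peak = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - peak) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [4.0, 2.0, 1.0]  # hypothetical scores for three candidate tokens
print(softmax_with_temperature(logits, 0.5))  # sharply peaked on the first token
print(softmax_with_temperature(logits, 2.0))  # flatter, other tokens get a real chance
```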

Anyway, the point is:   Installing and running an AI is easier than one might guess.  :D
* * *

AI Hello World - Part 2

* * *

I have been tinkering around the edges of neural networks: I have implemented the MATLAB digit-recognition net and worked through building NVIDIA's demo projects.  Thanks for posting this, I'll have to give it a try as well.

Interesting, that digit-recognition project.  :-+

Such an OCR could make a universal data logger if, instead of a static picture of digits, it were fed snapshots taken from a webcam.  Point the webcam at any DMM, or at a water meter/gas meter around the house, and you have a universal logger: good for plotting, for example, how a battery discharges with a DMM attached to the load, or gas consumption vs. outside temperature, or for raising an alarm if the water meter suddenly starts counting during the night because of a pipe break, things like that.

All the following is only nice to have, not required.  To keep it simple, none of this was posted in the OP.

Since you mentioned nVidia, GPT-Neo inference should run faster if you have CUDA.  There is a slightly different install command, so that PyTorch is informed about CUDA at install time:

--- Code: ---# in a terminal, check the CUDA version
nvcc --version

# if CUDA is not installed
sudo apt install nvidia-cuda-toolkit

# - visit the PyTorch website to install PyTorch, it can prepare a custom install line for you, to run in a terminal
#    - scroll to the INSTALL PYTORCH paragraph
#    - select the desired PyTorch build, OS, etc., and that will create the proper install command
#    - copy/paste the command in a terminal to execute it and install PyTorch
pip3 install torch torchvision torchaudio --extra-index-url

# my CUDA was 11.5, a version not listed on that webpage,
#    so I selected 11.3 and then manually edited the above install line from CUDA 11.3
#    to 11.5 (change the number 113 at the end of the line into 115)

--- End code ---

Another thing you might want to do at install time is to use a Python virtual environment (venv), to keep all the AI installs isolated from the rest of your Python setup.  If you haven't used venv before, the idea (Python virtual environments) might seem complicated.

Everything runs the same, both the engine and the AI examples; venv just keeps the installs better organized, though you have to remember each time to switch to that environment before running the AI examples, then deactivate the environment to go back to normal.  Something like this:

--- Code: ---sudo apt install python3-venv

cd your/preferred/sandbox/directory/

# create a new venv, here named 'envAI'
python3 -m venv envAI

# switch to it; as long as it is active, there will usually be a "(envAI)" visible in the command prompt
source envAI/bin/activate

# from now on, any pip3 install will install Python modules inside the 'envAI' Python instead of the default Python
# check which environment is activated; the path listed by the next command should end in 'envAI/bin/python3'
which python3

# now we can start installing Python modules like PyTorch and Transformers in this isolated envAI location (AKA environment)
pip3 install torch torchvision torchaudio --extra-index-url
pip3 install transformers

# note that the GPT-Neo model will download outside of 'envAI', the models are not pip3 installs
# the default model location is ~/.cache/huggingface/transformers; the location can be changed if desired

# all installed, now start python3 and try GPT-Neo like in the Python examples from the first post
python3
>>> # play with the python residing inside envAI
>>> # ...
>>> # done playing, now exit python
>>> exit()

# to run other .py files using this particular python that now has the AI modules installed
python3 path/to/any/script.py

# to deactivate our envAI python (AKA the envAI environment), and return to the default python
deactivate

# all back to normal, from now on any pip3 commands
#      will act only upon the normal (default) python,
#      leaving untouched our python setup and the modules that were installed inside 'envAI'

# that's it about installing PyTorch and Transformers inside our 'envAI' virtual environment (venv)
# you can make another venv, then activate it and try other versions or combinations of modules than the ones inside 'envAI'

--- End code ---

You may already be aware of this, but I feel obligated to say it in case you are not.

Keep in mind that machine learning should not be perceived as anything more than glorified statistics, with resulting equations too complicated to currently be understood or verified.  That means what you get is probabilistic in nature, and under the hood it's just high-complexity maths.

It's important, because people approach "smortnets" just like any other seemingly magic technology of the past and attribute to them properties they could never have.  That leads to greatly overestimating the abilities of these solutions at their current stage.  On the other end there is the camp which perceives human brains as much more than they really are. ;)

Agreed, and all that applies to any other neural-network-driven entity, too, including us humans.  ;D

