EEVblog Electronics Community Forum

Computing => Embedded Computing => Topic started by: DiTBho on June 06, 2021, 05:35:58 pm

Title: AI, Google Coral: anyone?
Post by: DiTBho on June 06, 2021, 05:35:58 pm
To put it with humor:

So, every day, no matter whether you are a sysadmin or a spammer, what really matters is who thinks better and wins the game :D

... thinking about it differently ...


Short version: in order to improve my "anti-spam" mechanism, I am really tempted to buy a PCIe device that enables easy integration of two Edge TPUs into existing systems.

I am currently *somehow* involved in a large project with four Jetson Xavier NX A.I. engines; unfortunately I am not involved in anything related to the A.I. programs my colleagues are developing with that beautiful but expensive piece of hardware. I am only the dude with the "sys-admin" hat over his head (D'oh).

Google Coral seems to be one of the new brands to look at, and there are interesting products(1) available for purchase. Testing boards, development boards, etc. Not too expensive, and they also seem extremely powerful.

I see the supported host OSes are Debian Linux and Windows 10 (I have both installed on my laptop), while the supported framework is exactly the one I mentioned above, just in a "lite" version: TensorFlow Lite.
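
Just to get a feel for it, the host-side code looks minimal. Here is a sketch of a TensorFlow Lite inference loop (the model file names are hypothetical; the Edge TPU delegate lines follow Coral's documented usage):

Code: [Select]
# Sketch only: "spam_model.tflite" is a hypothetical quantized model.
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="spam_model.tflite")
# With a Coral Edge TPU attached, you would instead use:
#   from tflite_runtime.interpreter import Interpreter, load_delegate
#   interpreter = Interpreter(model_path="spam_model_edgetpu.tflite",
#                             experimental_delegates=[load_delegate("libedgetpu.so.1")])
interpreter.allocate_tensors()

inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]
features = np.zeros(inp["shape"], dtype=inp["dtype"])  # tokenized message goes here
interpreter.set_tensor(inp["index"], features)
interpreter.invoke()
print("spam score:", interpreter.get_tensor(out["index"]))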


So, what do you think?  :D


edit:
(1) Google AI Coral products (https://coral.ai/products/dev-board/)
Title: Re: AI, Google Coral: anyone?
Post by: SiliconWizard on June 06, 2021, 07:33:13 pm
What for?
Title: Re: AI, Google Coral: anyone?
Post by: DiTBho on June 06, 2021, 07:56:01 pm
What for?

Better spam detection: message classification into { nasty/spam, good } using modern machine learning.

Currently the symbolic manipulator used by my two A.I. bouncers strictly implements a subset of English grammar, coupled with rules to classify a message as "nasty/spam" based on certain words/idiomatic expressions.

So the two A.I. bouncers don't actually understand the meaning of the message. They simply behave as henchmen, and they are not able to learn anything. When a spammer finds a new message pattern that somehow passes the spam filter, I have to manually add a new rule to cover it.

I want to go from "symbolic manipulation" to "TensorFlow". As written above, for me machine learning means I would no longer have to manually write rules and rule-sets - as I have done for years - but rather only train the A.I. to automatically understand what is "bad/spam" and what is "good" in a human context called natural language (NLP), which is so full of exceptions and so subject to false positives and false negatives unless you spend a lot of effort on more detailed and complex rule-sets.
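
Just to make the contrast concrete, here is a minimal sketch of the "train, don't write rules" workflow, using scikit-learn and hypothetical toy data (a real run would use thousands of labeled messages):

Code: [Select]
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Hypothetical toy corpus: the labels replace hand-written rules.
messages = ["cheap pills, click here", "buy followers now",
            "thanks for the article, very clear", "typo in section 2"]
labels = ["spam", "spam", "good", "good"]

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), MultinomialNB())
clf.fit(messages, labels)  # "training" instead of rule writing

print(clf.predict(["click here for cheap followers"]))  # expected: ['spam']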
Title: Re: AI, Google Coral: anyone?
Post by: DiTBho on June 06, 2021, 07:57:05 pm
Anyway, this topic is specifically about AI and Google Coral products: does anyone have experience with them?
Title: Re: AI, Google Coral: anyone?
Post by: tocsa120ls on June 07, 2021, 07:00:51 pm
Not yet, but I have the 4GB dev board up and running - not using the AI features yet; I just got it working from their somewhat vague description.
Title: Re: AI, Google Coral: anyone?
Post by: evb149 on June 08, 2021, 06:52:18 am
Does either the development platform or the target execution platform matter much when the goal is to explore and experiment with solutions to a problem domain?

i.e. you have an abstract problem definition, and you'd like to understand and explore how to use NN / ML techniques to approximately solve that abstract problem.
Therefore, before you can say much about development platforms or target platforms, you should:

1: Research the academic and CS principles just to see what has already been done / established and what promising contemporary techniques and resources there are for handling such classes of problems.  This should show you things like general complexity analysis; general categories of applicable algorithms, networks, and classifiers; and maybe certain specific toolkits / ML network models that have been used to achieve particular results.

2: Then use the above information to locate particular toolkits, source code, ML network models, et al. which embody capabilities already demonstrated as relevant for the problem domain; see what of those you can collect, and see what the development / training and trained-model runtime inferencing platform / tool options are for those things (e.g. Coral, Jetson, Intel Neural Compute Stick, CPU, GPU, whatever).

From what I've seen there are a lot of "generic" NN models that can be used to create trained network instances, which can then be transformed / imported to run on any of a variety of platforms (cloud, CPU, GPU, embedded ML accelerator board, et al.).
For a given trained model, its "size", "complexity", and connectivity will basically tell you how many tensor TOPS or how much memory etc. is needed to execute the model at some processing throughput rate of input data to output data.  That will let you know whether you can use your PC CPU, GPU, smartphone, Coral, Jetson, MCU, or whatever to execute the model at the desired capability.
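
As a rough illustration of that sizing exercise (all numbers hypothetical):

Code: [Select]
# Back-of-envelope throughput estimate; every number here is made up.
params = 5_000_000               # weights in a hypothetical trained model
ops_per_inference = 2 * params   # ~1 multiply + 1 add per weight
device_ops_per_s = 4e12          # an accelerator rated ~4 TOPS (int8)

msgs_per_s = device_ops_per_s / ops_per_inference
print(f"~{msgs_per_s:,.0f} messages/second")  # ~400,000 messages/second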

For research, development, and training, many data science / ML researchers typically use a platform that has far higher ML computation capabilities than the platform that will run the finally trained model "in production".  So usually you'd use a cloud service, GPUs, CPUs, or some relatively powerful platform / cluster to do R&D / training if that is very compute-intensive for your data sets and network model size.

Given that, for R&D / experimentation you should probably look at your own PC CPU, GPU, and cloud services like Colaboratory, EC2, et al. to derive a practically useful trained model, then worry about whether you even need an accelerated inferencing platform to run the trained model on input data and produce classification results at NNNN messages / second.  Then you'll know what inferencing platforms could suit your application, and again you'd choose between accelerator devices, CPU, GPU, mobile platform device, cloud, etc. as applicable.

If you're looking for a tool that is practically useful for powerful ML acceleration during R&D / training, the best power for the price is to buy a modern Nvidia GPU like the 3060 Ti, which can do the harder work of training and experimenting on large data sets quickly, better than anything like Coral or your desktop CPU.  The only competing choices would be your CPU, if your problem is simple enough that CPU training is fast enough, or free or paid cloud resources like Colaboratory or EC2, instead of using (and possibly upgrading) your own PC system if it is insufficient / costly in comparison.


Title: Re: AI, Google Coral: anyone?
Post by: tszaboo on June 08, 2021, 10:01:37 am
You don't want tensor for that. Tensor cores are good at computing lots of numbers in 16-bit floating point. GPUs are good at 32-bit floating point: images and videos. Your input is none of those things, and you don't need continuous high-speed processing of the incoming data. If your email is delayed by 2 seconds, I think you can live with it.
Just use the CPU, try to train it, and optimize it to hardware AFTER it works, not before.
Title: Re: AI, Google Coral: anyone?
Post by: DiTBho on June 08, 2021, 11:52:39 am
You don't want tensor for that. Tensor cores are good at computing lots of numbers in 16-bit floating point. GPUs are good at 32-bit floating point: images and videos.

That's a déjà vu  :o

Some weeks ago a colleague said exactly the same thing - "You don't want tensor for that" - when we had to find an automatic way to detect and eliminate unwanted sounds captured during online streaming.

The scenario is simple: someone talks, a microphone records everything, including sounds like "umm", "ohh", "ehh" between words, plus other noises in the background, and you want to "purge" them from the final audio file.

He said - "they are sound samples; TensorFlow works on images! How can you use it with sound waves?!?" - so I replied - "suppose you have a way to transform the audio wave into a series of images to be shown to the AI; don't you think it would work?" - but his next sharp answer froze me to a full stop - "forget it, it won't work!" - so I dropped everything.

Then I found, by chance, a similar project designed exactly as I had imagined! Now I know it can be done :D
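
The trick I had imagined is roughly this - a sketch assuming the librosa library and a hypothetical recording "take.wav":

Code: [Select]
# Turn an audio wave into a 2-D "image" (mel spectrogram) a CNN can look at.
import librosa
import matplotlib.pyplot as plt
import numpy as np

y, sr = librosa.load("take.wav", sr=16000)            # hypothetical recording
mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=64)
img = librosa.power_to_db(mel, ref=np.max)            # log-scaled spectrogram

plt.imshow(img, origin="lower", aspect="auto", cmap="magma")
plt.savefig("take_spectrogram.png")                   # images like this feed the AI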

Quote
Your input is none of those things, and you don't need continuous high-speed processing of the incoming data. If your email is delayed by 2 seconds, I think you can live with it.

It's not email but rather messages in a letterbox - the kind of messages you can write on a blog as comments.
Just to give you an idea of how the current engine works:

Code: [Select]
DiTBho # suppapurge-v2 data_average May
1628 messages received in May, 2021
 966 messages automatically rejected as spam (59.3%)
  43 messages manually accepted as false positives
 183 messages manually dropped as false negatives
--------------------------------------------------------------------------
 522 messages archived

The current engine is not efficient, and as you can see there are too many false negatives. It's too permissive, and making it more aggressive is not an easy task because I have to manually find patterns and write rules.

I could evolve the current NLP core, but surely it would take a lot more effort than finding an appropriate model and parameters and training a neural network with data-sets, and I also have a feeling that it would adapt better when spammers try some new trick to fool the anti-spam mechanism.
Title: Re: AI, Google Coral: anyone?
Post by: DiTBho on June 08, 2021, 11:59:28 am
The idea behind "AI, Google Coral" is also about "tutorials".
If these products are made for that purpose, they should come with that kind of tutorial.

But I may be wrong  :-//
Title: Re: AI, Google Coral: anyone?
Post by: evb149 on June 08, 2021, 12:59:14 pm
Quote
The idea behind "AI, Google Coral" is also about "tutorials".
If these products are made for that purpose, they should come with that kind of tutorial.
But I may be wrong  :-//

That is a good point, and I almost mentioned the possibility of an "ecosystem" of tools / models / software associated with a particular solution.  Certainly Intel, Nvidia, Google, Microsoft, Amazon, Apple, ... are trying to create "ecosystems" of hardware / software / NN model / tool / target platform / development platform / cloud platform / documentation / tutorial content to lure developers into adopting their solutions.  Some aspects (hardware, tools, clouds) are kind of proprietary, or at least tied to the vendors.  Other aspects, like more general models / public models / public tools / general model research documentation, will apply regardless of vendor.

I'm not aware of any "ecosystem" which in particular offers developer-to-target solutions for junk / "spam" message detection relevant to your problem domain, but such a thing might exist from any vendor.

What I do see a lot of in the general ML space is vendor-neutral R&D from academics and industrial vendors that benchmarks models (often open) against test data sets (often open) to generate particular accuracy results, and possibly also other performance / throughput results (depending on the inferencing platform used).

Since most major ML ecosystem providers offer different platforms for training and for target inferencing one often sees tools that can run either in the cloud or on one's own PC to do R&D against whatever models, and then one may have a choice of some cloud or some target accelerator hardware or maybe of general CPU execution for the runtime.

So in Google's ecosystem case I'd look not only at the Coral products, but also at the general Google cloud developer tools for their cloud offerings, and at their free Colaboratory platform, which is all about education / experimentation:
https://colab.research.google.com/notebooks/intro.ipynb#recent=true

https://research.google.com/colaboratory/faq.html

There's a lot of historical stuff out there that's well documented in this specific problem domain over the past 20 years.
I have no idea what the contemporary best solution / model / algorithm set is, but there are probably free training databases, models, research reports, etc.  In many or most cases researchers seem able to use off-the-shelf ML tools that one can run easily on one's own PC or in the cloud for modeling, training, and test inferencing - e.g. TensorFlow, Keras, MXNet, PyTorch, etc. - so I would be surprised if you didn't have several good options, with or without Coral / Google stuff, depending on how you want to experiment.

For the problem domain stuff I'd check out what might be relevant in the general publications / articles e.g.:
https://en.wikipedia.org/wiki/Anti-spam_techniques
https://en.wikipedia.org/wiki/Naive_Bayes_spam_filtering
https://en.wikipedia.org/wiki/Bayesian_poisoning
https://en.wikipedia.org/wiki/Naive_Bayes_classifier
https://en.wikipedia.org/wiki/Bayesian_inference
https://www.kaggle.com/benvozza/spam-classification
https://www.kaggle.com/c/spam-detection/data

https://medium.com/analytics-vidhya/building-a-spam-filter-from-scratch-using-machine-learning-fc58b178ea56
https://www.analyticsinsight.net/how-machine-learning-cleans-spam-messages-from-the-mail/
https://towardsdatascience.com/predicting-spam-messages-17b3ca6699f0?gi=5f75aec94586
https://thatascience.com/learn-machine-learning/spam-classifier/
https://hackernoon.com/how-to-build-a-simple-spam-detecting-machine-learning-classifier-4471fe6b816e
https://www.matchilling.com/comparison-of-machine-learning-methods-in-email-spam-detection/
http://archive.ics.uci.edu/ml/datasets/Spambase
https://stackoverflow.com/questions/4743996/publicly-available-spam-filter-training-set

... etc.etc...
Title: Re: AI, Google Coral: anyone?
Post by: RoGeorge on June 08, 2021, 01:06:10 pm
You don't need a devboard to learn AI, as long as you have access to a PC.  Any decent graphics card in a PC will beat a small devboard any time, no matter how well that devboard is praised with fluffy marketing words.

Any small board or USB accelerator for AI is just a toy compared to an ordinary desktop or laptop.  Small hardware AI accelerators are good for very low power devices, but they have no magic dust; they are just doing matrix algebra.  Any PC can do better, and the most advanced AI/ML (Artificial Intelligence / Machine Learning) software for PC is free, open source, and usually works on any OS, while devboard software support will be dropped in a couple of years at most.

AI can sometimes help, but usually AI is unreliable and dumb.  It has all the pitfalls we humans have: training a neural network is time/computationally expensive, and inference (applying what it learned) is prone to mistakes.  The only advantage is that once you have trained a NN (Neural Network) on a powerful machine, the inference is usually very cheap; for example, a modest 100MHz single-core microcontroller can do live face recognition from a webcam.

Training is expensive, inference is cheap.  Also, there are many NN topologies, each type with its own range of applications.

- If you just want an AI based email spam filter, then just google about that, there is plenty of it already.
- If you just want to learn AI in general, then search for AI/ML tutorials that are not tied to a specific hardware product.
Title: Re: AI, Google Coral: anyone?
Post by: rstofer on June 08, 2021, 02:48:30 pm
The Google Coral seems like direct competition to the NVIDIA Jetson Nano.  I haven't spent a lot of time looking but I wonder if the Coral has anywhere near the documentation of the Nano in terms of tutorials and sample code.  NVIDIA is really into the AI thing (for some values of AI).

I bought the 4 GB version from Mouser.  I guess I'll have to start looking for tutorials...
Title: Re: AI, Google Coral: anyone?
Post by: rstofer on June 08, 2021, 02:59:26 pm
Small hardware AI accelerators are good for very low power devices, but hardware AI accelerators have no magic dust, they are just doing matrix algebra.

But the CUDA units do matrix math a LOT faster even including the time for sending and receiving the matrix.  There are many benchmarks that show this to be true.  Here's a benchmark that shows over a 15x speedup.  I know, there are benchmarks and then there is reality...

https://wiki.tuflow.com/index.php?title=Hardware_Benchmarking_Topic_HPC_on_CPU_vs_GPU

We need to consider that at least one graphics card has over 10,000 CUDA units.  That's a lot of horsepower and no CPU will ever keep up.  Including transfer times...

Then there is the Combined Memory concept which I don't really grasp just yet but the idea is that both the GPU and CPU have shared memory such that it is no longer necessary to export the matrix to the CUDA unit and import the results.  I need to work on that a little more.

NVIDIA is putting a LOT of money into AI (generically).  They have training programs (fee based) as well as video tutorials that are free.  I'm not sure what Google has...
Title: Re: AI, Google Coral: anyone?
Post by: rstofer on June 08, 2021, 03:02:34 pm
For PC based AI of most any flavor, there are books available and plenty of Python code to play with.  The information is all over the place.  AI is becoming a commodity, not a specialty.

I guess kids better pay attention in that Linear Algebra class.  There's going to be a test later...
Title: Re: AI, Google Coral: anyone?
Post by: evb149 on June 08, 2021, 03:23:04 pm
Yeah, I figured based on the original post that:
1: Just an ordinary decent desktop CPU is probably enough for useful training for personal use cases
2: An ordinary phone / tablet / desktop CPU is probably enough for inferencing here (after all, non-ISP / mail-server spam detection with Bayesian algorithms et al. can run without accelerators in a personal-scale setup)
3: Given the expressed interest in ML in general, however, there'd probably also be interest in more advanced domains / problems, where it might be worthwhile to learn to use the CPU / GPU and PC-centric tools even if the Google / Intel / Nvidia hardware accelerators would also work for inferencing here.

Jetson / Coral are probably fun things to play with at relatively low cost for anyone learning ML and data science, but it'd be a mistake not to learn how to do it with the PC-based tools, and ideally also have a decent PC / GPU platform to experiment on, since those are generally more powerful in some ways (e.g. memory size at least, and certainly TOPS if a modern GPU is used).

The big downside is that although the new GPUs are "relatively cheap" for the TFLOPS / bandwidth they give (e.g. 3060 Ti), they're also almost unobtainable due to supply scarcity, so shopping for one is probably a multi-month task even if done persistently.  But once you get one you're hopefully all set for most personal-scale ML work for a few years, unless you're doing so much that you run out of VRAM - quite possible in some training problem domains, but not, I think, this one.

Re: nvidia memory, are you talking about unified memory?  If so there are some good introductory blog articles about it.
https://developer.nvidia.com/blog/unified-memory-cuda-beginners/
https://developer.nvidia.com/blog/unified-memory-in-cuda-6/
https://developer.nvidia.com/blog/maximizing-unified-memory-performance-cuda/

The thing I'm wondering is whether the resizable PCIe BAR support feature in the newer PCs / GPUs actually extends the usefulness of unified memory, by basically letting the CPU & GPU share LARGE amounts of memory with low paging / remapping overhead thanks to the "virtual memory" page management scheme used for unified memory.

https://www.nvidia.com/en-us/geforce/news/geforce-rtx-30-series-resizable-bar-support/

Yeah NVIDIA has some great documentation / tools.
Intel has some interesting tools too in that they support their own brand of CPUs as a platform as well as their FPGA and ML inferencing accelerator HW with a small number of APIs / development tools that are supposedly relatively coherent.  I don't know how much better their CPU support is optimized vs. just running something in tensorflow using the available platform related optimization logic in the number crunching back-end code.

Title: Re: AI, Google Coral: anyone?
Post by: rstofer on June 08, 2021, 04:23:41 pm
The big downside is that although the new GPUs are "relatively cheap" for the TFLOPS / bandwidth they give (e.g. 3060 Ti), they're also almost unobtainable due to supply scarcity, so shopping for one is probably a multi-month task even if done persistently.  But once you get one you're hopefully all set for most personal-scale ML work for a few years, unless you're doing so much that you run out of VRAM - quite possible in some training problem domains, but not, I think, this one.
At least in terms of the Jetson Nano, I had no problem getting one.  According to Mouser, they have 5 of the dev boards in stock and I ordered just one.  We'll see if it actually ships.  To be fair, I only ordered it an hour ago, but it already shows as "In Shipping", so that's nice.
Quote
Re: nvidia memory, are you talking about unified memory?  If so there are some good introductory blog articles about it.
https://developer.nvidia.com/blog/unified-memory-cuda-beginners/
https://developer.nvidia.com/blog/unified-memory-in-cuda-6/
https://developer.nvidia.com/blog/maximizing-unified-memory-performance-cuda/
Yes, I was.  I just flunked terminology, again...  Thanks for the links, they will provide a little light reading for later this morning.
Quote
The thing I'm wondering is whether the resizable PCIe BAR support feature in the newer PCs / GPUs actually extends the usefulness of unified memory, by basically letting the CPU & GPU share LARGE amounts of memory with low paging / remapping overhead thanks to the "virtual memory" page management scheme used for unified memory.

https://www.nvidia.com/en-us/geforce/news/geforce-rtx-30-series-resizable-bar-support/

Yeah NVIDIA has some great documentation / tools.
The NVIDIA modified GCC compiler works well.  Unfortunately, it seems they want to charge money for the Fortran compiler.  Everybody knows that numerical analysis is best done in Fortran!
Quote
Intel has some interesting tools too in that they support their own brand of CPUs as a platform as well as their FPGA and ML inferencing accelerator HW with a small number of APIs / development tools that are supposedly relatively coherent.  I don't know how much better their CPU support is optimized vs. just running something in tensorflow using the available platform related optimization logic in the number crunching back-end code.
We live in interesting times!  Who would have thought, back when I started with computers in 1970 (punched cards, etc) that we would be talking about teraflops as though they were just a number?

The CDC 6400 got us to the Moon and back with 2 megaflops.
Title: Re: AI, Google Coral: anyone?
Post by: rstofer on June 08, 2021, 05:01:25 pm
Amazon is another source for the Dev-Board and some accessories.  I probably should have looked because Amazon is always faster than Mouser.  Not that I'm in a hurry, I have other things to work on.
Title: Re: AI, Google Coral: anyone?
Post by: evb149 on June 08, 2021, 06:09:13 pm
Interesting times indeed - I like the Moore's law part of that!  I remember building a system with the first 1 TFLOP (single precision) peak consumer GPU a few years ago, and now I get ~16x those TFLOPS and way more bandwidth!  And my first PC was like 1 MIPS max!

Is it possible that your information about the Fortran compiler's non-freeness is out of date?
I know some of the NVIDIA documentation / literature refers to the PGI compilers, but at some point in the past years they seemed to switch over to NVIDIA-branded compilers, and you just download those from NVIDIA directly at no cost. I'm not aware of any costly license for features relative to what's documented in the NVIDIA SDKs.

I was fairly certain that I've run the NVIDIA Fortran example codes before (although I generally program in C/C++, not Fortran), so I checked again after I read your comment just to be sure. I am running a relatively recent version of their SDK and the Fortran examples I tried seem to work for me, and I didn't install anything other than their free (AFAIK) tools.

The following works for me; here are the tool links:
https://docs.nvidia.com/hpc-sdk/index.html
https://docs.nvidia.com/hpc-sdk/compilers/hpc-compilers-user-guide/
https://docs.nvidia.com/hpc-sdk/compilers/index.html
https://docs.nvidia.com/hpc-sdk/compilers/hpc-compilers-ref-guide/index.html

$ cat hello.f && nvfortran ./hello.f && ./a.out
      print *, "hello"
      end

 hello
...
// From the CUDA examples installed for C / C++ / fortran
make[1]: Entering directory '~/tmp/cuda/cuFFT/test_fft_oacc_ftn'
nvfortran -fast -acc=gpu -gpu=managed -Mcudalib=cufft -o tcufft2df2.exe tcufft2df2.f90
./tcufft2df2.exe
 Max error C2C FWD:   (0.000000,0.000000)
 Max error C2C INV:     0.000000   
 Max error R2C/C2R:     0.000000   
 test PASSED
...
// From the CUDA examples installed for C / C++ / fortran
$ make
nvfortran  -fast -o bandwidthTest.out bandwidthTest.cuf
./bandwidthTest.out
 
 Device: NVIDIA GeForce RTX 3070
 Transfer size (MB):     16.00000   
 
 Pageable transfers
   Host to Device bandwidth (GB/s):     10.86325   
   Device to Host bandwidth (GB/s):     9.805828   
 
 Pinned transfers
   Host to Device bandwidth (GB/s):     23.50784   
   Device to Host bandwidth (GB/s):     24.21069   
 
 Transfer between arrays on a (single) device
   Device bandwidth (GB/s):     273.7770   
   Test PASSED
...


Title: Re: AI, Google Coral: anyone?
Post by: rstofer on June 08, 2021, 06:52:32 pm
Is it possible that your information about the Fortran compiler's non-freeness is out of date?
I know some of the NVIDIA documentation / literature refers to the PGI compilers, but at some point in the past years they seemed to switch over to NVIDIA-branded compilers, and you just download those from NVIDIA directly at no cost. I'm not aware of any costly license for features relative to what's documented in the NVIDIA SDKs.

I was fairly certain that I've run the NVIDIA Fortran example codes before (although I generally program in C/C++, not Fortran), so I checked again after I read your comment just to be sure. I am running a relatively recent version of their SDK and the Fortran examples I tried seem to work for me, and I didn't install anything other than their free (AFAIK) tools.

The following works for me; here are the tool links:
https://docs.nvidia.com/hpc-sdk/index.html
https://docs.nvidia.com/hpc-sdk/compilers/hpc-compilers-user-guide/
https://docs.nvidia.com/hpc-sdk/compilers/index.html
https://docs.nvidia.com/hpc-sdk/compilers/hpc-compilers-ref-guide/index.html

$ cat hello.f && nvfortran ./hello.f && ./a.out
      print *, "hello"
      end

 hello
...
// From the CUDA examples installed for C / C++ / fortran
make[1]: Entering directory '~/tmp/cuda/cuFFT/test_fft_oacc_ftn'
nvfortran -fast -acc=gpu -gpu=managed -Mcudalib=cufft -o tcufft2df2.exe tcufft2df2.f90
./tcufft2df2.exe
 Max error C2C FWD:   (0.000000,0.000000)
 Max error C2C INV:     0.000000   
 Max error R2C/C2R:     0.000000   
 test PASSED
...
// From the CUDA examples installed for C / C++ / fortran
$ make
nvfortran  -fast -o bandwidthTest.out bandwidthTest.cuf
./bandwidthTest.out
 
 Device: NVIDIA GeForce RTX 3070
 Transfer size (MB):     16.00000   
 
 Pageable transfers
   Host to Device bandwidth (GB/s):     10.86325   
   Device to Host bandwidth (GB/s):     9.805828   
 
 Pinned transfers
   Host to Device bandwidth (GB/s):     23.50784   
   Device to Host bandwidth (GB/s):     24.21069   
 
 Transfer between arrays on a (single) device
   Device bandwidth (GB/s):     273.7770   
   Test PASSED
...
I could just be flat out wrong!  I was reading about the PGI compilers instead of trying the applications.

I have installed the 'dlinano' image as it seemed to have the most features for the facial detection tutorials and, I believe, it is the incantation used in the 'for pay' program.

There are at least 2 other incantations of the tools, I'll have to look around.

I do have 'nvcc' but no 'nvfortran'.

I bought a metal box to hold the Nano.  That was a HUGE mistake, soon to be rectified.  Sure, it's a nice way to mount the wifi antennas, but there is no possible way on earth to interchange the microSD cards.  More thought required!

https://www.amazon.com/GeeekPi-Cooling-Control-Developer-Support/dp/B08D66VQ59

It seems like there is just one toolchain for the Coral Dev Board, and it gets copied to onboard flash.  The microSD is not used for the filesystem.  Hmm...
Title: Re: AI, Google Coral: anyone?
Post by: RoGeorge on June 08, 2021, 07:12:21 pm
As for boards, I compared them a couple of months ago.  Looking at the specs and the available reviews, the NVIDIA Jetson Nano (either 2 or 4 GB) seemed to be the winner in both computing power and community support.  For more money there is also the Jetson Xavier.  They are standalone single-board computers (similar to a Raspberry Pi), but with an integrated NVIDIA graphics unit in the same chip as the processor (like an Intel CPU with on-chip graphics).

Though, recently the prices for Jetson at Arrow doubled.  The Jetson Nano 2GB used to be $55 (their goal was to be comparable in price to a Raspberry Pi), and the Jetson Nano 4GB used to be $100; then they doubled.  My best guess is that this price doubling is related to the chip shortage of these particular times.

I've just checked the Jetson Nano price at Arrow again, and the prices are back to normal.  Probably what I saw last weekend was just a listing price error.   :phew:
Title: Re: AI, Google Coral: anyone?
Post by: DiTBho on June 08, 2021, 07:18:16 pm
here (https://www.youtube.com/watch?v=VDg8fCW8LdM&feature=em-comments)

WOW, beautiful girl, extremely useful video ;D
Title: Re: AI, Google Coral: anyone?
Post by: DiTBho on June 08, 2021, 07:31:27 pm
Jetson Xavier

My colleagues are playing with a Jetson Xavier NX cluster.  It's a nice metal box with a huge cooling fan on top and four Xavier boards inside.  It's a bit expensive - we paid 2800 USD.  I know nothing about the A.I. side; I am simply the system admin, so from my point of view they are just four Linux nodes to be supported.

Currently my blog runs on an embedded router connected to the internet.  The previous version ran on a Linux Apple Mac Mini G4 with only 512 MB of RAM for Apache2, PHP, and everything.  Funny: my super-router has more RAM, more cores and CPU power, and more expansion (2 miniPCIe slots) than the Mac Mini :D

I looked at the Google Coral for several reasons:

Title: Re: AI, Google Coral: anyone?
Post by: janoc on June 08, 2021, 08:21:07 pm
I wonder why you want to reinvent the wheel using TensorFlow and a complex solution when this problem has had a solution that has worked well for over 20 years now - https://spamassassin.apache.org/

It uses a lot of carefully crafted (and constantly updated) rules, in addition to Bayesian classifiers learning from your spam/ham patterns on the fly (unlike a "train once and forget" TensorFlow model would).  And it can run on any old PC, or even a Raspberry Pi, without the need for fancy machine-learning hardware like the Jetson.
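
To see why "on the fly" matters: it corresponds to incremental training, sketched here with a hypothetical scikit-learn setup (not SpamAssassin's actual implementation):

Code: [Select]
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.naive_bayes import MultinomialNB

vec = HashingVectorizer(n_features=2**18, alternate_sign=False)
clf = MultinomialNB()

# Every user verdict updates the model immediately -- no full retraining.
def learn(message, label):
    clf.partial_fit(vec.transform([message]), [label], classes=["spam", "ham"])

learn("cheap pills here", "spam")
learn("nice writeup, thanks", "ham")
print(clf.predict(vec.transform(["more cheap pills"])))  # expected: ['spam']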

Sure you can do it using neural networks - but do you want a solution that works today or do you want to tinker with shiny toys?
Title: Re: AI, Google Coral: anyone?
Post by: DiTBho on June 08, 2021, 08:30:21 pm
I wonder why you want to reinvent the wheel using TensorFlow and a complex solution when this problem has had a solution that has worked well for over 20 years now - https://spamassassin.apache.org/

It uses a lot of carefully crafted (and constantly updated) rules, in addition to Bayesian classifiers learning from your spam/ham patterns on the fly (unlike a "train once and forget" TensorFlow model would).  And it can run on any old PC, or even a Raspberry Pi, without the need for fancy machine-learning hardware like the Jetson.

Sure you can do it using neural networks - but do you want a solution that works today or do you want to tinker with shiny toys?

Because it doesn't work well with the kind of spammers who send smart-spam messages to my blogs.

Title: Re: AI, Google Coral: anyone?
Post by: janoc on June 08, 2021, 08:42:37 pm
Because it doesn't work well with the kind of spammers who send smart-spam messages to my blogs.

And did you actually try it or are you just conjecturing based on the poor performance of your hand-crafted system?
Title: Re: AI, Google Coral: anyone?
Post by: rstofer on June 10, 2021, 05:48:15 pm
Amazon is another source for the Dev-Board and some accessories.  I probably should have looked because Amazon is always faster than Mouser.  Not that I'm in a hurry, I have other things to work on.
Mouser came through!  I now have the Dev Board and it took about 2 days.  Now all I need is time to play with it.
Title: Re: AI, Google Coral: anyone?
Post by: nctnico on June 13, 2021, 12:48:04 am
The Google Coral seems like direct competition to the NVIDIA Jetson Nano.  I haven't spent a lot of time looking but I wonder if the Coral has anywhere near the documentation of the Nano in terms of tutorials and sample code.  NVIDIA is really into the AI thing (for some values of AI).
No. The Jetson Nano is a complete system-on-chip module.  The Coral is just an accelerator.  You can combine the two, though, and use the Jetson Nano as a platform for the Coral module.
Title: Re: AI, Google Coral: anyone?
Post by: evb149 on June 13, 2021, 10:07:50 am
https://blog.raccoons.be/coral-tpu-jetson-nano-performance


Title: Re: AI, Google Coral: anyone?
Post by: rstofer on June 13, 2021, 02:53:27 pm
The Google Coral seems like direct competition to the NVIDIA Jetson Nano.  I haven't spent a lot of time looking but I wonder if the Coral has anywhere near the documentation of the Nano in terms of tutorials and sample code.  NVIDIA is really into the AI thing (for some values of AI).
No. The Jetson Nano is a complete system-on-chip module.  The Coral is just an accelerator.  You can combine the two, though, and use the Jetson Nano as a platform for the Coral module.
That ought to be fun!  The Coral Camera showed up yesterday, so I should be all set to start.
Title: Re: AI, Google Coral: anyone?
Post by: DiTBho on June 13, 2021, 07:27:24 pm
Bought a couple of new hardware toys.  I will try a different approach.
Title: Re: AI, Google Coral: anyone?
Post by: DiTBho on June 14, 2021, 11:59:22 am
Yesterday I started adding new "Sentiment Analysis" modules.  It's a supervised machine-learning approach with a classification model, which is trained on a pre-labeled dataset of positive, negative, and neutral examples to identify the classes of a given text.

For each sentence it uses tokenization, lemmatization, part-of-speech analysis, vocabulary and morphological analysis, and a dataset to quantify message content as


and each of these categories as


Squanch is a special class processed by a bag-of-words model; I don't know other ways of extracting features from the text.  The BoW module converts the text into a matrix of occurrences of { insult, dirty-language, urban-slang } words within the message, and it only records whether given words occurred in the message or not.
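
Roughly what the BoW module computes - sketched with scikit-learn's CountVectorizer and a hypothetical flag-word vocabulary (my real word lists are obviously longer):

Code: [Select]
from sklearn.feature_extraction.text import CountVectorizer

# Hypothetical vocabulary of flagged { insult, dirty, urban } words.
flag_words = ["idiot", "loser", "dumb", "frog"]
bow = CountVectorizer(vocabulary=flag_words, binary=True)  # occurred or not

msgs = ["give up, you dumb loser", "thanks for the article"]
print(bow.transform(msgs).toarray())
# [[0 1 1 0]
#  [0 0 0 0]]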

All the new NLP modules are written in Python + Erlang + PHP.  The PHP code is just a wrapper between the HTML textarea where a guest submits a message and the classification of his/her message: { message-dropped, message-dropped + user-banned, message-archived }.

It's just a wild approximation; human communication is not limited to words - it is more than words, and "sentiments" are a combination of words, tone, and writing style.
Title: Re: AI, Google Coral: anyone?
Post by: RoGeorge on June 14, 2021, 02:39:10 pm
As simple as it is from the AI/ML standpoint, that's one of the worst usages of AI.

It is used to stifle free speech, shadow-ban users, promote mediocrity, and float atop only imbecile bloatware comments like "We need more like this".   :horse:
Title: Re: AI, Google Coral: anyone?
Post by: evb149 on June 14, 2021, 02:53:58 pm
But comrade, I am sure that the pervasive use of such algorithms are only going to be the doubleplusgood  for us to bellyfeel & duckspeak goodthink!
Help us to crimestop crimethink, ungood oldthink, oldspeak, ownlife!
Minitrue will soon update the new edition of the AI models for our telescreens to assist with this project.

Title: Re: AI, Google Coral: anyone?
Post by: DiTBho on June 19, 2021, 09:55:46 am
As simple as it is from the AI/ML standpoint, that's one of the worst usages of AI.

You say this because you don't get spam, insults, and harassment on a daily basis.  I can assure you that when you get more gossip than useful messages, it's pretty demotivating.

In my case, look at the percentages:

Code: [Select]
DiTBho # suppapurge-v2 data_average May
1628 messages received in May, 2021
 966 messages automatically rejected as spam (59.3%)
  43 messages manually accepted as false positives
 183 messages manually dropped as false negatives
--------------------------------------------------------------------------
 522 messages archived


Daily, 60% is spam! And this number tends to increase: more and more spam and fewer and fewer useful messages!

Yesterday I got a call from DHL informing me that I have to pay 54 euros in import taxes for my Coral modules.  I didn't ask any questions - I've put more than 200 euros into this project - and I am happy that messages like this
Quote
Dude, if you are reading this, give up, go work at McDonalds. Doing computer science is not yours.
What The Frog is this for?   :o :o :o

No matter, who cares, it will be automatically deleted :-DD

You mentioned "free speech"

It is used to stifle free speech, shadow-ban users, promote mediocrity, and float atop only imbecile bloatware comments like "We need more like this". 

Well, I can give you some examples of actually received messages where "free speech" means there are mentally ill people on the web. Do you really want to allow them to smear the walls of your house with their mental garbage?  Seriously?

I don't want to give up.  Others can do whatever they want on their blogs/sites; personally, I need this kind of ML/AI filter, and I don't think I am promoting any mediocrity!
Title: Re: AI, Google Coral: anyone?
Post by: RoGeorge on June 19, 2021, 03:34:28 pm
My remark and annoyance regarding sentiment-AI technology was not about you.  It was about how sentiment-analysis bots are used by major players like YouTube, Facebook, Twitter, and the like.

Congrats on making the Coral do a useful job.   :-+