Author Topic: The Julia thread  (Read 4294 times)


Online SiliconWizard

  • Super Contributor
  • ***
  • Posts: 9543
  • Country: fr
The Julia thread
« on: January 06, 2022, 07:15:22 pm »
So, probably like many of us here, I had heard of Julia, but never really got interested.

I very recently stumbled upon a conference talk about it, and it got me curious. Then I read the docs, downloaded Julia, and played a bit with it. I think it's worth a look, especially for people who don't like Python or want an alternative to it, for a range of applications: numerical work, graphing, etc. It can also be an alternative to Matlab.

https://julialang.org/

A couple of additional links that I've found useful so far:

https://docs.juliaplots.org/latest/
https://www.matecdev.com/posts/julia-fft.html
https://docs.juliadsp.org/latest/contents/

What are your thoughts? Does anyone use Julia here?
 

Offline DrG

  • Super Contributor
  • ***
  • !
  • Posts: 1199
  • Country: us
Re: The Julia thread
« Reply #1 on: January 06, 2022, 07:56:43 pm »
So, probably like many of us here, I had heard of Julia, but never really got interested.

I very recently stumbled upon a conference talk about it, and it got me curious. Then I read the docs, downloaded Julia, and played a bit with it. I think it's worth a look, especially for people who don't like Python or want an alternative to it, for a range of applications: numerical work, graphing, etc. It can also be an alternative to Matlab.

https://julialang.org/

A couple of additional links that I've found useful so far:

https://docs.juliaplots.org/latest/
https://www.matecdev.com/posts/julia-fft.html
https://docs.juliadsp.org/latest/contents/

What are your thoughts? Does anyone use Julia here?

I don't know; I am reading about it, and it does sound interesting. I need to learn more about, for example, the plots: how is device dependency handled? If I were using one of those example plots, how would I use it on a typical PC screen or a little OLED screen? Like I said, I need to learn more, and it would be a big investment in time and effort, but the thought of a Python alternative is appealing. Thanks for posting; I did not know about it and have bookmarked it.
- Invest in science - it pays big dividends. -
 

Online xrunner

  • Super Contributor
  • ***
  • Posts: 6243
  • Country: us
  • hp>Agilent>Keysight>???
Re: The Julia thread
« Reply #2 on: January 06, 2022, 08:21:33 pm »
Interesting. I went through one tutorial at Julia Academy, and they were using emojis as variable names. Well, that's new to me!
[hp] Hewlett . Packard
 

Online SiliconWizard

  • Super Contributor
  • ***
  • Posts: 9543
  • Country: fr
Re: The Julia thread
« Reply #3 on: January 07, 2022, 12:30:06 am »
Interesting. I went through one tutorial at Julia Academy, and they were using emojis as variable names. Well, that's new to me!

It fully supports UTF-8, even for identifiers, which can be cool for some things, like using the Greek alphabet, and obviously useless for others. :)
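For anyone curious, a quick taste of what that looks like in practice (variable names below are arbitrary; in the REPL you type Greek letters as LaTeX abbreviations, e.g. \sigma followed by TAB):

```julia
# Unicode identifiers: handy when code mirrors notation from a paper.
σ(x) = 1 / (1 + exp(-x))   # logistic function under its textbook name
α, β = 0.25, 0.75          # Greek-letter variables
println(σ(0.0))            # 0.5
println(α + β)             # 1.0
```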
 
The following users thanked this post: xrunner

Offline Mattjd

  • Regular Contributor
  • *
  • Posts: 229
  • Country: us
Re: The Julia thread
« Reply #4 on: January 07, 2022, 10:53:47 pm »
I prefer Matlab to NumPy for purely numeric things, strictly because of syntax, but I don't find that compelling enough to ditch the inarguably better general-purpose programming that Python provides in favor of Matlab.

Then you have large-scale modeling tasks with possible porting to hardware platforms. Simulink + Matlab win hands down here, and afaik the large majority of automotive and aerospace companies operate in this fashion.

I'm not sure Julia offers enough over Python in general computing, plus good-enough numeric syntax, to favor it. It might be worthwhile; I just haven't had a reason to try it. I've heard Julia has multiple dispatch. You can approximate this in Python with descriptors/decorators, and that might be why I'd like to try it: I've heard multiple dispatch in Julia is far superior to the version available in Python, which isn't built in but is constructed through normal use of the language.
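Since multiple dispatch came up, here is a minimal sketch of what it looks like in Julia (the types and the `intersects` function are made up for illustration): the method is selected on the runtime types of all arguments, not just the first, as in single-dispatch OO languages.

```julia
abstract type Shape end
struct Circle <: Shape; r::Float64; end
struct Square <: Shape; s::Float64; end

# One generic function, four methods; dispatch looks at BOTH arguments.
intersects(a::Circle, b::Circle) = "circle/circle test"
intersects(a::Circle, b::Square) = "circle/square test"
intersects(a::Square, b::Circle) = intersects(b, a)   # reuse by symmetry
intersects(a::Square, b::Square) = "square/square test"

println(intersects(Square(1.0), Circle(2.0)))  # circle/square test
```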

On large-scale modeling tasks and porting to hardware, I doubt it even comes close to Simulink + Matlab, and I don't think I want to bother looking into it. I could be completely wrong and it's awesome, but I don't know of any companies that come close, let alone an open-source project.
 

Online brucehoult

  • Super Contributor
  • ***
  • Posts: 2910
  • Country: nz
  • Formerly SiFive, Samsung R&D
Re: The Julia thread
« Reply #5 on: January 08, 2022, 12:04:30 am »
I've been following the growth of Julia but haven't really tried it yet.

I'm very happy to see a lot of ideas from the Dylan language that I was actively championing from 1998 to around 2010 (and used to win a few prizes in the annual ICFP Programming Contest) make it into a proper compiled language that is gaining serious popularity.
 

Online SiliconWizard

  • Super Contributor
  • ***
  • Posts: 9543
  • Country: fr
Re: The Julia thread
« Reply #6 on: January 08, 2022, 12:26:35 am »
I've been following the growth of Julia but haven't really tried it yet.

I hadn't until very recently either. But I think it's worth a shot.

I'm very happy to see a lot of ideas from the Dylan language that I was actively championing from 1998 to around 2010 (and used to win a few prizes in the annual ICFP Programming Contest) make it into a proper compiled language that is gaining serious popularity.

I never really got to know Dylan, although I remember another discussion mentioning it. What are the ideas you're mentioning that made it into Julia?
 

Online brucehoult

  • Super Contributor
  • ***
  • Posts: 2910
  • Country: nz
  • Formerly SiFive, Samsung R&D
Re: The Julia thread
« Reply #7 on: January 08, 2022, 01:11:38 am »
I never really got to know Dylan, although I remember another discussion mentioning it. What are the ideas you're mentioning that made it into Julia?

Dynamic typing, with optional type declarations for documentation, optimisation, and choosing which function to call.

Generic functions aka multiple dispatch

Proper hygienic lexical macros. Most Dylan control structures (for example) are macros in the standard library, built on simple if/else and (recursive) function calls.

Everything is an object.

Everything is an expression.



For examples of new syntax via macros, see:

https://github.com/dylan-hackers/gwydion-2.4/blob/master/d2c/runtime/dylan/macros.dylan

These macros result (after optimisation) in the same machine code you'd expect from loops etc in C.
« Last Edit: January 08, 2022, 01:27:05 am by brucehoult »
 

Online SiliconWizard

  • Super Contributor
  • ***
  • Posts: 9543
  • Country: fr
Re: The Julia thread
« Reply #8 on: January 08, 2022, 03:11:18 am »
I'm just reading about the macro system in Julia, and yes, it seems pretty powerful. I suppose it's close to Dylan macros, although I have a bit of a hard time telling for sure at the moment.
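A minimal Julia macro, just to give a feel for the mechanism (`@twice` is a made-up name; built-ins like `@time` work on the same principle): the macro receives the expression unevaluated and returns new code, and `esc` deliberately opts out of hygiene so the expression runs in the caller's scope.

```julia
macro twice(ex)
    quote
        $(esc(ex))   # splice the caller's expression back in, twice
        $(esc(ex))
    end
end

counter = 0
@twice counter += 1
println(counter)  # 2
```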

Yes the typing system and the multiple dispatch are pretty nice.

One thing you notice when starting with Julia (and I've read this is a common complaint) is that loading modules, and even running functions for the first time, takes a while. That's because all functions are JIT-compiled; once they've been compiled, execution becomes extremely fast. At the next session, though, everything gets recompiled. (There are ways of caching the compiled code and saving it for later; I haven't quite gotten into that yet, and there are caveats.) It's all a consequence of using a JIT compiler. Once things are compiled, it beats the pants out of Python and Matlab any day, as far as I've seen. (Sure, you can use modules written in C with Python, but with Julia there's virtually no need to write C modules.)
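The warm-up cost is easy to see with `@elapsed` (the function and array size below are arbitrary):

```julia
f(v) = sum(x -> x^2, v)    # any freshly defined function will do

v = rand(100_000)
t_first  = @elapsed f(v)   # includes JIT compilation of f
t_second = @elapsed f(v)   # runs the already-compiled native code
println((t_first, t_second))
```

On a typical machine the second number is orders of magnitude smaller than the first.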
 

Offline PKTKS

  • Super Contributor
  • ***
  • Posts: 1687
  • Country: br
Re: The Julia thread
« Reply #9 on: January 08, 2022, 01:37:09 pm »
annnhhh... a small BIG problem imho... :-\

Having an LLVM and GPU dependency today means we are depending on the GPU manufacturers' goodwill and cash-flow business...

which... annnhhh... is... like... a black-box tunnel with no light anywhere...

Paul
 

Offline rstofer

  • Super Contributor
  • ***
  • Posts: 9156
  • Country: us
Re: The Julia thread
« Reply #10 on: January 08, 2022, 07:03:42 pm »
If you want to have fun with MATLAB and parallel computing, just use one of the newer NVIDIA graphics cards.  Yes, this requires the Parallel Computing Toolbox, but you can play for 30 days free (I am).

My HP Envy All-In-One has an NVIDIA GTX 950M with 640 CUDA cores - this is tiny when compared to the RTX 3080 with 8960 cores.

The MATLAB Deep Learning Toolbox (which I own) has a digit-recognition example, and the training time using only the CPU is on the order of 25-26 seconds.  With the GPU, the time falls to 15-16 seconds.  That is a tiny problem, where the 10,000 images are 28x28 pixels.  I am saving on the order of 40% in execution time on a relatively trivial classification problem.  Just wait until I get rich and buy that Dell data science laptop... like that will ever happen.

https://www.dell.com/en-us/work/shop/dell-laptops-and-notebooks/precision-7760-data-science-workstation/spd/precision-17-7760-laptop/xctop776017dswsus_vp

Want more fun?  Are you a Fortran Programmer?  I am...

There's a book, "CUDA Fortran for Scientists and Engineers", written by a couple of NVIDIA folks.  The Linux-only HPC (High Performance Computing) SDK includes a Fortran compiler.  The laptop comes with Ubuntu 20.04; there's probably a reason for that.  The laptop version of the RTX 3080 has only 6144 CUDA cores, but that seems like a lot for anything I'll work on.  With 16GB of GPU memory, you can multiply some awesome arrays.

This stuff is a lot of fun!

BTW, the standard NVIDIA toolkit contains a C/C++ toolchain and there is a Windows version.

The cuBLAS library recreates the BLAS (Basic Linear Algebra Subprograms) library, optimized for CUDA computing.  It's included.

But using MATLAB with CUDA is where things get to be really fun!  Particularly for Deep Neural Networks.
« Last Edit: January 08, 2022, 07:06:31 pm by rstofer »
 

Online SiliconWizard

  • Super Contributor
  • ***
  • Posts: 9543
  • Country: fr
Re: The Julia thread
« Reply #11 on: January 08, 2022, 07:57:28 pm »
Thanks, this is interesting but makes absolutely no mention of Julia, which is the topic here. =)
Next thing you know, there will be a herd of "Pythonists" invading the thread. If we could avoid that... unless, of course, they talk about Julia. Then they are welcome.

And I didn't get PKTKS' rant. (As often...) What's the link between using LLVM and being dependent on GPUs?
 

Offline rstofer

  • Super Contributor
  • ***
  • Posts: 9156
  • Country: us
Re: The Julia thread
« Reply #12 on: January 08, 2022, 09:07:17 pm »
You're right, of course, but I did just find out that Julia can use Fortran libraries.  There appears to be a way to use Julia for CUDA programming, and given Julia's visualization capabilities, the language might actually be useful.  Maybe I can find some examples.
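The foreign-call mechanism is `ccall`, with no glue code or wrapper generation. This sketch calls C's libm (assumed present, as on typical Linux systems); Fortran libraries work the same way, except that the compiled symbol usually carries a trailing underscore (e.g. `:ddot_`) and every argument is passed by reference (`Ref{Float64}` rather than `Float64`).

```julia
# (symbol, library), return type, argument-type tuple, then the arguments
x = ccall((:cbrt, "libm"), Float64, (Float64,), 27.0)
println(x)  # 3.0
```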

It is unlikely to pull me away from something mainstream like MATLAB but it could be in the running with Fortran.

I call MATLAB mainstream because the local university has a required course for first-semester STEM students, and they use it extensively through the remaining terms.  I'm actually amazed that NVIDIA supports Fortran, but I guess they are playing to the engineering and scientific communities.


 

Offline rstofer

  • Super Contributor
  • ***
  • Posts: 9156
  • Country: us
Re: The Julia thread
« Reply #13 on: January 08, 2022, 10:06:34 pm »
annnhhh... a small BIG problem imho... :-\

Having an LLVM and GPU dependency today means we are depending on the GPU manufacturers' goodwill and cash-flow business...

which... annnhhh... is... like... a black-box tunnel with no light anywhere...

Paul

Lacking an established standard for CUDA cores, I don't see how there can be multiple sources.  It doesn't seem that the other manufacturers want to compete in that marketplace.  I suppose there could be a shim layer to convert nvc and nvfortran code to some other architecture but I don't see it optimizing well.

NVIDIA is HUGE in AI and they're spending a lot of money on developing applications that run on CUDA, Tensor Cores and other technologies.  It's no wonder they own the market, they have a bunch of really smart math types working on this stuff.

Since these compute capabilities are a natural outgrowth of NVIDIA's graphics business and, in fact, are required for high performance graphics, I don't see them going away as long as there are gamers and Bitcoin miners continuing to buy high end graphics cards.  Look at their Market Cap and 5 year stock price.  Pretty impressive!

FWIW, some of our national laboratories are working with NVIDIA to come up with open source compilers.  This may allow other manufacturers to get into the market with compatible hardware/software.


 

Offline magic

  • Super Contributor
  • ***
  • Posts: 4667
  • Country: pl
Re: The Julia thread
« Reply #14 on: January 08, 2022, 10:18:38 pm »
I'm actually amazed that NVIDIA supports Fortran but I guess they are playing to the engineering and scientific communities.
That's exactly the case. They even bought an HPC compiler vendor (PGI) a few years ago.
 

Online Nominal Animal

  • Super Contributor
  • ***
  • Posts: 3718
  • Country: fi
    • My home page and email address
Re: The Julia thread
« Reply #15 on: January 08, 2022, 10:53:44 pm »
Lacking an established standard for CUDA cores, I don't see how there can be multiple sources.  It doesn't seem that the other manufacturers want to compete in that marketplace.  I suppose there could be a shim layer to convert nvc and nvfortran code to some other architecture but I don't see it optimizing well.
People like me who do GPGPU HPC stuff use OpenCL instead of CUDA.  Unlike CUDA, which is NVidia-only, several vendors support OpenCL.

It is worth the difference to me to not be dependent on a single vendor –– or more importantly, dependent on the black-box support of a single vendor.
 

Online SiliconWizard

  • Super Contributor
  • ***
  • Posts: 9543
  • Country: fr
Re: The Julia thread
« Reply #16 on: January 09, 2022, 12:25:50 am »
Yeah. Julia can also use OpenCL via a number of packages: https://opencl.org/coding/languages/julia/
I would certainly stay away from CUDA as well.
 

Offline rstofer

  • Super Contributor
  • ***
  • Posts: 9156
  • Country: us
Re: The Julia thread
« Reply #17 on: January 09, 2022, 01:31:59 am »
Lacking an established standard for CUDA cores, I don't see how there can be multiple sources.  It doesn't seem that the other manufacturers want to compete in that marketplace.  I suppose there could be a shim layer to convert nvc and nvfortran code to some other architecture but I don't see it optimizing well.
People like me who do GPGPU HPC stuff use OpenCL instead of CUDA.  Unlike CUDA, which is NVidia-only, several vendors support OpenCL.

It is worth the difference to me to not be dependent on a single vendor –– or more importantly, dependent on the black-box support of a single vendor.
Makes all the sense in the world!  OTOH, I am just a simple hobbyist, and I want an end-to-end solution much like I get with Xilinx: the same vendor makes the hardware, distributes the tools, and provides a forum, videos, and examples to get beginners started.  At some point I'll look into OpenCL to see what it's about.  The thing is, I like Fortran, having used it since 1970, and that may be the biggest driver for me even getting involved with CUDA.

As Julia supports CUDA, it is also of interest.  I need to wander through the web site.

I just watched the Flux.jl video, and constructing a DNN in Julia is a lot like doing it in MATLAB.  Just layers and layers stacked together.  Very nice!
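A dependency-free sketch of that layer-stacking idea (these `Dense`/`chain` definitions are toy stand-ins, not the real Flux types):

```julia
struct Dense
    W::Matrix{Float64}
    b::Vector{Float64}
    act::Function
end
# Random init scaled by fan-in, as most real frameworks do.
Dense(nin::Int, nout::Int, act=identity) =
    Dense(randn(nout, nin) ./ sqrt(nin), zeros(nout), act)
(d::Dense)(x) = d.act.(d.W * x .+ d.b)   # a layer is just a callable

chain(layers...) = x -> foldl((h, layer) -> layer(h), layers; init=x)

model = chain(Dense(784, 32, tanh), Dense(32, 10))
println(length(model(randn(784))))  # 10
```

In real Flux this would be roughly `Chain(Dense(784, 32, tanh), Dense(32, 10))`, with the exact constructor syntax depending on the Flux version.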
 

Online SiliconWizard

  • Super Contributor
  • ***
  • Posts: 9543
  • Country: fr
Re: The Julia thread
« Reply #18 on: January 09, 2022, 01:36:24 am »
The language itself is pretty advanced. I was not expecting that either until I took a deeper look!
 

Offline rstofer

  • Super Contributor
  • ***
  • Posts: 9156
  • Country: us
Re: The Julia thread
« Reply #19 on: January 09, 2022, 01:54:06 am »
There's a lot to like in Julia.  I need to get serious and do something with it.
 

Offline PKTKS

  • Super Contributor
  • ***
  • Posts: 1687
  • Country: br
Re: The Julia thread
« Reply #20 on: January 09, 2022, 09:20:41 am »


And I didn't get PKTKS' rant. (As often...) What's the link between using LLVM and being dependent on GPUs?

Amazing how labels attach... where the "rant" comes in, I have no idea... you posted the link yourself... have you even read the front page?
https://julialang.org/

Quoting them..
Julia in a Nutshell
Fast
Julia was designed from the beginning for high performance. Julia programs compile to efficient native code for multiple platforms via LLVM.


Anyone experienced with LLVM already knows that it is meant to work around proprietary GPU instruction sets.

SO... SOOOOO... if you put your hopes in a language biased towards that...

Geez... you are in the hands of GPU makers, a market which has shrunk to 2 options in the last decades...

What is good about that? Rant?
Give me a break about this nonsense rant thing.


Paul
« Last Edit: January 09, 2022, 11:53:57 am by PKTKS »
 

Offline rstofer

  • Super Contributor
  • ***
  • Posts: 9156
  • Country: us
Re: The Julia thread
« Reply #21 on: January 09, 2022, 04:52:40 pm »
Geez... you are in the hands of GPU makers, a market which has shrunk to 2 options in the last decades...

What is good about that? Rant?
Give me a break about this nonsense rant thing.


Paul

Like any other market, there is a cost to enter.  Apparently, other manufacturers and potential manufacturers don't see a way to enter the market and make a profit.  If it were easy, we would be knee-deep in competitive graphics boards, and programming them would take on an Arduino feel.

NVIDIA, OTOH, has invested a HUGE amount of money developing the technology and backing it up with free tools.  Just the cost of feeding the researchers, engineers, mathematicians and technical writers must be costing them a TON of money.  From their stock price, I have to assume the market thinks they're doing excellent work.

Don't get me wrong, I have always hated the NVIDIA card in my Linux machine.  When I bought it nearly 20 years ago, I had to rebuild the drivers every time the kernel changed and it wasn't a pleasant prospect.  Closed source drivers kept me away from NVIDIA for a long time.

Things change, and I want to play with massively parallel computing.  Where else can I buy an 8192-FP-core processor?  What other company has all of the bits and pieces to make it possible for a hobbyist to work on such things?  And their community forum is excellent.  Couple the hardware with Julia or MATLAB and even the elderly can play in the big leagues.

We're talking multiple teraflops of compute power in a laptop.  We went to the Moon with machines performing, at best, 3 megaflops in a CDC 6600 mainframe.  I can have more compute power in my workshop than existed on the entire planet in 1969.  Without the huge electric bill!

Anybody want to compete?  Well, step right up!  Go hire a raft of the smartest people on the planet and give it a shot!  We'll know from the stock price whether they are making money.

To bring this back to Julia, it certainly looks like the language has all of the libraries needed to take advantage of NVIDIA's offerings, plus a way to adapt to newcomers.  I don't know whether Julia will catch on like Python, but it certainly has possibilities.

The Julia code for digit recognition over the MNIST dataset is readily available:

https://github.com/crhota/Handwritten-Digit-Recognition-using-MNIST-dataset-and-Julia-Flux/blob/master/src/Handwriting%20Recognition.ipynb

There are a ton of comments, and the code is pretty easy to follow.  I may eventually start to like the Jupyter notebook approach.

In terms of Julia and Flux, here's a gentle introduction:

https://fluxml.ai/tutorials/2020/09/15/deep-learning-flux.html
« Last Edit: January 09, 2022, 04:55:00 pm by rstofer »
 

Online Nominal Animal

  • Super Contributor
  • ***
  • Posts: 3718
  • Country: fi
    • My home page and email address
Re: The Julia thread
« Reply #22 on: January 09, 2022, 05:33:20 pm »
Lacking an established standard for CUDA cores, I don't see how there can be multiple sources.  It doesn't seem that the other manufacturers want to compete in that marketplace.  I suppose there could be a shim layer to convert nvc and nvfortran code to some other architecture but I don't see it optimizing well.
People like me who do GPGPU HPC stuff use OpenCL instead of CUDA.  Unlike CUDA, which is NVidia-only, several vendors support OpenCL.

It is worth the difference to me to not be dependent on a single vendor –– or more importantly, dependent on the black-box support of a single vendor.
Makes all the sense in the world!
There is nothing wrong with relying on CUDA, though, if you accept the single-vendor dependency.  As said before, a lot of scientists rely on it (via the software they use) and use it happily.  Me, I'm a toolmaker, so my attitude differs from those who just use the existing tools to get useful, interesting results.

Indeed, the CUDA vs. OpenCL in HPC is a bit of a hot potato, because currently, with CUDA, you can get more out of NVidia hardware, which is common in HPC clusters.
(It gets a bit nasty if you drag in the topic of "reproducible results" when that exact hardware is no longer available.)
 

Offline rstofer

  • Super Contributor
  • ***
  • Posts: 9156
  • Country: us
Re: The Julia thread
« Reply #23 on: January 09, 2022, 05:47:13 pm »

It is worth the difference to me to not be dependent on a single vendor –– or more importantly, dependent on the black-box support of a single vendor.

There is nothing wrong with relying on CUDA, though, if you accept the single-vendor dependency.  As said before, a lot of scientists rely on it (via the software they use) and use it happily.  Me, I'm a toolmaker, so my attitude differs from those who just use the existing tools to get useful, interesting results.

Indeed, the CUDA vs. OpenCL in HPC is a bit of a hot potato, because currently, with CUDA, you can get more out of NVidia hardware, which is common in HPC clusters.
(It gets a bit nasty if you drag in the topic of "reproducible results" when that exact hardware is no longer available.)

I've been retired for 18 years, so dependency issues are not a concern.  What is a concern is documentation.  I'm not so much a developer as a copy-and-paster; I need examples!  Community forums are highly regarded, and Julia has that covered!  They have some outstanding projects and videos, and they don't assume I have a PhD in applied mathematics.

From what I have seen of CNNs, all weights and biases start with random values.  Results, run to run, are not exactly identical; there's some kind of tolerance, I suppose.  I have yet to tumble to how they separate a local minimum from a global minimum when there is a mountain in the way in thousand-dimensional space.  Working on it...
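The local/global minimum issue is easy to reproduce even in one dimension (the toy loss below is chosen by hand): plain gradient descent converges to whichever basin the random start happens to fall in.

```julia
loss(w)  = w^4 - 3w^2 + w   # non-convex: two minima, one lower than the other
dloss(w) = 4w^3 - 6w + 1    # its derivative

function descend(w; lr=0.01, steps=5_000)
    for _ in 1:steps
        w -= lr * dloss(w)   # plain gradient descent
    end
    return w
end

println(descend(-2.0))  # ends near the global minimum, w ≈ -1.31
println(descend( 2.0))  # ends near a higher local minimum, w ≈ 1.13
```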
« Last Edit: January 09, 2022, 05:49:26 pm by rstofer »
 

Online SiliconWizard

  • Super Contributor
  • ***
  • Posts: 9543
  • Country: fr
Re: The Julia thread
« Reply #24 on: January 09, 2022, 06:51:01 pm »
Indeed, the CUDA vs. OpenCL in HPC is a bit of a hot potato, because currently, with CUDA, you can get more out of NVidia hardware, which is common in HPC clusters.

Of course. It's like everything else in computing: either you want something fully open and portable, or you want maximum performance. The two are usually mutually exclusive, except maybe in very rare cases.

Regarding Julia, both are supported in various ways, so you can pick what works for you.
 
The following users thanked this post: Nominal Animal

