Author Topic: Apples new M1 microprocessor  (Read 9149 times)


Offline dr.diesel

  • Super Contributor
  • ***
  • Posts: 2197
  • Country: us
  • Cramming the magic smoke back in...
Re: Apples new M1 microprocessor
« Reply #175 on: November 21, 2020, 11:35:02 pm »
Sure wish this hardware could be had for us Linux guys.

Any rumors of higher performance ARM laptops on the horizon?

Online bd139

  • Super Contributor
  • ***
  • Posts: 17141
  • Country: gb
Re: Apples new M1 microprocessor
« Reply #176 on: November 21, 2020, 11:41:04 pm »
 

Offline Berni

  • Super Contributor
  • ***
  • Posts: 3167
  • Country: si
Re: Apples new M1 microprocessor
« Reply #177 on: November 22, 2020, 09:03:34 am »
Probably because video encoding is one of the most demanding CPU tasks. When not hardware-accelerated, it’s incredibly taxing on a system, as it involves gargantuan amounts of data and processing. (We use compressed formats that have staggering compression ratios. The raw data the encoder needs is huge, and the amount of processing needed to achieve high compression is insane.)

The typical things most people do most of the time (web browsing, watching videos, creating documents, etc) simply aren’t demanding enough to tease out meaningful performance comparisons.

I remember back in the early 90s, when computer benchmarks ran massive scripted suites of real-world software, that one common benchmark item was how long it took a spreadsheet (originally Lotus 1-2-3, later Excel) to recalculate a spreadsheet with a couple of thousand cells. (Not lines. Cells.) They had to increase the size of the spreadsheets to keep up with performance improvements, before eventually abandoning that test because all computers got fast enough that recalculating a spreadsheet became instant, so they just made it automatic as we know it today. (IIRC, we got to the point in those tests that they couldn’t even increase the test spreadsheets, as they’d reached the limits at the time of how many columns and rows you could have; that is, limits written into the code, not computing resource limits.)

Well, I was recently using Excel to quickly plot some CSV data, and once you get past ~100k rows things really start to slow down. I had cases where Excel would lock up for more than 30 seconds at a time after pressing a button, munching away at the data. Admittedly this was something like a 30 MB CSV file, but still, modern computers should handle that (there are now CPUs whose cache could hold the entire file). Most of the reason is simply that Excel is not well optimized for large amounts of data and is bogged down by 30 years of legacy code under the hood.
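As a sanity check on what the raw parsing actually costs, here is a minimal sketch (in Python; the column names and values are made up for illustration) that builds a 100,000-row CSV in memory and does a full parse-and-aggregate pass over it. On any modern machine this finishes in well under a second, which supports the point that the hardware is not the bottleneck:

```python
import csv
import io
import time

# Build an in-memory CSV of 100,000 rows x 3 numeric columns,
# roughly the shape of data described above.
rows = 100_000
buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(["t", "ch1", "ch2"])
for i in range(rows):
    writer.writerow([i, i * 0.001, i * 0.002])

# Parse it back and aggregate one column, timing the whole pass.
buf.seek(0)
start = time.perf_counter()
reader = csv.reader(buf)
next(reader)  # skip header
total = sum(float(r[1]) for r in reader)
elapsed = time.perf_counter() - start

print(f"parsed {rows} rows in {elapsed:.3f} s, sum={total:.1f}")
```

This measures only parsing and summing, of course; a spreadsheet also has to build its UI model, recalculate dependencies, and so on, which is where the legacy overhead comes in.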

But yes, every synthetic benchmark has a bias towards something; not everyone uses a computer for the same thing. There is nothing that makes Cinebench particularly superior to other benchmarks, but its workload of raytracing on the CPU is still a well-balanced load that stresses most of a modern CPU's functionality. It involves a good deal of math for all those vectors, heavily branching code that tests the pipeline's agility, a fair bit of RAM bandwidth due to the size of the data, and it's easily parallelizable, so it can make use of pretty much any number of cores. Since it covers most of the things CPU-heavy applications need, the resulting score is a reasonably good indicator of overall performance.
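To illustrate why that kind of workload parallelizes so cleanly: each pixel (ray) is independent, so the image can be split into chunks and handed to workers with no coordination between them. A toy sketch in Python (the `shade()` function is a made-up stand-in for a per-ray computation; real renderers use processes or native threads rather than Python threads):

```python
from concurrent.futures import ThreadPoolExecutor
import math

def shade(pixel_index):
    # Stand-in for a per-ray computation: some vector-ish math.
    x = pixel_index * 0.001
    return math.sin(x) * math.cos(x) + x * 0.5

def render(n_pixels, n_workers):
    # Split the image into one chunk per worker. Each chunk is fully
    # independent, which is what makes ray tracing embarrassingly parallel.
    chunk = (n_pixels + n_workers - 1) // n_workers
    ranges = [range(i, min(i + chunk, n_pixels))
              for i in range(0, n_pixels, chunk)]
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        partials = pool.map(lambda r: sum(shade(p) for p in r), ranges)
        return sum(partials)

serial = sum(shade(p) for p in range(10_000))
parallel = render(10_000, 4)
```

The serial and parallel results agree (up to floating-point summation order), and the structure scales to any core count just by changing `n_workers`.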

These benchmarks are very useful when you are deciding on your next CPU. An easily comparable number lets you find the best CPU for your budget, and it also lets you judge how worthwhile upgrading your computer might be. Sure, Intel has been cranking out generation after generation of improved CPUs, but the benchmarks show that the i7 made big leaps in performance across generations 1 through 4, and then, once AMD stopped competing, generations 5 through 10 made only small improvements. One might expect going from 4th gen to 10th gen to be a massive jump, but the benchmarks show it really isn't, so you might decide to stick with the trusty 4th gen until something better comes along to make the investment in a new PC worthwhile.
 

Offline tooki

  • Super Contributor
  • ***
  • Posts: 5658
  • Country: ch
Re: Apples new M1 microprocessor
« Reply #178 on: November 22, 2020, 10:36:44 am »
Probably because video encoding is one of the most demanding CPU tasks. When not hardware-accelerated, it’s incredibly taxing on a system, as it involves gargantuan amounts of data and processing. (We use compressed formats that have staggering compression ratios. The raw data the encoder needs is huge, and the amount of processing needed to achieve high compression is insane.)

The typical things most people do most of the time (web browsing, watching videos, creating documents, etc) simply aren’t demanding enough to tease out meaningful performance comparisons.

I remember back in the early 90s, when computer benchmarks ran massive scripted suites of real-world software, that one common benchmark item was how long it took a spreadsheet (originally Lotus 1-2-3, later Excel) to recalculate a spreadsheet with a couple of thousand cells. (Not lines. Cells.) They had to increase the size of the spreadsheets to keep up with performance improvements, before eventually abandoning that test because all computers got fast enough that recalculating a spreadsheet became instant, so they just made it automatic as we know it today. (IIRC, we got to the point in those tests that they couldn’t even increase the test spreadsheets, as they’d reached the limits at the time of how many columns and rows you could have; that is, limits written into the code, not computing resource limits.)

Well, I was recently using Excel to quickly plot some CSV data, and once you get past ~100k rows things really start to slow down. I had cases where Excel would lock up for more than 30 seconds at a time after pressing a button, munching away at the data. Admittedly this was something like a 30 MB CSV file, but still, modern computers should handle that (there are now CPUs whose cache could hold the entire file). Most of the reason is simply that Excel is not well optimized for large amounts of data and is bogged down by 30 years of legacy code under the hood.
For sure. Over the years they've increased the sizes of data sets it can work with, but it definitely doesn't mean it's optimized for them. (This is also the one area where 64-bit Office actually makes a difference, supposedly.)

Since I really only use Excel to do things like BOMs and other such non-number-crunchy data, it's not something I've ever had to wrestle with!

But yes, every synthetic benchmark has a bias towards something; not everyone uses a computer for the same thing. There is nothing that makes Cinebench particularly superior to other benchmarks, but its workload of raytracing on the CPU is still a well-balanced load that stresses most of a modern CPU's functionality. It involves a good deal of math for all those vectors, heavily branching code that tests the pipeline's agility, a fair bit of RAM bandwidth due to the size of the data, and it's easily parallelizable, so it can make use of pretty much any number of cores. Since it covers most of the things CPU-heavy applications need, the resulting score is a reasonably good indicator of overall performance.
Exactly!

There isn't one benchmark that can make everyone happy, since that's literally impossible, given people's competing needs. But few everyday tasks can actually push a modern system anywhere close to its limits.

As for parallelization: this is a point I think most people don't really understand. I don't think most people realize that for most tasks, it's single-core performance that matters, since a) not all things can be parallelized, and b) parallelizing code is fucking hard to get right. (I think it's actually something most developers are not capable of doing well.) For everyday things, going from a single core to dual core made a big difference, since the second core could handle OS housekeeping while leaving a whole CPU core to the user app. But every core beyond two delivers only insignificant performance increases for everyday apps whose performance is ultimately bound by one thread, even if some things are offloaded to separate threads (like video playback, or indexing, etc).
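The ceiling described above is Amdahl's law: overall speedup is limited by whatever fraction of the work stays serial, no matter how many cores you add. A minimal sketch (purely illustrative):

```python
def amdahl_speedup(parallel_fraction, cores):
    """Amdahl's law: speedup is capped by the serial part of the work."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / cores)

# If half the work is serial, even a million cores can't reach 2x:
print(amdahl_speedup(0.5, 2))          # ~1.33
print(amdahl_speedup(0.5, 16))         # ~1.88
print(amdahl_speedup(0.5, 1_000_000))  # just under 2.0
```

This is why a second core helps everyday use (the serial user-facing thread gets a whole core to itself) while the eighth core barely registers.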
 

Online bd139

  • Super Contributor
  • ***
  • Posts: 17141
  • Country: gb
Re: Apples new M1 microprocessor
« Reply #179 on: November 22, 2020, 10:40:42 am »
Using Excel as a hammer is not recommended  :-DD
 

Offline SilverSolder

  • Super Contributor
  • ***
  • Posts: 3470
  • Country: 00
Re: Apples new M1 microprocessor
« Reply #180 on: November 22, 2020, 01:28:19 pm »
[...]  parallelizing code is fucking hard to get right. (I think it's actually something most developers are not capable of doing well.) For everyday things, going from a single core to dual core made a big difference, since the second core could handle OS housekeeping while leaving a whole CPU core to the user app. But every core beyond two delivers only insignificant performance increases for everyday apps whose performance is ultimately bound by one thread, even if some things are offloaded to separate threads (like video playback, or indexing, etc).

In general, I would agree that 4 cores is probably plenty for most people's workloads.

For those apps that do benefit from multi-threading (e.g. circuit sims, heavy Photoshop filters, video editing, compression/backup, and so on) I think developers generally do understand what they are doing, and they do make good use of whatever cores you have.  So there is a significant subset of people for whom more cores (and speed too - both at the same time!) makes a big difference.


« Last Edit: November 22, 2020, 01:36:00 pm by SilverSolder »
 

Online DiTBho

  • Regular Contributor
  • *
  • Posts: 52
  • Country: gb
Re: Apples new M1 microprocessor
« Reply #181 on: November 22, 2020, 01:29:15 pm »
How do these cores operate with respect to the bus to the GPU and other units? Is there a shared (three-state logic) bus, a crossbar-style interconnect, or something different?
 

Online bd139

  • Super Contributor
  • ***
  • Posts: 17141
  • Country: gb
Re: Apples new M1 microprocessor
« Reply #182 on: November 22, 2020, 01:34:36 pm »
Don’t think there’s any info on that yet. Either way the GPU and CPU are in competition for memory access so you’re never going to get anywhere near half decent performance out of this architecture.
 
The following users thanked this post: DiTBho

Offline Cerebus

  • Super Contributor
  • ***
  • Posts: 6242
  • Country: gb
Re: Apples new M1 microprocessor
« Reply #183 on: November 22, 2020, 02:49:57 pm »
How do these cores operate with respect to the bus to the GPU and other units? Is there a shared (three-state logic) bus, a crossbar-style interconnect, or something different?

Apple's own, very sketchy, marketing diagram shows a switch fabric connecting all the subsystems:

Anybody got a syringe I can use to squeeze the magic smoke back into this?
 
The following users thanked this post: DiTBho

Offline tooki

  • Super Contributor
  • ***
  • Posts: 5658
  • Country: ch
Re: Apples new M1 microprocessor
« Reply #184 on: November 22, 2020, 06:59:23 pm »
For those apps that do benefit from multi-threading (e.g. circuit sims, heavy Photoshop filters, video editing, compression/backup, and so on) I think developers generally do understand what they are doing, and they do make good use of whatever cores you have.  So there is a significant subset of people for whom more cores (and speed too - both at the same time!) makes a big difference.
Those are all things that are trivial to parallelize, which is why people are used to seeing things like that be multithreaded.

But scores of apps that would benefit from multithreading end up being CPU-bound by a single thread that gets bogged down running the show, because so many developers don't know how to architect software that doesn't get held up in the main thread somewhere. (Note that I'm not a developer. I studied IT, and so I am well familiar with requirements engineering, software development life cycles, etc., but actual coding isn't my thing. So I'm expressly not claiming to know how to do advanced multithreading myself.)

Anyhow, it takes rare talent to be able to really break down complex, interactive applications into threads that can run without waiting for each other, while ensuring data concurrency (one of the big reasons devs avoid multithreading). Unsurprisingly, another area where data concurrency is central is also one most developers plainly and simply suck at: data synchronization. I suspect another reason devs love cloud-based software is that it saves them from having to worry about syncing desktop clients later.
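The shared-state hazard being described can be shown in a few lines. A minimal Python sketch: four threads increment a shared counter, and only the lock makes the read-modify-write safe (remove the `with lock:` and updates can be lost, because `counter += 1` is not atomic even in CPython):

```python
import threading

counter = 0
lock = threading.Lock()

def add_many(n):
    global counter
    for _ in range(n):
        # The lock serializes this read-modify-write; without it,
        # two threads can read the same old value and lose an update.
        with lock:
            counter += 1

threads = [threading.Thread(target=add_many, args=(100_000,))
           for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 400000 with the lock; possibly less without it
```

Scaling this up from one counter to an application's whole data model, while keeping threads from stalling on each other's locks, is exactly the hard part.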
 

Online brucehoult

  • Super Contributor
  • ***
  • Posts: 1815
  • Country: nz
  • Formerly SiFive, Samsung R&D
Re: Apples new M1 microprocessor
« Reply #185 on: November 22, 2020, 09:59:35 pm »
I promised some results of my M1 Mac testing so here you go.

Early days yet, but very different (open-source programmer oriented) tests than you'll find elsewhere. Includes building serious open-source software (binutils, gcc, gdb, newlib). Includes arm64 Ubuntu in a VM.

http://hoult.org/arm64_mini.html

Ongoing. Not everything works with the combination of MacOS AND arm64 yet. Will add more tests. Suggestions welcome if not too difficult to do.
 
The following users thanked this post: bd139, DiTBho

Offline SilverSolder

  • Super Contributor
  • ***
  • Posts: 3470
  • Country: 00
Re: Apples new M1 microprocessor
« Reply #186 on: November 22, 2020, 10:00:59 pm »
For those apps that do benefit from multi-threading (e.g. circuit sims, heavy Photoshop filters, video editing, compression/backup, and so on) I think developers generally do understand what they are doing, and they do make good use of whatever cores you have.  So there is a significant subset of people for whom more cores (and speed too - both at the same time!) makes a big difference.
Those are all things that are trivial to parallelize, which is why people are used to seeing things like that be multithreaded.

But scores of apps that would benefit from multithreading end up being CPU-bound by a single thread that gets bogged down running the show, because so many developers don't know how to architect software that doesn't get held up in the main thread somewhere. (Note that I'm not a developer. I studied IT, and so I am well familiar with requirements engineering, software development life cycles, etc., but actual coding isn't my thing. So I'm expressly not claiming to know how to do advanced multithreading myself.)

Anyhow, it takes rare talent to be able to really break down complex, interactive applications into threads that can run without waiting for each other, while ensuring data concurrency (one of the big reasons devs avoid multithreading). Unsurprisingly, another area where data concurrency is central is also one most developers plainly and simply suck at: data synchronization. I suspect another reason devs love cloud-based software is that it saves them from having to worry about syncing desktop clients later.

I'm sure they exist, but I'm struggling to think of an example of an application that could benefit from multithreading where it hasn't already been done.
 

Offline olkipukki

  • Frequent Contributor
  • **
  • Posts: 586
  • Country: 00
Re: Apples new M1 microprocessor
« Reply #187 on: November 23, 2020, 09:02:07 am »
In real use performance is as good as the best Windows or Linux x86 machines. No problems. Even emulated x86 apps feel absolutely normal and snappy.

What do you use for x86 VM on Mini M1?

It looks like Wine supports the M1, but VMware Fusion doesn't yet:
https://www.codeweavers.com/blog/jwhite/2020/11/18/okay-im-on-the-bandwagon-apple-silicon-is-officially-cool

I'm keen to upgrade my 7-year-old 13" retina to an M1, but a Windows VM is a must.

 

Online DiTBho

  • Regular Contributor
  • *
  • Posts: 52
  • Country: gb
Re: Apples new M1 microprocessor
« Reply #188 on: November 23, 2020, 09:52:47 am »
I'd like to buy an MBP/M1 for Final Cut, but I am not sure about this choice
 

Offline NANDBlog

  • Super Contributor
  • ***
  • Posts: 4924
  • Country: nl
  • Current job: ATEX certified product design
Re: Apples new M1 microprocessor
« Reply #189 on: November 23, 2020, 10:20:13 am »
For those apps that do benefit from multi-threading (e.g. circuit sims, heavy Photoshop filters, video editing, compression/backup, and so on) I think developers generally do understand what they are doing, and they do make good use of whatever cores you have.  So there is a significant subset of people for whom more cores (and speed too - both at the same time!) makes a big difference.
Those are all things that are trivial to parallelize, which is why people are used to seeing things like that be multithreaded.

But scores of apps that would benefit from multithreading end up being CPU-bound by a single thread that gets bogged down running the show, because so many developers don't know how to architect software that doesn't get held up in the main thread somewhere. (Note that I'm not a developer. I studied IT, and so I am well familiar with requirements engineering, software development life cycles, etc., but actual coding isn't my thing. So I'm expressly not claiming to know how to do advanced multithreading myself.)

Anyhow, it takes rare talent to be able to really break down complex, interactive applications into threads that can run without waiting for each other, while ensuring data concurrency (one of the big reasons devs avoid multithreading). Unsurprisingly, another area where data concurrency is central is also one most developers plainly and simply suck at: data synchronization. I suspect another reason devs love cloud-based software is that it saves them from having to worry about syncing desktop clients later.
One of these problems (at least a few years ago) was FPGA implementation (no, it is not called compiling): the code has so many branches that the tools remain single-threaded. The code is also pretty complicated, and the vendors are much more concerned with using FPGA resources optimally, so the time and money to parallelize the implementation tools isn't considered worth it.
 
The following users thanked this post: tooki

Online brucehoult

  • Super Contributor
  • ***
  • Posts: 1815
  • Country: nz
  • Formerly SiFive, Samsung R&D
Re: Apples new M1 microprocessor
« Reply #190 on: November 23, 2020, 10:30:49 am »
In real use performance is as good as the best Windows or Linux x86 machines. No problems. Even emulated x86 apps feel absolutely normal and snappy.

What do you use for x86 VM on Mini M1?

Just the built in Rosetta, emulating x86_64 MacOS apps.
 

Online brucehoult

  • Super Contributor
  • ***
  • Posts: 1815
  • Country: nz
  • Formerly SiFive, Samsung R&D
Re: Apples new M1 microprocessor
« Reply #191 on: November 24, 2020, 02:02:15 am »
Does anyone have any tests they'd like run on the M1, and in particular source code? MacOS or Ubuntu code ...
 

Offline Berni

  • Super Contributor
  • ***
  • Posts: 3167
  • Country: si
Re: Apples new M1 microprocessor
« Reply #192 on: November 24, 2020, 06:04:49 am »
Well, Apple has also been making claims of the "world's fastest integrated GPU".

So it might be interesting to run some OpenGL and OpenCL benchmarks and see how it compares to Intel's and AMD's integrated GPU solutions. Perhaps also while running a RAM bandwidth stress test at the same time, to see what effect the shared CPU/GPU RAM has.
 

Online brucehoult

  • Super Contributor
  • ***
  • Posts: 1815
  • Country: nz
  • Formerly SiFive, Samsung R&D
Re: Apples new M1 microprocessor
« Reply #193 on: November 24, 2020, 07:13:15 am »
Well, Apple has also been making claims of the "world's fastest integrated GPU".

So it might be interesting to run some OpenGL and OpenCL benchmarks and see how it compares to Intel's and AMD's integrated GPU solutions. Perhaps also while running a RAM bandwidth stress test at the same time, to see what effect the shared CPU/GPU RAM has.

If you want something, then you need to give me a ready-prepared project, not a general suggestion about something I have no interest in :-)

Also, the machine is headless and I'm using it mostly via ssh, with a little VNC when necessary.

*Every* review site in the world is covering video and gaming performance. That's not my thing. At all.

I'm currently playing around with the hypervisor framework, with a view to making a sandboxed environment that runs apps, and maybe the whole OS, for a non-ARM non-x86 ISA using dynamic code translation of my own design.
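Bruce's translator design is his own, but the general idea behind dynamic code translation can be sketched in a few lines: decode each guest instruction once, emit a host-native operation for it, and then run the translated block directly instead of re-decoding every time. A toy Python illustration using a made-up two-instruction accumulator ISA (everything here, including the mnemonics, is invented for the sketch):

```python
# Toy dynamic code translation: each guest instruction of a made-up
# accumulator ISA is translated once into a host (Python) closure.
# Executing the translated block then skips decoding entirely, which
# is the core trick behind Rosetta-style translators.

def translate(program):
    ops = []
    for mnemonic, operand in program:
        if mnemonic == "ADDI":           # acc += operand
            ops.append(lambda acc, k=operand: acc + k)
        elif mnemonic == "MULI":         # acc *= operand
            ops.append(lambda acc, k=operand: acc * k)
        else:
            raise ValueError(f"unknown instruction {mnemonic}")

    def run(acc=0):
        # The translated block: straight-line host execution, no decode loop.
        for op in ops:
            acc = op(acc)
        return acc

    return run

block = translate([("ADDI", 5), ("MULI", 3), ("ADDI", 1)])
print(block(0))  # (0 + 5) * 3 + 1 = 16
```

A real translator emits machine code rather than closures, and has to handle self-modifying code, memory models, and traps, but the translate-once/run-many structure is the same.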
 

Online borjam

  • Supporter
  • ****
  • Posts: 849
  • Country: es
  • EA2EKH
Re: Apples new M1 microprocessor
« Reply #194 on: November 24, 2020, 07:34:12 am »
I remember back in the early 90s, when computer benchmarks ran massive scripted suites of real-world software, that one common benchmark item was how long it took a spreadsheet (originally Lotus 1-2-3, later Excel)
You mean peecee benchmarks in the typical glossy magazines.

Peecees are computers, but not all computers are peecees.
 

Online borjam

  • Supporter
  • ****
  • Posts: 849
  • Country: es
  • EA2EKH
Re: Apples new M1 microprocessor
« Reply #195 on: November 24, 2020, 07:35:58 am »
Well, Apple has also been making claims of the "world's fastest integrated GPU".

So it might be interesting to run some OpenGL and OpenCL benchmarks and see how it compares to Intel's and AMD's integrated GPU solutions. Perhaps also while running a RAM bandwidth stress test at the same time, to see what effect the shared CPU/GPU RAM has.
Something like this?
https://barefeats.com/m1-macbook-pro-versus-intel-egpu.html
https://barefeats.com/m1-macbook-pro-versus-intel-version.html


 

Online bd139

  • Super Contributor
  • ***
  • Posts: 17141
  • Country: gb
Re: Apples new M1 microprocessor
« Reply #196 on: November 24, 2020, 07:49:53 am »
Interesting anecdata point to add. A friend of mine bought a bottom-end 8 GB M1 mini to replace his 8th-gen i5 mini with 16 GB of RAM. Side by side it’s faster, more responsive, and runs cold, unlike the i5. I had an i3 one myself and that thing ran burning hot all the time. So there’s a longevity improvement there in theory, but bang goes the coffee warmer  :-DD

I’m tempted to buy one and play with it. Apple stuff depreciates so little that it’s not much of a risk.
 

Online borjam

  • Supporter
  • ****
  • Posts: 849
  • Country: es
  • EA2EKH
Re: Apples new M1 microprocessor
« Reply #197 on: November 24, 2020, 10:36:31 am »
And what's the memory bandwidth? Seems it has a huge SoC with CPU, GPU and RAM.
 

Online brucehoult

  • Super Contributor
  • ***
  • Posts: 1815
  • Country: nz
  • Formerly SiFive, Samsung R&D
Re: Apples new M1 microprocessor
« Reply #198 on: November 24, 2020, 11:10:51 am »
And what's the memory bandwidth? Seems it has a huge SoC with CPU, GPU and RAM.

Novabench gives 41975 MB/sec running as an x86 app under Rosetta.
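For a very rough point of comparison, a naive single-threaded streaming read can be sketched like this in Python. (This measures interpreter overhead at least as much as DRAM, so expect a number far below what Novabench reports; it only illustrates the shape of such a microbenchmark.)

```python
import array
import time

# 64 MB buffer of doubles; summing it is one streaming read pass.
n = 8 * 1024 * 1024
buf = array.array("d", range(n))

start = time.perf_counter()
total = sum(buf)  # read every element once, front to back
elapsed = time.perf_counter() - start

mb = buf.itemsize * n / 1e6
print(f"{mb / elapsed:.0f} MB/s (single-threaded, interpreter-limited)")
```

Serious bandwidth benchmarks (STREAM and the like) use compiled, vectorized, multi-threaded loops for exactly this reason.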
 

Offline JPortici

  • Super Contributor
  • ***
  • Posts: 2691
  • Country: it
Re: Apples new M1 microprocessor
« Reply #199 on: November 24, 2020, 11:16:43 am »
I’m tempted to buy one and play with it. Apple stuff depreciates so little that it’s not much of a risk.

Me too. I'll seriously consider one when my current NUC goes kaput (the BIOS on the previous one got corrupted right after the warranty expired, but Intel was kind enough to replace it anyway).
Performance is about the same, and the price is more or less the same once you add the SSD and RAM.
I'll lose native Windows :( for now, but hey, by that time there will probably be a build, or Parallels will be fully working.
 
The following users thanked this post: tooki

