ASM programming is FASCINATING!
| joeqsmith:
--- Quote from: eti on July 27, 2020, 02:54:44 am ---
Okay, so my mind works best at the lowest possible (humanly parseable) level with regard to machines. I find ASM particularly fascinating, although I do also use bash a great deal and enjoy that, but ASM and knowing how the internal machinations of digital machines work REALLY turns on lightbulbs in my brain. I'm a lonnnnnnng way off knowing even 5% of what I need to know, but am gradually getting used to PIC ASM, and am now learning by messing around with an Altair 8800 simulator and loading registers by hand, and also referring to as much ASM as I can, no matter what the arch. I've been looking here also, at 6502: http://www.obelisk.me.uk/6502/reference.html

If anyone would like to comment or advise how best to get a true, firm grasp on ASM, I'm open to suggestions. Thanks guys and girls.
--- End quote ---

Don't. Each instruction set is different, and the hardware will be different as well. The more of them you work with, the worse it gets: you start making up instructions that are not there, or that belong to some other micro. You spend hours trying to come up with better techniques, and for what? Then you have to work on a RISC and |O

We used to do everything in assembler, it seems. I had a friend who worked at a fairly large company, and he was in the third-tier optimizing group. The first level was the general coders; their stuff was clean and easy to follow and maintain. If the code wouldn't fit, it went to the second group, who would try to trim it up but keep a similar flow. The third group would rewrite whole sections to squeeze out every last byte. The code would be nothing like the original and was normally difficult to understand.

I don't miss those days, but when I built my little transient generator, I wire-wrapped the controller using a 6801 and wrote it all in assembler, just for the fun of it. That code is a mess, and I would say the whole project took at least 10X longer.

I've written a few disassemblers and debuggers for smaller micros. I've also designed and built my own ICE. At one point I was rolling my own CPUs -- similar to the ucode on the 68332: create your own micro, then you can create your own uassembler. Fun, but today there would be little point in any of it.
| tggzzz:
--- Quote from: T3sl4co1l on July 28, 2020, 09:24:18 pm ---
The lesson on timing is, of course: if you have enough CPU power to do it, and deterministic timings, then you can hard code it; if not, then buffer it, and make sure you have enough CPU power to get through the worst-case paths to refreshing that buffer in time.
--- End quote ---

Making sure of that is very difficult. If you measure a mean time of X, what fudge factor should you apply to get to the worst case?

As for "if you have enough CPU power", yes, that is a solution. But I'm reminded that you can get a brick to fly if you apply enough power. (Or search yootoob for "flying lawnmower"!)

--- Quote ---
Indeed, modern application processors are so fast that you might not care at all about their indeterministic execution time; an AVR or Cortex-M0 can execute one or two instructions in the time it takes the big CPU to execute a hundred -- and those instructions are vastly more powerful, operating on more data (including SIMD extensions) in ever richer ways. In that fraction of a microsecond, the entire computation might be complete, whereas the deterministic CPUs are just sitting down to work.
--- End quote ---

Or it might not be complete. I'm reminded of the old joke about someone entering a programming competition. The winning entry was faster, but contained errors. The losing competitor remarked that he could have made his program ten times faster if he didn't have to give the correct result.

--- Quote ---
Not to mention if multiple cores are employed (not that their outputs will be combined until much later, due to inter-CPU communication and cache coherency).
--- End quote ---

Cache coherency is a killer, both in large systems and in hard realtime systems. The larger HPC systems appear to be settling on message-passing architectures, which can avoid the problems of cache coherency.
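As a rough illustration of the worst-case-versus-mean point above, here is a minimal C sketch that times a stand-in buffer-refill routine many times and budgets against the observed worst case rather than the mean. The refill routine, the run count and the 50 µs deadline are all made up for illustration; they are not taken from any real system.

--- Code: ---
/* Minimal sketch: budget the buffer refill against the observed worst
   case, not the mean.  Everything here is a made-up stand-in. */
#include <stdio.h>
#include <stdint.h>
#include <time.h>

#define RUNS        100000
#define DEADLINE_NS 50000L     /* hypothetical budget to top the buffer up */

static volatile uint32_t sink; /* keeps the "work" from being optimized away */

/* Stand-in for whatever actually refills the output buffer. */
static void refill_buffer(void)
{
    uint32_t acc = 0;
    for (int i = 0; i < 1000; i++)
        acc += (uint32_t)i * 2654435761u;
    sink = acc;
}

static long elapsed_ns(struct timespec a, struct timespec b)
{
    return (b.tv_sec - a.tv_sec) * 1000000000L + (b.tv_nsec - a.tv_nsec);
}

int main(void)
{
    long worst = 0, total = 0;

    for (int i = 0; i < RUNS; i++) {
        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        refill_buffer();
        clock_gettime(CLOCK_MONOTONIC, &t1);

        long dt = elapsed_ns(t0, t1);
        total += dt;
        if (dt > worst)
            worst = dt;
    }

    /* The mean usually looks comfortable; it is the worst case (plus
       whatever margin you trust) that has to fit inside the deadline. */
    printf("mean %ld ns, worst %ld ns, budget %ld ns -> %s\n",
           total / RUNS, worst, DEADLINE_NS,
           worst < DEADLINE_NS ? "probably OK" : "underrun risk");
    return 0;
}
--- End code ---

Even then, the fudge factor only covers what the measurement happened to exercise; a cache miss, interrupt or branch path you never hit while profiling can still blow the budget, which is exactly the difficulty being pointed out.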
| MK14:
--- Quote from: joeqsmith on July 29, 2020, 12:40:34 am ---
Don't.
--- End quote ---

But doing at least some assembly language can be fun and educational. Even a few hours at it is worthwhile, even if that ends up being the first and last short assembly language program you ever write.
| VK3DRB:
No one has ever written anything in assembler, but they have used an assembler and written code in assembly language. There is a big difference between an assembler and assembly language. Notepad++ now calls it "Assembly" because I advised the authors several years ago that they had the terminology wrong, and they agreed. (Notepad++ --> Language --> A --> Assembly).

These days I write in C, but for register settings and digital I/O there is little difference between C and Assembly. I have not bothered with Assembly language for years, except for some inline code occasionally. But I can see why you like Assembly language. For larger programs where there is complex program flow, C is much easier to use, read and understand.

That being said, one of the most important things in writing in assembly language (or C) is commenting. Don't comment WHAT you have done on a line-by-line basis, as that is pointless; comment WHY you have done things. Use intelligent labels. Get the commenting right and use a format that leaves no ambiguity between comments or blocks of comments. Use decent headings too. Comments will help you when you revisit the code a few years down the track, and they help the "next poor bunny" who has to pick up the code. There is nothing worse than uncommented assembly code. I have seen some "professional" code written by well-known companies with very amateurish commenting.

Commenting in C can be reduced if intelligent identifier names and function names are used, rather than nonsensical or ambiguous names that only confuse.
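To make the WHAT-versus-WHY distinction concrete, here is a small C sketch. The "registers" are plain variables standing in for memory-mapped hardware, and the names, values and reasons are invented purely for illustration; they do not come from any real part.

--- Code: ---
#include <stdint.h>

/* Stand-ins for memory-mapped registers on some hypothetical part. */
static volatile uint32_t UART_BRR;   /* baud rate register */
static volatile uint32_t UART_CR;    /* control register   */

void uart_init(void)
{
    /* Pointless WHAT comment:
       UART_BRR = 1667;    // write 1667 to UART_BRR                    */

    /* Useful WHY comment: */
    UART_BRR = 1667;    /* 9600 baud from the 16 MHz peripheral clock;
                           the bootloader at the other end only speaks
                           9600 8N1, so don't "improve" this.           */

    UART_CR = (1u << 3) | (1u << 0);   /* TX + peripheral enable only;
                                          RX stays off because that pin
                                          is shared with the debug LED. */
}
--- End code ---

The line-by-line restatement adds nothing a reader can't already see from the code; the WHY comments are the ones that save the next poor bunny a few hours.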
| T3sl4co1l:
--- Quote from: tggzzz on July 29, 2020, 01:03:33 am ---
Making sure of that is very difficult. If you measure a mean time of X, what fudge factor should you apply to get to the worst case? As for "if you have enough CPU power", yes, that is a solution. But I'm reminded that you can get a brick to fly if you apply enough power. (Or search yootoob for "flying lawnmower"!)
--- End quote ---

Is this an argument in favor of, or in opposition to, my post? ;D

I like it; it's actually a really good analogy. It highlights the same gross excess, and the same rational economy. Back in the day, it took crazy defense projects (or a few very dedicated and probably wealthy amateurs) to come up with junk like that (e.g., those ill-fated flying-saucer platforms). Nowadays anyone with under a thousand bucks knocking around can slap together something like that!

In the same way, what used to require heroic assembler on one platform is now trivial on today's platforms, even on a budget. Don't let some imagined combination of efficiency, elegance and so on be the barrier to "good enough"! Doing some boring housekeeping tasks? Don't worry about learning 8051 assembler; just slap in the AVR or STM32 you're familiar with. Cost-reduce it later, when you have the time -- and, more importantly, the budget -- to! Using '10s technology to force '90s games to "run" on an '80s console? Don't worry about learning VHDL for the bus interface; just use the rPi you're handy with! Fluent in Python, but the data-cranking problem would really do better on a DSP or FPGA? Toss in the $50 SBC, who cares!

And of course, that's not to say one should take such liberty for granted: there will always be some applications where the harder solution is required, so there is value in learning lower-level things (even assembly). By all means, take the time to investigate them as you can. :-+

--- Quote ---
I'm reminded of the old joke about someone entering a programming competition. The winning entry was faster, but contained errors. The losing competitor remarked that he could have made his program ten times faster if he didn't have to give the correct result.
--- End quote ---

Yup. Timing from the start of the instruction(s) is what I meant, of course; but knowing when they start is another matter (or when their outputs propagate to their targets). :)

--- Quote ---
Cache coherency is a killer, both in large systems and in hard realtime systems. The larger HPC systems appear to be settling on message-passing architectures, which can avoid the problems of cache coherency.
--- End quote ---

Reminds me of this story: https://randomascii.wordpress.com/2018/01/07/finding-a-cpu-design-bug-in-the-xbox-360/

tl;dr: they added an instruction to perform an incoherent prefetch, bypassing L2. Turns out... just putting that instruction into memory, anywhere executable at all,* introduced an extremely small probability that it would be speculatively executed, tainting coherency and setting up a crash with absolutely no warning or explanation.

*Hmm, the article doesn't say if it was tested quite this far. A branch-indirect instruction could potentially be predicted to land on one, even if the target is not in any intended executable code path. (Also assuming memory is r/w/x tagged, so that general data doesn't get spec-ex.'d; that would just about damn it to a respin, I would guess!) It depends on when and how such an instruction is decoded; maybe those are really slow on the platform, decoded late, and it's actually safe?

Tim
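As a toy illustration of the message-passing idea in the quoted cache-coherency remark, here is a single-producer, single-consumer queue sketched in C11 atomics. Each index is written by only one side, so data flows one way through the array as "messages" instead of two cores fighting over shared, locked state. It is deliberately simplified and not tuned for any real HPC system.

--- Code: ---
/* Toy SPSC message queue: producer passes integers to consumer.
   Compile with something like: gcc -O2 -pthread spsc.c */
#include <pthread.h>
#include <stdatomic.h>
#include <stddef.h>
#include <stdio.h>

#define QSIZE 256          /* power of two so wrapping is a simple mask */
#define NMSG  10000

static int           q[QSIZE];
static atomic_size_t head;   /* advanced only by the producer */
static atomic_size_t tail;   /* advanced only by the consumer */

static void *producer(void *arg)
{
    (void)arg;
    for (int msg = 0; msg < NMSG; msg++) {
        size_t h = atomic_load_explicit(&head, memory_order_relaxed);
        /* spin while the queue is full */
        while (h - atomic_load_explicit(&tail, memory_order_acquire) == QSIZE)
            ;
        q[h & (QSIZE - 1)] = msg;                       /* the "message" */
        atomic_store_explicit(&head, h + 1, memory_order_release);
    }
    return NULL;
}

static void *consumer(void *arg)
{
    (void)arg;
    long sum = 0;
    for (int n = 0; n < NMSG; n++) {
        size_t t = atomic_load_explicit(&tail, memory_order_relaxed);
        /* spin while the queue is empty */
        while (atomic_load_explicit(&head, memory_order_acquire) == t)
            ;
        sum += q[t & (QSIZE - 1)];
        atomic_store_explicit(&tail, t + 1, memory_order_release);
    }
    printf("received %d messages, sum %ld\n", NMSG, sum);
    return NULL;
}

int main(void)
{
    pthread_t p, c;
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}
--- End code ---

Because head is only ever written by the producer and tail only by the consumer, the cache lines mostly migrate in one direction; there is no lock for two cores to ping-pong over, which is roughly why the big systems lean towards message passing.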