There is no single definition of fine-grained parallelism, especially since parallelism can be defined and expressed at many levels and in many ways.
But don't ignore the wood and concentrate on one clump of trees. That isn't enlightening.
For many people it is more enlightening to consider the relationship between a specification/algorithm, how it can be implemented in hardware and/or software, and the deep equivalence between hardware and software. More useful, too.
> But don't ignore the wood and concentrate on one clump of trees. That isn't enlightening.

That difference is crucial to understanding the fundamental difference between FPGAs and microcontrollers.
> For many people it is more enlightening to consider the relationship between a specification/algorithm, how it can be implemented in hardware and/or software, and the deep equivalence between hardware and software. More useful, too.

There is absolutely NO equivalence between hardware and software; they differ in a very fundamental way, which is why stuff like XMOS is nothing but a crutch for software developers who can't do hardware. Anyone saying otherwise simply doesn't understand that difference, and that's why they embrace crutches like bit-banging, which is totally and fundamentally different from how things happen inside an FPGA (or any hardware, for that matter).
As someone who came into hardware from the software world, I didn't find it easy to fully understand this fundamental difference, but, like everything, understanding comes with practice.
I have to agree with tggzzz here. What is the difference between a programmable state machine working according to instructions from flash memory and a bunch of programmable logic elements plus flip-flops working according to programmable connections read from flash memory?
The only real difference is that the programmable state machine (AKA CPU) offers a much more constrained interface and is thus easier to manage than a collection of logic that needs to be pieced together one element at a time. In the end only the level of abstraction is different. And more interestingly: a lot of effort has been put into 'defining' (for lack of a better word) programmable logic functions using high-level programming languages.
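To make the "CPU as a programmable state machine" view concrete, here is a minimal sketch (purely illustrative, not any real instruction set): the machine itself never changes, and only the values fetched from memory select what it does, much as configuration bits select the behaviour of a logic fabric.

```python
# A toy programmable state machine: the loop below is fixed "hardware";
# the behaviour is selected entirely by the program fetched from memory.
# Illustrative only -- not any real ISA.

def run(program, x):
    """Interpret a list of (op, operand) pairs against a single register."""
    for op, n in program:
        if op == "add":
            x += n
        elif op == "xor":
            x ^= n
        else:
            raise ValueError(f"unknown op: {op}")
    return x

result = run([("add", 3), ("xor", 1)], 4)  # (4 + 3) ^ 1 = 6
```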
Or, more practically: if I have a bunch of software engineers and a problem that would lend itself to being solved in software somehow, I'd go for a software solution instead of trying to turn software engineers into programmable logic designers.
I don't think you know as much as you think you know.
Please define your boundary between hardware and software. Start with something we all know and use: a system with an Intel x86 processor and some memory. Where does the software stop and the hardware start?
Background: some of the things I've designed and implemented include low-noise analogue electronics (including "DSP" circuits without ADCs/DACs), semi-custom digital ICs, designing and implementing an application-specific processor, through to life-critical software/hardware systems, cellular system RF modelling/instrumentation/measurement, and telecom server systems. I have a solid feel for where the boundaries aren't.
> I have to agree with tggzzz here. What is the difference between a programmable state machine working according to instructions from flash memory and a bunch of programmable logic elements plus flip-flops working according to programmable connections read from flash memory?

The difference begins and ends with the circuit.
You can always try to see differences and then get totally worked up about how things are not equal.
Try looking at it a different way: a screw and a nail are completely different, and yet they perform the same function: keeping materials together. Which one is best to use is highly debatable in many cases.
> I don't think you know as much as you think you know.

Don't worry, I know more than enough.
> Please define your boundary between hardware and software. Start with something we all know and use: a system with an Intel x86 processor and some memory. Where does the software stop and the hardware start?

Hardware works the way it works due to the way it's interconnected; software always uses the same circuit, so the hardware doesn't change from one operation to another.
> Background: some of the things I've designed and implemented include low-noise analogue electronics (including "DSP" circuits without ADCs/DACs), semi-custom digital ICs, designing and implementing an application-specific processor, through to life-critical software/hardware systems, cellular system RF modelling/instrumentation/measurement, and telecom server systems. I have a solid feel for where the boundaries aren't.

And yet apparently you are still missing the crucial difference.
> You can always try to see differences and then get totally worked up about how things are not equal.

That difference has important implications for FPGA designs, which hinders a lot of folks who are coming into the FPGA world from the software world. And the fact that HDLs visually look similar to software languages doesn't help to bridge this gap.
> Try looking at it a different way: a screw and a nail are completely different, and yet they perform the same function: keeping materials together. Which one is best to use is highly debatable in many cases.

Mechanical folks might have a problem with that statement.
Good. Use that to describe what is and isn't hardware in an x86 processor.
That should be easy enough.
I don't understand what you are attempting to say. However, looking at "hardware doesn't change from one operation to another":
If that's the case then some FPGAs aren't hardware. Do you really mean that?
Example: the Xilinx Partial Reconfiguration. "Partial reconfiguration is a technique that allows replacing the logic of some parts of the FPGA, while its other parts are working normally. This consists of feeding the FPGA with a bitstream, exactly like the initial bitstream that programs its functionality on powerup. However the bitstream for Partial Reconfiguration doesn't cause the FPGA to halt. Instead, it works on specific logic elements, and updates the memory cells that control their behavior. It's a hot replacement of specific logic blocks." https://www.01signal.com/vendor-specific/xilinx/partial-reconfiguration/part1-introduction/
That, of course, is exactly equivalent to what happens when an operating system loads and runs an application program.
There are many differences, just as there are many differences between design strategies, differences between computer languages - and screws and nails.
You are missing the crucial similarities.
You are overestimating the differences.
Trivial example: frequently screws are inserted with a hammer - and then screwed up for the last quarter turn.
> Good. Use that to describe what is and isn't hardware in an x86 processor. That should be easy enough.

The circuit is hardware. Everything else is software. And yes, I'm aware of micro-ops and all that jazz. That's still software and not hardware.
> I don't understand what you are attempting to say. However, looking at "hardware doesn't change from one operation to another": if that's the case then some FPGAs aren't hardware. Do you really mean that? Example: the Xilinx Partial Reconfiguration.

When PR is occurring, the circuit is not functional, so at that stage it's neither. But once it's completed, it behaves like a regular circuit.

> That, of course, is exactly equivalent to what happens when an operating system loads and runs an application program.

Absolutely NOT. An OS task switch doesn't change the hardware. That is an example of coarse-grained parallelism: command streams are chopped into chunks, and each chunk is executed sequentially, but if the chunks are made small enough and observed over large enough intervals, it appears as if those commands are being executed in parallel.
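That coarse-grained time-slicing can be sketched in a few lines (hypothetical task names; cooperative generators stand in for the command streams):

```python
# Sketch of coarse-grained "parallelism" by time-slicing: two sequential
# tasks are chopped into small chunks and interleaved, so over a long
# enough interval they appear to run at the same time.

def task(name, steps):
    for i in range(steps):
        yield f"{name}{i}"

def round_robin(tasks):
    """Run one chunk of each task in turn until all are finished."""
    trace = []
    while tasks:
        t = tasks.pop(0)
        try:
            trace.append(next(t))
            tasks.append(t)       # not finished: back of the queue
        except StopIteration:
            pass                  # finished: drop it
    return trace

trace = round_robin([task("A", 2), task("B", 2)])  # A0, B0, A1, B1
```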
> There are many differences, just as there are many differences between design strategies, differences between computer languages - and screws and nails. You are missing the crucial similarities. You are overestimating the differences.

Similarities are superficial, while differences are fundamental.
> That difference has important implications for FPGA designs, which hinders a lot of folks who are coming into the FPGA world from the software world. And the fact that HDLs visually look similar to software languages doesn't help to bridge this gap.
The same observation can be made about different software programming paradigms, and even languages.
Fundamentally language syntax is trivial; language semantics is far more significant.
Simple example: "7 - 7 - 7" gives different results in different languages. In one language, "7 - 7 - 7" is numerically equal to "7 - 7 - 7 - 7 - 7".
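For instance, here is a Python sketch of the two conventions: C-family languages evaluate "-" left to right, while APL-family languages evaluate right to left, so in the latter "7 - 7 - 7" and "7 - 7 - 7 - 7 - 7" both come out as 7.

```python
def subtract_left(xs):
    # Left-associative, as in C or Python: ((7 - 7) - 7)
    acc = xs[0]
    for x in xs[1:]:
        acc = acc - x
    return acc

def subtract_right(xs):
    # Right-associative, as in APL: (7 - (7 - 7))
    acc = xs[-1]
    for x in reversed(xs[:-1]):
        acc = x - acc
    return acc

subtract_left([7, 7, 7])       # -7
subtract_right([7, 7, 7])      # 7
subtract_right([7, 7, 7, 7, 7])  # also 7
```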
Other examples might include the radically different semantics of FSM languages, logic programming languages, constraint satisfaction language etc.
So an Intel processor is software? Most people would disagree with you on that point.
The FPGA continues to operate during partial reconfiguration.
Task switching is not the point; it occurs after the program has been loaded and is running.
CP/M and MS-DOS load and run a single application at a time; there is no task switching per se.
You've got that the wrong way round.
The fundamental similarities are easily understood when identical functionality is implemented using either hardware or software or a combination of the two. Frequently the fundamental behaviour remains fixed while the exact partitioning changes over time as the superficial differences change[1], but also according to different constraints[2].
[1] during development and/or product lifetime
[2] especially cost and size and performance
> So an Intel processor is software? Most people would disagree with you on that point.

Don't be so dense. The CPU's hardware is hardware; its firmware (microcode) is software.
> The fundamental similarities are easily understood when identical functionality is implemented using either hardware or software or a combination of the two. Frequently the fundamental behaviour remains fixed while the exact partitioning changes over time as the superficial differences change[1], but also according to different constraints[2].
> [1] during development and/or product lifetime
> [2] especially cost and size and performance

Software and hardware implementations are never identical; there are always fundamental differences, even if they sometimes appear subtle to someone who doesn't understand hardware.
For example, a logic gate reacts immediately (ignoring propagation delay) to a change of input signal, while software can only "react" at fixed intervals of time. That is a fundamental difference.
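That claim can be illustrated with a toy simulation (arbitrary time units and pulse timing, chosen purely for illustration): continuously evaluated logic "sees" a short input pulse, while a loop polling every 10 units samples right past it.

```python
def input_signal(t):
    # A short pulse: high only for t in [13, 14)
    return 13 <= t < 14

# "Hardware": combinational logic tracks the input at every instant
# (approximated here by checking every time unit).
hardware_saw_pulse = any(input_signal(t) for t in range(100))         # True

# "Software": a loop polling once per 10-unit tick samples at
# t = 0, 10, 20, ... and misses the pulse entirely.
software_saw_pulse = any(input_signal(t) for t in range(0, 100, 10))  # False
```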
You are choosing to snip so much context that the points I make are being obscured in favour of the points you would like to make.
Not going to fall for that debating technique!
So which are the electrons stored in a transistor's gate - hardware or software?
Insufficient distinction.
Take some functionality and implement it using radically different software paradigms. The resulting implementations will not be identical and will have fundamental differences.
Hardware is just another step on the continuum between formal mathematical expressions and particles and waves.
Not true, of course - for several reasons related to the intractability of asynchronous behaviour of systems at various conceptual levels.
Almost all practical hardware reacts only at fixed intervals of time, due to the intractability of creating designs where the ordering of events is undefined. If you don't understand why, take time to understand when and why it is necessary to insert "bridging terms" into logic implementations expressed in the form of Karnaugh maps.
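A classic instance of such a bridging term, sketched here in Python (the one-gate-delay inverter lag is modelled by passing the stale inverter output explicitly): for f = a·b + ā·c with b = c = 1, the output should stay 1 while a toggles, but a lagging inverter opens a momentary gap that the redundant term b·c covers.

```python
def f_without_bridge(a, a_inv, b, c):
    # f = (a AND b) OR (NOT a AND c); the inverter output a_inv is a
    # separate argument so its delay can be modelled.
    return (a & b) | (a_inv & c)

def f_with_bridge(a, a_inv, b, c):
    # Same function plus the redundant "bridging" term (b AND c),
    # which covers the transition of a.
    return (a & b) | (a_inv & c) | (b & c)

# a falls 1 -> 0, but the inverter output is one gate delay late,
# so briefly a = 0 and a_inv = 0 at the same time.
glitch  = f_without_bridge(a=0, a_inv=0, b=1, c=1)  # 0: momentary 0-glitch
covered = f_with_bridge(a=0, a_inv=0, b=1, c=1)     # 1: output held at 1
```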
> You are choosing to snip so much context that the points I make are being obscured in favour of the points you would like to make. Not going to fall for that debating technique!

I've skipped irrelevant details to focus on what's important, so that readers won't have to suffer through your irrelevant brain dumps.

It becomes more and more clear to me that you simply don't understand what hardware is, hence your constant pitching of XMOS. You are too stuck in the software world to see that there is a whole other world - one which actually provides you with something to run your software on.
> Almost all practical hardware reacts only at fixed intervals of time, due to the intractability of creating designs where the ordering of events is undefined. If you don't understand why, take time to understand when and why it is necessary to insert "bridging terms" into logic implementations expressed in the form of Karnaugh maps.

They are not intractable. They are just difficult.
There are, for example, fully asynchronous DSPs. I'm not clear whether they show any cost/performance benefits over more conventional designs. What bothers many people about them is that you just don't know how much latency there will be between input and output, as it varies from sample to sample of the device. As long as you design your system for the specified worst case of the device, it shouldn't be an issue.
> That difference has important implications for FPGA designs, which hinders a lot of folks who are coming into the FPGA world from the software world. And the fact that HDLs visually look similar to software languages doesn't help to bridge this gap.

IMHO your view is way too narrow and really focused on the edge of an FPGA, where signals go in and out. But that is just a very tiny part of doing digital logic design. Back when I was doing my EE study I also took several classes on digital IC logic design. Compared to software development, that was/is highly abstract. It didn't even involve controlling anything real (unlike software). Just running simulation after simulation and doing analysis of test vector coverage. And this is true for a lot of logic design work. When I design a new part for a complicated piece of logic in an FPGA, I start out with simulations, and once the logic does what I need it to do, I add it somewhere to the rest of the design. However, I really can't see how this workflow is any different from developing a new software module in C (which also involves providing stimuli and checking output, aimed at maximising test coverage).
So yes, HDLs look like software development tools because they are software development tools.
> They are not intractable. They are just difficult. There are, for example, fully asynchronous DSPs. I'm not clear whether they show any cost/performance benefits over more conventional designs. What bothers many people about them is that you just don't know how much latency there will be between input and output, as it varies from sample to sample of the device. As long as you design your system for the specified worst case of the device, it shouldn't be an issue.

"Asynchronous" means different things to different people.
> "Asynchronous" means different things to different people.

I learned asynchronous state machine design, which used no FFs. In essence, the feedback paths created latches in the logic, but still, there was no clock or enable.
I've also studied asynchronous processors, which have no free-running master clock. They do have FFs with clock inputs, but the clocks are generated locally and are stopped when there is no data to process. The clock is often generated with a variable delay, corresponding to the timing of the particular circuit processing the data. The GreenArrays GA144 has 144 such processors, each running at a 700 MIPS peak!
The speed advantage of the async processor is in being able to exploit portions of the design that are faster than the rest of the logic. In a properly clocked design, the entire circuit runs at the speed of the slowest path.
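A back-of-envelope illustration of that point, with made-up stage delays: a synchronous pipeline must clock every stage at the slowest stage's delay, while an asynchronous one only pays each stage's own delay.

```python
# Illustrative numbers only: three pipeline stages with different delays.
stage_delays_ns = [2.0, 5.0, 3.0]

# Synchronous: the clock period is set by the slowest stage, so one item
# takes (number of stages) * (worst stage delay) to pass through.
sync_latency_ns = len(stage_delays_ns) * max(stage_delays_ns)   # 15.0 ns

# Asynchronous: each stage hands data on as soon as it finishes.
async_latency_ns = sum(stage_delays_ns)                         # 10.0 ns
```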
It also achieves speed gains from PVT (Process, Voltage and Temperature). However, these gains cannot be counted on, other than perhaps running at a higher or lower voltage to trade off speed and power.