Author Topic: Learning FPGAs: wrong approach?  (Read 55134 times)


Offline Cerebus

  • Super Contributor
  • ***
  • Posts: 10576
  • Country: gb
Re: Learning FPGAs: wrong approach?
« Reply #75 on: June 17, 2017, 07:09:05 pm »
Late-breaking news: the board has shipped - from Mouser, the very place I looked for stock. I must have had a serious bout of 'senior moments' yesterday!

Rather than a 'senior moment', it's more likely that Lattice have some reserved fulfilment stock at Mouser that doesn't show up as stock available for sale.
Anybody got a syringe I can use to squeeze the magic smoke back into this?
 

Offline Cerebus

  • Super Contributor
  • ***
  • Posts: 10576
  • Country: gb
Re: Learning FPGAs: wrong approach?
« Reply #76 on: June 17, 2017, 07:19:11 pm »
p.s.
Why Lattice? I've never used them; I'm curious.

For parts of equivalent size to those offered by Xilinx or Altera, I don't think Lattice necessarily offers any particular advantages. Where I think they have a winner is the ICE40 range, where there are a number of FPGAs in the £3-5 bracket (one-off prices) with 1k to 8k LEs/cells/pick-your-own-terminology, available in prototyping-friendly QFP and QFN packages.
Anybody got a syringe I can use to squeeze the magic smoke back into this?
 

Offline rstofer

  • Super Contributor
  • ***
  • Posts: 9889
  • Country: us
Re: Learning FPGAs: wrong approach?
« Reply #77 on: June 17, 2017, 07:53:42 pm »
To the recent comment re: Vivado and its capability: yes, it really will do everything. And in many cases there are multiple ways to get things done. But if you were a brand-new EE student, would you want to use Vivado for your very first project?

Part of my problem with Vivado is that I am used to ISE.  I have been using ISE for 13 years or so and I still use it for Spartan 3 projects.  I haven't spent enough time with Vivado to get comfortable.  Lattice Diamond doesn't do everything that Vivado does and, in my view, Diamond is an easier way to start.  Or maybe I just like it because it is closer to ISE.

But, yes, Vivado is a tremendous upgrade from ISE.

 

Online mikeselectricstuff

  • Super Contributor
  • ***
  • Posts: 13742
  • Country: gb
    • Mike's Electric Stuff
Re: Learning FPGAs: wrong approach?
« Reply #78 on: June 17, 2017, 09:16:30 pm »
p.s.
Why Lattice? I've never used them; I'm curious.

For parts of equivalent size to those offered by Xilinx or Altera, I don't think Lattice necessarily offers any particular advantages. Where I think they have a winner is the ICE40 range, where there are a number of FPGAs in the £3-5 bracket (one-off prices) with 1k to 8k LEs/cells/pick-your-own-terminology, available in prototyping-friendly QFP and QFN packages.
Not familiar with the ICE40, but an advantage of the XO2 family is on-board flash, plus a core voltage regulator and even an internal oscillator, so they are very usable on 2-layer PCBs with no additional support parts - just a 3.3V supply and a JTAG header, and off you go.
Youtube channel:Taking wierd stuff apart. Very apart.
Mike's Electric Stuff: High voltage, vintage electronics etc.
Day Job: Mostly LEDs
 

Offline Cerebus

  • Super Contributor
  • ***
  • Posts: 10576
  • Country: gb
Re: Learning FPGAs: wrong approach?
« Reply #79 on: June 17, 2017, 09:58:30 pm »
p.s.
Why Lattice? I've never used them; I'm curious.

For parts of equivalent size to those offered by Xilinx or Altera, I don't think Lattice necessarily offers any particular advantages. Where I think they have a winner is the ICE40 range, where there are a number of FPGAs in the £3-5 bracket (one-off prices) with 1k to 8k LEs/cells/pick-your-own-terminology, available in prototyping-friendly QFP and QFN packages.
Not familiar with the ICE40, but an advantage of the XO2 family is on-board flash, plus a core voltage regulator and even an internal oscillator, so they are very usable on 2-layer PCBs with no additional support parts - just a 3.3V supply and a JTAG header, and off you go.

Some, but not all, of the ICE40 range have those features, with the exception of an on-board core voltage regulator - they need a nominal 1.2V plus whatever your I/O standard requires. Lattice have always been good at integrating features that get you closer to the ideal of 'just needs a supply and a programming header'. Anybody else remember their in-system programmable PALs, back when everybody else's PALs needed dedicated out-of-circuit, high-voltage programming?
Anybody got a syringe I can use to squeeze the magic smoke back into this?
 

Offline Bassman59

  • Super Contributor
  • ***
  • Posts: 2501
  • Country: us
  • Yes, I do this for a living
Re: Learning FPGAs: wrong approach?
« Reply #80 on: June 20, 2017, 10:47:22 pm »
If you have the option (= if your boss/customers will pay for it), switch to Sigasi. It's the Eclipse-like IDE for HDL :D

What does Sigasi cost these days? Their web site has the usual "contact me with pricing information" form, which usually indicates an expensive product. I seem to remember that it was $80 a month, but that was a few years ago.
 

Offline jefflieu

  • Contributor
  • Posts: 43
  • Country: au
Re: Learning FPGAs: wrong approach?
« Reply #81 on: June 20, 2017, 10:57:18 pm »
Would learning FPGAs by writing peripherals for a NIOS system be interesting to you?
I learned quite a lot when I was an intern and had to modify a peripheral of an existing MicroBlaze system to extend its functionality.
Everything else had been set up: timing constraints, pin configuration, etc.
I only needed to work out how the bus worked and write simple code to let the bus read and write registers, clear registers on read, etc.

I have a project here and always need new peripherals, and then verification on different boards:
www.github.com/jefflieu/recon
If you've been doing software, most of this should be familiar to you.

Cheers,
Jeff
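A peripheral of the sort Jeff describes can be quite small. Below is a minimal Verilog sketch of a hypothetical memory-mapped slave with a single clear-on-read status register; the port names and the simplified Avalon-MM-style handshake are assumptions for illustration, not code from the linked repo.

```verilog
// Hypothetical memory-mapped peripheral: one status register that is
// cleared when read (simplified Avalon-MM-style slave, illustrative only).
module status_reg (
    input  wire        clk,
    input  wire        rst_n,
    input  wire        read,       // bus read strobe
    input  wire        event_in,   // hardware event to latch
    output reg  [31:0] readdata    // data returned to the bus
);
    reg [31:0] status;

    always @(posedge clk or negedge rst_n) begin
        if (!rst_n) begin
            status   <= 32'b0;
            readdata <= 32'b0;
        end else begin
            if (event_in)
                status <= status | 32'h1;   // latch the event bit
            if (read) begin
                readdata <= status;         // present current status
                // clear on read; keep an event arriving this same cycle
                status   <= event_in ? 32'h1 : 32'b0;
            end
        end
    end
endmodule
```

The entire exercise is the few lines inside the clocked process - which is why this style of bus peripheral is a gentle first RTL project for someone coming from software.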


i love Melbourne
 

Online mikeselectricstuff

  • Super Contributor
  • ***
  • Posts: 13742
  • Country: gb
    • Mike's Electric Stuff
Re: Learning FPGAs: wrong approach?
« Reply #82 on: June 21, 2017, 07:42:34 am »
p.s.
Why Lattice? I've never used them; I'm curious.

For parts of equivalent size to those offered by Xilinx or Altera, I don't think Lattice necessarily offers any particular advantages. Where I think they have a winner is the ICE40 range, where there are a number of FPGAs in the £3-5 bracket (one-off prices) with 1k to 8k LEs/cells/pick-your-own-terminology, available in prototyping-friendly QFP and QFN packages.
Not familiar with the ICE40, but an advantage of the XO2 family is on-board flash, plus a core voltage regulator and even an internal oscillator, so they are very usable on 2-layer PCBs with no additional support parts - just a 3.3V supply and a JTAG header, and off you go.

Some, but not all, of the ICE40 range have those features, with the exception of an on-board core voltage regulator - they need a nominal 1.2V plus whatever your I/O standard requires. Lattice have always been good at integrating features that get you closer to the ideal of 'just needs a supply and a programming header'. Anybody else remember their in-system programmable PALs, back when everybody else's PALs needed dedicated out-of-circuit, high-voltage programming?
I thought the ICE40 had OTP memory - or are there now some flash versions?
Youtube channel:Taking wierd stuff apart. Very apart.
Mike's Electric Stuff: High voltage, vintage electronics etc.
Day Job: Mostly LEDs
 

Online mikeselectricstuff

  • Super Contributor
  • ***
  • Posts: 13742
  • Country: gb
    • Mike's Electric Stuff
Re: Learning FPGAs: wrong approach?
« Reply #83 on: June 21, 2017, 08:02:54 am »
Would learning FPGAs by writing peripherals for a NIOS system be interesting to you?
I learned quite a lot when I was an intern and had to modify a peripheral of an existing MicroBlaze system to extend its functionality.
Everything else had been set up: timing constraints, pin configuration, etc.
I only needed to work out how the bus worked and write simple code to let the bus read and write registers, clear registers on read, etc.

I have a project here and always need new peripherals, and then verification on different boards:
www.github.com/jefflieu/recon
If you've been doing software, most of this should be familiar to you.

Cheers,
Jeff
Out of interest, what's the compile/run/debug cycle time doing that? Does including the NIOS stuff add a lot?
Youtube channel:Taking wierd stuff apart. Very apart.
Mike's Electric Stuff: High voltage, vintage electronics etc.
Day Job: Mostly LEDs
 

Offline jefflieu

  • Contributor
  • Posts: 43
  • Country: au
Re: Learning FPGAs: wrong approach?
« Reply #84 on: June 21, 2017, 08:48:30 am »
Would learning FPGAs by writing peripherals for a NIOS system be interesting to you?
I learned quite a lot when I was an intern and had to modify a peripheral of an existing MicroBlaze system to extend its functionality.
Everything else had been set up: timing constraints, pin configuration, etc.
I only needed to work out how the bus worked and write simple code to let the bus read and write registers, clear registers on read, etc.

I have a project here and always need new peripherals, and then verification on different boards:
www.github.com/jefflieu/recon
If you've been doing software, most of this should be familiar to you.

Cheers,
Jeff
Out of interest, what's the compile/run/debug cycle time doing that? Does including the NIOS stuff add a lot?
Can you please be more specific - doing what? (This could be off-topic, though.)
If by 'add a lot' you mean resources, the NIOS stuff costs about 1000 to 1500 LUTs plus flops for a simple CPU core and the Avalon bus.
If you mean a lot of effort, then yes, it takes some effort to set up the hardware and software correctly, but it's not so bad.
I think once a NIOS system is set up, FPGAs can be learnt by adding/creating new peripherals; especially if you're familiar with software, it'll be more interesting.
The coding for CPU peripherals is mostly RTL design, I'd say.
i love Melbourne
 

Online mikeselectricstuff

  • Super Contributor
  • ***
  • Posts: 13742
  • Country: gb
    • Mike's Electric Stuff
Re: Learning FPGAs: wrong approach?
« Reply #85 on: June 21, 2017, 09:45:53 am »
Would learning FPGAs by writing peripherals for a NIOS system be interesting to you?
I learned quite a lot when I was an intern and had to modify a peripheral of an existing MicroBlaze system to extend its functionality.
Everything else had been set up: timing constraints, pin configuration, etc.
I only needed to work out how the bus worked and write simple code to let the bus read and write registers, clear registers on read, etc.

I have a project here and always need new peripherals, and then verification on different boards:
www.github.com/jefflieu/recon
If you've been doing software, most of this should be familiar to you.

Cheers,
Jeff
Out of interest, what's the compile/run/debug cycle time doing that? Does including the NIOS stuff add a lot?
Can you please be more specific - doing what? (This could be off-topic, though.)
If by 'add a lot' you mean resources, the NIOS stuff costs about 1000 to 1500 LUTs plus flops for a simple CPU core and the Avalon bus.
If you mean a lot of effort, then yes, it takes some effort to set up the hardware and software correctly, but it's not so bad.
I think once a NIOS system is set up, FPGAs can be learnt by adding/creating new peripherals; especially if you're familiar with software, it'll be more interesting.
The coding for CPU peripherals is mostly RTL design, I'd say.
No, I mean comparing developing a standalone function versus hanging something off a NIOS processor: what is the time penalty in synthesize/place & route time for the latter?
I've not used Altera, but IME with ISE and Diamond, for small designs, compile cycles of low tens of seconds are typical, and tolerable for a write/compile/debug/repeat workflow.
If adding a processor makes this a lot longer, any benefit of using a processor to simplify testing may be outweighed by the extended debug cycle times.
Youtube channel:Taking wierd stuff apart. Very apart.
Mike's Electric Stuff: High voltage, vintage electronics etc.
Day Job: Mostly LEDs
 

Offline legacy

  • Super Contributor
  • ***
  • !
  • Posts: 4415
  • Country: ch
Re: Learning FPGAs: wrong approach?
« Reply #86 on: June 21, 2017, 10:23:57 am »
What does Sigasi cost these days?

I don't know, nor do I want to know :D

What is the benefit of being employed (even if not permanently - say, a one-year contract)? When you work freelance you may be asked to provide your own tools (software, laptop, DSO, LA, RLC meter, etc.), which means you at least have to phone vendors and waste your time with their marketing offices; sometimes you also have to phone your bank to ask for funds to buy them, since your customer will only refund you once the job is done. I mean advance funds, refunded with a margin.

As an employee there is always a wonderful secretary (yes, I have a secretary now, and two tulip plants in my office) who does that job for you, and a person on the staff who pays for your tools.

Awesome!!! So, who cares? Now I am more interested in productivity, since more productivity means more plants in my office, maybe a bigger office with two secretaries and an aquarium with tropical fish :D :D :D
 

Offline jefflieu

  • Contributor
  • Posts: 43
  • Country: au
Re: Learning FPGAs: wrong approach?
« Reply #87 on: June 21, 2017, 11:47:39 am »
No, I mean comparing developing a standalone function versus hanging something off a NIOS processor: what is the time penalty in synthesize/place & route time for the latter?
I've not used Altera, but IME with ISE and Diamond, for small designs, compile cycles of low tens of seconds are typical, and tolerable for a write/compile/debug/repeat workflow.
If adding a processor makes this a lot longer, any benefit of using a processor to simplify testing may be outweighed by the extended debug cycle times.
Compilation time is about 4 minutes for a design of 3K LUTs. I wouldn't say there's any penalty for using NIOS; it depends on what you want to do. Generally, compilation time is related to the size of the design and of the chip. An embedded processor lets you do certain stuff quickly once the hardware is done. Interesting systems often comprise a processor running the control stuff and FPGA fabric implementing the custom stuff.
i love Melbourne
 

Offline JPortici

  • Super Contributor
  • ***
  • Posts: 3461
  • Country: it
Re: Learning FPGAs: wrong approach?
« Reply #88 on: June 21, 2017, 12:29:31 pm »
Awesome!!! So, who cares? Now, I am more interested in productivity since more productivity means more plants in my office, may be a bigger office with two secretaries and an aquarium with tropical fishes  :D :D :D

[OT]
And Fantozzi's rise-through-the-ranks scene comes to mind :D
[/OT]
 

Offline Cerebus

  • Super Contributor
  • ***
  • Posts: 10576
  • Country: gb
Re: Learning FPGAs: wrong approach?
« Reply #89 on: June 21, 2017, 02:36:58 pm »
I thought the ICE40 had OTP memory - or are there now some flash versions?

Sorry, in an effort at writing economy I stuffed that up. Some have OTP memory, some don't; all can work with external SPI flash, and all can be configured over SPI by an MPU. Some have an on-board oscillator, some don't.
Anybody got a syringe I can use to squeeze the magic smoke back into this?
 

Offline sporadic

  • Regular Contributor
  • *
  • Posts: 72
  • Country: us
    • forkineye.com
Re: Learning FPGAs: wrong approach?
« Reply #90 on: June 22, 2017, 06:58:37 pm »
For all the Python haters, yes.. you can design hardware with Python - http://www.myhdl.org :)
 

Online nctnico

  • Super Contributor
  • ***
  • Posts: 26896
  • Country: nl
    • NCT Developments
Re: Learning FPGAs: wrong approach?
« Reply #91 on: June 22, 2017, 07:46:33 pm »
No, I mean comparing developing a standalone function versus hanging something off a NIOS processor: what is the time penalty in synthesize/place & route time for the latter?
I've not used Altera, but IME with ISE and Diamond, for small designs, compile cycles of low tens of seconds are typical, and tolerable for a write/compile/debug/repeat workflow.
If adding a processor makes this a lot longer, any benefit of using a processor to simplify testing may be outweighed by the extended debug cycle times.
You can always simulate. I like that better for complex designs because it allows you to see ANY signal in detail and figure out what is wrong (very similar to stepping through a piece of C code with a debugger).
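That level of visibility comes almost for free in a plain HDL testbench: a waveform dump exposes every internal signal for inspection afterwards. A minimal Verilog sketch (here `my_design` is a placeholder for whatever unit is under test):

```verilog
// Minimal testbench skeleton: dump every signal for waveform inspection.
`timescale 1ns/1ps
module tb;
    reg clk   = 0;
    reg rst_n = 0;
    always #5 clk = ~clk;             // 100 MHz clock

    // my_design is a placeholder for the unit under test
    my_design dut (.clk(clk), .rst_n(rst_n));

    initial begin
        $dumpfile("tb.vcd");          // write a VCD waveform file
        $dumpvars(0, tb);             // depth 0 = every signal in the hierarchy
        #20   rst_n = 1;              // release reset
        #1000 $finish;
    end
endmodule
```

Opening the resulting VCD in a waveform viewer gives exactly the "inspect any signal" workflow described above, with no extra instrumentation in the design itself.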
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Offline Bruce Abbott

  • Frequent Contributor
  • **
  • Posts: 627
  • Country: nz
    • Bruce Abbott's R/C Models and Electronics
Re: Learning FPGAs: wrong approach?
« Reply #92 on: June 22, 2017, 07:47:05 pm »
For all the Python haters, yes.. you can design hardware with Python - http://www.myhdl.org :)
As if VHDL and Verilog weren't confusing enough, now we have another HDL to learn.

What advantages does MyHDL have over the other two?
 

Offline sporadic

  • Regular Contributor
  • *
  • Posts: 72
  • Country: us
    • forkineye.com
Re: Learning FPGAs: wrong approach?
« Reply #93 on: June 22, 2017, 07:52:00 pm »
For all the Python haters, yes.. you can design hardware with Python - http://www.myhdl.org :)
As if VHDL and Verilog weren't confusing enough, now we have another HDL to learn.

What advantages does MyHDL have over the other two?
It actually compiles to VHDL or Verilog for synthesis. The site explains the pros and cons better than I ever could. It's legit, though - it's been used for ASICs.
 

Offline rstofer

  • Super Contributor
  • ***
  • Posts: 9889
  • Country: us
Re: Learning FPGAs: wrong approach?
« Reply #94 on: June 22, 2017, 09:18:59 pm »
For all the Python haters, yes.. you can design hardware with Python - http://www.myhdl.org :)
As if VHDL and Verilog weren't confusing enough, now we have another HDL to learn.

What advantages does MyHDL have over the other two?

I don't see it either! Anything I can do with Python HDL I can do directly in VHDL, skipping a couple of steps. Perhaps the Python simulation is a little faster (maybe even a lot faster), but I don't usually bother with simulation. If I did, I would use the chip vendor's simulator - it's the only opinion that matters.

I look at it as "just because I can".

Maybe somebody can make the case that I should care about this but, at the moment, I don't.
 

Offline hamster_nz

  • Super Contributor
  • ***
  • Posts: 2803
  • Country: nz
Re: Learning FPGAs: wrong approach?
« Reply #95 on: June 22, 2017, 09:27:01 pm »
For all the Python haters, yes.. you can design hardware with Python - http://www.myhdl.org :)
As if VHDL and Verilog weren't confusing enough, now we have another HDL to learn.

What advantages does MyHDL have over the other two?

All these High Level Synthesis (HLS) HDLs seem to have common threads to address these (and other) problems:

- Couldn't hardware design be more like programming?
- The level of abstraction in HDLs is too low
- I don't want to micromanage bits - I just want it to work like integers and floats
- Productivity of HDLs is too low - e.g. testing through simulation is slow.
- I want to use programmers, not hardware designers

I have played with a couple.

- "Couldn't it be more like programming?" There is a solid barrier that makes one unlike the other. Programming updates a little bit of data every cycle. To make the most of FPGAs you cannot use them like that; you need to build pipelines and have data flow through your design in a way that software can't.

- "The level of abstraction in HDLs is too low". You can break out of low-level HDL programming if you want, but at the cost of doing things somebody else's way, and most likely paying a lot for IP blocks that are huge, complex and costly. However, if your needs are unique, then you need to work at low levels of abstraction for at least part of the design. The 80/20 rule applies.

- "I don't want to micromanage bits" - If you want to burn through FPGA resources at an alarming rate and get minimal performance, all your 'variables' can be 64-bit integers. Sometimes the tools will pick up a reduced range and optimize unused bits away; sometimes they won't. The tighter you constrain the design (e.g. the size of counters), the better it will perform.

- "Productivity of HDLs is too low compared to programming" - Very valid. Being able to test designs efficiently, like software, is awesome. But then you have to verify that the resulting design actually is equivalent to the software...

- "I want to use programmers, not hardware designers" - If the programmer can't envisage what the design will look like in hardware, then they are just fumbling around in the dark. They will spend a lot of time trying to find a way to express what they are trying to do that the tool set likes and that produces an efficient design.
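The 'micromanage bits' point can be made concrete: declaring only the width a signal actually needs lets synthesis build a small counter rather than a 64-bit one. An illustrative Verilog fragment (module and port names invented for the example):

```verilog
// Constraining widths: a counter that wraps at 999 needs 10 bits, not 64.
module tick_counter (
    input  wire       clk,
    input  wire       rst_n,
    output reg  [9:0] count   // 10 bits cover 0..1023; a 64-bit reg would
                              // cost several times the flip-flops and carry logic
);
    always @(posedge clk or negedge rst_n) begin
        if (!rst_n)
            count <= 10'd0;
        else
            count <= (count == 10'd999) ? 10'd0 : count + 10'd1;
    end
endmodule
```

The explicit wrap comparison also tells the tools the true range, which is exactly the information a 64-bit 'just use an integer' style throws away.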

In short, it seems to be great when cost is no object (e.g. research), performance is no object (e.g. research), and you are rapidly testing new things (e.g. research).

You can also paint yourself into a dead end. If your design tests out OK and fits into the target chip but does not meet timing requirements, what can you do? You need a skilled HDL coder to rewrite the slow bit.

For some commercial use it is also workable, but requires a skilled hardware designer who knows the HLS tools and the problem space intimately, rather than a generic C/Python/whatever hack.

So in short, it ends up with high-level code that is written in a quirky, ungainly way that a 'normal programmer' can read and maybe make sense of - but a normal programmer will have minimal understanding of why it is like that. A single "refactor" of a module to make it "more normal" will break everything.

Gaze not into the abyss, lest you become recognized as an abyss domain expert, and they expect you keep gazing into the damn thing.
 
The following users thanked this post: MK14

Offline Bruce Abbott

  • Frequent Contributor
  • **
  • Posts: 627
  • Country: nz
    • Bruce Abbott's R/C Models and Electronics
Re: Learning FPGAs: wrong approach?
« Reply #96 on: June 22, 2017, 09:54:51 pm »
It actually compiles to VHDL or Verilog for synthesis. The site explains the pros and cons better than I ever could.
Apart from 'empowering hardware designers with the elegance and simplicity of the Python language', the only advantage I could see is that you can apparently quickly create and simulate a design interactively. I say 'apparently' because the website is full of 'page not found's.

"For more information about installing on non-Linux platforms such as Windows, read about Installing Python Modules." - 404 Not Found.

Great! can't even get started...
 

Offline Cerebus

  • Super Contributor
  • ***
  • Posts: 10576
  • Country: gb
Re: Learning FPGAs: wrong approach?
« Reply #97 on: June 22, 2017, 10:10:12 pm »
Perhaps the Python simulation is a little faster (maybe even a lot faster), but I don't usually bother with simulation. If I did, I would use the chip vendor's simulator - it's the only opinion that matters.

For general-purpose Verilog simulation (i.e. not process-specific verification) there's Verilator, an open source simulator that 'compiles' the Verilog into C, which can then itself be compiled into machine code. It's fast, and on the right source material it's blazingly fast.

Also, you can get at the internals of the simulation in a controlled fashion. I've used this to do mixed-model simulation, writing the analogue side of the simulation in C.
Anybody got a syringe I can use to squeeze the magic smoke back into this?
 

Offline Someone

  • Super Contributor
  • ***
  • Posts: 4525
  • Country: au
    • send complaints here
Re: Learning FPGAs: wrong approach?
« Reply #98 on: June 23, 2017, 01:08:41 am »
- "The level of abstraction in HDLs is too low". You can break out of low-level HDL programming if you want, but at the cost of doing things somebody else's way, and most likely paying a lot for IP blocks that are huge, complex and costly. However, if your needs are unique, then you need to work at low levels of abstraction for at least part of the design. The 80/20 rule applies.
Well, you can go to extremely high levels of abstraction in VHDL, so it's possible to have a higher-level language by using the existing tools better. But the core issue is that programming for simultaneous execution is radically different from programming for sequential execution.

There have been some good attempts at C-to-HDL, and they work well at matching some patterns, but remain poor at improving all code. So even with the high-level tools you still end up needing to understand the flow and patterns that fit into logic, just as if you were writing HDL to begin with.
 

Offline rstofer

  • Super Contributor
  • ***
  • Posts: 9889
  • Country: us
Re: Learning FPGAs: wrong approach?
« Reply #99 on: June 23, 2017, 02:09:53 am »
- "The level of abstraction in HDLs is too low". You can break out of low-level HDL programming if you want, but at the cost of doing things somebody else's way, and most likely paying a lot for IP blocks that are huge, complex and costly. However, if your needs are unique, then you need to work at low levels of abstraction for at least part of the design. The 80/20 rule applies.
Well, you can go to extremely high levels of abstraction in VHDL, so it's possible to have a higher-level language by using the existing tools better. But the core issue is that programming for simultaneous execution is radically different from programming for sequential execution.

There have been some good attempts at C-to-HDL, and they work well at matching some patterns, but remain poor at improving all code. So even with the high-level tools you still end up needing to understand the flow and patterns that fit into logic, just as if you were writing HDL to begin with.

This is exactly the point. There is a difference between writing sequential C code and designing hardware, and hardware design is, well, hard. That's why it's called hardware.

Software speaks for itself  - soft.

It doesn't seem to me that CS majors are going to do well with HDL unless they have also taken some EE courses. HDL is an entirely different thing.
 

