EEVblog Electronics Community Forum

Electronics => Microcontrollers => Topic started by: krivx on March 22, 2015, 07:17:41 pm

Title: Lattice iCE40 Bitstream Reverse-Engineered
Post by: krivx on March 22, 2015, 07:17:41 pm
I just saw this project: http://www.clifford.at/icestorm/

Here is the demo video: https://www.youtube.com/watch?v=u1ZHcSNDQMM&feature=youtu.be

It looks like bitstreams can be converted back to Verilog already.
Title: Re: Lattice iCE40 Bitstream Reverse-Engineered
Post by: evb149 on March 23, 2015, 02:46:24 am
Someone did something similar for the Spartan-6 at some point. Given the larger-density parts, that seemed particularly interesting for using the information to optimize one's design or placement, or to experiment with dynamically reconfigurable systems and such.
Title: Re: Lattice iCE40 Bitstream Reverse-Engineered
Post by: free_electron on March 23, 2015, 04:00:12 am
This is pretty much useless, as all it does is pull a netlist of the interconnects. The generated output is neither readable nor portable. You don't get the source, and it is dependent on the actual LUTs of the chip used. If your new target uses a different LUT architecture it won't even synthesize.
Title: Re: Lattice iCE40 Bitstream Reverse-Engineered
Post by: blueskull on March 23, 2015, 04:50:51 am
Yes, actually one idea for this year's Google Summer of Code is to write a place-and-route algorithm for the iCE40. I'm applying for it.
Title: Re: Lattice iCE40 Bitstream Reverse-Engineered
Post by: clifford on March 23, 2015, 03:34:32 pm
This is pretty much useless [...] If your new target uses a different LUT architecture it won't even synthesize.

This is just plain wrong.

In the video I used the following code for initial synthesis:

Code: [Select]
module top (
    input  clk,
    output LED1,
    output LED2,
    output LED3,
    output LED4,
    output LED5
);
    localparam BITS = 5;
    localparam LOG2DELAY = 22;

    // Convert a binary value to Gray code: bit i is in[i] ^ in[i+1].
    function [BITS-1:0] bin2gray(input [BITS-1:0] in);
        integer i;
        reg [BITS:0] temp;
        begin
            temp = in;
            for (i = 0; i < BITS; i = i + 1)
                bin2gray[i] = ^temp[i +: 2];
        end
    endfunction

    reg [BITS+LOG2DELAY-1:0] counter = 0;

    always @(posedge clk)
        counter <= counter + 1;

    assign {LED1, LED2, LED3, LED4, LED5} = bin2gray(counter >> LOG2DELAY);
endmodule
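(As an aside: the bin2gray function above is the standard binary-to-Gray conversion, g = b ^ (b >> 1); the `^temp[i +: 2]` reduction XORs each bit with its neighbour. The same mapping in Python, as an illustrative sketch only:)

```python
# Binary-to-Gray conversion, mirroring the Verilog bin2gray above:
# bit i of the result is temp[i] ^ temp[i+1], i.e. g = b ^ (b >> 1).
def bin2gray(b, bits=5):
    return (b ^ (b >> 1)) & ((1 << bits) - 1)

# Consecutive Gray codes differ in exactly one bit -- which is why the
# LED counter is fed through it: only one LED changes per step.
def popcount(x):
    return bin(x).count("1")
```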

And this was the output of icebox_vlog:

Code: [Select]
module chip (output LED1, output LED3, output LED4, output LED5, output LED2, input clk);

wire n1;
reg LED1, n3, n4, n5, n6, n7, n8, n9, n10, n11, n12, n13, n14, n15, n16, n17, n18, n19, n20, n21, n22, n23, n24, n25, n26, n27, n28;
wire n29, n30, n31, n32, n33, n34, n35, n36, n37, n38, n39, n40, n41, n42, n43, n44, n45, LED3, LED4, LED5, n49, n50, n51, n52, n53, n54, n55, n56, n57, n58, LED2;
assign n1 = clk, n29 = 1;
wire n60, n61, n62, n63, n64, n65, n66, n67, n68, n69, n70, n71, n72, n73, n74, n75, n76, n77, n78, n79, n80, n81, n82, n83, n84, n85, n86;

assign n60  = /* LUT    8  9  0 */ n29 ? !n4 : n4;
assign LED5 = /* LUT    9 11  3 */ n26 ? !n27 : n27;
assign n61  = /* LUT    8  9  1 */ n30 ? !n5 : n5;
assign LED4 = /* LUT    9 11  2 */ n28 ? !n27 : n27;
assign n62  = /* LUT    8  9  2 */ n31 ? !n6 : n6;
assign LED3 = /* LUT    9 11  1 */ n3 ? !n28 : n28;
assign n63  = /* LUT    8  9  3 */ n32 ? !n7 : n7;
assign n64  = /* LUT    8  9  4 */ n33 ? !n8 : n8;
assign n65  = /* LUT    8 10  1 */ n38 ? !n13 : n13;
assign n66  = /* LUT    8  9  5 */ n34 ? !n9 : n9;
assign n67  = /* LUT    8 10  0 */ n37 ? !n12 : n12;
assign n68  = /* LUT    8  9  6 */ n35 ? !n10 : n10;
assign n69  = /* LUT    8 10  3 */ n40 ? !n15 : n15;
assign n70  = /* LUT    8  9  7 */ n36 ? !n11 : n11;
assign n71  = /* LUT    8 10  2 */ n39 ? !n14 : n14;
assign n72  = /* LUT    8 10  5 */ n42 ? !n17 : n17;
assign n73  = /* LUT    8 11  2 */ n50 ? !n22 : n22;
assign n74  = /* LUT    8 10  4 */ n41 ? !n16 : n16;
assign n75  = /* LUT    8 11  3 */ n51 ? !n23 : n23;
assign n76  = /* LUT    8 10  7 */ n44 ? !n19 : n19;
assign n77  = /* LUT    8 11  0 */ n45 ? !n20 : n20;
assign n78  = /* LUT    8 10  6 */ n43 ? !n18 : n18;
assign n79  = /* LUT    8 11  1 */ n49 ? !n21 : n21;
assign n80  = /* LUT    8 11  6 */ n54 ? !n26 : n26;
assign n81  = /* LUT    8 11  7 */ n55 ? !n27 : n27;
assign n82  = /* LUT    8 12  2 */ n58 ? !LED1 : LED1;
assign n83  = /* LUT    8 11  4 */ n52 ? !n24 : n24;
assign n84  = /* LUT    8 12  1 */ n57 ? !n3 : n3;
assign LED2 = /* LUT   12 12  7 */ n3 ? !LED1 : LED1;
assign n85  = /* LUT    8 11  5 */ n53 ? !n25 : n25;
assign n86  = /* LUT    8 12  0 */ n56 ? !n28 : n28;
assign n30  = /* CARRY  8  9  0 */ (0 & n4) | ((0 | n4) & n29);
assign n31  = /* CARRY  8  9  1 */ (0 & n5) | ((0 | n5) & n30);
assign n32  = /* CARRY  8  9  2 */ (0 & n6) | ((0 | n6) & n31);
assign n33  = /* CARRY  8  9  3 */ (0 & n7) | ((0 | n7) & n32);
assign n34  = /* CARRY  8  9  4 */ (0 & n8) | ((0 | n8) & n33);
assign n39  = /* CARRY  8 10  1 */ (0 & n13) | ((0 | n13) & n38);
assign n35  = /* CARRY  8  9  5 */ (0 & n9) | ((0 | n9) & n34);
assign n38  = /* CARRY  8 10  0 */ (0 & n12) | ((0 | n12) & n37);
assign n36  = /* CARRY  8  9  6 */ (0 & n10) | ((0 | n10) & n35);
assign n41  = /* CARRY  8 10  3 */ (0 & n15) | ((0 | n15) & n40);
assign n37  = /* CARRY  8  9  7 */ (0 & n11) | ((0 | n11) & n36);
assign n40  = /* CARRY  8 10  2 */ (0 & n14) | ((0 | n14) & n39);
assign n43  = /* CARRY  8 10  5 */ (0 & n17) | ((0 | n17) & n42);
assign n51  = /* CARRY  8 11  2 */ (0 & n22) | ((0 | n22) & n50);
assign n42  = /* CARRY  8 10  4 */ (0 & n16) | ((0 | n16) & n41);
assign n52  = /* CARRY  8 11  3 */ (0 & n23) | ((0 | n23) & n51);
assign n45  = /* CARRY  8 10  7 */ (0 & n19) | ((0 | n19) & n44);
assign n49  = /* CARRY  8 11  0 */ (0 & n20) | ((0 | n20) & n45);
assign n44  = /* CARRY  8 10  6 */ (0 & n18) | ((0 | n18) & n43);
assign n50  = /* CARRY  8 11  1 */ (0 & n21) | ((0 | n21) & n49);
assign n55  = /* CARRY  8 11  6 */ (0 & n26) | ((0 | n26) & n54);
assign n56  = /* CARRY  8 11  7 */ (0 & n27) | ((0 | n27) & n55);
assign n53  = /* CARRY  8 11  4 */ (0 & n24) | ((0 | n24) & n52);
assign n58  = /* CARRY  8 12  1 */ (0 & n3) | ((0 | n3) & n57);
assign n54  = /* CARRY  8 11  5 */ (0 & n25) | ((0 | n25) & n53);
assign n57  = /* CARRY  8 12  0 */ (0 & n28) | ((0 | n28) & n56);
/* FF  8  9  0 */ always @(posedge n1) if (1) n4 <= 0 ? 0 : n60;
/* FF  8  9  1 */ always @(posedge n1) if (1) n5 <= 0 ? 0 : n61;
/* FF  8  9  2 */ always @(posedge n1) if (1) n6 <= 0 ? 0 : n62;
/* FF  8  9  3 */ always @(posedge n1) if (1) n7 <= 0 ? 0 : n63;
/* FF  8  9  4 */ always @(posedge n1) if (1) n8 <= 0 ? 0 : n64;
/* FF  8 10  1 */ always @(posedge n1) if (1) n13 <= 0 ? 0 : n65;
/* FF  8  9  5 */ always @(posedge n1) if (1) n9 <= 0 ? 0 : n66;
/* FF  8 10  0 */ always @(posedge n1) if (1) n12 <= 0 ? 0 : n67;
/* FF  8  9  6 */ always @(posedge n1) if (1) n10 <= 0 ? 0 : n68;
/* FF  8 10  3 */ always @(posedge n1) if (1) n15 <= 0 ? 0 : n69;
/* FF  8  9  7 */ always @(posedge n1) if (1) n11 <= 0 ? 0 : n70;
/* FF  8 10  2 */ always @(posedge n1) if (1) n14 <= 0 ? 0 : n71;
/* FF  8 10  5 */ always @(posedge n1) if (1) n17 <= 0 ? 0 : n72;
/* FF  8 11  2 */ always @(posedge n1) if (1) n22 <= 0 ? 0 : n73;
/* FF  8 10  4 */ always @(posedge n1) if (1) n16 <= 0 ? 0 : n74;
/* FF  8 11  3 */ always @(posedge n1) if (1) n23 <= 0 ? 0 : n75;
/* FF  8 10  7 */ always @(posedge n1) if (1) n19 <= 0 ? 0 : n76;
/* FF  8 11  0 */ always @(posedge n1) if (1) n20 <= 0 ? 0 : n77;
/* FF  8 10  6 */ always @(posedge n1) if (1) n18 <= 0 ? 0 : n78;
/* FF  8 11  1 */ always @(posedge n1) if (1) n21 <= 0 ? 0 : n79;
/* FF  8 11  6 */ always @(posedge n1) if (1) n26 <= 0 ? 0 : n80;
/* FF  8 11  7 */ always @(posedge n1) if (1) n27 <= 0 ? 0 : n81;
/* FF  8 12  2 */ always @(posedge n1) if (1) LED1 <= 0 ? 0 : n82;
/* FF  8 11  4 */ always @(posedge n1) if (1) n24 <= 0 ? 0 : n83;
/* FF  8 12  1 */ always @(posedge n1) if (1) n3 <= 0 ? 0 : n84;
/* FF  8 11  5 */ always @(posedge n1) if (1) n25 <= 0 ? 0 : n85;
/* FF  8 12  0 */ always @(posedge n1) if (1) n28 <= 0 ? 0 : n86;

endmodule
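(A side note on reading this netlist: each generated CARRY cell has the form `(0 & a) | ((0 | a) & cin)`, which, once the constant zeros are folded, is just `a & cin` — exactly the ripple carry of the `counter + 1` in the source. A quick brute-force check in Python, for illustration:)

```python
# Each CARRY cell in the netlist has the form (0 & a) | ((0 | a) & cin).
# With the constant 0 inputs folded this is a & cin -- the carry chain
# of an increment-by-one, as expected for the counter design.
def carry_cell(a, cin):
    return (0 & a) | ((0 | a) & cin)

# Exhaustively compare against the simplified form for all inputs.
for a in (0, 1):
    for cin in (0, 1):
        assert carry_cell(a, cin) == (a & cin)
```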

Now please point to the construct in that Verilog code that will not synthesize on some targets!

I remember from The Amp Hour that you are not very keen on open source EDA software, so you might not realize the importance of our work in that larger picture. But regardless of that, I think it's still pretty rude to say something like "this is pretty much useless" about someone else's work without actually looking at it first.
Title: Re: Lattice iCE40 Bitstream Reverse-Engineered
Post by: ogoun on March 24, 2015, 05:46:09 am
Hi,
Great work!

I was looking at the Xilinx XC3000 series (dinosaur fpgas), with a view to doing something similar but never got very far.

Given the large number of orphaned designs using these ancient parts, have you considered doing something similar for them?

Cheers,

Pete
Title: Re: Lattice iCE40 Bitstream Reverse-Engineered
Post by: mikeselectricstuff on March 25, 2015, 12:22:25 am
Even if the resulting code is not readable, I can see how it might be useful to simulate it.
And to extract the contents of block-RAM initialisation data.

Title: Re: Lattice iCE40 Bitstream Reverse-Engineered
Post by: mikeselectricstuff on March 25, 2015, 12:29:35 am
This is pretty much useless [...] If your new target uses a different LUT architecture it won't even synthesize.

This is just plain wrong.

But if it used any device-specific features it would be of much more limited use, though it may be possible to generate HDL that emulates the functions of those features.
Of course the iCE40 doesn't really have much in the way of device-specific features...
Quote
I remember from The Amp Hour that you are not very keen on open source EDA software, so you might not realize the importance of our work in that larger picture,
How exactly is it important? To whom?
Whilst I'd be the first to say that the whole FPGA software process is a pain, archaic, bloated and generally crap, I can't see any realistic chance of an open-source solution ever becoming a usable alternative for anything but the most trivial cases.

The ability to discover & understand internal functionality from a bitstream is of course interesting and potentially useful for reverse-engineering, but considering the size and complexity of even low-end FPGAs these days, and how little progress has been made on this in the last 20-odd years, it seems like it's still a long way away...

Title: Re: Lattice iCE40 Bitstream Reverse-Engineered
Post by: blueskull on March 25, 2015, 12:39:24 am
This is pretty much useless [...] If your new target uses a different LUT architecture it won't even synthesize.

This is just plain wrong.

But if it used any device-specific features it would be of much more limited use, though it may be possible to generate HDL that emulates the functions of those features.
Of course the iCE40 doesn't really have much in the way of device-specific features...
Quote
I remember from The Amp Hour that you are not very keen on open source EDA software, so you might not realize the importance of our work in that larger picture,
How exactly is it important? To whom?
Whilst I'd be the first to say that the whole FPGA software process is a pain, archaic, bloated and generally crap, I can't see any realistic chance of an open-source solution ever becoming a usable alternative for anything but the most trivial cases.

The ability to discover & understand internal functionality from a bitstream is of course interesting and potentially useful for reverse-engineering, but considering the size and complexity of even low-end FPGAs these days, and how little progress has been made on this in the last 20-odd years, it seems like it's still a long way away...

Open source is a faith, a way to freedom for OSS believers.

Windows and VS wouldn't be so cheap if there were no GNU/Linux and GCC.

For the same reason, some people want to challenge the dominance of Cadence and Synopsys.

Much work has been done on open-source SoCs, such as OR1K and lowRISC.
Title: Re: Lattice iCE40 Bitstream Reverse-Engineered
Post by: mikeselectricstuff on March 25, 2015, 09:32:51 am
Quote
Open source is a faith, a way to freedom for OSS believers.
And often about as useful as other religions.
Quote
Windows and VS wouldn't be so cheap if there were no GNU/Linux and GCC.
Debatable - the market will ultimately decide prices, and I'm not sure it makes much difference whether competition comes from OSS or a commercial competitor.
Quote
For the same reason, some people want to challenge the dominance of Cadence and Synopsys.
The reason they are dominant is that they are the experts - any newcomer has a huge learning curve.
Quote
Much work has been done on open-source SoCs, such as OR1K and lowRISC.
And how many people are using it in real products? Can I buy a fully tested & qualified chip based on these?

Open source is a "nice to have", but most people just want to get a job done.

IMO, instead of spending time basically duplicating vendor tool functionality, it would be a much better use of resources to work on new, innovative front-end design methods to replace the current HDLs for driving the vendor place & route tools.
The vendor will always be in the best position to know how to use the parts they design - there is a two-way design process between the silicon & the tools. No open-source effort stands any chance of doing better for anything of useful functionality.
 
Title: Re: Lattice iCE40 Bitstream Reverse-Engineered
Post by: clifford on March 25, 2015, 09:40:15 am
Quote
Quote
I remember from The Amp Hour that you are not very keen on open source EDA software, so you might not realize the importance of our work in that larger picture,

How exactly is it important? To whom?

We will develop a reference open-source place & route flow for the iCE40 for GSoC. That would not be possible without knowing the bitstream format. THIS is the main motivation for icestorm, not reverse engineering FPGA bitstreams. The reverse-engineering part just demonstrates that we do in fact understand the bitstream to a very large extent.

This is not only relevant for the iCE40 as an architecture; the main motivation is to deliver a reference full open-source end-to-end FPGA flow, to demonstrate that this can be done. There is a lot of dogmatic thinking about how huge and complex and impossibly complicated this kind of stuff is, primarily by people who themselves have zero experience writing software for these kinds of problems.

Btw: Being told that I'm working on things that cannot be done, by people who clearly are not qualified to judge that because they do not have the relevant background, has been the story of my life for the last couple of years. (The relevant background in this case is writing FPGA and ASIC design tools, btw., not using them, and it really does not matter how many decades of experience you have using them. A secretary might also have years and years of experience using word processors. That does not make him or her an expert on estimating the complexity of writing one either.) The vendors of commercial EDA tools are very careful to market even the simplest programs as super-complex ones. And especially the people who use those programs day in, day out seem to have a tendency to buy into that marketing BS.

So here is an important truth: writing synthesis tools is about as complex as writing compilers. I know, because I've worked on both. In both cases you have no chance if you don't know what you are doing, and most people would not even know where to begin. But that does not mean it is an impossible task, or even that it is inherently complex. It just means you have to read the relevant books and learn the relevant algorithms and methods before you start.

The mere existence of an open-source end-to-end FPGA flow can change how willing vendors are to share low-level specifications. If we only had the compilers provided by CPU vendors, and no open-source compilers, vendors would maybe take a different road on specifying things like instruction sets. But with compilers we have reached a point where it is not only a no-brainer to release all your instruction set documentation, you also don't write your own compiler from scratch. No one does. You take GCC or Clang+LLVM and port it to your architecture (in the last few years we have seen Clang+LLVM become more and more important). I can see a similar shift happening in FPGA synthesis within the next 10-15 years.

And this is not only about ideology. You can do great things today by using code generators on a CPU to create code for that CPU. JITs are a good example of that. But you cannot do something like run Vivado on the ARM processor in a Zynq and on demand synthesize a circuit and load it into the FPGA fabric. (An example of something like that would be a complex trigger circuit for a logic analyzer.) There are many applications that will profit from an open-source flow, even when the OSS flow is not as optimized as the competing proprietary flow.

Our iCE40 flow will use Yosys as HDL synthesis front-end. We can already synthesize to netlists for Xilinx 7 series FPGAs with Yosys and I've implemented the basic stuff for iCE40 synthesis a while back and will improve on that while a student is working on the place&route tool.

Quote
, but considering the size and complexity of even low-end FPGAs these days

That's why we chose the iCE40 as the target. It is not complex at all, and even the largest part in the family is not very large (which limits the complexity of the place & route problem to something that I would consider reasonable for a student project).

Quote
, and how little progress has been made on this in the last 20-odd years it seems like it's still a long way away...

As the author of Yosys I must say a statement like this really hurts. In the last two years there have been a couple of (industrial and academic) ASIC tapeouts of designs that have been synthesized with Yosys. On the iCE40 side I expect similar results from Yosys and Lattice LSE by the end of the summer, so it might still be a long way until we can compare our synthesis solution with Synplify Pro, but we are in the range of what you would expect from an industry tool. (The iCE40 backend is pretty new, but comparing Yosys and Synplify Pro on the Xilinx side, and Lattice LSE and Synplify Pro on the iCE40 side, I would say this is a realistic goal.)

When it comes to ASIC and FPGA flows, the amount of things we can do with OSS right now would have been unthinkable 5 years ago; 20 years ago there wasn't even a discussion about things like that.

PS: There are a lot of FPGA architectures that most people here have never heard of, that are used as IP blocks in larger ASICs and cannot be bought as parts, only as ASIC IP. Yosys plays an increasingly large role in this industry. But unfortunately I'm not at liberty to share any details about that. ASIC people are a very secretive folk.
Title: Re: Lattice iCE40 Bitstream Reverse-Engineered
Post by: clifford on March 25, 2015, 10:10:10 am
And how many people are using it in real products? Can I buy a fully tested & qualified chip based on these?

That's exactly what the lowRISC project is about, as well as some other RISC-V projects like Shakti (but I'm not sure if Shakti CPUs will be available on the free market or to the Indian government only).

Open source is a "nice to have", but most people just want to get a job done.

For some projects open source is not a question of ideology. It's a question of whether I can use the tool in the way my application requires. And in those cases you simply cannot do your project without an open-source tool.

Think of Linux, for example. Of course you could just compare Linux with Windows CE feature-wise and come to one conclusion or another. But for some projects you simply cannot go with something that isn't open source, for example if you have to port the whole thing to a new processor architecture, or recompile it to use a hardware extension that adds special protection for return addresses on the stack but requires different instructions to be used by the compiled code to access those return addresses.

IMO, instead of spending time basically duplicating vendor tool functionality, it would be a much better use of resources to work on new innovative front-end design methods to replace the current HDL for driving the vendor place & route tools.

There are people working on this as well (see Chisel for example, which is used by more and more projects, including the rocket core generator).

You are free to participate in these kinds of projects if you think they are important. I think this is important too, but I also think the stuff I'm working on is important, and a nice thing about not getting paid is that I can decide for myself where my resources are used best.

The vendor will always be in the best position to know how to use the parts they design - there is a two-way design process between the silicon & the tools. No open-source effort stands any chance of doing better for anything of useful functionality.

And what if you ARE the vendor? Look at compilers: no one is writing their own compiler from scratch, because it is just plain stupid to replicate the work of everyone else who has written a compiler before you, just so you can add your one little special feature. You know what is really, really hard about making your own small FPGA? It's writing the design tools.

Why do we have processors with built-in vector floating point units and megabytes of cache, but no FPGA fabric? It would be really easy to create a uC with a small and simple FPGA fabric that can do things like serialisation/deserialisation for your bus protocols. One reason is of course power (hard IPs are way more power efficient), but that's not the entire truth. The main reason is: because it is impractical to write your own synthesis toolchain from scratch for something that is just an extra feature of a chip whose main purpose is something else, and you won't be able to negotiate licences for packages like Synplify Pro that are reasonable for applications like this.

Btw: lowRISC is considering a small FPGA fabric for their minion cores. Guess what: no one talks about licensing Synplify Pro for that.
Title: Re: Lattice iCE40 Bitstream Reverse-Engineered
Post by: mikeselectricstuff on March 25, 2015, 10:16:39 am

The mere existence of an open-source end-to-end FPGA flow can change how willing vendors are to share low-level specifications.
I seriously doubt that.
Quote
If we only had the compilers provided by CPU vendors, and no open-source compilers, vendors would maybe take a different road on specifying things like instruction sets. But with compilers we have reached a point where it is not only a no-brainer to release all your instruction set documentation, you also don't write your own compiler from scratch. No one does. You take GCC or Clang+LLVM and port it to your architecture (in the last few years we have seen Clang+LLVM become more and more important). I can see a similar shift happening in FPGA synthesis within the next 10-15 years.
The problem is that the FPGA market (in terms of number of designs) is minuscule compared to CPUs, and there are far fewer players. Vendors supply tools that work. Any competition will have too small a market share to attain critical mass for anything outside a few niche areas.

Quote
As the author of Yosys I must say a statement like this really hurts. In the last two years there have been a couple of (industrial and academic) ASIC tapeouts of designs that have been synthesized with Yosys.
One tapeout a year - that's hardly significant.
And I'm not referring to synthesis tools, just the place & route side of things.
Quote
ASIC people are a very secretive folk..
Which is exactly why you'll never see an FPGA vendor publish the essential details. Apart from competition and customer IP concerns, another reason is they don't want to reveal things that competitors may claim violate their IP rights.

I can certainly see that open-source synthesis and front-end tools could potentially be very useful; however, IMO place and route/fitting will always be the domain of vendor tools. The vast majority of users simply don't care if it's OSS or not, they just want to get the job done.

What you are doing is undoubtedly interesting and cool, I just can't see much in the way of practical use, let alone "importance" to anyone outside academia. The iCE40 is so far behind the state of the art that being able to place & route such a simple device is a long way from being able to do anything useful with more versatile devices, and by the time it's done the device will have been superseded.

Maybe I'm just getting too cynical in my old age :)




Title: Re: Lattice iCE40 Bitstream Reverse-Engineered
Post by: amyk on March 25, 2015, 01:11:18 pm
The ability to discover & understand internal functionality from a bitstream is of course interesting and potentially useful for reverse-engineering, but considering the size and complexity of even low-end FPGAs these days, and how little progress has been made on this in the last 20-odd years, it seems like it's still a long way away...
You mean "how little public progress"... ;)

FPGA bitstreams are very regular, because FPGAs themselves are. The major issues are legal/political, not technical.
Title: Re: Lattice iCE40 Bitstream Reverse-Engineered
Post by: nctnico on March 25, 2015, 03:15:48 pm
The problem is that the FPGA market (in terms of number of designs) is minuscule compared to CPUs, and there are far fewer players. Vendors supply tools that work. Any competition will have too small a market share to attain critical mass for anything outside a few niche areas.
I'd put that as 'barely work'. I have been using Xilinx's tools for close to 15 years, and for high-density designs it takes tweaking the parameters, with numbers obtained by rolling the dice, to get a design routed properly and quickly.

In an open-source environment it is much easier to get to a better generic place & route tool than the vendors can. Let the vendors concentrate on creating silicon. For example: from a technical point of view the Linux kernel is light-years ahead of what is under the hood of Windows (which is made by a multi-billion-dollar firm!).

edit: typos
Title: Re: Lattice iCE40 Bitstream Reverse-Engineered
Post by: mikeselectricstuff on March 25, 2015, 03:21:26 pm
The problem is that the FPGA market (in terms of number of designs) is minuscule compared to CPUs, and there are far fewer players. Vendors supply tools that work. Any competition will have too small a market share to attain critical mass for anything outside a few niche areas.
I'd put that as 'barely work'. I have been using Xilinx's tools for close to 15 years, and for high-density designs it takes tweaking the parameters, with numbers obtained by rolling the dice, to get a design routed properly and quickly.

In an open-source environment it is much easier to get to a better generic place & route tool than the vendors can. Let the vendors concentrate on creating silicon. For example: from a technical point of view the Linux kernel is light-years ahead of what is under the hood of Windows (which is made by a multi-billion-dollar firm!).
But is a generic P&R process ever going to get anywhere near something which is (presumably) highly optimised to the particular device?
Title: Re: Lattice iCE40 Bitstream Reverse-Engineered
Post by: nctnico on March 25, 2015, 03:46:18 pm
AFAIK an FPGA is like a grid of components with routing resources in between. I think you can compare it with a PCB design, where you have a fixed amount of room for wires between components, so many of the problems you find with placement and routing in a PCB design also apply to placing and routing an FPGA.
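(To illustrate that analogy with an editorial toy sketch — not any real P&R algorithm, and all names here are invented: placement on either a PCB or an FPGA grid can be treated as minimising total Manhattan wirelength by trying pairwise swaps and keeping the ones that don't make things worse.)

```python
import random

# Toy placement: cells at grid sites, nets as pairs of cells; the cost
# is total Manhattan wirelength. Random pairwise swaps illustrate the
# iterative-improvement idea shared by PCB and FPGA placement.
def wirelength(pos, nets):
    return sum(abs(pos[a][0] - pos[b][0]) + abs(pos[a][1] - pos[b][1])
               for a, b in nets)

def improve(pos, nets, iters=200, seed=0):
    rng = random.Random(seed)
    cells = list(pos)
    best = wirelength(pos, nets)
    for _ in range(iters):
        a, b = rng.sample(cells, 2)
        pos[a], pos[b] = pos[b], pos[a]      # tentative swap of two sites
        cost = wirelength(pos, nets)
        if cost <= best:
            best = cost                      # keep non-worsening swaps
        else:
            pos[a], pos[b] = pos[b], pos[a]  # undo worsening swaps
    return best

# Hypothetical example: two 2-pin nets whose cells start far apart.
pos = {"A": (0, 0), "B": (3, 3), "C": (3, 0), "D": (0, 3)}
nets = [("A", "B"), ("C", "D")]
start = wirelength(pos, nets)  # 12
best = improve(pos, nets)      # never worse than the start
```

Real tools add timing-driven cost functions, legality constraints, and routability estimates on top of this skeleton, but the grid-plus-shared-wiring structure is the same.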
Title: Re: Lattice iCE40 Bitstream Reverse-Engineered
Post by: Someone on March 25, 2015, 10:27:20 pm
The vendor will always be in the best position to know how to use the parts they design - there is a two-way design process between the silicon & the tools. No open-source effort stands any chance of doing better for anything of useful functionality.

And what if you ARE the vendor? Look at compilers: no one is writing their own compiler from scratch, because it is just plain stupid to replicate the work of everyone else who has written a compiler before you, just so you can add your one little special feature. You know what is really, really hard about making your own small FPGA? It's writing the design tools.

Why do we have processors with built-in vector floating point units and megabytes of cache, but no FPGA fabric? It would be really easy to create a uC with a small and simple FPGA fabric that can do things like serialisation/deserialisation for your bus protocols. One reason is of course power (hard IPs are way more power efficient), but that's not the entire truth. The main reason is: because it is impractical to write your own synthesis toolchain from scratch for something that is just an extra feature of a chip whose main purpose is something else, and you won't be able to negotiate licences for packages like Synplify Pro that are reasonable for applications like this.

Btw: lowRISC is considering a small FPGA fabric for their minion cores. Guess what: no one talks about licensing Synplify Pro for that.
We can look at the example of processors in the last decade, where we have transitioned from expensive vendor tools and lacking documentation to an explosion of vendors offering silicon backed by open-source toolchains and competing to offer low-cost processors. Hopefully an open-source toolchain for FPGAs can help spur small semi firms to start releasing FPGAs (or, as above, FPGA fabric on/with/inside other existing products).
Title: Re: Lattice iCE40 Bitstream Reverse-Engineered
Post by: mikeselectricstuff on March 25, 2015, 11:29:32 pm
Quote
We can look at the example of processors in the last decade, where we have transitioned from expensive vendor tools and lacking documentation to an explosion of vendors offering silicon backed by open-source toolchains and competing to offer low-cost processors.
I'd disagree that the availability of OSS toolchains has much to do with it - nobody said "hey, there's an OSS compiler, let's sell a chip to use it".
It's the other way round - they make use of OSS toolchains for a new product because they're there, and they're the easiest/cheapest way to support the product.
If they weren't there, the manufacturers would have worked with one of the commercial companies like IAR, Keil etc. to do a compiler.
Quote

Hopefully an open source toolchain for FPGAs can help spur on small semi firms to start releasing FPGAs (or as above FPGA fabric on/with/inside other existing products).
I can't really see that happening - the existing players are too dominant for it to be worth a new player risking entry to the market - several have tried & failed over the years.
The only recent-ish "new" player I can recall is SiliconBlue, and they are now the Lattice iCE40 range. Actel got taken over by Microsemi. The number of players is shrinking, not expanding.

Even if someone new found a new niche (like low pin-count devices), it would be easy for the existing players to add to their range to compete with it.

I'm sure tools are a major part of an FPGA company's cost, but I doubt the availability of tools would be enough to swing the viability of entry to the market.

Linux is successful because so many people have a use for it. The FPGA area is just too specialist for enough people to be interested, and the vast majority of users don't care what flavour of tools they need to use to do the job.

Considering we've not yet got fully professional-grade OSS PCB software, what are the chances of seeing something in a field that's way more niche, and more complex, than that?


Title: Re: Lattice iCE40 Bitstream Reverse-Engineered
Post by: Rasz on March 25, 2015, 11:35:05 pm
Maybe I'm just getting too cynical in my old age :)

Yes. You (and free_electron) sound like a Sun 2 workstation salesman in 1984, telling us we will never get free compilers because nobody works for free, they are extremely hard to write and need to support mountains of hardware - not to mention why bother, when you get a perfectly fine CC shipped with your $40K box.
Title: Re: Lattice iCE40 Bitstream Reverse-Engineered
Post by: hamster_nz on March 26, 2015, 12:48:37 am
I ponder how hard it is to characterise the timing of the iCE40 devices (propagation, loading due to fanout, switching, temperature, voltage, different speed grades) to allow accurate enough timing analysis. You can get away with nearly anything at 1MHz, but what about 200MHz?

Sure you can make some assumptions, but you wouldn't want to do that for a commercial product where you have liability issues.

At least with a CPU, once you have the part number you pretty much know the performance limits for the entire device, and as overclocking nuts will tell you, those limits can be pretty close.
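The characterisation problem above boils down to static timing analysis: once every LUT and routing segment has a measured delay, the maximum clock rate falls out of the longest register-to-register path. A toy sketch of that computation - all delay numbers are invented for illustration, not real iCE40 data:

```python
# Toy static timing analysis: arrival time = longest path through a DAG.
# All delay values are invented for illustration, not real iCE40 data.
from functools import lru_cache

# net -> (element delay in ns, fan-in nets); empty fan-in = register output
netlist = {
    "q1": (0.0, []),            # flip-flop Q output (path start)
    "q2": (0.0, []),
    "a":  (0.9, ["q1", "q2"]),  # LUT plus local routing
    "b":  (1.1, ["a", "q2"]),
    "d":  (0.4, ["b"]),         # routing into the next flip-flop's D pin
}

@lru_cache(maxsize=None)
def arrival(net):
    # arrival time at a net = its own delay + latest fan-in arrival
    delay, fanin = netlist[net]
    return delay + max((arrival(n) for n in fanin), default=0.0)

critical_ns = arrival("d")          # 0.4 + 1.1 + 0.9 = 2.4 ns
fmax_mhz = 1000.0 / critical_ns     # ignores setup time and clock skew
```

The graph walk itself is trivial; the hard part being pointed at is filling that delay table accurately across fanout, voltage, temperature and speed grade.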
Title: Re: Lattice iCE40 Bitstream Reverse-Engineered
Post by: free_electron on March 26, 2015, 03:31:41 am
This is pretty much useless [...] If your new target uses a different lut architecture it won't even synthesize.

This is just plain wrong.

In the video I used the following code for initial synthesis:

Code: [Select]
module top (
    input  clk,
    output LED1,
    output LED2,
    output LED3,
    output LED4,
    output LED5
);

    localparam BITS = 5;
    localparam LOG2DELAY = 22;

    function [BITS-1:0] bin2gray(input [BITS-1:0] in);
        integer i;
        reg [BITS:0] temp;
        begin
            temp = in;
            for (i = 0; i < BITS; i = i + 1)
                bin2gray[i] = ^temp[i +: 2];
        end
    endfunction

    reg [BITS+LOG2DELAY-1:0] counter = 0;

    always @(posedge clk)
        counter <= counter + 1;

    assign {LED1, LED2, LED3, LED4, LED5} = bin2gray(counter >> LOG2DELAY);
endmodule
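The `^temp[i +: 2]` reduction above XORs bits i and i+1, which (since `temp` is the input zero-extended by one bit) is the standard binary-to-Gray conversion g = b ^ (b >> 1). A quick Python cross-check of that identity:

```python
# Cross-check: the Verilog loop bin2gray[i] = ^temp[i +: 2] equals the
# one-liner b ^ (b >> 1) when temp is the zero-extended input.
BITS = 5

def bin2gray(b):
    return b ^ (b >> 1)

def bin2gray_loop(b):
    # mirrors the Verilog: output bit i is temp[i] ^ temp[i+1]
    out = 0
    for i in range(BITS):
        out |= (((b >> i) & 1) ^ ((b >> (i + 1)) & 1)) << i
    return out

assert all(bin2gray(b) == bin2gray_loop(b) for b in range(1 << BITS))
# consecutive Gray codes differ in exactly one bit, so only one LED
# changes per step of the divided-down counter
assert all(bin(bin2gray(b) ^ bin2gray(b + 1)).count("1") == 1
           for b in range((1 << BITS) - 1))
```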

And this was the output of icebox_vlog:

Code: [Select]
module chip (output LED1, output LED3, output LED4, output LED5, output LED2, input clk);

wire n1;
reg LED1, n3, n4, n5, n6, n7, n8, n9, n10, n11, n12, n13, n14, n15, n16, n17, n18, n19, n20, n21, n22, n23, n24, n25, n26, n27, n28;
wire n29, n30, n31, n32, n33, n34, n35, n36, n37, n38, n39, n40, n41, n42, n43, n44, n45, LED3, LED4, LED5, n49, n50, n51, n52, n53, n54, n55, n56, n57, n58, LED2;
assign n1 = clk, n29 = 1;
wire n60, n61, n62, n63, n64, n65, n66, n67, n68, n69, n70, n71, n72, n73, n74, n75, n76, n77, n78, n79, n80, n81, n82, n83, n84, n85, n86;

assign n60  = /* LUT    8  9  0 */ n29 ? !n4 : n4;
assign LED5 = /* LUT    9 11  3 */ n26 ? !n27 : n27;
assign n61  = /* LUT    8  9  1 */ n30 ? !n5 : n5;
assign LED4 = /* LUT    9 11  2 */ n28 ? !n27 : n27;
assign n62  = /* LUT    8  9  2 */ n31 ? !n6 : n6;
assign LED3 = /* LUT    9 11  1 */ n3 ? !n28 : n28;
assign n63  = /* LUT    8  9  3 */ n32 ? !n7 : n7;
assign n64  = /* LUT    8  9  4 */ n33 ? !n8 : n8;
assign n65  = /* LUT    8 10  1 */ n38 ? !n13 : n13;
assign n66  = /* LUT    8  9  5 */ n34 ? !n9 : n9;
assign n67  = /* LUT    8 10  0 */ n37 ? !n12 : n12;
assign n68  = /* LUT    8  9  6 */ n35 ? !n10 : n10;
assign n69  = /* LUT    8 10  3 */ n40 ? !n15 : n15;
assign n70  = /* LUT    8  9  7 */ n36 ? !n11 : n11;
assign n71  = /* LUT    8 10  2 */ n39 ? !n14 : n14;
assign n72  = /* LUT    8 10  5 */ n42 ? !n17 : n17;
assign n73  = /* LUT    8 11  2 */ n50 ? !n22 : n22;
assign n74  = /* LUT    8 10  4 */ n41 ? !n16 : n16;
assign n75  = /* LUT    8 11  3 */ n51 ? !n23 : n23;
assign n76  = /* LUT    8 10  7 */ n44 ? !n19 : n19;
assign n77  = /* LUT    8 11  0 */ n45 ? !n20 : n20;
assign n78  = /* LUT    8 10  6 */ n43 ? !n18 : n18;
assign n79  = /* LUT    8 11  1 */ n49 ? !n21 : n21;
assign n80  = /* LUT    8 11  6 */ n54 ? !n26 : n26;
assign n81  = /* LUT    8 11  7 */ n55 ? !n27 : n27;
assign n82  = /* LUT    8 12  2 */ n58 ? !LED1 : LED1;
assign n83  = /* LUT    8 11  4 */ n52 ? !n24 : n24;
assign n84  = /* LUT    8 12  1 */ n57 ? !n3 : n3;
assign LED2 = /* LUT   12 12  7 */ n3 ? !LED1 : LED1;
assign n85  = /* LUT    8 11  5 */ n53 ? !n25 : n25;
assign n86  = /* LUT    8 12  0 */ n56 ? !n28 : n28;
assign n30  = /* CARRY  8  9  0 */ (0 & n4) | ((0 | n4) & n29);
assign n31  = /* CARRY  8  9  1 */ (0 & n5) | ((0 | n5) & n30);
assign n32  = /* CARRY  8  9  2 */ (0 & n6) | ((0 | n6) & n31);
assign n33  = /* CARRY  8  9  3 */ (0 & n7) | ((0 | n7) & n32);
assign n34  = /* CARRY  8  9  4 */ (0 & n8) | ((0 | n8) & n33);
assign n39  = /* CARRY  8 10  1 */ (0 & n13) | ((0 | n13) & n38);
assign n35  = /* CARRY  8  9  5 */ (0 & n9) | ((0 | n9) & n34);
assign n38  = /* CARRY  8 10  0 */ (0 & n12) | ((0 | n12) & n37);
assign n36  = /* CARRY  8  9  6 */ (0 & n10) | ((0 | n10) & n35);
assign n41  = /* CARRY  8 10  3 */ (0 & n15) | ((0 | n15) & n40);
assign n37  = /* CARRY  8  9  7 */ (0 & n11) | ((0 | n11) & n36);
assign n40  = /* CARRY  8 10  2 */ (0 & n14) | ((0 | n14) & n39);
assign n43  = /* CARRY  8 10  5 */ (0 & n17) | ((0 | n17) & n42);
assign n51  = /* CARRY  8 11  2 */ (0 & n22) | ((0 | n22) & n50);
assign n42  = /* CARRY  8 10  4 */ (0 & n16) | ((0 | n16) & n41);
assign n52  = /* CARRY  8 11  3 */ (0 & n23) | ((0 | n23) & n51);
assign n45  = /* CARRY  8 10  7 */ (0 & n19) | ((0 | n19) & n44);
assign n49  = /* CARRY  8 11  0 */ (0 & n20) | ((0 | n20) & n45);
assign n44  = /* CARRY  8 10  6 */ (0 & n18) | ((0 | n18) & n43);
assign n50  = /* CARRY  8 11  1 */ (0 & n21) | ((0 | n21) & n49);
assign n55  = /* CARRY  8 11  6 */ (0 & n26) | ((0 | n26) & n54);
assign n56  = /* CARRY  8 11  7 */ (0 & n27) | ((0 | n27) & n55);
assign n53  = /* CARRY  8 11  4 */ (0 & n24) | ((0 | n24) & n52);
assign n58  = /* CARRY  8 12  1 */ (0 & n3) | ((0 | n3) & n57);
assign n54  = /* CARRY  8 11  5 */ (0 & n25) | ((0 | n25) & n53);
assign n57  = /* CARRY  8 12  0 */ (0 & n28) | ((0 | n28) & n56);
/* FF  8  9  0 */ always @(posedge n1) if (1) n4 <= 0 ? 0 : n60;
/* FF  8  9  1 */ always @(posedge n1) if (1) n5 <= 0 ? 0 : n61;
/* FF  8  9  2 */ always @(posedge n1) if (1) n6 <= 0 ? 0 : n62;
/* FF  8  9  3 */ always @(posedge n1) if (1) n7 <= 0 ? 0 : n63;
/* FF  8  9  4 */ always @(posedge n1) if (1) n8 <= 0 ? 0 : n64;
/* FF  8 10  1 */ always @(posedge n1) if (1) n13 <= 0 ? 0 : n65;
/* FF  8  9  5 */ always @(posedge n1) if (1) n9 <= 0 ? 0 : n66;
/* FF  8 10  0 */ always @(posedge n1) if (1) n12 <= 0 ? 0 : n67;
/* FF  8  9  6 */ always @(posedge n1) if (1) n10 <= 0 ? 0 : n68;
/* FF  8 10  3 */ always @(posedge n1) if (1) n15 <= 0 ? 0 : n69;
/* FF  8  9  7 */ always @(posedge n1) if (1) n11 <= 0 ? 0 : n70;
/* FF  8 10  2 */ always @(posedge n1) if (1) n14 <= 0 ? 0 : n71;
/* FF  8 10  5 */ always @(posedge n1) if (1) n17 <= 0 ? 0 : n72;
/* FF  8 11  2 */ always @(posedge n1) if (1) n22 <= 0 ? 0 : n73;
/* FF  8 10  4 */ always @(posedge n1) if (1) n16 <= 0 ? 0 : n74;
/* FF  8 11  3 */ always @(posedge n1) if (1) n23 <= 0 ? 0 : n75;
/* FF  8 10  7 */ always @(posedge n1) if (1) n19 <= 0 ? 0 : n76;
/* FF  8 11  0 */ always @(posedge n1) if (1) n20 <= 0 ? 0 : n77;
/* FF  8 10  6 */ always @(posedge n1) if (1) n18 <= 0 ? 0 : n78;
/* FF  8 11  1 */ always @(posedge n1) if (1) n21 <= 0 ? 0 : n79;
/* FF  8 11  6 */ always @(posedge n1) if (1) n26 <= 0 ? 0 : n80;
/* FF  8 11  7 */ always @(posedge n1) if (1) n27 <= 0 ? 0 : n81;
/* FF  8 12  2 */ always @(posedge n1) if (1) LED1 <= 0 ? 0 : n82;
/* FF  8 11  4 */ always @(posedge n1) if (1) n24 <= 0 ? 0 : n83;
/* FF  8 12  1 */ always @(posedge n1) if (1) n3 <= 0 ? 0 : n84;
/* FF  8 11  5 */ always @(posedge n1) if (1) n25 <= 0 ? 0 : n85;
/* FF  8 12  0 */ always @(posedge n1) if (1) n28 <= 0 ? 0 : n86;

endmodule

Now please point to the construct in that Verilog code that will not synthesize on some targets!


Note: do not confuse LUT. LUT in this context = logic unit, not 'look-up table'. Some families have a LUT that is 2 flipflops with a 10-input combinatorial cloud in front of it; others have 4 flipflops with a larger cloud. Every manufacturer and family has its own mix of such LUT blocks.
Take any FPGA where the LUT is constructed differently from the Lattice LUT used for this specific family.

If I understand it correctly, what you do is pull a reverse netlist and wire it into the LUT inputs and outputs. Then you unmap the LUT matrix into NOT, AND and OR operations and multiplexers, and assign those to wires. You know what the LUT of this particular device looks like, so you can resolve down to the flipflop inputs and outputs. Correct?

Assume one architecture where 1 LUT consists of an 8-input AND/OR matrix feeding 2 flipflops.
Assume a different architecture where 1 LUT is a 20-input AND/OR matrix feeding 4 flipflops.

The LUT architecture also differs between families from the same manufacturer. Compare a LUT from Lattice against an Altera or a Xilinx one: they are different. Even though you have a logic cloud of AND/NOT/OR and multiplexers (the ? operator in Verilog) feeding flipflops, that will not map correctly onto an architecture with a different LUT structure. You may end up with such large differences that the recompiled entity does not run because the timing is completely off.
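For reference, underneath the vendor-specific structure a k-input LUT is just a 2^k-entry truth table, which is why any k-input boolean function fits in one. A minimal model (the widths and the example function are arbitrary, purely for illustration):

```python
# Minimal LUT model: the "configuration" is a 2**k truth table, so any
# boolean function of k inputs maps onto a single k-input LUT.
def make_lut(k, fn):
    # enumerate all 2**k input patterns; bit b of i is input number b
    table = [fn(*(((i >> b) & 1) for b in range(k))) for i in range(1 << k)]
    def lut(*inputs):
        idx = sum(bit << b for b, bit in enumerate(inputs))
        return table[idx]
    return lut

# arbitrary 4-input function: majority(a, b, c) XOR d
maj_xor = make_lut(4, lambda a, b, c, d: ((a & b) | (b & c) | (a & c)) ^ d)
```

Technology mapping - carving a big logic cloud into chunks that each fit one such table - is exactly the per-architecture step behind the portability problem described above.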

It gets more complex once code starts using specialized macros.
How will you trans-map the bitstream if it uses embedded memory blocks, embedded adders/multipliers or other hard macros that don't exist in other families or from other manufacturers?

It's great you found a way to decompile the bitstream, but its output is a massive logic cloud of AND/OR/NOT that is pretty much unreadable. Yes, you can recompile it, provided you don't switch architecture or families; if you are lucky it may work on some other families.

So what you have is a bitstream-to-logic-equation resolver (you could take your output one step further, get rid of the intermediate wires and write out the boolean equations driving the flipflop control lines). You can compare your output to opening the final, minimized logic netlist in the schematic viewer of the FPGA tools.
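The flattening step suggested here - inlining the intermediate wires until only flip-flop outputs and constants remain - can be sketched as recursive substitution. The three-net example below is made up, loosely shaped like the icebox_vlog output:

```python
# Sketch of the suggested flattening: inline intermediate wire
# definitions until only primary nets (register outputs / constants)
# remain. The tiny netlist here is invented for illustration.
import re

assigns = {
    "n60": "(n29 ? !n4 : n4)",   # LUT output (XOR of n29 and n4)
    "n30": "(n4 & n29)",         # carry
    "n61": "(n30 ? !n5 : n5)",
}
primaries = {"n4", "n5", "n29"}  # stop substitution at these nets

def flatten(expr):
    def sub(m):
        name = m.group(0)
        if name in primaries or name not in assigns:
            return name
        return flatten(assigns[name])
    return re.sub(r"\bn\d+\b", sub, expr)

# D input of the n5 flip-flop, expressed purely in primary nets:
flat = flatten("n61")
```

On a real netlist the expressions would explode in size without boolean minimization, which is rather the point being made about readability.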

That's not really decompiling in my book.

Anyway, cool that you figured out how the bitstream works, but I don't see a practical use.
Let's say I have a board with a Xilinx FPGA that has, amongst other functionality, some logic function in it I really want. I want to 'decompile the bitstream', steal that interesting block and slap it into an Altera part.

The system works for simple devices but is not really portable, mainly due to architectural differences between manufacturers and their use of hard macros that do not exist in other families.

As for 'free' compilers for FPGAs: you can synthesize down to a point, but then you really need the manufacturer tools. Only the manufacturer tools have access to those hard macros.
The tools are optimized to recognize certain functions and map them onto the hardware available in the particular device. You may code a simple counter; the compiler detects this and knows it has a certain number of LUTs with optimized interconnects to implement counters.
Every FPGA manufacturer has their secret sauce of 'accelerators'. Only they know how to translate the source to an optimized output.
With old, simple CPLDs and FPGAs that only held simple gates and simple flipflops it doesn't matter.
Fact is that devices produced in the last 10 years have 'accelerators'. Your code is not flattened to its shortest-form logic equation like it was 20 years ago. Instead it is smartly mapped onto those accelerators. Every vendor has their own 'secret mix'.

Most devices these days have special functionality to accelerate even relatively simple things like counters. Unless you know those exist, and how they work (for example, the carry propagation logic may exist as a hard, optimized block that is much faster than its synthesized equivalent), you don't stand a chance reversing that, nor porting it.

While this is a cool experiment, it doesn't really have a practical application. The output is pretty much unreadable and it will only work for simple devices.

Title: Re: Lattice iCE40 Bitstream Reverse-Engineered
Post by: clifford on March 26, 2015, 08:02:53 am
Quote
ASIC people are a very secretive folk..
Which is Exactly why you'll never see an FPGA vendor publish the essential details.

Aehmm.. never? do you remember JBits? Xilinx did release the bitstream format of the XC4000 devices (and I think also the Virtex I, which might just have been called Virtex without a number). But back then there was no Open Source infrastructure to do anything useful with that.

FPGA vendors have a long-standing tradition of using BSD-licenced open source components in their tool chains. Look at ABC, for example, and all the commercial tools that are essentially a GUI and a little front-end that feed stuff through ABC for logic optimization or verification. They don't tell you about it of course, because why should they? On occasion they release very detailed information about old very large devices, so the academic world has something to play with and advance the tools the industry is using. (I remember VTR using bitstream docs released by Xilinx for newer devices than Virtex I, but I can't find anything yet.) But so far there never was an open-source end-to-end flow, mostly because of the lack of an HDL front-end.

(Yes, there is Odin-II, and vl2m, and HANA, but have you tried any of those? At best they can be used as netlist parsers. A couple of years ago I was asked to compare them to each other (and to Yosys). These were my results: http://scratch.clifford.at/vlogev.pdf (http://scratch.clifford.at/vlogev.pdf))

So FPGA vendors publishing essential details is nothing new. They just don't advertise it to end-users, and usually you need a few-years-old dev board for a chip that was top-of-the-line in its time. With our work you can buy an icestick for 25 USD and play with an FPGA bitstream format.

FPGA vendors using open source software in their tools is nothing new either. But usually this is mostly stuff for logic optimization, because this is the domain where we have really good open source tools for a long time now. But I see no reason why they would not start using open source in other problem domains, once it is available.
Title: Re: Lattice iCE40 Bitstream Reverse-Engineered
Post by: mikeselectricstuff on March 26, 2015, 08:44:32 am
Quote
ASIC people are a very secretive folk..
Which is Exactly why you'll never see an FPGA vendor publish the essential details.
Aehmm.. never? do you remember JBits? Xilinx did release the bitstream format of the XC4000 devices (and I think also the Virtex I, which might just have been called Virtex without a number). But back then there was no Open Source infrastructure to do anything useful with that.
OK if you want to argue semantics I'll rephrase -  you'll never see an FPGA vendor publish the essential details on any remotely current device that anyone would think of using in a new design.

All I'm saying is there are much more useful things to spend OSS development time on than place & route.
e.g. place & route will always need updating as  new devices appear, whereas synthesis & other front-end tools are mostly device-independent, so unless an OSS effort was actively funded, chances are it wouldn't sustain enough development effort over time to continue to be useful.
It's not inconceivable that a manufacturer may develop  new tools and open source them, but it's hard to see that they would see enough benefit for them to do it, certainly towards the back-end aspects specific to their products.
 
Another aspect, which I believe was a significant factor back in the PLD/GAL days, was simply that by staying closed it is much quicker and easier for them to make changes, e.g. to adapt to new processes & fabs - as soon as you commit to publicly documenting things, you take on significant work just keeping the public documentation up to date, plus the risk of bad publicity from problems with 3rd parties using outdated info.


Title: Re: Lattice iCE40 Bitstream Reverse-Engineered
Post by: blueskull on March 26, 2015, 09:13:39 am
I ponder on how hard it is to characterise the timing of the iCE40 devices (propagation, loading due to fanout, switching, temperature, voltage, different speed grades) to allow accurate enough timing. You can get away with nearly anything at 1MHz, but what at about 200MHz?

Sure you can make some assumptions, but you wouldn't want to do that for a commercial product where you have liability issues.

At least with a CPU once you have the part number you pretty much know the performance limits for the entire device, and as over-clockers nuts will tell you, the limits can be pretty close.

That info can be reverse-engineered from the Aldec simulator that comes with iCEcube2, or even from Altium Designer. The timing constraint file really explains everything.
Title: Re: Lattice iCE40 Bitstream Reverse-Engineered
Post by: wumpus on June 03, 2015, 11:17:45 am
Linux is successful because so many people have a use for it. the FPGA area is just too specialist for enough people to be interested, and the vast majority of users don't care what flavour of tools they need to use to do the job.
Many people would disagree. Due to various reasons (crowdfunding, distrust, scaling limits of CPUs among others), interest in open source hardware has picked up enormously recently.

The unavailability of general, open source FPGA tooling has always been a limiting factor there. Instead of applying tunnel vision to current (sort of) happy users of vendor tools, take a wider view. This can open up completely new markets.

There is nothing inherently esoteric or specialist about programmable logic, it could be said to be more fundamental than programming CPUs. But the bulky, hard to install, restrictively licensed tools have kept many people from even reaching the "hello world" phase - see the Parallela forums for enough examples, even if they have an FPGA available and are interested.

A freely available "gcc for FPGA" could make a huge difference here, smash down the adoption barrier. And one look at gcc (and clang) is enough to be convinced that the open source community is able to build huge, complex code generation pipelines. There is a lot of specialist knowledge represented.

Just as in a compiler, the chief part of the pipeline is generic. Place and route, seen abstractly, is a matter of having a correct definition of the units and connections of the target hardware then optimizing within those constraints. Even if the result is less optimized for a specific vendor, the generality and availability is a great boon for other applications (the same could be said about vendor compilers versus open source compilers).
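That abstract view can be made concrete with a toy placer: cells on grid sites, minimize total Manhattan wirelength, improve by swapping. A real placer would use simulated annealing or analytic methods; every name and number below is invented:

```python
# Toy placer illustrating "optimize within the constraints": put cells
# on grid sites and minimize total Manhattan wirelength by swapping.
# A real placer would use simulated annealing; all data here is invented.
import itertools

cells = ["a", "b", "c", "d"]
nets = [("a", "b"), ("b", "c"), ("c", "d"), ("a", "d")]
sites = [(x, y) for x in range(2) for y in range(2)]   # 2x2 grid

def wirelength(placement):
    # sum of Manhattan distances over all two-pin nets
    return sum(abs(placement[u][0] - placement[v][0]) +
               abs(placement[u][1] - placement[v][1]) for u, v in nets)

place = dict(zip(cells, sites))   # arbitrary legal starting placement
best = wirelength(place)
improved = True
while improved:
    improved = False
    for u, v in itertools.combinations(cells, 2):
        place[u], place[v] = place[v], place[u]
        cost = wirelength(place)
        if cost < best:
            best, improved = cost, True
        else:
            place[u], place[v] = place[v], place[u]   # undo bad swap
```

Routing adds congestion and timing to the cost function, but the shape of the problem - a legal assignment plus an objective to minimize - stays the same.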

Anyhow, arguing about this is useless. We'll see where this goes. I'm excited about it, at least.
Title: Re: Lattice iCE40 Bitstream Reverse-Engineered
Post by: Mechanical Menace on June 03, 2015, 12:07:25 pm
Bookmarked the IceStorm page. Amazing work Clifford, thanks. I can see this (and supported FPGAs) becoming very popular in the admittedly small homebrew computer and cpu scene before the year's out.
Title: Re: Lattice iCE40 Bitstream Reverse-Engineered
Post by: mikeselectricstuff on June 03, 2015, 12:25:21 pm
Linux is successful because so many people have a use for it. the FPGA area is just too specialist for enough people to be interested, and the vast majority of users don't care what flavour of tools they need to use to do the job.
Many people would disagree. Due to various reasons (crowdfunding, distrust, scaling limits of CPUs among others), interest in open source hardware has picked up enormously recently.

The unavailability of general, open source FPGA tooling has always been a limiting factor there.
Nonsense. There is no real barrier to using an FPGA in an OSHW project that an OS toolchain would remove. Everyone can access FPGA tools at minimal cost.

Title: Re: Lattice iCE40 Bitstream Reverse-Engineered
Post by: daqq on June 03, 2015, 12:41:27 pm
Quote
Nonsense. There is no real barrier to using an FPGA in an OSHW project that an OS toolchain would remove. Everyone can access FPGA tools at minimal cost.
To be fair, the bigger devices are supported only by the costly versions of the tools - at least for Xilinx. Dunno about the rest. Altera, erm, Intertra?

I never really got this - what's the point of not publishing stuff like this? From a chip manufacturer's point of view, it seems the best course of action would be to ensure that ALL of my tools are available to everyone, free of charge. Or at least MASSIVELY support open source initiatives like this.

The availability of free or cheaply priced tools is a big issue for me, I assume the same goes for others.
Title: Re: Lattice iCE40 Bitstream Reverse-Engineered
Post by: Mechanical Menace on June 03, 2015, 12:42:17 pm
Nonsense. There is no real barrier to using an FPGA in an OSHW project that an OS toolchain would remove.

I could use this on a non-x86 CPU and (with a lot of learning) port it to an OS that isn't Windows or Linux. If you can't see some possibilities opening up there, no matter how niche, that's a bit of a lack of imagination.
Title: Re: Lattice iCE40 Bitstream Reverse-Engineered
Post by: mikeselectricstuff on June 03, 2015, 01:03:52 pm
Quote
Nonsense. There is no real barrier to using an FPGA in an OSHW project that an OS toolchain would remove. Everyone can access FPGA tools at minimal cost.
To be fair, the bigger devices are supported only by the costly versions of the tools - at least for Xilinx. Dunno about the rest. Altera, erm, Intertra?

I never really got this - what's the point of not publishing stuff like this? From a chip manufacturer's point of view, it seems the best course of action would be to ensure that ALL of my tools are available to everyone, free of charge. Or at least MASSIVELY support open source initiatives like this.

The availability of free or cheaply priced tools is a big issue for me, I assume the same goes for others.
The big-parts issue isn't really an issue here, as we're talking parts with multi-hundred-dollar price tags, in big BGA packages which need umpteen PCB layers to route. If you're making that kind of investment, a few $k on tools is chickenfeed. I suspect the reason for this is that they can use it to subsidise providing free tools to lower-end users.
Quote
I could use this on a none x86 cpu and (with a lot of learning) port it to an OS that isn't Windows or Linux. If you can't see some possibilities opening up there, no matter how niche, that's a bit of a lack of imagination.
I'm not saying that there aren't some niche situations where it might be interesting, just that these would be the exception, and existing tools are just fine for the vast majority of users, so the existence of OSS tools is a minimal benefit, to a few people.

The reason FPGAs aren't used very widely is nothing to do with tools, it's simply that they are only needed in niche applications. Availability of OSS tools won't change that.


Title: Re: Lattice iCE40 Bitstream Reverse-Engineered
Post by: Mechanical Menace on June 03, 2015, 01:22:23 pm
The reason FPGAs aren't used very widely is nothing to do with tools, it's simply that they are only needed in niche applications. Availability of OSS tools won't change that.

In the hobbyist domain I'd say how overwhelming the tools can be is a serious barrier to entry. Something like this could lead to an Arduino-style revolution (it has had its upsides) for FPGAs.
Title: Re: Lattice iCE40 Bitstream Reverse-Engineered
Post by: mikeselectricstuff on June 03, 2015, 02:29:31 pm
The reason FPGAs aren't used very widely is nothing to do with tools, it's simply that they are only needed in niche applications. Availability of OSS tools won't change that.

In the hobbyist domain I'd say how overwhelming the tools can be is a serious barrier to entry. Something like this could lead to an Arduino-style revolution (it has had its upsides) for FPGAs.
No it won't. Most hobbyists have no use for FPGAs.
Although they are huge and clunky, you can install and use an existing FPGA toolchain pretty easily. The biggest hurdle by far is getting your head round using an HDL. That is where there is definitely scope for interesting things to be done, and that doesn't need anyone to spend time reinventing the wheel with the back-end tools as any "easy-to-use" front-end can output HDL or RTL to the existing toolchain.
Title: Re: Lattice iCE40 Bitstream Reverse-Engineered
Post by: wumpus on June 03, 2015, 03:20:54 pm
Nonsense. There is no real barrier to using an FPGA in an OSHW project that an OS toolchain would remove. Everyone can access FPGA tools at minimal cost.
You're quite quick to call someone else's experience nonsense, aren't you? I don't understand why you feel so strongly against this. Likely, in 1987 people like you were arguing against Richard Stallman for starting gcc.

It's not just the cost that is problematic; it is also the license restrictions on distribution. Distributing, say, a VM or Docker image with a ready-made FPGA toolchain is usually not allowed. Neither is offering an automatic 'build server' for bitstreams. This is the problem Parallela bumped against, as well as some educational projects.

Quote
That is where there is definitely scope for interesting things to be done, and that doesn't need anyone to spend time reinventing the wheel with the back-end tools as any "easy-to-use" front-end can output HDL or RTL to the existing toolchain.
The one doesn't exclude the other. There's many people working on different projects...
Title: Re: Lattice iCE40 Bitstream Reverse-Engineered
Post by: mikeselectricstuff on June 03, 2015, 03:48:46 pm
Nonsense. There is no real barrier to using an FPGA in an OSHW project that an OS toolchain would remove. Everyone can access FPGA tools at minimal cost.
You're quite quick to call someone else's experience nonsense, aren't you?
Yes, when it's obvious  :D
Quote
I don't understand why you feel so strongly against this. Likely, in 1987 people like you were arguing against Richard Stallman for starting gcc.
gcc is not a reasonable comparison. The potential user base is orders of magnitude smaller, and there are currently tools available at no cost that are perfectly adequate for the majority of that user base.
Quote

It's not just the cost that is problematic, it is also the license restrictions to distribution. Distributing, say, a VM or docker image with ready-made FPGA toolchain is usually not allowed. Neither is offering an automatic 'build server' for bitstreams. This is the problem Parallela bumped against, as well as some educational projects.
and how many people is that actually useful for?

Quote

That is where there is definitely scope for interesting things to be done, and that doesn't need anyone to spend time reinventing the wheel with the back-end tools as any "easy-to-use" front-end can output HDL or RTL to the existing toolchain.
The one doesn't exclude the other. There's many people working on different projects...
True, but given a limited number of available man-hours, my argument is simply that those hours would be more useful to more people if they were spent making new front-end tools and new design flows than reinventing stuff that already exists in a form that is perfectly fine for the vast majority of potential users, and that will keep getting updated by the manufacturers for new devices long after OSS projects have stalled and been abandoned.
Title: Re: Lattice iCE40 Bitstream Reverse-Engineered
Post by: Muxr on June 03, 2015, 04:00:23 pm
An open-source FPGA toolchain could have a benefit. For instance, Python is heavily used in the scientific field; an open toolchain could leverage on-demand use of higher-level libs like MyHDL and produce a bitstream directly, decreasing the barrier to entry and streamlining the whole process. There is definitely room and demand for it - though not really for hardware design, more for performance computing.

I have to agree, though, that for hardware design it would be hard for a FOSS project to reach the quality and functionality of what's already provided by the FPGA manufacturers for free.
Title: Re: Lattice iCE40 Bitstream Reverse-Engineered
Post by: mikeselectricstuff on June 03, 2015, 05:32:52 pm
An open sourced FPGA toolchain could have a benefit. For instance Python is heavily used in scientific field, this could leverage on demand use of higher level libs like MyHDL, and produce bitstream.
And what would be the advantage of that over feeding it into the existing tools?
Title: Re: Lattice iCE40 Bitstream Reverse-Engineered
Post by: blueskull on June 03, 2015, 05:58:02 pm
An open sourced FPGA toolchain could have a benefit. For instance Python is heavily used in scientific field, this could leverage on demand use of higher level libs like MyHDL, and produce bitstream.
And what would be the advantage of that over feeding it into the existing tools?

An OS toolchain is important to academic and niche "open source belief" users. For most everyday use, I'd rather use the vendor-provided toolchain.

As for the MyHDL things, I'd prefer that it translate its input into a netlist or Verilog. Reinventing wheels is never a good thing, unless you have a completely new concept and a breakthrough.
Title: Re: Lattice iCE40 Bitstream Reverse-Engineered
Post by: Muxr on June 03, 2015, 06:54:46 pm
An open sourced FPGA toolchain could have a benefit. For instance Python is heavily used in scientific field, this could leverage on demand use of higher level libs like MyHDL, and produce bitstream.
And what would be the advantage of that over feeding it into the existing tools?
Automated deployment for cloud compute. I think things are about to get interesting in the FPGA and x86 server market. With Intel buying Altera I could see an FPGA with x86 cores, used for hw acceleration of your compute clusters.

Sort of how OpenCL is being used, except an FPGA can offer distinct advantages, like low latency. Opening the bitstream for this use would help adoption, since most of the cloud out there runs on Linux and a FOSS stack.

It wouldn't really change how one designs in FPGAs into their hardware, but it would open up FPGAs to new applications.
Title: Re: Lattice iCE40 Bitstream Reverse-Engineered
Post by: mikeselectricstuff on June 03, 2015, 06:58:26 pm
An open sourced FPGA toolchain could have a benefit. For instance Python is heavily used in scientific field, this could leverage on demand use of higher level libs like MyHDL, and produce bitstream.
And what would be the advantage of that over feeding it into the existing tools?
Automated deployment for cloud compute. I think things are about to get interesting in the FPGA and x86 server market. With Intel buying Altera I could see an FPGA with x86 cores, used for hw acceleration of your compute clusters.

Sort of how OpenCL is being used, except FPGA can offer distinct advantages, like low latency.

It wouldn't really change how one designs in FPGAs into their hardware, but it would open up FPGAs to new applications.
Yes but in that example you'd almost certainly only be loading pre-compiled designs.
There is no way we'd ever see an OSS solution for a device that complex anyway.
Title: Re: Lattice iCE40 Bitstream Reverse-Engineered
Post by: Muxr on June 03, 2015, 07:00:05 pm
An open sourced FPGA toolchain could have a benefit. For instance Python is heavily used in scientific field, this could leverage on demand use of higher level libs like MyHDL, and produce bitstream.
And what would be the advantage of that over feeding it into the existing tools?
Automated deployment for cloud compute. I think things are about to get interesting in the FPGA and x86 server market. With Intel buying Altera I could see an FPGA with x86 cores, used for hw acceleration of your compute clusters.

Sort of how OpenCL is being used, except FPGA can offer distinct advantages, like low latency.

It wouldn't really change how one designs in FPGAs into their hardware, but it would open up FPGAs to new applications.
Yes but in that example you'd almost certainly only be loading pre-compiled designs.
There is no way we'd ever see an OSS solution for a device that complex anyway.
Yes, sorry, I edited my response to include why I think it would be important: to help adoption, because most of the cloud runs Linux and the FOSS stack. Same reason OpenCL exists alongside CUDA.
Title: Re: Lattice iCE40 Bitstream Reverse-Engineered
Post by: andersm on June 03, 2015, 07:05:07 pm
FPGA development environments are so universally awful that anything that can help spur innovation is a godsend.
Title: Re: Lattice iCE40 Bitstream Reverse-Engineered
Post by: hamster_nz on June 03, 2015, 07:07:54 pm
Quote
Nonsense. There is no real barrier to using an FPGA in an OSHW project that an OS toolchain would remove. Everyone can access FPGA tools at minimal cost.
To be fair, the bigger devices are supported only by the costly versions of stuff - at least for Xilinx. Dunno about the rest: Altera, erm, Intertra?

I never really got this - what's the point of not publishing stuff like this? From a chip manufacturer's point of view it seems the best course of action would be to ensure that ALL of my tools are available to everyone, free of charge. Or at least MASSIVELY support open source initiatives like this.

The availability of free or cheaply priced tools is a big issue for me, I assume the same goes for others.

The limits on the size of 'free' versions aren't as tight as they were even 5 years ago. Take for example Xilinx Vivado - the free version supports the XC7A200T part... 740 DSP blocks, a quarter of a million flip-flops, 13Mb of on-chip RAM, sixteen 6Gb/s transceivers, PCIe, 500 I/O pins.

That is a LOT of stuff. What can't you build with that that doesn't need a team of full-time engineers and enough financing to actually buy a license (which rewards Xilinx for their efforts, rather than just leeching :) )?
Title: Re: Lattice iCE40 Bitstream Reverse-Engineered
Post by: mikeselectricstuff on June 03, 2015, 09:26:33 pm
FPGA development environments are so universally awful that anything that can help spur innovation is a godsend.
No argument there, but IMO the biggest problem is the archaic nature of the HDLs, and definitely an area where something new is well overdue.
Reinventing what's already there is just wasted effort and will not do anything to improve the awfulness.
Unless of course someone can come up with some magic solution to place & route much, much more quickly.
Just imagine how useful it would be to use all the power sitting in GPUs to get near-instant update of a device when you change logic onscreen...


Title: Re: Lattice iCE40 Bitstream Reverse-Engineered
Post by: hamster_nz on June 03, 2015, 10:53:19 pm
FPGA development environments are so universally awful that anything that can help spur innovation is a godsend.
No argument there, but IMO the biggest problem is the archaic nature of the HDLs, and definitely an area where something new is well overdue.
Reinventing what's already there is just wasted effort and will not do anything to improve the awfulness.
Unless of course someone can come up with some magic solution to place & route much, much more quickly.
Just imagine how useful it would be to use all the power sitting in GPUs to get near-instant update of a device when you change logic onscreen...

Outside of hobby use it wouldn't be too useful at all. Unlike incremental software compilation, changes made higher up in a design force structural changes all the way through the design, and changes in the lowest levels could be unfairly constrained by what is already in place. It is most likely that any reasonable commercial design will use over 50% of some of the resources on a chip (otherwise you would use a smaller chip) and that doesn't leave much room for rip-up and place and route.

You would also have the problem that the performance of your design will depend on everything that has happened to the design beforehand - so you can't give a copy of the source to a co-worker and expect them to get the same results.

And I have to eat humble pie and say that Vivado isn't that bad (I didn't like it at first) - I'm currently working on a design using 90 DSP slices and 18,000+ paths, running at 250MHz (about 0.13ns slack), and it builds in under 5 minutes on my i3 laptop. Imagine if you were doing the equivalent of a PCB layout for ninety 100-pin chips and a few thousand bits of bubblegum logic... it is not just putting instructions and data into memory (which is all a s/w compiler has to do).


Title: Re: Lattice iCE40 Bitstream Reverse-Engineered
Post by: mikeselectricstuff on June 03, 2015, 11:01:25 pm
FPGA development environments are so universally awful that anything that can help spur innovation is a godsend.
No argument there, but IMO the biggest problem is the archaic nature of the HDLs, and definitely an area where something new is well overdue.
Reinventing what's already there is just wasted effort and will not do anything to improve the awfulness.
Unless of course someone can come up with some magic solution to place & route much, much more quickly.
Just imagine how useful it would be to use all the power sitting in GPUs to get near-instant update of a device when you change logic onscreen...

Outside of hobby use it wouldn't be too useful at all. Unlike incremental software compilation, changes made higher up in a design force structural changes all the way through the design, and changes in the lowest levels could be unfairly constrained by what is already in place. It is most likely that any reasonable commercial design will use over 50% of some of the resources on a chip (otherwise you would use a smaller chip) and that doesn't leave much room for rip-up and place and route.

You would also have the problem that the performance of your design will depend on everything that has happened to the design beforehand - so you can't give a copy of the source to a co-worker and expect them to get the same results.
No, I'm not talking incremental. I mean you make a change to your HDL or whatever, and it recompiles, places, routes and downloads in a second or two.
Quote

And I have to eat humble pie and say that Vivado isn't that bad (I didn't like it at first)
Is Vivado the new name for ISE, or something completely new?
If so what sort of differences are there?
Title: Re: Lattice iCE40 Bitstream Reverse-Engineered
Post by: c4757p on June 03, 2015, 11:01:40 pm
FPGA development environments are so universally awful that anything that can help spur innovation is a godsend.
No argument there, but IMO the biggest problem is the archaic nature of the HDLs, and definitely an area where something new is well overdue.

The archaic nature of the HDLs? What archaic nature would this be...?
Title: Re: Lattice iCE40 Bitstream Reverse-Engineered
Post by: hamster_nz on June 03, 2015, 11:10:21 pm
Is Vivado the new name for ISE, or something completely new?
If so what sort of differences are there?

Vivado is the toolset for Xilinx's 7-series devices (Zynq, Artix, ...). It is much more oriented around building IP blocks with standard interfaces and joining them graphically to build your SoC or other design. It supports quite a high level of design automation (so when you add a GPIO IP block to your ARM SoC, it will connect up the AXI interconnects and insert any bridges and reset controllers you need, and so on).

It also has support for high-level synthesis (i.e. a subset of C to HDL).

The tools are all quite integrated, so you can bounce between project, implemented design, RTL design, block-level design, simulation and hardware programming in the one application window. But you do have to drop out into Eclipse for the software side of a design.
Title: Re: Lattice iCE40 Bitstream Reverse-Engineered
Post by: Bassman59 on June 03, 2015, 11:16:06 pm
FPGA development environments are so universally awful that anything that can help spur innovation is a godsend.
No argument there, but IMO the biggest problem is the archaic nature of the HDLs, and definitely an area where something new is well overdue.

The archaic nature of the HDLs? What archaic nature would this be...?

The fact that the two major HDLs were initially defined before some of these kids were born? You know, like how there are kids who want to do full-scale object-oriented coding on an 8051. Or something.

I have no idea. VHDL does everything I need it to do.
Title: Re: Lattice iCE40 Bitstream Reverse-Engineered
Post by: mikeselectricstuff on June 03, 2015, 11:22:12 pm
FPGA development environments are so universally awful that anything that can help spur innovation is a godsend.
No argument there, but IMO the biggest problem is the archaic nature of the HDLs, and definitely an area where something new is well overdue.

The archaic nature of the HDLs? What archaic nature would this be...?
Speaking about VHDL as that's what I sort-of know...
no block comments, no #define, #include, compile-time macros
Having to hope that the synthesis process infers what you want instead of being able to specify things more simply & directly (e.g. stuff like async resets).
Yet another different comment symbol
 No meaningful way (AFAICS) to easily manage build variants for different parts, pinouts etc. (more of an issue with the whole toolchain than the HDL) 

I'll admit I don't use FPGAs that often and don't know VHDL inside out, but it just seems that I'm often finding that the sort of things that I do routinely in software projects are a total ball-ache to do.

A concrete example - I use Lattice Diamond but my previous experience of ISE seemed pretty much the same.
I have a design that can be used on one of two different PCBs, with a few different FPGAs, depending on pins and memory required for a particular build.
It already has a lot of parameterization using VHDL constants (much of which would have been easier with #ifdef-type structures), but what I'd like to be able to do is have a single #define in the top-level source that would pull in the required set of pin definitions and define the FPGA type depending on which PCB it will go on and how big a memory it needs.

And there isn't even a way to have it automatically download to a device on a successful build, or even beep at me to tell me it's done compiling. Pathetic.



Title: Re: Lattice iCE40 Bitstream Reverse-Engineered
Post by: c4757p on June 03, 2015, 11:27:47 pm
Not every language has block comments; any editor that isn't incompetent can still comment off a block. You're the first person I've ever seen claim that not having macros makes it archaic - those are archaic features typically only provided by older languages. Async resets are easy to do; I don't know what you're on about there (though I'd not really recommend using them at all). Build variants are easily accomplished using generics.
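A minimal sketch of what "build variants using generics" can look like in VHDL - the entity, generic, and port names here are invented for illustration, not from anyone's actual design:

```vhdl
library ieee;
use ieee.std_logic_1164.all;

-- Hypothetical top level: a single BOARD_REV generic selects between two
-- board-specific blocks; each build's project settings set the generic.
entity top is
    generic (
        BOARD_REV : integer := 1    -- 1 or 2, chosen per build
    );
    port (
        clk : in  std_ulogic;
        led : out std_ulogic_vector(4 downto 0)
    );
end entity top;

architecture rtl of top is
begin
    gen_rev1 : if BOARD_REV = 1 generate
        led <= "11111";             -- revision-1-only logic would go here
    end generate gen_rev1;

    gen_rev2 : if BOARD_REV = 2 generate
        led <= "00001";             -- revision-2-only logic would go here
    end generate gen_rev2;
end architecture rtl;
```

Only the branch matching the generic's value is elaborated, so the two variants never conflict.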

And  there isn't even a way to have it automatically download to a device on a successful build. or even beep at me to tell me it's done compiling. Pathetic.

buh, wha?? you're using a computer, dude, script it!
Title: Re: Lattice iCE40 Bitstream Reverse-Engineered
Post by: Muxr on June 03, 2015, 11:46:31 pm
Can you define pin settings in Verilog? I have not been able to figure it out. I am also using the Lattice toolchain. Been doing it from the spreadsheet view which I find really annoying. I have a KiCad project that knows all the pins, my CPLD pins depend on the board layout so I would really like KiCad to drive that, so I don't have to manually define them.
Title: Re: Lattice iCE40 Bitstream Reverse-Engineered
Post by: mikeselectricstuff on June 03, 2015, 11:56:58 pm
Not every language has block comments
; any editor that isn't incompetent can still comment off a block.
I shouldn't have to change editors because of inadequacies in a language

Quote
Async resets are easy to do, I don't know what you're on about there
I'm sure they are if you use VHDL regularly. As an occasional user it's the sort of thing I always have to look up, and find the syntax somewhat cumbersome for what should be a simple operation
Quote

 (though I'd not really recommend using them at all).
Power-up initialisation? Dealing with lost input clocks?
This would be less of an issue if it weren't for the near-impossibility of easily specifying the logic state you want a node to be in at powerup. The FPGA hardware initialises everything to a known state, but by the time the toolchain has had its way it can be anyone's guess what you end up with.
Quote
Build variants are easily accomplished using generics.
Build variants include FPGA type and pinouts. You should be able to specify these in the  HDL.
I have a vague recollection of reading that in ISE there is a way to specify pin constraints in HDL but couldn't find it last time I looked. And even then I don't think this was amenable to selecting with compile-time constants

And  there isn't even a way to have it automatically download to a device on a successful build. or even beep at me to tell me it's done compiling. Pathetic.


buh, wha?? you're using a computer, dude, script it!
This is the sort of stuff that's been standard in MCU environments forever. FPGA tools are pretty retarded in this sort of usability aspect. Again, I shouldn't have to dick around with scripts to get what should be a standard feature of any sane dev environment.

And don't get me started on the ridiculously limited ways to specify ROM data for BlockRAM (in Lattice Diamond at least). On what planet would someone call a text file full of binary strings a "binary" file...? And having to specify ROM data in the native bus width rather than the width you generated it at...
Title: Re: Lattice iCE40 Bitstream Reverse-Engineered
Post by: c4757p on June 04, 2015, 12:21:49 am
Not every language has block comments
; any editor that isn't incompetent can still comment off a block.
I shouldn't have to change editors because of inadequacies in a language

Is it too bloody much that you use the right hammer for your nail before you start whinging about the nail?

Quote
This would be less of an issue if it weren't for the near-impossibility of easily specifying the logic state you want a node to be in at powerup.

erm.?

Code: [Select]
signal foo: std_ulogic := '1';

There, done.

Quote
Build variants include FPGA type and pinouts. You should be able to specify these in the  HDL.

This is done in the constraints file. I really can't imagine you have too much issue with having one separate file for that?
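For instance, in Xilinx's UCF format (the kind of constraints file being discussed) pin assignments live in their own file, separate from the HDL; the net names and pin locations below are invented:

```
NET "clk"    LOC = "P38" | IOSTANDARD = LVCMOS33;
NET "led<0>" LOC = "P12" | IOSTANDARD = LVCMOS33;
NET "led<1>" LOC = "P13" | IOSTANDARD = LVCMOS33;
```

Swapping boards then means swapping this one file while the HDL stays untouched.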
Title: Re: Lattice iCE40 Bitstream Reverse-Engineered
Post by: Bassman59 on June 04, 2015, 12:28:37 am
Speaking about VHDL as that's what I sort-of know...
no block comments,

As already noted, that's easily handled by your favorite editor. But it was added to VHDL-2008! It's up to the tool vendors to implement it (as well as other new features).

Quote
no #define,

Why do you need #define? If you want to #define a constant, you can do so either in the architecture's declarative block (before the first begin), or you can pass one through an entity interface as a generic.

You can also create an enumerated type.

If you want to #define a macro, I suppose I should ask: why?

Quote
#include

Why do you need to #include anything? If you wish to "include" a component declaration, you should be using the direct instantiation of an entity idiom instead, such as:

u_foo : entity work.foo port map (bar => bar, bletch => bletch);

This has been in the language since VHDL-93.

If you wish to #include constants, type definitions, function and procedure declarations, use packages, which have been in the language since the beginning.
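A sketch of that idiom - the package name and its contents are made up for illustration:

```vhdl
library ieee;
use ieee.std_logic_1164.all;

-- A package plays the role that a #include'd header of constants
-- and typedefs would play in C.
package board_pkg is
    constant RAM_SIZE_BITS : integer := 10;
    constant NUM_CHANNELS  : integer := 4;
    subtype word_t is std_ulogic_vector(15 downto 0);
end package board_pkg;
```

Any design unit then pulls it in with `use work.board_pkg.all;` rather than textually including a file.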

Quote
compile-time macros

See above.

Quote
Having to hope that the synthesis process infers what you want instead of being able to specify things more simply & directly (e.g. stuff like async resets).

If you code an async reset, you get an async reset. There's no hoping involved. Now, yes, we know that the synthesis tools have a switch which will tell them to convert async resets into sync resets, but that's off by default and of course you are better off coding that sort of thing directly.
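For reference, this is the standard VHDL async-reset template being described - the reset is in the sensitivity list and tested before the clock edge (signal names `d`, `q`, `clk`, `rst` are assumed declarations):

```vhdl
-- Asynchronous reset: rst acts immediately, regardless of clk.
process (clk, rst)
begin
    if rst = '1' then
        q <= '0';
    elsif rising_edge(clk) then     -- synchronous data path
        q <= d;
    end if;
end process;
```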

Quote
No meaningful way (AFAICS) to easily manage build variants for different parts, pinouts etc. (more of an issue with the whole toolchain than the HDL)

With generics and generate statements you can manage quite a bit. For example, I have this Camera Link transmitter module I wrote initially for Virtex-4, and I ported it to Spartan-6. What's the difference? XST is too stupid to infer DDR output (and input) flops, so you have to instantiate them. And the library element is not the same for the two families. So a simple generic-and-generate combo chooses the correct primitive.

As for different pinouts, you are constrained by the family architectures, and which pins support specific features (global clock inputs, differential pairs, hard IP blocks like memory controllers, whatever), and you are correct, that's not really a synthesis issue, more of an implementation-tool issue. So you have to maintain a constraint file with pinouts and timing constraints.

Quote
I'll admit I don't use FPGAs that often and don't know VHDL inside out, but it just seems that I'm often finding that the sort of things that I do routinely in software projects are a total ball-ache to do.

I do FPGAs every day, and I know VHDL as well as anyone who's used it forever can know it, I guess.

I suppose that I can say that I find some of the things one needs to do going between different processors and compilers can be a total ball-ache to do. Oh, this ARM needs this sort of initialization, and code I wrote for my 8051 can't be directly ported to ARM because the 8051 compiler needs to care about different memory spaces where ARM doesn't, and what is this linker-description file stuff, anyway?

Quote
A concrete example - I use Lattice Diamond but my previous experience of ISE seemed pretty much the same.
I have a design that can be used on one of two different PCBs, with a few different FPGAs, depending on pins and memory required for a particular build.
It already has a lot of parameterization using VHDL constants (much of which would have been easier with #ifdef-type structures), but what I'd like to be able to do is have a single #define in the top-level source that would pull in the required set of pin definitions and define the FPGA type depending on which PCB it will go on and how big a memory it needs.

Without seeing the design, I really can't suggest better ways of doing what you want. But I do think that setting generics at the top level and making sure the synthesizer can find the correct entity source files and such, you can get there.

What I have done when my (Xilinx) designs need to support different board configurations (depending on the number of ADC channels, etc) is to write the source code to cover the different configurations (again, top-level generics and generate statements as needed), and each configuration has its own .xise and .ucf files. The .xise file has correct settings for the generics which filter down to the source, and the .ucf file has the pinouts defined for each variant. Then of course I have to build each variant, but that's not really a big deal.

Quote
And  there isn't even a way to have it automatically download to a device on a successful build.

Most of the time, I'm not even connected to the real hardware. So that's not very interesting.

Quote
or even beep at me to tell me it's done compiling.

I have the sound turned down on the machine. If everyone's computer beeped every time it did something, there'd be cacophony here. But what I would like to see is an option which will make the entire screen blink brightly and annoyingly so that when the boss walks by, he'll see that the computer is actually doing something and I'm not just staring into space!
Title: Re: Lattice iCE40 Bitstream Reverse-Engineered
Post by: hamster_nz on June 04, 2015, 12:31:29 am
Quote
This would be less of an issue if it weren't for the near-impossibility of easily specifying the logic state you want a node to be in at powerup.

erm.?

Code: [Select]
signal foo: std_ulogic := '1';

There, done.


He might be on one of those FPGA types that are more ASIC-like and don't allow you to specify an initial value for flip-flops, and force you to have a reset.

If so, then that isn't the problem of the language, but due to decisions taken by the FPGA designers when they decided on their programmable logic architecture.
Title: Re: Lattice iCE40 Bitstream Reverse-Engineered
Post by: Bassman59 on June 04, 2015, 12:33:16 am
Can you define pin settings in Verilog? I have not been able to figure it out. I am also using the Lattice toolchain. Been doing it from the spreadsheet view which I find really annoying. I have a KiCad project that knows all the pins, my CPLD pins depend on the board layout so I would really like KiCad to drive that, so I don't have to manually define them.

Why would you want to define the pinouts in the HDL source? That makes the design non-portable. Pin definitions belong in the implementation constraint file.

I suppose a Python script which can take the eeschema files and parse it to find your FPGA pinouts and then update the .ucf file would be clever indeed. (There's a way to do this in Altium.) But you have to ensure that you choose valid pins for things; you don't want to move your clock to a non-GCLK pin because it makes your layout easier.
Title: Re: Lattice iCE40 Bitstream Reverse-Engineered
Post by: c4757p on June 04, 2015, 12:36:31 am
hamster_nz: indeed, many of the complaints seem like they would be better directed at Lattice. Funny, he usually pooh-poohs any suggestion that Lattice themselves are archaic or otherwise a ball-ache. >:D

Here on the Xilinx+Altera side, all these complaints have been solved. (Well, except for the IDE sucking, but all manufacturer-provided IDEs do that.)
Title: Re: Lattice iCE40 Bitstream Reverse-Engineered
Post by: Bassman59 on June 04, 2015, 12:41:09 am
Quote
This would be less of an issue if it weren't for the near-impossibility of easily specifying the logic state you want a node to be in at powerup.

erm.?

Code: [Select]
signal foo: std_ulogic := '1';

There, done.


He might be on one of those FPGA types that are more ASIC-like and don't allow you to specify an initial value for flip-flips, and force you to have a reset.

That bit me. I took some code I had written for a Xilinx part, which has initializers as part of the signal declarations and also used synchronous resets (where needed), and ported it to an Actel ProASIC-3L part, which doesn't make use of the initializers as part of the FPGA configuration, and whose flops don't support a synchronous reset (Synplify built it with logic in front of the flops' D inputs). Recoding was a pain, but it was necessary because the sync reset killed performance.

Quote
If so, then that isn't the problem of the language, but due to decisions taken by the FPGA designers when they decided on their programmable logic architecture.

Exactly.
Title: Re: Lattice iCE40 Bitstream Reverse-Engineered
Post by: Muxr on June 04, 2015, 12:47:45 am
Can you define pin settings in Verilog? I have not been able to figure it out. I am also using the Lattice toolchain. Been doing it from the spreadsheet view which I find really annoying. I have a KiCad project that knows all the pins, my CPLD pins depend on the board layout so I would really like KiCad to drive that, so I don't have to manually define them.

Why would you want to define the pinouts in the HDL source? That makes the design non-portable. Pin definitions belong in the implementation constraint file.

I suppose a Python script which can take the eeschema files and parse it to find your FPGA pinouts and then update the .ucf file would be clever indeed. (There's a way to do this in Altium.) But you have to ensure that you choose valid pins for things; you don't want to move your clock to a non-GCLK pin because it makes your layout easier.
Right, I wasn't looking to do it in the same HDL source file, but I just learned of constraint files. So that answers my question, thanks!

Yeah I will write a script to do it automatically, sure I still have to make the decision which pin gets what, when I am laying out the PCB.

edit: btw for the benefit of others who are learning like me, here is an example of a parameterized project for the Lattice toolchain. http://www.trifdev.com/downloads.htm (http://www.trifdev.com/downloads.htm) . I am all set.
Title: Re: Lattice iCE40 Bitstream Reverse-Engineered
Post by: Muxr on June 04, 2015, 01:19:00 am
hamster_nz: indeed, many of the complains seem like they would be better directed at Lattice. Funny, he usually pooh-poohs any suggestion that Lattice themselves are archaic or otherwise a ball-ache. >:D

Here on the Xilinx+Altera side, all these complaints have been solved. (Well, except for the IDE sucking, but all manufacturer-provided IDEs do that.)
I have limited experience with FPGAs/CPLDs, but so far Lattice's CPLDs have been awesome for the project I am working on. Their MachXO2 line has versions with a 3.3V regulator built in, and you can easily get them from Mouser for $4-5 a pop. Their development board is only about $20 and it's as barebones as it gets, so it's perfect for quick prototyping or even a permanent bodge job.
Title: Re: Lattice iCE40 Bitstream Reverse-Engineered
Post by: c4757p on June 04, 2015, 01:38:26 am
Yes, a common dichotomy in programmable logic. Everyone has either good hardware or good software, not both. Lattice's software sucks. (Some of their hardware does too. MachXO2 look cool, though.)
Title: Re: Lattice iCE40 Bitstream Reverse-Engineered
Post by: daqq on June 04, 2015, 05:16:07 am
Quote
Nonsense. There is no real barrier to using an FPGA in an OSHW project that an OS toolchain would remove. Everyone can access FPGA tools at minimal cost.
To be fair, the bigger devices are supported only by the costly versions of stuff - at least for Xilinx. Dunno about the rest: Altera, erm, Intertra?

I never really got this - what's the point of not publishing stuff like this? From a chip manufacturer's point of view it seems the best course of action would be to ensure that ALL of my tools are available to everyone, free of charge. Or at least MASSIVELY support open source initiatives like this.

The availability of free or cheaply priced tools is a big issue for me, I assume the same goes for others.
The big-parts issue isn't really an issue here, as we're talking parts with multi-hundred-dollar price tags, in big BGA packages which need umpteen PCB layers to route. If you're making that kind of investment, a few $k on tools is chickenfeed. I suspect the reason for this is that they can use it to subsidise providing free tools to lower-end users.
You are forgetting about salvaged stuff, reverse engineering, hacking... also, I've seen hobbyists do some really amazing stuff at home.
Title: Re: Lattice iCE40 Bitstream Reverse-Engineered
Post by: mikeselectricstuff on June 04, 2015, 07:19:32 am
Not every language has block comments
; any editor that isn't incompetent can still comment off a block.
I shouldn't have to change editors because of inadequacies in a language
Quote

Is it too bloody much that you use the right hammer for your nail before you start whinging about the nail?

If nobody complains, people will assume all is OK when there is scope for improvement. Things like block comments are useful for making code readable, as well as allowing quick nondestructive removal of sections for debugging, and there is no excuse for the lack of them in a language.
Quote
This would be less of an issue if it weren't for the near-impossibility of easily specifying the logic state you want a node to be in at powerup.

erm.?

Code: [Select]
signal foo: std_ulogic := '1';

There, done.
Nope - this didn't work. A while ago I spent a lot of time trying to solve this elegantly and ended up having to do a bodge.
The situation was that I wanted a signal that was '0' at powerup, changed to '1' on a particular event, and stayed in that state forever.
As I was never explicitly assigning it to '0', it minimised the logic to nothing and set it permanently high. It ignored the attempt to initialise it as you suggested.
It's a while ago now but I think the only way I could force the state was to make it part of a 2-bit counter which got incremented if the previous value was 0.
This may be a limitation of the toolchain rather than the language itself, though I did get the impression from some newsgroup messages that I wasn't alone in having trouble doing this.
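For comparison, the straightforward coding of such a set-once flag is below; whether the ':=' initial value actually survives through to the bitstream depends on the device and toolchain, which is exactly the trouble described above (signal names are illustrative):

```vhdl
-- In the architecture declarative region:
signal armed : std_ulogic := '0';   -- requested power-up state

-- In the architecture body: a "sticky" flag, set once and never cleared.
process (clk)
begin
    if rising_edge(clk) then
        if trigger = '1' then
            armed <= '1';           -- no assignment back to '0' anywhere,
                                    -- so everything hinges on the initialiser
        end if;
    end if;
end process;
```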

Quote
Build variants include FPGA type and pinouts. You should be able to specify these in the  HDL.

This is done in the constraints file. I really can't imagine you have too much issue with having one separate file for that?
I don't have a problem with having the file separate. The problem is (at least in Lattice Diamond) that I can't specify the FPGA part number in the file, and I also can't do conditional pin assignment based on a constant build-option symbol that is visible to both the HDL and the constraints file.

If I could specify everything in the HDL, and do #define and #include etc., I could do something along the lines of

#define  variant 2
#if variant=2
fpga_type="LCMXO2-100"
#include pinout_for_version_2.inc
constant ram_size_bits:integer=10;
#endif

etc.

Title: Re: Lattice iCE40 Bitstream Reverse-Engineered
Post by: mikeselectricstuff on June 04, 2015, 07:35:53 am
hamster_nz: indeed, many of the complains seem like they would be better directed at Lattice. Funny, he usually pooh-poohs any suggestion that Lattice themselves are archaic or otherwise a ball-ache. >:D

Here on the Xilinx+Altera side, all these complaints have been solved. (Well, except for the IDE sucking, but all manufacturer-provided IDEs do that.)
I only quote Lattice because that's what I know - I used ISE a while ago and it looked pretty much identical - maybe it's better now. Never used Altera as they've historically not had many low-cost parts.

Let me give some background as to why I find FPGA stuff frustrating.

I only use FPGAs very occasionally - maybe a week or so every few months - so I'm not intimately familiar with things. My FPGA designs are very simple as these things go - typically stuff like generating the waveforms for driving LED matrices.
I understand hardware. I know exactly what I am trying to achieve, but I often find it a frustrating process making it happen, and a variety of factors contribute to making it harder than it should/could be, from the language through to the IDE.

Why do I want #define and #include ?
For exactly the same reason it's there in C. HDL isn't a programming language, but the way it's used in the context of an FPGA hardware project is no different, so it ought to be possible to use the same techniques to do the same type of thing.
 
It would mean I could specify options in a way that is consistent across all sections of the design, from signal declarations and constraints to sync and async logic, and easily omit sections. Yes, I can do _some_ of it with constants, but not all - and not all in the same way in different parts of the design.

 
 
Title: Re: Lattice iCE40 Bitstream Reverse-Engineered
Post by: nctnico on June 04, 2015, 01:51:19 pm
In VHDL you can create a package which has global constants (much like a header file in C). Based on these constants you can use if - generate blocks to include or exclude certain pieces of VHDL code. The same goes for the width of busses. I have used these kind of techniques to keep my FPGA designs configurable. I always say that the key to use VHDL effectively is to treat it as a programming language and not to use it to describe hardware.

Anyway, I think it is good to have open-source FPGA tools. People can bolt on their own extensions more easily, like running their design files through a pre-processor.
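A small sketch of the package-plus-generate pattern nctnico describes; the package, constant, and entity names are invented for the example:

```vhdl
library ieee;
use ieee.std_logic_1164.all;
use work.config_pkg.all;  -- assumed to define INCLUDE_UART : boolean
                          -- and BUS_WIDTH : integer as global build options

architecture rtl of top is
    -- bus width driven by the shared package constant
    signal data : std_ulogic_vector(BUS_WIDTH - 1 downto 0);
begin
    gen_uart : if INCLUDE_UART generate
        -- the UART block is only elaborated in builds where
        -- config_pkg sets INCLUDE_UART := true
    end generate gen_uart;
end architecture rtl;
```

Changing one constant in config_pkg then reconfigures every unit that uses it, much like editing a shared C header.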
Title: Re: Lattice iCE40 Bitstream Reverse-Engineered
Post by: mikeselectricstuff on June 04, 2015, 06:07:56 pm
In VHDL you can create a package which has global constants (much like a header file in C). Based on these constants you can use if - generate blocks to include or exclude certain pieces of VHDL code. The same goes for the width of busses. I have used these kind of techniques to keep my FPGA designs configurable.

That's fine as far as it goes but doesn't deal with pinout and part type variants.
Pinouts within one device type can sometimes be handled by defining generic node names, e.g. pin_1, pin_2 etc., then selectively mapping within the HDL, but it's much messier than it could be.
 
Title: Re: Lattice iCE40 Bitstream Reverse-Engineered
Post by: c4757p on June 04, 2015, 06:09:31 pm
But the constraints file handles pinouts and part type variants! I don't get what the problem is.
Title: Re: Lattice iCE40 Bitstream Reverse-Engineered
Post by: mikeselectricstuff on June 04, 2015, 06:49:03 pm
But the constraints file handles pinouts and part type variants! I don't get what the problem is.
The problem is that there is no visibility between the constraints file and the HDL, and no way to select a group of constraints (pins in particular) based on a single build option. And no way to select or access the device type.
If, for example there was an IDE/project-generated symbol representing the selected device, visible to both the HDL and constraints, and the constraints file could do #ifdefs or similar based on it, that would be a reasonable solution.

Probably the closest I could get is to turn the whole design into a component and have two projects with different constraints files, and an HDL wrapper that pulls in the same component for the main functionality.

Still way messier than the same process for a microcontroller project, where the C source can look at a processor type variable set by the environment and define pin mappings and functionality based on it.
Title: Re: Lattice iCE40 Bitstream Reverse-Engineered
Post by: c4757p on June 04, 2015, 06:58:05 pm
I don't know about Lattice, but in both Xilinx and Altera software, you can specify the constraints using TCL (IIRC Altera does this all the time; it's a less common alternative in Xilinx). A TCL constraints file can definitely access all the information you want to make flexible variants, as it's a full script.

Xilinx: UG760 (http://www.xilinx.com/support/documentation/sw_manuals/xilinx13_1/ug760_patut_tcl.pdf), page 26

Altera: Quartus II handbook, vol 2 (https://www.altera.com/content/dam/altera-www/global/en_US/pdfs/literature/hb/qts/qts_qii52001.pdf), page 5
Title: Re: Lattice iCE40 Bitstream Reverse-Engineered
Post by: hamster_nz on June 04, 2015, 07:15:35 pm
Code: [Select]
library IEEE;
use IEEE.STD_LOGIC_1164.ALL;

entity pin_test is
    Port ( sw : in  STD_LOGIC;
           led : out  STD_LOGIC);
end pin_test;

architecture Behavioral of pin_test is
   attribute LOC : string;
   attribute LOC of sw: signal  is "P114";
   attribute LOC of led: signal is "P123";
begin
   
   led <= sw;

end Behavioral;

Built it, checked it - Q.E.D.
Title: Re: Lattice iCE40 Bitstream Reverse-Engineered
Post by: mikeselectricstuff on June 04, 2015, 07:47:07 pm
Code: [Select]
library IEEE;
use IEEE.STD_LOGIC_1164.ALL;

entity pin_test is
    Port ( sw : in  STD_LOGIC;
           led : out  STD_LOGIC);
end pin_test;

architecture Behavioral of pin_test is
   attribute LOC : string;
   attribute LOC of sw: signal  is "P114";
   attribute LOC of led: signal is "P123";
begin
   
   led <= sw;

end Behavioral;

Built it, checked it - Q.E.D.
OK now how do you make the pin assignment conditional on a compile-time constant...?
or, better, stick it in a separate file, and be able to select one of several files at compile time, preferably depending on the device selected in the project

With #define/#include it would be easy and obvious how to do it.
Title: Re: Lattice iCE40 Bitstream Reverse-Engineered
Post by: Bassman59 on June 04, 2015, 08:00:26 pm
You can do this with VHDL projects as well but you'd have to use a Makefile or similar workflow. The regular FPGA IDEs aren't set up for this.

This is correct.
Title: Re: Lattice iCE40 Bitstream Reverse-Engineered
Post by: nctnico on June 04, 2015, 08:00:57 pm
But the constraints file handles pinouts and part type variants! I don't get what the problem is.
The problem is that there is no visibility between the constraints file and the HDL, and no way to select a group of constraints (pins in particular) based on a single build option. And no way to select or access the device type.
If, for example there was an IDE/project-generated symbol representing the selected device, visible to both the HDL and constraints, and the constraints file could do #ifdefs or similar based on it, that would be a reasonable solution.

Probably the closest I could get is to turn the whole design into a component and have two projects with different constraints files, and an HDL wrapper that pulls in the same component for the main functionality.

Still way messier than the same process for a microcontroller project, where the C source can look at a processor type variable set by the environment and define pin mappings and functionality based on it.
You can do this with VHDL projects as well but you'd have to use a Makefile or similar workflow. CERN has developed a tool called HDLmake to create an ISE project file based on a Makefile-oriented approach. The regular FPGA IDEs aren't set up for this. Either way, I see a pin mapping more like a linker description file (which goes at what address) than something that should be included in the HDL. Still, you'd need different FPGA bit files for each hardware version.

@Bassman59: had to delete the post you replied to; something went wrong.
Title: Re: Lattice iCE40 Bitstream Reverse-Engineered
Post by: hamster_nz on June 04, 2015, 08:10:33 pm
Code: [Select]
library IEEE;
use IEEE.STD_LOGIC_1164.ALL;

entity pin_test is
    Port ( sw : in  STD_LOGIC;
           led : out  STD_LOGIC);
end pin_test;

architecture Behavioral of pin_test is
   attribute LOC : string;
   attribute LOC of sw: signal  is "P114";
   attribute LOC of led: signal is "P123";
begin
   
   led <= sw;

end Behavioral;

Built it, checked it - Q.E.D.
OK now how do you make the pin assignment conditional on a compile-time constant...?
or, better, stick it in a separate file, and be able to select one of several files at compile time, preferably depending on the device selected in the project

With #define/#include it would be easy and obvious how to do it.

It can all be done - just not in the way you are familiar with, because nobody would do it that way: it doesn't make sense here.

It might just be true that 30 years of VHDL engineers missed something as obvious as "oh, how about we add #ifdef from C, a language that has been around since 1972". It might also be true that you are used to that tool and want to use it here even though it isn't really appropriate.

If I had a good understanding of what your needs truly are (i.e. what problem having #include would solve for you), I could suggest a solution... but it will likely be one or more of the following:

1. Put all your I/O through a single module, and then conditionally use the different modules in your design (VHDL 'GENERATE' clause). This is how you would handle architecture-specific features (like PLLs and so on), so that if you move to a different architecture you don't have to rework a whole lot of things.

2. Have two different constraints files, and use a script to build the second project. This is how I would handle different PCB layouts for the same chip.

3. Have two different build projects, which share the common source files where appropriate. This is how I would handle building for different vendors' targets (e.g. Altera and Xilinx) where they need different toolchains (like a cross-compile).

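A rough sketch of option 1, with invented entity and constant names - note the generate condition must be static (a constant or a generic), and TARGET could come from a shared package:

Code: [Select]
constant TARGET : string := "BOARD_A";  -- set in a package or top-level generic

board_a_gen : if TARGET = "BOARD_A" generate
   u_io : entity work.io_board_a port map (clk => clk, led => led);
end generate board_a_gen;

board_b_gen : if TARGET = "BOARD_B" generate
   u_io : entity work.io_board_b port map (clk => clk, led => led);
end generate board_b_gen;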
Title: Re: Lattice iCE40 Bitstream Reverse-Engineered
Post by: Bassman59 on June 04, 2015, 08:10:56 pm
But the constraints file handles pinouts and part type variants! I don't get what the problem is.
The problem is that there is no visibility between the constraints file and the HDL

Because the HDL is concerned with the logic description, from a functional point of view. The constraints file controls the implementation.

From a logic perspective, pin assignment is irrelevant, as is pin drive strength, input termination, I/O supply voltage, I/O standard and even clock frequency. Putting all of that stuff into the HDL just clutters the files with stuff that makes the design difficult to port.

Really, I don't see what's so difficult about creating a UCF (if you're a Xilinx non-series-7 user) file for your constraints and being done with it.

Quote
and no way to select a group of constraints (pins in particular) based on a single build option. And no way to select or access the device type.

You could use a makefile with the command-line xflow, where you specify all of that stuff.
 
We think that you're making this a lot harder than necessary. It seems to me that you've got one basic hardware design that you use everywhere. As it turns out, most FPGA designs aren't like that. Certainly many logic blocks get reused, but most designs are different enough that it works best to use a source-code-control system to pull in those reusable logic blocks, with the constraints file created for each specific project.
Title: Re: Lattice iCE40 Bitstream Reverse-Engineered
Post by: mikeselectricstuff on June 04, 2015, 08:54:00 pm
Quote
Really, I don't see what's so difficult about creating a UCF (if you're a Xilinx non-series-7 user) file for your constraints and being done with it.
That is not the problem. The problem is selecting between multiple sets of constraints for one design.
 
 
Quote
We think that you're making this a lot harder than necessary. It seems to me that you've got one basic hardware design that you use everywhere. As it turns out, most FPGA designs aren't like that. Certainly many logic blocks get reused, but most designs are different enough where that it works best to use a source-code-control system to pull in those resuable logic blocks and the constraints file is created for each specific project.
What is making things difficult is that FPGA tools use different conventions to software tools, for no other reason than history.
The decreasing cost of FPGAs means it is now routine to use them to augment microcontroller systems, which means the tools would be more productive if similar constructs were available.
Learning tools can be a major part of development time, and where the required functionality isn't that complex, it can be disproportionate to the design effort.

In terms of how they are designed-in and used, FPGAs really aren't any different to MCUs at the topmost level - they're both chips that need code.
Arbitrary and unnecessary differences in how that code is created just get in the way of getting the job done.

I have no illusions that we'll see any improvement any time soon, either from manufacturer or OSS tools. It's just that I get annoyed when things are more difficult than they need to be to get the job done.

Like I said, I only know enough VHDL to get by, as I only use it occasionally, and even then have to look at old designs to remind myself how to do stuff.
With my LED matrix driver design (which is maybe 50 lines total, BTW), it took about 2 days of learning and experimenting to get it parameterised to a usable level to configure display resolutions and data formats, and I have 2 project files for the 2 FPGA types and copy the VHDL between them whenever I update it, because that's quicker and simpler for that particular situation. Learning how to dick around with makefiles and TCL would not have been a good investment of my time.

If the FPGA tools had, for example,  the same preprocessor functionality as C it would have taken maybe a couple of hours.
On the flipside I now know a lot more VHDL, but I'll probably have forgotten it by the time I do a new project that happens to have an FPGA hanging on the side of it.

This thread was about how FPGA tools could be made better, albeit via OSS.
All I'm saying is that there are some simple things that could be done to existing tools to make FPGA development more accessible to people familiar with software design flows.


Title: Re: Lattice iCE40 Bitstream Reverse-Engineered
Post by: hamster_nz on June 04, 2015, 10:30:40 pm
I'm about to have a bit of a rant (not a nasty rant, just a brain dump), so please excuse me....

What is making things difficult is that FPGA tools use different conventions to software tools, for no other reason than history.
...
In terms of how they are designed-in and used, FPGAs really aren't any different to MCUs at the topmost level - they're both chips that need code.
...

But HDL design isn't like software - it might look like software, but it isn't. A lot of the abstractions that S/W gives you are gone. Let's take the first thing I was told in programming 101 about 30 years ago....

Quote
Computers can do three things:

* Sequential execution of sets of instructions
* Conditional execution of sets of instructions
* Iterative execution of sets of instructions (a.k.a. loops)

... and as programmers it is our job to tell the computer what instructions are needed to complete the task. Our example for the day was "instructions for an alien on how to make a cup of tea".

Working on FPGAs in an HDL looks the same (code in a text editor), but it is very different - you no longer have:

* sequential execution (everything occurs in parallel, all over the chip)

* conditional execution as you know it - you are no longer executing a series of statements, but configuring a chain of muxes and digital logic

* loops - they do not really exist, unless they are bounded at compile time

* fixed data types, apart from the fact that binary bits can be treated as numbers

* dynamic resource allocation - you can't just "malloc" 64k of SRAM into a design at runtime

That is a lot of abstractions to lose, and on top of that we pick up new problems:

* Timing closure - is everything simple enough to complete in one tick of the clock?

* Clock domains and clock domain crossings are just annoying

* Interfacing with FPGA resources

* Fighting against resource limitations - we only have so much chip to use

You can write the instructions for an alien to make a cup of tea in HDL, but it is for an alien with no short-term memory. It looks completely different, involving a finite state machine, watching a clock and remembering lots of state information.
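For instance, even a fragment of the tea recipe ends up as explicit state plus a clocked process - a rough sketch (state and signal names invented):

Code: [Select]
type tea_state is (IDLE, BOIL_WATER, POUR, DONE);
signal state : tea_state := IDLE;
...
process(clk)
begin
   if rising_edge(clk) then
      -- the FSM *is* the short-term memory: nothing is remembered
      -- between clock ticks unless we store it in a register
      case state is
         when IDLE       => if start = '1'     then state <= BOIL_WATER; end if;
         when BOIL_WATER => if water_hot = '1' then state <= POUR;       end if;
         when POUR       => if cup_full = '1'  then state <= DONE;       end if;
         when DONE       => state <= IDLE;
      end case;
   end if;
end process;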

And this is why learning HDL is so painful. You hear people say "I've had 20 years of programming, but I find FPGAs hard". This is because

(a) it is complex - implementing a non-trivial high-speed digital design is hard work

(b) it is like somebody saying "I've got a degree in literature and used a word processor to write 20 books, yet I find writing a working Arduino program hard". Of course it is. Although it looks the same, it isn't what you are skilled in.

A course in digital logic is more appropriate as a grounding in low-level FPGA work than a course in programming. If you have ever used TTL logic to build a video card, then you would say "My, this FPGA stuff is a walk in the park, it's so fast and flexible, and turnaround time is a few minutes".

With the foundations being so different, it follows that the tools will be different - for example, that is why there is no 'gdb' for VHDL - it is the wrong tool for the job.





Title: Re: Lattice iCE40 Bitstream Reverse-Engineered
Post by: mikeselectricstuff on June 04, 2015, 10:48:54 pm
I'm about to have a bit of a rant (not a nasty rant, just a brain dump), so please excuse me....

What is making things difficult is that FPGA tools use different conventions to software tools, for no other reason than history.
...
In terms of how they are designed-in and used, FPGAs really aren't any different to MCUs at the topmost level - they're both chips that need code.
...

But HDL design isn't like software - it might look like software but it isn't
I wasn't suggesting otherwise. It has nothing to do with the semantics of the language itself, but with the process of managing the deployment of that code to actual devices.
Whilst the meaning of what's written, and the design process, are different, the way that coding process fits into the product development process is basically the same - you write code to make the chip do what you want, and you use a tool to turn your code into something that the chip can understand.
Like I said, FPGAs and MCUs are both chips that need code.
That code is part of the product. There is no reason that the ways of managing that code at the "preprocessor" and "project" level should be any different - it's just an accident of history, but it makes things harder, less consistent and less easy to understand.
Maybe if Ada had won out over C it wouldn't have been so bad...




Title: Re: Lattice iCE40 Bitstream Reverse-Engineered
Post by: Muxr on June 05, 2015, 02:48:45 am
To hamster_nz's point: I think programmers in general have a hard time with low-level programming, since most development happens at a higher level of the stack, and you're encouraged to reuse or leverage other people's low-level tools and libs to save time and keep the project scope focused.

But I don't think an HDL is particularly hard. It kind of reminds me of using a coroutine or a green-thread framework in networking, except you're very restricted. But then again, I have worked with developers who struggle with multithreaded programming. What has helped me, I think, is that I have assembly programming experience, and writing multithreaded code is what I do at my day job.

I've certainly had a harder time grasping some higher-level languages (Haskell) or concepts. At least in an HDL, once you get the basic building blocks you can establish a solid fundamental understanding of how everything else is built, because it has to adhere to the basic principles of digital logic, which are set in stone (0,1,x,z was a bit weird though), whereas higher-level paradigms in your standard programming languages are laced with syntactic sugar and magic that often make little sense.

For instance, after knowing Perl and using it, I still come across code I don't understand, whereas after only a few days of learning Verilog I can read other people's code with a fair amount of confidence that I understand what's going on.

I don't think there are as many people who write HDL, or if there are, it's not as well represented as your [regular] software development. Perhaps because most of it is proprietary (ASIC design) and not as open. That, I think, is what makes it tougher to get into, since there are fewer resources to learn from out there.
Title: Re: Lattice iCE40 Bitstream Reverse-Engineered
Post by: Bassman59 on June 05, 2015, 03:47:47 pm
But I don't think an HDL is particularly hard.

HDLs are not at all hard if your background is in digital logic design.

Quote
At least in an HDL once you get the basic building blocks you can establish a solid fundamental understanding of how everything else is built, because it has to adhere to the basic principles of digital logic which is set in stone (0,1,x,z was a bit weird though)

0, 1, X, Z are not weird -- at its base, binary logic has only two possible states (you can guess what they are). You need X to model contention or some other unknown state, and you need Z to model the high-Z driver (tri-state) condition.
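A typical use of 'Z' looks like this (signal names invented) - the driver releases a shared bus when its output enable is deasserted, and if two enabled drivers disagree, the simulator resolves the conflict to 'X':

Code: [Select]
-- drive the shared bus only while output-enable is asserted;
-- otherwise let go so another driver (or a pull-up) can take it
data_bus <= reg_out when oe = '1' else (others => 'Z');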

Quote
whereas higher level paradigms in your standard programming languages are laced with syntactic sugar and magic that often make little sense.

They make perfect sense if you are versed in the language and its paradigms. At first glance, Objective-C looks "weird" to the K&R C programmer, but after spending some time understanding it, you'll see that it makes sense.

Quote
I don't think there are as many people who write an HDL or if there are it's not as represented as your [regular] software development.

I'm not sure whether you mean "who write an HDL" as a) Engineers who use an HDL to describe logic for implementation in an FPGA or ASIC (or for simulation and verification, which is necessary), or b) tools designers who write synthesis and simulation tools for those HDLs.

But in either case, there are a lot more people writing software for all of the various processors than there are people designing FPGAs or ASICs.
Title: Re: Lattice iCE40 Bitstream Reverse-Engineered
Post by: Muxr on June 05, 2015, 04:18:05 pm
(0, 1, X, Z) was weird for me at first, since my background is mainly software design. I mean, there is nothing hard about the states - they make perfect sense - what wasn't as easy was remembering the outcomes of logic operations on X and Z, which aren't always obvious to me. You probably don't have issues with it since it's what you do on a daily basis. Not difficult, I agree, just a bit weird for someone with my background.

Objective-C was pretty straightforward for me; people who aren't used to managing memory probably get hung up on the reference counting, but I didn't have any issues with it.

What I was referring to is higher-level patterns which might not be obvious to someone not familiar with the pattern, like the first time you come across inversion of control in a large project (it may not be so obvious), or the first time you use a functional language like F or Haskell. But mainly I am talking about things people do in languages like Perl or Ruby, where the language itself can be introspected and modified to implement new language features. In Ruby you can completely change how String behaves, and I know of libraries that do just that - all of which can make things really hard to understand. You can change the entire language, which is why DSLs are common in Ruby.

Or languages which pride themselves on having a lot of syntactic sugar.

For instance this is valid perl, and I know programmers who get a kick out of writing clever code nobody can understand:

Code: [Select]
''=~('(?{'.(']])@+}'^'-/@._]').'"'.('/<[*-_<+>?}{>]@}+@}]])@+}@<[*-_<+>?}{>]@^'^'`^=_^<]_[[]+[/,]_/]-/@._]/^=_^<]_[[]+[/,|').',$/})')
It prints a message.

These features are what make Perl powerful, but they're also the reason why the code can be a nightmare to understand.