Nonsense. There is no real barrier to using an FPGA in an OSHW project that an OS toolchain would remove. Everyone can access FPGA tools at minimal cost.
> Nonsense. There is no real barrier to using an FPGA in an OSHW project that an OS toolchain would remove. Everyone can access FPGA tools at minimal cost.

To be fair, the bigger devices are supported only by the costly versions of stuff - at least for Xilinx. Dunno about the rest; Altera, erm, Intertra?
I never really got this - what's the point of not publishing stuff like this? From a chip manufacturer's point of view it seems the best course of action would be to ensure that ALL of my tools are available to everyone, free of charge. Or at least MASSIVELY support open source initiatives like this.
The availability of free or cheaply priced tools is a big issue for me, and I assume the same goes for others.
I could use this on a non-x86 CPU and (with a lot of learning) port it to an OS that isn't Windows or Linux. If you can't see some possibilities opening up there, no matter how niche, that's a bit of a lack of imagination.
The reason FPGAs aren't used very widely has nothing to do with tools; it's simply that they are only needed in niche applications. Availability of OSS tools won't change that.
> The reason FPGAs aren't used very widely is nothing to do with tools, it's simply that they are only needed in niche applications. Availability of OSS tools won't change that.
In the hobbyist domain I'd say how overwhelming the tools can be is a serious barrier to entry. Something like this could lead to an Arduino-style revolution (and that has had its upsides) for FPGAs.
> Nonsense. There is no real barrier to using an FPGA in an OSHW project that an OS toolchain would remove. Everyone can access FPGA tools at minimal cost.
That is where there is definitely scope for interesting things to be done, and that doesn't need anyone to spend time reinventing the wheel with the back-end tools as any "easy-to-use" front-end can output HDL or RTL to the existing toolchain.
> Nonsense. There is no real barrier to using an FPGA in an OSHW project that an OS toolchain would remove. Everyone can access FPGA tools at minimal cost.

You're quite quick to call someone else's experience nonsense, aren't you?
I don't understand why you feel so strongly against this. Likely, in 1987 people like you were arguing against Richard Stallman for starting gcc.
It's not just the cost that is problematic, it is also the license restrictions on distribution. Distributing, say, a VM or Docker image with a ready-made FPGA toolchain is usually not allowed. Neither is offering an automatic 'build server' for bitstreams. This is the problem Parallella bumped against, as well as some educational projects.
> That is where there is definitely scope for interesting things to be done, and that doesn't need anyone to spend time reinventing the wheel with the back-end tools as any "easy-to-use" front-end can output HDL or RTL to the existing toolchain.
An open-sourced FPGA toolchain could have a benefit. For instance, Python is heavily used in scientific fields; this could leverage on-demand use of higher-level libs like MyHDL to produce a bitstream.
> An open sourced FPGA toolchain could have a benefit. For instance Python is heavily used in scientific field, this could leverage on demand use of higher level libs like MyHDL, and produce bitstream.

And what would be the advantage of that over feeding it into the existing tools?
> An open sourced FPGA toolchain could have a benefit. For instance Python is heavily used in scientific field, this could leverage on demand use of higher level libs like MyHDL, and produce bitstream.
> And what would be the advantage of that over feeding it into the existing tools?

Automated deployment for cloud compute. I think things are about to get interesting in the FPGA and x86 server market. With Intel buying Altera I could see an FPGA with x86 cores, used for hardware acceleration of your compute clusters.
Sort of how OpenCL is being used, except FPGAs can offer distinct advantages, like low latency.
It wouldn't really change how one designs FPGAs into their hardware, but it would open up FPGAs to new applications.
> Automated deployment for cloud compute. I think things are about to get interesting in the FPGA and x86 server market. With Intel buying Altera I could see an FPGA with x86 cores, used for hw acceleration of your compute clusters.
> Sort of how OpenCL is being used, except FPGA can offer distinct advantages, like low latency.
> It wouldn't really change how one designs in FPGAs into their hardware, but it would open up FPGAs to new applications.

Yes, but in that example you'd almost certainly only be loading pre-compiled designs.
There is no way we'd ever see an OSS solution for a device that complex anyway.
> To be fair, the bigger devices are supported only by the costly versions of stuff - at least for Xilinx. Dunno about the rest; Altera, erm, Intertra?
> I never really got this - what's the point of not publishing stuff like this? From a chip manufacturer's point of view it seems the best course of action would be to ensure that ALL of my tools are available to everyone, free of charge. Or at least MASSIVELY support open source initiatives like this.
> The availability of free or cheaply priced tools is a big issue for me, I assume the same goes for others.
FPGA development environments are so universally awful that anything that can help spur innovation is a godsend.
> FPGA development environments are so universally awful that anything that can help spur innovation is a godsend.

No argument there, but IMO the biggest problem is the archaic nature of the HDLs, and that's definitely an area where something new is well overdue.
Reinventing what's already there is just wasted effort and will not do anything to improve the awfulness.
Unless of course someone can come up with some magic solution to place & route much, much more quickly.
Just imagine how useful it would be to use all the power sitting in GPUs to get near-instant update of a device when you change the logic onscreen...
> No argument there, but IMO the biggest problem is the archaic nature of the HDLs, and definitely an area where something new is well overdue.
> Reinventing what's already there is just wasted effort and will not do anything to improve the awfulness.
> Unless of course someone can come up with some magic solution to place & route much, much more quickly.
> Just imagine how useful it would be to use all the power sitting in GPUs to get near-instant update of a device when you change logic onscreen...
Outside of hobby use it wouldn't be too useful at all. Unlike incremental software compilation, changes made higher up in a design force structural changes all the way through the design, and changes in the lowest levels could be unfairly constrained by what is already in place. It is most likely that any reasonable commercial design will use over 50% of some of the resources on a chip (otherwise you would use a smaller chip) and that doesn't leave much room for rip-up and place and route.
You would also have the problem that the performance of your design will depend on everything that has happened to the design beforehand - so you can't give a copy of the source to a co-worker and expect them to get the same results.
And I have to eat humble pie and say that Vivado isn't that bad (I didn't like it at first).
> FPGA development environments are so universally awful that anything that can help spur innovation is a godsend.
> No argument there, but IMO the biggest problem is the archaic nature of the HDLs, and definitely an area where something new is well overdue.
Is Vivado the new name for ISE, or something completely new?
If so, what sort of differences are there?
> FPGA development environments are so universally awful that anything that can help spur innovation is a godsend.
> No argument there, but IMO the biggest problem is the archaic nature of the HDLs, and definitely an area where something new is well overdue.
The archaic nature of the HDLs? What archaic nature would this be...?
> The archaic nature of the HDLs? What archaic nature would this be...?
And there isn't even a way to have it automatically download to a device on a successful build, or even beep at me to tell me it's done compiling. Pathetic.
Not every language has block comments; any editor that isn't incompetent can still comment off a block.
Async resets are easy to do, I don't know what you're on about there (though I'd not really recommend using them at all).
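For what it's worth, the usual way to get an async reset out of synthesis is the well-known two-branch process template. A minimal sketch (entity and signal names are made up for illustration):

```vhdl
library ieee;
use ieee.std_logic_1164.all;

entity dff_ar is
    port (
        clk, rst, d : in  std_logic;
        q           : out std_logic
    );
end entity;

architecture rtl of dff_ar is
begin
    -- rst is in the sensitivity list and is tested before the clock
    -- edge, so synthesis infers a flip-flop with an asynchronous reset.
    process (clk, rst)
    begin
        if rst = '1' then
            q <= '0';
        elsif rising_edge(clk) then
            q <= d;
        end if;
    end process;
end architecture;
```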
Build variants are easily accomplished using generics.
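To illustrate the generics point, here's a hedged sketch (the entity and generic names are invented) of how one source file can serve several build variants without any preprocessor:

```vhdl
library ieee;
use ieee.std_logic_1164.all;

-- Build options become elaboration-time constants: widths, depths
-- and feature switches are all resolved when the design is built.
entity fifo is
    generic (
        DATA_WIDTH : positive := 8;
        DEPTH      : positive := 256;
        USE_ECC    : boolean  := false
    );
    port (
        clk  : in  std_logic;
        din  : in  std_logic_vector(DATA_WIDTH - 1 downto 0);
        dout : out std_logic_vector(DATA_WIDTH - 1 downto 0)
    );
end entity;

-- A variant is then just a different generic map at instantiation:
--   u0: entity work.fifo generic map (DATA_WIDTH => 16, DEPTH => 1024)
--       port map (clk => clk, din => a, dout => b);
```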
> And there isn't even a way to have it automatically download to a device on a successful build, or even beep at me to tell me it's done compiling.

Buh, wha?? You're using a computer, dude, script it!
> Not every language has block comments; any editor that isn't incompetent can still comment off a block.

I shouldn't have to change editors because of inadequacies in a language.
This would be less of an issue if it weren't for the near-impossibility of easily specifying the logic state you want a node to be in at powerup.
signal foo: std_ulogic := '1';
Build variants include FPGA type and pinouts. You should be able to specify these in the HDL.
Speaking about VHDL, as that's what I sort-of know...
- no block comments
- no #define, #include, or compile-time macros
- having to hope that the synthesis process infers what you want instead of being able to specify things more simply & directly (e.g. stuff like async resets)
- no meaningful way (AFAICS) to easily manage build variants for different parts, pinouts etc. (more of an issue with the whole toolchain than the HDL)
I'll admit I don't use FPGAs that often and don't know VHDL inside out, but it just seems that I'm often finding that the sort of things that I do routinely in software projects are a total ball-ache to do.
A concrete example - I use Lattice Diamond but my previous experience of ISE seemed pretty much the same.
I have a design that can be used on one of two different PCBs, with a few different FPGAs, depending on pins and memory required for a particular build.
It already has a lot of parameterization using VHDL constants (much of which would have been easier with #ifdef-type structures), but what I'd like to be able to do is have a single #define in the top-level source that would pull in the required set of pin definitions and define the FPGA type depending on which PCB it will go on and how big a memory it needs.
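Something close to that is possible in plain VHDL using a top-level generic driving if-generate blocks, though the FPGA part number and pinout themselves still have to live in the toolchain's constraint file rather than the HDL. A sketch, with all names invented for illustration:

```vhdl
library ieee;
use ieee.std_logic_1164.all;

entity top is
    generic (
        BOARD : string := "rev_a"  -- hypothetical build-variant switch
    );
    port (clk : in std_logic);
end entity;

architecture rtl of top is
begin
    -- Only the branch matching BOARD is elaborated; the other
    -- disappears entirely from the netlist.
    gen_rev_a : if BOARD = "rev_a" generate
        -- rev_a-specific wiring / memory sizing goes here
    end generate;

    gen_rev_b : if BOARD = "rev_b" generate
        -- rev_b-specific wiring goes here
    end generate;
end architecture;
```

The generic can be overridden per build from the tool's project settings or a script, which gets most of the way to the single-#define workflow described above.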