Author Topic: Looking for set of MCU devkits representative of industry and hobbyists  (Read 10837 times)


Offline NorthGuy

  • Super Contributor
  • ***
  • Posts: 3143
  • Country: ca
And what is it when it is simultaneously dealing with i/o from a USB link plus a front-panel, plus a motor, plus control-loop processing, plus data presentation processing, plus some DSP, plus... That's what hard realtime systems have to do.

MCUs don't work that way. Most of the work is done by peripheral modules, which ensure real-time operation. Peripherals do the buffering, and can do DMA if needed. This removes the urgency from the CPU. The CPU mostly organizes everything and often sits idle waiting for things to happen.
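To make that concrete, here is a minimal sketch of the pattern in C; the HAL calls (uart_init, dma_start_rx) are hypothetical stand-ins for a vendor API, not real functions:

Code: [Select]
/* Sketch of the "peripheral does the work" pattern. The UART+DMA
   move bytes with no per-byte CPU involvement; the CPU reacts only
   once per completed buffer. HAL names are hypothetical. */
#include <stdint.h>

#define BUF_LEN 256

extern void uart_init(uint32_t baud);                  /* hypothetical HAL */
extern void dma_start_rx(volatile uint8_t *buf, int len);
extern void process(volatile uint8_t *buf, int len);   /* application code */

static volatile uint8_t rx_buf[BUF_LEN];
static volatile int buf_ready;

/* Run by the DMA controller's interrupt once per full buffer,
   not once per byte - the urgency stays in hardware. */
void dma_rx_complete_isr(void)
{
    buf_ready = 1;
}

int main(void)
{
    uart_init(115200);
    dma_start_rx(rx_buf, BUF_LEN);

    for (;;) {
        if (buf_ready) {                     /* CPU mostly waits... */
            buf_ready = 0;
            process(rx_buf, BUF_LEN);        /* ...then organizes */
            dma_start_rx(rx_buf, BUF_LEN);   /* and re-arms the DMA */
        }
    }
}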

Quote
Two can play that (rather unenlightening) game, viz: presumably you are either using a tiny part of an FPGA and have an additional processor, or you have a very inefficient soft-processor in the FPGA. Either way is wasteful :)

FPGAs don't work that way either. Most of the work in an FPGA is done by state machines running in parallel. The fabric is split between tasks, and the tasks run pretty much independently, each using only as many resources as it needs. Although you can build soft cores and specialize them for the application, that is often unnecessary.

BTW: a "very inefficient soft-processor" such as the Xilinx MicroBlaze can run circles around XMOS cores.

The Xilinx Artix-7, which can handle clock periods of 2 ns, starts from $25.

Quote
That entirely depends on your application. Don't presume that your limited view of the world encompasses all applications.

My limited view of the world doesn't encompass applications. It encompasses principles and common sense. If you want to discuss applications, post the detailed description and specs for the application, and then we can discuss applications.
 

Elf

  • Guest
Dynamic, horizontally scaled infrastructure and parallelized software architecture is my current day job, and Erlang is one of my favorite programming languages, so I was very interested in the XMOS chips. I bought the start kit and the programmer and all that. Neat product, well thought out.

The only thing is that most problems, or at least the ones I deal with in electronics (rather than on computers), are "boring problems." I actively try to find a reason to use XMOS chips in my project backlog, but I always end up back with a cheaper microcontroller with peripherals doing most of the lifting, and the rest with perhaps somewhat ugly, but workable, use of interrupts. Implementation on an xCore chip could be more elegant from a software perspective, but not actually necessary, a few dollars too expensive, and still not quite a substitute for what an FPGA is good at.

It seems like some strong applications of xCore are pretty much any kind of display interfacing, and dealing with digital audio. Makes sense, since they also seem to have an audio-centric product line.

I think if they released a cut down low end chip (even smaller than their 4 core) in QFN or QFP, around the $2-3 price point, I would use it a lot more often, just because programming with interrupts is not very fun. Or, with their current lineup, if they just had a good vendor supplied USB host library. (Not something I want to implement on my own)

As far as the original topic I think the Renesas RL78 is often overlooked and probably one of my favorite chips. Good documentation, easy to work with, inexpensive chips (although the dev board cost is above average), and absolutely loaded with peripherals. More timers and serial units than you can shake a stick at. Their RX lineup also seems similarly good if you want more resources and 32-bit, but I have not used it as much.
 

Offline tggzzz

  • Super Contributor
  • ***
  • Posts: 19470
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Dynamic, horizontally scaled infrastructure and parallelized software architecture is my current day job, and Erlang is one of my favorite programming languages, so I was very interested in the XMOS chips. I bought the start kit and the programmer and all that. Neat product, well thought out.

I've always liked Erlang for the reasons you mention, plus the claimed advantage of relatively easy hotswapping hardware and software - which is vital for "high availability" applications. Unfortunately I haven't been able to justify deploying Erlang, since there are advantages to sticking with widely understood languages such as Java. (A point that is also valid in this xCORE vs conventional MCU debate).

Unsurprisingly "my" systems ended up having many resemblances to Erlang (and some of its properties), but in a "homebrew" architecture.

Quote
The only thing is that most problems, or at least the ones I deal with in electronics (rather than on computers), are "boring problems." I actively try to find a reason to use XMOS chips in my project backlog, but I always end up back with a cheaper microcontroller with peripherals doing most of the lifting, and the rest with perhaps somewhat ugly, but workable, use of interrupts. Implementation on an xCore chip could be more elegant from a software perspective, but not actually necessary, a few dollars too expensive, and still not quite a substitute for what an FPGA is good at.

Yes indeed! I agree completely.

Nonetheless it is useful and beneficial to understand radical alternatives, so that you can better understand how to recognise and mitigate the disadvantages of conventional approaches. It sounds like you actively do that :)

Quote
It seems like some strong applications of xCore are pretty much any kind of display interfacing, and dealing with digital audio. Makes sense, since they also seem to have an audio-centric product line.

I think if they released a cut down low end chip (even smaller than their 4 core) in QFN or QFP, around the $2-3 price point, I would use it a lot more often, just because programming with interrupts is not very fun. Or, with their current lineup, if they just had a good vendor supplied USB host library. (Not something I want to implement on my own)

My understanding is that they do have a USB host library. It seems each USB endpoint requires a separate core. I don't know enough about USB to be able to make useful comments about its effectiveness. Commercially I would have thought it would be better to have dedicated hardware for a USB (or ethernet) interface - and oddly enough that's exactly what XMOS does!

The noteworthy point is that it is possible and practical to do it in software at the same time as your application does useful work.

Quote
As far as the original topic I think the Renesas RL78 is often overlooked and probably one of my favorite chips. Good documentation, easy to work with, inexpensive chips (although the dev board cost is above average), and absolutely loaded with peripherals. More timers and serial units than you can shake a stick at. Their RX lineup also seems similarly good if you want more resources and 32-bit, but I have not used it as much.

I have no comment, other than to note that (just as with computer languages) there are too many MCU families for it to be possible to learn them all - and that many problems can be adequately solved using any variant.

The things that are worth understanding are the tools with radically different strategies for skinning the cat. Both xCORE/xC and Erlang fall into that category.
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 

Elf

  • Guest
I've always liked Erlang for the reasons you mention, plus the claimed advantage of relatively easy hotswapping hardware and software - which is vital for "high availability" applications.
The language does make it easy to instrument, but practical implementation is the difficult part. You have to plan around things like versioning data structures and backwards compatibility, since you may have older instances of objects still out there somewhere from before the code upgrade. An interesting challenge, though.
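To illustrate the versioning idea in a language-neutral way (a sketch in C, nothing Erlang-specific; the record layouts are invented for illustration):

Code: [Select]
/* Sketch: version-tagged records. Both layouts start with a
   version field, so upgraded code can recognise old instances
   created before the code upgrade. */
#include <stdint.h>

struct user_v1 { uint32_t version; uint32_t id; };                 /* pre-upgrade  */
struct user_v2 { uint32_t version; uint32_t id; uint32_t flags; }; /* post-upgrade */

uint32_t user_flags(const void *rec)
{
    uint32_t v = *(const uint32_t *)rec;   /* common prefix */
    if (v >= 2)
        return ((const struct user_v2 *)rec)->flags;
    return 0;   /* v1 predates 'flags': supply a default */
}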
 

Offline tggzzz

  • Super Contributor
  • ***
  • Posts: 19470
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
I've always liked Erlang for the reasons you mention, plus the claimed advantage of relatively easy hotswapping hardware and software - which is vital for "high availability" applications.
The language does make it easy to instrument, but practical implementation is the difficult part. You have to plan around things like versioning data structures and backwards compatibility, since you may have older instances of objects still out there somewhere from before the code upgrade. An interesting challenge, though.

Yes indeed, but that is inherent in the problem, however you choose to solve it. Hence it is true of any and every HA system, regardless of the language and implementation technologies, including RDBMSs.

Erlang makes one aspect of the solution easier, and doesn't make the other aspects more difficult.
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 

Offline tggzzz

  • Super Contributor
  • ***
  • Posts: 19470
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
I notice your other fallacious statements and misapprehensions have melted away. Good.

Quote
And what is it when it is simultaneously dealing with i/o from a USB link plus a front-panel, plus a motor, plus control-loop processing, plus data presentation processing, plus some DSP, plus... That's what hard realtime systems have to do.

MCUs don't work that way. Most of the work is done by peripheral modules, which ensure real-time operation. Peripherals do the buffering, and can do DMA if needed. This removes the urgency from the CPU. The CPU mostly organizes everything and often sits idle waiting for things to happen.

Ah, the old trap of thinking that hard realtime == fast. It doesn't. Hard realtime means guaranteed timing.

Your statement ignores the requirement for timing that is guaranteed by design, not by testing and measurement.

Quote
Two can play that (rather unenlightening) game, viz: presumably you are either using a tiny part of an FPGA and have an additional processor, or you have a very inefficient soft-processor in the FPGA. Either way is wasteful :)

FPGAs don't work that way either. Most of the work in an FPGA is done by state machines running in parallel. The fabric is split between tasks, and the tasks run pretty much independently, each using only as many resources as it needs. Although you can build soft cores and specialize them for the application, that is often unnecessary.

BTW: a "very inefficient soft-processor" such as the Xilinx MicroBlaze can run circles around XMOS cores.

You appear to be switching your definition of "efficient" between "low area" and "high performance" without bothering to tell people which you mean in any given statement.

There are many variants of MicroBlaze; which are you referring to?
Before implementing one in your design, what is the guaranteed cycle time (i.e. pre-layout)?
By what measure does the smallest (or largest) MicroBlaze run circles around the smallest (4-core) or largest (32-core) xCORE processor?

Quote
The Xilinx Artix-7, which can handle clock periods of 2 ns, starts from $25.

There are obviously cases where FPGAs will beat other technology; only a fool would think otherwise.

However, I'll note that the XS1 devices found in the £10 StartKit handle clock periods of 4 ns - which is definitely encroaching on FPGA territory.

The newer xCORE200 devices have two tiles with up to 8 concurrent threads each. Each thread can run at up to 100 MHz, and threads may be able to execute 2 instructions in a clock cycle. Five threads follow each other through the pipeline, giving a top speed of 2000 MIPS (if all instructions dual issue) and at least 1000 MIPS (if all instructions are single issue).
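Spelling out the arithmetic, assuming the usual 500 MHz tile clock (an assumption on my part, not an XMOS quote): each tile issues 500 M instructions/s, shared round-robin, so with five or more runnable threads each thread gets 500/5 = 100 MHz. Two tiles provide 2 x 500 M = 1000 M issue slots/s, hence 1000 MIPS single issue and up to 2000 MIPS if every slot dual issues.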

Quote
That entirely depends on your application. Don't presume that your limited view of the world encompasses all applications.

My limited view of the world doesn't encompass applications. It encompasses principles and common sense. If you want to discuss applications, post the detailed description and specs for the application, and then we can discuss applications.

Common sense isn't common, and is irrelevant in significantly different situations.

Theory without practice is mental masturbation. Practice without theory is fumbling in the dark. The XMOS approach is strong in theory (40 years old), in practice (30 years old), and in the specific implementation (10 years old).

You appear to be unaware of fundamental theory, so I'm unsurprised that you don't understand the relevant principles, and unsurprised that your "common sense" isn't applicable in this case.
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 

Offline tggzzz

  • Super Contributor
  • ***
  • Posts: 19470
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
I don't know what an MMACS is (a mini hamburger?), so I can't help you there.

Million Multiply-ACcumulate operations per Second. Used to benchmark the raw power of DSPs. Many classical DSP algorithms (FFT, FIR, IIR) and software neural networks can be computed as blocks expressed as y = y + k*x, which is a MAC (multiply-accumulate).

I suspected it might be, but I haven't seen that specific FLA before :)

I won't comment on the suitability for your application, but I will note that XMOS devices have been used for DSP for a while now. Currently they seem to be pushing "smart microphones", which I presume is heavy on DSP. I suspect, without any evidence whatsoever, that the alternative implementation technologies would involve FPGAs, but that XMOS devices can be cheaper.
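For the avoidance of doubt about what gets counted, here is the y = y + k*x kernel as a plain C sketch: an N-tap FIR filter costs N MACs per output sample, so an MMACS rating directly bounds taps times sample rate.

Code: [Select]
/* The benchmarked kernel: one multiply-accumulate per tap. */
#include <stddef.h>

float fir(const float *coeff, const float *hist, size_t taps)
{
    float y = 0.0f;
    for (size_t i = 0; i < taps; i++)
        y += coeff[i] * hist[i];   /* y = y + k*x */
    return y;
}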

Quote
Moores law is at an end. Multicore computing is everywhere. C won't cut it for long, and will gradually become less important, in the same way that Cobol and C++ are.

I turned to C#. Having spent 1.5 years on HPC (OpenCL, CUDA, OpenMP/gcc) as a hobby, I realized that doing HPC by my own efforts isn't really that smart. I would rather MS got that done for me and gave me parallelized libraries.
Windows 10 is a perfect example -- they introduced parallelism at the CLR level, so even a (seemingly) single-threaded program, as long as it's based on .NET, can take advantage of multiple cores.
Also, C# has recently introduced quite a few new features for using multiple cores, besides libraries (think of the STL in C++), such as async functions and async LINQ.

Yes. While I started using C when there were precisely two books about it, IMNSHO C/C++ started to be part of the problem (rather than part of the solution) in the early-mid 90s. Fortunately Java came along to pick up where C/C++ were getting into difficulties. What kind of difficulties? Well, POSIX libraries having to rely on compiler/language behaviour that was explicitly outside the language definition (and punted to the libraries!), the language designers not realising templates are Turing complete in their own right, and the interminable unresolvable committee arguments about whether "casting away constness" was mandatory or forbidden.

I'd been using Java for 5 years (IIRC) when I listened to a talk by Hejlsberg just before C# was released, and mentioned to him that it looked like C# was Java with a different optimisation strategy and reduced security. His response wasn't memorable, and I decided not to bother learning yet another vendor specific "me too" language with only a few minor advantages and disadvantages. Curiously I'd used the same reasoning to avoid learning Delphi a decade earlier.

I don't regard xC/xCORE as being "me too" with only a few minor differences to existing products. IMNSHO they have some unique characteristics that fit with my interests and my beliefs (hopes really :( ) about the future direction technology ought to take. Having "kicked the technology's tyres", it does do what it claims.

I'm more than happy for people to decide not to use XMOS stuff for valid reasons. Being out of their comfort zone and/or presuming they can rely on their pre-existing rules-of-thumb don't count as valid reasons :)
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 

Offline NorthGuy

  • Super Contributor
  • ***
  • Posts: 3143
  • Country: ca
You appear to be unaware of fundamental theory, so I'm unsurprised that you don't understand the relevant principles, and unsurprised that your "common sense" isn't applicable in this case.

Common sense is applicable everywhere. It lets us distinguish facts from the soup of numbers and cliché phrases which you're outputting here. No wonder you don't like it.

I'm sorry. I grew a little bit tired of your stupidity, so I'll bail out of this thread.
 

Offline legacy

  • Super Contributor
  • ***
  • Posts: 4415
  • Country: ch
RTFDS :) You will find it worthwhile.

You may even note echoes of CSP in the way systems are architected.

Yup, I am going to download everything I can download into my Kindle PaperWhite (ebook reader made by Amazon), so I will have interesting things to read under the umbrella on the beach during the next vacation time :D

p.s.
oh, MIPS removed the status register, so I have also removed it from my Arise-v2/r5 project. Looking at XMOS, I see it comes with threads, dedicated registers, and there is also a status register which contains various mode bits, but the processor does not have the standard ALU result flags like carry, zero, negative or overflow.

Probably I will have to study the XMOS ALU. Mine is a standard MIPS design where I support overflow and underflow as "exceptions". If you enable the feature in COP1 (in my design that is the signed/unsigned arithmetic unit), the unit raises an exception when an overflow/underflow event occurs. I also have special "trap" instructions, e.g. TrapOnOverFlow and TrapOnUnderFlow, which you place after a computation. From the point of view of the datapath through the CPUCORE it's a "NOP", but if COP1 has an event ... you get trapped into an exception.

e.g.
Code: [Select]
mac rt0, rs1, rs1, rs2   # multiply-accumulate; COP1 latches any overflow
TrapOnOverFlow           # NOP in the datapath; traps if COP1 latched an event

The difference between them is: with the first class of instruction, everything that overflows or underflows is caught as an exception, whereas with the second class of instruction I can selectively check only the operations of specific interest.

I also have a COP2, which is a DSP engine with saturating arithmetic, meaning no overflow/underflow can ever happen.

What do you think about that?

Let me download XMOS's docs. Probably I will end up buying an evaluation board  :D
 

Offline tggzzz

  • Super Contributor
  • ***
  • Posts: 19470
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
You appear to be unaware of fundamental theory, so I'm unsurprised that you don't understand the relevant principles, and unsurprised that your "common sense" isn't applicable in this case.

Common sense is applicable everywhere. It lets us distinguish facts from the soup of numbers and cliché phrases which you're outputting here. No wonder you don't like it.

No, common sense isn't always applicable; it frequently leads people astray.

We note you have made many assertions (e.g. "runs circles") that you have neither clarified when requested, nor backed up with numbers.

If you don't want to learn and expand your horizons, then it is indeed better if you "bail out".
« Last Edit: July 18, 2017, 02:16:21 pm by tggzzz »
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 

Offline legacy

  • Super Contributor
  • ***
  • Posts: 4415
  • Country: ch
XMOS (at least the XS1 architecture) seems to be "event-driven"

I mean, as far as I understand from the ISA, there are two kinds of instructions:
those that can dispatch on external events (synchronous)
those that can handle interrupts (asynchronous)

So we still have interrupts, but you can choose to use events instead, and if so, the underlying processor has to expect an event and wait at a specific place so that it can be handled synchronously.

That is interesting, since in my case every I/O can ONLY be handled asynchronously using interrupts.



p.s.
I am reading that in XS1 all communication between threads is performed using channels that provide full-duplex data transfer between channel ends. That is another interesting concept.

In my case I am using mailboxes. I have to understand the difference; they seem similar in some respects (I believe).
 

Offline tggzzz

  • Super Contributor
  • ***
  • Posts: 19470
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
RTFDS :) You will find it worthwhile.

You may even note echoes of CSP in the way systems are architected.

Yup, I am going to download everything I can download into my Kindle PaperWhite (ebook reader made by Amazon), so I will have interesting things to read under the umbrella on the beach during the next vacation time :D

You're nuts :) Vacations are for doing something completely different :)

Quote

p.s.
oh, MIPS removed the status register, so I have also removed it from my Arise-v2/r5 project. Looking at XMOS, I see it comes with threads, dedicated registers, and there is also a status register which contains various mode bits, but the processor does not have the standard ALU result flags like carry, zero, negative or overflow.

Probably I will have to study the XMOS ALU. Mine is a standard MIPS design where I support overflow and underflow as "exceptions". If you enable the feature in COP1 (in my design that is the signed/unsigned arithmetic unit), the unit raises an exception when an overflow/underflow event occurs. I also have special "trap" instructions, e.g. TrapOnOverFlow and TrapOnUnderFlow, which you place after a computation. From the point of view of the datapath through the CPUCORE it's a "NOP", but if COP1 has an event ... you get trapped into an exception.

e.g.
Code: [Select]
mac rt0, rs1, rs1, rs2   # multiply-accumulate; COP1 latches any overflow
TrapOnOverFlow           # NOP in the datapath; traps if COP1 latched an event

The difference between them is: with the first class of instruction, everything that overflows or underflows is caught as an exception, whereas with the second class of instruction I can selectively check only the operations of specific interest.

I also have a COP2, which is a DSP engine with saturating arithmetic, meaning no overflow/underflow can ever happen.

What do you think about that?

Let me download XMOS's docs. Probably I will end up buying an evaluation board  :D

If you are interested in the assembly level, then https://www.xmos.com/published/xs2-isa-specification might be of interest to you.

You will note there are some instructions for setting up comms channels between cores, between tiles and between chips. Hence you might be interested in the XSwitch technology.

You will also note there are some instructions for i/o ports, so you might also be interested in the ports architecture.

Summary: multicore, fast i/o, comms and the language are all tied together in a unified whole, in a way that just doesn't occur with traditional MCUs.
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 

Offline tggzzz

  • Super Contributor
  • ***
  • Posts: 19470
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
XMOS (at least the XS1 architecture) seems to be "event-driven"

The architecture is indeed "event driven". But then most embedded software systems are event driven: an interrupt is captured, the ISR generates an internal software event, and that event is delivered to a task using RTOS facilities such as mailboxes and FIFOs.

With XMOS, the events are generated in the ports, and the corresponding event is transmitted through the switch fabric to the relevant core. In that sense the hardware pokes something into "an RTOS mailbox", and the RTOS is in hardware. That gives an inkling of why traditional interrupts aren't very interesting.
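For comparison, here is the conventional software version of that path, sketched with FreeRTOS-style calls (the ISR name and event encoding are hypothetical): hardware raises an interrupt, the ISR posts to a queue, and a task blocks on the queue.

Code: [Select]
/* Conventional ISR -> mailbox -> task path, FreeRTOS-style.
   On xCORE the port/switch hardware performs these steps itself. */
#include <stdint.h>
#include "FreeRTOS.h"
#include "queue.h"

extern void handle_event(uint32_t ev);   /* application code */

static QueueHandle_t evq;                /* created at startup with
                                            xQueueCreate(8, sizeof(uint32_t)) */

void pin_change_isr(void)                /* hypothetical vector */
{
    uint32_t ev = 1;                     /* "pin went high" */
    BaseType_t woken = pdFALSE;
    xQueueSendFromISR(evq, &ev, &woken); /* the internal software event */
    portYIELD_FROM_ISR(woken);
}

void worker_task(void *arg)
{
    (void)arg;
    uint32_t ev;
    for (;;) {
        if (xQueueReceive(evq, &ev, portMAX_DELAY) == pdTRUE)
            handle_event(ev);            /* delivered to the task */
    }
}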

Quote
I mean, as far as I understand from the ISA, there are two kinds of instructions:
those that can dispatch on external events (synchronous)
those that can handle interrupts (asynchronous)

My interest so far has been to push hard on the xC/xCORE facilities to see where they aren't sufficient for a simple hard realtime system (a dual-channel 15 MHz frequency counter implemented in software, with a real front panel and a soft front panel at the other end of a USB link).

Since I've been able to do everything I wanted at that level, I haven't felt it necessary to find out about interrupts - and that's significant. Indeed, I'm not clear where they might be or must be used.

I suspect your knowledge will rapidly exceed mine in this area!  Good :)

Quote
So we still have interrupts, but you can choose to use events instead, and if so, the underlying processor has to expect an event and wait at a specific place so that it can be handled synchronously.

That is interesting, since in my case every I/O can ONLY be handled asynchronously using interrupts.

p.s.
I am reading that in XS1 all communication between threads is performed using channels that provide full-duplex data transfer between channel ends. That is another interesting concept.

In my case I am using mailboxes. I have to understand the difference; they seem similar in some respects (I believe).

Channels are directly modelled on CSP concepts. The Ada rendezvous has the same heritage. You will notice the correspondence :)

That blocking synchronisation between the sender and receiver, while theoretically sufficient, can be a pain in the backside. For example, you can make traditional multiple-input single-server FIFOs out of channels (see an XMOS app note), but you "lose" a core.

Fortunately there are also xC "interfaces", which enable non-blocking notification between tasks. I've used both: channels for strict hard realtime dataflow, and interfaces for notification that a user has requested a change that the receiver can pick up and implement at a convenient time.  I suspect the interface notification might be a useful implementation mechanism for such server FIFOs.
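To illustrate the blocking rendezvous, here is a sketch of a CSP-style channel in portable C, with pthreads standing in for what xCORE does in hardware (an analogy only, not the xC API):

Code: [Select]
/* CSP-style synchronous channel: send() blocks until recv() has
   taken the value, and vice versa - the rendezvous. Initialise a
   chan_t with PTHREAD_MUTEX_INITIALIZER / PTHREAD_COND_INITIALIZER. */
#include <pthread.h>

typedef struct {
    pthread_mutex_t m;
    pthread_cond_t  cv;
    int             full;   /* one datum in flight */
    int             data;
} chan_t;

void chan_send(chan_t *c, int v)
{
    pthread_mutex_lock(&c->m);
    while (c->full)                       /* previous datum pending */
        pthread_cond_wait(&c->cv, &c->m);
    c->data = v;
    c->full = 1;
    pthread_cond_broadcast(&c->cv);       /* wake the receiver */
    while (c->full)                       /* block until taken: */
        pthread_cond_wait(&c->cv, &c->m); /* the rendezvous */
    pthread_mutex_unlock(&c->m);
}

int chan_recv(chan_t *c)
{
    pthread_mutex_lock(&c->m);
    while (!c->full)                      /* block until sent */
        pthread_cond_wait(&c->cv, &c->m);
    int v = c->data;
    c->full = 0;
    pthread_cond_broadcast(&c->cv);       /* release the sender */
    pthread_mutex_unlock(&c->m);
    return v;
}

That sender-side block is exactly why a multiple-input server built purely from channels ties up a core: every producer stalls until the server gets around to it, which is where the non-blocking interface notifications earn their keep.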
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 

Offline Sal Ammoniac

  • Super Contributor
  • ***
  • Posts: 1668
  • Country: us
BTW: "very inefficient soft-processor", such as Xilinx MicroBlaze can run circles around XMOS-cores.

And if those soft cores don't have enough horsepower, there are always the Xilinx Zynq parts, which have dual-core ARM Cortex-A9 or dual- and quad-core Cortex-A53 (plus dual Cortex-R5) CPUs running at over a GHz, plus the FPGA fabric, plus gigabit Ethernet, USB 3, SATA 3.1, DisplayPort, and more.
Complexity is the number-one enemy of high-quality code.
 

Offline tggzzz

  • Super Contributor
  • ***
  • Posts: 19470
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
BTW: "very inefficient soft-processor", such as Xilinx MicroBlaze can run circles around XMOS-cores.

And if those soft cores don't have enough horsepower, there are always the Xilinx Zynq parts, which have dual-core ARM Cortex-A9 or dual- and quad-core Cortex-A53 (plus dual Cortex-R5) CPUs running at over a GHz, plus the FPGA fabric, plus gigabit Ethernet, USB 3, SATA 3.1, DisplayPort, and more.

They do indeed have a lot of horsepower, and are interesting devices. However they aren't cheap, and the development environment (Vivado) is a pig in several ways! In some ways that's to be expected, since the underlying capabilities are powerful, complex, and unconstrained.

If you are proficient in VHDL/Verilog and ARM/C and Eclipse, I wonder how long it takes to become proficient in the use of Vivado itself. I would guess several weeks at least. That's an impediment to someone who might or might not want to develop an FPGA design just to understand the devices' characteristics.

By way of contrast, knowing Eclipse but having zero experience of xCORE and xC, I was creating useful non-trivial first versions of my application within a day. By non-trivial I mean determining the speed of the core part of the application.

That was easier than I expected, and means that I think it is practical and worthwhile for many people to expand their horizons, even if they never use xC/xCORE in anger.
« Last Edit: July 18, 2017, 05:16:20 pm by tggzzz »
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 

Offline Sal Ammoniac

  • Super Contributor
  • ***
  • Posts: 1668
  • Country: us
If you are proficient in VHDL/Verilog and ARM/C and Eclipse, I wonder how long it takes to become proficient in the use of Vivado itself. I would guess several weeks at least. That's an impediment to someone who might or might not want to develop an FPGA design just to understand the devices' characteristics.

From personal experience: about two days. Vivado is much better than the earlier Xilinx software (ISE).

Even if it takes longer, that's not an impediment to a professional developing a commercial product.
Complexity is the number-one enemy of high-quality code.
 

Offline tggzzz

  • Super Contributor
  • ***
  • Posts: 19470
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
If you are proficient in VHDL/Verilog and ARM/C and Eclipse, I wonder how long it takes to become proficient in the use of Vivado itself. I would guess several weeks at least. That's an impediment to someone who might or might not want to develop an FPGA design just to understand the devices' characteristics.

From personal experience: about two days. Vivado is much better than the earlier Xilinx software (ISE).

Even if it takes longer, that's not an impediment to a professional developing a commercial product.

Agreed, but I'm surprised it isn't longer. I've heard complaints about Figaro, but I haven't used ISE.

One thing that did surprise me about Vivado (a few years ago when it was still novel) was that I couldn't find a definition of which of the myriad files should and should not be stored in a source control system.
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 

Offline andyturk

  • Frequent Contributor
  • **
  • Posts: 895
  • Country: us
[...] IMNSHO [...]

A little humility would do you good--your posts are technically interesting, but often argumentative and annoying. You're more reasonable in person, right?

Quote
I'm more than happy for people to decide not to use XMOS stuff for valid reasons. Being out of their comfort zone and/or presuming they can rely on their pre-existing rules-of-thumb don't count as valid reasons :)

Being out of one's comfort zone is *absolutely* a valid reason for not using a technology--at least in a commercial project. I.e., if I'm not comfortable with a technology, I have no business telling someone I can deliver a product with it.
 

Offline tggzzz

  • Super Contributor
  • ***
  • Posts: 19470
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
[...] IMNSHO [...]

A little humility would do you good--your posts are technically interesting, but often argumentative and annoying. You're more reasonable in person, right?

Sometimes, sometimes not ;)

A lot depends on humility in the form of being dissatisfied with what I did last time, and seeking to improve next time around. A lot depends on having the experience to know when you are on solid ground, and the humility to recognise where you are not. Part of that comes from actively seeking contrary opinions, and assessing where they are/aren't valid - and from running away as fast as possible from "echo chambers".

Sometimes it is beneficial for all parties involved if some concepts and conceptions are vigorously challenged. It is necessary, however, that the concepts are being challenged with the right intention, e.g. making step improvements.

Sometimes when people are pigheadedly misguided it is necessary to emphatically point out where they are simply wrong.

Quote
Quote
I'm more than happy for people to decide not to use XMOS stuff for valid reasons. Being out of their comfort zone and/or presuming they can rely on their pre-existing rules-of-thumb don't count as valid reasons :)

Being out of one's comfort zone is *absolutely* a valid reason for not using a technology--at least in a commercial project. I.e., if I'm not comfortable with a technology, I have no business telling someone I can deliver a product with it.

Yes and no.
  • if that attitude is taken too rigidly, then it prevents progress and improvement.
  • in most of my professional career, I have been in application domains that were novel and/or using technologies that were novel. (Where "novel" is either to the team/company, or globally)
So being out of my comfort zone is normal, and that biases me against rejecting beneficial advances because they are novel.

However, on some commercial projects I have indeed made the decision that the risks of the unknown outweigh the foreseeable potential benefits. (E.g. C# w.r.t. Java, or Delphi w.r.t. C.)

OTOH, on some commercial projects (e.g. as the manager of a project proposing to use C for the first time, back in 1982), I was definitely out of my comfort zone but I listened to those with some relevant experience. That was a good decision.

And I expect most of us have seen people who have naive, outdated notions about technology, and who refuse to believe that they simply don't understand modern advances. A classic example is "garbage collectors are slow and not suited to (soft) realtime systems", said while refusing to comprehend that they had only heard about decades-old reference-counting GCs, and that Java's GCs were perfectly adequate. Even when presented with a working system, they still clung to their outdated notion, and started to whinge about other things.
« Last Edit: July 22, 2017, 05:26:55 pm by tggzzz »
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 

