Author Topic: Arrow DECA MAX 10 board for $37  (Read 25896 times)


Offline asmi

  • Super Contributor
  • ***
  • Posts: 2103
  • Country: ca
Re: Arrow DECA MAX 10 board for $37
« Reply #175 on: November 12, 2021, 06:31:35 pm »
Quote
Absolutely not. Not only is that not the way git works, but git itself has no concept of a fork.

You could have said "that's how github works", which would already have been a little less wrong, because it's github that actually introduced forking as a collaborative model, which it never was before: "forking" was never about collaborating per se, but about creating, you guessed it... forks. Derivatives of projects going in different directions, essentially.

But even if github has "promoted" this fork model, it's not the only way of using github, and as I explained, I find this model has more drawbacks than benefits, and I gave reasons why.
Just for starters: https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/getting-started/about-collaborative-development-models
As already discussed, I favor asking to be a collaborator if you have anything interesting to contribute - and then, of course, if the maintainer refuses or doesn't respond, forking as a second resort would make sense. Not as a first. Otherwise, if it's just for personal use, just clone the repo locally and play with it.

Now I get it that many people were first exposed to git - and more generally to VCS - through github, but github is not git. And github, IMO, encourages bad habits, including this frantic forking and using a VCS as a backup. The fact that it's now owned by MS hasn't helped me form a better impression of it either, but that was just icing on the cake.
You just don't understand how git works. The entire point of git (and one of its chief advantages over legacy SCMs like SVN et al.) is that forks are very easy and lightweight. So the way you are supposed to work with it is as follows: you have a single master branch which contains a stable version, and for each change you create a separate fork, implement the change in that fork, test it, and once it's ready, you merge it back into the master branch. The advantages of this approach are that 1) every developer always has a stable codebase when he/she begins working on a change, and if that change turns out to be a dead end, you just leave it as is and don't merge it back, 2) developers can work independently of each other (because of the distributed nature of git), and 3) you can cherry-pick which changes are going to be released (this is typically implemented by having an additional "release" branch which devops people use for actual deployment).

This is how git is designed to be used, and so this is how it SHOULD be used. If you don't use it this way - you are doing it WRONG. This has absolutely nothing to do with Github (or BitBucket, or others) - they merely implemented a GUI around git (and it's actually a pretty thin layer on top of it - most of the functionality you see on Github et al. is provided by git itself).

Incidentally, lightweight forks are exactly why git has been so wildly successful in huge projects like the Linux kernel, with thousands of developers working on it.
« Last Edit: November 12, 2021, 06:40:40 pm by asmi »
 

Offline SiliconWizard

  • Super Contributor
  • ***
  • Posts: 9811
  • Country: fr
Re: Arrow DECA MAX 10 board for $37
« Reply #176 on: November 12, 2021, 11:56:34 pm »
Sure, lecture me on how DVCSs work, it was obvious I didn't know.
The workflow you implement on top of a DVCS, including how you manage branches, is completely context-dependent. No two organizations have the same approach. You are probably confusing forking, branching and cloning. Interestingly, excessive branching itself now tends to be considered less than ideal, but again it all depends on a team's specific workflow. There isn't just one.

The github link I gave briefly explains the two approaches you can use, and I explained numerous times why I favor the collaborator approach over the fork approach UNLESS you have a good reason to do it with forks. I have given my main reasons, which seem to be consistently ignored. Good, I just hope at least a few people have gotten my points. For the others, I don't care. And I have used DVCSs for years, thank you. Using the collaborator model doesn't mean that you're not using git as a distributed system either. DVCSs have a number of benefits over centralized systems even when not "forking". But I think the term "fork" has been largely abused here.

Again, while forking can be alright if you're actually working on, say, a feature that is far from ready, that may not get accepted into the mainline for months, and, most of all, that several developers may work on in parallel, if it's just a convenience as a personal backup or even just a sort of bookmark (as we said earlier), it just dilutes projects. The more forks, the more dilution - there is nothing subjective about this point, it's not opinion, it's just mechanically inevitable. And unfortunately, those cases seem to be the norm rather than the exception on github. This goes hand in hand with the excessive use, IMHO, of "cloud" storage as well. I haven't seen recent figures, but the ones I saw from a couple of years ago showed a lot more forks than pull requests, and even fewer merges. That may raise a few questions about the efficiency of it all. And yes, it's probably obvious by now that I don't like github much. And yes, you can absolutely use git or other DVCSs on your own organization's terms, on your own servers, without breaking the spirit of DVCS.

Oh, and a final thought related to this discussion. The most successful open-source projects have pretty much all used some form of vertical organization - the Linux kernel being a prime example - which is likely how they could become successful in the end. As Linux shows, the use of a DVCS is absolutely not contradictory with having some amount of vertical project management, and is still quite useful in this setting. Github, while promoting, whether they want it or not, the dilution of open-source projects, hinders vertical project management. I'm not saying it prevents it, but it makes it unnecessarily harder. Now, what happens, to be honest, is that many of the successful projects, even when they have a github repo, are primarily managed outside of github. And to get back to Brian here, even if it's a small project, what I've seen so far is that that's what he does. (IIRC, he has even stated on his github page to use this forum for any communication... ;D )
« Last Edit: November 12, 2021, 11:59:25 pm by SiliconWizard »
 

Offline asmi

  • Super Contributor
  • ***
  • Posts: 2103
  • Country: ca
Re: Arrow DECA MAX 10 board for $37
« Reply #177 on: November 13, 2021, 01:44:33 am »
Quote
Sure, lecture me on how DVCSs work, it was obvious I didn't know.
Yes, that much was obvious...

Quote
The github link I gave briefly explains the two approaches you can use, and I explained numerous times why I favor the collaborator approach over the fork approach UNLESS you have a good reason to do it with forks. I have given my main reasons, which seem to be consistently ignored. Good, I just hope at least a few people have gotten my points. For the others, I don't care. And I have used DVCSs for years, thank you. Using the collaborator model doesn't mean that you're not using git as a distributed system either. DVCSs have a number of benefits over centralized systems even when not "forking". But I think the term "fork" has been largely abused here.
...and so is the fact that you don't use it properly. When using git you stick to a "single branch/fork - single developer" approach, or you are going to have massive headaches as several devs constantly step on each other's toes.

Quote
Oh, and a final thought related to this discussion. The most successful open-source projects have pretty much all used some form of vertical organization - the Linux kernel being a prime example - which is likely how they could become successful in the end. As Linux shows, the use of a DVCS is absolutely not contradictory with having some amount of vertical project management, and is still quite useful in this setting. Github, while promoting, whether they want it or not, the dilution of open-source projects, hinders vertical project management. I'm not saying it prevents it, but it makes it unnecessarily harder. Now, what happens, to be honest, is that many of the successful projects, even when they have a github repo, are primarily managed outside of github. And to get back to Brian here, even if it's a small project, what I've seen so far is that that's what he does. (IIRC, he has even stated on his github page to use this forum for any communication... ;D )
Git was designed by Linus precisely for Linux, and each kernel dev always works in his own branch; once a feature is implemented, there is a pull request (PR) to merge it back into the mainline, and the primary job of the maintainers is to pick and choose which PRs to include in the release. The exact same mechanism is used for accepting code from external entities for inclusion into the mainline (for example, this is how RISC-V and ARM support was added).

So yeah, forks are the norm in git, not the exception, nor any sort of insult or whatever other nonsense you fantasized. With that, I think this discussion is over.

Offline SiliconWizard

  • Super Contributor
  • ***
  • Posts: 9811
  • Country: fr
Re: Arrow DECA MAX 10 board for $37
« Reply #178 on: November 13, 2021, 11:57:08 pm »
So yes, it's obvious I don't know. Sure. It looks impossible to have a different view on some topics, in particular anything related to software development, discussions of which often end up very badly. I'm not surprised. You can absolutely disagree with me and find my concerns invalid or at least largely exaggerated, no problem. Claiming I don't know how a DVCS works is something else. Thanks for the smile =)

It's also obvious you didn't address most of the points I made, nor cared to understand them. That's possibly mainly due to how the term "fork" is abused here, although I tried explaining what my concerns were, and why making forked projects *public*, as on github, is NOT quite the same thing as each developer working on their own repo: it precisely promotes project dilution - diluting the number of potential users and contributors across several instances of the same project - which is a concern I'm far from the only one to have expressed. And while you can indeed do this perfectly well with a DVCS, there are myriad ways of using them, with different workflows. Not to mention closed-source projects, for which DVCSs are also used a lot these days. There are a ton of different workflows, very few of which involve each developer in a team having their own forked *repo* on one (or several) shared server(s). A branch is not a complete forked repo. Dunno - I get the impression you see absolutely no difference. Again, git has no concept of a fork by itself. All it has are clones and branches. Even a cloned repo is a slightly different beast from a fork (as in github and gitlab), which they themselves explain if you care to read the docs. But I'm the one not knowing here.

As to my comment on excessive branching, this is also nothing new. In well-managed projects, excessive branching is avoided - using the right workflows - because it can quickly make merges an intractable problem. That doesn't mean DVCSs are not appropriate. Not at all. It's all about structuring things a bit instead of pure anarchy.

I still stand by the opinion that Github tends to promote bad use of git and VCS in general, because it's used a lot as *cloud storage* rather than as a truly collaborative tool. Small projects are likely to be more affected than the big ones, for which users will naturally tend to flock to the repo that has the most activity. I've seen some stats showing that a large majority of forks on github were just dead beef. I prefer personal dead beef on personal computers rather than having it public and polluting projects. I'm not just saying that in theory. I've seen a lot of projects, some I'm interested in, being diluted this way and thus becoming harder to contribute to or even use. I doubt I'm the only one.

Again, everyone working on their own branch and having multiple *public* forks of a project are two different things, which you apparently don't want to acknowledge - fine. Maintaining or contributing to an open-source project takes some work and active collaboration, even if it's just to report bugs (which can be every bit as useful as contributing code, btw, and for which splitting reports across many different projects makes it all the harder to aggregate them. That was one of my points too.) Clicking on a "fork" button takes a split second, otoh.

As to the way the Linux project is handled - it definitely doesn't use git in the way that is common on Github, and indeed has a very specific workflow. It takes nothing from github. See what every pull request gets as an answer: https://github.com/torvalds/linux/pull/805 . The github repo is read-only, as they explain, but "funnily" enough, there are almost 40k forks. No, Linus himself certainly doesn't use git the way that looks "usual" on github.

So yeah, I'll leave it at that, but just wanted to make my points clear. If they are still not, oh well.
« Last Edit: November 14, 2021, 02:39:38 am by SiliconWizard »
 

Online BrianHG

  • Super Contributor
  • ***
  • Posts: 6639
  • Country: ca
Re: Arrow DECA MAX 10 board for $37
« Reply #179 on: November 19, 2021, 02:19:45 am »
LOL, Arrow has just received 2 more DECA boards in stock.
__________
BrianHG.
 

 

Offline migry

  • Contributor
  • Posts: 47
  • Country: gb
Re: Arrow DECA MAX 10 board for $37
« Reply #181 on: January 19, 2022, 10:33:04 pm »
I hope you are all enjoying using your dirt cheap DECA  :-+

I have used mine in a project to extract digital level video from an Atari STe, and then use the DECA to generate HDMI (since it has a nice easy HDMI transmitter chip to utilise).

The picture shows the video buffer and level translator (4 bits each for R, G and B), for which I had a PCB manufactured by JLCPCB. I used the wrong footprint for the buffers  |O

On the DECA is a "hat". There is a 20-way IDC cable with headers which connects them. I couldn't get the built-in audio to work so I added an external PCB/module.

The Verilog code was developed while it was still a mess of wires. But I was rewarded with a pixel-perfect 50Hz Atari medium res display. Much better than the jail-bar-ridden mess which came out of the composite video. I do not have an RGB (15kHz) monitor or TV (BTW).

When I initially scoped the signals, they were a horrendous mess. Lots of spikes everywhere. I output the received vsync and hsync on debug output pins, and they looked horrendous. I eventually figured it was all down to @@@@-poor high speed (high slew rate) signal mis-management. As an experiment, as seen in the photo, I added 1k resistors in the hsync and vsync lines out from the Atari and this fixed the syncs. The debug outputs from the FPGA were now clean.

I am now suffering from a new problem. I use the 32MHz clock generated on the FPGA to sample the incoming signals. Apart from vsync and hsync there is "display enable", or de. I use this to detect the start of active video. I am pretty sure that I am not sampling this signal cleanly with the 32MHz clock. It worked on the mess of wires because the code would be compiled differently and the delays would be different.

Oh BTW, the 32MHz clock generated by the DECA FPGA goes to the STe and is used as the video clock, so it is synchronous with the RGB and sync outputs back from the STe. This feature is not available on the standard Atari ST (note no 'e').

I simply want to use the input delay feature of the IO pins to tweak de. I know that this is a bad one-off solution, but I do not plan to publish the code and the project is a one-off. I just need it to work. I need to tweak de by adding a small delay so that it is not transitioning when sampled by the 32MHz clock. Over time the picture stabilises, then jitters again, then jitters badly, rinse and repeat. Adding finger capacitance changes the jitter, which makes me confident that the issue is simply a sampling point problem.

My question is, how do I use the MAX10 input buffer delay feature, and how do I code this in Verilog?

NOTE: I did some googling and found a solution for the Xilinx Spartan6, where you can specify directives to the compiler in comments (IOB).

I have read the PDF for the MAX10 I/O and I do see what looks like a programmable delay for the input signal on DDR pins. I tried to find some example Verilog code, but my google-fu failed me once again.
 

Online BrianHG

  • Super Contributor
  • ***
  • Posts: 6639
  • Country: ca
Re: Arrow DECA MAX 10 board for $37
« Reply #182 on: January 19, 2022, 10:50:48 pm »
Quote
I simply want to use the input delay feature of the IO pins to tweak de. I know that this is a bad one-off solution, but I do not plan to publish the code and the project is a one-off. I just need it to work. I need to tweak de by adding a small delay so that it is not transitioning when sampled by the 32MHz clock. Over time the picture stabilises, then jitters again, then jitters badly, rinse and repeat. Adding finger capacitance changes the jitter, which makes me confident that the issue is simply a sampling point problem.

Get rid of the capacitors...  Input delay buffers operate in the +/-50 picosecond range, up to +/- a few nanoseconds.  Way too small and useless for 32MHz.  They were designed to correct for pin-to-pin and PCB routing delays on buses above 300MHz, way into the 900MHz range.

Here is what you should do: take in the 32MHz, multiply it by 8 in the MAX10 PLL for an internal 256MHz, then sample the input at 256MHz and have your HDL code select 1 of the 8 phases to sample from, to get that 'sweet' 32MHz sampling spot.
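A minimal sketch of that idea in plain Verilog (module and signal names are made up for illustration, not from any actual project; clk256 is assumed to be the 32MHz reference multiplied by 8 in the PLL):

Code: [Select]
module phase_pick (
    input  wire       clk256,     // 256MHz sampling clock from the MAX10 PLL (32MHz x 8)
    input  wire       pin_in,     // raw asynchronous input, e.g. DE from the STe
    input  wire [2:0] phase_sel,  // which of the 8 phases to keep, 0..7
    output reg  [7:0] taps,       // the 8 samples of one 32MHz period (handy to watch in SignalTap)
    output reg        pin_32      // the input re-timed at the chosen 32MHz phase
);
    reg [2:0] phase = 3'd0;       // free-running 0..7 phase counter

    always @(posedge clk256) begin
        taps  <= {taps[6:0], pin_in};  // shift in a new sample every ~3.9ns
        phase <= phase + 3'd1;
        if (phase == phase_sel)
            pin_32 <= pin_in;          // grab the input once per 32MHz period
    end
endmodule

In a real design the same phase counter would also generate the 32MHz clock (or clock enable) used by the rest of the capture logic, so the chosen sample lands safely in that domain, and phase_sel can either be set by hand or replaced by the automatic sweet-spot detection described next.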

You can have a user-set selection of which phase to use, or, if you want smart HDL, you can work out the sweet spot on the fly by looking for when the inputs transition and where the data has a 2-3 position clean, stable value.  (I used to sample the Amiga's DENISE 12-bit video output, and different revisions of the chip have different ns delay skews on the data output bus vs. its 14.31818MHz source clock.  A smart self-timing alignment is the best way to go, whereas in the old days we used a re-clocking PLL with a trimpot to phase-adjust our data capture clock to zero in on that sweet data sampling spot.)

If you truly want to go all out, you can also use DDR input buffers, giving you an equivalent 512MHz sampling rate with 1 of 16 phase positions to select your source data from.

And don't forget about using 'Signal Tap' within Quartus, which gives you a 256/512MHz logic analyzer on the input data plus your HDL code's registers, allowing you to capture and see what's actually happening on your inputs and in your HDL in real time.


I doubt you would need to, but if the ST's output is crap beyond belief, you may select a different 1-of-8 sampling phase for each input, though this is an extreme case of fixing a problem which is generated elsewhere.

An example is if the Atari ST has a different logic IC generating the video sync output compared to the 12-bit RGB data, and you have no choice but to correct for a different skew delay, where on an analog RGB monitor this skew would have no effect.
« Last Edit: January 19, 2022, 11:25:18 pm by BrianHG »
__________
BrianHG.
 

Offline RoGeorge

  • Super Contributor
  • ***
  • Posts: 4124
  • Country: ro
Re: Arrow DECA MAX 10 board for $37
« Reply #183 on: January 20, 2022, 12:23:27 am »
Nice project!  :-+

I haven't used my DECA recently, though there is a pile of projects for it on the bucket list.

Analog TV signals use 75 ohm impedance.  If that Atari output is meant for a standard video monitor, then you need to use 75 ohm impedance coax cables (preferably the same length for vertical and horizontal sync).  These 75 ohm coaxial cables (ordinary video cables work, often RCA-terminated coax like the ones for old VCRs) must be terminated with a 75 ohm resistor at the other end of the cable, in parallel with the input of the FPGA.  Without impedance matching, the signals will reflect back and forth in the cable, creating horrible spikes, echoes and waveform degradation.

I don't recall if the ADC inputs on the DECA board already have a 50 ohm low-impedance input; I suspect they do not.  In case they do have 50 ohms, then a series resistor of 25 ohms is needed at the FPGA side of the cable.

Series 1k could be a workaround, but that is expected to degrade high frequencies, so more jitter and less detailed pulse edges.  Proper 75 ohm impedance matching and 75 ohm coaxial cables should work better.



TV signal synchronization can be very tricky when the signal source has an independent clock.  I once built a machine to overlay video subtitles, which immediately showed the same clock and sync problems you are talking about.  Back then I simply synchronized the clock of the subtitling computer by pausing its clock (it was a ZX Spectrum, not an Atari), so as to make the computer clock "driven" by the analog video signal coming from a VCR.

Not sure if your Atari setup allows this trick.  If not, whatever you do, there will be skew between the clocks; maybe buffering a full frame might fix it.  I have no idea how sensitive the HDMI standard is to time jitter.

Also, analog TVs were very tolerant of line jitter and frame jitter/frequency, and they locked onto the source signal over a much broader range than the broadcast TV specs require.  Many computers back then were therefore far from compliant, so I wouldn't be surprised if some of the jitter comes from the Atari hardware itself, but let's hope it is not that.



I'm sure many will chip in with new tricks and ideas for many pages to come; video conversion is a tricky problem, and this new project of yours deserves its own topic.
« Last Edit: January 20, 2022, 12:32:22 am by RoGeorge »
 

Online BrianHG

  • Super Contributor
  • ***
  • Posts: 6639
  • Country: ca
Re: Arrow DECA MAX 10 board for $37
« Reply #184 on: January 20, 2022, 01:32:32 am »
Quote
Analog TV signals use 75 ohm impedance.  If that Atari output is meant for a standard video monitor, then you need to use 75 ohm impedance coax cables (preferably the same length for vertical and horizontal sync).  These 75 ohm coaxial cables (ordinary video cables work, often RCA-terminated coax like the ones for old VCRs) must be terminated with a 75 ohm resistor at the other end of the cable, in parallel with the input of the FPGA.  Without impedance matching, the signals will reflect back and forth in the cable, creating horrible spikes, echoes and waveform degradation.
It sounds like he is generating the video clock for the STe and sampling the binary data from the ST's display chip.  Otherwise, he would still get the jail bars and not have any jitter issues.

IE, if he makes the video clock, there cannot be any jitter on the HDMI output.
I have made enough video outputs on the DECA, now at 1080p, 720p, 480p, 960p, and a few other modes, with no jitter at all.  Maybe he isn't driving the HDMI output IC properly?
I never had to use any fancy delays.

Look at the ribbon cables on his PCB where the silkscreen says red/green/blue: there are 4 wires on each ribbon, i.e. 12-bit digital color.

There are also 3 wires labeled sync and DE.  My guess is that, just like on the Amiga (i.e. the sync signals toggle on the rising clock while the digital video out toggles on the falling clock, unless it is the ECS DENISE with the 28.63636MHz video out mode, where the output timing has all-new timing problems), the output timing of the syncs vs. the RGB color data has a fine but definite skew.  Using my trick of running the FPGA inputs at 8x frequency and sweet-spot selecting the RGB data and sync/de inputs from the ST will give a perfectly clean, jitter-free result; each 8x sample position provides a programmable, tunable delay line with roughly +/- 4ns steps.
« Last Edit: January 20, 2022, 01:45:22 am by BrianHG »
__________
BrianHG.
 

Offline migry

  • Contributor
  • Posts: 47
  • Country: gb
Re: Arrow DECA MAX 10 board for $37
« Reply #185 on: January 20, 2022, 02:08:33 am »
Sorry if I wasn't clear, and sorry for not knowing how to quote properly.

"It sounds like he is generating the video clock for the STe and sampling the binary data from the ST's display chip."

Yes.

The STe version of the Atari allows the video chip (aka the shifter) to be driven from an external 32MHz clock.

The HDMI generation (which I have done many times on various hobbyist FPGA boards and find amazingly interesting) is given below; I chose a 50Hz mode, which needs a 54MHz clock. I use a Dell monitor as my display.

Code: [Select]
// FORMAT_1440x576p_50Hz
// FORMAT 29,30  == 1440 x 576 Progressive @ 50Hz
// pixel clock =  54.000MHz
// aspect ratio = 4:3 & 16:9
// horiz = 32.0us => 31.250kHz
// frame = 20.0ms => 50Hz

The line frequencies and pixel count work out perfectly at 50Hz, but unfortunately not for 60Hz (although I found something close, but the Atari VSYNC was 58Hz or thereabouts).
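For reference, the arithmetic behind that, assuming the usual 1728 pixel x 625 line total raster for this CEA mode (those totals are standard figures I'm assuming here, not numbers taken from the code above):

Code: [Select]
// 54,000,000 / 1728 = 31,250 Hz  -> the 32.0us HDMI line rate
// 31,250 / 625      = 50 Hz      -> frame rate, exactly
// 31,250 / 2        = 15,625 Hz  -> the Atari's 50Hz line rate, so each ST line maps onto exactly two HDMI lines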

The prototype, which used a plug-in breadboard and lots of "dupont" wires, eventually worked flawlessly. Each set of 4 digital RGB outputs from the "shifter" (which go through resistors to generate an analogue voltage) is buffered by my 74LVC8T245 voltage translator/buffers and then input to the FPGA. Same for VSYNC, HSYNC and DE. Only the 32MHz external clock goes from the FPGA back to the Atari (there is a pin on the 13-pin video connector for this). I had two PCBs made to tidy up the setup, but I didn't consider issues related to having a bus running at this speed. Anyway, the glitch problem is fixed with 1k resistors which just "take the edge off", so to speak, working with the input capacitance of the FPGA pins.

It's been a while, but the digital parallel video rate is 16Mb/s for medium res and 8Mb/s for low res.

In order to find the first pixel of the top active video line, I use the DE (display enable) input. This input is sampled by the 32MHz clock. I am pretty sure the current configuration has a bad sampling point for DE, and that the positive edge of the 32MHz clock is too close to where DE transitions. I simply need to find a way to delay DE in order to get a clean sample. Also, the Atari has a feature (bug) where it can come up in one of 4 clock positions relative to DE.

Just FYI, the digital video (with its 15.625kHz sync rate) is read into a memory, then each scan line is output twice at the HDMI rate of 31.25kHz.
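Something like this minimal sketch is the general idea (all module and signal names are hypothetical, purely for illustration; the bank-select handover between the two clock domains is glossed over and would need proper synchronisation in a real design):

Code: [Select]
// Hypothetical line-doubler sketch: two ping-pong line buffers. The capture
// side fills one bank while the HDMI side reads the other bank out twice,
// since the HDMI line rate is exactly twice the ST line rate.
module line_doubler (
    // capture side (32MHz domain, pixels strobed at the 16MHz pixel rate)
    input  wire        wr_clk,        // 32MHz
    input  wire        wr_line_start, // pulse at the start of each captured ST line
    input  wire        wr_en,         // pixel strobe (DE qualified)
    input  wire [11:0] wr_rgb,        // 12-bit RGB from the shifter
    // HDMI side (54MHz domain)
    input  wire        rd_clk,        // 54MHz
    input  wire [9:0]  rd_x,          // pixel position within the HDMI line
    output reg  [11:0] rd_rgb
);
    reg [11:0] buffer [0:2047];       // two banks of up to 1024 pixels each
    reg        wr_bank = 1'b0;
    reg [9:0]  wr_x    = 10'd0;

    always @(posedge wr_clk) begin
        if (wr_line_start) begin
            wr_bank <= ~wr_bank;      // swap banks once per captured line
            wr_x    <= 10'd0;
        end else if (wr_en) begin
            buffer[{wr_bank, wr_x}] <= wr_rgb;
            wr_x <= wr_x + 10'd1;
        end
    end

    // Read the bank that is NOT currently being written; it gets read for two
    // consecutive output lines before the banks swap again.
    // (wr_bank should be re-synchronised into rd_clk in a real design.)
    always @(posedge rd_clk)
        rd_rgb <= buffer[{~wr_bank, rd_x}];
endmodule

Because the 54MHz and the 32MHz clocks both come from the same PLL chain here, the two sides stay locked, which is what makes a simple ping-pong scheme like this workable.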

As long as I sample the RGB input busses and DE "cleanly" the picture is rock solid.

As a side note, I made several attempts to use the DECA's audio codec and failed miserably. So I used a fairly dumb analogue-to-I2S converter board instead. So the HDMI output, thanks to the HDMI chip on the DECA, carries both video and audio.

The problem is that I don't know how to utilise the I/O features of the MAX10 FPGA. I currently use them in a rather dumb way. I am looking for any insight or pointers to example code from which I can learn how to properly use the advanced(?) I/O features, just like the DDR controller already described in this thread, where I assume the clock needs aligning to the data window, which is more or less the same problem I face.

Please dumb down your replies!  ;D
 

Online BrianHG

  • Super Contributor
  • ***
  • Posts: 6639
  • Country: ca
Re: Arrow DECA MAX 10 board for $37
« Reply #186 on: January 20, 2022, 02:40:01 am »
In Quartus, use the IP Catalog and select Basic Functions / I/O - GPIO-Lite Intel FPGA.
Choose a Verilog or VHDL file name and configure your IO port.
You can set the port register to DDR and look at the documentation there.

You will not find the IO timing de-skew features here.
Such fine timing is useless for 32MHz, as the adjustable skew is only in the +/- 50ps range.
(Do you know how fine 50ps is?)

To constrain your IO port to specific timing, you will need to create an .sdc file.  This may be out of your league for now, but again, this constraint file is for +/- 0.5 to a few nanoseconds.
(Your video clock is 16MHz, do you know how many nanoseconds this is?)

My recommendation to run your inputs clocked at 256MHz - 8x your current 32MHz reference clock, 16x the speed of your video output data - is your best solution.  Create a software-adjustable pipe delay, one for your syncs and DE, another for your video data, and operate there in standard Verilog.
(With a 256MHz input data clock, do you know how many positions you may take from that Atari ST source 16MHz video output?)
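One way such a software-adjustable pipe delay could look in plain Verilog (just a sketch with made-up names; one narrow instance for the syncs/DE and a wider one for the RGB data):

Code: [Select]
// Software-adjustable pipe delay, clocked at 256MHz. 'sel' picks how many
// 256MHz clocks (~3.9ns each) the signal is delayed before being handed to
// the rest of the design. All names here are hypothetical.
module pipe_delay #(
    parameter WIDTH = 1,     // e.g. 3 for the syncs and DE, 12 for the RGB bus
    parameter DEPTH = 16     // up to 16 x ~3.9ns of adjustment (keep DEPTH <= 16 with a 4-bit sel)
)(
    input  wire             clk256,
    input  wire [WIDTH-1:0] d_in,
    input  wire [3:0]       sel,     // delay select, 0..DEPTH-1
    output wire [WIDTH-1:0] d_out
);
    reg [WIDTH-1:0] pipe [0:DEPTH-1];
    integer i;

    always @(posedge clk256) begin
        pipe[0] <= d_in;                      // newest sample
        for (i = 1; i < DEPTH; i = i + 1)
            pipe[i] <= pipe[i-1];             // shift the history along
    end

    assign d_out = pipe[sel];                 // tap chosen at run time
endmodule

Instantiated once with WIDTH=3 for the syncs and DE, and once with WIDTH=12 for the RGB bus, this gives two independently tunable delays in roughly 3.9ns steps.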

The super complicated +/-50ps alignment delay won't do a thing for your 16MHz-clocked DE signal coming from your Atari ST.  If your Atari's video were clocked at 500MHz, then I would first say your PCBs and wiring need to be improved, and then you would want to use the super complicated +/-50ps alignment delay capabilities in the advanced IO features of the MAX10 to compensate for signal timing.


Q: How did you divide your output 54MHz into the ST's 16MHz?
« Last Edit: January 20, 2022, 02:55:56 am by BrianHG »
__________
BrianHG.
 

Offline migry

  • Contributor
  • Posts: 47
  • Country: gb
Re: Arrow DECA MAX 10 board for $37
« Reply #187 on: January 20, 2022, 11:09:03 pm »
So Brian, thank you for taking the time to read and answer my questions. You have made suggestions and I will explore those which make sense to help me to a solution. I haven't used anything except the basic I/O pad, so I really ought to find some learning material to understand more about these special I/O pins (features?) and how to instantiate them. I've found a few useful PDFs, but nothing regarding delay primitives.

Quote
(Do you know how fine 50ps is?)
(Your video clock is 16MHz, do you know how many nanoseconds this is?)
(With a 256MHz input data clock, do you know how many positions you may take from that Atari ST source 16MHz video output?)
This may be out of your league for now...

Is there really any need to be so insulting?


Quote
Q: How did you divide your output 54MHz into the ST's 16MHz?

Code: [Select]
PLL_HDMI pll_hdmi_inst (
  .inclk0 ( MAX10_CLK2_50 ), // 50MHz board clock in
  .c0     ( w_pll_hdmi    )  // 54MHz HDMI pixel clock out
);

PLL_ATARI pll_atari_inst (
  .inclk0 ( w_pll_hdmi  ),   // 54MHz in
  .c0     ( w_clk_atari )    // 32MHz out (x16/27), external shifter clock for the STe
);

The numbers work out: 54 * 16 / 27 = 32.
 

Online BrianHG

  • Super Contributor
  • ***
  • Posts: 6639
  • Country: ca
Re: Arrow DECA MAX 10 board for $37
« Reply #188 on: January 20, 2022, 11:56:16 pm »

Quote
(Do you know how fine 50ps is?)
(Your video clock is 16MHz, do you know how many nanoseconds this is?)
(With a 256MHz input data clock, do you know how many positions you may take from that Atari ST source 16MHz video output?)
This may be out of your league for now...

Is there really any need to be so insulting?

No insult intended.  I was just trying to convey the vast difference in timing scale between a 16/32MHz system from the days of 25-15ns PLDs and an FPGA which can easily do 400MHz: when they talk about IO timing, the available adjustments are for correcting timing inside this tiny period, where simple coding cannot accommodate those required tiny delays.

You are working in a world where the kind of timing adjustment you need is done in HDL code, not in the realm of adjusting an IO's delay by +/- a few hundred picoseconds.

For your 'PLL_ATARI pll_atari_inst', if you are outputting that 32MHz on an IO directly, then unless it is a specific dedicated PLL output, timing isn't guaranteed.  Your output will be better quality if you run the PLL at 64MHz and make an output registered on that clock, ' out <= !out; ', giving a logic output at half the frequency.
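In other words, something along these lines (a sketch with hypothetical names):

Code: [Select]
// 64MHz from the PLL, divided by two in a register, so the 32MHz pin is
// driven from a clean registered output rather than by routing a PLL clock
// through general-purpose logic.
reg clk32_out = 1'b0;

always @(posedge clk64)         // clk64 = hypothetical 64MHz PLL output
    clk32_out <= !clk32_out;    // 32MHz, 50% duty cycle, out to the STe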


I still say make your PLL generate the 256MHz from that 54MHz, generate the 32MHz out from it (i.e. count 8 clocks), and select an adjustable position within to sample once every 16 clocks, making your adjustable delayed input for DE.
« Last Edit: January 21, 2022, 12:01:19 am by BrianHG »
__________
BrianHG.
 

Offline normi

  • Regular Contributor
  • *
  • Posts: 72
  • Country: 00
Re: Arrow DECA MAX 10 board for $37
« Reply #189 on: January 24, 2022, 01:01:37 pm »
The only option available on the Arrow site is a quote request; it says only 2 in stock, but I doubt they have any available. When was the last time these were in stock at the $37 price?
Did a quote request but am waiting for a response.
 

Offline lintweaker

  • Contributor
  • Posts: 22
  • Country: nl
Re: Arrow DECA MAX 10 board for $37
« Reply #190 on: January 25, 2022, 08:19:59 am »
Quote
The only option available on the Arrow site is a quote request; it says only 2 in stock, but I doubt they have any available. When was the last time these were in stock at the $37 price?
Did a quote request but am waiting for a response.
I did use the quote option a few days ago and got a response: they do not have any stock currently.
 

Offline ArsenioDev

  • Regular Contributor
  • *
  • Posts: 158
  • Country: us
    • DiscountMissiles: my portfolio and landing page
Re: Arrow DECA MAX 10 board for $37
« Reply #191 on: April 25, 2022, 04:45:04 pm »
Yeah, the stock on these has totally dried up, plus the FPGA silicon shortage is BRUTAL.
I snagged one for $56 on ebay recently as an entry into the soft silicon world.
 

