Is there any kind of protection on the ADC and DAC pins to prevent damage from ESD and overvoltage? This seems to be a common failing with basic DAQ designs, but is pretty essential for any real world use.
This looks interesting. Will you be publishing the schematic? That will help some users like me to understand suitability for an application.
I'd be interested in whether you see any increased noise (lower ENOB) when used with the optional ESP32 & WiFi, assuming the ESP32 is meant to mount on the back of the PCB. Especially with the 14-bit ADC option.
Just a suggestion - you may want to spend a little extra time on assembly for the next sample PCB. The soldering is a bit rough in the pictures above, and some of the terminal pins aren't even soldered.
It's all open source. I will release the whole KiCad project, and I will also release all of the firmware that I wrote for it.
That's an interesting point. I've never actually thought about it; it would probably increase the noise, and as you pointed out, the increase is going to be more pronounced at higher resolutions. The ESP32 module is something I added just before the pre-launch page was released; the boards you see in my post don't actually include the ESP32 module yet, it's for another update. The board is a 4-layer board and the ESP32 module will be on the bottom side, so maybe the ground plane will help combat this issue.
I didn't solder all of the pins because I had a single sample at the time I was taking the pictures, and I needed to photograph it both with and without the Raspberry Pi Pico on top, so I left most of the pins unsoldered.
Excuses excuses I know.
Consider subscribing to the mailing list. I will post any updates here, but if you want to get notified you can sub here
Yes, the current footprint on the back looked to me like one for a BLE module, not an ESP32-MINI
Reasonable excuses. No point making too many prototypes when you are planning another revision soon.
My main concern with the ESP32 would be the current paths, as the ESP32 chip can draw quite a bit when transmitting. Depending on the application it may be possible to avoid ADC conversions during application-driven transmissions, but not all of them, due to the nature of TCP/IP & WiFi.
Haha, already done on both projects - but this one is of more interest to me right now. Always plenty of applications for DAQ here.
I can reach more than 7 Mbit/s. The speed depends on your setup as well; for example, if you use a hub it's slower.
I can't really complain about this. I don't think I can go much faster than this, because at the end of the day USB full speed runs at 12 Mbit/s.
Did anyone manage to get more than 7 Mbit/s? I am curious.
To get higher throughput you could potentially use some simple compression. A lot of ADC data at high data rates may just be varying by small amounts due to noise, unless of course you are actually measuring a signal that is some decent fraction of the bandwidth limit.
If you are packing 12-bit data already, you could use some other scheme to pack small changes into a smaller number of bits. I'm sure there is a variation on RLE or similar which can compress the noise in a lossless way. It is a trade-off between processing speed, communication speed, and use case.
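To make that concrete, here is a rough Python sketch of the kind of delta-packing I mean. The one-byte-delta-with-escape scheme is just an arbitrary illustration, not a recommendation:

    def pack_deltas(samples):
        # First sample goes out raw (2 bytes, little endian).
        out = bytearray(samples[0].to_bytes(2, "little"))
        prev = samples[0]
        for s in samples[1:]:
            d = s - prev
            if -127 <= d <= 127:
                out.append(d & 0xFF)             # small change: 1 signed byte
            else:
                out.append(0x80)                 # escape marker (-128 is never a delta)
                out += s.to_bytes(2, "little")   # big jump: send the raw sample
            prev = s
        return bytes(out)

On a quiet signal where successive samples only differ by a few LSBs of noise, almost every sample fits in one byte instead of two, and the decoder can undo it exactly, so it stays lossless.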
I'm curious as well. What kind of driver are you using on the PC side?
Yeah, I'm not an expert on compression. It can get into heavy mathematics beyond my pay grade quite quickly.
RLE is a simple compression scheme which can work well with data that has a lot of repeated values in a series; it stores the values and their repeat counts. It is/was commonly used in simple image formats where a lot of a single background color may be present. Fax machines used something similar. It is easy to implement and pretty fast, but with certain data it can increase the overall size, in which case you just send the original uncompressed data. See https://en.wikipedia.org/wiki/Run-length_encoding
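To show just how simple it is, here's a toy Python version that emits (count, value) byte pairs:

    def rle_encode(data):
        # Emit (run_length, value) byte pairs; runs are capped at 255.
        out = bytearray()
        i = 0
        while i < len(data):
            run = 1
            while i + run < len(data) and data[i + run] == data[i] and run < 255:
                run += 1
            out += bytes([run, data[i]])
            i += run
        return bytes(out)

Note the worst case: if nothing ever repeats, every byte becomes two bytes, which is exactly why you need the fall-back to sending the raw block.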
Maybe Deflate compression as used in the original PKZIP (a bit more complex). A lossless audio codec may also be quite suitable, since ADC streams are waveforms too.
There are plenty of options without re-inventing the wheel, but I don't know enough to make a recommendation.
See https://en.wikipedia.org/wiki/Lossless_compression
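Deflate at least costs nothing to try from Python, since it's in the standard library as zlib. A quick experiment on real captured ADC data would show whether it earns its CPU time:

    import zlib

    def maybe_compress(raw: bytes) -> bytes:
        packed = zlib.compress(raw, level=1)               # level 1 = fastest, suits streaming
        return packed if len(packed) < len(raw) else raw   # fall back to raw data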
DAQ boards are usually used as part of some automated test setup. So being able to display the waveform like an oscilloscope is not that useful in most cases. Tho it is still an excellent demo program to show it working and quickly test it out.
What tends to be more important is being able to easily interface to it in the programming language of choice. So you want to use a driver library that has some multiplatform support, so that people using, say, C++ or C# can use it.
When it comes to matplotlib, it is pretty performant, since a lot of these chart libraries do the actual plotting in machine-code DLLs rather than Python. Hitches in performance might actually be the fault of Python's garbage collector. Once you throw away enough allocated memory, the garbage collector kicks in to reclaim it as free memory. If you have built up a lot of garbage it can take a significant amount of time to clean up, and during this time the program is paused. If that is a problem, there are ways to manually run garbage collection before it gets too bad (see below), or even better, to write code in a way that generates less garbage.
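For completeness, the manual control I mean is just the stdlib gc module, something along these lines (acquire_blocks and process are placeholders for whatever the acquisition loop actually looks like):

    import gc

    gc.disable()                         # no automatic collections mid-acquisition
    try:
        for block in acquire_blocks():   # placeholder acquisition loop
            process(block)               # placeholder processing
    finally:
        gc.enable()
        gc.collect()                     # pay the cleanup cost at a safe moment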
The industry standard for test equipment automation is mostly VISA:
https://en.wikipedia.org/wiki/Virtual_instrument_software_architecture
Even more compatibility can be achieved with using SCPI on top of it:
https://en.wikipedia.org/wiki/Standard_Commands_for_Programmable_Instruments
Thanks to these, the same piece of PC test automation software can use a Keysight DMM, Fluke DMM, or Keithley DMM without being specifically designed for any of them. The communication can even happen over GPIB, LAN, USB, RS232, etc., and the automation software doesn't even know about it. The software simply asks to open a connection to a device named, for example, "GPIB0::5::INSTR" and the VISA API will figure out where that piece of test equipment is and how to communicate with it. Once connected, the connection looks pretty much like a serial port that can send and receive any data you want.
The neat thing is that since RS232 is one of the commonly supported VISA communication methods, any serial port can be set up as a VISA device. This includes USB-to-UART chips that use the generic USB CDC driver. So if you also speak the SCPI protocol over that serial port, your DAQ can be used with existing software that already drives DAQs from the big vendors like Keysight, Tek, etc.
The way to use VISA is to just install your favorite implementation of VISA from a vendor like NI, Keysight, Rigol, etc. My personal favorite is the Keysight VISA implementation, since it has a nice UI and some useful debugging tools to spy on the communication happening between devices. This does not lock you into only using Keysight gear; it talks to a Rigol DMM just fine (it just makes setting up some of the more complex GPIB or PXI stuff easier on Keysight-branded equipment)
This compatibility goes both ways, so if your Python GUI uses VISA to talk to the instrument means that it can talk to other DAQ boards too, not just yours.
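From Python the usual entry point is PyVISA. With a generic CDC serial device the board just shows up as an ASRL resource; the resource name below is only an example, yours will differ:

    import pyvisa

    rm = pyvisa.ResourceManager()      # uses NI/Keysight VISA, or pyvisa-py as backend
    print(rm.list_resources())         # e.g. ('ASRL3::INSTR', 'GPIB0::5::INSTR')

    daq = rm.open_resource("ASRL3::INSTR")   # example address
    daq.baud_rate = 115200
    print(daq.query("*IDN?"))          # standard SCPI identification query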
As for Python's garbage collection, very few people give it much attention, since among the popular languages Python is the slowest of them anyway. It is a language that focuses on being easy and powerful rather than performant. Java is faster, and C# faster still, while both are garbage collected. Lower-level languages like C and C++ are faster again and don't need garbage collection, but they are harder to use. My personal preference is C# because it comes with lots of modern language creature comforts, is still rather performant, and comes with Visual Studio (an excellent free official IDE that makes GUI development super easy and has some of the best code autocomplete)
Tho in your case the Pi Pico only has USB 1.1, so you are limited to 12 Mbit Full Speed USB, but you should be able to get pretty close to the theoretical 12 Mbit if you use it in an efficient way (large data blocks, minimized latency, etc.)
Here are some benchmarks for USB speed https://www.pjrc.com/teensy/usb_serial.html
Turns out there is a rule that USB should only utilize 90% or 80% of the bus bandwidth (depending on which spec you read, since USB 2.0 updates Full Speed operation a bit), and there is a bit of protocol overhead on top of that. So about 10 Mbit is the theoretical maximum and about 9.2 Mbit is what he could actually get. So there is room for improvement. It is possible that the driver for the USB device peripheral on your Pico is not the most optimized (squeezing the most out of hardware can sometimes take some trickery).
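If I recall the spec numbers right, the back-of-the-envelope math agrees: at most 19 full 64-byte bulk packets fit in each 1 ms Full Speed frame, so

    print(19 * 64 * 8 * 1000 / 1e6)   # 9.728 Mbit/s of bulk payload, tops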
Interestingly, OSes still find it hard to service USB controllers quickly enough, so this drops the usable bandwidth some too. Likely because USB FS does not support the bigger 512-byte packets that USB HS has, so the CPU has to do too much babysitting to get full bandwidth.
Oh, and as usual, USB hubs are always something to be wary of. They are known to cause problems in high-bandwidth situations. Some motherboards even have on-board USB hub chips in order to offer more USB ports.
EDIT:
Also, it might not be a good idea to push USB to its limits in a product. I have used the Saleae USB logic analyzers quite a bit. Those stream data live to the PC and use the PC's RAM as the sample memory. When using the max sample rate they get unreliable. In favorable conditions (low CPU load, good USB controller, no hubs, etc.) it works well, but otherwise it keeps clogging the FIFO, making that sample rate unusable.
Yeah, having 480 Mbit USB HS here would be very useful, but MCUs with a built-in USB HS PHY are quite rare.
Compression for noisy waveforms is a bit tricky, since they compress terribly with the simple and easy RLE. From my experience wavelet transforms work well for compressing continuous signals. The transform in itself doesn't make the result smaller, but it does push a lot of values near 0, allowing you to round them to 0 and then have RLE work very well on the result (this is mostly how JPEG works). This is obviously lossy, so to make it lossless you also need to include the error signal (which is much smaller, so it needs fewer bits).
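A rough sketch of that pipeline, assuming the PyWavelets package (pywt) for the transform; the wavelet and threshold here are placeholder choices:

    import numpy as np
    import pywt

    def wavelet_squeeze(samples, threshold=4.0):
        # Decompose; most detail coefficients of a smooth signal land near 0.
        coeffs = pywt.wavedec(np.asarray(samples, dtype=float), "db4", level=4)
        # Hard-threshold: values below `threshold` become exactly 0 (the lossy
        # step), which is what makes RLE effective afterwards.
        squeezed = [pywt.threshold(c, threshold, mode="hard") for c in coeffs]
        # For a lossless variant, also keep the (small) residual:
        # residual = samples - pywt.waverec(squeezed, "db4")[:len(samples)]
        return squeezed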
Tho more of a problem is that lossless compression by definition can't guarantee a given compression ratio. If you feed it a messy enough incompressible signal then you simply get no compression, or even negative compression. In your case this would make the FIFO overflow and everything break down.
But if you use lossy compression you can give it a target bitrate, so the data is squeezed down however much is needed to fit. If you give it a particularly incompressible signal, then the result is a particularly distorted signal that still fits in the bitrate. For a wavelet transform this is easily done by adjusting the threshold at which data points are rounded to 0 in order to make RLE more efficient. You can often squeeze the data down to only 10% of its original size this way.
More of a problem is that lossy compression is generally not desired for a DAQ, even though in 99% of use cases it would not be noticeable at all, especially since it would be very good at compressing the kind of signals it usually gets fed. If anything, the data might look better in most cases, since the tiny high-frequency noise is the first thing the compression is tempted to throw away.
I would imagine quite a few people would hate knowing in the back of their mind that their precious data might have been mucked up by compression along the way.