Author Topic: First Arduino experience.  (Read 5355 times)


Offline westfw

  • Super Contributor
  • ***
  • Posts: 4196
  • Country: us
Re: First Arduino experience.
« Reply #25 on: December 09, 2017, 04:58:12 am »
Quote
Quote
By the way. Which version of linux kernel is installed? Because in pre 4.10 versions CH341 was only partially supported
It seems it fully supports the RS-232 serial, but not the I2C.

Whether the Linux driver for the CH341 supports I2C or SPI is irrelevant, because the com link between it and the AVR chip on the Arduino only supports async serial anyway.   The AVR chip supports I2C and SPI (on other pins, not involving the CH341)
 

Offline paulca (Topic starter)

  • Super Contributor
  • ***
  • Posts: 4003
  • Country: gb
Re: First Arduino experience.
« Reply #26 on: December 09, 2017, 07:34:49 am »
Quote
Umm...  Your complaints about the tutorial are ... seriously tiny nits, for that level of tutorial.

And that's the beauty of it all, isn't it?  Perhaps you feel a bit guilty about not having had to write your own functions to manipulate the I2C interface, because those Arduino libraries are bloated and inefficient?   Or perhaps not!

Yes, they were nitpicks.  I was in a bad mood.

The null terminator may or may not be sent.  I suppose in this case it might not be.  \n is usual for Unix and most other things; \r\n is a DOS thing, and there are many debates on why they decided to do that.  It is the command sequence a lot of printers back then used.  Similar to why they decided that filenames were in 8.3 format... or why they went to the trouble of making them case insensitive.

I should have been specific about the storage; memory in a computer, RAM or ROM, would usually be specified in powers of 2 (multiples of 1024), as that is what matters to computers.  Flash memory cards that emulate disks are usually/should be multiples of 1000.  Network speeds, such as 100 megabit Ethernet, are powers of ten: 100,000,000 bits per second.

The hard disk storage example was of course grossly wrong, as you hint.  Anyone who has formatted a 1 TB drive and looked at how much they actually get is often disappointed: not only do you lose the 1000 vs 1024 difference, you also lose a load in formatting overhead, and then there is the minimum data allocation of a sector.  A 1 byte file might consume 512 bytes (or 1024) (+ inodes and/or allocation table space).   You run into this when trying to make a Linux boot floppy with 1.44 MB, where the /dev/ folder inodes make some space inaccessible.  You have to tune the filesystem to increase the number of inodes.  (It's actually better to have the minimum of /dev nodes on the disk, create a RAM disk and populate /dev there.)

Memory allocation is similar in that regard, but even more complex in the case of an OS like Linux.  You can write a C program on a 64 bit Linux machine that allocates 64 GB of RAM when you only have 1 GB, and the allocation will return successfully.  It is only when you try to write to all of the allocated blocks that you run into problems.  It's all down to the MMU relationship between virtual and physical addresses.  Also, side-stepping compiler optimisations, allocating 10 locations of 1 byte each can result in 5,120 bytes of RAM being allocated.  Then there is of course memory address alignment, packing, unpacking, etc.
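As a minimal sketch of that overcommit behaviour (my own illustrative example; whether the malloc actually succeeds depends on the vm.overcommit_memory policy, and the sizes are arbitrary):

// overcommit.cpp - ask Linux for far more memory than physically exists.
// Pages are only backed by physical RAM when first written to.
#include <cstdio>
#include <cstdlib>

int main() {
    const size_t huge = 64ULL * 1024 * 1024 * 1024;   // 64 GiB
    char *p = static_cast<char *>(malloc(huge));      // virtual space only
    if (p == nullptr) {
        puts("malloc refused up front (overcommit policy)");
        return 1;
    }
    puts("malloc of 64 GiB returned successfully");
    // Writing to every page would force real allocation and, on a
    // 1 GB machine, eventually invoke the OOM killer:
    // for (size_t i = 0; i < huge; i += 4096) p[i] = 1;
    free(p);
    return 0;
}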

Fair enough, way beyond a tutorial of that level.

I do feel guilty using the Adafruit libraries, though I'm not sure "guilty" is the right word; disappointed, maybe.  My day job is currently as a Java programmer, and one of my pet hates of Java is just how insanely inefficient most of the code written for it is.  HUGE, elaborate, multi-megabyte frameworks doing simple things in complicated, abstract and generic ways, and then the author only uses a tiny part that they could have written themselves 100 times more performant and 1000 times smaller.  They import dozens and dozens of jars.  We have applications (not saying where I work) which pre-allocate 16 GB of heap on start-up; 4 GB is becoming fairly standard these days in enterprise software.

So I find looking into embedded C/C++ again refreshing.
"What could possibly go wrong?"
Current Open Projects:  STM32F411RE+ESP32+TFT for home IoT (NoT) projects.  Child's advent xmas countdown toy.  Digital audio routing board.
 

Online brucehoult

  • Super Contributor
  • ***
  • Posts: 4003
  • Country: nz
Re: First Arduino experience.
« Reply #27 on: December 09, 2017, 08:57:12 am »
Quote
The null terminator may or may not be sent.  I suppose in this case it might not be.  \n is usual for Unix and most other things; \r\n is a DOS thing, and there are many debates on why they decided to do that.  It is the command sequence a lot of printers back then used.  Similar to why they decided that filenames were in 8.3 format... or why they went to the trouble of making them case insensitive.

The null terminator certainly should not be sent! That's string metadata, not part of the string data. Other storage formats for strings might have, for example, a length count prepended to the string data instead. You would never think of sending that from a "print" (as opposed to storing it on disk, or sending it to another program).
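To make that concrete, a small illustrative C++ fragment (the layouts are hypothetical examples, not any particular format): the NUL is metadata of the in-memory C representation, just as a length prefix would be, and neither belongs in the printed output.

// string_metadata.cpp - terminator vs length prefix are both metadata
#include <cstdio>
#include <cstring>

int main() {
    const char c_str[] = "Hi!";                  // 'H','i','!','\0' in memory
    unsigned char p_str[] = {3, 'H', 'i', '!'};  // length-prefixed layout

    // Only the three text bytes should ever be sent by a "print":
    fwrite(c_str, 1, strlen(c_str), stdout);     // strlen stops before NUL
    fwrite(p_str + 1, 1, p_str[0], stdout);      // skip the length byte
    putchar('\n');
    return 0;
}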

MSDOS followed CP/M in many things, which in turn followed DEC operating systems. DEC used 8.3 file names in RSTS/E (and others) on the PDP11 and in VMS. DEC used CRLF to terminate lines in a text file (when they didn't use a prepended record length, which they did often as well).

CRLF is a mistake, really. Two bytes where one will do. Sometimes you might want to return to the start of the same line to overprint. You would probably never want to drop down one line and print the next character below and to the right of the previous character (or if you wanted that, you'd want a lot of other things as well, using arbitrary XY positioning). So CR for simple start of line for overprinting and LF for start of next line is the most sensible. Even in the 70s you could perfectly easily tell the printer driver to expand an LF to CRLF.

Quote
I should have been specific about the storage; memory in a computer, RAM or ROM, would usually be specified in powers of 2 (multiples of 1024), as that is what matters to computers.  Flash memory cards that emulate disks are usually/should be multiples of 1000.

Flash memory might be used in place of disks, but it's built using a semiconductor grid with binary address lines. So its size is naturally a power of two -- or at least each block is.

It's interesting to note that Samsung, for example, offers SSDs in both power of ten and power of two sizes!

Samsung 960 Evo 500GB M.2 (2280), NVMe SSD, R/W (max) 3,200/1,800 MB/s, 330K/330K IOPS
SSD Capacity: 480 GB

Samsung 960 Pro 512GB M.2 (2280), NVMe SSD, R/W (max) 3,500/2,100 MB/s, 330K/330K IOPS
SSD Capacity: 512 GB

The Pro has the same binary capacity as its binary-power model number, while the Evo has less binary capacity than its model number.

But ... 500,000,000,000 bytes is only 465.66 GiB, not 480. So something else is going on there.
 

Offline paulca (Topic starter)

  • Super Contributor
  • ***
  • Posts: 4003
  • Country: gb
Re: First Arduino experience.
« Reply #28 on: December 09, 2017, 03:20:13 pm »
Quote
But ... 500,000,000,000 bytes is only 465.66 GiB, not 480. So something else is going on there.

The units are probably misleading. 

Wikipedia suggests that the power-of-2 unit should take the name gibibyte instead, but I don't think I've ever heard that being used.

I have seen the abbreviations "GiB", "KiB" and "MiB" used in textbooks etc. a few times, but I think the units are so polluted in popular marketing and press that they're probably only going to be used correctly in textbooks and datasheets.

The other ones that get people are megabit and megabyte per second; I've seen 1.5Mbps, 1.5MBps and 1.5MiB/s and all manner of variations.  Often Mb is used for bits and MB for bytes, but not always. 

I "believe" it might be correct to say a broadband connection with 10Mbps link speed should be capable of approximately 1MiB/s.  (using an old IP networking rule of thumb that on average 8bits will on average consume 2bits for the headers).  Of course it's approximate as a lot of small packets wastes more header bandwidth compared to a few large packets.

It's as clear as mud.  Best not assume if it's really important to what you are doing.
"What could possibly go wrong?"
Current Open Projects:  STM32F411RE+ESP32+TFT for home IoT (NoT) projects.  Child's advent xmas countdown toy.  Digital audio routing board.
 

Offline paulca (Topic starter)

  • Super Contributor
  • ***
  • Posts: 4003
  • Country: gb
Re: First Arduino experience.
« Reply #29 on: December 09, 2017, 03:42:12 pm »
So today I managed to get my DC load working again.  I also managed to get the Arduino to send the control voltage from a Linux terminal.

For reference, I used:

stty -F /dev/ttyUSB0 cs8 9600 ignbrk -brkint -icrnl -imaxbel -opost -onlcr -isig -icanon -iexten -echo -echoe -echok -echoctl -echoke noflsh -ixon -crtscts

To configure the TTY.

echo -en '\xff' > /dev/ttyUSB0
echo -en '\x0f' > /dev/ttyUSB0

To send the binary values (written in hex). 

Note: "Echo"ing to the port seems to open the serial coms, send the byte and close it again.  This rapid open/close apparently resets the arduino.

Trying to set -hupcl on the terminal with stty, so that closing would not hang up the serial connection, did not work and did not prevent the Arduino resetting.  So I pulled the reset pin high on the breadboard to stop it resetting.

Obviously this might be better done in code than from a bash terminal.  Anyway, baby steps.
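For the in-code version, something along these lines ought to work (a sketch only, untested; the device path, 9600 baud and the 0x0F byte come from the commands above, the rest is my assumption).  Because the program opens the port once, the open/close cycle that resets the Arduino is only triggered once:

// send_byte.cpp - open the port once, configure it raw, send a byte
#include <fcntl.h>
#include <termios.h>
#include <unistd.h>
#include <cstdio>

int main() {
    int fd = open("/dev/ttyUSB0", O_RDWR | O_NOCTTY);
    if (fd < 0) { perror("open"); return 1; }

    termios tio{};
    tcgetattr(fd, &tio);
    cfmakeraw(&tio);                  // raw: no echo, no CR/NL translation
    cfsetispeed(&tio, B9600);
    cfsetospeed(&tio, B9600);
    tio.c_cflag |= (CREAD | CLOCAL);  // enable receiver, ignore modem lines
    tio.c_cflag &= ~HUPCL;            // don't drop DTR when we close
    tcsetattr(fd, TCSANOW, &tio);

    unsigned char value = 0x0f;       // the control byte, written in hex
    write(fd, &value, 1);

    close(fd);
    return 0;
}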

Next I have to figure out how to read the difference between three voltage ranges:

3V - 4.2V
6V - 8.4V
9V - 12.6V

So if I had:
Input: 3.6V, 7.2V, 10V

I want...

Output: 3.6V, 3.6V, 2.8V

I can put a meter across different points in the series circuit (a LiPo pack) and read these voltages, but I need to alter them to share a common ground for use with the Arduino analogue input ports.

Unfortunately this will mean a hard day working out differential subtracting op-amps.  So I'm not going to be able to avoid analogue electronics for long.

I could potentially put all three through divide-by-3 dividers and do the subtraction in software.  Then I lose resolution on the lower voltages. 

Or through 1:1, 1:2 and 1:3 dividers to normalise them all to 0-4.2 V, but then I have less resolution on the upper voltages (a rough sketch of that approach follows below).

Not to mention the error factor in the voltage dividers, which would need calibration pots.  I don't know how accurate I can get differential amps, though.
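For reference, a rough sketch of the divider-plus-software-subtraction approach (untested; the 5 V reference, the A0-A2 pins and the exact 1:1/1:2/1:3 ratios are my assumptions, and the coarser resolution on the divided-down taps is visible in the scaling):

// cell_monitor.ino - read three pack taps through dividers, subtract in code
const float VREF = 5.0;                   // ADC reference voltage
const float RATIO[3] = {1.0, 2.0, 3.0};   // divider ratios for taps 1..3

void setup() {
  Serial.begin(9600);
}

void loop() {
  float tap[3];
  for (int i = 0; i < 3; i++) {
    // 10-bit ADC: 0..1023 maps to 0..VREF; undo the divider afterwards.
    // Each count is worth RATIO[i] * VREF/1024 V, so tap 3 is ~3x coarser.
    tap[i] = analogRead(A0 + i) * VREF / 1023.0 * RATIO[i];
  }
  // Each cell voltage is the difference between adjacent taps.
  float cell1 = tap[0];
  float cell2 = tap[1] - tap[0];
  float cell3 = tap[2] - tap[1];
  Serial.print(cell1); Serial.print(" V  ");
  Serial.print(cell2); Serial.print(" V  ");
  Serial.print(cell3); Serial.println(" V");
  delay(1000);
}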
"What could possibly go wrong?"
Current Open Projects:  STM32F411RE+ESP32+TFT for home IoT (NoT) projects.  Child's advent xmas countdown toy.  Digital audio routing board.
 

Offline mariush

  • Super Contributor
  • ***
  • Posts: 4983
  • Country: ro
Re: First Arduino experience.
« Reply #30 on: December 09, 2017, 04:10:08 pm »


Quote
Quote
I should have been specific about the storage; memory in a computer, RAM or ROM, would usually be specified in powers of 2 (multiples of 1024), as that is what matters to computers.  Flash memory cards that emulate disks are usually/should be multiples of 1000.

Flash memory might be used in place of disks, but it's built using a semiconductor grid with binary address lines. So its size is naturally a power of two -- or at least each block is.

It's interesting to note that Samsung, for example, offers SSDs in both power of ten and power of two sizes!

Samsung 960 Evo 500GB M.2 (2280), NVMe SSD, R/W (max) 3,200/1,800 MB/s, 330K/330K IOPS
SSD Capacity: 480 GB

Samsung 960 Pro 512GB M.2 (2280), NVMe SSD, R/W (max) 3,500/2,100 MB/s, 330K/330K IOPS
SSD Capacity: 512 GB

The Pro has the same binary capacity as its binary-power model number, while the Evo has less binary capacity than its model number.

But ... 500,000,000,000 bytes is only 465.66 GiB, not 480. So something else is going on there.

Some space is hidden from the user to extend the life of the SSD.

Flash memory is arranged in chunks (sectors) of 512 bytes or some similarly small value, and multiple sectors are grouped into pages, say 128-512 KB in size.
With flash memory you can write sectors, but you can't overwrite them.  To write over a sector you have to erase it first, but you can't erase a single sector: you can only erase a whole 128-512 KB page, and the flash memory can only be erased a limited number of times.
For SLC NAND each page can be erased maybe 10k times; for MLC you're down to 2k-4k depending on manufacturing (the smaller the nm process, the fewer the erase cycles); and for TLC you can be down to 500-1000 erase cycles.
So the SSD controller tries to delay erasing pages as much as possible: whenever a 512 B - 1 KB sector needs to be overwritten, it just writes the data into some other sector in another page and marks the old sector as "can be erased".  Later, when the controller has very few empty sectors left to write to, or during idle time, it scans for pages with a large number of "can be erased" sectors, copies the remaining live sectors to other pages, and then erases the whole page, spending one of those erase cycles and making the sectors in that page writable again.

So, to reduce erase cycles, the SSD controller uses some of those hidden gigabytes of flash to shuffle sectors between pages instead of being forced to erase pages to make sectors writable, delaying page erases as much as possible.  For example, you are sold a 240 GB SSD (powers of 1000) which actually contains 256 GiB of memory (powers of 1024), so there are 256 x 1024 x 1024 x 1024 - 240 x 1000 x 1000 x 1000 = 34,877,906,944 bytes, roughly 33 GB, used to extend the life of the SSD.
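As a toy illustration of that write-out-of-place idea (deliberately simplified, with tiny made-up sizes, and ignoring the copy-out of live data):

// toy_ftl.cpp - overwrites go to fresh sectors; pages erased only when full
#include <cstdio>
#include <vector>

const int SECTORS_PER_PAGE = 4;   // real pages hold far more sectors
const int NUM_PAGES = 3;          // pretend one page is the hidden spare

struct Page {
    int erases = 0;
    int next_free = 0;            // sectors within a page are used in order
};

int main() {
    std::vector<Page> pages(NUM_PAGES);
    int erase_total = 0;
    int current = 0;

    // Overwrite the same logical sector 8 times.  Writing out-of-place
    // means an erase only happens when a page fills, not on every write.
    for (int write = 0; write < 8; write++) {
        if (pages[current].next_free == SECTORS_PER_PAGE) {
            pages[current].erases++;              // reclaim the full page
            erase_total++;
            pages[current].next_free = 0;
            current = (current + 1) % NUM_PAGES;  // spread the wear
        }
        pages[current].next_free++;               // write a fresh sector
    }
    printf("8 overwrites cost %d page erase(s) instead of 8\n", erase_total);
    return 0;
}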

Also, some SSD controllers and some NAND chips can configure a part of their memory in a pseudo-SLC mode.  For example, SanDisk has some SSD drives where a 240-256 GB SSD with MLC memory has around 10 GB of "SLC memory": basically, from each of the 8-16 x 32-64 GB MLC chips, a portion of around 2-4 GB of MLC (where each cell stores 2 bits) is configured to store only 1 bit per cell, giving around 0.75-1 GB of pseudo-SLC memory per chip.  These portions have higher endurance, say from 2000-4000 erase cycles up to around 6000-8000.

So if the SSD applies this technique, a 240 GB SSD will still have 256 GB of MLC inside, but not necessarily 30-ish GB of spare; maybe only 16-20 GB, with the rest reconfigured as SLC memory and usually used as a write buffer to achieve higher write throughput (the controller prefers writing to the SLC portion and then, when idle, moves sectors from there to the regular areas).
 


 

