Author Topic: Anybody wants old data books (UK)?  (Read 3222 times)

Offline peter-h (Topic starter)

  • Super Contributor
  • ***
  • Posts: 4356
  • Country: gb
  • Doing electronics since the 1960s...
Anybody wants old data books (UK)?
« on: April 22, 2022, 06:55:52 pm »
I have a few hundred kg of data books, going back to 1990, and some from before that.

The reason it is 1990 is that I inherited a load back then; before that, I collected only the ones I actually needed.

I can't see these having any value to anybody, because most data sheets can be found online with Google, but some of these data books contain "famous" app notes, e.g. from Jim Williams (Linear Technology).

So I was going to chuck them out, but maybe somebody could use them.

Location is Brighton area, SE UK.
Z80 Z180 Z280 Z8 S8 8031 8051 H8/300 H8/500 80x86 90S1200 32F417
 
The following users thanked this post: eti

Online tooki

  • Super Contributor
  • ***
  • Posts: 13157
  • Country: ch
Re: Anybody wants old data books (UK)?
« Reply #1 on: April 22, 2022, 07:40:58 pm »
::sigh:: if I were nearby…

(I need to make another visit to Bournemouth to see my tattoo artists, so somewhat closer than the mainland, but of course Swiss (the airline) is so stingy with weight that it’d be impossible to bring back anything. :( )
 

Offline daqq

  • Super Contributor
  • ***
  • Posts: 2321
  • Country: sk
    • My site
Re: Anybody wants old data books (UK)?
« Reply #2 on: April 22, 2022, 09:04:30 pm »
Believe it or not, pointy haired people do exist!
+++Divide By Cucumber Error. Please Reinstall Universe And Reboot +++
 
The following users thanked this post: amyk, nctnico

Offline TerraHertz

  • Super Contributor
  • ***
  • Posts: 3958
  • Country: au
  • Why shouldn't we question everything?
    • It's not really a Blog
Re: Anybody wants old data books (UK)?
« Reply #3 on: April 24, 2022, 09:49:21 am »
Hmm, what a pity. I can only take on one impossible, crazy, costly quest at a time.
Also, I already have a 'several hundred kg' collection of data books, so there'd be a lot of overlap.
Sure wish I could pick out a couple of boxes full though.
Are they on shelves? Any chance of a photo series of spines along the shelves? At resolution good enough to read the titles?

Whatever you do, don't bin them. Do a photo summary and post them for sale (or free?) as a collection, on eBay and on mailing lists like cctalk (cctalk.classiccmp.org / classic computers).

Also, please try to find a taker who will give an assurance that they will be preserved in their original paper form. You will find _many_ people all fired up to 'scan them, now!' But they won't be able to, or they may use a destructive method like slicing off the spines and using an auto-feeder. The results are near worthless, and the books are destroyed. It's a kind of collective derangement, nearly as bad as book burning.

Collections such as yours need to be preserved from fools like that, at least for the next few decades. By then there will be adequate solutions to the problems of physical scanning and file representation that plague current scanning efforts. For example, the PDF standard contains NO adequate image format (and never will). JPG, BMP, fax-mode, etc. are not adequate. PNG is close to good enough, but PDF does not include the PNG format. Not many people know that.

Result: everything ever scanned to PDF is going to have to be redone in a better format eventually. That's one reason paper originals need to be preserved.
Another is that digital documents can lie. There are other reasons too.
« Last Edit: April 24, 2022, 10:01:15 am by TerraHertz »
Collecting old scopes, logic analyzers, and unfinished projects. http://everist.org
 
The following users thanked this post: tooki

Offline DiTBho

  • Super Contributor
  • ***
  • Posts: 4367
  • Country: gb
Re: Anybody wants old data books (UK)?
« Reply #4 on: April 24, 2022, 10:04:59 am »
I can't see these having any value to anybody, because most data sheets can be found online with Google

Eh, the digital datasheets on Google didn't stop me from buying !!!90!!! kg of paper datasheets produced by Motorola between 1972 and 2001.

I love the smell of a sheet of paper, of a good tree that has become a source of knowledge and still caresses your fingers as you browse its sheets.

My Remarkable2 is a great device for carrying tons of datasheets with you; it is more practical when you travel, so it is the one I use for work. You can also write digital notes on a page, do research, and do a lot of the things you can do with a paper datasheet, but ... there is no tree smell, and no page will caress your fingers as you flip through the sheets.
The opposite of courage is not cowardice, it is conformity. Even a dead fish can go with the flow
 
The following users thanked this post: tooki

Online tggzzz

  • Super Contributor
  • ***
  • Posts: 21226
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: Anybody wants old data books (UK)?
« Reply #5 on: April 24, 2022, 10:42:18 am »
My Remarkable2 is a great device for carrying tons of datasheets with you; it is more practical when you travel, so it is the one I use for work. You can also write digital notes on a page, do research, and do a lot of the things you can do with a paper datasheet, but ... there is no tree smell, and no page will caress your fingers as you flip through the sheets.

Interesting concept, but the website says cloud-storage connectivity is limited (50 days unless you pay a subscription). If the cloud is required, then no thanks.
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 

Offline DiTBho

  • Super Contributor
  • ***
  • Posts: 4367
  • Country: gb
Re: Anybody wants old data books (UK)?
« Reply #6 on: April 24, 2022, 10:54:32 am »
Interesting concept, but the website says cloud-storage connectivity is limited (50 days unless you pay a subscription). If the cloud is required, then no thanks.

Yes, that point is really bad; plus you now have to pay for OCR and other features that were free a year ago.

Anyway, I don't care: I don't use the cloud storage, I don't transfer files wirelessly, and I don't need OCR. I use the Remarkable2 as a book. I upload files over USB as if it were a pen drive, and I use the device to read and take notes.

Everything stays on the device, just like your pencil notes stay on paper datasheets  :D
The opposite of courage is not cowardice, it is conformity. Even a dead fish can go with the flow
 

Online tggzzz

  • Super Contributor
  • ***
  • Posts: 21226
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: Anybody wants old data books (UK)?
« Reply #7 on: April 24, 2022, 12:25:27 pm »
Interesting concept, but the website says cloud-storage connectivity is limited (50 days unless you pay a subscription). If the cloud is required, then no thanks.

Yes, that point is really bad; plus you now have to pay for OCR and other features that were free a year ago.

Anyway, I don't care: I don't use the cloud storage, I don't transfer files wirelessly, and I don't need OCR. I use the Remarkable2 as a book. I upload files over USB as if it were a pen drive, and I use the device to read and take notes.

Everything stays on the device, just like your pencil notes stay on paper datasheets  :D

If you can upload/download/delete files over USB from your computer, that is sufficient to avoid the cloud. That's how I use my Kindle; it prevents Amazon from deleting a copy of 1984 purchased from Amazon. That irony was widely reported and appreciated when it occurred!

Cloud only => no, in my book (pun intended).
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 
The following users thanked this post: DiTBho

Offline Black Phoenix

  • Super Contributor
  • ***
  • Posts: 1137
  • Country: hk
Re: Anybody wants old data books (UK)?
« Reply #8 on: April 25, 2022, 09:57:02 am »
Interesting concept, but the website says cloud-storage connectivity is limited (50 days unless you pay a subscription). If the cloud is required, then no thanks.

Yes, that point is really bad; plus you now have to pay for OCR and other features that were free a year ago.

Anyway, I don't care: I don't use the cloud storage, I don't transfer files wirelessly, and I don't need OCR. I use the Remarkable2 as a book. I upload files over USB as if it were a pen drive, and I use the device to read and take notes.

Everything stays on the device, just like your pencil notes stay on paper datasheets  :D

That's what I've been looking for: something to hold every PDF I may need.

My requirements are simple:

- eInk screen; I don't care about taking notes, so no touch input needed;
- Screen size like an iPad Pro, so 10"+;
- WiFi, BT, and other connectivity not needed; just an SD slot or a big internal memory for everything;
- User-serviceable battery using a common part, so no custom designs like you see in most "made not to be opened / throwaway" devices.

This one ticks most of the boxes, except the SD slot.
If they released a dumber product with just those four things, it would be an instant buy.
 

Offline peter-h (Topic starter)

  • Super Contributor
  • ***
  • Posts: 4356
  • Country: gb
  • Doing electronics since the 1960s...
Re: Anybody wants old data books (UK)?
« Reply #9 on: April 27, 2022, 04:45:26 am »
I have taken photos of the three book cabinets and put them on dropbox, so if anyone is interested, PM me. I won't post the dropbox link here because the free account gets trashed :)
Z80 Z180 Z280 Z8 S8 8031 8051 H8/300 H8/500 80x86 90S1200 32F417
 

Offline PlainName

  • Super Contributor
  • ***
  • Posts: 7508
  • Country: va
Re: Anybody wants old data books (UK)?
« Reply #10 on: April 27, 2022, 06:07:03 pm »
For anyone thinking of acquiring a large format ereader for datasheets and/or notes, this channel is well worth spending a few hours perusing:

https://www.youtube.com/c/MyDeepGuide/videos

He puts pretty much all of them through their paces, uses them for real and gets into the stuff magazine reviews wouldn't think of (like how the nib-on-screen feels compared to actual paper).
 
The following users thanked this post: DiTBho

Offline tom66

  • Super Contributor
  • ***
  • Posts: 7334
  • Country: gb
  • Electronics Hobbyist & FPGA/Embedded Systems EE
Re: Anybody wants old data books (UK)?
« Reply #11 on: April 27, 2022, 08:20:59 pm »
On JPEG in PDF: it is possible to set the quality to near-lossless (generally agreed to be Q=96..99), where file sizes are large but image compression still occurs. Note that most JPEG codecs divide the DCT terms by quantisation values determined by the Q factor. Very high Q factors leave most of the DCT components intact (few components are rounded towards zero), so you are left only with the artefacts of the DCT itself, which can be very good with the right JPEG encoder. You can compare a JPEG image compressed with a modern libjpeg codec at Q=96 against the original and try to tell the difference: it is not possible without looking closely at the pixel level. Yet the file size will be 1/4 or less of the equivalent BMP.
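
As a quick, hedged illustration of the Q=96 claim (a sketch assuming ImageMagick is installed; page.bmp is a hypothetical test scan), you can compare the file sizes and the pixel-level error yourself:

    # Encode a scanned page at near-lossless JPEG quality and compare sizes.
    convert page.bmp -quality 96 page_q96.jpg
    ls -l page.bmp page_q96.jpg
    # Measure the pixel-level difference (PSNR; higher means closer to the original).
    compare -metric PSNR page.bmp page_q96.jpg null: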

PNG is not a good codec for compressing anything scanned, due to variation in page brightness and fixed-pattern noise from the sensor. It will not work well with such documents, and I would be surprised if it offers much saving over BMP.
 

Offline TerraHertz

  • Super Contributor
  • ***
  • Posts: 3958
  • Country: au
  • Why shouldn't we question everything?
    • It's not really a Blog
Re: Anybody wants old data books (UK)?
« Reply #12 on: April 28, 2022, 03:30:36 pm »
On JPEG in PDF: it is possible to set the quality to near-lossless (generally agreed to be Q=96..99), where file sizes are large but image compression still occurs. Note that most JPEG codecs divide the DCT terms by quantisation values determined by the Q factor. Very high Q factors leave most of the DCT components intact (few components are rounded towards zero), so you are left only with the artefacts of the DCT itself, which can be very good with the right JPEG encoder. You can compare a JPEG image compressed with a modern libjpeg codec at Q=96 against the original and try to tell the difference: it is not possible without looking closely at the pixel level. Yet the file size will be 1/4 or less of the equivalent BMP.

PNG is not a good codec for compressing anything scanned, due to variation in page brightness and fixed-pattern noise from the sensor. It will not work well with such documents, and I would be surprised if it offers much saving over BMP.

It's funny that you are so conversant with codecs, but so basic on scanning. The trick with scanning documents (text, diagrams) is to pick levels and post-processing so that 'white' areas are really all white and 'black' is really all black, leaving only the edges to be maintained with a minimal set of gray levels. PNG works brilliantly with such material: 4 bits per pixel for edge shading, and all the blank areas vanish into PNG's (DEFLATE-based) compression.
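
To make that trick concrete, a minimal sketch assuming ImageMagick and a hypothetical grayscale scan scan.tiff; the 10%/90% cutoffs are placeholder values to be tuned per document, not a recommendation:

    # Clip levels: below 10% becomes pure black, above 90% pure white,
    # then keep 16 gray levels (4 bits/pixel) for the character edges.
    convert scan.tiff -colorspace Gray -level 10%,90% -depth 4 page.png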

PNG still isn't ideal for documents, but it vastly beats other contenders. And if you're comparing something to BMP, then you should know you're wasting your time.
Collecting old scopes, logic analyzers, and unfinished projects. http://everist.org
 

Online tggzzz

  • Super Contributor
  • ***
  • Posts: 21226
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: Anybody wants old data books (UK)?
« Reply #13 on: April 28, 2022, 03:50:49 pm »
I scanned a 48-page scope service manual into a <8MB PDF file. It included photos, text, schematics, and colour PCB layouts. That's too large to be attached here, but I've attached a similar one below. In my opinion it is just as legible as the original, unlike many you find in the repositories :(

The basic workflow is dictated by what I found on a bog-standard linux box. No doubt there are better dedicated tools, but...

The steps were:
  • scan at 300dpi, save as colour JPG file
  • use a shellscript to convert each page to two small TIFF files, one more suitable for photos, one for text/diagrams
  • for photos, optionally posterise it to reduce size
  • convert each TIFF file to a single-page PDF
  • concatenate all the PDF files to produce the single final file

I previously tried simply including JPGs in a PDF file, but my tools compressed them so much they were unacceptable. Using TIFF files avoided that.
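
A minimal scripted sketch of that workflow, simplified to one conversion per page; the tool choices (ImageMagick's convert, libtiff's tiff2pdf, poppler's pdfunite) and all filenames are assumptions, not necessarily the exact tools used:

    #!/bin/sh
    # One JPG scan per page in pages/; output is manual.pdf
    mkdir -p tif pdf
    for f in pages/*.jpg; do
        b=$(basename "$f" .jpg)
        # Text/diagram pages: clip levels, 4-bit grayscale
        convert "$f" -colorspace Gray -level 10%,90% -depth 4 "tif/$b.tif"
        # (Photo pages would instead be posterised to cut size, e.g.
        #  convert "$f" -posterize 64 "tif/$b.tif")
        tiff2pdf -o "pdf/$b.pdf" "tif/$b.tif"
    done
    pdfunite pdf/*.pdf manual.pdf
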
« Last Edit: April 28, 2022, 03:54:03 pm by tggzzz »
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 

Online nctnico

  • Super Contributor
  • ***
  • Posts: 28429
  • Country: nl
    • NCT Developments
Re: Anybody wants old data books (UK)?
« Reply #14 on: April 28, 2022, 03:56:13 pm »
On JPEG in PDF: it is possible to set the quality to near-lossless (generally agreed to be Q=96..99), where file sizes are large but image compression still occurs. Note that most JPEG codecs divide the DCT terms by quantisation values determined by the Q factor. Very high Q factors leave most of the DCT components intact (few components are rounded towards zero), so you are left only with the artefacts of the DCT itself, which can be very good with the right JPEG encoder. You can compare a JPEG image compressed with a modern libjpeg codec at Q=96 against the original and try to tell the difference: it is not possible without looking closely at the pixel level. Yet the file size will be 1/4 or less of the equivalent BMP.

PNG is not a good codec for compressing anything scanned, due to variation in page brightness and fixed-pattern noise from the sensor. It will not work well with such documents, and I would be surprised if it offers much saving over BMP.

It's funny that you are so conversant with codecs, but so basic on scanning. The trick with scanning documents (text, diagrams) is to pick levels and post-processing so that 'white' areas are really all white and 'black' is really all black, leaving only the edges to be maintained with a minimal set of gray levels. PNG works brilliantly with such material: 4 bits per pixel for edge shading, and all the blank areas vanish into PNG's (DEFLATE-based) compression.

PNG still isn't ideal for documents, but it vastly beats other contenders. And if you're comparing something to BMP, then you should know you're wasting your time.
Still, I reckon that original (colour!) scans at 600dpi or better resolution would be ideal (in PNG format, for example), because you can always post-process these to improve quality once newer technology for digitising paper records comes along. Storage is super cheap nowadays anyway. Epub (which supports PNG natively) could be a good alternative to PDF as a distribution format.

BTW: choosing a single level between black/white doesn't sound like a good solution. AFAIK there are better ways available nowadays that use a dynamic threshold to determine which parts are black / white.
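
For what it's worth, ImageMagick exposes one such method as -lat (local adaptive threshold); a hedged one-liner, where the window size and offset are guesses to be tuned per document:

    # Each pixel is thresholded against the mean of its 25x25 neighbourhood,
    # offset by 10%, which copes with uneven page brightness.
    convert scan.png -colorspace Gray -lat 25x25+10% thresholded.png
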
« Last Edit: April 28, 2022, 04:03:54 pm by nctnico »
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Online tggzzz

  • Super Contributor
  • ***
  • Posts: 21226
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: Anybody wants old data books (UK)?
« Reply #15 on: April 28, 2022, 04:15:32 pm »
On JPEG in PDF: it is possible to set the quality to near-lossless (generally agreed to be Q=96..99), where file sizes are large but image compression still occurs. Note that most JPEG codecs divide the DCT terms by quantisation values determined by the Q factor. Very high Q factors leave most of the DCT components intact (few components are rounded towards zero), so you are left only with the artefacts of the DCT itself, which can be very good with the right JPEG encoder. You can compare a JPEG image compressed with a modern libjpeg codec at Q=96 against the original and try to tell the difference: it is not possible without looking closely at the pixel level. Yet the file size will be 1/4 or less of the equivalent BMP.

PNG is not a good codec for compressing anything scanned, due to variation in page brightness and fixed-pattern noise from the sensor. It will not work well with such documents, and I would be surprised if it offers much saving over BMP.

It's funny that you are so conversant with codecs, but so basic on scanning. The trick with scanning documents (text, diagrams) is to pick levels and post-processing so that 'white' areas are really all white and 'black' is really all black, leaving only the edges to be maintained with a minimal set of gray levels. PNG works brilliantly with such material: 4 bits per pixel for edge shading, and all the blank areas vanish into PNG's (DEFLATE-based) compression.

PNG still isn't ideal for documents, but it vastly beats other contenders. And if you're comparing something to BMP, then you should know you're wasting your time.
Still, I reckon that original (colour!) scans at 600dpi or better resolution would be ideal (in PNG format, for example), because you can always post-process these to improve quality once newer technology for digitising paper records comes along. Storage is super cheap nowadays anyway. Epub (which supports PNG natively) could be a good alternative to PDF as a distribution format.

BTW: choosing a single level between black/white doesn't sound like a good solution. AFAIK there are better ways available nowadays that use a dynamic threshold to determine which parts are black / white.

The problem is not storing such large files, it is transmitting them (if you pay for bytes) and viewing them.

I have some old scanned NBS Weston Cell documents that take a ridiculous time to display on a (somewhat old) desktop, and which would be completely intolerable on an e-ink class reader.

I think it is still worth doing compression in a "write-once-read-many" environment.
« Last Edit: April 28, 2022, 04:17:36 pm by tggzzz »
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 

Online nctnico

  • Super Contributor
  • ***
  • Posts: 28429
  • Country: nl
    • NCT Developments
Re: Anybody wants old data books (UK)?
« Reply #16 on: April 28, 2022, 04:48:31 pm »
That is why I mentioned the Epub format for distribution / viewing purposes. The original scans can be of much higher quality, since they are only used as the source from which the distribution output is created.
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Offline TerraHertz

  • Super Contributor
  • ***
  • Posts: 3958
  • Country: au
  • Why shouldn't we question everything?
    • It's not really a Blog
Re: Anybody wants old data books (UK)?
« Reply #17 on: April 30, 2022, 07:24:53 am »
Speaking of shelves of data books, here's most of my collection. Some are not shown, since the aisles between the bookshelves are too narrow to get a decent camera view, and there are various piles, full boxes, and other shelves with special categories, like a pile of 'vintage data books' upstairs where I was sorting some old parts recently.

Collecting old scopes, logic analyzers, and unfinished projects. http://everist.org
 

Offline TerraHertz

  • Super Contributor
  • ***
  • Posts: 3958
  • Country: au
  • Why shouldn't we question everything?
    • It's not really a Blog
Re: Anybody wants old data books (UK)?
« Reply #18 on: April 30, 2022, 07:48:22 am »
BTW: choosing a single level between black/white doesn't sound like a good solution. AFAIK there are better ways available nowadays that use a dynamic threshold to determine which parts are black / white.

That's not what I suggested. You're thinking of two-tone, i.e. fax mode, which is evil even for simple text. I meant: choose the upper and lower scan cutoff levels to give true white and black in areas that are supposed to be white and black. I say 'supposed to be' because on paper they never actually are, unless you're printing with vantablack and surface-of-the-Sun plasma. But the publisher's intent was pure white and black, so it's valid to assign ffffff and 000000 codes to those pixels.
There still need to be gray levels in between. Just how many depends on the context. For black-and-white text, where all that's needed is to preserve visually clean curves on character edges, 16 levels (4 bits/pixel) is adequate with sensible pixel sizing relative to the font. For B&W photos, at least 256 and preferably 64K levels, to avoid visible posterization effects. For full colour, 24-bit or better.
But the main point is to remove visually insignificant noise in flat color areas, so PNG's (DEFLATE-based) compression can work best.

Btw, 'dynamic threshold' can't work for multi-page documents. It will adapt differently on pages of different content, resulting in digital page representations that look different when they should be the same. You have to do trial scans of representative pages, then choose a scanning and post-processing profile that works best for all of them, then stick with that one profile through all the work. Unless there are radically different types of pages, in which case you need a profile for each type.
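
To make the 'one profile per page type' point concrete, here is a sketch with made-up profile settings (again assuming ImageMagick; the directory names are hypothetical):

    # Fixed profiles chosen from trial scans, then applied uniformly;
    # never per-page automatic levels, so identical pages stay identical.
    mkdir -p out
    text_profile()  { convert "$1" -colorspace Gray -level 10%,90% -depth 4 "$2"; }
    photo_profile() { convert "$1" -colorspace Gray -depth 8 "$2"; }
    for f in text_pages/*.tif;  do text_profile  "$f" "out/$(basename "$f" .tif).png"; done
    for f in photo_pages/*.tif; do photo_profile "$f" "out/$(basename "$f" .tif).png"; done
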
« Last Edit: April 30, 2022, 07:55:58 am by TerraHertz »
Collecting old scopes, logic analyzers, and unfinished projects. http://everist.org
 

Online tooki

  • Super Contributor
  • ***
  • Posts: 13157
  • Country: ch
Re: Anybody wants old data books (UK)?
« Reply #19 on: May 03, 2022, 10:17:34 am »
That's not what I suggested. You're thinking of two-tone, i.e. fax mode, which is evil even for simple text. I meant: choose the upper and lower scan cutoff levels to give true white and black in areas that are supposed to be white and black.
That’s called “setting the white point” (or black point, respectively). But your original description goes beyond that, strongly suggesting increasing contrast to largely eliminate grays.

The problem when scanning is that the backgrounds are rarely as uniform as we think they are.

Random pro tip when dealing with thin paper where the reverse side bleeds through when scanning: rather than putting a white backing sheet behind it, use a black one! This is far more effective at eliminating bleed through, and the overall darker (but now more uniform) background can easily be adjusted back to white.
 
The following users thanked this post: MK14

Online Zero999

  • Super Contributor
  • ***
  • Posts: 20360
  • Country: gb
  • 0999
Re: Anybody wants old data books (UK)?
« Reply #20 on: May 03, 2022, 11:24:44 am »
BTW: choosing a single level between black/white doesn't sound like a good solution. AFAIK there are better ways available nowadays that use a dynamic threshold to determine which parts are black / white.

That's not what I suggested. You're thinking of two-tone, i.e. fax mode, which is evil even for simple text. I meant: choose the upper and lower scan cutoff levels to give true white and black in areas that are supposed to be white and black. I say 'supposed to be' because on paper they never actually are, unless you're printing with vantablack and surface-of-the-Sun plasma. But the publisher's intent was pure white and black, so it's valid to assign ffffff and 000000 codes to those pixels.
There still need to be gray levels in between. Just how many depends on the context. For black-and-white text, where all that's needed is to preserve visually clean curves on character edges, 16 levels (4 bits/pixel) is adequate with sensible pixel sizing relative to the font. For B&W photos, at least 256 and preferably 64K levels, to avoid visible posterization effects. For full colour, 24-bit or better.
But the main point is to remove visually insignificant noise in flat color areas, so PNG's (DEFLATE-based) compression can work best.

Btw, 'dynamic threshold' can't work for multi-page documents. It will adapt differently on pages of different content, resulting in digital page representations that look different when they should be the same. You have to do trial scans of representative pages, then choose a scanning and post-processing profile that works best for all of them, then stick with that one profile through all the work. Unless there are radically different types of pages, in which case you need a profile for each type.
I agree wholeheartedly about PNG, rather than JPEG.

For colour diagrams, it's often better to use a lower colour depth. 8-bit is often more than adequate.
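
As a one-line sketch of that (assuming ImageMagick; the filename is hypothetical, and 256 colours is the 8-bit palette mentioned above):

    # Reduce a colour diagram to an 8-bit (256-colour) palette PNG.
    convert diagram.png -colors 256 PNG8:diagram_8bit.png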

Regarding the original post: does anyone still use hardback data books? I find them inconvenient. The only advantage I can think of is that a lot of books are still only available in traditional paper format.
« Last Edit: August 06, 2022, 04:00:17 pm by Zero999 »
 

Online nctnico

  • Super Contributor
  • ***
  • Posts: 28429
  • Country: nl
    • NCT Developments
Re: Anybody wants old data books (UK)?
« Reply #21 on: May 03, 2022, 11:48:18 am »
BTW: choosing a single level between black/white doesn't sound like a good solution. AFAIK there are better ways available nowadays that use a dynamic threshold to determine which parts are black / white.

That's not what I suggested. You're thinking of two-tone, i.e. fax mode, which is evil even for simple text. I meant: choose the upper and lower scan cutoff levels to give true white and black in areas that are supposed to be white and black. I say 'supposed to be' because on paper they never actually are, unless you're printing with vantablack and surface-of-the-Sun plasma. But the publisher's intent was pure white and black, so it's valid to assign ffffff and 000000 codes to those pixels.
There still need to be gray levels in between. Just how many depends on the context. For black-and-white text, where all that's needed is to preserve visually clean curves on character edges, 16 levels (4 bits/pixel) is adequate with sensible pixel sizing relative to the font.
Yes and no. IMHO you only need gray levels to make clean curves if your resolution is too low (and to me such text looks like blurred crap anyway). If your resolution is high enough (at least 300 real dpi) then you shouldn't need grayscale to show nice text. But again, this is only for the output format. Original 'master' scans are better made in colour (using at least 300 real dpi) if you care about preservation. This way you can post-process the 'master scans' in whatever way you like, using the latest technology, in order to produce a better result in an output format.

From what I have seen so far, the scanning software that comes with a scanner usually isn't very good at dealing with text / black & white anyway; dedicated, more sophisticated software should be able to give much better results.
« Last Edit: May 03, 2022, 11:52:08 am by nctnico »
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Offline Stray Electron

  • Super Contributor
  • ***
  • Posts: 2253
Re: Anybody wants old data books (UK)?
« Reply #22 on: May 03, 2022, 01:18:53 pm »
I scanned a 48-page scope service manual into a <8MB PDF file. It included photos, text, schematics, and colour PCB layouts. That's too large to be attached here, but I've attached a similar one below. In my opinion it is just as legible as the original, unlike many you find in the repositories :(

The basic workflow is dictated by what I found on a bog-standard linux box. No doubt there are better dedicated tools, but...

The steps were:
  • scan at 300dpi, save as colour JPG file
  • use a shellscript to convert each page to two small TIFF files, one more suitable for photos, one for text/diagrams
  • for photos, optionally posterise it to reduce size
  • convert each TIFF file to a single-page PDF
  • concatenate all the PDF files to produce the single final file

I previously tried simply including JPGs in a PDF file, but my tools compressed them so much they were unacceptable. Using TIFF files avoided that.

I work with a couple of professional archivists and they save everything as TIFF.

FWIW, I've been looking at a new software package for saving image files of old documents, called Vivid-Pix, and it looks VERY good. It's not a full-blown editor like Photoshop or GIMP and it doesn't clean up scratches and other damage, but it does a VERY good job of automatically setting the white balance and contrast, and it's fast and easy to use. The price is only $49 and you OWN it: it installs on your computer, not in the cloud. You can also download a limited-use trial version for free. www.vivid-pix.com
 

Online tggzzz

  • Super Contributor
  • ***
  • Posts: 21226
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: Anybody wants old data books (UK)?
« Reply #23 on: May 03, 2022, 01:28:07 pm »
I scanned a 48-page scope service manual into a <8MB PDF file. It included photos, text, schematics, and colour PCB layouts. That's too large to be attached here, but I've attached a similar one below. In my opinion it is just as legible as the original, unlike many you find in the repositories :(

The basic workflow is dictated by what I found on a bog-standard linux box. No doubt there are better dedicated tools, but...

The steps were:
  • scan at 300dpi, save as colour JPG file
  • use a shellscript to convert each page to two small TIFF files, one more suitable for photos, one for text/diagrams
  • for photos, optionally posterise it to reduce size
  • convert each TIFF file to a single-page PDF
  • concatenate all the PDF files to produce the single final file

I previously tried simply including JPGs in a PDF file, but my tools compressed them so much they were unacceptable. Using TIFF files avoided that.

I work with a couple of professional archivists and they save everything as TIFF.

I've seen that with professional image libraries, since TIFF is essentially uncompressed.
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 

Online tooki

  • Super Contributor
  • ***
  • Posts: 13157
  • Country: ch
Re: Anybody wants old data books (UK)?
« Reply #24 on: August 06, 2022, 11:13:32 am »
I've seen that with professional image libraries, since TIFF is essentially uncompressed.
That’s not correct. TIFF is a container format that can store one (or more!) images in numerous formats. It’s common for TIFF files to contain uncompressed bitmaps or to use lossless compression (CCITT, LZW, ZIP, PackBits, and others), and less common — but possible — for them to use lossy compression like JPEG, JBIG, and others.

From my years of supporting clients in desktop publishing, it’s certainly fair to say that TIFF was almost exclusively used for uncompressed or lossless images, since there wasn’t any real advantage to using, for example, a JPEG-compressed TIFF over a native JPEG file.
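
For example (using ImageMagick's spelling of the compression names; the filenames are hypothetical), the same TIFF container can carry different codecs:

    # Same image, three TIFF flavours: uncompressed, lossless LZW, lossy JPEG.
    convert scan.png -compress None scan_raw.tif
    convert scan.png -compress LZW  scan_lzw.tif
    convert scan.png -compress JPEG scan_jpeg.tif
    # 'identify -verbose scan_lzw.tif' reports the compression actually used.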
 

