The uint8 value is promoted to int in (value << 24), so -fsanitize reports a runtime error:
tiff2ps.c:2969:33: runtime error: left shift of 246 by 24 places cannot be represented in type 'int'
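A minimal sketch of the kind of fix, using an illustrative helper rather than the actual tiff2ps.c code: widening to an unsigned type before shifting keeps the operation defined for bytes >= 0x80.

    #include <stdint.h>

    /* The uint8_t operand would be promoted to (signed) int, and shifting a
     * value >= 0x80 left by 24 overflows that int; casting to uint32_t first
     * makes the shift well defined for every byte value. */
    static uint32_t pack_high_byte(uint8_t value)
    {
        return (uint32_t)value << 24;
    }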
Too many bytes were processed, causing a heap buffer overrun.
http://bugzilla.maptools.org/show_bug.cgi?id=2831
The loop counter must be:
for (col = 0; col < width; col += 8 / bps)
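A hedged sketch of the corrected traversal, with made-up names; the point is only that one input byte carries 8 / bps samples, so advancing col by 1 reads far more bytes than the scanline contains.

    #include <stdint.h>

    /* bps is 1, 2, 4 or 8, so each byte packs 8 / bps samples and the column
     * counter must advance by that amount per byte consumed. */
    static void walk_scanline(const uint8_t *buf, uint32_t width, int bps)
    {
        const uint8_t *p = buf;
        uint32_t col;

        for (col = 0; col < width; col += 8 / bps) {
            uint8_t packed = *p++;   /* samples for columns col .. col + 8/bps - 1 */
            (void)packed;            /* emit / convert the packed samples here */
        }
    }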
Also, the values were not properly calculated. They should be
255-x, 15-x and 3-x for bps 8, 4 and 2 respectively.
In any case, it is easier to invert all bits, since 255-x = ~x, etc.
(subtracting from a binary number composed of all 1s is the same as
inverting the bits).
http://bugzilla.maptools.org/show_bug.cgi?id=2834
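A small illustration of that equivalence, assuming per-byte processing of packed samples: each sample is subtracted from an all-ones mask of its own width, there are no borrows, so complementing the whole byte inverts every sample at once.

    #include <stdint.h>

    /* 255 - x == ~x for a full byte; for bps 4 or 2 the packed samples are
     * each subtracted from 15 or 3, and since every bit is subtracted from 1,
     * ~byte produces the same result for all samples in one operation. */
    static uint8_t invert_packed(uint8_t byte)
    {
        return (uint8_t)~byte;
    }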
Usually the test (i < byte_count) is OK because byte_count is divisible by samplesperpixel.
But when that is not the case, (i + ncomps) < byte_count should be used, or
perhaps (i + samplesperpixel) <= byte_count.
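A hedged sketch of the tighter bound, using i, ncomps and byte_count as placeholder names from the entry; requiring the whole pixel to fit is equivalent to the (i + samplesperpixel) <= byte_count form.

    #include <stddef.h>
    #include <stdint.h>

    /* When byte_count is not a multiple of the number of components, testing
     * only (i < byte_count) lets the per-component reads run up to
     * ncomps - 1 bytes past the end of the buffer. */
    static void walk_pixels(const uint8_t *buf, size_t byte_count, size_t ncomps)
    {
        size_t i;

        for (i = 0; i + ncomps <= byte_count; i += ncomps) {
            const uint8_t *pixel = buf + i;   /* all ncomps bytes are in bounds */
            (void)pixel;
        }
    }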
It is possible to craft a TIFF document where the IFD list is circular,
leading to an infinite loop while traversing the chain. The libtiff
directory reader has a failsafe that will break out of this loop after
reading 65535 directory entries, but it will continue processing,
consuming time and resources to process what is essentially a bogus TIFF
document.
This change fixes the above behavior by breaking out of processing when
a TIFF document has >= 65535 directories and terminating with an error.
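A minimal sketch of the guard, with a hypothetical next-offset callback; libtiff's real directory reader is structured differently, but the existing failsafe turns into a hard error in the same way.

    #include <stdint.h>
    #include <stdio.h>

    #define MAX_DIRECTORIES 65535   /* same cap as the existing failsafe */

    /* Counts IFDs by following the offset chain; a crafted circular chain now
     * terminates with an error instead of being processed as a huge file. */
    static int count_directories(uint64_t first_offset,
                                 uint64_t (*next_offset)(uint64_t))
    {
        uint64_t off = first_offset;
        int n = 0;

        while (off != 0) {
            if (++n >= MAX_DIRECTORIES) {
                fprintf(stderr,
                        "Cannot handle more than %d TIFF directories\n",
                        MAX_DIRECTORIES);
                return -1;            /* stop processing, report an error */
            }
            off = next_offset(off);
        }
        return n;
    }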
From https://github.com/facebook/zstd
"Zstandard, or zstd as short version, is a fast lossless compression
algorithm, targeting real-time compression scenarios at zlib-level
and better compression ratios. It's backed by a very fast entropy stage,
provided by Huff0 and FSE library."
We require libzstd >= 1.0.0 so as to be able to use streaming compression
and decompression methods.
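For reference, a condensed sketch of the libzstd streaming calls the codec depends on; ZSTD_createCStream and friends are part of the stable API since 1.0.0. The buffer-sizing assumption is noted in the comment and error handling is intentionally terse.

    #include <zstd.h>

    /* Compress src into dst with the streaming API.  dst_cap is assumed to be
     * at least ZSTD_compressBound(src_len) so each call makes progress.
     * Returns the compressed size, or 0 on error. */
    static size_t stream_compress(void *dst, size_t dst_cap,
                                  const void *src, size_t src_len, int level)
    {
        ZSTD_CStream *zcs = ZSTD_createCStream();
        ZSTD_inBuffer  in  = { src, src_len, 0 };
        ZSTD_outBuffer out = { dst, dst_cap, 0 };
        size_t err, remaining;

        if (zcs == NULL)
            return 0;

        err = ZSTD_initCStream(zcs, level);
        while (!ZSTD_isError(err) && in.pos < in.size)
            err = ZSTD_compressStream(zcs, &out, &in);

        /* remaining == 0 means the epilogue was fully flushed into dst */
        remaining = ZSTD_isError(err) ? 1 : ZSTD_endStream(zcs, &out);
        ZSTD_freeCStream(zcs);

        return (!ZSTD_isError(remaining) && remaining == 0) ? out.pos : 0;
    }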
The default compression level we have selected is 9 (the range goes from 1 to 22),
which experimentally offers a compression ratio equivalent to or better than
the default deflate/ZIP level of 6, with much faster compression.
For example, on a 6600x4400 16-bit image, tiffcp -c zip runs in 10.7 seconds,
while tiffcp -c zstd runs in 5.3 seconds. Decompression time is 840 ms for
zip and 650 ms for zstd. File size is 42735936 bytes for zip and
42586822 bytes for zstd. Similar findings on other images.
On a 25894x16701 16-bit image:

                  Compression time   Decompression time   File size
    ZSTD          35 s               3.2 s                399 700 498
    ZIP/Deflate   1 min 20 s         4.9 s                419 622 336
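A hedged usage sketch from the application side, assuming the codec identifiers added by this change (COMPRESSION_ZSTD and the TIFFTAG_ZSTD_LEVEL pseudo-tag) and a libtiff built with zstd support; the image parameters are made up for illustration.

    #include <stddef.h>
    #include <tiffio.h>

    /* Write an 8-bit grayscale image as a zstd-compressed TIFF. */
    static int write_zstd_tiff(const char *path, const unsigned char *raster,
                               uint32 width, uint32 height)
    {
        TIFF *tif = TIFFOpen(path, "w");
        if (!tif)
            return -1;

        TIFFSetField(tif, TIFFTAG_IMAGEWIDTH, width);
        TIFFSetField(tif, TIFFTAG_IMAGELENGTH, height);
        TIFFSetField(tif, TIFFTAG_SAMPLESPERPIXEL, 1);
        TIFFSetField(tif, TIFFTAG_BITSPERSAMPLE, 8);
        TIFFSetField(tif, TIFFTAG_PHOTOMETRIC, PHOTOMETRIC_MINISBLACK);
        TIFFSetField(tif, TIFFTAG_ROWSPERSTRIP, TIFFDefaultStripSize(tif, 0));
        TIFFSetField(tif, TIFFTAG_COMPRESSION, COMPRESSION_ZSTD);
        TIFFSetField(tif, TIFFTAG_ZSTD_LEVEL, 9);   /* the default level */

        for (uint32 row = 0; row < height; row++) {
            if (TIFFWriteScanline(tif, (void *)(raster + (size_t)row * width),
                                  row, 0) < 0) {
                TIFFClose(tif);
                return -1;
            }
        }
        TIFFClose(tif);
        return 0;
    }

From the command line the same codec is selected with tiffcp -c zstd, as in the timings above.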
Fix for http://bugzilla.maptools.org/show_bug.cgi?id=2704
This vulnerability, at least for the supplied test case, arises because we
assume that a TIFF will only have one transfer function, the same for all
pages. This is not required by the TIFF standard.
We then read the transfer function for every page. Depending on the
transfer function, we allocate either 2 or 4 bytes to the XREF buffer.
We allocate this memory after we read in the transfer function for the
page.
For the first exploit, POC1, the file has 3 pages. For the first page we
allocate 2 extra XREF entries, then for the next page 2 more entries.
Then for the last page the transfer function changes and we allocate 4
more entries.
When we read the file into memory, we assume we have 4 extra bytes for
each and every page (as per the last transfer function we read). This is
not correct: we only have 2 extra bytes for the first 2 pages. As a
result, we end up writing past the end of the buffer.
There are also some related issues that this patch fixes. For example,
TIFFGetField can return uninitialized pointer values, and the logic to
detect an N=3 vs N=1 transfer function seemed rather strange.
It is also strange that we declare the transfer functions to be of type
float, when the standard says they are unsigned 16-bit values. This is
fixed in another patch.
This patch checks that the N value of the transfer function is the same
for every page. If it changes, we abort with an error. In theory, we
should perhaps check that the transfer function itself is identical for
every page; however, we do not do that, because of the confusion about
the type of the data in the transfer function.
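A hypothetical illustration of the per-page check, not the actual tiff2pdf.c code: the pointers are initialised so TIFFGetField cannot hand back undefined values, the channel count (N=1 vs N=3) is derived explicitly, and a change between pages is treated as a fatal error. The uint16 element type follows the TIFF specification rather than the float declaration mentioned above.

    #include <tiffio.h>

    /* Returns 1 if this page's transfer function is consistent with what was
     * seen before, 0 if the caller should abort.  *ncomps_seen starts at -1. */
    static int check_transfer_function(TIFF *tif, uint16 spp, int *ncomps_seen)
    {
        uint16 *tf[3] = { NULL, NULL, NULL };   /* never left uninitialised */
        int ncomps = 0;

        if (TIFFGetField(tif, TIFFTAG_TRANSFERFUNCTION,
                         &tf[0], &tf[1], &tf[2])) {
            /* with a single sample per pixel only the first pointer is set */
            ncomps = (spp == 1 || tf[1] == NULL || tf[2] == NULL) ? 1 : 3;
        }

        if (*ncomps_seen < 0) {
            *ncomps_seen = ncomps;               /* first page establishes N */
            return 1;
        }
        if (*ncomps_seen != ncomps) {
            TIFFError("check_transfer_function",
                      "Transfer function differs between pages");
            return 0;
        }
        return 1;
    }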
program. This is in response to the report associated with
CVE-2017-16232 but does not solve the extremely high memory usage
with the associated POC file.