It is possible to craft a TIFF document where the IFD list is circular,
leading to an infinite loop while traversing the chain. The libtiff
directory reader has a failsafe that will break out of this loop after
reading 65535 directory entries, but it will continue processing,
consuming time and resources to process what is essentially a bogus TIFF
document.
This change fixes the above behavior: processing now stops as soon as a
TIFF document reaches 65535 directories, and terminates with an error.
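For illustration, here is a minimal sketch, using only libtiff's public API,
of an equivalent application-side guard. The actual fix lives inside the
library's directory reader, so the helper below is purely hypothetical:

```c
#include <stdio.h>
#include <tiffio.h>

/* Count directories, bailing out at the same 65535 cap described above.
 * Returns the directory count, or -1 for unreadable/malformed files. */
int count_directories_capped(const char *path)
{
    TIFF *tif = TIFFOpen(path, "r");
    if (!tif)
        return -1;
    int n = 1;                       /* TIFFOpen has already read the first IFD */
    while (TIFFReadDirectory(tif))   /* returns 0 at end of chain or on error */
    {
        if (++n >= 65535)
        {
            fprintf(stderr, "%s: 65535 or more directories, "
                            "likely a bogus TIFF\n", path);
            TIFFClose(tif);
            return -1;
        }
    }
    TIFFClose(tif);
    return n;
}
```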
When libtiff is included in a super-project via a simple
`add_subdirectory(libtiff)`, the `tiff` library target then carries all
the information necessary to build against it.
Note: the BUILD_INTERFACE generator expression requires at least
CMake v2.8.11.
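As a minimal sketch of the consumer side this enables (the super-project and
executable names below are placeholders):

```cmake
# BUILD_INTERFACE generator expressions need CMake >= 2.8.11.
cmake_minimum_required(VERSION 2.8.11)
project(superproject C)

# Pull libtiff in as a subdirectory; its `tiff` target now exports the
# include directories needed to compile against it.
add_subdirectory(libtiff)

add_executable(myapp main.c)
target_link_libraries(myapp tiff)
```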
From https://github.com/facebook/zstd
"Zstandard, or zstd as short version, is a fast lossless compression
algorithm, targeting real-time compression scenarios at zlib-level
and better compression ratios. It's backed by a very fast entropy stage,
provided by Huff0 and FSE library."
We require libzstd >= 1.0.0 so as to be able to use its streaming compression
and decompression methods.
The default compression level we have selected is 9 (the range goes from 1 to 22),
which experimentally offers a compression ratio equivalent to or better than
the default deflate/ZIP level of 6, with much faster compression.
For example, on a 6600x4400 16-bit image, tiffcp -c zip runs in 10.7 seconds
while tiffcp -c zstd runs in 5.3 seconds. Decompression takes 840 ms for zip
and 650 ms for zstd. The file size is 42735936 bytes for zip and
42586822 bytes for zstd. Similar findings on other images.
On a 25894x16701 16-bit image:

                  Compression time   Decompression time   File size
    ZSTD          35 s               3.2 s                399 700 498
    ZIP/Deflate   1 m 20 s           4.9 s                419 622 336
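As a usage sketch (assuming a libtiff built with zstd support; COMPRESSION_ZSTD
and the TIFFTAG_ZSTD_LEVEL pseudo-tag are the identifiers I believe this change
adds), writing a single-strip grayscale image with zstd at the default level 9
could look like this:

```c
#include <tiffio.h>

/* Write `buf` (width*height 8-bit grayscale samples) as one zstd strip. */
int write_zstd_tiff(const char *path, const unsigned char *buf,
                    uint32 width, uint32 height)
{
    TIFF *tif = TIFFOpen(path, "w");
    if (!tif)
        return -1;
    TIFFSetField(tif, TIFFTAG_IMAGEWIDTH, width);
    TIFFSetField(tif, TIFFTAG_IMAGELENGTH, height);
    TIFFSetField(tif, TIFFTAG_BITSPERSAMPLE, 8);
    TIFFSetField(tif, TIFFTAG_SAMPLESPERPIXEL, 1);
    TIFFSetField(tif, TIFFTAG_PHOTOMETRIC, PHOTOMETRIC_MINISBLACK);
    TIFFSetField(tif, TIFFTAG_ROWSPERSTRIP, height);
    TIFFSetField(tif, TIFFTAG_COMPRESSION, COMPRESSION_ZSTD);
    TIFFSetField(tif, TIFFTAG_ZSTD_LEVEL, 9);    /* valid range is 1..22 */
    int ok = TIFFWriteEncodedStrip(tif, 0, (void *)buf,
                                   (tmsize_t)width * height) != (tmsize_t)-1;
    TIFFClose(tif);
    return ok ? 0 : -1;
}
```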
Fix for http://bugzilla.maptools.org/show_bug.cgi?id=2704
This vulnerability - at least for the supplied test case - arises because we
assume that a TIFF has only one transfer function, identical for all
pages. The TIFF standard does not require this.
We then read the transfer function for every page. Depending on the
transfer function, we allocate either 2 or 4 bytes to the XREF buffer.
We allocate this memory after we read in the transfer function for the
page.
For the first exploit (POC1), the file has 3 pages. For the first page
we allocate 2 extra XREF entries, then 2 more entries for the next page.
For the last page the transfer function changes and we allocate 4 more
entries.
When we read the file into memory, we assume we have 4 extra bytes for
each and every page (as per the last transfer function we read). That
is not correct: we only have 2 extra bytes for the first 2 pages. As a
result, we end up writing past the end of the buffer.
This also fixes some related issues. For example, TIFFGetField can
return uninitialized pointer values, and the logic to detect an N=3 vs.
N=1 transfer function seemed rather strange.
It is also odd that we declare the transfer functions to be of type
float when the standard says they are unsigned 16-bit values; that is
fixed in another patch.
This patch checks that the N value of the transfer function is the same
for every page; if it changes, we abort with an error. In theory we
should perhaps also check that the transfer function itself is identical
on every page, but we do not, given the confusion over the type of the
data in the transfer function.
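As a rough sketch of the kind of per-page consistency check described above
(a hypothetical standalone helper, not the actual tiff2pdf code):

```c
#include <stdio.h>
#include <tiffio.h>

/* Return 0 if no transfer function is present on the current page,
 * 1 for a single-channel one, 3 for a three-channel one. */
static int transfer_function_count(TIFF *tif)
{
    uint16 *tf[3] = { NULL, NULL, NULL };
    uint16 samples = 1;
    if (!TIFFGetField(tif, TIFFTAG_TRANSFERFUNCTION, &tf[0], &tf[1], &tf[2]))
        return 0;
    /* With one sample per pixel only tf[0] is filled in; tf[1] and tf[2]
     * must not be used, as they may be left uninitialized. */
    TIFFGetFieldDefaulted(tif, TIFFTAG_SAMPLESPERPIXEL, &samples);
    return (samples == 1) ? 1 : 3;
}

/* Walk all pages and abort if the transfer function count ever changes. */
int check_transfer_functions(TIFF *tif)
{
    int expected = -1;
    do {
        int n = transfer_function_count(tif);
        if (expected == -1)
            expected = n;
        else if (n != expected) {
            fprintf(stderr, "transfer function differs between pages\n");
            return -1;       /* the patch aborts with an error here */
        }
    } while (TIFFReadDirectory(tif));
    return 0;
}
```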
As configure never uses AC_CHECK_SIZEOF(void*), the corresponding symbol
is never defined, so it makes no sense to test it in the code; doing so
just results in -Wundef warnings when they are enabled.
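To illustrate the warning (a hypothetical fragment; SIZEOF_VOID_P is merely
the macro AC_CHECK_SIZEOF(void*) would define, and the exact symbol tested in
the code is not named here):

```c
/* SIZEOF_VOID_P is never defined by configure, so the preprocessor silently
 * treats it as 0 and, with -Wundef enabled, the compiler warns about the
 * use of an undefined macro in an #if expression. */
#if SIZEOF_VOID_P == 8
typedef unsigned long long ptr_sized_uint;   /* 64-bit pointers */
#else
typedef unsigned int ptr_sized_uint;         /* assumed 32-bit pointers */
#endif
```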