Revision 4f22b1015d4203ccdf2b66f27ee5946504342ace authored by Jeff King on 24 February 2012, 22:10:17 UTC, committed by Junio C Hamano on 24 February 2012, 22:18:20 UTC
Because git's object format requires us to specify the
number of bytes in the object in its header, we must know
the size before streaming a blob into the object database.
This is not a problem when adding a regular file, as we can
get the size from stat(). However, when filters are in use
(such as autocrlf, or the "ident", "filter", or "eol"
gitattributes), we have no idea what the ultimate size will
be.
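
For reference, a loose object's contents are prefixed with a
header of the form "<type> <size>" plus a NUL byte before being
hashed and deflated, which is why the byte count must be known
before any data is written. A minimal sketch of building that
header, using a hypothetical helper rather than git's actual
code:

    #include <stdio.h>

    /*
     * Hypothetical illustration: git prepends "<type> <size>" and a
     * NUL byte to an object's contents before hashing and deflating
     * it, so the total size must be known up front.
     */
    static int format_blob_header(char *buf, size_t len, unsigned long size)
    {
            /* e.g. "blob 12" plus NUL for a 12-byte blob */
            return snprintf(buf, len, "blob %lu", size) + 1;
    }

    int main(void)
    {
            char hdr[32];
            int hdrlen = format_blob_header(hdr, sizeof(hdr), 12UL);

            printf("header is \"%s\" plus NUL, %d bytes\n", hdr, hdrlen);
            return 0;
    }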

The current code just punts on the whole issue and ignores
filter configuration entirely for files larger than
core.bigfilethreshold. This can generate confusing results
if you use filters for large binary files, as the filter
will suddenly stop working once the file grows past a certain
size.  Rather than try to handle unknown input sizes with
streaming, this patch just turns off the streaming
optimization when filters are in use.

This introduces a slight performance regression in a very specific
case: if you have autocrlf on, but no gitattributes, a large
binary file will avoid the streaming code path because we
don't know beforehand whether it will need conversion or
not. But if you are handling large binary files, you should
be marking them as such via attributes (or at least not
using autocrlf, and instead marking your text files as
such). And the flip side is that if you have a large
_non_-binary file, there is a correctness improvement;
previously we did not apply the conversion at all.
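
For instance, marking the relevant paths explicitly in
.gitattributes (illustrative patterns, not part of this patch)
keeps large binaries on the streaming path and text files on the
conversion path:

    *.bin binary
    *.txt text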

The first half of the new t1051 script covers these failures
on input. The second half tests the matching output code
paths. These already work correctly, and do not need any
adjustment.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
1 parent 4c3b57b
hash.h
#ifndef HASH_H
#define HASH_H

/*
 * These are some simple generic hash table helper functions.
 * Not necessarily suitable for all users, but good for things
 * where you want to just keep track of a list of things, and
 * have a good hash to use on them.
 *
 * It keeps the hash table at roughly 50-75% free, so the memory
 * cost of the hash table itself is roughly
 *
 *	3 * 2*sizeof(void *) * nr_of_objects
 *
 * bytes.
 *
 * FIXME: on 64-bit architectures, we waste memory. It would be
 * good to have just 32-bit pointers, requiring a special allocator
 * for hashed entries or something.
 */
struct hash_table_entry {
	unsigned int hash;
	void *ptr;
};

struct hash_table {
	unsigned int size, nr;
	struct hash_table_entry *array;
};

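/*
 * lookup_hash() returns the pointer stored under "hash", or NULL if
 * there is none.  insert_hash() stores "ptr" under "hash" and returns
 * NULL; if an entry with the same hash already exists it is left in
 * place, and the address of its payload slot is returned instead so
 * the caller can resolve the collision (e.g. by chaining entries).
 * for_each_hash() calls fn(ptr, data) on every stored pointer, and
 * free_hash() releases the backing array.
 */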
extern void *lookup_hash(unsigned int hash, const struct hash_table *table);
extern void **insert_hash(unsigned int hash, void *ptr, struct hash_table *table);
extern int for_each_hash(const struct hash_table *table, int (*fn)(void *, void *), void *data);
extern void free_hash(struct hash_table *table);

static inline void init_hash(struct hash_table *table)
{
	table->size = 0;
	table->nr = 0;
	table->array = NULL;
}

#endif
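
As a usage sketch (a hypothetical caller, assuming the matching
hash.c is linked in; the string hash is a toy), the collision slot
returned by insert_hash() is how callers chain entries that share
a hash:

    #include <stdio.h>
    #include "hash.h"

    /* Toy string hash for the example; any well-mixed function works. */
    static unsigned int strhash(const char *s)
    {
            unsigned int h = 5381;
            while (*s)
                    h = h * 33 + (unsigned char)*s++;
            return h;
    }

    int main(void)
    {
            struct hash_table table;
            static char payload[] = "README";
            void **pos;

            init_hash(&table);

            /* NULL means a fresh insert ... */
            pos = insert_hash(strhash(payload), payload, &table);
            if (pos) {
                    /*
                     * ... non-NULL means an entry with this hash already
                     * exists; *pos is its payload, and the caller must
                     * resolve the collision, e.g. by chaining through it.
                     */
            }

            printf("%s\n", (char *)lookup_hash(strhash(payload), &table));
            free_hash(&table);
            return 0;
    }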