Revision 474095e46cd14421821da3201a9fd6a4c070996b authored by Linus Torvalds on 24 April 2015, 16:28:01 UTC, committed by Linus Torvalds on 24 April 2015, 16:28:01 UTC
Pull md updates from Neil Brown:
 "More updates that usual this time.  A few have performance impacts
  which hould mostly be positive, but RAID5 (in particular) can be very
  work-load ensitive...  We'll have to wait and see.

  Highlights:

   - "experimental" code for managing md/raid1 across a cluster using
     DLM.  Code is not ready for general use and triggers a WARNING if
      used.  However, it is looking good and mostly done, and having it
      in mainline will help co-ordinate development.

   - RAID5/6 can now batch multiple (4K wide) stripe_heads so as to
     handle a full (chunk wide) stripe as a single unit.

   - RAID6 can now perform read-modify-write cycles which should help
     performance on larger arrays: 6 or more devices.

   - RAID5/6 stripe cache now grows and shrinks dynamically.  The value
     set is used as a minimum.

   - Resync is now allowed to go a little faster than the 'minimum' when
     there is competing IO.  How much faster depends on the speed of the
     devices, so the effective minimum should scale with device speed to
     some extent"

* tag 'md/4.1' of git://neil.brown.name/md: (58 commits)
  md/raid5: don't do chunk aligned read on degraded array.
  md/raid5: allow the stripe_cache to grow and shrink.
  md/raid5: change ->inactive_blocked to a bit-flag.
  md/raid5: move max_nr_stripes management into grow_one_stripe and drop_one_stripe
  md/raid5: pass gfp_t arg to grow_one_stripe()
  md/raid5: introduce configuration option rmw_level
  md/raid5: activate raid6 rmw feature
  md/raid6 algorithms: xor_syndrome() for SSE2
  md/raid6 algorithms: xor_syndrome() for generic int
  md/raid6 algorithms: improve test program
  md/raid6 algorithms: delta syndrome functions
  raid5: handle expansion/resync case with stripe batching
  raid5: handle io error of batch list
  RAID5: batch adjacent full stripe write
  raid5: track overwrite disk count
  raid5: add a new flag to track if a stripe can be batched
  raid5: use flex_array for scribble data
  md raid0: access mddev->queue (request queue member) conditionally because it is not set when accessed from dm-raid
  md: allow resync to go faster when there is competing IO.
  md: remove 'go_faster' option from ->sync_request()
  ...
2 parents d56a669 + 9ffc8f7
io-mapping.txt
The io_mapping functions in linux/io-mapping.h provide an abstraction for
efficiently mapping small regions of an I/O device to the CPU. The initial
usage is to support the large graphics aperture on 32-bit processors where
ioremap_wc cannot be used to statically map the entire aperture to the CPU
as it would consume too much of the kernel address space.

A mapping object is created during driver initialization using

	struct io_mapping *io_mapping_create_wc(unsigned long base,
						unsigned long size)

		'base' is the bus address of the region to be made
		mappable, while 'size' indicates how large a mapping region to
		enable. Both are in bytes.

		This _wc variant provides a mapping which may only be used
		with io_mapping_map_atomic_wc or io_mapping_map_wc.
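
For illustration, here is a minimal sketch of creating the mapping at
driver initialization time; the device structure, the choice of PCI
BAR 2, and the helper name are assumptions for this example, not part
of the io_mapping API:

	#include <linux/io-mapping.h>
	#include <linux/pci.h>

	struct my_dev {				/* hypothetical driver state */
		struct io_mapping *aperture;
	};

	static int my_dev_map_aperture(struct my_dev *dev, struct pci_dev *pdev)
	{
		/* BAR 2 as the aperture is assumed; both values are in bytes */
		unsigned long base = pci_resource_start(pdev, 2);
		unsigned long size = pci_resource_len(pdev, 2);

		dev->aperture = io_mapping_create_wc(base, size);
		if (!dev->aperture)
			return -ENOMEM;
		return 0;
	}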

With this mapping object, individual pages can be mapped either atomically
or not, depending on the necessary scheduling environment. Of course, atomic
maps are more efficient:

	void *io_mapping_map_atomic_wc(struct io_mapping *mapping,
				       unsigned long offset)

		'offset' is the offset within the defined mapping region.
		Accessing addresses beyond the region specified in the
		creation function yields undefined results. Using an offset
		which is not page aligned yields an undefined result. The
		return value points to a single page in CPU address space.

		This _wc variant returns a write-combining map to the
		page and may only be used with mappings created by
		io_mapping_create_wc.

		Note that the task may not sleep while holding this page
		mapped.

	void io_mapping_unmap_atomic(void *vaddr)

		'vaddr' must be the value returned by the last
		io_mapping_map_atomic_wc call. This unmaps the specified
		page and allows the task to sleep once again.
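
Putting the two atomic calls together, a typical access looks like
this minimal sketch (the helper name and the u32 store are
illustrative):

	static void poke_word(struct io_mapping *mapping, unsigned long offset,
			      u32 val)
	{
		/* offset must be page aligned and within the mapped region */
		u32 *page = io_mapping_map_atomic_wc(mapping, offset);

		page[0] = val;		/* no sleeping until the unmap */
		io_mapping_unmap_atomic(page);
	}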

If you need to sleep while a page is mapped, you can use the non-atomic
variant, although it may be significantly slower.

	void *io_mapping_map_wc(struct io_mapping *mapping,
				unsigned long offset)

		This works like io_mapping_map_atomic_wc except it allows
		the task to sleep while holding the page mapped.

	void io_mapping_unmap(void *vaddr)

		This works like io_mapping_unmap_atomic, except it is used
		for pages mapped with io_mapping_map_wc.
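
A corresponding sketch for a code path that may sleep while the page
is mapped (the helper name is illustrative, and 'len' must stay within
the single mapped page):

	static void fill_page(struct io_mapping *mapping, unsigned long offset,
			      const void *src, size_t len)
	{
		void *vaddr = io_mapping_map_wc(mapping, offset);

		memcpy(vaddr, src, len);	/* sleeping is permitted here */
		io_mapping_unmap(vaddr);
	}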

At driver close time, the io_mapping object must be freed:

	void io_mapping_free(struct io_mapping *mapping)
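
Continuing the earlier initialization sketch, teardown is simply
(hypothetical helper name):

	static void my_dev_unmap_aperture(struct my_dev *dev)
	{
		io_mapping_free(dev->aperture);
		dev->aperture = NULL;
	}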

Current Implementation:

The initial implementation of these functions uses existing mapping
mechanisms and so provides only an abstraction layer and no new
functionality.

On 64-bit processors, io_mapping_create_wc calls ioremap_wc for the whole
range, creating a permanent kernel-visible mapping to the resource. The
map_atomic and map functions add the requested offset to the base of the
virtual address returned by ioremap_wc.
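
Conceptually, the 64-bit case reduces to pointer arithmetic over the
single permanent mapping; this is a simplified sketch, not the actual
header code:

	/* 'mapping' is the address returned by ioremap_wc() for the
	   whole range, so mapping a page is just adding the offset */
	static inline void *map_wc_64bit(struct io_mapping *mapping,
					 unsigned long offset)
	{
		return (char *)mapping + offset;
	}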

On 32-bit processors with HIGHMEM defined, io_mapping_map_atomic_wc uses
kmap_atomic_pfn to map the specified page in an atomic fashion;
kmap_atomic_pfn isn't really supposed to be used with device pages, but it
provides an efficient mapping for this usage.
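
Conceptually, the HIGHMEM case turns the offset into a page frame
number for kmap_atomic_pfn; this sketch assumes a 'base' field holding
the bus address and omits the write-combining page protection that the
real code must carry along:

	static inline void *map_atomic_wc_32bit(struct io_mapping *mapping,
						unsigned long offset)
	{
		unsigned long phys = mapping->base + offset; /* 'base' assumed */

		return kmap_atomic_pfn(phys >> PAGE_SHIFT);
	}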

On 32-bit processors without HIGHMEM defined, io_mapping_map_atomic_wc and
io_mapping_map_wc both use ioremap_wc, a terribly inefficient function which
performs an IPI to inform all processors about the new mapping. This results
in a significant performance penalty.