https://github.com/torvalds/linux
Revision e058a84bfddc42ba356a2316f2cf1141974625c9 authored by Linus Torvalds on 01 July 2021, 19:53:43 UTC, committed by Linus Torvalds on 01 July 2021, 19:53:43 UTC
Pull drm updates from Dave Airlie:
 "Highlights:

   - AMD enables two more GPUs, with resulting header files

   - i915 has started to move to TTM for discrete GPU and to enable DG1
     discrete GPU support (not by default yet)

   - new HyperV drm driver

   - vmwgfx adds arm64 support

   - TTM refactoring ongoing

   - 16bpc display support for AMD hw

  Otherwise it's just the usual insane amounts of work all over the
  place in lots of drivers and the core, as mostly summarised below:

  Core:
   - mark AGP ioctls as legacy
   - disable force probing for non-master clients
   - HDR metadata property helpers
   - HDMI infoframe signal colorimetry support
   - remove drm_device.pdev pointer
   - remove DRM_KMS_FB_HELPER config option
   - remove drm_pci_alloc/free
   - drm_err_*/drm_dbg_* helpers
   - use drm driver names for fbdev
   - leaked DMA handle fix
   - 16bpc fixed point format fourcc
   - add prefetching memcpy for WC
   - Documentation fixes

  aperture:
   - add aperture ownership helpers

  dp:
   - aux fixes
   - downstream 0 port handling
   - use extended base receiver capability DPCD
   - Rename DP_PSR_SELECTIVE_UPDATE to better match eDP spec
   - mst: use khz as link rate during init
   - VCPI fixes for StarTech hub

  ttm:
   - provide tt_shrink file via debugfs
   - warn about freeing pinned BOs
   - fix swapping error handling
   - move page alignment into BO
   - cleanup ttm_agp_backend
   - add ttm_sys_manager
   - don't override vm_ops
   - ttm_bo_mmap removed
   - make ttm_resource base of all managers
   - remove VM_MIXEDMAP usage

  panel:
   - sysfs_emit support
   - simple: runtime PM support
   - simple: power up panel when reading EDID + caching

  bridge:
   - MHDP8546: HDCP support + DT bindings
   - MHDP8546: Register DP AUX channel with userspace
   - TI SN65DSI83 + SN65DSI84: add driver
   - Sil8620: Fix module dependencies
   - dw-hdmi: make CEC driver loading optional
   - Ti-sn65dsi86: refclk fixes, subdrivers, runtime pm
   - It66121: Add driver + DT bindings
   - Adv7511: Support I2S IEC958 encoding
   - Anx7625: fix power-on delay
   - Nwl-dsi: Modesetting fixes; Cleanups
   - lt6911: add missing MODULE_DEVICE_TABLE
   - cdns: fix PM reference leak

  hyperv:
   - add new DRM driver for HyperV graphics

  efifb:
   - non-PCI device handling fixes

  i915:
   - refactor IP/device versioning
   - XeLPD Display IP preparation work
   - ADL-P enablement patches
   - DG1 uAPI behind BROKEN
   - disable mmap ioctl for discrete GPUs
   - start enabling HuC loading for Gen12+
   - major GuC backend rework for new platforms
   - initial TTM support for Discrete GPUs
   - locking rework for TTM prep
   - use correct max source link rate for eDP
   - %p4cc format printing
   - GLK display fixes
   - VLV DSI panel power fixes
   - PSR2 disabled for RKL and ADL-S
   - ACPI _DSM invalid access fixed
   - DMC FW path abstraction
   - ADL-S PCI ID update
   - uAPI headers converted to kerneldoc
   - initial LMEM support for DG1
   - x86/gpu: add Jasperlake to gen11 early quirks

  amdgpu:
   - Aldebaran updates + initial SR-IOV
   - new GPU: Beige Goby and Yellow Carp support
   - more LTTPR display work
   - Vangogh updates
   - SDMA 5.x GCR fixes
   - PCIe ASPM support
   - Renoir TMZ enablement
   - initial multiple eDP panel support
   - use fdinfo to track devices/process info
   - pin/unpin TTM fixes
   - free resource on fence usage query
   - fix fence calculation
   - fix hotunplug/suspend issues
   - GC/MM register access macro cleanup for SR-IOV
   - W=1 fixes
   - ACPI ATCS/ATIF handling rework
   - 16bpc fixed point format support
   - Initial smartshift support
   - RV/PCO power tuning fixes
   - new INFO query for additional vbios info

  amdkfd:
   - SR-IOV aldebaran support
   - HMM SVM support

  radeon:
   - SMU regression fixes
   - Oland flickering fix

  vmwgfx:
   - enable console with fbdev emulation
   - fix cpu updates of coherent multisample surfaces
   - remove reservation semaphore
   - add initial SVGA3 support
   - support arm64

  msm:
   - devcoredump support for display errors
   - dpu/dsi: yaml bindings conversion
   - mdp5: alpha/blend_mode/zpos support
   - a6xx: cached coherent buffer support
   - gpu iova fault improvement
   - a660 support

  rockchip:
   - RK3036 win1 scaling support
   - RK3066/3188 missing register support
   - RK3036/3066/3126/3188 alpha support

  mediatek:
   - MT8167 HDMI support
   - MT8183 DPI dual edge support

  tegra:
   - fixed YUV support/scaling on Tegra186+

  ast:
   - use pcim_iomap
   - fix DP501 EDID

  bochs:
   - screen blanking support

  etnaviv:
   - export more GPU ID values to userspace
   - add HWDB entry for GPU on i.MX8MP
   - rework linear window calcs

  exynos:
   - pm runtime changes

  imx:
   - Annotate dma_fence critical section
   - fix PRG modifiers after drmm conversion
   - Add 8 pixel alignment fix for 1366x768
   - fix YUV advertising
   - add color properties

  ingenic:
   - IPU planes fix

  panfrost:
   - Mediatek MT8183 support + DT bindings
   - export AFBC_FEATURES register to userspace

  simpledrm:
   - %pr for printing resources

  nouveau:
   - pin/unpin TTM fixes

  qxl:
   - unpin shadow BO

  virtio:
   - create dumb BOs as guest blob

  vkms:
   - drmm_universal_plane_alloc
   - add XRGB plane composition
   - overlay support"

* tag 'drm-next-2021-07-01' of git://anongit.freedesktop.org/drm/drm: (1570 commits)
  drm/i915: Reinstate the mmap ioctl for some platforms
  drm/i915/dsc: abstract helpers to get bigjoiner primary/secondary crtc
  Revert "drm/msm/mdp5: provide dynamic bandwidth management"
  drm/msm/mdp5: provide dynamic bandwidth management
  drm/msm/mdp5: add perf blocks for holding fudge factors
  drm/msm/mdp5: switch to standard zpos property
  drm/msm/mdp5: add support for alpha/blend_mode properties
  drm/msm/mdp5: use drm_plane_state for pixel blend mode
  drm/msm/mdp5: use drm_plane_state for storing alpha value
  drm/msm/mdp5: use drm atomic helpers to handle base drm plane state
  drm/msm/dsi: do not enable PHYs when called for the slave DSI interface
  drm/msm: Add debugfs to trigger shrinker
  drm/msm/dpu: Avoid ABBA deadlock between IRQ modules
  drm/msm: devcoredump iommu fault support
  iommu/arm-smmu-qcom: Add stall support
  drm/msm: Improve the a6xx page fault handler
  iommu/arm-smmu-qcom: Add an adreno-smmu-priv callback to get pagefault info
  iommu/arm-smmu: Add support for driver IOMMU fault handlers
  drm/msm: export hangcheck_period in debugfs
  drm/msm/a6xx: add support for Adreno 660 GPU
  ...
2 parents: c288d9c + 8a02ea4
mm/pagewalk.c
// SPDX-License-Identifier: GPL-2.0
#include <linux/pagewalk.h>
#include <linux/highmem.h>
#include <linux/sched.h>
#include <linux/hugetlb.h>

/*
 * We want to know the real level where an entry is located, ignoring any
 * folding of levels which may be happening. For example if p4d is folded then
 * a missing entry found at level 1 (p4d) is actually at level 0 (pgd).
 */
static int real_depth(int depth)
{
	if (depth == 3 && PTRS_PER_PMD == 1)
		depth = 2;
	if (depth == 2 && PTRS_PER_PUD == 1)
		depth = 1;
	if (depth == 1 && PTRS_PER_P4D == 1)
		depth = 0;
	return depth;
}

static int walk_pte_range_inner(pte_t *pte, unsigned long addr,
				unsigned long end, struct mm_walk *walk)
{
	const struct mm_walk_ops *ops = walk->ops;
	int err = 0;

	for (;;) {
		err = ops->pte_entry(pte, addr, addr + PAGE_SIZE, walk);
		if (err)
		       break;
		if (addr >= end - PAGE_SIZE)
			break;
		addr += PAGE_SIZE;
		pte++;
	}
	return err;
}

static int walk_pte_range(pmd_t *pmd, unsigned long addr, unsigned long end,
			  struct mm_walk *walk)
{
	pte_t *pte;
	int err = 0;
	spinlock_t *ptl;

	if (walk->no_vma) {
		pte = pte_offset_map(pmd, addr);
		err = walk_pte_range_inner(pte, addr, end, walk);
		pte_unmap(pte);
	} else {
		pte = pte_offset_map_lock(walk->mm, pmd, addr, &ptl);
		err = walk_pte_range_inner(pte, addr, end, walk);
		pte_unmap_unlock(pte, ptl);
	}

	return err;
}

#ifdef CONFIG_ARCH_HAS_HUGEPD
static int walk_hugepd_range(hugepd_t *phpd, unsigned long addr,
			     unsigned long end, struct mm_walk *walk, int pdshift)
{
	int err = 0;
	const struct mm_walk_ops *ops = walk->ops;
	int shift = hugepd_shift(*phpd);
	int page_size = 1 << shift;

	if (!ops->pte_entry)
		return 0;

	if (addr & (page_size - 1))
		return 0;

	for (;;) {
		pte_t *pte;

		spin_lock(&walk->mm->page_table_lock);
		pte = hugepte_offset(*phpd, addr, pdshift);
		err = ops->pte_entry(pte, addr, addr + page_size, walk);
		spin_unlock(&walk->mm->page_table_lock);

		if (err)
			break;
		if (addr >= end - page_size)
			break;
		addr += page_size;
	}
	return err;
}
#else
static int walk_hugepd_range(hugepd_t *phpd, unsigned long addr,
			     unsigned long end, struct mm_walk *walk, int pdshift)
{
	return 0;
}
#endif

static int walk_pmd_range(pud_t *pud, unsigned long addr, unsigned long end,
			  struct mm_walk *walk)
{
	pmd_t *pmd;
	unsigned long next;
	const struct mm_walk_ops *ops = walk->ops;
	int err = 0;
	int depth = real_depth(3);

	pmd = pmd_offset(pud, addr);
	do {
again:
		next = pmd_addr_end(addr, end);
		if (pmd_none(*pmd) || (!walk->vma && !walk->no_vma)) {
			if (ops->pte_hole)
				err = ops->pte_hole(addr, next, depth, walk);
			if (err)
				break;
			continue;
		}

		walk->action = ACTION_SUBTREE;

		/*
		 * This implies that each ->pmd_entry() handler
		 * needs to know about pmd_trans_huge() pmds
		 */
		if (ops->pmd_entry)
			err = ops->pmd_entry(pmd, addr, next, walk);
		if (err)
			break;

		if (walk->action == ACTION_AGAIN)
			goto again;

		/*
		 * Check this here so we only break down trans_huge
		 * pages when we _need_ to
		 */
		if ((!walk->vma && (pmd_leaf(*pmd) || !pmd_present(*pmd))) ||
		    walk->action == ACTION_CONTINUE ||
		    !(ops->pte_entry))
			continue;

		if (walk->vma) {
			split_huge_pmd(walk->vma, pmd, addr);
			if (pmd_trans_unstable(pmd))
				goto again;
		}

		if (is_hugepd(__hugepd(pmd_val(*pmd))))
			err = walk_hugepd_range((hugepd_t *)pmd, addr, next, walk, PMD_SHIFT);
		else
			err = walk_pte_range(pmd, addr, next, walk);
		if (err)
			break;
	} while (pmd++, addr = next, addr != end);

	return err;
}

static int walk_pud_range(p4d_t *p4d, unsigned long addr, unsigned long end,
			  struct mm_walk *walk)
{
	pud_t *pud;
	unsigned long next;
	const struct mm_walk_ops *ops = walk->ops;
	int err = 0;
	int depth = real_depth(2);

	pud = pud_offset(p4d, addr);
	do {
again:
		next = pud_addr_end(addr, end);
		if (pud_none(*pud) || (!walk->vma && !walk->no_vma)) {
			if (ops->pte_hole)
				err = ops->pte_hole(addr, next, depth, walk);
			if (err)
				break;
			continue;
		}

		walk->action = ACTION_SUBTREE;

		if (ops->pud_entry)
			err = ops->pud_entry(pud, addr, next, walk);
		if (err)
			break;

		if (walk->action == ACTION_AGAIN)
			goto again;

		if ((!walk->vma && (pud_leaf(*pud) || !pud_present(*pud))) ||
		    walk->action == ACTION_CONTINUE ||
		    !(ops->pmd_entry || ops->pte_entry))
			continue;

		if (walk->vma)
			split_huge_pud(walk->vma, pud, addr);
		if (pud_none(*pud))
			goto again;

		if (is_hugepd(__hugepd(pud_val(*pud))))
			err = walk_hugepd_range((hugepd_t *)pud, addr, next, walk, PUD_SHIFT);
		else
			err = walk_pmd_range(pud, addr, next, walk);
		if (err)
			break;
	} while (pud++, addr = next, addr != end);

	return err;
}

static int walk_p4d_range(pgd_t *pgd, unsigned long addr, unsigned long end,
			  struct mm_walk *walk)
{
	p4d_t *p4d;
	unsigned long next;
	const struct mm_walk_ops *ops = walk->ops;
	int err = 0;
	int depth = real_depth(1);

	p4d = p4d_offset(pgd, addr);
	do {
		next = p4d_addr_end(addr, end);
		if (p4d_none_or_clear_bad(p4d)) {
			if (ops->pte_hole)
				err = ops->pte_hole(addr, next, depth, walk);
			if (err)
				break;
			continue;
		}
		if (ops->p4d_entry) {
			err = ops->p4d_entry(p4d, addr, next, walk);
			if (err)
				break;
		}
		if (is_hugepd(__hugepd(p4d_val(*p4d))))
			err = walk_hugepd_range((hugepd_t *)p4d, addr, next, walk, P4D_SHIFT);
		else if (ops->pud_entry || ops->pmd_entry || ops->pte_entry)
			err = walk_pud_range(p4d, addr, next, walk);
		if (err)
			break;
	} while (p4d++, addr = next, addr != end);

	return err;
}

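/*
 * Top level of the walk: iterate over the pgd entries covering [addr, end),
 * report holes through ->pte_hole() at depth 0, call ->pgd_entry() when it is
 * set, and descend further only if at least one lower-level callback is
 * installed. When walk->pgd was supplied (the no-VMA case), it is used
 * instead of walk->mm->pgd.
 */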
static int walk_pgd_range(unsigned long addr, unsigned long end,
			  struct mm_walk *walk)
{
	pgd_t *pgd;
	unsigned long next;
	const struct mm_walk_ops *ops = walk->ops;
	int err = 0;

	if (walk->pgd)
		pgd = walk->pgd + pgd_index(addr);
	else
		pgd = pgd_offset(walk->mm, addr);
	do {
		next = pgd_addr_end(addr, end);
		if (pgd_none_or_clear_bad(pgd)) {
			if (ops->pte_hole)
				err = ops->pte_hole(addr, next, 0, walk);
			if (err)
				break;
			continue;
		}
		if (ops->pgd_entry) {
			err = ops->pgd_entry(pgd, addr, next, walk);
			if (err)
				break;
		}
		if (is_hugepd(__hugepd(pgd_val(*pgd))))
			err = walk_hugepd_range((hugepd_t *)pgd, addr, next, walk, PGDIR_SHIFT);
		else if (ops->p4d_entry || ops->pud_entry || ops->pmd_entry || ops->pte_entry)
			err = walk_p4d_range(pgd, addr, next, walk);
		if (err)
			break;
	} while (pgd++, addr = next, addr != end);

	return err;
}

#ifdef CONFIG_HUGETLB_PAGE
static unsigned long hugetlb_entry_end(struct hstate *h, unsigned long addr,
				       unsigned long end)
{
	unsigned long boundary = (addr & huge_page_mask(h)) + huge_page_size(h);
	return boundary < end ? boundary : end;
}

static int walk_hugetlb_range(unsigned long addr, unsigned long end,
			      struct mm_walk *walk)
{
	struct vm_area_struct *vma = walk->vma;
	struct hstate *h = hstate_vma(vma);
	unsigned long next;
	unsigned long hmask = huge_page_mask(h);
	unsigned long sz = huge_page_size(h);
	pte_t *pte;
	const struct mm_walk_ops *ops = walk->ops;
	int err = 0;

	do {
		next = hugetlb_entry_end(h, addr, end);
		pte = huge_pte_offset(walk->mm, addr & hmask, sz);

		if (pte)
			err = ops->hugetlb_entry(pte, hmask, addr, next, walk);
		else if (ops->pte_hole)
			err = ops->pte_hole(addr, next, -1, walk);

		if (err)
			break;
	} while (addr = next, addr != end);

	return err;
}

#else /* CONFIG_HUGETLB_PAGE */
static int walk_hugetlb_range(unsigned long addr, unsigned long end,
			      struct mm_walk *walk)
{
	return 0;
}

#endif /* CONFIG_HUGETLB_PAGE */

/*
 * Decide whether we really walk over the current vma on [@start, @end)
 * or skip it via the returned value. Return 0 if we do walk over the
 * current vma, and return 1 if we skip the vma. A negative value means an
 * error, in which case we abort the current walk.
 */
static int walk_page_test(unsigned long start, unsigned long end,
			struct mm_walk *walk)
{
	struct vm_area_struct *vma = walk->vma;
	const struct mm_walk_ops *ops = walk->ops;

	if (ops->test_walk)
		return ops->test_walk(start, end, walk);

	/*
	 * A VM_PFNMAP vma doesn't have any valid struct pages behind its
	 * range, so we don't walk over it as we do for normal vmas. However,
	 * some callers are interested in handling holes and don't want to
	 * simply ignore any address range. Such users define their
	 * ->pte_hole() callbacks, so let them handle the VM_PFNMAP vma.
	 */
	if (vma->vm_flags & VM_PFNMAP) {
		int err = 1;
		if (ops->pte_hole)
			err = ops->pte_hole(start, end, -1, walk);
		return err ? err : 1;
	}
	return 0;
}

static int __walk_page_range(unsigned long start, unsigned long end,
			struct mm_walk *walk)
{
	int err = 0;
	struct vm_area_struct *vma = walk->vma;
	const struct mm_walk_ops *ops = walk->ops;

	if (vma && ops->pre_vma) {
		err = ops->pre_vma(start, end, walk);
		if (err)
			return err;
	}

	if (vma && is_vm_hugetlb_page(vma)) {
		if (ops->hugetlb_entry)
			err = walk_hugetlb_range(start, end, walk);
	} else
		err = walk_pgd_range(start, end, walk);

	if (vma && ops->post_vma)
		ops->post_vma(walk);

	return err;
}

/**
 * walk_page_range - walk page table with caller specific callbacks
 * @mm:		mm_struct representing the target process of page table walk
 * @start:	start address of the virtual address range
 * @end:	end address of the virtual address range
 * @ops:	operation to call during the walk
 * @private:	private data for callbacks' usage
 *
 * Recursively walk the page table tree of the process represented by @mm
 * within the virtual address range [@start, @end). During the walk, we can
 * do caller-specific work for each entry by setting up pmd_entry(),
 * pte_entry(), and/or hugetlb_entry(). If some of these callbacks are not
 * set up, the associated entries/pages are simply ignored.
 * The return values of these callbacks are commonly defined as follows:
 *
 *  - 0  : succeeded in handling the current entry; if the end address has
 *         not been reached yet, continue the walk.
 *  - >0 : succeeded in handling the current entry, and return to the caller
 *         with a caller-specific value.
 *  - <0 : failed to handle the current entry, and return to the caller
 *         with an error code.
 *
 * Before starting to walk the page table, some callers want to check whether
 * they really want to walk over the current vma, typically by checking
 * its vm_flags. walk_page_test() and @ops->test_walk() are used for this
 * purpose.
 *
 * If operations need to be staged before and committed after a vma is walked,
 * there are two callbacks, pre_vma() and post_vma(). Note that post_vma(),
 * since it is intended to handle commit-type operations, can't return any
 * errors.
 *
 * struct mm_walk keeps current values of some common data like vma and pmd,
 * which are useful for access from the callbacks. If you want to pass some
 * caller-specific data to callbacks, @private should be helpful.
 *
 * Locking:
 *   Callers of walk_page_range() and walk_page_vma() should hold @mm->mmap_lock,
 *   because these functions traverse the vma list and/or access vma data.
 */
int walk_page_range(struct mm_struct *mm, unsigned long start,
		unsigned long end, const struct mm_walk_ops *ops,
		void *private)
{
	int err = 0;
	unsigned long next;
	struct vm_area_struct *vma;
	struct mm_walk walk = {
		.ops		= ops,
		.mm		= mm,
		.private	= private,
	};

	if (start >= end)
		return -EINVAL;

	if (!walk.mm)
		return -EINVAL;

	mmap_assert_locked(walk.mm);

	vma = find_vma(walk.mm, start);
	do {
		if (!vma) { /* after the last vma */
			walk.vma = NULL;
			next = end;
		} else if (start < vma->vm_start) { /* outside vma */
			walk.vma = NULL;
			next = min(end, vma->vm_start);
		} else { /* inside vma */
			walk.vma = vma;
			next = min(end, vma->vm_end);
			vma = vma->vm_next;

			err = walk_page_test(start, next, &walk);
			if (err > 0) {
				/*
				 * positive return values are purely for
				 * controlling the pagewalk, so should never
				 * be passed to the callers.
				 */
				err = 0;
				continue;
			}
			if (err < 0)
				break;
		}
		if (walk.vma || walk.ops->pte_hole)
			err = __walk_page_range(start, next, &walk);
		if (err)
			break;
	} while (start = next, start < end);
	return err;
}
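
/*
 * Illustrative sketch (not part of pagewalk.c): a minimal caller of
 * walk_page_range() as documented above. The callback, ops and helper names
 * below are hypothetical; the walk counts present PTEs in a range while
 * holding the mmap_lock, as required by the locking rules above.
 */
#if 0	/* example only */
static int count_present_pte(pte_t *pte, unsigned long addr,
			     unsigned long next, struct mm_walk *walk)
{
	unsigned long *nr_present = walk->private;

	if (pte_present(*pte))
		(*nr_present)++;
	return 0;	/* 0: keep walking */
}

static const struct mm_walk_ops count_present_ops = {
	.pte_entry	= count_present_pte,
};

static unsigned long count_present_pages(struct mm_struct *mm,
					 unsigned long start, unsigned long end)
{
	unsigned long nr_present = 0;

	mmap_read_lock(mm);
	walk_page_range(mm, start, end, &count_present_ops, &nr_present);
	mmap_read_unlock(mm);

	return nr_present;
}
#endif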

/*
 * Similar to walk_page_range() but can walk any page tables even if they are
 * not backed by VMAs. Because 'unusual' entries may be walked, this function
 * also does not lock the PTEs for the pte_entry() callback. This is useful for
 * walking the kernel page tables or page tables for firmware.
 */
int walk_page_range_novma(struct mm_struct *mm, unsigned long start,
			  unsigned long end, const struct mm_walk_ops *ops,
			  pgd_t *pgd,
			  void *private)
{
	struct mm_walk walk = {
		.ops		= ops,
		.mm		= mm,
		.pgd		= pgd,
		.private	= private,
		.no_vma		= true
	};

	if (start >= end || !walk.mm)
		return -EINVAL;

	mmap_assert_locked(walk.mm);

	return __walk_page_range(start, end, &walk);
}
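
/*
 * Illustrative sketch (not part of pagewalk.c): using the no-VMA variant to
 * visit kernel page table entries in a hypothetical range. A NULL pgd makes
 * the walk start from init_mm's pgd, and taking init_mm's mmap_lock
 * satisfies the mmap_assert_locked() check above.
 */
#if 0	/* example only */
static int note_kernel_pte(pte_t *pte, unsigned long addr,
			   unsigned long next, struct mm_walk *walk)
{
	pr_info("pte at %#lx is %spresent\n", addr,
		pte_present(*pte) ? "" : "not ");
	return 0;
}

static const struct mm_walk_ops note_kernel_ops = {
	.pte_entry	= note_kernel_pte,
};

static void note_kernel_range(unsigned long start, unsigned long end)
{
	mmap_read_lock(&init_mm);
	walk_page_range_novma(&init_mm, start, end, &note_kernel_ops,
			      NULL, NULL);
	mmap_read_unlock(&init_mm);
}
#endif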

int walk_page_vma(struct vm_area_struct *vma, const struct mm_walk_ops *ops,
		void *private)
{
	struct mm_walk walk = {
		.ops		= ops,
		.mm		= vma->vm_mm,
		.vma		= vma,
		.private	= private,
	};
	int err;

	if (!walk.mm)
		return -EINVAL;

	mmap_assert_locked(walk.mm);

	err = walk_page_test(vma->vm_start, vma->vm_end, &walk);
	if (err > 0)
		return 0;
	if (err < 0)
		return err;
	return __walk_page_range(vma->vm_start, vma->vm_end, &walk);
}

/**
 * walk_page_mapping - walk all memory areas mapped into a struct address_space.
 * @mapping: Pointer to the struct address_space
 * @first_index: First page offset in the address_space
 * @nr: Number of incremental page offsets to cover
 * @ops:	operation to call during the walk
 * @private:	private data for callbacks' usage
 *
 * This function walks all memory areas mapped into a struct address_space.
 * The walk is limited to only the given page-size index range, but if
 * the index boundaries cross a huge page-table entry, that entry will be
 * included.
 *
 * Also see walk_page_range() for additional information.
 *
 * Locking:
 *   This function can't require that the struct mm_struct::mmap_lock is held,
 *   since @mapping may be mapped by multiple processes. Instead,
 *   @mapping->i_mmap_rwsem must be held. This might have implications in the
 *   callbacks, and it's up to the caller to ensure that the
 *   struct mm_struct::mmap_lock is not needed.
 *
 *   This also means that a caller can't rely on the struct
 *   vm_area_struct::vm_flags to be constant across a call,
 *   except for immutable flags. Callers requiring this shouldn't use
 *   this function.
 *
 * Return: 0 on success, negative error code on failure, positive number on
 * caller defined premature termination.
 */
int walk_page_mapping(struct address_space *mapping, pgoff_t first_index,
		      pgoff_t nr, const struct mm_walk_ops *ops,
		      void *private)
{
	struct mm_walk walk = {
		.ops		= ops,
		.private	= private,
	};
	struct vm_area_struct *vma;
	pgoff_t vba, vea, cba, cea;
	unsigned long start_addr, end_addr;
	int err = 0;

	lockdep_assert_held(&mapping->i_mmap_rwsem);
	vma_interval_tree_foreach(vma, &mapping->i_mmap, first_index,
				  first_index + nr - 1) {
		/* Clip to the vma */
		vba = vma->vm_pgoff;
		vea = vba + vma_pages(vma);
		cba = first_index;
		cba = max(cba, vba);
		cea = first_index + nr;
		cea = min(cea, vea);

		start_addr = ((cba - vba) << PAGE_SHIFT) + vma->vm_start;
		end_addr = ((cea - vba) << PAGE_SHIFT) + vma->vm_start;
		if (start_addr >= end_addr)
			continue;

		walk.vma = vma;
		walk.mm = vma->vm_mm;

		err = walk_page_test(vma->vm_start, vma->vm_end, &walk);
		if (err > 0) {
			err = 0;
			break;
		} else if (err < 0)
			break;

		err = __walk_page_range(start_addr, end_addr, &walk);
		if (err)
			break;
	}

	return err;
}
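
/*
 * Worked example of the index clipping above (hypothetical numbers): a vma
 * with vm_pgoff == 10 and vma_pages() == 20 covers file pages [10, 30). For
 * a walk with first_index == 5 and nr == 10, cba = max(5, 10) = 10 and
 * cea = min(5 + 10, 30) = 15, so the walk spans start_addr = vma->vm_start
 * to end_addr = vma->vm_start + 5 * PAGE_SIZE.
 */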