Revision baa1f708ff4d78c3c1dea8c27fcf96ba402aa54b authored by dependabot[bot] on 28 August 2021, 01:20:05 UTC, committed by Facebook GitHub Bot on 28 August 2021, 01:20:55 UTC
Summary:
Bumps [url-parse](https://github.com/unshiftio/url-parse) from 1.5.1 to 1.5.3.
<details>
<summary>Commits</summary>
<ul>
<li><a href="https://github.com/unshiftio/url-parse/commit/ad444931666a30bad11472d89a216461cf16cae2"><code>ad44493</code></a> [dist] 1.5.3</li>
<li><a href="https://github.com/unshiftio/url-parse/commit/c7984617e235892cc22e0f47bb5ff1c012e6e39f"><code>c798461</code></a> [fix] Fix host parsing for file URLs (<a href="https://github-redirect.dependabot.com/unshiftio/url-parse/issues/210">https://github.com/facebookresearch/mmf/issues/210</a>)</li>
<li><a href="https://github.com/unshiftio/url-parse/commit/201034b8670c2aa382d7ec410ee750ac6f2f9c38"><code>201034b</code></a> [dist] 1.5.2</li>
<li><a href="https://github.com/unshiftio/url-parse/commit/2d9ac2c94067742b2116332c1e03be9f37371dff"><code>2d9ac2c</code></a> [fix] Sanitize only special URLs (<a href="https://github-redirect.dependabot.com/unshiftio/url-parse/issues/209">https://github.com/facebookresearch/mmf/issues/209</a>)</li>
<li><a href="https://github.com/unshiftio/url-parse/commit/fb128af4f43fa17f351d50cf615c7598c751f50a"><code>fb128af</code></a> [fix] Use <code>'null'</code> as <code>origin</code> for non special URLs</li>
<li><a href="https://github.com/unshiftio/url-parse/commit/fed6d9e338ea39de2d68bb66607066d71328c62f"><code>fed6d9e</code></a> [fix] Add a leading slash only if the URL is special</li>
<li><a href="https://github.com/unshiftio/url-parse/commit/94872e7ab9103ee69b958959baa14c9e682a7f10"><code>94872e7</code></a> [fix] Do not incorrectly set the <code>slashes</code> property to <code>true</code></li>
<li><a href="https://github.com/unshiftio/url-parse/commit/81ab967889b08112d3356e451bf03e6aa0cbb7e0"><code>81ab967</code></a> [fix] Ignore slashes after the protocol for special URLs</li>
<li><a href="https://github.com/unshiftio/url-parse/commit/ee22050a48a67409aa5f7c87947284156d615bd1"><code>ee22050</code></a> [ci] Use GitHub Actions</li>
<li><a href="https://github.com/unshiftio/url-parse/commit/d2979b586d8c7751e0c77f127d9ce1b2143cc0c9"><code>d2979b5</code></a> [fix] Special case the <code>file:</code> protocol (<a href="https://github-redirect.dependabot.com/unshiftio/url-parse/issues/204">https://github.com/facebookresearch/mmf/issues/204</a>)</li>
<li>Additional commits viewable in <a href="https://github.com/unshiftio/url-parse/compare/1.5.1...1.5.3">compare view</a></li>
</ul>
</details>
<br />

[![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=url-parse&package-manager=npm_and_yarn&previous-version=1.5.1&new-version=1.5.3)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `dependabot rebase`.

[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)

---

<details>
<summary>Dependabot commands and options</summary>
<br />

You can trigger Dependabot actions by commenting on this PR:
- `dependabot rebase` will rebase this PR
- `dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `dependabot merge` will merge this PR after your CI passes on it
- `dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `dependabot cancel merge` will cancel a previously requested merge and block automerging
- `dependabot reopen` will reopen this PR if it is closed
- `dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
- `dependabot use these labels` will set the current labels as the default for future PRs for this repo and language
- `dependabot use these reviewers` will set the current reviewers as the default for future PRs for this repo and language
- `dependabot use these assignees` will set the current assignees as the default for future PRs for this repo and language
- `dependabot use this milestone` will set the current milestone as the default for future PRs for this repo and language

You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/facebookresearch/mmf/network/alerts).

</details>

Pull Request resolved: https://github.com/facebookresearch/mmf/pull/1046

Test Plan:
Imported from GitHub, without a `Test Plan:` line.

[Site Preview: mmf](https://our.intern.facebook.com/intern/staticdocs/eph/D30392262/V3/mmf/)

Reviewed By: apsdehal

Differential Revision: D30392262

Pulled By: vedanuj

fbshipit-source-id: 5741d20e13b5269d60d214d56207c3f59c31c270
1 parent d772fd1
hm_convert.py
# Copyright (c) Facebook, Inc. and its affiliates.

import argparse
import hashlib
import os
import subprocess
import warnings
import zipfile

from mmf.utils.configuration import Configuration
from mmf.utils.download import copy, decompress, move
from mmf.utils.file_io import PathManager


class HMConverter:
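    """Convert the Hateful Memes challenge zip downloaded from DrivenData into
    the directory layout MMF expects, verifying the zip's checksum first."""
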
    IMAGE_FILES = ["img.tar.gz", "img"]
    JSONL_PHASE_ONE_FILES = ["train.jsonl", "dev.jsonl", "test.jsonl"]
    JSONL_PHASE_TWO_FILES = [
        "train.jsonl",
        "dev_seen.jsonl",
        "test_seen.jsonl",
        "dev_unseen.jsonl",
        "test_unseen.jsonl",
    ]
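    # Known-good SHA-256 digests of the challenge zip downloaded from DrivenData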
    POSSIBLE_CHECKSUMS = [
        "d8f1073f5fbf1b08a541cc2325fc8645619ab8ed768091fb1317d5c3a6653a77",
        "a424c003b7d4ea3f3b089168b5f5ea73b90a3ff043df4b8ff4d7ed87c51cb572",
        "6e609b8c230faff02426cf462f0c9528957b7884d68c60ebc26ff83846e5f80f",
        "c1363aae9649c79ae4abfdb151b56d3d170187db77757f3daa80856558ac367c",
    ]

    def __init__(self):
        self.parser = self.get_parser()
        self.args = self.parser.parse_args()
        self.configuration = Configuration()

    def assert_files(self, folder):
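        """Check that the expected annotation and image files exist in folder.

        Returns True if the folder matches the Phase 1 file layout of the
        challenge, False if it matches the Phase 2 layout.
        """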
        files_needed = self.JSONL_PHASE_ONE_FILES
        phase_one = all(
            PathManager.exists(os.path.join(folder, "data", file))
            for file in files_needed
        )

        if not phase_one:
            files_needed = self.JSONL_PHASE_TWO_FILES
            for file in files_needed:
                assert PathManager.exists(
                    os.path.join(folder, "data", file)
                ), f"{file} doesn't exist in {folder}"
        else:
            warnings.warn(
                "You are on Phase 1 of the Hateful Memes Challenge. "
                "Please update to Phase 2"
            )

        if not any(
            PathManager.exists(os.path.join(folder, "data", file))
            for file in self.IMAGE_FILES
        ):
            raise AssertionError("Neither img nor img.tar.gz exists in the zip")

        return phase_one

    def get_parser(self):
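        """Build the command-line argument parser for the converter."""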
        parser = argparse.ArgumentParser(formatter_class=argparse.RawTextHelpFormatter)

        parser.add_argument(
            "--zip_file",
            required=True,
            type=str,
            help="Zip file downloaded from the DrivenData",
        )

        parser.add_argument(
            "--password", required=True, type=str, help="Password for the zip file"
        )
        parser.add_argument(
            "--move", required=False, type=int, help="Move data dir to mmf cache dir"
        )
        parser.add_argument(
            "--mmf_data_folder", required=False, type=str, help="MMF Data folder"
        )
        parser.add_argument(
            "--bypass_checksum",
            required=False,
            type=int,
            help="Pass 1 to skip the checksum",
        )
        return parser

    def convert(self):
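        """Copy or move the zip into the MMF data folder, verify its checksum,
        extract it, and move annotations and images into place."""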
        config = self.configuration.get_config()
        data_dir = config.env.data_dir

        if self.args.mmf_data_folder:
            data_dir = self.args.mmf_data_folder

        bypass_checksum = bool(self.args.bypass_checksum)

        print(f"Data folder is {data_dir}")
        print(f"Zip path is {self.args.zip_file}")

        base_path = os.path.join(data_dir, "datasets", "hateful_memes", "defaults")
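        # Extracted images and the JSONL annotations each get a subfolder in here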

        images_path = os.path.join(base_path, "images")
        PathManager.mkdirs(images_path)

        move_dir = bool(self.args.move)

        if not bypass_checksum:
            self.checksum(self.args.zip_file, self.POSSIBLE_CHECKSUMS)

        src = self.args.zip_file
        dest = images_path
        if move_dir:
            print(f"Moving {src}")
            move(src, dest)
        else:
            print(f"Copying {src}")
            copy(src, dest)

        print(f"Unzipping {src}")
        self.decompress_zip(
            dest, fname=os.path.basename(src), password=self.args.password
        )

        phase_one = self.assert_files(images_path)

        annotations_path = os.path.join(base_path, "annotations")
        PathManager.mkdirs(annotations_path)
        annotations = (
            self.JSONL_PHASE_ONE_FILES if phase_one else self.JSONL_PHASE_TWO_FILES
        )

        for annotation in annotations:
            print(f"Moving {annotation}")
            src = os.path.join(images_path, "data", annotation)
            dest = os.path.join(annotations_path, annotation)
            move(src, dest)

        images = self.IMAGE_FILES

        for image_file in images:
            src = os.path.join(images_path, "data", image_file)
            if not PathManager.exists(src):
                continue
            print(f"Moving {image_file}")
            dest = os.path.join(images_path, image_file)
            move(src, dest)
            if src.endswith(".tar.gz"):
                decompress(dest, fname=image_file, delete_original=False)

    def checksum(self, file, hashes):
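        """Stream the file through SHA-256 and compare against the known digests."""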
        sha256_hash = hashlib.sha256()
        destination = file

        with PathManager.open(destination, "rb") as f:
            print("Starting checksum for {}".format(os.path.basename(file)))
            for byte_block in iter(lambda: f.read(65536), b""):
                sha256_hash.update(byte_block)
            if sha256_hash.hexdigest() not in hashes:
                raise AssertionError(
                    "Checksum of downloaded file does not match the expected "
                    "checksum. Please try again."
                )
            else:
                print("Checksum successful")

    def decompress_zip(self, dest, fname, password=None):
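        """Extract the (optionally password-protected) zip, preferring unzip."""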
        path = os.path.join(dest, fname)
        print("Extracting the zip can take time. Sit back and relax.")
        try:
            # Python's zip file module is very slow with password encrypted files
            # Try command line
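            # unzip flags: -o overwrite without prompting, -q quiet, -d target dir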
            command = ["unzip", "-o", "-q", "-d", dest]
            if password:
                command += ["-P", password]
            command += [path]
            subprocess.run(command, check=True)
        except Exception:
            with zipfile.ZipFile(path, "r") as obj:
                if password:
                    obj.setpassword(password.encode("utf-8"))
                obj.extractall(path=dest)


def main():
    converter = HMConverter()
    converter.convert()


if __name__ == "__main__":
    main()
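

# Example invocation (zip file name and password below are placeholders):
#   python hm_convert.py --zip_file hateful_memes.zip --password <zip password>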