swh:1:snp:71afc25eb6e6e055a37a962e6b91010ec35e397f

0c543ae Match the argument name with the name in the `Args` section of the docstring PiperOrigin-RevId: 663926739 17 August 2024, 00:22:02 UTC
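For context on the convention this entry refers to: in Google-style docstrings, the names under `Args:` must match the function's parameter names exactly. A minimal illustration with a hypothetical function:

```
def scale(x, factor):
    """Scales an array.

    Args:
      x: the input array.
      factor: the multiplier. This name must match the parameter name
        (`factor`), not a paraphrase like `scale_factor`.
    """
    return x * factor
```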
3a6ff86 Merge pull request #23088 from vfdev-5:update-build-build-py PiperOrigin-RevId: 663876121 16 August 2024, 21:50:57 UTC
8b831b8 Update XLA dependency to use revision http://github.com/openxla/xla/commit/075859a60b9ba002c9f1712798c297d3828abebe. PiperOrigin-RevId: 663861515 16 August 2024, 21:14:21 UTC
60cc041 Minor fixes and documentation update for custom hermetic Python interpreter support. PiperOrigin-RevId: 663784042 16 August 2024, 17:54:00 UTC
3d942ef Merge pull request #23086 from ROCm:ci_build_fix PiperOrigin-RevId: 663769517 16 August 2024, 17:17:04 UTC
1127f49 Merge pull request #23080 from jakevdp:array-doc PiperOrigin-RevId: 663769488 16 August 2024, 17:13:35 UTC
24394a1 Implement initial vmap over pallas_call w/ ragged inputs (via jumbles) The plan here is to load it up with invariants, and start with a really simple kernel. After that, we can slowly relax the various invariants and implement support for others. Note: the work savings here are compute only, not memory yet. A fast-follow-up CL adds memory savings via index-map rewriting. PiperOrigin-RevId: 663752447 16 August 2024, 16:20:57 UTC
e9d6fd3 document jax.Array methods and attributes 16 August 2024, 13:37:19 UTC
b6306e3 Remove synchronization from GPU LU decomposition kernel by adding an async batch pointers builder. In the batched LU decomposition in cuBLAS, the output buffer is required to be a pointer of pointers to the appropriate batch matrices. Previously this reshaping was done on the host and then copied to the device, requiring a synchronization, but it seems straightforward to instead implement a tiny CUDA kernel to do this work. This definitely isn't a bottleneck or a high priority change, but this seemed like a reasonable time to fix a longstanding TODO. PiperOrigin-RevId: 663686539 16 August 2024, 11:37:09 UTC
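To make the "pointer of pointers" requirement above concrete, here is an illustrative Python sketch of the addresses the tiny CUDA kernel computes; the helper name and the layout assumption (contiguous, equally strided batch matrices) are ours, not the kernel's actual code:

```
import numpy as np

def build_batch_pointers(base_addr, batch, rows, cols, itemsize):
    """Device pointer to batch matrix i is base + i * (rows * cols * itemsize)."""
    stride = rows * cols * itemsize
    return np.array([base_addr + i * stride for i in range(batch)],
                    dtype=np.uint64)
```

Computing these on the device avoids the host-side reshape and the device copy, and with it the synchronization described above.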
12e8bf4 Pass bazel options to requirements_update and requirements_nightly_update commands 16 August 2024, 09:47:21 UTC
acacf88 Determine LAPACK workspace during QR factorization kernel runtime PiperOrigin-RevId: 663641199 16 August 2024, 08:20:50 UTC
9785368 [Easy] Refactor ragged_dot transpose, combine ragged_to_dense PiperOrigin-RevId: 663630185 16 August 2024, 07:32:42 UTC
fd7c52d [ROCm] Fix python in rocm ci_build script. 16 August 2024, 02:33:51 UTC
417fcd5 Update XLA dependency to use revision http://github.com/openxla/xla/commit/1db3272ca01754dd38827f4ea332a2f136df5d05. PiperOrigin-RevId: 663454724 15 August 2024, 21:30:30 UTC
a498c1e Set Clang as the default compiler in the build script. PiperOrigin-RevId: 663433112 15 August 2024, 20:36:15 UTC
88ed579 Merge pull request #23039 from froystig:docs3 PiperOrigin-RevId: 663427005 15 August 2024, 20:22:04 UTC
527a4b8 Merge pull request #23069 from jakevdp:doc-vjp PiperOrigin-RevId: 663351438 15 August 2024, 17:15:33 UTC
1516d59 Reverts 6fc57c0eb6f06b2da20c94f5f127fe4a551bda09 PiperOrigin-RevId: 663334727 15 August 2024, 16:33:31 UTC
322d0c2 Rollback the change "Import from ``mlir.dialects`` lazily" Reverts a755f1db837c464f6aa3d3111a1bc40b5ebdd37d PiperOrigin-RevId: 663324497 15 August 2024, 16:00:47 UTC
6913551 If `AbstractMesh` is an input to `shard_map`, then in eager mode require at least one input to be a `NamedSharding`, rather than all inputs. PiperOrigin-RevId: 663310336 15 August 2024, 15:10:42 UTC
82d3cfb [Pallas] Fix boolean vector loads with indexing. PiperOrigin-RevId: 663124475 15 August 2024, 01:22:56 UTC
a18561d [Pallas] Add run_scoped interpret mode rule + Enable DMA tests. PiperOrigin-RevId: 663115966 15 August 2024, 00:52:48 UTC
91f5512 Document methods of custom_jvp/custom_vjp 14 August 2024, 22:37:20 UTC
2737a73 Update XLA dependency to use revision http://github.com/openxla/xla/commit/aa2340049456d45f3b1fd7b09acc8bcf9d50b749. PiperOrigin-RevId: 663060585 14 August 2024, 21:43:18 UTC
020513f [Mosaic] Update serde to handle upstream MLIR changes For changes from https://github.com/llvm/llvm-project/commit/5f26497da7de10c4eeec33b5a5cfcb47e96836cc PiperOrigin-RevId: 663020509 14 August 2024, 19:48:29 UTC
85fb66a Update XLA dependency to use revision http://github.com/openxla/xla/commit/cb1541c5f092807fced9e5e2b261371dba888906. PiperOrigin-RevId: 662998853 14 August 2024, 18:42:30 UTC
acf4b32 Merge pull request #23060 from jakevdp:core-deps PiperOrigin-RevId: 662988768 14 August 2024, 18:21:00 UTC
25cd9ea Merge pull request #23059 from jakevdp:exports PiperOrigin-RevId: 662988714 14 August 2024, 18:17:26 UTC
db00045 [Pallas] Add boolean vector support. PiperOrigin-RevId: 662985359 14 August 2024, 18:08:32 UTC
599c13a Introduce hermetic CUDA in Google ML projects.

1) Hermetic CUDA rules allow building wheels with GPU support on a machine without GPUs, as well as running Bazel GPU tests on a machine with only GPUs and the NVIDIA driver installed. When `--config=cuda` is provided in Bazel options, Bazel will download CUDA, CUDNN and NCCL redistributions into the cache and use them during the build and test phases.

[Default location of CUDNN redistributions](https://developer.download.nvidia.com/compute/cudnn/redist/)
[Default location of CUDA redistributions](https://developer.download.nvidia.com/compute/cuda/redist/)
[Default location of NCCL redistributions](https://pypi.org/project/nvidia-nccl-cu12/#history)

2) To include hermetic CUDA rules in your project, add the following to the WORKSPACE of the downstream project dependent on XLA. Note: use `@local_tsl` instead of `@tsl` in the Tensorflow project.

```
load(
    "@tsl//third_party/gpus/cuda/hermetic:cuda_json_init_repository.bzl",
    "cuda_json_init_repository",
)

cuda_json_init_repository()

load(
    "@cuda_redist_json//:distributions.bzl",
    "CUDA_REDISTRIBUTIONS",
    "CUDNN_REDISTRIBUTIONS",
)
load(
    "@tsl//third_party/gpus/cuda/hermetic:cuda_redist_init_repositories.bzl",
    "cuda_redist_init_repositories",
    "cudnn_redist_init_repository",
)

cuda_redist_init_repositories(
    cuda_redistributions = CUDA_REDISTRIBUTIONS,
)

cudnn_redist_init_repository(
    cudnn_redistributions = CUDNN_REDISTRIBUTIONS,
)

load(
    "@tsl//third_party/gpus/cuda/hermetic:cuda_configure.bzl",
    "cuda_configure",
)

cuda_configure(name = "local_config_cuda")

load(
    "@tsl//third_party/nccl/hermetic:nccl_redist_init_repository.bzl",
    "nccl_redist_init_repository",
)

nccl_redist_init_repository()

load(
    "@tsl//third_party/nccl/hermetic:nccl_configure.bzl",
    "nccl_configure",
)

nccl_configure(name = "local_config_nccl")
```

PiperOrigin-RevId: 662981325 14 August 2024, 17:58:43 UTC
bd9698e Deprecate several internal utilities in jax.core 14 August 2024, 17:06:13 UTC
87a5591 Merge pull request #23040 from jakevdp:isin-method PiperOrigin-RevId: 662957551 14 August 2024, 16:57:49 UTC
ad1bd38 Move logic about when to dispatch to batched LU decomposition algorithm on GPU into the kernel. This simplifies the lowering logic, and means that we don't get hit with a performance penalty when exporting with shape polymorphism. PiperOrigin-RevId: 662945116 14 August 2024, 16:20:40 UTC
bab70dd Reverts 734ebd570891ceaf8c7104e12256a1edfe942b14 PiperOrigin-RevId: 662942100 14 August 2024, 16:12:03 UTC
229cbae Add num_devices to Sharding interface so that it works with NamedSharding containing AbstractMesh too. PiperOrigin-RevId: 662938823 14 August 2024, 16:03:17 UTC
5cc6899 Use PEP484-style exports in several submodules 14 August 2024, 15:59:56 UTC
df2e9c3 [Mosaic] Fix lowering for `_dot_general_lowering_rule` to match the new `vector.MultiDimReductionOp` signature. PiperOrigin-RevId: 662933072 14 August 2024, 15:42:43 UTC
b0a144a Don't export ir_attribute from interpreters.mlir. PiperOrigin-RevId: 662918256 14 August 2024, 14:53:18 UTC
807dcb5 Integrate LLVM at llvm/llvm-project@c8b5d30f7077 Updates LLVM usage to match [c8b5d30f7077](https://github.com/llvm/llvm-project/commit/c8b5d30f7077) PiperOrigin-RevId: 662906261 14 August 2024, 14:09:53 UTC
6290cd7 Added pl.program_id and pl.num_programs to Mosaic GPU lowering PiperOrigin-RevId: 662836490 14 August 2024, 09:23:38 UTC
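A minimal Pallas sketch exercising the two primitives this entry lowers for Mosaic GPU; `pl.program_id` and `pl.num_programs` are existing Pallas APIs, while the kernel itself is our illustration:

```
import jax
import jax.numpy as jnp
from jax.experimental import pallas as pl

def kernel(o_ref):
    i = pl.program_id(0)    # this program's index along grid axis 0
    n = pl.num_programs(0)  # total number of programs along that axis
    o_ref[...] = jnp.full((1,), i / n, dtype=jnp.float32)

out = pl.pallas_call(
    kernel,
    grid=(8,),
    out_specs=pl.BlockSpec((1,), lambda i: (i,)),
    out_shape=jax.ShapeDtypeStruct((8,), jnp.float32),
)()
```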
2ab7558 [Mosaic GPU] Add support for grid tiling to improve L2 cache utilization While CUDA technically does not guarantee anything about the order in which blocks will be executed, in practice they are generally scheduled in column-major order within the grid. We can use this property to launch the blocks in a tiled way, which can lead to an improved rate of L2 hits and a significant performance boost. PiperOrigin-RevId: 662834982 14 August 2024, 09:17:55 UTC
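One common way to realize the tiling described above is a grouped reordering of block indices (familiar from matmul kernels elsewhere; not necessarily the exact mapping in this change): remap the linear block index so that consecutively launched blocks land inside a narrow strip of the output grid and therefore reuse the same operand rows and columns while they are still hot in L2.

```
def tiled_block(linear_id, grid_m, grid_n, group_m):
    # Blocks are grouped into strips of `group_m` rows; within a strip the
    # launch order walks down the rows first, so consecutively scheduled
    # blocks share operand columns.
    blocks_per_group = group_m * grid_n
    group = linear_id // blocks_per_group
    first_m = group * group_m
    size_m = min(grid_m - first_m, group_m)  # the last strip may be short
    m = first_m + (linear_id % size_m)
    n = (linear_id % blocks_per_group) // size_m
    return m, n
```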
f384497 [Mosaic GPU] Add support for cluster collective loads and barriers over multiple dimensions This will be useful for an upcoming change to the matmul kernel that splits the N blocks over two cluster dimensions. PiperOrigin-RevId: 662825455 14 August 2024, 08:47:12 UTC
4c4660a Merge pull request #23047 from froystig:docs PiperOrigin-RevId: 662779345 14 August 2024, 05:25:54 UTC
dbd6aee Disable some ASAN tests that time out PiperOrigin-RevId: 662774152 14 August 2024, 05:03:29 UTC
12eebfe docs: reorganize sections * Create "extension guides" section * Sort developer notes into subsections * Move examples from advanced section into user guides * Reorder some listings, adjust some titles 14 August 2024, 04:33:45 UTC
25da7ad Add method argument for jnp.isin 14 August 2024, 02:04:14 UTC
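Assumed usage of the new argument; the option names below (`'compare_all'`, `'binary_search'`, `'sort'`, with `'auto'` as the default) are an assumption based on JAX's set-operation helpers and may differ from the final API:

```
import jax.numpy as jnp

x = jnp.array([1, 3, 5, 7])
vals = jnp.array([3, 7])

jnp.isin(x, vals)                          # default: heuristic choice
jnp.isin(x, vals, method='binary_search')  # force a particular algorithm
```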
d17edb4 docs: fix `shard_map` guide headings These were off by one level, causing section titles to be listed in the guide index. 14 August 2024, 01:29:42 UTC
9a8f0a6 Add a devices property to AbstractMesh but raise an error in it. This is to make pytype happy PiperOrigin-RevId: 662712450 14 August 2024, 00:37:58 UTC
323e257 Fix test failures. PiperOrigin-RevId: 662703221 14 August 2024, 00:02:14 UTC
2b3ccce Merge pull request #23042 from jakevdp:dataclass-doc PiperOrigin-RevId: 662701533 13 August 2024, 23:55:51 UTC
5903c77 doc: clarify data_fields & meta_fields in register_dataclass 13 August 2024, 23:06:47 UTC
4e580d1 Update XLA dependency to use revision http://github.com/openxla/xla/commit/55476059f622468985141311ef20328993bd7ba5. PiperOrigin-RevId: 662672660 13 August 2024, 22:26:15 UTC
8f23392 [Mosaic:TPU] Refactor relayout helper functions to take ctx instead of only target shape. PiperOrigin-RevId: 662672417 13 August 2024, 22:22:46 UTC
daa69da Introduce `jax.sharding.AbstractMesh(shape_tuple: tuple[tuple[str, int], ...])` and allow `with_sharding_constraint` and `shard_map` to accept an abstract mesh as input (`with_sharding_constraint` is via `NamedSharding(abstract_mesh, pspec)`).

**Semantics** Inside jit, we never need to talk about concrete devices, so the semantics stay the same as today: we can lower a NamedSharding with an abstract mesh using only the mesh axis names and sizes and a PartitionSpec. The only restriction is that the number of devices needs to be consistent throughout the program while we are tracing. During compilation, the order of devices throughout the program needs to be consistent (same as before this change). Outside jit, i.e. in eager mode, if a `shard_map` or `with_sharding_constraint` contains an AbstractMesh, then the input to those primitives should contain a concrete Mesh with the same shape and names as the abstract mesh.

**Why do this?** There are cases where you want to change the devices in the mesh but keep the mesh shape the same (axis names and axis sizes). But this leads to a device mismatch error if you have `with_sharding_constraint` or `shard_map` in your computation, because they embed concrete devices in their signature. So to fix the error, you need to change the mesh in `wsc` and `shmap`, which leads to a tracing cache miss (because the function id is now different) and consequently a lowering-to-StableHLO cache miss. Explaining via an example:

```
mesh1 = Mesh(jax.devices()[:2], 'x')
mesh2 = Mesh(jax.devices()[2:4], 'x')

arr_mesh1 = jax.device_put(np.arange(8), NamedSharding(mesh1, P()))
arr_mesh2 = jax.device_put(np.arange(8), NamedSharding(mesh2, P()))

@jax.jit
def f(x):
  y = with_sharding_constraint(x, NamedSharding(mesh1, P('x')))
  return y * 2

f(arr_mesh1)
f(arr_mesh2)  # DEVICE MISMATCH ERROR!
```

The same problem exists for `shard_map`, since it takes a mesh with concrete devices in its signature.

**Okay, so how do you fix this?** As mentioned above, we need the above program to work and get tracing and lowering cache hits (**cache hits are the most important part here**). The approach in this change allows `with_sharding_constraint` to accept a `NamedSharding(abstract_mesh, pspec)` as input. This leads to no errors downstream, and we get tracing and lowering cache hits since we no longer encode the concrete devices, just the axis names and axis sizes of the mesh. **The important part is that the concrete device information should only come from the arguments. Inside `jax.jit`, you should never reference concrete devices.**

```
mesh1 = Mesh(jax.devices()[:2], 'x')
mesh2 = Mesh(jax.devices()[2:4], 'x')

arr_mesh1 = jax.device_put(np.arange(8), NamedSharding(mesh1, P()))
arr_mesh2 = jax.device_put(np.arange(8), NamedSharding(mesh2, P()))

# Create an abstract mesh from mesh1; since both meshes have the same shape
# (names and axis sizes), this is fine.
abstract_mesh = jax.sharding.AbstractMesh(mesh1.shape_tuple)

@jax.jit
def f(x):
  y = with_sharding_constraint(x, NamedSharding(abstract_mesh, P('x')))
  return y * 2

f(arr_mesh1)
f(arr_mesh2)  # tracing and lowering cache hit
```

**One caveat is that this only works with `jax.NamedSharding`, but that's fine, because `NamedSharding` is the most used `Sharding` in JAX.**

**What about `shard_map`?** shard_map's signature will be: `shmap(f, mesh: Mesh | AbstractMesh, in_specs: Specs, out_specs: Specs)`.

```
mesh1 = Mesh(jax.devices()[:2], 'x')
mesh2 = Mesh(jax.devices()[2:4], 'x')

arr_mesh1 = jax.device_put(np.arange(8), NamedSharding(mesh1, P()))
arr_mesh2 = jax.device_put(np.arange(8), NamedSharding(mesh2, P()))

# Create an abstract mesh from mesh1; since both meshes have the same shape
# (names and axis sizes), this is fine.
abstract_mesh = jax.sharding.AbstractMesh(mesh1.shape_tuple)

@jax.jit
def f(x):
  y = shard_map(lambda x: x, mesh=abstract_mesh,
                in_specs=P('x'), out_specs=P('x'))(x)
  return y * 2

f(arr_mesh1)
f(arr_mesh2)  # tracing and lowering cache hit
```

This is a fully backwards-compatible change, so your current code will continue to work as is, but you can opt into this new behavior and get all the benefits! PiperOrigin-RevId: 662670932 13 August 2024, 22:18:08 UTC
98521ad Add todo for slow codegen in Pallas pipeline PiperOrigin-RevId: 662661951 13 August 2024, 21:53:23 UTC
28dfe0d Import etils.epath lazily This shaves off an extra 0.1-0.2s from JAX import times internally. PiperOrigin-RevId: 662660356 13 August 2024, 21:48:38 UTC
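The usual pattern for this kind of deferral is a PEP 562 module-level `__getattr__`; a minimal sketch of the technique, not JAX's exact mechanism:

```
import importlib

def __getattr__(name):
    # Defer the expensive import until the attribute is first touched.
    if name == "epath":
        return importlib.import_module("etils.epath")
    raise AttributeError(f"module {__name__!r} has no attribute {name!r}")
```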
2dea3d6 [Mosaic:TPU] Add shuffled load and store. We also emulate shuffled stores using (store + shuffled load + store) on previous generations. PiperOrigin-RevId: 662657663 13 August 2024, 21:41:16 UTC
d2b85a4 Merge pull request #23036 from froystig:docs2 PiperOrigin-RevId: 662635310 13 August 2024, 20:40:19 UTC
9f68576 Disable TensorRT in TF, XLA and JAX. This is needed for hermetic CUDA integration in Google ML projects, since TensorRT is not distributed as freely as the other CUDA/CUDNN distributions. PiperOrigin-RevId: 662601190 13 August 2024, 18:58:31 UTC
3c223cd docs: tidy up titles and headings This shortens some titles and makes them more consistent. It also removes "JAX" from several titles ("in JAX", "for JAX", "JAX's", etc.). Since these are JAX docs, that ought to be clear from context. 13 August 2024, 18:53:57 UTC
a755f1d Import from ``mlir.dialects`` lazily These imports jointly account for ~0.3s of import time internally. PiperOrigin-RevId: 662588167 13 August 2024, 18:22:41 UTC
1bba838 Add logging of the jax2tf `mlir_module_serialized` module size. PiperOrigin-RevId: 662574156 13 August 2024, 17:47:07 UTC
955699c Merge pull request #23000 from dfm:dce-bug PiperOrigin-RevId: 662565548 13 August 2024, 17:29:30 UTC
3849e0e Merge pull request #23020 from jakevdp:setxor1d-size PiperOrigin-RevId: 662565510 13 August 2024, 17:25:46 UTC
7d2fbd5 [pallas] enable lowering on an AbstractMesh. PiperOrigin-RevId: 662533955 13 August 2024, 15:52:13 UTC
bab096e [Mosaic GPU] Add an autotuning harness to the matmul example PiperOrigin-RevId: 662521895 13 August 2024, 15:11:02 UTC
f4c0b1f [Mosaic GPU] Add control over the output format in the matmul example PiperOrigin-RevId: 662478648 13 August 2024, 12:33:12 UTC
52c269c jnp.setxor1d: add support for static size argument 13 August 2024, 12:24:59 UTC
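A sketch of why a static `size` matters: under `jit`, output shapes must be known at trace time, so the data-dependent-size result is padded or truncated to `size`. The `fill_value` parameter is assumed here by analogy with `jnp.unique` and the other set operations:

```
import jax
import jax.numpy as jnp

@jax.jit
def xor_fixed(x, y):
    # Pads the (data-dependent) result out to length 4 with zeros.
    return jnp.setxor1d(x, y, size=4, fill_value=0)

xor_fixed(jnp.array([1, 2, 3]), jnp.array([2, 3, 4]))  # -> [1, 4, 0, 0]
```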
5cf89b3 [Mosaic GPU] Add support for various swizzles in the matmul example PiperOrigin-RevId: 662459766 13 August 2024, 11:12:43 UTC
ca6be25 [Mosaic GPU] Move matmul tests to Hypothesis We've been generating thousands of test cases and that's just not scalable. Hypothesis should let us efficiently explore a large number of configurations. PiperOrigin-RevId: 662447113 13 August 2024, 10:21:51 UTC
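The style of test this entry refers to, sketched with assumed parameter names and value ranges:

```
from hypothesis import given, strategies as st

@given(
    m=st.sampled_from([64, 128, 256]),
    n=st.sampled_from([64, 128, 256]),
    swizzle=st.sampled_from([32, 64, 128]),
)
def test_matmul(m, n, swizzle):
    # Hypothesis explores this configuration space adaptively instead of
    # enumerating the full cross product.
    run_matmul_and_check(m, n, swizzle)  # hypothetical helper
```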
354293d Activate Singular Value Decomposition to XLA's FFI PiperOrigin-RevId: 662436635 13 August 2024, 09:41:57 UTC
1a7c6aa [pallas] Fix test timeouts PiperOrigin-RevId: 662420238 13 August 2024, 08:42:41 UTC
5fc992e Determine LAPACK workspaces during SVD kernel runtime The SVD kernel implementation used to require workspace shapes to be determined prior to the custom call on JAX's side. The new FFI kernels no longer require these shapes to be specified ahead of time; they are determined at kernel runtime. PiperOrigin-RevId: 662413273 13 August 2024, 08:17:44 UTC
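The LAPACK convention behind this: calling a driver with `lwork = -1` performs a workspace query that returns the optimal workspace size instead of computing. SciPy exposes helpers for the same query; the helper name below is an assumption for illustration:

```
from scipy.linalg import lapack

m, n = 128, 64
# Workspace query for the dgesdd SVD driver (lwork=-1 under the hood).
lwork, info = lapack.dgesdd_lwork(m, n)
```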
850edee Fix bug in custom_vjp with optimize_remat and custom_vmap. When used with a `custom_vmap` that introduces a new const, the previous implementation of `optimize_remat` would error in its DCE rule because of unexpected consts when closing the fwd jaxpr. This shouldn't ever have been hit, but there was a bug in the batching rule for `remat_opt_p` where we weren't properly converting constvars to invars. This change fixes that bug and should unbreak internal users. 13 August 2024, 08:06:57 UTC
69fc8bb Consolidate handling of input argument resolution in custom_* APIs. This is a partial re-land of https://github.com/google/jax/pull/22869 with some updates to ensure that it doesn't break existing uses of `custom_vmap`. Previously, using a `custom_jvp` or `custom_vjp` with a primal function that has keyword-only arguments would result in a type error, even if these arguments weren't passed by the caller. I believe that this check is actually slightly stricter than it needed to be, as discovered when adding a similar check to `custom_vmap`. Instead, I think that it is sufficient to check that the caller hasn't _passed_ any keyword-only arguments. The previous behavior in `custom_vmap` was even harsher: it would error if any keyword arguments were passed. In this change, I have moved `resolve_kwargs` into `api_utils` so that the same function can be used in both `custom_derivatives` and `custom_batching`. I've also updated the logic to only throw a `TypeError` if the caller passes a keyword only argument when calling a `custom_*`-decorated function. This changes the behavior of `custom_jvp` and `custom_vjp`, although users shouldn't see that effect, since previously having kwargs would have errored. PiperOrigin-RevId: 662402158 13 August 2024, 07:30:23 UTC
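Concretely, the behavior described above, as a sketch (the decorated function is hypothetical):

```
import jax

@jax.custom_jvp
def f(x, *, scale=2.0):  # keyword-only argument with a default
    return scale * x

@f.defjvp
def f_jvp(primals, tangents):
    (x,), (dx,) = primals, tangents
    return 2.0 * x, 2.0 * dx  # primal and tangent, with the default scale

f(1.0)             # OK now: the kwarg is not passed by the caller
f(1.0, scale=3.0)  # TypeError: passing a keyword-only argument is rejected
```

Previously, the mere presence of `scale` in the signature was a type error.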
23effba Merge pull request #23027 from froystig:docs PiperOrigin-RevId: 662392585 13 August 2024, 06:59:39 UTC
ff15835 Merge pull request #23017 from jakevdp:set-op-tests PiperOrigin-RevId: 662392445 13 August 2024, 06:59:21 UTC
69ba5f6 Merge pull request #22976 from jakevdp:extra-params-doc PiperOrigin-RevId: 662392239 13 August 2024, 06:54:49 UTC
09e7311 docs: more sentence case 13 August 2024, 03:07:49 UTC
4533aea Remove `jax_enable_memories` conditionals from JAX and remove it from tests too. PiperOrigin-RevId: 662322241 13 August 2024, 02:15:43 UTC
833560d Merge pull request #23023 from froystig:docs3 PiperOrigin-RevId: 662318149 13 August 2024, 02:03:12 UTC
b8f8b7b docs: sentence case page titles, section headings, some content 13 August 2024, 01:12:17 UTC
734ebd5 Support donating arrays with non-default layouts by setting up XLA donation directly instead of defining aliasing for arrays with potentially incompatible layouts. PiperOrigin-RevId: 662258042 12 August 2024, 22:58:52 UTC
26800f1 test: improve test cases for set-like operations Previously, many of the generated test cases were suboptimal because they had few overlaps. This change generates more comprehensive test cases. 12 August 2024, 22:49:46 UTC
7afa907 Update XLA dependency to use revision http://github.com/openxla/xla/commit/b18bc612506b4fb759e5930b9d4b24d4c33dbdbd. PiperOrigin-RevId: 662252977 12 August 2024, 22:43:30 UTC
aa66fb3 [Pallas][XLA:Mosaic] Add python stack traces to Mosaic errors that occur in Pallas. PiperOrigin-RevId: 662232859 12 August 2024, 21:42:48 UTC
b2eb2d4 Merge pull request #22600 from ROCm:feat/manylinux_2_28 PiperOrigin-RevId: 662217658 12 August 2024, 20:59:42 UTC
2644299 docs: sentence case index and sub-index headings We currently use both forms, so for consistency (and easier reading), pick this one. 12 August 2024, 20:52:43 UTC
fafa03c Add missing CPython build deps for pyenv 12 August 2024, 20:01:34 UTC
701cda8 Fix not finding wheels in bazel output 12 August 2024, 20:01:34 UTC
df2d140 Fix jenkins notty issue 12 August 2024, 20:01:34 UTC
319ebf8 Add defaults for ROCm build vars 12 August 2024, 20:01:34 UTC
abe44f6 Add copyright and license headers to new files 12 August 2024, 20:01:34 UTC
a1a0a4e Add support for ROCm development builds Use get_rocm.py changes in ci_build to pull in development builds for ROCm. Specify ROCM_BUILD_JOB and ROCM_BUILD_NUM for activating the development build path. 12 August 2024, 20:01:34 UTC
3175f13 Add internal release support to get_rocm.py 12 August 2024, 20:01:34 UTC
1e58d76 [ROCm] Change ROCm builds to manylinux wheels 12 August 2024, 20:01:34 UTC
e5eaff8 Replace `pjrt_c_api_gpu_plugin.so` symlink with XLA dependency. The runfiles of the original targets were lost when the symlinked files were used. This change is needed for the future hermetic CUDA implementation: Bazel will download CUDA distributions into the cache, and CUDA executables and libraries will be added to the runfiles of the targets. When pjrt_c_api_gpu_plugin.so is symlinked, the content of the runfiles is lost. With a proper XLA target dependency, the runfiles are preserved. PiperOrigin-RevId: 662197057 12 August 2024, 20:01:18 UTC
ee31e95 Register shutdown code at import to hopefully get registered before any other atexit callbacks. `atexit` callbacks are called in LIFO order, meaning that since JAX currently registers its callback at runtime rather than import time, it gets called before any `atexit` callbacks registered at import time. PiperOrigin-RevId: 662164776 12 August 2024, 18:29:08 UTC
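The LIFO behavior this relies on is plain `atexit` semantics:

```
import atexit

atexit.register(lambda: print("registered first, runs last"))
atexit.register(lambda: print("registered last, runs first"))
# At interpreter exit:
#   registered last, runs first
#   registered first, runs last
```

Registering at import time therefore pushes JAX's shutdown callback deeper in the stack, so it runs after callbacks registered later at runtime.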
7a873c0 Merge pull request #23014 from google:dependabot/github_actions/actions/upload-artifact-4.3.6 PiperOrigin-RevId: 662161436 12 August 2024, 18:18:29 UTC
da259f8 Merge pull request #22979 from jakevdp:intersect1d-size PiperOrigin-RevId: 662154746 12 August 2024, 18:02:07 UTC