900a283 Filter out methods that are unreachable from native AbsInt 17 July 2024, 13:56:32 UTC
7c9a465 LinearAlgebra: use `≈` instead of `==` for `tr` tests in symmetric.jl (#55143) After investigating JuliaLang/julia#54090, I found that the issue was not caused by the effects of `checksquare`, but by the use of the `@simd` macro within `tr(::Matrix)`: https://github.com/JuliaLang/julia/blob/0945b9d7740855c82a09fed42fbf6bc561e02c77/stdlib/LinearAlgebra/src/dense.jl#L373-L380 While simply removing the `@simd` macro was considered, the strict left-to-right summation that would be used without `@simd` is not necessarily more accurate, so I concluded that the problem lies in the test code, which tests the (strict) equality of two different `tr` execution results. I have modified the test code to use `≈` instead of `==`. - fixes #54090 17 July 2024, 08:39:27 UTC
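For illustration, a minimal sketch (with made-up data, not from the PR) of why reassociated summation, such as that produced by `@simd`, can change the last bits of a floating-point result, making `≈` the appropriate comparison in these tests:

```julia
using LinearAlgebra

A = rand(100, 100)
S = Symmetric(A + A')

# Two mathematically equal quantities whose summation order differs:
t1 = tr(Matrix(S))    # dense trace; the loop in dense.jl uses @simd
t2 = sum(diag(S))     # pairwise summation over the diagonal

t1 == t2   # may be false due to floating-point rounding
t1 ≈ t2    # robust: true up to a small relative tolerance
```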
b049f93 don't throw EOFError from sleep (#54955) 17 July 2024, 04:35:45 UTC
1c7ce01 inference: add basic support for `:globaldecl` expressions (#55144) Following up #54773. Required for external abstract interpreters that may run inference on arbitrary top-level thunks. 16 July 2024, 23:39:29 UTC
946301c create separate function to spawn GC threads (#55108) Third-party GCs (e.g. MMTk) will probably have their own function to spawn GC threads. 16 July 2024, 22:00:06 UTC
2efcfd9 Update the aarch64 devdocs to reflect the current state of its support (#55141) The devdocs here reflect a time when aarch64 was much less well supported; they also reference Cudadrv, which has been archived for years. 16 July 2024, 19:47:18 UTC
0945b9d Replace some occurrences of iteration over 1:length with more idiomatic structures (mostly eachindex) (#55137) Base should be a model for the ecosystem, and `eachindex(x)` is better than `1:length(x)` in almost all cases. I've updated many, but certainly not all examples. This is mostly an NFC change, but it also fixes #55136. 16 July 2024, 11:35:52 UTC
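As a small illustrative example (not taken from the PR) of why `eachindex(x)` is preferred over `1:length(x)`:

```julia
# `1:length(v)` silently assumes 1-based, contiguous indexing.
# `eachindex(v)` asks the array for its own valid indices, so the same
# loop also works for OffsetArrays, views with unusual axes, etc.
function mysum(v)
    s = zero(eltype(v))
    for i in eachindex(v)   # rather than 1:length(v)
        s += v[i]
    end
    return s
end

mysum([1, 2, 3])  # 6
```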
742e7d9 Convert message in timing macros before printing (#55122) 16 July 2024, 10:43:25 UTC
d02bfeb REPL: Remove hard-coded prompt strings in favour of pre-existing constants (#55109) 16 July 2024, 10:22:46 UTC
7b0a189 Fix typo in code comment (#55133) 16 July 2024, 05:26:47 UTC
17668e9 delete some unused fields of jl_gc_mark_cache_t (#55138) They should have been deleted in https://github.com/JuliaLang/julia/pull/54936, but were not. 16 July 2024, 02:42:14 UTC
f8bec45 Simplify sweeping of big values (#54936) Simplifies the layout of the doubly linked list of big objects to make it a bit more canonical: let's just store a pointer to the previous element, instead of storing a "pointer to the next element of the previous element". This should make the implementation a bit easier to understand without incurring any memory overhead. I ran the serial and multithreaded benchmarks from GCBenchmarks and this seems fairly close to performance neutral on my machine. We also ran our internal benchmarks on it at RAI and it looks fine from a correctness and performance point of view. --------- Co-authored-by: Kiran Pamnany <kpamnany@users.noreply.github.com> 15 July 2024, 22:20:23 UTC
d3ce499 🤖 [master] Bump the Pkg stdlib from 046df8ce4 to d801e4545 (#55128) Stdlib: Pkg URL: https://github.com/JuliaLang/Pkg.jl.git Stdlib branch: master Julia branch: master Old commit: 046df8ce4 New commit: d801e4545 Julia version: 1.12.0-DEV Pkg version: 1.12.0 Bump invoked by: @IanButterworth Powered by: [BumpStdlibs.jl](https://github.com/JuliaLang/BumpStdlibs.jl) Diff: https://github.com/JuliaLang/Pkg.jl/compare/046df8ce407659cfaccc647265a6e57bfb02e056...d801e4545f548ab9061a018bf19db1944c711607 ``` $ git log --oneline 046df8ce4..d801e4545 d801e4545 fix artifact printing quiet rule (#3951) ``` Co-authored-by: Dilum Aluthge <dilum@aluthge.com> 15 July 2024, 21:28:37 UTC
d49a3c7 use jl_gc_alloc inside jl_alloc_string (#55098) 15 July 2024, 17:25:48 UTC
7676196 `mkpath` always returns the original path (#54857) fix #54826 related performance test code: ```julia using BenchmarkTools using Base: IOError function checkmode(mode::Integer) if !(0 <= mode <= 511) throw(ArgumentError("Mode must be between 0 and 511 = 0o777")) end mode end function mkpath(path::AbstractString; mode::Integer = 0o777) dir = dirname(path) # stop recursion for `""`, `"/"`, or existed dir (path == dir || isdir(path)) && return path mkpath(dir, mode = checkmode(mode)) try # cases like `mkpath("x/")` will cause an error if `isdir(path)` is skipped # the error will not be rethrowed, but it may be slower, and thus we avoid it in advance isdir(path) || mkdir(path, mode = mode) catch err # If there is a problem with making the directory, but the directory # does in fact exist, then ignore the error. Else re-throw it. if !isa(err, IOError) || !isdir(path) rethrow() end end return path end function mkpath_2(path::AbstractString; mode::Integer = 0o777) dir = dirname(path) # stop recursion for `""` and `"/"` or existed dir (path == dir || isdir(path)) && return path mkpath_2(dir, mode = checkmode(mode)) try mkdir(path, mode = mode) catch err # If there is a problem with making the directory, but the directory # does in fact exist, then ignore the error. Else re-throw it. if !isa(err, IOError) || !isdir(path) rethrow() end end return path end versioninfo() display(@benchmark begin rm("A", recursive=true, force=true); mkpath("A/B/C/D/") end) display(@benchmark begin rm("A", recursive=true, force=true); mkpath_2("A/B/C/D/") end) ``` output: ``` Julia Version 1.10.4 Commit 48d4fd48430 (2024-06-04 10:41 UTC) Build Info: Official https://julialang.org/ release Platform Info: OS: macOS (x86_64-apple-darwin22.4.0) CPU: 16 × Intel(R) Core(TM) i9-9900K CPU @ 3.60GHz WORD_SIZE: 64 LIBM: libopenlibm LLVM: libLLVM-15.0.7 (ORCJIT, skylake) Threads: 1 default, 0 interactive, 1 GC (on 16 virtual cores) Environment: JULIA_EDITOR = code JULIA_NUM_THREADS = BenchmarkTools.Trial: 8683 samples with 1 evaluation. Range (min â€Ķ max): 473.972 Ξs â€Ķ 18.867 ms ┊ GC (min â€Ķ max): 0.00% â€Ķ 0.00% Time (median): 519.704 Ξs ┊ GC (median): 0.00% Time (mean Âą σ): 571.261 Ξs Âą 378.851 Ξs ┊ GC (mean Âą σ): 0.00% Âą 0.00% ▂█▇▄▁ ▃███████▇▅▄▄▃▃▃▂▂▂▂▂▂▂▂▂▂▂▂▂▂▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁ ▂ 474 Ξs Histogram: frequency by time 961 Ξs < Memory estimate: 5.98 KiB, allocs estimate: 65. BenchmarkTools.Trial: 6531 samples with 1 evaluation. Range (min â€Ķ max): 588.122 Ξs â€Ķ 17.449 ms ┊ GC (min â€Ķ max): 0.00% â€Ķ 0.00% Time (median): 660.071 Ξs ┊ GC (median): 0.00% Time (mean Âą σ): 760.333 Ξs Âą 615.759 Ξs ┊ GC (mean Âą σ): 0.00% Âą 0.00% ██▆▄▃▁▁ ▁ ██████████▆▇█▇▆▆▅▅▅▄▁▄▁▄▄▃▄▃▁▃▃▁▃▁▁▁▄▁▁▁▁▁▁▁▃▃▁▁▁▃▁▁▃▁▁▁▁▅▅▄▇ █ 588 Ξs Histogram: log(frequency) by time 4.2 ms < Memory estimate: 5.63 KiB, allocs estimate: 72. ``` 15 July 2024, 16:46:25 UTC
b88f64f make codegen threadsafe, sinking the necessary lock now into JuliaOJIT (#55106) This adds a new helper `jl_read_codeinst_invoke` that should help manage reading the state out of a CodeInstance correctly everywhere. Then replaces all of the places where we have optimizations in codegen where we check for this (to build a name in the JIT for it) with that call. And finally moves the `jl_codegen_lock` into `jl_ExecutionEngine->jitlock` so that it is now more clear that this is only protecting concurrent access to the JIT state it manages (which includes the invoke field of all CodeInstance objects). In a subsequent followup, that `jitlock` and `codeinst_in_flight` will be replaced with something akin to the new engine (for CodeInfo inference) which helps partition that JIT lock mechanism (for CodeInstance / JIT insertion) to correspond just to a single CodeInstance, and not globally to all of them. 14 July 2024, 17:27:55 UTC
e496b2e Fix type instability in convs using CompoundPeriod (#54995) The functions `toms`, `tons`, and `days` uses `sum` over a vector of `Period`s to obtain the conversion of a `CompoundPeriod`. However, the compiler cannot infer the return type because those functions can return either `Int` or `Float` depending on the type of the `Period`. This PR forces the result of those functions to be `Float64`, fixing the type stability. Before this PR we had: ```julia julia> using Dates julia> p = Dates.Second(1) + Dates.Minute(1) + Dates.Year(1) 1 year, 1 minute, 1 second julia> @code_warntype Dates.tons(p) MethodInstance for Dates.tons(::Dates.CompoundPeriod) from tons(c::Dates.CompoundPeriod) @ Dates ~/.julia/juliaup/julia-nightly/share/julia/stdlib/v1.12/Dates/src/periods.jl:458 Arguments #self#::Core.Const(Dates.tons) c::Dates.CompoundPeriod Body::Any 1 ─ %1 = Dates.isempty::Core.Const(isempty) │ %2 = Base.getproperty(c, :periods)::Vector{Period} │ %3 = (%1)(%2)::Bool └── goto #3 if not %3 2 ─ return 0.0 3 ─ %6 = Dates.Float64::Core.Const(Float64) │ %7 = Dates.sum::Core.Const(sum) │ %8 = Dates.tons::Core.Const(Dates.tons) │ %9 = Base.getproperty(c, :periods)::Vector{Period} │ %10 = (%7)(%8, %9)::Any │ %11 = (%6)(%10)::Any └── return %11 julia> @code_warntype Dates.toms(p) MethodInstance for Dates.toms(::Dates.CompoundPeriod) from toms(c::Dates.CompoundPeriod) @ Dates ~/.julia/juliaup/julia-nightly/share/julia/stdlib/v1.12/Dates/src/periods.jl:454 Arguments #self#::Core.Const(Dates.toms) c::Dates.CompoundPeriod Body::Any 1 ─ %1 = Dates.isempty::Core.Const(isempty) │ %2 = Base.getproperty(c, :periods)::Vector{Period} │ %3 = (%1)(%2)::Bool └── goto #3 if not %3 2 ─ return 0.0 3 ─ %6 = Dates.Float64::Core.Const(Float64) │ %7 = Dates.sum::Core.Const(sum) │ %8 = Dates.toms::Core.Const(Dates.toms) │ %9 = Base.getproperty(c, :periods)::Vector{Period} │ %10 = (%7)(%8, %9)::Any │ %11 = (%6)(%10)::Any └── return %11 julia> @code_warntype Dates.days(p) MethodInstance for Dates.days(::Dates.CompoundPeriod) from days(c::Dates.CompoundPeriod) @ Dates ~/.julia/juliaup/julia-nightly/share/julia/stdlib/v1.12/Dates/src/periods.jl:468 Arguments #self#::Core.Const(Dates.days) c::Dates.CompoundPeriod Body::Any 1 ─ %1 = Dates.isempty::Core.Const(isempty) │ %2 = Base.getproperty(c, :periods)::Vector{Period} │ %3 = (%1)(%2)::Bool └── goto #3 if not %3 2 ─ return 0.0 3 ─ %6 = Dates.Float64::Core.Const(Float64) │ %7 = Dates.sum::Core.Const(sum) │ %8 = Dates.days::Core.Const(Dates.days) │ %9 = Base.getproperty(c, :periods)::Vector{Period} │ %10 = (%7)(%8, %9)::Any │ %11 = (%6)(%10)::Any └── return %11 ``` After this PR we have: ```julia julia> using Dates julia> p = Dates.Second(1) + Dates.Minute(1) + Dates.Year(1) 1 year, 1 minute, 1 second julia> @code_warntype Dates.tons(p) MethodInstance for Dates.tons(::Dates.CompoundPeriod) from tons(c::Dates.CompoundPeriod) @ Dates ~/.julia/juliaup/julia-nightly/share/julia/stdlib/v1.12/Dates/src/periods.jl:458 Arguments #self#::Core.Const(Dates.tons) c::Dates.CompoundPeriod Body::Float64 1 ─ %1 = Dates.isempty::Core.Const(isempty) │ %2 = Base.getproperty(c, :periods)::Vector{Period} │ %3 = (%1)(%2)::Bool └── goto #3 if not %3 2 ─ return 0.0 3 ─ %6 = Dates.Float64::Core.Const(Float64) │ %7 = Dates.sum::Core.Const(sum) │ %8 = Dates.tons::Core.Const(Dates.tons) │ %9 = Base.getproperty(c, :periods)::Vector{Period} │ %10 = (%7)(%8, %9)::Any │ %11 = (%6)(%10)::Any │ %12 = Dates.Float64::Core.Const(Float64) │ %13 = Core.typeassert(%11, %12)::Float64 └── return %13 julia> @code_warntype 
Dates.toms(p) MethodInstance for Dates.toms(::Dates.CompoundPeriod) from toms(c::Dates.CompoundPeriod) @ Dates ~/.julia/juliaup/julia-nightly/share/julia/stdlib/v1.12/Dates/src/periods.jl:454 Arguments #self#::Core.Const(Dates.toms) c::Dates.CompoundPeriod Body::Float64 1 ─ %1 = Dates.isempty::Core.Const(isempty) │ %2 = Base.getproperty(c, :periods)::Vector{Period} │ %3 = (%1)(%2)::Bool └── goto #3 if not %3 2 ─ return 0.0 3 ─ %6 = Dates.Float64::Core.Const(Float64) │ %7 = Dates.sum::Core.Const(sum) │ %8 = Dates.toms::Core.Const(Dates.toms) │ %9 = Base.getproperty(c, :periods)::Vector{Period} │ %10 = (%7)(%8, %9)::Any │ %11 = (%6)(%10)::Any │ %12 = Dates.Float64::Core.Const(Float64) │ %13 = Core.typeassert(%11, %12)::Float64 └── return %13 julia> @code_warntype Dates.days(p) MethodInstance for Dates.days(::Dates.CompoundPeriod) from days(c::Dates.CompoundPeriod) @ Dates ~/.julia/juliaup/julia-nightly/share/julia/stdlib/v1.12/Dates/src/periods.jl:468 Arguments #self#::Core.Const(Dates.days) c::Dates.CompoundPeriod Body::Float64 1 ─ %1 = Dates.isempty::Core.Const(isempty) │ %2 = Base.getproperty(c, :periods)::Vector{Period} │ %3 = (%1)(%2)::Bool └── goto #3 if not %3 2 ─ return 0.0 3 ─ %6 = Dates.Float64::Core.Const(Float64) │ %7 = Dates.sum::Core.Const(sum) │ %8 = Dates.days::Core.Const(Dates.days) │ %9 = Base.getproperty(c, :periods)::Vector{Period} │ %10 = (%7)(%8, %9)::Any │ %11 = (%6)(%10)::Any │ %12 = Dates.Float64::Core.Const(Float64) │ %13 = Core.typeassert(%11, %12)::Float64 └── return %13 ``` 14 July 2024, 17:14:54 UTC
df3fe22 🤖 [master] Bump the Pkg stdlib from 8c996799b to 046df8ce4 (#55112) 13 July 2024, 21:13:36 UTC
6e11ffc make threaded exit and cache file loading safer by pausing other threads (#55105) 13 July 2024, 21:12:47 UTC
0fd1f04 Optimize `jl_get_world_counter` ccall (#55032) This is useful for hot loops that perform an `invoke_latest`-like operation, such as the upcoming `TypedCallable` 13 July 2024, 09:58:22 UTC
6cb71e7 NFC: cleanup jl_gc_set_max_memory a bit (#55110) This code is quite contrived. Let's simplify it a bit. 12 July 2024, 23:50:44 UTC
aba6766 Color todo admonitions in the REPL (#54957) This patch adds magenta coloring for `!!! todo` admonitions in addition to the existing styling of `danger`, `warning`, `info`, `note`, `tip`, and `compat` admonitions. This is useful if you want to leave some more colorful todo notes in docstrings, for example. Accompanying PR for Documenter to render these in HTML and PDF docs: https://github.com/JuliaDocs/Documenter.jl/pull/2526. 12 July 2024, 14:31:20 UTC
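A hypothetical docstring using the newly colored admonition (`frobnicate` is a made-up name; the `!!! todo` block is what now renders in magenta in the REPL):

```julia
"""
    frobnicate(x)

Frobnicate `x`.

!!! todo
    Handle the empty case and add doctests.
"""
frobnicate(x) = x
```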
2a8bdd0 remove USAGE part of error message when using main macro (#55037) I hit this error message and it felt like someone was very angry at me. 12 July 2024, 12:57:30 UTC
fba928d fix loading of repeated/concurrent modules (#55066) More follow-up to fix issues with require. There was an accidental variable reuse (build_id) that caused it to be unable to load cache files in many cases. There was also a missing check for a dependency already being loaded, resulting in trying to load it twice. Finally, the start_loading code may drop the require_lock, but the surrounding code was not prepared for that. Now integrate the necessary checks into start_loading, instead of needing to duplicate them before and afterwards. Fixes #53983 Fixes #54940 Closes #55064 11 July 2024, 11:31:59 UTC
b491bcc Add alternative compat for public other than Compat.jl (#55097) 11 July 2024, 08:22:34 UTC
ad407a6 Actually set up JIT targets when compiling package images instead of targeting only one (#54471) 11 July 2024, 08:06:25 UTC
262b40a Fix `(l/r)mul!` with `Diagonal`/`Bidiagonal` (#55052) Currently, `rmul!(A::AbstractMatrix, D::Diagonal)` calls `mul!(A, A, D)`, but this isn't a valid call, as `mul!` assumes no aliasing between the destination and the matrices to be multiplied. As a consequence, ```julia julia> B = Bidiagonal(rand(4), rand(3), :L) 4×4 Bidiagonal{Float64, Vector{Float64}}: 0.476892 ⋅ ⋅ ⋅ 0.353756 0.139188 ⋅ ⋅ ⋅ 0.685839 0.309336 ⋅ ⋅ ⋅ 0.369038 0.304273 julia> D = Diagonal(rand(size(B,2))); julia> rmul!(B, D) 4×4 Bidiagonal{Float64, Vector{Float64}}: 0.0 ⋅ ⋅ ⋅ 0.0 0.0 ⋅ ⋅ ⋅ 0.0 0.0 ⋅ ⋅ ⋅ 0.0 0.0 julia> B 4×4 Bidiagonal{Float64, Vector{Float64}}: 0.0 ⋅ ⋅ ⋅ 0.0 0.0 ⋅ ⋅ ⋅ 0.0 0.0 ⋅ ⋅ ⋅ 0.0 0.0 ``` This is clearly nonsense, and happens because the internal `_mul!` function assumes that it can safely overwrite the destination with zeros before carrying out the multiplication. This is fixed in this PR by using broadcasting instead. The current implementation is generally equally performant, albeit occasionally with a minor allocation arising from `reshape`ing an `Array`. A similar problem also exists in `l/rmul!` with `Bidiagonal`, but that's a little harder to fix while remaining equally performant. 11 July 2024, 06:03:11 UTC
faf17eb create GC TLS (#55086) Encapsulates all relevant GC thread-local state into a separate structure. The motivation is that MMTk will have its own version of GC thread-local state, so it doesn't need all of the Julia GC TLS. In the future, folks who would be using MMTk would be setting a pre-processor flag which would lead to either the stock Julia GC TLS or MMTk's GC TLS to be included in `julia_threads.h`. I.e., we would have something like: ```C #ifdef MMTK_GC jl_gc_mmtk_tls_states mmtk_gc_tls; #else jl_gc_tls_states gc_tls; #endif ``` 10 July 2024, 19:54:31 UTC
3ab8fef Fix some dependency build issues with `USE_BINARYBUILDER=0` (#55091) A couple of the changes made in #54538 were incorrect. In particular: - libunwind (non-LLVM) does not use CMake, so the `$(CMAKE) --build` is simply reverted here back to `$(MAKE) -C`. - zlib does use CMake but regular Make flags were being passed to its `$(CMAKE) --build`. Those can just be dropped since it's already getting the proper CMake flags. 10 July 2024, 15:55:11 UTC
24535f6 Remove IndexStyle specialization for AdjOrTransAbsMat (#55077) Since `IndexStyle` falls back to `IndexCartesian` by default, this specialization is unnecessary. 10 July 2024, 11:26:09 UTC
ec013f1 LinearAlgebra: LazyString in error messages for Diagonal/Bidiagonal (#55070) 10 July 2024, 04:15:20 UTC
2759961 drop jl_gc_pool_alloc in favor of instrumented version (#55085) And do some re-naming. 09 July 2024, 20:57:24 UTC
e732706 Use triple quotes in TOML.print when string contains newline (#55084) closes #55083 Should this also check for `\r`? --------- Co-authored-by: Alex Arslan <ararslan@comcast.net> 09 July 2024, 18:59:02 UTC
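A sketch of the behavior this commit describes (the exact output shape is indicative only):

```julia
using TOML

TOML.print(Dict("msg" => "first line\nsecond line"))
# Expected to emit a multi-line (triple-quoted) string rather than an
# escaped newline, roughly:
#
# msg = """
# first line
# second line"""
```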
d7609d8 trace-compile: don't generate `precompile` statements for OpaqueClosure methods (#55072) These Methods cannot be looked up via their type signature, so they are incompatible with the `precompile(...)` mechanism. 09 July 2024, 14:01:30 UTC
40966f2 Recommend using RawFD instead of the Int returned by `fd` (#55027) Helps with https://github.com/JuliaLang/julia/issues/51710 --------- Co-authored-by: Daniel Karrasch <daniel.karrasch@posteo.de> 09 July 2024, 13:51:07 UTC
23748ec Use neutral element as init in reduce doctest (#55065) The documentation of reduce states that init must be the neutral element. However, the provided doctest uses a non-neutral element for init. Fix this by changing the example. 09 July 2024, 13:36:47 UTC
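For context, a short sketch of the requirement that `init` be the neutral element of the operation (illustrative values):

```julia
reduce(*, [2, 3, 4]; init = 1)   # 24; `1` is the neutral element of `*`
reduce(+, [2, 3, 4]; init = 0)   # 9;  `0` is the neutral element of `+`

# A non-neutral init skews the result, and reduce does not guarantee
# how many times it is folded in:
reduce(*, [2, 3, 4]; init = 10)  # 240 here, but relying on this is unspecified
```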
fc775c5 add missing setting of inferred field when setting inference result (#55081) This previously could confuse inference, which expects that the field is set to indicate that the rettype has been computed, and cannot tolerate putting objects in the cache for which that is not true. This was causing Nanosoldier to fail. Also cleanup `sv.unreachable` after IR modification so that it remains (mostly) consistent, even though unused (except for the unreachable nodes themselves), as this was confusing me in debugging. 09 July 2024, 12:29:56 UTC
ec90012 Add fast method for copyto!(::Memory, ::Memory) (#55082) Previously, this method hit the slow generic AbstractArray fallback. Closes #55079 This is an ad-hoc bandaid that really ought to be fixed by resolving #54581. 09 July 2024, 07:20:39 UTC
594544d REPL: warn on non-owning qualified accesses (#54872) * Accessing names from other modules can be dangerous, because those names may only be in that module by happenstance, e.g. a name exported by Base * the keyword `public` encourages more qualified accesses, increasing the risk of accidentally accessing a name that isn't part of that module on purpose * ExplicitImports.jl can catch this in a package context, but this requires opting in. Folks who might not be writing packages or might not be using dev tooling like ExplicitImports can easily get caught off guard by these accesses (and might be less familiar with the issue than package developers) * using a REPL AST transform we can emit warnings when we notice this happening in the REPL 08 July 2024, 19:06:34 UTC
ed987f2 Bidiagonal to Tridiagonal with immutable bands (#55059) Using `similar` to generate the zero band necessarily allocates a mutable vector, which would lead to an error if the other bands are immutable. This PR changes this to use `zero` instead, which usually produces a vector of the same type. There are occasions where `zero(v)` produces a different type from `v`, so an extra conversion is added to obtain a zero vector of the same type. The following works after this: ```julia julia> using FillArrays, LinearAlgebra julia> n = 4; B = Bidiagonal(Fill(3, n), Fill(2, n-1), :U) 4×4 Bidiagonal{Int64, Fill{Int64, 1, Tuple{Base.OneTo{Int64}}}}: 3 2 ⋅ ⋅ ⋅ 3 2 ⋅ ⋅ ⋅ 3 2 ⋅ ⋅ ⋅ 3 julia> Tridiagonal(B) 4×4 Tridiagonal{Int64, Fill{Int64, 1, Tuple{Base.OneTo{Int64}}}}: 3 2 ⋅ ⋅ 0 3 2 ⋅ ⋅ 0 3 2 ⋅ ⋅ 0 3 julia> Tridiagonal{Float64}(B) 4×4 Tridiagonal{Float64, Fill{Float64, 1, Tuple{Base.OneTo{Int64}}}}: 3.0 2.0 ⋅ ⋅ 0.0 3.0 2.0 ⋅ ⋅ 0.0 3.0 2.0 ⋅ ⋅ 0.0 3.0 ``` 08 July 2024, 09:18:38 UTC
23dabef add support for indexing in `@atomic` macro (#54707) Following the discussion in #54642 Implemented: - [x] `modifyindex_atomic!`, `swapindex_atomic!`, `replaceindex_atomic!` for `GenericMemory` - [x] `getindex_atomic`, `setindex_atomic!`, `setindexonce_atomic!` for `GenericMemory` - [x] add support for references in `@atomic` macros - [x] add support for vararg indices in `@atomic` macros - [x] tests - [x] update docstrings with example usage - ~[ ] update Atomics section of the manual (?)~ - [x] news @oscardssmith @vtjnash # New `@atomic` transformations implemented here: ```julia julia> @macroexpand (@atomic a[i1,i2]) :(Base.getindex_atomic(a, :sequentially_consistent, i1, i2)) julia> @macroexpand (@atomic order a[i1,i2]) :(Base.getindex_atomic(a, order, i1, i2)) julia> @macroexpand (@atomic a[i1,i2] = 2.0) :(Base.setindex_atomic!(a, :sequentially_consistent, 2.0, i1, i2)) julia> @macroexpand (@atomic order a[i1,i2] = 2.0) :(Base.setindex_atomic!(a, order, 2.0, i1, i2)) julia> @macroexpand (@atomicswap a[i1,i2] = 2.0) :(Base.swapindex_atomic!(a, :sequentially_consistent, 2.0, i1, i2)) julia> @macroexpand (@atomicswap order a[i1,i2] = 2.0) :(Base.swapindex_atomic!(a, order, 2.0, i1, i2)) julia> @macroexpand (@atomic a[i1,i2] += 2.0) :((Base.modifyindex_atomic!(a, :sequentially_consistent, +, 2.0, i1, i2))[2]) julia> @macroexpand (@atomic order a[i1,i2] += 2.0) :((Base.modifyindex_atomic!(a, order, +, 2.0, i1, i2))[2]) julia> @macroexpand (@atomiconce a[i1,i2] = 2.0) :(Base.setindexonce_atomic!(a, :sequentially_consistent, :sequentially_consistent, 2.0, i1, i2)) julia> @macroexpand (@atomiconce o1 o2 a[i1,i2] = 2.0) :(Base.setindexonce_atomic!(a, o1, o2, 2.0, i1, i2)) julia> @macroexpand (@atomicreplace a[i1,i2] (2.0=>3.0)) :(Base.replaceindex_atomic!(a, :sequentially_consistent, :sequentially_consistent, 2.0, 3.0, i1, i2)) julia> @macroexpand (@atomicreplace o1 o2 a[i1,i2] (2.0=>3.0)) :(Base.replaceindex_atomic!(a, o1, o2, 2.0, 3.0, i1, i2)) ``` --------- Co-authored-by: Oscar Smith <oscardssmith@gmail.com> 07 July 2024, 21:07:59 UTC
0ef2bb6 fix concurrent module loading return value (#54898) Previously this might return `nothing` which would confuse the caller of `start_loading` which expects that to mean the Module didn't load. It is not entirely clear if this code ever worked, even single-threaded. Fix #54813 07 July 2024, 19:00:27 UTC
aa07585 lowering: Don't resolve type bindings earlier than necessary (#54999) This is a follow up to resolve a TODO left in #54773 as part of preparatory work for #54654. Currently, our lowering for type definition contains an early `isdefined` that forces a decision on binding resolution before the assignment of the actual binding. In the current implementation, this doesn't matter much, but with #54654, this would incur a binding invalidation we would like to avoid. To get around this, we extend the (internal) `isdefined` form to take an extra argument specifying whether or not to permit looking at imported bindings. If not, resolving the binding is not required semantically, but for the purposes of type definition (where assigning to an imported binding would error anyway), this is all we need. 06 July 2024, 22:18:34 UTC
082e142 remove unused managed realloc (#55050) Follow-up to https://github.com/JuliaLang/julia/pull/54949. Again, motivation is to clean up the GC interface a bit by removing unused functions (particularly after the Memory work). 06 July 2024, 14:33:06 UTC
7d4afba CI: update LabelCheck banned labels (#55051) 06 July 2024, 12:27:27 UTC
1837202 Matmul: `matprod_dest` for `Diagonal` * `SymTridiagonal` (#55039) We specialize `matprod_dest` for the combination of a `Diagonal` and a `SymTridiagonal`, in which case the destination is a `Tridiagonal`. With this, the specialized methods `*(::Diagonal, ::SymTridiagonal)` and `*(::SymTridiagonal, ::Diagonal)` don't need to be defined anymore, which reduces potential method ambiguities. 06 July 2024, 01:47:53 UTC
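A sketch of the structure being exploited (illustrative values): the product of a `Diagonal` and a `SymTridiagonal` is tridiagonal, so `Tridiagonal` is the natural destination type:

```julia
using LinearAlgebra

D = Diagonal([1.0, 2.0, 3.0])
S = SymTridiagonal([4.0, 5.0, 6.0], [1.0, 2.0])

P = D * S
P isa Tridiagonal   # expected to hold with the specialized matprod_dest
```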
e318166 cleanup remset logic a bit (#55021) I think that keeping a single `remset` (instead of two that we keep alternating between) should be a bit easier to understand and possibly even a bit faster (since we will be accessing the `remset` only once), though that should be a very small difference. 06 July 2024, 01:30:00 UTC
7122311 Some mailmap updates (#55048) Updating for consistency with more recent commits by authors. 05 July 2024, 20:54:37 UTC
0d9404f Declare type for `libgcc_s` handles in CompilerSupportLibraries_jll (#55011) This improves the type stability of this stdlib. 05 July 2024, 20:00:59 UTC
59f08df LAPACK: annotate size check in `lacpy!` with `@noinline` for reduced latency (#55029) The `@noinline` annotation on the size check appears to reduce latency in a second call with different argument types: ```julia julia> using LinearAlgebra julia> A = rand(2,2); B = similar(A); julia> @time LAPACK.lacpy!(B, A, 'U'); 0.032585 seconds (29.80 k allocations: 1.469 MiB, 99.84% compilation time) julia> A = rand(Float32,2,2); B = similar(A); julia> @time LAPACK.lacpy!(B, A, 'U'); 0.026698 seconds (22.80 k allocations: 1.113 MiB, 99.84% compilation time) # v"1.12.0-DEV.810" 0.024715 seconds (19.88 k allocations: 987.000 KiB, 99.80% compilation time) # Without noinline 0.017084 seconds (18.52 k allocations: 903.828 KiB, 99.72% compilation time) # This PR (with noinline) ``` 05 July 2024, 15:50:46 UTC
140248e Point to ModernJuliaWorkflows in "getting started" (#55036) Add link to https://modernjuliaworkflows.github.io/ 05 July 2024, 15:19:15 UTC
2e3628d remove unused jl_gc_alloc_*w (#55026) Closes https://github.com/JuliaLang/julia/issues/55024. 05 July 2024, 14:23:31 UTC
5468a3e Fix a regression in the test for #13432 (#55004) The test for #13432 is supposed to test a particular codegen path involving Bottom. It turns out that this code path is regressed on master, but hidden by the fact that in modern julia, the optimizer can fold this code path early. Fix the bug and add a variant of the test that shows the issue on julia master. Note that this is both an assertion failure and incorrect codegen. This PR addresses both. 05 July 2024, 14:20:47 UTC
a5f0016 Support `@opaque Tuple{T,U...}->RT (...)->...` syntax for explicit arg/return types (#54947) This gives users a way to explicitly specify the return type of an OpaqueClosure, and it also removes the old syntax `@opaque AT ...` in favor of `@opaque AT->_ ...` 04 July 2024, 20:47:48 UTC
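A sketch of the new explicit-types form, assuming the macro is invoked as `Base.Experimental.@opaque` (argument names and values are made up):

```julia
# Argument tuple type -> return type, followed by the closure itself:
oc = Base.Experimental.@opaque Tuple{Int,Int}->Float64 (x, y) -> x / y

oc(1, 2)  # 0.5, with the return type pinned to Float64
```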
8f1f223 mark a flaky Sockets test as broken (#55030) As suggested in https://github.com/JuliaLang/julia/issues/55008#issuecomment-2207136025. Closes https://github.com/JuliaLang/julia/issues/55008. 04 July 2024, 20:42:48 UTC
c388382 Make the __init__ in GMP more statically compilable (#55012) Co-authored-by: Jeff Bezanson <jeff.bezanson@gmail.com> 04 July 2024, 16:05:57 UTC
8083506 staticdata: Unique Bindings by mod/name (#54993) Currently we error when attempting to serialize Bindings that do not belong to the incremental module (GlobalRefs have special logic to avoid looking at the binding field). With #54654, Bindings will show up in more places, so let's just unique them properly by their module/name identity. Of course, we then have two objects so serialized (both GlobalRef and Binding), which suggests that we should perhaps finish the project of unifying them. This is not currently possible, because the existence of a binding object in the binding table has semantic content, but this will change with #54654, so we can do such a change thereafter. 04 July 2024, 15:51:23 UTC
34bacaa delete possibly stale reset_gc_stats (#55015) See discussion in https://github.com/JuliaLang/julia/issues/55014. Doesn't seem breaking, but I can close the PR if it is. Closes https://github.com/JuliaLang/julia/issues/55014. 03 July 2024, 23:00:43 UTC
ce4f090 Make ScopedValues public (#54574) 03 July 2024, 16:47:12 UTC
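For reference, a minimal usage sketch of the now-public API:

```julia
using Base.ScopedValues   # ScopedValue and with are public as of this change

const verbosity = ScopedValue(1)

with(verbosity => 3) do
    verbosity[]   # 3 inside this dynamic scope
end
verbosity[]       # 1 again outside the scope
```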
1193997 Don't require that `@inbounds` depends only on local information (#54270) Co-authored-by: Steven G. Johnson <stevenj@alum.mit.edu> 02 July 2024, 20:24:13 UTC
2712633 remove stale objprofile (#54991) As mentioned in https://github.com/JuliaLang/julia/issues/54968, `OBJPROFILE` exposes a functionality which is quite similar to what the heap snapshot does, but has a considerably worse visualization tool (i.e. raw printf's compared to the snapshot viewer from Chrome). Closes https://github.com/JuliaLang/julia/issues/54968. 02 July 2024, 19:32:22 UTC
7579659 inference: add missing `MustAlias` widening in `_getfield_tfunc` (#54996) Otherwise it may result in a missing `⊑` method error in use cases of external abstract interpreters using `MustAliasesLattice`, like JET. 02 July 2024, 17:23:59 UTC
6cf3a05 RFC: Make `include_dependency(path; track_content=true)` the default (#54965) By changing the default to `true` we make it easier to build relocatable packages from already existing ones when 1.11 lands. This keyword was just added during 1.11, so it's not yet too late to change its default. 02 July 2024, 17:22:30 UTC
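A sketch of the keyword in question, as it might appear in a hypothetical package's top-level code (the file path is made up):

```julia
# Tracked by content hash (now the default), so relocating the package
# does not by itself invalidate the precompile cache:
include_dependency(joinpath(@__DIR__, "data", "table.csv"))

# The previous mtime-based tracking can still be requested explicitly:
include_dependency(joinpath(@__DIR__, "data", "table.csv"); track_content = false)
```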
f2558c4 Add timing to precompile trace compile (#54962) I think this tool is there mainly to see what's taking so long, so timing information is helpful. 01 July 2024, 22:08:21 UTC
1fdc6a6 NFC: create an actual set of functions to manipulate GC thread ids (#54984) Also adds a bunch of integrity constraint checks to ensure we don't repeat the bug from https://github.com/JuliaLang/julia/pull/54645. 01 July 2024, 19:28:23 UTC
6139779 remove reference to a few stale GC environment variables (#54990) Did a quick grep and couldn't find any reference to them besides this manual. 01 July 2024, 19:08:53 UTC
4b4468a repl: Also ignore local imports with specified symbols (#54982) Otherwise it's trying to find the `.` package, which obviously doesn't exist. This isn't really a problem - it just does some extra processing and loads Pkg, but let's make this correct anyway. 01 July 2024, 14:27:03 UTC
00c700e refactor `contextual` test files (#54970) - moved non-contextual tests into `staged.jl` - moved `@overlay` tests into `core.jl` - test `staged.jl` in an interpreter mode 01 July 2024, 04:11:07 UTC
41bde01 address a TODO in gc_sweep_page (i.e. save an unnecessary fetch-add) (#54976) 29 June 2024, 18:24:32 UTC
334e4d9 simplify handling of buffered pages (#54961) Simplifies handling of buffered pages by keeping them in a single place (`global_page_pool_lazily_freed`) instead of making them thread local. Performance has been assessed on the serial & multithreaded GCBenchmarks and it has shown to be performance neutral. 28 June 2024, 15:46:50 UTC
8791d54 cglobal: Fall back to runtime intrinsic (#54914) Our codegen for `cglobal` was sharing the `static_eval` code for symbols with ccall. However, we do have full runtime emulation for this intrinsic, so mandating that the symbol can be statically evaluated is not required and causes semantic differences between the interpreter and codegen, which is undesirable. Just fall back to the runtime intrinsic instead. 28 June 2024, 11:07:52 UTC
89e391b Fix accidental early evaluation of imported `using` binding (#54956) In `using A.B`, we need to evaluate `A.B` to add the module to the using list. However, in `using A: B`, we do not care about the value of `A.B`, we only operate at the binding level. These two operations share a code path and the evaluation of `A.B` happens early and is unused on the `using A: B` path. I believe this was an unintentional oversight when the latter syntax was added. Fixes #54954. 28 June 2024, 11:06:26 UTC
b5d0b90 inference: implement an opt-in interface to cache generated sources (#54916) In Cassette-like systems, where inference has to infer many calls of `@generated` function and the generated function involves complex code transformations, the overhead from code generation itself can become significant. This is because the results of code generation are not cached, leading to duplicated code generation in the following contexts: - `method_for_inference_heuristics` for regular inference on cached `@generated` function calls (since `method_for_inference_limit_heuristics` isn't stored in cached optimized sources, but is attached to generated unoptimized sources). - `retrieval_code_info` for constant propagation on cached `@generated` function calls. Having said that, caching unoptimized sources generated by `@generated` functions is not a good tradeoff in general cases, considering the memory space consumed (and the image bloat). The code generation for generators like `GeneratedFunctionStub` produced by the front end is generally very simple, and the first duplicated code generation mentioned above does not occur for `GeneratedFunctionStub`. So this unoptimized source caching should be enabled in an opt-in manner. Based on this idea, this commit defines the trait `abstract type Core.CachedGenerator` as an interface for the external systems to opt-in. If the generator is a subtype of this trait, inference caches the generated unoptimized code, sacrificing memory space to improve the performance of subsequent inferences. Specifically, the mechanism for caching the unoptimized source uses the infrastructure already implemented in JuliaLang/julia#54362. Thanks to JuliaLang/julia#54362, the cache for generated functions is now partitioned by world age, so even if the unoptimized source is cached, the existing invalidation system will invalidate it as expected. In JuliaDebug/CassetteOverlay.jl#56, the following benchmark results showed that approximately 1.5~3x inference speedup is achieved by opting into this feature: ## Setup ```julia using CassetteOverlay, BaseBenchmarks, BenchmarkTools @MethodTable table; pass = @overlaypass table; BaseBenchmarks.load!("inference"); benchfunc1() = sin(42) benchfunc2(xs, x) = findall(>(x), abs.(xs)) interp = BaseBenchmarks.InferenceBenchmarks.InferenceBenchmarker() # benchmark inference on entire call graphs from scratch @benchmark BaseBenchmarks.InferenceBenchmarks.@inf_call pass(benchfunc1) @benchmark BaseBenchmarks.InferenceBenchmarks.@inf_call pass(benchfunc2, rand(10), 0.5) # benchmark inference on the call graphs with most of them cached @benchmark BaseBenchmarks.InferenceBenchmarks.@inf_call interp=interp pass(benchfunc1) @benchmark BaseBenchmarks.InferenceBenchmarks.@inf_call interp=interp pass(benchfunc2, rand(10), 0.5) ``` ## Benchmark inference on entire call graphs from scratch > on master ``` julia> @benchmark BaseBenchmarks.InferenceBenchmarks.@inf_call pass(benchfunc1) BenchmarkTools.Trial: 61 samples with 1 evaluation. Range (min â€Ķ max): 78.574 ms â€Ķ 87.653 ms ┊ GC (min â€Ķ max): 0.00% â€Ķ 8.81% Time (median): 83.149 ms ┊ GC (median): 4.85% Time (mean Âą σ): 82.138 ms Âą 2.366 ms ┊ GC (mean Âą σ): 3.36% Âą 2.65% ▂ ▂▂ █ ▂ █ ▅ ▅ █▅██▅█▅▁█▁▁█▁▁▁▁▅▁▁▁▁▁▁▁▁▅▁▁▅██▅▅█████████▁█▁▅▁▁▁▁▁▁▁▁▁▁▁▁▅ ▁ 78.6 ms Histogram: frequency by time 86.8 ms < Memory estimate: 52.32 MiB, allocs estimate: 1201192. julia> @benchmark BaseBenchmarks.InferenceBenchmarks.@inf_call pass(benchfunc2, rand(10), 0.5) BenchmarkTools.Trial: 4 samples with 1 evaluation. 
Range (min â€Ķ max): 1.345 s â€Ķ 1.369 s ┊ GC (min â€Ķ max): 2.45% â€Ķ 3.39% Time (median): 1.355 s ┊ GC (median): 2.98% Time (mean Âą σ): 1.356 s Âą 9.847 ms ┊ GC (mean Âą σ): 2.96% Âą 0.41% █ █ █ █ █▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁█▁▁▁▁▁█▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁█ ▁ 1.35 s Histogram: frequency by time 1.37 s < Memory estimate: 637.96 MiB, allocs estimate: 15159639. ``` > with this PR ``` julia> @benchmark BaseBenchmarks.InferenceBenchmarks.@inf_call pass(benchfunc1) BenchmarkTools.Trial: 230 samples with 1 evaluation. Range (min â€Ķ max): 19.339 ms â€Ķ 82.521 ms ┊ GC (min â€Ķ max): 0.00% â€Ķ 0.00% Time (median): 19.938 ms ┊ GC (median): 0.00% Time (mean Âą σ): 21.665 ms Âą 4.666 ms ┊ GC (mean Âą σ): 6.72% Âą 8.80% ▃▇█▇▄ ▂▂▃▃▄ █████▇█▇▆▅▅▆▅▅▁▅▁▁▁▁▁▁▁▁▁██████▆▁█▁▅▇▆▁▅▁▁▅▁▅▁▁▁▁▁▁▅▁▁▁▁▁▁▅ ▆ 19.3 ms Histogram: log(frequency) by time 29.4 ms < Memory estimate: 28.67 MiB, allocs estimate: 590138. julia> @benchmark BaseBenchmarks.InferenceBenchmarks.@inf_call pass(benchfunc2, rand(10), 0.5) BenchmarkTools.Trial: 14 samples with 1 evaluation. Range (min â€Ķ max): 354.585 ms â€Ķ 390.400 ms ┊ GC (min â€Ķ max): 0.00% â€Ķ 7.01% Time (median): 368.778 ms ┊ GC (median): 3.74% Time (mean Âą σ): 368.824 ms Âą 8.853 ms ┊ GC (mean Âą σ): 3.70% Âą 1.89% ▃ █ ▇▁▁▁▁▁▁▁▁▁▁█▁▇▇▁▁▁▁▇▁▁▁▁█▁▁▁▁▇▁▁▇▁▁▇▁▁▁▇▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▇ ▁ 355 ms Histogram: frequency by time 390 ms < Memory estimate: 227.86 MiB, allocs estimate: 4689830. ``` ## Benchmark inference on the call graphs with most of them cached > on master ``` julia> @benchmark BaseBenchmarks.InferenceBenchmarks.@inf_call interp=interp pass(benchfunc1) BenchmarkTools.Trial: 10000 samples with 1 evaluation. Range (min â€Ķ max): 45.166 Ξs â€Ķ 9.799 ms ┊ GC (min â€Ķ max): 0.00% â€Ķ 98.96% Time (median): 46.792 Ξs ┊ GC (median): 0.00% Time (mean Âą σ): 48.339 Ξs Âą 97.539 Ξs ┊ GC (mean Âą σ): 2.01% Âą 0.99% ▁▂▄▆▆▇███▇▆▅▄▃▄▄▂▂▂▁▁▁ ▁▁▂▂▁ ▁ ▂▁ ▁ ▃ ▃▇██████████████████████▇████████████████▇█▆▇▇▆▆▆▅▆▆▆▇▆▅▅▅▆ █ 45.2 Ξs Histogram: log(frequency) by time 55 Ξs < Memory estimate: 25.27 KiB, allocs estimate: 614. julia> @benchmark BaseBenchmarks.InferenceBenchmarks.@inf_call interp=interp pass(benchfunc2, rand(10), 0.5) BenchmarkTools.Trial: 10000 samples with 1 evaluation. Range (min â€Ķ max): 303.375 Ξs â€Ķ 16.582 ms ┊ GC (min â€Ķ max): 0.00% â€Ķ 97.38% Time (median): 317.625 Ξs ┊ GC (median): 0.00% Time (mean Âą σ): 338.772 Ξs Âą 274.164 Ξs ┊ GC (mean Âą σ): 5.44% Âą 7.56% ▃▆██▇▅▂▁ ▂▂▄▅██████████▇▆▅▅▄▄▃▃▃▃▃▃▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▁▂▂▂▂▂▂▂▁▂▁▁▂▁▂▂ ▃ 303 Ξs Histogram: frequency by time 394 Ξs < Memory estimate: 412.80 KiB, allocs estimate: 6224. ``` > with this PR ``` @benchmark BaseBenchmarks.InferenceBenchmarks.@inf_call interp=interp pass(benchfunc1) BenchmarkTools.Trial: 10000 samples with 6 evaluations. Range (min â€Ķ max): 5.444 Ξs â€Ķ 1.808 ms ┊ GC (min â€Ķ max): 0.00% â€Ķ 99.01% Time (median): 5.694 Ξs ┊ GC (median): 0.00% Time (mean Âą σ): 6.228 Ξs Âą 25.393 Ξs ┊ GC (mean Âą σ): 5.73% Âą 1.40% ▄█▇▄ ▁▂▄█████▇▄▃▃▃▂▂▂▃▂▂▂▂▂▂▂▂▂▂▂▂▂▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁ ▂ 5.44 Ξs Histogram: frequency by time 7.47 Ξs < Memory estimate: 8.72 KiB, allocs estimate: 196. julia> @benchmark BaseBenchmarks.InferenceBenchmarks.@inf_call interp=interp pass(benchfunc2, rand(10), 0.5) BenchmarkTools.Trial: 10000 samples with 1 evaluation. 
Range (min â€Ķ max): 211.000 Ξs â€Ķ 36.187 ms ┊ GC (min â€Ķ max): 0.00% â€Ķ 0.00% Time (median): 223.000 Ξs ┊ GC (median): 0.00% Time (mean Âą σ): 280.025 Ξs Âą 750.097 Ξs ┊ GC (mean Âą σ): 6.86% Âą 7.16% █▆▄▂▁ ▁ ███████▇▇▇▆▆▆▅▆▅▅▅▅▅▄▅▄▄▄▅▅▁▄▅▃▄▄▄▃▄▄▃▅▄▁▁▃▄▁▃▁▁▁▃▄▃▁▃▁▁▁▃▃▁▃ █ 211 Ξs Histogram: log(frequency) by time 1.46 ms < Memory estimate: 374.17 KiB, allocs estimate: 5269. ``` 28 June 2024, 01:08:21 UTC
ca0b2a8 LAPACK: Avoid repr call in `chkvalidparam` (#54952) We were calling `repr` here to interpolate the character with the quotes into the error message. However, this is overkill for this application, and `repr` introduces dynamic dispatch into the call. This PR hard-codes the quotes into the string, which matches the pattern followed in the other error messages following `chkvalidparam`. 27 June 2024, 18:37:53 UTC
17e6e69 Make a few places resilient to inference not working (#54948) When working on Base, if you break inference (in a way that preserves correctness, but not precision), it would be nice if the system bootstrapped anyway, since it's easier to poke at the system if the REPL is running. However, there were a few places where we were relying on the inferred element type for empty collections while passing those values to callees with narrow type signatures. Switch these to comprehensions with declared type instead, so that even if inference is (temporarily) borked, things will still bootstrap fine. 27 June 2024, 18:18:33 UTC
d6dd59b Reduce branches in 2x2 and 3x3 stable_muladdmul for standard cases (#54951) We may use the knowledge that `alpha != 0` at the call site to hard-code `alpha = true` in the `MulAddMul` constructor if `alpha isa Bool`. This eliminates the `!isone(alpha)` branches in `@stable_muladdmul`, and reduces latency in matrix multiplication. ```julia julia> using LinearAlgebra julia> A = rand(2,2); julia> @time A * A; 0.596825 seconds (1.05 M allocations: 53.458 MiB, 5.94% gc time, 99.95% compilation time) # nightly v"1.12.0-DEV.789" 0.473140 seconds (793.52 k allocations: 39.946 MiB, 3.28% gc time, 99.93% compilation time) # this PR ``` In a separate session, ```julia julia> @time A * Symmetric(A); 0.829252 seconds (2.37 M allocations: 120.051 MiB, 1.98% gc time, 99.98% compilation time) # nightly v"1.12.0-DEV.789" 0.712953 seconds (2.06 M allocations: 103.951 MiB, 2.17% gc time, 99.98% compilation time) # This PR ``` 27 June 2024, 18:17:47 UTC
beb4f19 Revert "reflection: refine and accurately define the options for `names`" (#54959) It breaks over 500 packages: https://s3.amazonaws.com/julialang-reports/nanosoldier/pkgeval/by_date/2024-06/23/report.html 27 June 2024, 17:28:43 UTC
5163d55 Add optimised findall(isequal(::Char), ::String) (#54593) This uses the same approach as the existing findnext and findprev functions in the same file. The following benchmark: ```julia using BenchmarkTools s = join(rand('A':'z', 10000)); @btime findall(==('c'), s); ``` Gives these results: * This PR: 3.489 μs * 1.11-beta1: 31.970 μs 27 June 2024, 16:50:47 UTC
f3298ee SuiteSparse: Bump version (#54950) This should have no functional changes, however, it will affect the version of non-stdlib JLLs. I'd like to see if we can add this as a backport candidate to 1.11 since it doesn't change Julia functionality at all, but does allow some non-stdlib JLLs to be kept current. Otherwise at least the SPEX linear solvers and the ParU linear solvers will be missing multiple significant features until 1.12. 27 June 2024, 13:30:47 UTC
b2657a5 Fixed diagonal matrix eigen decomposition to return eigenvectors as diagonal matrix (#54882) 27 June 2024, 08:55:31 UTC
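A sketch of the changed behavior (illustrative values; the return types are as described in the commit, not verified here):

```julia
using LinearAlgebra

F = eigen(Diagonal([1.0, 2.0, 3.0]))
F.values    # the diagonal entries
F.vectors   # expected to be a Diagonal (identity-like) matrix after this change
```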
4564134 remove a bunch of unnecessary bit clearing in bigval's sz field (#54946) We don't store anything in the lowest two bits of `sz` after https://github.com/JuliaLang/julia/pull/49644. 27 June 2024, 04:03:07 UTC
06e81bc remove stale realloc_string function since we no longer use it (#54949) Seems like this got stale after the Memory work. 26 June 2024, 23:32:33 UTC
db687ad add mechanism for configuring system image builds (#54387) This adds the option to pass a filename of configuration settings when building the Core/compiler system image (from `base/compiler/compiler.jl`). This makes it easier to build different flavors of images, for example it can replace the hack that PackageCompiler uses to edit the list of included stdlibs, and makes it easy to change knobs you might want like max_methods. 26 June 2024, 20:41:10 UTC
4e1fd72 Update noteworthy-differences.md (#54939) Additional comparison with Matlab for bulk editing matrices for operations such as applying a threshold. Useful discussion here: https://discourse.julialang.org/t/replacing-values-of-specific-entries-in-an-array-in-julia/25259 and here: https://stackoverflow.com/questions/56583807/replacing-values-of-specific-entries-in-an-array-in-julia 26 June 2024, 14:42:51 UTC
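For example, the kind of bulk thresholding the added comparison covers (Matlab's `A(A>t) = t` style), written with Julia's logical indexing and broadcasting (illustrative values):

```julia
A = [0.2 1.5; 3.0 0.7]
t = 1.0

B = min.(A, t)     # broadcasted, non-mutating version
A[A .> t] .= t     # in-place, Matlab-like logical indexing
```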
14956a1 validate `:const` expr properly (#54938) 26 June 2024, 10:39:09 UTC
9fecc19 fix effects for Float64^Int on 32 bit (#54934) fixes https://github.com/JuliaLang/julia/pull/54910 properly. The recursion heuristic was getting mad at this for some reason... 26 June 2024, 04:38:54 UTC
07f7efd 🤖 [master] Bump the SparseArrays stdlib from 82b385f to e61663a (#54931) Stdlib: SparseArrays URL: https://github.com/JuliaSparse/SparseArrays.jl.git Stdlib branch: main Julia branch: master Old commit: 82b385f New commit: e61663a Julia version: 1.12.0-DEV SparseArrays version: 1.12.0 Bump invoked by: @ViralBShah Powered by: [BumpStdlibs.jl](https://github.com/JuliaLang/BumpStdlibs.jl) Diff: https://github.com/JuliaSparse/SparseArrays.jl/compare/82b385ff7db4c0ed57b06df53dca351e041db78d...e61663ad0a79a48906b0b12d53506e731a614ab8 ``` $ git log --oneline 82b385f..e61663a e61663a Update to SuiteSparse 7.7 (#545) 4141e8a Update gen/README.md (#544) 45dfe45 Update ci.yml to ot fail if codecov fails (#541) 0888db6 Bump julia-actions/cache from 1 to 2 (#540) 740b82a test: Don't use GPL module when Base.USE_GPL_LIBS=false (#535) ``` Co-authored-by: Dilum Aluthge <dilum@aluthge.com> 25 June 2024, 17:20:42 UTC
f6f1ff2 Aggressive constprop in the PermutedDimsArray constructor (#54926) After this, the return type in the `PermutedDimsArray` constructor is concretely inferred for `Array`s if the permutation is known at compile time: ```julia julia> @inferred (() -> PermutedDimsArray(collect(reshape(1:8,2,2,2)), (2,3,1)))() 2×2×2 PermutedDimsArray(::Array{Int64, 3}, (2, 3, 1)) with eltype Int64: [:, :, 1] = 1 5 3 7 [:, :, 2] = 2 6 4 8 ``` This should address the second concern in https://github.com/JuliaLang/julia/issues/54918 25 June 2024, 15:44:20 UTC
8ee98b2 Aggressive constprop in mapslices (#54928) This helps improve type-inference, e.g. the return type in the following call is concretely inferred after this: ```julia julia> @inferred (() -> mapslices(sum, reshape(collect(1:16), 2, 2, 2, 2), dims=(3,4)))() 2×2×1×1 Array{Int64, 4}: [:, :, 1, 1] = 28 36 32 40 ``` This should address the first concern in https://github.com/JuliaLang/julia/issues/54918 25 June 2024, 15:16:51 UTC
5654e60 inference: simplify generated function handling (#54912) Removed some unnecessary type assertions in the handling of generated functions to simplify the code a bit. There are no changes to the basic functionality. 25 June 2024, 02:41:10 UTC
f8bdd32 Print binary sizes of more libraries in `build-stats` target (#54901) I just picked some big files that seemed related to Julia or LLVM. 24 June 2024, 20:23:39 UTC
a7fa1e7 #54739-related fixes for loading stdlibs (#54891) This fixes a couple unconventional issues people encountered and were able to report as bugs against #54739 Note that due to several bugs in REPLExt itself (https://github.com/JuliaLang/julia/issues/54889, https://github.com/JuliaLang/julia/issues/54888), loading the extension may still crash julia in some circumstances, but that is now a Pkg bug, and no longer the fault of the loading code. 24 June 2024, 19:19:29 UTC
36a0da0 LazyString in interpolated error messages in threadingconstructs (#54908) 24 June 2024, 05:52:52 UTC
f846b89 also disable `is_foldable` test on other 32bit platforms (#54910) Follow-on from https://github.com/JuliaLang/julia/pull/54323 @aviatesk I think this should also be disabled on linux? It's been failing there On `test i686-linux-gnu` ``` math (7) | failed at 2024-06-23T13:58:26.792 Test Failed at /cache/build/tester-amdci5-8/julialang/julia-buildkite/julia-2d4ef8a238/share/julia/test/math.jl:1607 Expression: Core.Compiler.is_foldable(effects) Context: effects = (+c,+e,+n,!t,+s,+m,+u,+o) T = Float64 ``` 24 June 2024, 05:50:23 UTC
696d9c3 Follow up #54772 - don't accidentally put `Module` into method name slot (#54856) The `:outerref` removal (#54772) ended up accidentally putting a `Module` argument into 3-argument `:method` due to a bad refactor. We didn't catch this in the tests, because the name slot of 3-argument `:method` is unused (except for external method tables), but ordinarily contains the same data as 1-argument `:method`. That said, some packages in the Revise universe look at this (arguably incorrectly, since they should be looking at the signature instead), so it should be correct until we fix Revise, at which point we may just want to always pass `false` here. 23 June 2024, 18:24:31 UTC
dfd1d49 lowering: Refactor lowering for const and typed globals (#54773) This is a prepratory commit for #54654 to change the lowering of `const` and typed globals to be compatible with the new semantics. Currently, we lower `const a::T = val` to: ``` const a global a::T a = val ``` (which further expands to typed-globals an implicit converts). This works, because, under the hood, our const declarations are actually assign-once globals. Note however, that this is not syntactically reachable, since we have a parse error for plain `const a`: ``` julia> const a ERROR: ParseError: # Error @ REPL[1]:1:1 const a └─────┘ ── expected assignment after `const` Stacktrace: [1] top-level scope @ none:1 ``` However, this lowering is not atomic with respect to world age. The semantics in #54654 require that the const-ness and the value are established atomically (with respect to world age, potentially on another thread) or undergo invalidation. To resolve this issue, this PR changes the lowering of `const a::T = val` to: ``` let local a::T = val const (global a) = a end ``` where the latter is a special syntax form `Expr(:const, GlobalRef(,:a), :a)`. A similar change is made to const global declarations, which previously lowered via intrinsic, i.e. `global a::T = val` lowered to: ``` global a Core.set_binding_type!(Main, :a, T) _T = Core.get_binding_type(Main, :a) if !isa(val, _T) val = convert(_T, val) end a = val ``` This changes the `set_binding_type!` to instead be a syntax form `Expr(:globaldecl, :a, T)`. This is not technically required, but we currently do not use intrinsics for world-age affecting side-effects anywhere else in the system. In particular, after #54654, it would be illegal to call `set_binding_type!` in anything but top-level context. Now, we have discussed in the past that there should potentially be intrinsic functions for global modifications (method table additions, etc), currently only reachable through `Core.eval`, but such an intrinsic would require semantics that differ from both the current `set_binding_type!` and the new `:globaldecl`. Using an Expr form here is the most consistent with our current practice for these sort of things elsewhere and accordingly, this PR removes the intrinsic. Note that this PR does not yet change any syntax semantics, although there could in principle be a reordering of side-effects within an expression (e.g. things like `global a::(@isdefined(a) ? Int : Float64)` might behave differently after this commit. However, we never defined the order of side effects (which is part of what this is cleaning up, although, I am not formally defining any specific ordering here either - #54654 will do some of that), and that is not a common case, so this PR should be largely considered non-semantic with respect to the syntax change. Also fixes #54787 while we're at it. 23 June 2024, 13:57:35 UTC
2d4ef8a skip compileall test on all 32-bit (#54897) 23 June 2024, 11:17:29 UTC
5da1f06 mark failing double counting test as broken on i686-linux (#54896) Introduced by https://github.com/JuliaLang/julia/pull/54606 See https://github.com/JuliaLang/julia/pull/54606#issuecomment-2183664446 Issue https://github.com/JuliaLang/julia/issues/54895 23 June 2024, 03:04:38 UTC
2bf4750 implement "engine" for managing inference/codegen (#54816) Continuing from previous PRs to making CodeInstance the primary means of tracking compilation, this introduces an "engine" which keeps track externally of whether a particular inference result is in progress and where. At present, this handles unexpected cycles by permitting both threads to work on it. This is likely to be optimal most of the time currently, until we have the ability to do work-stealing of the results. To assist with that, CodeInstance is now primarily allocated by `jl_engine_reserve`, which also tracks that this is being currently inferred. This creates a sort of per-(MI,owner) tuple lock mechanism, which can be used with the double-check pattern to see if inference was completed while waiting on that. The `world` value is not included since that is inferred later, so there is a possibility that a thread waits only to discover that the result was already invalid before it could use it (though this should be unlikely). The process then can notify when it has finished and wants to release the reservation lock on that identity pair. When doing so, it may also provide source code, allowing the process to potentially begin a threadpool to compile that result while the main thread continues with the job of inference. Includes fix for #53434, by ensuring SOURCE_MODE_ABI results in the item going into the global cache. Fixes #53433, as inInference is computed by the engine and protected by a lock, which also fixes #53680. 23 June 2024, 00:28:18 UTC
323e725 serialization: fix relocatability bug (#54738) 22 June 2024, 18:05:02 UTC