https://github.com/JuliaLang/julia

33701f1 Use original computed edges during serialization instead of trying to guess them 22 July 2024, 20:40:22 UTC
326fab0 compute edges post-inference, from info available there 22 July 2024, 20:40:22 UTC
834d9dc inference: compute edges more precisely in post-inference Start computing edges from stmt_info later (after CodeInstance is able to have been allocated for recursion) instead of immediately. 22 July 2024, 20:40:22 UTC
3544696 add edges metadata field to CodeInfo/CodeInstance, prepare for using This records all invoke targets as edges as a functionality test, before finishing the implementation of recording the edges accurately during inference (via backedges + inference). 22 July 2024, 20:40:22 UTC
2b3e4bb Make Core.TypeofUnion use the type method table Ensures that adding or examining the methods of Type{Union{}} in the method table returns the correct results. Fixes #55187 22 July 2024, 20:40:22 UTC
b79856e Allowing disabling the default prompt suffix in `Base.getpass()` (#53614) By default `Base.getpass()` will append `: ` to the prompt, but sometimes that's undesirable, so now there's a `with_suffix` keyword argument to disable it. For context, my use-case is showing SSH prompts received from a server verbatim to the user. I did attempt to write a test for this but it would've required a lot of refactoring since `getpass()` is pretty hardcoded to expect input from `stdin` :\ An alternative design would be to have a `suffix=": "` argument instead, but I couldn't think of a good usecase for a user putting the prompt suffix in a separate argument instead of `message` itself. --------- Co-authored-by: Matt Bauman <mbauman@juliacomputing.com> 22 July 2024, 16:47:20 UTC
5fae0ff 🤖 [master] Bump the Pkg stdlib from d801e4545 to 6b4394914 (#55205) Stdlib: Pkg URL: https://github.com/JuliaLang/Pkg.jl.git Stdlib branch: master Julia branch: master Old commit: d801e4545 New commit: 6b4394914 Julia version: 1.12.0-DEV Pkg version: 1.12.0 Bump invoked by: @IanButterworth Powered by: [BumpStdlibs.jl](https://github.com/JuliaLang/BumpStdlibs.jl) Diff: https://github.com/JuliaLang/Pkg.jl/compare/d801e4545f548ab9061a018bf19db1944c711607...6b43949143662f7d5b72e5e4fcbae19bf0f5ff7b ``` $ git log --oneline d801e4545..6b4394914 6b4394914 Use more internal Pkg.add api to bypass auto-registry-install (#3941) 6002a29de Pkg.test: document that coverage can be a string (#3957) 77f0225b8 don't use `get_extension` to bridge REPLExt to REPLMode (#3959) e6880bc9d add clarifying comment about source_path being the package root (#3956) b1b4df8d8 Fix codeblock language and prompt in Pkg.status() docstring (#3955) ``` Co-authored-by: Dilum Aluthge <dilum@aluthge.com> 22 July 2024, 15:52:01 UTC
e621c74 Fix potential underrun with annotation merging (#54917) Fixes #54860, see the commit message for more details. The added test serves as a MWE of the original bug report. 22 July 2024, 10:54:37 UTC
a1e0f5d Preserve Git objects from being garbage collected (#55142) This issue has been discussed [here](https://discourse.julialang.org/t/preserve-against-garbage-collection-in-libgit2/117095). In most cases, thanks to the specialization of `Base.unsafe_convert`, it is sufficient to replace `obj.ptr` by `obj` in `ccalls` to fix the issue. In other cases, for example when a pointer to an internal string is returned, the code has to be wrapped in a `GC.@preserve obj begin ... end` block. All `LibGit2` tests run successfully. I have left a few `FIXME` comments where I have doubts about the code, notably with `Ptr{Ptr{Cvoid}}` arguments. 22 July 2024, 01:06:47 UTC
775c0da Remove assumption in gc that MAX_ALIGN can only be as high as 8 (#55195) Split out from #54848 where this is necessary as `MAX_ALIGN` is now 16 on x86. Similar change to #34554. 21 July 2024, 23:41:47 UTC
43df7fb compat notice for a[begin] indexing (#55197) `a[begin]` indexing was added by #35779 in Julia 1.6, so this feature needs a compat notice in the docstring. 21 July 2024, 22:41:13 UTC
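A minimal illustration of the `a[begin]` feature noted above (plain Julia, 1.6 and later):

```julia
a = [10, 20, 30]
# Inside an indexing expression, `begin` and `end` lower to
# firstindex(a) and lastindex(a) respectively (Julia >= 1.6).
@assert a[begin] == 10 == first(a)
@assert a[end] == 30 == last(a)
```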
680c7e3 Update stable version number in Readme to v1.10.4 (#55186) 20 July 2024, 22:57:41 UTC
04af446 Update README.md (#55183) Remove external resources to avoid cluttering the landing page. 20 July 2024, 17:31:59 UTC
3290904 Compat for `Base.@nospecializeinfer` (#55178) This macro was added in v1.10 but was missing a compat notice. 20 July 2024, 00:09:17 UTC
500288c Profile: make profile listener init a separate function (#55167) I noticed this was an unrelated change in https://github.com/JuliaLang/julia/pull/55047 so I thought I'd separate it out 19 July 2024, 18:59:51 UTC
b451a1c Very minor doc correction (#55173) Adds a period to the logging docs. That's it 👍🏼 19 July 2024, 15:00:56 UTC
0e6d797 Clean up some docs issues (#55139) - #55055: Remove incorrect BoundsError mention from iterate(string) docs. - #54686: Document the else keyword in the try/catch docs. Closes #55055. Closes #54686. 18 July 2024, 23:22:28 UTC
d4362e4 merge unions of vararg tuples (#55123) Normalize unions such as `Union{Tuple{}, Tuple{T}, Tuple{T, T, Vararg{T}}}` to `Tuple{Vararg{T}}`. This should make certain subtyping queries more precise. fixes #54746 18 July 2024, 20:53:31 UTC
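A quick sketch of the subtyping query this normalization sharpens (the merged direction holds on builds that include this change):

```julia
# Tuple arities 0, 1, and 2-or-more of Int together cover every
# arity, i.e. the union is semantically Tuple{Vararg{Int}}.
U = Union{Tuple{}, Tuple{Int}, Tuple{Int, Int, Vararg{Int}}}
@assert U <: Tuple{Vararg{Int}}   # each member is a subtype; holds on any version
# Tuple{Vararg{Int}} <: U is the direction that requires the union
# to be merged first, which is what this change teaches subtyping.
```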
d563231 Add `Base.Broadcast.BroadcastFunction` to docs (#54820) 18 July 2024, 19:07:35 UTC
1fc9fe1 Make warn missed transformations pass optional (#54871) This makes the `WarnMissedTransformationsPass` compiler pass optional and off by default. 18 July 2024, 18:57:17 UTC
45e10cd NFC: encapsulate write barrier fast path emission into a single function (#55156) Third-party GCs (e.g. MMTk) will have their own write barrier fast path logic (e.g. may use different bit patterns to define what are young & old objects, etc.). Let's encapsulate the write-barrier fast emission code into a single function to make more explicit what parts of the `late-gc-lowering` code must be re-implemented when porting these third-party GCs into Julia. 18 July 2024, 16:39:15 UTC
695b22b Improve static compilability of Printf (#55149) This makes small tweaks to "compiler hints" in Printf to reduce the number of runtime dispatches. For example, on `master` calling ``` prnt(a::AbstractString, b::AbstractFloat, c::Union{AbstractChar,AbstractString}) = @printf("%5s %12.5e %c\n", a, b, c) ``` results in 4 runtime-dispatches, whereas on this branch there is just 1 (which comes from not knowing the type of `Main.stdout`). This also removes a number of manual `@inline` annotations. These are on fairly large `fmt` methods, many of which have loops; forcing inlining seems likely to bloat the code without much benefit. 18 July 2024, 16:16:55 UTC
7c9a465 LinearAlgebra: use `≈` instead of `==` for `tr` tests in symmetric.jl (#55143) After investigating JuliaLang/julia#54090, I found that the issue was not caused by the effects of `checksquare`, but by the use of the `@simd` macro within `tr(::Matrix)`: https://github.com/JuliaLang/julia/blob/0945b9d7740855c82a09fed42fbf6bc561e02c77/stdlib/LinearAlgebra/src/dense.jl#L373-L380 While simply removing the `@simd` macro was considered, the strict left-to-right summation without `@simd` otherwise is not necessarily more accurate, so I concluded that the problem lies in the test code, which tests the (strict) equality of two different `tr` execution results. I have modified the test code to use `≈` instead of `==`. - fixes #54090 17 July 2024, 08:39:27 UTC
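The underlying issue is that `@simd` may reassociate a floating-point sum, which perturbs the last bits of the result; a minimal sketch of why `≈` is the right comparison:

```julia
x = [0.1, 0.2, 0.3]
left  = (x[1] + x[2]) + x[3]   # strict left-to-right summation
right = x[1] + (x[2] + x[3])   # a reassociation @simd is free to pick
@assert left != right          # differs in the last bit
@assert left ≈ right           # isapprox (`≈`) tolerates the difference
```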
b049f93 don't throw EOFError from sleep (#54955) 17 July 2024, 04:35:45 UTC
1c7ce01 inference: add basic support for `:globaldecl` expressions (#55144) Following up #54773. Required for external abstract interpreters that may run inference on arbitrary top-level thunks. 16 July 2024, 23:39:29 UTC
946301c create separate function to spawn GC threads (#55108) Third-party GCs (e.g. MMTk) will probably have their own function to spawn GC threads. 16 July 2024, 22:00:06 UTC
2efcfd9 Update the aarch64 devdocs to reflect the current state of its support (#55141) The devdocs here reflect a time when aarch64 was much less well supported; they also reference CUDAdrv, which has been archived for years 16 July 2024, 19:47:18 UTC
0945b9d Replace some occurrences of iteration over 1:length with more idiomatic structures (mostly eachindex) (#55137) Base should be a model for the ecosystem, and `eachindex(x)` is better than `1:length(x)` in almost all cases. I've updated many, but certainly not all examples. This is mostly a NFC, but also fixes #55136. 16 July 2024, 11:35:52 UTC
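A short sketch of why `eachindex` is the safer idiom: it yields valid indices for any array, while `1:length(x)` silently assumes one-based contiguous indexing:

```julia
# A generic sum written with eachindex: correct for Vectors, views,
# and arrays with non-standard axes alike.
function mysum(x)
    s = zero(eltype(x))
    for i in eachindex(x)
        s += x[i]
    end
    return s
end

@assert mysum([1, 2, 3]) == 6
@assert mysum(view([1, 2, 3, 4], 2:4)) == 9
```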
742e7d9 Convert message in timing macros before printing (#55122) 16 July 2024, 10:43:25 UTC
d02bfeb REPL: Remove hard-coded prompt strings in favour of pre-existing constants (#55109) 16 July 2024, 10:22:46 UTC
7b0a189 Fix typo in code comment (#55133) 16 July 2024, 05:26:47 UTC
17668e9 delete some unused fields of jl_gc_mark_cache_t (#55138) They should have been deleted in https://github.com/JuliaLang/julia/pull/54936, but were not. 16 July 2024, 02:42:14 UTC
f8bec45 Simplify sweeping of big values (#54936) Simplifies the layout of the doubly linked list of big objects to make it a bit more canonical: let's just store a pointer to the previous element, instead of storing a "pointer to the next element of the previous element". This should make the implementation a bit easier to understand without incurring any memory overhead. I ran the serial and multithreaded benchmarks from GCBenchmarks and this seems fairly close to performance neutral on my machine. We also ran our internal benchmarks on it at RAI and it looks fine from a correctness and performance point of view. --------- Co-authored-by: Kiran Pamnany <kpamnany@users.noreply.github.com> 15 July 2024, 22:20:23 UTC
d3ce499 🤖 [master] Bump the Pkg stdlib from 046df8ce4 to d801e4545 (#55128) Stdlib: Pkg URL: https://github.com/JuliaLang/Pkg.jl.git Stdlib branch: master Julia branch: master Old commit: 046df8ce4 New commit: d801e4545 Julia version: 1.12.0-DEV Pkg version: 1.12.0 Bump invoked by: @IanButterworth Powered by: [BumpStdlibs.jl](https://github.com/JuliaLang/BumpStdlibs.jl) Diff: https://github.com/JuliaLang/Pkg.jl/compare/046df8ce407659cfaccc647265a6e57bfb02e056...d801e4545f548ab9061a018bf19db1944c711607 ``` $ git log --oneline 046df8ce4..d801e4545 d801e4545 fix artifact printing quiet rule (#3951) ``` Co-authored-by: Dilum Aluthge <dilum@aluthge.com> 15 July 2024, 21:28:37 UTC
d49a3c7 use jl_gc_alloc inside jl_alloc_string (#55098) 15 July 2024, 17:25:48 UTC
7676196 `mkpath` always returns the original path (#54857) fix #54826 related performance test code: ```julia using BenchmarkTools using Base: IOError function checkmode(mode::Integer) if !(0 <= mode <= 511) throw(ArgumentError("Mode must be between 0 and 511 = 0o777")) end mode end function mkpath(path::AbstractString; mode::Integer = 0o777) dir = dirname(path) # stop recursion for `""`, `"/"`, or existed dir (path == dir || isdir(path)) && return path mkpath(dir, mode = checkmode(mode)) try # cases like `mkpath("x/")` will cause an error if `isdir(path)` is skipped # the error will not be rethrowed, but it may be slower, and thus we avoid it in advance isdir(path) || mkdir(path, mode = mode) catch err # If there is a problem with making the directory, but the directory # does in fact exist, then ignore the error. Else re-throw it. if !isa(err, IOError) || !isdir(path) rethrow() end end return path end function mkpath_2(path::AbstractString; mode::Integer = 0o777) dir = dirname(path) # stop recursion for `""` and `"/"` or existed dir (path == dir || isdir(path)) && return path mkpath_2(dir, mode = checkmode(mode)) try mkdir(path, mode = mode) catch err # If there is a problem with making the directory, but the directory # does in fact exist, then ignore the error. Else re-throw it. 
if !isa(err, IOError) || !isdir(path) rethrow() end end return path end versioninfo() display(@benchmark begin rm("A", recursive=true, force=true); mkpath("A/B/C/D/") end) display(@benchmark begin rm("A", recursive=true, force=true); mkpath_2("A/B/C/D/") end) ``` output: ``` Julia Version 1.10.4 Commit 48d4fd48430 (2024-06-04 10:41 UTC) Build Info: Official https://julialang.org/ release Platform Info: OS: macOS (x86_64-apple-darwin22.4.0) CPU: 16 × Intel(R) Core(TM) i9-9900K CPU @ 3.60GHz WORD_SIZE: 64 LIBM: libopenlibm LLVM: libLLVM-15.0.7 (ORCJIT, skylake) Threads: 1 default, 0 interactive, 1 GC (on 16 virtual cores) Environment: JULIA_EDITOR = code JULIA_NUM_THREADS = BenchmarkTools.Trial: 8683 samples with 1 evaluation. Range (min … max): 473.972 μs … 18.867 ms ┊ GC (min … max): 0.00% … 0.00% Time (median): 519.704 μs ┊ GC (median): 0.00% Time (mean ± σ): 571.261 μs ± 378.851 μs ┊ GC (mean ± σ): 0.00% ± 0.00% ▂█▇▄▁ ▃███████▇▅▄▄▃▃▃▂▂▂▂▂▂▂▂▂▂▂▂▂▂▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁ ▂ 474 μs Histogram: frequency by time 961 μs < Memory estimate: 5.98 KiB, allocs estimate: 65. BenchmarkTools.Trial: 6531 samples with 1 evaluation. Range (min … max): 588.122 μs … 17.449 ms ┊ GC (min … max): 0.00% … 0.00% Time (median): 660.071 μs ┊ GC (median): 0.00% Time (mean ± σ): 760.333 μs ± 615.759 μs ┊ GC (mean ± σ): 0.00% ± 0.00% ██▆▄▃▁▁ ▁ ██████████▆▇█▇▆▆▅▅▅▄▁▄▁▄▄▃▄▃▁▃▃▁▃▁▁▁▄▁▁▁▁▁▁▁▃▃▁▁▁▃▁▁▃▁▁▁▁▅▅▄▇ █ 588 μs Histogram: log(frequency) by time 4.2 ms < Memory estimate: 5.63 KiB, allocs estimate: 72. ``` 15 July 2024, 16:46:25 UTC
b88f64f make codegen threadsafe, sinking the necessary lock now into JuliaOJIT (#55106) This adds a new helper `jl_read_codeinst_invoke` that should help manage reading the state out of a CodeInstance correctly everywhere. Then replaces all of the places where we have optimizations in codegen where we check for this (to build a name in the JIT for it) with that call. And finally moves the `jl_codegen_lock` into `jl_ExecutionEngine->jitlock` so that it is now more clear that this is only protecting concurrent access to the JIT state it manages (which includes the invoke field of all CodeInstance objects). In a subsequent followup, that `jitlock` and `codeinst_in_flight` will be replaced with something akin to the new engine (for CodeInfo inference) which helps partition that JIT lock mechanism (for CodeInstance / JIT insertion) to correspond just to a single CodeInstance, and not globally to all of them. 14 July 2024, 17:27:55 UTC
e496b2e Fix type instability in convs using CompoundPeriod (#54995) The functions `toms`, `tons`, and `days` uses `sum` over a vector of `Period`s to obtain the conversion of a `CompoundPeriod`. However, the compiler cannot infer the return type because those functions can return either `Int` or `Float` depending on the type of the `Period`. This PR forces the result of those functions to be `Float64`, fixing the type stability. Before this PR we had: ```julia julia> using Dates julia> p = Dates.Second(1) + Dates.Minute(1) + Dates.Year(1) 1 year, 1 minute, 1 second julia> @code_warntype Dates.tons(p) MethodInstance for Dates.tons(::Dates.CompoundPeriod) from tons(c::Dates.CompoundPeriod) @ Dates ~/.julia/juliaup/julia-nightly/share/julia/stdlib/v1.12/Dates/src/periods.jl:458 Arguments #self#::Core.Const(Dates.tons) c::Dates.CompoundPeriod Body::Any 1 ─ %1 = Dates.isempty::Core.Const(isempty) │ %2 = Base.getproperty(c, :periods)::Vector{Period} │ %3 = (%1)(%2)::Bool └── goto #3 if not %3 2 ─ return 0.0 3 ─ %6 = Dates.Float64::Core.Const(Float64) │ %7 = Dates.sum::Core.Const(sum) │ %8 = Dates.tons::Core.Const(Dates.tons) │ %9 = Base.getproperty(c, :periods)::Vector{Period} │ %10 = (%7)(%8, %9)::Any │ %11 = (%6)(%10)::Any └── return %11 julia> @code_warntype Dates.toms(p) MethodInstance for Dates.toms(::Dates.CompoundPeriod) from toms(c::Dates.CompoundPeriod) @ Dates ~/.julia/juliaup/julia-nightly/share/julia/stdlib/v1.12/Dates/src/periods.jl:454 Arguments #self#::Core.Const(Dates.toms) c::Dates.CompoundPeriod Body::Any 1 ─ %1 = Dates.isempty::Core.Const(isempty) │ %2 = Base.getproperty(c, :periods)::Vector{Period} │ %3 = (%1)(%2)::Bool └── goto #3 if not %3 2 ─ return 0.0 3 ─ %6 = Dates.Float64::Core.Const(Float64) │ %7 = Dates.sum::Core.Const(sum) │ %8 = Dates.toms::Core.Const(Dates.toms) │ %9 = Base.getproperty(c, :periods)::Vector{Period} │ %10 = (%7)(%8, %9)::Any │ %11 = (%6)(%10)::Any └── return %11 julia> @code_warntype Dates.days(p) MethodInstance for 
Dates.days(::Dates.CompoundPeriod) from days(c::Dates.CompoundPeriod) @ Dates ~/.julia/juliaup/julia-nightly/share/julia/stdlib/v1.12/Dates/src/periods.jl:468 Arguments #self#::Core.Const(Dates.days) c::Dates.CompoundPeriod Body::Any 1 ─ %1 = Dates.isempty::Core.Const(isempty) │ %2 = Base.getproperty(c, :periods)::Vector{Period} │ %3 = (%1)(%2)::Bool └── goto #3 if not %3 2 ─ return 0.0 3 ─ %6 = Dates.Float64::Core.Const(Float64) │ %7 = Dates.sum::Core.Const(sum) │ %8 = Dates.days::Core.Const(Dates.days) │ %9 = Base.getproperty(c, :periods)::Vector{Period} │ %10 = (%7)(%8, %9)::Any │ %11 = (%6)(%10)::Any └── return %11 ``` After this PR we have: ```julia julia> using Dates julia> p = Dates.Second(1) + Dates.Minute(1) + Dates.Year(1) 1 year, 1 minute, 1 second julia> @code_warntype Dates.tons(p) MethodInstance for Dates.tons(::Dates.CompoundPeriod) from tons(c::Dates.CompoundPeriod) @ Dates ~/.julia/juliaup/julia-nightly/share/julia/stdlib/v1.12/Dates/src/periods.jl:458 Arguments #self#::Core.Const(Dates.tons) c::Dates.CompoundPeriod Body::Float64 1 ─ %1 = Dates.isempty::Core.Const(isempty) │ %2 = Base.getproperty(c, :periods)::Vector{Period} │ %3 = (%1)(%2)::Bool └── goto #3 if not %3 2 ─ return 0.0 3 ─ %6 = Dates.Float64::Core.Const(Float64) │ %7 = Dates.sum::Core.Const(sum) │ %8 = Dates.tons::Core.Const(Dates.tons) │ %9 = Base.getproperty(c, :periods)::Vector{Period} │ %10 = (%7)(%8, %9)::Any │ %11 = (%6)(%10)::Any │ %12 = Dates.Float64::Core.Const(Float64) │ %13 = Core.typeassert(%11, %12)::Float64 └── return %13 julia> @code_warntype Dates.toms(p) MethodInstance for Dates.toms(::Dates.CompoundPeriod) from toms(c::Dates.CompoundPeriod) @ Dates ~/.julia/juliaup/julia-nightly/share/julia/stdlib/v1.12/Dates/src/periods.jl:454 Arguments #self#::Core.Const(Dates.toms) c::Dates.CompoundPeriod Body::Float64 1 ─ %1 = Dates.isempty::Core.Const(isempty) │ %2 = Base.getproperty(c, :periods)::Vector{Period} │ %3 = (%1)(%2)::Bool └── goto #3 if not %3 2 ─ return 0.0 3 ─ %6 = 
Dates.Float64::Core.Const(Float64) │ %7 = Dates.sum::Core.Const(sum) │ %8 = Dates.toms::Core.Const(Dates.toms) │ %9 = Base.getproperty(c, :periods)::Vector{Period} │ %10 = (%7)(%8, %9)::Any │ %11 = (%6)(%10)::Any │ %12 = Dates.Float64::Core.Const(Float64) │ %13 = Core.typeassert(%11, %12)::Float64 └── return %13 julia> @code_warntype Dates.days(p) MethodInstance for Dates.days(::Dates.CompoundPeriod) from days(c::Dates.CompoundPeriod) @ Dates ~/.julia/juliaup/julia-nightly/share/julia/stdlib/v1.12/Dates/src/periods.jl:468 Arguments #self#::Core.Const(Dates.days) c::Dates.CompoundPeriod Body::Float64 1 ─ %1 = Dates.isempty::Core.Const(isempty) │ %2 = Base.getproperty(c, :periods)::Vector{Period} │ %3 = (%1)(%2)::Bool └── goto #3 if not %3 2 ─ return 0.0 3 ─ %6 = Dates.Float64::Core.Const(Float64) │ %7 = Dates.sum::Core.Const(sum) │ %8 = Dates.days::Core.Const(Dates.days) │ %9 = Base.getproperty(c, :periods)::Vector{Period} │ %10 = (%7)(%8, %9)::Any │ %11 = (%6)(%10)::Any │ %12 = Dates.Float64::Core.Const(Float64) │ %13 = Core.typeassert(%11, %12)::Float64 └── return %13 ``` 14 July 2024, 17:14:54 UTC
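The pattern behind the fix, in isolation (hypothetical function names, not the actual Dates code): converting the loosely inferred `sum` and asserting the result pins the return type to `Float64`:

```julia
vals = Real[1, 2.5, 3]                # abstractly typed vector: sum infers imprecisely
unstable(v) = sum(v)                  # inferred return type is not concrete
stable(v) = Float64(sum(v))::Float64  # conversion + typeassert force Float64
@assert stable(vals) === 6.5
```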
df3fe22 🤖 [master] Bump the Pkg stdlib from 8c996799b to 046df8ce4 (#55112) 13 July 2024, 21:13:36 UTC
6e11ffc make threaded exit and cache file loading safer by pausing other threads (#55105) 13 July 2024, 21:12:47 UTC
0fd1f04 Optimize `jl_get_world_counter` ccall (#55032) This is useful for hot loops that perform an `invoke_latest`-like operation, such as the upcoming `TypedCallable` 13 July 2024, 09:58:22 UTC
6cb71e7 NFC: cleanup jl_gc_set_max_memory a bit (#55110) This code is quite contrived. Let's simplify it a bit. 12 July 2024, 23:50:44 UTC
aba6766 Color todo admonitions in the REPL (#54957) This patch adds magenta coloring for `!!! todo` admonitions in addition to the existing styling of `danger`, `warning`, `info`, `note`, `tip`, and `compat` admonitions. This is useful if you want to leave some more colorful todo notes in docstrings, for example. Accompanying PR for Documenter to render these in HTML and PDF docs: https://github.com/JuliaDocs/Documenter.jl/pull/2526. 12 July 2024, 14:31:20 UTC
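For reference, a `!!! todo` admonition in a docstring looks like this (hypothetical function name):

```julia
"""
    frobnicate(x)

Return `x`, eventually frobnicated.

!!! todo
    Handle the negative case.
"""
frobnicate(x) = x

@assert frobnicate(2) == 2
```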
2a8bdd0 remove USAGE part of error message when using main macro (#55037) I hit this error message and it felt like someone was very angry at me 12 July 2024, 12:57:30 UTC
fba928d fix loading of repeated/concurrent modules (#55066) More followup to fix issues with require. There was an accidental variable reuse (build_id) that caused it to be unable to load cache files in many cases. There was also missing check for a dependency already being loaded, resulting in trying to load it twice. Finally, the start_loading code may drop the require_lock, but the surrounding code was not prepared for that. Now integrate the necessary checks into start_loading, instead of needing to duplicate them before and afterwards. Fixes #53983 Fixes #54940 Closes #55064 11 July 2024, 11:31:59 UTC
b491bcc Add alternative compat for public other than Compat.jl (#55097) 11 July 2024, 08:22:34 UTC
ad407a6 Actually setup jit targets when compiling packageimages instead of targeting only one (#54471) 11 July 2024, 08:06:25 UTC
262b40a Fix `(l/r)mul!` with `Diagonal`/`Bidiagonal` (#55052) Currently, `rmul!(A::AbstractMatrix, D::Diagonal)` calls `mul!(A, A, D)`, but this isn't a valid call, as `mul!` assumes no aliasing between the destination and the matrices to be multiplied. As a consequence, ```julia julia> B = Bidiagonal(rand(4), rand(3), :L) 4×4 Bidiagonal{Float64, Vector{Float64}}: 0.476892 ⋅ ⋅ ⋅ 0.353756 0.139188 ⋅ ⋅ ⋅ 0.685839 0.309336 ⋅ ⋅ ⋅ 0.369038 0.304273 julia> D = Diagonal(rand(size(B,2))); julia> rmul!(B, D) 4×4 Bidiagonal{Float64, Vector{Float64}}: 0.0 ⋅ ⋅ ⋅ 0.0 0.0 ⋅ ⋅ ⋅ 0.0 0.0 ⋅ ⋅ ⋅ 0.0 0.0 julia> B 4×4 Bidiagonal{Float64, Vector{Float64}}: 0.0 ⋅ ⋅ ⋅ 0.0 0.0 ⋅ ⋅ ⋅ 0.0 0.0 ⋅ ⋅ ⋅ 0.0 0.0 ``` This is clearly nonsense, and happens because the internal `_mul!` function assumes that it can safely overwrite the destination with zeros before carrying out the multiplication. This is fixed in this PR by using broadcasting instead. The current implementation is generally equally performant, albeit occasionally with a minor allocation arising from `reshape`ing an `Array`. A similar problem also exists in `l/rmul!` with `Bidiagonal`, but that's a little harder to fix while remaining equally performant. 11 July 2024, 06:03:11 UTC
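The broadcasting approach can be sketched as follows (a simplified stand-in, not the actual implementation): each column of `A` is scaled by the matching diagonal entry, and broadcasting never reads data it has already overwritten, so aliasing `A` as both source and destination is safe:

```julia
using LinearAlgebra

# In-place right-multiplication by a Diagonal via broadcasting:
# column j of A is scaled by D[j, j]; each element is read exactly
# once before it is written.
safe_rmul!(A::AbstractMatrix, D::Diagonal) = (A .= A .* permutedims(D.diag); A)

A = [1.0 2.0; 3.0 4.0]
D = Diagonal([10.0, 100.0])
@assert safe_rmul!(copy(A), D) == A * D
```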
faf17eb create GC TLS (#55086) Encapsulates all relevant GC thread-local-state into a separate structure. Motivation is that MMTk will have its own version of GC thread-local-state, so doesn't need all of the Julia GC TLS. In the future, folks who would be using MMTk would be setting a pre-processor flag which would lead to either the stock Julia GC TLS or MMTk's GC TLS to be included in `julia_threads.h`. I.e., we would have something like: ```C #ifdef MMTK_GC jl_gc_mmtk_tls_states mmtk_gc_tls; #else jl_gc_tls_states gc_tls; #endif ``` 10 July 2024, 19:54:31 UTC
3ab8fef Fix some dependency build issues with `USE_BINARYBUILDER=0` (#55091) A couple of the changes made in #54538 were incorrect. In particular: - libunwind (non-LLVM) does not use CMake, so the `$(CMAKE) --build` is simply reverted here back to `$(MAKE) -C`. - zlib does use CMake but regular Make flags were being passed to its `$(CMAKE) --build`. Those can just be dropped since it's already getting the proper CMake flags. 10 July 2024, 15:55:11 UTC
24535f6 Remove IndexStyle specialization for AdjOrTransAbsMat (#55077) Since `IndexStyle` falls back to `IndexCartesian` by default, this specialization is unnecessary. 10 July 2024, 11:26:09 UTC
ec013f1 LinearAlgebra: LazyString in error messages for Diagonal/Bidiagonal (#55070) 10 July 2024, 04:15:20 UTC
2759961 drop jl_gc_pool_alloc in favor of instrumented version (#55085) And do some re-naming. 09 July 2024, 20:57:24 UTC
e732706 Use triple quotes in TOML.print when string contains newline (#55084) closes #55083 Should this also check for `\r`? --------- Co-authored-by: Alex Arslan <ararslan@comcast.net> 09 July 2024, 18:59:02 UTC
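A round-trip sketch of the behavior (output formatting aside, the parsed value is unchanged):

```julia
using TOML

# A string value containing a newline; with this change TOML.print
# emits it as a triple-quoted multi-line string.
buf = IOBuffer()
TOML.print(buf, Dict("msg" => "line one\nline two"))
s = String(take!(buf))
@assert TOML.parse(s)["msg"] == "line one\nline two"   # round-trips intact
```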
d7609d8 trace-compile: don't generate `precompile` statements for OpaqueClosure methods (#55072) These Methods cannot be looked up via their type signature, so they are incompatible with the `precompile(...)` mechanism. 09 July 2024, 14:01:30 UTC
40966f2 Recommend using RawFD instead of the Int returned by `fd` (#55027) Helps with https://github.com/JuliaLang/julia/issues/51710 --------- Co-authored-by: Daniel Karrasch <daniel.karrasch@posteo.de> 09 July 2024, 13:51:07 UTC
23748ec Use neutral element as init in reduce doctest (#55065) The documentation of reduce states that init must be the neutral element. However, the provided doctest uses a non-neutral element for init. Fix this by changing the example. 09 July 2024, 13:36:47 UTC
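The neutral-element requirement, one line each (standard Julia):

```julia
@assert reduce(+, [1, 2, 3]; init = 0) == 6   # 0 is neutral for +
@assert reduce(*, Int[]; init = 1) == 1       # 1 is neutral for *; also covers empty input
# A non-neutral init (say init = 10 for +) may be incorporated more
# than once if the reduction is split, so the result is unspecified.
```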
fc775c5 add missing setting of inferred field when setting inference result (#55081) This previously could confuse inference, which expects that the field is set to indicate that the rettype has been computed, and cannot tolerate putting objects in the cache for which that is not true. This was causing Nanosoldier to fail. Also cleanup `sv.unreachable` after IR modification so that it remains (mostly) consistent, even though unused (except for the unreachable nodes themselves), as this was confusing me in debugging. 09 July 2024, 12:29:56 UTC
ec90012 Add fast method for copyto!(::Memory, ::Memory) (#55082) Previously, this method hit the slow generic AbstractArray fallback. Closes #55079 This is an ad-hoc bandaid that really ought to be fixed by resolving #54581. 09 July 2024, 07:20:39 UTC
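A small usage sketch of the covered path (requires a Julia with the `Memory` type, 1.11 or later):

```julia
src = Memory{Int}(undef, 3)
src .= 1:3                # Memory is an AbstractVector, so broadcasting works
dst = Memory{Int}(undef, 3)
copyto!(dst, src)         # hits the fast Memory-to-Memory method
@assert collect(dst) == [1, 2, 3]
```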
594544d REPL: warn on non-owning qualified accesses (#54872) * Accessing names from other modules can be dangerous, because those names may only be in that module by happenstance, e.g. a name exported by Base * the keyword `public` encourages more qualified accesses, increasing the risk of accidentally accessing a name that isn't part of that module on purpose * ExplicitImports.jl can catch this in a package context, but this requires opting in. Folks who might not be writing packages or might not be using dev tooling like ExplicitImports can easily get caught off guard by these accesses (and might be less familiar with the issue than package developers) * using a REPL AST transform we can emit warnings when we notice this happening in the REPL 08 July 2024, 19:06:34 UTC
ed987f2 Bidiagonal to Tridiagonal with immutable bands (#55059) Using `similar` to generate the zero band necessarily allocates a mutable vector, which would lead to an error if the other bands are immutable. This PR changes this to use `zero` instead, which usually produces a vector of the same type. There are occasions where `zero(v)` produces a different type from `v`, so an extra conversion is added to obtain a zero vector of the same type. The following works after this: ```julia julia> using FillArrays, LinearAlgebra julia> n = 4; B = Bidiagonal(Fill(3, n), Fill(2, n-1), :U) 4×4 Bidiagonal{Int64, Fill{Int64, 1, Tuple{Base.OneTo{Int64}}}}: 3 2 ⋅ ⋅ ⋅ 3 2 ⋅ ⋅ ⋅ 3 2 ⋅ ⋅ ⋅ 3 julia> Tridiagonal(B) 4×4 Tridiagonal{Int64, Fill{Int64, 1, Tuple{Base.OneTo{Int64}}}}: 3 2 ⋅ ⋅ 0 3 2 ⋅ ⋅ 0 3 2 ⋅ ⋅ 0 3 julia> Tridiagonal{Float64}(B) 4×4 Tridiagonal{Float64, Fill{Float64, 1, Tuple{Base.OneTo{Int64}}}}: 3.0 2.0 ⋅ ⋅ 0.0 3.0 2.0 ⋅ ⋅ 0.0 3.0 2.0 ⋅ ⋅ 0.0 3.0 ``` 08 July 2024, 09:18:38 UTC
23dabef add support for indexing in `@atomic` macro (#54707) Following the discussion in #54642 Implemented: - [x] `modifyindex_atomic!`, `swapindex_atomic!`, `replaceindex_atomic!` for `GenericMemory` - [x] `getindex_atomic`, `setindex_atomic!`, `setindexonce_atomic!` for `GenericMemory` - [x] add support for references in `@atomic` macros - [x] add support for vararg indices in `@atomic` macros - [x] tests - [x] update docstrings with example usage - ~[ ] update Atomics section of the manual (?)~ - [x] news @oscardssmith @vtjnash # New `@atomic` transformations implemented here: ```julia julia> @macroexpand (@atomic a[i1,i2]) :(Base.getindex_atomic(a, :sequentially_consistent, i1, i2)) julia> @macroexpand (@atomic order a[i1,i2]) :(Base.getindex_atomic(a, order, i1, i2)) julia> @macroexpand (@atomic a[i1,i2] = 2.0) :(Base.setindex_atomic!(a, :sequentially_consistent, 2.0, i1, i2)) julia> @macroexpand (@atomic order a[i1,i2] = 2.0) :(Base.setindex_atomic!(a, order, 2.0, i1, i2)) julia> @macroexpand (@atomicswap a[i1,i2] = 2.0) :(Base.swapindex_atomic!(a, :sequentially_consistent, 2.0, i1, i2)) julia> @macroexpand (@atomicswap order a[i1,i2] = 2.0) :(Base.swapindex_atomic!(a, order, 2.0, i1, i2)) julia> @macroexpand (@atomic a[i1,i2] += 2.0) :((Base.modifyindex_atomic!(a, :sequentially_consistent, +, 2.0, i1, i2))[2]) julia> @macroexpand (@atomic order a[i1,i2] += 2.0) :((Base.modifyindex_atomic!(a, order, +, 2.0, i1, i2))[2]) julia> @macroexpand (@atomiconce a[i1,i2] = 2.0) :(Base.setindexonce_atomic!(a, :sequentially_consistent, :sequentially_consistent, 2.0, i1, i2)) julia> @macroexpand (@atomiconce o1 o2 a[i1,i2] = 2.0) :(Base.setindexonce_atomic!(a, o1, o2, 2.0, i1, i2)) julia> @macroexpand (@atomicreplace a[i1,i2] (2.0=>3.0)) :(Base.replaceindex_atomic!(a, :sequentially_consistent, :sequentially_consistent, 2.0, 3.0, i1, i2)) julia> @macroexpand (@atomicreplace o1 o2 a[i1,i2] (2.0=>3.0)) :(Base.replaceindex_atomic!(a, o1, o2, 2.0, 3.0, i1, i2)) ``` --------- 
Co-authored-by: Oscar Smith <oscardssmith@gmail.com> 07 July 2024, 21:07:59 UTC
0ef2bb6 fix concurrent module loading return value (#54898) Previously this might return `nothing` which would confuse the caller of `start_loading` which expects that to mean the Module didn't load. It is not entirely clear if this code ever worked, even single-threaded. Fix #54813 07 July 2024, 19:00:27 UTC
aa07585 lowering: Don't resolve type bindings earlier than necessary (#54999) This is a follow up to resolve a TODO left in #54773 as part of preparatory work for #54654. Currently, our lowering for type definition contains an early `isdefined` that forces a decision on binding resolution before the assignment of the actual binding. In the current implementation, this doesn't matter much, but with #54654, this would incur a binding invalidation we would like to avoid. To get around this, we extend the (internal) `isdefined` form to take an extra argument specifying whether or not to permit looking at imported bindings. If not, resolving the binding is not required semantically, but for the purposes of type definition (where assigning to an imported binding would error anyway), this is all we need. 06 July 2024, 22:18:34 UTC
082e142 remove unused managed realloc (#55050) Follow-up to https://github.com/JuliaLang/julia/pull/54949. Again, motivation is to clean up the GC interface a bit by removing unused functions (particularly after the Memory work). 06 July 2024, 14:33:06 UTC
7d4afba CI: update LabelCheck banned labels (#55051) 06 July 2024, 12:27:27 UTC
1837202 Matmul: `matprod_dest` for `Diagonal` * `SymTridiagonal` (#55039) We specialize `matprod_dest` for the combination of a `Diagonal` and a `SymTridiagonal`, in which case the destination is a `Tridiagonal`. With this, the specialized methods `*(::Diagonal, ::SymTridiagonal)` and `*(::SymTridiagonal, ::Diagonal)` don't need to be defined anymore, which reduces potential method ambiguities. 06 July 2024, 01:47:53 UTC
e318166 cleanup remset logic a bit (#55021) I think that keeping a single `remset` (instead of two and keep alternating between them) should be a bit easier to understand and possibly even a bit faster (since we will be accessing the `remset` only once), though that should be a very small difference. 06 July 2024, 01:30:00 UTC
7122311 Some mailmap updates (#55048) Updating for consistency with more recent commits by authors. 05 July 2024, 20:54:37 UTC
0d9404f Declare type for `libgcc_s` handles in CompilerSupportLibraries_jll (#55011) This improves the type stability of this stdlib. 05 July 2024, 20:00:59 UTC
59f08df LAPACK: annotate size check in `lacpy!` with `@noinline` for reduced latency (#55029) The `@noinline` annotation on the size check appears to reduce latency in a second call with different argument types: ```julia julia> using LinearAlgebra julia> A = rand(2,2); B = similar(A); julia> @time LAPACK.lacpy!(B, A, 'U'); 0.032585 seconds (29.80 k allocations: 1.469 MiB, 99.84% compilation time) julia> A = rand(Float32,2,2); B = similar(A); julia> @time LAPACK.lacpy!(B, A, 'U'); 0.026698 seconds (22.80 k allocations: 1.113 MiB, 99.84% compilation time) # v"1.12.0-DEV.810" 0.024715 seconds (19.88 k allocations: 987.000 KiB, 99.80% compilation time) # Without noinline 0.017084 seconds (18.52 k allocations: 903.828 KiB, 99.72% compilation time) # This PR (with noinline) ``` 05 July 2024, 15:50:46 UTC
140248e Point to ModernJuliaWorkflows in "getting started" (#55036) Add link to https://modernjuliaworkflows.github.io/ 05 July 2024, 15:19:15 UTC
2e3628d remove unused jl_gc_alloc_*w (#55026) Closes https://github.com/JuliaLang/julia/issues/55024. 05 July 2024, 14:23:31 UTC
5468a3e Fix a regression in the test for #13432 (#55004) The test for #13432 is supposed to test a particular codegen path involving Bottom. It turns out that this code path has regressed on master, but the regression was hidden by the fact that in modern julia, the optimizer can fold this code path early. Fix the bug and add a variant of the test that shows the issue on julia master. Note that this is both an assertion failure and incorrect codegen. This PR addresses both. 05 July 2024, 14:20:47 UTC
a5f0016 Support `@opaque Tuple{T,U...}->RT (...)->...` syntax for explicit arg/return types (#54947) This gives users a way to explicitly specify the return type of an OpaqueClosure, and it also removes the old syntax `@opaque AT ...` in favor of `@opaque AT->_ ...` 04 July 2024, 20:47:48 UTC
8f1f223 mark a flaky Sockets test as broken (#55030) As suggested in https://github.com/JuliaLang/julia/issues/55008#issuecomment-2207136025. Closes https://github.com/JuliaLang/julia/issues/55008. 04 July 2024, 20:42:48 UTC
c388382 Make the __init__ in GMP more statically compilable (#55012) Co-authored-by: Jeff Bezanson <jeff.bezanson@gmail.com> 04 July 2024, 16:05:57 UTC
8083506 staticdata: Unique Bindings by mod/name (#54993) Currently we error when attempting to serialize Bindings that do not belong to the incremental module (GlobalRefs have special logic to avoid looking at the binding field). With #54654, Bindings will show up in more places, so let's just unique them properly by their module/name identity. Of course, we then have two kinds of objects serialized this way (both GlobalRef and Binding), which suggests that we should perhaps finish the project of unifying them. This is not currently possible, because the existence of a binding object in the binding table has semantic content, but this will change with #54654, so we can do such a change thereafter. 04 July 2024, 15:51:23 UTC
34bacaa delete possibly stale reset_gc_stats (#55015) See discussion in https://github.com/JuliaLang/julia/issues/55014. Doesn't seem breaking, but I can close the PR if it is. Closes https://github.com/JuliaLang/julia/issues/55014. 03 July 2024, 23:00:43 UTC
ce4f090 Make ScopedValues public (#54574) 03 July 2024, 16:47:12 UTC
1193997 Don't require that `@inbounds` depends only on local information (#54270) Co-authored-by: Steven G. Johnson <stevenj@alum.mit.edu> 02 July 2024, 20:24:13 UTC
2712633 remove stale objprofile (#54991) As mentioned in https://github.com/JuliaLang/julia/issues/54968, `OBJPROFILE` exposes a functionality which is quite similar to what the heap snapshot does, but has a considerably worse visualization tool (i.e. raw printf's compared to the snapshot viewer from Chrome). Closes https://github.com/JuliaLang/julia/issues/54968. 02 July 2024, 19:32:22 UTC
7579659 inference: add missing `MustAlias` widening in `_getfield_tfunc` (#54996) Otherwise it may result in a missing `⊑` method error in use cases of external abstract interpreters using `MustAliasesLattice`, like JET. 02 July 2024, 17:23:59 UTC
6cf3a05 RFC: Make `include_dependency(path; track_content=true)` the default (#54965) By changing the default to `true` we make it easier to build relocatable packages from already existing ones when 1.11 lands. This keyword was just added during 1.11, so it's not yet too late to change its default. 02 July 2024, 17:22:30 UTC
f2558c4 Add timing to precompile trace compile (#54962) I think this tool is there mainly to see what's taking so long, so timing information is helpful. 01 July 2024, 22:08:21 UTC
1fdc6a6 NFC: create an actual set of functions to manipulate GC thread ids (#54984) Also adds a bunch of integrity constraint checks to ensure we don't repeat the bug from https://github.com/JuliaLang/julia/pull/54645. 01 July 2024, 19:28:23 UTC
6139779 remove reference to a few stale GC environment variables (#54990) Did a quick grep and couldn't find any reference to them besides this manual. 01 July 2024, 19:08:53 UTC
4b4468a repl: Also ignore local imports with specified symbols (#54982) Otherwise it's trying to find the `.` package, which obviously doesn't exist. This isn't really a problem - it just does some extra processing and loads Pkg, but let's make this correct anyway. 01 July 2024, 14:27:03 UTC
00c700e refactor `contextual` test files (#54970) - moved non-contextual tests into `staged.jl` - moved `@overlay` tests into `core.jl` - test `staged.jl` in an interpreter mode 01 July 2024, 04:11:07 UTC
41bde01 address a TODO in gc_sweep_page (i.e. save an unnecessary fetch-add) (#54976) 29 June 2024, 18:24:32 UTC
334e4d9 simplify handling of buffered pages (#54961) Simplifies handling of buffered pages by keeping them in a single place (`global_page_pool_lazily_freed`) instead of making them thread-local. Performance has been assessed on the serial & multithreaded GCBenchmarks and it was shown to be performance-neutral. 28 June 2024, 15:46:50 UTC
8791d54 cglobal: Fall back to runtime intrinsic (#54914) Our codegen for `cglobal` was sharing the `static_eval` code for symbols with ccall. However, we do have full runtime emulation for this intrinsic, so mandating that the symbol can be statically evaluated is not required and causes semantic differences between the interpreter and codegen, which is undesirable. Just fall back to the runtime intrinsic instead. 28 June 2024, 11:07:52 UTC
89e391b Fix accidental early evaluation of imported `using` binding (#54956) In `using A.B`, we need to evaluate `A.B` to add the module to the using list. However, in `using A: B`, we do not care about the value of `A.B`, we only operate at the binding level. These two operations share a code path and the evaluation of `A.B` happens early and is unused on the `using A: B` path. I believe this was an unintentional oversight when the latter syntax was added. Fixes #54954. 28 June 2024, 11:06:26 UTC
b5d0b90 inference: implement an opt-in interface to cache generated sources (#54916) In Cassette-like systems, where inference has to infer many calls of `@generated` function and the generated function involves complex code transformations, the overhead from code generation itself can become significant. This is because the results of code generation are not cached, leading to duplicated code generation in the following contexts: - `method_for_inference_heuristics` for regular inference on cached `@generated` function calls (since `method_for_inference_limit_heuristics` isn't stored in cached optimized sources, but is attached to generated unoptimized sources). - `retrieval_code_info` for constant propagation on cached `@generated` function calls. Having said that, caching unoptimized sources generated by `@generated` functions is not a good tradeoff in general cases, considering the memory space consumed (and the image bloat). The code generation for generators like `GeneratedFunctionStub` produced by the front end is generally very simple, and the first duplicated code generation mentioned above does not occur for `GeneratedFunctionStub`. So this unoptimized source caching should be enabled in an opt-in manner. Based on this idea, this commit defines the trait `abstract type Core.CachedGenerator` as an interface for the external systems to opt-in. If the generator is a subtype of this trait, inference caches the generated unoptimized code, sacrificing memory space to improve the performance of subsequent inferences. Specifically, the mechanism for caching the unoptimized source uses the infrastructure already implemented in JuliaLang/julia#54362. Thanks to JuliaLang/julia#54362, the cache for generated functions is now partitioned by world age, so even if the unoptimized source is cached, the existing invalidation system will invalidate it as expected. 
In JuliaDebug/CassetteOverlay.jl#56, the following benchmark results showed that approximately 1.5~3x inference speedup is achieved by opting into this feature: ## Setup ```julia using CassetteOverlay, BaseBenchmarks, BenchmarkTools @MethodTable table; pass = @overlaypass table; BaseBenchmarks.load!("inference"); benchfunc1() = sin(42) benchfunc2(xs, x) = findall(>(x), abs.(xs)) interp = BaseBenchmarks.InferenceBenchmarks.InferenceBenchmarker() # benchmark inference on entire call graphs from scratch @benchmark BaseBenchmarks.InferenceBenchmarks.@inf_call pass(benchfunc1) @benchmark BaseBenchmarks.InferenceBenchmarks.@inf_call pass(benchfunc2, rand(10), 0.5) # benchmark inference on the call graphs with most of them cached @benchmark BaseBenchmarks.InferenceBenchmarks.@inf_call interp=interp pass(benchfunc1) @benchmark BaseBenchmarks.InferenceBenchmarks.@inf_call interp=interp pass(benchfunc2, rand(10), 0.5) ``` ## Benchmark inference on entire call graphs from scratch > on master ``` julia> @benchmark BaseBenchmarks.InferenceBenchmarks.@inf_call pass(benchfunc1) BenchmarkTools.Trial: 61 samples with 1 evaluation. Range (min … max): 78.574 ms … 87.653 ms ┊ GC (min … max): 0.00% … 8.81% Time (median): 83.149 ms ┊ GC (median): 4.85% Time (mean ± σ): 82.138 ms ± 2.366 ms ┊ GC (mean ± σ): 3.36% ± 2.65% ▂ ▂▂ █ ▂ █ ▅ ▅ █▅██▅█▅▁█▁▁█▁▁▁▁▅▁▁▁▁▁▁▁▁▅▁▁▅██▅▅█████████▁█▁▅▁▁▁▁▁▁▁▁▁▁▁▁▅ ▁ 78.6 ms Histogram: frequency by time 86.8 ms < Memory estimate: 52.32 MiB, allocs estimate: 1201192. julia> @benchmark BaseBenchmarks.InferenceBenchmarks.@inf_call pass(benchfunc2, rand(10), 0.5) BenchmarkTools.Trial: 4 samples with 1 evaluation. Range (min … max): 1.345 s … 1.369 s ┊ GC (min … max): 2.45% … 3.39% Time (median): 1.355 s ┊ GC (median): 2.98% Time (mean ± σ): 1.356 s ± 9.847 ms ┊ GC (mean ± σ): 2.96% ± 0.41% █ █ █ █ █▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁█▁▁▁▁▁█▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁█ ▁ 1.35 s Histogram: frequency by time 1.37 s < Memory estimate: 637.96 MiB, allocs estimate: 15159639. 
``` > with this PR ``` julia> @benchmark BaseBenchmarks.InferenceBenchmarks.@inf_call pass(benchfunc1) BenchmarkTools.Trial: 230 samples with 1 evaluation. Range (min … max): 19.339 ms … 82.521 ms ┊ GC (min … max): 0.00% … 0.00% Time (median): 19.938 ms ┊ GC (median): 0.00% Time (mean ± σ): 21.665 ms ± 4.666 ms ┊ GC (mean ± σ): 6.72% ± 8.80% ▃▇█▇▄ ▂▂▃▃▄ █████▇█▇▆▅▅▆▅▅▁▅▁▁▁▁▁▁▁▁▁██████▆▁█▁▅▇▆▁▅▁▁▅▁▅▁▁▁▁▁▁▅▁▁▁▁▁▁▅ ▆ 19.3 ms Histogram: log(frequency) by time 29.4 ms < Memory estimate: 28.67 MiB, allocs estimate: 590138. julia> @benchmark BaseBenchmarks.InferenceBenchmarks.@inf_call pass(benchfunc2, rand(10), 0.5) BenchmarkTools.Trial: 14 samples with 1 evaluation. Range (min … max): 354.585 ms … 390.400 ms ┊ GC (min … max): 0.00% … 7.01% Time (median): 368.778 ms ┊ GC (median): 3.74% Time (mean ± σ): 368.824 ms ± 8.853 ms ┊ GC (mean ± σ): 3.70% ± 1.89% ▃ █ ▇▁▁▁▁▁▁▁▁▁▁█▁▇▇▁▁▁▁▇▁▁▁▁█▁▁▁▁▇▁▁▇▁▁▇▁▁▁▇▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▇ ▁ 355 ms Histogram: frequency by time 390 ms < Memory estimate: 227.86 MiB, allocs estimate: 4689830. ``` ## Benchmark inference on the call graphs with most of them cached > on master ``` julia> @benchmark BaseBenchmarks.InferenceBenchmarks.@inf_call interp=interp pass(benchfunc1) BenchmarkTools.Trial: 10000 samples with 1 evaluation. Range (min … max): 45.166 μs … 9.799 ms ┊ GC (min … max): 0.00% … 98.96% Time (median): 46.792 μs ┊ GC (median): 0.00% Time (mean ± σ): 48.339 μs ± 97.539 μs ┊ GC (mean ± σ): 2.01% ± 0.99% ▁▂▄▆▆▇███▇▆▅▄▃▄▄▂▂▂▁▁▁ ▁▁▂▂▁ ▁ ▂▁ ▁ ▃ ▃▇██████████████████████▇████████████████▇█▆▇▇▆▆▆▅▆▆▆▇▆▅▅▅▆ █ 45.2 μs Histogram: log(frequency) by time 55 μs < Memory estimate: 25.27 KiB, allocs estimate: 614. julia> @benchmark BaseBenchmarks.InferenceBenchmarks.@inf_call interp=interp pass(benchfunc2, rand(10), 0.5) BenchmarkTools.Trial: 10000 samples with 1 evaluation. 
Range (min … max): 303.375 μs … 16.582 ms ┊ GC (min … max): 0.00% … 97.38% Time (median): 317.625 μs ┊ GC (median): 0.00% Time (mean ± σ): 338.772 μs ± 274.164 μs ┊ GC (mean ± σ): 5.44% ± 7.56% ▃▆██▇▅▂▁ ▂▂▄▅██████████▇▆▅▅▄▄▃▃▃▃▃▃▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▁▂▂▂▂▂▂▂▁▂▁▁▂▁▂▂ ▃ 303 μs Histogram: frequency by time 394 μs < Memory estimate: 412.80 KiB, allocs estimate: 6224. ``` > with this PR ``` @benchmark BaseBenchmarks.InferenceBenchmarks.@inf_call interp=interp pass(benchfunc1) BenchmarkTools.Trial: 10000 samples with 6 evaluations. Range (min … max): 5.444 μs … 1.808 ms ┊ GC (min … max): 0.00% … 99.01% Time (median): 5.694 μs ┊ GC (median): 0.00% Time (mean ± σ): 6.228 μs ± 25.393 μs ┊ GC (mean ± σ): 5.73% ± 1.40% ▄█▇▄ ▁▂▄█████▇▄▃▃▃▂▂▂▃▂▂▂▂▂▂▂▂▂▂▂▂▂▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁ ▂ 5.44 μs Histogram: frequency by time 7.47 μs < Memory estimate: 8.72 KiB, allocs estimate: 196. julia> @benchmark BaseBenchmarks.InferenceBenchmarks.@inf_call interp=interp pass(benchfunc2, rand(10), 0.5) BenchmarkTools.Trial: 10000 samples with 1 evaluation. Range (min … max): 211.000 μs … 36.187 ms ┊ GC (min … max): 0.00% … 0.00% Time (median): 223.000 μs ┊ GC (median): 0.00% Time (mean ± σ): 280.025 μs ± 750.097 μs ┊ GC (mean ± σ): 6.86% ± 7.16% █▆▄▂▁ ▁ ███████▇▇▇▆▆▆▅▆▅▅▅▅▅▄▅▄▄▄▅▅▁▄▅▃▄▄▄▃▄▄▃▅▄▁▁▃▄▁▃▁▁▁▃▄▃▁▃▁▁▁▃▃▁▃ █ 211 μs Histogram: log(frequency) by time 1.46 ms < Memory estimate: 374.17 KiB, allocs estimate: 5269. ``` 28 June 2024, 01:08:21 UTC
ca0b2a8 LAPACK: Avoid repr call in `chkvalidparam` (#54952) We were calling `repr` here to interpolate the character with the quotes into the error message. However, this is overkill for this application, and `repr` introduces dynamic dispatch into the call. This PR hard-codes the quotes into the string, which matches the pattern followed in the other error messages following `chkvalidparam`. 27 June 2024, 18:37:53 UTC
17e6e69 Make a few places resilient to inference not working (#54948) When working on Base, if you break inference (in a way that preserves correctness, but not precision), it would be nice if the system bootstrapped anyway, since it's easier to poke at the system if the REPL is running. However, there were a few places where we were relying on the inferred element type for empty collections while passing those values to callees with narrow type signatures. Switch these to comprehensions with declared type instead, so that even if inference is (temporarily) borked, things will still bootstrap fine. 27 June 2024, 18:18:33 UTC
d6dd59b Reduce branches in 2x2 and 3x3 stable_muladdmul for standard cases (#54951) We may use the knowledge that `alpha != 0` at the call site to hard-code `alpha = true` in the `MulAddMul` constructor if `alpha isa Bool`. This eliminates the `!isone(alpha)` branches in `@stable_muladdmul`, and reduces latency in matrix multiplication. ```julia julia> using LinearAlgebra julia> A = rand(2,2); julia> @time A * A; 0.596825 seconds (1.05 M allocations: 53.458 MiB, 5.94% gc time, 99.95% compilation time) # nightly v"1.12.0-DEV.789" 0.473140 seconds (793.52 k allocations: 39.946 MiB, 3.28% gc time, 99.93% compilation time) # this PR ``` In a separate session, ```julia julia> @time A * Symmetric(A); 0.829252 seconds (2.37 M allocations: 120.051 MiB, 1.98% gc time, 99.98% compilation time) # nightly v"1.12.0-DEV.789" 0.712953 seconds (2.06 M allocations: 103.951 MiB, 2.17% gc time, 99.98% compilation time) # This PR ``` 27 June 2024, 18:17:47 UTC
beb4f19 Revert "reflection: refine and accurately define the options for `names`" (#54959) It breaks over 500 packages: https://s3.amazonaws.com/julialang-reports/nanosoldier/pkgeval/by_date/2024-06/23/report.html 27 June 2024, 17:28:43 UTC
5163d55 Add optimised findall(isequal(::Char), ::String) (#54593) This uses the same approach as the existing findnext and findprev functions in the same file. The following benchmark: ```julia using BenchmarkTools s = join(rand('A':'z', 10000)); @btime findall(==('c'), s); ``` Gives these results: * This PR: 3.489 μs * 1.11-beta1: 31.970 μs 27 June 2024, 16:50:47 UTC
f3298ee SuiteSparse: Bump version (#54950) This should have no functional changes, however, it will affect the version of non-stdlib JLLs. I'd like to see if we can add this as a backport candidate to 1.11 since it doesn't change Julia functionality at all, but does allow some non-stdlib JLLs to be kept current. Otherwise at least the SPEX linear solvers and the ParU linear solvers will be missing multiple significant features until 1.12. 27 June 2024, 13:30:47 UTC