Revision 9c42df6e7474f9e6d33bb97813c921c57e94a499 authored by Shuhei Kadowaki on 05 November 2023, 12:56:50 UTC, committed by GitHub on 05 November 2023, 12:56:50 UTC
Currently call-site inlining fails on a freshly-inferred edge if its
source isn't inlineable. This happens because such sources are
cached only globally. For call-site inlining to succeed, these
sources also need to be cached locally, so that the inliner can
access them later.

To this end, the type of the `cache_mode` field of `InferenceState` has
been switched from `Symbol` to `UInt8`. This change allows it to encode
multiple caching strategies at once, and leaves room to introduce new
caching modes, e.g. `VOLATILE_CACHE_MODE`, in the future.
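
A minimal sketch of what such a bitflag encoding could look like; the
constant and helper names here are illustrative and not necessarily the
ones used in the compiler:

```julia
# Encode caching strategies as UInt8 bitflags so a single source can be
# cached both globally and locally at the same time.
const CACHE_MODE_NULL   = 0x00      # not cached at all
const CACHE_MODE_GLOBAL = 0x01 << 0 # cached in the global code cache
const CACHE_MODE_LOCAL  = 0x01 << 1 # cached locally for the inliner
# room for future modes, e.g. a hypothetical volatile cache:
# const CACHE_MODE_VOLATILE = 0x01 << 2

is_cached_globally(cache_mode::UInt8) = cache_mode & CACHE_MODE_GLOBAL != 0x00
is_cached_locally(cache_mode::UInt8)  = cache_mode & CACHE_MODE_LOCAL  != 0x00

# a source needed for call-site inlining can request both caches:
cache_mode = CACHE_MODE_GLOBAL | CACHE_MODE_LOCAL
@assert is_cached_globally(cache_mode) && is_cached_locally(cache_mode)
```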

@nanosoldier `runbenchmarks("inference", vs=":master")`
1 parent 816e31a