Project Leyden

Capturing Lightning in a Bottle

Mark Reinhold
Chief Architect, Java Platform Group, Oracle
John Rose
JVM Senior Architect, Java Platform Group, Oracle

JVM Language Summit

2023/8/8

Copyright © 2023, Oracle and/or its affiliates

Leyden: Goal

Improve the startup time, warmup time, and footprint
of Java programs

Leyden: Means

Shift computation temporally,
later and earlier in time
Constrain Java's natural dynamism,
to enable more and better shifting
Selectively, per the needs of each particular program
Compatibly, to preserve program meaning

Shifting computation
• We can shift two kinds of computation
  – Work expressed directly by a program (e.g., invoke a method)
  – Work done on behalf of a program (e.g., compile a method to native code)
• Java implementations already have features that can shift computation
  – Automatically: Compile-time constant folding (shifts EARLIER in time)
                   Garbage collection (LATER)
  – Or optionally: Ahead-of-time (AOT) compilation (EARLIER)
                   Pre-digested class-data archives (CDS) (EARLIER)
                   Lazy class loading and initialization (LATER)
  – Either way, always preserving program meaning per the Specification
• So as to ensure compatibility
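The two automatic shifts named above can be seen in a few lines of plain Java. This is a small illustrative sketch (class names are made up): the constant is folded by javac at compile time, while the nested class is initialized by the JVM only on first use.

```java
public class ShiftingDemo {
    // Shifted EARLIER: javac folds this constant expression at compile
    // time; the class file stores 42, not a multiplication.
    static final int FOLDED = 6 * 7;

    // Shifted LATER: the JVM defers initializing this class until its
    // first actual use, not at program startup.
    static class Lazy {
        static final long INIT_NANOS = System.nanoTime();
    }

    public static void main(String[] args) {
        long mainStart = System.nanoTime();
        System.out.println("FOLDED = " + FOLDED);   // Lazy not initialized yet

        // First use of Lazy triggers its <clinit> here, after main began.
        boolean initializedLater = Lazy.INIT_NANOS >= mainStart;
        System.out.println("Lazy initialized after main started: " + initializedLater);
    }
}
```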

Leyden will explore new ways to shift computation
• Some kinds of shifting will likely require no specification changes
  – E.g., expand lambdas into ordinary bytecode (EARLIER)
• Others will definitely require specification changes
  – E.g., eliminate dead code (stripping) (EARLIER)
• Yet others will be new platform features that allow developers
  to express temporal shifting directly in source code
  – E.g., lazy static final fields (LATER)
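Today's closest idiom to a lazy static final field is the holder-class pattern, which piggybacks on the JVM's lazy class initialization. The sketch below illustrates the LATER shift the slide describes; it is not the draft Computed Constants API, just the existing idiom with hypothetical names.

```java
public class LazyFieldDemo {
    static int computations = 0;

    // Holder-class idiom: Holder.<clinit> runs only on the first use of
    // VALUE, so the expensive work is shifted LATER, to first access.
    private static class Holder {
        static final String VALUE = expensiveComputation();
    }

    static String expensiveComputation() {
        computations++;          // stand-in for costly setup work
        return "ready";
    }

    public static String value() {
        return Holder.VALUE;     // first call triggers Holder's <clinit>
    }

    public static void main(String[] args) {
        System.out.println("computations before use: " + computations); // 0
        System.out.println(value());                                    // ready
        System.out.println(value());                                    // ready (cached)
        System.out.println("computations after use: " + computations);  // 1
    }
}
```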

Constraining dynamism
• Shifting computation often requires code analysis
  – But: Java's dynamic features make code analysis difficult
• We could simplify code analysis by imposing a closed-world constraint
  – Forbids dynamic class loading and severely limits reflection
  – Many applications don't work under this constraint
  – Many developers aren't willing to live with this constraint
• Leyden will therefore explore a spectrum of constraints,
  up to and including the closed-world constraint
  – Selectively degrade Java's natural dynamism
    to enable more and better shifting of computation
  – Developers can choose how to trade functionality for performance

Condensers: Tools for shifting & constraining computation
The key new concept of Leyden

• A condenser is a tool in the JDK that:
  – Performs some of the computation encoded in a program image
    • Thereby shifting it earlier in time
  – Transforms the image into a new, faster image that may contain:
    • New code (e.g., ahead-of-time compiled methods)
    • New data (e.g., serialized heap objects)
    • New metadata (e.g., pre-loaded classes)
    • New constraints (e.g., no class redefinition)

Key properties of condensers
• Condensers are meaning-preserving
  – The resulting image has the same meaning as the original
• Condensers are composable
  – The image output by one condenser can be the input to another
  – A particular condenser can be applied multiple times, if needed
• Condensers are selectable
  – Developers choose how to condense, and when
    • If you're testing or debugging, then don't bother — just run normally
  – Insofar as shifting computation requires accepting constraints, you can trade
    functionality for performance via the condensers that you choose
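The composability and selectability properties suggest that a condenser behaves like a function from program image to program image. The following is a purely hypothetical model (none of these types exist in the JDK) sketching how such image-to-image tools compose:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.UnaryOperator;

// Hypothetical model: an image is code/data/metadata plus the
// constraints it has accepted along the way.
record Image(List<String> contents, List<String> constraints) {
    Image plus(String content, String constraint) {
        var c = new ArrayList<>(contents);
        c.add(content);
        var k = new ArrayList<>(constraints);
        if (constraint != null) k.add(constraint);
        return new Image(List.copyOf(c), List.copyOf(k));
    }
}

// A condenser is meaning-preserving and composable: image in, image out.
interface Condenser extends UnaryOperator<Image> {
    default Condenser andThen(Condenser next) {
        return image -> next.apply(this.apply(image));
    }
}

class CondenserDemo {
    public static void main(String[] args) {
        Condenser aot = img -> img.plus("AOT-compiled methods", null);
        Condenser strip = img -> img.plus("dead code removed", "closed world");

        // Developers select which condensers to apply, and in what order.
        Image start = new Image(List.of("bytecode"), List.of());
        Image out = aot.andThen(strip).apply(start);

        System.out.println(out.contents());    // [bytecode, AOT-compiled methods, dead code removed]
        System.out.println(out.constraints()); // [closed world]
    }
}
```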

Performance is an emergent property
• The performance of your program depends upon
  the condensers that you choose
• Given sufficiently powerful condensers:
  – If you shift enough computation earlier or later in time,
    you might even be able to produce a fully-static native image
  – This will likely require accepting many constraints
• Leyden need not specify fully-static native images directly
  – Instead, it will enable sufficient shifting of computation
    and constraining of dynamism
  – Fully-static native images can fall out as an emergent property

Leyden roadmap
• Introduce condensers into the Java Platform
  – Evolve the Java Platform Specification to allow
    meaning-preserving whole-program transformations
  – Evolve the run-time image format to accommodate
    new code, data, and metadata
• Explore new ways to shift computation and constrain dynamism
• Explore related improvements

Leyden progress! [Link]/projects/leyden
• Introduce condensers
  – Toward Condensers (design note, Goetz, Reinhold, & Sandoz)
• Shift computation and constrain dynamism
  – Pre-generate lambda classes (prototype branch, Heidinga)
  – Condensing Indy Bootstraps (design note, Goetz)
  – Computed Constants (draft JEP, Minborg & Cimadamore)
  – Experiments in shifting speculative compilation (Rose et al.)
• Related improvements
  – Hermetic application packaging (prototype branch, Zhou)
  – JMOD-less linking (prototype, Gehwolf)

Static AOT vs. dynamic JIT… a dilemma
• The Java answer is never "Choose One, Lose One":
  Java balances both static and dynamic reasoning.
• HotSpot speculatively optimizes dynamic states,
  in effect converting them to static states.
• In Leyden, such optimizations can be shifted,
  speculatively optimizing before app. startup. (image from JVMLS 2010)

• Result: Users can drive startup time and warmup time into the noise,
  maintaining compatibility
  – No new constraints, no code change required

Concepts and metrics (some definitions)
• Startup activity is setup computation to get through a first task, used for all tasks.
• Startup time is therefore (at most) the time of the first task, less other tasks.
  Time[Startup] ≤ Time[Task 1] − Time[Task 2]
  Caveat: Not all apps have a repeatable representative task. Take with grains of salt…
• Warmup activity is optimization effort (by JVM, not app) to reach peak performance.
• Peak performance may be defined as a statistical maximum, minus variance (noise).
  (Noise is often in the 3-5% range: say peak is reached at 95% throughput or better.)
• Warmup time is therefore the time it takes to reach 95% or higher of eventual peak.
• Startup time may also be measured as time to reach 80% (Pareto…) of eventual peak.

(This is the startup time and warmup time we wish to drive into the noise.)
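The definitions above can be made concrete with a small calculation over per-task timings. The numbers below are made up for illustration; the formulas are the ones from the slide.

```java
import java.util.Arrays;

class MetricsDemo {
    // Upper bound on startup time: Time[Task 1] - Time[Task 2].
    static double startupBound(double[] taskMs) {
        return taskMs[0] - taskMs[1];
    }

    // Number of tasks executed before throughput reaches 95% of the
    // eventual peak, i.e. before per-task time falls to peak / 0.95.
    static int tasksUntilWarm(double[] taskMs) {
        double peak = Arrays.stream(taskMs).min().orElseThrow();
        int n = 0;
        while (taskMs[n] > peak / 0.95) n++;
        return n;
    }

    public static void main(String[] args) {
        // Illustrative per-task times in ms; task 1 includes startup work.
        double[] taskMs = {300, 180, 150, 120, 105, 101, 100, 100, 100, 100};
        System.out.println("startup bound (ms): " + startupBound(taskMs));        // 120.0
        System.out.println("tasks until 95% of peak: " + tasksUntilWarm(taskMs)); // 4
    }
}
```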

A tale of two graphs: What startup looks like today

[Chart: idealized model of time per single task (unspecific units) over 20 task
repetitions, comparing typical, ideal, and improved curves. <clinit> activity,
unique to the first iteration, is measured in CPU milliseconds; online JIT
activity for warmup contributes to the area between the typical and ideal
curves, measured in CPU seconds or minutes.]

A tale of two graphs: At larger scales it's all warmup

[Chart: the same idealized model, over 100 task repetitions; note the new time
scale, expanded from 20 measurements to 100.]

Challenge: Make startup/warmup faster at multiple scales

[Chart: the same idealized model over 100 task repetitions.
CHALLENGE: We wish to push the typical curve downward, closer to the ideal.
("Closer" simply means less total area between the curves.)]

HotSpot Tiered Compilation: A primer
• Tiers 0..4 are execution modes, transparent except for performance.
• Tier 0 is the JVM bytecode interpreter. It collects full profiling.
• Tier 1 is the simplest possible (C1) code. No profiling. Use is rare.
• Tier 2 is simple code with profiling at method entry (only). Limited use.
• Tier 3 is simple code which also collects full profiling. Spins up quickly.
• Tier 4 is optimized code which benefits from profiling, but collects none.
• Tier 4 may de-optimize on awkward inputs; lower Tiers may not.
• De-optimization is followed by further profiling, and re-optimization.
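Tier transitions can be observed from the command line with the real HotSpot flags `-XX:+PrintCompilation` (log each compilation) and `-XX:TieredStopAtLevel=1` (cap compilation at C1 for comparison). A minimal harness for watching per-iteration times fall as tiers kick in might look like this sketch:

```java
// Run with:  java -XX:+PrintCompilation WarmupDemo
// to see work() move through the tiers, or with
//            java -XX:TieredStopAtLevel=1 WarmupDemo
// to cap compilation at C1 and compare timings.
class WarmupDemo {
    // A hot method for the tiers to progressively optimize.
    static long work(int n) {
        long sum = 0;
        for (int i = 0; i < n; i++) sum += (long) i * i;
        return sum;
    }

    public static void main(String[] args) {
        for (int iter = 1; iter <= 20; iter++) {
            long t0 = System.nanoTime();
            long result = work(2_000_000);
            long micros = (System.nanoTime() - t0) / 1_000;
            // Early iterations run interpreted (Tier 0) or in C1 code
            // (Tiers 2-3, gathering profiles); later iterations typically
            // run faster in profile-guided Tier 4 code.
            System.out.printf("iter %2d: %6d us (sum=%d)%n", iter, micros, result);
        }
    }
}
```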

HotSpot Tiered Compilation: A primer (continued)

[Diagram: JVM execution modes and transitions in the standard HotSpot execution
policy; rare T1 states are omitted. From START, T0 (interpreter) handles inits
and collects full profiling; T2 (C1 JIT) handles inits with limited profiling;
T3 (C1 JIT) handles inits with full profiling; T4 (JIT) is compiled after inits,
uses profiles, and re-optimizes as needed. Deoptimization returns code to T0 for
reprofiling.]

Speedups are courtesy of HotSpot Tiered Compilation
• Startup is handled by slower Tiers 0..3, starting with the interpreter.
• Startup resolves symbols, runs class inits (<clinit>), runs indy BSMs.
• Warmup happens as code shifts from lower tiers to higher ones.
• First, lower tiers must gather profiles (execution paths and types).
• The JIT then uses those profiles to optimize Tier 4 code. This takes time.
• Peak is reached when (most) code stabilizes in the highest Tier 4.

Leyden can shift work to link, profile, initialize, & compile
• Leyden can shift the effort of collecting profiles and generating JIT code.
• An earlier run that gathers JVM information for Leyden is a training run.
• A later run that uses such information is called a deployment run.
• Some initialization states can be recorded for replay in deployment.
• Persistent profiles gathered in training can be applied in deployment.
• JIT code can be generated at startup from persistent profiles. (Helpful.)
• Or else, JIT code can be archived AOT for fast loading. (Very helpful!)

Key concepts: How training prepares for deployment
• Definition: A training run is a representative execution of an application.
• Typical inputs and config. drive training run startup through expected paths and states.
• Preferably, training run warmup (repetitive tasks) leads to peak performance.
• During training, the JVM gathers initial states, profiles, JIT code, into CDS and/or logs.
  Optionally, multiple training runs are executed, and resulting logs of data are merged.
• The application is then distilled (terminal condensation step) into the optimized version.
• Executing the optimized application is called a deployment run.
• The deployment run starts with initial states, benefits from archived profiles and code.
• Optionally auto-train: Hide these steps "under the hood" for continuous improvement.

If you can compose a system benchmark, you can compose a warmup training run.

The unreasonable effectiveness of training runs
• Java has always been both static and dynamic: Locally static, and globally dynamic.
• Training runs, which observe the app, are the dynamic "flip side" of static app analysis.
• We can use a Turing Machine (training) to exercise another T. M. (app), capturing
  reusable code and replayable profile and data states.
• Static views, though rich, tend to build models that are fragile, "needy" of constraints.
• Dynamic observations, if speculated, can be used as if they were statically deduced.
• This approach copes well with surprises during deployment. It is not surprising, since
  on-the-fly adaptation is Java's distinct strength. (Speculative techniques allow for
  unplanned futures; this is a HotSpot core competency.)
• During deployment, we invisibly re-optimize JIT states captured from the training run.

Our results are now promising enough to share
• We can now persist many states from training runs. (See next slide.)
  – Using these states improves startup/warmup times for many use cases.
• Speculating on these states is robust, giving balance points between
  all-static and all-dynamic solutions.
  – Many recorded states can be softly speculated, not firmly constrained.
  – With speculative optimizations, success is a habit, but failure is also an option.
• Online adaptive optimizations (JIT Tier 4) still get best performance.
  – We use HotSpot's JIT re-compilation to fill lingering performance gaps.
• Policy challenge: Combine the old and new tactics as needed, smoothly.

States we can capture from training
• Class file events and other historical data:
  load, link, initialize (<clinit>), JIT compiles.
• Resolution of API points and indy (stored in
  constant pool images in the CDS archive).
• Execution profiles, code (all tiers).
• (And more later, such as lazy states…)

Once captured, such data "looks static",
helping optimizations. But it was "born
dynamic". And it can change, triggering
re-optimization. (image from JVMLS 2010)

Can this be made safe and sane?
• Key idea: premain, a specified location for re-executing recorded actions.
  – Java defines main as the first event of an application.
  – Leyden adds premain, an earlier event.
  – Premain warms up the application, rebuilding recorded states from the training run.
• Some states which evolve from training runs are deemed safe to
  checkpoint. Other states are discarded.
  – The saved states from training are formalized as premain actions.
  – Some are irreversible, others are subject to speculation.
  – The JVM can (or might not) use CDS-like tech to set up these states before
    deployment. If it does, it appears that parts of premain "run super fast".

Archived code states arise from training run warmup
• Profiles from the end of a training run are preserved in CDS, for online re-optimization.
• Code is generated throughout the training run and stored in an archive for use with CDS.
• Code states include C1 (Tier 2 mainly) and optimized code (Tier 4).
• Code not classified as "hot" (i.e., used infrequently or only for setup) stays in Tier 2.
  Improves startup, as an alternative between the interpreter and aggressive optimization.
• Code is loaded from archive to online 5x to 500x faster than recompilation!
• Net savings can be seconds or even minutes, from online JIT avoidance.
• More savings from interpreter avoidance: The app. runs JIT code immediately.
  Archived code can de-opt into the interpreter for corner cases, but is seldom discarded.
• Using the best available archived code improves startup and warmup (time to peak).
• These savings are significant. They can make the 2nd task run fast like the 100th.

JVM execution modes and transitions (new)

[Diagram: JVM execution modes and transitions, as extended by Project Leyden;
heavy lines mark new modes & transitions, and rare T1 states are omitted. From
START, trained classes load into T2 (C1 AOT), which handles inits with limited
profiling; warmer methods move to T4 (AOT) after init dependencies are met
(init barriers), using profiles and re-optimizing as needed. Cooler methods and
untrained classes follow the default policy through T0 (interpreter), T2/T3
(C1 JIT), and background compilation to T4 (JIT), with deoptimization and
reprofiling as before.]

A tale of three graphs: Better startup

[Chart: idealized model of time per single task over 20 task repetitions,
comparing typical, ideal, and improved curves. Annotations:]
1. Classes are preloaded and prelinked.
2. Some indy/condy sites are pre-resolved.
3. Partially-optimized code with init-barriers
   is temporarily installed from archive until deps ripen.
4. Highly-optimized code is installed from archive after deps ripen.
5. Code may be reprofiled and reoptimized
   to better match deployment-time behavior.

A tale of three graphs: Better warmup

[Chart: the same idealized model and the same five improvements, shown over
100 task repetitions.]

A tale of three graphs: The long-term view

[Chart: the same idealized model and the same five improvements, shown as
cumulative time over 100 task repetitions.]

Case study: javac startup, warmup, peak
• Use javac to compile a series of 100 small source files. First task triggers startup.
  Each sub-task does it all again, and discards results. Repetition triggers warmup.
• States from the training run are stored in a CDS archive (with JIT cache).
  These AOT states include loaded metadata and selected resolution results.
  Indy resolutions include hidden class metadata and method handles in the Java heap.
  CDS does this already, but this use is broader and deeper — CDS has been
  underutilized!
• Init-barriers code (Tier 4, C2) traps ONCE to the interpreter to handle <clinit> events.
  Unlike default Tier 4, init-barriers code has a usable fast path after <clinit>.
• Regular archived code is loaded only when all <clinit> deps "ripen".
• When most or all archived code is installed, Tier 4 recompilation begins.
  CAVEAT: Work in progress. More tuning needed for machinery, policy, user model.

javac, the first few iterations

[Chart: ACTUAL MEASUREMENTS of baseline, premain, and ideal over the first
10 iterations, showing the startup spike and the beginning of warmup.]

javac, more iterations, showing start of recompilation

[Chart: baseline, premain, and ideal over 100 iterations, marking startup,
warmup, and the point where recompilation begins.]

javac, viewed cumulatively (total execution time)

[Chart: cumulative time for baseline, premain, and ideal over roughly 200
iterations. Tuning and policy work is needed to bring down the premain curve
further.]

javac, in the longest time scales, experiences GC noise

[Chart: baseline, premain, and ideal at millisecond scale, marking startup,
warmup, peak, and continual recompilation. The integral millisecond scale shows
quantization noise as well. The baseline policy is still champion, for now.
But it is early days.]

Lessons from javac case study
• It works: We can shift computation to premain, via CDS, back to training runs.
• There are lots of startup and warmup states to push back to premain.
• There is no one "magic bullet" technique. Let's keep hunting for more.
• Multiple time-scales (of warmup) are important. Let's try to chase them all.
• When AOT stuff can go wrong, use JIT re-optimization as a fallback.
• Machinery doesn't tune itself. Well-tuned policy is a way of life, not a possession.

Case study: JVM2008 XML validation benchmark

[Chart: average op time in milliseconds, sampled at one-second intervals over
30 seconds, lower is better; compares baseline, CDS only (no archived code),
and premain (with archived code). With premain, startup is instant (within the
first second).]

Lessons from XML validation case study
• Sometimes startup is the only interesting win; warmup is already OK for smaller apps.
• In this case, we can decisively improve startup, compared to the baseline policy.
• Benchmark noise can make it hard to decide where peak performance lies.
• Over time, the baseline policy still seems to win by a hair; this may need more work.

Case study: Spring Boot application framework startup

[Chart: Spring v3.1.2 Hello World boot times in seconds, average of 3 runs,
lower is better; measured both as wall clock from JVM boot and as Spring
self-measurement. Configurations compared for premain, with and without various
options: default (JDK 22), AppCDS only (no premain), premain (no indy or
clinit), premain (no clinit), premain, SBaot + PM (no indy, clinit), SBaot + PM
(no clinit), and SBaot + premain.]

Lessons from Spring Boot case study
• There are many tactics which can improve startup.
• We win big because the tactics all work in synergy (are not mutually exclusive).
• AppCDS is a win, back-shifting loading and resolution through premain.
  Code archiving is a further win, back-shifting JIT work through premain.
• Back-shifting indy resolution states unlocks further optimizations.
  The clever Tier 4 code which checks for <clinit> wins, but only a little.
• Low-level JVM metrics give a little more data than Spring self-reported time.

Current status of premain work: Fresh beginnings
• For now, premain activities are derived automatically from training runs.
• Optimizable states generated by premain are dumped into the CDS/JIT archive.
  (Other premain states are dropped, assumed to be reconstructed at deployment.)
• In the future, user-defined activities can march in this parade, as well.
  This requires work on characterizing which of those activities are trusted as pure.
• No loss of Java's natural dynamism, no new constraints on the programming model.
• Need user-friendly workflows (not flag soup): multi-run condensation, "auto-train".
  This requires more work on moving CDS/JIT states into log files and vice versa.
• Plenty of additional opportunity for performance tuning, policy integration, JIT work.

There is so much more we can do. Join us! [Link]/projects/leyden

Questions?

[Link]/projects/leyden