[Figure: iterative synthesis-parameter tuning flow. An initial tuning/search step creates scenarios from a set of primitives (size = k); the main tuning loop submits k synthesis jobs, collects results for the decision engine, and ranks scenarios by estimated cost; a learning/results stage checks exit criteria, derives rules (combination order = 3), runs a sensitivity test (10 scenarios in parallel, 1-hot for each primitive), gives feedback to the synthesis tool team, and feeds a background archiving loop with a meta-analysis archive.]
[Figure: parallel LSPD flow runs. S2: Evaluate: EDA tools produce QoR; statistics and cost analysis over area, power, and delay. S3: Model update: GP regression builds a surrogate model. The archive is populated from many iterative DSE runs of parameter tuning, and the archived QoR results are reused for training the recommender system. Multiplied across configurations, macros per chip, tapeouts per chip, chips per technology, and technology nodes.]
Fig. 7.: The diagram of the workflow proposed in [34].
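The GP-regression model update (S3) in the workflow above can be sketched as follows. This is a minimal illustration, not the implementation of [34]: the RBF kernel, its hyper-parameters, and the toy data are all assumptions.

```python
# Sketch of a GP surrogate for QoR prediction; kernel and data are illustrative.
import numpy as np

def rbf_kernel(A, B, length_scale=1.0, variance=1.0):
    """Squared-exponential kernel between two sets of parameter vectors."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return variance * np.exp(-0.5 * d2 / length_scale**2)

def gp_predict(X_train, y_train, X_test, noise=1e-6):
    """Posterior mean and variance of a zero-mean GP at X_test."""
    K = rbf_kernel(X_train, X_train) + noise * np.eye(len(X_train))
    Ks = rbf_kernel(X_train, X_test)
    Kss = rbf_kernel(X_test, X_test)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))  # K^{-1} y
    mean = Ks.T @ alpha
    v = np.linalg.solve(L, Ks)
    var = np.diag(Kss) - (v * v).sum(axis=0)
    return mean, var

# Evaluated flow runs: parameter configurations -> an observed QoR metric.
X = np.array([[0.0], [0.5], [1.0]])
y = np.array([1.0, 0.2, 0.8])
# Predict QoR (with predictive uncertainty) for unevaluated configurations.
mu, var = gp_predict(X, y, np.array([[0.25], [0.75]]))
```

The predictive variance returned here is what enables the uncertainty-aware sampling discussed in the text.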
[Figure: the recommender-system-based tuning flow. Offline: an archive data filter and training with hyper-parameters produce a trained model. Online: macro recommendation with a cost function yields recommended scenarios. E.g., selected archived data: > 320K LSPD flow runs from > 2.5K iterative tuning runs across multiple 14nm chip designs and tapeouts.]

Fig. 5.: The sketch of the framework proposed in [36].

… Gaussian process (GP) models. The learning model is then used to predict QoR metric values for parameter configurations that have not been evaluated by the synthesis tool. The sampling step selects points for the fine-tuning of the learning models. During sampling, based on the predictions (sometimes with predictive uncertainties), some Pareto-optimal-driven frameworks [4,7] do additional work, namely classification of the inputs.
They iteratively discard points that are either redundant or suboptimal, terminating when no more points can be removed. With high probability, the remaining points define an optimal or Pareto set of the given parameter space.
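The iterative discarding described above can be sketched as a simple dominance filter. This is a minimal illustration; the actual classifiers in [4,7] are more elaborate.

```python
# Pareto filtering over predicted QoR vectors, lower is better in each metric.
def dominates(a, b):
    """a dominates b if a is no worse in every metric and better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_set(points):
    """Discard dominated (suboptimal) and duplicate (redundant) points;
    the survivors form the Pareto set."""
    kept = []
    for p in points:
        if any(dominates(q, p) for q in points if q != p):
            continue  # suboptimal: dominated by another candidate
        if p in kept:
            continue  # redundant: duplicate of an already kept point
        kept.append(p)
    return kept

# Example: (area, delay) predictions for four candidate configurations.
cands = [(1.0, 3.0), (2.0, 2.0), (1.5, 2.5), (2.5, 2.5)]
front = pareto_set(cands)  # (2.5, 2.5) is dominated by (2.0, 2.0) and removed
```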
[Figure: RL-based tuning loop. The agent observes the current state st (netlist, placement, parameters), acts on the tool environment, receives the reward Rt = −HPWL, and updates the current state.]

A state-of-the-art Bayesian optimization technique with a weighted and summed cost function is exploited.
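A weighted and summed cost function of this kind can be sketched as follows; the metric names, weights, and baseline normalization are assumptions for illustration, not the exact formulation used by the cited work.

```python
# Scalarize multi-metric QoR into one cost for the optimizer (lower is better).
def weighted_cost(qor, weights, baselines):
    """Normalize each QoR metric by a baseline run, then sum with weights."""
    return sum(weights[m] * qor[m] / baselines[m] for m in weights)

# Hypothetical QoR of one flow run versus a baseline run.
qor = {"area": 1200.0, "power": 0.9, "delay": 1.1}
base = {"area": 1000.0, "power": 1.0, "delay": 1.0}
w = {"area": 0.3, "power": 0.3, "delay": 0.4}
cost = weighted_cost(qor, w, base)  # 0.3*1.2 + 0.3*0.9 + 0.4*1.1 = 1.07
```

Normalizing by a baseline keeps metrics with different units (area, power, delay) on comparable scales before the weighted sum.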