Grech PIC LesHouches2019
Particle-In-Cell simulation of plasmas
Mickael Grech, LULI, CNRS
mickael.grech@gmail.com
Les Houches, May 2019
References
Computational Electrodynamics, A. Taflove
Numerical Recipes, W. H. Press et al.
The Particle-In-Cell (PIC) method is a central tool for simulation
over a wide range of physics studies
Cosmology (source: K. Heitmann, Argonne National Lab)
Space propulsion, plasma thruster (source: Gauss Center for Supercomputing)
Our starting point is the Vlasov-Maxwell description
for a collisionless plasma

Plasma:
$\partial_t f_s + \frac{\mathbf{p}}{m_s \gamma_s} \cdot \nabla f_s + \mathbf{F}_L \cdot \nabla_{\mathbf{p}} f_s = 0$

Electromagnetic field:
$\nabla \cdot \mathbf{E} = \rho/\epsilon_0$
$\nabla \cdot \mathbf{B} = 0$
$\partial_t \mathbf{E} = -\mathbf{J}/\epsilon_0 + c^2\, \nabla \times \mathbf{B}$
$\partial_t \mathbf{B} = -\nabla \times \mathbf{E}$
with the Lorentz force and the source terms:
$\mathbf{F}_L = q_s \left( \mathbf{E} + \frac{\mathbf{p}}{m_s \gamma_s} \times \mathbf{B} \right)$
$\rho(t, \mathbf{x}) = \sum_s q_s \int d\mathbf{p}\, f_s(t, \mathbf{x}, \mathbf{p})$
$\mathbf{J}(t, \mathbf{x}) = \sum_s q_s \int d\mathbf{p}\, \frac{\mathbf{p}}{m_s \gamma_s}\, f_s(t, \mathbf{x}, \mathbf{p})$
1st Remark
Normalization: the Vlasov-Maxwell (relativistic) description
provides us with a set of natural units

In normalized units, the system reads:
$\partial_t f_s + \frac{\mathbf{p}}{m_s \gamma_s} \cdot \nabla f_s + \mathbf{F}_L \cdot \nabla_{\mathbf{p}} f_s = 0$
$\nabla \cdot \mathbf{E} = \rho$,  $\partial_t \mathbf{E} = -\mathbf{J} + \nabla \times \mathbf{B}$,  $\nabla \cdot \mathbf{B} = 0$,  $\partial_t \mathbf{B} = -\nabla \times \mathbf{E}$

Natural units (for a reference frequency $\omega_r$):
Velocity: $c$
Charge: $e$
Mass: $m_e$
Momentum: $m_e c$
Energy, Temperature: $m_e c^2$
Time: $\omega_r^{-1}$
Length: $c/\omega_r$
Number density: $n_r = \epsilon_0 m_e \omega_r^2 / e^2$
Current density: $e\, c\, n_r$
Pressure: $m_e c^2 n_r$
Electric field: $m_e c\, \omega_r / e$
Magnetic field: $m_e \omega_r / e$
Poynting flux: $m_e c^3 n_r / 2$
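As an illustration of this normalization, the reference units can be evaluated numerically once $\omega_r$ is chosen. A minimal sketch (the choice of $\omega_r$ as the frequency of a 0.8 µm laser is an illustrative assumption, not from the slides):

```python
import math

# physical constants (SI)
c = 2.99792458e8         # speed of light [m/s]
e = 1.602176634e-19      # elementary charge [C]
me = 9.1093837015e-31    # electron mass [kg]
eps0 = 8.8541878128e-12  # vacuum permittivity [F/m]

def reference_units(omega_r):
    """Natural units of the normalized Vlasov-Maxwell system."""
    return {
        "time": 1.0 / omega_r,                          # omega_r^-1  [s]
        "length": c / omega_r,                          # c/omega_r   [m]
        "number_density": eps0 * me * omega_r**2 / e**2,  # n_r       [m^-3]
        "electric_field": me * c * omega_r / e,         # [V/m]
        "magnetic_field": me * omega_r / e,             # [T]
    }

# illustrative choice: omega_r = frequency of a 0.8 um laser,
# so n_r is the corresponding critical density (~1.7e27 m^-3)
omega_r = 2.0 * math.pi * c / 0.8e-6
units = reference_units(omega_r)
```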
2nd Remark

$\sum_{p=1}^{N_s} w_p\, S(\mathbf{x} - \mathbf{x}_p)\, \big[ \partial_t \mathbf{p}_p - q_s \left( \mathbf{E} + \mathbf{v}_p \times \mathbf{B} \right) \big] = 0$
3rd Remark
If one does things in a smart way, only Maxwell-Ampère
& Maxwell-Faraday Eqs. need to be solved

Taking the divergence of Maxwell-Ampère gives $\partial_t \nabla \cdot \mathbf{E} + \nabla \cdot \mathbf{J} = 0$.
Combined with the continuity equation, one gets: $\partial_t \left( \nabla \cdot \mathbf{E} - \rho \right) = 0$

Hence Gauss's law remains satisfied at all times, provided it is satisfied initially and the current deposition conserves charge.
Initialization of a PIC simulation
2) loop over all particles and project charge and current density ($\rho$, $\mathbf{J}$)
onto the grid, from which the initial fields $\mathbf{E}$, $\mathbf{B}$ follow
4) add any (user-defined) external fields, provided they are divergence-free
The Particle-In-Cell loop
[diagram: sources $\rho$, $\mathbf{J}$ and fields $\mathbf{E}$, $\mathbf{B}$ exchanged between particles and grid at every timestep]
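The loop can be written schematically; the function names below are placeholders standing for the four steps detailed next, not the API of any actual PIC code:

```python
# Schematic PIC loop: the four steps executed at each timestep.
# All step functions are illustrative stubs that record the call order.
calls = []

def gather_fields(particles, E, B):    # Step 1: interpolate E, B at particle positions
    calls.append("gather")

def push_particles(particles, dt):     # Step 2: advance momenta & positions (e.g. Boris)
    calls.append("push")

def deposit_current(particles, grid):  # Step 3: project rho, J onto the grid
    calls.append("deposit")

def solve_maxwell(grid, dt):           # Step 4: advance E, B (e.g. FDTD)
    calls.append("maxwell")

def pic_step(particles, grid, E, B, dt):
    gather_fields(particles, E, B)
    push_particles(particles, dt)
    deposit_current(particles, grid)
    solve_maxwell(grid, dt)

for n in range(3):                     # three timesteps of the loop
    pic_step(None, None, None, None, 0.1)
```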
Step 1
Field gathering: interpolation at particle position
E, B
Step 1
Field gathering: interpolation at particle position
x
E, B E, B
Step 1
Field gathering: interpolation at particle position
x
E, B E, B
Step 1
Field gathering: interpolation at particle position
S(x)
x
E, B E, B
Step 1
Field gathering: interpolation at particle position
S(x)
x
E, B E, B
Step 1
Field gathering: interpolation at particle position
S(x)
x
E, B E, B
Z
(E, B)p ⌘ dx (E, B)(x) S(x xp )
Step 1
Field gathering: interpolation at particle position
S(x)
x
E, B E, B
Z
(E, B)p ⌘ dx (E, B)(x) S(x xp )
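For a linear (CIC) shape function in 1D, the gathering integral reduces to a two-point weighted average. A minimal sketch (uniform grid with fields at the nodes is an assumption for simplicity):

```python
import numpy as np

def gather_1d(field, x_p, dx):
    """Interpolate a node-centred grid field at particle position x_p
    with a linear (CIC) shape: (E)_p = sum_i E_i S(x_i - x_p)."""
    s = x_p / dx
    i = int(np.floor(s))         # left grid node
    w = s - i                    # fractional distance to the left node
    return (1.0 - w) * field[i] + w * field[i + 1]

dx = 0.1
E = np.linspace(0.0, 1.0, 11)    # linear field E(x) = x on [0, 1]
Ep = gather_1d(E, 0.37, dx)      # CIC is exact for a linear field
```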
Step 2
The Boris leap-frog pusher is a very popular method
to advance particles

Time staggering: positions $\mathbf{x}_p^{(n)}$ and fields $\mathbf{E}^{(n)}$, $\mathbf{B}^{(n)}$ live at integer times $n\,\Delta t$; momenta $\mathbf{u}_p^{(n-1/2)}$, $\mathbf{u}_p^{(n+1/2)}$ live at half-integer times $(n \pm \tfrac12)\,\Delta t$.

$\mathbf{u}_m = \mathbf{u}_p^{(n-1/2)} + \frac{q_s}{m_s} \frac{\Delta t}{2}\, \mathbf{E}_p$
$\mathbf{u}'_p = \mathbf{u}_m + \frac{q_s}{m_s}\, \Delta t\, M(\mathbf{B}_p)\, \mathbf{u}_m$
$\mathbf{u}_p^{(n+1/2)} = \mathbf{u}'_p + \frac{q_s}{m_s} \frac{\Delta t}{2}\, \mathbf{E}_p$

where $M(\mathbf{B}_p)$ denotes the Boris rotation about $\mathbf{B}_p$, whose angle involves the Lorentz factor $\gamma_m$.
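A sketch of the corresponding update for $\mathbf{u} = \gamma \mathbf{v}$ in normalized units; writing the rotation $M(\mathbf{B}_p)$ explicitly via the standard $\mathbf{t}$ and $\mathbf{s}$ vectors is one common way to implement it:

```python
import numpy as np

def boris_push(u, E, B, q_over_m, dt):
    """One Boris step u^(n-1/2) -> u^(n+1/2), with u = gamma*v
    (normalized units: c = 1)."""
    # first half electric acceleration
    um = u + 0.5 * q_over_m * dt * E
    # magnetic rotation (exactly norm-conserving)
    gamma_m = np.sqrt(1.0 + np.dot(um, um))
    t = 0.5 * q_over_m * dt * B / gamma_m
    s = 2.0 * t / (1.0 + np.dot(t, t))
    up = um + np.cross(um + np.cross(um, t), s)
    # second half electric acceleration
    return up + 0.5 * q_over_m * dt * E

# pure magnetic field: the rotation must conserve |u|, i.e. the energy
u = np.array([0.5, 0.0, 0.0])
B = np.array([0.0, 0.0, 1.0])
for _ in range(1000):
    u = boris_push(u, np.zeros(3), B, -1.0, 0.05)
```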
Step 3
Charge-conserving current deposition schemes are available,
among which Esirkepov's is the ‘most' popular

[diagram: particle shape moving from $\mathbf{x}_p^{(n)}$ to $\mathbf{x}_p^{(n+1)}$ across the grid]

In 2D & 3D, Esirkepov's method allows one to conserve charge (within machine precision):

$(J_{x,p})^{(n+1/2)}_{i+\frac12,j} = (J_{x,p})^{(n+1/2)}_{i-\frac12,j} + q_s w_p \frac{\Delta x}{\Delta t}\, (W_x)^{(n+1/2)}_{i+\frac12,j}$
$(J_{y,p})^{(n+1/2)}_{i,j+\frac12} = (J_{y,p})^{(n+1/2)}_{i,j-\frac12} + q_s w_p \frac{\Delta y}{\Delta t}\, (W_y)^{(n+1/2)}_{i,j+\frac12}$
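The essence of charge-conserving deposition can be shown in 1D, where the current is accumulated from the change of the particle's shape factors so that the discrete continuity equation holds exactly. A sketch assuming CIC shapes (this is the 1D analogue, not Esirkepov's full 2D/3D algorithm):

```python
import numpy as np

def cic_density(x_p, dx, n):
    """Charge density of a unit-weight particle with a linear (CIC) shape."""
    rho = np.zeros(n)
    s = x_p / dx
    i = int(np.floor(s))
    w = s - i
    rho[i] += (1.0 - w) / dx
    rho[i + 1] += w / dx
    return rho

def deposit_current_1d(x_old, x_new, dx, dt, n):
    """Charge-conserving 1D current: J_{i+1/2} accumulated so that
    d(rho)/dt + dJ/dx = 0 holds exactly on the grid."""
    drho = cic_density(x_new, dx, n) - cic_density(x_old, dx, n)
    return np.cumsum(-(dx / dt) * drho)   # J[i] stands for J_{i+1/2}

dx, dt, n = 0.1, 0.05, 16
x_old, x_new = 0.42, 0.47                 # particle moves less than one cell
J = deposit_current_1d(x_old, x_new, dx, dt, n)

# discrete continuity residual: zero to machine precision by construction
residual = (cic_density(x_new, dx, n) - cic_density(x_old, dx, n)) / dt \
         + np.diff(J, prepend=0.0) / dx
```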
A. Taflove, Computational Electrodynamics: The Finite-Difference Time-Domain Method, 3rd Ed. (2005)
Step 4
The Finite-Difference Time-Domain (FDTD) method is
a popular method for solving Maxwell's equations

Time staggering: $\rho^{(n)}$ and $\mathbf{E}^{(n)}$ live at integer times $n\,\Delta t$; $\mathbf{J}^{(n+1/2)}$ and $\mathbf{B}^{(n+1/2)}$ at half-integer times. Each step advances $\mathbf{E}^{(n)} \to \mathbf{E}^{(n+1)}$ and $\mathbf{B}^{(n+1/2)} \to \mathbf{B}^{(n+3/2)}$.
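A minimal 1D vacuum sketch of the leap-frog FDTD update (fields $E_y$, $B_z$, normalized units $c = 1$, periodic boundaries assumed):

```python
import numpy as np

n, dx = 200, 1.0
dt = 0.5 * dx                  # satisfies the 1D CFL condition c*dt <= dx
x = np.arange(n)
Ey = np.exp(-0.5 * ((x - 50) / 5.0) ** 2)  # Gaussian pulse at t = 0
Bz = np.zeros(n)               # staggered at half-integer positions/times

for _ in range(100):
    # Maxwell-Faraday: B^(n+1/2) from E^(n)
    Bz -= (dt / dx) * (np.roll(Ey, -1) - Ey)
    # Maxwell-Ampere (vacuum, J = 0): E^(n+1) from B^(n+1/2)
    Ey -= (dt / dx) * (Bz - np.roll(Bz, 1))
```

The initial pulse splits into two counter-propagating pulses of half amplitude; the scheme stays stable as long as the CFL condition holds.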
Nuter et al., Eur. Phys. J. D (2014); all papers by B. Godfrey, from the 70's up to now!!!
Step 4
Numerical analysis of the FDTD solvers gives you access to
the numerical dispersion relation & CFL condition

The numerical electromagnetic wave equation in a vacuum:
$\partial_t^N \mathbf{E} = +\nabla^N \times \mathbf{B}$
$\partial_t^N \mathbf{B} = -\nabla^N \times \mathbf{E}$
with: $\partial_t^N F = \Delta t^{-1} \left[ F^{(n+1/2)} - F^{(n-1/2)} \right]$, $\partial_\mu^N F = \Delta \mu^{-1} \left[ F_{i+\frac12} - F_{i-\frac12} \right]$

Using the standard technique to derive the wave equation leads to:
$\partial_{tt}^N \mathbf{E} - \sum_\mu \partial_{\mu\mu}^N \mathbf{E} = 0$

Injecting the plane-wave ansatz:
$(E_y)^{(n)}_{i,j+\frac12,k} = E_y^0 \exp\!\big\{ \imath \big[\, i\, k_x \Delta x + (j+\tfrac12)\, k_y \Delta y + k\, k_z \Delta z - n\, \omega \Delta t \,\big] \big\}$

The FDTD solver is subject to numerical dispersion, as the numerical light-wave velocity is found to depend on its wavenumber and orientation.
[figure: contours of $v_{ph}/c$ = 0.7, 0.8, 0.9, 0.99 in the $(k_x \Delta x,\, k_y \Delta y)$ plane]
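In 1D the resulting dispersion relation is $\sin^2(\omega \Delta t/2)/\Delta t^2 = \sin^2(k \Delta x/2)/\Delta x^2$, which can be solved for the numerical phase velocity; a sketch ($c = 1$):

```python
import numpy as np

def fdtd_phase_velocity(k, dx, dt):
    """Numerical phase velocity v_ph/c of the 1D FDTD scheme, from
    sin^2(w dt/2)/dt^2 = sin^2(k dx/2)/dx^2 (c = 1, requires dt <= dx)."""
    w = (2.0 / dt) * np.arcsin((dt / dx) * np.sin(0.5 * k * dx))
    return w / k

dx = 1.0
# at the 1D CFL limit dt = dx the scheme is dispersion-free ("magic timestep")
v_magic = fdtd_phase_velocity(np.pi / 2, dx, dt=1.0)
# below the CFL limit, short wavelengths travel slower than c
v_slow = fdtd_phase_velocity(np.pi / 2, dx, dt=0.5)
```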
A quick summary
The PIC approach in a nutshell
There are still a few things to know before running
your first PIC simulation

• noise is inherent to PIC codes
  electromagnetic fluctuations are inherent to a thermal plasma,
  but the level of noise is much exaggerated in PIC codes

• some numerical instabilities have to be taken care of carefully
  - numerical heating usually requires $\Delta x \lesssim \lambda_{De}$
  - numerical Cherenkov can also plague simulations with relativistically
    drifting particles

• PIC codes are usually very robust: beware of your results!
  A PIC code will most likely not crash, even if your simulation is
  complete nonsense!

• High-performance computing:
  getting ready for the super-computers
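The heating criterion $\Delta x \lesssim \lambda_{De}$ is easy to check before launching a run. A sketch in normalized units, where $\lambda_{De} = \sqrt{T/m_e c^2}$ in units of $c/\omega_{pe}$ (the 1 keV temperature below is an illustrative choice):

```python
import math

def debye_length(T_mec2):
    """Debye length in units of c/omega_pe, for a temperature in m_e c^2:
    lambda_De = v_th / c = sqrt(T / m_e c^2)."""
    return math.sqrt(T_mec2)

def resolves_debye(dx, T_mec2):
    """True if the grid satisfies the heating criterion dx <~ lambda_De."""
    return dx <= debye_length(T_mec2)

# a 1 keV plasma: T = 1e3/511e3 m_e c^2, lambda_De ~ 0.044 c/omega_pe
ok = resolves_debye(dx=0.02, T_mec2=1e3 / 511e3)
bad = resolves_debye(dx=0.10, T_mec2=1e3 / 511e3)
```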
Parallelization is mandatory for large-scale PIC simulation

[figure: electron density $n_e/n_r$ at $t = 220\,\Omega_{ci}^{-1}$, $t = 370\,\Omega_{ci}^{-1}$ and $t = 800\,\Omega_{ci}^{-1}$; axes $x\,\omega_{pi}/c$, $y\,\omega_{pi}/c$]

Simulation box: $1280\, c/\omega_{pi} \times 256\, c/\omega_{pi}$, i.e. $25600 \times 10240$ PIC cells
Run up to $t = 800\,\Omega_{ci}^{-1}$: $N_t \sim 9.5 \times 10^5$ timesteps
Required simulation time: 14 000 000 hours ~ 1600 years!!!
Solution: share the work on 16384 CPUs!!!
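The arithmetic behind these numbers, as a quick sketch:

```python
# The slide's estimate: ~14 000 000 CPU-hours, shared over 16384 CPUs.
total_cpu_hours = 14_000_000
n_cpus = 16_384

serial_years = total_cpu_hours / 24 / 365   # ~1600 years on a single CPU
wall_hours = total_cpu_hours / n_cpus       # ~850 hours when parallelized
wall_days = wall_hours / 24                 # ~36 days of wall-clock time
```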
High-performance computing: new paradigms & challenges

Tianhe-2: 34 PF, 17 MW

Parallelism*: massive, hybrid MPI-OpenMP, dynamic (load balance)
Memory: shared vs. distributed, cache use
Vectorization**: SIMD
Parallel I/O: hdf5, openPMD
My Simulation (LWFA)
Step 1: Parallelization
PIC codes are well adapted to massive parallelism
My Super-Computer
My Simulation (LWFA)
CE
computing element
CE-0 CE-1
CE-2 CE-3
CE-4 CE-5
CE-6 CE-7
Step 1: Parallelization
PIC codes are well adapted to massive parallelism
My Super-Computer
My Simulation (LWFA)
CE
computing element
CE-0 CE-1
CE-2 CE-3
CE-4 CE-5
CE-6 CE-7
Domain Decomposition
Step 1: Parallelization
PIC codes are well adapted to massive parallelism
My Super-Computer
My Simulation (LWFA)
CE
computing element
CE-0 CE-1
CE-2 CE-3
CE-4 CE-5
CE-6 CE-7
Domain Decomposition
Step 1: Parallelization
PIC codes are well adapted to massive parallelism
My Super-Computer
My Simulation (LWFA)
CE
computing element
CE-0 CE-1
CE-2 CE-3
CE-4 CE-5
CE-6 CE-7
Domain Decomposition
Step 1: Parallelization
PIC codes are well adapted to massive parallelism
My Super-Computer
My Simulation (LWFA)
CE
computing element
CE-0 CE-1
CE-2 CE-3
CE-4 CE-5
CE-6 CE-7
Domain Decomposition
Step 1: Parallelization
PIC codes are well adapted to massive parallelism
My Super-Computer
My Simulation (LWFA)
CE
computing element
CE-0 CE-1
CE-2 CE-3
CE-4 CE-5
CE-6 CE-7
Domain Decomposition
Step 1: Parallelization
PIC codes are well adapted to massive parallelism
My Super-Computer
My Simulation (LWFA)
CE
computing element
CE-0 CE-1
MPI
Message
Passing
CE-2 CE-3
Interface
CE-4 CE-5
CE-6 CE-7
Domain Decomposition
Step 1: Parallelization
PIC codes are well adapted to massive parallelism
My Super-Computer
My Simulation (LWFA)
CE
computing element
CE-0 CE-1
MPI
Message
Passing
CE-2 CE-3
Interface
CE-4 CE-5
CE-6 CE-7
Domain Decomposition
Step 1: Parallelization
PIC codes are well adapted to massive parallelism
My Super-Computer
My Simulation (LWFA)
CE
computing element
CE-0 CE-1
MPI
Message
Passing
CE-2 CE-3
Interface
CE-4 CE-5
CE-6 CE-7
Domain Decomposition
Shared memory
Step 1: Parallelization
PIC codes are well adapted to massive parallelism
My Super-Computer
My Simulation (LWFA)
CE
computing element
CE-0 CE-1
MPI
Message
Passing
CE-2 CE-3
Interface
CE-4 CE-5
CE-6 CE-7
My Super-Computer
My Simulation (LWFA)
CE
computing element
CE-0 CE-1
MPI
Message
Passing
CE-2 CE-3
Interface
CE-4 CE-5
CE-6 CE-7
Domain Decomposition
Patch Decomposition Shared memory
Step 1: Parallelization
PIC codes are well adapted to massive parallelism
Step 1: Parallelization
Hybrid parallelization significantly improves performance
[Figure: time to compute 100 iterations [s] vs. iteration number; the time of bubble formation is marked]
Step 1: Parallelization
Dynamic load balancing further improves performance
64 MPI x 12 OMP, same configuration with dynamic load balancing: speed-up x1.6
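The mechanism can be sketched with a deliberately simplified greedy rule (a hypothetical illustration, not the scheme of any particular code): the ordered patch list is re-cut into contiguous chunks so that each process carries roughly the same number of macro-particles.

```python
def balance(load_per_patch, n_procs):
    """Cut the ordered patch list into n_procs contiguous chunks of
    approximately equal total load (e.g. macro-particles per patch)."""
    total = sum(load_per_patch)
    target = total / n_procs          # ideal load per process
    owner, rank, cum = [], 0, 0.0
    for load in load_per_patch:
        # move to the next process once its share of the load is reached
        if rank < n_procs - 1 and cum >= (rank + 1) * target:
            rank += 1
        owner.append(rank)
        cum += load
    return owner

# A dense region (the first two patches) ends up owned with fewer
# companion patches, so per-process work evens out:
print(balance([100, 100, 10, 10, 10, 10, 10, 10], 2))
# [0, 0, 1, 1, 1, 1, 1, 1]
```

A static split into equal patch counts would give the first process 220 units of work and the second 40; the load-aware cut reduces that imbalance.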
Step 1: Parallelization
Hybrid + Dynamic Load Balancing: speed-up x8
Step 2: Vectorization
Vectorization in a nutshell: speed-ups of x2.6 to x3
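A minimal sketch of what vectorization buys, using NumPy arrays as a stand-in for SIMD hardware: the same (free-streaming) particle push written per-particle and as a single array operation. The numbers are illustrative only.

```python
import numpy as np

def push_scalar(x, v, dt):
    """Advance positions one particle at a time (scalar loop)."""
    out = np.empty_like(x)
    for i in range(len(x)):
        out[i] = x[i] + v[i] * dt
    return out

def push_vector(x, v, dt):
    """Advance all positions in one array operation (vectorized)."""
    return x + v * dt

rng = np.random.default_rng(0)
x = rng.random(1000)
v = rng.random(1000)
# both forms give identical results; the vectorized one processes
# many particles per instruction
assert np.allclose(push_scalar(x, v, 0.1), push_vector(x, v, 0.1))
```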
Step 2: Vectorization
Laser-driven hole-boring
[Figure: a) normalized electron density ne/nc vs. X, Y, Z (in c/ω); b) fronts of the left and right flows, density filaments, and the isotropization region]
Collisions can be introduced using an ad hoc Monte-Carlo module
[Figure: random pairing of macro-particles from species 1 and species 2]
Nanbu, Phys. Rev. E 55, 4642 (1997); J. Comp. Phys. 145, 639 (1998)
F. Pérez et al., Phys. Plasmas 19, 083104 (2012)
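A toy version of the pairing step, under loudly stated assumptions: 2D velocities, equal masses, and an isotropic placeholder angle where the Nanbu scheme would sample a cumulative scattering-angle distribution. The point is the structure (random pairing, centre-of-mass rotation) and the conservation properties, not the physics of the angle.

```python
import numpy as np

def collide(v1, v2, rng):
    """Randomly pair species-1 and species-2 macro-particles and rotate
    each pair's relative velocity (2D, equal masses)."""
    n = min(len(v1), len(v2))
    i1 = rng.permutation(len(v1))[:n]
    i2 = rng.permutation(len(v2))[:n]
    for a, b in zip(i1, i2):
        vcm = 0.5 * (v1[a] + v2[b])           # centre-of-mass velocity
        vrel = v1[a] - v2[b]
        theta = rng.uniform(0, 2 * np.pi)     # placeholder scattering angle
        c, s = np.cos(theta), np.sin(theta)
        vrel = np.array([c * vrel[0] - s * vrel[1],
                         s * vrel[0] + c * vrel[1]])
        v1[a] = vcm + 0.5 * vrel              # momentum and energy conserved
        v2[b] = vcm - 0.5 * vrel

rng = np.random.default_rng(1)
v1 = rng.normal(size=(8, 2))
v2 = rng.normal(size=(8, 2))
p0 = v1.sum(axis=0) + v2.sum(axis=0)
e0 = (v1**2).sum() + (v2**2).sum()
collide(v1, v2, rng)
assert np.allclose(v1.sum(axis=0) + v2.sum(axis=0), p0)  # momentum conserved
assert np.isclose((v1**2).sum() + (v2**2).sum(), e0)     # energy conserved
```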
PIC codes are then able to treat purely collisional processes
Thermalization (hydrogen plasma) and isotropization (electron plasma)
J. Derouillat et al., SMILEI: a collaborative, open-source, multi-purpose PIC code for plasma simulation, to be submitted (available upon request)
Similarly, field and collisional ionization can be treated using a Monte-Carlo approach
[Figure b): populations of the charge states Z⋆ = 3 and Z⋆ = 4 vs. ct/λ0]
R. Nuter et al., Phys. Plasmas 18, 033107 (2011); F. Pérez et al., Phys. Plasmas 19, 083104 (2012)
J. Derouillat et al., SMILEI: a collaborative, open-source, multi-purpose PIC code for plasma simulation, to be submitted (available upon request)
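The Monte-Carlo test itself can be sketched as follows. The constant rate W used here is a hypothetical stand-in for the field- or collision-dependent rate of the references above; each time step, an ion is promoted with probability p = 1 − exp(−W Δt).

```python
import numpy as np

def ionization_step(z_star, rate, dt, rng):
    """One Monte-Carlo ionization step: each ion is promoted by one
    charge state with probability p = 1 - exp(-rate*dt)."""
    p = 1.0 - np.exp(-rate * dt)
    promoted = rng.random(z_star.shape) < p
    return z_star + promoted

rng = np.random.default_rng(0)
z = np.full(10_000, 3)                # start all ions at Z* = 3
for _ in range(5):
    z = ionization_step(z, rate=0.5, dt=0.2, rng=rng)

# the surviving Z* = 3 fraction should approach exp(-rate*dt*5) = exp(-0.5)
frac3 = np.mean(z == 3)
```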
Adding Quantum Electrodynamics (QED) effects is also
very interesting for forthcoming multi-petawatt facilities

High-Energy Photon Production
Nonlinear Thomson and Compton scattering: e + n γL → e + γh
Bremsstrahlung: e + Z⁺ → e + Z⁺ + γh

Pair Production
γh + n γL → e⁻ + e⁺
Pair production in a strong Coulomb field: γh + Z⁺ → Z⁺ + e⁻ + e⁺
Outline
Relativistically-Induced Transparency
PIC codes are an excellent tool to support theoretical modelling
Even 1D simulations can bring deep insight into the physics at play
E. Siminos et al., Phys. Rev. E 86, 056404 (2012); A. Grassi et al., Phys. Rev. E (in press)
PIC codes can help design & interpret experimental campaigns
2D and 3D simulations on super-computers will be necessary here
High-harmonic generation & electron acceleration from laser-solid interaction
G. Bouchard, F. Quéré, CEA/IRAMIS; A. Sävert et al., Phys. Rev. Lett. 115, 055002 (2015)
PIC codes are very versatile: they can be applied to a wide range
of physical scenarios, from laser-plasma interaction to astrophysics
Check out SMILEI!
Code, diagnostics/visualization tools, and tutorials available online:
http://www.maisondelasimulation.fr/smilei