
MNRAS 497, 4580–4586 (2020) doi:10.1093/mnras/staa2269
Advance Access publication 2020 August 5

Laboratory quantification of a plenoptic wavefront sensor with extended objects

Zhentao Zhang,1,2⋆ Nazim Bharmal,2 Tim Morris2 and Yonghui Liang1⋆
1 College of Advanced Interdisciplinary Studies, National University of Defense Technology, Deya Road, 410073 Changsha, China

Downloaded from https://academic.oup.com/mnras/article/497/4/4580/5881328 by Instituto de Astrofisica de Canarias user on 21 April 2023


2 Centre for Advanced Instrumentation, Department of Physics, University of Durham, South Road, DH1 3LE Durham, UK

Accepted 2020 July 2. Received 2020 June 15; in original form 2020 April 3

ABSTRACT
Adaptive optics (AO) is widely used in ground-based telescopes to compensate for the effects of atmospheric distortion, and the wavefront sensor is a key component of AO systems. The plenoptic wavefront sensor has been proposed as an alternative wavefront sensor suited to extended objects and wide fields of view. In this paper, an experimental bench has been set up to investigate the slope measurement accuracy and closed-loop wavefront correction performance for extended objects. The experimental results confirm that the plenoptic wavefront sensor is suitable for extended-object wavefront sensing given a proper optical design. The slope measurements show good linearity and accuracy when observing extended objects, and the image quality is significantly improved after closed-loop correction. A method of global tip/tilt measurement using only plenoptic wavefront sensor frames is also proposed, which is a potential advantage of the plenoptic wavefront sensor in extended-object wavefront sensing.
Key words: instrumentation: adaptive optics.

⋆ E-mail: zhangzhentao09@nudt.edu.cn (ZZ); yonghuiliang@sina.com (YL)

© 2020 The Author(s)
Published by Oxford University Press on behalf of the Royal Astronomical Society

1 INTRODUCTION

The images obtained by ground-based telescopes are always degraded by atmospheric turbulence. Adaptive optics (AO) has been widely used to compensate for this image degradation (Hardy 1998). In general, the wavefront distortion produced by turbulence is detected by a wavefront sensor and corrected by a deformable mirror (DM; Tyson 2015).

In astronomical AO systems, the Shack–Hartmann wavefront sensor (SHWS) is most commonly used because of its effectiveness in measuring local wavefront slopes when observing point-like sources (Neal, Copland & Neal 2002). For extended objects, the correlation-based SHWS is one reliable option (Sidick et al. 2008). Instead of forming a spot (the image of a point-like source), the lenslets form low-resolution images of the observed object. However, the field of view (FoV) of each lenslet is restricted for two reasons (Poyneer 2003). First, the FoV should be large enough to contain the whole object. Secondly, the FoV needs to be small enough that the subimages do not overlap with each other. These are two conflicting restrictions in a fixed set of optics. The plenoptic camera has been proposed as an alternative wavefront sensor in recent years and is attracting increasing attention. With a proper optical design, it can avoid overlap between subaperture images regardless of the size of the object (Ng et al. 2006), which is a significant advantage in wide-FoV or extended-object wavefront sensing. A lot of research has been conducted since Clare and Lane described how to use the plenoptic camera for slope measurement (Clare & Lane 2005). Universidad de La Laguna has patented the CAFADIS camera as a new wavefront sensor (Rodríguez-Ramos et al. 2008) and has also investigated its capability as a wavefront sensor in LGS-based AO systems (Rodríguez-Ramos et al. 2010) and in solar AO (Rodríguez-Ramos et al. 2009, 2012). Besides, some simulation work has been done to analyse the slope measurement accuracy and the open- and closed-loop performance of the plenoptic camera (Rodríguez-Ramos et al. 2012; Jiang et al. 2016a, b), but few laboratory or test-bench results have been published.

In this paper, a test bench for an extended-object closed-loop AO system using a plenoptic wavefront sensor is established, some important optical design parameters are discussed, and the closed-loop results are presented and analysed. We also propose a way to use only the plenoptic sensor to measure both the global slope and the local slope simultaneously. The layout of this paper is as follows. In Section 2, we describe the basic principles of plenoptic wavefront sensing and the closed-loop control method. In Section 3, the slope measurement results and closed-loop results are presented and analysed. In Section 3.4, we draw our conclusions.

2 PRINCIPLE OF PLENOPTIC WAVEFRONT SENSOR AND RECONSTRUCTION

As shown in Fig. 1, the plenoptic wavefront sensor places a lenslet array at the back focal plane of the objective lens and the detector at the back focal plane of the lenslet array. When the f-number of the lenslet units matches the f-number of the objective lens, the plenoptic sensor can both avoid overlap and make the best use of the detector pixels.




Figure 1. The principle diagram of plenoptic wavefront sensor.

2.1 Slope estimation

Each subaperture image can be thought of as the re-imaged pupil on the detector seen from a different point of view: each coordinate in a subaperture image corresponds to a certain position on the pupil plane, and different subaperture images correspond to different incident angles of the incoming light. The plenoptic sensor image I(i, j) can be decomposed into the 4D discrete plenoptic function I(u, v, x, y), where (u, v) is the location of the lenslet in the lenslet array and (x, y) is the pixel's relative coordinate within the subaperture image. By collecting all the pixels with the same (x, y) and ordering them by their (u, v) coordinates, a recomposed subaperture image I_{x,y}(u, v) is generated (Rodríguez-Ramos et al. 2009), and the local slope (S^x_{x,y}, S^y_{x,y}) can be derived with an appropriate slope estimation method.
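The rearrangement from the raw plenoptic frame to recomposed subaperture images is a pure re-indexing, and can be sketched in a few lines of numpy. This is a minimal illustration, assuming row-major lenslet tiling and the bench geometry used later in this paper (28 × 28 lenslets, 30 × 30 pixels per subimage); the function name is ours.

```python
import numpy as np

def recompose(plenoptic, n_lens=28, n_pix=30):
    """Rearrange a plenoptic frame I(u*n_pix + x, v*n_pix + y) into
    recomposed subaperture images I_{x,y}(u, v).

    plenoptic : 2D array of shape (n_lens*n_pix, n_lens*n_pix)
    returns   : 4D array indexed as out[x, y, u, v]
    """
    I = plenoptic.reshape(n_lens, n_pix, n_lens, n_pix)  # axes (u, x, v, y)
    return I.transpose(1, 3, 0, 2)                       # axes (x, y, u, v)
```

Each slice `out[x, y]` is then one n_lens × n_lens recomposed subaperture image, i.e. the scene viewed through one pupil position.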
Our work aims at analysing the performance of the plenoptic camera with extended objects, so we picked a cross-correlation algorithm as our slope estimation method. Cross-correlation slope estimation has two steps. First, calculate the cross-correlation between the reference and the recomposed subaperture image to obtain the correlation image. The next step is to find its peak intensity location, which indicates the shift between the reference and the distorted images. To obtain subpixel shift accuracy, a windowed, thresholded centre-of-mass measurement is applied, because of its better centroiding accuracy compared with a windowed 2D parabolic fit under high signal-to-noise ratio (Townson, Kellerer & Saunter 2015).
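The two steps above can be sketched as follows. This is a simplified illustration, not the authors' code: it uses an FFT-based circular cross-correlation and a windowed, thresholded centre-of-mass refinement of the correlation peak; window size and threshold handling are our assumptions.

```python
import numpy as np

def correlate(ref, img):
    # FFT-based circular cross-correlation of two equal-size images
    R = np.fft.fft2(ref)
    I = np.fft.fft2(img)
    c = np.fft.ifft2(np.conj(R) * I).real
    return np.fft.fftshift(c)        # zero shift at the array centre

def subpixel_peak(c, half=2, thresh=0.0):
    """Integer peak location, refined by a windowed, thresholded
    centre of mass; returns the (dy, dx) shift in pixels."""
    py, px = np.unravel_index(np.argmax(c), c.shape)
    win = c[py - half:py + half + 1, px - half:px + half + 1].copy()
    win -= win.min() + thresh * (win.max() - win.min())  # threshold
    win[win < 0] = 0
    ys, xs = np.mgrid[-half:half + 1, -half:half + 1]
    cy = (win * ys).sum() / win.sum()
    cx = (win * xs).sum() / win.sum()
    return py + cy - c.shape[0] // 2, px + cx - c.shape[1] // 2
```

For a recomposed subaperture image shifted relative to the reference, the returned (dy, dx) is proportional to the local wavefront slope.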
2.2 Wavefront control

The control system is the vital link between the wavefront sensor and the DM in AO systems. Via a suitable reconstruction algorithm, the control system converts the slope measurements into DM control signals to compensate for the wavefront error.

There are several kinds of reconstruction algorithms in practical AO systems, such as zonal and modal algorithms. Mirror-eigenmode-based modal reconstruction is used in this paper because of the orthogonality between the eigenmodes.

The eigenmodes can be derived from the interaction matrix, which describes the linear dependence between the wavefront gradient measurements and the voltages applied to each DM actuator (Li et al. 2006). Using singular value decomposition, the interaction matrix I can be decomposed as:

I = UΣV^T    (1)

The V matrix consists of the DM eigenmode weighting vectors. Assume we drive each DM actuator with the same voltage amplitude V0 to generate the interaction matrix; the relationship between the control voltages and the DM eigenmodes can then be presented as:

M = V0 V^T    (2)

where M = [m1, m2, ···, mn]^T is the eigenmode coefficient matrix, n is the number of DM actuators used in the system, and mi is the normalized coefficient vector of the ith eigenmode. The interaction vector of the ith eigenmode can be obtained by poking the DM actuators using mi as the control voltage signal. Combining the interaction vectors of all eigenmodes in order, we then have the complete interaction matrix. The main characteristics of the DM eigenmodes are as follows:

(a) the eigenmodes are orthogonal to each other;
(b) the number of eigenmodes equals the number of actuators used in the system;
(c) the eigenmodes are arranged by spatial frequency, from lower to higher.

The whole wavefront control procedure can be divided into four steps. First, measure the interaction matrix I and derive the eigenmode coefficient matrix M. Secondly, generate the new interaction matrix IEM according to M. Then, calculate the pseudo-inverse matrix C of IEM using singular value decomposition; C is also called the control matrix in this paper. Lastly, the DM control voltage vector VDM can be derived from the wavefront sensor measurements s:

VDM = V^T C s    (3)
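The four-step procedure above can be sketched with numpy's SVD. This is a schematic under our own conventions (unit poke amplitude V0 = 1, rows of V^T taken as the eigenmode coefficient vectors mi, and equation (3) applied with our row/column ordering); the function names are ours, not the paper's.

```python
import numpy as np

def build_control(I_mat, V0=1.0):
    """Equations (1)-(3): SVD of the interaction matrix (slopes x
    actuators), eigenmode coefficient matrix M, eigenmode interaction
    matrix I_EM, and pseudo-inverse control matrix C."""
    U, S, Vt = np.linalg.svd(I_mat, full_matrices=False)  # I = U S V^T (1)
    M = V0 * Vt                                           # M = V0 V^T  (2)
    I_em = I_mat @ M.T / V0     # poke each eigenmode m_i, stack columns
    C = np.linalg.pinv(I_em)    # control matrix via SVD pseudo-inverse
    return M, I_em, C

def dm_voltages(M, C, s, V0=1.0):
    # Equation (3): slopes -> eigenmode coefficients -> actuator voltages
    return (M.T / V0) @ (C @ s)
```

With a full-column-rank interaction matrix, applying `dm_voltages` to the slopes produced by a known voltage pattern recovers that pattern, which is the property the closed loop relies on.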
3 EXPERIMENTAL RESULTS

An extended-object closed-loop AO test bench has been built. Several experiments have been conducted on it to test the gradient measurement accuracy and the wavefront error correction performance in closed loop.

3.1 Experimental setup

The laboratory bench comprises a 40-actuator DM from Thorlabs Inc., an OLED screen as an extended-object simulator, a plenoptic wavefront sensor, and an imaging camera. Fig. 2 is a brief optical layout of the test bench. The ray tracing in Fig. 2 only shows the case of an on-axis point source, for convenience. L1 is the collimating lens; the light becomes a collimated beam after passing
L1. Lenses L2 and L3 form a 1:1 optical relay that generates a conjugate plane of the pupil, which is also the position of the DM. The beam is then transmitted through a beam splitter: the reflected light goes into camera 1 after passing through the imaging lens L4, and the rest of the light goes into the plenoptic wavefront sensor. L5 is the main lens of the plenoptic wavefront sensor; it concentrates the light on the lenslet array, which is set at the focal plane of L5. Lenses L6 and L7 form another optical relay, which helps adjust the image size on the final detector, camera 2, and ensures that every subaperture image on camera 2 contains an integer number of pixels.

When designing the plenoptic wavefront sensor, it is important to make the f-number of the lenslet units equal to the f-number of the main lens. Considering that lenslet arrays are more difficult to manufacture than the other optical elements in the system, we first picked a lenslet array that satisfied the wavefront sensing requirements, and then designed the rest of the optical system to suit it. Choosing the lenslet array is a trade-off. The most important parameter is the f-number: in a plenoptic wavefront sensor, a smaller f-number gives the system a wider FoV, but the angular resolution is decreased, which affects the slope measurement accuracy. Although higher angular resolution can be achieved by using a lenslet array with a smaller pitch, the subimages would then cover fewer pixels on the detector, which decreases the spatial resolution of the local slope measurements. Since the object simulator used in the experiments is not very large, we decided to use a lenslet array with a large f-number to guarantee good wavefront reconstruction. The lenslet array used in this paper has a 9 mm × 9 mm mount window aperture and contains 28 × 28 usable lenslet units when the vignetting effect is taken into consideration; each lenslet has a 300 μm × 300 μm square pitch, and the f-number is around 64 at λ = 550 nm. The next step is to pick the main lens according to the f-number. Owing to the limited experimental conditions, there was not enough space for a main lens with a long focal length. An iris diaphragm is placed at the pupil plane to limit the pupil diameter, so that the f-number of the main lens matches the f-number of the lenslet units while the focal length remains suitable for the bench. In this paper, the pupil diameter is 4.67 mm.
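The numbers above can be checked against the standard matching condition f_main/D_pupil = f_lenslet/pitch. Note that the main-lens focal length is not quoted in the text; the value below is only what that condition implies for the stated pitch, f-number, and pupil diameter.

```python
# Back-of-envelope check of the f-number matching on this bench.
pitch = 300e-6            # lenslet pitch [m], from the text
fnum = 64                 # lenslet f-number quoted at 550 nm
f_lenslet = fnum * pitch  # lenslet focal length, ~19.2 mm
D_pupil = 4.67e-3         # iris-limited pupil diameter [m]
f_main = fnum * D_pupil   # implied main-lens focal length, ~0.30 m
```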
The detector used in the plenoptic sensor is a Thorlabs CS2100M-USB. It has 1920 × 1080 pixels with a 5.04 μm × 5.04 μm pixel size; the whole imaging area is 9.678 mm × 5.4432 mm. However, the plenoptic image at the lenslet array's focal plane is the same size as the lenslet array, 9 mm × 9 mm, which is around twice the detector size in the vertical direction, so a simple optical relay is needed. The function of this relay is to re-image the focal-plane image on to the detector with a proper scaling factor, so that the camera captures the full image and the number of pixels in both the horizontal and vertical directions within every subaperture is an integer. After the designed optical relay, the plenoptic image on the detector covers 840 × 840 pixels, and every subaperture image has 30 × 30 pixels.

The DM or a piece of perspex is used as the turbulence simulator in the experiment. The perspex is stuck on the iris diaphragm, which is the pupil of the system. The DM is placed at the conjugate pupil plane generated by the 1:1 optical relay; the light spot on the DM is basically the same size as the pupil diameter, 4.67 mm. However, the DM has a circular pupil of 10 mm diameter, so only the central 24 actuators produce a response on the plenoptic wavefront sensor, and we use only those actuators during wavefront correction. The light reflected by the DM is divided into two parts by a beam splitter: 30 per cent goes into the imaging camera and the rest goes into the plenoptic sensor. The final experimental setup is shown in Fig. 3.

Figure 3. The experimental setup.

3.2 AO control software

The control software can be divided into three major parts: image processing, slope estimation, and DM control. During image processing, bias and flat calibration are applied to the raw frame, and the calibrated image is used to generate the recomposed subaperture images. Then, the local slope is measured using correlation. Finally, the DM control signal is derived from the slope measurement vector using DM eigenmode modal reconstruction.

3.2.1 Plenoptic image pre-processing

In order to obtain an accurate plenoptic image, the raw images from the detector require calibration to remove the individual pixel bias pattern, dark current pattern, and quantum efficiency (QE) pattern. Assuming the detector works at a constant temperature, the true plenoptic signal P can be solved from these four equations:

BF = B    (4)
DF = B + D · Td + BG · Q · Td
FF = B + D · Tf + BG · Q · Tf + F · Q · Tf
PF = B + D · Tp + BG · Q · Tp + P · Q · Tp

On the left-hand side, BF, DF, FF, and PF are the bias frame (extremely short exposure time), dark frame (light source off), flat-field frame (uniform illumination on the detector), and plenoptic frame. On the right-hand side, B is the bias pattern, D is the dark current pattern, BG is the environmental background pattern, F is the normalized flat-field illumination, Q is the QE pattern, and P is the true plenoptic sensor signal. Td, Tf, and Tp are the exposure times of the corresponding frames. If Td = Tf = Tp, solving for P gives:

P = F · (PF − DF) / (FF − DF)    (5)

Since PF, DF, and FF are all measured frames and therefore known, and the background light has been treated very carefully so that BG can be taken as a fixed pattern, P can be calculated directly from equation (5). Although the measured frames DF and FF contain random noise, such noise can be reduced by averaging many frames.


Figure 4. The calibrated plenoptic image, recomposed image map, and one of the recomposed subaperture images (row = 14 and column = 14).

Once the calibrated image has been obtained, the next step is to rearrange the pixels to form the recomposed subaperture images used to estimate the local slope. One important step in recomposing the subaperture images is identifying the position of each lenslet on the plenoptic image. In fact, this can be solved during optical alignment. When a sufficiently large symmetric pattern, such as a square or a diamond, is displayed on the OLED screen, the plenoptic image should also be symmetric; one can then easily find the centre of the plenoptic image, which is also the centre of the microlens array. Assuming the system is well aligned, every subaperture image occupies the same integer number of pixels on the plenoptic image, so the location of every microlens can be easily calculated. Fig. 4 shows the recomposition results when the 1951 USAF resolution test chart pattern is displayed on the OLED screen as the extended object: Fig. 4(a) is the raw plenoptic image, Fig. 4(b) is the complete recomposed image, and Fig. 4(c) is one of the subaperture images. The numbers surrounding the pattern are cut off in this paper owing to the limited OLED screen size.

Figure 5. The cell image and three recomposed subaperture images at different positions.

3.2.2 Slope estimation

As illustrated in Section 2.1, cross-correlation slope estimation requires a reference image. A common way to select the reference image in a correlation-based SHWS is to pick one of the subaperture images; this, however, introduces a global tilt error into all the slope measurements. For the plenoptic sensor, each lenslet can instead be considered as a giant 'cell' pixel, whose intensity is the sum of all the detector pixel intensities in the corresponding subaperture image. The cell image is then a low-resolution image of the object, which indicates the image motion. By comparing successive cell images with a previously stored reference cell image, the relative displacement at a given time can be determined. Thus, the local slope and the global tilt can be measured simultaneously using the plenoptic sensor alone. The measurement results and accuracy analysis are shown in later sections. Fig. 5 shows an example of the cell image and three recomposed subaperture images at different positions when there is no turbulence. A recomposed subaperture image at a given position is equivalent to observing the object from the corresponding point of view. Since the figure was taken with no turbulence and the object is only 2D, the recomposed subaperture images should be identical. However, they are not. The reason is that the light from different angles is affected by different aberrations, caused by the imperfect optical design and system misalignment, which can be partly corrected in the non-common path aberration (NCPA) correction process.
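The cell image described above is just the per-lenslet sum of the detector pixels. A minimal numpy sketch, assuming the bench geometry of 28 × 28 lenslets with 30 × 30-pixel subimages and row-major lenslet tiling (the function name is ours):

```python
import numpy as np

def cell_image(plenoptic, n_lens=28, n_pix=30):
    """Sum all detector pixels inside each lenslet's subimage, giving
    a low-resolution (n_lens x n_lens) image of the object."""
    I = plenoptic.reshape(n_lens, n_pix, n_lens, n_pix)  # (u, x, v, y)
    return I.sum(axis=(1, 3))                            # sum over (x, y)
```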

3.2.3 Wavefront control

As illustrated in Section 2.2, we obtain the DM eigenmode coefficient matrix M from the interaction matrix I. In fact, the original interaction matrix I can also be used as the interaction matrix in the direct gradient (DG) reconstruction algorithm, another commonly used wavefront reconstruction method in AO systems. For comparison, we calculated the normalized correlation matrices of I and IEM. The correlation matrix describes the coupling between the actuators or modes used during wavefront control: when there is no coupling, the correlation matrix is diagonal; otherwise, the off-diagonal elements have non-zero values. As shown in Fig. 6, Fig. 6(a) is the correlation matrix of I and Fig. 6(b) is the correlation matrix of IEM. Comparing the two, the coupling between DM eigenmodes is much weaker than that between DM actuators.

Figure 6. The normalized correlation matrices of I and IEM.

3.3 Results and analysis

Several experiments were conducted on the system to demonstrate its performance. We first test the slope measurement accuracy of the plenoptic wavefront sensor using the cell image as reference. After that, turbulence is introduced, the AO system is run in closed loop, and the wavefront correction performance is measured on the imaging camera.

3.3.1 Slope measurement

The Thorlabs DMP40 DM is used as the global tip/tilt generator in this section; it has three spiral arms and can provide ±2 mrad of tip/tilt. The correlation slope estimation method is applied for slope estimation.


Figure 7. The slope measurement results.

Figure 8. The image of the resolution chart and nine PSFs on the imaging camera after NCPA correction.

For comparison, we also take a plenoptic sensor frame when there is no turbulence; the recomposed subaperture images from this frame can then be used as the ideal reference images in correlation slope estimation.

When using the cell image as the reference for correlation slope estimation, the final results are composed of two parts:

Part A: the local slope of every recomposed subaperture. This part can be derived by calculating the relative shift between the recomposed subaperture image and the cell image.

Part B: the global tip/tilt. As illustrated in the previous section, the cell image is a low-resolution image of the object, which indicates the image motion. The global tip/tilt information can be obtained by calculating the relative shift between the current cell image and a previously stored reference cell image via the correlation method.

Fig. 7 shows the slope measurement results of one of the recomposed subapertures (row = 14 and column = 14) when the resolution chart is used as the object. Fig. 7(a) shows the slope measurements when only the DM generates tip/tilt aberrations from −1.0 to 1.0 pixels, and Fig. 7(b) shows the slope measurements when a piece of perspex is placed at the pupil to induce a constant turbulence and the DM generates the same tip/tilt as in Fig. 7(a). The blue solid line is the local slope measurement using the cell image as reference. The black line marked with '+' is the local slope measurement plus the global tip/tilt measurement. The green line marked with '•' is the slope measurement using the ideal reference. As expected, using correlation to measure the relative shift between the cell image and the recomposed subaperture image can only obtain the local slope information. During the experiments, the high-order turbulence aberration is always constant (flat, or a fixed piece of perspex), so the local slope measurement of each recomposed subaperture is almost invariant. The final measurement is the sum of the measured local slope and the global tip/tilt; compared with the slope measurements using the ideal reference, the results are basically the same, which means that the slope measurement method proposed in this paper is able to measure the actual slope information, including both the local and global slopes, simultaneously, using only plenoptic sensor frames. In reality, the ideal plenoptic sensor frame is unobtainable, and the choice of the reference cell image has a significant impact on the global tip/tilt measurements, so it needs to be treated carefully. The global tip/tilt control and reference cell image selection strategies require more discussion and will be presented in future work.


Figure 9. The image of the resolution chart before and after closed-loop wavefront correction.

3.3.2 Closed-loop results

To evaluate the wavefront correction performance, we light up nine small patches on the OLED screen at different positions to simulate nine point sources at different angles of view; each patch has 4 × 4 bright pixels (20 μm × 20 μm). The Strehl ratios (SR) of the nine point spread functions (PSFs) can then be measured from the imaging camera frame, and these SRs indicate the wavefront correction performance at the corresponding angles of view. The average of these SRs is used as the metric to evaluate the system performance during NCPA correction and closed-loop wavefront control. When there is no turbulence in the system, the image of the resolution chart and the nine point sources after NCPA correction are shown in Fig. 8; the average SR of the nine PSFs is 0.91. Then, atmospheric turbulence is induced by driving the DM with random voltages. The control software described in Section 3.2 is used for closed-loop wavefront correction. The DM is used as both the atmospheric turbulence simulator and the wavefront corrector, forming an internal closed-loop AO system; the global tip/tilt is excluded from the input turbulence aberrations. When a random aberration is generated by the DM, we first capture the plenoptic sensor frames to obtain the recomposed subaperture and cell images. Then, the correlation between the recomposed subaperture and cell images is calculated to obtain the slope vector. The DM control voltages can be derived using equation (3), and the DM voltages for the nth iteration can be expressed as Vn = Vn−1 − VDM · g, where g is the loop gain and V0 is the initial random voltage vector applied to the DM. This completes one iteration of closed-loop control; the closed-loop correction in this paper runs for 20 iterations. Fig. 9 shows the image of the resolution chart before and after closed-loop wavefront correction with a gain of 0.5. Visually, the image quality after the closed loop (Fig. 9b) is significantly improved compared with before correction (Fig. 9a), and it is hard to tell the difference between the closed-loop result and the image captured with no turbulence aberrations (Fig. 8a).

Figure 10. The images of nine PSFs before and after closed-loop correction, and the average SR at each iteration.
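The integrator law V_n = V_{n-1} − g · V_DM can be sketched as below. This is a schematic of the loop structure, not the bench software: `measure_slopes` is a hypothetical stand-in for taking and processing a plenoptic frame at DM voltages V, and M, C follow the conventions of Section 2.2.

```python
import numpy as np

def closed_loop(measure_slopes, M, C, n_act, gain=0.5, n_iter=20):
    """Integrator loop: V_n = V_{n-1} - g * V_DM, with the correction
    V_DM obtained from the slopes via equation (3)."""
    V = np.zeros(n_act)
    for _ in range(n_iter):
        s = measure_slopes(V)              # plenoptic-sensor measurement
        V = V - gain * (M.T @ (C @ s))     # apply eigenmode correction
    return V
```

For a linear bench model with a static aberration, the residual error shrinks by a factor (1 − g) per iteration, consistent with the rapid SR convergence reported below.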

To evaluate the closed-loop wavefront correction performance quantitatively, we recorded the DM voltages at each iteration and regenerated the DM surface with the nine bright patches on the OLED as the object, so that we were able to measure the average PSF under the same turbulence conditions at each iteration. The results are shown in Fig. 10: Fig. 10(a) shows the initial normalized PSFs and Fig. 10(b) the normalized PSFs after closed-loop correction. To show the PSFs clearly, the regions of interest are cropped and presented according to their actual locations in the raw frame. Comparing the nine pairs of PSFs, the energy is more concentrated after closed-loop correction. Before correction, the average SR is 0.40; after three iterations the average SR improves to 0.81 and remains relatively stable around 0.8 in further iterations.

3.4 Conclusion

We have demonstrated the feasibility of the plenoptic wavefront sensor in an extended-object closed-loop AO system on a laboratory setup; correlation-based wavefront sensing and DM eigenmode control are used in the closed-loop control software. The closed-loop results show that the image quality improves significantly after closed-loop wavefront correction. We also proposed a way to measure the global tip/tilt information in correlation wavefront sensing using cell images; the slope measurement results show that our method is able to measure the global and local slopes using only plenoptic sensor frames. In this paper, we tested the performance of the plenoptic wavefront sensor with only a single turbulence source, generated by the DM or by a piece of perspex with poor optical quality; more experiments, with multiple turbulence layers generated by rotating phase plates and with real-time correction, will be conducted in the future to verify the performance of the plenoptic wavefront sensor under conditions much closer to the real atmosphere.

ACKNOWLEDGEMENTS

ZZ acknowledges CSC funding. We also acknowledge support from the UKRI Science and Technology Facilities Council (NAB, TJM, and ZZ; ST/P000541/1 and ST/T000244/1).

DATA AVAILABILITY

The data underlying this article will be shared on reasonable request to the corresponding author.

REFERENCES

Clare R. M., Lane R. G., 2005, J. Opt. Soc. Am. A, 22, 117
Hardy J. W., 1998, Adaptive Optics for Astronomical Telescopes. Oxford University Press, New York, NY
Jiang P., Xu J., Liang Y., Mao H., 2016a, Opt. Eng., 55, 033105
Jiang P., Xu J., Liang Y., Mao H., 2016b, J. Mod. Opt., 63, 1573
Li E., Dai Y., Wang H., Zhang Y., 2006, Appl. Opt., 45, 5651
Neal D. R., Copland J., Neal D. A., 2002, in Duparré A., Singh B., eds, Proc. SPIE Vol. 4779, Advanced Characterization Techniques for Optical, Semiconductor, and Data Storage Components. SPIE, Seattle, WA, p. 148
Ng R. et al., 2006, Digital Light Field Photography. Stanford University, Stanford
Poyneer L. A., 2003, Appl. Opt., 42, 5807
Rodríguez-Ramos J. et al., 2010, in Clénet Y., Fusco T., Rousset G., Conan J.-M., eds, 1st AO4ELT Conference. EDP Sciences, France
Rodríguez-Ramos J., Castellá B. F., Nava F. P., Fumero S., 2008, in Hubin N., Max C. E., Wizinowich P. L., eds, Adaptive Optics Systems. SPIE, p. 70155Q
Rodríguez-Ramos L. F., Martín Y., Díaz J. J., Piqueras J., Rodríguez-Ramos J., 2009, in Warren P. G., Hart M., eds, Astronomical and Space Optical Systems. SPIE, p. 74390I
Rodríguez-Ramos L. F., Montilla I., Fernández-Valdivia J., Trujillo-Sevilla J., Rodríguez-Ramos J., 2012, in Ellerbroek B. L., Marchetti E., Véran J.-P., eds, Adaptive Optics Systems III. SPIE, p. 844745
Sidick E., Green J. J., Morgan R. M., Ohara C. M., Redding D. C., 2008, Opt. Lett., 33, 213
Townson M., Kellerer A., Saunter C., 2015, MNRAS, 452, 4022
Tyson R. K., 2015, Principles of Adaptive Optics. CRC Press, Boca Raton, FL

This paper has been typeset from a TEX/LATEX file prepared by the author.
