
OPTICS FINAL PROJECT

PROJECT 1: DESIGN A LENS FOR A LIDAR-BASED AUTONOMOUS VEHICLE WITH A 30-DEGREE FIELD OF VIEW AND 100-METER RANGE.

PROJECT 2: MODEL THE PROPAGATION OF AN ELECTROMAGNETIC WAVE THROUGH A DISPERSIVE MEDIUM

NAME: MOHAMAD AFIQ ADHA BIN AZMAN (U2103695)
      ABDUL HANAN ARIF BIN ABDULLAH (U2103269)
      AHMAD IZZUDDIN DARWISYAH BIN IDRIS (U2103447)

COURSE: SIF2006 – OPTICS

LECTURER'S NAME: DR. MUHAMMAD IMRAN MUSTAFA BIN ABDUL KHUDUS
PROJECT 1: DESIGN A LENS FOR A LIDAR-BASED AUTONOMOUS VEHICLE WITH A 30-DEGREE FIELD OF VIEW AND 100-METER RANGE.

INTRODUCTION

Lidar, a technology that uses laser light pulses to measure the distance to objects, finds applications in various fields such as autonomous vehicles, robotics, and surveying. In autonomous driving, lidar plays a crucial role by generating a 3D map of the car's surroundings. This map is used by the vehicle's computer system to ensure safe navigation and avoid accidents. Compared to other sensors such as cameras and radar, lidar offers several advantages: it measures distance precisely even in low-light conditions, where cameras struggle, and it provides far finer angular resolution than radar.

Nevertheless, lidar does come with certain drawbacks. It tends to be a costly sensor and can pose
challenges when integrating it into a car's design. Additionally, adverse weather conditions such as
rain and snow can affect lidar's performance. Despite these disadvantages, lidar remains an
indispensable sensor for autonomous vehicles. It furnishes the car's computer system with accurate
and dependable information necessary for navigating safely in diverse environments.

THEORY

An autonomous vehicle equipped with lens-based lidar employs a lidar sensor with a lens element to
construct a 3D representation of its surroundings. The inclusion of a lens enhances the focus of the
lidar beam, resulting in improved measurement accuracy. Furthermore, this particular type of lidar is
less susceptible to weather-related interferences compared to other variants. The lidar sensor emits
a laser beam, which is then directed by the lens towards the environment for reflection off objects.
The lens aids in sharpening the beam, contributing to its heightened precision. Subsequently, the
sensor measures the time it takes for the beam to return, allowing the calculation of object distances
based on the speed of light.

Figure 1: Lidar operating principle

As shown in Figure 1, lidar works by emitting a short pulse of light from a laser emitter. The pulse travels to the object and is reflected back to the lidar sensor, which measures the time gap between emission and return. The distance to the object, i.e. the range, is then calculated from the speed of light. Hence, to determine the range, we can use the equation:

R = (c/2) × Δt        Equation 1

where R is the range, c is the speed of light (3 × 10⁸ m/s), and Δt is the time taken for the pulse of energy to travel from the emitter to the observed object and back to the receiver.
The range calculation employed in lens-based lidar systems for autonomous vehicles relies on the time
of flight (TOF) principle. According to the TOF principle, the duration it takes for a light pulse to travel
to an object and return can be utilized to determine the distance between the lidar sensor and the
object.
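As a quick illustration, here is a minimal Python sketch of Equation 1; the 667 ns round-trip time used in the example is an assumed value, not a measurement.

# Range from time of flight (Equation 1): R = (c/2) * dt
C = 3.0e8  # speed of light, m/s

def tof_range(dt_seconds):
    # Return the target range in metres for a measured round-trip time
    return 0.5 * C * dt_seconds

# Assumed example: a 667 ns round trip corresponds to roughly 100 m
print(tof_range(667e-9))  # ~100.05 m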

Lidar-based autonomous vehicles may also exploit a phenomenon known as phase shift, which is influenced by the frequency of the signal employed by the lidar sensor. The phase shift arises because the lidar beam must travel a greater distance to reach objects that are situated farther away. This increased travel distance delays the arrival of the lidar beam, producing a phase shift, which can then be used to determine the distance to objects. The phase shift ΔΦ between the reflected and the emitted signal is:

ΔΦ = (2π f_B / c) × 2R        Equation 2

where f_B is the constant frequency of the emitted beam.
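Similarly, a small sketch of Equation 2, reusing the 100 m range and 200 kHz frequency that appear in the calculation section below:

import math

C = 3.0e8  # speed of light, m/s

def phase_shift(range_m, f_b_hz):
    # Phase shift (rad) accumulated over the round trip 2R (Equation 2)
    return (2 * math.pi * f_b_hz / C) * 2 * range_m

print(phase_shift(100.0, 200e3))  # ~0.8378 rad for R = 100 m, f_B = 200 kHz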

By emitting a series of laser beams, the lidar sensor enables the vehicle's computer to generate a
comprehensive 3D map of the surrounding environment. This map serves as crucial input for the car's
computer to facilitate safe navigation and collision avoidance. In addition, the lens-based lidar system
boasts advantages over other autonomous vehicle setups. Notably, it exhibits greater resilience to
weather conditions. The lens's role in beam focus reduces the likelihood of scattering due to rain or
snow, enhancing the system's performance and reliability under adverse weather circumstances.

In summary, the advantages of incorporating a lens in a lidar-based autonomous vehicle are as follows:
Firstly, lens lidar exhibits high-precision distance measurement capabilities, even in low-light
situations. This attribute is crucial for autonomous cars, as accurate perception of the surroundings is
essential for safe navigation. Secondly, lens lidar enables the creation of detailed 3D maps of the
environment, providing valuable information for the vehicle's computer to navigate and avoid
collisions. Lastly, lens lidar is less susceptible to the adverse effects of weather conditions compared
to other lidar types. This resilience is particularly significant for autonomous cars, as they must operate
reliably in various weather conditions.

In conclusion, lens lidar-based autonomous vehicles hold great promise as they can equip autonomous
cars with the necessary information for safe navigation across diverse environments. However, before
widespread adoption, certain challenges such as cost, safety, and other factors need to be addressed
and overcome.
DESIGN

Figure 2: Design of a lens for emitting block of lidar-based autonomous vehicle.

The lens used in the figure above is an aspheric lens, i.e. a lens whose surface deviates from a perfect sphere and has a more intricate shape. Such lenses find applications in various fields, including lidar-based autonomous vehicles.

In the context of a lidar-based autonomous vehicle, the aspheric lens is employed to concentrate the lidar beam, ensuring precise targeting and enhancing measurement accuracy. The aspheric lens also minimizes light scattering, further improving measurement precision. The beam width ∅ after collimation by the lens, which yields the narrow beam of an effectively point-like light source, can be calculated from:

∅ = 2 × f_L1 × tan(θ/2) + ∅_Ls        Equation 3

where f_L1 is the focal length of the lens, θ is the beam divergence angle, and ∅_Ls is the width of the laser emitter.
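A minimal numerical sketch of Equation 3; the focal length, divergence angle, and emitter width below are assumed example values, not design requirements of this report.

import math

def collimated_beam_width(f_l1_m, theta_rad, emitter_width_m):
    # Beam width after collimation (Equation 3)
    return 2 * f_l1_m * math.tan(theta_rad / 2) + emitter_width_m

# Assumed example: f_L1 = 25 mm, 10-degree divergence, 1 mm emitter
width = collimated_beam_width(25e-3, math.radians(10), 1e-3)
print(width * 1e3)  # ~5.4 mm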

Moreover, when it comes to emitted power, the stronger the emitted laser pulse, the stronger the reflected pulse; however, this power must also assure the safety of the human body, particularly the eyes. To calculate the average power of the laser pulse, we can use the equation:

P_avg = P_peak × f × ΔT        Equation 4

where P_avg is the average power, P_peak is the peak power, f is the pulse repetition frequency, and ΔT is the pulse width.
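A short check of Equation 4; the peak power, repetition rate, and pulse width are assumed illustrative numbers.

def average_power(p_peak_w, rep_rate_hz, pulse_width_s):
    # Average optical power of a pulsed laser (Equation 4)
    return p_peak_w * rep_rate_hz * pulse_width_s

# Assumed example: 75 W peak, 100 kHz repetition rate, 5 ns pulses
print(average_power(75.0, 100e3, 5e-9))  # 0.0375 W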

Furthermore, compared to conventional spherical lenses, aspheric lenses offer several advantages.
They are more compact, facilitating easier integration into lidar sensors. Additionally, aspheric lenses
provide a wider field of view, enabling the lidar sensor to capture a broader area. Moreover, they
exhibit improved resistance to aberrations, which enhances measurement accuracy.
However, aspheric lenses tend to be more expensive than traditional spherical lenses due to the
intricacies involved in their manufacturing process. Furthermore, they can be more sensitive to
temperature changes, potentially affecting measurement accuracy.

All in all, aspheric lenses play a crucial role in lidar-based autonomous vehicles by enhancing
measurement accuracy, a vital aspect of safe navigation. Nonetheless, their higher cost compared to
spherical lenses should be taken into consideration.

Figure 3: Design of a lens for receiving block of lidar-based autonomous vehicle.

In this setup, an optical element known as a plano-convex focusing lens is positioned in front of the
sensor to capture the laser beam reflected from the target. The purpose of the plano-convex lens is
to concentrate the lidar beam and collect the reflected light from the target. The lens has a flat surface
facing the laser and a curved surface facing the target. The curvature of the lens focuses the laser
beam to form a small spot on the target. Subsequently, the lens collects the reflected light from the
target and directs it onto the sensor.

The field of view (FOV) of the system is determined by the focal length of the lens, while the diameter
of the lens influences its light-gathering capacity. As shown in the accompanying figure, the focusing
lens can only collect the reflected light from objects within its designated FOV. It needs to be calibrated
such that the collected reflected beam adequately covers the active area of the sensor, ensuring
optimal performance. Combining the diameter of the sensor with the focal length of the focusing lens gives the FOV angle:

φ_FOV = 2 × tan⁻¹(d_s / (2 f_L2))        Equation 5

where φ_FOV is the field-of-view angle, d_s is the diameter of the sensor, and f_L2 is the focal length of the focusing lens.

We can also determine the diameter of the field of view using:

d_FOV = 2 × D × tan(φ_FOV / 2)        Equation 6

where D is the distance between the focusing lens and the target object and d_FOV is the diameter of the field of view.
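The sketch below wires Equations 5 and 6 together; the 100 mm sensor diameter matches the value assumed in the calculation that follows.

import math

def fov_angle(sensor_diameter_m, focal_length_m):
    # Field-of-view angle in radians (Equation 5)
    return 2 * math.atan(sensor_diameter_m / (2 * focal_length_m))

def fov_diameter(distance_m, fov_angle_rad):
    # Diameter of the field of view at a given distance (Equation 6)
    return 2 * distance_m * math.tan(fov_angle_rad / 2)

f_l2 = 0.1 / (2 * math.tan(math.radians(15)))  # focal length giving a 30-degree FOV
print(math.degrees(fov_angle(0.1, f_l2)))      # 30.0 degrees
print(fov_diameter(100.0, math.radians(30)))   # ~53.6 m at 100 m range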

CALCULATION

To obtain the 30-degree field of view and 100-meter range of the lidar-based autonomous vehicle, we need to determine the related parameters.

To get the 30-degree field of view, we find the focal length of the lens using Equations 5 and 6.

[Schematic: receiving lens of focal length f_L2 and sensor of diameter d_s, viewing a field of diameter d_FOV at distance D, with φ_FOV = 30°.]

Let φ_FOV = 30° and the diameter of the sensor d_s = 100 mm.

Equation 5:

φ_FOV = 2 × tan⁻¹(d_s / (2 f_L2))
f_L2 = d_s / (2 tan(φ_FOV / 2))
f_L2 = (100 × 10⁻³) / (2 tan(30°/2))
f_L2 = 0.1866 m
Then, to determine the diameter of the field of view, we use Equation 6:

d_FOV = 2 × D × tan(φ_FOV / 2)

The distance between the focusing lens and the target object equals the range, so D = R = 100 m:

d_FOV = 2 × 100 × tan(30°/2)
d_FOV = 53.60 m
To get the 100-meter range, we need the time gap for the pulse of energy to travel from the emitter to the observed object and back to the receiver, using Equation 1.

[Schematic: laser emitter and sensor/receiver, separated from the target by the range R.]
Equation 1, with c = 3 × 10⁸ m/s and R = 100 m:

R = (c/2) × Δt
Δt = 2R / c
Δt = 2(100) / (3 × 10⁸)
Δt = 0.667 μs
From the 100-meter range, we can calculate the phase shift of the emitted beam using Equation 2:

ΔΦ = (2π f_B / c) × 2R

If the constant frequency of the emitted beam is f_B = 200 kHz, the phase shift is:

ΔΦ = (2π × (200 × 10³) / (3 × 10⁸)) × 2(100)
ΔΦ = 0.8378 rad
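For completeness, a small Python script that reproduces the worked numbers above:

import math

C = 3.0e8     # speed of light, m/s
R = 100.0     # required range, m
d_s = 100e-3  # sensor diameter, m (assumed above)
f_b = 200e3   # emitted-beam frequency, Hz (assumed above)

f_l2 = d_s / (2 * math.tan(math.radians(15)))  # Equation 5 rearranged
d_fov = 2 * R * math.tan(math.radians(15))     # Equation 6 with D = R
dt = 2 * R / C                                 # Equation 1 rearranged
dphi = (2 * math.pi * f_b / C) * 2 * R         # Equation 2

print(f_l2)   # 0.1866 m
print(d_fov)  # 53.59 m
print(dt)     # 6.67e-7 s = 0.667 us
print(dphi)   # 0.8378 rad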
PROJECT 2: MODEL THE PROPAGATION OF ELECTROMAGNETIC WAVES THROUGH A DISPERSIVE MEDIUM.

Dispersive medium:

A dispersive medium is a material in which light travels at different speeds depending on its color or frequency. This can cause different colors of light to separate or spread out over time. Common dispersive media include air, water, and glass.

Propagation of electromagnetic waves through dispersive medium:

When an electromagnetic wave travels through a dispersive medium, the wave's different frequencies
are affected differently by the medium's refractive index. This causes the wave to slow down and
change direction as it passes through the medium, and different frequencies of the wave can become
separated or spread out. This effect is known as dispersion, and it is caused by the interaction between
the wave and the electrons in the medium. The amount of dispersion that occurs depends on the properties of the medium, such as its refractive index, and on the frequency of the wave.

We will use light as our electromagnetic wave, since it is visible to the human eye and has a short wavelength. It will pass through a dispersive medium, namely a glass prism.

Figure: white light dispersed by a glass prism. Image retrieved from https://www.turbosquid.com/3d-models/3d-model-light-glass-prism-1712892


When white light passes through a glass prism, the different colors of the light are refracted (bent) by
different amounts, due to the prism's dispersive properties. This causes the colors to separate and
form a spectrum. The amount of refraction that occurs depends on the prism's angle and the
properties of the glass, such as its refractive index. The reason the colors separate is that each color of
light has a slightly different wavelength and frequency, which causes it to interact with the glass prism
in a slightly different way. The longer wavelengths (red light) are refracted less than the shorter
wavelengths (violet light), so the red light bends the least and the violet light bends the most. The
other colors of light (orange, yellow, green, blue, and indigo) are refracted at intermediate angles,
which causes them to form a spectrum of colors.
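To make this concrete, the sketch below traces a single ray through a prism using Snell's law at both faces; the 60° apex angle and 50° incidence angle are assumed values, and the refractive indices are the flint-glass values tabulated later in this section.

import math

def prism_deviation(n, apex_deg=60.0, incidence_deg=50.0):
    # Total angular deviation of a ray through a prism (Snell's law at both faces)
    a = math.radians(apex_deg)
    t1 = math.radians(incidence_deg)
    r1 = math.asin(math.sin(t1) / n)  # refraction at the entry face
    r2 = a - r1                       # internal angle at the exit face
    t2 = math.asin(n * math.sin(r2))  # refraction at the exit face
    return math.degrees(t1 + t2 - a)

# Flint-glass indices for the C (red), D (yellow), and F (blue) lines
for line, n in [("C, red", 1.7076), ("D, yellow", 1.7205), ("F, blue", 1.7328)]:
    print(line, round(prism_deviation(n), 2), "deg")
# Red deviates least and blue most, so the colors fan out into a spectrum.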

A diagram from Russell (2018) shows a wave packet constructed from the sum of 100 cosine functions of various frequencies. Because the medium is dispersive, different frequency components move through it at different rates, and the wave takes on a new form as it propagates; both the phase velocity and the group velocity are visible. In this example, lower frequencies travel more quickly than higher frequencies due to the dispersion. The wave packet consequently spreads out, with the longer wavelengths moving ahead and the shorter wavelengths trailing behind.
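A minimal numerical sketch of this behavior, assuming the simple dispersion relation ω(k) = ck(1 − αk), for which the phase velocity decreases with frequency so that lower frequencies travel faster:

import numpy as np

# Sum of 100 cosine components with a Gaussian spectrum around k0.
# Assumed dispersion relation: omega = c*k*(1 - alpha*k), so lower
# frequencies (smaller k) travel faster, as described above.
c, alpha, k0 = 1.0, 0.1, 2.0
ks = np.linspace(0.5 * k0, 1.5 * k0, 100)
amps = np.exp(-((ks - k0) / (0.15 * k0)) ** 2)
omegas = c * ks * (1 - alpha * ks)

x = np.linspace(-20.0, 80.0, 2000)

def packet(t):
    # Superpose the 100 cosines at time t
    return sum(a * np.cos(k * x - w * t) for a, k, w in zip(amps, ks, omegas))

# The envelope peak drifts at the group velocity and the packet broadens:
for t in (0.0, 40.0, 80.0):
    print(t, x[np.argmax(np.abs(packet(t)))])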

Ratio of the angular dispersion 𝒟 between the F and C wavelengths to the deviation δ of the D wavelength:

𝒟/δ = (n_F − n_C) / (n_D − 1)

This ratio is also known as the dispersive power: Δ = 𝒟/δ

Average value of dispersion:

Δn/Δλ = (n_F − n_C) / (λ_F − λ_C)

Resolving power (where b is the base length of the prism):

ℜ = b (dn/dλ)

Minimum resolvable wavelength difference in the region around 550 nm:

(Δλ)_min = λ / ℜ

Wavelength (nm)    Characterization    Refractive index, n (flint glass)
486.1              F, blue             1.7328
589.2              D, yellow           1.7205
656.3              C, red              1.7076

For the calculation we consider only the blue (F), yellow (D), and red (C) lines, since these are the standard Fraunhofer reference wavelengths used to characterize the dispersion of optical glass. The behavior of the colors in between, such as orange and green, can be interpolated between these reference lines.

Calculation

Average value of dispersion:

Δn/Δλ = (1.7328 − 1.7076) / (486.1 − 656.3)
      = −1.48 × 10⁻⁴ nm⁻¹

Resolving power, taking a prism base of b = 0.05 m = 0.05 × 10⁹ nm:

ℜ = b (dn/dλ)
  = (0.05 × 10⁹ nm)(1.48 × 10⁻⁴ nm⁻¹)
  = 7403

Minimum resolvable wavelength difference in the region around 550 nm:

(Δλ)_min = λ / ℜ
         = 5500 Å / 7403
         ≈ 0.7 Å
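The same numbers checked in Python (the 5 cm prism base is the value assumed above):

# Flint-glass data from the table above
n_F, n_D, n_C = 1.7328, 1.7205, 1.7076
lam_F, lam_C = 486.1, 656.3  # nm

dispersive_power = (n_F - n_C) / (n_D - 1)  # ~0.035
dn_dlam = (n_F - n_C) / (lam_F - lam_C)     # ~-1.48e-4 per nm

b = 0.05e9                                  # assumed 5 cm prism base, in nm
resolving_power = b * abs(dn_dlam)          # ~7403
dlam_min_nm = 550.0 / resolving_power       # ~0.074 nm, i.e. ~0.7 Angstrom

print(dispersive_power, dn_dlam, resolving_power, dlam_min_nm)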

FDTD simulation for electromagnetic waves (light) through glass prism:

Figure 1: Project Layout


We use OptiFDTD to simulate electromagnetic waves propagating through a dispersive medium, with a glass prism as the medium. Figure 1 shows the project layout of the simulation. The red line represents the input wave, a light source with a wavelength of 550 nm; the purple triangle in the middle of the figure is the prism, with a refractive index of 1.65; and the green line is the observation line. Below are the results of the simulation:

Figure 2: Magnetic field amplitude in x-direction in 2D

Figure 3: Magnetic field amplitude in x-direction in 3D

Figure 4: Magnetic field amplitude in z-direction in 2D


Figure 5: Magnetic field amplitude in z-direction in 3D

Figure 6: Electric field amplitude in y-direction in 2D

Figure 7: Electric field amplitude in y-direction in 3D

Figure 8: Poynting Vector in 2D


Figure 9: Poynting Vector in 3D

REFERENCES:
Nguyen, T. N., Cheng, C., Liu, D., & Le, M. (2021). Improvement of accuracy and precision of the LiDAR system working in high background light conditions. Electronics, 11(1), 45. https://doi.org/10.3390/electronics11010045
Royo, S., & Ballesta-Garcia, M. (2019). An overview of lidar imaging systems for autonomous vehicles. Applied Sciences, 9(19), 4093. https://doi.org/10.3390/app9194093
Syed, S. U. H. (2022). Lidar sensor in autonomous vehicles. ResearchGate. https://www.researchgate.net/publication/359263639_Lidar_Sensor_in_Autonomous_Vehicles
Russell, D. (2018). Dispersion. Pennsylvania State University. https://www.acs.psu.edu/drussell/demos/dispersion/dispersion.html
Kirk, T. (2015). Electromagnetism. The University of Texas at Austin. https://farside.ph.utexas.edu/teaching/jk1/Electromagnetism/node75.html
