Script Toán CC

Hello everyone. To continue illustrating the application of calculus in various fields, I will introduce, from an IT perspective, the use of calculus in computer science.

Calculus is a branch of mathematics that deals with the study of rates of change and the accumulation of quantities. It has two main branches: differential calculus, which concerns instantaneous rates of change and slopes of curves, and integral calculus, which deals with the accumulation of quantities and the areas under and between curves. Calculus is widely used in many fields, particularly in computer science, to solve problems involving motion, growth, optimization, and more.

To give you more detail, I will introduce four numerical tools of calculus that are deeply applied in computer science. First, we have numerical integration with the trapezoidal rule and Simpson's rule; second, numerical differentiation with finite differences; third, optimization with gradient descent; and fourth, root finding with the bisection method and Newton's method.

Let's start. First, I'll talk about numerical integration. Numerical integration methods are used to approximate the value of a definite integral. This is useful in computer science for tasks such as calculating areas under curves, which is important in graphics rendering and physics simulations. Now, as you can see, this is a graph, and we need to calculate the area under the curve. As I mentioned before, we have two ways. Coming to the first way, you can see this is a trapezoid, and yes, it is the trapezoidal rule, with the formula below. Coming to the second way, you can see this is a mathematician, and yes, it is Simpson's rule. These two formulas help us calculate the approximate area under the curve when we cannot compute it algebraically.
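
To make this concrete, here is a minimal Python sketch of both rules. The example function f(x) = x squared, the interval [0, 1], and the number of subintervals are my own illustrative assumptions, not part of the original slides.

# Minimal sketch of numerical integration (assumed example: f(x) = x^2 on [0, 1]).

def f(x):
    return x ** 2

def trapezoidal_rule(f, a, b, n):
    # Approximate the integral of f over [a, b] using n trapezoids.
    h = (b - a) / n
    total = 0.5 * (f(a) + f(b))
    for i in range(1, n):
        total += f(a + i * h)
    return total * h

def simpsons_rule(f, a, b, n):
    # Approximate the integral of f over [a, b]; n must be even.
    h = (b - a) / n
    total = f(a) + f(b)
    for i in range(1, n):
        total += (4 if i % 2 == 1 else 2) * f(a + i * h)
    return total * h / 3

print(trapezoidal_rule(f, 0, 1, 100))  # close to 1/3
print(simpsons_rule(f, 0, 1, 100))     # very close to 1/3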

Next, I'll describe numerical differentiation. Numerical differentiation methods, such as finite differences, are used to approximate the derivative of a function. This can be useful in computer science for tasks such as calculating rates of change or gradients in optimization algorithms. As you can see, this is the function with its derivative, and we want to find the approximate derivative of this function, so we use finite differences with the given formula. For example, suppose we want to calculate the derivative of f of x equals x squared at x equals 2. The exact derivative of this function is 2x, so the true value is 4. Then we choose h equal to 0.01, so the derivative of f at 2 is approximately f of 2 plus 0.01 minus f of 2, divided by 0.01, which equals 2.01 squared minus 2 squared divided by 0.01, which equals 4.01.
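
The worked example above can be reproduced with a short Python sketch of the forward-difference formula (f(x + h) - f(x)) / h; the function, the point x = 2, and the step size h = 0.01 follow the example in the script.

# Forward-difference approximation of the derivative (example from the script).

def f(x):
    return x ** 2

def forward_difference(f, x, h=0.01):
    # f'(x) is approximately (f(x + h) - f(x)) / h for small h.
    return (f(x + h) - f(x)) / h

print(forward_difference(f, 2))  # about 4.01; the exact derivative is 4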

Following the previous content, I'll demonstrate root finding. Numerical methods like the bisection method, Newton's method, or the secant method are used to find the roots of a function (i.e., the values of x for which f(x) = 0). This is important in computer science for solving equations that cannot be solved algebraically, such as determining critical points in optimization problems.

The first method: you can see this picture is divided by a line, and yes, it is the bisection method. Bisection method: given a function f(x) that changes sign over the interval [a, b], the method iteratively narrows down the interval containing the root by halving it. We have three steps. First, choose two points a and b such that f(a) × f(b) < 0, indicating that the root lies between them. Second, compute the midpoint c = (a + b) / 2 and evaluate f(c). Third, there are three possibilities: if f(c) = 0, then c is the root; if f(c) × f(a) < 0, the root lies in the interval [a, c]; if f(c) × f(b) < 0, the root lies in the interval [c, b]. We repeat with the new interval until it is small enough.
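
Here is a minimal Python sketch of these three steps. The example equation x squared minus 2 equals 0, the starting interval [1, 2], and the tolerance are illustrative assumptions of mine.

# Bisection method sketch (assumed example: root of x^2 - 2 on [1, 2]).

def f(x):
    return x ** 2 - 2

def bisection(f, a, b, tol=1e-6):
    # Requires f(a) and f(b) to have opposite signs.
    if f(a) * f(b) >= 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    while b - a > tol:
        c = (a + b) / 2          # midpoint of the current interval
        if f(c) == 0:
            return c             # exact root found
        if f(a) * f(c) < 0:
            b = c                # root lies in [a, c]
        else:
            a = c                # root lies in [c, b]
    return (a + b) / 2

print(bisection(f, 1, 2))  # approximately 1.414214, the square root of 2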
Next, you can see he is Newton, and yes, the following is Newton's method. It states that x sub n plus 1 equals x sub n minus f of x sub n divided by the derivative of f at x sub n. In the first step, we guess x zero. Next, we compute f of x sub n and the derivative of f at x sub n. Then we update the approximation using this formula. Repeat until the approximation is sufficiently accurate.
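
A minimal Python sketch of this update rule follows; the example function x squared minus 2, its derivative 2x, the starting guess, and the stopping tolerance are assumptions for illustration.

# Newton's method sketch (assumed example: root of x^2 - 2, starting at x0 = 1).

def f(x):
    return x ** 2 - 2

def df(x):
    return 2 * x

def newton(f, df, x0, tol=1e-10, max_iter=50):
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)      # x_{n+1} = x_n - f(x_n) / f'(x_n)
        x -= step
        if abs(step) < tol:      # stop when the update is sufficiently small
            break
    return x

print(newton(f, df, 1.0))  # approximately 1.41421356, the square root of 2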

Finally, you can see this is a battery, and we are going to recharge it; yes, it is optimization. Numerical optimization methods, such as gradient descent or the simplex method, are used to find the minimum or maximum of a function. This is important in computer science for tasks such as training machine learning models or solving constraint satisfaction problems. Now I'll talk about gradient descent, which is given by the formula x sub n plus 1 equals x sub n minus alpha times the derivative of f at x sub n. In the first step, we choose x zero equal to 0 and alpha equal to 0.1. Then we calculate x one, x two, and so on. Repeat this process until convergence, and you will find a minimum of the function; flipping the sign of the update (gradient ascent) finds a maximum instead.
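
To make the update rule concrete, here is a minimal Python sketch of gradient descent, using the same starting point x0 = 0 and learning rate alpha = 0.1 mentioned above; the example function f(x) = (x - 3) squared and its derivative are my own assumptions for illustration.

# Gradient descent sketch (assumed example: minimize f(x) = (x - 3)^2).

def df(x):
    return 2 * (x - 3)           # derivative of the example function

def gradient_descent(df, x0=0.0, alpha=0.1, tol=1e-8, max_iter=1000):
    x = x0
    for _ in range(max_iter):
        step = alpha * df(x)     # x_{n+1} = x_n - alpha * f'(x_n)
        x -= step
        if abs(step) < tol:      # stop at convergence
            break
    return x

print(gradient_descent(df))  # approximately 3.0, the minimizer of f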
