
Department of Mathematics & Statistics

Agenda and Working Papers

For the Meeting of


Graduate Research Committee
(Mathematics & Statistics, Male & Female)

Wednesday, February 22, 2023


at 01:00 pm in the Faculty Seminar Room

Faculty of Sciences
International Islamic University
Islamabad
INTERNATIONAL ISLAMIC UNIVERSITY, ISLAMABAD
FACULTY OF SCIENCES
DEPARTMENT OF MATHEMATICS & STATISTICS

Subject: AGENDA ITEMS FOR THE MEETING OF THE GRADUATE RESEARCH COMMITTEE
OF MATHEMATICS & STATISTICS (MALE & FEMALE)

The following agenda items have been received for placing before the Graduate Research
Committee of Mathematics & Statistics (Male & Female).

Agenda Item No.  Subject

1. Revision of Scheme of Studies (Maths & Stats)
2. Research Proposal/Synopsis of MS Mathematics Student(s) (Female)
3. Research Proposal/Synopsis of MS Statistics Student(s) (Female)
4. Research Proposal/Synopsis of Ph.D. Statistics Student(s) (Female)
5. Research Proposal/Synopsis of MS Mathematics Student(s) (Male)
6. Research Proposal/Synopsis of Ph.D. Mathematics Student(s) (Male)

1. REVISION OF SCHEME OF STUDIES OF MATHEMATICS & STATISTICS

A soft copy has been shared separately via email.

2. RESEARCH PROPOSAL/SYNOPSIS OF MS (MATHEMATICS) STUDENTS (FEMALE)

1. Nida Sabir (Reg. No. 750-FBAS/MSMA/S21); Supervisor: Dr. Khadija Maqbool; Title: Study of Unsteady Tank Drainage Flow of a Viscous Fluid
2. Maria Moqaddas (Reg. No. 766-FBAS/MSMA/S21); Supervisor: Dr. Maliha Rashid; Title: On generalized fuzzy sets and fixed point theorems
3. Kainat Naeem (Reg. No. 769-FBAS/MSMA/S21); Supervisor: Dr. Amna Kalsoom; Title: Study of Fixed Points in the Perspective of Data Science with Convex Optimization Techniques

3. RESEARCH PROPOSAL/SYNOPSIS OF MS (STATISTICS) STUDENTS (FEMALE)

1. Anum Touqeer Shah (Reg. No. 187-FBAS/MSST/F21); Supervisor: Dr. Ehtasham ul Haq; Title: Computational Impact of Mixture Distribution on Multimodal Continuous Functions using Real-Coded Genetic Algorithms

4. RESEARCH PROPOSAL/SYNOPSIS OF PH.D. (STATISTICS) STUDENTS (FEMALE)

1. Iqra Sardar (Reg. No. 13-FBAS/PHDST/S21); Supervisor: Dr. Farzana Akhtar Abbasi; Title: Improving Efficiency of Machine Learning Models by Hyperparameter Optimization

5. RESEARCH PROPOSAL/SYNOPSIS OF MS (MATHEMATICS) STUDENTS (MALE)

1. Arslan Bin Amjad (Reg. No. 794-FBAS/MSMA/F21); Supervisor: Prof. Dr. M. Arshad; Title: Convergence Rate Analysis for Fixed Point Iterative Approximation Procedures of Non-Expansive Mappings
2. Saad Jamil (Reg. No. 796-FBAS/MSMA/F21); Supervisor: Prof. Dr. M. Arshad; Title: Some Fixed Point Theorems for Nonlinear Operators in Quasi-Metric Spaces

6. RESEARCH PROPOSAL/SYNOPSIS OF PH.D. (MATHEMATICS) STUDENTS (MALE)

1. Muhammad Usman Javed (Reg. No. 130-FBAS/PHDMA/F20); Supervisor: Dr. Ahmer Mehmood; Title: Effects of Surface Undulations on Turbulent Boundary Layer over a Non-Flat Plate

(Prof. Dr. Nasir Ali)


Chairperson (Maths & Stats)

Synopsis of
PhD Statistics (Female)
Student(s)

Improving Efficiency of Machine Learning Models by Hyperparameter
Optimization
Summary
Hyperparameter optimization is the process of finding the best combination of hyperparameters
for a machine learning model to achieve optimal performance. Hyperparameters are parameters
that cannot be learned from the data, such as the learning rate or regularization strength. These
parameters must be set manually by the user and can have a significant impact on the performance
of the model.

There are several methods for hyperparameter optimization, including manual tuning, grid search,
random search, and Bayesian optimization. Manual tuning involves trying different values for each
hyperparameter until the best combination is found. Grid search involves defining a range of values
for each hyperparameter and evaluating the model for every possible combination of
hyperparameters within those ranges. Random search involves randomly sampling combinations
of hyperparameters within a defined range. Bayesian optimization involves using a probabilistic
model to predict the performance of different combinations of hyperparameters and choosing the
combination with the highest predicted performance.

The choice of optimization method depends on the complexity of the model and the number of
hyperparameters. For simple models with a small number of hyperparameters, manual tuning or
grid search may be sufficient. For more complex models with a large number of hyperparameters,
random search or Bayesian optimization may be more effective.
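As a minimal sketch of grid search (using scikit-learn; the estimator, dataset, and hyperparameter grid are assumptions chosen purely for illustration):

```python
# Grid search: every combination of C and kernel is trained and
# cross-validated; the best-scoring combination is reported.
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)

param_grid = {"C": [0.1, 1, 10, 100], "kernel": ["linear", "rbf"]}
search = GridSearchCV(SVC(), param_grid, cv=5)
search.fit(X, y)

print(search.best_params_, search.best_score_)
```

Random search and Bayesian optimization plug into the same pattern, differing only in how candidate combinations are proposed.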

Hyperparameter optimization is an important step in the development of machine learning models, as it can significantly improve model performance. Keep in mind that the optimal hyperparameters may vary across datasets, so they should be re-optimized when working with new data. Additionally, hyperparameter optimization can be computationally expensive, so efficient optimization methods should be used and the trade-off between computation time and performance improvement should be considered.
PhD Research Proposal
Improving Efficiency of Machine Learning Models by
Hyperparameter Optimization

Submitted by
Iqra Sardar
Reg #:13-FBAS/PHDST/S21

Supervised by
Dr. Farzana Akhtar Abbasi

Department of Mathematics and Statistics,


Faculty of Basic and Applied Sciences
International Islamic University Islamabad
2023

Abstract

Most machine learning models are governed by a set of hyperparameters whose values must
be carefully chosen, as they often have a significant impact on performance. Choosing them
manually requires extensive knowledge of machine learning models and state-of-the-art
optimization techniques. Several automatic hyperparameter optimization techniques can be
used to find optimal hyperparameter configurations without a time-consuming and unreliable
manual process. The process of hyperparameter optimization can be computationally
expensive, but it can greatly improve the efficiency of machine learning models. By selecting
the optimal set of hyperparameters, a model can achieve better accuracy and require fewer
computational resources to train. This, in turn, can lead to more efficient and effective
applications of machine learning in a wide range of domains. This work will cover
hyperparameter optimization techniques ranging from simple ones, such as grid and random
search, to more complex ones, such as Bayesian optimization, Hyperband, gradient-based
optimization, particle swarm optimization, and genetic algorithms. Real-world datasets will be
used to compare the performance of different hyperparameter optimization techniques.




Table of Contents

1. Introduction
2. Literature Review
3. Problem Statement
4. Aims and Objectives
5. Research Methodology
6. References




1. Introduction

Machine learning (ML) is a subfield of artificial intelligence that focuses on the
development of algorithms and statistical models that can enable computers to learn and
make predictions or decisions without being explicitly programmed to do so. ML has many
practical applications, including natural language processing, computer vision, speech
recognition, and autonomous systems. It has become an important tool for data analysis
and has been used to solve many real-world problems, such as improving healthcare
outcomes, detecting fraud, and optimizing business processes. In general, ML algorithms
are highly configurable through their hyperparameters. These parameters frequently have
a significant impact on the complexity, behavior, speed, and other characteristics of the
learner, and their values must be carefully chosen to achieve optimal performance. Human
trial-and-error is time-consuming, often biased, error-prone, and computationally
irreproducible when it comes to selecting these values.
ML models have two types of parameters: model parameters, which can be initialized
and updated through the data learning process, and hyperparameters, which cannot be
directly estimated from the data and must be set before training an ML model because
they define the model architecture [1]. Hyperparameters are used either to set up an ML
model (for example, the penalty parameter C in a support vector machine and the learning
rate used to train a neural network) or to define the technique that will be used to
minimize the loss function (for example, the activation function and optimizer types in a
neural network and the kernel type in a support vector machine) [2].
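To make this distinction concrete, the sketch below (scikit-learn; all values are illustrative placeholders, not recommendations) shows where such hyperparameters appear when a model is constructed:

```python
# Hyperparameters are fixed at construction time, before any training.
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier

# SVM: penalty parameter C and kernel type set up the model.
svm = SVC(C=1.0, kernel="rbf")

# Neural network: learning rate, activation, and optimizer ("solver")
# define how the loss function will be minimized.
mlp = MLPClassifier(learning_rate_init=0.001, activation="relu", solver="adam")
```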
A wide range of alternatives must be investigated to develop an optimal ML model.
Hyperparameter tuning refers to the process of constructing the best model architecture
with an optimal hyperparameter configuration. Tuning hyperparameters is seen as a critical
component of developing an efficient ML model, particularly for tree-based ML models
and deep neural networks, which have numerous hyperparameters [3]. Due to the many
types of hyperparameters used by ML algorithms, including categorical, discrete, and
continuous, the hyperparameter tuning method varies amongst them [4]. Manual testing is
a conventional method for tuning hyperparameters that is still utilized in graduate student

research, even though it needs a thorough understanding of the ML algorithms employed and their hyperparameter value settings [5].
Manual tuning is impractical for many reasons: large numbers of hyperparameters, complex
models, time-consuming model evaluations, and non-linear hyperparameter interactions.
These characteristics have sparked significant interest in strategies for the automated
optimization of hyperparameters, known as hyperparameter optimization (HPO) [6]. The prime
objective of HPO is to automate the hyperparameter tuning process and enable users to
efficiently apply ML models to practical problems [7]. After the HPO process, the optimal
architecture of the ML model is expected to be obtained.
HPO is an important part of AutoML for determining the optimal hyperparameters for
neural network (NN) architectures and the model training procedure. Hyperparameters are
parameters that cannot be modified during ML training. They can be used to establish the
structure of the model, such as the number of hidden layers and the activation function, or
to determine the efficiency and accuracy of model training, such as the learning rate (LR)
of stochastic gradient descent (SGD), the batch size, and the optimizer. HPO dates back to
the early 1990s [8,9], and it is commonly used for NNs as ML becomes more prevalent. HPO
is the final step of model design and the first step of NN training. Due to the impact of
hyperparameters on training precision and speed, they must be thoroughly calibrated with
experience prior to training [10].
The HPO procedure automatically optimizes the hyperparameters of an ML model
to remove humans from the loop of an ML system. In exchange for human effort, HPO
requires a large amount of computational resources, particularly when numerous
hyperparameters are optimized concurrently. The problems of how to use computational
resources and how to build an efficient search space have given rise to various works on
HPO algorithms and toolkits.
Conceptually, the purposes of HPO are threefold [11]: to reduce the costly menial work of
artificial intelligence (AI) experts and lower the research and development threshold, to
improve the accuracy and efficiency of NN training [12], and to make the choice of
hyperparameter set more convincing and the training results more reproducible [13].




HPO has become increasingly important in recent years due to two major trends in the
development of deep learning models. The first trend is the upscaling of NNs to improve
precision [14]. The authors of [15-17] found that, in most circumstances, more complicated ML
models with deeper and wider layers perform better than those with basic architectures.
The second trend is to create complex, lightweight models that offer satisfactory
accuracy with fewer weights and parameters [18-20]. Due to the stricter selection of
hyperparameters, it is more challenging to modify empirical values. In both circumstances,
hyperparameter tuning is crucial: a model with a complicated structure requires more
hyperparameters to be tuned, and a model with a finely built structure requires each
hyperparameter to be tuned to a narrow range to reproduce the reported accuracy. It is
possible to manually tune the hyperparameters of a widely used model, since the capacity
to tune manually depends on experience and researchers can always draw on the knowledge
of earlier efforts. This is also true for miniature models. For a larger model or a freshly
published model, however, the broad variety of hyperparameter options necessitates a
tremendous amount of menial effort, as well as a substantial amount of time and computer
resources for trial and error.
Important motivations for applying HPO techniques to ML models are listed below
[21]:
• It reduces the required human effort, as many ML developers spend a significant
amount of time tuning hyperparameters, especially for large datasets or
sophisticated ML algorithms with many hyperparameters.
• It improves the performance of ML models. Many ML hyperparameters have different
optimal values for different datasets or tasks.
• It makes research and models more reproducible. Different ML algorithms can be
compared fairly only when the same level of hyperparameter tuning is applied; hence,
utilizing the same HPO approach on multiple ML algorithms helps identify the most
appropriate ML model for a particular problem.




2. Literature Review

To find ideal hyperparameters, it is critical to use the right optimization technique.
Because many HPO problems are non-convex or non-differentiable optimization problems,
traditional optimization approaches may be inadequate for them, resulting in a local rather
than a global optimum [22]. Gradient descent is a conventional optimization algorithm that
can be used to tune continuous hyperparameters by calculating their gradients [23]. A
gradient-based technique, for example, can be used to optimize the learning rate in a neural
network. Many other optimization techniques, such as decision-theoretic approaches,
Bayesian optimization models, multi-fidelity optimization techniques, and metaheuristic
algorithms, are more suitable for HPO problems than standard optimization methods such as
gradient descent [24]. Aside from handling continuous hyperparameters, several of these
algorithms can also handle discrete, categorical, and conditional hyperparameters. Decision-
theoretic approaches are based on the notion of creating a hyperparameter search space,
detecting the hyperparameter combinations in the search space, and finally choosing the
best-performing hyperparameter combination.
Grid search (GS) [25] is a decision-theoretic approach that exhaustively searches for the
optimal configuration in a defined region of the hyperparameter space.
Random search (RS) [26] is another decision-theoretic approach that, given
restricted execution time and resources, randomly samples hyperparameter combinations
from the search space. Each hyperparameter setting is treated independently in GS and RS.
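A minimal sketch of random search under a fixed evaluation budget (scikit-learn; the distributions and the budget n_iter are assumptions for illustration):

```python
# Random search: sample n_iter configurations from the search space
# instead of enumerating every grid point.
from scipy.stats import loguniform
from sklearn.datasets import load_iris
from sklearn.model_selection import RandomizedSearchCV
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)

param_distributions = {"C": loguniform(1e-2, 1e2), "gamma": loguniform(1e-4, 1e0)}
search = RandomizedSearchCV(SVC(), param_distributions, n_iter=20, cv=5,
                            random_state=0)
search.fit(X, y)
print(search.best_params_)
```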
Bayesian optimization (BO) [27] models, unlike GS and RS, determine the next
hyperparameter value based on the results of previously tested hyperparameter values,
thereby avoiding many unnecessary evaluations; therefore, BO can detect the optimal
hyperparameter combination in fewer iterations than GS and RS.
BO can represent the distribution of the objective function using multiple models
as the surrogate function, such as Gaussian processes (GP), random forests (RF), and tree-
structured Parzen estimators (TPE), for application to varied problems. BO-RF
and BO-TPE can preserve variable conditionality [28]. Consequently, they can be utilized
to optimize conditional hyperparameters such as the kernel type and penalty parameter C
in a support vector machine (SVM). It is challenging to parallelize BO models, since they
operate sequentially to balance the exploration of undiscovered regions with the exploitation
of currently tested regions. Typically, training an ML model requires considerable time and space.
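One possible sketch of BO-TPE with a conditional search space, using the hyperopt library (the space bounds, dataset, and evaluation budget are assumptions for illustration):

```python
# BO-TPE sketch: gamma is only sampled when the RBF kernel is chosen,
# illustrating a conditional hyperparameter.
from hyperopt import fmin, hp, tpe
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)

space = hp.choice("kernel_choice", [
    {"kernel": "linear", "C": hp.loguniform("C_lin", -4, 4)},
    {"kernel": "rbf", "C": hp.loguniform("C_rbf", -4, 4),
     "gamma": hp.loguniform("gamma", -6, 1)},
])

def objective(params):
    # TPE minimizes the objective, so return negative CV accuracy.
    return -cross_val_score(SVC(**params), X, y, cv=3).mean()

best = fmin(objective, space, algo=tpe.suggest, max_evals=30)
print(best)
```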
Multi-fidelity optimization techniques have been developed to address challenges with
limited resources, with bandit-based algorithms being the most prevalent. Hyperband [29]
is a well-known bandit-based optimization method that can be viewed as an enhanced
variant of RS. It creates compact versions of datasets and assigns the same budget to each
hyperparameter combination. In each round of Hyperband, configurations of ineffective
hyperparameters are discarded to save time and resources.
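scikit-learn's successive-halving searcher implements the budget-allocation idea at the core of Hyperband (Hyperband itself additionally races several halving brackets); a minimal sketch with an assumed estimator and search space:

```python
# Successive halving: many configurations start with a small budget
# (here, few trees); poor performers are discarded each round while
# survivors receive a larger budget.
from sklearn.experimental import enable_halving_search_cv  # noqa: F401
from sklearn.model_selection import HalvingRandomSearchCV
from sklearn.ensemble import RandomForestClassifier
from sklearn.datasets import load_iris

X, y = load_iris(return_X_y=True)

param_distributions = {"max_depth": [2, 4, 8, None],
                       "min_samples_split": [2, 5, 10]}
search = HalvingRandomSearchCV(
    RandomForestClassifier(random_state=0),
    param_distributions,
    resource="n_estimators",  # the budget that grows across rounds
    max_resources=200,
    random_state=0,
)
search.fit(X, y)
print(search.best_params_)
```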
Metaheuristic algorithms are a set of approaches used to tackle difficult, non-convex
optimization problems with large search spaces, to which HPO problems belong [30].
Among all metaheuristic methods, the genetic algorithm (GA) [31] and particle swarm
optimization (PSO) [32] are the two most prominent algorithms used for HPO problems.
Each generation of a genetic algorithm identifies well-performing hyperparameter
combinations and passes them on to the next generation until the optimal combination is
determined. In PSO algorithms, each particle communicates with other particles to identify
and update the current global optimal solution at each iteration until the final optimal
solution is found.
Metaheuristics can efficiently search the search space for optimal or nearly optimal
solutions. Due to their excellent efficiency, they are ideally suited for HPO problems with
a vast configuration space. For instance, they can be utilized in deep neural networks
(DNNs) with a vast configuration space and various hyperparameters, such as activation
and optimizer types, learning rate, dropout rate, etc.
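A toy GA sketch over two continuous hyperparameters (pure NumPy; the fitness surface, population size, and mutation scale are assumptions standing in for a real validation score):

```python
# Toy genetic algorithm: select the best half, recombine random pairs,
# mutate, and repeat for a few generations.
import numpy as np

rng = np.random.default_rng(0)

def fitness(x):
    # Stand-in for (negated) validation error; an assumption for illustration.
    return -((x[0] - 0.3) ** 2 + (x[1] - 0.7) ** 2)

pop = rng.uniform(0, 1, size=(20, 2))            # initial population
for generation in range(30):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[-10:]]      # keep the best half
    pairs = rng.integers(0, 10, size=(20, 2))    # crossover: average pairs
    children = (parents[pairs[:, 0]] + parents[pairs[:, 1]]) / 2
    pop = np.clip(children + rng.normal(0, 0.05, size=children.shape), 0, 1)

best = pop[np.argmax([fitness(ind) for ind in pop])]
print(best)  # should approach (0.3, 0.7)
```

A PSO loop has the same shape, but each candidate instead carries a velocity updated toward its own best and the global best position.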

3. Problem Statement

Hyperparameter optimization is a crucial step in the ML pipeline, as it can greatly
impact the performance of a model. A model's hyperparameters are parameters that are not
learned from the data but are set by the practitioner prior to training. The problem statement
of HPO can be stated as follows: given an ML algorithm with a set of hyperparameters,
find the combination of hyperparameters that results in the best performance on a given
task, such as prediction accuracy or F1 score.
This problem is challenging for several reasons. First, there are often many
hyperparameters to optimize, and the relationship between each hyperparameter and the
model's performance is often non-linear and complex. Second, the performance of a model
can vary greatly depending on the training data and the performance metric used. Third,
the optimization process can be computationally expensive, as it typically involves training
multiple models with different hyperparameters.
To address these challenges, practitioners often use various optimization
algorithms, such as grid search, random search, and Bayesian optimization, to find the
optimal hyperparameters. However, choosing the right algorithm and setting up the
optimization process correctly can be difficult, and requires a good understanding of the
ML algorithm and the data. As the mathematical formalization of HPO is essentially 'black-
box' optimization, often in a high-dimensional space, it is better delegated to
appropriate algorithms and machines to increase efficiency and ensure reproducibility.
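For reference, this black-box formalization is commonly written as follows (a standard formulation from the HPO literature, e.g. [7, 21]; the notation is supplied here for illustration):

$$x^{*} = \arg\min_{x \in \mathcal{X}} f(x),$$

where $f(x)$ denotes the validation loss (or the negative of a score such as accuracy or F1) obtained by training the model with hyperparameter configuration $x$, and $\mathcal{X}$ is the search space of all admissible configurations.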
With these challenges in mind, this work theoretically and algorithmically presents
HPO, with numerous practical applications. Although utilizing HPO techniques to modify
the hyperparameters of ML models significantly increases model performance, many other
elements, such as computational complexity, still have significant potential for
improvement. On the other hand, because different HPO models have different advantages
and problems to solve, an overview of them is required for optimal optimization algorithm
selection in terms of different types of ML models and problems.

4. Aims and Objectives

The aim of the study is to optimize hyperparameters for enhanced performance of ML
models, and further to compare these ML models using HPO toolkits that provide libraries
and frameworks for practical use. The specific objectives are:
• To examine ML algorithms and their key hyperparameters.
• To examine HPO strategies and their applications to various ML models through
suitable algorithm selection in practical scenarios.

• To analyze real-world datasets and assess their impact on the performance of ML
algorithms. This could involve working with large, complex datasets, such as medical
images, financial data, or natural language text.
• To highlight the open challenges and research directions of the HPO research domain.

5. Research Methodology

The primary objective of ML is to solve optimization problems. The weight
parameters of an ML model are initialized and then optimized using an optimization
algorithm until the objective function approaches a minimum value or the accuracy
approaches a maximum value [33]. Similarly, hyperparameter optimization methods seek
to optimize the architecture of an ML model by identifying the ideal hyperparameter
configurations. This section discusses the fundamental ideas of hyperparameter
optimization for ML models.

5.1 Hyperparameter optimization problem

The objective of HPO is to obtain optimal or near-optimal model performance by
modifying hyperparameters within the specified budgets [7]. The mathematical expression
of the objective function f differs based on the objective function and performance metric
of the selected ML method. Various metrics, such as accuracy, precision, F1-score, and
recall, can be utilized to evaluate the performance of a model. In addition, time budgets are
a key constraint when optimizing HPO models in practice and must be taken into
consideration. Maximizing the objective function of an ML model over a decent number
of hyperparameter settings typically requires a significant amount of time: every time a
hyperparameter value is evaluated, the complete ML model must be retrained, and the
validation set must be processed to produce a performance score. The primary HPO process
is as follows [34]:
1. Choose an objective function and performance metrics.
2. Identify the hyperparameters that need to be tuned, summarize their types, and
identify the best optimization approach.

3. As the baseline model, train the ML model using the default hyperparameter setup
or common values.
4. Begin the optimization process by selecting a wide search space as the feasible
hyperparameter domain, based on manual testing and/or domain expertise.
5. Narrow the search space based on the regions of presently tested well-performing
hyperparameter values, or, if necessary, explore additional search spaces.
6. As the final solution, return the best-performing hyperparameter configuration.
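A hedged sketch of steps 3-5 in miniature: a default-hyperparameter baseline, a wide random search, then a narrowed search around the best region found (scikit-learn; all ranges are illustrative assumptions):

```python
# Steps 3-5: baseline with defaults, coarse random search, then a
# narrower search centered on the best configuration found so far.
from scipy.stats import randint
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RandomizedSearchCV, cross_val_score

X, y = load_iris(return_X_y=True)

# Step 3: baseline model with default hyperparameters.
baseline = cross_val_score(RandomForestClassifier(random_state=0), X, y, cv=5).mean()

# Step 4: wide initial search space.
wide = RandomizedSearchCV(
    RandomForestClassifier(random_state=0),
    {"max_depth": randint(2, 30), "min_samples_split": randint(2, 20)},
    n_iter=20, cv=5, random_state=0,
).fit(X, y)

# Step 5: narrow the space around the best value found so far.
d = wide.best_params_["max_depth"]
narrow = RandomizedSearchCV(
    RandomForestClassifier(random_state=0),
    {"max_depth": randint(max(2, d - 3), d + 4),
     "min_samples_split": randint(2, 10)},
    n_iter=10, cv=5, random_state=0,
).fit(X, y)

print(baseline, wide.best_score_, narrow.best_score_)
```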
Hyperparameter optimization problems can arise in many different forms; some of the most
common challenges are:
• Overfitting: Overfitting occurs when a model is trained too closely on the training
data, causing it to perform poorly on new, unseen data. One of the main challenges
in HPO is to find hyperparameters that prevent overfitting while still achieving high
performance on the training data.
• Model Complexity: Another challenge is to find hyperparameters that balance
model complexity with performance. If a model is too simple, it may not capture
the underlying patterns in the data, leading to poor performance. On the other hand,
if a model is too complex, it may be prone to overfitting and perform poorly on new
data.
• Time Constraints: Hyperparameter optimization can be computationally expensive,
and in some applications, there may be time constraints on how long the
optimization process can take. This can make it challenging to find the optimal
hyperparameters in a reasonable amount of time.
• Dataset Diversity: Hyperparameters that work well on one dataset may not work
well on another dataset. This can make it challenging to find hyperparameters that
generalize well across different datasets.
• Model Selection: As mentioned earlier, there are many different ML algorithms
available, each with its own set of hyperparameters. One of the challenges in
hyperparameter optimization is to select the most appropriate model and
hyperparameters for a given task.

• Black-box Optimization: In some cases, the relationship between hyperparameters
and model performance may be unknown or difficult to determine. This can make
hyperparameter optimization a difficult, black-box optimization (BBO) problem.
• Model Stability: The choice of hyperparameters can affect the stability of a model.
Researchers are exploring how hyperparameter optimization can be used to
improve the stability of ML models.
To identify optimal hyperparameter configurations for ML models, proper optimization
methods should be applied to these HPO problems.

5.2 Hyperparameters in ML Models

Hyperparameters are parameters that are set before the training of an ML model, as opposed
to model parameters, which are learned during training. The choice of hyperparameters can
greatly affect the performance of an ML model.
Some common examples of hyperparameters in ML models include:
• Learning rate: The learning rate determines how quickly a model updates its parameters
during training. A learning rate that is too high can cause training to diverge or oscillate
and never converge, while a learning rate that is too low can cause the model to converge
too slowly.
• Regularization strength: Regularization is a technique used to prevent overfitting in ML
models. The regularization strength determines the amount of regularization applied to
the model.
• Number of hidden layers: In deep learning models, the number of hidden layers can
greatly affect the performance of the model. Too few hidden layers can result in a model
that is too simple to capture the underlying patterns in the data, while too many hidden
layers can result in a model that is too complex and prone to overfitting.
• Number of neurons: The number of neurons in each layer of a deep learning model can
also affect the performance of the model. Too few neurons can result in a model that is
too simple, while too many neurons can result in a model that is too complex and prone
to overfitting.

• Dropout rate: Dropout is a regularization technique used in deep learning models to
prevent overfitting. The dropout rate determines the fraction of neurons that are
dropped out during training.
• Kernel size: In convolutional neural networks (CNN), the kernel size determines the
size of the filters used to extract features from the data.
• Stride size: The stride size determines the step size used when moving the kernel over
the input data in CNN.
The specific hyperparameters will depend on the specific algorithm being used. The goal
of hyperparameter optimization is to find the best combination of hyperparameters for a
given task that results in the highest performance.
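Several of the hyperparameters listed above can be seen together in a single model definition; a sketch using scikit-learn's multilayer perceptron (all values are placeholders for illustration; dropout, kernel size, and stride would appear analogously in a deep learning framework's CNN layers):

```python
# The listed hyperparameters in one place: hidden layer count and widths,
# learning rate, regularization strength, optimizer, and activation.
from sklearn.neural_network import MLPClassifier

model = MLPClassifier(
    hidden_layer_sizes=(64, 32),  # two hidden layers, 64 and 32 neurons
    learning_rate_init=0.001,     # learning rate
    alpha=1e-4,                   # L2 regularization strength
    solver="adam",                # optimizer
    activation="relu",            # activation function
)
```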
In general, ML models are categorized as either supervised or unsupervised learning
models, depending on whether they are designed to model labelled or unlabeled datasets
[35].
• Supervised learning algorithms are a class of ML models that map input features to
a target by training on labelled data [36]. They primarily include linear models, k-
nearest neighbors (KNN), support vector machines (SVM), naive Bayes (NB),
decision-tree-based models and deep learning models.
• Unsupervised learning algorithms are used to discover patterns in unlabeled data and
are classified as clustering or dimensionality reduction methods based on their goals.
Clustering methods include k-means, density-based spatial clustering of applications
with noise (DBSCAN), hierarchical clustering, and expectation-maximization (EM),
while principal component analysis (PCA) and linear discriminant analysis (LDA) are
two common dimensionality reduction algorithms [37].
• Furthermore, ensemble learning approaches, such as voting, bagging, random forest
(RF), and boosting methods including AdaBoost, gradient boosting, XGBoost, and
LightGBM, combine several individual models to increase model performance.
The significant hyperparameters of typical ML models are explored in this study based
on Python and R libraries.

5.3 Hyperparameter optimization techniques


• Manual Search
• Grid Search
• Random Search
• Gradient-Based Optimization
• Bayesian Optimization
o Bayesian Optimization with Gaussian Processes (BO-GP)
o Bayesian Optimization with Tree-structured Parzen Estimator (BO-TPE)
• Successive Halving
o Halving Grid Search
o Halving Randomized Search
• HyperBand
• Bayesian Optimization HyperBand (BOHB)
• Particle swarm optimization (PSO)
• Genetic algorithm (GA)

6. References

[1] M. Kuhn, K. Johnson, Applied Predictive Modeling, Springer, 2013, ISBN: 9781461468493.
[2] G.I. Diaz, A. Fokoue-Nkoutche, G. Nannicini, H. Samulowitz, An effective algorithm
for hyperparameter optimization of neural networks, IBM J. Res. Dev. 61 (2017) 1–20,
https://doi.org/10.1147/JRD.2017.2709578.
[3] F. Hutter, L. Kotthoff, J. Vanschoren (Eds.), Automatic Machine Learning: Methods,
Systems, Challenges, Springer, 2019, ISBN 9783030053185.
[4] N. Decastro-García, Á.L. Muñoz Castañeda, D. Escudero García, M.V. Carriegos,
Effect of the sampling of a dataset in the hyperparameter optimization phase over the
efficiency of a machine learning algorithm, Complexity (2019),
https://doi.org/10.1155/2019/6278908.
[5] S. Abreu, Automated Architecture Design for Deep Neural Networks, arXiv preprint
arXiv:1908.10714, (2019). http://arxiv.org/abs/1908.10714.
[6] O.S. Steinholtz, A Comparative Study of Black-box Optimization Algorithms for
Tuning of Hyper-parameters in Deep Neural Networks, M.S. thesis, Dept. Elect. Eng.,
Luleå Univ. Technol., (2018).
[7] R.E. Shawi, M. Maher, S. Sakr, Automated machine learning: State-of-the-art and open
challenges, arXiv preprint arXiv:1906.02287, (2019). http://arxiv.org/abs/1906.02287.
[8] B. D. Ripley, Statistical aspects of neural networks, Networks and Chaos: Statistical and
Probabilistic Aspects, 50: 40–123, (1993).
[9] R. D. King, C. Feng, A. Sutherland, StatLog: comparison of classification algorithms on
large real-world problems, Applied Artificial Intelligence: An International Journal,
9(3): 289–333, (1995).
[10] J. Rodriguez. Understanding hyperparameters optimization in deep learning models:
Concepts and tools, (2018).
[11] M. Feurer, F. Hutter. Hyperparameter optimization. In Automated Machine Learning,
3–33. Springer, (2019).
[12] G. Melis, C. Dyer, P. Blunsom, On the state of the art of evaluation in neural language
models, arXiv preprint arXiv:1707.05589, (2017).

[13] J. Bergstra, D. Yamins, D. Cox, Making a science of model search: Hyperparameter
optimization in hundreds of dimensions for vision architectures, (2013).
[14] M. Tan, Q. V. Le, EfficientNet: Rethinking model scaling for convolutional neural
networks, arXiv preprint arXiv:1905.11946, (2019).
[15] K. He, X. Zhang, S. Ren, J. Sun, Deep residual learning for image recognition, in
Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 770–778,
(2016).
[16] S. Zagoruyko, N. Komodakis, Wide residual networks, arXiv preprint
arXiv:1605.07146, (2016).
[17] Y. Huang, Y. Cheng, A. Bapna, O. Firat, D. Chen, M. Chen, H. Lee, J. Ngiam, Q. V. Le,
Y. Wu, et al., GPipe: Efficient training of giant neural networks using pipeline parallelism,
in Advances in Neural Information Processing Systems, 103–112, (2019).
[18] N. Ma, X. Zhang, H. Zheng, J. Sun, ShuffleNet V2: Practical guidelines for efficient
CNN architecture design, in Proceedings of the European Conference on Computer Vision
(ECCV), 116–131, (2018).
[19] M. Sandler, A. Howard, M. Zhu, A. Zhmoginov, L. Chen, MobileNetV2: Inverted
residuals and linear bottlenecks, in Proceedings of the IEEE Conference on Computer Vision
and Pattern Recognition, 4510–4520, (2018).
[20] M. Tan, B. Chen, R. Pang, V. Vasudevan, M. Sandler, A. Howard, Q. V. Le, MnasNet:
Platform-aware neural architecture search for mobile, in Proceedings of the IEEE Conference
on Computer Vision and Pattern Recognition, 2820–2828, (2019).
[21] F. Hutter, L. Kotthoff, J. Vanschoren (Eds.), Automatic Machine Learning: Methods,
Systems, Challenges, Springer, 2019, ISBN 9783030053185.
[22] G. Luo, A review of automatic selection methods for machine learning algorithms and
hyper-parameter values, Netw. Model. Anal. Health Inform. Bioinform. 5, 1–16, (2016).
https://doi.org/10.1007/s13721-016-0125-6.
[23] D. Maclaurin, D. Duvenaud, R.P. Adams, Gradient-based Hyperparameter
Optimization through Reversible Learning, arXiv preprint arXiv:1502.03492, (2015).
http://arxiv.org/abs/1502.03492.

[24] N. Decastro-García, Á.L. Muñoz Castañeda, D. Escudero García, M.V. Carriegos,
Effect of the sampling of a dataset in the hyperparameter optimization phase over the
efficiency of a machine learning algorithm, Complexity, (2019).
https://doi.org/10.1155/2019/6278908.
[25] J. Bergstra, R. Bardenet, Y. Bengio, B. Kégl, Algorithms for hyper-parameter
optimization, Proc. Adv. Neural Inf. Process. Syst. 2546–2554, (2011).
[26] B. James, B. Yoshua, Random search for hyper-parameter optimization, J. Mach.
Learn. Res. 13 (1), 281–305, (2012).
[27] K. Eggensperger, M. Feurer, F. Hutter, J. Bergstra, J. Snoek, H. Hoos, K. Leyton-
Brown, Towards an empirical foundation for assessing Bayesian optimization of
hyperparameters, BayesOpt Workshop, 1–5, (2013).
[28] K. Eggensperger, F. Hutter, H.H. Hoos, K. Leyton-Brown, Efficient benchmarking of
hyperparameter optimizers via surrogates, Proc. Natl. Conf. Artif. Intell. 2, 1114–1120
(2015).
[29] L. Li, K. Jamieson, G. DeSalvo, A. Rostamizadeh, A. Talwalkar, Hyperband: a novel
bandit-based approach to hyperparameter optimization, J. Mach. Learn. Res. 18, 1–52,
(2018).
[30] Q. Yao, et al., Taking Human out of Learning Applications: A Survey on Automated
Machine Learning, arXiv preprint arXiv:1810.13306, (2018).
http://arxiv.org/abs/1810.13306.
[31] S. Lessmann, R. Stahlbock, S.F. Crone, Optimizing hyperparameters of support vector
machines by genetic algorithms, Proc. 2005 Int. Conf. Artif. Intell. ICAI’05. 1 (2005) 74–
80.
[32] P. R. Lorenzo, J. Nalepa, M. Kawulok, L. S. Ramos, J. R. Pastor, Particle swarm
optimization for hyper-parameter selection in deep neural networks, Proc. ACM Int. Conf.
Genet. Evol. Comput., 481–488, (2017).
[33] S. Sun, Z. Cao, H. Zhu, J. Zhao, A Survey of Optimization Methods from a Machine
Learning Perspective, arXiv preprint arXiv:1906.06821, (2019).
https://arxiv.org/abs/1906.06821.
[34] G. Luo, A review of automatic selection methods for machine learning algorithms and
hyper-parameter values, Netw. Model. Anal. Health Inform. Bioinform. 5, 1–16, (2016).
https://doi.org/10.1007/s13721-016-0125-6.
[35] A. Moubayed, M. Injadat, A. Shami, H. Lutfiyya, DNS typo-squatting domain
detection: a data analytics & machine learning based approach, 2018 IEEE Glob. Commun.
Conf. GLOBECOM. (2018), https://doi.org/10.1109/GLOCOM.2018.8647679.
[36] R. Caruana, A. Niculescu-Mizil, An empirical comparison of supervised learning
algorithms, ACM Int. Conf. Proc. Ser. 148, 161–168, (2006).
https://doi.org/10.1145/1143844.1143865.
[37] O. Kramer, Scikit-Learn, in Machine Learning for Evolution Strategies, Springer
International Publishing, Cham, Switzerland, 45–53, (2016).



Synopsis of
PhD Mathematics (Male)
Student(s)

Effects of Surface Undulations on Turbulent Boundary Layer over a Non-Flat Plate
The main purpose of this research is to investigate the effects of surface roughness on the
turbulence, the momentum and thermal transport phenomena, and the important physical
parameters which can significantly alter the fluid flow in a turbulent boundary layer. Surface
undulations cause small-scale and large-scale vortices and eddies, which are referred to as
turbulent structures. These turbulent structures can enhance the thermal and momentum transport
within the boundary layer. The reason is that the non-flatness of the surface disrupts the smooth
fluid flow, resulting in fluctuations and vortices which propagate through the boundary layer.
Consequently, the mean flow profile experiences greater fluctuations and exhibits enhanced
turbulence, which can affect the aerodynamic and hydrodynamic forces acting on the surface.
Specifically, the presence of surface undulations may change the size, shape, and intensity of the
turbulent structures which are produced during the fluid flow. It can also alter the mean velocity
profile and the skin-friction drag acting on the surface.

i. Problem Statement
The turbulent boundary layer flow phenomenon is mathematically governed by partial differential
equations which are non-similar in nature, involve more than one independent variable, and
contain the relevant physical parameters. Surface roughness/non-flatness is one of the
important factors which can enhance the turbulence in the boundary layer. In this study, we will
analyze the effects of surface undulations and of some important physical parameters on the
turbulent boundary layer flow over a non-flat surface. The impact of surface non-flatness on the
mean flow velocity, skin-friction drag, and boundary layer thickness will also be investigated
numerically. The obtained results will be compared with the integral solution to validate their
accuracy.

ii. Research Methods


In the present study, the fluid flow problems will be characterized by the reduced form of the
Navier-Stokes equations known as the boundary layer equations. By utilizing suitable similarity
transformations, the boundary layer equations will be converted into a system of non-similar
partial differential equations. To find the solution of this non-similar system of partial
differential equations, a well-known numerical scheme, namely the Keller-Box method, will be
used. The obtained results will be compared with results already available in the literature in
order to ensure the validity and accuracy of the obtained solution.

iii. Expected Outcomes


On the successful completion of this study, we shall be able to gain detailed insight into the
effects of surface undulations on the turbulence level and on the momentum and thermal
transport phenomena within the boundary layer. The effects of some physical parameters, such
as the Reynolds number, Prandtl number, and wave amplitude, on the skin-friction drag, the
flow velocity field, and the thickness of the boundary layer will also be reported.

Muhammad Usman Javed, Reg. # 130-FBAS/PHDMA/F20


Effects of Surface Undulations on Turbulent Boundary
Layer over a Non-Flat Plate

PhD Research Proposal

Submitted By
Muhammad Usman Javed
Reg. # 130-FBAS/PHDMA/F20

Supervisor
Dr. Ahmer Mehmood

Department of Mathematics and Statistics


Faculty of Basic and Applied Sciences
International Islamic University Islamabad, Pakistan
2023
Abstract

The boundary layer phenomenon is considered a bedrock of fluid mechanics due to its vast
applications in different fields of engineering such as aerodynamics, aeronautics, hydrodynamics,
and meteorology. Boundary layer flows can be categorized into two classes, namely laminar and
turbulent boundary layer flows. Turbulent boundary layer flows are more complex in nature than
laminar flows and hence more difficult to investigate. The situation worsens in the presence of
surface undulations, whose effects on turbulent boundary layer flows can be significant and
complex. One of the important aspects of surface undulations on a turbulent boundary layer is an
overall enhancement in turbulence and the formation of small-scale and large-scale turbulent
structures, i.e., vortices and eddies. These turbulent structures can enhance the thermal and
momentum transport within the boundary layer. The reason is that the non-flatness of the surface
disrupts the smooth fluid flow, resulting in fluctuations and vortices which propagate through
the boundary layer. Consequently, the mean flow profile experiences greater fluctuations and
exhibits enhanced turbulence, which can affect the aerodynamic and hydrodynamic forces acting
on the surface. Specifically, the presence of surface undulations/non-flatness may change the
size, shape, and intensity of the turbulent structures which are produced during the fluid flow.
It can also alter the mean velocity profile and the skin-friction drag acting on the surface.
Keeping in view the aforementioned effects of surface undulations, the main objective of this
research is to explore the effects of surface roughness/non-flatness on the turbulence, on the
momentum and thermal transport phenomena within the turbulent boundary layer, and on the
important physical parameters which can significantly alter the fluid flow in the turbulent
boundary layer. For this purpose, the governing partial differential equations will be reduced
to non-linear differential equations. The reduced set of non-linear differential equations will
be tackled through the Keller-Box method. The obtained results will be compared with the
available literature in order to ensure the validity and accuracy of the obtained solution.

Table of Contents

Abstract
1. Introduction
2. Literature Review
3. Problem Statement
4. Aims and Objectives
5. Research Methodology
6. References
1. Introduction

Looking back at the historical background of fluid mechanics, one finds it as old as ancient
civilization. Everyone solves certain fluid flow problems in daily routine, such as lifting heavy
loads, drinking water, breathing air, or stirring sugar into a cup of tea. Similar examples can
also be found in ancient times, when humans used simple weapons such as long and elegant arrows
and spears for hunting birds and animals. The awareness of prehistoric humans of the flow of
water from high altitude to low altitude, the mechanism of irrigation systems, and the design of
boats and ships reveals that humans recognized fluid flow problems and tried to solve them.
Revolutionary developments were made during the history of ancient Greek civilization and the
rise of the Roman Empire.

Fluid mechanics is the branch of applied mechanics in which fluid flow behavior is studied
under the action of applied forces. It is the combination of fluid kinematics, fluid dynamics,
and fluid statics. Fluid kinematics is the analysis of fluid in motion without considering
applied forces, fluid dynamics is the study of fluid under the influence of applied forces, and
fluid statics is the study of fluids at rest. The subject has many applications in numerous
scientific and engineering disciplines such as mathematics, physics, applied chemistry, civil
engineering, mechanical engineering, aerodynamic engineering, aeronautical engineering, marine
engineering, chemical engineering, automobile engineering, and petroleum and gas engineering.

The Greek philosopher and scientist Aristotle (384-322 B.C.) conceived the ideas of density and
uniform acceleration. Probably after Aristotle, the Greek mathematician Archimedes (287-212
B.C.) was the first to examine the viscous flow problem, and he presented the postulates of the
buoyancy principle in his research. Due to his revolutionary contributions, he earned the title
"the father of fluid statics." A Roman engineer, S. J. Frontinus (40-103 A.D.), used the ideas
of Archimedes and elaborated his work in full detail. Unfortunately, no further notable
developments occurred in the field of hydrodynamics after the fall of the Roman Empire, though
progress in fluid mechanics continued at a small pace. The next major breakthrough was due to
Sir Isaac Newton (1642-1727), who proposed the laws of motion and the law of viscosity.

Another great mathematician, L. Euler (1707-1783), described the principle of conservation of
mass, commonly known as the equation of continuity, and elaborated the role of pressure in fluid
flow phenomena. Euler's studies were limited to a perfect fluid, and he devoted twenty years of
his life to the development of hydrodynamic theory. Lagrange called Euler the founder of
classical hydrodynamics, writing: "Euler didn't contribute to fluid mechanics but he created
it." In 1738, the Swiss mathematician D. Bernoulli (1700-1782) derived the differential equation
of motion in his famous theorem, namely Bernoulli's theorem, which is still popular. The French
mathematician J. R. D'Alembert (1717-1783) brought out his famous paradox, known by his name,
stating that "a body immersed in an incompressible and inviscid potential flow has zero drag
with constant velocity relative to the fluid." This paradox stayed unresolved till 1904.

Further marvelous contributions in the field of fluid dynamics were made by J. Lagrange
(1736-1813), P. Laplace (1749-1827), and S. Poisson (1781-1840). Later, progress in fluid
dynamics was retarded for many years because the theoretical and experimental approaches
developed along two different perspectives, owing to the lack of agreement between theory and
practical results. In 1883, O. Reynolds (1842-1912) gave the concept of laminar and turbulent
flows on the basis of his famous parameter, namely the Reynolds number.

Fig. 1: Configuration of boundary layer flow on a flat plate.

In 1904, a German scientist, L. Prandtl (1875-1953), introduced a strong correlation between
the two divergent branches (theoretical and experimental) of fluid dynamics. He presented the
concept of a boundary layer at the Third International Mathematical Congress held in
Heidelberg, published later in the proceedings of this congress the following year. He
showed that the flow past a body can be divided into two regions: a very thin layer region close
to the boundary, where the viscosity effects are prominent, and the remaining region outside
this layer, in which the effect of viscosity can be neglected. C. L. Navier (1785-1836) and
G. G. Stokes (1819-1903) obtained the generalized equations of motion for a viscous fluid,
commonly known as the Navier-Stokes equations, in 1827 and 1845, respectively. Further
refinements and contributions in the study of fluid dynamics continued with the passage of
time. Under the supervision of L. Prandtl, H. Blasius (1883-1970) proposed the analytic
solution of the two-dimensional boundary layer equations.

In fluid dynamics, the fluid flow regime can be divided into two distinct categories:
laminar flow and turbulent flow. In a laminar boundary layer flow, the fluid particles move in
parallel layers without any type of mixing. These types of boundary layer flows are observed
at high viscosity and low velocity gradients (low Reynolds number), such as the flow in a
smooth pipe and the flow in a wind tunnel over a solid surface. In contrast, turbulent boundary
layer flow is characterized by irregular motion in which fluid layers cross each other. In
turbulent boundary layer flows, the momentum transport phenomenon takes place due to the
irregular motion of fluid particles. Such flows occur at high Reynolds numbers and high speeds,
such as the flow over an aircraft wing, the flow over wind turbines, and the flow over
blade-shaped bodies.

The flow geometry is one of the important factors which can affect the fluid flow within
the boundary layer. A number of flow geometries are available in the literature and have been
used by eminent scientists and engineers to explore the physical characteristics of fluid flow.
A bulk of literature on the investigations of laminar and turbulent boundary layer flows over a
flat plate is already available, but the laminar and turbulent boundary layer flow over a
non-flat plate has not been given as much attention as that over flat surfaces. The pool of
studies gets even more limited in the case of turbulent boundary layer flow over a non-flat
plate. Therefore, our sincere interest is to investigate the effects of surface
non-flatness/undulations on the turbulent boundary layer over a non-flat plate. The turbulent
boundary layer flow over a non-flat plate has great importance in fluid dynamics due to its
dynamic and complex nature. These types of flows have numerous practical applications in many
engineering fields, such as offshore wind energy systems, the design of ship hulls, and the
design of aircraft wings.

In view of the aforementioned importance, one may choose a theoretical or experimental
approach to investigate the turbulent boundary layer flow over a non-flat plate. This flow
phenomenon is governed by mathematical equations known as the continuity and momentum
equations; the momentum equations are also known as the Navier-Stokes equations. The
mathematical model for the case of steady, incompressible turbulent boundary layer flow over a
flat plate is given as [36]:

$$\frac{\partial u}{\partial x} + \frac{\partial v}{\partial y} = 0, \qquad (1)$$

$$u\,\frac{\partial u}{\partial x} + v\,\frac{\partial u}{\partial y} = u_e\,\frac{du_e}{dx} + \nu\,\frac{\partial^2 u}{\partial y^2} - \frac{\partial}{\partial y}\langle \overline{u'v'} \rangle, \qquad (2)$$

where $u$ and $v$ represent the mean velocity components. One can see that an additional term
occurs in the above equation, which is due to the turbulent fluctuating flow; this term is
known as the Reynolds shear stress. Here, the eddy viscosity $\epsilon$ is defined by

$$-\langle \overline{u'v'} \rangle = \epsilon\, \frac{\partial u}{\partial y}, \qquad (3)$$

subject to the boundary conditions

$$u(x, 0) = 0, \qquad v(x, 0) = v_w, \qquad \lim_{y \to \infty} u(x, y) = u_e(x). \qquad (4)$$

A number of turbulence models, such as the C-S model, the $k$-$\epsilon$ model, the
$k$-$\omega$ model, and the Boussinesq model, are available in the literature to define the
eddy viscosity $\epsilon$. Among them, the Cebeci-Smith turbulence model [34] gives
particularly efficient and good results. In this model, the turbulent boundary layer is
divided into two regimes, an inner region and an outer region. The eddy viscosity in the
inner region is defined as

$$\epsilon_i = \rho L^2 \sqrt{\left(\frac{\partial u}{\partial y}\right)^2 + \left(\frac{\partial v}{\partial x}\right)^2}, \qquad (5)$$
where

$$L = ky\left[1 - \exp(-y/A)\right]. \qquad (6)$$

Here $L$ is the mixing length, $k$ is the von Kármán constant, and $A$ is the damping length
parameter, defined as

$$A = A^{+}\left[1 + \frac{y}{\rho u_{\tau}^{2}}\,\frac{dp}{dx}\right]^{-0.5}, \qquad A^{+} = 26, \qquad u_{\tau} = \left(\tau_w/\rho\right)^{0.5}. \qquad (7)$$

On the other hand, the eddy viscosity takes the following form in the outer region:

$$\epsilon_{o} = \alpha \rho \gamma \left| \int_{0}^{\infty} \left(1 - \bar{u}\right) dy \right|, \qquad (8)$$

where $\alpha = 0.0168$ and $\gamma$ is the intermittency factor.
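As a small numerical illustration of Eqs. (5)-(7), the inner-region eddy viscosity can be evaluated pointwise as in the Python sketch below. All numerical inputs, the value k = 0.41, and the wall-unit evaluation of the damping ratio are assumptions made for illustration only:

```python
# Pointwise evaluation of the inner-region Cebeci-Smith eddy viscosity,
# Eqs. (5)-(7). All input values are illustrative placeholders, not
# results of the proposed study. The damping ratio is evaluated in wall
# units (y+ = y*u_tau/nu), as is conventional for this model.
import numpy as np

KAPPA = 0.41   # von Karman constant k (assumed standard value)
A_PLUS = 26.0  # damping constant A+ from Eq. (7)

def eddy_viscosity_inner(y, du_dy, dv_dx, rho, nu, tau_w, dp_dx):
    u_tau = np.sqrt(tau_w / rho)                                # friction velocity
    A = A_PLUS * (1.0 + y * dp_dx / (rho * u_tau**2)) ** -0.5   # Eq. (7)
    y_plus = y * u_tau / nu                                     # wall units
    L = KAPPA * y * (1.0 - np.exp(-y_plus / A))                 # mixing length, Eq. (6)
    return rho * L**2 * np.hypot(du_dy, dv_dx)                  # Eq. (5)

y = np.linspace(1e-4, 1e-2, 5)  # wall-normal positions (m)
print(eddy_viscosity_inner(y, du_dy=500.0, dv_dx=0.0,
                           rho=1.2, nu=1.5e-5, tau_w=0.5, dp_dx=0.0))
```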

2. Literature Review
Most of the fluid flows encountered in engineering practice are turbulent in nature. The
term "turbulent flow" was first introduced by L. Kelvin in 1887. In fact, researchers before
Kelvin were also well aware of turbulence, but they used the term "sinuous" for it. About 500
years ago, L. da Vinci [1] depicted the phenomenon of three-dimensional turbulent flow through
his famous drawing of a water jet impacting a water pool. Probably in 1839, H. Poiseuille [2]
was the first scientist to report observations of turbulent flow through experimental results.
He observed that two types of flow occur in round tubes; the first is known as laminar and the
other as turbulent. In 1854, in his second published article, H. Poiseuille [3] showed that
both viscosity and velocity affect the boundary between the two fluid flow regimes, laminar
and turbulent.

Fig. 2: L. da Vinci's [1] drawing of the turbulence phenomenon.

J. Boussinesq's [4] studies were based on empirical hypotheses about turbulent flow. He
developed a linear relationship between the Reynolds stresses and the mean strain rates, and
many current turbulence models are still based on this empirical hypothesis. In 1883, O.
Reynolds [5] performed the famous Reynolds dye experiment. He illustrated that the flow becomes
turbulent when the parameter $Re = UL/\nu$, namely the Reynolds number, crosses a certain
critical value. He also proposed the idea of decomposing the flow velocity into mean and
fluctuating parts, leading to the Reynolds stresses arising from the turbulent velocity
components of the flow. In 1921, G. I. Taylor [6] proposed the idea of the statistical theory
of turbulence and developed a correlation function in the study of turbulent diffusion. He also
elaborated on the concept of a turbulent spectrum.

In 1922, L. Richardson [11] proposed a description of convective turbulent flow on the basis
of experimental data. He stated that turbulent energy moves from large to small eddies until it
is dissipated by viscosity. In 1925, L. Prandtl [7-8] and T. von Kármán [9-10] explained the
mechanism of turbulent flow through their successful mixing length theory. They also presented
results in the form of a prediction of the eddy viscosity. In 1941, the Russian mathematician
A. N. Kolmogorov [12] used L. Richardson's idea and published his most significant results on
turbulence theory. The studies of Hinze [13], Chapman [14], Batchelor [15], and Townsend [16]
on turbulent flows became famous in the 20th century.

Due to the availability of modern and efficient tools, such as CFD tools, the literature on the
turbulence phenomenon grows richer day by day. Currently, a bulk of research is available on
the turbulence phenomenon that is useful for enriching further analysis of this subject. In
1904, L. Prandtl [17] gave the concept of the boundary layer and pointed out that viscous
forces, though small, play a fundamental role in the determination of the flow. H. Blasius [18]
was the first to illustrate the application of the newly developed boundary layer theory, in
his Ph.D. thesis at Göttingen, Germany. He obtained expressions for the boundary layer
thickness and the wall shear stress, and described that a numerical solution was needed for the
velocity profile. Analysis of the laminar boundary layer over a flat plate was also carried out
by S. Goldstein [19] and L. Howarth [20] under different circumstances. The turbulent boundary
layer flow over a flat plate has always been a problem of interest due to its various
applications in engineering; work on this flow phenomenon is available in the literature and
can be found in [21-23].

Laminar boundary-layer flow over a non-flat plate has been discussed by D. A. S. Rees and
I. Pop [24] and by M. A. Hossain and I. Pop [25], and A. Mehmood [26-27] has recently
analyzed laminar boundary-layer flow over a non-flat plate. To the best of our knowledge,
investigations of the turbulent boundary layer on a non-flat plate are very few [28-33].
There is therefore a pressing need to extend the existing literature to the case of
non-flat surfaces, and we will take this opportunity to investigate the turbulent
boundary-layer flow over a non-flat plate.

3. Problem Statement

The turbulent boundary-layer flow is a widely studied problem in fluid mechanics owing to
the inherent complexity of the flow behavior. Many famed researchers of the 19th and 20th
centuries admitted that the analysis of turbulent flow is more complicated and challenging
than that of laminar flow. The study of turbulent phenomena has practical significance in
various engineering applications, such as the design of aircraft, wind turbines, and ships.

The literature survey reveals that a bulk of investigations of turbulent boundary-layer
flows over a flat plate is available, whereas only a few investigations address the
turbulent boundary layer over a non-flat plate. Given how limited our knowledge of such a
dynamic and complex phenomenon is, we shall focus our attention on investigating the
turbulent boundary-layer flow over a non-flat plate under various configurations. We shall
also investigate the convective transport phenomenon within the turbulent boundary layer in
the presence of surface undulations. The most favorable geometrical configurations shall
also be considered in order to obtain the maximum enhancement of momentum and convective
transport. We shall give our full attention to developing an understanding from both the
mathematical and the physical points of view by assuming practical situations. The present
study will therefore contribute insights into the flow behavior and improve our
understanding of turbulent boundary-layer flows over a non-flat plate.

Various solution techniques, such as series solutions, experimental approaches, integral
methods, numerical methods and modern CFD tools, are available in the literature for
solving turbulent boundary-layer flow problems. Owing to developments in built-in and
commercial software, a desired accuracy can be achieved; however, such software and
computing tools are not readily available to us. We will therefore use a well-known
numerical technique, namely the Keller-Box method, to gain insight into the turbulent flow
phenomenon over a non-flat plate. The obtained results will be compared with the integral
solution to validate the accuracy of the numerical method.
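
For the flat-plate baseline, such an integral solution follows from the von Kármán momentum-integral equation. The MATLAB fragment below is a minimal sketch of this benchmark, assuming the classical 1/7th-power-law profile (θ = 7δ/72) and the Blasius wall-friction correlation τ_w/(ρU²) = 0.0225(ν/(Uδ))^{1/4}, both standard textbook closures rather than results of this study; the values of U and ν are illustrative only.

% Minimal sketch: flat-plate integral benchmark from the momentum-integral
% equation d(theta)/dx = tau_w/(rho*U^2), with the textbook closures
% theta = 7*delta/72 and tau_w/(rho*U^2) = 0.0225*(nu/(U*delta))^(1/4).
U  = 10;  nu = 1.5e-5;                            % sample edge velocity, viscosity
rhs = @(x, d) (72/7)*0.0225*(nu./(U*d)).^0.25;    % resulting ODE for delta(x)
x0 = 0.01;                                        % start slightly downstream of the edge
d0 = 0.37*x0*(U*x0/nu)^(-1/5);                    % seed with the closed-form estimate
[x, d] = ode45(rhs, [x0 2], d0);                  % march the integral equation
d_ref = 0.37*x.*(U*x/nu).^(-1/5);                 % classical delta = 0.37 x Re_x^(-1/5)
fprintf('max rel. deviation from 0.37 Re_x^{-1/5}: %.2e\n', ...
        max(abs(d - d_ref)./d_ref));

The marched solution recovers the classical one-fifth-power growth law, which is the kind of closed-form check against which the Keller-Box results can later be validated.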

4. Aims and Objectives

The aim of our research is to investigate the effects of surface undulations on turbulent
boundary layer phenomena over a non-flat plate in detail. The specific phases of our research are:

1) To analyze the impact of non-flatness on the turbulent boundary-layer flow.
2) To analyze the influence of non-flatness on the enhancement of momentum and convective
transport.
3) To identify the flow parameters that enhance the turbulence phenomena.
4) To investigate the flow parameters that cause rapid development of the turbulent
phenomena.
5) To understand the reason behind the non-similar nature of turbulent flow over a non-flat plate.

A list of the various flow problems, with different flow assumptions, that shall be
considered to achieve our research aim is given below:

 Turbulent boundary-layer phenomenon over a non-flat horizontal plate.
 Turbulent boundary-layer phenomenon over a non-flat vertical plate.
 Turbulent boundary-layer phenomenon over a non-flat inclined plate.
 Turbulent stagnation-point flow over a non-flat plate.
 Turbulent boundary-layer phenomenon over a non-flat moving plate.
 Convective transport in the aforementioned phenomena.

We are confident that our research will reveal the physical mechanisms involved in the
effects of surface undulations on the turbulent boundary layer over a non-flat plate, and
that our findings will serve as a resource for future work.

5. Research Methodology

A number of techniques, such as series solutions, integral methods, experimental methods,
analytical methods and numerical methods, are available in the literature to determine the
solution of problems in fluid mechanics. Besides this, highly efficient computing machines
are available to solve fluid-flow problems with the desired accuracy. In the present
analysis, the fluid-flow problems to be investigated are non-similar in nature. It is
important to point out that non-similar flow problems are far more difficult to solve than
self-similar ones: in non-similar boundary-layer flow problems the governing equations
remain highly non-linear partial differential equations. For lack of funds, experimental
laboratories, expensive experimental apparatus and modern computing facilities, we can only
use analytical or numerical methods for the computation of non-similar flow problems. Of
these two options, the numerical route is the more suitable for non-linear partial
differential equations: for such problems, numerical methods typically give more accurate
and stable results than series-based analytical methods, numerical simulations are much
faster, and analytical approximations are more prone to error. A number of numerical
methods, such as the finite-difference method, the spectral method, the Keller-Box method
and the shooting method, exist in the literature for solving fluid-flow problems in complex
geometries. For this purpose, we shall develop a computer program in the computational
software MATLAB for every problem under consideration.

5.1 Solution Methodology

In our proposed study we shall use a numerical method for the solution of the non-similar
governing equations, for which an efficient and accurate numerical approach is required.
Many numerical methods are available in the literature, but we shall use the box method
proposed by Keller in 1970, also known as the Keller-Box method. This choice rests on the
accuracy and stability of its results for the problems under consideration. The Keller-Box
method is an implicit finite-difference technique with a second-order convergence rate.
Keller and Cebeci [36] and many other researchers [34-35] have solved turbulent
boundary-layer flows using this method.

The outline of the Keller-Box method is as follows (a minimal illustrative sketch is given
after the list):

1) Convert the higher-order partial differential equations into a system of first-order
differential equations.
2) Replace the newly introduced functions and their derivatives by central-difference
approximations centered at the midpoints of the computational boxes.
3) A system of algebraic equations is thereby obtained. Calculations start at the leading
edge (x = 0) of the x-coordinate plane; for this purpose a net spacing is laid out along
the x-coordinate and, similarly, along the y-coordinate.
4) The resulting system of non-linear algebraic equations is linearized by the
quasi-linearization (Newton) method.
5) The linearized system can then be written in matrix form; the Jacobian (coefficient)
matrix has a block-tridiagonal structure.
6) Finally, a very efficient block-factorization procedure is used to perform the
remaining computations.
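
To make steps 1)-6) concrete, the following MATLAB fragment is a minimal sketch for the linear model problem u'' − u = 0, u(0) = 0, u(1) = sinh(1) (exact solution u = sinh y), not for the actual boundary-layer equations. Because this model problem is linear, the quasi-linearization of step 4) is not needed, and for brevity the block-tridiagonal system of step 5) is assembled as a sparse matrix and handed to MATLAB's direct solver instead of a hand-written block-factorization routine.

% Minimal Keller-Box sketch for u'' - u = 0, u(0) = 0, u(1) = sinh(1).
% Step 1: reduce to the first-order system u' = v, v' = u.
% Step 2: centre both equations at the midpoints of the boxes.
N  = 100;                % number of net intervals
h  = 1/N;                % uniform net spacing (step 3)
y  = (0:N)'*h;           % net points y_j
M  = 2*(N+1);            % unknowns ordered [u_0 v_0 u_1 v_1 ... u_N v_N]
A  = sparse(M, M);
b  = zeros(M, 1);
row = 0;
for j = 1:N              % one box per interval [y_{j-1}, y_j]
    iu0 = 2*j - 1;  iv0 = 2*j;      % indices of u_{j-1}, v_{j-1}
    iu1 = 2*j + 1;  iv1 = 2*j + 2;  % indices of u_j, v_j
    % u' = v centred at the midpoint: (u_j - u_{j-1})/h - (v_j + v_{j-1})/2 = 0
    row = row + 1;
    A(row, [iu1 iu0 iv1 iv0]) = [1/h, -1/h, -0.5, -0.5];
    % v' = u centred at the midpoint: (v_j - v_{j-1})/h - (u_j + u_{j-1})/2 = 0
    row = row + 1;
    A(row, [iv1 iv0 iu1 iu0]) = [1/h, -1/h, -0.5, -0.5];
end
row = row + 1;  A(row, 1)     = 1;  b(row) = 0;        % boundary condition u(0) = 0
row = row + 1;  A(row, M - 1) = 1;  b(row) = sinh(1);  % boundary condition u(1) = sinh(1)
sol = A\b;               % steps 5-6: solve the block-tridiagonal system
u   = sol(1:2:end);      % recover u at the net points
fprintf('max error = %.2e\n', max(abs(u - sinh(y))));

Halving h should reduce the reported error by roughly a factor of four, confirming the second-order convergence rate mentioned above.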

Fig. 3: Schematic of the variable grid system (net points $\xi_i$ along the streamwise
coordinate and $\eta_j$ along the wall-normal coordinate).

6. References
1. M. Clayton, Leonardo Da Vinci, A life in drawing, (Royal Collection Trust, 2018).
2. G. Hagen, On the motion of water in narrow cylindrical tubes (in German), Pogg. Ann.
46, (1839): 423.
3. G. Hagen, On the influence of temperature on the movement of water through pipes (in
German), Abhandl. Akad. Wiss. (1854): 17.
4. J. Boussinesq, Essai sur la théorie des eaux courantes, Mém. prés. par div. savants à
l'Acad. Sci. (1877): 1–680.
5. O. Reynolds, On the dynamical theory of turbulent incompressible viscous fluids and the
determination of the criterion, Phil. Trans. R. Soc. London A (1895): 123–164.
6. G. I. Taylor, Statistical theory of turbulence, Proc. Roy. Soc. London A (1935): 421–444.
7. L. Prandtl, Bericht über Untersuchungen zur ausgebildeten Turbulenz, Zs. angew. Math.
Mech. (1925): 136–139.
8. L. Prandtl, Turbulent flow, NACA Tech. Memo, 435. Originally delivered to 2nd. Internat.
Congr. Appl. Mech. Zurich, (1926).
9. T. von Kármán, On the statistical theory of turbulence, Proc. Nat. Acad. Sci., Wash.
(1937): 98-105.
10. T. von Kármán, Some remarks on the statistical theory of turbulence, Proc. 5th Int. Congr.
Appl. Mech., Cambridge, MA, (1938): 347.
11. L. F. Richardson, Weather Prediction by Numerical Process, Cambridge University Press
(2007).
12. A. N. Kolmogorov, The local structure of turbulence in incompressible viscous fluid for
very large Reynolds number, Dokl. Acad. Nauk. SSSR 30, (1941): 9–13; On degeneration
(decay) of isotropic turbulence in an incompressible viscous liquid, Dokl. Acad. Nauk.
SSSR 31, (1941): 538–540; Dissipation of energy in locally isotropic turbulence, Dokl.
Acad. Nauk. SSSR 32, (1941): 16–18.
13. J. O. Hinze, Turbulence, McGraw-Hill, New York, (1959).
14. G. T. Chapman, M. Tobak, Observations, theoretical ideas, and modeling of turbulent
flows — past, present and future, in Theoretical Approaches to Turbulence, Dwoyer et al.
(eds), Springer-Verlag, New York, (1985): 19–49.

15. G. K. Batchelor, The Theory of Homogeneous Turbulence, Cambridge University Press,
Cambridge, (1953).
16. A. A. Townsend, The Structure of Turbulent Shear Flow, Cambridge University Press,
Cambridge, (1956).
17. L. Prandtl, Über Flüssigkeitsbewegung bei sehr kleiner Reibung, Proc. Third Int. Math.
Congr., Heidelberg, (1904): 484-491.
18. H. Blasius, Grenzschichten in Flüssigkeiten mit kleiner Reibung. Z. Math. U. Phys. 56,
(1908): 1-37.
19. S. Goldstein, Concerning some solutions of the boundary-layer equations in
hydrodynamics, Proc. Cambr. Phil. Soc. 26, (1930): 1-30.
20. L. Howarth, On the solution of the boundary layer equations, Proc. Roy. Soc. London A
164, (1938): 547-579.
21. N. A. Cumpsty, M. R. Head, The calculation of three-dimensional turbulent boundary
layers. Part I: Flow over the rear of an infinite swept wing, Aero. Quart. 18, (1967): 55-84.
22. B. C. Sakiadis, Boundary-layer behavior on continuous solid surfaces: II. The boundary
layer on a continuous flat surface, AIChE J. 7(2), (1961): 221-225.
23. B. C. Sakiadis, Boundary-layer behavior on continuous solid surfaces: III. The boundary
layer on a continuous cylindrical surface, AIChE J. 7(3), (1961): 467-472.
24. D. A. S. Rees and I. Pop, Boundary layer flow and heat transfer on a continuous moving
wavy surface, Acta Mech. 112, (1995): 149-158.
25. M.A. Hossain and I. Pop, Magneto-hydrodynamic boundary layer flow and heat transfer
on a continuous moving wavy surface, Arch. Mech., 48(5) (1996): 813- 823.
26. A. Mehmood, M. S. Iqbal, Effect of Heat Absorption in Natural Convection Nanofluid
Flow Along a Vertical Wavy Surface, Journal of Molecular Liquids, 224, Part B, (2016):
1326-1331.
27. A. Mehmood et al., Entropy Analysis in Moving Wavy Surface Boundary Layer, Journal
of Thermal Sciences, Vol. 23, No. 1, (2019): 233-241.
28. F. A. Dvorak, Calculation of turbulent boundary layers on rough surfaces in pressure
gradient, AIAA Journal 7, (1969): 1752-1759.
29. P. S. Beebe, J. E. Cermak, Turbulent Flow over wavy boundary, Project Thesis, Colorado
State University, Colorado, (1972).

30. J. D. Hudson, L. Dykhno, T. J. Hanratty, Turbulence production in flow over a wavy
wall, Exp. Fluids 20, (1996): 257-265.
31. P. A. Taylor, P. R. Gent, J. M. Keen, Some numerical solutions for turbulent
boundary-layer flow above fixed, rough, wavy surfaces, Geophys. J. R. Astr. Soc. 44,
(1976): 177-201.
32. P. Cherukat, Y. Na, T. J. Hanratty, Direct numerical simulation of a fully developed
turbulent flow over a wavy wall, Theoret. Comput. Fluid Dynamics 11, (1998): 109–134.
33. J. Buckles, T. J. Hanratty, Turbulent flow over large-amplitude wavy surfaces, J. Fluid
Mech. vol. 140. (1984): 27-44.
34. T. Cebeci, A. M. O. Smith, Analysis of Turbulent Boundary Layers, Academic Press, New
York, (1974).
35. T. Cebeci, P. Bradshaw, Physical and Computational Aspects of Convective Heat Transfer,
Springer, New York, (1988).
36. T. Cebeci, H. B. Keller, Accurate numerical methods for boundary layer flows, II: Two
dimensional turbulent flows, AIAA Journal, (1972): 1193-1199.
