Degree Level 1
Students will learn fundamental skills required by every IT professional and gain a basic understanding of the underlying computer system through computer architecture, operating systems, networks, and databases. Some specialized modules will provide students with basic knowledge of web design and development. The modules will also help them develop personal and organizational skills, as well as nurture creativity and innovation.
Common Modules
The course combines design thinking with digital innovation. Design thinking is a fundamental paradigm shift from traditional ways of working and learning towards more agile and adaptive practices built around emerging digital technologies. This module shifts students' systems thinking towards innovation at an early stage.
Cultural awareness (or cultural sensitivity, cross-cultural / intercultural awareness) refers to the awareness of our own cultural identity, values and beliefs, and the knowledge and acceptance of others' cultures. Cultural awareness helps us break down cultural barriers, build cultural bridges, and learn how to love and appreciate those different from us. We can relate better to people with cultural differences as we begin to understand ourselves better. This results in more cultural connection and less cultural conflict.
System analysis and design is a process that many companies use to evaluate particular business situations and develop ways to improve them through more effective methods. Companies may use this process to reshape their organization or meet business objectives related to growth and profitability. Systems have been classified in different ways; common classifications are: (1) physical or abstract, (2) open or closed, and (3) man-made information systems. Physical systems are tangible entities that may be static or dynamic in operation.
There are three approaches to system design: structured design, function-oriented design, and object-oriented design. What, then, are the parts of a system?
Basically, there are three major components in every system: input, processing and output. In a system, the different components are connected with each other and are interdependent; the human body, for example, represents a complete natural system.
Examples of systems analysis include making a change to some computer code to achieve a task, fixing a faulty air-conditioning system, or analyzing the routines in your life to stop a recurring mistake.
4. Programming with Python
Python is a computer programming language often used to build websites and software, automate tasks, and conduct
data analysis. Python is a general-purpose language, meaning it can be used to create a variety of different programs
and isn't specialized for any specific problems.
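As a quick illustration of that generality, the short sketch below mixes text processing and simple data analysis in a few lines of standard Python; the sentence and numbers are invented for the example:

```python
# A minimal sketch of Python as a general-purpose language: the same few
# lines handle text processing, arithmetic, and simple data analysis.

# Count word frequencies in a sentence (text processing)
sentence = "python is simple and python is versatile"
counts = {}
for word in sentence.split():
    counts[word] = counts.get(word, 0) + 1

# Basic data analysis: average word length
words = sentence.split()
avg_len = sum(len(w) for w in words) / len(words)

print(counts["python"])      # 2
print(round(avg_len, 2))     # 4.86
```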
Binary Math - Binary mathematics is at the heart of the computer and an essential field of math for computer programming and technology. The binary number system uses only two digits, 0 and 1, for all mathematical concepts. It simplifies the coding process and is essential for the low-level instructions used in hardware programming.
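A brief sketch of how binary appears in Python: built-in conversion between base 10 and base 2, plus an addition written entirely in binary literals:

```python
# Sketch: the binary number system in Python. bin() and int() convert
# between base 10 and base 2; 0b-prefixed literals are written in binary.

n = 13
binary = bin(n)          # '0b1101'
back = int("1101", 2)    # 13

# Binary addition is what hardware adders implement:
assert 0b1101 + 0b0011 == 0b10000  # 13 + 3 = 16

print(binary, back)
```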
College Algebra - College Algebra is the introductory course in algebra. The course is designed to familiarize learners
with fundamental mathematical concepts such as inequalities, polynomials, linear and quadratic equations, and
logarithmic and exponential functions.
Topics like factoring, linear equations, ratios, quadratic equations, and exponents are essential for computer science, and you need a clear understanding of them to succeed in it. In programming, algebra underpins how mathematical objects are modelled and manipulated.
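For instance, one of the algebra topics above, quadratic equations, translates directly into code. The sketch below applies the quadratic formula; the function name and sample coefficients are chosen purely for illustration:

```python
# A small sketch: solving ax^2 + bx + c = 0 with the quadratic formula.
import math

def solve_quadratic(a, b, c):
    """Return the real roots of ax^2 + bx + c = 0 (raises if none exist)."""
    disc = b * b - 4 * a * c          # the discriminant b^2 - 4ac
    if disc < 0:
        raise ValueError("no real roots")
    root = math.sqrt(disc)
    return ((-b + root) / (2 * a), (-b - root) / (2 * a))

print(solve_quadratic(1, -5, 6))  # (3.0, 2.0): x^2 - 5x + 6 = (x-3)(x-2)
```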
Statistics - Statistics is a branch of applied mathematics that involves the collection, description, analysis, and
inference of conclusions from quantitative data. The mathematical theories behind statistics rely heavily on differential
and integral calculus, linear algebra, and probability theory.
Statistics is used in virtually all scientific disciplines, such as the physical and social sciences, as well as in business, the humanities, government, and manufacturing. It developed from the application of mathematical tools, including calculus and linear algebra, to probability theory.
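A minimal sketch of the descriptive side of statistics, using Python's standard statistics module on an invented data set:

```python
# Descriptive statistics with the standard library: collection (a list),
# description (mean, standard deviation, median) of quantitative data.
import statistics

data = [2, 4, 4, 4, 5, 5, 7, 9]
mean = statistics.mean(data)       # 5
stdev = statistics.pstdev(data)    # population standard deviation: 2.0
median = statistics.median(data)   # 4.5

print(mean, stdev, median)
```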
Applied mathematics is the application of mathematical methods by different fields such as physics, engineering,
medicine, biology, finance, business, computer science, and industry. Thus, applied mathematics is a combination of
mathematical science and specialized knowledge.
Calculus - Calculus is the mathematical study of change, in the same way that geometry is the study of shape and algebra is the study of operations and their application to solving equations. Calculus is broadly classified into two branches: differential calculus and integral calculus.
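Both branches can be approximated numerically. The hedged sketch below estimates a derivative (differential calculus) with a central difference and a definite integral (integral calculus) with a midpoint Riemann sum for f(x) = x²; the step sizes are illustrative choices:

```python
# Numerical sketches of the two branches of calculus for f(x) = x**2.

def derivative(f, x, h=1e-6):
    # central difference approximation of f'(x)
    return (f(x + h) - f(x - h)) / (2 * h)

def integral(f, a, b, n=100_000):
    # midpoint Riemann sum approximation of the integral of f over [a, b]
    width = (b - a) / n
    return sum(f(a + (i + 0.5) * width) for i in range(n)) * width

f = lambda x: x ** 2
print(round(derivative(f, 3), 4))   # ~6.0, since d/dx x^2 = 2x
print(round(integral(f, 0, 1), 4))  # ~0.3333, since the exact value is 1/3
```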
Discrete Math - Discrete mathematics is the study of mathematical structures that are countable or otherwise distinct and separable; examples of discrete structures are combinations, graphs, and logical statements. Discrete structures can be finite or infinite, and the objects studied take distinct, separate values. Discrete mathematics is also called decision mathematics or finite mathematics.
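The discrete structures named above (combinations, graphs, logical statements) can all be expressed with the standard library alone, as this short sketch shows:

```python
# Three discrete structures from the passage, in plain Python.
import math

# Combinations: the number of ways to choose 2 items from 4
assert math.comb(4, 2) == 6

# A graph as an adjacency list (a countable, separable structure)
graph = {"A": ["B", "C"], "B": ["A"], "C": ["A"]}
assert "B" in graph["A"]

# A logical statement: De Morgan's law, checked over all truth values
for p in (True, False):
    for q in (True, False):
        assert (not (p and q)) == ((not p) or (not q))

print("all discrete checks passed")
```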
An operating system is a piece of software that manages files, manages memory, manages processes, handles input
and output, and controls peripheral devices like disk drives and printers, among other things.
As we explore the Functions of the Operating System, we delve into a realm of capabilities that encompass resource
allocation, memory management, process scheduling, device management, file handling, user interfaces, and robust
system security measures.
The operating system's tasks, in the most general sense, fall into six categories:
− Processor management
− Memory management
− Device management
− Storage management
− Application interface
− User interface
The elements of a computer system are hardware, software, people, procedures, data, and connectivity.
For the most part, the IT industry largely focuses on the top five OSs, including Apple macOS, Microsoft Windows,
Google's Android OS, Linux Operating System, and Apple iOS.
An operating system forms the core of any computer device; the functioning and processing of a computer system can come to a halt without one. The different features and the history of the development of operating systems are discussed below. Understanding this concept is a key factor in comprehending computer knowledge, so one should carefully go through the various aspects of this topic.
An Operating System is the interface between the computer hardware and the end user. Processing of data, running applications, file management and memory handling are all managed by the computer's OS. Windows, Mac, Android etc. are examples of operating systems in general use today.
All modern computing devices, including laptops, tablets, mobile phones, etc., comprise an operating system which helps the device work smoothly.
To strengthen your command of computer awareness, review the following related terms, programs and applications:
• Microsoft Office
• Types of Computer
• Computer Networks
• High-Level Computer Languages
• Components of Computer
• Hardware and Software
Operating systems took years to evolve into the modernised, advanced systems they are today. Given below are the details about the evolution and history of operating systems.
• Initially, computers did not have an operating system, and a different code was used to run each program. This made the processing of data more complex and time-consuming
• In 1956, the first operating systems were developed by General Motors to run a single IBM computer
• It was in the 1960s that IBM had started installing OS in the devices they launched
• The first version of the UNIX operating system was launched in 1969; it was initially written in assembly language and later rewritten in the programming language C
• Later on, Microsoft came up with their OS on the request of IBM
• Today, all major computer devices have an operating system, each performing the same functions but with slightly
different features
Given below are the different types of Operating System along with brief information about each of them:
Given below is a list of commonly used Operating systems along with their year of release.
Name of the OS Release Date
Android 2008
iOS 2007
Windows 1985
Mac OS 2001
MS-DOS 1981
Chrome OS 2011
Windows Phone 2010
Blackberry OS 1999
Firefox OS 2013
UNIX 1969
Linux 1991
Understanding operating systems, or OSs, is essential to working in IT. OS types vary depending on the device and its function. This section reviews what operating systems are, why they're important, and the different types of operating systems in use today.
Every computer, smartphone or similar electronic device comes with special software called an operating system. An operating system, also known as an OS, is the engine behind the utility value of computers and smartphones. There are different types of operating systems depending on the device, manufacturer and user preference, and if you work, or want to work, in the information technology field, it's important to understand them.
Key takeaways:
• An operating system is software that supports and manages all the programs and applications used by a
computer or mobile device.
• An operating system uses a graphic user interface (GUI), a combination of graphics and text, that allows you to
interact with the computer or device.
• Every computer or smart device needs at least one operating system to run applications and perform tasks.
An operating system (OS) is a type of software interface between the user and the device hardware. This software
allows users to communicate with the device and perform the desired functions. Operating systems use two
components to manage computer programs and applications:
• The kernel is the core inner component that processes data at the hardware level. It handles input-output
management, memory and process management.
• The shell is the outer layer that manages the interaction between the user and the OS. The shell communicates
with the operating system by either taking the input from the user or a shell script. A shell script is a sequence
of system commands that are stored in a file.
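To illustrate the shell's role, the sketch below uses Python's subprocess module to play the part of a shell script: a stored sequence of commands handed to the operating system one at a time. The commands themselves are invented placeholders:

```python
# A hedged illustration of a shell script: a sequence of commands that the
# shell passes to the operating system in turn. Here each "command" is a
# small Python one-liner so the sketch runs anywhere.
import subprocess
import sys

commands = [
    [sys.executable, "-c", "print('step 1: list files')"],
    [sys.executable, "-c", "print('step 2: process data')"],
]

for cmd in commands:
    # the OS launches each command as a new process and returns its output
    result = subprocess.run(cmd, capture_output=True, text=True)
    print(result.stdout, end="")
```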
1. Batch OS
The batch operating system does not have a direct link with the computer. A different system divides and allocates
similar tasks into batches for easy processing and faster response.
The batch operating system is appropriate for lengthy and time-consuming tasks. To avoid slowing down a device, each
user prepares their tasks offline and submits them to an operator. The advantages and disadvantages of using a batch
operating system include:
Advantages:
• Many users can share batch systems.
• There is little idle time for batch operating systems.
• It becomes possible to manage large workloads.
• It's easy to estimate how long a task will take to be completed.
Disadvantages:
• Batch operating systems are challenging to debug.
• Any failure of the system creates a backlog.
• It may be costly to install and maintain good batch operating systems.
Batch operating systems are used for tasks such as managing payroll systems, data entry and bank statements.
2. Time-sharing or multitasking OS
The time-sharing operating system, also known as a multitasking OS, works by allocating time to a particular task and
switching between tasks frequently. Unlike the batch system, the time-sharing system allows users to complete their
work in the system simultaneously.
It allows many users to be distributed across various terminals to minimize response time. Potential advantages and
disadvantages of time-sharing operating systems include:
Advantages:
• There's a quick response during task performance.
• It minimizes the idle time of the processor.
• All tasks get an equal chance of being accomplished.
• It reduces the chance of software duplication.
Disadvantages:
• The user's data security might be a problem.
• System failure can lead to widespread failures.
• Problems in data communication may arise.
• The integrity of user programs is not assured.
Examples of time-sharing operating systems include Multics and Unix.
3. Distributed OS
This system is based on autonomous but interconnected computers communicating with each other via
communication lines or a shared network. Each autonomous system has its own processor that may differ in size and
function. Distributed operating systems are used for tasks such as telecommunication networks, airline reservation
controls and peer-to-peer networks.
A distributed operating system serves multiple applications and multiple users in real time. The data processing
function is then distributed across the processors. Potential advantages and disadvantages of distributed operating
systems are:
Advantages:
• They allow remote working.
• They allow a faster exchange of data among users.
• They minimize the load on the host computer.
• They reduce delays in data processing.
• Failure in one site may not cause much disruption to the system.
• They enhance scalability since more systems can be added to the network.
Disadvantages:
• If the primary network fails, the entire system shuts down.
• They're expensive to install.
• They require a high level of expertise to maintain.
4. Network OS
Network operating systems are installed on a server providing users with the capability to manage data, user groups
and applications. This operating system enables users to access and share files and devices such as printers, security
software and other applications, mostly in a local area network. Potential advantages and disadvantages of network
operating systems are:
Advantages:
• Centralized servers provide high stability.
• Security issues are easier to handle through the servers.
• It's easy to upgrade and integrate new technologies.
• Remote access to the servers is possible.
Disadvantages:
• They require regular updates and maintenance.
• Servers are expensive to buy and maintain.
• Users' reliance on a central server might be detrimental to workflows.
Examples of network operating systems include Microsoft Windows, Linux and macOS X.
5. Real-time OS
Real-time operating systems provide support to real-time systems that require observance of strict time requirements.
The response time between input, processing and response is tiny, which is beneficial for processes that are highly
sensitive and need high precision. These processes include operating missile systems, medical systems or air traffic
control systems, where delays may lead to loss of life and property. Real-time operating systems may either be hard
real-time systems or soft real-time systems.
Hard real-time systems are installed in applications with strict time constraints. The system guarantees the completion
of sensitive tasks on time. Hard real-time does not have virtual memory. Soft real-time systems do not have equally
rigid time requirements. A critical task gets priority over other tasks. Potential advantages and disadvantages of real-
time operating systems include:
Advantages:
• They use devices and systems to the maximum, producing more output.
• They allow fast shifting from one task to another.
• The focus is on current tasks, with less focus on the queue.
• Real-time systems are meticulously programmed, hence largely free of errors.
• They can be used in embedded systems.
• They allow easy allocation of memory.
Disadvantages:
• They have a low capacity to run tasks simultaneously.
• They use heavy system resources.
• They run on complex algorithms that are not easy to understand.
• They're unsuitable for thread priority because of the system's inability to switch tasks.
Real-time operating systems are used for tasks such as scientific experiments, medical imaging, robotics and air traffic
control operations.
6. Mobile OS
Mobile operating systems run exclusively on small devices such as smartphones, tablets and wearables. The system
combines the features of a personal computer with additional features useful for a handheld device.
Mobile operating systems start when a device is powered on to provide access to installed applications. Mobile
operating systems also manage wireless network connectivity. Potential advantages and disadvantages of mobile
operating systems are:
Advantages:
• Most systems are easy for users to learn and operate.
Disadvantages:
• Some systems are not user-friendly.
• Some mobile OS put a heavy drain on a device's battery, requiring frequent recharging.
Examples of mobile operating systems include Android OS, Apple and Windows mobile OS.
An operating system is software that runs a computer system, and it is useful for many things, such as:
• Multitasking – allowing you to open many applications at a time
• Error handling – the anticipation, detection, and resolution of programming, application, and communications errors
• Security – password protection, so that information does not get leaked
• Input and output controls – controlling other devices, such as printers
There are more benefits, of course, but these are some of the main ones. Common computer operating systems are Windows, Mac OS, Linux, UNIX and DOS; common mobile operating systems are Android, iOS and BlackBerry OS. Nowadays, other devices also have operating systems in them, for example smart TVs and smart fridges.
(Figures: types of OSs; household devices with OSs)
When the computer is first powered up, the initial boot programs are loaded into memory from ROM (Read-Only Memory). These make sure that the hardware, processor and internal memory are functioning correctly. If there aren't any errors, the OS is loaded into memory.
When the computer shuts down, all the data loaded into memory (RAM) is deleted, and the same process is repeated when the computer is turned on again.
Interrupts- A signal sent from a device or from software to the processor, causing the processor to stop temporarily while the interrupt is handled. This can happen due to the following:
• disk drive is ready to receive more data
• error has occurred (eg. paper jammed in the printer)
• the Ctrl+Alt+Del keys are pressed
• software error has occurred
Buffers- Temporary storage areas that compensate for the difference in speed between devices, for example when a slow input or output device cannot keep up with the processor.
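A buffer can be sketched as a simple first-in, first-out queue that absorbs the speed mismatch between a fast producer and a slow consumer; the "pages" here stand in for print data:

```python
# A minimal sketch of buffering: a FIFO queue smooths the speed mismatch
# between a fast producer (e.g. the CPU) and a slow consumer (e.g. a printer).
from collections import deque

buffer = deque()

# The fast device writes a burst of data into the buffer...
for page in range(5):
    buffer.append(f"page-{page}")

# ...and the slow device drains it at its own pace.
printed = []
while buffer:
    printed.append(buffer.popleft())

print(printed)  # pages come out in the order they went in (FIFO)
```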
von Neumann model:
The von Neumann model was proposed by the scientist John von Neumann in 1945. Von Neumann computer systems contain five main building blocks: the arithmetic logic unit (ALU) and the control unit (which together form the central processing unit, CPU), the memory unit, and input and output devices. These components are connected together using the system bus.
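The model can be sketched as a toy machine in which instructions and data share a single memory and the CPU repeats a fetch-decode-execute cycle. The three-instruction set below is invented purely for illustration:

```python
# A toy sketch of the von Neumann model: instructions and data share one
# memory, and the CPU runs a fetch-decode-execute cycle over it.

memory = [
    ("LOAD", 9),     # 0: load memory[9] into the accumulator
    ("ADD", 10),     # 1: add memory[10] to the accumulator
    ("STORE", 11),   # 2: store the accumulator into memory[11]
    ("HALT", None),  # 3: stop
    None, None, None, None, None,
    5,               # 9: data
    7,               # 10: data
    0,               # 11: the result goes here
]

acc = 0  # accumulator register (inside the CPU)
pc = 0   # program counter

while True:
    op, arg = memory[pc]   # fetch + decode the next instruction
    pc += 1
    if op == "LOAD":
        acc = memory[arg]
    elif op == "ADD":
        acc += memory[arg]
    elif op == "STORE":
        memory[arg] = acc
    elif op == "HALT":
        break

print(memory[11])  # 12
```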
Cloud computing offers platform independence: the software does not need to be installed on any particular PC, which gives cloud computing its portability. Applications that run on a cloud are typically accessed over the web, for example through email or web conferencing.
Prerequisites
To learn cloud computing, one should have basic knowledge of computers, Database Management Systems (DBMS) and networking. These subjects will help you understand the concepts of cloud computing easily.
1. Deployment Models
There are four cloud deployment models:
• Public
• Private
• Hybrid
• Community
• Public Cloud
Public cloud is easily accessible to the general public. It is inexpensive, and there are no wasted resources because you pay for what you use.
• Private Cloud
Private cloud only allows systems and services within an organisation, and it is operated by the organisation it serves. Private cloud is best for businesses with dynamic or unpredictable computing needs because they have control over the environment of the cloud.
• Hybrid Cloud
Hybrid Cloud is a cloud service which includes both private and public clouds. Hybrid cloud is best for heavy workload
because it combines both public cloud and private cloud.
• Community Cloud
In the Community cloud, the resources are shared between several organisations. It allows several companies to work
together on the same platform, where they can share their resources.
2. Service Models
There are three service models:
• Infrastructure-as-a-Service (IaaS) – Here users are responsible
for managing data, applications and runtime. In IaaS, providers
manage virtualisation, servers, hard drives, storage, and
networking.
Example: (AWS) Amazon Web Services, Microsoft Azure.
• Platform-as-a-Service (PaaS) – PaaS is used for development. With PaaS, one can develop and customise applications. PaaS makes development, testing, and deployment of applications easy.
Example: Apprenda
• Software-as-a-Service (SaaS) – here the cloud provider delivers complete, ready-to-use software to the client over the internet. Users access the application itself and do not manage the underlying hardware or operating system.
Example: Google Apps, Salesforce, Workday.
History of Cloud Computing
The evolution of cloud computing started in the 1950s with mainframe computing, where multiple users were allowed to access a mainframe. Around 1970, some 20 years later, the concept of virtualisation emerged.
Virtualisation software made it possible to execute one or more operating systems simultaneously in an isolated environment.
In short: cloud computing is uploading data, files and images to data centres that are available to users over the Internet. It ensures that a person can easily store large amounts of data and then have convenient access to it. It is also reliable, because cloud computing offers load balancing and applications can be modified via the internet at any time.
7. Introduction to Networking
What is a network?
In information technology, a network is defined as the connection of at least two computer systems, either by a cable
or a wireless connection. The simplest network is a combination of two computers connected by a cable. This type
of network is called a peer-to-peer network.
There is no hierarchy in this network; both participants have equal privileges. Each computer has access to the data of
the other device and can share resources such as disk space, applications or peripheral devices (printers, etc.).
Today’s networks tend to be a bit more complex and don’t just consist of two computers. Systems with more than ten
participants usually use client-server networks. In these networks, a central computer (server) provides resources to
the other participants in the network (clients).
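A client-server exchange can be sketched in a few lines with standard-library TCP sockets; here the "resource" the server provides is just a short message, and the loopback address stands in for a real network:

```python
# A hedged sketch of a tiny client-server exchange using TCP sockets:
# the server (central computer) hands a resource to the client.
import socket
import threading

def serve_once(server_sock):
    # accept a single client connection and send it a message
    conn, _ = server_sock.accept()
    conn.sendall(b"hello from the server")
    conn.close()

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))   # port 0: let the OS pick a free port
server.listen(1)
port = server.getsockname()[1]

threading.Thread(target=serve_once, args=(server,), daemon=True).start()

client = socket.create_connection(("127.0.0.1", port))
reply = client.recv(1024)
client.close()
server.close()

print(reply.decode())
```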
When you buy a new computer, the first thing you’ll probably try to do is connect to the Internet. To do this, you
establish a connection to your router, which receives the data from the Internet and then forwards it to the computer.
Of course that’s not all: Next, you could also connect your printer, smartphone or TV to the router so that these
devices are also connected to the Internet. Now you have connected different devices to each other via a central
access point and created your own network.
Definition: Network
A network is a group of two or more computers or other electronic devices that are interconnected for the purpose
of exchanging data and sharing resources.
Network example: your home Wi-Fi. The Wireless LAN (Wireless Local Area Network, i.e. the Wi-Fi network) in your
home is a good example of a small client-server network. The various devices in your home are wirelessly connected
to the router, which acts as a central node (server) for the household. The router itself is connected to a much larger
network: the Internet.
Since the devices are connected to the router as clients, they are part of the network and can use the same resource
as the server, namely the Internet. The devices can also communicate with each other without having to establish a
direct connection to each device. For example, you can send a print job to a Wi-Fi-enabled printer without first
connecting the printer to the computer using a cable.
Before the advent of modern networks, communication between different computers and devices was very complicated. Computers were connected using LAN cables, and mechanical switches were used so that peripheral devices could also be shared. Due to physical limitations (cable length), the devices and computers always had to be very close to each other. Even today, if you need an extremely stable connection, you should consider a wired connection to the router or device, despite the advantages of Wi-Fi.
The main task of a network is to provide participants with a single platform for exchanging data and sharing resources.
This task is so important that many aspects of everyday life and the modern world would be unimaginable without
networks. Here’s a real-life example: In a typical office, every workstation has its own computer. Without a network of
computers, it would be very difficult for a team to work on a project since there would be no common place to share
or store digital documents and information, and team members would not be able to share certain applications.
In addition, many offices only have one printer or a few printers that are shared by everyone. Without a network, the
IT department would have to connect every single computer to the printer, which is difficult to implement from a
technical standpoint. A network elegantly solves this problem because all computers are connected to the printer via
one central node.
8. Introduction to Databases
These are only a few of the several dozen types of databases in use today. Other, less common databases are tailored to
very specific scientific, financial, or other functions. In addition to the different database types, changes in technology
development approaches and dramatic advances such as the cloud and automation are propelling databases in entirely
new directions. Some of the latest databases include:
Open source databases
• An open source database system is one whose source code is open source; such databases could be SQL or NoSQL
databases.
Cloud databases
• A cloud database is a collection of data, either structured or unstructured, that resides on a private, public, or
hybrid cloud computing platform. There are two types of cloud database models: traditional and database as a
service (DBaaS). With DBaaS, administrative tasks and maintenance are performed by a service provider.
Multimodel database
• Multimodel databases combine different types of database models into a single, integrated back end. This means
they can accommodate various data types.
Document/JSON database
• Designed for storing, retrieving, and managing document-oriented information, document databases are a
modern way to store data in JSON format rather than rows and columns.
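As a small sketch of the idea, the standard json module stores and retrieves a document-shaped record without splitting it into rows and columns; the document's fields here are invented:

```python
# A "document" as a document database would store it, serialized to JSON
# with the standard library rather than split into rows and columns.
import json

document = {
    "title": "Intro to Databases",
    "tags": ["db", "json"],
    "author": {"name": "A. Student", "year": 1},
}

text = json.dumps(document)   # store: Python object -> JSON string
restored = json.loads(text)   # retrieve: JSON string -> Python object

print(restored["author"]["name"])
```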
Self-driving databases
• The newest and most groundbreaking type of database, self-driving databases (also known as autonomous
databases) are cloud-based and use machine learning to automate database tuning, security, backups, updates,
and other routine management tasks traditionally performed by database administrators.
Database software makes data management simpler by enabling users to store data in a structured form and then access
it. It typically has a graphical interface to help create and manage the data and, in some cases, users can construct their
own databases by using database software.
What is a database management system (DBMS)?
A database typically requires a comprehensive database software program known as a database management system
(DBMS). A DBMS serves as an interface between the database and its end users or programs, allowing users to retrieve,
update, and manage how the information is organized and optimized. A DBMS also facilitates oversight and control of
databases, enabling a variety of administrative operations such as performance monitoring, tuning, and backup and
recovery.
Some examples of popular database software or DBMSs include MySQL, Microsoft Access, Microsoft SQL Server,
FileMaker Pro, Oracle Database, and dBASE.
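One lightweight DBMS, SQLite, ships with Python, which makes it easy to sketch how a DBMS mediates between a program and its data; the table and rows below are invented for illustration:

```python
# A minimal sketch of a DBMS as the interface between a program and a
# database, using SQLite from the standard library (in-memory here).
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE students (name TEXT, module TEXT)")
conn.execute("INSERT INTO students VALUES (?, ?)", ("Alice", "Networking"))
conn.execute("INSERT INTO students VALUES (?, ?)", ("Bob", "Databases"))

# the DBMS retrieves and organizes the data in response to a query
rows = conn.execute(
    "SELECT name FROM students WHERE module = ?", ("Databases",)
).fetchall()
conn.close()

print(rows)  # [('Bob',)]
```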
Database challenges
Today’s large enterprise databases often support very complex queries and are expected to deliver nearly instant
responses to those queries. As a result, database administrators are constantly called upon to employ a wide variety of
methods to help improve performance. Some common challenges that they face include:
• Absorbing significant increases in data volume. The explosion of data coming in from sensors, connected
machines, and dozens of other sources keeps database administrators scrambling to manage and organize their
companies’ data efficiently.
• Ensuring data security. Data breaches are happening everywhere these days, and hackers are getting more
inventive. It’s more important than ever to ensure that data is secure but also easily accessible to users.
• Keeping up with demand. In today’s fast-moving business environment, companies need real-time access to their
data to support timely decision-making and to take advantage of new opportunities.
• Managing and maintaining the database and infrastructure. Database administrators must continually watch
the database for problems and perform preventative maintenance, as well as apply software upgrades and
patches. As databases become more complex and data volumes grow, companies are faced with the expense of
hiring additional talent to monitor and tune their databases.
• Removing limits on scalability. A business needs to grow if it’s going to survive, and its data management must
grow along with it. But it’s very difficult for database administrators to predict how much capacity the company
will need, particularly with on-premises databases.
• Ensuring data residency, data sovereignty, or latency requirements. Some organizations have use cases that are
better suited to run on-premises. In those cases, engineered systems that are pre-configured and pre-optimized
for running the database are ideal.
Addressing all of these challenges can be time-consuming and can prevent database administrators from performing
more strategic functions.
Self-driving databases use cloud-based technology and machine learning to automate many of the routine tasks required
to manage databases, such as tuning, security, backups, updates, and other routine management tasks. With these tedious
tasks automated, database administrators are freed up to do more strategic work. The self-driving, self-securing, and
self-repairing capabilities of self-driving databases are poised to revolutionize how companies manage and secure their
data, enabling performance advantages, lower costs, and improved security.
9. Introduction to C Programming
C is a general-purpose computer programming language. It was created in the 1970s by Dennis Ritchie, and remains
very widely used and influential. By design, C's features cleanly reflect the capabilities of the targeted CPUs.
C is a powerful general-purpose programming language. It can be used to develop software like operating systems,
databases, compilers, and so on.
C is a procedural programming language with a static type system; it supports structured programming, recursion, and lexical variable scoping. C was created with constructs that map well to common hardware instructions, and it has a long history of use in programs that were previously written in assembly language.
C is a machine-independent programming language that is used to create many types of applications and operating
systems, such as Windows, as well as complex programs such as the Oracle database, Git, the Python interpreter, and
games. It is widely considered a foundation for learning any other programming language. Operating systems and
diverse application software for computer architectures ranging from supercomputers to PLCs and embedded systems are
examples of such applications.
Specialised Modules
• Fundamentals of Web Design and Development
Web design and development is an umbrella term that describes the process of creating a website. As the name
suggests, it involves two major skill sets: web design and web development. Web design determines the look and feel
of a website, while web development determines how it functions.
A broader range of skills will be learnt, in which students will gain a better understanding of frameworks and planning
techniques for the strategic management of organization’s computing resources, along with technical skills to evaluate,
design, configure and maintain shared computing infrastructure. They will gain solid understanding of the importance
of enterprise systems and network administration in virtual computing environments. They will have programming
skills needed in systems administration, network technologies, network design, and network security. We will further
nurture their creativity and innovation as well as independent learning to prepare them for the workplace.
Common Modules
Probability model
A probability model is a mathematical representation of a random phenomenon. It is defined by its sample space,
events within the sample space, and probabilities associated with each event.
• The sample space S for a probability model is the set of all possible outcomes.
• An event A is a subset of the sample space S.
• A probability is a numerical value assigned to a given event A. The probability of an event is written P(A), and
describes the long-run relative frequency of the event.
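These definitions can be made concrete with a minimal Python sketch (a hypothetical fair-die model, not taken from any module material):

```python
from fractions import Fraction

# Probability model for one roll of a fair six-sided die.
# Sample space S: the set of all possible outcomes.
S = {1, 2, 3, 4, 5, 6}

def P(event):
    """Probability of an event (a subset of S) under equal likelihood."""
    assert event <= S, "an event must be a subset of the sample space"
    return Fraction(len(event), len(S))

# Event A: the roll is even.
A = {2, 4, 6}
print(P(A))        # 1/2
print(P(S))        # 1 -- some outcome always occurs
print(P(set()))    # 0 -- the impossible event
```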
Statistical model
A statistical model is a special class of mathematical model. What distinguishes a statistical model from other
mathematical models is that a statistical model is non-deterministic. Thus, in a statistical model specified via
mathematical equations, some of the variables do not have specific values, but instead have probability distributions;
i.e. some of the variables are stochastic. In the example above, ε is a stochastic variable; without that variable, the
model would be deterministic.
Statistical models are often used even when the physical process being modeled is deterministic. For instance, coin
tossing is, in principle, a deterministic process; yet it is commonly modeled as stochastic (via a Bernoulli process).
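As a sketch of that last point, a coin toss can be simulated as a Bernoulli process in a few lines of Python (the simulation parameters here are illustrative assumptions):

```python
import random

def bernoulli_process(p, n, seed=0):
    """Simulate n coin tosses as independent Bernoulli(p) trials."""
    rng = random.Random(seed)
    return [1 if rng.random() < p else 0 for _ in range(n)]

# A fair coin: p = 0.5. The long-run relative frequency of heads
# approaches p, even though each physical toss is, in principle,
# a deterministic process.
tosses = bernoulli_process(0.5, 10_000)
freq = sum(tosses) / len(tosses)
print(round(freq, 2))   # close to 0.5
```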
There are three purposes for a statistical model, according to Konishi & Kitagawa.
• Predictions
• Extraction of information
• Description of stochastic structures
Data Science
Applied probability is an important branch of probability that includes computational probability. Statistics uses
probability theory to construct models for dealing with data.
The main difference is that a probability model is only one (known) distribution, while a statistical model is a set of
probability models; the data is used to select a model from this set or a smaller subset of models that better (in a
certain sense) describe the phenomenon (in the light of the data).
A statistical model describes one or more variables and their relationships. In contrast, a probability model describes
the outcomes of a random event, sometimes called a random variable; the setting of a random variable refers to how
that underlying event is configured.
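A small Python sketch of this distinction, using a hypothetical family of Bernoulli models where the observed data selects the best-fitting member:

```python
import math

# A statistical model here is the *family* {Bernoulli(p) : 0 < p < 1};
# the observed data picks out one member of that family.
data = [1, 0, 1, 1, 0, 1, 1, 1, 0, 1]   # hypothetical coin-toss data

def log_likelihood(p, data):
    """Log-likelihood of Bernoulli(p) for the observed data."""
    return sum(math.log(p) if x else math.log(1 - p) for x in data)

# For Bernoulli data, the maximum-likelihood estimate of p is the
# sample mean -- the single probability model that best fits the data.
p_hat = sum(data) / len(data)

# A grid search over the family confirms the same choice:
candidates = [i / 100 for i in range(1, 100)]
best = max(candidates, key=lambda p: log_likelihood(p, data))
print(p_hat, best)   # 0.7 0.7
```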
2. Programming for Data Analysis
Programming is the technique that allows data scientists to interact with and send instructions to computers. There
are hundreds of programming languages out there, built for diverse purposes. Some of them are better suited for data
science, providing high productivity and performance to process large amounts of data.
A data type is a classification of data which tells the compiler or interpreter how the programmer intends to use the
data. Most programming languages support various types of data, including integer, real, character or string, and
Boolean.
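For example, the four data types listed above look like this in Python (an illustrative choice of language):

```python
# The four common data types, as they appear in Python:
count = 42             # integer
ratio = 3.14           # real (floating point)
name = "data science"  # string
valid = True           # Boolean

# The interpreter tracks each value's type, which determines the
# operations allowed on it:
print(type(count).__name__, type(ratio).__name__,
      type(name).__name__, type(valid).__name__)
# int float str bool

# Mixing incompatible types is checked at run time:
# "data science" + 42 would raise a TypeError.
```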
What are the 2 most popular programming languages for data analysis?
Data analysts use SQL (Structured Query Language) to communicate with databases, but when it comes to cleaning,
manipulating, analysing, and visualizing data, you're looking at either Python or R.
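A minimal sketch of that SQL-plus-Python workflow, using Python's built-in sqlite3 module and a hypothetical sales table:

```python
import sqlite3

# In-memory database with a small hypothetical sales table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
conn.executemany("INSERT INTO sales VALUES (?, ?)",
                 [("north", 120.0), ("south", 80.0), ("north", 200.0)])

# SQL communicates with the database and aggregates the data ...
rows = conn.execute(
    "SELECT region, SUM(amount) FROM sales GROUP BY region ORDER BY region"
).fetchall()

# ... and Python takes over for cleaning, manipulation, and analysis.
totals = dict(rows)
print(totals)  # {'north': 320.0, 'south': 80.0}
conn.close()
```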
The best data science programming languages are Python, R, Java, SQL, Scala, and Julia. Each of these languages has
unique features that are best used in different aspects of data science. Having some programming skills in multiple
languages can help you complete a wide range of data science tasks.
3. Innovation Process
Data may be grouped into four main types based on methods for collection: observational, experimental, simulation,
and derived. The type of research data you collect may affect the way you manage that data.
Specialised Modules
1. Introduction to Virtualization
What is Virtualization?
Virtualization is technology that you can use to create virtual representations of servers, storage, networks, and
other physical machines. Virtual software mimics the functions of physical hardware to run multiple virtual machines
simultaneously on a single physical machine. Businesses use virtualization to use their hardware resources efficiently
and get greater returns from their investment. It also powers cloud computing services that help organizations manage
infrastructure more efficiently.
To properly understand Kernel-based Virtual Machine (KVM), you first need to understand some basic concepts
in virtualization. Virtualization is a process that allows a computer to share its hardware resources with multiple
digitally separated environments. Each virtualized environment runs within its allocated resources, such as memory,
processing power, and storage. With virtualization, organizations can switch between different operating systems on
the same server without rebooting.
A virtual machine is a software-defined computer that runs on a physical computer with a separate operating system
and computing resources. The physical computer is called the host machine and virtual machines are guest machines.
Multiple virtual machines can run on a single physical machine. Virtual machines are abstracted from the computer
hardware by a hypervisor.
Hypervisor
The hypervisor is a software component that manages multiple virtual machines in a computer. It ensures that each
virtual machine gets the allocated resources and does not interfere with the operation of other virtual machines. There
are two types of hypervisors.
Type 1 hypervisor
A type 1 hypervisor, or bare-metal hypervisor, is a hypervisor program installed directly on the computer’s hardware
rather than on top of an operating system. Type 1 hypervisors therefore offer better performance and are commonly used
by enterprise applications. KVM turns the Linux kernel into a type 1 hypervisor that can host multiple virtual
machines.
Type 2 hypervisor
Also known as a hosted hypervisor, the type 2 hypervisor is installed on an operating system. Type 2 hypervisors are
suitable for end-user computing.
By using virtualization, you can interact with any hardware resource with greater flexibility. Physical servers consume
electricity, take up storage space, and need maintenance. You are often limited by physical proximity and network
design if you want to access them. Virtualization removes all these limitations by abstracting physical hardware
functionality into software. You can manage, maintain, and use your hardware infrastructure like an application on the
web.
Virtualization example
Consider a company that needs servers for three functions: storing business email securely, running a customer-facing
application, and running internal business applications. To meet these requirements, the company sets up a dedicated
physical server for each function. The company must make a high initial investment and perform ongoing maintenance and
upgrades for one machine at a time. The company also cannot optimize its computing capacity: it pays 100% of the
servers’ maintenance costs but uses only a fraction of their storage and processing capacities.
With virtualization, the company creates three digital servers, or virtual machines, on a single physical server. It
specifies the operating system requirements for the virtual machines and can use them like the physical servers.
However, the company now has less hardware and fewer related expenses.
Infrastructure as a service
The company can go one step further and use a cloud instance or virtual machine from a cloud computing provider
such as AWS. AWS manages all the underlying hardware, and the company can request server resources with varying
configurations. All the applications run on these virtual servers without the users noticing any difference. Server
management also becomes easier for the company’s IT team.
What are the benefits of virtualization?
Automated IT management
Now that physical computers are virtual, you can manage them by using software tools. Administrators create
deployment and configuration programs to define virtual machine templates. You can duplicate your infrastructure
repeatedly and consistently and avoid error-prone manual configurations.
Virtualization uses specialized software, called a hypervisor, to create several cloud instances or virtual machines on
one physical computer.
After you install virtualization software on your computer, you can create one or more virtual machines. You can access
the virtual machines in the same way that you access other applications on your computer. Your computer is called the
host, and the virtual machine is called the guest. Several guests can run on the host. Each guest has its own operating
system, which can be the same or different from the host operating system.
From the user’s perspective, the virtual machine operates like a typical server. It has settings, configurations, and
installed applications. Computing resources, such as central processing units (CPUs), Random Access Memory (RAM),
and storage appear the same as on a physical server. You can also configure and update the guest operating systems
and their applications as necessary without affecting the host operating system.
Hypervisors
The hypervisor is the virtualization software that you install on your physical machine. It is a software layer that acts as
an intermediary between the virtual machines and the underlying hardware or host operating system. The hypervisor
coordinates access to the physical environment so that several virtual machines have access to their own share of
physical resources.
For example, if the virtual machine requires computing resources, such as computer processing power, the request
first goes to the hypervisor. The hypervisor then passes the request to the underlying hardware, which performs the
task.
Type 1 hypervisors
A type 1 hypervisor—also called a bare-metal hypervisor—runs directly on the computer hardware. It has some
operating system capabilities and is highly efficient because it interacts directly with the physical resources.
Type 2 hypervisors
A type 2 hypervisor runs as an application on computer hardware with an existing operating system. Use this type of
hypervisor when running multiple operating systems on a single machine.
What are the different types of virtualization?
You can use virtualization technology to get the functions of many different types of physical infrastructure and all the
benefits of a virtualized environment. You can go beyond virtual machines to create a collection of virtual resources in
your virtual environment.
Server virtualization
Server virtualization is a process that partitions a physical server into multiple virtual servers. It is an efficient and cost-
effective way to use server resources and deploy IT services in an organization. Without server virtualization, physical
servers use only a small amount of their processing capacity, which leaves devices idle.
Storage virtualization
Storage virtualization combines the functions of physical storage devices such as network attached storage (NAS) and
storage area network (SAN). You can pool the storage hardware in your data center, even if it is from different vendors
or of different types. Storage virtualization uses all your physical data storage and creates a large unit of virtual storage
that you can assign and control by using management software. IT administrators can streamline storage activities,
such as archiving, backup, and recovery, because they can combine multiple network storage devices virtually into a
single storage device.
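The pooling idea can be sketched in a few lines of Python (a toy model with hypothetical device names, not a real storage product):

```python
class StoragePool:
    """Toy sketch of storage virtualization: several physical devices
    presented as one unit of virtual storage."""

    def __init__(self):
        self.devices = {}   # device name -> free capacity in GB

    def add_device(self, name, capacity_gb):
        self.devices[name] = capacity_gb

    @property
    def free_gb(self):
        # Management software sees one combined capacity.
        return sum(self.devices.values())

    def allocate(self, gb):
        """Carve a virtual volume out of whichever devices have space."""
        if gb > self.free_gb:
            raise ValueError("pool exhausted")
        for name in self.devices:
            take = min(gb, self.devices[name])
            self.devices[name] -= take
            gb -= take
            if gb == 0:
                break

pool = StoragePool()
pool.add_device("nas-1", 500)    # hypothetical NAS box
pool.add_device("san-1", 1000)   # hypothetical SAN array
print(pool.free_gb)              # 1500
pool.allocate(600)               # spans both devices transparently
print(pool.free_gb)              # 900
```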
Network virtualization
Any computer network has hardware elements such as switches, routers, and firewalls. An organization with offices in
multiple geographic locations can have several different network technologies working together to create its enterprise
network. Network virtualization is a process that combines all of these network resources to centralize administrative
tasks. Administrators can adjust and control these elements virtually without touching the physical components, which
greatly simplifies network management.
Software-defined networking
Software-defined networking (SDN) separates the control plane, which decides where traffic is sent, from the data
plane, which forwards traffic in the physical environment. For example, you can program your system to prioritize your video call traffic over application
traffic to ensure consistent call quality in all online meetings.
Data virtualization
Modern organizations collect data from several sources and store it in different formats. They might also store data in
different places, such as in a cloud infrastructure and an on-premises data center. Data virtualization creates a software
layer between this data and the applications that need it. Data virtualization tools process an application’s data request
and return results in a suitable format. Thus, organizations use data virtualization solutions to increase flexibility for
data integration and support cross-functional data analysis.
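A toy Python sketch of such a layer, answering one query over two hypothetical sources stored in different formats:

```python
import csv
import io

# Two hypothetical sources: cloud records and an on-premises CSV export.
cloud_records = [{"id": 1, "customer": "Acme", "spend": 100}]
on_prem_csv = "id,customer,spend\n2,Globex,250\n3,Initech,40\n"

def query_customers(min_spend):
    """Virtualization layer: one query, results in one uniform format,
    regardless of where or how the data is stored."""
    rows = list(cloud_records)
    for r in csv.DictReader(io.StringIO(on_prem_csv)):
        rows.append({"id": int(r["id"]), "customer": r["customer"],
                     "spend": int(r["spend"])})
    return [r for r in rows if r["spend"] >= min_spend]

print([r["customer"] for r in query_customers(100)])  # ['Acme', 'Globex']
```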
Application virtualization
Application virtualization runs an application’s functions on operating systems other than the operating
systems for which it was designed. For example, users can run a Microsoft Windows application on a Linux machine
without changing the machine configuration. To achieve application virtualization, follow these practices:
• Application streaming – Users stream the application from a remote server, so it runs only on the end user's
device when needed.
• Server-based application virtualization – Users can access the remote application from their browser or client
interface without installing it.
• Local application virtualization – The application code is shipped with its own environment to run on all
operating systems without changes.
Desktop virtualization
Most organizations have nontechnical staff that use desktop operating systems to run common business applications.
For instance, you might have the following staff:
• A customer service team that requires a desktop computer with Windows 10 and customer-relationship
management software
You can use desktop virtualization to run these different desktop operating systems on virtual machines, which your
teams can access remotely. This type of virtualization makes desktop management efficient and secure and saves money
on desktop hardware.
Cloud computing is the on-demand delivery of computing resources over the internet with pay-as-you-go pricing.
Instead of buying, owning, and maintaining a physical data center, you can access technology services, such as
computing power, storage, and databases, as you need them from a cloud provider.
Virtualization technology makes cloud computing possible. Cloud providers set up and maintain their own data centers.
They create different virtual environments that use the underlying hardware resources. You can then program your
system to access these cloud resources by using APIs. Your infrastructure needs can be met as a fully managed service.
Containerization is a way to deploy application code to run on any physical or virtual environment without changes.
Developers bundle application code with related libraries, configuration files, and other dependencies that the code
needs to run. This single package of the software, called a container, can run independently on any platform.
Containerization is a type of application virtualization.
You can think of server virtualization as building a road to connect two places. You have to recreate an entire virtual
environment and then run your application on it. By comparison, containerization is like building a helicopter that can
fly to either of those places. Your application is inside a container and can run on all types of physical or virtual
environments.
By using AWS, you have multiple ways to build, deploy, and get to market quickly on the latest technology. For example,
you might benefit from any of these services:
• Use Amazon Elastic Compute Cloud (Amazon EC2) to exercise granular control over your infrastructure. Choose
the processors, storage, and networking that you want.
• Use AWS Lambda for serverless computing so that you can run code without provisioning or managing servers.
• Use Amazon Lightsail to implement virtual servers, storage, databases, and networking for a low, predictable
price.
2. Switching and Routing Essentials
Switching and routing essentials refer to the fundamental concepts and skills required to understand and configure
switches and routers in a network infrastructure. These courses provide hands-on training in the architecture,
components, and operations of switches and routers. They focus on key topics such as VLANs, spanning tree protocol,
routing protocols, subnetting, and network troubleshooting.
One course that covers switching and routing essentials is "CCNA: Switching, Routing, and Wireless Essentials" offered
by Cisco Networking Academy. This course is designed to provide a comprehensive understanding of switching
technologies and router operations that support small-to-medium business networks, including wireless local area
networks (WLAN).
Another course that focuses on routing and switching essentials is "CCNA 2: Routing and Switching Essentials". This
course dives into the architecture, components, and operations of routers and switches in a small network. It provides
hands-on experience in configuring a router and a switch, as well as understanding routing protocols and network
design principles.
Both courses mentioned are available online and offer practical training to enhance your skills in switching and routing.
Participating in these courses can help you acquire the necessary knowledge and expertise to design, deploy, and
troubleshoot networks efficiently.
These courses are offered by Cisco Networking Academy, a reputable provider of industry-standard networking
certifications and training. Completing them can deepen your understanding of switching and routing essentials and
strengthen your credentials in the field of network engineering.
Routing and switching are different functions of network communications. The function of switching is to forward data
packets between devices on the same network (the same LAN, or local area network). The function of routing is to
forward packets between different networks (between different LANs).
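This distinction can be checked programmatically; here is a short sketch using Python's standard ipaddress module (the addresses are hypothetical):

```python
import ipaddress

# A hypothetical LAN and some hypothetical hosts.
lan = ipaddress.ip_network("192.168.1.0/24")

def same_lan(host_a, host_b, network):
    """True if both hosts sit on the same network (a switch's job);
    False if the packet must cross networks (a router's job)."""
    return (ipaddress.ip_address(host_a) in network
            and ipaddress.ip_address(host_b) in network)

print(same_lan("192.168.1.10", "192.168.1.20", lan))  # True  -> switched
print(same_lan("192.168.1.10", "10.0.0.5", lan))      # False -> routed
```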
While a wireless system provides a fixed or portable endpoint with access to a distributed network, a mobile system
offers all of the resources of that distributed network to something that can go anywhere, barring any issues with local
reception or technical area coverage.
Wireless technology is tech that allows people to communicate or data to be transferred from one point to another
without using cables or wires. A lot of the communication is done with radio frequency and infrared waves.
4. Web Applications
A web application or a web app is an interactive computer program built with technologies like JavaScript,
Cascading Style Sheets (CSS), and HTML5. The finished app is accessed using a web browser like Google
Chrome or Mozilla Firefox and often has a login or sign-up mechanism.
5. Systems & Network Administration
System and network administrators manage an organization's technical infrastructure. Job activities include everything
from designing and implementing network schemas to managing digital licenses and hardware assets.
System administration functions include user management, system monitoring, backup and recovery, and access
control. System monitoring, backup, and recovery functions are typically integrated into an organization-wide
application. User management functions include user creation and assigning roles to users.
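A minimal sketch of the user creation and role assignment functions described above (all names and roles are hypothetical):

```python
# Toy user store: username -> account record.
users = {}

def create_user(username, roles=()):
    """User creation, one of the core user management functions."""
    if username in users:
        raise ValueError(f"{username} already exists")
    users[username] = {"roles": set(roles), "active": True}

def assign_role(username, role):
    """Assign an additional role to an existing user."""
    users[username]["roles"].add(role)

def has_role(username, role):
    return role in users[username]["roles"]

create_user("alice", roles=["developer"])
assign_role("alice", "backup-operator")
print(has_role("alice", "backup-operator"))  # True
print(has_role("alice", "admin"))            # False
```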
Network administration primarily consists of, but isn't limited to, network monitoring, network management, and
maintaining network quality and security. Network monitoring is essential to monitor unusual traffic patterns, the
health of the network infrastructure, and devices connected to the network.
A network administrator manages and maintains corporate networks and servers. They install new hardware and
applications, update existing systems, and continually monitor network performance. Their role ensures smooth and
secure network operation within a company.
Here are the four types of system administrators based on their roles and responsibilities:
Network Administrators
Network administrators manage the entire network infrastructure of an organization. They design and install computer
systems, routers, switches, local area networks (LAN), wide area networks (WAN), and intranet systems. They also
monitor the systems, provide maintenance and troubleshoot any problems when they arise.
Database Administrators
Database administrators (DBA) set up and maintain databases used in an organization. They may also be required to
integrate data from an old database into a new one or even create a database from scratch. In large organizations,
there are specialized DBAs who are only responsible for managing databases. In smaller organizations, the roles of
DBAs and server administrators can overlap.
Server/Web Administrators
Server or web administrators specialize in maintaining servers, web services and operating systems of the servers. They
monitor the speed of the internet to make sure that everything runs smoothly. They also analyze a website’s traffic
patterns and implement changes based on user feedback.
Security Systems Administrators
Security systems administrators monitor and maintain the security systems of an organization. They develop
organizational security procedures and also run regular data checkups - setting up, deleting and maintaining user
accounts.
In large organizations, all these roles may be separate positions within one department. In smaller organizations,
they may be shared by a few system administrators, or even a single person.
Data center network infrastructure is a constellation of networking resources that provides connectivity between data
center components, users, and internal and external resources, to support the storage and processing of applications
and data.
Data centers are made up of three primary types of components: compute, storage, and network. However, these
components are only the tip of the iceberg in a modern DC. Beneath the surface, support infrastructure is essential to
meeting the service level agreements of an enterprise data center.
At its simplest, a data center is a physical facility that organizations use to house their critical applications and data. A
data center's design is based on a network of computing and storage resources that enable the delivery of shared
applications and data. The key components of a data center design include routers, switches, firewalls, storage systems,
servers, and application-delivery controllers.
7. Human Computer Interaction (HCI)
Human-computer interaction (HCI) is the field of study that focuses on optimizing how users and computers interact
by designing interactive computer interfaces that satisfy users’ needs. It is a multidisciplinary subject covering
computer science, behavioral sciences, cognitive science, ergonomics, psychology, and design principles.
The emergence of HCI dates back to the 1980s, when personal computing was on the rise. It was when desktop
computers started appearing in households and corporate offices. HCI’s journey began with video games, word
processors, and numerical units.
However, with the advent of the internet and the explosion of mobile and diversified technologies such as voice-based
and Internet of Things (IoT), computing became omnipresent and omnipotent. Technological competence further led
to the evolution of user interactions. Consequently, the need for developing a tool that would make such man-machine
interactions more human-like grew significantly. This established HCI as a technology, bringing different fields such as
cognitive engineering, linguistics, neuroscience, and others under its realm.
Today, HCI focuses on designing, implementing, and evaluating interactive interfaces that enhance user experience
using computing devices. This includes user interface design, user-centered design, and user experience design.
8. Network Security
Network security is any activity designed to protect the usability and integrity of your network and data. It includes
both hardware and software technologies. It targets a variety of threats. It stops them from entering or spreading on
your network. Effective network security manages access to the network.
Network Security protects your network and data from breaches, intrusions and other threats. This is a vast and
overarching term that describes hardware and software solutions as well as processes or rules and configurations
relating to network use, accessibility, and overall threat protection.
Network Security involves access control, virus and antivirus software, application security, network analytics, types of
network-related security (endpoint, web, wireless), firewalls, VPN encryption and more.
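One of those building blocks, access control, can be sketched as a first-match-wins rule list, similar in spirit to a firewall policy (all rules and addresses here are hypothetical):

```python
import ipaddress

# First-match-wins access-control list, evaluated top to bottom.
RULES = [
    {"action": "allow", "src": "10.0.0.0/8", "port": 443},  # internal HTTPS
    {"action": "allow", "src": "10.0.0.0/8", "port": 22},   # internal SSH
    {"action": "deny",  "src": "0.0.0.0/0", "port": None},  # default deny
]

def check(src_ip, port):
    """Return the action of the first rule matching this packet."""
    for rule in RULES:
        in_net = ipaddress.ip_address(src_ip) in ipaddress.ip_network(rule["src"])
        if in_net and rule["port"] in (None, port):
            return rule["action"]
    return "deny"  # implicit default

print(check("10.1.2.3", 443))    # allow -- internal HTTPS
print(check("203.0.113.9", 22))  # deny  -- external SSH blocked
```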
Students will undertake an Internship/Industrial Training for a minimum period of 16 weeks to prepare them for a
smooth transition from the classroom to the working environment.
Industrial Training refers to the placement of students in an organization to conduct supervised practical training in the
industry sector within the stipulated time before they are awarded a bachelor's degree.
DEGREE LEVEL 3
Students will make use of their previous studies and industrial experience to extend their familiarity in the field of
cloud computing and to refine their personal and professional development. Students will learn how to design and
manage cloud-based systems in enterprises using programming skills, management, and planning strategies. Students
will have a deeper understanding of enterprise network components, settings, and methodologies, as well as a better
understanding of edge computing concepts and applications. A final year project requires them to investigate and
develop a solution for a real-world problem - they will demonstrate their ability to combine technical knowledge,
critical thinking, and analytical skills to produce a personal achievement portfolio.
Common Modules
1. Project Management
Project management is aimed at producing an end product that will effect some change for the benefit of the
organisation that instigated the project. It is the initiation, planning and control of a range of tasks required to deliver
this end product.
2. Venture Building
In corporate venture building, established companies build a separate venture from scratch. A new brand, team,
revenue stream, or P&L (profit-and-loss) is created to target untapped opportunity spaces – new customer segments,
technologies or capabilities – outside of the existing business.
Venture builders play a crucial role in the success of building new businesses, as they offer a high level of autonomy. In
the realm of corporate venturing, venture builders actively participate in the startup process by contributing to product
development, acquiring initial customers, and assembling the core team.
Specialised Modules
Cloud engineering is the application of engineering disciplines to cloud computing. It brings a systematic approach to
concerns of commercialization, standardization, and governance of cloud computing applications.
Edge computing is an emerging computing paradigm which refers to a range of networks and devices at or near the
user. Edge is about processing data closer to where it's being generated, enabling processing at greater speeds and
volumes, leading to greater action-led results in real time.
Edge computing is a distributed information technology (IT) architecture in which client data is processed at the
periphery of the network, as close to the originating source as possible.
Data is the lifeblood of modern business, providing valuable business insight and supporting real-time control over
critical business processes and operations. Today's businesses are awash in an ocean of data, and huge amounts of
data can be routinely collected from sensors and IoT devices operating in real time from remote locations and
inhospitable operating environments almost anywhere in the world.
But this virtual flood of data is also changing the way businesses handle computing. The traditional computing
paradigm built on a centralized data center and everyday internet isn't well suited to moving endlessly growing rivers
of real-world data. Bandwidth limitations, latency issues and unpredictable network disruptions can all conspire to
impair such efforts. Businesses are responding to these data challenges through the use of edge computing
architecture.
In simplest terms, edge computing moves some portion of storage and compute resources out of the central data center
and closer to the source of the data itself. Rather than transmitting raw data to a central data center for processing
and analysis, that work is instead performed where the data is actually generated, whether that's a retail store, a
factory floor, a sprawling utility or across a smart city. Only the result of that computing work at the edge, such as
real-time business insights, equipment maintenance predictions or other actionable answers, is sent back to the main
data center for review and other human interactions.
Systems management is the administration of the information technology (IT) systems in an enterprise network or data
center. An effective systems management plan facilitates the delivery of IT as a service and allows an organization's
employees to respond quickly to changing business requirements and system activity.
There are five main hardware components in a computer system: input, processing, storage, output and
communication devices. Input devices are used for entering data or instructions into the central processing unit.
Specific examples are system engineering management plan, hardware, software and data development plans, system
integration plans, system verification plans, system validation plans, system operations plans, sustainment plans,
maintenance plans, training and manuals plans, system security plan and system safety plan.
Cloud application development is the process through which a Cloud-based app is built. It involves different stages of
software development, each of which prepares your app to go live and hit the market. The best Cloud app development
teams use DevOps practices and tools like Kubernetes.
Cloud app development is the software development process of building a cloud-based app. This process consists of
five main stages: discovery, design, development, testing, and maintenance and support. If you or your team develop a
cloud mobile app, knowledge of web development is a must.
5. Emergent Technology
An enterprise network consists of physical and virtual networks and protocols that serve the dual purpose of
connecting all users and systems on a local area network (LAN) to applications in the data center and cloud as well as
facilitating access to network data and analytics.
One of the main components of enterprise automation is an AI-powered chatbot. While some businesses have
standard chatbots embedded on their website, we're talking about modern, advanced chatbot technology. Think of
chatbots that use conversational AI as an extension of your team.
This module covers wide area network (WAN) technologies and quality of service (QoS) mechanisms used for secure
remote access, along with an introduction to software-defined networking, virtualization, and automation concepts
that support the digitalization of networks.
Cloud Infrastructure is the collection of hardware and software elements such as computing power, networking,
storage, and virtualization resources needed to enable cloud computing. Cloud infrastructure types usually also include
a user interface (UI) for managing these virtual resources.
Cloud computing infrastructure is the collection of hardware and software elements needed to enable cloud computing.
It includes computing power, networking, and storage, as well as an interface for users to access their virtualized
resources.
Examples of cloud infrastructure automation include AWS CloudFormation, Azure Automation and Google Cloud
Deployment Manager, as well as third-party options, including Chef Automate, Puppet Enterprise, Red Hat Ansible
Automation Platform and VMware vRealize Automation.
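What tools like CloudFormation, Ansible and Puppet have in common is a declarative model: you describe the desired state of the infrastructure, and the tool computes the actions needed to reconcile the current state with it. The sketch below illustrates that core idea in plain Python; it is a simplified model, not any real tool's API, and the resource names are invented.

```python
# Desired state: what the operator declares the infrastructure should look like.
desired = {"web-server": "running", "database": "running", "cache": "running"}

# Current state: what actually exists right now.
current = {"web-server": "running", "database": "stopped"}

def plan(current, desired):
    """Diff current vs desired state into a list of reconciliation actions."""
    actions = []
    for name, state in desired.items():
        if current.get(name) != state:
            actions.append((name, state))     # create or change the resource
    for name in current:
        if name not in desired:
            actions.append((name, "delete"))  # remove undeclared resources
    return actions

print(plan(current, desired))
```

Here the plan step would report that the database must be brought to "running" and the cache must be created; the web server, already matching the declaration, is left alone. Real automation platforms add dependency ordering, rollback and drift detection on top of this same diff-and-apply loop.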
The most important features on which IoT works are connectivity, analyzing, integrating, active engagement, and
many more. Some of them are described below:
Connectivity: Connectivity refers to establishing a proper connection between all the things of IoT and the IoT
platform, which may be a server or the cloud. Once the IoT devices are connected, high-speed messaging between the
devices and the cloud is needed to enable reliable, secure and bi-directional communication.
Analyzing: After connecting all the relevant things, the collected data must be analyzed in real time and used to
build effective business intelligence. If we have good insight into the data gathered from all these things, we can
call the system a smart system.
Integrating: IoT integrates various models to improve the user experience as well.
Artificial Intelligence: IoT makes things smart and enhances life through the use of data. For example, if we have a
coffee machine whose beans are about to run out, the coffee machine can itself order coffee beans of your choice
from the retailer.
Sensing: The sensor devices used in IoT technologies detect and measure any change in the environment and report
on their status. IoT technology turns passive networks into active networks. Without sensors, there could be no
effective or true IoT environment.
Active Engagement: IoT brings connected technologies, products, and services into active engagement with one
another.
Endpoint Management: Endpoint management of the whole IoT system is important; otherwise, the entire system can
fail. For example, suppose a coffee machine orders coffee beans by itself when they run low, but we are away from
home for a few days when the order is delivered: the IoT system has failed. So there is a clear need for endpoint
management.
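The coffee-machine example that runs through the list above can be modelled in a few lines. This is a toy sketch, not real device firmware: the threshold, quantities, and the `user_at_home` flag are all invented for illustration. It ties together sensing (the bean level), active engagement (the machine reacting to its own state), and endpoint management (gating the reorder on user context).

```python
class CoffeeMachine:
    """Toy IoT device: senses its own state and reorders supplies automatically."""

    LOW_BEANS = 100  # grams; reorder threshold (illustrative value)

    def __init__(self, beans_g=500):
        self.beans_g = beans_g
        self.orders = []  # stands in for orders sent to the retailer

    def brew(self, grams=60, user_at_home=True):
        # Sensing: brewing updates the measured bean level.
        self.beans_g -= grams
        # Active engagement: the device reacts to its own sensed state.
        # Endpoint management: only reorder when someone is home to
        # receive the delivery (the failure case described above).
        if self.beans_g < self.LOW_BEANS and user_at_home:
            self.orders.append("order: 1kg beans")

machine = CoffeeMachine()
for _ in range(7):
    machine.brew()
print(machine.orders)
```

Without the `user_at_home` guard the machine would still place the order while the house is empty, which is exactly the endpoint-management failure the text warns about.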
Appreciation of Ethics and Civilisations (UHMS) is about the concept of ethics from the perspective of different
civilisations. It aims to identify the system, level of development, progress, and culture of a nation in strengthening
social cohesion.
Philosophy and Current Issues is a course which covers the relation between philosophy and the National Education
Philosophy and Rukun Negara. Philosophy and Contemporary Issues is an introduction to philosophy and how it is
related to contemporary issues in Malaysia and yourself.
Professional skills are career competencies and abilities used in the workplace that are beneficial for nearly any job.
Professional skills are a combination of both hard skills (job-specific duties that can be trained) and soft skills
(transferable traits like work ethic, communication, and leadership).
There are three types of skills: functional, self-management and special knowledge. Functional skills are abilities or
talents that are inherited at birth and developed through experience and learning. Examples are: making decisions,
repairing machines or calculating taxes.
In the Ministerial Guidelines, there are five (5) principles outlined which are known as the “TRUST Principles” (T – top
level commitment; R – risk assessment; U – undertake control measures; S – systematic review, monitoring and
enforcement; T – training and communication).