
DEGREE LEVEL 1

Students will learn the fundamental skills required of every IT professional and gain a basic understanding of the underlying
computer system through computer architecture, operating systems, networks, and databases. Some specialized
modules will provide students with basic knowledge of web design and development. The modules will also help them
develop personal and organizational skills, as well as nurture creativity and innovation.

Common Modules

1. Digital Thinking and Innovation

The course combines design thinking with digital innovation. Digital thinking is a fundamental paradigm shift
from traditional ways of working and learning towards being more agile and adaptive with emerging digital technologies.
This module orients students' systems thinking towards innovation at an early stage.

2. Intercultural Awareness and Cultural Diversity

Cultural awareness (or cultural sensitivity, cross-cultural / intercultural awareness) refers to the awareness of our own
cultural identity, values and beliefs, and the knowledge and acceptance of others' cultures. Cultural awareness helps us
break down cultural barriers, build cultural bridges, and learn how to love and appreciate those different from us. We
can relate better to people with cultural differences as we begin to understand ourselves better. This results in more
cultural connection and less cultural conflict.

3. System Analysis & Design

System analysis and design is a process that many companies use to evaluate particular business situations and develop
ways to improve them through more optimal methods. Companies may use this process to reshape their organization
or meet business objectives related to growth and profitability. Systems have been classified in different ways.

Common classifications are: (1) physical or abstract, (2) open or closed, and (3) "man-made" information systems.
Physical systems are tangible entities that may be static or dynamic in operation.

There are three approaches to system design: structured design, function-oriented design, and object-oriented
design. What are the three parts of system analysis?

Basically, every system has three major components: input, processing and output. In a system, the different
components are connected with each other and are interdependent. For example, the human body
represents a complete natural system.

Examples of systems analysis include making a change to some computer code to achieve a task, fixing a faulty air-
conditioning system, or analyzing the routines in your life to stop a mistake from happening.
4. Programming with Python

Python is a computer programming language often used to build websites and software, automate tasks, and conduct
data analysis. Python is a general-purpose language, meaning it can be used to create a variety of different programs
and isn't specialized for any specific problems.
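
As a small illustration of the task-automation side of Python, here is a minimal sketch; the function name, folder path and size threshold are arbitrary choices made for this example, not part of any required API:

    # Minimal sketch: automating a small everyday task in Python.
    from pathlib import Path

    def find_large_files(folder, min_bytes=1_000_000):
        """Yield (path, size) for every file under 'folder' larger than min_bytes."""
        for path in Path(folder).rglob("*"):
            if path.is_file() and path.stat().st_size >= min_bytes:
                yield path, path.stat().st_size

    # Scan the current directory tree and report the large files found.
    for path, size in find_large_files("."):
        print(f"{size:>12,} bytes  {path}")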

5. Mathematical Concepts for Computing

5 Types of Math Concepts Used in Computer Science

Binary Math - Binary mathematics is the heart of the computer and an essential math field for computer programming
and technology. The binary number system represents every value using only two digits, 0 and 1. It simplifies
the coding process and is essential for the low-level instructions used in hardware programming.
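
To make this concrete, the short Python sketch below (the values 13 and 6 are illustrative) shows binary representation and the bitwise operations that low-level code relies on:

    # Binary representation and bitwise arithmetic in Python.
    n = 13
    print(bin(n))          # '0b1101': the binary digits of 13
    print(int("1101", 2))  # 13: parse a binary string back into an integer

    # Bitwise operators act directly on the binary representation:
    print(13 & 6)   # AND -> 4   (0b1101 & 0b0110 == 0b0100)
    print(13 | 6)   # OR  -> 15  (0b1111)
    print(13 ^ 6)   # XOR -> 11  (0b1011)
    print(13 << 1)  # shift left (multiply by 2) -> 26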

College Algebra - College Algebra is the introductory course in algebra. The course is designed to familiarize learners
with fundamental mathematical concepts such as inequalities, polynomials, linear and quadratic equations, and
logarithmic and exponential functions.

Topics like factoring, linear equations, ratios, quadratic equations, and exponents are essential for computer science,
and you need a clear understanding of them if you want to succeed in the field. In programming, algebra underpins
how mathematical objects are represented and manipulated (a short worked example follows the topic list below).

• Linear equations and inequalities
• Graphs and forms of linear equations
• Functions
• Quadratics: multiplying and factoring
• Quadratic functions and equations
• Complex numbers
• Exponents and radicals
• Rational expressions and equations
• Relating algebra and geometry
• Polynomial arithmetic
• Advanced function types
• Transformations of functions
• Rational exponents and radicals
• Logarithms
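
As a worked illustration of two topics from this list, quadratic equations and complex numbers, the sketch below applies the quadratic formula in Python; the coefficients and the helper name solve_quadratic are invented for this example:

    # Solving a quadratic equation a*x^2 + b*x + c = 0 with the quadratic formula.
    import cmath  # complex square root, so negative discriminants also work

    def solve_quadratic(a, b, c):
        d = b * b - 4 * a * c          # the discriminant
        root = cmath.sqrt(d)
        return (-b + root) / (2 * a), (-b - root) / (2 * a)

    print(solve_quadratic(1, -3, 2))   # x^2 - 3x + 2 = 0 -> roots 2 and 1
    print(solve_quadratic(1, 0, 1))    # x^2 + 1 = 0 -> complex roots i and -i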

Statistics - Statistics is a branch of applied mathematics that involves the collection, description, analysis, and
inference of conclusions from quantitative data. The mathematical theories behind statistics rely heavily on differential
and integral calculus, linear algebra, and probability theory.

Statistics are used in virtually all scientific disciplines, such as the physical and social sciences, as well as in business,
the humanities, government, and manufacturing. Statistics is fundamentally a branch of applied mathematics that
developed from the application of mathematical tools, including calculus and linear algebra, to probability theory.

Applied mathematics is the application of mathematical methods by different fields such as physics, engineering,
medicine, biology, finance, business, computer science, and industry. Thus, applied mathematics is a combination of
mathematical science and specialized knowledge.
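
As a small hedged example (the scores are made-up sample data), Python's standard statistics module computes the descriptive measures mentioned above:

    # Descriptive statistics with Python's standard library.
    from statistics import mean, median, stdev

    scores = [72, 85, 90, 66, 78, 95, 88]  # illustrative sample data

    print("mean:  ", round(mean(scores), 2))
    print("median:", median(scores))
    print("stdev: ", round(stdev(scores), 2))  # sample standard deviation
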
Calculus - Calculus is the mathematical study of change, in the same way that geometry is the study of shape and
algebra is the study of operations and their application to solving equations. Calculus is broadly classified
into two branches: differential calculus and integral calculus.
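
Both branches can be approximated numerically in a few lines of Python; the sketch below, using f(x) = x² purely as an example, estimates a derivative (rate of change) and an integral (accumulated area):

    # Numerical calculus: approximate a derivative and an integral.
    def derivative(f, x, h=1e-6):
        """Central-difference approximation of f'(x)."""
        return (f(x + h) - f(x - h)) / (2 * h)

    def integral(f, a, b, n=10_000):
        """Midpoint Riemann-sum approximation of the integral of f on [a, b]."""
        dx = (b - a) / n
        return sum(f(a + (i + 0.5) * dx) for i in range(n)) * dx

    f = lambda x: x ** 2
    print(derivative(f, 3))   # exact answer: 6
    print(integral(f, 0, 1))  # exact answer: 1/3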

Discrete Math - Discrete mathematics is the study of mathematical structures that are countable or otherwise distinct
and separable. Examples of discrete structures are combinations, graphs, and logical statements, and such structures
can be finite or infinite. Because it deals with objects that can take only distinct, separate values, discrete mathematics
is also called decision mathematics or finite mathematics.
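
Combinations, one of the discrete structures named above, are easy to count and enumerate with Python's standard library:

    # Discrete structures: counting and listing combinations.
    from math import comb
    from itertools import combinations

    print(comb(5, 2))  # number of 2-element subsets of a 5-element set -> 10

    for subset in combinations("ABCDE", 2):  # enumerate them explicitly
        print(subset)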

6. Operating Systems & Computer Architecture

An operating system is a piece of software that manages files, manages memory, manages processes, handles input
and output, and controls peripheral devices like disk drives and printers, among other things.

As we explore the Functions of the Operating System, we delve into a realm of capabilities that encompass resource
allocation, memory management, process scheduling, device management, file handling, user interfaces, and robust
system security measures.

The operating system's tasks, in the most general sense, fall into six categories:
− Processor management
− Memory management
− Device management
− Storage management
− Application interface
− User interface

The elements of a computer system are hardware, software, people, procedures, data, and connectivity.

For the most part, the IT industry largely focuses on the top five OSs, including Apple macOS, Microsoft Windows,
Google's Android OS, Linux Operating System, and Apple iOS.

An Introduction to Operating System

An operating system forms the core of any computer device; the functioning and processing of a computer system can
come to a halt without one. The different features of operating systems and the history of their development are
discussed below. Understanding this concept is a key factor in comprehending computer knowledge, so one must
carefully go through the various aspects of this topic to understand it well.

What is an Operating System?

An Operating System is the interface between the computer hardware and the end-user. Processing of data, running
applications, file management and handling of memory are all managed by the OS. Windows, macOS, Android,
etc. are examples of operating systems that are generally used nowadays.

All modern computing devices, including laptops, tablets and mobile phones, comprise an operating system, which
helps the device work smoothly.


History of the Operating System

Operating systems took years to evolve into systems as modernised and advanced as they are today. Given
below are the details about the evolution and history of operating systems.

• Initially, computers did not have an operating system, and a different code was used to run each program.
This made the processing of data more complex and time-consuming
• In 1956, the first operating system was developed by General Motors to run a single IBM computer
• It was in the 1960s that IBM started installing OSs in the devices they launched
• The first version of the UNIX operating system was developed in 1969; it was later rewritten in the programming
language C
• Later on, Microsoft came up with their OS at the request of IBM
• Today, all major computer devices have an operating system, each performing the same functions but with slightly
different features


Types of Operating System

Given below are the different types of Operating System along with brief information about each of them:

1. Batch Operating System


• There is no direct communication between the user and the computer
• An intermediary, the operator, distributes the work into batches and sorts similar jobs together
• Multiple users can use it
• It can easily manage a large amount of work
2. Real-Time Operating System
• It is a data processing system in which the processing time between the user's command and the output is very small
• Used in fields where the response needs to be quick and rapid
3. Time-Sharing Operating System
• Multiple people at various terminals can use a program at the same time
• The main motive is to minimize the response time
4. Distributed Operating System
• Two or more systems are connected to each other, and a user can open files which are not present on their
own system but on other devices connected to the network
• Its usage has increased over the years
• They use multiple central processors to serve real-time applications
• Failure of one system does not affect the other systems connected in the network
5. Embedded Operating System
• These special Operating systems are built into larger systems
• They generally are limited to single specific functions like an ATM
6. Network Operating System
• They have one main server which is connected to the client machines
• All the management of files, processing of data, access to shared files, etc. are performed over this
network
• It is also a secure operating system for working with multiple users
7. Mobile Operating System
• With the advancement of technology, smartphones are now released with an operating system
• They are designed so that a small device can work efficiently

Functions of Operating System

Given below are the various functions of an Operating System:


• It helps with memory management: it keeps track of the files saved in the computer's main (primary) memory
• Whenever a computer is turned on, the operating system automatically starts to work; the booting and
rebooting process of a computer is thus an important function of the OS
• It provides a user interface
• Management of basic peripheral devices is done by the operating system
• Using the password protection option of an operating system, the data in the device can be kept secure
• It coordinates between the software and the user
• Easy navigation and organisation of files and programs are managed by the OS
• Any program which needs to be run on the system is executed by the operating system
• Any error or bug that occurs while a program runs is detected using the operating system

List of Common Operating Systems

Given below is a list of commonly used Operating systems along with their year of release.
Name of the OS    Year of Release
Android           2008
iOS               2007
Windows           1985
Mac OS            2001
MS-DOS            1981
Chrome OS         2011
Windows Phone     2010
BlackBerry OS     1999
Firefox OS        2013
UNIX              1969
Linux             1991

Types of Operating Systems (With OS Functions and Examples)

Understanding operating systems, or OSs, is essential to working in IT. OS types vary depending on the device and its
function. This section reviews what operating systems are, why they're important, and the different types of operating
systems in use today.

Every computer, smartphone or similar electronic device comes with special software called an operating system. An
operating system, also known as an OS, is the engine behind the utility value of computers and smartphones. There
are different types of operating systems depending on the device, manufacturer and user preference, and if you work,
or want to work, in the information technology field, it's important to understand them.

Key takeaways:
• An operating system is software that supports and manages all the programs and applications used by a
computer or mobile device.
• An operating system uses a graphical user interface (GUI), a combination of graphics and text that allows you to
interact with the computer or device.
• Every computer or smart device needs at least one operating system to run applications and perform tasks.

What are operating systems?

An operating system (OS) is a type of software interface between the user and the device hardware. This software
allows users to communicate with the device and perform the desired functions. Operating systems use two
components to manage computer programs and applications:
• The kernel is the core inner component that processes data at the hardware level. It handles input-output
management, memory and process management.
• The shell is the outer layer that manages the interaction between the user and the OS. The shell communicates
with the operating system by either taking the input from the user or a shell script. A shell script is a sequence
of system commands that are stored in a file.

Operating system functions

Basic functions of an operating system include:


• Booting: An operating system manages the startup of a device.
• Memory management: An operating system coordinates computer applications and allocates space to different
programs installed in the computer.
• Data security: An operating system protects your data from cyberattacks.
• Loading and execution: An operating system starts and executes a program.
• Drive/disk management: An operating system manages computer drives and divides disks.
• Device control: An operating system enables you to allow or block access to devices.
• User interface: This part of an operating system, also known as UI, allows users to enter and receive information.
• Process management: The operating system allocates space to enable computer processes, such as storing and
sharing information.
Most operating systems come pre-installed on the device. However, users can change their OS or upgrade to a newer
version of the operating system for better device performance.

Types of operating systems


Here are the different types of operating systems you need to know:

1. Batch OS

The batch operating system does not have a direct link with the computer. A different system divides and allocates
similar tasks into batches for easy processing and faster response.

The batch operating system is appropriate for lengthy and time-consuming tasks. To avoid slowing down a device, each
user prepares their tasks offline and submits them to an operator. The advantages and disadvantages of using a batch
operating system include:

Advantages:
• Many users can share batch systems.
• There is little idle time for batch operating systems.
• It becomes possible to manage large workloads.
• It's easy to estimate how long a task will take to be completed.

Disadvantages:
• Batch operating systems are challenging to debug.
• Any failure of the system creates a backlog.
• It may be costly to install and maintain good batch operating systems.
Batch operating systems are used for tasks such as managing payroll systems, data entry and bank statements.

2. Time-sharing or multitasking OS

The time-sharing operating system, also known as a multitasking OS, works by allocating time to a particular task and
switching between tasks frequently. Unlike the batch system, the time-sharing system allows users to complete their
work in the system simultaneously.

It allows many users to be distributed across various terminals to minimize response time. Potential advantages and
disadvantages of time-sharing operating systems include:

Advantages:
• There's a quick response during task performance.
• It minimizes the idle time of the processor.
• All tasks get an equal chance of being accomplished.
• It reduces the chance of software duplication.

Disadvantages:
• The user's data security might be a problem.
• System failure can lead to widespread failures.
• Problems in data communication may arise.
• The integrity of user programs is not assured.
Examples of time-sharing operating systems include Multics and Unix.
3. Distributed OS
This system is based on autonomous but interconnected computers communicating with each other via
communication lines or a shared network. Each autonomous system has its own processor that may differ in size and
function. Distributed operating systems are used for tasks such as telecommunication networks, airline reservation
controls and peer-to-peer networks.
A distributed operating system serves multiple applications and multiple users in real time. The data processing
function is then distributed across the processors. Potential advantages and disadvantages of distributed operating
systems are:
Advantages:
• They allow remote working.
• They allow a faster exchange of data among users.
• They minimize the load on the host computer.
• They reduce delays in data processing.
• Failure in one site may not cause much disruption to the system.
• They enhance scalability since more systems can be added to the network.

Disadvantages:
• If the primary network fails, the entire system shuts down.
• They're expensive to install.
• They require a high level of expertise to maintain.
4. Network OS
Network operating systems are installed on a server providing users with the capability to manage data, user groups
and applications. This operating system enables users to access and share files and devices such as printers, security
software and other applications, mostly in a local area network. Potential advantages and disadvantages of network
operating systems are:
Advantages:
• Centralized servers provide high stability.
• Security issues are easier to handle through the servers.
• It's easy to upgrade and integrate new technologies.
• Remote access to the servers is possible.

Disadvantages:
• They require regular updates and maintenance.
• Servers are expensive to buy and maintain.
• Users' reliance on a central server might be detrimental to workflows.
Examples of network operating systems include Microsoft Windows, Linux and macOS X.
5. Real-time OS
Real-time operating systems provide support to real-time systems that require observance of strict time requirements.
The response time between input, processing and response is tiny, which is beneficial for processes that are highly
sensitive and need high precision. These processes include operating missile systems, medical systems or air traffic
control systems, where delays may lead to loss of life and property. Real-time operating systems may either be hard
real-time systems or soft real-time systems.
Hard real-time systems are installed in applications with strict time constraints. The system guarantees the completion
of sensitive tasks on time. Hard real-time does not have virtual memory. Soft real-time systems do not have equally
rigid time requirements. A critical task gets priority over other tasks. Potential advantages and disadvantages of real-
time operating systems include:
Advantages:
• They use devices and systems maximally, hence more output.
• They allow fast shifting from one task to another.
• The focus is on current tasks, and less focus is put on the queue.
• Real-time systems are meticulously programmed, hence free of errors.
• They can be used in embedded systems.
• They allow easy allocation of memory.

Disadvantages:
• They have a low capacity to run tasks simultaneously.
• They use heavy system resources.
• They run on complex algorithms that are not easy to understand.
• They're unsuitable for thread priority because of the system's inability to switch tasks.
Real-time operating systems are used for tasks such as scientific experiments, medical imaging, robotics and air traffic
control operations.
6. Mobile OS
Mobile operating systems run exclusively on small devices such as smartphones, tablets and wearables. The system
combines the features of a personal computer with additional features useful for a handheld device.
Mobile operating systems start when a device is powered on to provide access to installed applications. Mobile
operating systems also manage wireless network connectivity. Potential advantages and disadvantages of mobile
operating systems are:
Advantages:
• Most systems are easy for users to learn and operate.

Disadvantages:
• Some systems are not user-friendly.
• Some mobile OSs put a heavy drain on a device's battery, requiring frequent recharging.
Examples of mobile operating systems include Android OS, Apple and Windows mobile OS.

Common operating systems


Here are the most common operating systems in use:
Microsoft Windows
Created by Microsoft, Microsoft Windows is one of the most popular proprietary operating systems for computers in
the world. Most personal computers come preloaded with a version of Microsoft Windows. One downside of Windows
is that compatibility with mobile phones has been problematic.

Apple iOS
Apple iOS from Apple is used on smartphones and tablets manufactured by the same company. Users of this system
have access to hundreds of applications. The operating system offers strong encryption capabilities to control
unauthorized access to users' private data.

Google Android
Android from Google is the most popular operating system in the world. It's mainly used on tablets and smartphones.
It also runs on devices made by other manufacturers. Users have access to numerous mobile applications available on
the Google Play Store.

Apple macOS
Developed by Apple, this proprietary operating system runs on the manufacturer's personal computers and desktops.
All Apple and Macintosh computers come equipped with the latest version of macOS, previously known as OS X. The
ability to prevent bugs and fend off hackers makes Apple operating systems popular with their users.

Linux
Created by the Finnish programmer Linus Torvalds, Linux is today developed by programmer collaborators across the
world who submit tweaks to the central kernel software. Linux is popular with programmers and corporate servers. It
is available for free online.

What’s the difference between an open-source and proprietary OS?


An open-source operating system, such as those built on the Linux kernel (like the Android OS), makes available its
code to the public. This source code can be modified by anyone and the software grows based on open collaboration.
A proprietary OS, on the other hand, is branded software, where the code is protected by the maker, such as Microsoft
or Apple, meaning the code can’t be modified by others. Any changes made to the operating system will be from the
organization that owns it. In general, open-source software is updated and fixed more quickly than proprietary software.

Why is it important to know about operating systems?


Knowledge of operating systems is important for the following reasons:
• It allows you to understand the inner workings of a device.
• It enables you to fix minor issues with the device.
• It allows you to improve your coding skills.
• It allows you to determine what operating system is best for you.
Learning about OS and improving your understanding of basic computer technology isn't just for computer
enthusiasts—these are important skills for all computer users.

Which jobs work directly with operating systems?


Operating systems are the “brains” behind a computer’s functionality, and there are numerous tech jobs that work
directly with them, including software developers, web developers, software engineers, coders, and computer
programmers. While many programmers today are skilled in cross-platform programming, meaning the software they
develop is capable of running on multiple platforms, there are, of course, still specific-OS developers, such as a Linux
developer, a macOS developer or an Android developer.
Operating Systems and Computer Architecture

An operating system is software that runs on computer systems; it is useful for many things, such as:
• Multitasking - allowing you to open many applications at a time
• Error handling - the anticipation, detection, and resolution of programming, application, and
communications errors
• Security - password protection, so that information does not get leaked
• Input and output controls - controlling other devices, such as printers

There are more OS benefits, of course, but these are just some of them. The most common operating systems are
Windows, macOS, Linux, UNIX and DOS for computers, and Android, iOS and BlackBerry OS for mobile phones.
Nowadays, other devices also have OSs in them, for example smart TVs and smart fridges.
[Images: types of OSs such as Linux, Windows and iOS, and household devices with OSs]


Some devices do not need operating systems because they carry out only basic functions; light switches, for example,
need nothing more complex than turning on and off.

When the computer is first powered up, the initiating programs are loaded into the memory from the ROM (Read
Only Memory). These make sure that the hardware, processor and internal memory are functioning correctly. If there
aren’t any errors, the OS will be loaded into the memory.

When the computer shuts down, all the data which has been loaded into memory (RAM) is deleted; the same start-up
process is repeated when the computer is turned on again.

Interrupts and Buffers

Interrupts - a signal sent from a device or from software to the processor, causing the processor to pause temporarily
while the interrupt is handled. This can happen due to the following:
• disk drive is ready to receive more data
• error has occurred (eg. paper jammed in the printer)
• <ctrl><Alt><Del> buttons are pressed
• software error has occurred

Buffers - a buffer is an area of memory that holds data temporarily while it is transferred between devices operating
at different speeds, for example between a fast processor and a slow input or output device.
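
The idea of a buffer can be sketched in Python with a bounded queue between a fast producer and a slow consumer. This is only a toy model of the concept, not how an operating system actually implements device buffers, and the job names and delays are invented:

    # Toy model of buffering: a bounded queue smooths the speed mismatch
    # between a fast producer (e.g. the CPU) and a slow consumer (e.g. a printer).
    import queue
    import threading
    import time

    buffer = queue.Queue(maxsize=8)  # the buffer: a bounded FIFO queue

    def producer():
        for i in range(5):
            buffer.put(f"job {i}")   # blocks only if the buffer is full
            print("queued ", i)

    def consumer():
        for _ in range(5):
            job = buffer.get()
            time.sleep(0.1)          # the slow device takes its time
            print("printed", job)

    threading.Thread(target=producer).start()
    threading.Thread(target=consumer).start()
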
von Neumann model:
The von Neumann model was proposed by the scientist John von Neumann in 1945. Von Neumann computer
systems contain five main building blocks: the memory unit, the processing unit (arithmetic logic unit), the control
unit, and the input and output devices. These components are connected together using the system bus.

Components of the Von Neumann Model


1. Memory Unit: the storage of information (data/programs)
2. Processing Unit: the processing of information
3. Input: the devices feeding information into the computer, e.g. keyboard, mouse
4. Output: the devices receiving information out of the computer, e.g. printer, monitor
5. Control Unit: makes sure that all the other parts perform their tasks correctly and at the correct time.
The three system buses are:
1. Address Bus – Carries signals relating to the addresses between the processor and the memory. It
is unidirectional (signals travel in one direction only).
2. Data Bus – Sends data between the processor, the memory unit and the input/output devices. It is bi-
directional (data can travel in both directions)
3. Control Bus – Carries signals relating to the control and coordination of all activities within the computer
(examples: read and write operation). It is regarded as being both unidirectional and bi-directional due to the
internal connections within the computer architecture.

[Diagram: the simple von Neumann model structure]

Steps of von Neumann model:


First, the data address is held in the Control Unit, and the address is sent to the MAR (Memory Address
Register) along the address bus. The memory unit then sends the data to the processor along the data bus; the
processor decodes the data, and the result is sent back to the memory unit using the same bus. The decoded data is
then sent to the output using the data bus.

The read operation steps are:


1. First, the processor sends the address of the required data along the address bus
2. Then the processor sends a read signal along the control bus to the memory
3. The data is sent from the memory to the processor along the data bus
4. The processor then decodes and executes the data.
The write operation steps are:
1. First, the processor places the data on the data bus and the address of the destination on the address bus
2. Then the processor sends a write signal along the control bus, and the data is sent along the data bus to memory
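
The read and write sequences above can be mimicked with a toy Python model, in which a list stands in for the memory unit and the function arguments play the role of the address and data buses. This is a sketch of the idea only, not real hardware behaviour:

    # Toy model of the von Neumann read and write operations.
    memory = [0] * 16  # a tiny memory unit with 16 addressable cells

    def read(address):
        """Processor puts the address on the address bus; memory returns the data."""
        return memory[address]

    def write(address, data):
        """Processor puts the address and data on the buses; memory stores the data."""
        memory[address] = data

    write(3, 42)    # write operation: store the value 42 at address 3
    print(read(3))  # read operation: fetch the value back -> 42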

Cache memory – used for high speed storage


Register – holds data or instructions during processing
Accumulator – register that is used for calculations
What is Cloud Computing?
In cloud computing, we can manipulate, configure and access hardware and software remotely. In general, cloud
computing means accessing and storing files and databases over the internet instead of on your
computer's hard drive.

Cloud computing offers platform independence, as the software does not need to be installed on any PC; this makes
cloud computing portable. Applications such as e-mail and web conferencing execute on the cloud.

Prerequisites
To learn cloud computing, one should have basic knowledge of computers, Database Management Systems (DBMS) and
networking. These subjects will help you to understand the concepts of cloud computing very easily.

Basic Concepts of Cloud Computing


To make cloud computing feasible and accessible to the end users, there are certain services and models:
1. Deployment Models
2. Service Models

1. Deployment Models
There are four deployment models, each defining a type of access to the cloud:
• Public
• Private
• Hybrid
• Community

• Public Cloud
Public cloud is easily accessible to the general public. It is inexpensive, and there are no wasted resources because you
pay for what you use.
• Private Cloud
Private cloud allows systems and services to be accessed only within an organisation. A private cloud is best for
businesses with dynamic or unpredictable computing needs, because the organisation retains control over the cloud environment.
• Hybrid Cloud
A hybrid cloud is a cloud service which includes both private and public clouds. Hybrid cloud is best for heavy workloads
because it combines the strengths of both.
• Community Cloud
In the Community cloud, the resources are shared between several organisations. It allows several companies to work
together on the same platform, where they can share their resources.

2. Service Models
There are three service models:
• Infrastructure-as-a-Service (IaaS) - Here users are responsible for managing data, applications and the runtime. In
IaaS, providers manage virtualisation, servers, hard drives, storage, and networking.
Examples: Amazon Web Services (AWS), Microsoft Azure.
• Platform-as-a-Service (PaaS) - PaaS is used for development. With PaaS, one can develop and customise
applications; it makes development, testing, and deployment of applications easy.
Example: Apprenda.
• Software-as-a-Service (SaaS) - As a service-based cloud, the cloud provider delivers complete software to the
client. It provides pre-configured resources through a virtual interface and allows access to the software only; it
does not expose any operating system.
Examples: Google Apps, Salesforce, Workday.
History of Cloud Computing

The evolution of cloud computing started in the 1950s with mainframe computing, in which multiple users were allowed
to access a central mainframe. Around 20 years later, in the 1970s, the concept of virtualisation emerged.
Virtualisation software made it possible to execute one or more operating systems simultaneously in an isolated
environment.

Benefits of Cloud Computing

• Applications and utilities can be accessed over the internet.


• Applications can be modified, configured and manipulated via the internet at any instance of time.
• To access, manipulate and modify, you don’t have to download or install any software.
• The cloud is platform-independent; as it is available over the internet, one can access it anytime, whenever they want.
• It is more reliable because cloud computing offers load balancing.

Risks related to Cloud Computing


In some clouds, data management and infrastructure management are provided only by a third party, so it is very risky
to hand over valuable information to the service provider. It can also be challenging to switch from one cloud service
provider to another. Sometimes a data deletion request may not remove the data completely, because backup copies
are not available at the time of deletion.

Q 1. What is Cloud Computing?

Ans. Cloud computing is storing data, files and images in data centers available to users over the Internet.

Q 2. What are the characteristics of Cloud Computing?


Ans. Characteristics of Cloud Computing:
• Easily accessible
• Minimum charges
• On-Demand Network
• Resource Pooling
• Adequate Storage
Q 3. What are the examples of cloud computing?
Ans. Given below are a few examples of cloud computing:
• Google Cloud
• Adobe Creative Cloud
• Creatio
• Salesforce
• Microsoft 365
• Microsoft Power BI

Q 4. Why is cloud computing important?

Ans. Cloud computing ensures that a person can easily store large amounts of data and then have convenient access to
the same.

Q 5. What is the benefit of Cloud Computing?

Ans. It is more reliable because cloud computing offers load balancing, and applications can be modified via the internet
at any time.
7. Introduction to Networking

What is a network?
In information technology, a network is defined as the connection of at least two computer systems, either by a cable
or a wireless connection. The simplest network is a combination of two computers connected by a cable. This type
of network is called a peer-to-peer network.

There is no hierarchy in this network; both participants have equal privileges. Each computer has access to the data of
the other device and can share resources such as disk space, applications or peripheral devices (printers, etc.).

Today’s networks tend to be a bit more complex and don’t just consist of two computers. Systems with more than ten
participants usually use client-server networks. In these networks, a central computer (server) provides resources to
the other participants in the network (clients).

When you buy a new computer, the first thing you’ll probably try to do is connect to the Internet. To do this, you
establish a connection to your router, which receives the data from the Internet and then forwards it to the computer.

Of course that’s not all: Next, you could also connect your printer, smartphone or TV to the router so that these
devices are also connected to the Internet. Now you have connected different devices to each other via a central
access point and created your own network.

Definition: Network

A network is a group of two or more computers or other electronic devices that are interconnected for the purpose
of exchanging data and sharing resources.

Network example: your home Wi-Fi. The Wireless LAN (Wireless Local Area Network, i.e. the Wi-Fi network) in your
home is a good example of a small client-server network. The various devices in your home are wirelessly connected
to the router, which acts as a central node (server) for the household. The router itself is connected to a much larger
network: the Internet.

Since the devices are connected to the router as clients, they are part of the network and can use the same resource
as the server, namely the Internet. The devices can also communicate with each other without having to establish a
direct connection to each device. For example, you can send a print job to a Wi-Fi-enabled printer without first
connecting the printer to the computer using a cable.

Before the advent of modern networks, communication between different computers and devices was very
complicated. Computers were connected using a LAN cable. Mechanical switches were used so that peripheral devices
could also be shared. Due to physical limitations (cable length), the devices and computers always had to be very close
to each other. If you need an extremely stable connection, you should consider the possibility of a wired connection to
the router or device, despite the advantages of Wi-Fi.

What are the tasks and advantages of a network?

The main task of a network is to provide participants with a single platform for exchanging data and sharing resources.
This task is so important that many aspects of everyday life and the modern world would be unimaginable without
networks. Here’s a real-life example: In a typical office, every workstation has its own computer. Without a network of
computers, it would be very difficult for a team to work on a project since there would be no common place to share
or store digital documents and information, and team members would not be able to share certain applications.

In addition, many offices only have one printer or a few printers that are shared by everyone. Without a network, the
IT department would have to connect every single computer to the printer, which is difficult to implement from a
technical standpoint. A network elegantly solves this problem because all computers are connected to the printer via
one central node.

The main advantages of networks are:


• Shared use of data
• Shared use of resources
• Shared processing power and storage capacity
• Central control of programs and data
• Central storage and backup of data
• Easy management of authorizations and responsibilities
How does a network work?
In a typical client-server network, there is a central node called the
server. The server is connected to the other devices, which are called
clients. This connection is either wireless (Wireless LAN) or wired (LAN).
In a typical home network, the router assumes the role of the server. It
is connected to the Internet and provides the “Internet” resource for
the other devices (computers, smartphones, etc.). The router combines
all wired and wireless devices in a local network.
[Graphic: client-server architecture - the typical structure of a home network]
In larger networks, such as corporate networks, the server is usually a central computer. This computer is used
exclusively for running special server software and services, not regular applications and programs. The server must
operate continuously, whereas the other computers (clients) can be switched off.
The server and the client communicate as follows in this server-based network: The client first sends a request to the
server. The server evaluates the request and then transmits the response. In this model, the client always connects to
the server, never the other way around.
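
This request/response exchange can be sketched in a few lines of Python using sockets. The port number is an arbitrary example, and both ends run in one process purely for demonstration; real servers and clients are separate programs:

    # Minimal sketch of client-server communication: the client always
    # initiates, the server answers.
    import socket
    import threading
    import time

    def server():
        with socket.create_server(("127.0.0.1", 50007)) as srv:
            conn, _ = srv.accept()            # wait for a client to connect
            with conn:
                request = conn.recv(1024)     # receive the client's request
                conn.sendall(b"response to: " + request)

    threading.Thread(target=server, daemon=True).start()
    time.sleep(0.2)  # give the server a moment to start listening

    with socket.create_connection(("127.0.0.1", 50007)) as client:
        client.sendall(b"hello server")       # the client sends a request...
        print(client.recv(1024).decode())     # ...and prints the response
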
Network protocols
Network protocols ensure smooth communication between the different components in a network. They control data
exchange and determine how communication is established and terminated as well as which data is transmitted. There
are usually multiple network protocols that each perform a specific subtask and are hierarchically organized into layers.
Network addresses
In addition, it is necessary to ensure that the transmitter and receiver can be correctly identified. Network addresses
are used for this purpose. In computer networks, each computer typically has an IP address, similar to a telephone
number, that uniquely identifies the computer. This internal IP address is used only for communication between the
participants in the local network. For communication on the Internet, external IP addresses are used that are
automatically assigned by the Internet provider.
A distinction is also made between IPv4 and IPv6 addresses. IPv4 addresses used to be the standard, but only a total of
around 4.3 billion of these addresses could be assigned before they were exhausted. Due to the massive expansion of
the Internet, additional IP addresses were urgently needed. Therefore, the new IPv6 standard was developed, allowing
up to 3.4 × 10^38 addresses. This should be sufficient for the future.
You can find detailed information on the IP protocol and its important role in computer networks in our article “What
is the Internet Protocol?”.
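
Python's standard ipaddress module makes these ideas easy to explore; the addresses below are example/documentation values, not real hosts:

    # Inspecting IPv4 and IPv6 addresses with Python's standard library.
    import ipaddress

    v4 = ipaddress.ip_address("192.168.0.10")
    print(v4.is_private)   # True: a typical internal (local network) address

    v6 = ipaddress.ip_address("2001:db8::1")
    print(v6.version)      # 6

    # The size difference between the two address spaces:
    print(2 ** 32)    # IPv4: about 4.3 billion addresses
    print(2 ** 128)   # IPv6: about 3.4 x 10**38 addresses
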
What types of networks are there?
Networks are usually divided into different network types according to transmission type and range, that is, depending
on how or how far the data is transmitted.
Wireless vs. wired
Networks are classified by transmission type as either wireless or wired. Examples of wireless networks include Wi-Fi
networks based on the IEEE 802.11 standard, and the LTE networks used for mobile devices and smartphones. Wired
networks such as DSL are also known as broadband Internet.

Types of networks by range

Networks are typically classified by range as follows:


1. Personal Area Network (PAN): A PAN is used for interconnecting devices within a short range of approximately 10
meters. Examples include Bluetooth technology or Apple’s Airdrop ad hoc Wi-Fi service.
2. Local Area Network (LAN): Local area networks are among the most widespread networks and are used in
households or small and medium-sized companies.
3. Metropolitan Area Network (MAN): These types of networks cover cities or single geographic regions.
4. Wide Area Network (WAN): The nationwide broadband or cellular network in the US is an example of a Wide Area
Network.
5. Global Area Network (GAN): The best-known example of a global network is the Internet.
Note that there is some overlap between the different network types: As a Wi-Fi user, you are simultaneously part of
a WAN and a GAN when you’re connected to the Internet.
8. Introduction to Databases
Database defined
A database is an organized collection of structured information, or data, typically stored electronically in a computer
system. A database is usually controlled by a database management system (DBMS). Together, the data and the DBMS,
along with the applications that are associated with them, are referred to as a database system, often shortened to just
database.
Data within the most common types of databases in operation today is typically modelled in rows and columns in a
series of tables to make processing and data querying efficient. The data can then be easily accessed, managed, modified,
updated, controlled, and organized. Most databases use structured query language (SQL) for writing and querying data.
What is Structured Query Language (SQL)?
According to ANSI (the American National Standards Institute), SQL is the standard language for relational database
management systems, used to communicate with a database. SQL is a programming language used by nearly
all relational databases to query, manipulate, and define data, and to provide access control. SQL was first
developed at IBM in the 1970s, with Oracle as a major contributor, which led to the implementation of the ANSI
SQL standard. SQL has spurred many extensions from companies such as IBM, Oracle, and Microsoft, and although
SQL is still widely used today, new programming languages are beginning to appear.

SQL is a domain-specific language designed for storing and processing information in a relational database. A relational
database stores information in tabular form, with rows and columns representing different data attributes and the
various relationships between the data values. SQL is used for managing data held in a relational database management
system, or for stream processing in a relational data stream management system.
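
A few core SQL statements can be tried directly from Python using SQLite, a small relational DBMS that ships with the standard library; the table and rows below are invented for illustration:

    # Running basic SQL (CREATE, INSERT, SELECT) against SQLite from Python.
    import sqlite3

    con = sqlite3.connect(":memory:")  # a throwaway in-memory database
    con.execute("CREATE TABLE student (id INTEGER PRIMARY KEY, name TEXT)")
    con.execute("INSERT INTO student (name) VALUES (?)", ("Ada",))
    con.execute("INSERT INTO student (name) VALUES (?)", ("Linus",))

    for row in con.execute("SELECT id, name FROM student ORDER BY name"):
        print(row)  # -> (1, 'Ada') then (2, 'Linus')
    con.close()
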
Evolution of the database
Databases have evolved dramatically since their inception in the early 1960s. Navigational databases such as the
hierarchical database (which relied on a tree-like model and allowed only a one-to-many relationship), and the network
database (a more flexible model that allowed multiple relationships), were the original systems used to store and
manipulate data. Although simple, these early systems were inflexible. In the 1980s, relational databases became
popular, followed by object-oriented databases in the 1990s.
More recently, NoSQL databases came about as a response to the growth of the internet and the need for faster speed
and processing of unstructured data. Today, cloud databases and self-driving databases are breaking new ground when
it comes to how data is collected, stored, managed, and utilized.
What’s the difference between a database and a spreadsheet?
Databases and spreadsheets (such as Microsoft Excel) are both convenient ways to store information. The primary
differences between the two are:
• How the data is stored and manipulated
• Who can access the data
• How much data can be stored
Spreadsheets were originally designed for one user, and their characteristics reflect that. They’re great for a single user
or small number of users who don’t need to do a lot of incredibly complicated data manipulation. Databases, on the
other hand, are designed to hold much larger collections of organized information—massive amounts, sometimes.
Databases allow multiple users at the same time to quickly and securely access and query the data using highly complex
logic and language.
Types of databases
There are many different types of databases. The best database for a specific organization depends on how the
organization intends to use the data.
Relational databases
• Relational databases became dominant in the 1980s. Items in a relational database are organized as a set of tables
with columns and rows. Relational database technology provides the most efficient and flexible way to access
structured information.
Object-oriented databases
• Information in an object-oriented database is represented in the form of objects, as in object-oriented
programming.
Distributed databases
• A distributed database consists of two or more files located in different sites. The database may be stored on
multiple computers, located in the same physical location, or scattered over different networks.
Data warehouses
• A central repository for data, a data warehouse is a type of database specifically designed for fast query and
analysis.
NoSQL databases
• A NoSQL, or nonrelational database, allows unstructured and semistructured data to be stored and manipulated
(in contrast to a relational database, which defines how all data inserted into the database must be composed).
NoSQL databases grew popular as web applications became more common and more complex.
Graph databases
• A graph database stores data in terms of entities and the relationships between entities.
OLTP databases
• An OLTP (online transaction processing) database is a speedy database designed for large numbers of transactions
performed by multiple users.

These are only a few of the several dozen types of databases in use today. Other, less common databases are tailored to
very specific scientific, financial, or other functions. In addition to the different database types, changes in technology
development approaches and dramatic advances such as the cloud and automation are propelling databases in entirely
new directions. Some of the latest databases include:
Open source databases
• An open source database system is one whose source code is open source; such databases could be SQL or NoSQL
databases.
Cloud databases
• A cloud database is a collection of data, either structured or unstructured, that resides on a private, public, or
hybrid cloud computing platform. There are two types of cloud database models: traditional and database as a
service (DBaaS). With DBaaS, administrative tasks and maintenance are performed by a service provider.
Multimodel database
• Multimodel databases combine different types of database models into a single, integrated back end. This means
they can accommodate various data types.
Document/JSON database
• Designed for storing, retrieving, and managing document-oriented information, document databases are a
modern way to store data in JSON format rather than rows and columns.
Self-driving databases
• The newest and most groundbreaking type of database, self-driving databases (also known as autonomous
databases) are cloud-based and use machine learning to automate database tuning, security, backups, updates,
and other routine management tasks traditionally performed by database administrators.

What is database software?


Database software is used to create, edit, and maintain database files and records, enabling easier file and record creation,
data entry, data editing, updating, and reporting. The software also handles data storage, backup and reporting,
multi-access control, and security. Strong database security is especially important today, as data theft becomes more
frequent. Database software is sometimes also referred to as a "database management system" (DBMS).

Database software makes data management simpler by enabling users to store data in a structured form and then access
it. It typically has a graphical interface to help create and manage the data and, in some cases, users can construct their
own databases by using database software.
What is a database management system (DBMS)?
A database typically requires a comprehensive database software program known as a database management system
(DBMS). A DBMS serves as an interface between the database and its end users or programs, allowing users to retrieve,
update, and manage how the information is organized and optimized. A DBMS also facilitates oversight and control of
databases, enabling a variety of administrative operations such as performance monitoring, tuning, and backup and
recovery.
Some examples of popular database software or DBMSs include MySQL, Microsoft Access, Microsoft SQL Server,
FileMaker Pro, Oracle Database, and dBASE.

What is a MySQL database?


MySQL is an open source relational database management system based on SQL. It was designed and optimized for
web applications and can run on any platform. As new and different requirements emerged with the internet, MySQL
became the platform of choice for web developers and web-based applications. Because it’s designed to process millions
of queries and thousands of transactions, MySQL is a popular choice for ecommerce businesses that need to manage
multiple money transfers. On-demand flexibility is the primary feature of MySQL.
MySQL is the DBMS behind some of the top websites and web-based applications in the world, including Airbnb, Uber,
LinkedIn, Facebook, Twitter, and YouTube.

Using databases to improve business performance and decision-making


With massive data collection from the Internet of Things transforming life and industry across the globe, businesses
today have access to more data than ever before. Forward-thinking organizations can now use databases to go beyond
basic data storage and transactions to analyze vast quantities of data from multiple systems. Using databases and other
computing and business intelligence tools, organizations can now leverage the data they collect to run more efficiently,
enable better decision-making, and become more agile and scalable. Optimizing access and throughput to data is critical
to businesses today because there is more data volume to track. It’s critical to have a platform that can deliver the
performance, scale, and agility that businesses need as they grow over time.
The self-driving database is poised to provide a significant boost to these capabilities. Because self-driving databases
automate expensive, time-consuming manual processes, they free up business users to become more proactive with their
data. By having direct control over the ability to create and use databases, users gain control and autonomy while still
maintaining important security standards.

Database challenges
Today’s large enterprise databases often support very complex queries and are expected to deliver nearly instant
responses to those queries. As a result, database administrators are constantly called upon to employ a wide variety of
methods to help improve performance. Some common challenges that they face include:
• Absorbing significant increases in data volume. The explosion of data coming in from sensors, connected
machines, and dozens of other sources keeps database administrators scrambling to manage and organize their
companies’ data efficiently.
• Ensuring data security. Data breaches are happening everywhere these days, and hackers are getting more
inventive. It’s more important than ever to ensure that data is secure but also easily accessible to users.
• Keeping up with demand. In today’s fast-moving business environment, companies need real-time access to their
data to support timely decision-making and to take advantage of new opportunities.
• Managing and maintaining the database and infrastructure. Database administrators must continually watch
the database for problems and perform preventative maintenance, as well as apply software upgrades and
patches. As databases become more complex and data volumes grow, companies are faced with the expense of
hiring additional talent to monitor and tune their databases.
• Removing limits on scalability. A business needs to grow if it’s going to survive, and its data management must
grow along with it. But it’s very difficult for database administrators to predict how much capacity the company
will need, particularly with on-premises databases.
• Ensuring data residency, data sovereignty, or latency requirements. Some organizations have use cases that are
better suited to run on-premises. In those cases, engineered systems that are pre-configured and pre-optimized
for running the database are ideal.
Addressing all of these challenges can be time-consuming and can prevent database administrators from performing
more strategic functions.

How autonomous technology is improving database management


Self-driving databases are the wave of the future—and offer an intriguing possibility for organizations that want to use
the best available database technology without the headaches of running and operating that technology.

Self-driving databases use cloud-based technology and machine learning to automate many of the routine tasks required
to manage databases, such as tuning, security, backups, updates, and other routine management tasks. With these tedious
tasks automated, database administrators are freed up to do more strategic work. The self-driving, self-securing, and
self-repairing capabilities of self-driving databases are poised to revolutionize how companies manage and secure their
data, enabling performance advantages, lower costs, and improved security.
9. Introduction to C Programming
C is a general-purpose computer programming language. It was created in the 1970s by Dennis Ritchie, and remains
very widely used and influential. By design, C's features cleanly reflect the capabilities of the targeted CPUs.
C is a powerful general-purpose programming language. It can be used to develop software like operating systems,
databases, compilers, and so on.

C is a procedural programming language with a static type system that supports structured programming,
recursion, and lexical variable scoping. C was created with constructs that map well to common hardware
instructions. It has a long history of use in programs that were previously written in assembly language.

The C programming language is machine-independent and is used to create many types of applications and operating
systems, such as Windows, as well as complex programs such as the Oracle database, Git, the Python interpreter, and
games. It is widely considered a foundation for learning any other programming language. Operating systems and
diverse application software for computer architectures ranging from supercomputers to PLCs and embedded systems
are examples of such applications.

Use of C and Key Applications


C is one of the oldest and most fundamental programming languages, and it is extensively used all over the world. C is
a fast, portable language with a large standard library. It is a middle-level language, combining the advantages of both
low-level and high-level languages. C has left an indelible mark on practically every field and is widely used for both
application development and system development.

Some applications of the C programming language include:


Operating System
The C programming language was created with the intention of writing the UNIX operating system. Furthermore, the
execution time of programs written in C is comparable to that of assembly language, making C a key component in the
development of multiple operating systems. It was used to write the Unix kernel, Microsoft Windows utilities and
operating system applications, and a large portion of the Android operating system.
3D Movies
Applications written in C and C++ are commonly used to make 3D films. Because such applications handle large
quantities of data and perform many computations per second, they must be extremely efficient and fast. The less time
it takes for designers and animators to create movie shots, the more money the company saves.
Intermediate Language
C is occasionally used by implementations of other languages as an intermediate language. This method can be used
for portability or convenience, as it eliminates the need for machine-specific code generators. C includes certain
features that aid compilation of generated code, such as line-number preprocessor directives and optional trailing
commas at the end of initializer lists. However, some of C's shortcomings have encouraged the creation of other
C-based languages, such as C--, that are expressly designed for use as intermediate languages.
Play Important Role in Development of New Programming Language
Programs written in C are easy and quick to execute. As a consequence, the C programming language has influenced
the creation of many other languages. C++ (also known as "C with Classes"), C#, Python, Java, JavaScript, Perl, PHP,
Verilog, D, Limbo, and the Unix C shell are examples of these languages. Each of these languages draws on C to a
varying degree. Python, for example, uses C to implement its reference interpreter (CPython) and parts of its standard
library, whereas C++, PHP, and Perl borrow much of their syntax and control structures from C.
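As a small illustration of that relationship (an example added here, not from the original text), Python's standard
ctypes module can call a function from the C runtime library directly; the sketch below assumes a typical Linux or
macOS system where the C library can be located:

    import ctypes
    import ctypes.util

    # Locate and load the C standard library into the Python process.
    libc = ctypes.CDLL(ctypes.util.find_library("c"))

    # Call C's abs() function straight from Python; prints 42.
    print(libc.abs(-42))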
Embedded Systems
The C programming language is the recommended language for creating embedded system drivers and applications.
The availability of machine-level hardware APIs, the presence of C compilers for almost every architecture, dynamic
memory allocation, and deterministic resource consumption make it the most popular language for this purpose.
How is the World Powered by C?
In today's world, almost everything is powered by computers. Computers are integral to our lives, from the smallest
electronic devices to the largest supercomputers. And while there are many types of computers, nearly all share one
thing in common: they are powered, at some level, by the C programming language.
C is a versatile language that can create all sorts of applications. It's used to write the operating system for many of the
world's most popular computers and the software that runs on them. It's also used to create the websites and apps
we use daily. But C isn't just used for computers, it's also used to control the devices that we use in our everyday lives,
from cell phones to microwaves. It's estimated that over 90% of the world's electronic devices are powered by C.
So next time you're using your computer, or even just flipping a switch, remember that you're using the power of C.
While C is one of the more difficult languages to learn, it is still an excellent first language to pick up, because almost all
programming languages are implemented in it. This means that once you learn C, it will be simpler to learn other
languages like C++ and C#.

How to Learn C Programming?


If you want to learn C programming, there are a few things you should keep in mind.
− First, finding a good resource to teach you the language basics is essential. Once you have a solid foundation, you
can start practicing by writing small programs.
− Taking part in online forums or communities dedicated to C programming is also helpful, as you can learn from
others who are working through the same challenges.
− Finally, don't be afraid to ask for help when you get stuck; many people will help beginners to learn C programming.
Benefits of C Language Over Other Programming Languages
C is a powerful programming language that offers several benefits over other languages.
• C is a universal language that can be used for various applications.
• C is a very efficient language in which you can write code that is both fast and reliable.
• C is a portable language, meaning that code written in C can be easily compiled and run on various platforms.
• C is a well-established language with a large and active community of developers constantly working on improving
and creating new tools and libraries.

10. Fundamental of Entrepreneurship


Entrepreneurship is when an individual who has an idea acts on that idea, usually to disrupt the current market with a
new product or service. An entrepreneurial venture usually starts as a small business, but the long-term vision is much
greater: to seek high profits and capture market share with an innovative new idea.

Specialised Modules
• Fundamentals of Web Design and Development
Web design and development is an umbrella term that describes the process of creating a website. As the name
suggests, it involves two major skill sets: web design and web development. Web design determines the look and feel
of a website, while web development determines how it functions.

What are 3 types of web development?


There are three main types of web development: front-end development, back-end development, and full stack
development.

Is web development design a good career?


Web design is a highly competitive field with plenty of room for career growth. With the right combination of skills,
experience, and portfolio projects, you can quickly rise up the ranks and become a sought-after designer. Plus,
specialized skills like UX/UI design open up even more opportunities.

Do I need to know coding for web design?


Designers do not need to learn to code. But if they know a bit about code, they can better understand a developer's
perspective. They do not have to be expert coders, but it would greatly benefit them to know a little HTML and CSS,
and maybe a bit of JavaScript.
DEGREE LEVEL 2

A broader range of skills will be learnt, in which students will gain a better understanding of frameworks and planning
techniques for the strategic management of organization’s computing resources, along with technical skills to evaluate,
design, configure and maintain shared computing infrastructure. They will gain solid understanding of the importance
of enterprise systems and network administration in virtual computing environments. They will have programming
skills needed in systems administration, network technologies, network design, and network security. We will further
nurture their creativity and innovation as well as independent learning to prepare them for the workplace.

Common Modules

1. Probability and Statistical Modelling

Probability model
A probability model is a mathematical representation of a random phenomenon. It is defined by its sample space,
events within the sample space, and probabilities associated with each event.

• The sample space S for a probability model is the set of all possible outcomes.
• An event A is a subset of the sample space S.
• A probability is a numerical value assigned to a given event A. The probability of an event is written P(A), and
describes the long-run relative frequency of the event.

The first two basic rules of probability are the following:


• Any probability P(A) is a number between 0 and 1 (0 ≤ P(A) ≤ 1).
• The probability of the sample space S is equal to 1 (P(S) = 1).
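
As a concrete illustration (a worked example added here), consider rolling a fair six-sided die: the sample space is
S = {1, 2, 3, 4, 5, 6}, and the event "an even number is rolled" is A = {2, 4, 6}. Since all six outcomes are equally
likely, P(A) = 3/6 = 0.5, and both rules hold: 0 ≤ P(A) ≤ 1 and P(S) = 1.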

Statistical model
A statistical model is a special class of mathematical model. What distinguishes a statistical model from other
mathematical models is that a statistical model is non-deterministic. Thus, in a statistical model specified via
mathematical equations, some of the variables do not have specific values, but instead have probability distributions;
i.e., some of the variables are stochastic. In a simple linear regression model such as y = β0 + β1x + ε, for example,
the error term ε is a stochastic variable; without that variable, the model would be deterministic.

Statistical models are often used even when the physical process being modeled is deterministic. For instance, coin
tossing is, in principle, a deterministic process; yet it is commonly modeled as stochastic (via a Bernoulli process).
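
As a minimal sketch (added as an illustration, not part of the original text), the Python program below models coin
tossing as a Bernoulli process and shows the long-run relative frequency of heads approaching the probability
P(heads) = 0.5:

    import random

    def count_heads(n, p=0.5):
        # Simulate n tosses of a coin with P(heads) = p (a Bernoulli process).
        return sum(random.random() < p for _ in range(n))

    n = 10_000
    heads = count_heads(n)
    # The long-run relative frequency approximates the probability of the event.
    print(f"Relative frequency of heads: {heads / n:.3f}")   # close to 0.5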

There are three purposes for a statistical model, according to Konishi & Kitagawa.
• Predictions
• Extraction of information
• Description of stochastic structures

Differences between a statistical model and a probability model


The main difference is that a probability model is only one (known) distribution, while a statistical model is a set of
probability models; the data is used to select a model from this set or a smaller subset of models that better (in a
certain sense) describe the phenomenon (in the light of the data).

Data Science
Applied probability is an important branch of probability, including computational probability. Statistics uses
probability theory to construct models for dealing with data.

What is the difference between statistical model and probability model?

A statistical model describes one or more variables and the relationships between them. In contrast, a probability
model describes the possible outcomes of a random event, sometimes represented as a random variable. The setting
of a random variable refers to how the underlying random event is configured.
2. Programming for Data Analysis

What is programming in data analysis?

Programming is the technique that allows data scientists to interact with and send instructions to computers. There
are hundreds of programming languages out there, built for diverse purposes. Some of them are better suited for data
science, providing high productivity and performance to process large amounts of data.

What are 3 types of data in programming?

A data type is a classification of data which tells the compiler or interpreter how the programmer intends to use the
data. Most programming languages support various types of data, including integer, real, character or string, and
Boolean.
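
As a minimal Python sketch (added as an illustration), the snippet below shows values of the data types just
mentioned, and how the interpreter classifies each one:

    x = 42            # integer
    pi = 3.14159      # real (floating-point) number
    name = "Ada"      # character string
    done = False      # Boolean

    # type() reports the classification the interpreter assigns to each value.
    for value in (x, pi, name, done):
        print(type(value).__name__, value)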

What are the 2 most popular programming languages for data analysis?

Data analysts use SQL (Structured Query Language) to communicate with databases, but when it comes to cleaning,
manipulating, analysing, and visualizing data, you're looking at either Python or R.
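
The sketch below (an illustration added here, not from the original text) shows that workflow in Python: SQL queries a
small in-memory SQLite database, and the pandas library (assumed to be installed) manipulates and summarizes the
result:

    import sqlite3
    import pandas as pd

    # Build a tiny in-memory database with a hypothetical sales table.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
    conn.executemany("INSERT INTO sales VALUES (?, ?)",
                     [("North", 120.0), ("South", 80.0), ("North", 45.5)])

    # SQL communicates with the database; pandas analyses the result.
    df = pd.read_sql_query("SELECT region, amount FROM sales", conn)
    print(df.groupby("region")["amount"].sum())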

The best data science programming languages are Python, R, Java, SQL, Scala, and Julia. Each of these languages has
unique features that are best used in different aspects of data science. Having some programming skills in multiple
languages can help you complete a wide range of data science tasks.

3. Innovation Process

4. Research Methods for Computing and Technology

What are the research methods in computing?

Research data may be grouped into four main types based on the methods used to collect it: observational,
experimental, simulation, and derived. The type of research data you collect may affect the way you manage that data.

Specialised Modules

1. Introduction to Virtualization

What is Virtualization?

Virtualization is technology that you can use to create virtual representations of servers, storage, networks, and
other physical machines. Virtual software mimics the functions of physical hardware to run multiple virtual machines
simultaneously on a single physical machine. Businesses use virtualization to use their hardware resources efficiently
and get greater returns from their investment. It also powers cloud computing services that help organizations manage
infrastructure more efficiently.

To properly understand Kernel-based Virtual Machine (KVM), you first need to understand some basic concepts
in virtualization. Virtualization is a process that allows a computer to share its hardware resources with multiple
digitally separated environments. Each virtualized environment runs within its allocated resources, such as memory,
processing power, and storage. With virtualization, organizations can switch between different operating systems on
the same server without rebooting.

A virtual machine is a software-defined computer that runs on a physical computer with a separate operating system
and computing resources. The physical computer is called the host machine and virtual machines are guest machines.
Multiple virtual machines can run on a single physical machine. Virtual machines are abstracted from the computer
hardware by a hypervisor.
Hypervisor

The hypervisor is a software component that manages multiple virtual machines in a computer. It ensures that each
virtual machine gets the allocated resources and does not interfere with the operation of other virtual machines. There
are two types of hypervisors.

Type 1 hypervisor

A type 1 hypervisor, or bare-metal hypervisor, is a hypervisor program installed directly on the computer's hardware
rather than on top of an operating system. Therefore, type 1 hypervisors have better performance and are commonly
used by enterprise applications. KVM uses the type 1 hypervisor model to host multiple virtual machines on the Linux
operating system.

Type 2 hypervisor

Also known as a hosted hypervisor, the type 2 hypervisor is installed on an operating system. Type 2 hypervisors are
suitable for end-user computing.

Virtual machines and hypervisors are two important concepts in virtualization.

Why is virtualization important?

By using virtualization, you can interact with any hardware resource with greater flexibility. Physical servers consume
electricity, take up storage space, and need maintenance. You are often limited by physical proximity and network
design if you want to access them. Virtualization removes all these limitations by abstracting physical hardware
functionality into software. You can manage, maintain, and use your hardware infrastructure like an application on the
web.

Virtualization example

Consider a company that needs servers for three functions:


1. Store business email securely
2. Run a customer-facing application
3. Run internal business applications

Each of these functions has different configuration requirements:


• The email application requires more storage capacity and a Windows operating system.
• The customer-facing application requires a Linux operating system and high processing power to handle
large volumes of website traffic.
• The internal business application requires iOS and more internal memory (RAM).

To meet these requirements, the company sets up three different dedicated physical servers for each application. The
company must make a high initial investment and perform ongoing maintenance and upgrades for one machine at a
time. The company also cannot optimize its computing capacity. It pays 100% of the servers’ maintenance costs but
uses only a fraction of their storage and processing capacities.

Efficient hardware use

With virtualization, the company creates three digital servers, or virtual machines, on a single physical server. It
specifies the operating system requirements for the virtual machines and can use them like the physical servers.
However, the company now has less hardware and fewer related expenses.

Infrastructure as a service

The company can go one step further and use a cloud instance or virtual machine from a cloud computing provider
such as AWS. AWS manages all the underlying hardware, and the company can request server resources with varying
configurations. All the applications run on these virtual servers without the users noticing any difference. Server
management also becomes easier for the company’s IT team.
What are the benefits of virtualization?

Virtualization provides several benefits to any organization:

Efficient resource use


Virtualization improves the utilization of hardware resources in your data center. For example, instead of running one server on
one computer system, you can create a virtual server pool on the same computer system by using and returning servers
to the pool as required. Having fewer underlying physical servers frees up space in your data center and saves money
on electricity, generators, and cooling appliances.

Automated IT management
Now that physical computers are virtual, you can manage them by using software tools. Administrators create
deployment and configuration programs to define virtual machine templates. You can duplicate your infrastructure
repeatedly and consistently and avoid error-prone manual configurations.
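
As a toy Python sketch of that idea (added here; the template fields are hypothetical), a virtual machine template can
be defined once as data and then stamped out repeatedly and consistently:

    import copy

    # A hypothetical virtual machine template, defined once as data.
    VM_TEMPLATE = {"cpus": 2, "ram_gb": 8, "os": "Linux", "disk_gb": 100}

    def clone_vm(name, **overrides):
        # Create a VM definition from the template, with optional overrides.
        vm = copy.deepcopy(VM_TEMPLATE)
        vm.update(overrides)
        vm["name"] = name
        return vm

    # Duplicate the infrastructure repeatedly and consistently.
    fleet = [clone_vm(f"web-{i:02d}") for i in range(3)]
    fleet.append(clone_vm("db-01", ram_gb=32, disk_gb=500))
    for vm in fleet:
        print(vm)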

Faster disaster recovery


When events such as natural disasters or cyberattacks negatively affect business operations, regaining access to IT
infrastructure and replacing or fixing a physical server can take hours or even days. By contrast, the process takes
minutes with virtualized environments. This prompt response significantly improves resiliency and facilitates business
continuity so that operations can continue as scheduled.

How does virtualization work?

Virtualization uses specialized software, called a hypervisor, to create several cloud instances or virtual machines on
one physical computer.

Cloud instances or virtual machines

After you install virtualization software on your computer, you can create one or more virtual machines. You can access
the virtual machines in the same way that you access other applications on your computer. Your computer is called the
host, and the virtual machine is called the guest. Several guests can run on the host. Each guest has its own operating
system, which can be the same or different from the host operating system.

From the user’s perspective, the virtual machine operates like a typical server. It has settings, configurations, and
installed applications. Computing resources, such as central processing units (CPUs), Random Access Memory (RAM),
and storage appear the same as on a physical server. You can also configure and update the guest operating systems
and their applications as necessary without affecting the host operating system.

Hypervisors

The hypervisor is the virtualization software that you install on your physical machine. It is a software layer that acts as
an intermediary between the virtual machines and the underlying hardware or host operating system. The hypervisor
coordinates access to the physical environment so that several virtual machines have access to their own share of
physical resources.

For example, if the virtual machine requires computing resources, such as computer processing power, the request
first goes to the hypervisor. The hypervisor then passes the request to the underlying hardware, which performs the
task.

The following are the two main types of hypervisors.

Type 1 hypervisors

A type 1 hypervisor—also called a bare-metal hypervisor—runs directly on the computer hardware. It has some
operating system capabilities and is highly efficient because it interacts directly with the physical resources.

Type 2 hypervisors

A type 2 hypervisor runs as an application on computer hardware with an existing operating system. Use this type of
hypervisor when running multiple operating systems on a single machine.
What are the different types of virtualization?

You can use virtualization technology to get the functions of many different types of physical infrastructure and all the
benefits of a virtualized environment. You can go beyond virtual machines to create a collection of virtual resources in
your virtual environment.

Server virtualization
Server virtualization is a process that partitions a physical server into multiple virtual servers. It is an efficient and cost-
effective way to use server resources and deploy IT services in an organization. Without server virtualization, physical
servers use only a small amount of their processing capacity, which leaves devices idle.

Storage virtualization
Storage virtualization combines the functions of physical storage devices such as network attached storage (NAS) and
storage area network (SAN). You can pool the storage hardware in your data center, even if it is from different vendors
or of different types. Storage virtualization uses all your physical data storage and creates a large unit of virtual storage
that you can assign and control by using management software. IT administrators can streamline storage activities,
such as archiving, backup, and recovery, because they can combine multiple network storage devices virtually into a
single storage device.

Network virtualization
Any computer network has hardware elements such as switches, routers, and firewalls. An organization with offices in
multiple geographic locations can have several different network technologies working together to create its enterprise
network. Network virtualization is a process that combines all of these network resources to centralize administrative
tasks. Administrators can adjust and control these elements virtually without touching the physical components, which
greatly simplifies network management.

The following are two approaches to network virtualization.

Software-defined networking
Software-defined networking (SDN) centralizes control of traffic routing, separating routing decisions from the physical
equipment that forwards the data. For example, you can program your system to prioritize your video call traffic over
application traffic to ensure consistent call quality in all online meetings.

Network function virtualization


Network function virtualization technology combines the functions of network appliances, such as firewalls, load
balancers, and traffic analyzers that work together, to improve network performance.

Data virtualization
Modern organizations collect data from several sources and store it in different formats. They might also store data in
different places, such as in a cloud infrastructure and an on-premises data center. Data virtualization creates a software
layer between this data and the applications that need it. Data virtualization tools process an application’s data request
and return results in a suitable format. Thus, organizations use data virtualization solutions to increase flexibility for
data integration and support cross-functional data analysis.

Application virtualization

Application virtualization abstracts the functions of applications so they can run on operating systems other than the
operating systems for which they were designed. For example, users can run a Microsoft Windows application on a Linux
machine without changing the machine configuration. To achieve application virtualization, you can use these approaches:

• Application streaming – Users stream the application from a remote server, so it runs only on the end user's
device when needed.

• Server-based application virtualization – Users can access the remote application from their browser or client
interface without installing it.

• Local application virtualization – The application code is shipped with its own environment to run on all
operating systems without changes.
Desktop virtualization

Most organizations have nontechnical staff who use desktop operating systems to run common business applications.
For instance, you might have the following staff:

• A customer service team that requires a desktop computer with Windows 10 and customer-relationship
management software

• A marketing team that requires Windows Vista for sales applications

You can use desktop virtualization to run these different desktop operating systems on virtual machines, which your
teams can access remotely. This type of virtualization makes desktop management efficient and secure, saving money
on desktop hardware. The following are types of desktop virtualization.

Virtual desktop infrastructure


Virtual desktop infrastructure runs virtual desktops on a remote server. Your users can access them by using client
devices.

Local desktop virtualization


In local desktop virtualization, you run the hypervisor on a local computer and create a virtual computer with a different
operating system. You can switch between your local and virtual environment in the same way you can switch between
applications.

How is virtualization different from cloud computing?

Cloud computing is the on-demand delivery of computing resources over the internet with pay-as-you-go pricing.
Instead of buying, owning, and maintaining a physical data center, you can access technology services, such as
computing power, storage, and databases, as you need them from a cloud provider.

Virtualization technology makes cloud computing possible. Cloud providers set up and maintain their own data centers.
They create different virtual environments that use the underlying hardware resources. You can then program your
system to access these cloud resources by using APIs. Your infrastructure needs can be met as a fully managed service.

How is server virtualization different from containerization?

Containerization is a way to deploy application code to run on any physical or virtual environment without changes.
Developers bundle application code with related libraries, configuration files, and other dependencies that the code
needs to run. This single package of the software, called a container, can run independently on any platform.
Containerization is a type of application virtualization.

You can think of server virtualization as building a road to connect two places. You have to recreate an entire virtual
environment and then run your application on it. By comparison, containerization is like building a helicopter that can
fly to either of those places. Your application is inside a container and can run on all types of physical or virtual
environments.

How can AWS help with virtualization and cloud computing?

By using AWS, you have multiple ways to build, deploy, and get to market quickly on the latest technology. For example,
you might benefit from any of these services (a brief code sketch follows the list):

• Use Amazon Elastic Compute Cloud (Amazon EC2) to exercise granular control over your infrastructure. Choose
the processors, storage, and networking that you want.

• Use AWS Lambda for serverless computing so that you can run code without considering servers.

• Use Amazon Lightsail to implement virtual servers, storage, databases, and networking for a low, predictable
price.
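
For instance, the hedged Python sketch below uses the AWS SDK for Python (boto3, which is assumed to be installed
and configured with credentials; the AMI ID is a placeholder) to launch a single Amazon EC2 virtual server:

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Request one small instance from a (placeholder) machine image.
    response = ec2.run_instances(
        ImageId="ami-0123456789abcdef0",  # hypothetical AMI ID
        InstanceType="t3.micro",
        MinCount=1,
        MaxCount=1,
    )
    print(response["Instances"][0]["InstanceId"])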
2. Switching and Routing Essentials

Switching and routing essentials refer to the fundamental concepts and skills required to understand and configure
switches and routers in a network infrastructure. These courses provide hands-on training in the architecture,
components, and operations of switches and routers. They focus on key topics such as VLANs, spanning tree protocol,
routing protocols, subnetting, and network troubleshooting.

One course that covers switching and routing essentials is "CCNA: Switching, Routing, and Wireless Essentials" offered
by Cisco Networking Academy. This course is designed to provide a comprehensive understanding of switching
technologies and router operations that support small-to-medium business networks, including wireless local area
networks (WLAN).

Another course that focuses on routing and switching essentials is "CCNA 2: Routing and Switching Essentials". This
course dives into the architecture, components, and operations of routers and switches in a small network. It provides
hands-on experience in configuring a router and a switch, as well as understanding routing protocols and network
design principles.

Both courses mentioned are available online and offer practical training to enhance your skills in switching and routing.
Participating in these courses can help you acquire the necessary knowledge and expertise to design, deploy, and
troubleshoot networks efficiently.

Both courses are offered by Cisco Networking Academy, a reputable organization known for industry-standard
networking certifications and training, so completing them can boost your credentials in the field of network
engineering.

Routing and switching are different functions of network communications. The function of switching is to move data
packets between devices on the same network (the same LAN, or local area network). The function of routing is to move
packets between different networks (between different LANs).
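
The distinction can be illustrated with Python's standard ipaddress module (a sketch added here, not from the original
text): hosts that fall inside the same subnet can reach each other by switching, while a destination outside the subnet
requires routing.

    import ipaddress

    lan = ipaddress.ip_network("192.168.1.0/24")   # a local area network

    a = ipaddress.ip_address("192.168.1.10")
    b = ipaddress.ip_address("192.168.1.200")
    c = ipaddress.ip_address("10.0.0.5")

    # a and b share the LAN, so a switch can forward frames between them.
    print(a in lan and b in lan)   # True  -> switching is enough
    # c lies on a different network, so packets to it must be routed.
    print(c in lan)                # False -> routing is required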

3. Mobile & Wireless Technology

While a wireless system provides a fixed or portable endpoint with access to a distributed network, a mobile system
offers all of the resources of that distributed network to something that can go anywhere, barring any issues with local
reception or technical area coverage.

Wireless technology is tech that allows people to communicate or data to be transferred from one point to another
without using cables or wires. A lot of the communication is done with radio frequency and infrared waves.

4. Web Applications

A web application is software that runs in your web browser. Businesses have to exchange information and deliver
services remotely, and they use web applications to connect with customers conveniently and securely.

A web application, or web app, is an interactive computer program built using technologies like JavaScript, Cascading
Style Sheets (CSS), and HTML5. The finished web app is accessed using a web browser like Google Chrome or Mozilla
Firefox and often has a login or sign-up mechanism.
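
As a minimal sketch (an illustration using the Flask framework, which is not mentioned in the original text and is
assumed to be installed), a web application can be as small as this:

    from flask import Flask

    app = Flask(__name__)

    @app.route("/")
    def home():
        # The browser renders whatever this function returns.
        return "<h1>Hello from a web application!</h1>"

    if __name__ == "__main__":
        app.run(port=5000)   # then visit http://localhost:5000 in a browser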
5. Systems & Network Administration

System and network administrators manage an organization's technical infrastructure. Job activities include everything
from designing and implementing network schemas to managing digital licenses and hardware assets.

System administration functions include user management, system monitoring, backup and recovery, and access
control. System monitoring, backup, and recovery functions are typically integrated into an organization-wide
application. User management functions include user creation and assigning roles to users.

Network administration primarily consists of, but isn't limited to, network monitoring, network management, and
maintaining network quality and security. Network monitoring is essential to monitor unusual traffic patterns, the
health of the network infrastructure, and devices connected to the network.

A network administrator manages and maintains corporate networks and servers. They install new hardware and
applications, update existing systems, and continually monitor network performance. Their role ensures smooth and
secure network operation within a company.

Here are the four types of system administrators based on their roles and responsibilities:

Network Administrators

Network administrators manage the entire network infrastructure of an organization. They design and install computer
systems, routers, switches, local area networks (LAN), wide area networks (WAN), and intranet systems. They also
monitor the systems, provide maintenance and troubleshoot any problems when they arise.

Database Administrators

Database administrators (DBA) set up and maintain databases used in an organization. They may also be required to
integrate data from an old database into a new one or even create a database from scratch. In large organizations,
there are specialized DBAs who are only responsible for managing databases. In smaller organizations, the roles of
DBAs and server administrators can overlap.

Server/Web Administrators

Server or web administrators specialize in maintaining servers, web services and operating systems of the servers. They
monitor the speed of the internet to make sure that everything runs smoothly. They also analyze a website’s traffic
patterns and implement changes based on user feedback.

Security Systems Administrators

Security systems administrators monitor and maintain the security systems of an organization. They develop
organizational security procedures and also run regular data checkups - setting up, deleting and maintaining user
accounts.

In large organizations, these roles may all be separate positions within one department. In smaller organizations, they
may be shared by a few system administrators, or even one single person.

6. Data Centre Infrastructure

Data center network infrastructure is a constellation of networking resources that provides connectivity between data
center components, users, and internal and external resources, to support the storage and processing of applications
and data.

Data centers are made up of three primary types of components: compute, storage, and network. However, these
components are only the tip of the iceberg in a modern DC. Beneath the surface, support infrastructure is essential to
meeting the service level agreements of an enterprise data center.

At its simplest, a data center is a physical facility that organizations use to house their critical applications and data. A
data center's design is based on a network of computing and storage resources that enable the delivery of shared
applications and data. The key components of a data center design include routers, switches, firewalls, storage systems,
servers, and application-delivery controllers.
7. Human Computer Interaction (HCI)

Human-computer interaction (HCI) is the field of study that focuses on optimizing how users and computers interact
by designing interactive computer interfaces that satisfy users’ needs. It is a multidisciplinary subject covering
computer science, behavioral sciences, cognitive science, ergonomics, psychology, and design principles.

The emergence of HCI dates back to the 1980s, when personal computing was on the rise. It was when desktop
computers started appearing in households and corporate offices. HCI’s journey began with video games, word
processors, and numerical units.

However, with the advent of the internet and the explosion of mobile and diversified technologies such as voice-based
interfaces and the Internet of Things (IoT), computing became omnipresent. Advances in technology further changed
how users interact with machines. Consequently, the need for tools that make such human-machine interactions more
natural grew significantly. This established HCI as a discipline, bringing fields such as cognitive engineering, linguistics,
neuroscience, and others under its umbrella.

Today, HCI focuses on designing, implementing, and evaluating interactive interfaces that enhance user experience
using computing devices. This includes user interface design, user-centered design, and user experience design.

8. Network Security

Network security is any activity designed to protect the usability and integrity of your network and data. It includes
both hardware and software technologies. It targets a variety of threats. It stops them from entering or spreading on
your network. Effective network security manages access to the network.

Network Security protects your network and data from breaches, intrusions and other threats. This is a vast and
overarching term that describes hardware and software solutions as well as processes or rules and configurations
relating to network use, accessibility, and overall threat protection.

Network Security involves access control, virus and antivirus software, application security, network analytics, types of
network-related security (endpoint, web, wireless), firewalls, VPN encryption and more.

INTERNSHIP (16 weeks)

Students will undertake an Internship/Industrial Training for a minimum period of 16 weeks to prepare them for a
smooth transition from the classroom to the working environment.

Industrial Training refers to the placement of students in an organization to conduct supervised practical training in the
industry sector within the stipulated time before they are awarded a bachelor's degree.
DEGREE LEVEL 3

Students will make use of their previous studies and industrial experience to extend their familiarity in the field of
cloud computing and to refine their personal and professional development. Students will learn how to design and
manage cloud-based systems in enterprises using programming skills, management, and planning strategies. Students
will have a deeper understanding of enterprise network components, settings, and methodologies, as well as a better
understanding of edge computing concepts and applications. A final year project requires them to investigate and
develop a solution for a real-world problem - they will demonstrate their ability to combine technical knowledge,
critical thinking, and analytical skills to produce a personal achievement portfolio.

Common Modules

1. Project Management

Project management is aimed at producing an end product that will effect some change for the benefit of the
organisation that instigated the project. It is the initiation, planning and control of a range of tasks required to deliver
this end product.

2. Venture Building

What is venture building?

In corporate venture building, established companies build a separate venture from scratch. A new brand, team,
revenue stream, or P&L (profit-and-loss) is created to target untapped opportunity spaces – new customer segments,
technologies or capabilities – outside of the existing business.

Why is venture building important?

Venture builders play a crucial role in the success of building new businesses, as they offer a high level of autonomy. In
the realm of corporate venturing, venture builders actively participate in the startup process by contributing to product
development, acquiring initial customers, and assembling the core team.

Specialised Modules

1. Investigations in Cloud Engineering

Cloud engineering is the application of engineering disciplines to cloud computing. It brings a systematic approach to
concerns of commercialization, standardization, and governance of cloud computing applications.

2. Edge Computing Concepts and Applications

Edge computing is an emerging computing paradigm which refers to a range of networks and devices at or near the
user. Edge is about processing data closer to where it's being generated, enabling processing at greater speeds and
volumes, leading to greater action-led results in real time.

Edge computing is a distributed information technology (IT) architecture in which client data is processed at the
periphery of the network, as close to the originating source as possible.

Data is the lifeblood of modern business, providing valuable business insight and supporting real-time control over
critical business processes and operations. Today's businesses are awash in an ocean of data, and huge amounts of
data can be routinely collected from sensors and IoT devices operating in real time from remote locations and
inhospitable operating environments almost anywhere in the world.

But this virtual flood of data is also changing the way businesses handle computing. The traditional computing
paradigm built on a centralized data center and everyday internet isn't well suited to moving endlessly growing rivers
of real-world data. Bandwidth limitations, latency issues and unpredictable network disruptions can all conspire to
impair such efforts. Businesses are responding to these data challenges through the use of edge computing
architecture.
In simplest terms, edge computing moves some portion of storage and compute resources out of the central data
center and closer to the source of the data itself. Rather than transmitting raw data to a central data center for
processing and analysis, that work is instead performed where the data is actually generated -- whether that's a retail
store, a factory floor, a sprawling utility, or a smart city. Only the result of that computing work at the edge, such as
real-time business insights, equipment maintenance predictions, or other actionable answers, is sent back to the main
data center for review and other human interactions. Edge computing is thus reshaping IT and business computing.

3. Computer Systems Management

Systems management is the administration of the information technology (IT) systems in an enterprise network or data
center. An effective systems management plan facilitates the delivery of IT as a service and allows an organization's
employees to respond quickly to changing business requirements and system activity.

There are five main hardware components in a computer system: input, processing, storage, output, and
communication devices. Input devices are used for entering data or instructions into the central processing unit.

Specific examples include the systems engineering management plan; hardware, software, and data development plans;
system integration plans; system verification plans; system validation plans; system operations plans; sustainment plans;
maintenance plans; training and manuals plans; the system security plan; and the system safety plan.

4. Designing and Developing Applications on the cloud

Cloud application development is the process through which a Cloud-based app is built. It involves different stages of
software development, each of which prepares your app to go live and hit the market. The best Cloud app development
teams use DevOps practices and tools like Kubernetes.

Cloud app development is the software development process of building a cloud-based app. The process consists of
five main stages: discovery, design, development, testing, and maintenance and support. If you or your team develop a
cloud mobile app, knowledge of web development is a must.

5. Emergent Technology

Examples of emerging technologies:

• Artificial intelligence (AI)
• Machine learning
• Human-centered AI
• Internet of Things (IoT)
• Virtual Reality (VR)
• Datafication
• Augmented Reality (AR)
• Metaverse
• Predictive analytics
• Nanotechnology
• Blockchain and Web3 technology
• Sustainable technology
• Robotics
• Intelligent automation and robotic process automation (RPA)
• Automation
• Quantum computing
• Cybersecurity
• Neuromorphic computing
• Self-supervised learning
6. Enterprise Networking and Automation

An enterprise network consists of physical and virtual networks and protocols that serve the dual purpose of
connecting all users and systems on a local area network (LAN) to applications in the data center and cloud as well as
facilitating access to network data and analytics.

One of the main components of enterprise automation is an AI-powered chatbot. While some businesses have
standard chatbots embedded on their website, we're talking about modern, advanced chatbot technology. Think of
chatbots that use conversational AI as an extension of your team.

The module covers wide area network (WAN) technologies and quality of service (QoS) mechanisms used for secure
remote access, along with an introduction to software-defined networking, virtualization, and automation concepts
that support the digitalization of networks.

7. Cloud Infrastructure and Services

Cloud infrastructure is the collection of hardware and software elements, such as computing power, networking,
storage, and virtualization resources, needed to enable cloud computing. It usually also includes a user interface (UI)
through which users manage and access their virtualized resources.

Examples of cloud infrastructure automation include AWS CloudFormation, Azure Automation and Google Cloud
Deployment Manager, as well as third-party options, including Chef Automate, Puppet Enterprise, Red Hat Ansible
Automation Platform and VMware vRealize Automation.

8. Internet of Things: Concepts and Applications

The Internet of Things (IoT) describes the network of physical objects—"things"—that are embedded with sensors,
software, and other technologies for the purpose of connecting and exchanging data with other devices and systems
over the internet.

The most important features of IoT include connectivity, analysis, integration, sensing, active engagement, and more.
Some of them are described below:

Connectivity: Connectivity refers to establishing a proper connection between all the things in an IoT system and the
IoT platform, which may be a server or a cloud. Once the IoT devices are connected, high-speed messaging between
the devices and the cloud is needed to enable reliable, secure, bi-directional communication (a brief code sketch
follows this list).

Analyzing: After connecting all the relevant things, the next step is analyzing the collected data in real time and using it
to build effective business intelligence. If we gain good insight into the data gathered from all these things, we can call
the system a smart system.

Integrating: IoT integrates various models to improve the user experience.

Artificial Intelligence: IoT makes things smart and enhances life through the use of data. For example, if we have a
coffee machine whose beans are about to run out, the coffee machine itself can order your preferred coffee beans
from the retailer.

Sensing: The sensor devices used in IoT technologies detect and measure any change in the environment and report
on their status. IoT technology turns passive networks into active networks. Without sensors, there could be no
effective or true IoT environment.

Active Engagement: IoT enables connected technologies, products, and services to engage actively with each other.
Endpoint Management: Managing the endpoints of an IoT system is important; otherwise, the whole system can fail.
For example, if a coffee machine orders coffee beans when they run out, but we are away from home for a few days,
the order leads to a failure of the IoT system. So there is a need for endpoint management.
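
To make the connectivity idea concrete, here is a hypothetical Python sketch (added as an illustration; the endpoint
URL is a placeholder, and real deployments commonly use messaging protocols such as MQTT) of a device reporting a
sensor reading to an IoT platform:

    import json
    import urllib.request

    # A made-up reading from the coffee machine used in the examples above.
    reading = {"device_id": "coffee-machine-01", "beans_remaining_g": 120}

    req = urllib.request.Request(
        "https://iot.example.com/telemetry",   # hypothetical platform endpoint
        data=json.dumps(reading).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    # urllib.request.urlopen(req) would send the reading; when beans run low,
    # the platform could respond by triggering a reorder.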

9. Cloud Engineering Project

MQA COMPULSORY SUBJECTS*

1. Appreciation of Ethics and Civilisation (M’sian Students)

Appreciation of Ethics and Civilisations (UHMS) is about the concept of ethics from the perspectives of different
civilisations. It aims to identify the system, level of development, progress, and culture of a nation in strengthening
social cohesion.

2. Philosophy and Current Issues

Philosophy and Current Issues is a course which covers the relation between philosophy and the National Education
Philosophy and the Rukun Negara. Philosophy and Contemporary Issues is an introduction to philosophy and how it
relates to contemporary issues in Malaysia and to you.

3. Workplace Professional Skills

Professional skills are career competencies and abilities used in the workplace that are beneficial for nearly any job.
Professional skills are a combination of both hard skills (job-specific duties that can be trained) and soft skills
(transferable traits like work ethic, communication, and leadership).

There are three types of skills: functional, self-management and special knowledge. Functional skills are abilities or
talents that are inherited at birth and developed through experience and learning. Examples are: making decisions,
repairing machines or calculating taxes.

4. Integrity and Anti-corruption

In the Ministerial Guidelines, there are five (5) principles outlined which are known as the “TRUST Principles” (T – top
level commitment; R – risk assessment; U – undertake control measures; S – systematic review, monitoring and
enforcement; T – training and communication).
