
Unit 2

Network Management Problem: Introduction

Network management, in general, is a service that employs a variety of protocols, tools, applications, and devices to assist human network managers in monitoring and controlling network resources, both hardware and software, so that service needs and network objectives are met. When the Transmission Control Protocol/Internet Protocol (TCP/IP) suite was developed, little thought was given to network management. Prior to the 1980s, the practice of network management was largely proprietary because of the high development cost.

The rapid development in the 1980s towards larger and more complex networks caused a significant diffusion of network
management technologies. The starting point in providing specific network management tools was in November 1987, when
Simple Gateway Monitoring Protocol (SGMP) was issued. In early 1988, the Internet Architecture Board (IAB) approved Simple
Network Management Protocol (SNMP) as a short-term solution for network management. Standards like SNMP and Common
Management Information Protocol (CMIP) paved the way for standardized network management and development of
innovative network management tools and applications.

A network management system (NMS) refers to a collection of applications that enable network components to be monitored
and controlled. In general, network management systems have the same basic architecture, as shown in Figure 12.1. The
architecture consists of two key elements: a managing device, called a management station or manager, and the managed devices, called management agents or simply agents. A management station serves as the interface between the human network manager and the network management system.

It is also the platform for management applications to perform management functions through interactions with the
management agents. The management agent responds to the requests from the management station and also provides the
management station with unsolicited information. Given the diversity of managed elements, such as routers, bridges, switches,
hubs and so on, and the wide variety of operating systems and programming interfaces, a management protocol is critical for
the management station to communicate with the management agents effectively. SNMP and CMIP are two well-known
network management protocols. A network management system is generally described using the Open System Interconnection
(OSI) network management model.
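
To make the manager/agent relationship concrete, here is a minimal conceptual sketch in Python of a management station polling an agent and receiving an unsolicited notification. It is not tied to any real SNMP library; the class names are invented, and the OIDs, although standard-looking, are used purely for illustration.

# Conceptual sketch of the manager/agent pattern described above.
# Class names and the direct method calls are illustrative, not a real SNMP stack.

class ManagementAgent:
    """Runs on a managed device and answers requests from the manager."""
    def __init__(self, mib):
        self.mib = mib                      # management information exposed by the device

    def handle_get(self, oid):
        return self.mib.get(oid)            # respond to a solicited request

    def emit_trap(self, manager, oid, value):
        manager.receive_notification(oid, value)   # unsolicited notification (trap)

class ManagementStation:
    """The manager: polls agents and collects unsolicited notifications."""
    def __init__(self):
        self.events = []

    def poll(self, agent, oid):
        return agent.handle_get(oid)

    def receive_notification(self, oid, value):
        self.events.append((oid, value))

# Usage: the manager polls sysUpTime and the agent raises a link-down style trap.
agent = ManagementAgent({"1.3.6.1.2.1.1.3.0": 123456})        # sysUpTime (illustrative)
manager = ManagementStation()
print(manager.poll(agent, "1.3.6.1.2.1.1.3.0"))
agent.emit_trap(manager, "1.3.6.1.6.3.1.1.5.3", "ifIndex=2")   # linkDown (illustrative)
print(manager.events)

In a real deployment the poll and the trap would travel over SNMP (or CMIP) rather than direct method calls, but the division of roles is the same.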

As an OSI network management protocol, CMIP was proposed as a replacement for the simple but less sophisticated SNMP; however, it has not been widely adopted. For this reason, we will focus on SNMP in this chapter.

(Figure 12.1: Typical Network Management Architecture [1] — a network management application on the management station communicates over a network management protocol with agents on the managed devices.)

Overall, technology combined with a strong strategy can help mitigate the risk. Solutions such as two-factor authentication are
key to network security, says Pinson-Roxburgh. “Building an air gap is a useful method to limit external access to systems used
by high privilege users,” he adds.

At the very least, he says, firms should separate administrators’ accounts from their everyday user accounts. “Businesses could
also consider limiting the times when administrators are allowed access to the systems.”

At the same time, VPNs are “efficient and effective” at ensuring that remote access to systems is restricted from general internet access and is encrypted, Pinson-Roxburgh advises. “This is a useful piece of technology which is relatively cheap and very helpful.”

What is Network management?

Network management is the sum total of applications, tools and processes used to provision, operate, maintain, administer and
secure network infrastructure. The overarching role of network management is ensuring network resources are made available to
users efficiently, effectively and quickly. It leverages fault analysis and performance management to optimize network health.

Why do we need network management? A network brings together dozens, hundreds or thousands of interacting components. These components will sometimes malfunction, be misconfigured, get overutilized or simply fail. Enterprise network management software must respond to these challenges by employing the best-suited tools to manage, monitor and control the network.
Scope of Network management

Under the broad umbrella of computer networking, there are a variety of career opportunities. Network engineers, network architects, computer security professionals, and network and computer systems administrators are all professions that may be pursued after studying computer networking.

A computer network is a basic system that connects computers from all over the world and allows them to share resources. Computer networking may be used to link computers at home, at work, and even between two computers that are located in separate locations. The internet is the finest example, since it allows thousands of machines to exchange information across a secure network. IT network administrators at major corporations are constantly in demand, so a candidate interested in entering this sector should pursue it without hesitation, because the scope of networking keeps expanding as technology develops.


Variety and multi-vendor environment

Digital transformation is enabling organizations to transcend departmental and geographical boundaries, bringing multiple
teams and vendors to work together on projects. Multi-vendor IT projects that provide organizations access to different
technology capabilities and skills, and better cost-efficiency have become the norm of the day. Organizations can now rely on an
ecosystem of alliances, vendors, and contractors for specialized skills from around the world, without having to invest in
developing such talent in-house. However, this multi-vendor environment puts enormous strain on a traditional project
management structure.

Project management must move to a higher maturity level to facilitate collaboration within an ensemble of teams from
different entities with differing competencies, levels of process maturity, and pace and style of working, and spread across
different geographies.

In a survey conducted by PMI India’s Excellence Enablers Forum (EEF) among practitioners in IT organizations in India, project
management in a multi-vendor environment emerged as one of the topmost challenges. This led EEF to dive deeper to
understand how it impacts project delivery and recommend the right approach for success in this environment. In this white
paper, PMI EEF lays down a broad set of project management capabilities that will enable organizations to meet emerging
requirements.

The white paper uses a sample organization to explain how these challenges and recommendations play out in the real world.

Elements of a Network Management System

Fault Management
NetOutlook EMS supports advanced alarm management by detecting faults, failures and threshold crossing events in real-time.
Alarms can be filtered, labeled, sorted for rapid fault isolation, and exported for additional processes. Alarms can be forwarded to
other applications, such as email, for processing through the northbound interface. NetOutlook EMS provides fault isolation with
IEEE 802.1ag Loopback and Linktrace.
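
As an illustration of threshold-crossing detection in general (this is not NetOutlook EMS code, and the metric names and limits are invented), a fault manager can be sketched as a function that turns metric samples into severity-tagged alarms, which can then be filtered before being forwarded northbound:

# Illustrative sketch of threshold-crossing alarm detection and filtering.
from dataclasses import dataclass

@dataclass
class Alarm:
    device: str
    metric: str
    value: float
    severity: str

def check_thresholds(samples, thresholds):
    """Raise an alarm for every sample that crosses its configured threshold."""
    alarms = []
    for device, metric, value in samples:
        limit = thresholds.get(metric)
        if limit is not None and value > limit:
            severity = "critical" if value > limit * 1.2 else "major"
            alarms.append(Alarm(device, metric, value, severity))
    return alarms

samples = [("nid-01", "frame_loss_pct", 0.8), ("nid-02", "latency_ms", 12.0)]
thresholds = {"frame_loss_pct": 0.5, "latency_ms": 20.0}
alarms = check_thresholds(samples, thresholds)
# Filter, e.g. only critical alarms, before forwarding northbound (e.g. by email).
critical = [a for a in alarms if a.severity == "critical"]
print(alarms, critical)
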
Configuration Management
Management and provisioning are centralized with an intuitive graphical user interface that visually displays all iConverter NIDs on the network. NetOutlook EMS provides complete configuration capabilities for service parameters, traffic management, SLA
assurance and security. Bulk provisioning and configuration tasks can be scheduled or performed in real time. ITU-T Y.1564 and
RFC 2544 Service Activation Testing can be performed remotely with NetOutlook EMS for rapid service activation.
Accounting, Administration and Inventory
Newly installed NIDs are automatically detected and added to the inventory for management and provisioning. NetOutlook EMS
provides a complete infobase of the discovered iConverter NIDs in a tree view as well as geolocated map view. Both the tree and
map views facilitate informed troubleshooting with color-coded link status of all the NIDs. Inventory can be filtered, sorted and
queried with manual or automatic grouping of devices. NetOutlook EMS also supports discovery display of third-party
equipment.
Performance Monitoring
NetOutlook EMS supports real-time monitoring and historical reporting of SLA-impacting Key Performance Indicators (KPI).
Performance statistics are available at user-specified intervals and can be plotted in the graphical form displayed within the
NetOutlook EMS dashboard for analysis. Performance data can be analyzed and exported for reporting to third-party
applications.
Security
Northbound and Southbound interfaces of NetOutlook EMS provide full-session authentication and encryption via secure
protocols, including SSH, HTTPS and SNMPv3. Access privilege profiles can be assigned and customized for each user. User activities are logged by the NetOutlook EMS audit trail mechanism for future inspection.
BSS/OSS Integration
Designed to be a standalone management system or part of a management suite, NetOutlook EMS supports standard Northbound Interfaces (NBI) for cross-application integration with existing Billing Support Systems (BSS), Operational Support Systems (OSS) and umbrella applications.
Resiliency and Backup/Restore
NID configuration settings can be backed up and restored on-demand or at user-defined periodic intervals. High availability is
achieved with both server-level redundancy and database-level redundancy. NetOutlook EMS features automated backups and system error logs. In the case of disaster recovery, the system continues to operate uninterrupted.

NetOutlook EMS provides centralized management for iConverter GM3 NIDs and GM4 NIDs. iConverter NIDs are MEF Carrier Ethernet 2.0 certified, enabling delivery of advanced services, rapid service deployments, SLA assurance and comprehensive fault management.
Scale and Complexity
Among many possible measures which can be used to define the complexity of networks, the entropy of various network invariants
has been by far the most popular choice. Network invariants considered for defining entropy-based complexity measures include
number of vertices, number of neighbors, number of neighbors at a given distance [12], distance between vertices [13], energy of
network matrices such as Randić matrix [14] or Laplacian matrix [15], and degree sequences. There are multiple definitions of
entropies, usually broadly categorized into three families: thermodynamic entropies, statistical entropies, and information-
theoretic entropies. In the field of computer science, information-theoretic measures are the most prevalent, and they include
Shannon entropy [16], Kolmogorov-Sinai entropy [17], and Rényi entropy [18]. These entropies are based on the concept of the
information content of a system and they measure the amount of information required to transmit the description of an object.
The underlying assumption of using information-theoretic definitions of entropy is that uncertainty (as measured by entropy) is a
nondecreasing function of the amount of available information. In other words, systems in which little information is available
are characterized by low entropy and therefore are considered to be “simple.” The first idea to use entropy to quantify the
complexity of networks comes from Mowshowitz [19].
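
As a concrete example of an entropy-based measure, the following short Python sketch computes the Shannon entropy of a graph's degree sequence; the star graph used here is only illustrative, and other invariants from the list above (distances, matrix energies) would be handled analogously.

# Minimal sketch: Shannon entropy of a network's degree sequence.
import math
from collections import Counter

def degree_entropy(adjacency):
    """Shannon entropy (in bits) of the degree distribution of a graph."""
    degrees = [len(neighbors) for neighbors in adjacency.values()]
    counts = Counter(degrees)
    n = len(degrees)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# A star graph has a very regular degree sequence, so its entropy is low.
star = {"hub": ["a", "b", "c"], "a": ["hub"], "b": ["hub"], "c": ["hub"]}
print(degree_entropy(star))   # two distinct degrees (3 and 1) -> about 0.81 bits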

Despite the ubiquitousness of general-purpose entropy definitions, many researchers have developed specialized entropy
definitions aimed at describing the structure of networks [10]. Notable examples of such definitions include the proposal by Ji et
al. to measure the unexpectedness of a particular network by comparing it to the number of possible network configurations
available for a given set of parameters [20]. This concept is clearly inspired by algorithmic entropy, which defines the complexity
of a system not in terms of its information content, but in terms of its generative process. A different approach to measure the
entropy of networks has been introduced by Dehmer under the form of information functional [21]. Information functional can be
also used to quantify network entropy in terms of neighborhoods of vertices [12, 13] or independent sets of vertices [22]. Yet
another approach to network entropy has been proposed by Körner, who advocates the use of stable sets of vertices as the basis to
compute network entropy [23]. Several comprehensive surveys of network entropy applications are also available [9, 11].

Within the realm of information science, the complexity of a system is most often associated with the number of possible
interactions between elements of the system. Complex systems evolve over time, they are sensitive to even minor perturbations at
the initial steps of development and often involve nontrivial relationships between constituent elements. Systems exhibiting high
degree of interconnectedness in their structure and/or behavior are commonly thought to be difficult to describe and predict, and,
as a consequence, such systems are considered to be “complex.” Another possible interpretation of the term “complex” relates to
the size of the system. In the case of networks, one might consider to use the number of vertices and edges to estimate the
complexity of a network. However, the size of the network is not a good indicator of its complexity, because networks which
have well-defined structures and behaviors are, in general, computationally simple.

In this work, we do not introduce a new complexity measure or propose new informational functional and network invariants, on
which an entropy-based complexity measure could be defined. Rather, we follow the observations formulated in [24] and we
present the criticism of the entropy as the guiding principle of complexity measure construction. Thus, we do not use any specific
formal definition of complexity, but we provide additional arguments why entropy may be easily deceived when trying to
evaluate the complexity of a network. Our main hypothesis is that algorithmic entropy, also known as Kolmogorov complexity, is
superior to traditional Shannon entropy due to the fact that algorithmic entropy is more robust, less dependent on the network
representation, and better aligned with intuitive human understanding of complexity.

Study the Importance of Types of Networks - LAN, MAN, and WAN



Table of Contents

What Is a Computer Network?


Types of Networks
What Is Local Area Network (LAN)?
What Is Metropolitan Area Network (MAN)?
What Is Wide Area Network (WAN)?

Computer Networks are often differentiated based on the connection mode, like wired or wireless. They are categorized into
different types depending on the requirement of the network channel.

The network established is used to connect multiple devices to share software and hardware resources and tools.

In this article on ‘Types of Networks,’ we will look into different types of networks and some of their important features.


What Is a Computer Network?

A computer network is a connection between two or more network devices, like computers, routers, and switches, to share
network resources.
The establishment of a computer network depends on the requirements of the communication channel, i.e., the network can be
wired or wireless.

Next, let’s look into the types of networks available.

Types of Networks

According to the communication requirements, multiple types of network connections are available. The most basic type of
network classification depends on the network's geographical coverage.

Below mentioned are different types of networks:

• PAN (Personal Area Network)
• LAN (Local Area Network)
• MAN (Metropolitan Area Network)
• WAN (Wide Area Network)

Let’s look into each of the network types in detail.

What Is Local Area Network (LAN)?

The Local Area Network (LAN) is designed to connect multiple network devices and systems within a limited geographical
distance. The devices are connected using multiple protocols for properly and efficiently exchanging data and services.

Classification of Devices

On completion of discovery, Network Manager automatically classifies all discovered network devices based on a
predefined device class hierarchy. You can change the way network devices are classified.

• Changing the device class hierarchy: Change the device class hierarchy to change the way network devices are classified. A common situation that requires a change to the class hierarchy is when the discovery process identifies an unclassified device, that is, a device that is not defined in the class hierarchy.
• AOC file samples: Use the AOC file samples to understand how Network Manager assigns discovered devices to the device classes in the class hierarchy.
• Entity types: The entityType table contains all the entity types that are available in the NCIM topology database.

FCAPS: The Industry Standard Definition

What is FCAPS?

FCAPS (fault, configuration, accounting, performance and security) is a network management framework created by the International Organization for Standardization (ISO). The primary objective of this network management model is to better understand the major functions of network management systems.

Introduced in the early 1980s, the goal was to move away from a reactive form of network management to a proactive approach -- for example, to empower administrators to take more control of their infrastructure to identify and rectify minor issues before they become major problems.

What does FCAPS stand for?

FCAPS is an acronym for the five working levels of network management: fault, configuration, accounting, performance and security. The FCAPS model is also known as the ISO network management model or the OSI network management model. Sometimes, it is also referred to as the OSI/ISO network management model.

Fault management level

Network faults happen. This makes it critical to find them early before they cause serious issues. In the FCAPS model of network management, organizations can find and correct network problems at the fault management level.

Today, the ability to detect, isolate, log and fix potential faults is a necessary component of every network. By reviewing historical fault data, network administrators can also identify patterns and trends to enhance proactive measures that help significantly improve network stability.

For example, you can also identify potential future issues and take steps to prevent them from occurring or recurring. With fault management, the network stays operational while minimizing any potential downtime.
Configuration management level

Configuration management plays a crucial role within the network. For example, it helps network administrators track and manage deployments and related upkeep in a centralized manner.

Configuration management is a critical operational capability, as it establishes the foundation for all other network management functions. To make this process user-friendly and as seamless as possible, organizations must:

• centralize the storage of configurations
• set the stage for future expansion
• streamline device configurations and provisioning
• seamlessly track changes

For example, at the configuration management level, network operation is monitored and controlled. Hardware and programming changes -- including the addition of new equipment and programs, modification of existing systems and removal of obsolete systems and programs -- are coordinated. Organizations can also keep an inventory of equipment and programs and update them regularly.
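
A minimal sketch of this idea, assuming a simple key-value view of device settings (the device names and parameters below are invented), is to compare each device's running configuration against a centrally stored baseline and report the drift that needs remediation:

# Hedged sketch: detect configuration drift against a stored baseline.
baseline = {"ntp_server": "10.0.0.1", "snmp_version": "v3", "syslog": "enabled"}

running_configs = {
    "switch-01": {"ntp_server": "10.0.0.1", "snmp_version": "v2c", "syslog": "enabled"},
    "switch-02": {"ntp_server": "10.0.0.1", "snmp_version": "v3", "syslog": "enabled"},
}

def find_drift(baseline, running_configs):
    """Return, per device, the settings that differ from the baseline."""
    drift = {}
    for device, config in running_configs.items():
        diffs = {k: (config.get(k), v) for k, v in baseline.items() if config.get(k) != v}
        if diffs:
            drift[device] = diffs
    return drift

print(find_drift(baseline, running_configs))
# {'switch-01': {'snmp_version': ('v2c', 'v3')}}  -> candidate for remediation
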
Accounting management level

The accounting management level, or the allocation level, is devoted to distributing resources optimally and fairly among network subscribers. This makes the most effective use of the systems available, minimizing the cost of operation. Sometimes called the administration level, the accounting level is also responsible for ensuring users are billed appropriately.

The function of accounting management in FCAPS is to help administrators configure users and groups based on permissions granted to them within the system. Access is also restricted to ensure only authorized users are allowed to make significant changes to critical network systems.


(Figure: identity and access management can help admins configure users and groups based on permissions and restrict access to authorized users as part of the FCAPS accounting management level.)

Performance management level

The performance management level helps better manage the overall performance of the network. Organizations can maximize the throughput, avoid network bottlenecks and identify potential problems. A major part of this process is to determine which improvements yield the most significant overall performance enhancement.

Performance management tools allow network administrators to monitor performance and troubleshoot issues in real time, while remaining accessible and easy to use. Performance data is also regularly used to identify patterns and trends to make predictions.
(Figure: five steps of performance management.)

Security management level

Security management concentrates on limiting and controlling access to digital assets located within the network. This is because organizations must protect the network from hackers, unauthorized users, and physical or electronic sabotage at the security management level.

They can use encryption protocols, user authentication tools and endpoint protection to add more layers and better secure their network. Organizations can also add physical protection solutions to better secure networking equipment.

This approach helps maintain the confidentiality of user information where necessary or warranted. Security systems also allow network administrators to control what each individual authorized user can (and cannot) do within the system.
What does the future hold for FCAPS?

Although FCAPS is complex and far-ranging, many of its principles are now outdated. It should be updated to reflect the new reality of how we manage modern network infrastructure.

The FCAPS security management model was conceived in a pre-cloud computing era where ownership, responsibility and control were unambiguous and straightforward. At the time, it was easy to implicitly assume that certain entities both owned and controlled assets. We cannot do that anymore.

When applications live in the cloud, fault detection is more challenging because we work with virtualized servers. For example, different tenants could experience a fault originating from the same source, like an overloaded link or an overloaded server. Constant device additions and upgrades also contribute to configuration errors and, eventually, faults.

As a result, organizations must secure data, applications and services that run on the cloud to ensure regulatory compliance. However, this responsibility is also shared, to a certain extent, by the cloud service providers who own and control the equipment.

As such, we must reimagine the role of FCAPS in the cloud and on premises. For example, we need to define how FCAPS helps improve reliability, availability, provisioning, orchestration, cost optimization and data protection in virtualized environments.

The motivation for automation

Automation saves time. This can be considered a universal fact, but the question is: how do we enable it? One key is standardization.

Automation: What are some of the challenges?

First, let’s define automation.

Automation = a process (i.e., a fully defined series of steps) to manage resources, generally run by a machine.

Automation can cover anything really, e.g. a process to cook a meal or to fold clothes. To be effective, processes have to be fully defined, covering all use cases to get the job done, e.g. if a clothing article is wrinkled, then iron it first.

Implementing automation is generally difficult, as it often ultimately entails writing code so it can be run by machines (perhaps a robotic arm or two in the aforementioned examples). Supporting and maintaining it can be even harder as resources scale and change.


What is standardization and how does it help?

Standardization = definition of common attributes for resources. Common synonyms include: abstraction, normalization, parameterization.

Standardization is the practice of breaking down your resources into common attributes. For a clothing article, these may include: type, wrinkled, clean, dry, shelf. It helps minimize automation logic, as it allows any resource to be treated simply as a set of values for these attributes. It also increases the impact of automation, as new resources can be incorporated just by specifying a set of values. This impact motivates us to further standardize and automate — a (not so) “vicious” cycle — since it gives us confidence that the process being developed and maintained will be scalable and future-proof (so long as you properly enforce the standardization over time).

An Example

Suppose you own a convenience store that sells various items and you need to automate the processes of stocking and selling items. How would you go about this? First, you would break down your items into attributes that are important to your processes, i.e. you would standardize them.

name, price, cost, vendor, expiration date, aisle, shelf, size, weight, refrigerated, minimum age to purchase

With the above attributes defined for each item, you can define your processes to simply depend on values for these attributes. This lets you define how to sell and stock a general “item” and not have to worry about any details beyond your defined attributes. To help further automate these processes, you may also want to define attributes for a “sale”.

date/time, item name, quantity, payment method

This allows you to sum up how many items of each type you have left in stock so you know when to call a “vendor” (potentially another resource to standardize) to restock it. It also helps you analyze what items sell best and when. Of course there are many more resources and attributes that can be explored for this example, but we’ll leave the Running a Convenience Store article of this series for another time.
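
A rough Python sketch of this example follows; the attribute names mirror the lists above, while the concrete values and the age check are assumptions added for illustration. Once items and sales are reduced to these attributes, one generic sell process works for any item.

# Sketch of standardized "item" and "sale" resources and a generic sell process.
from dataclasses import dataclass
from datetime import date

@dataclass
class Item:
    name: str
    price: float
    cost: float
    vendor: str
    expiration_date: date
    aisle: int
    shelf: int
    refrigerated: bool
    minimum_age: int = 0

@dataclass
class Sale:
    when: str
    item_name: str
    quantity: int
    payment_method: str

def sell(item, quantity, buyer_age, payment_method, when):
    """Generic selling process that only depends on the standardized attributes."""
    if buyer_age < item.minimum_age:
        raise ValueError(f"buyer too young for {item.name}")
    return Sale(when, item.name, quantity, payment_method)

milk = Item("milk", 2.49, 1.10, "DairyCo", date(2024, 6, 1), aisle=3, shelf=1,
            refrigerated=True)
print(sell(milk, 2, buyer_age=30, payment_method="card", when="2024-05-20 09:15"))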

Why automation fails

1. Impractical Expectations – A 100% Automation

The very first test automation failure stems from impractical expectations. I have observed it many times in my career: once you get an automation QA engineer or staff on board, the management expects them to automate testing for everything. As pleasing as it may sound, it is not possible. You can’t go for 100% automation testing, as there are going to be a few areas where human inspection is mandatory. One of these areas could be the accessibility of your web application, or more.

For instance, if you are performing automated cross browser testing, the automation script for Selenium testing will render your web pages across different browsers and operating systems. However, deciding whether the website renders as per the design, whether the typography looks right, and whether the text is appropriate is best done manually.

2. What To Automate & How Much To Automate?

Many organizations do realize the problem with expecting 100% automation testing but often struggle with the following question: what can we automate and, if not 100%, then how much automation can we realistically achieve for our web product?

There is no perfect percentage or approximate figure for automation testing coverage that is applicable to every business. It all depends upon the web application that you are offering; since different businesses cater to different needs, it is only natural to expect a different answer to how much automation testing one can realistically go for. The scope of automation testing will differ from e-commerce web applications to static, dynamic, or animated web applications. So, if you are wondering why automation testing fails for your organization, I would recommend you evaluate the amount of automation testing required based on the type of web application you are offering.

3. Improper Management Leading To Lack Of Visibility For Test Automation

I have been a victim of improper management back when I started my IT career as an automation tester. I was working for a service-based company and they allocated me my first project. This project had been running for a couple of years by then, and I was handed a list of test automation scripts right after I joined. The higher-ups of the project were about to leave the organization, and the management was too busy with the upcoming sprints to consider thorough knowledge transition sessions from the senior automation testers who were about to leave. What happened after they left wasn’t a pretty sight. We were slammed with outages, and I, being a fresher with minimal knowledge of how the various outbound and inbound processes were being impacted by numerous automation scripts, was at the receiving end from my manager. However, when I look back at that scenario, I realize that it wasn’t entirely my fault.
I have seen teams with a handful of members in charge of implementing automation while the others are clueless about
what’s going on.
Don’t you think it’s a bit unrealistic to expect magic out of automation testing when half the team lacks visibility? Since
automation has to be a collaborative effort, it is important to educate every team member about the related tools and
processes, especially the freshers. You can accomplish this by holding team meetings and sessions to discuss tools, trends,
and practices related to automation.

4. No Understanding of Manual Testing or Exploratory Testing

This may surprise you a little: another reason why test automation fails for you could be the lack of manual testing skills or exploratory testing skills. Automating your test scripts doesn’t mean that team members can cut themselves some slack. As we know by now, an automated approach doesn’t cover everything, and that is where the challenge begins. Because now you have to dig deeper into your web application and find critical test scenarios that were not yet revealed by your teammates.

Automation is a way to save testing efforts. Software companies should use it to minimize repetitions and automate only
those elements that are less prone to changes. Once that is done, the company should allocate their resources to perform
extensive manual testing or exploratory testing to find unique test cases.


5. Not Thinking Through And Scripting The Scenario

Automation seems like a one-stop destination for minimizing effort, but a well-thought-out scenario is a must before developing a test automation script. Moreover, scripting can take up a substantial amount of your automation tests’ overall execution time. The flexibility of frameworks and test automation tools plays a crucial role in how much time it takes to develop a scripted scenario.

Since every scenario is different, scripting is a must. Even if you think it through, it’s all a waste without scripting the
scenario. Ensure that the coding skills of your test engineer are at par with the complexity of the tests. Complex tests take a
lot of time to automate. Therefore, with the development of brand new features, they often don’t get a chance to discover
regression bugs. Make sure you keep these things in mind before you write down your test scenario.

6. Lack Of Understanding About When To Use Automation And When Not To!

The most common reason behind “why test automation fails for your company” is that people are not aware of when to automate and when not to. For instance, it’s alright to automate different webpage functionalities, but it’s not a good idea to evaluate padding, images, and similar rendering issues through test automation. If you are using coordinates to determine element locations, it can lead to discrepancies when tests run on varying screen resolutions and sizes. It’s not viable to use automation when you are testing something prone to a lot of changes. If you are testing a stable entity, automation is the way to go. Basically, mundane tasks that require a certain action to be repeated are best suited for automated testing, so test automation can complement your regression testing process.

7. Improper Selection of Staff And Resource Planning

I have seen a false belief rampant in the IT industry: people think that any developer or tester can carry out test automation. Design, configuration, and implementation of test automation call for a specific skill set. A tester carrying out automation should know how to articulate ideas between managers, developers, and customers. He or she should also have a clear understanding of development trends and should know where the development team is headed.

Automation test engineers are some of the most difficult, yet, significant hires. To kickstart various automation projects, it is
essential to hire testers with extensive technical knowledge. Instead of one or a few people carrying out automation testing,
the entire team should be aware of what’s going on. Even though the investment in hiring technically sound staff is high, the
return is worthwhile.

8. Not Paying Enough Attention To Test Reports

Since automation testing is a relatively new phenomenon, chances of failure are high. There are so many new experiments
the testing team conducts that it becomes important to analyze the results accurately. After carrying out the tests, the tester
has to make a thorough test report. But here is why test automation fails for you! Your team is not paying enough attention
to test report analysis. If not carried out properly, the analysis can leave faults unattended and cause wastage of time,
resources and efforts.

Some tests succeed and some fail in automated testing. Therefore, it is mandatory to examine test reports for faults and
analyze the reason behind the failure of certain tests. It is better to conduct the analysis manually so as to uncover genuine
failures. It is vital to unmask hidden problems and make sure that they don’t get overlooked due to masking by other issues.

9. Bottom-Up Approach In Defining Your Automation Goals

Setting too good to be true objectives for automation seems perfect on paper. But when it comes to executing the steps, there
is a severe lack of clarity among team members. The biggest problem is that the goals are vague. They lack precision and
accuracy for obtaining real value from automation. What most firms do is that they start automating something very
complex and end up refactoring the whole framework. As a result, the team ends up losing a lot of time, money, and effort.

You can eliminate uncertainties by starting small and working your way up to complexities. Pick out stable functionalities
and begin with their automation initially. After that, gather feedback to determine what’s wrong. Once you achieve
consistency in their testing, continue with other functionalities. Have a customized approach for test automation since needs
can vary for different project contexts.

10. Selection Of The Right Tool For Efficient And Effective Testing

With a plethora of automation tools out there, it sometimes becomes challenging to choose the best one. Improvement of the overall testing procedure and meeting real requirements is the end goal, but most teams fail to sift through the chaff and pick out the tools that best suit their testing needs. Automation testing is, without a doubt, highly dependent on the tool you decide to go ahead with. Every tool has specific capabilities, but teams often lack the level of expertise needed to get the best out of these capabilities.
Moreover, firms get caught up in the hype of a particular tool. But after opting for it, they realize that it doesn’t provide
everything that they were hoping to get. Plus, every team has a budget and sometimes the cost of the tool exceeds that.
Before jumping on to choosing a hyped tool, carefully line out the requirements. After that, decide what you are expecting
from the tool. Be very specific in setting goals and check the correspondence with user acceptance criteria for products. You
can also consult experts who are experienced with the use of these tools.

Talking about automation testing tools, if you are looking to perform automated cross browser testing on cloud then
LambdaTest offers you a cloud-based Selenium Grid with 3000+ real browsers and operating systems, along with
integrations to multiple third-party CI/CD tools.


11. Ignoring False Negatives & False Positives

This is something which is often observed in almost every organization. Once the automation test suites are ready and they
seem to work fine, the management starts to relax. They start slacking off on in-depth analysis of test execution as they
believe that only pass-fail checking will do enough. But this is why test automation fails for them!

Sometimes, a system works fine fundamentally. However, automation scripts don’t reflect the same. They state otherwise
and cause a false positive scenario. Thus, it creates a situation of confusion and wastes time, effort, and resources. I have
seen how frustrating it is for the testing team trying to find something that isn’t there!

Another scenario is when the automation script gives the green signal but there is something wrong: the system isn’t working as it should, yet the script declares otherwise. Network issues can cause discrepancies in the test environment settings. This can also occur due to a lack of accuracy in the beginning stages of a database. Leaving a system in a compromised state can cause catastrophic consequences in the long term.

12. Web Elements With Undefined IDs

It is mandatory for every web element to have an ID to execute efficient testing. But sometimes, the developers fail to allot
IDs to all web elements and this is why test automation fails. In this case, the automated script has to find these web
elements which takes up a lot of time. Moreover, if the script is unable to find these elements within a prescribed time
frame, the test fails. Thus, to ensure proper synchronization of the script, the team has to allot unique IDs to all web
elements.
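
A short, hedged illustration of this point using the Selenium Python bindings is shown below; the URL and element identifiers are placeholders, and the CSS fallback is only meant to show how much more fragile lookups become when no ID exists.

# Sketch: locating elements by a stable ID versus falling back to a CSS selector.
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Chrome()
driver.get("https://example.com/login")   # placeholder URL

# Preferred: a unique ID assigned by the developers.
driver.find_element(By.ID, "login-button").click()

# Fallback when no ID exists: wait for a CSS match, which is slower and can time
# out, making the test fail for reasons unrelated to the application under test.
wait = WebDriverWait(driver, timeout=10)
submit = wait.until(EC.element_to_be_clickable((By.CSS_SELECTOR, "form button[type='submit']")))
submit.click()

driver.quit()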

Organization of Network Management Software


With networks scaling across wired, wireless, and virtual IT environments, network management only becomes increasingly complex, putting network admins in need of all the help they can get. With a myriad of network management solutions available in the market, it is important to zero in on the right one. Network management tools generally help you assemble various metrics of the network in a single, easy-to-comprehend dashboard. Among the network management solutions available, a reliable and effective one needn't necessarily take up a huge chunk of your budget. An effective, secure, yet affordable network management solution is your best bet to streamline network management. Comprehensive network management solutions are preferable for large organizational networks, as they reduce the complexity involved in depending on multiple tools to manage networks. Choosing the right real-time network management software can make a decisive difference in how you run your network.

OpManager is a proactive network management software that reduces network outages and helps you gain control over the network quickly.

Configuration and operation


Configuration Management is the process of maintaining systems, such as computer hardware and software, in a desired state. Configuration Management (CM) is also a method of ensuring that systems perform in a manner consistent with expectations over time.

Originally developed in the US military and now widely used in many different kinds of systems, CM helps identify systems that need to be patched, updated, or reconfigured to conform to the desired state. CM is often used with IT service management as defined by the IT Infrastructure Library (ITIL). CM is often implemented with the use of configuration management tools such as those incorporated into VMware vCenter.


Why is Configuration Management important?

Configuration Management helps prevent undocumented changes from working their way into the environment. By doing so, CM can help prevent performance issues, system inconsistencies, or compliance issues that can lead to regulatory fines and penalties. Over time, these undocumented changes can lead to system downtime, instability, or failure.

Performing these tasks manually is too complex in large systems. Software configuration management can involve hundreds or thousands of components for each application, and without proper documentation IT organizations could easily lose track of which systems require attention, what steps are necessary to remediate problems, what tasks should be prioritized and whether changes have been validated and propagated throughout the system.

A configuration management system allows the enterprise to define settings in a consistent manner, then to build and maintain them according to the established baselines. A configuration management plan should include a number of tools that:

• Enable classification and management of systems in groups
• Make centralized modifications to baseline configurations
• Push changes automatically to all affected systems to automate updates and patching
• Identify problem configurations that are underperforming or non-compliant
• Automate prioritization of actions needed to remediate issues
• Apply remediation when needed.


As organizations increasingly adopt a microservices architecture composed of many code segments of various sizes connected by APIs, the need for a consistent configuration management process becomes even more apparent, where each service utilizes metadata that encodes specs for resource allocation, secrets like passwords, and endpoints that define connections to other services for registration and initialization.

Through the use of these tools, a configuration management plan provides a ‘single version of the truth’ for the desired state of systems across the organization by giving visibility to any configuration modifications, enabling audit trails and tracking of every change made to the system.
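
As a small sketch of such per-service metadata (the service names, resource figures and endpoints are invented), each spec can be kept in the central configuration store and validated against it, giving the 'single version of the truth' described above:

# Sketch: per-service configuration metadata and a simple consistency check.
service_specs = {
    "orders": {
        "resources": {"cpu": "500m", "memory": "256Mi"},
        "secrets": ["orders-db-password"],          # a reference, never the value itself
        # "billing" is intentionally missing from the store to show drift being caught
        "endpoints": {"inventory": "http://inventory:8080", "billing": "http://billing:8080"},
    },
    "inventory": {
        "resources": {"cpu": "250m", "memory": "128Mi"},
        "secrets": [],
        "endpoints": {},
    },
}

def validate(specs):
    """Single source of truth: every declared endpoint must refer to a known service."""
    known = set(specs)
    problems = []
    for name, spec in specs.items():
        for dep in spec["endpoints"]:
            if dep not in known:
                problems.append(f"{name} depends on unknown service '{dep}'")
    return problems

print(validate(service_specs) or "all endpoint dependencies resolve")

Here the validator intentionally flags the missing billing service, showing how drift from the declared state surfaces as an audit finding.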


Configuration and protocol layering
Configuration is the manner in which components are arranged to make up the computer system. Configuration consists of both hardware and software components. Sometimes, people specifically point to the hardware arrangement as the hardware configuration and to the software components as the software configuration. Understanding computer configuration is important because, for certain hardware or software applications, a minimum configuration is required.

A protocol is a set of rules and standards that primarily define a language that devices use to communicate. There is a wide range of protocols in extensive use in networking, and they are usually implemented in numerous layers.

A protocol provides a communication service that processes use to exchange messages. When the communication is simple, we can use only one simple protocol. When the communication is complex, we must divide the task between different layers and follow a protocol at each layer; this technique is called protocol layering. Layering allows us to separate the services from the implementation.

Each layer receives a set of services from the lower layer and provides services to the upper layer. A modification made in any one layer should not affect the other layers.

Basic Elements of Layered Architecture

The basic elements of the layered architecture are as follows (a minimal sketch follows the list) −

• Service − the set of actions or services provided by one layer to the layer above it.
• Protocol − the set of rules that a layer uses to exchange information with its peer entity. It is concerned with both the contents and the order of the messages used.
• Interface − the way a message is transferred from one layer to another layer.
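
The sketch below, in Python, illustrates the idea with invented header formats: each layer adds its own header on the way down and strips it on the way up, exposing a service to the layer above while hiding its implementation.

# Minimal sketch of protocol layering with made-up textual headers.
class Layer:
    def __init__(self, name, lower=None):
        self.name = name
        self.lower = lower          # the layer this one builds on

    def send(self, payload):
        wrapped = f"[{self.name}]{payload}"          # add this layer's header
        return self.lower.send(wrapped) if self.lower else wrapped

    def receive(self, data):
        data = self.lower.receive(data) if self.lower else data
        assert data.startswith(f"[{self.name}]")     # peer entity speaks the same protocol
        return data[len(self.name) + 2:]             # strip this layer's header

physical = Layer("PHY")
transport = Layer("TCP", lower=physical)
application = Layer("HTTP", lower=transport)

on_the_wire = application.send("GET /index.html")
print(on_the_wire)                        # [PHY][TCP][HTTP]GET /index.html
print(application.receive(on_the_wire))   # GET /index.html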

Reasons

The reasons for using layered protocols are explained below −

• Layering of protocols provides well-defined interfaces between the layers, so that a change in one layer does not affect an adjacent layer.
• The protocols of a network are extremely complicated, and designing them in layers makes their implementation more feasible.

Advantages

The advantages of layered protocols are as follows −

• Assists in protocol design, because protocols that operate at a particular layer have defined data that they work on and a defined interface to the layers above and below.
• Fosters competition, because products from completely different vendors can work together.
• Prevents technology or capability changes in one layer from affecting the layers above and below it.
• Provides a common language to describe networking functions and capabilities.

Disadvantages

The disadvantages of layered protocols are as follows −

• The main disadvantage of layered systems is the overhead, both in computation and in message headers, caused by the abstraction barriers between layers. Because a message typically must pass through many (10 or more) protocol layers, the overhead of those boundaries is often more than the computation being done.
• The upper-level layers cannot see what is inside the lower layers, implying that an application cannot pinpoint where in a connection a problem lies or precisely what the problem is.
• The higher-level layers cannot control all aspects of the lower layers, so they cannot modify the transfer system when that would be helpful (such as controlling windowing, header compression, CRC/parity checking, et cetera), nor specify routing; they must rely on the lower protocols working and cannot specify alternatives when there are issues.

DEPENDENCIES AMONG CONFIGURATION PARAMETERS


Configuring dependency providers

The Creating and injecting services topic describes how to use classes as dependencies. Besides classes, you can also use other
values such as Boolean, string, date, and objects as dependencies. Angular DI provides the necessary APIs to make the
dependency configuration flexible, so you can make those values available in DI.

Specifying a provider token

If you specify the service class as the provider token, the default behavior is for the injector to instantiate that class using
the new operator.

In the following example, the Logger class provides a Logger instance.

providers: [Logger]
You can, however, configure a DI to use a different class or any other different value to associate with the Logger class. So
when the Logger is injected, this new value is used instead.

In fact, the class provider syntax is a shorthand expression that expands into a provider configuration, defined by
the Provider interface.

Angular expands the providers value in this case into a full provider object as follows:

[{ provide: Logger, useClass: Logger }]

The expanded provider configuration is an object literal with two properties:

• The provide property holds the token that serves as the key for both locating a dependency value and configuring the injector.
• The second property is a provider definition object, which tells the injector how to create the dependency value. The provider-definition key can be one of the following:
  o useClass - this option tells Angular DI to instantiate a provided class when a dependency is injected.
  o useExisting - allows you to alias a token and reference any existing one.
  o useFactory - allows you to define a function that constructs a dependency.
  o useValue - provides a static value that should be used as a dependency.

SEEKING A MORE PRECISE DEFINITION OF CONFIGURATION

Network configuration focuses on managing a network and its devices by applying the right set of policies, controls, and
configurations. It encompasses activities from device discovery to configuration backups for efficient network administration.

What are the types of network configuration?

Network administrators maintain a well-organized information repository of the network devices with details, such as device
location or network address and device settings, as part of configuration management. This configuration database works as a
guiding source for admins while making updates and changes in the network.

Generally, network topologies denote various types of network configuration. Network topology refers to the systematic
arrangement of nodes or devices in a network that allows them to exchange information.

Network topologies are of two types—physical and logical. Physical topology depicts the linkages between physical devices via
cables, wires, etc., in a network. In contrast, logical topology denotes how information is transferred through a network. The way
devices interact in a network is also a part of the logical topology.

Some of the popular physical network topologies are as follows (a small illustrative sketch follows the list):

o Bus: Every node or device in the network is connected in a linear order with a unidirectional data flow. Bus topology is cost-
effective, but it can break down quickly when there’s high network traffic.
o Ring: Nodes are connected circularly, while data can flow in one or both directions as per needs. Ring networks are easy to set
up and expand, but troubleshooting is often challenging.
o Star: A central server or node manages all other nodes with point-to-point communication. Star topology is commonly used in
local area networks because of benefits such as centralized control, better security, and easy configuration. However, the entire
network can crumble if the central server fails.
o Mesh: Nodes are linked in a web-like structure with point-to-point connections with every other node in the network. Data
transmits through routing (shortest-path approach) and flooding methods (broadcast approach). Mesh networks are highly reliable
but expensive to set up and maintain.
o Tree: Nodes are interconnected in hierarchical order with at least three levels. Tree network is an extension of star topology and
is used in wide area networks (WANs).
o Hybrid: Hybrid combines two or more topologies. Organizations looking for flexibility in their IT infrastructure prefer hybrid
networks.
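
To make the comparison tangible, here is a small illustrative sketch that represents a star and a ring as adjacency lists (node names are arbitrary); with this representation, properties such as the star's single point of failure are easy to inspect.

# Sketch: star and ring topologies as adjacency lists.
def star(center, leaves):
    topo = {center: list(leaves)}
    topo.update({leaf: [center] for leaf in leaves})
    return topo

def ring(nodes):
    n = len(nodes)
    return {nodes[i]: [nodes[(i - 1) % n], nodes[(i + 1) % n]] for i in range(n)}

star_topo = star("switch", ["pc1", "pc2", "pc3"])
ring_topo = ring(["r1", "r2", "r3", "r4"])

# In a star, removing the central node disconnects everything; in a ring,
# any single node can fail and the remaining nodes still form a path.
hub_degree = max(len(neigh) for neigh in star_topo.values())
print(star_topo, ring_topo, hub_degree)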

CONFIGURATION AND TEMPORAL CONSEQUENCES

We investigate the temporal implications of information technology by examining its use in the work practices of
physicians and nurses in an emergency department. We conceptualize that the temporality in work practices is
constituted by temporal enactment (e.g., linearity), temporal construal (e.g., autonomy), and temporal spatiality (e.g.,
regionalization). By using this categorization we found that information technology impinges on temporal organizing
by imposing its specific temporal logics and by being location dependent. Distinct information technologies have
different impacts on temporality in work, and temporal effects of the same information technology vary across work
groups. This highlights the need for alternative technological configurations to support varying temporal practices.
The findings underscore the potential of information technology as a temporal boundary object that reconciles
differences in temporal organizing of work groups.

CONFIGURATION AND GLOBAL CONSISTENCY

Configuration management (CM) is a systems engineering process for establishing and maintaining consistency of
a product's performance, functional, and physical attributes with its requirements, design, and operational information
throughout its life.[1][2] The CM process is widely used by military engineering organizations to manage changes
throughout the system lifecycle of complex systems, such as weapon systems, military vehicles, and information
systems. Outside the military, the CM process is also used with IT service management as defined by ITIL, and with
other domain models in the civil engineering and other industrial engineering segments such as roads,
bridges, canals, dams, and buildings.

To begin, the private sector offers many good examples of cooperation. The industry deserves credit for taking the lead in many areas—developing technical and risk management standards, convening information-sharing forums, and spending considerable resources. International bodies, including the Group of 7 Cyber Experts group and the Basel Committee, are creating awareness and identifying sound practices for financial sector supervisors. This is important work.

But there is more to be done, especially if we take a global perspective. There are four areas where the international community can come together and boost the work being done at the national level:

First, we need to develop a greater understanding of the risks: the source and nature of threats and how they might impact financial stability. We need more data on threats and on the impact of successful attacks to better understand the risks.

Second, we need to improve collaboration on threat intelligence, incident reporting and best practices in resilience and response. Information sharing between the private and public sector needs to be improved—for example, by reducing barriers to banks reporting issues to financial supervisors and law enforcement. Different public agencies within a country need to communicate seamlessly. And most challenging, information sharing between countries must improve.

Third, and related, regulatory approaches need to achieve greater consistency. Today, countries have different standards, regulations, and terminology. Reducing this inconsistency will facilitate more communication.

Finally, knowing that attacks will come, countries need to be ready for them. Crisis preparation and response protocols should be developed at both the national and cross-border level, so as to be able to respond and recover operations as soon as possible. Crisis exercises have become crucial in building resilience and the ability to respond, by revealing gaps and weaknesses in processes and decision making.

Global state and practical systems

Global states

How do we find out if a particular property is true in a distributed system? As examples, we will look at:

Distributed Garbage Collection

Deadlock Detection

Termination Detection

Debugging
Distributed Garbage Collection

Objects are identified as garbage when there are no longer any references to them in the system

Garbage collection reclaims memory used by those objects

In figure 11.8a, process p2 has two objects that do not have any references to other objects, but one object does have a reference
to a message in transit. It is not garbage, but the other p2 object is

Thus we must consider communication channels as well as object references to determine unreferenced objects

Deadlock Detection
A distributed deadlock occurs when each of a collection of processes waits for another process to send it a message, and there is a
cycle in the graph of the waits-for relationship

In figure 11.8b, both p1 and p2 wait for a message from the other, so both are blocked and the system cannot continue
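
A small sketch of this check (the process names mirror the p1/p2 example; the graph representation is an assumption for illustration) builds the waits-for graph and searches it for a cycle:

# Sketch: detect a distributed deadlock by finding a cycle in the waits-for graph.
def find_cycle(waits_for):
    """Return a cycle in the waits-for graph if one exists, else None."""
    def visit(node, path, seen):
        if node in path:
            return path[path.index(node):] + [node]
        if node in seen or node not in waits_for:
            return None
        seen.add(node)
        for nxt in waits_for[node]:
            cycle = visit(nxt, path + [node], seen)
            if cycle:
                return cycle
        return None

    seen = set()
    for start in waits_for:
        cycle = visit(start, [], seen)
        if cycle:
            return cycle
    return None

# p1 waits for a message from p2 and p2 waits for a message from p1 (figure 11.8b).
waits_for = {"p1": ["p2"], "p2": ["p1"]}
print(find_cycle(waits_for))   # ['p1', 'p2', 'p1'] -> deadlock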

Termination Detection

It is difficult to tell whether a distributed algorithm has terminated. It is not enough to detect whether each process has halted

Data has become an essential aspect of organizations, and protecting it has become a very crucial task.

Small or big, every organization can be affected by data breaches due to a lack of awareness and a lack of capability to invest in protecting their data.

Many business owners think that cyber-criminals will pass over attacking their company because they have a small amount of data. But the U.S. Congressional Small Business Committee has found that 71 percent of SMBs with fewer than 100 employees have faced cyber attacks. This means you could still be targeted by hackers even if you have a small business, so think about your security protections.
Is your most sensitive data secured?

If not, it’s time to protect your data.

Is there any way to protect my company’s sensitive data?

So in this article we are going to discuss the best practices to protect your company's sensitive data.

Configuration and default values:

Partial state

The partial state is calculated by checking whether there is a change in a folder; we do not distinguish the folder type, which could be contacts, events, or emails.

A partial state is typically transient and resolves itself over time. If it remains partial for more than 24 hours, please reach out to support.
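
A loose sketch of such a check, with entirely hypothetical field names, might look like this:

def folder_has_pending_change(folder):
    # A folder counts as changed when its newest modification has not yet
    # been synced; contacts, events, and mail folders are treated alike.
    return folder["last_modified"] > folder["last_synced"]

def account_sync_state(folders):
    # The account is reported as "partial" while any folder still has an
    # unsynced change, and as "synced" once every folder has caught up.
    return "partial" if any(folder_has_pending_change(f) for f in folders) else "synced"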
Automatic update and recovery

Automatic updates allow users to keep their software programs updated without having to check
for and install available updates manually. The software automatically checks for available updates,
and if found, the updates are downloaded and installed without user intervention.

For example, the Microsoft Windows operating system has automatic updates to help keep Windows updated with the latest bug fixes, feature updates, and other modifications automatically. Automatic updates help keep software better protected from viruses and hacking attempts.

Other software programs allow automatic updates to be enabled by users, if not enabled by default upon installation. Internet browsers feature automatic updates, including Google Chrome and Microsoft Edge. Antivirus programs also feature automatic updates, to keep the computer protected.

Some software programs allow for the automatic update feature to be disabled, requiring users to manually check for and install available updates. However, turning off automatic updates is not recommended, as it can leave the software and the computer vulnerable to viruses, hacking, and becoming outdated.
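
As a rough sketch of how such a mechanism works conceptually (the feed URL, version strings, and helper names below are hypothetical), a program can poll an update feed and install a newer release when one appears:

import json
import time
import urllib.request

INSTALLED_VERSION = "1.4.2"                           # version currently installed
UPDATE_FEED = "https://example.com/app/latest.json"   # hypothetical update endpoint
CHECK_INTERVAL = 6 * 60 * 60                          # seconds between checks

def latest_version():
    # Ask the update feed which version is currently available.
    with urllib.request.urlopen(UPDATE_FEED) as response:
        return json.load(response)["version"]

def download_and_install(version):
    # Placeholder for fetching the new release and installing it silently.
    print(f"Installing version {version} without user intervention...")

while True:
    available = latest_version()
    if available != INSTALLED_VERSION:
        download_and_install(available)
    time.sleep(CHECK_INTERVAL)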

System Recovery is an easy-to-use suite of utilities that can restore your drivers and programs, or restore your computer's hard drive to its original factory condition. System Recovery can fix a corrupted hard drive, restore Windows to an earlier state, or remove all data and installed software from your device.

Commit and rollback during configuration

“commit” command Overview


As we know, when we change a configuration, it is stored in the candidate configuration but is not yet applied to the device. With the “show | compare” command we can see the changes in the candidate configuration which have not yet been applied to the device. With the “commit” command, the changes are pushed and applied to the active configuration.

Juniper Junos commit commands
In addition to the normal “commit” command, there are some useful parameters for this command. With the “commit check” command, you can validate the configuration changes to make sure there is no mistake, conflict or incomplete command. If there are errors, you will receive messages telling you which commands to correct.
However, the “commit check” command is not strictly necessary, since the “commit” command also shows these messages if there are errors.
But there is a very handy and useful parameter for the commit command. With “commit confirmed MINUTES”, the changes are applied only temporarily, for the specified number of minutes, unless you confirm the commit during that period.
The use case of this command is when you change the configuration of a device remotely and a mistake would cause your connection to be lost. With the “commit confirmed” command, if there is a mistake, the configuration is restored after the specified time because you are unable to confirm your changes.
It is always recommended to use “commit confirmed” instead of “commit” to make sure that the new changes cause no problems in the network; we can then confirm the commit.
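
As a brief illustrative sequence (the hostname and prompt are placeholders; the commands are the ones discussed above):

[edit]
user@router# show | compare
user@router# commit check
user@router# commit confirmed 10
user@router# commit

Here “show | compare” reviews the pending candidate changes, “commit check” validates them without applying, “commit confirmed 10” applies them for ten minutes, and the final “commit” issued within that window makes them permanent; if that final commit never arrives (for example because the change broke remote access), the previous configuration is restored automatically.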
Junos “rollback” command Overview
With the “rollback” command, we can restore one of the previous configuration versions.
By default, up to 50 committed configuration versions are stored locally on the device, numbered 0 to 49.
You can easily compare the current configuration with any previous configuration version and restore any of them.

Juniper Junos rollback commands

The command “show | compare” is actually the same as “show | compare rollback 0”, which compares the current candidate configuration with the last active configuration (rollback 0); in other words, it shows which changes are configured but not yet applied to the device.
The command “rollback 0” is very handy and useful: it discards any changes in the candidate configuration and restores the latest active configuration.
The “rollback n” command restores the configuration from n commits ago; for example, “rollback 1” loads the configuration that was active before the most recent commit. Notice that the restored version is not applied to the device; it is placed in the candidate configuration. Check the changes with the “show | compare” command and then apply them with the “commit” command.
To better understand these two commands, let's walk through them once again.
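
As a brief illustrative sequence (the prompt is a placeholder), restoring the configuration that was active before the most recent commit looks like this:

[edit]
user@router# rollback 1
user@router# show | compare
user@router# commit

“rollback 1” loads the previous committed version into the candidate configuration, “show | compare” shows what would change, and “commit” applies it; “rollback 0” on its own simply discards any uncommitted edits in the candidate.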

Automated rollback and timeout

In one of the modules of our application (SQL Server 2000, Windows 2003, VB.NET), we get occasional timeouts. The user ignores these timeout messages and proceeds to capture further transactions. Although these transactions appear to be saved in the database, the transactions captured after the timeout get rolled back once the user logs out of the database. The user was able to print all reports containing these transactions before logging out of the system.

What is a Snapshot?

Introduction

A snapshot can be defined as the state of a computer system at a specific point in time. The term is borrowed, by analogy, from photography. "Snapshot" can refer either to the capability some systems provide or to the resulting copy of the system's state.
A hard drive snapshot contains the hard disk's directory structure, including every file and folder on the disk. This type of backup is also referred to as a "disk image."

A disk image permits the complete disk to be restored in case the main disk fails. Several disk programs that create snapshots also permit particular files to be recovered from the snapshot, rather than having to restore the complete backup.

Because snapshots are used for various kinds of backup, it is good practice to store them on a removable drive, a secondary hard drive, or optical media such as DVDs and CDs.
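
As a minimal sketch, a full-disk image can be created with a generic tool such as dd; the device and file names below are placeholders for your own disk and backup location:

dd if=/dev/sda of=/mnt/backup/sda.img bs=4M status=progress
dd if=/mnt/backup/sda.img of=/dev/sda bs=4M status=progress

The first command copies the entire disk to an image file on a secondary drive; the second writes that image back if the main disk ever has to be restored.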

Configuration

Effective configuration management is critical at all stages of the application lifecycle. Whether implementing or extending functionality, providing support, or performing patching and upgrades, change must be achieved efficiently, accurately, and with minimum risk. At the same time, governance is necessary to ensure the correct controls are in place to prevent fraud and to satisfy audit requirements such as Sarbanes-Oxley and ITIL. The complexity of the E-Business Suite and Cloud Applications makes this a major challenge for organizations.

Separation of setup and activation

Snapper allows creating and managing file system snapshots. File system snapshots allow keeping a copy of the state of a file system at a certain point of time. The standard setup of Snapper is designed to allow rolling back system changes. However, you can also use it to create on-disk backups of user data. As the basis for this functionality, Snapper uses the Btrfs file system or thinly-provisioned LVM volumes with an XFS or Ext4 file system.
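
As a minimal sketch, assuming Snapper has already been set up with a configuration named "root", a manual snapshot workflow might look like this:

snapper -c root create --description "before system update"
snapper -c root list
snapper -c root status 1..2
snapper -c root undochange 1..2

"create" takes a new snapshot, "list" shows the existing ones, "status" summarizes the file changes between two snapshots (the numbers come from the list output), and "undochange" reverts those changes.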
