
A Project Report

On
“ANALYZING ALARM DATA FOR UTILITY PROVIDERS”

At
“Cognizant Technology Solutions India Pvt. Ltd”

By
Ayaan Akbar Khan
Under the guidance of
Dr. Sheena Abraham
Submitted to

“Savitribai Phule Pune University”


In partial fulfilment of the requirement for the award of the degree of
Master of Business Administration (MBA 2019 Revised Pattern)
Through

Poona Institute of Management Sciences & Entrepreneurship


Pune-01.

ACKNOWLEDGEMENT

A project is a golden opportunity for learning and self-development. I consider myself very
lucky and honoured to have had so many wonderful people guide me through the completion of
this project. It would not have been possible without the kind support and help of many
individuals and organizations, and I would like to extend my sincere thanks to all of them.

I wish to express my indebted gratitude and special thanks to Mr. Imran Khan Pathan, who,
despite being extraordinarily busy with his duties, took time out to hear me, guide me, and keep
me on the correct path, and who allowed me to carry out my project work at his esteemed organization.

I express my deepest thanks to our Director (In charge), Dr. Porinita Banerjee, for taking part in
useful decisions and giving necessary advice and guidance. I take this moment to
acknowledge her contribution gratefully.

I also wish to place on record my best regards and deepest sense of gratitude to my guide,
Dr. Sheena Abraham, for her judicious and precious guidance, which was extremely valuable
for my study, both theoretically and practically.

My thanks and appreciation also go to my colleagues who helped me in designing the project
and to everyone who willingly helped me out with their abilities.

Ayaan Akbar Khan
Place: PUNE


MBA (General) – [2023-24]

DECLARATION

I, the undersigned, hereby declare that this project report, entitled ANALYZING ALARM
DATA FOR UTILITY PROVIDERS and carried out at Cognizant Technology Solutions India
Pvt. Ltd., is genuine work submitted in partial fulfilment of the Master of Business
Administration degree at Poona Institute of Management Sciences & Entrepreneurship, Pune,
and is intended solely for academic purposes.

To the best of my knowledge, no part of this report has been submitted earlier for any
degree, diploma, or certificate examination.

Ayaan Akbar Khan
Place: PUNE

Company Guide Feedback

I would like to provide feedback on the 8-week summer internship program during which Mr.
Ayaan Khan worked on the project "Analyzing Alarm Data for Utility Providers." As the
supervisor and guide for this internship, I had the opportunity to closely observe Mr. Ayaan
Khan's performance and contributions. Throughout the internship, he consistently
demonstrated exemplary analytical skills. He was able to efficiently process and analyze
complex alarm data, leading to the discovery of valuable patterns and insights critical to the
project's objectives.

Additionally, Mr. Ayaan Khan exhibited a high level of technical competence, showcasing
mastery in data analysis and visualization tools, including Python and data visualization
libraries.

Communication was another strong suit for him. He consistently communicated his findings
and insights effectively, fostering productive collaboration within the team and ensuring
alignment with project goals and progress.

In terms of time management, he was consistently able to meet project deadlines and effectively
manage his time. He also showed a proactive attitude by consistently identifying opportunities
for process improvement and implementing enhancements in our data analysis methods. This
proactive approach substantially improved the project's efficiency and accuracy. Furthermore,
he conducted himself with the utmost professionalism, aligning with the organization's values
and demonstrating dedication to the project's success.

In conclusion, Mr. Ayaan Khan's performance during the summer internship was outstanding.
I have every confidence in Mr. Ayaan Khan's future in the field of business analytics, and I
believe he will continue to excel in his professional endeavours.

Sincerely,

Pathan Imrankhan

Sr. Technical Project Manager

Table of Contents

1 Executive Summary
  1.1 Objectives
  1.2 Methodology
  1.3 Key Findings
  1.4 Recommendations
2 Organization Profile
  2.1 Overview
  2.2 Key Information
  2.3 Services and Offerings
  2.4 Key Strengths
  2.5 Mission and Vision
  2.6 Core Values
3 Project Outline for Tasks Undertaken
  3.1 Problems Encountered
    3.1.1 Data Acquisition and Quality Issues
    3.1.2 Data Preprocessing Challenges
    3.1.3 Complexity of Alarm Patterns
    3.1.4 Operational Inefficiencies
  3.2 Tasks Undertaken
    3.2.1 Data Collection and Preparation
    3.2.2 Exploratory Data Analysis (EDA)
    3.2.3 Anomaly Detection
    3.2.4 Predictive Modelling
    3.2.5 Recommendations and Insights
4 Research Methodology & Data Analysis
  4.1 Detailed Findings
    4.1.1 Sampling and Data Collection
    4.1.2 Battery Issues
    4.1.3 Sustained High Temperature
    4.1.4 Loss of Power
    4.1.5 Physical Tampering
    4.1.6 Voltage Irregularities
5 Activity Charts & Graphs
  5.1 Flow Chart for Data Analysis Process
  5.2 Format of the Raw Sample Data
  5.3 Graphs from the Power BI Screens & Insights
  5.4 Gantt Charts
6 Learnings from the Organisation
  6.1 Azure Data Factory
  6.2 Databricks
  6.3 Power BI Dashboards
7 Contribution to the Organization
8 References

Table of Figures

Fig 1. Flow chart for Data Analysis Process
Fig 2. Raw Sample Data in .txt file format
Fig 3. Analysed dataset source for visualization for month of August 2023
Fig 4. Visualized data for month of August 2023
Fig 5. Analysed dataset source for visualization for month of September 2023
Fig 6. Visualized data for month of September 2023
Fig 7. Gantt Chart for Week 5
Fig 8. Gantt Chart for Week 6
Fig 9. Gantt Chart for Week 8

1. Executive Summary

This project aimed to analyse alarm data for utility providers, employing data analytics
techniques to derive valuable insights and actionable recommendations. The primary objective
was to improve the efficiency and reliability of utility services by understanding alarm data and
its implications on operations.

1.1. Objectives
1. Data Collection: The initial phase involved gathering alarm data from utility providers
to establish a comprehensive dataset for analysis.
2. Exploratory Data Analysis (EDA): Through EDA, we sought to understand the
structure and patterns within the alarm data, providing a solid foundation for subsequent
analyses.
3. Anomaly Detection: Advanced analytics techniques were employed to identify
anomalies or irregularities within the alarm data, aiding in the pinpointing of areas that
required further investigation.
4. Predictive Modelling: Using historical data, predictive models were developed to
anticipate potential alarm-triggering events, allowing for proactive measures to be
taken.
5. Recommendations: The project concluded with actionable recommendations based on
data insights to enhance operational efficiency and reduce alarm-triggering incidents.

1.2. Methodology

The project followed a systematic approach:

1. Data Collection: We collaborated with utility providers to access alarm data while
ensuring data integrity and security.
2. Data Preprocessing: Data was cleaned and prepared for analysis, including addressing
missing values and outliers.
3. EDA: The data was explored using descriptive statistics, visualizations, and data
profiling techniques to uncover patterns and trends.
4. Anomaly Detection: Advanced analytics and machine learning algorithms were utilized
to identify unusual patterns or alarm-triggering events.
5. Predictive Modelling: Predictive models were developed using historical data to
anticipate potential alarm situations, allowing for proactive measures to be taken.

1.3. Key Findings


1. Alarm Patterns: The project successfully identified significant alarm patterns related to
specific operational conditions or events within the utility providers' infrastructure. This
insight can assist in more targeted decision-making and resource allocation.
2. Anomaly Identification: Anomalies in the alarm data were successfully identified,
providing utility providers with the information needed to focus on areas of concern for
further investigation and improvement.
3. Predictive Models: The project's predictive models demonstrated promising accuracy
in forecasting alarm-triggering incidents, offering an opportunity for more efficient and
timely responses.

1.4. Recommendations

1. Operational Improvements: It is recommended that utility providers consider operational
changes and enhancements based on the analysis to minimize alarm-triggering incidents. This
includes addressing specific patterns and anomalies identified during the project.

2. Response Optimization: Strategies for more efficient and timely responses to alarm events
should be implemented. This may include automation, better coordination, and streamlined
response protocols.

3. Data Integration: Utility providers are encouraged to integrate alarm data into their
decision-making processes. This integration can lead to better resource allocation and
planning, ultimately improving overall service reliability.

2. Organization Profile
Company Name: Cognizant Technology Solutions India Pvt Ltd.

2.1. Overview

Cognizant is a multinational technology company that provides a wide range of information
technology, consulting, and business process services to clients across various industries.
Founded in 1994, Cognizant has grown to become one of the world's leading technology
services companies, with a strong global presence and a focus on digital transformation, cloud
computing, and emerging technologies.

2.2. Key Information


➢ Founded: 1994
➢ Headquarters: Teaneck, New Jersey, USA
➢ CEO: Ravi Kumar S
➢ Number of Employees: Cognizant employs over 300,000 people worldwide.
➢ Global Presence: Cognizant operates in numerous countries around the world, serving
clients from North America, Europe, Asia, and other regions.

2.3. Services and Offerings

Cognizant offers a broad range of services and solutions to its clients, including:

• IT Services: Application development and maintenance, infrastructure services, cloud
computing, and cybersecurity.
• Consulting: Business and technology consulting, strategy, and digital transformation
services.
• Digital Services: Digital strategy, user experience (UX) design, data analytics, and Internet
of Things (IoT) solutions.
• Business Process Services: Outsourcing and managing various business processes for
clients.
• Industry-Specific Solutions: Tailored solutions for various industries, including healthcare,
financial services, retail, and more.

2.4. Key Strengths

Cognizant is known for its expertise in digital transformation, helping businesses leverage
technology to innovate and stay competitive. The company is at the forefront of emerging
technologies and has a strong focus on research and development. It collaborates with clients
to drive efficiency, agility, and growth in the digital era.

2.5. Mission and Vision

Cognizant's mission is to be a trusted partner in building the future of business. The company
aims to help clients navigate the digital landscape and succeed in the fast-paced, ever-changing
world of technology.

2.6. Core Values

Cognizant emphasizes values such as integrity, collaboration, and client satisfaction in all its
endeavours. It is committed to responsible business practices and sustainable growth.

3. Project Outline for Tasks Undertaken
In the course of this project, several challenges were encountered and a series of tasks were
undertaken to address these challenges, ultimately leading to valuable insights and
recommendations for utility providers.

3.1. Problems Encountered


3.1.1 Data Acquisition and Quality Issues
One of the primary issues encountered at the outset was the difficulty in obtaining
comprehensive and clean alarm data from utility providers. Collaborative efforts were initiated
to secure access to their alarm data repositories. Simultaneously, challenges relating to data
security and confidentiality arose. These were addressed by implementing secure data transfer
protocols and ensuring compliance with legal and ethical standards for data handling.

3.1.2 Data Preprocessing Challenges


Dealing with missing values and outliers in the alarm data proved to be a significant hurdle.
Data cleansing activities were conducted to prepare the data for analysis, which included
imputing missing values and handling outliers. Additionally, it was essential to ensure data
consistency and standardization across different sources. Data transformation procedures were
developed to standardize data from diverse sources and establish a unified data format.
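
As an illustration of this preprocessing stage, a minimal sketch in Python with pandas is given below. The file name, separator, and column names (meter_id, timestamp, alarm_type, voltage) are placeholders assumed for illustration, not the actual fields of the providers' exports.

```python
import pandas as pd

# Load raw alarm records; file name, separator, and columns are placeholders.
df = pd.read_csv("alarm_export.txt", sep="|", parse_dates=["timestamp"])

# Standardize categorical labels that vary across source systems.
df["alarm_type"] = df["alarm_type"].str.strip().str.upper()

# Impute missing numeric readings with the per-meter median.
df["voltage"] = df.groupby("meter_id")["voltage"].transform(
    lambda s: s.fillna(s.median())
)

# Cap extreme outliers using the 1.5 * IQR rule rather than dropping rows.
q1, q3 = df["voltage"].quantile([0.25, 0.75])
iqr = q3 - q1
df["voltage"] = df["voltage"].clip(q1 - 1.5 * iqr, q3 + 1.5 * iqr)

# Remove exact duplicate alarm records produced by repeated ingestion.
df = df.drop_duplicates(subset=["meter_id", "timestamp", "alarm_type"])
```

Capping outliers rather than dropping rows preserves record counts for the later volume-based analyses.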

3.1.3 Complexity of Alarm Patterns


Identifying and understanding intricate patterns and relationships within the alarm data was a
crucial aspect of the project. Advanced data mining and clustering techniques were employed
to uncover hidden patterns, and causal analysis techniques were used to reveal connections and
dependencies. These methods aimed to uncover the underlying factors contributing to alarm-
triggering incidents.

3.1.4 Operational Inefficiencies


Assessing the impact of alarm-triggering incidents on utility provider operations was another
area of focus. Historical alarm data was analyzed to assess operational disruptions and calculate
financial and resource implications. It became evident that operational improvements were
necessary to reduce such incidents. Recommendations for operational enhancements were
suggested to mitigate alarm-triggering incidents and improve overall efficiency.

3.2. Tasks Undertaken


3.2.1 Data Collection and Preparation
To tackle the data acquisition and quality issues, collaborations with utility providers were
established to access alarm data. Secure data transfer and storage mechanisms were
implemented to ensure data integrity and security. Data cleansing and preprocessing activities
were carried out to clean and structure the data, addressing missing values, outliers, and
inconsistencies.

3.2.2 Exploratory Data Analysis (EDA)


Exploratory Data Analysis (EDA) played a pivotal role in understanding the data's
characteristics. Initial insights were uncovered through descriptive statistics and visualizations,
including histograms, box plots, and time series plots. Data profiling was conducted to identify
common alarm patterns and their frequency, as well as to investigate temporal and spatial
distributions of alarm-triggering incidents.
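
A minimal sketch of this kind of EDA is shown below, continuing from the cleaned DataFrame in the preprocessing sketch; the region column is an assumed field used only to illustrate a spatial breakdown.

```python
import matplotlib.pyplot as plt

# Continues from the cleaned DataFrame `df` in the preprocessing sketch.
# Frequency profile of common alarm patterns.
print(df["alarm_type"].value_counts())

# Temporal distribution: daily alarm counts plotted as a time series.
daily = df.set_index("timestamp").resample("D").size()
daily.plot(title="Daily alarm volume")
plt.ylabel("Alarms per day")
plt.show()

# Distribution check: box plot of voltage readings by alarm type.
df.boxplot(column="voltage", by="alarm_type")
plt.show()

# Spatial breakdown: alarm counts per region (assumed column).
df.groupby("region").size().plot(kind="bar", title="Alarms by region")
plt.show()
```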

3.2.3 Anomaly Detection


Anomaly detection techniques were utilized to identify unusual patterns or alarm-triggering
events. Statistical methods, machine learning algorithms, and time series analysis were
employed to detect anomalies. These anomalies were thoroughly investigated to determine
their root causes.
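
The report does not name the specific algorithms used. As one plausible example, the sketch below applies scikit-learn's Isolation Forest to a synthetic stand-in feature table; the feature names and the contamination rate are illustrative assumptions.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import IsolationForest

# Synthetic stand-in for the engineered per-record feature table; in the
# project these features were derived from the cleaned alarm data.
rng = np.random.default_rng(42)
features = pd.DataFrame({
    "voltage": rng.normal(230, 5, 1000),
    "temperature": rng.normal(30, 4, 1000),
    "alarm_count": rng.poisson(2, 1000),
})

# Isolation Forest flags points that are easy to isolate; contamination
# is the assumed share of anomalous records, a tunable guess.
model = IsolationForest(contamination=0.01, random_state=42)
features["anomaly"] = model.fit_predict(features)  # -1 = anomaly, 1 = normal

print((features["anomaly"] == -1).sum(), "records flagged for investigation")
```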

3.2.4 Predictive Modelling


Predictive models were developed using historical alarm data to anticipate future alarm-
triggering incidents. Various machine learning algorithms were explored, and model
performance was assessed using cross-validation and metrics such as accuracy and AUC.
Continuous evaluation and fine-tuning of the models were performed to enhance their accuracy
and reliability.
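
A sketch of such a workflow is given below, using a random forest classifier scored on accuracy and AUC under 5-fold cross-validation. The features, the label construction, and the synthetic data are illustrative assumptions rather than the project's actual model.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_validate

# Synthetic stand-in for the historical feature/label table.
rng = np.random.default_rng(0)
n = 2000
X = pd.DataFrame({
    "voltage": rng.normal(230, 5, n),
    "temperature": rng.normal(30, 4, n),
    "battery_level": rng.uniform(0, 100, n),
})
# Label: alarm in the next period, loosely driven by heat and low battery.
y = ((X["temperature"] > 33) & (X["battery_level"] < 30)).astype(int)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_validate(clf, X, y, cv=5, scoring=["accuracy", "roc_auc"])
print("Accuracy:", scores["test_accuracy"].mean().round(3))
print("AUC:", scores["test_roc_auc"].mean().round(3))
```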

3.2.5 Recommendations and Insights
Based on the insights gained from the analysis, actionable recommendations were provided to
utility providers. These recommendations included suggested operational changes and
response optimization strategies. It was emphasized that integrating alarm data into decision-
making processes was crucial for improving operational efficiency and incident management.

4. Research Methodology & Data Analysis
This study aimed to achieve several distinct objectives. Firstly, it focused on the meticulous
collection of alarm data from utility providers, forming the basis for subsequent analyses. The
initial phase sought to establish a robust dataset essential for a comprehensive investigation
into the factors triggering alarms in smart metering systems.

In the subsequent phases, the study engaged in Exploratory Data Analysis (EDA) to uncover
inherent structures and patterns within the collected alarm data. This foundational step provided
crucial insights into the dynamics of the data, setting the stage for more in-depth analyses. The
study then employed advanced analytics techniques for anomaly detection, identifying
irregularities within the alarm data to pinpoint areas requiring further investigation.
Additionally, predictive modelling was applied, leveraging historical data to develop models
capable of anticipating events likely to trigger alarms. This proactive approach enabled the
formulation of recommendations to enhance operational efficiency and reduce alarm-triggering
incidents, concluding the study with practical and actionable insights.

4.1 Detailed Findings

In order to comprehensively understand the operational challenges and vulnerabilities
associated with smart meters, a structured research methodology was employed, drawing
insights from two months of real-time data collected from the production environment of the
client.

4.1.1 Sampling and Data Collection

The study involved a diverse sample of smart meter installations across various geographic
locations and environmental conditions, encompassing two months of real-time data from the
client's production environment. Data was collected through a combination of real-time
monitoring, historical records, and incident reports, ensuring a robust representation of the
challenges faced in actual operational settings.

4.1.2 Battery Issues

The most prominent driver identified was related to battery issues, often stemming from aging,
sustained peaks, excessive heat, or discharge cycles beyond design margins. The two-month
real-time data allowed for a detailed analysis of battery performance, including types,
replacement strategies, and monitoring protocols. Comparative assessments of various battery
technologies were conducted, providing recommendations for optimal selections to enhance
reliability.

4.1.3 Sustained High Temperature

Temperature patterns emerged as a critical factor affecting smart meter functionality. Instances
of sustained high temperatures were scrutinized for their impact on electronic components,
leading to overheating alerts and potential shutdowns. The analysis explored the correlation
between temperature thresholds and meter malfunctions, leveraging the two-month real-time
data for a nuanced understanding. Proactive measures, such as environmental risk identification
and firmware updates for adaptive cooling, were proposed based on this comprehensive
dataset.
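
To illustrate how such a threshold correlation could be checked, the sketch below compares malfunction rates above and below an assumed 45 °C limit on synthetic telemetry; neither the threshold nor the data is taken from the study.

```python
import numpy as np
import pandas as pd

# Synthetic telemetry whose malfunction probability rises with heat;
# the real analysis used the two-month production dataset.
rng = np.random.default_rng(1)
temp = rng.normal(35, 8, 1000)
malfunction = rng.random(1000) < np.clip((temp - 40) / 30, 0.02, 0.9)
df = pd.DataFrame({"temperature": temp, "malfunction": malfunction})

THRESHOLD_C = 45  # assumed operating limit, not a figure from the report
df["over_threshold"] = df["temperature"] > THRESHOLD_C

# Malfunction rate above vs. below the threshold, plus a simple
# correlation between temperature and failure.
print(df.groupby("over_threshold")["malfunction"].mean())
print(df["temperature"].corr(df["malfunction"].astype(float)))
```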

4.1.4 Loss of Power

Both sustained and temporary disruptions in power supply were investigated to understand
their cascading effects on meter uptimes. The study delved into backup battery reserves,
examining their depletion during outages and the subsequent risks posed to data integrity upon
power restoration. Outage analytics were employed to categorize events, offering insights into
predictability and diagnostic capabilities. The two-month real-time data provided a robust
foundation for understanding outage patterns and their implications, contributing to
recommendations for future resilience enhancements.
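
As a simple illustration of categorizing outage events, the sketch below buckets them by duration; the momentary/temporary/sustained boundaries are assumed conventions, not figures from the study.

```python
import pandas as pd

# Illustrative outage log; real events came from the client's telemetry.
outages = pd.DataFrame({
    "meter_id": ["M1", "M2", "M3", "M4"],
    "start": pd.to_datetime(["2023-08-01 10:00", "2023-08-02 11:00",
                             "2023-08-03 09:00", "2023-08-04 22:00"]),
    "end":   pd.to_datetime(["2023-08-01 10:02", "2023-08-02 11:30",
                             "2023-08-03 15:00", "2023-08-05 06:00"]),
})
outages["duration_min"] = (outages["end"] - outages["start"]).dt.total_seconds() / 60

# Bucket events by duration; the 5-minute and 60-minute boundaries are
# assumed conventions.
outages["category"] = pd.cut(
    outages["duration_min"],
    bins=[0, 5, 60, float("inf")],
    labels=["momentary", "temporary", "sustained"],
)
print(outages[["meter_id", "duration_min", "category"]])
```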

4.1.5 Physical Tampering

Unauthorized tampering, whether deliberate vandalism or accidental incidents, contributed
significantly to anomaly alerts. A geo-locational cluster analysis was employed to distinguish
between targeted human disruption campaigns and exposure-based events. Resolution paths
were proposed based on item origin patterns, differentiating between systemic issues requiring
public outreach and protective installations and localized challenges linked to neighbourhood
access. The two-month real-time data allowed for a detailed examination of tampering
incidents, facilitating a nuanced response strategy.
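
As a sketch of what a geo-locational cluster analysis might look like, the code below runs DBSCAN with a haversine distance over synthetic tamper-alert coordinates; the coordinates, the 500 m radius, and the minimum cluster size are all illustrative assumptions.

```python
import numpy as np
import pandas as pd
from sklearn.cluster import DBSCAN

# Synthetic tamper-alert coordinates: one dense cluster plus scattered noise.
rng = np.random.default_rng(7)
cluster = rng.normal([18.52, 73.85], 0.001, size=(30, 2))
noise = rng.uniform([18.4, 73.7], [18.7, 74.0], size=(15, 2))
df = pd.DataFrame(np.vstack([cluster, noise]), columns=["lat", "lon"])

# DBSCAN with a haversine metric; eps is ~500 m expressed in radians.
EARTH_RADIUS_M = 6_371_000
db = DBSCAN(eps=500 / EARTH_RADIUS_M, min_samples=5, metric="haversine")
df["cluster"] = db.fit_predict(np.radians(df[["lat", "lon"]].to_numpy()))

# Label -1 marks isolated incidents (exposure-based events); dense cluster
# labels point at localized, possibly targeted, tampering.
print(df["cluster"].value_counts())
```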

4.1.6 Voltage Irregularities

Voltage fluctuations, including surges, sags, and spikes, were identified as key contributors to
meter anomalies. The analysis highlighted the impact of upstream supply infrastructure
challenges on meter resilience. Suggestions for widening meter thresholds were made,
emphasizing the need for root cause correction within the grid itself to ensure consistent power
delivery quality. The two-month real-time data served as a foundation for understanding the
frequency and intensity of voltage irregularities, guiding recommendations for improving grid
stability.

This research methodology, anchored in a substantial two-month real-time dataset from the
client's production environment, aimed at a holistic understanding of smart metering
challenges. The findings provide actionable insights for optimizing performance, enhancing
reliability, and fortifying resilience against potential failures in practical operational scenarios.

5. Activity Charts & Graphs
5.1. Flow Chart for Data Analysis Process

Fig 1. Flow chart for Data Analysis Process

5.2. Format of the Raw Sample Data

Fig 2. Raw Sample Data in .txt file format

5.3. Graphs from the Power BI Screens & Insights for Visualisation

Fig 3. Analysed dataset source for visualization for month of August 2023

Fig 4. Visualized data for month of August 2023

Fig 5. Analysed dataset source for visualization for month of September 2023

Fig 6. Visualized data for month of September 2023

The dashboards display high-level smart meter alert trends, while the detailed underlying
dataset enables granular failure analysis and diagnostics. The most prevalent issues causing
anomalies and alerts across the smart meter network, ranked by volume, are:

1. Battery Failure

2. Overheating

3. Power Loss

4. Tampering

5. Voltage Irregularities

A combination of macro-level dashboard views paired with drilldown meter-specific analytics
provides clear visibility into fault patterns. These insights drive targeted action plans to enhance
reliability and availability across the grid infrastructure.

5.4. Gantt Charts

Fig 7. Gantt Chart for Week 5

Fig 8. Gantt Chart for Week 6

Fig 9. Gantt Chart for Week 8

The Gantt charts visualize the project schedules for weeks 5, 6, and 8. Specifically, the charts
display the tasks, timeframes, and sequencing for those weeks over the project timeline. The
purpose is to represent the planned progression of activities, mapped to calendar dates, for
those three discrete weekly intervals. Analyzing the charts enables tracking of schedule targets
as tasks move between their start and end dates, and comparing the planned weekly workflows
to actual progression provides insight into the pace of the ongoing timeline.

6. Learnings from the Organisation

6.1 Azure Data Factory

I've gained proficiency in Azure Data Factory, a cloud-based data integration service by
Microsoft Azure. Here's a concise overview of key concepts:

1. Pipelines: Sequences of data-driven activities for tasks like data movement and
transformation [3].

2. Datasets: Represent data structures in data stores, defining schema and location.

3. Linked Services: Connection information for ADF to link with external data sources
[2].

4. Activities: Processing steps within pipelines, including data movement,
transformation, and control flow.

5. Triggers: Events determining when pipelines or activities run, such as time-based or
manual triggers.

6. Integration Runtimes: Compute infrastructure for moving data between data stores
[4].

7. Data Flow: Allows visual design, transformation, and aggregation of data at scale
(“Mapping data flows in Azure Data Factory”).

8. Monitoring: Azure Monitor for tracking activity and trigger runs, and overall pipeline
health.

9. Security: Integrates with Azure Active Directory for authentication, ensuring secure data movement [1].

10. Data Stores: Supports various data stores like Azure Blob Storage, Azure SQL
Database, and Azure Data Lake Storage for data movement and transformation ([3]).

My understanding enables me to design, schedule, and manage data workflows efficiently
in a secure and scalable manner ([3]).
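
As a small, hedged illustration, the sketch below uses the Azure Data Factory Python SDK (azure-mgmt-datafactory) to trigger and monitor a pipeline run on demand; the subscription, resource group, factory, pipeline, and parameter names are placeholders, not the project's actual resources.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient

# Placeholder identifiers; the project's actual Azure resources differ.
SUBSCRIPTION_ID = "<subscription-id>"
RESOURCE_GROUP = "rg-alarm-analytics"
FACTORY_NAME = "adf-alarm-pipelines"
PIPELINE_NAME = "CopyAlarmDataPipeline"

credential = DefaultAzureCredential()
adf_client = DataFactoryManagementClient(credential, SUBSCRIPTION_ID)

# Kick off a pipeline run on demand (the manual-trigger case).
run = adf_client.pipelines.create_run(
    RESOURCE_GROUP,
    FACTORY_NAME,
    PIPELINE_NAME,
    parameters={"runDate": "2023-09-01"},  # assumed pipeline parameter
)

# Poll the run for monitoring; Azure Monitor covers this at scale.
status = adf_client.pipeline_runs.get(RESOURCE_GROUP, FACTORY_NAME, run.run_id)
print(status.status)  # e.g. "InProgress", "Succeeded", "Failed"
```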

6.2 Databricks

1. Databricks clusters: Provision Azure Databricks clusters to leverage the distributed data
processing capabilities of Spark. Choose the cluster type and size based on data volume
and transformation needs [5].

2. Notebooks: Use Databricks notebooks written in Python/Scala to develop data
pipelines. Notebooks allow step-by-step execution of code and visualizations [6].

3. Spark DataFrames: With the PySpark API in Python, data can be processed as
distributed DataFrames, enabling efficient transformations through functions such as
filter, join, and groupBy.

4. ETL Operations: Pipelines can perform various ETL tasks, such as extracting data from
files/tables, transforming data via mappings and aggregations, and loading it into MongoDB
[7].

5. Data loading: Once data is transformed, it can be loaded into MongoDB using the
DataFrame write API, specifying parameters such as the database and collection names
(see the sketch after this list).

6. Performance optimization: Operations can be optimized in various ways, such as caching
DataFrames or adding indexes on MongoDB fields.

7. Scheduling: Notebooks can be scheduled to run data pipelines on a regular basis,
integrated with version control.

8. Monitoring: Built-in dashboards provide tracking of runs, logs, and metrics for auditing,
debugging, and optimization.
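
Tying these pieces together, below is a minimal notebook-style ETL sketch in PySpark that reads raw alarm files, aggregates them, and writes the result to MongoDB. The paths, database, and collection names are placeholders, and the load step assumes the MongoDB Spark Connector is installed on the cluster.

```python
from pyspark.sql import SparkSession, functions as F

# In a Databricks notebook `spark` already exists; building a session here
# keeps the sketch self-contained. Paths and names are placeholders.
spark = SparkSession.builder.getOrCreate()

# Extract: read the raw alarm files landed in cloud storage.
raw = spark.read.option("header", True).csv("/mnt/raw/alarms/*.csv")

# Transform: type casting, filtering, and a simple daily aggregation.
daily_counts = (
    raw.withColumn("timestamp", F.to_timestamp("timestamp"))
       .filter(F.col("alarm_type").isNotNull())
       .groupBy(F.to_date("timestamp").alias("day"), "alarm_type")
       .count()
)

# Load: write into MongoDB through the MongoDB Spark Connector; assumes
# the connector (v10+) is installed and the connection URI is configured
# on the cluster. Older connector versions use format("mongo") instead.
(daily_counts.write.format("mongodb")
    .mode("append")
    .option("database", "alarms")
    .option("collection", "daily_counts")
    .save())
```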

6.3 Power BI Dashboards

1. Connecting to MongoDB: Use the MongoDB connector in Power BI Desktop to link to
the MongoDB database, providing a connection string with the database name, collection
name, etc. [8]

2. Shaping data: Once connected, Power BI imports the data, which can then be shaped and
modelled using transformations such as filtering rows and pivoting columns.

3. Aggregations: Measures can be created using DAX calculations to aggregate numeric
columns. Useful functions include SUM(), AVERAGE(), and COUNTROWS().

4. Visualizing data: Visuals such as bar charts, line charts, pie charts, and scatter plots can
be chosen from the wide range of visualizations in Power BI. [9]

5. Adding interactivity: Visuals can be made interactive by adding slicers and tooltips that
provide insights based on user inputs. Drill-down/drill-up options allow more granular
analysis.

6. Publishing reports: Final reports and dashboards can be published to the Power BI service
to share them with other users for collaboration. [10]

7. Refreshing data: Scheduled data refreshes from MongoDB to Power BI datasets can
keep the reports up to date.

7. Contribution to the Organization

I have built and managed the full end-to-end data pipeline, from the extraction of raw CSV
data from on-premises systems, to the storage of processed parquet data in the cloud data
lake, to the visualization of insights in Power BI reports.
My specific contributions include:
• Developed ETL scripts to extract raw CSV data incrementally from on-premise storage
into the cloud
• Implemented a Databricks pipeline using PySpark to transform the raw CSV data into
analysis-ready parquet files, applying techniques like standardization, deduplication,
date handling, joins, compression, and partitioning (sketched below)
• Profiled and optimized data processing jobs for scale and reliability, leveraging Spark's
distributed computing model
• Automated pipeline triggers and error handling using dependencies and retries to ensure
timely yet robust data flow
• Designed cloud data lake architecture and lifecycle management best practices for the
transformed parquet data sets
• Modeled analytical data in MongoDB, optimizing for BI workloads through indexing,
aggregation, and query performance tuning
• Built Power BI reports and dashboards surfacing insights from MongoDB to business
users
• Managed schedules and infrastructure for a seamless self-serve analytics environment
By taking end-to-end ownership, I provided immense value through an automated,
scalable data pipeline that empowers the organization with fresh, actionable analytics.
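
As a simplified illustration of the transformation stage described above, the sketch below converts raw CSVs into deduplicated, date-partitioned, snappy-compressed parquet with PySpark; all paths and column names are placeholders rather than the actual pipeline's.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

# Read the incrementally landed raw CSVs (placeholder path and columns).
raw = spark.read.option("header", True).csv("/mnt/landing/alarms/")

cleaned = (
    raw.withColumn("timestamp", F.to_timestamp("timestamp"))
       .withColumn("alarm_type", F.upper(F.trim(F.col("alarm_type"))))  # standardization
       .dropDuplicates(["meter_id", "timestamp", "alarm_type"])         # deduplication
       .withColumn("event_date", F.to_date("timestamp"))                # date handling
)

# Write analysis-ready parquet, partitioned by date and snappy-compressed,
# into the data lake (placeholder path).
(cleaned.write.mode("overwrite")
    .partitionBy("event_date")
    .option("compression", "snappy")
    .parquet("/mnt/lake/alarms_parquet/"))
```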

➢ Suggestions to the Organization

Continued focus on diagnosing the specific failure modes causing alerts, in conjunction
with preventative maintenance and meter firmware enhancements, will collectively uplift
reliability over the long term. Granular data analytics narrows problem scopes, while
analytics-based issue prediction allows issues to be addressed before critical thresholds
are crossed. This dual approach addresses existing vulnerabilities in parallel with
optimizing monitoring visibility and diagnostic precision.

REFERENCES
[1] Microsoft Docs, https://docs.microsoft.com/en-us/azure/data-factory/data-movement-security-considerations
[2] Microsoft Docs, https://docs.microsoft.com/en-us/azure/data-factory/concepts-datasets-linked-services
[3] Microsoft Docs, https://docs.microsoft.com/en-us/azure/data-factory/
[4] Microsoft Docs, https://docs.microsoft.com/en-us/azure/data-factory/concepts-integration-runtime
[5] Azure Databricks Docs, https://docs.azuredatabricks.net/clusters/create.html
[6] Databricks Docs, https://docs.databricks.com/notebooks/notebooks-manage.html
[7] Azure Databricks Docs, https://docs.azuredatabricks.net/spark/latest/data-sources/azure/cosmos-db-connect.html#etl-best-practices
[8] Microsoft Docs, https://docs.microsoft.com/en-us/power-bi/connect-data/service-get-data
[9] Microsoft Docs, https://docs.microsoft.com/en-us/power-bi/visuals/power-bi-visualization-types-for-reports-and-q-and-a
[10] Microsoft Docs, https://docs.microsoft.com/en-us/power-bi/collaborate-share/service-publish-to-web
