
SAP AI Core

Software Architecture-Assignment 1

Abhijith S Babu - 2023SL93100


Antony Ashwin Anto - 2023SL93002
Abhishek Bantiya - 2023SL93003
Gayathri Sivakumar Menon - 2023SL93087
Vishal Kumar - 2023SL93026
Table of contents
01 Purpose of the system →

02 Key requirements of the system →

03 Utility Tree of ASRs →

04 Tactics used to achieve top 5 ASRs →

05 Diagrams →

06 Description of how system works →


07 Key learnings →
01
Purpose of the
system→
About SAP AI Core
SAP AI Core is a platform-as-a-service offering designed to handle a customer's AI assets. It enables data-driven decision making in business solutions and provides seamless integration with other SAP products.

AI Core can be accessed in various ways:


● Using AI Launchpad, a software interface dedicated to SAP AI Core.
● Using API endpoints, which can be invoked with tools such as Postman (a minimal sketch follows below).
● Using the Python SDK, by connecting to an AI Core instance.
● Using dedicated AI Core plugins or extensions for commonly used software such as VS Code.
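As a hedged illustration of the API-based access path, the sketch below obtains an OAuth token with the client-credentials grant and then calls an AI API endpoint. All URLs, credentials, and the resource-group value are placeholders that would normally come from the AI Core service key in SAP BTP; treat the endpoint path as an assumption and verify it against the official AI API reference.

```python
import requests

# All values below are placeholders; in practice they come from the
# SAP AI Core service key created in the SAP BTP cockpit.
AUTH_URL = "https://<subdomain>.authentication.<region>.hana.ondemand.com/oauth/token"
AI_API_URL = "https://<ai-api-url>"
CLIENT_ID = "<client-id>"
CLIENT_SECRET = "<client-secret>"

# 1. Obtain an OAuth access token using the client-credentials grant.
token = requests.post(
    AUTH_URL,
    data={"grant_type": "client_credentials"},
    auth=(CLIENT_ID, CLIENT_SECRET),
).json()["access_token"]

# 2. Call an AI API endpoint (listing scenarios here, an assumed path) with the
#    token and a resource-group header.
response = requests.get(
    f"{AI_API_URL}/v2/lm/scenarios",
    headers={"Authorization": f"Bearer {token}", "AI-Resource-Group": "default"},
)
print(response.status_code, response.json())
```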
02
Key requirements
of the system→
Functional requirements
1. Manage the AI scenario lifecycle
Manage your ML artifacts and workflows, such as model training, metrics tracking, data, models, and model deployments, through a uniform lifecycle API.

2. Integrate your cloud infrastructure


Register a Docker registry, synchronize AI content from a Git repository, and register an object store for training data and trained models. Productize AI content and expose it as a service to consumers in the SAP BTP marketplace.
Functional requirements (contd.)
3) Execute Pipelines
Execute pipelines as batch jobs to preprocess data, train models, or perform batch inference. This entails the automated execution of predefined training algorithms and storing the resulting inference-ready models.

4) Serve inference requests


Deploy a trained machine learning model as a web service to serve inference requests with high performance. The deployment can be integrated with various SAP products.
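As a minimal sketch of such an inference call (the deployment URL, request path, and payload schema depend on the deployed model server and are assumptions here), a client could invoke the deployed web service like this:

```python
import requests

# Placeholder: the deployment URL is returned by SAP AI Core once a deployment is running.
DEPLOYMENT_URL = "https://<deployment-url>"
HEADERS = {
    "Authorization": "Bearer <oauth-token>",  # placeholder token
    "AI-Resource-Group": "default",
    "Content-Type": "application/json",
}

# The request path ("/v2/predict") and the payload schema are defined by the
# model server inside the deployment; both are illustrative assumptions here.
payload = {"instances": [[5.1, 3.5, 1.4, 0.2]]}
response = requests.post(f"{DEPLOYMENT_URL}/v2/predict", headers=HEADERS, json=payload)
print(response.json())
```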
Functional requirements (contd.)
5) Multi-tenancy support
Implement multi-tenant services that segregate AI assets and executions, isolating each tenant within SAP AI Core.
Non-functional requirements
Performance: SAP AI Core runtime has features that improve efficiency and help manage resource consumption. Users expect quick and efficient responses, and high performance is essential for training AI models efficiently. As the volume of data and the complexity of AI models grow, the system must be able to scale on demand.

Reliability: Reliability ensures that the system operates predictably and consistently, delivering accurate results and maintaining availability for users and processes. Uninterrupted access to SAP AI Core ensures consistent operation, mitigating financial losses, missed opportunities, and operational disruptions.

Security: SAP AI Core acts as a data processor and is unaware of the type of data it handles, so it must ensure security for its users. It supports bringing your own object store by allowing customers to register an object store secret, and it adheres to several security protocols for encrypting data during customer communication with the service.
Non-functional requirements
Interoperability: SAP AI Core connects various internal and external tools. It requires seamless connections with the container registry, GitHub workflows, and cloud storage, in addition to integrating the AI Core service with other software via plugins, which makes interoperability very important in SAP AI Core.

Reusability: In SAP AI Core, we can create configurations for AI workflows that can be reused multiple times. Reusability thus plays an important role in providing this capability to multiple users; it can improve the business value of the product as well as save a lot of resources.

Testability: SAP AI Core executes complex machine learning tasks, and its integration with a number of other systems can create issues. The system must be testable so that errors can be diagnosed and fixed without compromising availability beyond an acceptable extent.
03
Utility Tree of
ASRs→
Each ASR is listed with its quality attribute (QA), business value, impact on architecture, and justification.

1. Serve AI inference with low latency and high throughput.
QA: Performance | Business value: High | Impact on architecture: Medium
Justification: Significant impact on business outcomes, while the technical approaches require medium-effort optimisations and additions to the system architecture.

2. Continuous availability across diverse services, ensuring uninterrupted accessibility and accurate predictions.
QA: Reliability | Business value: High | Impact on architecture: Medium
Justification: Continuous availability is crucial for AI Core reliability and user satisfaction, holding high business value. Moderate architectural adjustments ensure consistent service without major complexity or cost overruns, sustaining operations while preserving user trust.

3. Seamless connections with the container registry, GitHub workflows, and cloud storage.
QA: Interoperability | Business value: High | Impact on architecture: Medium
Justification: Interactions with other systems, secured through proper authentication, affect the architectural layout of the system.

4. Ensure security through authorisation and authentication.
QA: Security | Business value: Medium | Impact on architecture: High
Justification: Authentication through assigned role collections ensures both authentication and authorization for secure and controlled system access.

5. Encryption in transit.
QA: Security | Business value: High | Impact on architecture: Medium
Justification: Customers' communication with the service, including data upload and download, should be protected by encryption using the latest protocol versions.

6. Integrating the AI Core service with other software via plugins.
QA: Interoperability | Business value: High | Impact on architecture: Low
Justification: Significant impact on business value, as it makes the platform easy for users to access.

7. Reduce response times by keeping commonly used nodes reserved and avoiding cold starts.
QA: Performance | Business value: High | Impact on architecture: Low
Justification: A critical performance metric, but these techniques do not require major changes to the underlying system architecture.

8. Execute pipelines as a batch job, use Docker images, and synchronize AI content from a Git repository.
QA: Reusability | Business value: Low | Impact on architecture: Medium
Justification: Medium impact on architecture; making deployment consistent across various environments contributes to a more efficient, scalable, and secure AI development and deployment process.
04
Tactics used to
achieve top 5 ASRs→
Performance (H,M)
Serve AI inference with low latency and high throughput.

Parallel processing: Inference requests to deployed trained models can be processed in parallel using the KServe predictor parameter containerConcurrency, which specifies how many requests can be processed concurrently.

Reduce resource consumption: SAP AI Core includes parameters to reduce the number of nodes used
based on current consumption or impose usage limits during periods of high consumption. These
parameters allow your workload the flexibility to scale based on demand, control resource demand and
consumption, and therefore costs.

Faster resources: If any workload needs GPU acceleration, it can use one of the GPU-enabled resource
plans. Otherwise, a resource plan can be chosen based on the anticipated CPU and memory need of
the workloads.

Reduce response times: Implement a global node pool that keeps commonly used nodes reserved. Cold starts can be avoided completely by setting the minimum scale to 1, which keeps a single node warm even when it is not needed, reducing response time.
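As an illustration of these tactics, the fragment below sketches, in Python for readability, the predictor settings a KServe-style serving spec might carry: containerConcurrency for parallel request handling and a minimum replica count of 1 to keep one node warm. The field names follow KServe conventions; the exact serving-template schema used by SAP AI Core is not reproduced here, so treat this as an assumption-laden sketch rather than a ready-to-deploy template.

```python
# Illustrative sketch only: the field names follow KServe predictor conventions
# (containerConcurrency, minReplicas, maxReplicas); the full SAP AI Core serving
# template schema is assumed, not reproduced, here.
import yaml  # pip install pyyaml

predictor_spec = {
    "predictor": {
        "minReplicas": 1,            # keep one replica warm so cold starts are avoided
        "maxReplicas": 3,            # allow scale-out under load
        "containerConcurrency": 8,   # requests served concurrently per replica
        "containers": [
            {
                "name": "model-server",
                "image": "docker.io/<registry>/inference-server:latest",  # placeholder image
                "ports": [{"containerPort": 8080}],
            }
        ],
    }
}

print(yaml.safe_dump(predictor_spec, sort_keys=False))
```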
Reliability (H,M)
Continuous availability across diverse services, ensuring uninterrupted accessibility and
accurate predictions

Continuous Integration/Continuous Deployment (CI/CD): Set up CI/CD pipelines to automate the


testing and deployment process. This ensures that changes are tested and deployed in a controlled
and reproducible manner.

Redundancy Optimization: Implementing strategic redundancy in critical components to ensure high


availability without excessive architectural complexity, maximizing system reliability while minimizing
architectural overhauls.

Fault-Tolerant Design Framework: Employing fault-tolerant design principles within the existing
architecture to enhance reliability without imposing extensive architectural modifications, balancing
high availability with moderate architectural impact.

Dynamic Scaling Strategies: Leveraging dynamic scaling techniques to accommodate fluctuating


demands without imposing significant architectural changes, ensuring consistent service delivery
while minimizing substantial impacts on the existing architecture. SAP AI Core also relies on multiple hyperscaler object stores.
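On the client side, a simple fault-tolerance tactic such as retry with exponential backoff complements these measures without any change to the platform architecture. The helper below is a generic Python sketch; the URL and token it expects are placeholders, not SAP-specific values.

```python
import time
import requests


def call_with_retries(url: str, payload: dict, token: str, max_attempts: int = 4) -> dict:
    """Retry transient failures with exponential backoff (a client-side fault-tolerance tactic)."""
    delay = 1.0
    for attempt in range(1, max_attempts + 1):
        try:
            response = requests.post(
                url,
                json=payload,
                headers={"Authorization": f"Bearer {token}"},  # placeholder token
                timeout=30,
            )
            # Client errors (4xx) are raised immediately; only server-side errors are retried.
            if response.status_code < 500:
                response.raise_for_status()
                return response.json()
            last_error = requests.HTTPError(f"server error {response.status_code}")
        except (requests.ConnectionError, requests.Timeout) as exc:
            last_error = exc                  # network-level failures are also retried
        if attempt == max_attempts:
            raise last_error
        time.sleep(delay)
        delay *= 2                            # exponential backoff between attempts
```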
Interoperability (H,M)
Seamless connections with the container registry, GitHub workflows, and cloud storage

Flexible Integration Interfaces: Designing the system with flexible integration interfaces that support
various object storage and code repository solutions. This flexibility ensures compatibility with both
existing and emerging technologies.

Plugin Architecture: Implement a modular and extensible architecture that supports plugins for
different interfaces. This allows users to easily integrate SAP AI Core with their preferred tools,
promoting a diverse and adaptable ecosystem.

AI API: The AI API lets you manage your AI assets (such as training scripts, data, models, and model
servers) across multiple runtimes. Argo workflows and serving templates, as well as their execution
and deployment, are managed using the SAP AI Core implementation of the AI API.

Choice of Data Model: Define the syntax and semantics of the major data abstractions that may be exchanged among interoperating systems, ensure that these abstractions are consistent with the data of the interoperating systems, and transform data to and from the data abstractions of the systems with which we interoperate.
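To sketch how such integrations are wired up programmatically, the snippet below registers a Git repository and a Docker registry secret through AI API-style admin endpoints. The paths and payload fields are written from memory of the public documentation and should be treated as assumptions; all credentials and URLs are placeholders.

```python
import json
import requests

AI_API_URL = "https://<ai-api-url>"                   # placeholder base URL
HEADERS = {"Authorization": "Bearer <oauth-token>"}   # placeholder token

# Register a Git repository so workflow and serving templates can be synced
# (the /v2/admin/repositories path and payload fields are assumptions).
requests.post(
    f"{AI_API_URL}/v2/admin/repositories",
    headers=HEADERS,
    json={
        "name": "ml-templates",
        "url": "https://github.com/<org>/<repo>",     # placeholder repository
        "username": "<git-user>",
        "password": "<git-access-token>",
    },
)

# Register a Docker registry secret so pipeline images can be pulled from a
# private container registry (again, path and fields are assumptions).
docker_config = {"auths": {"docker.io": {"username": "<user>", "password": "<token>"}}}
requests.post(
    f"{AI_API_URL}/v2/admin/dockerRegistrySecrets",
    headers=HEADERS,
    json={"name": "my-registry", "data": {".dockerconfigjson": json.dumps(docker_config)}},
)
```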
Security (M,H)
Ensure security through authorisation and authentication

Authenticate and authorise actors: Utilise the user management and authentication mechanisms provided by the SAP BTP Authorization and Trust Management Service (XSUAA). Default role collections are provided that can be assigned to users; these role collections determine which actions a user is able to carry out in SAP AI Core.

Encrypt data in transit: Customers' communication with the service, including data upload and download, should be protected by encryption using the transport layer security (TLS) protocol. Customers' systems must use the supported protocol versions and cipher suites to set up secure communication with the services, and must validate the certificates against the services' domain names to avoid man-in-the-middle attacks.

Open Source Vulnerability scan: Any Open Source component used by the product must be
scanned for vulnerabilities. Vulnerable OSS components must be patched.
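On the client side, the transport-security expectation can be enforced with a generic Python pattern (not an SAP-specific API): the adapter below pins the minimum TLS version while keeping certificate validation enabled, which is what guards against downgrade and man-in-the-middle attacks.

```python
import ssl

import requests
from requests.adapters import HTTPAdapter


class MinTLS12Adapter(HTTPAdapter):
    """Transport adapter that refuses connections negotiated below TLS 1.2."""

    def init_poolmanager(self, *args, **kwargs):
        context = ssl.create_default_context()            # certificate validation stays enabled
        context.minimum_version = ssl.TLSVersion.TLSv1_2  # reject older protocol versions
        kwargs["ssl_context"] = context
        return super().init_poolmanager(*args, **kwargs)


session = requests.Session()
session.mount("https://", MinTLS12Adapter())

# verify=True (the default) checks the server certificate against the service's
# domain name, which is what prevents man-in-the-middle attacks.
# response = session.get("https://<ai-api-url>/v2/lm/scenarios", headers={...})  # placeholder
```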
Reusability (L,M)
Execute pipelines as batch jobs, use Docker images, and synchronize AI content from a Git repository.

Use Standard Libraries and Frameworks: Leverage established AI libraries and frameworks. Popular
libraries like TensorFlow, PyTorch, or scikit-learn have reusable components that can be easily
integrated into various projects.

Containerization: Docker Containers: Package AI applications and dependencies into Docker


containers. Containers encapsulate the environment, ensuring consistency across different stages of
the development and deployment pipeline.

Reusable Models: Pre-trained Models: Utilize pre-trained models when applicable. Transfer learning
and reusing models trained on similar tasks can save significant training time and resources.

Version Control: Use version control systems like Git for tracking changes to code, configurations, and
model weights. This ensures traceability and facilitates collaboration.

Documentation: Clear documentation facilitates understanding and reuse by team members or other
stakeholders.
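As a small illustration of the pre-trained-model tactic, the sketch below uses torchvision (one of many possible frameworks) to load a pre-trained ResNet, freeze the reused backbone, and train only a new task-specific head; the number of classes is a hypothetical value.

```python
import torch
import torch.nn as nn
from torchvision import models

# Reuse a backbone pre-trained on ImageNet instead of training from scratch.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the reused layers; only the new classification head will be trained.
for param in backbone.parameters():
    param.requires_grad = False

num_classes = 5  # hypothetical number of classes for the new task
backbone.fc = nn.Linear(backbone.fc.in_features, num_classes)

# Optimise only the parameters of the new head (transfer learning).
optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)
```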
05
Diagrams→
Context Diagram
Module decomposition Diagram
Component and Connector Diagram
Deployment Diagram
06
Description of the
system→
Description
SAP AI Core can be used to train and deploy machine learning models. A developer writes machine learning code, containerizes it, and uploads the image to a Docker repository. The developer then creates an AI Core instance in an SAP BTP subaccount and sets up a resource group that authenticates against and syncs with the Docker and GitHub repositories. Training data is uploaded to a cloud object store, and its reference is registered in the resource group. A workflow template is created within the Git repository.

The user then creates a configuration that selects a workflow and a dataset, from which any number of executions can be launched. Each execution outputs a model that is stored in the cloud object store. These models can be accessed by a serving template, and the configuration of a serving template can be used to create any number of deployments, which business analysts can then consume.
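The same configuration, execution, and deployment flow can be driven programmatically. The sketch below uses plain HTTP against AI API-style endpoints; the paths, payload fields, headers, and all IDs are assumptions or placeholders based on the public AI API documentation and should be verified against it.

```python
import requests

AI_API_URL = "https://<ai-api-url>"              # placeholder base URL
HEADERS = {
    "Authorization": "Bearer <oauth-token>",     # placeholder token from XSUAA
    "AI-Resource-Group": "default",              # resource group isolating the tenant's assets
}

# 1. Create a configuration that binds a workflow (executable) to input artifacts.
config = requests.post(
    f"{AI_API_URL}/v2/lm/configurations",
    headers=HEADERS,
    json={
        "name": "train-config",
        "scenarioId": "<scenario-id>",                 # placeholder IDs from the synced templates
        "executableId": "<training-workflow-id>",
        "inputArtifactBindings": [{"key": "dataset", "artifactId": "<dataset-artifact-id>"}],
    },
).json()

# 2. Start an execution from the configuration; it produces a model stored in the object store.
execution = requests.post(
    f"{AI_API_URL}/v2/lm/executions",
    headers=HEADERS,
    json={"configurationId": config["id"]},
).json()

# 3. Create a deployment from a serving configuration to expose the model for inference.
deployment = requests.post(
    f"{AI_API_URL}/v2/lm/deployments",
    headers=HEADERS,
    json={"configurationId": "<serving-configuration-id>"},
).json()

print(execution.get("id"), deployment.get("id"))
```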
07
Key learnings→
KEY LEARNINGS
● Explored strategies to identify Architecturally Significant Requirements (ASRs), gaining
a comprehensive understanding of the techniques involved.
● Investigated an existing system, gaining deep insights into its functions and
architectural intricacies.
● Developed various architectural diagrams, comprehending their diverse roles in
illustrating software system structures and functionalities.
● Cultivated teamwork skills for effective collaboration within a software development
team.
● Improved the ability to discern both functional and non-functional requirements,
strategically prioritizing quality attributes based on specific architectural needs.

Name: Abhijith S Babu


ID: 2023SL93100
KEY LEARNINGS
● Explored and comprehended the intricacies of an existing system, gaining in-depth
insights into its functionalities and operational workflows, unraveling its architectural
complexities.
● Developed various types of architectural diagrams, understanding their diverse
implications and unique roles in depicting the structure and functionalities of a
software system.
● Acquired adept teamwork skills, honing effective collaboration and communication
within a software development team.
● Enhanced the ability to identify both the functional and non-functional requirements of a software system, strategically prioritizing quality attributes based on these specific architectural specifications.

Name: Antony Ashwin Anto
ID: 2023SL93002
KEY LEARNINGS
● Learned to work as a team, collaborate and communicate effectively.
● Identified an existing system and learned its purpose and workflow.
● Learned to identify its functional and non-functional requirements and to rank the quality attributes based on the system's requirements.
● Learned how to find the Architecturally Significant Requirements (ASRs) of the system and analyse their business and architectural impact.
● Understood the tactics used to implement those ASRs.
● Learned how to create various types of architecture diagrams for a system, what each of them means, and how they differ.
● Able to describe what the system does and its purpose.

Name: Gayathri Sivakumar Menon


ID: 2023SL93087
KEY LEARNINGS
● Learned that working as a team enhances the efficiency and effectiveness of the work.
● Identified an existing system and learned its purpose and workflow.
● Tried to learn how functional and non-functional requirements can be prioritised for a
system.
● How to rank the quality attributes based on their requirements in the system.
● How to analyse and find the Architecturally Significant Requirements (ASRs) of the system and their business and architectural impact.
● Understood the tactics used to implement those ASRs.
● Learned how to create various types of architecture diagrams for a system and what significance each holds.

Name: Vishal Kumar
ID: 2023SL93026
KEY LEARNINGS
● Learned to create various types of architecture diagrams for a system, understanding
their implications, differences and roles.
● Developed the ability to describe the purpose and functionalities of a system
● Explored architecturally significant requirements and analysed their business and
architectural impact.
● Developed the ability to identify both functional and non-functional requirements of a
system, strategically ranking quality attributes based on specific architectural needs.
● Understood tactics used in implementing Architecturally Significant Requirements
(ASRs) effectively.

Name: Abhishek Bantiya


ID: 2023SL93003
Thank you
