Importance of Organization or Architecture:
- Proper organization is crucial to mastering the complexity of distributed systems.

Layered Architectures:
- Assemble components into layers, each of which is connected to the layers above and below.
- Help manage the complexity of distributed systems by structurally organizing components.
- Application-interface layer: units for interacting with users or external applications.
- Processing layer: houses the application's functionality.
- Data layer: contains the data a client requests.
- Many distributed information systems that use classic database technology and related applications have this stacking.

Client-Server Architecture:
- A distributed computing model in which tasks are split between software on the server computer and software on the client computer.
- A network design in which a centralized server (host computer) serves as the request and service provider for numerous clients (remote processors).
- Example: web applications, in which a web browser sends requests to a web server for web pages.

Peer-to-Peer (P2P) Architectures:
- In a P2P design, each process has the same status and set of capabilities.
- There is no central server; every node can function as both a client and a server.
- Every process can simultaneously request services (client role) and offer services (server role).
- The word "process" in P2P systems refers to every single user or node in the distributed system.
- Apps like Skype and WhatsApp use P2P communication to let users chat, make live audio and video calls, and share media files and messages.

Service-Oriented Architectures (SOA):
- Offer services that are self-contained but can readily link and collaborate.
- Can adjust to technological advancements more easily.
- Allow applications to be updated in an economical and efficient manner.
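The request/response pattern of the client-server architecture above can be sketched with Python sockets. This is a minimal, hypothetical echo service: the host, port handling, and message are illustrative, not part of the notes.

```python
# Minimal client-server sketch: one centralized server process answers
# requests from clients, mirroring the browser/web-server example above.
import socket
import threading

def run_server(host="127.0.0.1"):
    """Start an echo server on an OS-assigned port; return (socket, port)."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind((host, 0))
    srv.listen()
    def serve():
        conn, _ = srv.accept()            # wait for one client request
        with conn:
            data = conn.recv(1024)
            conn.sendall(b"echo: " + data)  # server provides the service
    threading.Thread(target=serve, daemon=True).start()
    return srv, srv.getsockname()[1]

def client_request(port, message):
    """A client sends a request and waits for the server's response."""
    with socket.create_connection(("127.0.0.1", port)) as c:
        c.sendall(message.encode())
        return c.recv(1024).decode()

srv, port = run_server()
print(client_request(port, "hello"))   # -> echo: hello
srv.close()
```

In a real deployment the server would loop over `accept()` and serve many clients concurrently; this sketch handles a single request to keep the roles visible.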
Object-Based Architectures:
- An illustration of an object-based architectural style is one in which the elements are objects that are linked to one another by procedure calls. Calls may run over a network if objects are placed on separate machines.

Multi-Master Replication:
- Read Performance: as with Master-Slave Replication, read performance may be enhanced by spreading read operations across several nodes, lessening the strain on individual nodes.
Reliability:
- Data Consistency: because concurrent writes on various nodes may result in conflicts, achieving strong consistency in Multi-Master Replication can be difficult. Conflict-resolution mechanisms are required to preserve data consistency, which can affect reliability and add complexity.
- Failure Resilience: compared to Master-Slave Replication, Multi-Master Replication is more resilient to node failures, since several nodes can continue to accept writes even if one or more nodes fail. By lessening the effect of node failures on total system availability, this can increase system dependability.

ANS(3A): Fault Tolerance:
- The ability of a system to function normally even if one or more of its components fail is known as fault tolerance. It is an essential component of system design, particularly in distributed systems, where a number of interconnected components cooperate to accomplish a single objective.
- When a single component fails, fault tolerance ensures that the system still functions as intended and remains responsive, available, and dependable.
- A procedure that gives an operating system the ability to react to a hardware or software malfunction; under this concept of fault tolerance, the system preserves its capacity to function.
- Redundancy: provides system-wide fault tolerance, helping to ensure that the CSP continues to process calls despite a hardware or software fault.
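The redundancy idea above can be sketched as failover between replicas: the caller tries one copy of a service and, if it fails, moves on to the next. The replica functions and error type here are purely illustrative stand-ins for real remote calls.

```python
# Redundancy-based fault tolerance sketch: if one copy fails,
# move to another copy, so the system stays available.
def flaky_replica(x):
    raise ConnectionError("replica down")    # simulated node failure

def healthy_replica(x):
    return x * 2                             # simulated working node

def call_with_failover(replicas, x):
    """Try each replica in turn; succeed as long as one copy works."""
    last_err = None
    for replica in replicas:
        try:
            return replica(x)
        except ConnectionError as err:
            last_err = err                   # tolerate the fault, try next copy
    raise RuntimeError("all replicas failed") from last_err

print(call_with_failover([flaky_replica, healthy_replica], 21))  # -> 42
```

Real systems add health checks and load balancing on top of this basic retry-on-next-replica loop.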
ANS(2.2):
Master-Slave Replication:
- A single designated master node manages all write activity. Multiple slave nodes duplicate data from the master; the slaves are read-only copies of the master's data, and any changes made to the master are propagated to the slave nodes asynchronously.
Performance:
- Write Performance: since all write operations are directed to the master node, it can become a bottleneck, especially in write-heavy scenarios. However, read operations can be distributed among multiple slave nodes, potentially improving read performance.
- Read Performance: slave nodes can handle read requests, which improves overall read performance by distributing the load across multiple nodes.
Reliability:
- Data Durability: data is typically more durable in master-slave replication because the master node ensures that all writes are committed successfully before being replicated to the slaves. This enhances data reliability.
- Single Point of Failure: the master node is a single point of failure. System dependability may be impacted if the master node fails, because write operations cannot be completed until the master is recovered or a new master is chosen.

Multi-Master Replication:
- Multiple nodes can independently accept read and write operations. Every node can receive writes and forward them to the other nodes. Conflict-resolution mechanisms are used to manage situations where multiple nodes simultaneously modify the same piece of data.
Performance:
- Write Performance: compared to Master-Slave Replication, write operations may be spread across several nodes, possibly increasing overall write performance. This can improve system speed, particularly in situations with high write concurrency.

Consistency:
- A set of guidelines or an agreement between the operations of a distributed data store.
- Describes what should happen when read and write operations are performed concurrently.
- Guarantees dependable and consistent data exchange in distributed systems.

Replication:
- Reliability is increased by making duplicates of the data and code in the system: if the first copy fails, move to another copy.
- To even out the load, split requests among the replicas; consider the distance between replicas and users to improve response times.
- Every modification made to one copy needs to be duplicated to all copies; it is crucial to keep all copies consistent and current.

Data-centric consistency models:
- A contract or set of guidelines governing the operations of a distributed data store; describes the expected results of read and write operations when there is concurrent access.

Client-centric consistency models:
- Guarantee that clients receive updates in a logical order; ideal for systems with infrequent or simple-to-merge data updates.

Distributed Commit Protocols:
- To preserve data integrity throughout the distributed system, distributed commit protocols make sure that transactions involving numerous nodes or components are either fully committed or totally aborted.
- Consistency: distributed commit protocols ensure that transactions are either committed or aborted consistently across all participating nodes, preventing partial or inconsistent results.
- Reliability: by coordinating the commit process across multiple nodes, distributed commit protocols enhance the reliability of transactions, reducing the risk of data inconsistency or loss.

Recovery Mechanisms:
- Recovery techniques are necessary in case of failures, crashes, or corrupted data, in order to restore consistency and dependability to the system. These mechanisms comprise methods like transaction rollback, logging, and checkpointing.
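The all-or-nothing guarantee of distributed commit protocols described above can be sketched as a toy two-phase commit round. The `Participant` class and its vote logic are illustrative, not a real protocol implementation.

```python
# Two-phase commit sketch: phase 1 asks every participant to vote;
# phase 2 commits only if all voted yes, otherwise aborts everywhere,
# so no node is left with a partial result.
class Participant:
    def __init__(self, can_commit=True):
        self.can_commit = can_commit
        self.state = "pending"

    def prepare(self):                 # phase 1: cast a vote
        return self.can_commit

    def finish(self, decision):        # phase 2: apply coordinator's decision
        self.state = decision

def two_phase_commit(participants):
    votes = [p.prepare() for p in participants]
    decision = "commit" if all(votes) else "abort"
    for p in participants:
        p.finish(decision)             # every node gets the same outcome
    return decision

nodes = [Participant(), Participant(can_commit=False), Participant()]
print(two_phase_commit(nodes))   # -> abort (one node voted no)
print({p.state for p in nodes})  # -> {'abort'}: all nodes agree
```

Real two-phase commit also logs each step durably so nodes can recover their decision after a crash, which ties it to the recovery mechanisms above.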
- Reliability: recovery mechanisms improve the reliability of distributed systems by providing ways to recover from failures and ensure that the system remains operational and consistent.
- Data Integrity: by ensuring that transactions are logged and changes are applied atomically, recovery mechanisms help maintain data integrity and consistency even in the presence of failures.

Threads:
- A thread is a minimal software processor that executes instructions in a specific context.
- Within a process, there can be one or more threads.
- Executing a thread means executing a series of instructions in the context of that thread.
- Saving a thread context means stopping the current execution and saving all the data needed to continue the execution at a later stage.
- Single-threaded processes block (wait) during I/O operations (such as reads/writes from disk).
- Multithreading allows other threads (typically those that do not need I/O) to continue running, improving efficiency.
Apply parallelism:
- In multiprocessor or multicore systems, multiple threads can run simultaneously to enhance resource utilization.
Avoid process switching:
- Switching between processes is resource-intensive; avoiding process switches reduces overhead, which can offer faster performance.

Virtualization:
- Virtualization refers to the creation of a virtual (rather than actual) version of something, such as a hardware platform (CPU), operating system, storage device (memory), or network resources.
- Virtualization is the technique of simulating the execution of multiple processes or threads in parallel on a single-processor system.
- Although a single-core CPU can execute only one thread or process at a time, virtualization achieves an illusion of parallelism through rapid switching between multiple threads or processes.
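The multithreading point above — that threads can overlap I/O waits — can be illustrated with Python's standard `threading` module. `time.sleep` stands in for a blocking I/O operation; the delay values are arbitrary.

```python
# Multithreading sketch: two simulated I/O waits run concurrently,
# so total wall-clock time is close to one wait instead of two.
import threading
import time

def io_task(results, idx, delay=0.2):
    time.sleep(delay)                  # stands in for a blocking read/write
    results[idx] = f"done-{idx}"

results = [None, None]
start = time.monotonic()
threads = [threading.Thread(target=io_task, args=(results, i)) for i in range(2)]
for t in threads:
    t.start()                          # both "I/O operations" begin together
for t in threads:
    t.join()                           # wait for both to finish
elapsed = time.monotonic() - start
print(results)                         # -> ['done-0', 'done-1']
print(elapsed < 0.35)                  # concurrent: ~0.2 s, not ~0.4 s
```

Because the waits are I/O-like sleeps rather than CPU work, this speedup holds even under CPython's global interpreter lock.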
Containers:
- Containers offer an efficient and lightweight solution for application deployment, ensuring applications have the necessary dependencies without the overhead of a full virtual machine.
- Containers require fewer resources than traditional VMs.
- Containers ensure applications run consistently across different computing environments (better for distributed systems).

Application of virtual machines to distributed systems:
Cloud computing:
- The most important application of virtualization.
- Example: Amazon Elastic Compute Cloud (EC2) rents out VMs with virtualized CPU and storage.
Important applications of containers:
- Continuous integration and continuous deployment (CI/CD), microservices architectures, scaling web applications, and deploying and managing cloud-native applications.

- Network traffic is encrypted throughout, leaving no loopholes, whether internal or external.
- Single Sign-On (SSO): distributed systems use SSO solutions such as OAuth or OpenID Connect to let users log in just once and access a variety of services or apps without having to keep entering their credentials.
- Data Encryption: to prevent unwanted access or interception, distributed systems encrypt data both while it is being sent between nodes and while it is being stored.

Flat Naming Scheme:
- Under a flat naming scheme, every resource is given a unique identifier without any structured or hierarchical arrangement.
- Resources get flat, straightforward names that are easy to handle but may lead to naming conflicts in large-scale systems.
- Flat names are ideal for machines but not user-friendly.
- Example (file system): "document1.txt", "image1.jpg", "report.docx".

Structured Names:
- Used in various system components, like file systems and Internet hosts; essential for locating resources in a distributed system.
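The flat naming scheme above can be sketched with a simple registry that hands out unstructured unique identifiers. Using UUIDs is one assumed way to avoid the naming conflicts the notes mention; the registry and resource strings are illustrative.

```python
# Flat naming sketch: every resource gets a unique, unstructured name.
# UUIDs make collisions practically impossible, but the names carry no
# hierarchy or meaning: machine-friendly, not user-friendly.
import uuid

registry = {}

def register(resource):
    name = str(uuid.uuid4())       # flat, globally unique identifier
    registry[name] = resource
    return name

doc_id = register("contents of document1.txt")
img_id = register("bytes of image1.jpg")
print(doc_id != img_id)            # -> True: no naming conflict
print(registry[doc_id])            # look the resource up by its flat name
```

Contrast this with the structured names discussed next, where a hierarchical name like www.google.com is meaningful to humans as well as machines.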
Structured Names:
- Composed of readable, hierarchical components.
- Provide context and are easier for humans to understand and remember.
- Example: Internet host naming follows structured naming (e.g., www.google.com).

Attribute-Based Naming:
- Uses attribute-value pairs for entity identification, enhancing search efficiency beyond traditional flat and structured names.
- Entities are described by characteristics (attributes).
- Example: in an office network, finding a printer with specified attributes [Color: Yes, Speed: >30 ppm] returns the printers meeting these criteria.

ANS(5.2): Domain Name System (DNS):
- A major distributed naming service on the Internet, primarily used for finding the IP addresses of hosts and mail servers.
- Used in various system components, like file systems and Internet hosts; essential for locating resources in a distributed system.

ANS(6.1):
Network Security:
- Since communication in distributed systems takes place over networks, there are risks of interception, tampering, and eavesdropping.
- To intercept or manipulate data transferred between system components, attackers may use man-in-the-middle attacks, exploit flaws in network protocols, or infiltrate network devices.
Authentication and Authorization:
- The absence of a centralized authentication authority, and the requirement to coordinate access-control policies among distributed entities, make it difficult to ensure safe authentication and authorization among distributed nodes. Weak authentication and authorization procedures can lead to data breaches, privilege escalation, and unauthorized access.
Data Security and Privacy:
- Data may be stored, processed, and transferred across several locations and entities in distributed systems, making data confidentiality, integrity, and privacy protection essential.
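One defense against the tampering risks described above is an integrity tag on data sent between nodes. This is a stdlib-only sketch using an HMAC with an assumed pre-shared key; real systems would use TLS, which also encrypts the data, but the tag alone shows how a receiver detects modified messages.

```python
# Message-integrity sketch: sender attaches an HMAC tag computed from a
# shared secret; the receiver recomputes it and rejects tampered data.
import hashlib
import hmac

SECRET = b"shared-key-between-nodes"   # hypothetical pre-shared key

def send(message: bytes):
    """Return the message plus its authentication tag."""
    tag = hmac.new(SECRET, message, hashlib.sha256).hexdigest()
    return message, tag

def receive(message: bytes, tag: str) -> bool:
    """Verify the tag; a man-in-the-middle without the key cannot forge it."""
    expected = hmac.new(SECRET, message, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

msg, tag = send(b"transfer 100")
print(receive(msg, tag))               # -> True: message intact
print(receive(b"transfer 999", tag))   # -> False: tampering detected
```

`hmac.compare_digest` is used instead of `==` to avoid timing side channels when comparing tags.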
