
UNIT-III

Hyperledger Fabric: Transaction Flow, Fabric Details: Ordering Services,


Channels (Single and Multiple Channels), Peer, Client Applications, Certificate
Authority: Membership and Identity Management, Hyperledger Fabric Network
Setup.

Hyperledger Composer: Application Development, Network Administration

Transaction Flow

Transaction Flow in Hyperledger Fabric


A transaction in Hyperledger Fabric reflects a business activity on the fabric
network. To ensure data immutability, transactions are kept inside blocks, and
the chain structure protects them from tampering. In Hyperledger Fabric, each
node maintains a ledger, and a consensus mechanism keeps the ledger updated
and identical at every node. The ledger at each node is composed of two parts:
a blockchain and a world-state database. This article focuses on the
transaction flow in Hyperledger Fabric.

Assumptions

There are a few assumptions to understand before beginning with the transaction
flow in Hyperledger Fabric.
• This transaction flow assumes that the channel is set up and running.
• The application user has been registered and enrolled with the organization’s
  Certificate Authority (CA) and has received back the necessary cryptographic
  material, which is used to authenticate to the network.
• The chaincode, containing a set of key-value pairs, is installed on the peers
  and deployed to the channel.
• The chaincode contains logic defining a set of transaction instructions
  and the agreed-upon price for a radish.
• An endorsement policy has also been set up for this chaincode, stating that
both peer A and peer B must endorse any transaction.
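The AND(peer A, peer B) policy assumed above can be sketched as a simple membership check (the peer names are illustrative, not real Fabric identities, and real policies are expressed over organizations and MSP principals):

```python
# Illustrative sketch of the assumed endorsement policy: a transaction is
# valid only if BOTH peer A and peer B have endorsed it.
REQUIRED_ENDORSERS = {"peerA", "peerB"}  # AND(peerA, peerB)

def policy_satisfied(endorsements):
    """Return True when every required endorser has signed the proposal."""
    return REQUIRED_ENDORSERS.issubset(endorsements)
```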
Transaction Flow in Hyperledger Fabric
Below are the steps in the transaction flow in Hyperledger Fabric:

1. Client A initiates a transaction:

• Client A sends a request to Client B to purchase radishes.


• The request targets peer A and peer B, who are representatives of Client A
and Client B.
• The endorsement policy states that both peers must endorse any transaction;
  thus the request goes to peer A and peer B.
• The transaction proposal, which is a request to invoke a chaincode function
  with the intent to read or update the ledger, is constructed using the SDK.
• The SDK takes the user’s cryptographic credentials to produce a unique
signature for the transaction proposal.
• The SDK submits the transaction proposal to the target peer, which forwards
  the proposal to the other endorsing peers for execution.

2. Endorsing peers verify the signature and execute the transaction:


The endorsing peers verify that:
• The transaction proposal is well-formed.
• It has not been submitted in the past (protection against replay attacks).
• The signature is valid.
• The submitter (Client A) satisfies the channel’s Writers’ policy and is
properly authorized to perform the proposed operation on the channel.
The endorsing peers take the transaction proposal inputs as arguments to invoke
the chaincode function. The chaincode function gets executed and produces
transaction results, response value, write set, and read set. Till this point, there is
no update being done to the ledger. The set of these generated values along with
the endorsing peer’s signature is sent to the target peer as the proposal response.
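Step 2 can be sketched as follows: a simulated chaincode invocation reads the agreed price, records a read set and a write set, and leaves the world state untouched. The key names, versions, and the order key are invented for illustration; real chaincode runs inside the peer, and versions are maintained by the state database.

```python
def simulate_chaincode(world_state, qty):
    """Execute the 'buy radishes' function: read the price, propose an order.
    world_state maps key -> (value, version); nothing is written here."""
    price, version = world_state["radish_price"]
    read_set = [("radish_price", version)]            # what was read, and at which version
    write_set = {"order:42": {"qty": qty, "total": price * qty}}  # proposed update only
    return {"read_set": read_set, "write_set": write_set, "response": "OK"}
```

Note that the function returns the proposed update without applying it: the ledger is only changed in the later validation and commit phase.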

3. Proposal responses are inspected:


• In this step, the target peer verifies the proposal responses.
• Even if this check is skipped, the Hyperledger Fabric architecture is
  designed in such a way that the endorsement policy is still checked
  and enforced when each peer validates transactions prior to committing
  them.
4. Target peer assembles endorsements into a transaction:

• The target peer broadcasts transaction messages containing transaction


proposals and responses to the ordering service. This includes the Channel ID,
read/write sets, and a signature for each endorsing peer.
• The ordering service receives the transactions, orders them, and creates
blocks of transactions per channel. It will not inspect the entire content of
the transaction.

5. Transaction is validated and committed:


• The blocks of transactions are delivered to all the peers on the channel and
they validate the transactions within the block.
• The peers validate transactions to ensure the endorsement policy is fulfilled
and there have been no changes to the ledger state for read set variables
since the read set was generated by the transaction execution.
• They tag transactions in the block as valid or invalid.
6. Ledger updated:
• Each peer appends the block to the channel’s chain.
• For each valid transaction, the write set is committed to the current state database.
• Each peer emits an event to notify the client application that the transaction
is immutably appended to the chain and also sends the notification
whether the transaction is validated or invalidated.
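Steps 5 and 6 can be sketched together: each transaction's read-set versions are checked against the current state, the transaction is tagged valid or invalid, and only valid write sets are applied. This is a minimal illustration of the idea, not Fabric's actual validation code; versions here are simple integers rather than Fabric's block/transaction version pairs.

```python
def validate_and_commit(block, state):
    """Validate each transaction against the current state, then commit.
    state maps key -> (value, version). A transaction is invalid if any key
    it read has changed version since its read set was generated."""
    results = []
    for tx in block:
        valid = all(state.get(k, (None, 0))[1] == ver for k, ver in tx["read_set"])
        if valid:
            for k, v in tx["write_set"].items():
                _, old_ver = state.get(k, (None, 0))
                state[k] = (v, old_ver + 1)       # apply write, bump version
        results.append(valid)                      # tag as valid or invalid
    return results
```

Two transactions in the same block that read the same key illustrate why ordering matters: the first commits and bumps the version, invalidating the second.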

Fabric Details

The Ordering Service

What is ordering?

Many distributed blockchains, such as Ethereum and Bitcoin, are not


permissioned, which means that any node can participate in the consensus
process, wherein transactions are ordered and bundled into blocks. Because of
this fact, these systems rely on probabilistic consensus algorithms which
eventually guarantee ledger consistency to a high degree of probability, but
which are still vulnerable to divergent ledgers (also known as a ledger “fork”),
where different participants in the network have a different view of the
accepted order of transactions.

Hyperledger Fabric works differently. It features a node called


an orderer (it’s also known as an “ordering node”) that does this
transaction ordering, which along with other orderer nodes forms
an ordering service. Because Fabric’s design relies
on deterministic consensus algorithms, any block validated by the peer is
guaranteed to be final and correct. Ledgers cannot fork the way they do in
many other distributed and permissionless blockchain networks.

In addition to promoting finality, separating the endorsement of chaincode


execution (which happens at the peers) from ordering gives Fabric advantages
in performance and scalability, eliminating bottlenecks which can occur when
execution and ordering are performed by the same nodes.

Orderer nodes and channel configuration

Orderers also enforce basic access control for channels, restricting who
can read and write data to them, and who can configure them. Remember
that who is authorized to modify a configuration element in a channel is
subject to the policies that the relevant administrators set when they
created the consortium or the channel. Configuration transactions are
processed by the orderer, as it needs to know the current set of policies to
execute its basic form of access control. In this case, the orderer processes
the configuration update to make sure that the requestor has the proper
administrative rights. If so, the orderer validates the update request
against the existing configuration, generates a new configuration
transaction, and packages it into a block that is relayed to all peers on the
channel. The peers then process the configuration transactions in order to
verify that the modifications approved by the orderer do indeed satisfy the
policies defined in the channel.
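The orderer's handling of a configuration update described above can be sketched as: check the requestor against the channel's admin policy, then produce a new configuration transaction. The shape of the config (an "Admins" policy key, a "requestor" field) is an illustrative simplification of Fabric's actual channel configuration structure.

```python
def process_config_update(update, current_config):
    """Sketch of the orderer's basic access control for config transactions:
    only a channel admin may modify the configuration."""
    admins = current_config["policies"]["Admins"]
    if update["requestor"] not in admins:
        raise PermissionError("requestor lacks admin rights on this channel")
    # Validate the update against the existing config, then generate a new
    # configuration transaction (here: a shallow merge of the changes).
    new_config = {**current_config, **update["changes"]}
    return {"type": "CONFIG", "config": new_config}
```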

Orderer nodes and identity

Everything that interacts with a blockchain network, including peers,


applications, admins, and orderers, acquires their organizational identity
from their digital certificate and their Membership Service Provider
(MSP) definition.

For more information about identities and MSPs, check out our documentation
on Identity and Membership.
Just like peers, ordering nodes belong to an organization. And similar to peers,
a separate Certificate Authority (CA) should be used for each organization.
Whether this CA will function as the root CA, or whether you choose to deploy
a root CA and then intermediate CAs associated with that root CA, is up to you.

Orderers and the transaction flow

Phase one: Transaction Proposal and Endorsement

We’ve seen from our topic on Peers that they form the basis for a blockchain
network, hosting ledgers, which can be queried and updated by applications
through smart contracts.

Specifically, applications that want to update the ledger are involved in a


process with three phases that ensures all of the peers in a blockchain network
keep their ledgers consistent with each other.

In the first phase, a client application sends a transaction proposal to the


Fabric Gateway service, via a trusted peer. This peer executes the
proposed transaction or forwards it to another peer in its organization for
execution.

The gateway also forwards the transaction to peers in the organizations


required by the endorsement policy. These endorsing peers run the
transaction and return the transaction response to the gateway service.
They do not apply the proposed update to their copy of the ledger at this
time. The endorsed transaction proposals will ultimately be ordered into
blocks in phase two, and then distributed to all peers for final validation
and commitment to the ledger in phase three.
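Phase one can be sketched as a gateway that forwards the proposal to one peer per organization required by the endorsement policy and checks that the endorsed results agree. The org names and the shape of the responses are invented for illustration; the real gateway service also verifies each peer's signature.

```python
def collect_endorsements(proposal, endorsers, required_orgs):
    """Sketch of the gateway's phase-one role: fan the proposal out to one
    peer per required org and confirm the endorsed results are consistent.
    endorsers maps an org name to a function standing in for that org's peer."""
    responses = {org: endorsers[org](proposal) for org in required_orgs}
    results_match = len({r["result"] for r in responses.values()}) == 1
    return responses, results_match
```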

Note: Fabric v2.3 SDKs embed the logic of the v2.4 Fabric Gateway service in
the client application — refer to the v2.3 Applications and Peers topic for
details.

For an in-depth look at phase one, refer back to the Peers topic.
Phase two: Transaction Submission and Ordering

With successful completion of the first transaction phase (proposal), the


client application has received an endorsed transaction proposal response
from the Fabric Gateway service for signing. For an endorsed transaction,
the gateway service forwards the transaction to the ordering service, which
orders it with other endorsed transactions, and packages them all into a
block.

The ordering service creates these blocks of transactions, which will


ultimately be distributed to all peers on the channel for validation and
commitment to the ledger in phase three. The blocks themselves are also
ordered and are the basic components of a blockchain ledger.

Ordering service nodes receive transactions from many different application


clients (via the gateway) concurrently. These ordering service nodes
collectively form the ordering service, which may be shared by multiple
channels.

The number of transactions in a block depends on channel configuration


parameters related to the desired size and maximum elapsed duration for a
block (the BatchSize and BatchTimeout parameters, to be exact). The blocks are
then saved to the orderer’s ledger and distributed to all peers on the channel. If
a peer happens to be down at this time, or joins the channel later, it will receive
the blocks by gossiping with another peer. We’ll see how this block is
processed by peers in the third phase.
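The count-based half of this block-cutting behaviour can be sketched as below. A block is cut whenever BatchSize transactions accumulate; the BatchTimeout deadline, which cuts a partial block after a maximum wait, is omitted here for brevity:

```python
def cut_blocks(transactions, batch_size):
    """Sketch of count-based block cutting in the ordering service: emit a
    block each time batch_size transactions accumulate."""
    blocks, current = [], []
    for tx in transactions:
        current.append(tx)
        if len(current) == batch_size:
            blocks.append(current)   # block is full: cut it
            current = []
    if current:
        blocks.append(current)       # in Fabric, this final cut happens on BatchTimeout
    return blocks
```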

It’s worth noting that the sequencing of transactions in a block is not


necessarily the same as the order received by the ordering service, since there
can be multiple ordering service nodes that receive transactions at
approximately the same time. What’s important is that the ordering service puts
the transactions into a strict order, and peers will use this order when validating
and committing transactions.

This strict ordering of transactions within blocks makes Hyperledger Fabric a


little different from other blockchains where the same transaction can be
packaged into multiple different blocks that compete to form a chain. In
Hyperledger Fabric, the blocks generated by the ordering service are final.
Once a transaction has been written to a block, its position in the ledger is
immutably assured. As we said earlier, Hyperledger Fabric’s finality means that
there are no ledger forks — validated and committed transactions will never be
reverted or dropped.

We can also see that, whereas peers execute smart contracts (chaincode) and
process transactions, orderers most definitely do not. Every authorized
transaction that arrives at an orderer is then mechanically packaged into a block
— the orderer makes no judgement as to the content of a transaction (except for
channel configuration transactions, as mentioned earlier).

At the end of phase two, we see that orderers have been responsible for the
simple but vital processes of collecting proposed transaction updates,
ordering them, and packaging them into blocks, ready for distribution to
the channel peers.

Phase three: Transaction Validation and Commitment

The third phase of the transaction workflow involves the distribution of ordered
and packaged blocks from the ordering service to the channel peers for
validation and commitment to the ledger.

Phase three begins with the ordering service distributing blocks to all channel
peers. It’s worth noting that not every peer needs to be connected to an orderer
— peers can cascade blocks to other peers using the gossip protocol —
although receiving blocks directly from the ordering service is recommended.

Each peer will validate distributed blocks independently, ensuring that ledgers
remain consistent. Specifically, each peer in the channel will validate each
transaction in the block to ensure it has been endorsed by the required
organizations, that its endorsements match, and that it hasn’t become
invalidated by other recently committed transactions. Invalidated transactions
are still retained in the immutable block created by the orderer, but they are
marked as invalid by the peer and do not update the ledger’s state.
The second role of an ordering node is to distribute blocks to peers. In this
example, orderer O1 distributes block B2 to peer P1 and peer P2. Peer P1
processes block B2, resulting in a new block being added to ledger L1 on P1.
In parallel, peer P2 processes block B2, resulting in a new block being added
to ledger L1 on P2. Once this process is complete, the ledger L1 has been
consistently updated on peers P1 and P2, and each may inform connected
applications that the transaction has been processed.

Ordering service implementations

While every ordering service currently available handles transactions and


configuration updates the same way, there are nevertheless several different
implementations for achieving consensus on the strict ordering of transactions
between ordering service nodes.

For information about how to stand up an ordering node (regardless of the


implementation the node will be used in), check out our documentation on
deploying a production ordering service.

• Raft (recommended)
New as of v1.4.1, Raft is a crash fault tolerant (CFT) ordering service based
on an implementation of the Raft protocol in etcd. Raft follows a “leader and
follower” model, where a leader node is elected (per channel) and its
decisions are replicated by the followers. Raft ordering services should be
easier to set up and manage than Kafka-based ordering services, and their
design allows different organizations to contribute nodes to a distributed
ordering service.
• Kafka (deprecated in v2.x)
Similar to Raft-based ordering, Apache Kafka is a CFT implementation that
uses a “leader and follower” node configuration. Kafka utilizes a ZooKeeper
ensemble for management purposes. The Kafka-based ordering service has
been available since Fabric v1.0, but many users may find the additional
administrative overhead of managing a Kafka cluster intimidating or
undesirable.
• Solo (deprecated in v2.x)
The Solo implementation of the ordering service is intended for test only
and consists only of a single ordering node. It has been deprecated and may
be removed entirely in a future release. Existing users of Solo should move
to a single node Raft network for equivalent function.
Raft

For information on how to customize the orderer.yaml file that determines the
configuration of an ordering node, check out the Checklist for a production
ordering node.

The go-to ordering service choice for production networks, the Fabric
implementation of the established Raft protocol uses a “leader and follower”
model, in which a leader is dynamically elected among the ordering nodes in a
channel (this collection of nodes is known as the “consenter set”), and that
leader replicates messages to the follower nodes. Because the system can
sustain the loss of nodes, including leader nodes, as long as there is a majority
of ordering nodes (what’s known as a “quorum”) remaining, Raft is said to be
“crash fault tolerant” (CFT). In other words, if there are three nodes in a
channel, it can withstand the loss of one node (leaving two remaining). If you
have five nodes in a channel, you can lose two nodes (leaving three remaining
nodes). This feature of a Raft ordering service is a factor in the establishment
of a high availability strategy for your ordering service. Additionally, in a
production environment, you would want to spread these nodes across data
centers and even locations. For example, by putting one node in three different
data centers. That way, if a data center or entire location becomes unavailable,
the nodes in the other data centers continue to operate.

From the perspective of the service they provide to a network or a channel, Raft
and the existing Kafka-based ordering service (which we’ll talk about later) are
similar. They’re both CFT ordering services using the leader and follower
design. If you are an application developer, smart contract developer, or peer
administrator, you will not notice a functional difference between an ordering
service based on Raft versus Kafka. However, there are a few major differences
worth considering, especially if you intend to manage an ordering service.

• Raft is easier to set up. Although Kafka has many admirers, even those
admirers will (usually) admit that deploying a Kafka cluster and its
ZooKeeper ensemble can be tricky, requiring a high level of expertise in
Kafka infrastructure and settings. Additionally, there are many more
components to manage with Kafka than with Raft, which means that there
are more places where things can go wrong. Kafka also has its own versions,
which must be coordinated with your orderers. With Raft, everything is
embedded into your ordering node.
• Kafka and Zookeeper are not designed to be run across large networks.
While Kafka is CFT, it should be run in a tight group of hosts. This means
that practically speaking you need to have one organization run the Kafka
cluster. Given that, having ordering nodes run by different organizations
when using Kafka (which Fabric supports) doesn’t decentralize the nodes
because ultimately the nodes all go to a Kafka cluster which is under the
control of a single organization. With Raft, each organization can have its
own ordering nodes, participating in the ordering service, which leads to a
more decentralized system.
• Kafka is not supported natively, which means that users are required to get the
requisite images and learn how to use Kafka and ZooKeeper on their own.
Likewise, support for Kafka-related issues is handled through Apache, the
open-source developer of Kafka, not Hyperledger Fabric. The Fabric Raft
implementation, on the other hand, has been developed and will be
supported within the Fabric developer community and its support apparatus.
• Where Kafka uses a pool of servers (called “Kafka brokers”) and the admin
of the orderer organization specifies how many nodes they want to use on a
particular channel, Raft allows the users to specify which ordering nodes
will be deployed to which channel. In this way, peer organizations can make
sure that, if they also own an orderer, this node will be made a part of the
ordering service of that channel, rather than trusting and depending on a
central admin to manage the Kafka nodes.
• Raft is the first step toward Fabric’s development of a byzantine fault
tolerant (BFT) ordering service. As we’ll see, some decisions in the
development of Raft were driven by this. If you are interested in BFT,
learning how to use Raft should ease the transition.

For all of these reasons, support for Kafka-based ordering service is being
deprecated in Fabric v2.x.

Note: Similar to Solo and Kafka, a Raft ordering service can lose transactions
after acknowledgement of receipt has been sent to a client. For example, if the
leader crashes at approximately the same time as a follower provides
acknowledgement of receipt. Therefore, application clients should listen on
peers for transaction commit events regardless (to check for transaction
validity), but extra care should be taken to ensure that the client also gracefully
tolerates a timeout in which the transaction does not get committed in a
configured timeframe. Depending on the application, it may be desirable to
resubmit the transaction or collect a new set of endorsements upon such a
timeout.
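The client-side pattern suggested in this note can be sketched like this, with submit() and wait_for_commit() standing in as hypothetical placeholders for real SDK calls (submitting to the gateway and listening on peers for commit events, respectively):

```python
def submit_with_retry(submit, wait_for_commit, retries=2, timeout=30):
    """Sketch of the recommended client behaviour: submit the transaction,
    wait for a peer commit event, and resubmit if the event never arrives
    within the configured timeout."""
    for _ in range(retries + 1):
        submit()                       # placeholder: send to the ordering service
        if wait_for_commit(timeout):   # placeholder: listen for the commit event
            return True
    return False                       # give up after the final attempt
```

Depending on the application, the retry step might instead collect a fresh set of endorsements before resubmitting.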

Raft concepts

While Raft offers many of the same features as Kafka — albeit in a simpler and
easier-to-use package — it functions substantially differently under the covers
from Kafka and introduces a number of new concepts, or twists on existing
concepts, to Fabric.
Log entry. The primary unit of work in a Raft ordering service is a “log entry”,
with the full sequence of such entries known as the “log”. We consider the log
consistent if a majority (a quorum, in other words) of members agree on the
entries and their order, making the logs on the various orderers replicated.

Consenter set. The ordering nodes actively participating in the consensus


mechanism for a given channel and receiving replicated logs for the channel.

Finite-State Machine (FSM). Every ordering node in Raft has an FSM and
collectively they’re used to ensure that the sequence of logs in the various
ordering nodes is deterministic (written in the same sequence).

Quorum. Describes the minimum number of consenters that need to affirm a


proposal so that transactions can be ordered. For every consenter set, this is
a majority of nodes. In a cluster with five nodes, three must be available for
there to be a quorum. If a quorum of nodes is unavailable for any reason, the
ordering service cluster becomes unavailable for both read and write operations
on the channel, and no new logs can be committed.
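The quorum arithmetic above can be written out directly: a quorum is a strict majority, so a consenter set of n nodes tolerates n minus quorum(n) crash failures.

```python
def quorum(n):
    """Minimum number of consenters that must be available: a strict
    majority, i.e. floor(n/2) + 1."""
    return n // 2 + 1

def max_crash_failures(n):
    """Nodes a consenter set of size n can lose while keeping a quorum."""
    return n - quorum(n)
```

This is why odd-sized consenter sets are the usual choice: adding a fourth node to a three-node set raises the quorum without raising fault tolerance.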

Leader. This is not a new concept — Kafka also uses leaders — but it’s critical
to understand that at any given time, a channel’s consenter set elects a single
node to be the leader (we’ll describe how this happens in Raft later). The leader
is responsible for ingesting new log entries, replicating them to follower
ordering nodes, and managing when an entry is considered committed. This is
not a special type of orderer. It is only a role that an orderer may have at
certain times, and then not others, as circumstances determine.

Follower. Again, not a new concept, but what’s critical to understand about
followers is that the followers receive the logs from the leader and replicate
them deterministically, ensuring that logs remain consistent. As we’ll see in our
section on leader election, the followers also receive “heartbeat” messages from
the leader. In the event that the leader stops sending those message for a
configurable amount of time, the followers will initiate a leader election and
one of them will be elected the new leader.
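The follower behaviour described above can be sketched as a tiny state machine: a follower that misses heartbeats for longer than the election timeout becomes a candidate and increments its term. This is a toy illustration of the trigger only; real Raft elections also involve vote requests and a majority of votes among the consenter set.

```python
class RaftNode:
    """Minimal sketch of a Raft node's heartbeat-driven role changes."""

    def __init__(self):
        self.role = "follower"
        self.term = 0
        self.last_heartbeat = 0.0

    def on_heartbeat(self, now):
        """A heartbeat from the leader resets the election clock."""
        self.last_heartbeat = now
        if self.role == "candidate":
            self.role = "follower"   # a live leader exists: step down

    def tick(self, now, election_timeout):
        """If the leader has gone silent too long, start an election."""
        if self.role == "follower" and now - self.last_heartbeat > election_timeout:
            self.role = "candidate"
            self.term += 1           # each election begins a new term
```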

Raft in a transaction flow


Every channel runs on a separate instance of the Raft protocol, which allows
each instance to elect a different leader. This configuration also allows further
decentralization of the service in use cases where clusters are made up of
ordering nodes controlled by different organizations. Ordering nodes can be
added or removed from a channel as needed as long as only a single node is
added or removed at a time. While this configuration creates more overhead in
the form of redundant heartbeat messages and goroutines, it lays necessary
groundwork for BFT.

In Raft, transactions (in the form of proposals or configuration updates) are


automatically routed by the ordering node that receives the transaction to the
current leader of that channel. This means that peers and applications do not
need to know who the leader node is at any particular time. Only the ordering
nodes need to know.

When the orderer validation checks have been completed, the transactions are
ordered, packaged into blocks, consented on, and distributed, as described in
phase two of our transaction flow.

Channel, one of the key abstractions in Hyperledger Fabric — what is it for?


What is a channel?

If I were asked to describe what a channel is in one sentence, I would say that it is
an abstraction that forms a permissioned boundary around the ledger and the
chaincodes.
Before I move to an example use case, let’s have a look at what a channel is in a
more technical overview.

Channel is one of the basic building blocks in Hyperledger Fabric. You


cannot have a network without at least one channel. In fact, one of the first things to
do when bootstrapping a new network is creating the system channel that is used
to propagate configuration about the network and its members. It also contains
system chaincodes you can use throughout the life of the network.

There is also another type of channel — an application channel. From a


developer's perspective, it's the main and most commonly used one. You can have
as many of them as you want and each of them can have multiple chaincodes
deployed. Whenever you want to create a new channel, you have to specify what
members will have access to it, and what permissions they will have. For
example, you could limit some clients to just querying the ledger, which could be
useful for auditing.

Business case

Let’s say we are building an application for a Law Firm. They came to us and
shared that they are handling many cases for their clients and need to exchange
lots of paper documents before the interested parties are ready to sign any of
them. Unfortunately, some documents get lost, and sometimes there are changes
someone was not aware of; in short, they have a lot of problems everyone would
like to avoid. On top of that, clients get mad, so we have to do something or
the firm will start losing clients.

We proposed to build an application based on blockchain technology, so they and


their clients would be able to cooperate on documents, track all changes,
and verify document integrity at any point in time. That way, we could avoid a
lot of misunderstandings.

After a surprisingly short amount of time, we are ready to deploy a pilot. Our
biggest client agreed to take part in it. Let’s name them Client A. We have all
Hyperledger Fabric pieces in place, but what I would like to focus on are peer
nodes. They will have copies of the ledger where the application will store data,
like an audit of changes made to the documents. To give an assurance that the
data is truly valid, our client needs to have peer nodes on the infrastructure owned
by them. All that is left right now is to join all peer nodes with our newly created
application channel where we have already deployed all relevant chaincodes.
Our client admitted they are very happy with the way the application is working.
Finally, there is no need to exchange so many paper documents and we can keep
track of everything related to them.

The next step for our Law Firm is to implement this solution with the next client,
let’s call them Client B. We have a working network with one application
channel that is used by us and Client A. We don’t want to share that channel with
Client B as they would be able to see everything from the ledger or maybe even
write data to it too. What we're going to do next is add peer nodes to the
infrastructure owned by Client B and create a new channel which will join our
and their peers. In that way, we will have two ledgers, one for each client we are
cooperating with, but only peers that are members of the channel will have a copy
of its ledger.

I bet you already see the pattern here. Every time we implement a solution with
another client, they need to set up peer nodes on their side, then we join them
to a newly created channel and the ledger is stored only on the participating peer
nodes. Though in our example, we use a channel to communicate just between
two organizations, it is possible to add more parties to a single channel.

As word about our application is spreading across the world, more and more
clients are interested in using it and we also landed a few new clients. We came
to a point where we noticed that our peer nodes are handling a lot of load and
joining them to another channel could be fatal to the performance of the network.

Thankfully, each of the peer nodes can be a member of a different set of


channels. Given that at the beginning we had three peers and each of them was a
member of two different channels, now we can spawn more peer nodes and
utilize them with new clients. That way, we can achieve some sort of sharding.
For the sake of the simplicity of the diagram, let’s say we have four peer nodes
and three clients. We can configure the network in the way that each channel will
be placed on two of our peers.
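The sharding idea above can be sketched as a simple round-robin assignment of channels to a fixed number of peer replicas (the peer and channel names are made up, and real deployments would also weigh load and organizational ownership):

```python
def assign_channels(peers, channels, replicas=2):
    """Spread each channel's ledger over a subset of peers, round-robin,
    so that no single peer has to join every channel."""
    assignment = {}
    for i, ch in enumerate(channels):
        assignment[ch] = [peers[(i + r) % len(peers)] for r in range(replicas)]
    return assignment
```

With four peers and three channels, each channel lands on two peers and no peer carries all three ledgers, which matches the load-spreading described above.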

Summary

As you can see, a channel is pretty much a basic building block of a Hyperledger
Fabric network, without it, you cannot build one. It also separates data in a secure
way, so only peers that are members of a particular channel, will store the data,
and only actors with proper permissions can read and write to the ledger of the
channel. The last thing to remember is that you can freely configure your peers so
that each of them can know about a different set of channels; in that way, the
network can be more failure-proof, or you can achieve sharding.
