
BLOCKCHAIN TECHNOLOGY

The new way for distributed organizations

F.J.B.Carvalho, F.Varela, P.M.F.Sendim

July 2023

Abstract

When it comes to building a new system, abstraction stands as the first player in defining the main concepts. It is the main entry point to reduce complexity, to allow a progressive understanding of the fundamental building blocks, and to design potential solutions.

Each significant system project ultimately tries to respond to one or more specific needs: the basic concept of utility. Precisely defining these needs is the first step to engage with and explore possible response spaces.

The present paper will first take this path of reflection and abstraction. It will then focus on and expose a new approach to a distributed collaborative system and a shared immutable data persistence system: the YPE Blockchain Network.

Today the Blockchain technology ecosystem is a constant subject of research and improvement. Enthusiasm and interest in the technology have increased since 2012, after the work carried out since 2009 and the early launch of the Bitcoin network (S. Nakamoto). It should be noted that several previous developments and works contributed to the adoption of this technology, in particular in matters of security (cryptography), distributed network technologies, and earlier attempts to create digital currencies (such as Adam Back’s famous Hashcash project).

Several players contribute significantly to the research and development of this technology, such as the Ethereum Network, defining new concepts, new utilities, and new application domains.

The YPE Blockchain is built from the ground up. Even though most of the existing solutions are amazing and inspiring, there was a need to understand and reconsider each technological subset involved. From the cryptographic primitives strictly tied to security needs, to the distributed network topology, the communication protocol, the data persistence layer, the actual consensus algorithms, and the Blockchain structure, everything was subject to fresh reflection and the developments it induced.

Following a brief state-of-the-art review, all relevant components of the overall structure of the YPE project are depicted. Where needed, theoretical notes explain and justify the assumptions considered important.

As a proof of concept of the solution and the introduced concepts, a simple client app was
created and is available in a testnet environment.

(The app will be briefly presented and is available for download at https://ypeblockchain.org)


Table of Contents

Abstract
Introduction
State of the Art
    A walk through the history of decentralized digital currency
    The Bitcoin Network case: A Decentralized Digital Revolution
    The Ethereum Network case: Empowering Decentralized Applications
    Distributed networks: Little History of Peer-to-Peer Topology
    Cryptography: The needed catalyst for Blockchain Technology's Security
The YPE Blockchain
    The Network Topology
    A New Set of Cryptographic Tools
        Theoretical support
        The JHash Hashing process
        The XCube Cryptographic tool set
    Transactions
    The Blockchain Structure
    A Client App in Testnet
Token Economics
Conclusion
References


Introduction

In the realm of system development, abstraction emerges as a fundamental tool that holds significant value. It acts as a guiding principle to reduce complexity and unlock the potential for building robust and scalable solutions. By abstracting away intricate details and focusing on high-level concepts, it allows developers to build a layered structure that fosters clarity, flexibility, and ease of maintenance.

At its core, abstraction enables developers to break down complex systems into manageable and
coherent components. By identifying and extracting the essential features and functionalities,
abstraction paves the way for a structured approach to system design. This structured approach
manifests as a layered architecture, where each layer encapsulates a specific set of
functionalities and interacts with other layers through well-defined interfaces.

For example, when creating an app that needs to connect to a remote machine, it is common to assume the availability of a set of functionalities to use the network hardware interface without knowing the details of how the card actually works, or of the low-level operations performed by the OS (in general, those functionalities live in underlying OS layers). As another example, everyone uses the concept in everyday life when driving a car. It doesn't require one to know all the technical details of the mechanical or electronic systems behind the car's command interface: those systems are abstracted layers for the driver. The command interface is all the driver really needs to know.

From a programmer's point of view, the layered structure that abstraction facilitates brings benefits to development. Firstly, it enhances modularity, allowing developers to isolate different
components and functionalities within distinct layers. This modularity enables independent
development and testing of each layer, promoting parallelism and facilitating efficient
collaboration among development teams. Additionally, modularity makes it easier to replace or
upgrade individual layers without affecting the entire system, thereby improving maintainability
and extensibility.

Moreover, abstraction enables developers to establish clear boundaries between layers, promoting separation of concerns. The layered view of the system promotes comprehension, collaboration, and effective problem-solving, especially in large-scale projects.

Abstraction also enables system scalability. As the system evolves and grows in complexity, the
layered structure allows for the addition or modification of layers as needed. New functionalities
can be seamlessly integrated without disrupting the overall system. This scalability enables the
system to adapt to changing requirements, accommodate future enhancements, and maintain a
high level of performance and efficiency.

For the YPE Network, abstraction has played a vital role in system development by enabling the creation of layered structures. It was possible to distill complexity into simplified subsystems and manageable components, to enhance modularity, and to facilitate scalability. Abstraction has allowed a flexible design and a maintainable, comprehensible, and adaptable system.


Figure 1 - Main layers

Five main layers were considered for the YPE Blockchain project (fig. 1). The lower layer is composed of two main functionality groups. The Network group implements the connections with the network and maintains message pools for incoming and outgoing messages. Alongside it, the Cryptography group defines a set of primitives allowing data integrity control, encryption/decryption functions, digital signing, and identity verification.

The Protocol/Messaging Layer defines the discovery service allowing peers to find other nodes to connect to the network. This layer is responsible for defining the initial handshake that lets two connected peers first exchange identification and security parameters for encrypted communications. Message structures are defined and verified at this layer.

An integrity checksum is included in each message. This checksum and the message structure are continuously controlled. Messages are discarded upon unrecognized structures, invalid checksums, or duplication. The overall logic of the peer-to-peer network defines a segmented overlay structure.


The Data Layer represents the effective information about the Blockchain, the transactions, and all the necessary so-called logistic information (connections, node personal data, etc.). It is a central layer: it receives information, uses routines of the underlying layers, and maintains all the information needed by the consensus layer.

The consensus layer is the most critical layer of a Blockchain. Its function consists in ensuring that transactions are validated, ordered, and agreed upon by all peers in the network. It establishes a clear set of agreements between nodes that follow the rules to validate transactions and create blocks. As further exposed, the consensus takes advantage of the segmented nature of the network overlay.

The application layer is the support for decentralized applications that interact with the YPE Blockchain Network through APIs. It will not be restricted to wallets. This layer aims to be the open door allowing interactivity with external YPE client services and distributed apps. In the YPE Blockchain, distributed processing aims to allow full on-chain processing services. The approach will make possible the development of actual utilities (like contracts, DeFi, etc.) and intends to offer a general-purpose distributed processing platform.

One can verify that the cryptographic functionality group should remain transversal up to the consensus layer.


State of the Art

A walk through the history of decentralized digital currency

Today, it is almost impossible to dissociate the concepts of Blockchain and cryptocurrencies. And yet, the Blockchain system appeared before cryptocurrencies. It is also the underlying technology that today supports all the elements that constitute and operate in this sector. But where does it really come from, and how does it manage to carry out the significant flow of transactions that happens every day?

For the short history of Blockchain technology, we have to go back to the early 1980s to hear about the concept for the first time. The system was initially proposed by David Chaum, an American computer scientist who proposed carrying out anonymous transactions thanks to cryptography. The idea was picked up in the early 1990s by two engineers, Stuart Haber and Scott Stornetta, who were studying systems based on cryptographically protected chains of records. Their goal was to find a system that could secure timestamped documents and protect them from tampering or backdating.

Other attempts were made. In 1997, Adam Back invented Hashcash, initially a way to prevent email spam. Even in 1997, email spam was a significant nuisance to anyone with an email account. The fundamental problem was simple: all that was needed to send an email to someone was to send a bunch of packets to an email server. Sending an email was inexpensive, essentially free. Honest users only ever sent a handful of emails a day, and yet spammers could send tens of millions at a cost of almost nothing. The idea was then to raise the cost of the operation.

Consider, for example, that sending an email cost one penny. To a normal user, with a normal number of emails sent per day, this wouldn't be critical; but for a spammer it would represent up to millions of pennies, making spam campaigns impossible. That was the core idea. Since digital currencies didn't exist yet, Adam's Hashcash system circumvented this and solved the problem by introducing a new core concept: proof-of-work.

Finally, it was in 2009 that Satoshi Nakamoto proposed the very first concrete application of Blockchain technology: Bitcoin, inspired by the developments introduced in the Hashcash project. Among other concerns, the Bitcoin project proposed a secure solution preventing the double-spending of digital currency tokens.

The cryptography system conceptualized by David Chaum, implemented by Nakamoto, brought to the world the very first decentralized digital monetary system, operating on peer-to-peer technology.

The Blockchain is thus presented as a public digital register, which no one can manipulate and in which everyone can participate. This technology is powerful, not only because of everything it enables in the world of cryptocurrencies with Bitcoin and Ethereum, but also for all that it brings to many fields and industries.

The concept of decentralized digital currency has revolutionized the way we perceive and engage with traditional financial systems. Born out of a desire for financial freedom, privacy, and transparency, decentralized digital currencies have emerged as a viable alternative to centralized financial institutions. Let's review the fascinating history of decentralized digital currency, from its humble beginnings to its current impact on the global economy.

The true usable genesis: Bitcoin

The story of decentralized digital currency begins with the emergence of Bitcoin in 2009. An
enigmatic individual or group known as Satoshi Nakamoto introduced Bitcoin as a peer-to-peer
electronic cash system, challenging the traditional centralized banking model.

Built on the principles of Blockchain technology, Bitcoin provided a decentralized, peer-to-peer network where transactions could be verified and recorded by multiple participants, eliminating the need for intermediaries.

Bitcoin faced significant initial skepticism, but its potential soon captured the attention of technology enthusiasts and early adopters. A famous early milestone came in 2010, when Laszlo Hanyecz purchased two pizzas for 10,000 bitcoins, marking the first documented real-world purchase with the digital currency. The subsequent progressive growth of cryptocurrency exchanges and the increasing acceptance of Bitcoin as a form of payment laid the foundation for the rapid expansion of decentralized digital currencies.

Expanding Horizons: Altcoins and Ethereum

As Bitcoin gained popularity, alternative cryptocurrencies, commonly referred to as altcoins,
began to emerge. These digital currencies sought to address limitations perceived in Bitcoin, such
as scalability and transaction speed. In 2015, Ethereum, created by Vitalik Buterin, introduced a
revolutionary Blockchain platform that enabled developers to build decentralized applications
(DApps) and execute smart contracts. Ethereum's launch marked a significant milestone in the
evolution of decentralized digital currencies, offering programmable money and expanding the
possibilities of Blockchain technology.

The ICO Boom and Regulatory Challenges:

The year 2017 witnessed an unprecedented surge in decentralized digital currencies. Initial Coin
Offerings (ICOs) became a popular method for Blockchain projects to raise capital by issuing
tokens to investors. However, the ICO boom also brought forth challenges as fraudulent projects
and scams proliferated, prompting regulatory bodies worldwide to take notice. Governments
and financial institutions started brainstorming on how to regulate and co-exist with this new
financial landscape.

Beyond Currency: DeFi and NFTs

Decentralized Finance (DeFi) emerged as a disruptive force in the decentralized digital currency
space. DeFi protocols leverage Blockchain technology to offer traditional financial services, such
as lending, borrowing, and yield farming, without the need for intermediaries. This innovative
approach has democratized access to financial services, providing opportunities to individuals.

Another groundbreaking development within the decentralized digital currency ecosystem is the Non-Fungible Token (NFT). NFTs represent unique digital assets that can be bought, sold, and traded on various Blockchain platforms. From digital art and virtual real estate to collectibles and music, NFTs have opened up new avenues for creators to protect ownership and monetize their work, revolutionizing the art and entertainment industries.


Where current trends point to

As decentralized digital currencies continue to evolve, their impact on the global economy
cannot be ignored. Governments and financial institutions are exploring central bank digital
currencies to harness the benefits of decentralized systems while trying to maintain regulatory
control. The integration of Blockchain technology into various industries, including supply chain
management, healthcare, and voting systems, holds the potential to enhance transparency,
security, and efficiency.

It’s a fact: Blockchain technology and its eventual derivatives are here to stay. Many collateral aspects have to be considered. The technology is today a source of intense research.

The Bitcoin Network case: A Decentralized Digital Revolution


Since its inception in 2009, the Bitcoin network has captivated the world with its groundbreaking
approach to decentralized digital currency. Powered by Blockchain technology, Bitcoin
introduced a peer-to-peer electronic cash system that operates without the need for
intermediaries. Here, we will delve briefly into the mechanics of the Bitcoin network, exploring
its key components, the process of transaction validation, and the role of miners in maintaining
the integrity of the system.

At the heart of the Bitcoin network lies the Blockchain, a distributed ledger that records all
transactions ever executed on the network. The Blockchain serves as a decentralized and
transparent record of ownership and transaction history. It consists of a chain of blocks, where
each block contains a set of transactions, a timestamp, and a unique identifier.

When a Bitcoin user initiates a transaction, it is broadcasted to the network and grouped with
other pending transactions into a block. Miners, the network participants, compete to solve a
complex mathematical puzzle known as proof-of-work. This puzzle requires substantial
computational power to solve, and the first miner to solve it is rewarded with newly minted
bitcoins.

Once a miner successfully solves the puzzle, the block is added to the Blockchain, and the
transactions within it are considered confirmed. The process of solving the puzzle and adding
the block to the Blockchain is called mining. It is worth noting that mining not only validates
transactions but also secures the network against fraudulent activities, as altering a block
requires an enormous amount of computational power and would require re-mining subsequent
blocks.
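
To make the mining loop concrete, here is a minimal proof-of-work sketch in Python. It is an illustration only: it uses a single SHA-256 pass (Bitcoin actually hashes the header twice) and a simplified difficulty rule of leading zero hex digits instead of Bitcoin's full target encoding:

    import hashlib

    def mine(header: bytes, difficulty: int) -> tuple[int, str]:
        """Search for a nonce such that SHA-256(header + nonce) starts
        with `difficulty` zero hex digits: costly to find, trivial for
        any node to verify."""
        prefix = "0" * difficulty
        nonce = 0
        while True:
            digest = hashlib.sha256(header + nonce.to_bytes(8, "big")).hexdigest()
            if digest.startswith(prefix):
                return nonce, digest
            nonce += 1

    # Toy header; a real one commits to the previous block's hash,
    # a Merkle root of the transactions, and a timestamp.
    nonce, digest = mine(b"prev_hash|merkle_root|timestamp", difficulty=4)

Raising the difficulty by one multiplies the expected search work by 16, while verification remains a single hash: this asymmetry is what makes altering a confirmed block prohibitively expensive.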

Decentralization is a crucial, key aspect of the Bitcoin network. Unlike traditional financial systems that rely on a centralized authority, Bitcoin operates on a decentralized network of computers called nodes. Special nodes, named full nodes, maintain a copy of the entire Blockchain and participate in the validation and propagation of transactions across the network.

The Bitcoin consensus algorithm (Proof-of-Work, PoW) provides security, immutability, and trust.

Bitcoin addresses play a vital role in ensuring security and privacy on the network. A Bitcoin
address is a unique alphanumeric string associated with a user's wallet. When initiating a
transaction, the sender includes the recipient's address, which acts as a pseudonym, providing a
certain level of privacy.


Additionally, Bitcoin employs cryptographic techniques, such as public-key cryptography, to secure transactions and wallets, making it extremely difficult for unauthorized parties to tamper with the funds or impersonate the owners.

Since its inception, the Bitcoin network has witnessed exponential growth and captured
significant attention. Bitcoin has not only provided an alternative decentralized currency but has
also paved the way for advancements in Blockchain technology and the emergence of various
cryptocurrencies.

The Bitcoin network represents a transformative leap in the world of finance and decentralized
systems. With its decentralized nature, transparent Blockchain, transaction validation through
mining, and robust security measures, Bitcoin has established itself as the first revolutionary,
effective and usable digital currency.

The Ethereum Network case: Empowering Decentralized Applications


The Ethereum network has emerged as a groundbreaking platform, revolutionizing the
Blockchain technology ecosystem beyond digital currency. Introduced in 2015, Ethereum
expanded the possibilities of decentralized systems by enabling the creation and execution of
smart contracts and decentralized applications (DApps). Let’s briefly explore the mechanics of
the Ethereum network, including its underlying structure, smart contracts, and the role of Ether
as the native cryptocurrency.

At the core of the Ethereum network lies the Ethereum Virtual Machine (EVM), a decentralized,
Turing-complete virtual machine. The EVM executes smart contracts, which are self-executing
agreements written in Solidity or other programming languages supported by Ethereum. Smart
contracts define the rules and conditions of an agreement, enabling automated and trustless
interactions between parties.

One of Ethereum's most significant contributions is the facilitation of decentralized applications (DApps). These are applications that run on the Ethereum Virtual Machine, leveraging its
decentralized infrastructure and smart contract capabilities. DApps span various sectors,
including finance, gaming, supply chain, and decentralized finance (DeFi), offering transparent,
secure, and censorship-resistant solutions.

Ether (ETH) is the native cryptocurrency of the Ethereum network. It serves multiple purposes within the ecosystem. Firstly, Ether acts as the incentive for the peers who validate transactions and secure the network. Under the original consensus algorithm, Proof-of-Work (PoW), miner nodes competed to solve complex mathematical puzzles, and the winner of each round was rewarded with newly minted Ether; a new block of transactions was then issued and added to the Ethereum Blockchain. Under the current Proof-of-Stake scheme (described below), a peer has to stake an amount of its own ETH balance to become a validator node, and the probability of a node being selected to propose and validate a block is proportional to its staked amount.

Secondly, Ether is also used to pay for transaction fees on the network. Whenever a user initiates
a transaction or executes a smart contract, a small amount of Ether is required to cover the
computational resources consumed. This mechanism ensures that the network remains efficient
and prevents spam or malicious activities.


Code execution is paid for as compensation for energy consumption. This brings protection against eventual malicious code. For example, if malicious code implements an infinite loop in a smart contract, the node in charge of execution stops once the executed computational steps have consumed the limit associated with the contract.
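
The mechanism can be sketched as a gas-metered execution loop; this is a schematic illustration of the general idea, not the EVM's actual instruction set or gas schedule:

    import itertools

    class OutOfGas(Exception):
        pass

    def execute(instructions, gas_limit: int) -> int:
        """Run a stream of (operation, cost) pairs, charging gas per step.
        Execution halts once the limit is consumed, so even an infinite
        loop in a contract cannot run forever."""
        gas = gas_limit
        for op, cost in instructions:
            if gas < cost:
                raise OutOfGas("gas limit reached; effects are reverted")
            gas -= cost
            op()  # perform the operation's effect
        return gas  # unused gas

    # Even an endless stream of steps halts at the limit.
    endless = (((lambda: None), 1) for _ in itertools.count())
    try:
        execute(endless, gas_limit=10_000)
    except OutOfGas as exc:
        print("halted:", exc)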

Ethereum 2.0, Towards a More Scalable and Sustainable Network:

To address the scalability limitations and energy consumption concerns associated with initially
used Proof-of-Work, Ethereum has transitioning to Ethereum 2.0, also known as Ethereum
Serenity. This upgrade introduces a new consensus mechanism called Proof-of-Stake (PoS) and
shard chains to enable higher transaction throughput.

With PoS, network participants, known as validators, lock up their Ether as collateral to propose and validate new blocks. Validators are chosen based on their stake, encouraging them to act honestly and avoid malicious behavior. This shift to PoS significantly reduces the network's energy consumption, enhances scalability, and makes Ethereum a more sustainable platform for DApps.

The Ethereum network has had a profound impact on various industries (decentralized finance,
non-fungible tokens, Smart contracts, etc.). Looking ahead, Ethereum continues to push the
boundaries of decentralized technology. Ongoing research and development aim to improve
scalability, enhance privacy features, and enable interoperability.

Distributed networks: Little History of Peer-to-Peer Topology


Little history but a great and fundamental technology. Peer-to-peer (P2P) networks have
revolutionized the way we connect, share information, and collaborate online. By allowing direct
communication and resource sharing between individual devices, P2P networks have challenged
traditional client-server models and paved the way for decentralized and distributed systems.
They represent the playground of the vast majority of Blockchain projects.

The concept of P2P networks traces back to the early days of computer networking. In the 1970s
and 1980s, pioneers such as Vint Cerf and Bob Kahn laid the foundation for modern networking
protocols, including the Transmission Control Protocol/Internet Protocol (TCP/IP).

These protocols facilitated the exchange of data between individual devices, setting the stage for
peer-to-peer communication.

The emergence of Napster in the late 1990s marked a significant milestone in the history of P2P
networks. Napster, developed by Shawn Fanning, introduced a decentralized file-sharing
platform that allowed users to share music files directly with each other.

Despite legal challenges and subsequent shutdowns, Napster paved the way for subsequent P2P file-sharing systems like BitTorrent, Gnutella, and eDonkey.

BitTorrent, developed by Bram Cohen in 2001, revolutionized file-sharing by introducing a more efficient and scalable approach. Unlike traditional P2P file-sharing, where downloading from a
single source can be slow, BitTorrent breaks files into small pieces and enables users to download
different pieces from multiple sources simultaneously.

This distributed approach accelerated file transfers and reduced the strain on individual peers.


In the mid-2000s, the development of Distributed Hash Tables (DHTs) further enhanced the
capabilities of P2P networks. DHTs enabled decentralized indexing and resource discovery,
eliminating the need for centralized servers.

Systems like BitTorrent's DHT and the Kad network allowed peers to locate and connect with
each other directly, facilitating efficient file sharing and decentralized applications.

Advantages of P2P Networks:

 Scalability: P2P networks can scale effectively as more participants join, as each peer
contributes to the overall network capacity and resources.
 Decentralization: P2P networks eliminate single points of failure, reducing dependency
on central servers and making them more resilient to outages or attacks.
 Efficient Resource Utilization: P2P networks leverage the resources of individual peers,
enabling efficient content distribution and reducing the load on centralized
infrastructure.
 Reducing Costs: By eliminating the need for dedicated servers, P2P networks can
significantly reduce infrastructure and maintenance costs.
 Anonymity and Privacy: P2P networks offer increased privacy and anonymity for users,
as communication occurs directly between peers without intermediaries.

Drawbacks of P2P Networks:

 Security Risks: P2P networks can be susceptible to security vulnerabilities, as they rely
on peers to validate and distribute content, which may include malicious files or harmful
software.
 Quality and Reliability: The reliability and quality of content in P2P networks depend on
the availability and integrity of participating peers. In some cases, this can lead to
inconsistent or unreliable access to resources.
 Legal and Copyright Concerns: P2P networks have often faced legal challenges due to
copyright infringement issues, as they enable the sharing of copyrighted material
without permission.
 Limited Network Control: The decentralized nature of P2P networks makes centralized oversight, policy enforcement, and content moderation difficult.

Cryptography: The needed catalyst for Blockchain Technology's Security


Cryptography, the art and science of secure and secret communication, has a rich history
spanning thousands of years. From ancient civilizations to the modern digital era, cryptography
has played a vital role in safeguarding sensitive information. Here we will briefly explore the main steps in the history of cryptography, leading up to its critical importance in the emergence of Blockchain technologies.

Ancient Origins and Early Developments: The roots of cryptography can be traced back to ancient
times. Ancient civilizations, such as the Egyptians, Greeks, and Romans, employed various
techniques to conceal messages. The Caesar cipher, for example, substituted each letter in a
message with a letter a fixed number of positions down the alphabet.

The Middle Ages witnessed the rise of more sophisticated cryptographic systems. Prominent
among them was the Vigenère cipher, developed by Giovan Battista Bellaso in the 16th century.


This polyalphabetic substitution cipher used multiple alphabets, significantly enhancing the
security of encoded messages.

The Birth of Modern Cryptography: The 20th century marked a turning point in the evolution of
cryptography. In World War II, the Enigma machine, a complex electromechanical device, was
used by the Germans to encrypt military communications. However, its encryption was
eventually cracked by codebreakers at Bletchley Park, led by Alan Turing, an event that had a
profound impact on the development of modern cryptography.

The advent of computers in the mid-20th century revolutionized cryptography. The Data
Encryption Standard (DES), introduced in the 1970s, became a widely used symmetric-key
encryption algorithm.

Later, the rise of asymmetric cryptography brought forward groundbreaking algorithms like RSA, Diffie-Hellman, and elliptic curve cryptography (ECC), providing secure key exchange schemes and digital signatures.

Emergence of Blockchain Technology and the Role of Cryptography: In 2008, the pseudonymous
figure known as Satoshi Nakamoto introduced the world to Bitcoin, a decentralized digital
currency built on Blockchain technology. Cryptography plays a central role in Bitcoin and
subsequent Blockchain platforms, ensuring the security, privacy, and integrity of the network.

Blockchain technology relies on cryptographic techniques to achieve its core functions. Hash
functions, such as SHA-256, ensure the immutability of data stored in blocks, making it virtually
impossible to alter previous transactions without detection. Public-key cryptography enables
secure digital signatures, verifying the authenticity of transactions and the identity of
participants.

Cryptography acts as the foundation of Blockchain security, addressing key challenges such as
tamper resistance, data confidentiality, and secure transaction validation. It enables trust in
decentralized networks without relying on a central authority.

In addition to the fundamental cryptographic primitives, emerging developments like zero-knowledge proofs, homomorphic encryption, and multi-party computation are pushing the boundaries of Blockchain security. Zero-knowledge proofs allow the verification of information without revealing the underlying data, enhancing privacy and scalability.

Homomorphic encryption enables computation on encrypted data, preserving confidentiality. Multi-party computation enables secure computation among multiple parties without sharing sensitive inputs.

These advancements in cryptography empower Blockchain technology to expand beyond digital currencies. Blockchain-based solutions are being developed for supply chain management, healthcare records, voting systems, and decentralized finance (DeFi).

Cryptography ensures the confidentiality of sensitive data, the integrity of transactions, and the
privacy of participants in these applications.


The YPE Blockchain

Distributed systems technology, considered in the Web3 context, leads to significant concerns along three major axes: security, decentralization, and scalability. Different approaches have evidenced solutions based on different trade-offs among these axes.

Blockchain technology attracts a great deal of attention as an effective way to innovate business processes, since it materializes those different trade-off approaches. Several examples show that this technology can be integrated with other business process components through secure transactions and the dynamic behavior introduced by runnable contracts.

The current integration efforts are at an early stage, and the difficulties found relate essentially to speed and scalability. To apply Blockchain technology to business processes efficiently, Blockchain processing bottlenecks must be identified and rethought rather than worked around. Everything relies heavily on the effectiveness of distribution and on the implementation of the consensus protocol, which poses a major challenge in business operations, especially time-critical ones.

Beyond possible solutions to processing bottlenecks, the increasing volume of persisted data must be considered, never forgetting validity, consistency, auditability, and availability concerns.

Our proposed Blockchain suggests an architecture to overcome the problems of time inconsistency and consensus, based on new and unique validation models.

The focus is kept on automatic load balancing among peers in every aspect. The architecture provides the persistency, validity, and auditability that Blockchain offers, with less latency and less energy consumption. The frequently seen full data replication has been, in some sense, considered a multi-centralized paradigm more than an effective distribution of data persistence.

The architecture also provides flexibility and introduces real potential for scalability. We consider the resulting system a new form of ledger: no need to be everywhere to be fully available anytime.

The Blockchain designation is maintained but, in fact, the proposed architecture reflects a set of parallel chains: a 2D chain or, more precisely, a Blockgrid rather than a Blockchain.

The present section introduces the YPE Network building blocks.


The Network Topology

Physical Layer

Figure 2 - Physical Network context

The YPE network assumes a peer-to-peer (P2P) layout. A P2P network is a decentralized communications model in which each participant has the same capabilities and any participant can initiate a communication session.

Unlike the client-server model, in which the client makes a service request and the server fulfills
the request, the P2P network model enables each node to function as both a client and server.
The YPE P2P network provides anonymized routing of network traffic, massive parallel capacity
for computing environments and distributed storage.

In the physical layer, node connections are achieved over real-world connections. By default, each client app using a node of the network is identified by its IP address (Internet Protocol address) and the port 19675. If two or more clients belong to the same LAN, subsequent ports above 19675 are assigned depending on availability.

Once a connection is established, each node maintains a list of known peers for future reconnections. This potentially reduces the processing needs of the initial discovery mechanisms. This information will eventually be shared with newcomers.

The physical layer of the network is by nature unstructured, since it is strictly tied to an effective and non-controllable set of connected hardware whose connections, setups, and availability are not guaranteed.


Logical Layer

Figure 3 - Logical Layer

The logical layer only takes into account effective connections between peers, each identified by a unique ID. Locally, each node only needs to know how to communicate with a subset of peers.

A discovery algorithm allows each node to define a set of contacts (remote peers’ connections
detailed information gathered from the physical network context).

This classical first approach (depicted in fig. 3) is unstructured and negatively impacts security,
network load, and manageability.

To solve this issue, the YPE P2P logical layer introduces a multi-purpose segmentation of the main logical layer: each segment is designated as a Bucket.

Everything in the YPE P2P Logical network belongs to a Bucket.

But: What is a bucket?

Simply put, a bucket is defined by a hashing process based on XOR-folding multi-byte IDs down to a tiny hash (reduced to one byte). Any object involved, interacting, or participating in the network activity has an ID or hash ID, so anything can be assigned a bucket. The methods used for bucket assignment were tested for uniformity of distribution.
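
As an illustration, XOR-folding a multi-byte ID down to one byte can look like the sketch below; the exact folding scheme and the distribution tests used by YPE are not detailed at this point, so treat the specifics as assumptions:

    import hashlib

    def bucket_index(object_id: bytes) -> int:
        """Fold an arbitrary-length ID to a single byte by XOR-ing
        all of its bytes together: a bucket index in 0..255."""
        acc = 0
        for b in object_id:
            acc ^= b
        return acc

    # Any object with an ID or hash ID can be assigned a bucket.
    tx_hash = hashlib.sha256(b"some transaction payload").digest()
    print(bucket_index(tx_hash))

If the IDs are themselves uniformly distributed hash outputs, the XOR fold preserves a uniform spread over the 256 buckets.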

At the connection level between peers, buckets define a network segmentation that reduces network load in operations involving broadcasts. Priority is given to intra-bucket operations, and inter-bucket flows are significantly reduced.

At the data storage level, a very significant reduction of the effective volume of data to store at
each node is achievable without compromising a sufficient level of the needed redundancy and
availability.


Load, scalability, and the expected reduction of energy needs all take advantage of the logical segmentation layer assumed by the YPE network.

Every peer has, as already said, a unique ID and is therefore assigned to a specific bucket. The same applies to transactions: each transaction is identified by a hash and is therefore assigned to a specific bucket index.

Figure 4 - Buckets: Logical Network Segmentation

Buckets can be conceived and materialized as sets of logical ports limited to 256 elements. Each node in the network maintains its own bucket list and is assigned to a specific bucket (example in fig. 4).

Given a peer A, a peer B is considered "internal" if it belongs to the same bucket as A (otherwise, B is said to be "external" relative to A).

Regarding transactions, the same terminology is used. Upon creating a transaction, a peer doesn't control in which bucket the resulting object will be processed.

Along with auto-balanced processing and data storage, the bucket structure is a gain in terms of security.


Figure 5 - Broadcasting a transaction in a bucket context

Due to the logical segmentation of the overall logical network layer, transactions do not need exhaustive broadcast to all peers. The same way each node is assigned a bucket, transactions too are associated with bucket ports through their identifying hash.

The assignment of an object to a bucket is always controlled by a tested uniform distribution algorithm (which, besides helping address security concerns, enables auto-balancing of computation and data persistence).

It is stated that the subsequent storage, verification and integration operations should be
assumed only by nodes belonging to the same bucket.

In the example of fig. 5, the local node produces a transaction, which is finalized with the sender's signing process and the computation of the transaction hash. Then the bucket index the transaction should be assigned to is computed.

Finally, the broadcast occurs to the connected peers associated with the same bucket index. Additionally, the recipient is located and informed of the transaction emission (possibly via a cross-bucket notification).
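
Putting the pieces together, the emission flow of fig. 5 can be sketched as follows, reusing the bucket_index fold shown earlier; the message fields and the peer interface are illustrative assumptions, not the actual YPE protocol:

    import hashlib

    def broadcast_transaction(tx_payload: bytes, signature: bytes, peers_by_bucket: dict) -> int:
        """Finalize a signed transaction and broadcast it only to the
        connected peers assigned to the transaction's own bucket."""
        tx_hash = hashlib.sha256(tx_payload + signature).digest()
        bucket = bucket_index(tx_hash)  # XOR fold defined earlier
        for peer in peers_by_bucket.get(bucket, []):
            peer.send({"type": "TX", "hash": tx_hash, "body": tx_payload})
        # The recipient is then located and notified, possibly across buckets.
        return bucket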


A New Set of Cryptographic Tools

Modern cryptography includes all theoretical and technical developments surrounding information security. In fact, information security is much more than cryptography.

The common security requirements of modern cryptography are developed around the concepts of secrecy, authenticity, integrity, and non-repudiation.

Today a fantastic set of already developed cryptographic algorithms is available to build protocols for systems where security is a fundamental key.

This section only mentions major algorithms in very frequent use, without pretending to be exhaustive. All of them have proven to be efficient and secure, no question about that. But couldn't we maintain, or even raise, the level of security while reducing processing needs, processing time and, by consequence, energy consumption?

SHA 256:

SHA-256 is part of the SHA-2 family of algorithms (SHA stands for Secure Hash Algorithm). Started in 2001, it was a joint effort between the NSA and NIST to define an alternative to the SHA-1 family, which was losing strength against brute-force attacks. The reason why SHA-256 is a computationally intensive algorithm is the mathematical operations it performs. It applies a series of bitwise operations and logical functions to the input data in a specific order, requiring many CPU cycles to complete. The algorithm's operations are designed to consume significant time and resources, making it computationally expensive to execute. SHA-256 is in most cases at the base of the cryptographic puzzles involved in PoW consensus algorithms.

ETHASH:

Ethash comes from a modified version of the Dagger-Hashimoto algorithm. Ethash proof-of-work is memory-hard (a function that costs a significant amount of memory in order to reduce the advantage of raw computational effort). Ethash served the proof-of-work process of the original Ethereum network but was switched off from the current Ethereum consensus process to improve the rate of transaction validation and reduce energy cost and environmental footprint. Ethash is still used to mine other coins on other, non-Ethereum proof-of-work networks. The computational effort put into the algorithm is less than SHA-256's but still significant.

Scrypt:

Scrypt is a cryptographic memory-hard hash function and was specifically designed to be more
memory-intensive than computationally-intensive, meaning that its computation requires a
large amount of memory and relatively less CPU power. This is done to make it difficult for an
attacker to perform precomputation attacks and to reduce the advantage of using specialized
hardware to accelerate the hash computations. The reason why Scrypt is a memory-intensive
algorithm is its use of a large "memory-hard" buffer. This memory buffer is filled with random
data, and the algorithm repeatedly makes memory-access operations at random locations in the
memory buffer to perform mathematical operations on the data.


RIPEMD160:

The RIPEMD160 hash process is an evolution of the first RIPEMD, whose first implementation revealed some design flaws leading to security problems (one of which was the output size: too small and easy to break). RIPEMD160 increases the output length to 160 bits and raises the security level of the hash function. It is designed to work as a replacement and, like the others, involves significant computational effort. In the case of the Bitcoin network, RIPEMD160 is used in conjunction with SHA-256 to produce the first generation of Bitcoin addresses from public-key hashes.

Elliptic curves:

The application of elliptic curves in cryptography was proposed by V. Miller and N. Koblitz in the 1980s. ECC (elliptic curve cryptography) offers shorter key lengths when compared to other public-key cryptosystems like RSA. ECC security is correlated with the hardness of the discrete logarithm problem over the additive group of points on an elliptic curve over finite fields (yes, many math notions! ...but a theoretical foundation that allows the production of shorter keys, at higher speed and lower power consumption than RSA). A pair of keys, a public and a private key, is used by public-key cryptosystems to carry out cryptographic processes such as encryption/decryption of data and signing/verification of digital signatures. Scalar multiplication is considered the central time-consuming operation in ECC. To compute it, iterative ECC point operations must be performed, and their efficient implementation is essential to speed up the computation of scalar multiplication. It still involves a significant amount of computation.
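
For intuition, scalar multiplication k·P is typically computed with the double-and-add method, which needs only on the order of log2(k) point operations instead of k - 1 repeated additions. The sketch below abstracts the curve's point addition and doubling behind function parameters, since the concrete formulas depend on the curve; it is an illustration, not a production implementation:

    def scalar_mult(k: int, P, add, double, identity=None):
        """Double-and-add: computes k*P with O(log k) group operations.
        `identity` stands for the point at infinity."""
        result = identity
        addend = P
        while k > 0:
            if k & 1:  # this bit of k contributes the current addend
                result = addend if result is identity else add(result, addend)
            addend = double(addend)
            k >>= 1
        return result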

Sometimes reducing complexity in systems can result in amazing concepts. It's a fact: the fundamental laws of mathematics and science tend to assume simple expressions.

Let's consider, for example, the case of cellular automata. The concept was originally discovered in the 1940s by S. Ulam and J. von Neumann, but real studies only started in the 1960s. Only in the 1970s did J. Conway, through his famous Game of Life, trigger a huge interest in the subject, even beyond research institutes and academia. In the 1980s, Stephen Wolfram engaged in a systematic study of elementary cellular automata, and his research with M. Cook showed that simple systems can define amazing computing structures (in fact, simple assumptions were sufficient to model a computing machine: a Turing-complete computer model; in science, 2D and 3D cellular automata models are still used today to simulate physical experiments like Brownian motion, heat diffusion, and molecular interactions, among an endless list of experiments).

So, for the cryptographic group of functionalities to be used in the YPE Network, we stepped in the same direction, searching for simplicity without ever abandoning the goal of a high level of security.

Our proposal: the XCube and JHash concepts as a new kind of cryptographic tools. A theoretical foundation is first proposed; each algorithm is then considered in detail.


Theoretical support

Cryptography, an art and science

Cryptography, the practice of information secrecy and secure communication, has been a vital field of study for centuries. It encompasses the principles, techniques, and methods used to protect information from unauthorized access or modification. By employing various cryptographic algorithms and protocols, cryptography ensures the confidentiality, integrity, and authenticity of data in digital systems.

The fundamental concept behind cryptography is the use of cryptographic keys. These keys serve
as the basis for encryption and decryption processes. Encryption is the process of converting
plain, readable data (plaintext) into an unintelligible form (ciphertext), while decryption is the
reverse process of converting ciphertext back into plaintext. The security of a cryptographic
system largely relies on the secrecy and strength of the keys.

There are two primary types of cryptography:

 symmetric key cryptography and
 public key cryptography

Symmetric key cryptography, also known as secret key cryptography, employs a single shared key
for both encryption and decryption. The same key is used by both the sender and the receiver
to secure the communication. While symmetric key algorithms provide fast and efficient
encryption, the challenge lies in the secure key exchange/distribution between the authorized
parties.

Public key cryptography, on the other hand, makes use of a pair of mathematically related keys:
a public key and a private key. The public key is freely shared with others, while the private key
remains known only to the owner. Messages encrypted with the recipient's public key can only
be decrypted using their corresponding private key. Public key cryptography solves the key
distribution problem faced by symmetric key cryptography, as the public keys can be freely
distributed. However, it tends to be slower and computationally more expensive than symmetric
key cryptography.

Cryptography finds applications in various areas, including secure communication, data storage,
digital signatures, secure authentication, and secure electronic transactions. It plays a crucial role
in ensuring the security and privacy of sensitive information in everyday digital interactions, such
as online banking, e-commerce, email communication, and data transfers over the internet.

The strength of cryptographic systems is constantly challenged by advancements in computing power and evolving attack techniques. Therefore, it is crucial to regularly evaluate and update cryptographic algorithms.

Overall, cryptography is an essential tool: It provides the foundation for secure and trusted
transactions.


Kerckhoffs’s principle:

The cipher method must not be required to be secret, and it must be able to fall into the hands of the enemy without affecting the security of the cipher method.

This principle states that the security of an encryption protocol must not rely on the encryption algorithm or protocol procedures being unknown.

One may wonder how security is obtained while the eavesdropper knows all the details. But there is one thing the eavesdropper does not know: the key. In fact, the security of all classical encryption schemes depends on the key shared by the communicating parties remaining unknown to the eavesdropper.

Cryptography relevant factors:

In a cryptographic system, several factors are considered crucial to ensure its security and
effectiveness. Here are some key considerations and concerns when designing and evaluating a
cryptographic system:

 Confidentiality: Maintaining confidentiality is one of the primary objectives of cryptography. It involves protecting sensitive information from unauthorized access or interception by ensuring that only authorized parties can decrypt and access the data.
 Integrity: Cryptographic systems should ensure the integrity of the information,
meaning that data remains unchanged and unaltered during storage, transmission, or
processing. Techniques like message authentication codes (MACs) or digital signatures
help verify the integrity of the data.
 Authentication: Cryptographic systems should provide mechanisms to verify the identity
of communicating parties to prevent impersonation or unauthorized access. This is
typically achieved through the use of digital certificates, public key infrastructure (PKI),
or various authentication protocols.
 Non-repudiation: Non-repudiation ensures that a sender cannot deny sending a
message or a receiver cannot deny receiving it. Techniques like digital signatures or
secure timestamps help establish non-repudiation in cryptographic systems.
 Key Management: Proper key management is crucial for cryptographic systems. It
involves generating, distributing, storing, and revoking cryptographic keys securely.
Robust key management practices ensure that the keys are protected from unauthorized
access and are regularly updated to maintain the system's security.
 Algorithm Strength: The choice of cryptographic algorithms is critical. The algorithms
should be well-vetted, widely accepted, and resistant to various cryptographic attacks.
Regular scrutiny and analysis by the cryptographic community help ensure their strength
against current and future threats.
 Performance: While security is paramount, cryptographic systems should also be
efficient and provide reasonable performance. Balancing security requirements with the
system's operational requirements ensures that the system is practical and usable.
 Scalability: Cryptographic systems should be designed to accommodate future growth
and changing requirements. They should be scalable to support increased data volumes,
additional users, and evolving technologies without compromising security.


 Compliance and Standards: Adherence to established cryptographic standards.
 Continuous Monitoring and Evaluation: Cryptographic systems should undergo regular security assessments, vulnerability testing, and audits.

By addressing these concerns and incorporating them into the design, implementation, and
maintenance of cryptographic systems, organizations can create robust and secure solutions that
protect sensitive information and support secure communication and data exchange.

About Hashing

A hash function is a fundamental concept in cryptography and computer science. It is a mathematical function that takes an input, often referred to as the "message", and produces a fixed-size output called the "hash" or "digest". The primary purpose of a hash function is to quickly and efficiently map data of arbitrary size to a fixed-size output.

Ideally, a hash function should satisfy the following properties:

1. Deterministic: A hash function produces the same hash value for a given input every time. If
the input remains unchanged, the resulting hash value will always be the same.

2. Fixed Output Size: Hash functions have a fixed output size, regardless of the size of the input.
This property ensures that the resulting hash value has a consistent length, making it useful in
various applications.

3. Preimage Resistance: Given a hash value, it should be computationally infeasible to determine the original input. A closely related requirement is that it should be computationally infeasible to find two different inputs that produce the same hash value (collision resistance).

4. Small Input Change, Large Output Change: Even a tiny modification in the input should lead to
a significantly different hash value. This property, known as the avalanche effect, ensures that
even a minor change in the message will produce a completely different hash value.

5. Efficient Computation: Hash functions are designed to be computationally efficient, enabling quick calculation of the hash value for any given input. This efficiency is crucial for their widespread usage in various applications.

6. Pseudo randomness: A good hash function should produce outputs that appear random and
uniformly distributed, even though they are deterministically derived from the input. This
property is essential for security and preventing the discovery of patterns or vulnerabilities in
the hash function.

Hash functions find extensive applications in cryptography and data integrity verification. They
are commonly used to store passwords securely, verify the integrity of files or messages,
generate digital signatures, and create data structures like hash tables and hash-based data
structures. It is important to note that the security of hash functions relies on the strength of
their properties.
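
The determinism and avalanche properties are easy to observe with any standard hash; this snippet uses Python's hashlib with SHA-256 purely as a demonstration:

    import hashlib

    def h(msg: bytes) -> int:
        return int(hashlib.sha256(msg).hexdigest(), 16)

    # Deterministic: the same input always yields the same digest.
    assert h(b"ype") == h(b"ype")

    # Avalanche effect: a single-bit change in the input flips
    # about half of the 256 output bits.
    flipped = bin(h(b"ype") ^ h(b"ypd")).count("1")  # 'e' and 'd' differ in one bit
    print(f"{flipped} of 256 output bits changed")   # typically around 128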


Fundamental theoretical algebra concepts - Groups

The goal of these simplified theoretical notes is to give an introduction to the subject of group theory (a branch of algebra, sometimes called abstract algebra). One may first think of classic, basic algebra: addition, multiplication, solving equations, and so on. This first approach isn't wrong but covers only a very small part of what algebra really has to offer. Abstract algebra works, as the name suggests, in a much more abstract way! Rather than looking at the properties of specific operations computed on elements of specific sets, abstract algebra focuses on the properties of arbitrary operations and on the structure of sets.

To understand the presented forward XCube cryptographic tool set, inspired on a 4x4x4 Rubik’s
cube, it is first needed to consider some different properties of functions.

Functions and composition:

A function f from a domain D to a range R (we write f: D → R) is a rule which assigns to each
element x ∈ D a unique element y ∈ R. We write f(x)=y. y is said to be the image of x and x the
preimage of y. Note that an element in D has exactly one image, but an element of R may have
0 or more preimages.

A function f: D → R is called one-to-one if each element of R has at most one preimage.

A function f: D → R is called onto if every element of R has at least one preimage.

A function f: D → R is called a bijection if it is both one-to-one and onto; that is to say, if every element of R has exactly one preimage.

If two functions f and g are such that f: S1 → S2 and g: S2 → S3, then we can define a new function g ◦ f: S1 → S3 by (g ◦ f)(x) = g(f(x)). The operation ◦ is called composition.

Group notion:

A group (G, ∗) consists of a set G and an operation ∗ such that:

• G is closed under ∗. That is, if a, b ∈ G, then a ∗ b ∈ G.

• ∗ is associative. That is, for any a, b, c ∈ G, a ∗ (b ∗ c) = (a ∗ b) ∗ c.

• There is an identity element e ∈ G which satisfies e ∗ g = g ∗ e = g for all g ∈ G.

• Inverses exist; that is, for any g ∈ G, there exists an element h ∈ G such that g ∗ h = h ∗ g = e. (h is called an inverse of g.)

If a group (G, ∗) is such that ∗ is commutative, that is, g ∗ h = h ∗ g for any g, h in G, then the group is said to be an Abelian group (or commutative group).
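These axioms can be checked mechanically on any finite set. The following Python sketch verifies them by brute force and confirms, for example, that the integers modulo 6 form a group under addition but not under multiplication (0 has no multiplicative inverse):

from itertools import product

def is_group(elements, op):
    # Check the four group axioms by exhaustive enumeration.
    elements = list(elements)
    # Closure: a * b must stay inside the set.
    if any(op(a, b) not in elements for a, b in product(elements, repeat=2)):
        return False
    # Associativity: (a * b) * c == a * (b * c).
    if any(op(op(a, b), c) != op(a, op(b, c))
           for a, b, c in product(elements, repeat=3)):
        return False
    # Identity: some e with e * g == g * e == g for all g.
    ids = [e for e in elements if all(op(e, g) == g == op(g, e) for g in elements)]
    if not ids:
        return False
    e = ids[0]
    # Inverses: every g has some k with g * k == k * g == e.
    return all(any(op(g, k) == e == op(k, g) for k in elements) for g in elements)

n = 6
print(is_group(range(n), lambda a, b: (a + b) % n))  # True
print(is_group(range(n), lambda a, b: (a * b) % n))  # False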

Proposition: A group has exactly one identity element.

Proof: Let (G, ∗) be a group, and suppose e and e′ are identity elements of G (we know that G has at least one identity element by the definition of a group). Then, e ∗ e′ = e since e′ is an identity element. On the other hand, e ∗ e′ = e′ since e is an identity element. Therefore, e = e′ because both are equal to e ∗ e′.

Proposition: If (G, ∗) is a group, then each g ∈ G has exactly one inverse.

Proof: Let g ∈ G, and suppose g1 and g2 are inverses of g (we know there is at least one by the definition of a group); that is, g ∗ g1 = g1 ∗ g = e and g ∗ g2 = g2 ∗ g = e. By associativity, (g1 ∗ g) ∗ g2 = g1 ∗ (g ∗ g2). Since g1 is an inverse of g, (g1 ∗ g) ∗ g2 = e ∗ g2 = g2. Since g2 is an inverse of g, g1 ∗ (g ∗ g2) = g1 ∗ e = g1. Therefore, g2 = g1.

In general, we write the unique inverse of g as g⁻¹.

Subgroups

It is a general philosophy in group theory that, to understand a group G, it is frequently convenient to understand smaller parts of it.

A nonempty subset H of a group (G, ∗) is called a subgroup of G if (H, ∗) is itself a group.

The advantage of studying subgroups is that they may be much simpler to analyze while still revealing the structure of the whole group.

Note that if (G, ∗) is a group, a nonempty subset H of G is a subgroup of (G, ∗) if and only if, for every a, b ∈ H, a ∗ b⁻¹ ∈ H.

Generators

Let G be a group and S a subset of G. The set <S> of all finite products of elements of S and their inverses defines a subgroup of G, called the subgroup generated by S.

Let G be a group and S be a subset of G. S is said to generate G if G = <S>; that is, every element
of G can be written as a finite product (under the group operation) of elements of S and their
inverses.

Generators can be seen as the 'core' of the group, since every element of the group can be written in terms of the generators.

A group G is said to be cyclic if it is generated by a single element, that is, if G = <{g}> for some g ∈ G. Note that gⁿ corresponds to the composition of g with itself n times (it need not have anything to do with a classical multiplicative law). Intuitively, the naming is related to a notion of periodicity: in a finite cyclic group, the inverse of any g is equal to gⁿ for some natural number n.


The JHash Hashing process

The JHash algorithm implements a hash function based on the model of linear cellular automata with conditioned transitions from generation to generation.

A generalized cellular automaton (CA) is a model of a system of cell objects, commonly abstracted as a set of regions in space.

For the sake of simplicity, a two-dimensional grid is generally used to explain the underlying concepts. The system of considered cells has the following characteristics:

• The cells live on a grid (a cellular automaton can be considered and defined in a space with any finite number of dimensions).

• Each cell has a state. A state can be defined by any finite number of parameters, and each parameter can assume values in a finite discrete associated set. We will use the simplest example: a state defined by a single Boolean variable (two possibilities, 1 and 0, also referred to as ON or OFF, or "alive" or "dead").

• Each cell has a neighborhood (a set of adjacent cells, or a set of cells not far from the considered cell). This is usually defined by a list of adjacent cells (which seems reasonable, since the context, i.e. the neighborhood, of a cell can probably explain or influence any observable mutation of its state).

Imagine a set of simple rules based on each cell's neighborhood that define a new state for each cell in a given grid. This is essentially the process of a CA.

The state of the cells at a given instant T(k) is said to be the state at generation k. The resultant state of the overall grid at the following instant is the state of all cells after all the rules have been applied; each cell then assumes its state at instant T(k+1).

Successive generations' state mutations exhibit behavior similar to biological reproduction and evolution. In a restricted and simplified way, CA can, in certain circumstances, mimic nature's diversity.

The first developments of cellular automata systems are attributed to Stanisław Ulam and John
von Neumann, in the 1940s.

The best-known cellular automaton, John Conway’s “Game of Life” (1970), simulates the
processes of life, death, and population dynamics.

In the early 2000s, Stephen Wolfram conducted one of the most significant scientific studies on cellular automata. His work was published in 2002 in the book A New Kind of Science, and the entire study was made available for free online.

A major claim of the book is that CA are not simply neat tricks, but are relevant to the study of biology, chemistry, physics, and all branches of science, even when only simple rules drive the mutation process: a beautiful demonstration that, somehow, diversity isn't a matter of complexity and that simplicity is the way of nature.


For the purpose of our hash function JHash, we will consider the following simplified system (a sketch of one generation follows this list):

• A one-dimensional grid (a so-called linear cellular automaton).

• The grid is a space of 32 unit cells.

• Each cell state is represented by a bit value (0 or 1).

• Each cell state transition depends on its actual state and on the states of the 3 following cells (the state transition rule must then consider 4 state variables).
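As a sketch of what one generation of such an automaton looks like, the following Python code updates a 32-cell bit grid using a 16-bit rule table; rule 366, which appears in figs. 10 and 11, is used as the example. The exact neighborhood-to-bit mapping and the progressive (cell-by-cell) application shown in the figures are details of the actual specification; this sketch updates all cells simultaneously, as a simplifying assumption:

def ca_step(cells, rule):
    # One generation of a linear cellular automaton. The next state of
    # cell i is the bit of the 16-bit rule table indexed by the 4-bit
    # neighborhood (cells i, i+1, i+2, i+3), wrapping around at the end.
    n = len(cells)  # 32 in JHash
    nxt = []
    for i in range(n):
        idx = (cells[i] << 3) | (cells[(i + 1) % n] << 2) \
            | (cells[(i + 2) % n] << 1) | cells[(i + 3) % n]
        nxt.append((rule >> idx) & 1)
    return nxt

state = [0] * 31 + [1]
for _ in range(3):
    state = ca_step(state, 366)
    print("".join(map(str, state)))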

JHASH: Hashing Process

The JHash hashing function takes any message of finite length as input and produces a hash of constant size: 64 bytes (1 byte = an unsigned integer from 0 to 255, covering all possible 8-bit values). Any message is represented in the computer as a list of bytes. The process starts by subdividing the message into a list of blocks of 64 bytes each. Each block is then processed and combined with a state vector of 64 bytes. Fig. 6 depicts an overview of the process.

Figure 6 - JHash: Hashing process overview

Before processing the message's blocks, a common initial state is defined as follows:

Figure 7 - JHash: Initial state



A set of 256 state transition functions is defined. Since each cell state transition depends on its actual state and on the states of the 3 following cells, a transition rule is a truth table over 4 bits, i.e., a 16-bit value (2 bytes). Of the 65,536 possible such rules, a multi-criteria study has allowed the selection of the following 256 functions:

Figure 8 - JHash: Vector of transition functions vOP


Each block of the message consists of 16 segments of 32 bits each. Each segment is transformed independently using a transition function rule. The process of one segment transformation is described in figs. 9, 10, and 11.

Figure 9 - JHash: Applied transformation rule

Figure 10 - JHash: Progressive rule 366 function application


Wrapping around when computing the state changes of the last positions:

Figure 11 - JHash: Progressive rule 366 function application / wrapping

The process is repeated for each segment in each block of the message. Fig. 12 details the steps necessary to process an entire block.


Figure 12 - JHash: One message block processing


Once processed, each block is combined with the state vector as follows:

Figure 13 - JHash: Combining message blocks

The final set of bytes of the state vector defines the hash of the message.
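Putting the pieces together, the overall pipeline described above can be sketched as follows. The per-segment transformation and the block/state combination stand in for the actual operations specified in figs. 8 to 13 (XOR is used here only as a placeholder for the combination step, and the padding of the last block is an assumption of this sketch):

def transform_segment(seg: bytes) -> bytes:
    # Placeholder for the rule-driven 32-cell automaton transformation
    # (the real rules are the vOP table of fig. 8): a 1-bit rotation.
    v = int.from_bytes(seg, "big")
    return (((v << 1) | (v >> 31)) & 0xFFFFFFFF).to_bytes(4, "big")

def jhash_sketch(message: bytes) -> bytes:
    state = bytes(64)                    # placeholder for the initial state (fig. 7)
    padded = message + bytes(-len(message) % 64)   # pad to whole 64-byte blocks
    for off in range(0, len(padded), 64):
        block = padded[off:off + 64]
        # 16 segments of 32 bits (4 bytes) each, transformed independently.
        transformed = b"".join(
            transform_segment(block[i:i + 4]) for i in range(0, 64, 4))
        # Placeholder combination of the block with the state (fig. 13).
        state = bytes(s ^ t for s, t in zip(state, transformed))
    return state                         # the final state vector is the hash

print(jhash_sketch(b"hello YPE").hex())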


The XCube Cryptographic tool set

The XCube concept and implementation were first inspired by the Rubik's Cube in its 4x4x4 version.

First approach:

The 4x4x4 Rubik's cube is composed of virtually 64 small cubes, typically called 'cubies'. 56 of these cubies are visible (the 8 central cubies don't actually exist: the center space is reserved for the cube mechanism). The cubies in the corners are called 'corner cubies', and each one has 3 visible faces (there are 8 corner cubies). The cubies with two visible faces are called 'edge cubies' (there are 24 edge cubies). Finally, the cubies with a single visible face are called 'center cubies' (there are 24 center cubies).

It is also usual to name the 6 faces of the cube. Given a fixed position of the cube in front of an observer, the faces are called: right (r), left (l), up (u), down (d), front (f), and back (b). This naming scheme presents the advantage that each face can be referred to by a single letter.

Figure 14 - A 4x4x4 cube model

Just as for the cube faces, it is convenient to give names to the possible moves of the cube. The most basic move one can do is to rotate a single face. We will let R denote a clockwise rotation of the right face (looking at the right face, turn it 90° clockwise).

Similarly, we will use the capital letters L, U, D, F, and B to denote clockwise twists of the corresponding faces. More generally, we will call any sequence of these 6 face twists a 'move' or a 'transformation' of the cube. Fig. 15 shows the transformations of the cube state made possible by the mechanical construction of the cube.


It is important to note that corner cubies are never moved to edge or center positions, and edge cubies are never moved to center positions.

Figure 15 - Basic possible transformations of the cube

Due to these restrictions imposed by the physical construction of the cube, not all theoretically possible configurations are valid configurations.

We will say that a configuration of the cube is valid if it can be achieved by a series of moves
from a given starting configuration.

The count of possible configurations (states of the cube) is given by the following function:

f(n) = 8! · 3⁷ · (12! · 2¹⁰)^(n mod 2) · (24!)^⌊(n−2)/2⌋ · (24!/(4!)⁶)^⌊((n−2)/2)²⌋ · 24^(−((n+1) mod 2))

In the model of the 4x4x4 cube:

f(4) ≈ 7.4012 · 10⁴⁵
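The formula is easy to check numerically; a small Python sketch reproduces the value above, as well as the classic count for the 3x3x3 cube:

from math import factorial

def f(n: int) -> int:
    # Number of reachable configurations of an n x n x n cube.
    return (factorial(8) * 3**7
            * (factorial(12) * 2**10) ** (n % 2)
            * factorial(24) ** ((n - 2) // 2)
            * (factorial(24) // factorial(4) ** 6) ** (((n - 2) // 2) ** 2)
            // 24 ** ((n + 1) % 2))

print(f"{f(4):.4e}")  # 7.4012e+45
print(f"{f(3):.4e}")  # 4.3252e+19, the well-known 3x3x3 count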


Many studies have explored the possibility of using this and similar models as cryptographic tools, but none, to our knowledge, has taken the step of abstracting away the physical cube structure, an abstraction that proved necessary to overcome the cryptographic issues observed and reported for such schemes.

Abstraction from mechanical restrictions:

At first, a set of additional transformations was considered. Fig. 16 presents some of the functions that were tried.

Figure 16 - Complementary transformations examples


Finally, as a last abstraction step, we have kept the XCube name, but our model has become a simple vector of 96 byte values, and the transformations of the cube are now simply a predefined set of permutations (free of the structural limitations of the initial physical model of the cube).

Figure 17 - Applying one transformation using an abstracted structure

The transformation set was chosen in light of group theory, that is, such that the possible transformations (permutations) form a generating set of all possible permutations of the 96 visible cells, of which there are:

96! ≈ 9.9167 · 10¹⁴⁹
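In this abstracted model, a 'move' is nothing more than an index table, and a sequence of moves collapses into a single permutation. A minimal Python sketch follows (the two sample moves below are random permutations, purely illustrative; the real generator set is fixed by the design):

import random

CELLS = 96

def apply_perm(state, perm):
    # Apply one abstracted XCube transformation: a permutation of the cells.
    return [state[perm[i]] for i in range(CELLS)]

def compose(p, q):
    # The single permutation equivalent to applying p first, then q.
    return [p[q[i]] for i in range(CELLS)]

random.seed(1)
m1 = random.sample(range(CELLS), CELLS)   # hypothetical move 1
m2 = random.sample(range(CELLS), CELLS)   # hypothetical move 2

state = list(range(CELLS))
step_by_step = apply_perm(apply_perm(state, m1), m2)
collapsed = apply_perm(state, compose(m1, m2))
assert step_by_step == collapsed   # a move sequence is itself one permutation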

The Rubik’s Cube into a Group

Let's define a move (or transformation) as any finite-length sequence of fundamental moves.

Two moves will be considered equal if, applied to the same initial state, they result in the same final state.

The set of moves of the cube allows the construction of a group, which we will denote (G, ∗). The elements of G are all possible moves. The group operation is defined as follows: if M1 and M2 are two moves, then M1 ∗ M2 is the move where you first do M1 and then do M2 (the group operation is a composition, as presented in the group-theory notes above).

Proof:

• G is certainly closed under ∗ since, if M1 and M2 are moves, M1 ∗ M2 is a move as well by the
move definition.

• Let e be the 'empty' move (that is, do nothing; don't change the state of the cube); then M ∗ e means 'first do M, then do nothing'. This is certainly the same as just doing M, so M ∗ e = M. So, (G, ∗) has an identity element.

• If M is a move, the steps of the move can be reversed to get a move rM. Then, the move M ∗ rM means 'do M, then undo M with rM', which results in the initial state; that is to say, M ∗ rM = e, the identity element. rM is then the inverse of M.


• Finally, we must show that ∗ is associative. If S0 is the initial cube state, S1 the state after applying move m1, S2 the state after then applying m2, and S3 the state after then applying m3, then m3((m2 after m1)(S0)) = m3(S2) = S3 and, on the other hand, (m3 after m2)(m1(S0)) = (m3 after m2)(S1) = S3. Both groupings produce the same final state, so ∗ is associative.

Therefore, (G, ∗) is indeed a group.

(Note that (G, ∗) is not an Abelian group, due to the non-commutativity of move composition.)

Final considerations for the cube model:

A reduced set of bitwise operations complements the sequences of permutations. The model is able to produce private/public key pairs, implements a 48-byte hash function, is fast for signing, and enables the actual YPE network key exchange protocol. Fig. 18 shows an example of the cube after the bitwise operation.

Figure 18 - 4x4x4 base cube

The existing key exchange protocols are fundamentally based on multiplicative properties (even in the case of the discrete logarithm problem, since exponentiation is no more than an algebraic extension of multiplicative laws). On today's common, classical computers, those protocols are not at risk.

The problem is: how will computers evolve? Considering recent announcements about quantum computing, and studies and implementations of factorization algorithms like Shor's algorithm, protocols based on multiplicative or exponential laws could suffer severe security issues.

The XCube takes advantage of the composition of transformations to solve the key exchange problem: functional composition isn't a multiplicative law by design, and, given a resulting composition, the problem of finding its component transformations is NP-hard. At this stage of quantum computing, its design follows the research paradigm of post-quantum cryptosystems.

The XCube and JHash algorithms are intensively tested to fulfill all the requirements met by the existing, well-known cryptographic tools. For detailed information about their specifications, implementations, and test results, please feel free to contact: info@ypeblockchain.org.


Transactions

A COLLABORATIVE, DISTRIBUTED, FAST AND SCALABLE PROCESS

The overall validation process is composed of 2 main steps: TRANSACTION VALIDATION and TRANSACTION INTEGRATION. When a fully prepared and signed transaction is broadcast to the network, it enters a spool of unverified transactions specific to the bucket it belongs to.

This spool is maintained at each node and ordered by a timestamp defined at emission time, in descending order.

Any node in the network can participate in the validation process, provided it meets the conditions of the Proof-of-Stake-like paradigm. The validation process checks the legitimate emission of the transaction by verifying the accompanying signature and the formal structure of the transaction.

(Note that a transaction isn't restricted to cryptocurrency aspects.)

Once accepted by a significant number of validator nodes, the transaction is moved to a spool of verified transactions. In case of rejection, the transaction is discarded and the sender address is notified.

The second step consists of transaction integration. By integration, the YPE network means the insertion of the transaction into a block of the Blockchain issued at a given bucket. As for the validation process, any node in the network can participate in this integration process, provided the node belongs to the same bucket as the transaction.

Both processes take advantage of the segmented logical layer structure of the YPE network, allowing per-bucket network load balancing and distributed processing, for an increased gain in speed (due to framed broadcasts) and, by consequence, a reduced energy consumption.

Figure 19 - Transaction flow
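A minimal sketch of the per-bucket spool mechanics described above (field names and helper signatures are illustrative only, not the actual YPE interfaces):

from dataclasses import dataclass, field

@dataclass(order=True)
class Tx:
    timestamp: float                              # defined at emission time
    tx_id: str = field(compare=False)
    payload: dict = field(compare=False, default_factory=dict)

class BucketSpools:
    # Per-bucket spools mirroring the two-step flow: VALIDATION moves a
    # transaction from the unverified spool to the verified spool, where
    # it awaits INTEGRATION into a block.
    def __init__(self):
        self.unverified = []
        self.verified = []

    def submit(self, tx):
        self.unverified.append(tx)
        self.unverified.sort(reverse=True)        # descending timestamp order

    def validate(self, tx, signature_ok, structure_ok):
        self.unverified.remove(tx)
        if signature_ok and structure_ok:
            self.verified.append(tx)              # accepted by validators
            return True
        return False                              # rejected: discarded, sender notified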


Transactions continuous aggregation:

The YPE address XYZ (like any other address) can possibly receive one or more transactions from
one or multiple sources.

When this same address intends to build a transaction for the recipient YPE address TUV, it doesn't limit the input transactions to a sufficient subset of the existing unspent transactions.

In fact, ALL unspent transactions are systematically used to define the input of the next transaction. Once built and signed, the transaction is broadcast and subjected to the validation and integration process.

Once the transaction is confirmed, a new transaction is returned to the XYZ address with the corresponding change (in case of non-null change). The recipient TUV address is informed of the validation.

All transactions that were unspent for the XYZ address at the beginning of the process are marked as "SPENT" and become historical data.

This systematic per-address aggregation simplifies search processes and reduces the volume of necessary data storage, as sketched below.

Figure 20 - Transactions Aggregation
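A sketch of the aggregation rule in Python (the dictionary-based transaction format and the address literals are illustrative only):

def build_aggregated_tx(utxos, sender, recipient, amount):
    # ALL unspent transactions of the sender are consumed as inputs.
    total = sum(u["value"] for u in utxos)
    if total < amount:
        raise ValueError("insufficient funds")
    outputs = [{"to": recipient, "value": amount}]
    change = total - amount
    if change > 0:                    # non-null change returns to the sender
        outputs.append({"to": sender, "value": change})
    for u in utxos:
        u["status"] = "SPENT"         # previous UTXOs become historical data
    return {"inputs": list(utxos), "outputs": outputs}

xyz_utxos = [{"value": 2, "status": "UNSPENT"}, {"value": 4, "status": "UNSPENT"}]
print(build_aggregated_tx(xyz_utxos, "XYZ", "TUV", 5))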


The Blockchain Structure

The proposal for the Blockchain is a Distributed Blockchain Structure: 256 Buckets, 1 Blockchain
per bucket.

To ensure balanced transaction distribution and optimal utilization of network resources, our
proposed structure implements a bucket-based approach. Transactions are uniformly distributed
among the 256 buckets based on their hash values. This distribution mechanism allows for
efficient load balancing, minimizing congestion and maximizing network throughput.
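One natural realization of this mapping, sketched in Python with a generic hash standing in for the network's own function, is to let the first byte of an object's hash select its bucket:

import hashlib

def bucket_of(obj_bytes: bytes) -> int:
    # First byte of the digest: a uniform value in 0..255, i.e. one of
    # the 256 buckets. (SHA-256 is a stand-in for the network's hash.)
    return hashlib.sha256(obj_bytes).digest()[0]

print(bucket_of(b"some signed transaction"))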

In our Blockchain structure, transactions follow a standardized yet flexible format. Each transaction is assigned a unique ID, derived from the hash of the transaction data itself. The block structure consists of a header and the hash of the previous block, and employs our proprietary XCube function for computing cryptographic hashes, ensuring data integrity and security. Internally, the root hash of the transactions' hash tree (Merkle-tree-like) is integrated in the header. This hash tree links the transactions to the header and, thus, to the block.
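The hash-tree linkage can be sketched as follows: transaction hashes are combined pairwise, level by level, until a single root remains, so that changing any transaction changes the root stored in the header (SHA-256 again stands in for the XCube hash; duplicating the last element on odd-sized levels is one common convention, assumed here):

import hashlib

def H(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

def merkle_root(tx_hashes):
    level = list(tx_hashes)
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])          # duplicate last on odd counts
        level = [H(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

txs = [b"tx1", b"tx2", b"tx3"]
print(merkle_root([H(t) for t in txs]).hex())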

Figure 21- Blockchain at bucket K

To maintain the integrity of the Blockchain, our structure employs a chaining mechanism. Valid blocks are required to meet a specific difficulty level, which is achieved through the use of a nonce: a valid block must have the first n bits of its hash matching the last n bits of the hash of the previous block, ensuring a strong and secure chain of blocks.
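The nonce search this rule induces can be sketched as follows (SHA-256 stands in for the XCube hash, and a small n keeps the demo fast; the real difficulty level n is a network parameter):

import hashlib

def block_hash(header: bytes, nonce: int) -> int:
    # Stand-in 256-bit block hash over the header plus the nonce.
    data = header + nonce.to_bytes(8, "big")
    return int.from_bytes(hashlib.sha256(data).digest(), "big")

def mine(header: bytes, prev_hash: int, n: int, bits: int = 256) -> int:
    # Find a nonce such that the first n bits of this block's hash match
    # the last n bits of the previous block's hash.
    target = prev_hash & ((1 << n) - 1)    # last n bits of the previous hash
    nonce = 0
    while block_hash(header, nonce) >> (bits - n) != target:
        nonce += 1
    return nonce

prev = block_hash(b"genesis", 0)
print(mine(b"block 1 header", prev, n=8))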

Peers within the network are uniquely identified using IDs. Based on these IDs, each peer is
assigned a specific bucket, enabling efficient routing and improved network organization. This
peer-to-bucket mapping optimizes communication and resource sharing among peers,
enhancing the overall performance of the Blockchain structure.

Security Considerations:

The security of our proposed Blockchain structure is of utmost importance. We rely on the robustness of our cryptographic primitives and protocols.


A Client App in Testnet

The client app implements basic wallet functionality in a testnet environment. For the sake of testing setup simplicity, the app requires a UPnP-enabled router/gateway, since an automatic port-forwarding mechanism is integrated. (The test app runs on the default TCP port 29675.)

When installed and opened for the first time, the app will ask the user for a nickname. This nickname will be completed with an extension that includes an assigned bucket and a control hash.

Figure 22 - Welcome screen

Once the user nickname is defined, the wallet presents the currently defined addresses. Each user can create up to 3 different addresses. In the testnet, each created address is rewarded with 5 YPEs (the currency unit used in the network).

Figure 23 - Wallet addresses


Top menu

Once logged in, the top bar gives the user access to the following functionalities:

Figure 24 - App top bar & functionalities


Token Economics

YPE is the YPE Blockchain's native token.

Token Distribution

Fig. 25 depicts the token distribution for 100% of the total supply (281,474,976 YPEs, decimal resolution 10⁻⁶).

Figure 25 - Token Distribution

Token Utility and Value:

Community tokens are held by the YPE Foundation, which is run by an independent board.
Community tokens are allocated to validator nodes for rewards. YPE serves several core
functions on the network.

1. Staking: YPE token holders can stake YPE to validators operating on the YPE Blockchain.

2. Governance: YPE token holders who decide to stake can participate in on-chain governance.

3. Fees & Incentives: Validators incur costs by running and maintaining their systems. These costs are covered by transaction fees, processing cost units (pc's), and block creation incentives.

YPE Economics:

The YPE consensus algorithms involve Proof of Stake and Proof of Work for validation and on-chain registration. A validator node can be set to process validation, process Blockchain blocks, and/or process on-chain user-defined programs.

For the consensus-involved operations, a minimal fee will be defined. This minimal value submits the transaction to the network with a so-called normal priority level. Transaction issuers can offer more than the minimal fee to obtain a higher priority level. The total fee and the block-addition reward are issued to the participating nodes. On-chain distributed processing costs are charged to the issuers of user programs.


Conclusion

The concept of abstraction has played a crucial role in the proposed system, reducing component complexity and enabling the construction of a robust, flexible, maintainable, and scalable solution: the YPE Blockchain.

Its layered structure enhances modularity and independent development and testing, and will also facilitate internal and external collaborations. The improved maintainability and extensibility allow the replacement or upgrade of individual architectural components without impacting the entire system.

As initially stated, the integration of Blockchain technology into various industries holds
significant potential for enhancing transparency, security, and efficiency. Despite the challenges
related to speed and scalability, efforts are being made to integrate Blockchain technology into
business processes effectively. The YPE Blockchain proposes an architecture that addresses time
inconsistency and consensus challenges by introducing new and unique validation models. The
proposed architecture emphasizes load auto-balancing, persistency, validity, and auditability
while maintaining scalability and reducing latency and energy consumption.

In the context of the YPE Network, a peer-to-peer (P2P) layout is adopted, providing anonymized
routing of network traffic, massive parallel computing capacity, and distributed storage. The
network is structured on a logical segmented overlay network. The segmentation organization
through buckets is introduced, reducing network load and improving security, network
management, and data storage efficiency. Every object (peer, transaction, etc.) is assigned to a
specific bucket, enabling controlled and efficient communication within and between buckets.

The cryptographic tools employed in the YPE Network, namely the JHash and XCube concepts, emphasize simplicity and propose a higher security level (not discrete-log based, no multiplicative laws involved, hence not subject to current studies and implementations of Shor's algorithm, nor to currently known quantum computing algorithms). The JHash algorithm utilizes the principles of cellular automata to implement a hash function, while the XCube concept draws inspiration from permutation group theory mixed with bitwise operators to provide a cryptographic toolset. Both tools demonstrate that simplicity can be a powerful approach in cryptography.

Overall, the YPE Blockchain project and the YPE Network showcase the application of abstraction, layered architecture, and innovative cryptographic tools in building scalable, secure, and efficient systems. By embracing these principles and tools, organizations can develop robust solutions that address the challenges of decentralization, security, and scalability in the Web3 context.


References

Back, A. (2002). Hashcash: A Denial of Service Counter-Measure.

Buterin, V. (2014). Ethereum: A Next-Generation Smart Contract and Decentralized Application Platform.

Paar, C., & Pelzl, J. (2010). Understanding Cryptography. Springer-Verlag Berlin Heidelberg.

Lal, C., & Marijan, D. (2021). Blockchain Testing: Challenges, Techniques, and Research Directions. Simula Research Laboratory, Oslo, Norway.

tom Dieck, T. (2008). Algebraic Topology. Mathematisches Institut, Georg-August-Universität Göttingen.

Kolb, J., AbdelBaky, M., Katz, R. H., & Culler, D. E. (2020). Core Concepts, Challenges, and Future Directions in Blockchain: A Centralized Tutorial. ACM Computing Surveys. University of California, Berkeley.

Lek, J. (2009). The Mathematics of the Rubik's Cube: Introduction to Group Theory and Permutation Puzzles.

Habeeb, M., & Kahrobaei, D. (2013). Public Key Exchange Using Semidirect Product. California University of Pennsylvania.

Davis, M. D., Sigal, R., & Weyuker, E. J. (1994). Computability, Complexity, and Languages (2nd ed.). Academic Press.

Merkle, R. C. (1978). Secure Communications Over Insecure Channels. Department of Electrical Engineering and Computer Sciences, University of California, Berkeley.

Kharlampovich, O., & Myasnikov, A. (2006). Elementary theory of free non-abelian groups. Journal of Algebra, 302, 451–552. Elsevier.

Nakamoto, S. (2008). Bitcoin: A Peer-to-Peer Electronic Cash System.

Schuster, S. (2018). The Art of Thinking in Systems. steveschusterbooks.

Shiffman, D. (2012). The Nature of Code. Creative Commons Attribution-NonCommercial 3.0. https://natureofcode.com/book/

Singh, Y. (2005). Mathematical Foundation of Computer Science. New Age International, Ltd.

Bilan, S. M. (2020). New Methods and Paradigms for Modeling Dynamic Processes Based on Cellular Automata.
