
CODES IN MODERN WORLD MATH

CODES
Codes play a crucial role in various aspects of
the modern world. They're used in computer
programming to create software and
applications, in cryptography to secure data and
communication, in engineering for designing
systems and processes, and even in everyday
life, like barcodes for product identification.
Codes help us organize, communicate, and
understand complex information efficiently.
BINARY CODES
Binary codes are used in various fields, including
computing, digital communications, and data
storage, to represent information using binary
digits (0s and 1s).
EXAMPLE
1. Unicode: Unicode is a character encoding
standard that supports a wider range of
characters, including symbols, emojis, and
characters from various languages. Each
character is represented by a unique binary
code. For example:
The Unicode representation of the heart emoji '❤' is U+2764, which corresponds to the binary code 0010011101100100.
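As a quick illustration (a minimal Python sketch added here, not part of the original material), the code point of the heart emoji can be inspected and printed as a 16-bit binary string:

heart = "\u2764"                    # HEAVY BLACK HEART, '❤'
code_point = ord(heart)             # 10084 in decimal
print(hex(code_point))              # 0x2764
print(format(code_point, "016b"))   # 0010011101100100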
DECIMAL 25
Decimal 25 in binary is 11001. This is because 25 = (1 * 2^4) + (1 * 2^3) + (0 * 2^2) + (0 * 2^1) + (1 * 2^0).
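A quick Python check of this expansion (added for illustration):

n = 25
print(bin(n))                                        # 0b11001
print(1*2**4 + 1*2**3 + 0*2**2 + 0*2**1 + 1*2**0)    # 25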
INTEGERS IN COMPUTER
Integers in computers are represented using binary numbers. Here's a breakdown of how integers are typically represented in modern computer systems:

Binary Representation: Integers are represented using binary digits (bits). Each binary digit can be either 0 or 1.

Fixed-Length Representation: In most computer systems, integers are represented using a fixed number of bits. For example, a common representation is 32 bits, known as a 32-bit integer.
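To make the fixed-width idea concrete, here is a small Python sketch (an added illustration) that packs an integer into exactly 4 bytes, i.e. 32 bits, using the standard struct module:

import struct

raw = struct.pack("<i", 25)        # 32-bit signed integer, little-endian
print(raw)                         # b'\x19\x00\x00\x00'
print(len(raw) * 8)                # 32 bits
(value,) = struct.unpack("<i", raw)
print(value)                       # 25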
Signed vs. Unsigned: Integers can be signed (able to represent both positive and negative numbers) or unsigned (only able to represent non-negative numbers). The most significant bit (leftmost bit) is often used to indicate the sign in signed integer representations.

Two's Complement: The most common representation for signed integers is two's complement. In this representation, positive numbers are represented directly in binary, while negative numbers are represented by taking the two's complement of their absolute value.
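As an illustration of two's complement (a Python sketch added here, using 8 bits for readability), the pattern for -25 can be obtained by inverting the bits of 25 and adding 1, or equivalently by masking the negative value:

n, bits = 25, 8
mask = 2**bits - 1
print(format(n, "08b"))            # 00011001 (positive value)
print(format(~n & mask, "08b"))    # 11100110 (one's complement: bits inverted)
print(format(-n & mask, "08b"))    # 11100111 (two's complement: inverted bits plus 1)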
Range of Representable Numbers: The range of representable numbers depends on the number of bits used for the integer representation and whether it is signed or unsigned. For example, a 32-bit signed integer can represent values ranging from -2,147,483,648 to 2,147,483,647.

Overflow and Underflow: When performing arithmetic operations on integers, it's essential to consider the possibility of overflow (resulting in a value too large to be represented) or underflow (resulting in a value too small to be represented).
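Python's own integers do not overflow, so the following sketch (added for illustration) simulates 32-bit signed wraparound with a mask to show what happens one step past the maximum:

def to_int32(x):
    # Interpret x as a 32-bit two's complement signed integer.
    x &= 0xFFFFFFFF
    return x - 2**32 if x >= 2**31 else x

int32_max = 2**31 - 1              # 2,147,483,647
print(to_int32(int32_max))         # 2147483647
print(to_int32(int32_max + 1))     # -2147483648 (overflow wraps around)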
Integer Operations: Computers perform arithmetic operations (addition, subtraction, multiplication, division) and bitwise operations (AND, OR, XOR) on integers using specialized hardware and software instructions optimized for efficiency.

Understanding how integers are represented and manipulated in computers is fundamental for programming and understanding the behavior of computer algorithms and applications.
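For example, a short Python sketch (added for illustration) of these arithmetic and bitwise operations applied to 25 and 5:

a, b = 25, 5                        # 11001 and 00101 in binary
print(format(a & b, "05b"))         # 00001 (AND)
print(format(a | b, "05b"))         # 11101 (OR)
print(format(a ^ b, "05b"))         # 11100 (XOR)
print(a + b, a - b, a * b, a // b)  # 30 20 125 5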
EXAMPLE
Let's take an example of representing the decimal number 25 as
a 32-bit signed integer using two's complement notation:
1. Binary Representation:
Decimal 25 in binary is
00000000000000000000000000011001.
2. Sign Extension:
Since we're representing a signed integer, we'll use the
leftmost bit (the 32nd bit) as the sign bit.
Since 25 is positive, the sign bit is 0.
3. Padding:
To make it a 32-bit signed integer, we pad the remaining
bits with zeros:
00000000000000000000000000011001.
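The same 32-bit pattern can be reproduced with a quick Python check (added for illustration):

pattern = format(25, "032b")
print(pattern)            # 00000000000000000000000000011001
print(len(pattern))       # 32 (the leftmost bit, the sign bit, is 0)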
LOGIC AND COMPUTER ADDITION

Logic and computer addition are fundamental concepts in computer science and mathematics. In logic, addition typically refers to Boolean logic operations such as OR, where inputs are combined to produce an output. In computer addition, it involves adding binary numbers using logical AND, OR, and XOR operations, along with carry operations to handle overflow.

1. Logic and Addition: In logic, addition often involves Boolean algebra, where logical operations like AND, OR, and XOR are used to combine inputs. For example:
AND operation: True if both inputs are true.
OR operation: True if at least one input is true.
XOR operation: True if exactly one input is true.
2. These operations are fundamental in designing digital circuits, programming, and algorithm development.
Now, let's perform the addition of 1011 and 0101 bit by bit, starting from the rightmost bit (the least significant bit) and moving to the left:

  1011
+ 0101
-------
 10000

1. Bit 1: 1 + 1 = 10 in binary. So, we write down 0 and carry over the 1.
2. Bit 2: 1 + 0 + 1 (carry) = 10 in binary. Write down 0 and carry over the 1.
3. Bit 3: 0 + 1 + 1 (carry) = 10 in binary. Write down 0 and carry over the 1.
4. Bit 4: 1 + 0 + 1 (carry) = 10 in binary. Write down 0 and carry over the 1.
5. The final carry becomes the leading 1, giving the result 10000.
In this example:
Binary addition involves using logic gates like XOR and AND.
The carry bit is calculated using an AND gate.
The sum bit is calculated using an XOR gate.
This result, 10000 in binary, represents 16 in decimal, which is the sum of 11 and 5.
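A quick Python check of the result (added for illustration):

print(bin(0b1011 + 0b0101))   # 0b10000
print(0b1011 + 0b0101)        # 16, i.e. 11 + 5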
In binary addition:
The XOR operation is used to determine the sum bit without considering the carry, allowing for the calculation of the partial sum.
The AND operation is used to determine whether a pair of bits generates a carry.
The OR operation is used to combine carry terms; in a full adder, the carry-out is (A AND B) OR (carry-in AND (A XOR B)).
These operations are fundamental in binary arithmetic and digital circuit design.
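To tie the gate-level description together, here is a small Python sketch (added as an illustration, not part of the original material) of a full adder and a ripple-carry adder built only from AND, OR, and XOR on single bits:

def full_adder(a, b, carry_in):
    # Add two bits plus a carry using only XOR, AND, and OR.
    partial = a ^ b                              # XOR: partial sum
    sum_bit = partial ^ carry_in                 # XOR: final sum bit
    carry_out = (a & b) | (partial & carry_in)   # AND/OR: carry logic
    return sum_bit, carry_out

def ripple_carry_add(x_bits, y_bits):
    # Add two equal-length bit lists (most significant bit first).
    carry = 0
    result = []
    for a, b in zip(reversed(x_bits), reversed(y_bits)):
        s, carry = full_adder(a, b, carry)
        result.append(s)
    result.append(carry)                         # final carry becomes the leading bit
    return list(reversed(result))

# 1011 (11) + 0101 (5) = 10000 (16)
print(ripple_carry_add([1, 0, 1, 1], [0, 1, 0, 1]))   # [1, 0, 0, 0, 0]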
THANK YOU SO MUCH
