
REPORT 1

I- SERIAL AND PARALLEL PORT COMPARISON:


The serial port has all the advantages of serial data transmission:
1. A serial port cable can be longer than a parallel port cable, because the serial port
transmits '1' as a voltage from -5 to -12 V and '0' as a voltage from +5 to +12 V,
while the parallel port transmits '1' as a voltage of 5 volts and '0' as a voltage of 0
volts. At the same time, the serial port receiver accepts '1' as any voltage
from -3 to -25 V and '0' as any voltage from +3 to +25 V. Thus the serial port can have
a maximal swing of up to 50 volts, while the parallel port has a maximal swing of only 5
volts, so the losses in the cable when transmitting data over the serial port are
less substantial than the losses when transmitting data over the parallel port.
2. Fewer wires are needed for serial transmission than for parallel transmission. If the
external device has to be installed at a great distance from the computer, a cable
with three wires is much cheaper than the 19- or 25-wire cable a parallel
connection requires. Still, one should remember that there are interface creation
expenses for every receiver/transmitter.
3. A further development of the serial port is the use of infrared devices, which
immediately proved popular. Many electronic diaries and palmtop computers
have built-in infrared devices for connection with external devices.
4. Another proof of the serial port's universality is microcontrollers. Many of them
have a built-in SCI (Serial Communications Interface) used for communication
with other devices. In this case the serial interface reduces the number of pins
on the chip: usually only two are used, Transmit Data (TXD) and
Receive Data (RXD). Compare that to the minimum of 8 outputs needed for an
8-bit parallel connection.
Of course, the serial port has its drawbacks. The main one is that when organizing a
serial connection it is always necessary to convert the data into serial form and back
again.
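The parallel-to-serial conversion just mentioned can be sketched in a few lines. The following Python sketch assumes a classic asynchronous frame (one start bit, eight data bits LSB first, one stop bit); the function names and the 10-bit frame layout are illustrative assumptions, not taken from the text.

```python
def uart_frame(byte):
    """Frame one byte for asynchronous serial transmission:
    one start bit (0), eight data bits LSB-first, one stop bit (1)."""
    bits = [0]                                   # start bit
    bits += [(byte >> i) & 1 for i in range(8)]  # data bits, LSB first
    bits.append(1)                               # stop bit
    return bits

def uart_unframe(bits):
    """Recover the byte from a 10-bit frame (no parity assumed)."""
    assert bits[0] == 0 and bits[9] == 1, "framing error"
    return sum(b << i for i, b in enumerate(bits[1:9]))
```

The receiving end performs the inverse operation, which is exactly the serial-to-parallel conversion overhead described above.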
Note: I2C and CAN are serial communication protocols, so they will be
compared to each other.

II- HOW THE CAN BUS WORKS


The CAN is a "broadcast" type of bus. That means there is no explicit address in the
messages: all nodes in the network are able to pick up, or receive, all
transmissions, and there is no way to send a message to just one specific node. To be more
specific, a message transmitted from any node on a CAN bus contains the address of
neither the transmitting node nor any intended receiving node. Instead,
an identifier that is unique throughout the network is used to label the content of the
message. Each message carries a numeric value, which controls its priority on the bus
and may also serve as an identification of the contents of the message. Each
receiving node performs an acceptance test, or local filtering, on the
identifier to determine whether the message, and thus its content, is relevant to that
particular node, so that each node reacts only to the intended messages. If
the message is relevant, it is processed; otherwise it is ignored.
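The acceptance test on the identifier is essentially a mask-and-compare operation. A minimal Python sketch, assuming a common mask/filter scheme (the parameter names are illustrative):

```python
def accepts(message_id, filter_id, mask):
    """A node accepts a message when the masked bits of the received
    identifier match the masked bits of the node's own filter."""
    return (message_id & mask) == (filter_id & mask)
```

For example, a node filtering on `filter_id=0x100` with `mask=0x700` accepts any 11-bit identifier whose top three bits are 001, regardless of the lower bits.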
How do they communicate?
If the bus is free, any node may begin to transmit. But what happens when
two or more nodes attempt to transmit a message onto the CAN bus at the same
time? The identifier field, which is unique throughout the network, helps to determine
the priority of the message. A "non-destructive arbitration technique" is used to
accomplish this, ensuring that messages are sent in order of priority and that no
messages are lost. The lower the numerical value of the identifier, the higher the
priority. A dominant bit ('0') on the bus overrides a recessive bit ('1'), so a node
transmitting a recessive bit that reads back a dominant bit knows it has lost
arbitration and stops transmitting; eventually (after the arbitration on the ID)
only the most dominant message remains and is received by all nodes.
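The arbitration rule above can be simulated directly: treat the bus as a wired-AND of all transmitted bits and drop every node that sends recessive but reads dominant. A hedged Python sketch (11-bit identifiers and distinct IDs assumed; names are illustrative):

```python
def arbitrate(ids, width=11):
    """Simulate non-destructive bit-wise arbitration. The bus behaves
    as a wired-AND: dominant (0) overrides recessive (1). A node that
    sends a recessive bit but reads a dominant one drops out."""
    contenders = list(ids)
    for bit in range(width - 1, -1, -1):               # MSB is sent first
        bus = min((i >> bit) & 1 for i in contenders)  # wired-AND level
        contenders = [i for i in contenders if (i >> bit) & 1 == bus]
    assert len(contenders) == 1
    return contenders[0]
```

Since dominant bits win at every position, the surviving message is always the one with the numerically lowest identifier, i.e. the highest priority.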
As stated earlier, CAN does not use an address-based format for communication, but
a message-based format. Here the information is transferred from one
location to another by sending a group of bytes at one time (in order of
priority). This makes CAN ideally suited to applications requiring a large number of
short messages (e.g. temperature and rpm information) that may be needed by more than
one location and where system-wide data consistency is mandatory. (Traditional
networks such as USB or Ethernet are used to send large blocks of data point-to-point,
from node A to node B, under the supervision of a central bus master.)
Let us now try to understand how these nodes are interconnected physically, with
some examples. A modern automobile will have many electronic
control units for various subsystems (fig 1-a). Typically the biggest processor is
the engine control unit (the host processor). The CAN standard even allows a
subsystem to control actuators or receive signals from sensors. A CAN message never
reaches these devices directly; instead a host processor and a CAN controller
(with a CAN transceiver) sit between these devices and the bus. (In some
cases the network need not have a controller node; each node can be connected
to the main bus directly.)
The CAN controller stores received bits (one by one) from the bus until an entire
message block is available, which can then be fetched by the host processor (usually
after the CAN controller has triggered an interrupt). The CAN transceiver adapts
signal levels from the bus to the levels the CAN controller expects, and also provides
protective circuitry for the CAN controller. The host processor decides what the
received messages mean and which messages it wants to transmit itself.

Fig 1-a
It is likely that the more rapidly changing parameters need to be transmitted more
frequently and, therefore, must be given a higher priority. How is this high priority
achieved? As we know, the priority of a CAN message is determined by the
numerical value of its identifier. The numerical value of each message identifier (and
thus the priority of the message) is assigned during the initial phase of system design.
To arbitrate between messages during communication, CAN uses the
established method known as CSMA/CD, enhanced with non-destructive bit-wise
arbitration to provide collision resolution and to exploit the
maximum available capacity of the bus. "Carrier Sense" describes the fact that a
transmitter listens for a carrier wave before trying to send; that is, it tries to detect the
presence of an encoded signal from another station before attempting to transmit. If a
carrier is sensed, the node waits for the transmission in progress to finish before
initiating its own transmission. "Multiple Access" describes the fact that multiple
nodes send and receive on the same medium, and a transmission by one node is
generally received by all other nodes using the medium. "Collision Detection" (CD)
means that collisions are resolved through bit-wise arbitration, based on the
preprogrammed priority of each message carried in its identifier field.

Fig 1-b
The term "priority" thus becomes important in the network. Each node can have one
or more functions, and different nodes may transmit messages at different times
(depending on how the system is configured), based on the function(s) of each node.
For example, a node may transmit:
1) Only when a system failure (communication failure) occurs.
2) Continually, such as when it is monitoring the temperature.
3) Only when instructed by another node, such as when a fan controller is
instructed to turn a fan on because the temperature-monitoring node has
detected an elevated temperature.
When one node transmits a message, sometimes many nodes may accept the
message and act on it (although this is not the usual case). For example, a temperature-sensing
node may send out temperature data that are accepted and acted on only by a
temperature display node. But if the temperature sensor detects an over-temperature
situation, then many nodes might act on the information.
CAN uses "Non Return to Zero" (NRZ) encoding (with "bit stuffing") for data
communication on a differential two-wire bus. The two-wire bus is usually a twisted
pair (shielded or unshielded). Flat pair (telephone type) cable also performs well, but
generates more noise itself and may be more susceptible to external sources of noise.
Main Features:
a) A two-wire, half-duplex, high-speed network system mainly suited to applications
using short messages. (The message is transmitted serially onto the
bus, one bit after another, in a specified format.)
b) The CAN bus offers a communication rate of up to 1 Mbit/s for bus lengths up
to about 40 meters, thus facilitating real-time control. (Increasing the distance
decreases the achievable bit rate.)
c) With the message-based format and the error containment it provides, it is possible to
add nodes to the bus without reprogramming the other nodes to recognize the addition
and without changing the existing hardware. This can be done even while the system is in
operation; the new node will start receiving messages from the network immediately.
This is called "hot-plugging".

d) Another useful feature built into the CAN protocol is the ability of a node to
request information from other nodes. This is called a remote transmit request, or
RTR.
e) The use of NRZ encoding ensures compact messages with a minimum number of
transitions and high resilience to external disturbance.
f) The CAN protocol can distinguish up to 2032 devices (assuming one node per
identifier) on a single network. But owing to the practical limitations of the hardware
(transceivers), it may only link up to about 110 nodes on a single network.
g) Has extensive and unique error-checking mechanisms.
h) Has high immunity to electromagnetic interference, and the ability to self-diagnose
and repair data errors.
i) Non-destructive bit-wise arbitration provides bus allocation on the basis of need,
and delivers efficiency benefits that cannot be gained from either fixed-time-schedule
allocation (e.g. Token Ring) or destructive bus allocation (e.g. Ethernet).
j) Fault confinement is a major advantage of CAN. Faulty nodes are automatically
dropped from the bus. This helps to prevent any single node from bringing the entire
network down, and thus ensures that bandwidth is always available for critical
message transmission.
k) The use of differential signaling (a method of transmitting information electrically
by means of two complementary signals sent on two separate wires) gives resistance
to EMI & tolerance of ground offsets.
l) CAN is able to operate in extremely harsh environments. Communication can still
continue (though with a reduced signal-to-noise ratio) even if:
1. Either of the two wires in the bus is broken
2. Either wire is shorted to ground
3. Either wire is shorted to power supply.
CAN protocol layers & message frames:
Like any network application, CAN also follows a layered approach to system
implementation, conforming to the Open Systems Interconnection (OSI) model that is
defined in terms of layers. The ISO 11898 (CAN) architecture defines the lowest
two layers of the seven-layer OSI/ISO model: the data-link layer and the physical
layer. The remaining layers (called higher layers) are left to be implemented by the
system software developers, and are used to adapt and optimize the protocol for multiple
media (twisted pair, single wire, optical, RF or IR). Higher Layer Protocols
(HLPs) are used to implement the upper five layers of the OSI model on CAN.
CAN uses a specific message frame format for receiving and transmitting data. The
two frame formats available are:
a) Standard CAN, or base frame format
b) Extended CAN, or extended frame format
The following figure (Fig 2) illustrates the standard CAN frame format, which
consists of seven different bit-fields.
a) A Start of Frame (SOF) field - indicates the beginning of a message frame.
b) An Arbitration field, containing a message identifier and the Remote Transmission
Request (RTR) bit. The RTR bit is used to discriminate between a transmitted Data
Frame and a request for data from a remote node.


c) A Control field containing six bits: two reserved bits (r0 and r1) and a four-bit
Data Length Code (DLC). The DLC indicates the number of bytes in the Data
field that follows.
d) A Data Field, containing from zero to eight bytes.
e) The CRC field, containing a fifteen-bit cyclic redundancy check-code and a
recessive delimiter bit.
f) The Acknowledge field, consisting of two bits. The first is the Slot bit, which is
transmitted as recessive but is subsequently overwritten by dominant bits transmitted
by every node that successfully receives the transmitted message. The second bit is a
recessive delimiter bit.
g) The End of Frame field, consisting of seven recessive bits.
An Intermission field consisting of three recessive bits is then added after the EOF
field. Then the bus is recognized to be free.

(Fig 2)
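The seven bit-fields listed above can be laid out programmatically. The Python sketch below is illustrative (names are assumptions): it emits the fields of a standard data frame in order, before bit stuffing, with the CRC supplied by the caller. A frame with n data bytes comes out to 44 + 8n bits.

```python
def standard_frame_bits(can_id, data, crc):
    """Lay out the fixed fields of a standard CAN data frame.
    Bit stuffing is not applied; the 15-bit CRC is supplied by the caller."""
    assert can_id < 2**11 and len(data) <= 8 and crc < 2**15

    def field(value, width):
        # most significant bit first, as on the wire
        return [(value >> i) & 1 for i in range(width - 1, -1, -1)]

    bits = [0]                    # SOF (dominant)
    bits += field(can_id, 11)     # identifier
    bits += [0]                   # RTR (dominant for a data frame)
    bits += [0, 0]                # reserved bits r1, r0
    bits += field(len(data), 4)   # DLC
    for byte in data:
        bits += field(byte, 8)    # data field
    bits += field(crc, 15) + [1]  # CRC + recessive delimiter
    bits += [1, 1]                # ACK slot (recessive) + ACK delimiter
    bits += [1] * 7               # End of Frame
    return bits
```

Counting the fields confirms the overhead: 1 + 11 + 1 + 2 + 4 + 15 + 1 + 2 + 7 = 44 bits around the payload.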
The Extended Frame format provides the Arbitration field with two identifier bit
fields. The first (the base ID) is eleven (11) bits long and the second field (the ID
extension) is eighteen (18) bits long, to give a total length of twenty nine (29) bits.
The distinction between the two formats is made using an Identifier Extension (IDE)
bit. A Substitute Remote Request (SRR) bit is also included in the Arbitration Field.
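The two identifier fields of the extended format combine into one 29-bit value; conceptually, id = (base << 18) | extension. A small Python sketch (names are illustrative):

```python
def extended_id(base_id, id_extension):
    """Combine the 11-bit base ID and the 18-bit ID extension into
    the full 29-bit extended identifier."""
    assert base_id < 2**11 and id_extension < 2**18
    return (base_id << 18) | id_extension
```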
Error detection & correction:
This mechanism is used for detecting errors in messages appearing on the CAN bus,
so that the transmitter can retransmit the message. The CAN protocol defines five
different ways of detecting errors. Two of these work at the bit level, and the other
three at the message level.
1. Bit Monitoring.
2. Bit Stuffing.
3. Frame Check.

4. Acknowledgement Check.
5. Cyclic Redundancy Check
1. Each transmitter on the CAN bus monitors (i.e. reads back) the transmitted signal
level. If the signal level read differs from the one transmitted, a Bit Error is signaled.
Note that no bit error is raised during the arbitration process.
2. When a node has transmitted five consecutive bits of the same level, it adds a
sixth bit of the opposite level to the outgoing bit stream, and the receivers
remove this extra bit. This is done to avoid excessive DC components on the bus, but
it also gives the receivers an extra opportunity to detect errors: if more than five
consecutive bits of the same level occur on the bus, a Stuff Error is signaled.
3. Some parts of the CAN message have a fixed format, i.e. the standard defines
exactly what levels must occur and when. (Those parts are the CRC Delimiter, ACK
Delimiter, End of Frame, and also the Intermission). If a CAN controller detects an
invalid value in one of these fixed fields, a Frame Error is signaled.
4. All nodes on the bus that correctly receive a message (regardless of whether they
are "interested" in its contents) are expected to send a dominant level in the
so-called Acknowledgement Slot of the message. The transmitter transmits a
recessive level here; if it cannot detect a dominant level in the ACK slot,
an Acknowledgement Error is signaled.
5. Each message features a 15-bit Cyclic Redundancy Checksum, and any node that
calculates a different CRC for the message than the one it contains will signal
a CRC Error.
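Two of the mechanisms above lend themselves to a compact sketch. In the illustrative Python below, stuff/destuff implement the five-equal-bits rule from point 2, and crc15 implements a bit-serial CRC in the style of point 5, using the CAN polynomial 0x4599 (an assumption drawn from the CAN specification, not from this text):

```python
CAN_CRC15_POLY = 0x4599  # x^15 + x^14 + x^10 + x^8 + x^7 + x^4 + x^3 + 1

def stuff(bits):
    """Insert a complementary bit after five equal consecutive bits."""
    out, run_bit, run_len = [], None, 0
    for b in bits:
        out.append(b)
        run_bit, run_len = (b, run_len + 1) if b == run_bit else (b, 1)
        if run_len == 5:
            out.append(1 - b)              # stuffed complementary bit
            run_bit, run_len = 1 - b, 1
    return out

def destuff(bits):
    """Remove stuffed bits; raise if a sixth equal bit appears (Stuff Error)."""
    out, run_bit, run_len, i = [], None, 0, 0
    while i < len(bits):
        b = bits[i]
        out.append(b)
        run_bit, run_len = (b, run_len + 1) if b == run_bit else (b, 1)
        if run_len == 5:
            i += 1                          # skip the stuffed bit
            if i < len(bits):
                assert bits[i] != b, "Stuff Error"
                run_bit, run_len = bits[i], 1
        i += 1
    return out

def crc15(bits):
    """Bit-serial CRC over a bit sequence, one bit at a time."""
    crc = 0
    for b in bits:
        crcnxt = b ^ ((crc >> 14) & 1)     # incoming bit vs. CRC MSB
        crc = (crc << 1) & 0x7FFF          # shift, keep 15 bits
        if crcnxt:
            crc ^= CAN_CRC15_POLY
    return crc
```

Note how destuffing gives the receiver a free error check: a sixth equal bit on the bus can only mean a transmission fault.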
Error confinement:
Error confinement is a technique unique to CAN that provides a method for
discriminating between temporary errors and permanent failures in the
communication network. Temporary errors may be caused by spurious external
conditions, voltage spikes, etc. Permanent failures are likely to be caused by bad
connections, faulty cables, defective transmitters or receivers, or long-lasting external
disturbances.
Each node on the bus has two error counters, the transmit error
counter (TEC) and the receive error counter (REC), which are incremented
and/or decremented according to the errors detected. If a transmitting node
detects a fault, it increments its TEC faster than the listening nodes
increment their RECs, because there is a good chance that it is the transmitter that
is at fault.
A node usually operates in a state known as "Error Active". In this condition a
node is fully functional and both error counters contain counts of less than
128. When either of the two error counters rises above 127, the node enters a
state known as "Error Passive", meaning it will not actively destroy the bus traffic
when it detects an error. A node in error-passive mode can still transmit
and receive messages, but it is restricted in how it flags any errors it
may detect. When the transmit error counter rises above 255, the node
enters the Bus Off state, in which it does not participate in the bus traffic
at all; communication between the other nodes continues unhindered.
To be more specific, an "Error Active" node transmits "Active Error Flags" when
it detects errors, an "Error Passive" node transmits "Passive Error Flags" when it
detects errors, and a node in the "Bus Off" state transmits nothing on
the bus at all. Transmit errors add 8 error points, and receive errors add 1 error
point; correctly transmitted and/or received messages cause the counters to
decrease. The other nodes will detect the error caused by the Error Flag (if they
haven't already detected the original error) and take appropriate action, i.e. discard the
current message.
Let's assume that whenever node A (for example) on a bus tries to transmit a
message, it fails (for whatever reason). Each time this happens, it increases its
transmit error counter by 8 and transmits an Active Error Flag. It then attempts
to retransmit the message, and suppose the same thing happens again. When the
transmit error counter rises above 127 (i.e. after 16 attempts), node A goes Error
Passive. It will now transmit Passive Error Flags on the bus. A Passive Error Flag
comprises 6 recessive bits and does not destroy other bus traffic, so the other nodes
will not hear node A complaining about bus errors. However, A continues to
increase its TEC; when it rises above 255, node A finally gives up and goes to the
"Bus Off" state.
What do the other nodes think about node A? For every Active Error Flag that A
transmitted, the other nodes increased their receive error counters by 1. By the
time A goes Bus Off, the other nodes will have counts in their receive error
counters well below the limit for Error Passive (127), and this count will
decrease by one for every correctly received message. However, node A stays Bus
Off. Most CAN controllers provide status bits and corresponding interrupts for
two states: "Error Warning" (one or both error counters above 96) and "Bus
Off".
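The counting rules described above (add 8 per transmit error, 1 per receive error, subtract 1 per success, with thresholds at 127 and 255) can be captured in a small state machine. A Python sketch; the class and method names are illustrative assumptions:

```python
class CanNodeErrorState:
    """Track the TEC/REC counters and derive the node's error state
    using the rules from the text: +8 per transmit error, +1 per
    receive error, -1 per correctly handled message."""

    def __init__(self):
        self.tec = 0   # transmit error counter
        self.rec = 0   # receive error counter

    @property
    def state(self):
        if self.tec > 255:
            return "bus off"
        if self.tec > 127 or self.rec > 127:
            return "error passive"
        return "error active"

    def transmit_error(self):
        self.tec += 8

    def receive_error(self):
        self.rec += 1

    def success(self):
        self.tec = max(0, self.tec - 1)
        self.rec = max(0, self.rec - 1)
```

Running the node-A scenario through this sketch reproduces the numbers in the text: 16 failed transmissions reach a TEC of 128 (Error Passive), and 32 reach 256 (Bus Off).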
Bit Timing and Synchronization:
The time for each bit in a CAN message frame is made up of four non-overlapping
time segments as shown below.

The following points may be relevant as far as the "bit timing" is concerned.
1. The Synchronization segment is used to synchronize the nodes on the bus, and it
is always one time quantum long.
2. One time quantum (also known as the system clock period) is the period of
the local oscillator multiplied by the value in the Baud Rate Prescaler (BRP) register
of the CAN controller.
3. A bit edge is expected to fall within the Synchronization segment when the
data changes on the bus.
4. The Propagation segment is used to compensate for physical delay times within the
network bus lines, and is programmable from one to eight time quanta long.
5. Phase segment 1 is a buffer segment that can be lengthened during
resynchronization to compensate for oscillator drift and positive phase differences
between the oscillators of the transmitting and receiving nodes. It is also
programmable from one to eight time quanta long.
6. Phase segment 2 can be shortened during resynchronization to compensate for
negative phase errors and oscillator drift. Its length is the maximum of Phase
segment 1 and the Information Processing Time.
7. The sample point is always at the end of Phase segment 1. It is the time at which
the bus level is read and interpreted as the value of the current bit.
8. The Information Processing Time is less than or equal to 2 time quanta.
The bit time is programmable at each node on a CAN bus, but be aware that all
nodes on a single CAN bus must use the same bit time, whether transmitting or
receiving. The bit time is a function of the period of the oscillator local to each node,
the value programmed into the BRP register of the controller at each node,
and the programmed number of time quanta per bit.
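Putting the pieces together: one quantum is the oscillator period times the BRP value, and a bit is the sync segment (one quantum) plus the three programmable segments. A hedged Python sketch (names and units are illustrative):

```python
def bit_time(f_osc_hz, brp, prop_seg, phase_seg1, phase_seg2):
    """Nominal bit time and bit rate from the local oscillator frequency,
    the Baud Rate Prescaler value, and the programmed segment lengths
    (all segment lengths in time quanta; sync segment is fixed at 1)."""
    tq = brp / f_osc_hz                        # one time quantum, seconds
    quanta = 1 + prop_seg + phase_seg1 + phase_seg2
    t_bit = tq * quanta
    return t_bit, 1.0 / t_bit                  # (bit time, bit rate)
```

For example, a 16 MHz oscillator with BRP = 1 and segments of 5 + 5 + 5 quanta gives a 16-quantum bit of 1 microsecond, i.e. the 1 Mbit/s maximum rate mentioned earlier.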
How do they synchronize?
Suppose a node receives a data frame. The receiver must then synchronize with the
transmitter to have proper communication, but there is no explicit clock signal
that a CAN system can use as a timing reference. Instead, two mechanisms are
used to maintain synchronization, as explained below.
Hard synchronization:
It occurs at the Start-of-Frame, i.e. at the transition of the start bit. The bit time is
restarted from that edge.
Resynchronization:
To compensate for oscillator drift and phase differences between transmitter and
receiver oscillators, additional synchronization is needed. Resynchronization for
the subsequent bits of a received frame occurs whenever a bit edge does not fall within
the Synchronization segment of a message. Resynchronization is invoked
automatically, and one of the phase segments is shortened or lengthened by an amount
that depends on the phase error in the signal. The maximum adjustment
is determined by a user-programmable number of time quanta known as the
Synchronization Jump Width (SJW) parameter.
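The SJW-limited adjustment can be sketched as follows (Python, illustrative names; a positive phase error is assumed to mean the edge arrived after the sync segment):

```python
def resync(phase_seg1, phase_seg2, phase_error, sjw):
    """Adjust the phase segments for one bit, limited by the SJW.
    A positive phase error (edge late) lengthens Phase segment 1;
    a negative one (edge early) shortens Phase segment 2."""
    if phase_error > 0:
        phase_seg1 += min(phase_error, sjw)
    elif phase_error < 0:
        phase_seg2 -= min(-phase_error, sjw)
    return phase_seg1, phase_seg2
```

Because each correction is capped at SJW quanta, a large phase error is absorbed gradually over several bits rather than in one jump.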
Higher Layer Protocols:
A higher layer protocol (HLP) is required to manage the communication within a
system. The term HLP is derived from the OSI model and its seven layers. The
CAN protocol itself only specifies how small packets of data may be transported from one
point to another safely using a shared communications medium. It says nothing
about topics such as flow control, transportation of data larger than fits
in an 8-byte message, node addresses, establishment of communication, etc. The HLP
provides solutions for these topics.
Higher layer protocols are used in order to:
1. Standardize startup procedures, including bit-rate setting
2. Distribute addresses among participating nodes or kinds of messages
3. Determine the layout of the messages
4. Provide routines for error handling at the system level
Different Higher Layer Protocols
There are many higher layer protocols for the CAN bus. Some of the most commonly
used ones are given below.
1. CAN Kingdom
2. CANopen
3. CCP/XCP
4. DeviceNet
5. J1939
6. OSEK
7. SDS
Note:
Many recently released microcontrollers from Freescale, Renesas, Microchip, NEC,
Fujitsu, Infineon, Atmel and other leading MCU vendors integrate a CAN
interface.

III- HOW I2C WORKS


I2C is a multi-master, low-bandwidth, short-distance, serial communication bus
protocol. Nowadays it is used not only on single boards, but also to attach low-speed
peripheral devices and components to a motherboard, embedded system, or cell
phone, as the newer versions provide many advanced features and much higher speed.
Features like simplicity and flexibility make this bus attractive for consumer and
automotive electronics.
Details:
The basic design of I2C has a 7-bit address space with 16 reserved addresses, which
makes the maximum number of nodes that can communicate on the same bus 112.
That means each I2C device is recognized by a unique 7-bit address. It is important to
note that the maximum number of nodes is limited not only by the address space,
but also by the total bus capacitance of 400 pF.
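The arithmetic behind the 112-node figure: 2^7 = 128 address combinations minus the 16 reserved ones. In the I2C specification the reserved addresses are the 0000xxx and 1111xxx groups; a quick Python check (the function name is illustrative):

```python
def usable_addresses():
    """7-bit I2C address space minus the 16 reserved addresses
    (the 0000xxx and 1111xxx groups per the I2C specification)."""
    return [a for a in range(2**7) if 0x08 <= a <= 0x77]
```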
The two bi-directional lines that carry information between the devices connected
to the bus are known as the Serial Data line (SDA) and the Serial Clock line (SCL). As the
names indicate, the SDA line carries the data and the SCL line the clock signal used for
synchronization. The typical operating voltages are +5 V and +3.3 V.
Like the CAN and LIN protocols, I2C also follows a master-slave communication
model. But the I2C bus is a multi-master bus, which means more than one IC/device
capable of initiating a data transfer can be connected to it. The device that initiates the
communication is called the MASTER, whereas the device being addressed by the master
is called the SLAVE. It is always the master that generates the clock
signal, which means each master generates its own clock signal when transferring
data on the bus.

The real communication:


As we saw already, the active lines used for communication in the I2C protocol are
bi-directional. Every device connected to the bus (for example: MCU, LCD driver,
ASIC, remote I/O ports, RAM, EEPROM, data converters) has a unique
address. Each of these devices can act as a receiver and/or transmitter, depending on
its functionality, and each device connected to the bus is software-addressable
by this unique address.
Since the nodes and other peripheral devices (whether master or slave) use
microcontrollers to connect and communicate through the bus, this
communication can be considered inter-IC communication in general.
Let us assume that the master MCU (as always, it is the master who initiates the
communication) wants to send data to one of its slaves. The step-by-step procedure
is as follows.
1. Wait until there is no activity on the I2C bus: the SDA and SCL lines are both high,
so the bus is 'free'.
2. The master MCU issues a start condition, announcing "the bus is mine - I have
started to use it". This condition tells all the slave devices to listen on the serial
data line for instructions/data.
3. The master provides a clock signal on the SCL line. It is used by all the ICs as the
time reference at which each bit of data on the SDA line is correct (valid) and
can be used. The data on the data line must be valid at the time the clock line
switches from 'low' to 'high' voltage.
4. The master MCU sends the unique binary address of the target device it wants to
access.
5. The master MCU puts a one-bit message (called the read/write flag) on the bus
telling whether it wants to SEND or RECEIVE data from the other chip. This
read/write flag indicates to the slave node whether the access is a read or a write
operation.
6. The slave ICs then compare the received address with their own addresses. The
slave device with the matching address responds with an acknowledgement
signal; the others simply wait until the bus is released by the stop condition.
7. Once the master MCU receives the acknowledgement, it starts transmitting or
receiving, and the data communication proceeds between the master and the
slave on the data bus. Both the master and the slave can receive or transmit data,
depending on whether the communication is a read or a write. The transmitter
sends 8 bits of data to the receiver, which replies with a 1-bit acknowledgement,
and the transfer continues in this way.
8. When the communication is complete, the master issues a stop condition, indicating
that everything is done. This action frees up the bus. The stop signal is just one bit
of information transferred by a special 'wiggling' of the SDA/SCL wires of the bus.
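The eight steps above can be sketched as the line-level sequence a master drives for a one-byte write. The Python sketch below is illustrative, not a real driver: the (scl, sda) event encoding is an assumption, and None marks the ACK slots, where the slave, not the master, drives SDA.

```python
def master_write(address, data):
    """Generate the (scl, sda) level sequence a master drives for a
    single-byte write: START, address+W, data byte, STOP."""
    seq = [(1, 1), (1, 0)]                     # START: SDA falls while SCL high

    def clock_byte(byte):
        for i in range(7, -1, -1):             # bytes go out MSB first
            bit = (byte >> i) & 1
            seq.extend([(0, bit), (1, bit)])   # set SDA while SCL low, clock high
        seq.extend([(0, None), (1, None)])     # ACK slot: the slave drives SDA

    clock_byte((address << 1) | 0)             # address byte, write flag = 0
    clock_byte(data)
    seq.extend([(0, 0), (1, 0), (1, 1)])       # STOP: SDA rises while SCL high
    return seq
```

Reading the sequence back, the bits sampled while SCL is high reconstruct the address byte, mirroring the rule in step 3 that data must be valid on the low-to-high clock transition.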

Notes:
1. Devices with master capability can identify themselves to other specific master
devices and advise them of their own specific address and functionality.
2. Only two devices exchange data during one 'conversation'.

The trick of open-drain lines & pull-up resistors:


The bus interface in I2C is built around an input buffer and an open-drain transistor.
When the bus is in the "idle" state, the bus lines are kept in the logic "high" state;
external pull-up resistors are used to achieve this condition. The pull-up resistor,
as seen in fig 1, effectively acts as a small current source. If a device wants to put a
signal on the bus, the chip drives its output transistor, thus pulling the bus to the "low"
level. Suppose the bus is already occupied by another chip driving a "low" state onto
the bus; then all other chips lose their right to access the bus. The chip does this with
a built-in bus-mastering technique.
Both bus lines, SDA and SCL, are inherently bi-directional. This means that in a
particular device, these lines can be driven by the IC itself or by an external device.
In order to achieve this functionality, these signals use open-collector or open-drain
outputs.
The weak point of the open-collector technique is that on a lengthy bus the speed of
transmission drops drastically due to the capacitive load. The
shapes of the signals degrade in proportion to the RC time constant: the higher the RC
constant, the slower the transmission. At some point, the ICs will no longer be able to
distinguish logic 1 from 0.

It can also cause reflections at high speed, which create "ghost signals" and
corrupt the data being transmitted.
This problem can be overcome by using an active I2C terminator. This device consists
of a twin charge pump, which can be considered a dynamic resistor (instead of the
passive pull-up resistors otherwise used). The moment the bus state changes, it supplies
a large current (low dynamic resistance) to the bus. This action charges the parasitic
capacitance very quickly. Once the voltage has risen above a certain level, the high-current
mode cuts out and the output current drops sharply.
Different states, conditions & events on the bus:
We saw several distinct states and conditions on the bus in the explanation above:
START, ADDRESS, ACK, DATA and STOP. Let us now look at these states in more
detail.
START:
A Start condition must be issued on the bus before any type of transaction.
The master chip first pulls the data line (SDA) low, and next pulls the clock line
(SCL) low. This condition signals all the connected chips to listen to the bus and
expect a transmission.
A single message can contain multiple Start conditions. Their use is called a
"repeated start".
ADDRESS & DATA:
After the "start" bit, a byte (7+1 bits) is transmitted to the slaves by the master. This
byte carries the address identifying a particular slave on the bus, and bit 0
determines the slave access mode ('1' = read / '0' = write). Remember, bytes are
always transmitted MSB first. An R/W bit of '0' indicates that the master wants to send
data to the slave. The addressed slave then responds with an ACK signal,
indicating that it is ready to receive, and the communication continues.
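Building the first byte after START is a one-liner: shift the 7-bit address left and put the R/W flag in bit 0. A Python sketch (the function name is illustrative):

```python
def address_byte(addr7, read):
    """First byte after START: the 7-bit slave address in bits 7..1,
    and the R/W flag ('1' = read, '0' = write) in bit 0."""
    assert addr7 < 2**7
    return (addr7 << 1) | (1 if read else 0)
```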
In the same way, a byte can be received from the slave if the R/W bit in the address
was set to '1', i.e. 'read'. But now the master is not allowed to touch the SDA line:
it sends the 8 clock pulses needed to clock in a byte on the SCL line and
releases the SDA line, and the slave takes control of the SDA line for the
data transfer. All the master has to do now is generate a rising edge on the SCL line,
read the level on SDA, and generate a falling edge on the SCL line. The slave will not
change the data during the time that SCL is high. This sequence is performed 8
times to complete the data byte.
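The read sequence just described - raise SCL, sample SDA, lower SCL, eight times - can be sketched with hypothetical pin-driver callbacks (all three callback names are assumptions):

```python
def master_read_byte(scl_high, scl_low, read_sda):
    """Clock in one byte from the slave, MSB first, following the
    sequence in the text: raise SCL, sample SDA, lower SCL, x8.
    scl_high/scl_low/read_sda are hypothetical pin-driver callbacks."""
    value = 0
    for _ in range(8):
        scl_high()                           # rising edge on SCL
        value = (value << 1) | read_sda()    # slave holds SDA stable here
        scl_low()                            # falling edge on SCL
    return value
```

In a test harness the callbacks can simply feed bits from a simulated slave, which is enough to verify the MSB-first assembly.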
Some of the addresses are reserved for an "extended addressing mode" that uses
10-bit addressing. A standard slave node that cannot resolve this extended
addressing simply ignores such an address.
ACK:
As described above, an ACK signal is sent back to the master whenever an address or
data byte has been transmitted on the bus and received by the intended slave node.

To issue the ACK, the slave drives the SDA line low immediately after receiving the
8th data bit transmitted by the master, or after evaluating a received address byte.
So during the acknowledge cycle, SCL is pulled low by the master, whereas SDA is
pulled low by the slave.
To continue the transmission, the master issues another clock pulse on the SCL line
and the slave releases the SDA line on receiving that clock. The bus is now ready
again for the master to send data or to initiate a stop condition.
In the same way, the master must acknowledge the slave device upon successful
reception of a byte from a slave.
During this acknowledge, both SDA and SCL are in full control of the master. The
slave releases the SDA line after sending its last bit, letting SDA float high. The
master then pulls the SDA line low and puts a clock pulse on the SCL line. After
completion of this clock pulse, the master releases the SDA line again to allow the
slave to regain control of it.
The master can stop receiving data from the slave at any time, just by sending a stop
condition.
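The read handshake above can be modeled in software. This is a toy simulation rather than driver code; `StubSlave` and the function names are invented for illustration:

```python
class StubSlave:
    """Toy slave that shifts one data byte out MSB first, one bit per clock."""
    def __init__(self, data):
        self.bits = [(data >> i) & 1 for i in range(7, -1, -1)]
    def next_bit(self):
        return self.bits.pop(0)

def master_read_byte(slave, ack):
    """Clock in 8 bits from the slave, then drive the ACK/NACK bit.
    Returns (byte, bit_driven_by_master_on_9th_clock)."""
    byte = 0
    for _ in range(8):
        # Rising edge on SCL: sample SDA (slave holds it stable while SCL is high).
        bit = slave.next_bit()
        byte = (byte << 1) | bit
        # Falling edge on SCL: slave may now change SDA for the next bit.
    # 9th clock: master drives SDA low (ACK) to request more data,
    # or leaves it high (NACK) before issuing a Stop condition.
    return byte, 0 if ack else 1

value, ack_bit = master_read_byte(StubSlave(0x5A), ack=True)
print(hex(value), ack_bit)  # 0x5a 0
```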
NACK:
NACK means "Not Acknowledge". Do not confuse it with "No Acknowledge": "Not
Acknowledge" occurs only after a master has read a byte from a slave, whereas "No
Acknowledge" occurs after a master has written a byte to a slave. Let's analyze this
in detail.
This happens when the slave regains control of the SDA line after the ACK cycle
issued by the master.
Let's assume the next bit ready to be sent to the master is a 0. The slave would pull
the SDA line low immediately after the master takes the SCL line low. The master now
attempts to generate a Stop condition on the bus: it releases the SCL line first and
then tries to release the SDA line, which is held low by the slave. In short, no Stop
condition can be generated on the bus. This situation is called a NACK.
No ACK:
If, after transmission of the 8th bit from the master to the slave, the slave does
not pull the SDA line low, this is considered a No ACK condition.
This condition may be created due to the following reasons:
1. The slave is not present (in the case of an address).
2. The slave missed a pulse and got out of sync with the master's SCL.
3. The bus is "stuck": one of the lines could be held low permanently.
In any case the master should abort by attempting to send a stop condition on the bus.

STOP:
The stop state is sent to the bus only after the message transfer has been completed.
The master MCU first releases the SCL line and then the SDA line. This condition
indicates to all the chips and devices on the bus that the bus is idle and
available again for another communication.
A Stop condition denotes the END of a transmission, even if it is issued in the middle
of a transaction or in the middle of a byte. In this case, the chip disregards the
information sent and goes to IDLE-state, waiting for a new start condition.
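The Start and Stop conditions defined above are both SDA transitions while SCL stays high, which a simple decoder sketch can classify; the (SCL, SDA) sample format here is invented for illustration:

```python
def classify_condition(prev, curr):
    """Classify a pair of consecutive (scl, sda) line samples.
    Returns 'START', 'STOP', or None for ordinary data activity."""
    (scl0, sda0), (scl1, sda1) = prev, curr
    if scl0 == 1 and scl1 == 1:          # SCL held high throughout
        if sda0 == 1 and sda1 == 0:
            return "START"               # SDA falls while SCL is high
        if sda0 == 0 and sda1 == 1:
            return "STOP"                # SDA rises while SCL is high
    return None

print(classify_condition((1, 1), (1, 0)))  # START
print(classify_condition((1, 0), (1, 1)))  # STOP
```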
Modes of operation:
The I2C bus can operate in three modes; in other words, the data on the I2C bus can
be transferred in three different modes.
1. Standard mode
2. Fast mode
3. High-Speed (Hs) mode
Standard mode:
1. This is the original Standard mode, released in the early 1980s.
2. It has a maximum data rate of 100 kbps.
3. It uses 7-bit addressing, which provides 112 usable slave addresses.
Enhanced or Fast mode:
The Fast mode added some more requirements for slave devices:
1. The maximum data rate was increased to 400 kbps.
2. To suppress noise spikes, Fast-mode devices were given Schmitt-triggered inputs.
3. The SCL and SDA lines of an I2C-bus slave device were made to exhibit high
impedance when power is removed.
High-Speed mode:
This mode was created mainly to increase the data rate, up to 34 times faster than
Standard mode. It provides 1.7 Mbps (with a bus capacitance Cb of 400 pF) and
3.4 Mbps (with Cb = 100 pF).
The major differences between High-Speed (HS) mode and Standard mode are, first,
that HS-mode systems must include an active pull-up on the SCL line, and second, the
way the mode is entered: the master sends a compatibility request ("master code") on
the bus, and if that code is not acknowledged (SDA remains high during the
acknowledge bit), the master assumes the bus is capable of HS-mode and switches to
the higher bit rate.
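Since every byte costs 9 SCL cycles (8 data bits plus the ACK bit, per the frame description above), a rough upper bound on throughput per mode can be computed. This sketch ignores START/STOP and addressing overhead:

```python
# Bit rates per mode as listed in the text; throughput figures are approximate.
BIT_RATES = {"standard": 100_000, "fast": 400_000, "high-speed": 3_400_000}

def bytes_per_second(bit_rate):
    """9 SCL cycles (8 data bits + ACK) move one byte across the bus."""
    return bit_rate // 9

for mode, rate in BIT_RATES.items():
    print(f"{mode}: ~{bytes_per_second(rate)} bytes/s")
```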

The risk of data corruption:


The operation of the bus with one master node seems very easy. But what happens if
two masters are connected to the bus and both start communicating at the same time?
Let us analyze this situation in detail.
When the first MCU issues a start condition and sends an address, all slaves will listen
(including the second MCU which at that time is considered a slave as well). If the
address does not match the address of the second MCU, it will hold back any activity
until the bus becomes idle again after a stop condition.
As long as the two MCUs monitor what is going on on the bus (Start and Stop
conditions), and as long as they are aware that a transaction is in progress because
the last issued condition was not a Stop, there is no problem.
But what happens if one of the MCUs missed the Start condition and still thinks the
bus is idle, or has just come out of reset and wants to communicate?
The physical bus setup of I2C helps to solve this problem. Since the bus structure
is a wired AND (if one device pulls a line low, it stays low), it is possible to
determine whether the bus is idle or occupied.
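The wired-AND property can be demonstrated with a small arbitration simulation; the bit patterns are illustrative, and a real master also re-checks the line on every clock:

```python
# Sketch of wired-AND arbitration: the bus level is the AND of all drivers,
# and a master that reads back a 0 while sending a 1 has lost arbitration.

def bus_level(driven_bits):
    """Open-drain wired AND: any device driving low pulls the whole line low."""
    return min(driven_bits)

def arbitrate(master_a_bits, master_b_bits):
    """Return 'A', 'B', or 'tie' for two masters transmitting simultaneously."""
    for a, b in zip(master_a_bits, master_b_bits):
        level = bus_level([a, b])
        if a == 1 and level == 0:
            return "B"   # A sent 1 but saw 0: A backs off, B keeps the bus
        if b == 1 and level == 0:
            return "A"
    return "tie"         # identical bit streams so far; nobody has lost yet

# The master sending the lower address wins on the first differing bit:
print(arbitrate([1, 0, 1, 0, 0, 0, 0, 0], [1, 0, 1, 0, 0, 0, 0, 1]))  # A
```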
Benefits and Drawbacks:
Since only two wires are required, I2C is well suited for boards with many devices
connected on the bus. This helps reduce the cost and complexity of the circuit as
additional devices are added to the system.
On the other hand, because only two wires are used, addressing and acknowledgments
add protocol overhead. This can be inefficient in simple configurations, where a
direct-link interface such as SPI might be preferred.

III-I2C AND CAN COMPARISON :


- CAN is differential, and is therefore more immune to noise.
- Both I2C and CAN support arbitration and multiple masters on the same bus.
- I2C was really designed for talking between chips on the same board, while CAN
is used to connect multiple boards.
- A classic CAN frame only allows 8 data bytes to be sent at a time; an I2C
transfer is essentially unlimited in length.
- You can broadcast with I2C, although I2C uses node addresses for most
messages.
- I2C typically requires more MCU overhead than CAN, because most CAN
controllers implement all of the low-level protocol handling in hardware.
- I2C slave devices can stretch the clock to slow the transfer down. This is not
good if the clock gets stretched indefinitely.
