
Fundamentals of Pharmaceutical Manufacturing Technologies

VOMP 3001

Question Booklet 5 of 8

Step 1 and Step 4 Revision Questions

Rev 001 Session-5 Question Booklet Page 1 of 334


Prepare the answers to your questions in this Booklet – then, when you have an answer that you are satisfied with, post the answer on your course page.

5-1: Biopharmaceuticals Manufacturing, Upstream, Fermentation

Step 1

Warm up - Before watching the video, answer the question to 'unlock' your prior knowledge

Q: What conventional products / foodstuffs / beverages are you aware of that are derived from biological processes?

Dairy products such as cheese, buttermilk, kefir and yogurt. Beverages such as beer, vodka and wine. Foodstuffs such as cakes and breads, sauerkraut, soy sauce, etc.

Industrial biotechnology is a promising technology that uses enzymes and micro-organisms to create bio-based products in sectors such as chemicals, food ingredients, detergents, paper, textiles, and
biofuels. This technology has the potential to address global challenges such as feeding a growing population and providing new alternatives to scarce natural resources. Biotechnology is not a new concept,
as traditional products like bread, beer, cheese, wine, and yoghurt all use natural processes. In the 1800s, Louis Pasteur demonstrated that fermentation was the result of microbial activity. In 1928, Sir
Alexander Fleming extracted penicillin from mold, and in the 1940s, large-scale fermentation techniques were developed to produce industrial quantities of the wonder drug. The biotechnology revolution
began after World War II, leading to modern industrial biotechnology. Since then, industrial biotechnology has produced enzymes for use in daily life and the manufacturing sector. These enzymes, which are
specialized proteins, have evolved in nature to be super-performing biocatalysts that facilitate and speed up complex biochemical reactions. These enzyme catalysts make industrial biotechnology a powerful
technology.

Industrial biotechnology plays a crucial role in various industries, including alcohol production, beer production, and food processing. Alcohol is made from water, starch sources like barley, brewer's yeast,
and flavorings like hops, which are converted to sugar by enzymes. Enzymes and microbes are two common tools used in industrial biotechnology.

First-generation biofuels are produced by fermenting plant-derived sugars to ethanol or by converting plant-oils to biodiesel, using crops such as sugar cane, corn, wheat, oil seed rape, or sugar beet. These
biofuels are blended with petrol and diesel to meet greenhouse gas emissions legislation. Blending biofuels into road transport fuel can reduce its carbon impact. The Fuel Quality Directive allows for up to
10% ethanol to be blended into petrol.
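As a rough illustration of the arithmetic behind a blend limit such as E10, the sketch below computes the ethanol volume in a given quantity of blended petrol (the function name and figures are illustrative, not from the source):

```python
def ethanol_volume_l(total_blend_l: float, ethanol_fraction: float = 0.10) -> float:
    """Volume of ethanol (litres) in a petrol blend.

    ethanol_fraction defaults to 0.10 for an E10 blend, the maximum
    permitted under the fuel quality directive mentioned above.
    """
    if not 0.0 <= ethanol_fraction <= 1.0:
        raise ValueError("ethanol_fraction must lie between 0 and 1")
    return total_blend_l * ethanol_fraction

# A 50 L fill of E10 petrol contains at most 5 L of ethanol.
print(ethanol_volume_l(50))
```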

Biotech products come either directly from cells or are made using enzymes taken from cells. Cells and enzymes can also be biotech products themselves, such as probiotic yogurts and non-soya veggie
burgers. Enzymes are used in washing detergents, food processing, cosmetics, and more.

Cells and enzymes are used to make washing detergents, with the sugars used as feedstock traditionally coming from cereal crops. However, development is ongoing to access the sugars locked up in waste-derived feedstocks such as
agricultural residues, forestry residues, and post-consumer waste. Bioplastics made from biopolymers are already used in plastic food packaging, mobile phone cases, sunglasses, pens, and personal care
packaging for products such as shampoos and conditioners.

Bioplastics offer an alternative to synthetic polymer fibres produced from fossil fuels, which are used to make plastic bottles, clothing, blankets, carpets, and other fabrics. Many biochemicals are also used in the production of dyes, tanning agents, nylon, and polyester, all of which are vital materials in the production of textiles for carpets, clothing, and upholstery.



In the future, many consumer products will contain materials derived from bio-based feedstocks, including personal care products such as makeup, shampoos, and skin care. Extracted cellulose fibres are absorbent and tough, and can be extracted from raw materials for use in composites as a replacement for glass and in many applications where absorbency is needed.

Biorefineries also produce gas that can be combusted to generate heat and power, alongside feedstocks for products such as skin cream. Algae can also be grown as a biofuel, and CPI is working on projects to allow biomass from algae to be recycled and used to produce a wide variety of products such as bioethanol, biopharmaceuticals, biogas, and compost for crop production.

Food and drink industry uses biochemicals for various products, including bioplastics, flavours, fragrances, sweeteners, souring agents, acidity regulators, and dietary supplements. Biorefineries can also
extract nutraceuticals such as dietary supplements and herbal products, and specialist chemicals can help ripen fruit ready for sale.

In conclusion, industrial biotechnology plays a significant role in improving our everyday lives and the way we live. With the recent opening of CPI's C1 gas facility, the technology continues to evolve
alongside our economic landscape.

Step 4

Self Assessment - Answer the following questions to self-assess your knowledge of the subject.

Q 1: What are the inputs to and outputs from upstream processing operations for a conventional biotechnological process?

The inputs to the upstream process are the raw materials and the biocatalyst. In the making of beer, the raw materials are grains and hops and the biocatalyst is yeast. The outputs are the product and side products that pass to the next stage of the process.

Inputs of the upstream processing operations: Media preparation; Cell culture; Cell separation; Cell harvesting. Output: Protein product and waste.

Source: ESI Ultrapure – Upstream Processes

Upstream Processing (USP): What Is It?

The first stage of a bioprocess, such as growing bacterial or mammalian cell lines in bioreactors, is referred to as the upstream part of the process. All of the stages involved in developing an inoculum are
included in upstream processing:

Media Preparation

Cell Culture

Cell Separation

Harvest

When the cells have reached the desired density, they are harvested and moved to the downstream section of the bioprocess.

Preparation of media

Media preparation employs distinct formulations for each phase of bioreactor scaling, from inoculation to harvest. It often occurs in tanks, carboys, bottles, or bags, to which the media components are added. Like people, cells require adequate nutrition in order to function and ultimately generate the protein product.

Hence, media mostly consists of a combination of carbohydrate (glucose), nitrogen (amino acids), fats (lipids), and small quantities of salt. Media components are typically in a powdered form and are added
to water for injection (WFI), which has a high level of purity. Before the cells are exposed to the culture medium, it is necessary to ensure that the media is homogeneous and well mixed. The mixed solution is then transferred from a container or large glass bottle to the bioreactor.
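The batch arithmetic for dissolving powdered media into WFI is straightforward; the sketch below computes the powder mass needed for a target concentration (the concentration and volume figures are illustrative, not from the source):

```python
def powder_mass_g(target_conc_g_per_l: float, batch_volume_l: float) -> float:
    """Mass of powdered media (g) to dissolve in WFI for a target concentration."""
    if target_conc_g_per_l < 0 or batch_volume_l < 0:
        raise ValueError("concentration and volume must be non-negative")
    return target_conc_g_per_l * batch_volume_l

# e.g. a hypothetical 15.5 g/L formulation for a 200 L media batch:
print(powder_mass_g(15.5, 200))  # 3100.0 g of powder
```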

The mixed solution can be conveyed using a pump in cases where gravity is insufficient, and single-use sensors (flow, pH, pressure, etc.) can be employed to verify proper functionality.



Disposable assemblies

Product transfer assemblies for buffer or media. Reservoir bags. Buffer preparation assemblies.

Closures for bottles and flasks

Sampling: assemblies for extracting aseptic samples from the buffer or media

Tubing

Biopharmaceutical TPE Tubing is used to establish a sterile closed system for transferring the buffer to the bioreactor by the process of welding.

Silicone tubing is predominantly utilised in typical applications for conveying the buffer, eliminating the need for welding.

Pumps

Tubing specifically designed for peristaltic pumps is known as pump-grade tubing. Its design aims to maximise lifespan during pump operation.

Filtering mechanisms

Preliminary filters: Proclear

Viral clearance can be achieved with the use of retentive filters.

Utilising sterilising grade filters to exclude germs from the solution

Centrifugal pumps

Levitronix: utilise a centrifugal pump that employs magnetic levitation to facilitate the transfer process, ensuring a continuous, pulsation-free flow rate.

Plastic fittings and connectors

Sterile connectors enable the connection of tubes or the connection of tubing to the column, ensuring a completely closed and sterile system.

Bioprocess fittings, such as Y-pieces, Tees, and reducers, are provided for connecting tubing or linking tubing to the bioreactor. These fittings can also be used to create a single-use assembly.

Instrumentation

Utilise disposable sensors to regulate and oversee your operation during media preparation.

Containers for liquids

Centrifuge Flask

Erlenmeyer flask, sometimes known as a shake flask

Media: containers for collecting samples during media preparation.

Sampling: containers for collecting samples during media preparation.

Cell Culture

Cell culture refers to the cultivation of cells in a laboratory setting, outside of their natural environment. Living organisms undergo cellular division for growth, and the same principle applies to cell culture,
where cells divide and proliferate in an appropriate environment.

An optimal environment comprises a nutrient-rich growth medium and specialised cell culture vessels that enable precise control over gases and temperature. As the cell's density rises, the medium will
become opaque in the growth stage. The growth is meticulously monitored to assess the proliferation of cells during specific cycles; this is accomplished by collecting samples and accurately enumerating
the cells.
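One routine calculation made from those counts is the culture's doubling time, assuming exponential growth between two samples (the counts and times below are illustrative, not from the source):

```python
import math

def doubling_time_h(count_t0: float, count_t1: float, elapsed_h: float) -> float:
    """Doubling time (hours) from two viable-cell counts, assuming
    exponential growth over the sampling interval."""
    if count_t1 <= count_t0:
        raise ValueError("culture must have grown between samples")
    return elapsed_h * math.log(2) / math.log(count_t1 / count_t0)

# A culture growing from 2e5 to 1.6e6 cells/mL over 72 h has doubled
# three times (x8), so its doubling time is 24 h.
print(doubling_time_h(2e5, 1.6e6, 72))
```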

Disposable Assemblies

Product transfer assemblies are used to transfer substances such as media and cells.

Cell culture bag assembly designed for cell growth and to facilitate the expansion of cell cultures.

Cell culture bottle assembly designed for cell growth and efficient scaling up of the culture.

Tube assemblies employing centrifugal force for the purpose of harvesting



Tubing

Pharmaceutical Grade TPE Tubing: Appropriate for fusion and closure applications.

Platinum-cured silicone tubing designed for the purpose of transferring products.

Bioprocess bags

Bioprocess bags, both in 2D and 3D formats, can be obtained in sizes up to 2000L. These bags are specifically developed for the purpose of preparing, storing, and transporting buffers and solutions used in
biopharmaceuticals.

2D Rockerbags designed for cell culture applications

Cryogenic bags with a capacity of up to 20 litres are provided for the purpose of storing biopharmaceutical goods at a temperature of -85 degrees Celsius.

Filtering mechanisms

Prefilters, whether integrated or standalone, can be utilised to eliminate substances from the solution.

Utilising sterilising grade filters to exclude microorganisms from the solution

Air filtration systems

Pumps

Disposable/Reusable Pumps that exert minimal shear stresses and provide high cell viability are highly valuable in cell culture applications.

Plastic fittings and connectors

Bioprocess couplers and fittings, such as Y pieces, Tees, and reducers, are provided for the purpose of connecting tubing to tubing.

Sterile connectors provide an alternative to the processes of welding and sealing.

Plastic Equipment

Bag Totes

Roller Dollies

Instrumentation

Disposable Pressure, temperature, and flow sensors

Welders are used to join TPE tubing through the process of welding.

Sealers are used to securely close or fasten thermoplastic elastomers (TPE) by creating a seal.

Containers for liquids

A spinner flask is a type of flask used for culturing cells in the laboratory. An Erlenmeyer flask, also known as a shake flask, is another type commonly used for mixing and shaking cultures.

Roller bottles

Storage containers designed for media

Containers for storage.

Cell Separation

Cell separation refers to the process of isolating individual cells from a mixture or population of cells.

The initial stage in recovering the protein product from the culture is cell separation. The end of the cell culture must occur at a specific time, which is calculated in advance based on the quality and quantity of product that has accumulated in the bioreactors. Centrifugation is typically the initial stage in the separation of mammalian cells, involving rapid spinning and sedimentation of the cells to separate them from the culture. Centrifugation relies on the difference in density between the particles to be isolated and the surrounding medium, and is primarily employed to segregate solid particles from the liquid phase, also known as fluid/particle separation.

The process of harvesting microorganisms is similar to that for mammalian cells, but it requires an additional step of lysing the bacteria to release the protein before centrifugation.

The subsequent stage involves depth filtration to eliminate the larger debris. Prior to the centrifugation and depth filtration processes, the harvest is turbid. These two procedures enhance the clarity of the mixture and are commonly referred to as clarification steps. At this point, the mixture has been freed of cells and large debris. Depth filtration employs disposable plastic pods that lack any reusable wetted components. The depth filter pods are secured in a clamp-like holder because the plastic housings have limited pressure capacity. Depth filtration is a suitable substitute for centrifugation.
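The dependence of centrifugation on the density difference can be sketched with Stokes' law, in which settling velocity scales with the density difference and with the applied g-force (the particle and fluid values below are illustrative, not from the source):

```python
def settling_velocity_m_per_s(
    particle_diameter_m: float,
    particle_density_kg_m3: float,
    fluid_density_kg_m3: float,
    fluid_viscosity_pa_s: float,
    rcf: float = 1.0,  # relative centrifugal force (multiples of g)
) -> float:
    """Stokes-law settling velocity of a small sphere.

    In a centrifuge, gravity is effectively multiplied by the RCF,
    which is why spinning separates cells far faster than settling.
    """
    g = 9.81
    return (particle_diameter_m ** 2
            * (particle_density_kg_m3 - fluid_density_kg_m3)
            * rcf * g) / (18.0 * fluid_viscosity_pa_s)

# A ~15 um mammalian cell, slightly denser than the medium, at 1000 x g:
v = settling_velocity_m_per_s(15e-6, 1050, 1000, 1e-3, rcf=1000)
print(v)  # on the order of millimetres per second
```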



Following the harvest process, it is common to employ sterile-grade membrane filtration as a subsequent step to eliminate smaller particles and microbial contamination. Sterile-grade filtration typically has a rating of 0.22 µm or below. This step clarifies the product and prepares it for subsequent bioprocess operations.
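Filter performance of this kind is often expressed as a log reduction value (LRV); a minimal sketch, with illustrative counts not taken from the source:

```python
import math

def log_reduction_value(challenge_cfu: float, filtrate_cfu: float) -> float:
    """LRV = log10(organisms in / organisms out) for a filtration step."""
    if challenge_cfu <= 0 or filtrate_cfu <= 0:
        raise ValueError("counts must be positive")
    return math.log10(challenge_cfu / filtrate_cfu)

# Reducing a 1e7 CFU challenge to 10 CFU corresponds to a 6-log reduction.
print(log_reduction_value(1e7, 10))
```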

Disposable Assemblies

Product transfer assemblies are used to transfer substances such as media and cells.

Tube assemblies designed for centrifugal harvesting purposes

Silicone tubing made from platinum-cured material.

Peristaltic pumps now have specialised tubing designed specifically for their use, known as pump-grade tubing. It has the ability to endure extended periods of pump operation.

This TPE tubing is specifically designed for use in the biopharmaceutical industry. It is well-suited for welding and sealing purposes.

Bioprocess bags

Assemblies of bags in two dimensions

Assemblies of bags in three dimensions

Sampling Systems

A sampling bottle system is provided, offering a range of 60mL bottles for the purpose of obtaining a sterile sample of the product in order to assess its quality.

The sampling bag system is used to collect sterile samples of the product in 50mL bags for quality inspection purposes.

Filtering mechanisms

Prefilters, whether integrated or independent, can be utilised to eliminate substances from the solution.

Sterilising grade filters are specifically engineered to efficiently and cost-effectively process solutions that are challenging to filter.

Microfiltration filters used for cell harvesting.

Depth filters used to harvest cells.

Air filtration systems.

Pumps

Single-use or reusable pumps that exert minimal shear forces on cells and provide high cell viability are highly beneficial in cell-harvesting applications.

Plastic fittings and connectors

Sterile connectors offer a viable alternative to the process of tube welding and sealing.

Bioprocess connectors and fittings (such as Y-pieces, Tees, reducers, etc.) are provided to join tubing together.

Plastic Equipment

Bag Totes

Roller Dollies

Instrumentation

Disposable sensors are utilised to monitor and measure the pressure, flow rate, conductivity, pH level, temperature, and turbidity in order to assess the conditions during the harvesting process.

Welders are used to join TPE tubing through the process of welding.

Sealers are used to hermetically seal TPE tubing through the process of sealing.

Biotechnology, a field that has evolved over 8000 years, is applied to various industries such as healthcare, food and agriculture, industrial, and environmental cleanup. The term "biotechnology" was first
used in the mid-1970s and has been defined as "the application of scientific and engineering principles to processing materials by biological agents to provide goods and services." Some definitions replace
the term "biological agents" with more specific terms like microorganisms, cells, plant and animal cells, and enzymes.



Biotechnology is an interdisciplinary field with contributions from basic life science disciplines such as molecular and cell biology, biochemistry, genetics, and engineering such as chemical, instrumentation,
and control. The entire process can be divided into three stages: Upstream Processing, Fermentation, and Downstream Processing.

Upstream processing involves the cultivation of microorganisms, including cell culture, which involves engineering and growing the cell line to be used to manufacture the drug product. Harvesting and
recovery involves the separation of crude product from the microbial mass, other solids, and the liquid medium to prepare it for purification. This often requires some type of cell disruption or separation, such as centrifugation, mechanical grinding, freezing, detergents, enzymes, high pressure, or homogenization.

Downstream operations begin by separating the 'good' from the 'waste' in the product materials via a filtration operation (cross-flow filtration (CFF), also known as tangential-flow filtration (TFF)). The separation steps
include extraction and precipitation, filtering, microfiltration, and ultrafiltration.

Bioanalysis is another essential step in biopharmaceutical manufacturing, as it helps to understand the properties of the products and their potential applications. Facility layout plays a crucial role in the
overall success of biopharmaceutical manufacturing.

The process of biopharmaceuticals involves several steps, including purification, filling, bioanalysis, and facility layout. Purification involves high-risk chromatography operations and is costly to perform. It
involves gel/size-exclusion filtration (SEC), ion exchange (IEX), hydrophobic interaction (HIC), and affinity. The required protein must be modified to a stable, sterile form that can be taken by the patient.
Biotech products traditionally are sterile injectables, but there is progress in inhalation and transdermal delivery options.

Filling is the process of placing the drug product into a container, with two general categories: bulk and final. Bulk filling is defined as the placement of larger quantities (5L-100L) of product into containers for
shipment/storage, while final filling is the placement of drug product into its final container/closure system. Most production facilities produce product in bulk, and many companies ship their bulk to contract
filling firms.
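The split between bulk and final filling comes down to simple volume accounting; the sketch below estimates how many final containers a bulk lot yields (the fill and overfill figures are illustrative, not from the source):

```python
def vials_from_bulk(bulk_volume_l: float, fill_ml: float, overfill_ml: float = 0.0) -> int:
    """Whole vials obtainable from a bulk lot, allowing a per-vial overfill."""
    per_vial_ml = fill_ml + overfill_ml
    if per_vial_ml <= 0:
        raise ValueError("per-vial volume must be positive")
    return int(bulk_volume_l * 1000.0 // per_vial_ml)

# A 100 L bulk lot filled at 10 mL per vial with 0.5 mL overfill:
print(vials_from_bulk(100, 10, 0.5))  # 9523 vials
```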

Bioanalysis is critical as it provides proof of the drug's safety, purity, and efficacy. Analytical methods are required for backing up regulatory submissions, supporting pre-clinical and clinical studies,
monitoring environmental conditions during manufacturing, and monitoring the quality of the manufacturing process.

The facility layout includes media preparation, staging, buffer preparation, buffer hold, media path, harvest, inoculum cell culture, utilities, cell culture support, buffer path, and purification. Upstream
processing stages include inoculum expansion, liquid nitrogen storage containers, flasks on a rotary shaker, and a laboratory-scale seed fermenter. Buffer solutions are used to keep pH at a nearly constant value in
various chemical applications, as many life forms thrive only in a relatively small pH range.
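The pH of such a buffer can be estimated with the Henderson-Hasselbalch equation; a minimal sketch (the phosphate example values are illustrative, not from the source):

```python
import math

def buffer_ph(pka: float, base_conc_m: float, acid_conc_m: float) -> float:
    """Henderson-Hasselbalch: pH = pKa + log10([conjugate base]/[weak acid])."""
    if base_conc_m <= 0 or acid_conc_m <= 0:
        raise ValueError("concentrations must be positive")
    return pka + math.log10(base_conc_m / acid_conc_m)

# An equimolar phosphate buffer (pKa2 ~ 7.2) sits at its pKa,
# near physiological pH:
print(buffer_ph(7.2, 0.05, 0.05))  # 7.2
```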

The development of culture media for mammalian cells has been a subject of study for over 50 years. The first attempts at culturing animal cells in vitro used biological fluids, such as serum and blood or
tissue extracts. This was followed by the attempt to culture animal cells in defined media through the analysis of the contents of biological fluids. Eagle's minimal essential medium (EMEM) was developed to
provide the necessary defined components for growth. As new cell lines became available in the scientific community, new formulations were developed, including Dulbecco's modification of Eagle's
medium (DMEM). As progress was made in understanding cell metabolism and growth factor requirements, various serum-free formulations were also developed.

Currently, there are many formulations available for the culture of animal cells, with the decision of which formulation to use depending on the purpose of the culture. For the production of viruses or other
non-specific molecular studies, basic formulations such as serum-supplemented MEM are often used. However, for other studies where undefined components can affect the results or in large-scale
production systems where productivity is an issue, serum-free formulations are relied upon.

The supplementation of culture media with undefined components, such as serum, has many inherent disadvantages, fueling the demand for better serum-free media in both the research and industrial
communities. New formulations are being developed all the time, but the performance of many of these formulations remains poor as many contain undefined components that can affect their quality and
consistency. Culture media typically consist of antibiotics, carbohydrates, amino acids, salts, trace elements, vitamins, buffers, serum or serum-free supplements, and growth factors.


The development and manufacturing of biologics had experienced significant growth worldwide even before the emergence of the COVID-19 pandemic, with demand for biosimilars also increasing. The
requirements associated with COVID-19 vaccine manufacturing are placing further pressure on upstream manufacturing capacities. The industry is exploring the right balance between internal
capacity/capability and external capacity/capability and key strategic partnerships. A strong and sustainable supply network is more vital to ensure supply chain stability, as global sourcing of raw materials
can increase risk during a crisis. Many companies are looking to diversify their supply chains to reduce the risk of disruption in the future.

Equipment suppliers, contract manufacturers, and biopharmaceutical companies are implementing new bioprocessing solutions and manufacturing strategies to increase efficiency and productivity, forging
innovative partnerships, and investing in additional physical capacity on a global basis. The evolution of bioprocessing has driven upstream manufacturing processes to evolve, with the latest iteration of
technologies being developed with scalability, process intensification, and lean principles in mind to ensure quality and reliability for the life of a process. Process flexibility and technical innovations are key
strategies to accelerate the pace of process development while increasing upstream capacity.

Developing and scaling up manufacturing operations and analytical testing capabilities require major innovation and the courage to move away from conventional constructs. A focus on horizontal integration
internally and externally has been a key enabler for the COVID-19 vaccine and will be key to future new product introductions. The upstream capacity landscape does vary largely with the product class,
process design, and technology selection.

Shortening development timelines has been another focus of the industry, with efforts targeting the development of new expression technologies, parallel tracking of activities that are not on the critical path,
and implementing high-throughput, small-scale technologies. Digital transformation also affords good opportunities to improve lead times for rapid cycle process development, upstream capacity availability,
and quality. Samsung is leveraging automation, ultra scale-down modeling in microreactors, in-silico and predictive modeling including metabolomics, and mobile apps for company-wide data visibility to
reduce the client tech-transfer process to as little as three and one-half months while maintaining high levels of quality.

Viral vectors represent a special case with respect to manufacturing capacity shortages. Traditional viral vectors were manufactured using plasticware and manually intensive processes that required scale
out rather than scale up to increase capacity. These challenges can be overcome through the use of scalable technologies such as stirred-tank bioreactors or even intensified, scalable technologies such as
fixed-bed bioreactors.



Upstream capacity in viral-vector manufacturing is also impacted by capacity limitations in downstream processing. Parallelization of experiments using automated systems for high-producer clone selection,
medium optimization, and the availability of standard kits are having an overwhelmingly positive impact on capacity. Implementing platform processes that require no or minimal development, accurate scale-
down models, and a quality-by-design approach during development are minimizing risk and providing increased predictability when moving to large-scale production.

The one factor currently limiting the pace of change is talent availability. More expertise is needed across the industry to help companies identify new ways to further improve process development to make
better use of existing upstream capacity.

The COVID-19 pandemic has sparked increased interest and awareness in the biotech sector, leading to increased investment and utilization of current biologics manufacturing capacity. However, COVID-
19-related projects, including vaccines, are using up capacity that may have been used for other therapeutic products, increasing the demand for upstream capacity. This manufacturing is performed outside
of mainstream biopharma, where most capacity and capability is geared to the production of mAbs.

While COVID-19 has created a temporary shortage of upstream capacity for a portion of the industry, this spike has been counteracted by a stronger focus on life sciences as a whole, with more funds being
pushed into biotech for other vaccines and therapeutics. Large capital investments will provide relief and support growth of the industry in the mid-term. Biopharmaceutical companies of all sizes have sought
additional contract manufacturing capacity to free internal capacity, expedite process development and large-scale manufacture, and forge partnerships to secure secondary sources in their drug supply
chains.

For instance, Regeneron has been working with the FDA and the US government to rapidly scale up production of its REGEN-COV antibody cocktail. The company is leveraging production and
manufacturing platforms developed over decades, financial support from BARDA, and accelerated licensing of its Irish facility. This approach has enabled the company to adjust much of its internal
manufacturing activities to maximize its ability to produce REGEN-COV while still ensuring patients get the other FDA-approved Regeneron medicines that they need.

To support its COVID-19 vaccine (BNT162b2), Pfizer had to both expand its capability and build new capacity for the upstream process, which it did in part through various partnerships. Industry
collaboration is evident in numerous examples where companies have supported one another to get a vaccine or a therapeutic to patients faster.

However, there have been supply chain challenges attributed to the COVID-19 pandemic. Vendors have rapidly shifted focus to produce materials to support COVID-19 vaccines and therapies, resulting in
some therapeutic developers having to slow down or redirect demand for equipment, reagents, and consumables.

The demand for materials used for the manufacture of biological products has increased over the past five years, mainly for SU equipment, and the accelerated development and production of COVID-19
vaccines has increased the demand much faster than suppliers have been able to increase capacity. The SARS-CoV-2 outbreak has highlighted the need for high-capacity, scalable technologies for viral
vaccine production, and many companies using adherent cells have been driven to consider more scalable technologies that maintain a similar microenvironment for the cells.

On the positive side, Hitchcock notes that the COVID-19 pandemic is not only stimulating new investments from the private sector and strategic investments from governments to establish local vaccine
production capabilities but also accelerating the development of these technology platforms and their broader application beyond cancer indications.

The increased demand for upstream bioprocessing capacity has impacted both classical pharmaceutical capacity and CDMO capacity, as it is the product that is important. Contract manufacturers, drug
developers, and outsourcing partners have all been impacted by this demand, with KBI Biopharma observing that clients focused on COVID-19 vaccines have limited in-house capacity. As a result, many are
moving other vaccine products with shorter timelines to CDMOs, leading to the increased demand across the industry as all manufacturers adapt to the manufacturing challenges created by the pandemic
while still maintaining production of other vaccines and therapeutics.

Vaccine manufacturing in terms of capacity has largely been provided by CDMOs operating in the cell and gene therapy space for viral vector and plasmid DNA production. Prior to the pandemic, large
investments in capacity were being put in place, with the likes of Cobra Biologics in the UK completing its viral vector production expansion in 2020 and other significant expansions being carried out globally
to keep up with the growth of the advanced therapies sector. However, CDMOs are likely to be more impacted than biopharma companies by upstream capacity issues because they cannot easily divert
capacity committed to existing customer projects to COVID-related projects without authorization from those customers.

Current shifts in manufacturing strategies include investing in new technologies and manufacturing strategies to achieve capacity increases. There is a drastic disruption needed to support the
commercialization of advanced therapies, with three emerging themes dramatically improving manufacturing efficiency across the biologics spectrum: continuous manufacturing, process analytics, and
radically improved downstream purification. There is also a lot of focus on compressing the biologic drug development timeline within the industry, especially in the process-development space.

Selexis and KBI Biopharma have invested in technology and innovation in cell-line development and process development to reduce cost of goods while increasing output at a given scale. High-yielding cell
lines and process intensification are powerful strategies to address growing demand. The company has also invested in digitalization and manufacturing 4.0 technologies to maximize the effectiveness of its
operations.
Rev 001 Session-5 Question Booklet Page 11 of 334
Lonza has invested in technologies and processes to reduce development, tech-transfer, and batch-release timelines for increased throughput and therefore additional capacity. They are further improving
right-first-time success rates and their lean management systems and have implemented in-line testing technologies that leverage digitalization to increase productivity. Pfizer has leveraged production
systems and its management infrastructure to free up time for continuous improvement of operations and innovation.

SU technologies are being widely implemented for commercial and clinical production to minimize the time and costs associated with maintenance and cleaning of standard reusable equipment. Companies
are also leveraging high-cell-density perfusion processes to boost efficiency. Increased focus on alternate sourcing for both materials and intermediates and use of automation, production scheduling, and
batch release are becoming more prominent.

Means for enhancing viral-vector productivities are also being explored, such as screening for high-producer cell clones, using technologies to achieve high cell densities and perfusion cultures, and
optimizing virus recovery. The use of high-efficacy viral vector platforms to minimize the needed vaccination dose can play an important role in helping to make better use of existing capacity.

The COVID-19 pandemic has prompted both CDMOs and biopharmaceutical companies to invest heavily in long-term capacity, with numerous investments made across the industry. KBI Biopharma is
implementing a global expansion strategy, including bringing online new clinical and commercial manufacturing facilities in the United States and Europe. Vibalogics is in the middle of a $150-million
investment at its site in Boxborough, MA, and is currently investing extensively in its early-phase facility in Cuxhaven, Germany. Samsung has implemented large-scale N-1 perfusion to enable inoculation of
production bioreactors at higher cell densities and achieve peak cell densities within shorter culture durations. The company has also invested in process automation, including a manufacturing execution
system and online monitoring capability, and is looking to fully automate its activities. A new manufacturing facility is under construction and expected to be operational by the end of 2023.

Significant activity has occurred in the plasmid DNA and viral vector space, with Cobra Biologics quadrupling its clinical and commercial capacity for high-quality DNA manufacturing in Europe and doubling
its capacity in the US. Other companies that have recently completed or announced new capacity include Matica Biotechnology, Porton Biologics, Fujifilm Diosynth Biotechnologies, Oxford Biomedica,
Emergent BioSolutions, BioReliance (Merck KGaA), WuXi Advanced Therapies, Delphi Genetics, Biomay, VGXI, and Thermo Fisher Scientific. SU equipment suppliers have also responded to the need to
increase output and reduce lead times for disposable biopharmaceutical manufacturing solutions. Pall Corporation is expanding capacity at six existing manufacturing sites in Europe and the US and
constructing a new manufacturing facility in the US.

Partnerships have been formed across the industry to provide flexible and agile supply options driven by demand and capacity. Increased partnerships across all supply chain segments will de-risk supply
and the overall required investment, giving access to flexible manufacturing capacity supported by innovative technologies and facilities, extremely well-trained staff, and competitive pricing. Numerous
industry collaborations, including strategic partnerships between companies and contract service providers exploring ideas around flexible capacity and partnerships with vendors and suppliers to better
understand and solve their capacity challenges, are playing a role alongside significant capital investments in capacity and capability and a paradigm shift in reducing capital project timelines to support
capacity expansion.

For Regeneron, innovative partnerships have been key to ensuring that the company is getting medicines and vaccines to patients as efficiently as possible. For example, Regeneron currently anticipates
that the full supply of REGEN-COV for its agreement with the US government will be produced by Regeneron, but the company has also significantly increased global capacity through a collaboration with
Roche. The whole life sciences industry is coming together to address the global crisis and hope this will inspire collaboration to achieve the common goal of delivering life-saving treatments faster than ever
to the patients who need them most—every day, not just during a crisis.

Addressing bioprocess capacity issues is crucial for the future success of many biopharmaceutical products, such as Regeneron's ability to provide high numbers of finished doses of REGEN-COV.
Investment in both R&D technology and manufacturing infrastructure is needed to better prepare for future pandemics. Innovative collaborations and relationships established during the pandemic should
continue to be explored as key strategic levers. Collaboration between clinicians, regulators, and industry will determine how much acceleration of drug development and commercialization is possible.

The COVID-19 pandemic serves as a case study in what happens if global capacity models are unexpectedly disrupted. Many innovative technologies are being developed with the potential to enhance and
shape the future of biomanufacturing. However, there has been a gap in investment capital for companies that have launched these innovative products but still require guidance to fully scale and
industrialize their solutions. Without expert guidance and proper representation, these innovations could be lost or stifled if they are acquired or merged into large life sciences companies with many
competing priorities.

The supply of crucial SU upstream process equipment and consumables is a key concern, with delivery times of up to 52 weeks currently expected across the industry. Aseptic filling of biologics, particularly
live virus filling, is one of the most critical bottlenecks in the biopharma development and manufacturing process, including availability of the required primary packaging material. Steps need to be taken to
optimize biologics manufacturing capacity as quickly as possible.

Companies developing new biologics, in particular vaccines, oncolytic viruses, and gene therapies based on viral vectors, need to identify and access manufacturing capacity as early as possible to ensure
they can bring their innovations to market as soon as they receive regulatory approval. Access to talent is another issue that must be addressed to ensure future success in biomanufacturing. Capital
investments, engineering projects, and process improvement are fundamental to enable the projected growth of the industry, but a sustainable pipeline of talent at all levels in organizations is also essential.

Access to key data in terms of capacity and capability is important for the biopharmaceutical industry as a whole, and leveraging these data to understand the current and potential challenges for all nodes of
the supply chain, upstream and downstream, will facilitate the prioritization of near-term and long-term action.

Q 2: What are the inputs to and outputs from the ‘Fermentation / Bio-reaction’ phase of a conventional biotechnological process?

Answer: These are the inputs and outputs

* Fermentation-based processes using native organisms

-Wine and beer making

-Baking

* Use of native or engineered enzymes/genes to enhance first-generation processes

-Textile industry

-Starch industry

*Direct use of recombinant and engineered systems

-Plant, insect and animal cell culture.

Inputs to fermentation in a conventional biotechnological process: for both alcohol and lactate fermentation, glucose and ADP/Pi. Outputs: for alcohol fermentation, ethanol, CO2, and ATP; for lactate
fermentation, lactate and ATP.

In simple terms, our inputs are yeast and sugar; our output is alcohol.
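These answers can be tied together with the classical stoichiometry of alcoholic fermentation. The following sketch is illustrative only: it assumes complete anaerobic conversion of glucose and ignores carbon diverted into biomass.

```python
# Simplified ethanol-fermentation mass balance:
#   C6H12O6 -> 2 C2H5OH + 2 CO2   (plus 2 ATP per glucose)
M_GLUCOSE = 180.16   # molar mass, g/mol
M_ETHANOL = 46.07
M_CO2 = 44.01

def theoretical_yields(glucose_g):
    """Return theoretical ethanol and CO2 masses (g) from a given
    mass of glucose, assuming complete anaerobic conversion."""
    mol_glucose = glucose_g / M_GLUCOSE
    ethanol_g = 2 * mol_glucose * M_ETHANOL
    co2_g = 2 * mol_glucose * M_CO2
    return ethanol_g, co2_g

ethanol, co2 = theoretical_yields(100.0)
print(f"100 g glucose -> {ethanol:.1f} g ethanol + {co2:.1f} g CO2")
# -> 100 g glucose -> 51.1 g ethanol + 48.9 g CO2
```

Note that the two output masses sum back to the input mass, which is a quick sanity check on the balance.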

February 17, 2023

A comprehensive manual on fermentation in the pharmaceutical sector

Although the traditional definition of fermentation refers to the anaerobic conversion of sugar into carbon dioxide and alcohol by yeast, in the pharmaceutical sector, fermentation is employed to cultivate
microorganisms for the production of antibiotics, therapeutic proteins, enzymes, and insulin. The process usually entails the use of temperature-regulated containers, commonly referred to as fermenters,
and the precise combination of nutrients to foster the growth of the intended organism. Microbial and bacterial fermentation technology, together with its associated processes, offer novel opportunities and
serve as crucial components for gene-editing, conjugates, and DNA plasmids utilised in contemporary vaccine manufacturing.

Single-use technologies are becoming increasingly important in various sectors of upstream processing, including fermentation. Single Use Support is a leading company in the field of single-use
bioprocessing. They provide comprehensive solutions for managing fluids and handling freeze-thaw logistics in pharmaceutical fermentation processes.

Fermentation in the pharmaceutical industry

Summary

Utilising microbial fermentation to manufacture medications

The future potential for growth in the fermentation market

The benefits of microbial fermentation include the ability to produce goods according to Good Manufacturing Practices (GMP) in a microbial setting.

Obstacles encountered in the process of microbial fermentation for pharmaceutical production

The utilisation of single-use systems in microbial fermentation can provide significant benefits.

Frequently Asked Questions regarding pharmaceutical fermentation

Further information regarding biopharmaceutical fermentation

Utilising microbial fermentation to manufacture medications

Microbial fermentation is a highly promising technique for making pharmaceutical products, including recombinant protein-based drugs, vaccines, and antibiotics for various medical purposes.

Fermentation is a biological process in which microorganisms, such as bacteria, yeast, and fungi, develop and multiply. The process takes place within a regulated setting, usually in expansive bioreactors,
with the aim of generating the intended output. Escherichia coli, commonly referred to as E. coli, is a well-known and extensively researched microbe. It is highly valued in the biotechnology sector due to its
ability to quickly and easily generate strains, its short fermentation times, and its ability to achieve high cell densities.
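The appeal of "short fermentation times" and "high cell densities" follows directly from exponential growth. A minimal sketch is below; the doubling time and starting density are illustrative assumptions, not values from the text.

```python
import math

def biomass(x0, mu, t_hours):
    """Cell density after t hours of unrestricted exponential growth:
    x(t) = x0 * exp(mu * t), with specific growth rate mu (1/h)."""
    return x0 * math.exp(mu * t_hours)

# A doubling time of ~30 min (typical of fast-growing E. coli in rich
# media) corresponds to mu = ln(2) / 0.5 h ≈ 1.39 per hour.
mu = math.log(2) / 0.5
print(f"6 h of growth: {biomass(0.1, mu, 6.0) / 0.1:.0f}-fold increase")
# -> 6 h of growth: 4096-fold increase
```

In practice growth slows as nutrients deplete, so real fed-batch densities fall short of this idealised curve, but the sketch shows why short doubling times translate into short fermentation campaigns.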

Prospects for the expansion of the fermentation market

For a long time, mammalian cell culture systems were preferred for producing recombinant medications. In recent years, however, microbial fermentation has undergone significant growth and
development, as it greatly enhances the efficiency of producing small biologics and bioconjugates. This is also evident in market statistics.

Based on a research study conducted by BCC, the global markets for peptide hormones and vaccines grew from $18 billion to $28 billion and from $10 billion to $19 billion, respectively.

The resurgence in microbial fermentation for biopharmaceutical manufacture can be ascribed to several factors. First, there is the advancement of next-generation treatments that rely on smaller biologic
drug compounds and higher production outputs for biopharmaceutical manufacturing, facilitated by progress in genetic engineering. Second, the revived interest is propelled by advancements in
molecular biology and synthetic biology, resulting in more frequent outsourcing of fermentation processes to Contract Development and Manufacturing Organisations (CDMOs).

Summary of factors contributing to the increase in microbial fermentation:

• Enhanced production of small biologics, facilitated by advancements in genetic engineering
• Higher productivity and improved quality
• Easier scale-up of production
• Reduced manufacturing expenses
• Accelerated production

Benefits of microbial fermentation

Microbial fermentation typically presents fewer issues and allows for quicker scale-up, owing to the faster and more consistent growth patterns of the organisms involved and their lower intrinsic
metabolic burden. The process enables the generation of significant amounts of targeted compounds within a very short period, making it a cost-efficient approach for manufacturing specialised
medications.

Mammalian cell culture, by contrast, frequently necessitates supplementary downstream processing steps to refine the final product. Furthermore, microbes possess the ability to synthesise
intricate compounds that are challenging to create by conventional synthetic organic chemistry. This makes fermentation a highly effective approach for manufacturing diverse pharmaceutical products.

Microbial production of cGMP - Implementation of Good Manufacturing Practices in microbial fermentation

Contract Development and Manufacturing Organisations (CDMOs) provide services for the production of fermentation products that comply with current Good Manufacturing Practices (cGMP). These
services include cell banking, which involves the synthesis and preservation of cell lines in cryogenic conditions. Additionally, CDMOs offer expertise in fed-batch and perfusion techniques employing various
microbial expression systems. Pharmaceutical production firms can ensure the receipt of top-notch fermentation products that adhere to set norms and rules by subcontracting to cGMP-compliant microbial
manufacturing.

Adopting single-use solutions will enable Contract Development and Manufacturing Organisations (CDMOs) to establish an infrastructure that complies with current Good Manufacturing Practice (cGMP)
standards. Furthermore, disposable solutions provide the essential adaptability to promptly respond to alterations in production demands, while also delivering substantial and scalable yields of superior
quality.

Contract manufacturing organisations (CMOs) offer comprehensive pilot-to-commercial microbial fermentation services, providing complete assistance throughout the whole life cycle of the product. This
includes help from preclinical and clinical trial stages to the commercial stages, encompassing process development and the entire manufacturing process for the final drug product.

Obstacles encountered in the process of microbial fermentation for pharmaceutical production

Microbial fermentation offers numerous advantages, but it also poses specific difficulties. The method is intricate and necessitates a sterile setting to avoid contamination from other microorganisms that could
compromise the quality and purity of the end product, diminish product yield, or even generate harmful by-products.

Scalability can pose another obstacle when transitioning from the laboratory or clinical-trial phase to a larger scale, which may entail modifying the process conditions and equipment.
Furthermore, the production of pharmaceuticals is subject to rigorous regulations, and ensuring compliance with these regulations can be intricate and time-consuming.

Single-use technology solutions can be beneficial in this context: Single Use Support provides versatile and comprehensive process solutions that may be readily adjusted to meet varying or evolving needs,
all while adhering to FDA and other regulatory protocols and criteria.


The utilisation of single-use systems in microbial fermentation can provide significant benefits.

Now, let's examine Single Use Support's comprehensive solutions and precisely how these single-use systems might enhance microbial fermentation manufacturing. First, they establish a sterile and
regulated setting to facilitate the proliferation of microorganisms. Single-use bioreactors, together with single-use bags and tube assemblies, reduce the risk of contamination and require less time and
effort for cleaning and sterilisation than conventional reusable equipment, because they come pre-sterilised, leading to improved efficiency and cost-effectiveness.

Furthermore, fully-automated disposable systems can be tailored to individual fermentation processes, providing increased adaptability and expandability while mitigating human errors. Due to their modular
characteristics, they may be specifically tailored to fit various quantities and configurations, hence facilitating the optimisation of fermentation conditions and enhancing production yields.

Microbial fermentation single-use systems

Frequently Asked Questions regarding pharmaceutical fermentation

What is precision fermentation?

Precision fermentation employs genetically engineered microorganisms to synthesise targeted proteins, enzymes, and other substances. This is achieved by introducing genes into the DNA of microbes,
enabling them to synthesise the intended product. Precision, in this context, pertains to the capacity to accurately regulate the entire process, encompassing the genetic alteration of microbes and the fine-
tuning of fermentation conditions.

Which fermentation process is employed in the production of pharmaceuticals?

Pharmaceuticals are commonly produced via microbial and bacterial fermentation, as well as mammalian cell culture. Microbial fermentation utilises bacteria such as E. coli, yeasts like Pichia pastoris, or
other microbes to generate a specific product, such as a pharmaceutical or an enzyme. Mammalian cell culture refers to the in-vitro cultivation of cells in a controlled environment. The distinction lies
in the process: microbial fermentation involves the metabolic conversion of organic compounds by microorganisms, whereas mammalian cell culture is the cultivation of cells derived from mammals in a
controlled laboratory environment.

What is microbial fermentation?

Microbial fermentation is the utilisation of microorganisms to transform organic substrates into a desired product via biochemical processes. Microbial fermentation can take place under several conditions,
including aerobic (with oxygen) or anaerobic (without oxygen) environments. Organic acids created during fermentation serve various purposes and are commonly utilised in the food and drinks industry.

Which medications are manufactured using fermentation?

The fermentation process is frequently employed in the life science business to generate enzymes or proteins for the biotech sector, specifically for biopharmaceutical or pharmaceutical production
objectives. Fermentation yields several substances, including metabolites and enzymes, which have applications in medical therapies including thrombolytics, as well as hormones like insulin and human
growth hormone (HGH), and immunosuppressants. Fermentation is commonly employed to generate components for antibiotics, immunotherapies, and insulin, which are all well-known to us.

What is fermentation for APIs?

Fermentation for Active Pharmaceutical Ingredients (APIs) entails utilising microbial fermentation to generate pharmaceutical products, including antibiotics, vaccines, and other medicinal substances.
Fermentation commonly employs bacteria, yeasts, or fungi to generate a particular active component or intermediate, which is subsequently isolated and refined to become the final pharmaceutical
product. This procedure is both cost-effective and efficient, making it a commonly employed technique in the pharmaceutical sector.

https://www.susupport.com/knowledge/fermentation/fermentation-pharmaceutical-industry-complete-guide

Q 3: What are the inputs to and outputs from Downstream Processing operations for a conventional biotechnological process?

The input is the product received from the upstream process. This product is then purified and filtered; after filtration and purification we obtain the final product, which is the output of downstream
processing.

Answer: Depending on the nature of the product and method of synthesis, downstream processing generally includes a combination of the following steps:

- Harvest and filtration
- Primary capture
- Buffer exchange and up-concentration
- Purification (and contaminant or impurity clearance)
- Bioconjugation (molecule-dependent)
- Formulation

Inputs to the downstream processing operations include buffers and the product stream from each preceding step (e.g. a chromatography step); outputs are the final product and waste.
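Because these downstream steps run in series, losses compound: the overall recovery is the product of the individual step yields. A quick sketch follows; the per-step yields are illustrative assumptions, not data from this course.

```python
def overall_yield(step_yields):
    """Overall downstream recovery = product of per-step yields."""
    total = 1.0
    for y in step_yields:
        total *= y
    return total

# Illustrative (assumed) yields for a typical step sequence
steps = {
    "harvest/filtration": 0.95,
    "primary capture": 0.85,
    "buffer exchange/up-concentration": 0.97,
    "purification/polishing": 0.90,
    "formulation": 0.98,
}

print(f"overall recovery: {overall_yield(steps.values()):.1%}")
# -> overall recovery: 69.1%
```

Even with each step above 85% efficient, less than 70% of the upstream product survives to the final output, which is why minimising the number of downstream steps matters.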

What are the downstream processes following the bioreactor?

Downstream Bioprocessing

Purification of product – involves the separation of contaminants that closely resemble the product in physical and chemical properties. Some of the operations carried out here are size-exclusion, affinity,
ion-exchange, and reversed-phase chromatography, and crystallization.

What are the five stages in downstream processing after fermentation?

The five stages are: (1) Solid-Liquid Separation (2) Release of Intracellular Products (3) Concentration (4) Purification by Chromatography and (5) Formulation. In Fig. 20.1, an outline of the major steps in
downstream processing is given.

Downstream Processing

Contents:

Pre-treatment:

Separation:

Concentration:

Purification:

Formulation:

Downstream Processing

The harvesting, purification, and final processing of fermentation products into a dosage form suitable for their intended use, after completion of fermentation, is called
downstream processing. The stages of product recovery from the fermentation broth are shown in Fig. 1.

Fig. 1

The downstream process consists of three primary stages: cell separation from the fermentation broth, isolation of the impure product, and purification and
final processing of the product.

The product resulting from fermentation can exist within the cell (intracellular) or outside it (extracellular), and it can be either heat-stable or heat-sensitive.
The downstream process is therefore carefully planned, taking numerous parameters into account, in order to attain a highly purified, isolated product at
minimal expense.

The choice of particular processes is contingent upon the following considerations.

1. The product can be found either outside the cell (extracellular) or inside the cell (intracellular).

2. Sensitivity of the product.

3. The level of product concentration or potential yield.

4. Characteristics and application of the product.

5. Criteria for acceptable levels of purity.

6. Potential contaminants.

7. The efficiency of production and the market value of the product.

Pre-treatment:

The fermentation broth consists of microbial cells, cell fragments, soluble and insoluble components of the medium, as well as the active product.

Pre-treatment of the fermentation broth is conducted to modify the viscosity of the medium, the size of the biomass, and the interaction among particles.

• Synthetic polymers, cellulosic polycations, and inorganic salts are introduced into the broth to act as flocculating agents.
• These agents cause the individual cell particles to aggregate into large flocs, making it easier to separate them through centrifugation.
• In the case of intracellular products, certain techniques are employed to break down the cell and release the desired product.

Fig. 2

Separation:

The cells after cell disruption for intra-cellular products and without cell disruption for extra-cellular products are separated by centrifugation or filtration.

Filtration retains large particles as a cake and allows the passage of liquid through the filter. The flow of liquid through the filter medium is dependent on the
area of the filter, the pore size of the filter, and flow resistances by the cake formed on the filter medium.

Cellulose, glass, ceramics, synthetic membranes, synthetic fibers, cloth, metal, etc. are used as filter media.
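The dependence of filtrate flow on filter area and on the medium and cake resistances described above is commonly expressed with a Darcy-type equation, dV/dt = ΔP·A / (μ·(Rm + Rc)). The sketch below uses illustrative SI values; all numbers are assumptions for demonstration only.

```python
def filtrate_flow(delta_p, area, viscosity, r_medium, r_cake):
    """Darcy-type filtration rate (m^3/s):
    dV/dt = delta_p * area / (viscosity * (r_medium + r_cake))."""
    return delta_p * area / (viscosity * (r_medium + r_cake))

# Illustrative values: 1 bar pressure drop, 1 m^2 filter, water-like
# viscosity, and a cake resistance nine times the medium resistance.
q = filtrate_flow(delta_p=1.0e5, area=1.0, viscosity=1.0e-3,
                  r_medium=1.0e11, r_cake=9.0e11)
print(f"filtrate flow: {q:.1e} m^3/s")
# -> filtrate flow: 1.0e-04 m^3/s
```

As the cake builds up during a batch, r_cake grows and the flow falls, which is why filter area and pre-treatment (flocculation) matter so much in practice.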

When filtration is not a satisfactory method to remove micro-organisms, centrifugation can be employed. Different types of centrifuges are available with
varying r.p.m.

Fig. 3
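The separating power of a centrifuge is usually quoted not as r.p.m. alone but as relative centrifugal force (RCF), which depends on both rotor speed and rotor radius. A quick sketch using the standard conversion; the rotor values chosen are illustrative assumptions.

```python
def rcf(rpm, radius_cm):
    """Relative centrifugal force in multiples of g:
    RCF = 1.118e-5 * radius_cm * rpm^2 (standard conversion)."""
    return 1.118e-5 * radius_cm * rpm ** 2

# e.g. a rotor radius of 10 cm spinning at 5,000 rpm
print(f"{rcf(5000, 10):.0f} x g")
# -> 2795 x g
```

Because RCF scales with the square of the speed, doubling the r.p.m. quadruples the separating force, while doubling the radius only doubles it.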

Concentration:

• Following the separation of microbial cells, the broth is fractionated or extracted using various processes such as extraction, evaporation, and precipitation.

• Evaporation is a straightforward yet energy-intensive process mostly used to eliminate water. Falling film evaporators, forced film evaporators, plate evaporators, and centrifugal forced film evaporators are
frequently employed for solvent concentration.

• Precipitation is employed in the product recovery process to concentrate and consolidate the product in a single operation.

• The precipitation process is frequently employed to recover products and separate contaminants. Precipitation is accomplished through the use of external agents, such as acids and bases (to alter the pH);
organic solvents like chilled acetone, ethanol, and methanol (to modify dielectric properties); salts like ammonium or sodium sulphate (for protein recovery); non-ionic polymers like polyethylene glycol (PEG);
polyelectrolytes; and protein-binding dyes.
• Extraction is a commonly employed technique in large-scale fermentation processes for concentrating and purifying substances. The selection of a solvent for extraction is heavily influenced by the
solubility and polarity of the product.
• Multistage (counter-current) extraction is used to achieve a high extraction yield. Mixer-settlers, columns, and centrifugal extractors are frequently employed for extraction. The solvents utilised for
extraction are costly; therefore, all solvents are recovered and reused in the extraction process.
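The benefit of multistage extraction can be made concrete with a simplified cross-current model (repeated extractions with fresh solvent; a true counter-current cascade recovers even more per unit of solvent). The partition coefficient and volume ratios below are illustrative assumptions, not values from the text.

```python
def fraction_recovered(k_partition, vs_over_va, n_stages):
    """Fraction of product recovered after n successive extractions
    with fresh solvent (simplified cross-current model).
    k_partition: partition coefficient (solvent/aqueous);
    vs_over_va: solvent-to-aqueous volume ratio used at each stage."""
    remaining = 1.0
    for _ in range(n_stages):
        remaining *= 1.0 / (1.0 + k_partition * vs_over_va)
    return 1.0 - remaining

# One large extraction vs. three smaller ones with the same total solvent
print(f"1 stage:  {fraction_recovered(4.0, 1.5, 1):.1%}")
print(f"3 stages: {fraction_recovered(4.0, 0.5, 3):.1%}")
# -> 1 stage:  85.7%
# -> 3 stages: 96.3%
```

Splitting the same solvent volume across several stages recovers noticeably more product, which is the rationale for the multistage equipment listed above.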

Purification:

• The concentrated crude product is subjected to purification using a combination of fractional precipitation, crystallisation, and chromatographic methods.
• Crystallisation is particularly employed for the recovery of acids and solvents and for the purification of diverse chemicals. The crystals acquired during crystallisation are isolated by filtration,
redissolved in an appropriate solvent, and recrystallised to ensure the complete removal of any impurities.
• Chromatographic techniques are frequently employed for the isolation and purification of fermentation products. The components are partitioned between a stationary phase and a mobile phase.
The stationary phase consists of a column packed with particles of uniform size that have been equilibrated with an appropriate solvent; the mobile phase is a solvent that flows through this packed
column (Fig. 4).

Fig 4: Column Chromatography

• The mixture to be separated is loaded onto the column, followed by the mobile phase. Adsorption chromatography, ion-exchange
chromatography, gel-filtration chromatography, affinity chromatography, reversed-phase chromatography, and high-performance liquid chromatography are
frequently employed methods for purifying proteins and medicines. The purity of products is routinely verified by the use of paper and thin-layer
chromatography.

Various forms of paper chromatography, including ascending, descending, and circular, are employed to assess the purity of isolated products.

Formulation:

Antibiotics, proteins, and enzymes are prepared in the form of a solution, suspension, or dry powders. Additives such as ammonium sulphate salt, sorbitol,
glycerol, PEG, and other stabilisers are included in these formulations.

Various additional substances are used in the final formulation, depending on the dosage form. These may include diluents, lubricating agents, suspending
agents, emulsifying agents, colouring agents, and others.

• Drying is a crucial step, particularly for protein-based goods. Commonly employed for the drying process are contact dryers (such as drum dryers), convection
dryers (such as spray dryers and fluidized dryers), and radiation dryers (such as freeze dryers).

The antibiotics are enclosed in aseptic vials either as a dry substance or a liquid mixture, intended for injection or ingestion. The tablets can be produced with
a film coating.

Fig 5: Thin-layer Chromatography

• It is essential to estimate the fermentation products at the drying stage in downstream processing and at the conclusion of the process. Gravimetry, spectrophotometry, specific gravity, optical density,
packed cell volume, total viable count, measurements of cell components, counting chambers, and chromatography are the prevalent techniques employed to quantify fermentation products.

Quality control of fermentation products is conducted both throughout the production process and upon completion of the final product. The effectiveness of the unit operations in downstream processing is
assessed by evaluating the product at each stage.

• Quality assurance or quality control tests for fermentation products encompass sterility testing, pyrogen testing, toxicity testing, allergy testing, microbiological assays, and carcinogenicity testing. Before
being marketed, the product must meet all government regulations.

5-2: Cellular Protein Synthesis

Step 1

Warm up - Before watching the video, answer the question to 'unlock' your prior knowledge

Q: What is your understanding of a protein? What is the function of a protein?

Proteins serve as the fundamental constituents of all living beings.

They consist of individual amino acids, which are built from elements such as carbon, oxygen, hydrogen, nitrogen, sulphur, and (in some proteins) phosphorus. These amino acids are joined by peptide bonds.

Proteins serve multiple essential functions in the body. They play a crucial role in tissue regeneration, serve as the fundamental building blocks of blood, lymph, milk, hormones, and enzymes. They also
contribute to the immune system, help maintain the appropriate pH levels in body fluids, act as carriers for certain vitamins and minerals, and participate in the regulation of blood pressure.

Proteins are large, complex molecules that play many critical roles in the body. They do most of the work in cells and are required for the structure, function and regulation of the body’s tissues and organs.

Protein Function

A cell's health and function are determined by the assemblage of proteins within it. Proteins play a crucial role in various cellular functions, such as maintaining cell structure and organisation, producing
essential substances, removing waste, and carrying out routine maintenance. Proteins also receive extracellular cues and initiate intracellular responses. In short, proteins are the essential molecules that
carry out most of the actions within the cell.

How Diverse Are Proteins?

Figure 1: The phosphorylation of a protein can make it active or inactive.

Phosphorylation has the ability to either enhance the activity of a protein (orange) or suppress it (green). Kinase is an enzymatic catalyst that adds phosphate groups to proteins through a process called
phosphorylation. Phosphatase is an enzymatic catalyst that removes phosphate groups from proteins, hence reversing the effects of kinase.

Copyright 2010 Nature Education. All rights reserved.

Proteins vary in size, with some being large and others being small. They can predominantly attract water (hydrophilic) or repel water (hydrophobic). Proteins can exist alone or as components of a larger
structure. Additionally, they can undergo regular changes in shape or remain relatively stationary. These variances originate from the distinct amino acid sequences that constitute proteins. Fully folded
proteins possess unique surface properties that dictate their interactions with other molecules. Proteins can undergo conformational changes, both subtle and dramatic, when they interact with other
molecules.

Unsurprisingly, the activities of proteins are just as varied as their structures. Structural proteins, such as those found in connective tissues like cartilage and bone, play a crucial role in maintaining the form
of cells, much like a skeleton. Enzymes, a distinct category of proteins, facilitate the biochemical reactions taking place within cells. Other proteins function as sensors,
undergoing conformational changes and altering their activity in response to metabolic cues or external signals received by the cell. Cells also secrete proteins that integrate into the
extracellular matrix or participate in intercellular communication.

Proteins sometimes undergo modification after translation and folding are complete. In these cases, transferase enzymes add small modifying groups, such as phosphate or carboxyl
groups, to the protein. These alterations frequently change the protein's shape and act as molecular toggles that switch its activity on or off. Many
post-translational modifications are reversible, although distinct enzymes catalyse the reverse reactions. Enzymes known as kinases add phosphate groups to proteins, while
enzymes termed phosphatases remove these phosphate groups (Figure 1).
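The kinase/phosphatase toggle described above can be caricatured as a two-state model. The sketch below is purely illustrative; the class and function names are invented for the example and are not drawn from any biochemical library:

```python
from dataclasses import dataclass

@dataclass
class Protein:
    """Toy model of a protein whose activity is toggled by phosphorylation."""
    name: str
    phosphorylated: bool = False
    active_when_phosphorylated: bool = True  # some proteins are instead inhibited

    @property
    def active(self) -> bool:
        # Active when its phosphorylation state matches its "on" state.
        return self.phosphorylated == self.active_when_phosphorylated

def kinase(p: Protein) -> None:
    """Add a phosphate group (phosphorylation)."""
    p.phosphorylated = True

def phosphatase(p: Protein) -> None:
    """Remove the phosphate group, reversing the kinase's action."""
    p.phosphorylated = False

p = Protein("example enzyme")
kinase(p)
print(p.active)       # True: phosphorylation switched it on
phosphatase(p)
print(p.active)       # False: dephosphorylation switched it off
```

Setting `active_when_phosphorylated=False` models the opposite case in Figure 1, where phosphorylation suppresses rather than enhances activity.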

How Do Proteins Provide Structural Support for Cells?


Figure 2 : Proteins can have a structural role in a cell.

Actin filaments (red) and microtubules (green) are two different kinds of proteins that provide structure to cells.

Courtesy of Dr. Takeshi Matsuzawa and Dr. Akio Abe. All rights reserved.

Proteins contribute to the high level of organisation in the cytoplasm. In eukaryotic cells, which are often larger and require more mechanical support than prokaryotic cells, a complex network of filaments
consisting of microtubules, actin filaments, and intermediate filaments can be observed using various microscopic techniques. Microtubules have a significant function in the arrangement of the cytoplasm
and the dispersion of organelles. Additionally, they contribute to the formation of the mitotic spindle during the process of cell division. Actin filaments have a role in different types of cell movement, such as
cell motility, muscle cell contraction, and cell division (Figure 2). Intermediate filaments are robust fibres that function as structural scaffolds within cells.

1.1 What is the role of proteins in facilitating the biochemical reactions of a cell?

Cells depend on a multitude of diverse enzymes to facilitate metabolic activities. Enzymes, which are proteins, enhance the likelihood of a biological process by reducing the activation energy required for the
reaction. As a result, these events occur at a significantly accelerated rate, often thousands or even millions of times quicker than they would without a catalyst. Enzymes exhibit a high degree of specificity
towards their substrates. These substrates are bound by the proteins at certain regions on their surfaces, creating a tight connection that is often likened to a lock and key mechanism by scientists. Enzymes
function by binding one or more substrates, facilitating their proximity for a reaction to occur, and subsequently releasing them upon completion of the reaction. Specifically, upon substrate attachment,
enzymes undergo a conformational change that aligns or stretches the substrates, making them more prone to chemical reactions (Figure 3).
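The rate accelerations quoted above follow from the Arrhenius relation k = A·exp(−Ea/RT): lowering the activation energy Ea by an amount ΔEa multiplies the rate by exp(ΔEa/RT). A minimal sketch; the 30 kJ/mol reduction used below is an illustrative assumption, not a measured value:

```python
import math

R = 8.314  # gas constant, J/(mol*K)

def rate_enhancement(delta_ea_kj_per_mol: float, temp_k: float = 310.0) -> float:
    """Fold-acceleration from lowering the activation energy by delta_ea.

    From the Arrhenius relation k = A * exp(-Ea / (R*T)), the ratio of the
    catalysed to the uncatalysed rate constant is exp(delta_ea / (R*T)).
    """
    return math.exp(delta_ea_kj_per_mol * 1000 / (R * temp_k))

# Lowering Ea by an assumed 30 kJ/mol at body temperature (310 K)
# accelerates the reaction roughly 10^5-fold.
print(f"{rate_enhancement(30.0):.3g}")
```

Even modest reductions in the energy barrier therefore translate into the "thousands or even millions of times" speed-ups mentioned above.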

An enzyme's nomenclature often corresponds to the specific biological reaction it facilitates. Proteases are enzymes that degrade proteins, while dehydrogenases are enzymes that catalyse the oxidation of
a substrate by eliminating hydrogen atoms. In general, the presence of the "-ase" suffix indicates that a protein is an enzyme. Additionally, the initial component of an enzyme's name typically indicates the
specific reaction that it facilitates.

Figure 3: Enzymes and activation energy

Enzymes decrease the amount of energy required to convert a starting substance into a final product. The reaction on the left is uncatalyzed, indicated by the red colour, whereas the reaction on the right is
catalysed, indicated by the green colour. During an enzyme-catalyzed reaction, the enzyme attaches to the reactant and aids in its conversion into a product. Hence, the reaction pathway facilitated by the
enzyme has a reduced energy barrier (activation energy) that must be surpassed for the reaction to occur.

Copyright 2010 Nature Education. All rights reserved.

1.2 What is the role of proteins in the plasma membrane?

The proteins present in the plasma membrane play a crucial role in facilitating the cell's interaction with its surroundings. Plasma membrane proteins perform a wide range of roles, such as transporting
nutrients across the plasma membrane, detecting chemical signals from the external environment, converting chemical signals into actions within the cell, and occasionally securing the cell in a specific
position (Figure 4).


Figure 4: Examples of the action of transmembrane proteins

Transporters facilitate the translocation of a molecule (such as glucose) across the plasma membrane. Receptors have the ability to interact with a molecule from outside the cell (triangle), which triggers a
process within the cell. The enzymes present in the cell membrane have the ability to catalyse the conversion of a molecule into a different form, much like they do in the cytoplasm of the cell. Anchor
proteins serve to physically connect intracellular structures to external structures.

Copyright 2010 Nature Education. All rights reserved.

Figure 5: The fluid-mosaic model of the cell membrane

The cell membrane is a complex structure made up of proteins, phospholipids, and cholesterol, resembling a mosaic. The proportions of these components vary among different membranes, and the
composition of lipids in membranes may also vary. Proteins play a crucial role in cellular processes, providing structural support, facilitating movement, catalyzing chemical reactions, and interacting with the
external environment. Their roles are diverse, as are their distinct amino acid sequences and intricate three-dimensional physical architectures. Proteins are expansive and intricate molecules that perform
vital functions within the body, maintaining the structure, function, and regulation of tissues and organs. They consist of several amino acids connected in long chains, and can be synthesized by combining
20 distinct amino acids. The arrangement of amino acids determines the unique three-dimensional configuration and purpose of each protein. Amino acids are encoded by specific combinations of three
DNA nucleotides, defined by the gene sequence.

Proteins can be described according to their large range of functions in the body, listed in alphabetical order:

Function            Description                                                               Example

Antibody            Antibodies bind to specific foreign particles, such as viruses and        Immunoglobulin G (IgG)
                    bacteria, to help protect the body.

Enzyme              Enzymes carry out almost all of the thousands of chemical reactions       Phenylalanine hydroxylase
                    that take place in cells. They also assist with the formation of new
                    molecules by reading the genetic information stored in DNA.

Messenger           Messenger proteins, such as some types of hormones, transmit signals      Growth hormone
                    to coordinate biological processes between different cells, tissues,
                    and organs.

Structural          These proteins provide structure and support for cells. On a larger       Actin
component           scale, they also allow the body to move.

Transport/storage   These proteins bind and carry atoms and small molecules within cells     Ferritin
                    and throughout the body.

Examples of protein functions

For more information about proteins and their functions:

Arizona State University's "Ask a Biologist" discusses the different kinds of proteins and what they do.

The textbook Molecular Biology of the Cell (4th edition, 2002), from the NCBI Bookshelf, offers a detailed introduction to protein function.

The How Genes Work chapter discusses proteins, their roles, and how genes direct their production; it also explores epigenetics, cell division, and cell growth. Proteins play a crucial role in
repairing and building tissues, facilitating metabolic reactions, regulating pH and fluid balance, supporting the immune system, and transporting nutrients. They are essential for good health, are built from
20 amino acids, act primarily within cells, and are sometimes described as macromolecular peptides.



Protein is a crucial compound found in all living beings, with significant nutritional importance and direct role in vital chemical processes. Its significance was recognized by chemists in the early 19th century,
with the term "protein" introduced by Swedish scientist Jöns Jacob Berzelius in 1838. Proteins exhibit species specificity, meaning proteins from one species are distinct from those from another. They are
also specialized to particular organs, such as muscle, brain, and liver. Protein molecules are much larger than sugar or salt molecules and consist of many amino acids linked together to create elongated
chains. Naturally occurring proteins have approximately 20 distinct amino acids, and proteins with similar functions exhibit analogous amino acid composition and sequence. Although it is not possible to fully
understand a protein's functions based on its amino acid sequence, the observed connections between structure and function can be attributed to the characteristics of the amino acids.

Figure: Legumes—such as beans, lentils, and peas—are high in protein and contain many essential amino acids.

Plants can synthesise all of the amino acids, whereas mammals cannot. Plants thrive on substrates containing inorganic nutrients such as nitrogen and potassium, and use atmospheric carbon dioxide in
photosynthesis to synthesise organic substances, including carbohydrates. Animals such as ruminants meet their amino acid requirements largely by consuming plant material, while nonruminants such as
humans acquire much of their protein from animals and animal-derived products such as meat, milk, and eggs. Legume seeds are increasingly used to provide affordable, high-protein food. Animal tissues
have higher protein concentrations than blood plasma: muscle is around 30% protein, liver 20-30%, and red blood cells about 30%. Hair, bones, and other organs and tissues with a low water content have
even higher protein proportions. The quantity of free amino acids and peptides in animals is much smaller than the quantity of protein; protein molecules are synthesised in cells by the sequential joining of
amino acids and are released into the body fluids only after synthesis is complete.

Figure: Hemoglobin is a protein made up of four polypeptide chains (α1, α2, β1, and β2). Each chain is attached to a heme group composed of porphyrin (an organic ringlike compound) attached to an iron
atom. These iron-porphyrin complexes coordinate oxygen molecules reversibly, an ability directly related to the role of hemoglobin in oxygen transport in the blood.

The presence of a high protein content in certain organs does not mean that the importance of a protein is proportional to its quantity in an organism or tissue. In fact, some of the most crucial
proteins, such as enzymes and hormones, occur in exceedingly minute quantities. The significance of proteins lies chiefly in their function. All enzymes identified thus far are proteins. Enzymes,
serving as catalysts in metabolic reactions, enable an organism to synthesise essential chemical compounds such as proteins, nucleic acids, carbohydrates, and lipids, to convert them into other
compounds, and to degrade them. Enzymes are vital for sustaining life. Many protein hormones have important regulatory functions. Haemoglobin, a respiratory protein found in all vertebrates, serves
as an oxygen carrier in the bloodstream, delivering oxygen from the lungs to the organs and tissues of the body. A large group of structural proteins maintains and protects the integrity of the animal
organism.

Overview of protein structure and characteristics

The amino acid composition of proteins

Figure: Synthesis of protein.

The common property of all proteins is that they consist of long chains of α-amino (alpha amino) acids. The general structure of an α-amino acid is shown in the accompanying figure. The α-amino acids are
so called because the α-carbon atom in the molecule carries an amino group (―NH2); the α-carbon atom also carries a carboxyl group (―COOH).

Under acidic conditions, at pH below 4, the ―COO− groups combine with hydrogen ions (H+) and are converted into the neutral form (―COOH). In alkaline solutions, at pH above 9, the ammonium groups
(―NH3+) lose a proton and are converted into amino groups (―NH2). In the pH range of 4 to 8, amino acids carry both a positive and a negative charge and therefore do not migrate in an
electric field. Such configurations are referred to as dipolar ions, or zwitterions (i.e., hybrid ions).
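The zwitterion behaviour can be made concrete with the Henderson–Hasselbalch relation, which gives the fraction of an ionisable group in its deprotonated form as 1/(1 + 10^(pKa − pH)). The sketch below estimates the average net charge of a simple amino acid such as glycine, assuming textbook pKa values of 2.34 (carboxyl) and 9.60 (amino):

```python
def net_charge(ph: float, pka_cooh: float = 2.34, pka_nh3: float = 9.60) -> float:
    """Average net charge of a simple amino acid (no ionisable side chain).

    Each group follows the Henderson-Hasselbalch equation:
    fraction deprotonated = 1 / (1 + 10**(pKa - pH)).
    """
    # Carboxyl group: 0 when protonated (-COOH), -1 when deprotonated (-COO-)
    cooh = -1.0 / (1.0 + 10 ** (pka_cooh - ph))
    # Amino group: +1 when protonated (-NH3+), 0 when deprotonated (-NH2)
    nh3 = 1.0 / (1.0 + 10 ** (ph - pka_nh3))
    return cooh + nh3

for ph in (1.0, 6.0, 12.0):
    print(ph, round(net_charge(ph), 2))
```

The output shows a net positive charge in strong acid, essentially zero charge (the zwitterion) near neutral pH, and a net negative charge in strong alkali, matching the behaviour described above.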

While there are over 100 naturally occurring amino acids, notably in plants, the majority of proteins contain only 20 kinds. Protein molecules are formed by linking α-amino acids through peptide
bonds, each of which joins the carboxyl group of one amino acid to the amino group of the next.

The condensation (joining) of three amino acids yields a tripeptide.


Peptide structures are conventionally depicted with the free α-amino group (the N terminus) on the left and the free carboxyl group (the C terminus) on the right. Proteins are macromolecules made up of
many amino acids bound together by peptide bonds; most typical proteins consist of more than 100 amino acids connected in an extended peptide chain. The mean molecular weight of an amino acid
residue, taking the weight of a hydrogen atom as 1, is in the range of about 100 to 125. Consequently, the molecular weights of proteins typically range from 10,000 to 100,000 daltons (one dalton being
the weight of one hydrogen atom). The specificity of proteins to a particular species or organ results from variations in the number and arrangement of amino acids: a chain of 100 amino acids can be
arranged in more than 10^100 ways, where 10^100 represents the number one followed by 100 zeroes.
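Both numerical claims in the paragraph above are easy to check directly; a quick sketch:

```python
# Number of distinct sequences for a chain of 100 residues drawn from
# 20 amino acids, compared with the 10^100 figure quoted above.
sequences = 20 ** 100
assert sequences > 10 ** 100          # "more than 10^100 ways"
print(len(str(sequences)))            # 131 digits, i.e. about 10^130

# Rough molecular weight of a typical protein: 100 residues at an
# assumed mid-range average residue weight of 110 daltons.
residues = 100
avg_residue_weight = 110
print(residues * avg_residue_weight)  # 11000 daltons, inside 10,000-100,000
```

Python integers have arbitrary precision, so 20**100 is computed exactly rather than approximated.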

1.1 Structures of frequently occurring amino acids

Proteins differ in the amino acids they contain because of differences in the structure of the side (R) chains. Glycine, in which R is a hydrogen atom, is the simplest amino acid. In several amino acids R is a
linear or branched carbon chain; one of these is alanine, in which R is the methyl group (―CH3). Valine, leucine, and isoleucine, which possess
longer R groups, form the alkyl side-chain series. The alkyl side chains (R groups) of these amino acids are hydrophobic; this indicates that they do not have a tendency to interact with water but do have a
tendency to interact with each other. While plants have the ability to produce all alkyl amino acids, animals can only synthesise alanine and glycine. Therefore, valine, leucine, and isoleucine must be
obtained through dietary sources.

Serine and cysteine, both consisting of three carbon atoms, are produced from alanine. Serine differs from alanine by having an alcohol group (―CH2OH) instead of a methyl group, while cysteine contains
a mercapto group (―CH2SH). Animals have the ability to produce serine, but they lack the capability to synthesise cysteine or cystine. Cysteine is mostly found in proteins in its oxidised state, known as
cystine, where hydrogen atoms have been removed through oxidation. Cystine is formed by the bonding of two cysteine molecules by a disulfide bond (―S―S―), which occurs when a hydrogen atom is
eliminated from the mercapto group of each cysteine. Disulfide bonds play a crucial role in protein structure as they enable the connection of two distinct segments of a protein molecule, resulting in the
development of loops within the normally linear chains. Certain proteins contain trace quantities of cysteine that possess unbound sulfhydryl (―SH) groups.

Proteins contain four amino acids composed of four carbon atoms each: aspartic acid, asparagine, threonine, and methionine. Animals can synthesise aspartic acid and asparagine in significant
quantities. Threonine and methionine are classified as essential amino acids: they cannot be produced by the body and must therefore be obtained from the diet. Most proteins contain only small
quantities of methionine.

Proteins also contain glutamic acid, an amino acid with five carbon atoms, and proline, which contains a secondary amine: a structure in which the nitrogen of the amino group is joined to the alkyl side
chain, forming a ring. Glutamic acid and aspartic acid are classified as dicarboxylic acids because they possess two carboxyl groups (―COOH).


Glutamine and asparagine share similarities as they are both amides derived from their respective dicarboxylic acids. In other words, they have an amide group (―CONH2) instead of a carboxyl group
(―COOH) in their side chains. Glutamic acid and glutamine are highly prevalent in the majority of proteins. For instance, in plant proteins, they can make up more than 33% of the total amino acids. Animals
have the ability to synthesise both glutamic acid and glutamine.

Amino acid content of some proteins*

amino acid         alpha-casein   gliadin   edestin   collagen (ox hide)   keratin (wool)   myosin
lysine                  60.9        4.45      19.9         27.4                 6.2            85
histidine               18.7       11.7       18.6          4.5                19.7            15
arginine                24.7       15.7       99.2         47.1                56.9            41
aspartic acid**         63.1       10.1       99.4         51.9                51.5            85
threonine               41.2       17.6       31.2         19.3                55.9            41
serine                  63.1       46.7       55.7         41.0                79.5            41
glutamic acid**        153.1      311.0      144.9         76.2                99.0           155
proline                 71.3      117.8       32.9        125.2                58.3            22
glycine                 37.3        —         68.0        354.6                78.0            39
alanine                 41.5       23.9       57.7        115.7                43.8            78
half-cystine             3.6       21.3       10.9          0.0               105.0            86
valine                  53.8       22.7       54.6         21.4                46.6            42
methionine              16.8       11.3       16.4          6.5                 4.0            22
isoleucine              48.8       90.8***    41.9         14.5                29.0            42
leucine                 60.3        —         60.0         28.2                59.9            79
tyrosine                44.7       17.7       26.9          5.5                28.7            18
phenylalanine           27.9       39.0       38.4         13.9                22.4            27
tryptophan               7.8        3.2        6.6          0.0                 9.6            —
hydroxyproline           0.0        0.0        0.0         97.5                12.2            —
hydroxylysine            —          —          —            8.0                 1.2            —
total                  839        765        883        1,058                 863            832

*Number of gram molecules of amino acid per 100,000 grams of protein.

**The values for aspartic acid and glutamic acid include asparagine and glutamine, respectively.

***Isoleucine plus leucine.
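As a quick numerical check of the statement that glutamic acid and glutamine can exceed one-third of the amino acids in a plant protein, the gliadin column of the table gives:

```python
# Values taken from the table: gram molecules per 100,000 g of gliadin.
glutamic_acid = 311.0  # includes glutamine (see footnote **)
total = 765.0

fraction = glutamic_acid / total
print(f"{fraction:.1%}")  # about 40.7% of all residues, well over a third
```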

Collagen, the protein of animal connective tissue, contains the amino acids proline and hydroxyproline. Because a ring encloses the amino group and the side chain in these amino acids, they have no free
amino group and cannot exist in a zwitterion form. Their nitrogen-containing group can still form a peptide bond with the carboxyl group of another amino acid, but doing so introduces a kink in the peptide
chain. Most proteins are nearly neutral, being neither strongly acidic nor strongly basic, because the acidic carboxyl groups of aspartic and glutamic acid are approximately balanced by basic side chains.
Proteins contain three basic amino acids, each with six carbon atoms. Lysine is produced by plants but not by animals. Arginine occurs in all proteins and is especially abundant in the highly alkaline
protamines of fish sperm. Histidine, the third basic amino acid, is less basic than lysine and arginine; animals can synthesise both arginine and histidine. The imidazole ring in the side chain of histidine, a
pentagonal structure containing two nitrogen atoms, acts as a buffer: by selectively binding hydrogen ions to its nitrogen atoms it stabilises the hydrogen ion concentration.

The remaining amino acids—phenylalanine, tyrosine, and tryptophan—have in common an aromatic structure; i.e., a benzene ring is present. These three amino acids are essential, and, while animals
cannot synthesize the benzene ring itself, they can convert phenylalanine to tyrosine.


Due to the presence of benzene rings, these amino acids absorb ultraviolet light in the range of 270 to 290 nanometres (nm; 1 nanometre = 10^-9 metre = 10 angstrom units).
Phenylalanine absorbs UV light only weakly, but tyrosine and tryptophan absorb strongly and are the main contributors to the absorption band observed in most proteins at 280-290
nanometres. This absorption is frequently employed to quantify the amount of protein in a sample.
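This absorbance measurement is normally evaluated with the Beer–Lambert law, A = εcl. The sketch below assumes an extinction coefficient of 1.0 (mg/mL)^-1 cm^-1, a common rough rule of thumb; real values are protein-specific and depend on the tryptophan and tyrosine content:

```python
def concentration_mg_per_ml(a280: float, extinction_coeff: float = 1.0,
                            path_length_cm: float = 1.0) -> float:
    """Protein concentration from absorbance at 280 nm via the Beer-Lambert law.

    A = extinction_coeff * concentration * path_length, so
    concentration = A / (extinction_coeff * path_length).
    extinction_coeff is in (mg/mL)^-1 cm^-1; 1.0 is a rough assumption only.
    """
    return a280 / (extinction_coeff * path_length_cm)

# A reading of 0.56 in a standard 1 cm cuvette, under these assumptions:
print(concentration_mg_per_ml(0.56))  # 0.56 mg/mL
```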

Most proteins consist solely of the amino acids described above, although trace amounts of other amino acids occur in some proteins. Connective tissue collagen contains hydroxyproline and modest
quantities of hydroxylysine. Some proteins contain monomethyl-, dimethyl-, or trimethyllysine (lysine derivatives bearing one, two, or three methyl groups, ―CH3). The content of these atypical amino
acids, however, rarely exceeds 1 to 2 percent of a protein's overall amino acid composition.

Characteristics of the amino acids from a physical and chemical perspective

The physical characteristics of a protein are dictated by the corresponding properties of the amino acids it contains.

With the exception of glycine, the α-carbon atom of all amino acids is chiral, meaning that it has four distinct chemical entities (atoms or groups of atoms) linked to it. Consequently, with the exception of
glycine, every amino acid has the ability to exist in two distinct spatial configurations, known as isomers, that resemble mirror images similar to right and left hands.

These isomers demonstrate the phenomenon of optical rotation. Optical rotation refers to the phenomenon of the rotation of the plane of polarised light. Polarised light consists of light waves that vibrate in a
single plane or direction. Substances that cause the rotation of the plane of polarisation are referred to as optically active solutions, and the magnitude of this rotation is known as the optical rotation of the
solution. The orientation of light rotation is typically denoted as plus, or d, for dextrorotatory (rightward), or as minus, or l, for levorotatory (leftward). Certain amino acids exhibit dextrorotation, while others
display levorotation. Except for a few peptides found in bacteria, the amino acids present in proteins are predominantly L-amino acids.

D-alanine and other D-amino acids have been identified as constituents of gramicidin and bacitracin in bacteria. These peptides have bactericidal properties and are employed in the field of medicine as
antibiotics. D-alanine has been detected in some peptides present in bacterial membranes.


Hydrogen bonds are created when the imide hydrogen atom is attracted to the unshared pair of electrons of the oxygen atom in the carbonyl group. The outcome is a subtle shift of the imide hydrogen
towards the oxygen atom of the carbonyl group. Despite its reduced strength compared to a covalent bond, which involves the equal sharing of bonding electrons between two carbon atoms, the abundance
of imide and carbonyl groups in peptide chains leads to the creation of many hydrogen bonds. Hydrophobic interaction refers to the attraction that occurs between nonpolar side chains of valine, leucine,
isoleucine, and phenylalanine. This attraction causes water molecules to be displaced.

The structure of the peptide chain in cystine-rich proteins is significantly influenced by the presence of disulfide bonds (―S―S―) in cystine. The cystine halves may be located in different regions of the
peptide chain, allowing them to create a closed loop through the disulfide link.

Amino acids are insoluble in organic solvents and exist as dipolar ions (zwitterions, or hybrid ions) in aqueous solutions. They function as buffers by reacting with strong acids and bases to stabilise the
concentration of hydrogen ions (H+) or hydroxide ions (OH−). Glycine is commonly used as a buffering agent in the pH range of 1 to 3 for acidic solutions and 9 to 12 for alkaline solutions.

The isoelectric point is the pH at which an amino acid does not migrate in an electric field. Most monoamino acids have isoelectric points similar to that of glycine. The isoelectric points of
aspartic and glutamic acid are close to pH 3, while those of histidine, lysine, and arginine are pH 7.6, 9.7, and 10.8, respectively.
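For an amino acid with no ionisable side chain, the isoelectric point is simply the mean of the two pKa values flanking the zwitterion form. A sketch using assumed textbook pKa values for glycine:

```python
def isoelectric_point(pka_below: float, pka_above: float) -> float:
    """pI = average of the two pKa values flanking the neutral (zwitterion) form."""
    return (pka_below + pka_above) / 2.0

# Glycine: carboxyl pKa ~2.34, amino pKa ~9.60 (assumed textbook values).
print(isoelectric_point(2.34, 9.60))  # 5.97
```

Amino acids with ionisable side chains (aspartic acid, lysine, histidine, and so on) need the pair of pKa values that actually bracket the neutral species, which is why their pI values deviate from glycine's as listed above.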

To accurately determine the amount of amino acid residues in protein molecules, hydrolytic cleavage is necessary. This process is typically achieved by subjecting the protein to boiling conditions in the
presence of strong hydrochloric acid. The amino acids can be quantitatively determined using chromatographic separation on filter paper and visualization with ninhydrin spray. The protein hydrolysate's
amino acids are isolated by flowing the hydrolysate down a column of adsorbents, which selectively bind the amino acids based on their varying affinities.

The Edman degradation can be applied repeatedly to determine the sequence of amino acids in a peptide chain, but small losses at each step make it impractical to determine the order of more than
about 30 to 50 amino acids by this method. To obtain further information, the protein is typically hydrolysed by the enzyme trypsin, which selectively cleaves
peptide bonds involving the carboxyl groups of lysine and arginine.
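Trypsin's specificity, cleavage on the carboxyl side of lysine (K) and arginine (R), can be sketched in a few lines. This is a deliberate simplification: it ignores refinements such as the common rule that trypsin does not cleave before proline, and the input sequence is an arbitrary example:

```python
def tryptic_digest(sequence: str) -> list:
    """Split a one-letter-code peptide after every K or R (simplified model)."""
    fragments, current = [], []
    for residue in sequence:
        current.append(residue)
        if residue in "KR":  # trypsin cleaves after Lys and Arg
            fragments.append("".join(current))
            current = []
    if current:  # keep the C-terminal fragment, which need not end in K/R
        fragments.append("".join(current))
    return fragments

print(tryptic_digest("MKWVTFISLLFLFSSAYSRGVFRR"))
# ['MK', 'WVTFISLLFLFSSAYSR', 'GVFR', 'R']
```

Overlapping fragment sets from different proteases are what allow the fragments to be reassembled into a full sequence, as in Sanger's work described below.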

English biochemist Frederick Sanger pioneered the combined use of several proteolytic enzymes, an approach he used to determine the amino acid sequence of insulin and that has since been applied to many other proteins.

A protein's primary structure is the sequence of amino acids in its peptide chains, which in turn dictates its shape and arrangement. The conformation of a protein is determined by the mutual attraction or
repulsion of polar and nonpolar groups in the side chains (R groups): some segments of a peptide chain adopt a loop or helical structure, while others remain extended or form irregular coils.

The terms secondary, tertiary, and quaternary structure are commonly used to describe the arrangement of the peptide chain in a protein. The International Union of Biochemistry (IUB) has established a
nomenclature committee to provide precise definitions for these terms.

Secondary structure refers to the local folding patterns of a protein or nucleic acid molecule, specifically the arrangement of its backbone atoms. Due to significant bond angles between neighboring atoms,
the nitrogen and carbon atoms cannot be aligned in a straight line, resulting in restricted flexibility. The peptide chain tends to adopt an asymmetric helical structure, with certain fibrous proteins composed of
elongated helices arranged around a straight screw axis.

Tertiary structure refers to the three-dimensional arrangement of a protein's atoms and the overall folding pattern of the protein. The tertiary structure is determined by the interactions between the side
chains (R groups) of its constituent amino acids, which can have positively or negatively charged groups, polarity, or nonpolar properties. Salt bridges are formed by attractive forces between negatively
charged side chains of aspartic or glutamic acid and positively charged side chains of lysine or arginine. The establishment of several hydrogen bonds also leads to mutual attraction between adjacent
peptide strands.


Rev 001 Session-5 Question Booklet Page 33 of 334



When the disulfide link is chemically reduced by adding hydrogen, the protein's tertiary structure undergoes a significant transformation. This transformation involves the breaking of closed loops and the
separation of adjacent peptide chains that are bound by disulfide bonds.

The quaternary structure refers to the arrangement of several protein subunits to form a functional protein complex.

The quaternary structure is exemplified by the arrangement of subunits in haemoglobin. Human haemoglobin is composed of four peptide chains, specifically two α-chains and two β-chains, making it a
tetramer. The four subunits are interconnected through hydrogen bonding and hydrophobic interactions. The haemoglobin tetramer is referred to as a molecule due to the intimate association of its four
subunits, despite the absence of covalent connections between the peptide chains of these subunits. Covalent bonds, specifically disulfide bridges, are responsible for binding the subunits together in other
proteins.

The amino acid sequence of porcine proinsulin is shown below. The arrows indicate the direction from the N terminus of the β-chain (B) to the C terminus of the α-chain (A).

The isolation and determination of proteins

Animal tissue typically contains significant amounts of protein and fats, with low carbohydrate levels. Plants mostly consist of carbohydrates in their dry matter. The Kjeldahl method is used to quantify protein
content in animal food products by boiling samples with sulfuric acid and an inorganic catalyst like copper sulphate. This method assumes that proteins consist of 16 percent nitrogen, while nonprotein
nitrogen is found in minimal quantities. However, this assumption is not applicable to insects and crustaceans, where chitin, a type of carbohydrate, is present.
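The 16 percent nitrogen assumption above is what yields the familiar Kjeldahl conversion factor of 100/16 = 6.25. A minimal sketch of the arithmetic (the function name and sample figures are illustrative, not from the text):

```python
def kjeldahl_protein(nitrogen_g: float, sample_g: float, factor: float = 6.25) -> float:
    """Estimate percent protein from measured Kjeldahl nitrogen.

    The default factor 6.25 assumes proteins are 16% nitrogen by mass
    (100 / 16 = 6.25), as stated in the text.
    """
    protein_g = nitrogen_g * factor
    return 100.0 * protein_g / sample_g

# A hypothetical 2.0 g sample found to contain 0.064 g of nitrogen:
# 0.064 * 6.25 = 0.4 g protein, i.e. 20 percent protein.
print(kjeldahl_protein(0.064, 2.0))  # 20.0
```

For chitin-containing samples such as insects and crustaceans, this factor overstates the true protein content, as the paragraph above notes.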

Proteins are sensitive to heat, acids, bases, organic solvents, and radiation, so harsh chemical purification procedures are unsuitable for proteins. Salts and small molecules can be removed from protein
solutions by dialysis through a semipermeable membrane, or alternatively by passage through a resin column or by gel filtration. Salting out involves gradually adding sodium sulphate or ammonium sulphate
to a protein solution, causing globulins and albumins to become insoluble and precipitate. Water-soluble proteins can be obtained in dry form by freeze-drying, in which the protein solution is frozen at a
temperature below -15°C (5°F) and the water is removed, leaving a dry powder.

Most proteins become insoluble in boiling water and undergo denaturation, meaning they are irreversibly transformed into an insoluble substance. Connective tissue is an exception: its major structural
protein, collagen, is converted into water-soluble gelatin when exposed to boiling water.


Gel filtration can be used to separate a mixture of proteins with varying molecular weights by fractionating them into their individual components. The retention of proteins in the gel is determined by the
characteristics of the gel. The proteins that remain in the gel are extracted from the column using solutions containing an appropriate concentration of salts and hydrogen ions.
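In gel filtration, molecules too large to enter the gel pores travel around the beads and elute first, while smaller molecules sample the pore volume and elute later; this size-dependent elution order is standard chromatographic behaviour rather than something stated above. A toy sketch (the protein names and molecular weights are illustrative values):

```python
# Gel filtration separates by size: larger molecules are excluded from the
# gel pores and elute first; smaller molecules elute later.
proteins = {"serum albumin": 66_000, "ovalbumin": 45_000, "lysozyme": 14_300}

# Expected elution order: descending molecular weight.
elution_order = sorted(proteins, key=proteins.get, reverse=True)
print(elution_order)  # ['serum albumin', 'ovalbumin', 'lysozyme']
```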

Although numerous proteins were initially acquired in a crystalline state, the presence of crystallinity does not guarantee purity. Many protein preparations that exhibit crystalline properties also contain other
chemicals. Several assays are employed to ascertain the presence of a single protein in a protein preparation. Protein solution purity can be assessed using techniques such as chromatography and gel
filtration. Moreover, a solution consisting solely of protein will produce a solitary peak when subjected to ultracentrifugation at extremely high velocities. Similarly, during electrophoresis, the protein will
migrate as a singular band within an electrical field. Once various methods, including amino acid analysis, confirm the purity of the protein solution, it can be deemed pure. Insoluble proteins pose a
challenge for techniques such as chromatography, ultracentrifugation, and electrophoresis, resulting in limited knowledge about them. It is possible that these proteins consist of a combination of numerous
identical proteins.

Microheterogeneous variations can be observed in certain proteins that appear to be pure. Variations exist in the amino acid content of otherwise identical proteins, and these variations are inherited from
one generation to the next; in other words, they are genetically determined. For example, some humans have two haemoglobins, haemoglobin A and haemoglobin S, which differ in a single amino acid at a
specific position in the molecule: haemoglobin A has glutamic acid at that position, while haemoglobin S contains valine. Improvements in protein analysis techniques have led to the identification of
additional cases of microheterogeneity.

The amount of a pure protein can be estimated by either weighing it or measuring its UV absorbance at a wavelength of 280 nanometers. The absorbance at a wavelength of 280 nanometers is contingent
upon the concentration of tyrosine and tryptophan within the protein. Occasionally, the biuret reaction, which is less sensitive, is employed to detect the presence of proteins. This reaction results in a purple
hue when copper sulphate is added to alkaline protein solutions. The intensity of the colour is solely determined by the quantity of peptide bonds per gramme, which is consistent across all proteins.
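The absorbance measurement described above follows the Beer-Lambert law, A = ε·c·l. A minimal sketch; the extinction coefficient of 1.0 mL/(mg·cm) is a common rough rule of thumb used here purely as an illustrative assumption, since the true value depends on the protein's tyrosine and tryptophan content, as the text notes:

```python
def concentration_from_a280(absorbance: float, epsilon_ml_mg_cm: float,
                            path_cm: float = 1.0) -> float:
    """Beer-Lambert law: A = epsilon * c * l, solved for concentration c (mg/mL).

    epsilon varies from protein to protein with tyrosine/tryptophan content;
    1.0 mL/(mg*cm) is only an illustrative placeholder value.
    """
    return absorbance / (epsilon_ml_mg_cm * path_cm)

# A reading of 0.55 at 280 nm in a 1 cm cuvette:
print(concentration_from_a280(0.55, 1.0))  # 0.55 mg/mL
```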



The physicochemical characteristics of proteins

The protein's molecular weight

The determination of protein molecular weight by standard chemical methods, such as freezing-point depression, is not feasible because it would require protein solutions of higher concentration than can be
prepared.

To determine the minimal molecular weight of a protein or its subunit, we can compute the weight based on the presence of a single molecule of an amino acid or a single atom of elements like iron, copper,
or others. For instance, myoglobin, a protein, has 0.34 gramme of iron per 100 grammes of protein. The atomic weight of iron is 56. Therefore, the minimum molecular weight of myoglobin can be calculated
as (56 × 100)/0.34, which is around 16,500. The molecular weight of myoglobin can be determined accurately using direct measurements, resulting in consistent values. The molecular weight of
haemoglobin, which also contains 0.34 percent iron, has been determined to be 66,000, or 4 × 16,500. Consequently, haemoglobin contains four iron atoms, one per subunit.
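The minimum molecular weight calculation above can be written out directly; the figures reproduce the myoglobin and haemoglobin example from the text:

```python
def minimum_molecular_weight(atomic_weight: float, percent_by_mass: float) -> float:
    """Smallest molecular weight consistent with one atom of the element per
    protein molecule: (atomic weight * 100) / (mass percent of the element)."""
    return atomic_weight * 100.0 / percent_by_mass

# Myoglobin: 0.34 g iron per 100 g protein; atomic weight of iron = 56.
mw_min = minimum_molecular_weight(56, 0.34)
print(round(mw_min))  # 16471, i.e. roughly the 16,500 quoted in the text

# Haemoglobin has the same iron content but a measured molecular weight of
# ~66,000, so it must contain 66,000 / 16,500 = 4 iron atoms.
print(round(66_000 / mw_min))  # 4
```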

The primary technique employed for determining the molecular weight of proteins is ultracentrifugation, which involves subjecting them to high-speed rotation in a centrifuge, reaching rates of approximately
60,000 revolutions per minute. Velocities of such magnitude generate centrifugal forces exceeding 200,000 times the gravitational force experienced on the surface of Earth. The initial ultracentrifuges,
constructed in the 1920s, were employed to ascertain the molecular weights of proteins. The molecular weights of many proteins have since been established. The majority of them are composed of several
subunits, with a molecular weight typically below 100,000 and often falling within the range of 20,000 to 30,000. Hemocyanins, the respiratory proteins of invertebrates that contain copper, include proteins
with extremely large molecular weights, some of which can reach several million. While there is no specific minimum molecular weight for proteins, shorter sequences of amino acids are commonly referred
to as peptides.
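Molecular weights are extracted from ultracentrifuge measurements via the Svedberg equation, M = RTs / (D(1 − v̄ρ)), which combines the sedimentation coefficient s with the diffusion coefficient D. The equation itself is standard; the numerical inputs below are approximate literature-style values for hen lysozyme, used only as an assumption to illustrate the calculation:

```python
def svedberg_molecular_weight(s, D, vbar, rho, T=293.15, R=8.314):
    """Svedberg equation: M = R*T*s / (D * (1 - vbar*rho)).

    s    sedimentation coefficient (seconds)
    D    diffusion coefficient (m^2/s)
    vbar partial specific volume of the protein (m^3/kg)
    rho  solvent density (kg/m^3)
    Returns molar mass in kg/mol.
    """
    return R * T * s / (D * (1.0 - vbar * rho))

# Approximate values for hen lysozyme (assumed, not taken from the text):
M = svedberg_molecular_weight(s=1.91e-13, D=1.04e-10, vbar=0.703e-3, rho=998.0)
print(M * 1000)  # on the order of 15,000 g/mol, near lysozyme's ~14,300
```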

The shape of protein molecules

X-ray diffraction

X-ray diffraction pattern of a crystallized enzyme.

In the process of X-ray diffraction, X-rays are directed towards a protein crystal. The X-rays, which have been deflected by the crystal, strike a photographic plate, creating a pattern of spots. This method
demonstrates that peptide chains have the ability to adopt highly intricate and seemingly unpredictable configurations. The two contrasting shapes in proteins are the compact, intricately folded structure of
globular proteins and the elongated, linear structure of fibrous proteins. These distinct shapes were identified even before the advent of X-ray diffraction technology. Fibrous protein solutions exhibit high
viscosity, meaning they are sticky, while globular protein solutions show low viscosity, meaning they flow readily. A 5 percent solution of a globular protein, such as ovalbumin, exhibits good fluidity when
passing through a narrow glass tube. On the other hand, a 5 percent solution of gelatin, which is a fibrous protein, does not flow through the tube due to its tendency to solidify at room temperature,
remaining in a liquid state only at elevated temperatures. Solutions with gelatin concentrations as low as 1 or 2 percent exhibit significant viscosity and can flow through a narrow tube only slowly or
when subjected to pressure.

Flow birefringence. Orientation of elongated, rodlike macromolecules (A) in resting solution, or (B) during flow through a horizontal tube.



Fibrous proteins, which have extended peptide chains, can intertwine mechanically and through the mutual attraction of their side chains, incorporating significant quantities of water in the process. In
globular proteins, by contrast, the majority of hydrophilic groups are located on the surface of the molecules, so only a small number of water molecules is bound. When a fibrous protein solution passes
through a narrow tube, the elongated molecules align themselves in the direction of flow, and the solution becomes birefringent, like a crystal.

Protein hydration is crucial for maintaining structural integrity, as without these water molecules, the crystalline structure of the protein disintegrates. Proteins in water solutions exhibit strong binding with
certain water molecules, while others are either weakly attached or create clusters of water molecules between folded peptide chains. Islands of water in proteins are referred to as icebergs because they are
believed to have water molecules arranged in a similar orientation as crystalline ice. Water molecules can also create connections between the carbonyl and imino groups of neighboring peptide chains,
leading to structures that resemble pleated sheets but with a water molecule replacing the hydrogen bonds in that arrangement.

The degree of hydration of protein molecules in aqueous solutions is significant, as several methods employed to ascertain the molecular weight of proteins provide the molecular weight of the protein in its
hydrated state. The water content per gramme of a globular protein in solution ranges from 0.2 to 0.5 grammes. Significantly greater quantities of water are physically trapped within the extended peptide
chains of fibrous proteins.

Protein solubility in water requires hydration. When a salt like ammonium sulphate is added to a protein dissolved in water, the level of hydration in the protein decreases, resulting in the protein becoming
insoluble and forming a precipitate, a process known as salting out. This salting-out process is reversible because the protein remains in its native state and does not undergo irreversible denaturation upon
the addition of salts such as sodium chloride, sodium sulphate, or ammonium sulphate. Euglobulins, a type of globulin, exhibit insolubility in water when salts are absent, due to the interaction between polar
groups on neighboring molecules' surfaces.

Studies of the chemical reactions and electrical properties of proteins reveal that a protein molecule contains only one α-amino group (at the N terminus) and one α-carboxyl group (at the C terminus);
these terminal groups therefore have minimal impact on the electrochemical properties of the protein, which are determined chiefly by the charged side chains.

Electrometric titration

electrometric titration of glycine

The addition of hydrochloric acid to a protein solution introduces hydrogen ions, causing the pH to decrease. In the pH range of 3 to 4 the protein acts as a buffer, so further additions of acid produce a
smaller pH reduction. This buffering involves the protonation of carboxyl groups, transforming ―COO− into ―COOH.

When an isoelectric protein is subjected to electrometric titration with potassium hydroxide, the pH gradually increases at a moderate rate. The protein has a weak buffering effect at pH 7, but a large
buffering effect is observed in the pH range of 9 to 10. The buffering capacity at pH 7 is limited due to the low abundance of histidine in proteins. The enhanced buffering capacity at pH levels 9 to 10 is
attributed to the deprotonation of the hydroxyl group in tyrosine and the ammonium groups in lysine.

Protein electrometric titrations show analogous curves, allowing the estimation of the estimated quantity of carboxyl groups, ammonium groups, histidines, and tyrosines per protein molecule. Proteins exhibit
behavior similar to amino acids in an electric field due to the presence of positively and negatively charged side chains. The isoelectric point of proteins often falls within the pH range of 5 to 7, with some
proteins having isoelectric values within the 8 to 10 range.
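The titration behaviour described above can be modelled by summing a Henderson-Hasselbalch term for each titratable group: acidic groups contribute 0 to −1 and basic groups +1 to 0, weighted by their ionized fraction at a given pH. The sketch below computes the net charge of a hypothetical protein and locates its isoelectric point by bisection; the pKa values and group counts are generic textbook approximations, not values from the text:

```python
def net_charge(pH, acidic, basic):
    """acidic/basic: lists of (pKa, count) pairs.

    Acidic groups contribute -1 * (fraction deprotonated);
    basic groups contribute +1 * (fraction protonated)."""
    q = 0.0
    for pka, n in acidic:
        q -= n / (1.0 + 10.0 ** (pka - pH))   # fraction deprotonated
    for pka, n in basic:
        q += n / (1.0 + 10.0 ** (pH - pka))   # fraction protonated
    return q

# Hypothetical small protein: 10 side-chain carboxyls, 8 lysines, 2 histidines.
acidic = [(4.0, 10)]                 # carboxyl groups
basic = [(10.5, 8), (6.0, 2)]        # lysine ammonium, histidine imidazole

# Bisection for the isoelectric point (pH at which net charge = 0);
# net charge decreases monotonically with pH, so bisection converges.
lo, hi = 0.0, 14.0
for _ in range(60):
    mid = (lo + hi) / 2.0
    if net_charge(mid, acidic, basic) > 0:
        lo = mid
    else:
        hi = mid
print(round(mid, 2))  # the isoelectric point of this hypothetical protein
```

Consistent with the text, the computed isoelectric point falls in the pH 5 to 7 range for this acid-rich composition; adding more basic residues would push it toward 8 to 10.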

Number of amino acids per protein molecule

amino acid          protein*
                    Cyto   Hb alpha   Hb beta   RNase   Lys   Chgen   Fdox
lysine                18         11        11      10      6      14      4
histidine              3         10         9       4      1       2      1
arginine               2          3         3       4     11       4      1
aspartic acid**        8         12        13      15     21      23     13
threonine              7          9         7      10      7      23      8
serine                 2         11         5      15     10      28      7
glutamic acid**       10          5        11      12      5      15     13
proline                4          7         7       4      2       9      4
glycine               13          7        13       3     12      23      6
alanine                6         21        15      12     12      22      9
half-cystine           2          1         2       8      8      10      5
valine                 3         13        18       9      6      23      7
methionine             3          2         1       4      2       2      0
isoleucine             8          0         0       3      6      10      4
leucine                6         18        18       2      8      19      8
tyrosine               5          3         3       6      3       4      4
phenylalanine          3          7         8       3      3       6      2
tryptophan             1          1         2       0      6       8      1
total                104        141       146     124    129     245     97

*Cyto = human cytochrome c; Hb alpha = human hemoglobin A, alpha-chain; Hb beta = human hemoglobin A, beta-chain; RNase = bovine ribonuclease; Lys = chicken lysozyme; Chgen = bovine chymotrypsinogen; Fdox = spinach ferredoxin.

**The values recorded for aspartic acid and glutamic acid include asparagine and glutamine, respectively.
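As a quick consistency check, the column sums in the table can be compared against the stated totals; a few columns are transcribed below in row order (lysine through tryptophan):

```python
# Amino acid counts per protein, transcribed from the table, in the order:
# lysine, histidine, arginine, aspartic acid, threonine, serine, glutamic acid,
# proline, glycine, alanine, half-cystine, valine, methionine, isoleucine,
# leucine, tyrosine, phenylalanine, tryptophan.
counts = {
    "Cyto":     [18, 3, 2, 8, 7, 2, 10, 4, 13, 6, 2, 3, 3, 8, 6, 5, 3, 1],
    "Hb alpha": [11, 10, 3, 12, 9, 11, 5, 7, 7, 21, 1, 13, 2, 0, 18, 3, 7, 1],
    "RNase":    [10, 4, 4, 15, 10, 15, 12, 4, 3, 12, 8, 9, 4, 3, 2, 6, 3, 0],
    "Lys":      [6, 1, 11, 21, 7, 10, 5, 2, 12, 12, 8, 6, 2, 6, 8, 3, 3, 6],
}
stated_totals = {"Cyto": 104, "Hb alpha": 141, "RNase": 124, "Lys": 129}

for name, column in counts.items():
    assert sum(column) == stated_totals[name], name
print("column sums match the stated totals")
```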

Zone electrophoresis is a method used to detect electrophoretic migration in proteins, in which the protein is placed within a gel or porous substance. Pigmented proteins can be tracked visually, while
colourless proteins become detectable after electrophoresis by staining with a dye.

The secondary and tertiary structure of globular proteins is determined primarily through X-ray diffraction analysis of their crystals. The intensity of the diffraction pattern recorded on a photographic plate
depends on the electron density within the protein crystal. Hydrogen atoms, which have the lowest electron density, contribute least to the pattern, and without additional aids it was long nearly impossible to determine the structure of a protein containing more than about 100 amino acids.

To enhance resolution, heavy atoms, especially heavy metals, are introduced into the side chains of specific amino acids. Mercury ions can attach to the sulfhydryl groups of cysteine, while
platinum chloride is used with other proteins. In iron-containing proteins, the iron atom itself is sufficient.

X-ray diffraction can only partially determine the three-dimensional conformation of the peptide chain, but a complete resolution has been achieved by combining findings from X-ray diffraction with amino
acid sequence analysis. The technique has revealed orderly structural patterns in proteins, including an elongated configuration of peptide chains connected by hydrogen bonds.

protein structure; α-helix

The α-helix in the structural arrangement of a protein.

The α-helix is a significant structural arrangement in proteins, consisting of a series of amino acids coiled around a straight axis. It has a length of 1.5 angstroms per amino acid residue and is stabilized by
hydrogen bonds between carbonyl and imino groups. Previously, the α-helix was believed to be the main structural component of globular proteins, but it is now understood that myoglobin is unique in this
regard. The remaining globular proteins, determined using X-ray diffraction, consist of limited sections of the α-helix. The peptide chains often exhibit a disordered arrangement, commonly referred to as a
random coil, but this is inaccurate as the folding process is determined by the primary structure and influenced by secondary and tertiary structures.
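The 1.5-angstrom rise per residue quoted above gives a simple way to estimate the axial length of a helical segment. A minimal sketch, treating the 11-residue span (residues 25 to 35 of lysozyme, mentioned later in this booklet) purely as an illustration:

```python
def alpha_helix_length_angstroms(n_residues: int, rise_per_residue: float = 1.5) -> float:
    """Axial length of an alpha-helical segment, using the 1.5-angstrom
    rise per amino acid residue given in the text."""
    return n_residues * rise_per_residue

# An 11-residue helical segment spans about 16.5 angstroms along its axis.
print(alpha_helix_length_angstroms(11))  # 16.5
```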



lysozyme; protein conformation

The simplified structure of lysozyme from hen's egg white has a single peptide chain of 129 amino acids. The amino acid residues are numbered from the terminal α group (N) to the terminal carboxyl group
(C). Circles indicate every fifth residue, and every tenth residue is numbered. Broken lines indicate the four disulfide bridges. Alpha-helices are visible in the ranges 25 to 35, 90 to 100, and 120 to 125.

The iron-containing proteins myoglobin and haemoglobin were the first proteins to have their internal structures fully resolved. The examination of the hydrated crystals of these proteins, conducted by Max
Perutz, an Austrian-born British biochemist, and John C. Kendrew, a British biochemist, who were awarded the 1962 Nobel Prize for Chemistry for their research, unveiled that the peptide chains are folded
so tightly that the majority of the water is expelled from the core of the spherical molecules. The amino acids containing the ammonium (―NH3+) and carboxyl (―COO−) groups were observed to relocate to
the outer surface of the globular molecules, whereas the nonpolar amino acids were observed to accumulate in the interior.

Alternative methods for determining protein structure

X-ray diffraction is a powerful method for determining the secondary and tertiary structure of proteins, but it requires extensive study and specialized equipment. Some basic methods rely on the optical
characteristics of proteins, such as refractivity, absorption of light at different wavelengths, rotation of plane-polarized light at different wavelengths, and luminescence.

Spectrophotometric characterization is limited to proteins with coloured prosthetic groups, such as the crimson heme proteins of blood, the violet pigments of the retina, green and yellow pigments,
blue copper-containing proteins, and the deep brown melanins. Peptide bonds contain carbonyl groups that absorb light at very short wavelengths, while phenylalanine, tyrosine, and tryptophan
possess aromatic rings that absorb ultraviolet light in the range of 280 to 290 nanometers.

Optical activity refers to the ability of a substance to rotate the plane of polarised light. All amino acids except glycine are optically active. Proteins are also optically active when polarised light of
visible wavelengths is employed, rotating the plane of polarisation to the left (levorotatory). The specific rotation of most L-amino acids ranges from -30° to +30°, and the observed rotation of a protein
solution depends on its concentration and the distance the light travels through it.

The optical rotation of a protein is influenced by all its constituent amino acids, with cystine and the aromatic amino acids phenylalanine, tyrosine, and tryptophan having the most significant impact.

Chemical methods can be used to understand the internal structure of proteins, determining if certain groups are exposed on the protein's surface or hidden within the tightly folded peptide chains. The
chemical reagents used in these inquiries must be gentle and not alter the protein's structure. Tyrosine reactivity is particularly intriguing, as only three out of six tyrosines present in the enzyme ribonuclease
can undergo iodination. Enzymatically breaking down iodinated ribonuclease is employed to detect the presence of peptides containing iodinated tyrosines.

Cysteine can be identified by reacting it with substances like iodoacetic acid or iodoacetamide, leading to the creation of carboxymethylcysteine or carbamidomethylcysteine. The peptides containing these
compounds can then be analysed to determine the amount of cysteine present. The imidazole groups of specific histidines can also be identified by coupling with identical reagents using varying
circumstances. However, only a limited number of additional amino acids can be tagged without inducing alterations in the secondary and tertiary structure of the protein.

Protein subunit association is another important aspect of protein structure. Complexes consist of chains composed of two, four, or more repeating fundamental structural units. These subunits can be
systematic and can be either cyclic, cubic, or tetrahedral. Some proteins have identical subunits, while others have subunits that vary. Haemoglobin is a protein composed of four subunits, specifically two α-
chains and two β-chains, arranged in a tetrameric structure.

Protein denaturation refers to the process by which proteins lose their structural and functional integrity. Boiling a protein solution often leads to the protein becoming insoluble, meaning it is denatured.
Renaturation is the process by which the original structure of a protein can be restored in certain cases. Proteins undergo denaturation when exposed to alkaline or acidic substances, oxidizing or reducing
agents, and certain organic solvents. Denaturing chemicals, such as urea and guanidinium chloride, disrupt the hydrogen bonds and salt bridges between positively and negatively charged side chains,
resulting in the elimination of the tertiary structure of the peptide chain.

In some cases, the native protein reassembles when denaturing chemicals are eliminated from a protein solution. Additionally, proteins can be denatured by subjecting them to organic solvents like ethanol
or acetone, which disrupt the cohesive forces between nonpolar groups.



Some small proteins exhibit remarkable stability even when exposed to high temperatures. Denatured proteins typically lack biological activity and are less resistant to trypsin, an enzyme that
breaks down proteins during digestion. Denaturation is not an all-or-none process; there are multiple intermediate stages between the native and denatured forms of a protein. Denatured proteins show more
pronounced colour responses for tyrosine, histidine, and arginine than the same proteins in their native state.

The native shape of globular proteins in living organisms is attained as the peptide chain is synthesized one amino acid at a time. Experiments using radioactive carbon or heavy hydrogen have shown that the
protein molecule grows incrementally from the N terminus to the C terminus, with side chains interacting to create either an α-helix or closed loops through hydrogen bonds or disulfide bridges. The final
configuration is probably fixed once the peptide chain reaches 50 or more amino acid residues.

Soluble proteins tend to migrate to the boundary between air and water or oil and water. They spread and arrange themselves into thin films at the interface, which decreases the surface tension.
Proteins can be recovered from such films in their original state by applying lateral pressure to the film, which thickens it into a layer whose height matches the size of the original protein molecule.

The movement of protein molecules at the air-water boundary has been used to determine the molecular weight of proteins by measuring the force the protein layer exerts on a barrier.
When a protein solution is vigorously agitated in the presence of air, it foams because soluble proteins migrate to the air-water interface; these proteins remain at the interface, hindering or delaying the
reversion of the foam into a uniform solution. Certain labile, readily altered proteins are denatured by exposure to the air-water interface. An example of irreversible denaturation through surface
spreading can be observed when egg white is vigorously whipped, producing a stable foam.

Classification of proteins

Classification by solubility

collagen

Collagen molecule.

In 1902, German chemists Emil Fischer and Franz Hofmeister separately concluded that proteins are primarily made up of polypeptides composed of multiple amino acids. Consequently, an effort was made
to categorise proteins based on their chemical and physical characteristics, as their biological role had not yet been determined. The proteinaceous nature of enzymes was not demonstrated until the 1920s.
Proteins were generally classed based on their solubility in various solvents. However, this classification is no longer adequate as proteins with distinct structures and functions can sometimes have
comparable solubilities. Conversely, proteins with similar structures and the same function can sometimes have varying solubilities. The terminology linked to the previous classification, nonetheless,
continue to be extensively utilised. The definitions are provided below.

keratin

Scanning electron micrograph showing strands of keratin in a feather, magnified 186×.

Albumins are water-soluble proteins that remain dissolved in water half-saturated with ammonium sulphate; they are precipitated only when the ammonium sulphate concentration approaches full saturation.
Pseudoglobulins are soluble in salt-free water, while euglobulins are insoluble in salt-free water. The plant proteins prolamins and glutelins are insoluble in water. Protamines are a group of proteins found
in fish sperm that are strongly alkaline owing to their high content of arginine. Histones, which are less alkaline, occur exclusively in cell nuclei, where they are tightly associated with nucleic acids.

Scleroproteins are insoluble proteins found in animal tissues, consisting of keratin and collagen. Conjugated proteins are complex molecules composed of both protein and nonprotein components, with the
prosthetic group being the nonprotein component. They can be classified into different categories based on their composition, such as mucoproteins, lipoproteins, phosphoproteins, chromoproteins, and
nucleoproteins.



The classification of globulins is flawed because most, if not all, globulins contain small quantities of carbohydrate. Phosphoproteins lack a separable prosthetic group and are essentially proteins
with phosphorylated serine hydroxyl groups. Globulins encompass proteins serving many functions, including enzymes, antibodies, fibrous proteins, and contractile proteins.

A functional classification is more useful than the older categorization, although it too has limitations: a single protein may possess multiple functions. For example, myosin, a contraction-related protein,
also functions as an ATPase. Moreover, a protein cannot be classified as an enzyme until its substrate is identified, since the substrate is essential for investigating its enzymatic action.

Special structure and function of proteins

protein engineering

How protein engineering helps scientists battle diseases.

Although it has its flaws, a functional classification is employed in this context to illustrate, whenever feasible, the relationship between the structure and function of a protein. The initial presentation focuses
on the structural, fibrous proteins because of their simpler structure compared with globular proteins and their direct relationship to their function, which involves maintaining either a rigid or flexible structure.

Structural proteins

Scleroproteins

Collagen

collagenous fibres

Randomly oriented collagenous fibres of varying size in a thin spread of loose areolar connective tissue (magnified about 370 ×).

Collagen is a crucial protein in bones, tendons, ligaments, and skin. It was once considered insoluble in water but can be extracted from calf skin using a citrate buffer with a pH of 3.7. Collagen is formed by
cleaving procollagen's peptide bonds, and it consists of three subunits, each weighing 95,000 units. The three chains are arranged in a staggered manner, resulting in a trimer lacking clear terminal
boundaries.

Collagen is unique in its elevated levels of proline and hydroxyproline, unlike other proteins such as elastin. Proline occurs in the glycine-proline-X sequence, where X is either alanine or hydroxyproline.
Collagen lacks cystine and tryptophan and therefore cannot substitute for complete dietary proteins. The presence of proline induces bends in the peptide chain, reducing the length per amino acid unit.

Collagenase, an enzyme produced by some bacteria, can hydrolyze collagen. Boiling collagen in water disrupts the triple helix and partially hydrolyzes the subunits, converting collagen into gelatin, a
hydrated product. Collagen undergoes cross-linking when exposed to tannic acid or chromium salts, which renders it insoluble; this phenomenon is exploited in tanning, the conversion
of hide into leather.

Collagen in living organisms appears to undergo an ageing process through the development of cross-links between collagen strands. These cross-links are created when some
lysine side chains are converted into aldehydes, which then bond with the ε-amino groups of intact lysine side chains. Elastin, a protein found in the elastic fibres of connective tissue, contains similar cross-links.
Desmosine and isodesmosine, unusual amino acids formed from cross-linked lysine side chains, are released when cross-linked elastin is degraded.

Keratin, a fibrous protein found in the outermost layers of the skin, is resistant to proteolytic enzymes and therefore cannot serve as a dietary protein. Its exceptional stability is due to its abundant
disulfide bonds, contributed by cystine. Keratin and collagen have distinct amino acid compositions, with cystine accounting for up to approximately 24 percent of keratin's amino
acids. The peptide chains of keratin are organized in roughly equal proportions of antiparallel and parallel pleated sheets.

The length of keratin fibres depends on their water content: the fibres can take up about 16 percent water, which increases their length by 10 to 12 percent. Hair keratin, as found in
wool, has been studied extensively; it undergoes irreversible shrinkage when exposed to water heated to approximately 90 °C (194 °F). The breakdown of hydrogen bonds and other noncovalent bonds is
responsible for this phenomenon; the disulfide linkages remain unaffected.

Fibroin, an insoluble substance found in silk, has been studied extensively. The silk of the silkworm's cocoon is composed of two proteins: sericin, which is soluble in hot water, and
fibroin, which is insoluble. The amino acid makeup of fibroin is distinct from that of all other proteins: it consists mostly of glycine, alanine, tyrosine, and serine, with minor quantities of the remaining amino acids
and no sulphur-containing amino acids at all.

There is limited knowledge regarding the scleroproteins found in marine sponges and the insoluble proteins present in the cellular membranes of animal cells. Some membranes exhibit solubility in
detergents, while others are insoluble in detergents.

Muscle tissue contains the fibrous proteins myosin and tropomyosin. Myosin, which has also been found in blood platelets, belongs to the globulin fraction; tropomyosin is a smaller protein that shares many features with myosin.
Myosin forms highly viscous solutions when dissolved in a cooled, dilute solution of potassium chloride and sodium bicarbonate.

Muscle contraction is fueled by the oxidation of carbohydrates or lipids, providing the necessary energy. The process of converting chemical energy into mechanical energy is known as a mechanochemical
reaction, driven by a molecular process that involves the fibrous muscle proteins.

The structure of actin and myosin filaments.

Myosin combines easily with another muscle protein called actin, the molecular weight of which is about 50,000; it forms 12 to 15 percent of the muscle proteins. Actin can exist in two forms—one, G-actin, is
globular; the other, F-actin, is fibrous. Actomyosin is a complex molecule formed by one molecule of myosin and one or two molecules of actin. In muscle, actin and myosin filaments are oriented parallel to
each other and to the long axis of the muscle. The actin filaments are linked to each other lengthwise by fine threads called S filaments. During contraction the S filaments shorten, so that the actin filaments
slide toward each other, past the myosin filaments, thus causing a shortening of the muscle (for a detailed description of the process, see muscle: Striated muscle).

Fibrinogen and fibrin

Red blood cells (erythrocytes) trapped in a mesh of fibrin threads. Fibrin, a tough, insoluble protein formed after injury to the blood vessels, is an essential component of blood clots.

Fibrinogen, a protein found in blood plasma, is converted into insoluble fibrin during blood clotting. Blood serum, collected after the clot is removed, contains blood plasma without fibrinogen. The
concentration of fibrinogen in blood plasma ranges from 0.2 to 0.4 percent. Fibrinogen can be precipitated from blood plasma by adding sodium chloride at a concentration half of its saturation point. The
molecules in electron micrographs are observed as elongated structures measuring 47.5 nanometers in length and 1.5 nanometers in diameter.

Thrombin, an enzyme, triggers the clotting process by catalyzing the cleavage of certain peptide bonds in fibrinogen. This releases two small fibrinopeptides with molecular weights of 1,900
and 2,400. The residual portion of the fibrinogen molecule, the fibrin monomer, remains soluble and stable at pH values below 6, that is, under acidic conditions. Under
neutral conditions (pH 7), the monomer is converted into larger aggregates of insoluble fibrin through the formation of new bonds.

Soluble proteins, including albumins, globulins, and other types, are present in animal fluids such as blood plasma and lymph. The protein content in human blood serum is around 7%, with two-thirds found
in the albumin fraction and the remaining one-third in the globulin fraction. Serum electrophoresis demonstrates a prominent albumin peak and three smaller peaks corresponding to alpha-, beta-, and
gamma-globulins. The concentrations of alpha-, beta-, and gamma-globulin in normal human serum are around 1.5%, 1.9%, and 1.1%, respectively.
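
The two-thirds/one-third split quoted above can be turned into approximate concentrations with simple arithmetic. The sketch below uses the figures from the text; the grams-per-100-mL framing is an assumption added for illustration.

```python
# Back-of-the-envelope split of the ~7 % serum protein figure quoted above.
# The 2/3 albumin : 1/3 globulin partition comes from the text; expressing
# it as g per 100 mL of serum is an illustrative assumption.
total_protein = 7.0                  # g per 100 mL of serum (about 7 %)
albumin = total_protein * 2 / 3      # albumin fraction
globulin = total_protein * 1 / 3     # combined globulin fraction

print(round(albumin, 1))    # 4.7
print(round(globulin, 1))   # 2.3
```

So roughly 4.7 g per 100 mL of serum protein is albumin and 2.3 g per 100 mL is globulin, under the stated assumptions.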

Serum albumin exhibits less heterogeneity, meaning it comprises a smaller number of different proteins compared to globulins. It is one of the rare serum proteins that can be produced in a crystalline state.
Serum albumin is found in high concentrations in blood serum and serves as a protective colloid, stabilizing other proteins.

The alpha-globulin fraction of blood serum contains several conjugated proteins, including an α-lipoprotein and two mucoproteins. One of the mucoproteins, haptoglobin, binds free haemoglobin in the
bloodstream.

Immunoglobulins, or antibodies, have a four-chain structure consisting of two identical light (L) and two identical heavy (H) chains. Gamma-globulins show the highest level of heterogeneity among the
globulins, with molecular weights of some species reaching 800,000 or more. Immunoglobulins of this large class are referred to as IgM or gamma M (γM) macroglobulins, while IgG or gamma G (γG) denotes γ-globulins with a molecular weight of
about 150,000.

Milk contains three main proteins: α-lactalbumin, β-lactoglobulin, and casein, which is a phosphoprotein. When acid is added to milk, the casein precipitates. β-Lactoglobulin
also occurs as a dimer with a molecular weight of 37,000. α-Lactalbumin shares a similar amino acid composition and tertiary structure with lysozyme, a protein found in egg white.

Casein can be precipitated either by the addition of acid or by the enzymatic action of rennin, which is present in gastric juice; rennin-precipitated casein is the basis of
cheese production. The precipitated casein carries the milk fat with it, while the milk sugar (lactose) remains in the liquid portion, the whey. α-Casein accounts for approximately 75 percent of the total casein. Cystine has been
detected only in κ-casein.

Egg white proteins include conalbumin, lysozyme, ovoglobulin, ovomucoid, and avidin. Lysozyme is an enzyme that breaks down the carbohydrate of the protective capsules produced by certain bacteria;
it is structurally similar to the milk protein α-lactalbumin, which activates lactose synthetase in the production of lactose. Avidin is a glycoprotein that binds biotin selectively; this binding is the cause of "egg-white injury", a biotin deficiency seen in animals fed raw egg white. Avidin's amino acid sequence is well documented.

Egg yolk proteins consist of a combination of lipoproteins and livetins; the latter resemble serum albumin, α-globulin, and β-globulin. The yolk also contains phosvitin, a phosphoprotein that has
likewise been found in fish sperm; it has a molecular weight of 40,000 and an atypical amino acid makeup, with phosphoserine accounting for one-third of its amino acid residues.

Protamines and histones

Protamines are found in the spermatozoa of fish, with salmine and clupeine being the most extensively investigated. These protamines bind to DNA, resulting in the formation of nucleoprotamines. They
have alkaline properties due to their high arginine concentration, resulting in isoelectric points ranging from pH 11 to 12. The molecular weights of salmine and clupeine are around 6,000.

Histones are less basic than protamines; they contain significant quantities of either lysine or arginine and only small amounts of aspartic acid and glutamic acid. Nucleohistones, complexes of
histones and DNA, are present in the nuclei of the somatic cells of animals and plants but not in animal sperm. Histones have molecular weights of 10,000 to 22,000 and contain most of the 20 amino
acids, except tryptophan and the sulphur-containing ones.

Plant proteins, primarily globulins, are obtained from the protein-dense seeds of cereals and legumes. They are large molecules built from smaller subunits; examples are edestin from
hemp, amandin from almonds, concanavalin A (42,000) and B (96,000), and canavalin (113,000) from jack beans. A distinct group of seed proteins, the prolamins, are insoluble in water but soluble in
mixtures of water and ethanol; gliadin from wheat and zein from corn are examples.

In some conjugated proteins the prosthetic group is attached through covalent bonds, chemical interactions in which electrons are shared. In lipoproteins, nucleoproteins, and certain heme proteins, however, the
linkage between the two components is noncovalent, maintained by hydrogen bonds, salt bridges, and the interaction of hydrophobic groups.

Mucoproteins and glycoproteins are types of proteins that include carbohydrates. Oligosaccharides found in mucoproteins and glycoproteins serve as prosthetic groups. These oligosaccharides are
composed of a small number of simple sugar molecules, often ranging from four to 12. The most frequently occurring sugars in these oligosaccharides include galactose, mannose, glucosamine, and
galactosamine. Certain mucoproteins have a carbohydrate content of 20% or higher, typically consisting of several oligosaccharides linked to various regions of the peptide chain.

In lipoproteins and proteolipids the association between the protein and the lipid component is noncovalent. The lipoproteins found in the α-
and β-globulin fractions of blood serum are soluble in water but not in organic solvents, whereas certain lipoproteins of the brain, known as proteolipids, are soluble in organic solvents because of their high lipid
content.

Lipoproteins, due to their lipid composition, possess the least density among all proteins and are typically categorised as low-density lipoproteins (LDL) and high-density lipoproteins (HDL).

Some proteins are coloured through combination with carotenoids, such as astaxanthin in the shells of lobsters, crayfish, and other crustaceans. The retina of the eye contains retinal, a compound derived from
carotene and generated through the oxidation of vitamin A. The aldehyde group (―CHO) of retinal forms a covalent link with an amino (―NH2) group of opsin, the protein carrier, giving the visual pigment rhodopsin. Colour vision is made possible
by several visual pigments in the retina that differ from rhodopsin either in the retinal component or in the structure of the protein carrier.

Metalloproteins are proteins in which heavy-metal ions are attached directly to side chains of amino acids such as histidine and cysteine. Transferrin and ceruloplasmin, found in
the globulin fractions of blood serum, carry iron and copper, respectively. Ferritin, another iron protein, serves as the storage form of iron in animals and has been isolated in crystalline form from
the liver and spleen; it contains 20 to 22 percent iron and has a molecular weight of around 480,000.

Green plants and certain photosynthetic and nitrogen-fixing bacteria possess a variety of ferredoxins, small proteins of 50 to 100 amino acids that contain iron–sulfide (Fe–S) units. The
number of iron–sulfide units per ferredoxin molecule ranges from five in spinach ferredoxin to ten in certain bacterial ferredoxins.

Ceruloplasmin is a globulin that contains copper and has a molecular weight of 151,000. It serves as the primary transporter of copper in living organisms, but can also be conveyed by transferrin. A different
protein that contains copper, called copper-zinc superoxide dismutase (formerly known as erythrocuprein), has been separated from red blood cells and found in the liver and brain.

Zinc ions are commonly found in numerous animal enzymes, typically forming bonds with the sulphur atoms of cysteine. The kidneys of horses contain the protein metallothionein, which consists of zinc and
cadmium, both of which are chemically bonded to sulphur. Significant quantities of a vanadium-protein combination called hemovanadin have been discovered in the yellowish-green cells, known as
vanadocytes, of tunicates, marine invertebrates.

Heme proteins and other chromoproteins contain a pigment called heme, which gives them their characteristic colour. Despite the presence of iron, heme proteins are typically not classified as
metalloproteins due to the strong binding of iron inside their prosthetic group, which is an iron-porphyrin complex. Porphyrin compounds exhibit strong light absorption in the vicinity of 410 nanometers.

Blue-green algae in Morning Glory Pool, Yellowstone National Park, Wyoming.

Biliproteins are chromoproteins whose pigment component derives from the bile pigment biliverdin, itself produced from porphyrin; they occur in insects and bird eggshells and, notably, in red and blue-green algae, where the red pigment-protein is phycoerythrin and the blue one phycocyanin.

Nucleoproteins form when a protein solution is combined with a nucleic acid solution, and protein–nucleic acid complexes also occur in living cells. The cell nucleus primarily contains
deoxyribonucleic acid (DNA), while the cytoplasm mainly contains ribonucleic acid (RNA). Well-characterized nucleoproteins include the nucleoprotamines, the nucleohistones,
and certain RNA and DNA viruses. Nucleoprotamines are found in fish sperm cells, while nucleohistones occur in thymus, pea seedlings, and other plant material. Both
nucleoprotamines and nucleohistones contain only DNA.

Schematic structure of the tobacco mosaic virus. The cutaway section shows the helical ribonucleic acid associated with protein molecules in a ratio of three nucleotides per protein molecule.

The simplest viruses consist of an RNA molecule enveloped by protein; this group includes tobacco mosaic virus and some animal viruses. Most bacterial viruses, known as phages, contain DNA, with the central
core of DNA enveloped by protein. Phage protein is a mixture that includes several enzymes, so a phage is not simply the protein component of a single nucleoprotein.

Haemoglobin serves as the oxygen transporter in all vertebrates and certain invertebrates. Its iron atom is attached to the four nitrogen atoms of a porphyrin ring; oxyhemoglobin (HbO2), the
bright red oxygen-bound form, becomes deoxyhemoglobin when the oxygen is released. Carbon monoxide has a higher affinity for haemoglobin than oxygen does, so it can displace the oxygen of oxyhemoglobin and thereby prevent
oxygen from reaching bodily tissues.

Haemoglobins found in mammals, birds, and many other vertebrates are tetramers composed of two α-chains and two β-chains; the four subunits are connected through noncovalent interactions. Removal of
hemin, the iron–porphyrin component, yields globin, which dissociates into two half-molecules, each with a molecular weight of 32,200. Unlike haemoglobin, globin is a labile protein that is
prone to denaturation.

Mammalian haemoglobins differ in amino acid makeup and hence in secondary and tertiary structure. Rat and horse haemoglobins crystallize readily, whereas
human, cow, and sheep haemoglobins are difficult to crystallize because of their higher solubility. Haemoglobin crystals differ in shape among species, and the rates of disintegration and denaturation also
vary from species to species.

Invertebrates have respiratory proteins similar to haemoglobin, such as erythrocruorin, which is found in insects, mollusks, and protozoans. Leghemoglobin, a red protein found in the root nodules of
leguminous plants, appears to be generated by nitrogen-fixing bacteria found in the root nodules and likely plays a role in converting atmospheric nitrogen into ammonia and amino acids.

Additional respiratory proteins include chlorocruorin, a green respiratory protein found in marine worms, and hemocyanin, a metalloprotein containing copper. Hemocyanins appear pale yellow in the
absence of oxygen and turn blue in its presence, with molecular weights ranging from 300,000 to 9,000,000.

Peptide hormones

Endocrine glands produce a variety of hormones, including proteins or peptides as well as steroids. The article "Hormone" discusses the genesis, physiological role, and manner of action of hormones. None
of the hormones possess any enzymatic action. Each substance has a specific target organ that triggers a particular biological response, such as the release of gastric or pancreatic juice, the formation of
milk, or the creation of steroid hormones. The precise process via which hormones exert their effects remains incompletely known. Cyclic adenosine monophosphate has a role in transmitting hormonal
signals to cells that are selectively activated by the hormone.

Thyroid gland hormones

Thyroglobulin, the iodine-containing protein of the thyroid gland, has a molecular weight of 670,000 and yields the hormone thyroxine. It also contains thyroxine analogues with fewer iodine atoms (three
and two instead of four) and iodinated tyrosines carrying one or two iodine atoms. Administration of the hormone raises the metabolic rate, whereas its absence slows metabolism.

Calcitonin, an additional hormone found in the thyroid gland, functions to decrease the concentration of calcium in the bloodstream. The amino acid sequences of calcitonin derived from pig, beef, and
salmon exhibit variations in certain amino acids compared to human calcitonin. However, all of them possess the half-cystines (C) and the prolinamide (P) in identical positions.

Parathyroid hormone, also known as parathormone, is crucial for regulating the blood's calcium level. It is synthesized in small glands located within or behind the thyroid gland. A reduction in its production
leads to hypocalcemia, which is characterized by lower than normal levels of calcium in the bloodstream. Bovine parathormone has a molecular weight of 8,500; it contains no cystine or cysteine but
is rich in aspartic acid, glutamic acid, and their amides.

Pancreatic hormones

Although the amino acid structure of insulin was established in 1949, early attempts to synthesize it gave low yields because the two peptide chains failed to join properly and form the correct
disulfide bridges. The ease with which insulin is produced in the body is explained by the discovery in the pancreas of proinsulin, from which insulin is generated. During the conversion of proinsulin to insulin, a peptide chain
consisting of 33 amino acids, known as the connecting peptide or C peptide, is removed from the single peptide chain of proinsulin. The disulfide bonds of proinsulin link the A and B chains.

Insulin mostly occurs as a hexameric complex in aqueous solutions, with each monomer consisting of an A and a B chain. Insulins from many species have been extracted and examined, revealing slight
variations in their amino acid sequences. However, all of them appear to possess identical disulfide bridges connecting the two chains.

Insulin infusion reduces blood sugar levels, while the administration of glucagon, a different hormone produced by the pancreas, increases blood sugar levels. Glucagon is a linear peptide chain
of 29 amino acids; synthetic glucagon displays the full biological activity of the natural hormone. Glucagon contains neither cystine nor isoleucine.

The pituitary gland is composed of three distinct sections: the anterior lobe, the posterior lobe, and an intermediate region. These sections vary in terms of cellular structure as well as the structure and
function of the hormones they produce. The posterior lobe secretes two analogous chemicals, oxytocin and vasopressin. The former induces uterine contraction, whereas the latter elevates blood pressure.
Both peptides consist of eight amino acids, with a ring structure comprising five amino acids (considering the two cystine halves as a single amino acid) and a side chain consisting of three amino acids. The
two cystine halves are connected to each other through a disulfide bond, and the amino acid at the C terminal is glycinamide. The structure has been established and verified. Vasopressin in humans is
distinguished from oxytocin by the substitution of isoleucine with phenylalanine and leucine with arginine.
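
The two substitutions that distinguish vasopressin from oxytocin can be made concrete with a short comparison. The one-letter sequences below are standard values supplied here for illustration; the text itself names only the substituted residues.

```python
# The two posterior-pituitary peptides differ at exactly two positions.
# One-letter sequences are standard reference values, not given in the text.
oxytocin    = "CYIQNCPLG"   # Cys-Tyr-Ile-Gln-Asn-Cys-Pro-Leu-Gly
vasopressin = "CYFQNCPRG"   # Cys-Tyr-Phe-Gln-Asn-Cys-Pro-Arg-Gly

# Collect (position, oxytocin residue, vasopressin residue) for mismatches.
differences = [(i + 1, a, b)
               for i, (a, b) in enumerate(zip(oxytocin, vasopressin))
               if a != b]
print(differences)   # [(3, 'I', 'F'), (8, 'L', 'R')]
```

The output shows isoleucine replaced by phenylalanine at position 3 and leucine by arginine at position 8, matching the substitutions described above.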

The intermediate part of the pituitary gland produces the melanocyte-stimulating hormone (MSH), which causes expansion of the pigmented melanophores (cells) in the skin of frogs and other batrachians.
Two hormones, called α-MSH and β-MSH, have been prepared from hog pituitary glands. The first, α-MSH, consists of 13 amino acids; its N-terminal serine is acetylated (i.e., the acetyl group, CH3CO,
of acetic acid is attached), and its C-terminal valine residue is present as valinamide. The second, β-MSH, contains in its 18 amino acids many of those occurring in α-MSH.

The anterior pituitary lobe secretes various protein hormones: thyroid-stimulating hormone (thyrotropin; molecular weight about 28,000), lactogenic hormone (prolactin; 22,500), growth hormone (21,500),
luteinizing hormone (30,000), and follicle-stimulating hormone (29,000). The thyroid-stimulating hormone is composed of α and β subunits similar in makeup to those of the luteinizing hormone;
when the separated subunits are rejoined, around 50 percent of the original activity is restored.

Sheep pituitary glands yield the lactogenic hormone prolactin, a chain of 188 amino acids; fragments of the chain exhibit about 10 percent of its biological activity, and the hormone also shows some
activity of the growth hormone. The luteinizing hormone is a mucoprotein of about 30,000 molecular weight composed of two subunits, each of roughly 15,000. Chorionic gonadotropin, found in the urine of
pregnant women, enables early detection of pregnancy.

Recently, researchers have identified small peptides with hormone-like effects on specific target organs. Angiotensin, also known as angiotonin or hypertensin, is produced in the blood by the activity of
renin, an enzyme found in the kidney. Peptides with similar properties include bradykinin, gastrin, secretin, and kallikrein.

Antibodies, proteins that combat foreign substances in the body, are associated with the globulin fraction of immune serum. Antibodies can be purified by precipitation with the specific antigen that
stimulated their production, followed by dissociation of the resulting antigen–antibody complex. Most immunoglobulins, and most antibody activity, are found in the IgG fraction.

The first fraction obtained by gel filtration of γ-globulin solutions on dextran gels consists of the IgM (γM) macroglobulins, with a molecular weight of about 900,000. The next two fractions are IgA (γA) and IgG (γG), with approximate molecular
weights of 320,000 and 150,000, respectively. Two further immunoglobulins, IgD and IgE, occur in smaller quantities in certain immune sera.

In summary, the protein hormones and the immunoglobulins differ widely in molecular weight, amino acid makeup, and other characteristics.

Diagram of an IgG immunoglobulin.

When IgG molecules are treated with the enzyme papain, they split into three fragments of nearly identical molecular weight, about 50,000. Two of these, the Fab fragments, are indistinguishable from each other;
the third is called Fc. Reduction of some of the disulfide bridges of IgG yields two heavy chains (molecular weight 55,000) and two light chains (molecular weight
22,000), connected by disulfide bonds in the sequence L―H―H―L. The H chain contains four intrachain disulfide bonds, the L chain two.
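
The quoted fragment and chain masses can be checked for rough consistency with simple bookkeeping. All numbers below come from the text; the arithmetic itself is illustrative.

```python
# Consistency check on the approximate molecular weights quoted above.
intact_igg = 150_000

# Papain cleaves IgG into two Fab fragments and one Fc, each about 50,000:
papain_total = 2 * 50_000 + 50_000

# Reducing the interchain disulfide bridges gives 2 heavy + 2 light chains:
chain_total = 2 * 55_000 + 2 * 22_000

print(papain_total)  # 150000, matching the intact molecule
print(chain_total)   # 154000, close to 150,000 given the rounded values
```

The small discrepancy between 154,000 and 150,000 simply reflects the rounding of the individual chain weights.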

Antibody preparations of the IgG class are heterogeneous, comprising a multitude of different L chains and a range of H chains. Patients with myeloma, however, have a homogeneous IgG, IgM, or IgA
immunoglobulin in their blood serum: a given tumour generates only one class of protein, never more than one. The Bence-Jones protein, discovered in the urine of myeloma
patients, is indistinguishable from the L chains of the patient's myeloma protein.

Examination of Bence-Jones proteins has shown that the L chains of humans and other mammals fall into two distinct forms, kappa (κ) and lambda (λ), each
containing roughly 220 amino acids. The N-terminal portions of κ- and λ-chains are variable, differing in each Bence-Jones protein, whereas the C-terminal portions of the L
chains have a constant amino acid sequence of either the κ-type or the λ-type.

The transmission of certain amino acid sequences in the L and H chains occurs between generations. The presence of numerous allotypes indicates that antibodies, despite being generated in response to a
single antigen, consist of a combination of various allotypes. The presence of multiple classes of antibodies, each with distinct allotypes, and the ability of the variable sections of antibodies to adapt to
different areas of an antigen molecule leads to a diverse array of antibody molecules, even when just a single antigen is introduced.

Enzymes manage the vast array of intricate metabolic events occurring in animals, plants, and microbes. These catalytic proteins are highly efficient and specific: each accelerates a
particular chemical reaction of a particular substance, and they outperform artificial catalysts. Every cell harbours enzymes, which vary in quantity and composition
with the cell type; an average mammalian cell, roughly one-billionth (10⁻⁹) the size of a drop of water, typically contains around 3,000 enzymes.

Enzymes were discovered during the 19th century by scientists investigating the phenomenon of fermentation, and their role as catalysts was elucidated soon afterward. Significant
advances came even before 1850: the enzyme amylase was extracted from malt in 1833, and pepsin was isolated from the stomach wall of animals in 1836. Enzymes were long
referred to as ferments, a term derived from the Latin word for yeast.

Certain enzymes facilitate the decomposition of complex nutritional molecules, such as proteins, lipids, and carbohydrates, into smaller molecules during the digestion of food in the stomach and intestines of
animals. Additional enzymes facilitate the passage of smaller, fragmented molecules across the intestinal barrier and into the circulatory system. Some enzymes also facilitate the synthesis of intricate
macromolecules from basic, smaller ones to generate essential components of cells.

Each enzyme is capable of catalyzing only a single specific chemical reaction, and the substances upon which the enzyme exerts its catalytic action are called substrates. Enzymes function inside highly
organized metabolic systems known as pathways, where the product of one step in a metabolic pathway becomes the substrate for the next step in the pathway.

Enzymes play a crucial part in metabolic processes and can be visually represented using diagrams. The chemical substance denoted as A undergoes a series of enzymatic reactions to produce product E.
Throughout this process, intermediate compounds denoted as B, C, and D are sequentially created. They serve as substrates for enzymes denoted by 2, 3, and 4. Compound A can undergo a different set
of reactions, some of which overlap with the pathway for the production of E, resulting in the development of products G and H.

The letters correspond to chemical substances, whereas the numbers indicate enzymes that facilitate certain processes. The relative heights correspond to the thermodynamic energy levels of the
compounds (e.g., compound A has a higher energy content than B, B has a higher energy content than C). Compounds A, B, and others exhibit a sluggish rate of change when a catalyst is not present, but
undergo rapid transformation when catalysts 1, 2, 3, and others are introduced.

To elucidate the regulatory function of enzymes in metabolic pathways, one can employ a straightforward analogy: comparing the molecules, depicted as letters in the picture, to a sequence of
interconnected water reservoirs on a slope. Similarly, the enzymes denoted by the numbers are comparable to the valves of the reservoir system. The valves regulate the movement of water within the
reservoir. Specifically, when valves 1, 2, 3, and 4 are open, water from A can only flow to E. However, when valves 1, 2, 5, and 6 are open, water from A can flow to G. If enzymes 1, 2, 3, and 4 are
functioning, product E is produced in the metabolic pathway. Conversely, if enzymes 1, 2, 5, and 6 are active, product G is generated. The enzymatic activity, or lack thereof, in the pathway ultimately
dictates the outcome of chemical A. Specifically, it can either remain unaltered or undergo conversion into one or more products. Furthermore, the relative activity of enzymes 3 and 4 in comparison to
enzymes 5 and 6 plays a crucial role in determining the amount of product E generated in relation to product G.
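
The valve analogy above can be sketched as a tiny graph-reachability computation. The reaction map below (compounds A–H, enzymes 1–7) is inferred from the text's description, not taken from any real metabolic database.

```python
# Minimal sketch of the pathway-as-reservoir analogy described above.
# Each enzyme (valve) converts one compound (reservoir) into another.
REACTIONS = {
    1: ("A", "B"),
    2: ("B", "C"),
    3: ("C", "D"),
    4: ("D", "E"),
    5: ("C", "F"),
    6: ("F", "G"),
    7: ("F", "H"),
}

def reachable(start, active_enzymes):
    """Return every compound obtainable from `start` when only the
    listed enzymes (open valves) are active."""
    seen = {start}
    frontier = [start]
    while frontier:
        compound = frontier.pop()
        for enzyme, (substrate, product) in REACTIONS.items():
            if enzyme in active_enzymes and substrate == compound and product not in seen:
                seen.add(product)
                frontier.append(product)
    return seen

# With enzymes 1-4 active, A is converted through B, C, and D to E:
print(sorted(reachable("A", {1, 2, 3, 4})))   # ['A', 'B', 'C', 'D', 'E']
# With enzymes 1, 2, 5, and 6 active, the branch to G is taken instead:
print(sorted(reachable("A", {1, 2, 5, 6})))   # ['A', 'B', 'C', 'F', 'G']
```

As in the text, which set of enzymes is active determines whether compound A ends up as product E or product G.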

Like the flow of water, enzymatic processes obey thermodynamic principles: just as water cannot flow uphill, enzymes cannot convert a compound such as F into a higher-energy compound such as H without an
input of energy. They can, however, harness energy from coupled energy-conserving reactions. In cells, such energy is conserved in the form of adenosine triphosphate (ATP) during the
enzyme-catalyzed oxidation of carbohydrates to carbon dioxide and water.

Regulation of enzyme activity and synthesis is necessary because the requirements of cells and organisms vary. Enzymes involved in muscle action, for example, must be activated and inhibited at specific times, and an enzyme needed in one cell type, such as a liver cell, may be unnecessary in another, such as a bacterium. Enzyme synthesis and function are regulated by genetic mechanisms, organic secretions, hormones, nerve impulses, and small molecules.

Disease may arise when an enzyme malfunctions, since enzymes are essential for conversions such as that of initial chemical A into final product E. If enzyme activity is hindered, a specific step in the biochemical pathway is inhibited and product E is not formed; when product E is essential for a critical biological process, disease results.

Hereditary diseases and disorders in humans often arise from the lack of a particular enzyme. Albinism, for example, is caused by a hereditary deficiency in the synthesis of the enzyme tyrosinase, which plays a crucial role in the production of hair and eye pigment.

Enzymes identified with hereditary diseases

disease name         defective enzyme
albinism             tyrosinase
phenylketonuria      phenylalanine hydroxylase
fructosuria          fructokinase
methemoglobinemia    methemoglobin reductase
galactosemia         galactose-1-phosphate uridyl transferase

Other functions

Enzymes are essential in medicine, industry, and food production. In medicine they facilitate wound healing, help diagnose diseases, induce remission in leukaemia, prevent tooth decay, and act as anticoagulants in the treatment of thrombosis; enzyme therapy can also compensate for enzyme shortages and abnormalities caused by illness.

In industrial procedures, enzymes are used to produce specific chemical compounds and to treat leather, and they are crucial in analytical techniques for detecting minute amounts of certain chemicals. Enzymes also play a vital role in food-related sectors such as cheese production, beer brewing, wine ageing, and bread baking, as well as in laundering garments.

Enzyme naming began in 1833 with the designation diastase. Émile Duclaux later proposed a convention of naming an enzyme by adding the suffix -ase to a root indicating the nature of its substrate. Although many enzyme names no longer reflect their substrates in this way, most still end with the suffix -ase.

A systematic categorization of enzymes should rely on a shared characteristic or attribute that exhibits enough variation to serve as a meaningful distinguishing factor. Three aspects of enzymes can be used
to classify them: the precise chemical composition of the enzyme, the chemical composition of the substrate, and the type of reaction that the enzyme catalyses.

There are six primary categories of enzymatic reactions in the systematic nomenclature. Oxidoreductases catalyse hydrogen-transfer processes; hydrolases catalyse the addition of water at a specific location in a molecule; transferases catalyse the transfer of groups other than hydrogen; lyases catalyse the breaking or formation of chemical bonds; isomerases convert molecules into their isomeric forms; and ligases join two molecules together using energy from ATP.

Oxidoreductases and transferases constitute around 50% of the currently identified 1,000 enzymes. A concise compilation of enzymes, including their common names, official names, and respective
functions in biological processes, is provided.

Classification of some enzymes

*Based on recommendations (1964) of the International Union of Biochemistry.


**The numbering system is as follows: the first number places the enzyme in one of six general groups—1, oxidoreductases; 2, transferases; 3, hydrolases; 4, lyases; 5, isomerases; and 6, ligases. The second number places the enzyme in a subclass based on substrate type or reaction type; e.g., the enzyme may act on molecules with −CHOH groups. The third number places the enzyme in a subsubclass, which specifies the reaction type more fully; e.g., NAD coenzyme required. The fourth number is the serial number of the enzyme in its subsubclass.
***NAD and NADH represent the oxidized and reduced forms of nicotinamide adenine dinucleotide (NAD), respectively; ATP and ADP represent adenosine triphosphate and adenosine diphosphate,
respectively.

code number**   systematic name*                    trivial name            reaction catalyzed***                                   biological role
1.1.1.1         alcohol: NAD oxidoreductase         alcohol dehydrogenase   alcohol + NAD → acetaldehyde + NADH                     alcoholic fermentation
1.1.1.27        L-lactate: NAD oxidoreductase       lactic dehydrogenase    lactate + NAD → pyruvate + NADH                         carbohydrate metabolism
2.7.1.40        ATP: pyruvate phosphotransferase    pyruvate kinase         pyruvic acid + ATP → phosphoenolpyruvic acid + ADP      carbohydrate metabolism
3.1.1.7         acetylcholine: acetylhydrolase      acetylcholinesterase    acetylcholine + H2O → acetate + choline                 nerve-impulse conduction
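Because the first field of an Enzyme Commission (EC) code places an enzyme in one of the six general groups listed above, it can be decoded mechanically. A minimal sketch (the function name is ours, not part of any official library):

```python
# Decode the first field of an EC number into its general class, per the six
# groups named in the text. Illustrative helper, not an official API.
EC_CLASSES = {
    1: "oxidoreductase",
    2: "transferase",
    3: "hydrolase",
    4: "lyase",
    5: "isomerase",
    6: "ligase",
}

def ec_class(ec_number: str) -> str:
    """Return the general class for an EC code such as '1.1.1.1'."""
    first_field = int(ec_number.split(".")[0])
    return EC_CLASSES[first_field]

print(ec_class("1.1.1.1"))   # oxidoreductase (alcohol dehydrogenase)
print(ec_class("2.7.1.40"))  # transferase (pyruvate kinase)
print(ec_class("3.1.1.7"))   # hydrolase (acetylcholinesterase)
```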

Chemical nature

Enzymes are primarily proteins and were initially believed to consist of a single chain of amino acids. The protein urease was crystallized and identified as an enzyme in 1926, and the digestive enzymes pepsin, trypsin, and chymotrypsin were later shown to be proteins as well. Since then, numerous enzymes have been purified and examined using chemical techniques, leading to a better understanding of protein chemistry. Many enzymes are now known to be composed of multiple chains, called subunits: some have two, four, or six subunits, while others may have 12 to 60. Most proteins found in physiologically active tissues such as the kidney and liver are enzymes, and multiple enzymes are present in each tissue to accommodate the diverse reactions of metabolism.

Cofactors



Functions of B-vitamin coenzymes in metabolism.

Many enzymes consist of a protein component combined with a cofactor. A holoenzyme is the fully functional enzyme, while an apoenzyme is the protein component alone, which is inactive once the cofactor is removed. Cofactors can be metals, prosthetic groups, or coenzymes that act as specific substrates; they enhance the catalytic activity of the enzyme or participate directly in the enzymatic reaction.

A coenzyme acts as a substrate in specific enzymatic reactions and reacts in the precise stoichiometric proportions required by the reaction. Examples of coenzymes include nicotinamide adenine dinucleotide (NAD) and adenosine triphosphate (ATP), which act as a hydrogen acceptor and a chemical-group donor, respectively. The catalytic nature of a coenzyme becomes evident only when it links the functions of two enzymes.

Catalysis expedites the attainment of equilibrium: an enzyme increases the rate at which a substrate is converted into a product but does not alter the equilibrium point that is ultimately reached. Enzymes greatly increase the likelihood of reaction by converting particular substrate molecules into more reactive states through the formation of intermediate compounds.

The active site is the specific area of contact between the substrate and the enzyme. Enzymes are macromolecules with molecular weights ranging from a few thousand to several million daltons (a dalton is approximately the weight of a hydrogen atom). Because of this disparity in size, only a small portion of the enzyme, the active site, is in direct contact with the substrate.

In summary, enzymes play a crucial role in metabolic processes, facilitating the transfer of chemical groups such as hydrogen atoms and phosphate groups.


The lock-and-key fit of a substrate to the active site depends on the enzyme's structure: the arrangement of amino acids determines the shape of the enzyme, which in turn determines its specificity. The substrate can be attracted to the enzyme's surface by physical or chemical factors, such as electrostatic bonds between groups with opposite charges or hydrophobic bonding between hydrocarbon portions of the enzyme and substrate.

Changes in the configuration of the amino acids near the active site affect the enzyme's functionality, as these amino acids are responsible for the proper alignment and attraction of the substrate to the enzyme's surface. Excessive bulkiness in inappropriate regions impedes interaction with the enzyme, while molecules with large groups positioned so that they do not hinder binding or activity can still function as substrates.

The "key-lock" hypothesis, introduced by the German chemist Emil Fischer in 1894, elucidates the match between substrate and enzyme, namely their specificity. Most enzymes have an active site that includes a cleft or depression into which the substrate fits. Enzyme specificity is essential for keeping the numerous metabolic pathways, each involving multiple enzymes, segregated from one another.

Not all enzymes exhibit a high degree of specificity, as enzymes involved in digestion, such as pepsin and chymotrypsin, can break down a wide range of proteins found in food. In contrast, thrombin, which
selectively interacts with the protein fibrinogen, plays a crucial role in blood clotting, requiring it to exclusively target one specific component to ensure the system operates correctly.

Initially, enzymes were believed to have a high degree of specificity, meaning they would only react with a single chemical. However, artificial substrates can be created in the laboratory to mimic the natural
substrate. Enzymes discovered to date exhibit specificity towards the specific chemical reactions they catalyze, such as oxidoreductases and hydrolases.

The mechanism of enzymatic action



Mechanisms of enzymatic action (see text).

Enzymatic action begins when substrates are attracted to the enzyme's active site; products are then formed and separate from the enzyme surface. The combination of an enzyme with its substrates is called the enzyme-substrate complex: a ternary complex when two substrates and one enzyme are involved, a binary complex when there is one substrate and one enzyme. Substrates are drawn to the active site by noncovalent bonds, physical attractions such as electrostatic and hydrophobic forces rather than chemical bonds.

For example, two substrates (S1 and S2) attach to the enzyme's active site in step 1 and react in step 2 to produce products (P1 and P2). In step 3, the products disengage from the enzyme surface, liberating the enzyme. The enzyme itself is unaltered by the reaction and can react repeatedly with fresh substrate molecules at a rapid rate. Enzymatic mechanisms can be classified into two types: those that involve the formation of a covalent intermediate and those that do not.

A covalent intermediate is formed when a substrate, such as B―X, combines with a group N on the enzyme surface to create an enzyme-B intermediate molecule. This intermediate molecule then
undergoes a reaction with the second substrate, Y, resulting in the formation of the products B―Y and X. This mechanism is used by numerous enzymes to catalyze processes, such as acetylcholinesterase
and sucrose phosphorylase.

The formation of a covalent intermediate between enzyme and substrate in such double-displacement reactions is believed to enhance the reaction rate; although the enzyme changes temporarily during the process, it acts as a genuine catalyst. In a single-displacement reaction, by contrast, one substrate (Y) reacts directly with the second substrate (B―X): maltose phosphorylase, for example, acts directly on its substrates, maltose (B―X) and phosphate (Y), to form glucose (X) and glucosylphosphate (B―Y).
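Purely as a symbolic illustration of the double-displacement route, the two stages can be traced in code. The B-X and Y names follow the text's notation; the function and its staging are hypothetical, not chemistry software:

```python
# Symbolic sketch of the covalent-intermediate (double displacement) mechanism:
# B-X + enzyme -> enzyme-B + X, then enzyme-B + Y -> B-Y, freeing the enzyme.
def double_displacement(substrate1="B-X", substrate2="Y"):
    b, x = substrate1.split("-")
    intermediate = f"enzyme-{b}"         # covalent enzyme-B intermediate; X released
    products = [f"{b}-{substrate2}", x]  # intermediate reacts with Y to give B-Y + X
    return intermediate, products

intermediate, products = double_displacement()
print(intermediate)  # enzyme-B
print(products)      # ['B-Y', 'X']
```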

The rate of enzymatic reactions

The Michaelis-Menten hypothesis

Curves representing enzyme action (see text).

If the velocity of an enzymatic reaction is represented graphically as a function of the substrate concentration (S), the curve obtained in most cases is a hyperbola. The mathematical expression of this curve, v = VM(S)/(KM + S), was developed in 1912–13 by the biochemists Leonor Michaelis and Maud Leonora Menten. In the equation, VM is the maximal velocity of the reaction, and KM is called the Michaelis constant.

The shape of the curve is a logical consequence of the active-site concept; i.e., the curve flattens at the maximum velocity (VM), which occurs when all the active sites of the enzyme are filled with substrate. The fact that the velocity approaches a maximum at high substrate concentrations supports the assumption that an intermediate enzyme–substrate complex forms. At the point of half the maximum velocity, the substrate concentration in moles per litre (M) is equal to the Michaelis constant, which is a rough measure of the affinity of the substrate molecule for the surface of the enzyme. KM values usually vary from about 10^-8 to 10^-2 M, and VM from 10^5 to 10^9 molecules of product formed per molecule of enzyme per second. When expressed as moles of product formed per mole of enzyme per minute, VM is referred to as the turnover number. The binding of molecules that inhibit or activate the enzyme usually gives rise to curves of similar types.
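The Michaelis-Menten relationship v = VM(S)/(KM + S) can be checked numerically. The VM and KM values below are illustrative assumptions, not measurements from the text:

```python
# Minimal numerical sketch of Michaelis-Menten kinetics, v = VM * S / (KM + S).
def velocity(s, vm, km):
    """Reaction velocity at substrate concentration s (mol/L)."""
    return vm * s / (km + s)

VM = 100.0  # maximal velocity, arbitrary units (assumed for illustration)
KM = 1e-4   # Michaelis constant in mol/L (assumed for illustration)

# At S = KM the velocity is half of VM, as the text states.
print(round(velocity(KM, VM, KM), 6))        # 50.0
# At high substrate concentration the curve flattens toward VM.
print(round(velocity(100 * KM, VM, KM), 1))  # 99.0
```

The flattening at high S is the saturation of the active sites described above.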

Enzymes are catalysts of far higher efficiency than human-made catalysts acting under identical conditions: a small number of enzyme molecules, in the limited space available to them, can generate 10^12 molecules of oxygen per second. The catalytic groups at the active site of an enzyme are some 10^6 to 10^9 times more efficient than similar groups in nonenzymatic reactions.

The exact mechanism behind the exceptional efficiency of enzymes is still unclear, but it is partly due to the accurate arrangement of substrates and catalytic groups at the active site, which increases the
likelihood of collision between reacting atoms. Additionally, the conditions at the active site may be conducive to the reaction, allowing acidic and basic groups to work more efficiently, substrate molecules to
be subjected to strain, or optimal substrate orientation on the enzyme surface.

Enzyme inhibition occurs when molecules that closely resemble an enzyme's substrate attach to the active site but fail to undergo a chemical reaction, thereby blocking the binding of the genuine substrate. Such molecules, known as competitive inhibitors, are extensively used in chemotherapy to selectively eradicate pathogens while sparing their hosts.
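In the classical kinetic model of competitive inhibition, the inhibitor raises the apparent Michaelis constant by the factor (1 + I/Ki) while the maximal velocity is unchanged. The sketch below uses illustrative constants, not data from the text:

```python
# Sketch of the classical competitive-inhibition model: the apparent KM is
# scaled by (1 + I/Ki); VM is unaffected. All constants are assumptions.
def velocity_competitive(s, vm, km, inhibitor, ki):
    """Michaelis-Menten velocity with a competitive inhibitor present."""
    km_apparent = km * (1 + inhibitor / ki)
    return vm * s / (km_apparent + s)

VM, KM, KI = 100.0, 1e-4, 1e-5

# Without inhibitor, S = KM gives half-maximal velocity ...
print(round(velocity_competitive(1e-4, VM, KM, inhibitor=0.0, ki=KI), 6))   # 50.0
# ... with inhibitor present, the same S gives a lower velocity, because the
# genuine substrate must compete with the inhibitor for the active site.
print(round(velocity_competitive(1e-4, VM, KM, inhibitor=1e-5, ki=KI), 1))  # 33.3
```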

Enzyme activity can be influenced by factors such as inhibitors, temperature, pH, and allosteric mechanisms. Some inhibitors, such as nerve gases, hinder enzyme activity by combining with functional groups at the active site of enzymes involved in nerve and muscle function; others alter amino acids outside the active site, likewise decreasing enzymatic activity.

Inherited disorders often result from mutations in an enzyme's amino acid sequence, rendering it dysfunctional. These disorders can lead to fatalities or partial malfunctioning of an enzyme, resulting in
severe illness but still allowing for survival.

Temperature plays a crucial role in enzyme efficiency: each enzyme works best within a specific temperature range corresponding to physiological conditions. Moderate increases in temperature raise reaction velocity, but elevated temperatures progressively denature enzymes, and excessive temperature denatures them entirely, resulting in the termination of life.

The pH level of a solution also impacts enzymes, with the optimal acidity being a characteristic attribute that can be altered by temperature and other components of the enzyme solution. Most biological
systems have strong buffering mechanisms to maintain a stable acidity level, with most organisms having a pH level of approximately 7, indicating neutrality.

Flexibility of enzymes, and the regulation of their activity through allosteric mechanisms, is another aspect of enzyme-catalyzed reactions. The induced-fit theory, which builds upon the key-lock hypothesis, proposes that the substrate does not simply fit into a rigid active site but induces a change in the enzyme's shape as it binds. The analogy is the fit of a hand in a glove: the hand (the substrate) causes a modification in the shape of the glove (the enzyme).

In summary, enzymes play a crucial role in various biological processes, and their flexibility and regulation through allosteric mechanisms are essential for their overall function.



Induced-fit binding of a substrate to an enzyme surface and allosteric effects.

Enzymatic activity is regulated by various mechanisms, including noncompetitive inhibition, in which an inhibitor molecule does not prevent the binding of the substrate but nevertheless impedes the reaction. Noncompetitive inhibitors can act at allosteric sites, which are distinct from active sites, and can either activate or inhibit enzymatic activity by influencing the enzyme's conformation.

Allosteric control allows enzymatic activity to be regulated economically. The biosynthesis of the amino acid histidine, for example, involves ten enzymatic reactions that are halted when a cell has accumulated enough histidine, conserving resources. In this mechanism, called feedback inhibition, the action of an enzyme is blocked by a product formed several steps downstream of that enzyme; it is common in metabolic routes across all forms of life.

Allosteric control can also be achieved by activators. When the body requires energy, the hormone adrenaline (epinephrine) is released and allosterically activates the enzyme adenyl cyclase, which catalyses the conversion of adenosine triphosphate (ATP) into cyclic adenosine monophosphate (cyclic AMP). Cyclic AMP in turn acts as an allosteric activator of enzymes that speed the metabolism of carbohydrates to generate energy.

The use of both allosteric activation and inhibition enables the generation of energy or materials at the required times and ceases production when the supply is sufficient. The flexibility of enzymes is
essential for regulating enzymatic activity and ensuring the proper arrangement of catalytic groups.

Allosteric control is a crucial mechanism by which living organisms regulate essential products. Many enzymes, however, are unnecessary in particular cells, and producing them would be wasteful. Repressors, protein molecules that attach to DNA, prevent the synthesis of unnecessary enzymes. When a metabolite that requires a given enzyme is introduced into a cell, synthesis of that enzyme is triggered; the enzyme is said to be induced.

Distinct cell types in multicellular animals possess unique enzymes, despite sharing identical DNA content. These enzymes are tailored to meet the requirements of individual cells and differ not only across
different cell types but also between different species. Cooperativity, a distinguishing feature of allosteric enzymes, is demonstrated by a sigmoidal curve, making them more responsive to regulatory
mechanisms.

Haemoglobin, although not an enzyme, exhibits cooperative behavior, making it the first known example of such a phenomenon. The process of oxygen uptake in the lungs and its distribution in body tissues
is highly effective due to positive cooperativity exhibited by haemoglobin subunits.

Living organisms also exhibit negative cooperativity, where the binding of one molecule hinders the subsequent binding of another molecule, reducing the enzyme's susceptibility to changes in metabolite
concentrations. This is particularly relevant for enzymes that need to maintain a consistent level of activity inside the cell.
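The sigmoidal curves characteristic of cooperative enzymes are commonly modeled with the Hill equation, v = VM·S^n/(K^n + S^n), where the exponent n expresses the degree of cooperativity. This is a standard textbook model rather than anything stated in the text, and the parameter values below are illustrative:

```python
# Sketch of cooperativity via the Hill equation: v = VM * S**n / (K**n + S**n).
# n > 1 models positive cooperativity (e.g., oxygen binding by haemoglobin),
# n < 1 negative cooperativity, and n = 1 recovers the ordinary hyperbola.
def hill_velocity(s, vm, k, n):
    return vm * s**n / (k**n + s**n)

VM, K = 100.0, 1.0
for s in (0.5, 1.0, 2.0):
    hyperbolic = hill_velocity(s, VM, K, n=1)
    cooperative = hill_velocity(s, VM, K, n=4)
    print(f"S={s}: n=1 -> {hyperbolic:.1f}, n=4 -> {cooperative:.1f}")
# Positive cooperativity steepens the response around S = K: lower below it,
# higher above it, which is what makes the enzyme more responsive to regulation.
```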

Certain enzymes form aggregates composed of many enzyme units, such as the pyruvate dehydrogenase system, which has five distinct enzymes with a combined molecular weight of 4,000,000. Cellular
enzymes can be arranged by creating intricate assemblies, adhering to a cell wall, or being sequestered within specialized compartments bounded by membranes. Clustering enzymes within a specific
pathway enhances their functionality in a manner similar to an industrial assembly line.

Daniel E. Koshland and The Editors of Encyclopaedia Britannica

interleukin

Written and fact-checked by Encyclopaedia Britannica

Interleukins (ILs) are naturally occurring proteins that facilitate communication between cells and govern cell proliferation, specialization, and movement. They play a crucial role in promoting immune responses, particularly in triggering inflammation. Interleukins are a subgroup of cytokines, produced transiently in response to a stimulus such as an infectious pathogen. An interleukin migrates to its target cell and attaches to it via a receptor molecule on the cell's outer membrane, initiating a series of signals that modify the cell's behavior.

Interleukins were first identified in the 1970s, and researchers initially believed they were produced chiefly by leukocytes to affect other leukocytes. It is now understood that interleukins are also generated by, and interact with, various cells unrelated to immunity, contributing to many physiological processes. Fifteen distinct interleukins have been classified, numbered IL-1 through IL-15.

IL-1 and IL-2 play a key role in activating T and B lymphocytes, which are essential white blood cells involved in the acquired immune response. IL-1, IL-6, and IL-4 mediate inflammation, stimulate B
lymphocytes to produce more antibodies, and promote the production of cytotoxic T cells and natural killer cells. The specific infectious agent dictates the set of interleukins activated, which affects the cells
that respond to the infection and affects some clinical symptoms of the disease.

phenol

Written by Leroy G. Wade, Encyclopaedia Britannica

Phenol is an organic molecule that belongs to a group of substances characterised by the presence of a hydroxyl (―OH) group bonded to a carbon atom within an aromatic ring. In addition to being the
general term for the entire family, phenol specifically refers to its simplest component, monohydroxybenzene (C6H5OH), which is also known as benzenol or carbolic acid.

Phenols resemble alcohols but form stronger hydrogen bonds; consequently, they are more soluble in water than alcohols and have higher boiling points. Phenols exist as colourless liquids or white solids at room temperature and can be extremely poisonous and corrosive.


Phenols are essential compounds in various industries, including domestic products, industrial synthesis, and medical applications. They are used as disinfectants in household cleaning products and oral
rinses, and as an antiseptic in surgical environments. In 1865, British surgeon Joseph Lister used phenol to reduce mortality rates from amputations from 45 percent to 15 percent. However, phenol is highly poisonous
and can cause severe burns without pain. N-hexylresorcinol, a lower-toxicity phenol, has replaced phenol in cough drops and other antiseptic uses. Butylated hydroxytoluene (BHT) is used as an antioxidant
in food products. Phenols are also used as precursors for polymers, explosives, and pharmaceuticals. Hydroquinone, a common phenol, is used in photographic development and in dye production.
Additionally, phenols, particularly cresols, are used in wood preservatives like creosote.

Natural sources of phenols



Poison ivy (Toxicodendron radicans) is a natural source of the phenol urushiol—an irritant that causes severe inflammation of the skin.

Phenols are abundant in nature and can be found in various substances. Some examples include tyrosine, which is a standard amino acid present in most proteins; epinephrine (adrenaline), a hormone that
stimulates the body and is produced by the adrenal medulla; serotonin, a neurotransmitter in the brain; and urushiol, an irritant that is secreted by poison ivy to deter animals from consuming its leaves.
Several intricate phenols, which serve as flavourings and fragrances, are derived from the essential oils of plants. Vanillin, the primary aroma compound found in vanilla, is extracted from vanilla beans.
Similarly, methyl salicylate, known for its distinct minty taste and odour, is obtained from wintergreen. Additional phenolic compounds derived from plants include thymol, which is extracted from thyme, and eugenol, which is extracted from cloves.

Phenol, the cresols (methylphenols), and other simple alkylated phenols can be obtained from the distillation of coal tar or crude petroleum.

1. Growth and Maintenance




Protein is essential for the growth and maintenance of tissues, and the body's proteins are in a constant state of turnover. During periods of illness, pregnancy, and lactation, the body's needs can exceed its capacity to produce protein, increasing protein requirements. Elevated requirements are also common in individuals recovering from injury or surgery, in elderly individuals, and in athletes.

Enzymes, proteins that facilitate chemical reactions in living organisms, play a crucial role in various biological events both within and outside of cells. They can bind with substrates, which are chemicals
present within the cell, facilitating the catalysis of crucial metabolic reactions. Some enzymes require the presence of other molecules, such as vitamins or minerals, for a reaction to occur. Enzyme-
dependent bodily activities include gastrointestinal processing of food, energy generation, hemostasis, and skeletal muscle contraction. Dysfunction or deficiency of these enzymes can lead to illness.

Hormones, proteins that function as chemical messengers, facilitate communication among cells, tissues, and organs in the body. They are produced and released by endocrine tissues or glands and carried
through the bloodstream to certain tissues or organs. They are classified into three primary classifications: proteins and peptides, steroids, and amino acids. The majority of the body's hormones are
composed of proteins and polypeptides.

Fibrous proteins, such as keratin, collagen, and elastin, contribute to the formation of the connective framework in certain structures within the body. Keratin is an inherent structural protein present in the
integumentary system, including the skin, hair, and nails. Collagen, the most prevalent protein in the human body, serves as the foundational protein for bones, tendons, ligaments, and skin. Elastin is several hundred times more flexible than collagen.

In summary, proteins are essential for the growth and maintenance of tissues, and their levels depend on overall health and physical activity.

Protein plays a crucial role in maintaining the optimal pH level in the body, regulating fluid equilibrium, and supporting the immune system. The pH scale, which runs from 0 to 14, quantifies the equilibrium between acids and bases. Common substances spanning the scale include gastric acid, tomato juice, black coffee, human blood, milk of magnesia (a suspension of magnesium hydroxide), and soapy water.
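As a quick numeric aside, pH is the negative base-10 logarithm of the hydrogen-ion concentration. The concentrations below are illustrative round numbers, not values from the text:

```python
# Small sketch relating hydrogen-ion concentration (mol/L) to pH = -log10[H+].
import math

def ph(h_ion_molar):
    return -math.log10(h_ion_molar)

print(round(ph(1e-7), 2))   # 7.0 -> neutral, close to the pH of human blood
print(round(ph(1e-2), 2))   # 2.0 -> strongly acidic, like gastric acid
print(round(ph(1e-12), 2))  # 12.0 -> strongly basic, like soapy water
```

Each tenfold drop in hydrogen-ion concentration raises the pH by one unit, which is why strong buffering is needed to hold blood near pH 7.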

Proteins such as albumin and globulin also regulate the body's fluid balance by attracting and holding water. Insufficient protein consumption leads to a gradual decline in the levels of these proteins, causing fluid to accumulate in the interstitial spaces and producing swelling, or oedema. Kwashiorkor, a severe form of protein deficiency, occurs when an individual consumes an adequate amount of calories but insufficient protein.

Proteins also form immunoglobulins, or antibodies, which are essential in combating infection. Antibodies are proteins in the bloodstream that protect the body against harmful intruders such as bacteria and viruses. Once the body has generated antibodies against a specific pathogen, it can respond rapidly to that disease-causing pathogen in the future.

Transport proteins facilitate the movement of chemicals inside the bloodstream, including into cells, out of cells, or within cells. They facilitate the transportation of vitamins, minerals, blood sugar, cholesterol,
and oxygen. For example, haemoglobin transports oxygen from the lungs to the tissues of the body, while glucose transporters (GLUT) facilitate glucose movement into cells, and lipoproteins transport
cholesterol and other lipids in the bloodstream. Protein transporters exhibit specificity, as they exclusively interact with particular molecules.

Some proteins transport nutrients throughout the body, while others store them. Ferritin is an iron-storing protein, and casein is the primary protein in milk that promotes growth in infants.

Protein can provide energy to the body, with a caloric value of four calories per gram, the same as carbohydrate. However, it is not the preferred energy source because of its essential role in so many other bodily functions. Carbohydrates and lipids are better suited to providing energy: the body stores them as fuel reserves and metabolizes them for energy more efficiently than it does protein.
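The caloric arithmetic above can be made concrete. The sketch below uses the four calories per gram for protein and carbohydrate stated in the text, plus the widely cited nine calories per gram for fat (a value assumed here, not given above):

```python
# Calories per gram of each macronutrient (the fat value is an
# assumption, not stated in the text above).
CAL_PER_GRAM = {"protein": 4, "carbohydrate": 4, "fat": 9}

def total_calories(grams_by_macro):
    """Sum each macronutrient's grams times its caloric density."""
    return sum(CAL_PER_GRAM[m] * g for m, g in grams_by_macro.items())

# A hypothetical meal: 20 g protein, 50 g carbohydrate, 10 g fat.
print(total_calories({"protein": 20, "carbohydrate": 50, "fat": 10}))  # 370
```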

Under normal circumstances, protein provides only a minimal amount of the body's energy. During fasting, however, the body breaks down skeletal muscle to use amino acids as an energy source. Once glycogen reserves are depleted, the body draws on amino acids from skeletal muscle breakdown, which can also occur during strenuous physical activity or insufficient overall calorie intake.

In conclusion, protein serves numerous functions throughout the body, including restoring and building bodily tissues, facilitating metabolic processes, and regulating physiological functions. It serves as the
structural underpinning for the body, regulates pH levels and fluid equilibrium, and supports the immune system.


Written by Gavin Van De Walle, MS, RD; edited by Elizabeth Donovan. Updated February 15, 2023.

Rev 001 Session-5 Question Booklet Page 57 of 334




https://www.healthline.com/nutrition/functions-of-protein#Is-too-much-protein-harmful?

Proteins are large biomolecules and macromolecules that consist of one or more long chains of amino acid residues. They perform a vast array of functions within organisms, including catalyzing metabolic
reactions, DNA replication, responding to stimuli, providing structure to cells and organisms, and transporting molecules from one location to another. Proteins differ from one another primarily in their
sequence of amino acids, which is dictated by the nucleotide sequence of their genes.

A linear chain of amino acid residues is called a polypeptide, and a protein contains at least one long polypeptide. Short polypeptides, containing less than 20-30 residues, are rarely considered proteins and
are commonly called peptides. The sequence of amino acid residues in a protein is defined by the sequence of a gene, which is encoded in the genetic code.

Proteins can be chemically modified by post-translational modification, altering their physical and chemical properties, folding, stability, activity, and ultimately, the function of the proteins. Some proteins
have non-peptide groups attached, which can be called prosthetic groups or cofactors. Proteins can also work together to achieve a particular function and often associate to form stable protein complexes.

Once formed, proteins exist only for a certain period and are then degraded and recycled by the cell's machinery through the process of protein turnover. A protein's lifespan is measured in terms of its half-life and covers a wide range. Abnormal or misfolded proteins are degraded more rapidly, either because they are targeted for destruction or because they are unstable.



Proteins are essential parts of organisms and participate in virtually every process within cells. They are essential enzymes that catalyze biochemical reactions, have structural or mechanical functions, and
are vital in cell signaling, immune responses, cell adhesion, and the cell cycle. Proteins can be purified from other cellular components using various techniques, with genetic engineering making it possible
to facilitate purification.

Proteins were first recognized as a distinct class of biological molecules in the 18th century by Antoine Fourcroy and others, distinguished by their ability to coagulate or flocculate under heat or acid
treatments. They were first described by Gerardus Johannes Mulder and named by Jöns Jacob Berzelius in 1838. Early nutritional scientists believed that protein was the most important nutrient for
maintaining the body's structure, as it was generally believed that "flesh makes flesh."

Karl Heinrich Ritthausen extended the known protein forms with the identification of glutamic acid. Thomas Burr Osborne compiled a detailed review of vegetable proteins at the Connecticut Agricultural Experiment Station, and William Cumming Rose continued this nutritional work, identifying the essential amino acids. The understanding of proteins as polypeptides came through the work of Franz Hofmeister and Hermann Emil Fischer in 1902.

The central role of proteins as enzymes in living organisms was not fully appreciated until 1926 when James B. Sumner showed that the enzyme urease was actually a protein. Due to the difficulty in
purifying proteins in large quantities, early studies focused on proteins that could be purified in large quantities, such as those of blood, egg whites, various toxins, and digestive/metabolic enzymes obtained
from slaughterhouses.

Linus Pauling is credited with the successful prediction of regular protein secondary structures based on hydrogen bonding, while Walter Kauzmann contributed to the understanding of protein folding and structure mediated by hydrophobic interactions. Frederick Sanger correctly determined the amino acid sequence of insulin, the first protein to be sequenced, conclusively demonstrating that proteins consist of linear polymers of amino acids rather than branched chains, colloids, or cyclols.

The development of X-ray crystallography made it possible to solve protein structures, with hemoglobin and myoglobin being the first structures determined. Increasing computing power later supported the solution of complex structures, such as that of RNA polymerase, determined in 1999 using high-intensity X-rays from synchrotrons.

Cryo-electron microscopy (cryo-EM) of large macromolecular assemblies has been developed since then, using frozen protein samples and electron beams to analyze larger structures. Computational
protein structure prediction of small protein domains has also helped researchers approach atomic-level resolution of protein structures.

The number of proteins encoded in a genome roughly corresponds to the number of genes, with viruses typically encoding a few hundred proteins, archaea and bacteria a few hundred to a few thousand,
and eukaryotes coding a few thousand up to tens of thousands. Proteins are primarily classified by sequence and structure, but other classifications are commonly used, such as the EC number system for
enzymes and gene ontology for both genes and proteins.

Sequence similarity is used to classify proteins in terms of evolutionary and functional similarity, using either whole proteins or protein domains, especially in multi-domain proteins. Protein domains allow
protein classification by a combination of sequence, structure, and function, and can be combined in many different ways. In an early study of 170,000 proteins, about two-thirds were assigned at least one
domain, with larger proteins containing more domains.

Most proteins consist of linear polymers built from series of up to 20 different L-α-amino acids. All proteinogenic amino acids possess common structural features, including an α-carbon to which an amino
group, a carboxyl group, and a variable side chain are bonded. The side chains of standard amino acids have a great variety of chemical structures and properties, and the combined effect of all of the amino
acid side chains in a protein determines its three-dimensional structure and chemical reactivity.

The amino acids in a polypeptide chain are linked by peptide bonds, with the linked series of carbon, nitrogen, and oxygen atoms known as the main chain or protein backbone. The peptide bond has two
resonance forms that contribute some double-bond character and inhibit rotation around its axis, so that the alpha carbons are roughly coplanar. The end with a free amino group is known as the N-terminus
or amino terminus, while the end of the protein with a free carboxyl group is known as the C-terminus or carboxy terminus.

Proteins can interact with various molecules, including other proteins, lipids, carbohydrates, and DNA. The abundance of proteins in cells varies, with average-sized bacteria containing about 2 million
proteins per cell, while smaller bacteria like Mycoplasma or spirochetes contain fewer molecules. Eukaryotic cells, on the other hand, are larger and contain much more protein, with yeast cells estimated to
contain about 50 million proteins and human cells on the order of 1 to 3 billion.

Proteins are synthesized from amino acids using information encoded in genes. Each protein has its own unique amino acid sequence specified by the nucleotide sequence of the gene encoding this protein.
The genetic code is a set of three-nucleotide sets called codons, with each three-nucleotide combination designating an amino acid. The total number of possible codons is 64, resulting in some redundancy
in the genetic code.
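The 64-codon figure follows directly from the combinatorics described above: three nucleotide positions with four possible bases each. A quick Python check:

```python
from itertools import product

BASES = "ACGU"  # the four RNA nucleotides

# Every ordered triple of bases is a possible codon.
codons = ["".join(triple) for triple in product(BASES, repeat=3)]
print(len(codons))  # 64

# 64 codons encode only 20 amino acids (plus stop signals),
# which is why the genetic code is degenerate.
```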

Genes encoded in DNA are first transcribed into pre-messenger RNA (mRNA) by proteins such as RNA polymerase. Most organisms then process the pre-mRNA using various forms of post-transcriptional
modification to form the mature mRNA, which is then used as a template for protein synthesis by the ribosome. In prokaryotes, the mRNA may either be used as soon as it is produced or be bound by a
ribosome after having moved away from the nucleoid. In contrast, eukaryotes make mRNA in the cell nucleus and then translocate it across the nuclear membrane into the cytoplasm, where protein
synthesis takes place.

The process of synthesizing a protein from an mRNA template is known as translation. The mRNA is loaded onto the ribosome and is read three nucleotides at a time by matching each codon to its base
pairing anticodon located on a transfer RNA molecule, which carries the amino acid corresponding to the codon it recognizes. The enzyme aminoacyl tRNA synthetase "charges" the tRNA molecules with
the correct amino acids, creating the growing polypeptide, often termed the nascent chain.
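The ribosome's reading loop described above (take three nucleotides, match the codon, stop at a stop codon) can be sketched as a toy function. The four-entry codon table below is a deliberately tiny illustrative subset of the real 64-entry genetic code; the codon assignments shown are standard.

```python
# A tiny illustrative subset of the genetic code (64 entries in reality).
CODON_TABLE = {
    "AUG": "Met",  # start codon
    "UUU": "Phe",
    "GGC": "Gly",
    "UAA": None,   # stop codon
}

def translate(mrna):
    """Read the mRNA three nucleotides at a time, stopping at a stop codon."""
    peptide = []
    for i in range(0, len(mrna) - 2, 3):
        amino_acid = CODON_TABLE[mrna[i:i + 3]]
        if amino_acid is None:  # stop codon releases the nascent chain
            break
        peptide.append(amino_acid)
    return peptide

print(translate("AUGUUUGGCUAA"))  # ['Met', 'Phe', 'Gly']
```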

Short proteins can also be synthesized chemically through a family of methods known as peptide synthesis. Chemical synthesis allows for the introduction of non-natural amino acids into polypeptide chains,
such as attachment of fluorescent probes to amino acid side chains. These methods are useful in laboratory biochemistry and cell biology but generally not for commercial applications.

Most proteins fold into unique 3D structures, known as their native conformation. Biochemists often refer to four distinct levels of a protein's structure: primary, secondary, tertiary, and quaternary structure.

Proteins are complex molecules that have various structures and functions. Tertiary structure refers to the overall shape of a single protein molecule, while quaternary structure is formed by several protein
molecules (polypeptide chains) that function as a single protein complex. Quinary structure is dependent on transient macromolecular interactions that occur inside living cells.



Proteins can be informally divided into three main classes: globular proteins, fibrous proteins, and membrane proteins. Almost all globular proteins are soluble and many are enzymes. Fibrous proteins are
often structural, such as collagen or keratin, while membrane proteins often serve as receptors or provide channels for polar or charged molecules to pass through the cell membrane. A special case is the dehydron: an intramolecular hydrogen bond within a protein that is poorly shielded from water attack.

Many proteins are composed of several protein domains, which are segments of a protein that fold into distinct structural units. Domains usually have specific functions, such as enzymatic activities or they
serve as binding modules. Short amino acid sequences within proteins often act as recognition sites for other proteins. Protein topology describes the entanglement of the backbone and the arrangement of
contacts within the folded chain. Two theoretical frameworks of knot theory and Circuit topology have been applied to characterize protein topology.

Proteins are the chief actors within the cell, carrying out the duties specified by the information encoded in genes. Proteins make up half the dry weight of an Escherichia coli cell, whereas other macromolecules such as DNA and RNA make up only 3% and 20%, respectively. The set of proteins expressed in a particular cell or cell type is known as its proteome.

The chief characteristic of proteins that allows their diverse set of functions is their ability to bind other molecules specifically and tightly. The region of the protein responsible for binding another molecule is
known as the binding site, and this binding ability is mediated by the tertiary structure of the protein, which defines the binding site pocket, and by the chemical properties of the surrounding amino acids' side
chains. Protein binding can be extraordinarily tight and specific; a minor chemical change, such as the addition of a single methyl group to a binding partner, can sometimes suffice to nearly eliminate binding.

Proteins can bind to other proteins as well as small-molecule substrates. When proteins bind specifically to other copies of the same molecule, they can oligomerize to form fibrils, which occurs often in
structural proteins that consist of globular monomers that self-associate to form rigid fibers. Protein–protein interactions also regulate enzymatic activity, control progression through the cell cycle, and allow
the assembly of large protein complexes carrying out many closely related reactions with a common biological function.

Studying the interactions between specific proteins is a key to understanding important aspects of cellular function and the properties that distinguish particular cell types.

Proteins play a crucial role in the cell as enzymes, which catalyze chemical reactions. Enzymes are highly specific and accelerate only one or a few chemical reactions, carrying out most of the reactions
involved in metabolism and manipulating DNA in processes such as DNA replication, DNA repair, and transcription. About 4,000 reactions are known to be catalysed by enzymes, with the rate acceleration
conferred by enzymatic catalysis often being enormous.

The molecules bound and acted upon by enzymes are called substrates. Although enzymes can consist of hundreds of amino acids, usually only a small fraction of the residues come into contact with the substrate. The region of the enzyme that binds the substrate and contains the catalytic residues is known as the active site. Dirigent proteins dictate the stereochemistry of a compound synthesized by other enzymes.

Many proteins are involved in the process of cell signaling and signal transduction. Some proteins, such as insulin, are extracellular proteins that transmit a signal from the cell in which they were synthesized
to other cells in distant tissues. Others are membrane proteins that act as receptors whose main function is to bind a signaling molecule and induce a biochemical response in the cell. Many receptors have a
binding site exposed on the cell surface and an effector domain within the cell, which may have enzymatic activity or may undergo a conformational change detected by other proteins within the cell.

Antibodies are protein components of an adaptive immune system whose main function is to bind antigens, or foreign substances in the body, and target them for destruction. Antibodies can be secreted into
the extracellular environment or anchored in the membranes of specialized B cells known as plasma cells. An antibody's binding affinity to its target is extraordinarily high.

Many ligand transport proteins bind particular small biomolecules and transport them to other locations in the body of a multicellular organism. These proteins must have a high binding affinity when their
ligand is present in high concentrations but must also release the ligand when it is present at low concentrations in the target tissues. Lectins are sugar-binding proteins which are highly specific for their
sugar moieties and typically play a role in biological recognition phenomena involving cells and proteins. Receptors and hormones are highly specific binding proteins.

Transmembrane proteins can also serve as ligand transport proteins that alter the permeability of the cell membrane to small molecules and ions. Membrane proteins contain internal channels that allow
such molecules to enter and exit the cell. Many ion channel proteins are specialized to select for only a particular ion.

Structural proteins confer stiffness and rigidity to otherwise-fluid biological components. Most structural proteins are fibrous proteins, such as collagen and elastin, which are critical components of connective
tissue such as cartilage. Some globular proteins can also play structural functions, such as actin and tubulin, which polymerize to form long, stiff fibers that make up the cytoskeleton. Motor proteins such as
myosin, kinesin, and dynein are capable of generating mechanical forces and play essential roles in intracellular transport.

Protein evolution is a key question in molecular biology, as mutations can lead to new structures and functions. In vitro studies of purified proteins in controlled environments are useful for learning how a
protein carries out its function, while in vivo experiments provide information about the physiological role of a protein in the context of a cell or whole organism. In silico studies use computational methods to
study proteins.

In vitro analysis of proteins involves purifying them away from other cellular components, often starting with cell lysis. This process can be done using ultracentrifugation, salting out, chromatography, gel
electrophoresis, spectroscopy, enzyme assays, and electrofocusing. Genetic engineering is often used to simplify this process by adding chemical features to proteins that make them easier to purify without
affecting their structure or activity. A "tag" consisting of a specific amino acid sequence, often a series of histidine residues (a "His-tag"), is attached to one terminus of the protein, allowing for the isolation of
specific proteins from complex mixtures.
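A His-tag is simply a recognisable run of histidine (H) residues at one terminus, which also makes tagged constructs easy to spot in sequence data. A minimal sketch locating such a run in a one-letter amino acid sequence (the sequence shown is hypothetical):

```python
def find_his_tag(sequence, tag_length=6):
    """Return the index of the first run of tag_length histidines, or -1."""
    return sequence.find("H" * tag_length)

# Hypothetical construct: Met, then a hexahistidine tag, then the protein.
seq = "MHHHHHHSSGLVPRGSMKTAYIAK"
print(find_his_tag(seq))  # 1
```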

Cellular localization is another important aspect of protein study in vivo. The study of proteins in vivo often focuses on the synthesis and localization of proteins within the cell. Genetic engineering can be
used to express a fusion protein or chimera consisting of the natural protein of interest linked to a "reporter" such as green fluorescent protein (GFP), which can be visualized using microscopy. Other
methods for elucidating cellular location of proteins require the use of known compartmental markers for regions such as the ER, Golgi, lysosomes or vacuoles, mitochondria, chloroplasts, plasma
membrane, etc. Fluorescently tagged versions of these markers or antibodies to known markers can help identify the localization of a protein of interest.

Immunohistochemistry uses an antibody to one or more proteins of interest conjugated to enzymes yielding luminescent or chromogenic signals that can be compared between samples, allowing for
localization information. Another applicable technique is cofractionation in sucrose gradients using isopycnic centrifugation. Immunoelectron microscopy is the gold-standard method of cellular localization,
which also uses an antibody to the protein of interest and classical electron microscopy techniques. Site-directed mutagenesis allows researchers to alter the protein sequence and hence its structure,
cellular localization, and susceptibility to regulation.

Proteomics is the study of large-scale data sets of proteins present at a time in a cell or cell type, known as its proteome. Key experimental techniques in proteomics include 2D electrophoresis, mass
spectrometry, protein microarrays, and two-hybrid screening. The interactome is the total complement of biologically possible interactions, and structural genomics is a systematic attempt to determine the
structures of proteins representing every possible fold.

Structure determination is crucial for understanding how a protein performs its function and how it can be affected, such as in drug design. Common experimental methods include X-ray crystallography and
NMR spectroscopy, which can produce structural information at atomic resolution. Dual polarisation interferometry is a quantitative analytical method for measuring the overall protein conformation and
conformational changes due to interactions or other stimulus. Circular dichroism is another laboratory technique for determining internal β-sheet/α-helical composition of proteins. Cryoelectron microscopy



produces lower-resolution structural information about very large protein complexes, including assembled viruses. Solved structures are usually deposited in the Protein Data Bank (PDB), a freely available
resource from which structural data about thousands of proteins can be obtained in the form of Cartesian coordinates for each atom in the protein.
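Because PDB files store those Cartesian coordinates in a fixed-column text format (x, y, and z occupy columns 31-38, 39-46, and 47-54 of an ATOM record), extracting them needs only string slicing. A minimal sketch using a single illustrative record (the coordinate values are made up):

```python
# One illustrative ATOM record in PDB fixed-column format.
record = ("ATOM      1  N   MET A   1      38.561  29.520   4.809"
          "  1.00 20.00           N")

# Columns 31-38, 39-46, 47-54 (1-indexed) hold x, y, z in angstroms.
x = float(record[30:38])
y = float(record[38:46])
z = float(record[46:54])
print(x, y, z)  # 38.561 29.52 4.809
```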


A representation of the 3D structure of the protein myoglobin, showing turquoise α-helices. This protein was the first to have its structure solved by X-ray crystallography. Toward the right-center among the coils is a prosthetic group called a heme group (shown in gray) with a bound oxygen molecule (red).


John Kendrew with model of myoglobin in progress

1.1 Biochemistry

Chemical structure of the peptide bond (bottom) and the three-dimensional structure of a peptide bond between

an alanine and an adjacent amino acid (top/inset). The bond itself is made of the CHON elements.
Resonance structures of the peptide bond that links individual amino acids to form a protein polymer




1.2 Synthesis

Biosynthesis

A ribosome produces a protein using mRNA as template. The DNA sequence of a gene encodes the amino acid sequence of a protein.


1.3 Structure

The crystal structure of the chaperonin, a huge protein complex; a single protein subunit is highlighted. Chaperonins assist protein folding.

Three possible representations of the three-dimensional structure of the protein triose phosphate isomerase. Left: all-atom representation colored by atom type. Middle: simplified representation illustrating the backbone conformation, colored by secondary structure. Right: solvent-accessible surface representation colored by residue type (acidic residues red, basic residues blue, polar residues green, nonpolar residues white).


Molecular surface of several proteins showing their comparative sizes. From left to right are: immunoglobulin G (IgG,
an antibody), hemoglobin, insulin (a hormone), adenylate kinase (an enzyme), and glutamine synthetase (an enzyme).

1.4 Cellular functions




The enzyme hexokinase is shown as a conventional ball-and-stick molecular model. To scale in the top right-hand corner are two of its
substrates, ATP and glucose.

Cell signaling and ligand binding


Ribbon diagram of a mouse antibody against cholera that binds a carbohydrate antigen


Cellular localization

Proteins in different cellular compartments and structures tagged with green fluorescent protein (here, white)



Structure prediction

Constituent amino-acids can be analyzed to predict secondary, tertiary and quaternary protein structure, in this case
hemoglobin containing heme units


1.5 References

1. Thomas Burr Osborne (1909). The Vegetable Proteins. Archived 2016-03-22 at the Wayback Machine; History pp. 1-6, from archive.org.
2. Mulder GJ (1838). "Sur la composition de quelques substances animales". Bulletin des Sciences Physiques et Naturelles en Néerlande: 104.
3. Harold H (1951). "Origin of the Word 'Protein'". Nature. 168 (4267): 244. doi:10.1038/168244a0. PMID 14875059.
4. Wikipedia, the free encyclopedia: "Protein".

Step 4

Self Assessment - Answer the following questions to self-assess your knowledge of the subject.

Q 1: What is an amino acid and what is a peptide structure?

A peptide is a short chain of amino acids. The amino acids in a peptide are connected to one another in a sequence by bonds called peptide bonds. Typically, peptides are distinguished from proteins by their shorter length, although the cut-off number of amino acids for defining a peptide versus a protein can be arbitrary.

Q 2: How does a sequence of chromosomal nucleotides code for amino acids?

The nucleotide triplet that encodes an amino acid is called a codon. Each group of three nucleotides encodes one amino acid. Since there are 64 combinations of 4 nucleotides taken three at a time and only 20 amino acids, the code is degenerate (more than one codon per amino acid, in most cases).

Q 3: Describe how a protein folds in accordance with an emerging amino-acid sequence.

Folded proteins are held together by various molecular interactions. During translation, each protein is synthesized as a linear chain of amino acids, a random coil with no stable 3D structure. The amino acids in the chain then interact with each other to form a well-defined, folded protein.

Amino acids are the molecules that combine to form proteins: the amino acid is the basic sub-unit of every protein. A protein is made up of one or more peptides, and a peptide is made up of many amino acids. When one amino acid reacts with another, a peptide bond forms and a water molecule is released. Repeating this reaction produces a chain of amino acids that then folds in on itself.
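The water loss mentioned above has a measurable consequence: a peptide's mass equals the sum of its free amino acid masses minus one water molecule (about 18 g/mol) per peptide bond. A minimal sketch using a small illustrative subset of average amino acid masses:

```python
WATER = 18.015  # g/mol released per peptide bond formed

# Average masses of a few free amino acids, g/mol (illustrative subset).
AA_MASS = {"G": 75.07, "A": 89.09, "S": 105.09}

def peptide_mass(sequence):
    """n residues form n-1 peptide bonds, each releasing one water."""
    total = sum(AA_MASS[aa] for aa in sequence)
    return total - (len(sequence) - 1) * WATER

# Tripeptide Gly-Ala-Ser: two peptide bonds, two waters lost.
print(round(peptide_mass("GAS"), 2))  # 233.22
```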

Amino acid

Wikipedia, the free encyclopedia

This article is about the class of chemicals. For the structures and properties of the standard proteinogenic amino acids, see Proteinogenic amino acid.



Structure of a generic, noncyclic L-alpha-amino acid in the "neutral" form.

Amino acids are chemical molecules that possess both amino and carboxylic acid functional groups.[1] While there are more than 500 amino acids found in nature, the 22 α-amino acids that are incorporated into proteins are by far the most important.[2] Only these 22 appear in the genetic code of life.

Amino acids can be categorised based on the positions of the core structural functional groups (alpha- (α-), beta- (β-), gamma- (γ-) amino acids, etc.). Other classifications are based on polarity, ionisation,
and the type of side chain group (aliphatic, acyclic, aromatic, polar, etc.). Amino acid residues, in the form of proteins, constitute the second most abundant constituent (after water) of human muscles and other tissues.[5] In addition to their function as components of proteins, amino acids are involved in several processes, including neurotransmitter transport and biosynthesis. They are believed to have played a crucial role in facilitating the development and emergence of life on Earth.

The IUPAC-IUBMB Joint Commission on Biochemical Nomenclature formally assigns names to amino acids based on the hypothetical "neutral" configuration depicted in the figure. As an illustration, the
systematic designation for alanine is 2-aminopropanoic acid, derived from the chemical formula CH3−CH(NH2)−COOH. The Commission provided a justification for this approach in the following manner:[6]
The above systematic names and formulas pertain to theoretical structures where amino groups lack protons and carboxyl groups remain undissociated. This practice serves the purpose of preventing
different naming issues, but it should not be interpreted as suggesting that these structures constitute a significant portion of the amino acid molecules.

1.1 Historical Background

The first amino acids were identified in the early 1800s.[7][8] Asparagine, the first amino acid to be discovered, was extracted from asparagus by the French chemists Louis-Nicolas Vauquelin and Pierre Jean Robiquet in 1806. Cystine was discovered in 1810,[11] whereas its monomeric unit, cysteine, was not identified until 1884. Glycine and leucine were first identified in 1820. Threonine, the last of the 20 common amino acids to be identified, was discovered in 1935 by William Cumming Rose, who also defined the essential amino acids and calculated the minimum daily requirements of all amino acids for optimal growth.

Wurtz recognised the unity of the chemical class in 1865, although he did not give it a name. The term "amino acid" first appeared in English in 1898, while the German "Aminosäure" was used earlier.[18] Proteins were observed to yield amino acids upon enzymatic digestion or acid hydrolysis. In 1902, Emil Fischer and Franz Hofmeister independently proposed that proteins are composed of many amino acids, with bonds forming between the amino group of one amino acid and the carboxyl group of another, resulting in a linear structure that Fischer termed a "peptide".

Rev 001 Session-5 Question Booklet Page 65 of 334


1.6 General structure

The 21 proteinogenic α-amino acids found in eukaryotes, grouped according to their side chains' pKa values and the charges they carry at physiological pH (7.4).

2-, alpha-, or α-amino acids[20] have the generic formula H2N−CHR−COOH in most cases,[b] where R is an organic substituent known as a "side chain".[21]

Out of the numerous amino acids that have been described, 22 are classified as proteinogenic, meaning they serve as building blocks of proteins. These 22 amino acids combine to form a wide variety of peptides and proteins, which are synthesised by ribosomes. Non-proteinogenic or modified amino acids can be generated through post-translational modification or nonribosomal peptide synthesis.

Chirality refers to the property of an object or molecule that cannot be superimposed onto its mirror image.

The carbon atom adjacent to the carboxyl group is referred to as the α-carbon; in proteinogenic amino acids it carries both the amine group and the R group that is unique to each amino acid. The α-carbon in all α-amino acids, except glycine, is stereogenic due to the presence of four different substituents. All proteinogenic amino acids that exhibit chirality possess the L configuration: these "left-handed" enantiomers are stereoisomers defined at the alpha carbon.

A small number of D-amino acids, also known as "right-handed" amino acids, have been found in nature. They occur, for example, in bacterial cell envelopes, as the neuromodulator D-serine, and in certain antibiotics. Occasionally, proteins contain D-amino acid residues, which are formed through post-translational modification of L-amino acids.[28] Side chains (also known as substituent groups) are groups of atoms that are attached to the main chain or backbone of a molecule.

Positively or negatively charged amino acid side chains

At a pH of 7, five amino acids carry an electric charge on their side chains. These side chains are frequently found on the outer surfaces of proteins, where they enhance solubility in water. Side chains with opposite charges form strong electrostatic interactions known as salt bridges, which play a crucial role in preserving the structure of individual proteins and in mediating interactions between different proteins.[31] Many proteins specifically incorporate metals within their structures, and these interactions are frequently mediated by charged residues such as aspartate, glutamate, and histidine. Under specific circumstances, every ion-forming group can acquire an electric charge, resulting in the formation of double salts.[32] At neutral pH, the two amino acids that carry a negative charge are aspartate (Asp, D) and glutamate (Glu, E). Their anionic carboxylate groups act as Brønsted bases in most situations, although enzymes that operate in very acidic environments, such as the aspartic protease pepsin in the mammalian stomach, may contain catalytic aspartate or glutamate residues that function as Brønsted acids.

Functional groups found in histidine (left), lysine (middle) and arginine (right)



At neutral pH, there are three amino acids that have side chains functioning as cations: arginine (Arg, R), lysine (Lys, K), and histidine (His, H). Arginine possesses a positively charged guanidino group,
while lysine possesses a positively charged alkyl amino group. Both arginine and lysine are completely protonated at a pH of 7. The imidazole group of histidine has a pKa value of 6.0, indicating that at
neutral pH, only approximately 10% of it is protonated. Histidine frequently engages in catalytic proton transfers in enzyme processes due to its ready availability in both its basic and conjugate acid states.
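The "approximately 10% protonated" figure follows directly from the Henderson–Hasselbalch equation. A minimal sketch (standard textbook relation, not from this booklet), using histidine's imidazole pKa of 6.0:

```python
def fraction_protonated(pH: float, pKa: float) -> float:
    """Fraction of an ionisable group carrying a proton at a given pH,
    from the Henderson-Hasselbalch equation:
    pH = pKa + log10([base]/[acid])  ->  [acid]/total = 1/(1 + 10**(pH - pKa))."""
    return 1.0 / (1.0 + 10 ** (pH - pKa))

# Imidazole group of histidine, pKa ~6.0, at neutral pH:
print(round(fraction_protonated(7.0, 6.0), 3))  # ~0.091, i.e. roughly 10% protonated
```

At the pKa itself the group is exactly half protonated, which is a quick sanity check on the formula.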
Polar, neutral side chains

Serine (Ser, S), threonine (Thr, T), asparagine (Asn, N), and glutamine (Gln, Q), which are polar and uncharged amino acids, have a high propensity to form hydrogen bonds with water and with other amino acids. Under normal conditions they do not ionise, the notable exception being the catalytic serine found in serine proteases; this is a dramatic case of perturbation and is not typical of serine residues in general. Threonine possesses two chiral centres: the L (2S) chiral centre at the α-carbon, which is present in all amino acids except glycine, and the (3R) chiral centre at the β-carbon. The full stereochemical designation is (2S,3R)-L-threonine.

Nonpolar amino acid residues

The main driving force behind the folding of proteins into their functional three-dimensional structures is the interactions between nonpolar amino acids. With the exception of tyrosine (Tyr, Y), none of the
side chains of these amino acids readily undergo ionisation, and so do not possess pKas. At high pH, the hydroxyl group of tyrosine can undergo deprotonation, resulting in the formation of the negatively
charged phenolate ion. Due to its extremely low solubility in water, tyrosine aligns well with the criteria of hydrophobic amino acids. However, one might also classify tyrosine as a polar, uncharged amino
acid.

Side chains with unique characteristics

Some side chains are inadequately characterised by the charged, polar, and hydrophobic classifications. Glycine (Gly, G) can be classified as a polar amino acid due to its small size, which means its solubility is dominated by the amino and carboxylate groups. Nevertheless, the absence of a side chain grants glycine an exceptional flexibility compared to other amino acids, which has significant implications for protein folding. Cysteine (Cys, C) readily forms hydrogen bonds, which would class it as a polar amino acid, yet it is frequently observed in protein structures forming covalent links, known as disulphide bonds, with other cysteines. These bonds strongly influence the folding and stability of proteins and are required for the formation of antibodies. Proline (Pro, P) possesses an alkyl side chain and can be classified as hydrophobic; however, because the side chain loops back to the alpha amino group, it confers significant rigidity when incorporated into proteins. Like glycine, it therefore has a distinctive impact on protein structure. Selenocysteine (Sec, U) is a rare amino acid that is not directly encoded by DNA but is incorporated into proteins by the ribosome. Selenocysteine exhibits a lower redox potential than its analogue cysteine and participates in several distinctive enzymatic reactions. Pyrrolysine (Pyl, O) is likewise not encoded directly by DNA but is produced by ribosomes and incorporated into proteins; it is found in archaeal species, where it participates in the catalytic activity of several methyltransferases.

β- and γ-amino acids

Amino acids with the structure NH3+−CXY−CXY−CO2−, such as β-alanine, a component of carnosine and a few other peptides, are β-amino acids. Those with the structure NH3+−CXY−CXY−CXY−CO2− are γ-amino acids, and so on, where X and Y are two substituents (one of which is normally H).[6]

Zwitterions

Main article: Zwitterion



Ionization and Brønsted character of N-terminal amino, C-terminal carboxylate, and
side chains of amino acid residues

The common natural forms of amino acids have a zwitterionic structure, with −NH3+ (−NH2+− in the case of proline) and −CO2− functional groups attached to the same C atom; they are thus α-amino acids, and are the only ones found in proteins during translation in the ribosome. In aqueous solution at pH close to neutrality, amino acids exist as zwitterions, i.e. as dipolar ions with both NH3+ and CO2− in charged states, so the overall structure is NH3+−CHR−CO2−. At physiological pH the so-called "neutral forms" NH2−CHR−CO2H are not present to any measurable degree.[35] Although the two charges in the zwitterion structure add up to zero, it is misleading to call a species with a net charge of zero "uncharged".

In strongly acidic conditions (pH below 3), the carboxylate group becomes protonated and the structure becomes an ammonio carboxylic acid, NH3+−CHR−CO2H. This is relevant for enzymes like pepsin that are active in acidic environments such as the mammalian stomach and lysosomes, but does not significantly apply to intracellular enzymes. In highly basic conditions (pH greater than 10, not normally seen in physiological conditions), the ammonio group is deprotonated to give NH2−CHR−CO2−.
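The three pH regimes described above can be captured in a few lines. A sketch using generic α-carboxyl and α-amino pKa values (~2.2 and ~9.4 are typical textbook figures, assumed here for illustration; exact values vary by amino acid):

```python
def dominant_form(pH: float, pKa_carboxyl: float = 2.2, pKa_amino: float = 9.4) -> str:
    """Return the dominant ionisation state of a simple alpha-amino acid.
    Below the carboxyl pKa the ammonio carboxylic acid (net +1) dominates;
    between the two pKas the zwitterion dominates; above the amino pKa
    the deprotonated anion (net -1) dominates."""
    if pH < pKa_carboxyl:
        return "cation: NH3+ -CHR- CO2H"
    if pH < pKa_amino:
        return "zwitterion: NH3+ -CHR- CO2-"
    return "anion: NH2 -CHR- CO2-"

for pH in (1.5, 7.0, 12.0):
    print(pH, dominant_form(pH))
```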

Although other definitions exist, the Brønsted definition is the one generally applied to acids and bases in aqueous solution: an acid is a substance that can donate a proton to another substance, and a base is a substance that can accept a proton. This criterion is used to categorise the groups shown in the figure above. The carboxylate side chains of aspartate and glutamate residues are the principal Brønsted bases in proteins, while lysine, tyrosine, and cysteine commonly function as Brønsted acids. Under these conditions, histidine is amphoteric, functioning as both a Brønsted acid and a Brønsted base.



Isoelectric point

Composite of titration curves of twenty proteinogenic amino acids grouped by side chain category

For amino acids with uncharged side chains the zwitterion predominates at pH values between the two pKa values, but coexists in equilibrium with small amounts of net negative and net positive ions. At the midpoint between the two pKa values, the trace amounts of net negative and net positive ions exactly balance, so that the average net charge of all forms present is zero.[38] This pH is known as the isoelectric point pI, so pI = ½(pKa1 + pKa2).
For amino acids with charged side chains, the pKa of the side chain is involved. Thus for aspartate or glutamate, whose side chains are negative, the terminal amino group is essentially entirely in the charged form −NH3+, and this positive charge is balanced when the two carboxylate groups carry, on average, a single negative charge between them. This occurs at the pH halfway between the two carboxylate pKa values: pI = ½(pKa1 + pKa(R)), where pKa(R) is the side chain pKa.[37]
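The averaging formulas are easy to check numerically. A sketch using commonly quoted textbook pKa values (assumed here; they are not given in this booklet): glycine pKa1 ≈ 2.34 and pKa2 ≈ 9.60, and for aspartate pKa1 ≈ 1.88 with a side-chain carboxyl pKa(R) ≈ 3.65:

```python
def isoelectric_point(pKa_low: float, pKa_high: float) -> float:
    """pI is the mean of the two pKa values that bracket the zwitterion:
    the amino and carboxyl pKas for uncharged side chains, or the two
    carboxylate pKas for aspartate and glutamate."""
    return 0.5 * (pKa_low + pKa_high)

print(round(isoelectric_point(2.34, 9.60), 2))  # glycine: 5.97
print(round(isoelectric_point(1.88, 3.65), 2))  # aspartate: ~2.77
```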

The same principles apply to the other amino acids with ionisable side chains: not only glutamate (similar to aspartate), but also cysteine and tyrosine (acidic side chains) and histidine, lysine, and arginine (positively charged side chains).

Amino acids exhibit no movement during electrophoresis when they are at their isoelectric point. However, this phenomenon is commonly utilised for peptides and proteins rather than individual amino acids.
Zwitterions exhibit the lowest level of solubility when they are at their isoelectric point. By manipulating the pH to match the specific isoelectric point, certain amino acids, especially those with nonpolar side
chains, can be extracted from water through precipitation.

1.1 Physical and chemical characteristics

The 20 canonical amino acids can be categorised according to their properties. Crucial variables include charge, hydrophilicity or hydrophobicity, size, and functional groups.[27] These features influence protein structure and protein–protein interactions. Water-soluble proteins tend to bury their hydrophobic residues (Leu, Ile, Val, Phe, and Trp) in the interior of the protein, while the hydrophilic side chains are exposed to the surrounding aqueous solvent. (In biochemistry, a residue is a single unit within a larger chain of a polysaccharide, protein, or nucleic acid.) Integral membrane proteins typically possess outer rings of exposed hydrophobic amino acids, which anchor them within the lipid bilayer, and certain peripheral membrane proteins possess a patch of hydrophobic amino acids on their surface that adheres to the membrane. Similarly, proteins that need to bind positively charged molecules have surfaces rich in negatively charged amino acids such as glutamate and aspartate, while proteins binding negatively charged molecules have surfaces rich in positively charged amino acids such as lysine and arginine. For instance, the low-complexity regions of nucleic-acid binding proteins contain substantial quantities of lysine and arginine. Multiple hydrophobicity scales exist for amino acid residues.

Certain amino acids possess distinctive characteristics. Cysteine has the ability to create covalent disulfide connections with other cysteine residues. Proline cyclically connects to the polypeptide backbone,
while glycine exhibits greater flexibility compared to other amino acids.

Glycine and proline are abundant in low complexity sections of both eukaryotic and prokaryotic proteins, while cysteine, phenylalanine, tryptophan, methionine, valine, leucine, and isoleucine exhibit high
reactivity, complexity, or hydrophobicity. Many proteins experience various posttranslational modifications, in which extra chemical groups are added to the side chains of amino acid residues, resulting in the
formation of lipoproteins (which are hydrophobic) or glycoproteins (which are hydrophilic). These modifications enable the protein to temporarily bind to a membrane. As an illustration, a signalling protein
has the ability to bind and release from a cell membrane due to the presence of cysteine residues that can undergo the addition and subsequent removal of the fatty acid palmitic acid. A comprehensive list
of standard amino acid acronyms and their corresponding properties is provided in the table below.

For the encoding of amino acids by nucleotide triplets, see the discussion of codons in the genetic code.

Main article: Proteinogenic amino acid

While the table includes one-letter symbols, IUPAC–IUBMB suggests that the use of these symbols should be limited to comparing long sequences.



Properties of the 20 standard amino acids. Hydropathy index from [46]; molar absorptivity (λmax and coefficient ε) from [47]; abundance in proteins from [48]; chemical polarity and net charge at pH 7.4 from [49]; standard genetic coding in IUPAC degenerate notation.

Amino acid    | 3-/1-letter | Side-chain class | Polarity               | Net charge (pH 7.4)       | Hydropathy | λmax (nm); ε (mM−1·cm−1)      | Molar mass | Abundance (%) | Codons
Alanine       | Ala / A     | Aliphatic        | Nonpolar               | Neutral                   | 1.8        | —                             | 89.094     | 8.76          | GCN
Arginine      | Arg / R     | Fixed cation     | Basic polar            | Positive                  | −4.5       | —                             | 174.203    | 5.78          | MGR, CGY[50]
Asparagine    | Asn / N     | Amide            | Polar                  | Neutral                   | −3.5       | —                             | 132.119    | 3.93          | AAY
Aspartate     | Asp / D     | Anion            | Brønsted base          | Negative                  | −3.5       | —                             | 133.104    | 5.49          | GAY
Cysteine      | Cys / C     | Thiol            | Brønsted acid          | Neutral                   | 2.5        | 250; 0.3                      | 121.154    | 1.38          | UGY
Glutamine     | Gln / Q     | Amide            | Polar                  | Neutral                   | −3.5       | —                             | 146.146    | 3.9           | CAR
Glutamate     | Glu / E     | Anion            | Brønsted base          | Negative                  | −3.5       | —                             | 147.131    | 6.32          | GAR
Glycine       | Gly / G     | Aliphatic        | Nonpolar               | Neutral                   | −0.4       | —                             | 75.067     | 7.03          | GGN
Histidine     | His / H     | Cationic         | Brønsted acid and base | Positive 10%, Neutral 90% | −3.2       | 211; 5.9                      | 155.156    | 2.26          | CAY
Isoleucine    | Ile / I     | Aliphatic        | Nonpolar               | Neutral                   | 4.5        | —                             | 131.175    | 5.49          | AUH
Leucine       | Leu / L     | Aliphatic        | Nonpolar               | Neutral                   | 3.8        | —                             | 131.175    | 9.68          | YUR, CUY[51]
Lysine        | Lys / K     | Cation           | Brønsted acid          | Positive                  | −3.9       | —                             | 146.189    | 5.19          | AAR
Methionine    | Met / M     | Thioether        | Nonpolar               | Neutral                   | 1.9        | —                             | 149.208    | 2.32          | AUG
Phenylalanine | Phe / F     | Aromatic         | Nonpolar               | Neutral                   | 2.8        | 257, 206, 188; 0.2, 9.3, 60.0 | 165.192    | 3.87          | UUY
Proline       | Pro / P     | Cyclic           | Nonpolar               | Neutral                   | −1.6       | —                             | 115.132    | 5.02          | CCN
Serine        | Ser / S     | Hydroxylic       | Polar                  | Neutral                   | −0.8       | —                             | 105.093    | 7.14          | UCN, AGY
Threonine     | Thr / T     | Hydroxylic       | Polar                  | Neutral                   | −0.7       | —                             | 119.119    | 5.53          | ACN
Tryptophan    | Trp / W     | Aromatic         | Nonpolar               | Neutral                   | −0.9       | 280, 219; 5.6, 47.0           | 204.228    | 1.25          | UGG
Tyrosine      | Tyr / Y     | Aromatic         | Brønsted acid          | Neutral                   | −1.3       | 274, 222, 193; 1.4, 8.0, 48.0 | 181.191    | 2.91          | UAY
Valine        | Val / V     | Aliphatic        | Nonpolar               | Neutral                   | 4.2        | —                             | 117.148    | 6.73          | GUN
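The hydropathy index values tabulated for the 20 standard amino acids lend themselves to a quick computation: the mean hydropathy of a sequence (the "GRAVY" score of Kyte and Doolittle) indicates its overall hydrophobicity. A sketch, not from this booklet, using the tabulated values:

```python
# Kyte-Doolittle hydropathy indices, keyed by one-letter code
HYDROPATHY = {
    "A": 1.8, "R": -4.5, "N": -3.5, "D": -3.5, "C": 2.5,
    "Q": -3.5, "E": -3.5, "G": -0.4, "H": -3.2, "I": 4.5,
    "L": 3.8, "K": -3.9, "M": 1.9, "F": 2.8, "P": -1.6,
    "S": -0.8, "T": -0.7, "W": -0.9, "Y": -1.3, "V": 4.2,
}

def gravy(sequence: str) -> float:
    """Grand average of hydropathy: mean hydropathy index over all residues.
    Positive values suggest a hydrophobic peptide, negative a hydrophilic one."""
    return sum(HYDROPATHY[aa] for aa in sequence) / len(sequence)

print(round(gravy("VAIL"), 3))  # hydrophobic: 3.575
print(round(gravy("DEKR"), 3))  # hydrophilic: -3.85
```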

Two additional amino acids are in some species coded for by codons that are usually interpreted as stop codons:

21st and 22nd amino acids 3-letter 1-letter Molecular mass

Selenocysteine Sec U 168.064

Pyrrolysine Pyl O 255.313

In addition to the specific amino acid codes, placeholders are used in cases where chemical or crystallographic analysis of a peptide or protein cannot conclusively determine the identity of a residue.
Additionally, they are employed to condense conserved protein sequence patterns. The utilisation of individual letters to represent collections of comparable residues is analogous to the utilisation of
abbreviated codes for ambiguous bases.
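The degenerate codon notation used in the table above (GCN, MGR, YUR, and so on) relies on the standard IUPAC nucleotide ambiguity codes. A small sketch, not from this booklet, expanding such a pattern into its concrete codons:

```python
from itertools import product

# IUPAC nucleotide ambiguity codes, RNA alphabet
IUPAC_NT = {
    "A": "A", "C": "C", "G": "G", "U": "U",
    "R": "AG", "Y": "CU", "M": "AC", "K": "GU", "S": "CG", "W": "AU",
    "H": "ACU", "B": "CGU", "V": "ACG", "D": "AGU", "N": "ACGU",
}

def expand_codon(pattern: str) -> list[str]:
    """Expand a degenerate codon such as 'GCN' into all concrete codons."""
    return ["".join(c) for c in product(*(IUPAC_NT[nt] for nt in pattern))]

print(expand_codon("GCN"))  # the four alanine codons: GCA, GCC, GCG, GCU
print(expand_codon("MGR"))  # four of arginine's six codons
```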



Ambiguous amino acids 3-letter 1-letter Amino acids included Codons included

Any / unknown Xaa X All NNN

Asparagine or aspartate Asx B D, N RAY

Glutamine or glutamate Glx Z E, Q SAR

Leucine or isoleucine Xle J I, L YTR, ATH, CTY[54]

Hydrophobic Φ V, I, L, F, W, Y, M NTN, TAY, TGG

Aromatic Ω F, W, Y, H YWY, TTY, TGG[55]

Aliphatic (non-aromatic) Ψ V, I, L, M VTN, TTR[56]

Small π P, G, A, S BCN, RGY, GGR

Hydrophilic ζ S, T, H, N, Q, E, D, K, R VAN, WCN, CGN, AGY[57]

Positively-charged + K, R, H ARR, CRY, CGR

Negatively-charged − D, E GAN

Unk is sometimes used instead of Xaa, but is less standard.

The term "Ter" (derived from "termination") is employed in protein notation to indicate mutations that result in the presence of a stop codon. It does not correspond to any amino acid.
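The placeholder letters in the table above can be treated as residue sets when matching sequences. A minimal sketch (covering only the B/Z/J/X rows; the function name and its behaviour are illustrative assumptions, not a standard API):

```python
# Ambiguity sets for the placeholder one-letter codes
AMBIGUOUS = {
    "X": set("ACDEFGHIKLMNPQRSTVWY"),  # any / unknown
    "B": {"D", "N"},                   # Asx: aspartate or asparagine
    "Z": {"E", "Q"},                   # Glx: glutamate or glutamine
    "J": {"I", "L"},                   # Xle: isoleucine or leucine
}

def matches(pattern: str, sequence: str) -> bool:
    """True if each pattern letter matches the corresponding residue,
    treating B/Z/J/X as the ambiguity sets defined above and any other
    letter as itself."""
    if len(pattern) != len(sequence):
        return False
    return all(s in AMBIGUOUS.get(p, {p}) for p, s in zip(pattern, sequence))

print(matches("MBX", "MDF"))  # True: B covers D, X covers any residue
print(matches("MBX", "MKF"))  # False: B does not cover K
```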

Furthermore, numerous atypical amino acids have their own distinct codes. For instance, several peptide drugs such as Bortezomib and MG132 are chemically synthesised and retain their protecting groups, which have their own codes: Bortezomib is Pyz–Phe–boroLeu, while MG132 is Z–Leu–Leu–Leu–al. Photo-reactive amino acid analogues, such as photoleucine (pLeu) and photomethionine (pMet), are available to facilitate the examination of protein structure.

Occurrence and functions in biochemistry



A polypeptide is an unbranched chain of amino acids

β-Alanine and its α-alanine isomer

The amino acid selenocysteine



Proteinogenic amino acids

Main article: Proteinogenic amino acid

See also: Protein primary structure and Posttranslational modification

Amino acids serve as the building blocks for proteins.[25] Peptides or polypeptides are formed when they undergo condensation processes, joining together to create either short polymer chains or larger
chains known as proteins. The chains are characterised by their linearity and lack of branching, with each amino acid residue in the chain being connected to two adjacent amino acids. The process of
synthesising proteins from DNA/RNA genetic material in Nature is referred to as translation. It entails the sequential incorporation of amino acids into a developing protein chain by a ribozyme known as a
ribosome. The sequential addition of amino acids is determined by the genetic code, which is interpreted from an mRNA template. This template is an RNA replica of a specific gene within the organism.
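Because each condensation step that joins two amino acids releases one molecule of water, the molar mass of a linear peptide is the sum of its constituent free amino acid masses minus (n−1) waters. A sketch, not from this booklet, using masses from the table above and water ≈ 18.015 g/mol:

```python
# Free amino acid molar masses (g/mol) for a few residues, from the table
MONOMER_MASS = {"G": 75.067, "A": 89.094, "S": 105.093, "V": 117.148}
WATER = 18.015  # g/mol released per peptide bond formed

def peptide_mass(sequence: str) -> float:
    """Molar mass of a linear peptide: sum of free amino acid masses
    minus one water per peptide bond (n residues -> n-1 bonds)."""
    return sum(MONOMER_MASS[aa] for aa in sequence) - (len(sequence) - 1) * WATER

print(round(peptide_mass("GG"), 3))   # glycylglycine: 132.119
print(round(peptide_mass("GAV"), 3))  # 245.279
```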

Polypeptides are naturally built from twenty-two amino acids, referred to as proteinogenic or natural amino acids.[27] Of these, 20 are encoded by the universal genetic code. The remaining two, selenocysteine and pyrrolysine, are incorporated into proteins by distinct mechanisms. Selenocysteine is inserted into the growing polypeptide chain during translation when the mRNA contains a SECIS element, which causes the UGA codon to encode selenocysteine instead of acting as a termination signal.[61] Methanogenic archaea use pyrrolysine in the enzymes of methane production; it is encoded by the codon UAG, normally a termination codon in other organisms,[62] followed by a downstream PYLIS sequence.[63]

Multiple independent evolutionary studies have suggested that Gly, Ala, Asp, Val, Ser, Pro, Glu, Leu, and Thr may belong to a group of amino acids that constituted the early genetic code, whereas Cys, Met, Tyr, Trp, His, and Phe may belong to a group that was added to the genetic code later.

Standard versus nonstandard amino acids

The amino acids that are directly encoded by the codons of the universal genetic code are referred to as standard or canonical amino acids. In bacteria, mitochondria, and chloroplasts, a variant of
methionine called N-formylmethionine is frequently substituted for methionine as the first amino acid in proteins. Additional amino acids are referred to as nonstandard or non-canonical. The majority of
nonstandard amino acids are non-proteinogenic, meaning they cannot be integrated into proteins during the process of translation. However, two of these amino acids are proteinogenic, since they can be
translationally incorporated into proteins by utilising information that is not specified in the universal genetic code.

The two atypical proteinogenic amino acids are selenocysteine (present in many non-eukaryotes and most eukaryotes, but not directly encoded by DNA) and pyrrolysine (found only in some archaea and at least one bacterium). The use of these atypical amino acids is infrequent: for example, 25 human proteins include selenocysteine in their primary structure, and the structurally characterised enzymes among them use selenocysteine as the catalytic moiety in their active sites. Pyrrolysine and selenocysteine are encoded via variant codons; selenocysteine, for instance, is specified by a termination codon together with a SECIS element.[69]

N-formylmethionine, commonly found at the beginning of proteins in bacteria, mitochondria, and chloroplasts, is widely regarded as a variant of methionine rather than a distinct proteinogenic amino acid.
Non-natural codon-tRNA pairings can be employed to "enlarge" the genetic code and create alloproteins, which are unique proteins that include non-proteinogenic amino acids.

Non-proteinogenic amino acids

Main article: Non-proteinogenic amino acids

In addition to the 22 proteinogenic amino acids, other non-proteinogenic amino acids have been identified. These compounds, such as carnitine, GABA, and levothyroxine, are either absent in proteins or not
directly synthesised by the typical cellular machinery. Hydroxyproline is derived from proline by a process of synthesis. Another instance is selenomethionine.

Post-translational modification is responsible for the formation of non-proteinogenic amino acids within proteins. Such modifications can also determine the protein's localisation; for example, the addition of long hydrophobic groups can cause a protein to bind to a phospholipid membrane. Examples:

• Carboxylation of glutamate enhances the binding of calcium cations.
• Hydroxyproline, produced by the hydroxylation of proline, is a major constituent of the connective tissue collagen.
• The translation initiation factor EIF5A contains a modified form of lysine known as hypusine.

Certain non-proteinogenic amino acids are never found in proteins; examples include 2-aminoisobutyric acid and the neurotransmitter gamma-aminobutyric acid. Non-proteinogenic amino acids frequently occur as intermediates in the metabolic pathways of the standard amino acids: ornithine and citrulline, for instance, appear in the urea cycle, part of amino acid catabolism. The β-amino acid β-alanine (3-aminopropanoic acid) is a rare exception to the dominance of α-amino acids in biology; plants and microorganisms use it in the synthesis of pantothenic acid (vitamin B5), an essential component of coenzyme A.

In mammalian nutrition

Share of amino acid in various human diets and the resulting mix of amino acids in human
blood serum. Glutamate and glutamine are the most frequent in food at over 10%, while alanine, glutamine, and glycine are the most common in blood.

Main article: Essential amino acids


Additional details: Protein (nutrient) and amino acid synthesis

Amino acids are not commonly consumed in free form; animals obtain them by eating proteins. Digestion breaks proteins down enzymatically into individual amino acids, which are then used to build new proteins and other biomolecules, or are oxidised to urea and carbon dioxide as a source of energy.[81] The oxidation pathway begins with removal of the amino group by a transaminase enzyme, after which the amino group is fed into the urea cycle. The other product of transamination is a keto acid that enters the citric acid cycle. Glucogenic amino acids can also be converted into glucose through gluconeogenesis. Of the 20 standard amino acids, nine (His, Ile, Leu, Lys, Met, Phe, Thr, Trp, and Val) are known as essential amino acids: the human body cannot produce them in the quantities required for proper growth, so they must be acquired from food sources.

Further information: Semi-essential and conditionally essential amino acids, and the nutritional needs of juveniles
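The nine essential amino acids can be checked programmatically against a protein sequence, for example when assessing whether a dietary protein supplies them all. A minimal sketch (the function name and toy sequence are illustrative assumptions):

```python
# One-letter codes for the nine essential amino acids:
# His, Ile, Leu, Lys, Met, Phe, Thr, Trp, Val
ESSENTIAL = set("HILKMFTWV")

def missing_essentials(sequence: str) -> set[str]:
    """Essential amino acids (one-letter codes) absent from a sequence."""
    return ESSENTIAL - set(sequence)

# A toy 'protein' lacking methionine and tryptophan:
print(sorted(missing_essentials("HILKFTVHILK")))  # ['M', 'W']
```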

Furthermore, cysteine, tyrosine, and arginine are classified as semiessential amino acids, whereas taurine is classified as a semi-essential aminosulfonic acid specifically in children. Certain amino acids are
considered conditionally essential, meaning that their requirement may vary depending on specific age groups or medical situations. The composition of essential amino acids can differ among different
animals.[d] The metabolic pathways responsible for synthesising these monomers are incompletely established.

Non-protein functions

[Figure: Biosynthetic pathways for catecholamines and trace amines in the human brain.[89][90][91] Catecholamines and trace amines are synthesized from phenylalanine and tyrosine in humans, via intermediates including L-DOPA, dopamine, norepinephrine, phenethylamine, and tyramine, by enzymes including AAAH, AADC, DBH, PNMT, COMT, and brain CYP2D6.]

Further information: Amino acid neurotransmitter

Several proteinogenic and non-proteinogenic amino acids have biological roles beyond serving as precursors to proteins and peptides. Amino acids perform crucial functions in several human metabolic processes, and plants employ amino acids as a defence mechanism against herbivores. Examples:

Canonical amino acids

• Tryptophan is a precursor of the neurotransmitter serotonin.[93]

• Tyrosine, along with its precursor phenylalanine, is a building block for the neurotransmitters dopamine, adrenaline, norepinephrine, and numerous trace amines.

• Phenylalanine is a precursor of phenethylamine and tyrosine in the human body. In plants, it is a precursor of a variety of phenylpropanoids, which play a crucial role in plant metabolism.

• Glycine serves as a precursor for the synthesis of porphyrins, including heme.[94]

• Arginine serves as a precursor for the synthesis of nitric oxide.[95]

• Ornithine and S-adenosylmethionine serve as precursors for the synthesis of polyamines.[96]

• Aspartate, glycine, and glutamine serve as precursors for the synthesis of nucleotides.

Nevertheless, the roles of several nonstandard amino acids remain unknown.

Functions of atypical amino acids

• Carnitine is employed in the transportation of lipids.

• Gamma-aminobutyric acid functions as a neurotransmitter.[98]

• 5-HTP, also known as 5-hydroxytryptophan, is utilised for the experimental management of depression.
• L-DOPA, also known as L-dihydroxyphenylalanine, is used as a treatment for Parkinson's disease.

• Eflornithine is a medication that blocks the activity of ornithine decarboxylase; it is used for treating sleeping sickness.[101]

• Canavanine, an analogue of arginine present in numerous legumes, acts as an antifeedant, safeguarding the plant against predators.[102]

• Mimosine, present in several legumes, is another probable antifeedant; this analogue of tyrosine can poison animals that graze on these plants.

1.1 Industrial Applications

Feed for animals

Amino acids are occasionally added to animal feed because certain feed components, such as soybeans, contain insufficient quantities of some essential amino acids, particularly lysine, methionine, threonine, and tryptophan.[104] In a similar manner, amino acids are employed to chelate metal cations, enhancing the assimilation of minerals from feed additives.

Food

The food industry heavily relies on amino acids, particularly glutamic acid, for its flavor-enhancing properties. Additionally, aspartame (aspartylphenylalanine 1-methyl ester) is utilised as an artificial
sweetener. Food makers may incorporate amino acids into their products to mitigate symptoms of mineral shortages, such as anaemia, by enhancing mineral absorption and minimising adverse effects
resulting from inorganic mineral supplementation.[108]

Chemical constituents

Additional details: Asymmetric synthesis

Amino acids serve as inexpensive raw materials in chiral pool synthesis, providing enantiomerically pure building blocks. Amino acids are also employed in the manufacture of several cosmetics.

1.2 Potential applications

Fertiliser

Amino acids' chelating capacity is occasionally exploited in fertilisers to enhance the transport of minerals to plants and so correct mineral deficiencies such as iron chlorosis. These fertilisers are also used to prevent deficiencies and to improve the overall health of the plants.

Biodegradable plastics

Additional details: Biodegradable plastic and biopolymer

Amino acids are regarded as constituents of biodegradable polymers, which find utility in environmentally sustainable packaging and in the fields of medicine for medication administration and the fabrication
of prosthetic implants. Polyaspartate, a water-soluble biodegradable polymer, is a compelling illustration of such materials, with potential uses in disposable diapers and agriculture. Polyaspartate is utilised
as a biodegradable antiscaling agent and corrosion inhibitor due to its solubility and capacity to bind metal ions.

1.7 Synthesis
Main article: Amino acid synthesis

The Strecker amino acid synthesis

Chemical synthesis

Typically, the industrial synthesis of amino acids involves the use of genetically modified bacteria that create excessive amounts of specific amino acids by utilising glucose as their primary source of carbon.
Enzymatic reactions of synthetic intermediates generate some amino acids. 2-Aminothiazoline-4-carboxylic acid serves as an intermediary compound in the industrial production of L-cysteine, among other
applications. Aspartic acid is synthesised through the process of adding ammonia to fumarate utilising a lyase enzyme.

Biosynthesis

Within plants, nitrogen undergoes assimilation and is initially converted into organic compounds, specifically glutamate. This conversion occurs through the combination of alpha-ketoglutarate and ammonia
within the mitochondrion. Plants employ transaminases to transfer the amino group from glutamate to a different alpha-keto acid in the case of other amino acids. As an illustration, aspartate
aminotransferase catalyses the conversion of glutamate and oxaloacetate into alpha-ketoglutarate and aspartate. Transaminases are also utilised by other organisms for the purpose of amino acid synthesis.

Nonstandard amino acids typically arise from modifications made to standard amino acids. Homocysteine is produced either through the transsulfuration pathway or by the demethylation of methionine using
the intermediate metabolite S-adenosylmethionine. Hydroxyproline, on the other hand, is synthesised through a post-translational modification of proline.

Various microorganisms and plants produce numerous rare amino acids. For instance, several microorganisms produce 2-aminoisobutyric acid and lanthionine, which is a compound derived from alanine
and connected by a sulphide bridge. Both of these amino acids are present in peptidic lantibiotics, such as alamethicin.[119] In plants, 1-aminocyclopropane-1-carboxylic acid is a short cyclic amino acid with
two substituents. It serves as an intermediary in the synthesis of the plant hormone ethylene.

Prebiotic synthesis

The origin of life on Earth is believed to be preceded and potentially triggered by the creation of amino acids and peptides. Amino acids can be synthesised from basic building blocks in many environments.
The chemical process occurring on the surface likely resulted in the accumulation of amino acids, coenzymes, and tiny carbon molecules containing phosphate. The process of elaborating amino acids and
related building blocks into proto-peptides could have played a crucial role in the origin of life, with peptides recognised as significant contributors. The Miller–Urey experiment demonstrated that
when an electric arc is passed through a combination of methane, hydrogen, and ammonia, a significant quantity of amino acids is generated. Subsequently, researchers have identified various methods and
elements through which the formation and chemical development of peptides, which could potentially be prebiotic, may have taken place. These include condensing agents, the creation of self-replicating
peptides, and several non-enzymatic mechanisms that could have led to the emergence and elaboration of amino acids into peptides.[123] Multiple possibilities propose the use of the Strecker synthesis, in
which amino acids are formed by the reaction of hydrogen cyanide, simple aldehydes, ammonia, and water.[121]



Based on a review, amino acids and peptides are frequently present in experimental broths that have been prepared from basic chemicals. The reason for this is that the chemical synthesis of nucleotides is
considerably more challenging compared to amino acids. In order to establish a chronological sequence, it is proposed that there likely existed a 'protein world' or at the very least a 'polypeptide world',
potentially succeeded by the 'RNA world' and the 'DNA world'. The mapping of codons to amino acids could potentially serve as the fundamental biological information system that originated life on
Earth.[125] Although amino acids and simple peptides have been observed to form under various geological conditions in laboratory experiments, the process of transitioning from a non-living
environment to the earliest living organisms remains largely unexplained.

1.8 Reactions
Amino acids undergo the reactions expected of their constituent functional groups.[127][128]

Peptide bond formation

See also: Peptide synthesis and Peptide bond

The condensation of two amino acids to form a dipeptide. The two amino
acid residues are linked through a peptide bond

Due to the reactivity of both the amine and carboxylic acid groups in amino acids, they can undergo a reaction to generate amide bonds. This allows one amino acid molecule to react with another and
become connected by an amide linkage. Proteins are formed by the process of polymerization of amino acids. The condensation reaction results in the formation of a peptide bond and the release of a water
molecule. Within cells, this process does not take place directly; rather, the amino acid is initially activated by binding to a transfer RNA molecule via an ester bond. An aminoacyl-tRNA is generated through
an ATP-dependent process facilitated by an aminoacyl tRNA synthetase. The ribosome utilises the aminoacyl-tRNA as a substrate, facilitating the reaction where the amino group of the growing protein
chain attacks the ester bond. Due to this method, ribosomes synthesise all proteins by initiating synthesis at the N-terminus and progressing towards the C-terminus.
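The stoichiometry of this condensation (one water molecule lost per peptide bond formed) can be checked with a short calculation. A minimal Python sketch using standard average molecular weights for a few free amino acids; the dictionary and function names are illustrative:

```python
# Average molecular weights (g/mol) of a few free amino acids,
# standard reference values rounded to two decimals.
AA_MW = {"Gly": 75.07, "Ala": 89.09, "Cys": 121.16}
WATER = 18.02

def peptide_mw(residues):
    """Mass of a linear peptide: sum of free amino acid masses minus
    one water per peptide bond (n residues form n-1 bonds)."""
    n = len(residues)
    return sum(AA_MW[r] for r in residues) - WATER * (n - 1)

# Diglycine: 2 x 75.07 - 18.02 = 132.12 g/mol
print(round(peptide_mw(["Gly", "Gly"]), 2))  # 132.12
```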

Nevertheless, peptide bonds can also be formed by other routes. In some instances, peptides are synthesised by specific enzymes. Glutathione, a tripeptide, plays a crucial role in protecting
cells from oxidative damage. The synthesis of this peptide involves a two-step process using individual amino acids. During the initial stage, gamma-glutamylcysteine synthetase combines cysteine and
glutamate by creating a peptide bond between the carboxyl group of the glutamate's side chain (specifically, the gamma carbon of this side chain) and the amino group of the cysteine. Glutathione is formed
by the condensation of this dipeptide with glycine by the action of glutathione synthase. Peptides are synthesised in chemistry using a diverse range of processes. An extensively utilised method in solid-
phase peptide synthesis involves the utilisation of aromatic oxime derivatives of amino acids as activated units. The peptides are sequentially added to the expanding peptide chain, which is anchored to a
solid resin substrate. Peptide libraries are utilised in the process of drug development via high-throughput screening.

Amino acids possess functional groups that enable them to act as polydentate ligands, forming efficient metal-amino acid chelates. Amino acids possess the ability for their numerous side chains to
undertake chemical reactions.



Catabolism

Proteinogenic amino acids undergoing catabolism. Amino acids can be categorised based on the characteristics of their primary breakdown products.[136]

The products are classified as glucogenic when they have the capacity to be converted into glucose through gluconeogenesis, and as ketogenic when they lack the capacity to form glucose but can still be
utilised for ketogenesis or lipid synthesis. Some amino acids are metabolised into compounds that can be used for both glucose production (glucogenic) and ketone body formation (ketogenic).
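This three-way grouping can be expressed as a small lookup. A hedged Python sketch following the common textbook assignment (exact assignments of a few residues vary slightly between sources; the set and function names are illustrative):

```python
# Catabolic fate of the standard amino acids (three-letter codes).
# Leucine and lysine are the classic exclusively-ketogenic residues;
# five amino acids yield both glucogenic and ketogenic products.
KETOGENIC_ONLY = {"Leu", "Lys"}
BOTH = {"Ile", "Phe", "Thr", "Trp", "Tyr"}

def catabolic_class(aa):
    """Return 'ketogenic', 'both', or 'glucogenic' for a residue."""
    if aa in KETOGENIC_ONLY:
        return "ketogenic"
    if aa in BOTH:
        return "both"
    return "glucogenic"

print(catabolic_class("Leu"))  # ketogenic
print(catabolic_class("Ala"))  # glucogenic
```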

The process of amino acid degradation frequently entails deamination, whereby the amino group is transferred to alpha-ketoglutarate, resulting in the formation of glutamate. This procedure entails the
utilisation of transaminases, which are frequently the identical enzymes employed in amination throughout the synthesis process. In numerous animals, the amino group is subsequently eliminated via the
urea cycle and excreted as urea. Nevertheless, the breakdown of amino acids might yield uric acid or ammonia as byproducts. As an illustration, serine dehydratase catalyses the conversion of serine into
pyruvate and ammonia. Following the elimination of one or more amino groups, the remaining portion of the molecule can occasionally be utilised to produce fresh amino acids, or it can be utilised for energy
by entering glycolysis or the citric acid cycle, as depicted in the graphic on the right.

Complexation

Amino acids are bidentate ligands, forming transition metal amino acid complexes.[137]

1.9 Chemical analysis

The total nitrogen content of organic matter is derived mainly from the amino groups in proteins. Total Kjeldahl Nitrogen (TKN) is a commonly employed metric for quantifying nitrogen content in the
analysis of water, soil, food, feed, and organic matter in general. As the name implies, the Kjeldahl method is used; more sensitive alternative approaches also exist.
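A TKN figure is routinely converted to a crude protein estimate with a nitrogen-to-protein factor. A minimal sketch, assuming the conventional factor of 6.25 (proteins are roughly 16% nitrogen by mass; the factor varies by sample matrix, and the function name is illustrative):

```python
def protein_from_tkn(tkn_mg_per_g, factor=6.25):
    """Estimate crude protein (mg/g) from Total Kjeldahl Nitrogen (mg/g).

    factor=6.25 is the generic nitrogen-to-protein conversion factor
    (1 / 0.16); matrix-specific factors (e.g. for dairy or cereals)
    differ and should be used when known.
    """
    return tkn_mg_per_g * factor

# A sample containing 25 mg nitrogen per gram corresponds to roughly
# 156 mg crude protein per gram under the generic factor.
print(protein_from_tkn(25.0))  # 156.25
```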

1.11 See also

 Biology portal

 Chemistry portal

 Amino acid dating


 Beta-peptide
 Degron
 Erepsin
 Homochirality
 Hyperaminoacidemia



 Leucines
 Miller–Urey experiment
 Nucleic acid sequence
 RNA codon table

1.12 Notes

1. ^ The late discovery is explained by the fact that cysteine becomes oxidized to cystine in air.
2. ^ Proline and other cyclic amino acids are an exception to this general formula. Cyclization of the α-amino acid creates the corresponding secondary amine. These are occasionally referred to
as imino acids.
3. ^ The L and D convention for amino acid configuration refers not to the optical activity of the amino acid itself but rather to the optical activity of the isomer of glyceraldehyde from which that
amino acid can, in theory, be synthesized (D-glyceraldehyde is dextrorotatory; L-glyceraldehyde is levorotatory). An alternative convention is to use the (S) and (R) designators to specify
the absolute configuration.[29] Almost all of the amino acids in proteins are (S) at the α carbon, with cysteine being (R) and glycine non-chiral.[30] Cysteine has its side chain in the same
geometric location as the other amino acids, but the R/S terminology is reversed because sulfur has a higher atomic number than the carboxyl oxygen, which gives the side chain a
higher priority by the Cahn–Ingold–Prelog sequence rules.
4. ^ For example, ruminants such as cows obtain a number of amino acids via microbes in the first two stomach chambers.

1.13 References

1. ^ Nelson DL, Cox MM (2005). Principles of Biochemistry (4th ed.). New York: W. H. Freeman. ISBN 0-7167-4339-6.
2. ^ Flissi, Areski; Ricart, Emma; Campart, Clémentine; Chevalier, Mickael; Dufresne, Yoann; Michalik, Juraj; Jacques, Philippe; Flahaut, Christophe; Lisacek, Frédérique; Leclère, Valérie;
Pupin, Maude (2020). "Norine: update of the nonribosomal peptide resource". Nucleic Acids Research. 48 (D1): D465–D469. doi:10.1093/nar/gkz1000. PMC 7145658. PMID 31691799.
3. ^ Richard Cammack, ed. (2009). "Newsletter 2009". Biochemical Nomenclature Committee of IUPAC and NC-IUBMB. Pyrrolysine. Archived from the original on 12 September 2017.
Retrieved 16 April 2012.
4. ^ Rother, Michael; Krzycki, Joseph A. (1 January 2010). "Selenocysteine, Pyrrolysine, and the Unique Energy Metabolism of Methanogenic Archaea". Archaea. 2010: 1–
14. doi:10.1155/2010/453642. ISSN 1472-3646. PMC 2933860. PMID 20847933.

Protein structure

Proteins are the final outcomes of the decoding process that originates from the information contained in cellular DNA. Proteins are essential components of cells, responsible for both structural and motor
functions. They also operate as catalysts, facilitating almost all biochemical reactions in living organisms. This extensive range of functions arises from a remarkably straightforward code that defines a wide
variety of structures.

Indeed, every gene within the cellular DNA harbours the blueprint for a distinct protein configuration. These proteins exhibit not only distinct amino acid sequences, but also diverse bonding patterns and a
wide range of three-dimensional conformations. The conformation of a protein is directly determined by its linear amino acid sequence.

Proteins are composed of amino acids.

Amino acids are the fundamental constituents of proteins. They are tiny chemical compounds composed of an alpha (central) carbon atom connected to an amino group, a carboxyl group, a hydrogen atom,
and a variable component known as a side chain (as shown below). Peptide bonds connect numerous amino acids within a protein, resulting in the formation of a lengthy chain. Peptide bonds are created
through a biological process that removes a water molecule while connecting the amino group of one amino acid to the carboxyl group of an adjacent amino acid. The fundamental structure of a protein
refers to the linear arrangement of amino acids within it.

Proteins are constructed using a specific group of twenty amino acids, each of which possesses a distinct side chain. Amino acids possess side chains with distinct chemical properties. The majority of
amino acids possess hydrophobic side chains. Several other amino acids include side chains that exhibit either positive or negative charges, whereas others possess polar side chains that are uncharged.
The chemical properties of amino acid side chains are crucial for protein structure as they have the ability to establish bonds with each other, thereby stabilising the protein in a specific shape or
conformation. Charged amino acid side chains have the ability to create ionic interactions, while polar amino acids have the capacity to make hydrogen bonds. Hydrophobic side chains engage in weak van
der Waals interactions with one another. Most of the bonds produced by these side chains are noncovalent. Cysteines are the exclusive amino acids that have the ability to create covalent connections,
namely through their unique side chains. The arrangement and positioning of amino acids in a certain protein are influenced by side chain interactions. This, in turn, determines the locations and patterns of
bends and folds in the protein (Figure 1).

The diagram consists of three parts, illustrating the fundamental chemical structure of an amino acid (at the top), the basic chemical structure of a polypeptide (in the centre), and the idealised structure of a
folded polypeptide chain forming loops (at the bottom). The interaction between amino acids on distinct loops is represented by a dotted line.

Figure 1: The correlation between the side chains of amino acids and the structure of proteins

An amino acid is primarily characterised by its side chain, which is represented by a blue circle at the top and coloured circles below. Amino acids combine through peptide bonds to create a polypeptide,
which is another term for protein. The polypeptide will thereafter adopt a distinct shape based on the interactions (shown by dashed lines) among its amino acid side chains.

The content of this text is protected by copyright and may not be used without permission from Nature Education.



Detailed description or analysis of a figure

A schematic representation of the protein in the form of a ribbon diagram The image depicts bacteriorhodopsin. The protein consists of many elongated, vertical helices. The term "alpha helix" is used to
refer to a single coil. The alpha helix loop undergoes a process of uncoiling, resulting in a flattened and elongated structure resembling a wide and flat spaghetti noodle. The flattened area is designated as a
beta sheet, with an arrow indicating that one portion of the curved sheet is aligned in one direction, while the other portion is aligned in the opposite direction.

Figure 2 displays the molecular arrangement of the protein bacteriorhodopsin.

Bacteriorhodopsin is a bacterial membrane protein that functions as a proton pump. Its shape is crucial to its proper functioning. The protein's overall structure comprises alpha helices (green)
and beta sheets (red).

Copyright 2010 Nature Education. All rights reserved.

The amino acid sequence, known as the fundamental structure of a protein, is responsible for the folding and formation of intramolecular bonds within the linear chain of amino acids. This process eventually
dictates the distinctive three-dimensional shape of the protein. The occurrence of specific folding patterns in the protein chain is occasionally caused by hydrogen bonding between amino groups and
carboxyl groups in adjacent areas. The secondary structure of a protein is composed of stable folding patterns called alpha helices and beta sheets. The majority of proteins consist of numerous helices and
sheets, along with other infrequent configurations (Figure 2). The tertiary structure of a protein is formed by the arrangement of forms and folds in a linear chain of amino acids, also known as a polypeptide.
The quaternary structure of a protein pertains to macromolecules that consist of several polypeptide chains or subunits.

The ultimate conformation assumed by a recently produced protein generally corresponds to the most thermodynamically favourable state. During the process of protein folding, various conformations are
explored before the protein adopts its distinct and compact final structure. Protein folding is supported by many noncovalent bonds formed between amino acids. Furthermore, the intermolecular interactions
between a protein and its surrounding environment play a significant role in determining the protein's conformation and stability. For instance, the proteins present in the cell cytoplasm are solubilized and
possess hydrophilic chemical groups on their surfaces, while their hydrophobic components are often concealed within. Conversely, the proteins that are incorporated into the cell membranes have
hydrophobic chemical groups on their surface, particularly in the areas where the protein surface comes in contact with membrane lipids. It is crucial to acknowledge that completely folded proteins are not
immobilised in their shape. Instead, the atoms within these proteins retain the ability to undergo subtle motions.

Despite being classified as macromolecules, proteins are too small to be visualised, even under the magnification of a microscope. Therefore, scientists are compelled to employ indirect techniques to
ascertain their appearance and structural configuration. X-ray crystallography is the predominant technique employed for investigating protein structures. This technique involves the placement of purified
protein crystals in an X-ray beam, which then deflects the X-rays. By analysing the pattern of deflected X-rays, it becomes possible to determine the precise positions of the many atoms present within the
protein crystal.

What is the process by which proteins achieve their ultimate conformations?

In theory, proteins achieve their ultimate structures without requiring any energy input once their constituent amino acids are linked together. In practice, however, the cytoplasm is a densely populated environment, teeming with
several different macromolecules that have the ability to interact with a protein that is only half folded. Improper interactions with adjacent proteins can disrupt correct protein folding and lead to the formation
of extensive protein aggregates within cells. Cells depend on chaperone proteins to prevent improper interactions with undesired folding partners.

Chaperone proteins envelop a protein during the process of folding, isolating the protein until the folding is finished. For instance, in bacteria, several molecules of the chaperone GroEL assemble to create a
vacant enclosure around proteins undergoing folding. Subsequently, GroES molecules assemble to create a cover on top of the compartment. Eukaryotes employ distinct families of chaperone proteins, yet
with comparable mechanisms of action.

Cells contain a high concentration of chaperone proteins. These chaperones utilise ATP energy to selectively attach and detach polypeptides during the process of folding. Chaperones also aid in the
process of protein refolding within cells. Proteins that have been folded are inherently delicate structures that are prone to denaturation, or the process of unfolding. While proteins are held together by many
connections, the majority of these bonds are noncovalent and relatively weak. Even in typical conditions, a fraction of biological proteins remain in an unfolded state. A slight elevation in body temperature
can greatly accelerate the process of unfolding. In such instances, the process of mending old proteins via chaperones is significantly more effective than the synthesis of new proteins. Cells exhibit the
synthesis of supplementary chaperone proteins as a reaction to "heat shock."

Protein families refer to groups of proteins that share common structural and functional characteristics.

Proteins engage in molecular interactions to carry out their functions, and the specific role of a protein is determined by the manner in which its accessible surfaces interact with other molecules. Proteins that
possess similar structures have a tendency to interact with specific molecules in comparable manners, therefore classifying them as a protein family. Proteins belonging to a specific family exhibit a tendency
to carry out comparable tasks within the cellular environment.

Proteins belonging to the same family frequently exhibit extended regions of analogous amino acid sequences in their fundamental structure. These sequences have been preserved throughout the process
of evolution and are essential for the catalytic function of the protein. Cell receptor proteins exhibit distinct amino acid sequences at their binding sites, enabling them to receive chemical signals from the
external environment. However, they display more similarity in the amino acid sequences that interact with shared intracellular signalling proteins. Protein families often consist of several members, which are
believed to have originated from ancestral gene duplications. These duplications resulted in alterations of protein functions and increased the range of activities available to organisms over time.



In conclusion

Proteins are constructed as sequences of amino acids, which subsequently undergo a process of folding to acquire distinct three-dimensional configurations. Intermolecular bonding has a crucial role in
stabilising the structure of protein molecules, resulting in highly functional folded protein structures.


https://www.nature.com/scitable/topicpage/protein-structure-14122136/#:~:text=Within%20a%20protein%2C%20multiple%20amino,of%20a%20neighboring%20amino%20acid.

2 3.1: Amino Acids and Peptides



 Henry Jakubowski and Patricia Flatt

 College of St. Benedict/St. John's University and Western Oregon University


2.1 Introduction

Proteins are one of the most abundant organic molecules in living systems and have the most diverse range of functions of all macromolecules. Proteins can have several functions such as providing
structure, regulating processes, facilitating contraction, or offering protection. They can also be involved in transportation, storage, or forming membranes. Additionally, proteins can operate as poisons or
enzymes. Every cell within a living organism has the potential to house numerous distinct proteins, each serving a specific and exclusive purpose. Their architecture, as well as their functions, exhibit
significant variation. All of them, however, are polymers composed of alpha amino acids, organised in a linear sequence and linked by covalent bonds.

1.1 Structure of Alpha Amino Acids

Alpha (α) amino acids are the fundamental components of proteins. They possess a carboxylic acid functional group and an amine functional group, as indicated by their name. The alpha label is employed
to signify that these two functional groups are positioned apart from each other by a single carbon group. Aside from the amine and carboxylic acid, the alpha carbon is additionally bonded to hydrogen and
another group that might differ in size and length. In the shown diagram, this particular group is identified as an R-group. Living organisms utilise a set of 20 widely occurring amino acids as fundamental
components for constructing proteins. They exhibit variation only at the R-group position. Figure 3.1.1 displays the structure of an amino acid when it is completely protonated, which occurs at low pH.

Figure 3.1.1: Structure of an Amino Acid

Each of the twenty naturally-occurring amino acids possesses an alpha-carbon, an amino group, a carboxylic acid group, and a R group (also known as a side chain). The side chains of the R group can
have three types: nonpolar, polar and uncharged, or charged. The specific type depends on the functional group, the pH, and the pKa of any ionizable group in the side chain.
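The dependence of side-chain charge on pH and pKa follows the Henderson–Hasselbalch relationship, which can be checked with a short calculation. A minimal Python sketch (the function name is illustrative):

```python
def fraction_deprotonated(pH, pKa):
    """Henderson-Hasselbalch: fraction of an ionizable group present in
    its deprotonated (conjugate-base) form at a given pH.

    pH - pKa = log10([A-]/[HA]), so the base/acid ratio is 10**(pH - pKa).
    """
    ratio = 10 ** (pH - pKa)
    return ratio / (1 + ratio)

# A carboxylate side chain with pKa ~ 4.0 is almost entirely
# deprotonated (negatively charged) at physiological pH 7.4.
print(fraction_deprotonated(7.4, 4.0))
```

At pH equal to the pKa the group is, by definition, half protonated and half deprotonated.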

Proteins may sometimes contain two additional amino acids. Selenocysteine is present in Archaea, eubacteria, and mammals. Another example is pyrrolysine, which is found in Archaea. Bacteria have
been genetically engineered to include two more amino acids, namely O-methyl-tyrosine and p-aminophenylalanine. The yeast strain Saccharomyces cerevisiae has been genetically changed to include five
novel amino acids that are not naturally occurring. This was achieved by utilising the TAG nonsense codon together with newly designed and modified tRNA and tRNA synthetases. These amino acids
possess keto groups, which enable chemical alterations to be made to the protein. We shall focus exclusively on the 20 plentiful, naturally-occurring amino acids.

Figure 3.1.2 depicts the twenty alpha-amino acids that occur naturally, illustrating their internal arrangement within a protein structure. The squiggles indicate the participation of the alpha-amino and
carboxyl groups in forming peptide bonds with neighbouring amino acids in the protein sequence. Many students mistakenly believe that the alpha-amino and carboxyl groups present in a protein sequence
are unbound and separate from the peptide bond. This diagram will aid in dispelling that misperception. The amino acids are represented by three-letter and one-letter acronyms, along with their
corresponding usual pKa values. Memorising the three-letter and one-letter codes for the amino acids is crucial.



Figure 3.1.2: The side chains of naturally occurring amino acids embedded within a protein.

Amino acids polymerize by nucleophilic attack, where the amino group of one amino acid reacts with the electrophilic carbonyl carbon of the carboxyl group of another amino acid. In order to enhance the
reactivity of the carboxyl group of the amino acid, it must be activated to provide a more effective leaving group than OH-. The bond formed between the amino acids is an amide link, which is
commonly referred to as a peptide bond by biochemists. Water is liberated in this reaction. In the reverse process, hydrolysis can cleave the peptide bond. This is depicted in Figure 3.1.3.

Figure 3.1.3: Amino acids undergo a chemical reaction to combine and produce a dipeptide.

Proteins consist of chains of the twenty naturally occurring amino acids, forming polymers. Nucleic acids, by contrast, are polymers of only four distinct monomeric nucleotides. The
protein's sequence and overall length are distinguishing factors between various proteins. For a single octapeptide, there exist more than 25 billion distinct potential sequences of amino acids (20^8).
Contrast this with a mere 65,536 distinct oligonucleotides consisting of 8 monomeric units built from 4 different monomeric deoxynucleotides, known as an 8-mer (4^8). Consequently, the potential variety of
proteins is immense.
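The sequence-space arithmetic above is easy to verify directly: an n-mer built from an alphabet of size k admits k**n distinct sequences.

```python
# Number of distinct sequences of length 8 for each alphabet.
octapeptides = 20 ** 8   # 20 amino acids -> 25,600,000,000 (> 25 billion)
dna_octamers = 4 ** 8    # 4 deoxynucleotides -> 65,536

print(octapeptides)
print(dna_octamers)
print(octapeptides // dna_octamers)  # proteins offer ~390,625x more variety
```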

A dipeptide is formed when two amino acids are joined together through an amide bond. Similarly, we can also possess tripeptides, tetrapeptides, and other forms of polypeptides. Once the structure
reaches a certain length, it is referred to as a protein. The mean molecular weight of proteins in yeast is approximately 50,000, corresponding to roughly 450 amino acids. The largest known protein is
titin, which has a molecular weight of approximately 3 million daltons, corresponding to roughly 27,000 amino acids. Recently, a novel category of extremely compact proteins (consisting of 30 or fewer amino
acids) has been identified, which may be more accurately referred to as polypeptides. These proteins, known as smORFs (small open reading frames), have been found to possess notable biological
functionality. These are encoded directly in the genome and are synthesised by the same mechanisms that generate standard proteins (DNA transcription and RNA translation). These peptide fragments are
not produced through the process of selectively breaking down a bigger protein.

Figure 3.1.4 illustrates various methods of depicting the arrangement of a polypeptide or protein, each conveying varying levels of detail. Note that the atoms on the side chains are
designated as alpha, beta, gamma, delta, epsilon, and so on.

Figure 3.1.4: Various methods of depicting the arrangement of a peptide/protein sequence.

1.2 Characteristics of Amino Acids:

The various R-groups exhibit distinct features depending on the atoms integrated into the functional groups. R-groups consisting primarily of carbon and hydrogen are very nonpolar or hydrophobic. Others
include polar uncharged functional groups such as alcohols, amides, and thiols. Some amino acids possess basic properties due to the presence of amine functional groups, while others exhibit acidic
properties due to the presence of carboxylic acid functional groups. These amino acids possess the ability to generate complete charges and can engage in ionic interactions. A three-letter and a one-letter
code can be used to abbreviate each amino acid. Figure 3.1.5 illustrates the categorization of amino acids according to their side chain characteristics.
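The side-chain categories described above can be sketched as a simple grouping by one-letter code. Assignments of borderline residues (e.g. glycine, cysteine) vary between textbooks, and the names here are illustrative:

```python
# Grouping of the 20 standard amino acids by side-chain character.
SIDE_CHAIN_CLASS = {
    "nonpolar":        set("GAVLIPMFW"),  # hydrophobic
    "polar_uncharged": set("STCYNQ"),
    "acidic":          set("DE"),         # negatively charged at pH 7
    "basic":           set("KRH"),        # positively charged at pH 7
}

def classify(aa):
    """Return the side-chain class for a one-letter amino acid code."""
    for cls, members in SIDE_CHAIN_CLASS.items():
        if aa in members:
            return cls
    raise ValueError(f"unknown amino acid code: {aa}")

print(classify("D"))  # acidic
print(classify("L"))  # nonpolar
```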



Figure 3.1.5 illustrates the structures of the 20 alpha amino acids employed in protein synthesis. The R-groups are denoted by the circled or coloured part of each molecule. Colours represent distinct
categories of amino acids: hydrophobic amino acids are shown in green and yellow, hydrophilic polar uncharged amino acids in orange, hydrophilic acidic amino acids in blue, and hydrophilic basic amino
acids in rose.


Nonpolar, Hydrophobic Amino Acids

The nonpolar amino acids can be classified into two distinct classes: the aliphatic amino acids and the aromatic amino acids. The aliphatic amino acids, namely glycine, alanine, valine, leucine, isoleucine, and proline, have hydrocarbon side chains that range from the single hydrogen of glycine to the branched chains of valine, leucine, and isoleucine.
Proline is categorised as an aliphatic amino acid; however, it possesses distinctive characteristics due to the cyclization of its hydrocarbon chain with the terminal amine, resulting in a distinct 5-membered
ring structure. In the upcoming section on primary structure, we will observe that proline has a notable impact on the 3-dimensional structure of the protein. This is due to the rigid ring structure of proline
when it is included in the polypeptide chain. Proline is commonly found in regions of the protein where folds or turns take place.

The aromatic amino acids, namely phenylalanine, tyrosine, and tryptophan, possess an aromatic functional group in their structure, which renders them predominantly nonpolar and hydrophobic owing to
their high carbon/hydrogen composition. It is important to acknowledge that hydrophobicity and hydrophilicity exist on a continuum, and the physical and chemical properties of each amino acid can vary
based on its structure. For instance, the presence of a hydroxyl group in tyrosine enhances its reactivity and solubility in comparison to phenylalanine.

Methionine, a sulfur-containing amino acid, is typically categorised as nonpolar and hydrophobic: its terminal methyl group forms a thioether functional group, which lacks a significant permanent dipole, and the side chain has low solubility in water.

1.2 Hydrophilic Amino Acids

The hydrophilic amino acids with polar characteristics can be categorised into three primary groups based on their functional groups: polar uncharged, acidic, and basic. In the category of polar uncharged
compounds, the side chains consist of heteroatoms (O, S, or N) that have the ability to create persistent dipoles within the R-group. The amino acids serine, threonine, and cysteine include hydroxyl and
sulfhydryl groups, while glutamine and asparagine contain amide groups. The acidic amino acids, glutamic acid (glutamate) and aspartic acid (aspartate), are composed of side chains that have carboxylic
acid functional groups. These functional groups have the ability to completely ionise in a solution. Lysine, arginine, and histidine, the basic amino acids, possess amine functional groups that can undergo protonation, resulting in the acquisition of a full positive charge.

Several amino acids with hydrophilic R-groups participate in the active sites of enzymes. The active site of an enzyme is the specific region where it binds directly to a substrate and facilitates a chemical
reaction. Enzymes produced from proteins possess catalytic groups composed of amino acid R-groups that facilitate the creation and breakdown of chemical bonds. The amino acids that contribute
significantly to the binding specificity of the active site are typically not contiguous in the primary structure. Instead, they come together to create the active site through the process of folding, resulting in the
tertiary structure. This concept will be further explained in a later section of the chapter.

Consideration 3.1.1

Answer: Despite containing an amine functional group, tryptophan is not basic, because other structural features neutralise the amine's basic properties.

Tryptophan possesses an indole ring structure that incorporates the amine functional group. However, because the adjacent aromatic ring system tends to attract electrons, the lone pair of electrons on the nitrogen atom cannot be used to accept a proton. Instead, it participates in the formation of pi-bonds in the many resonance configurations of the indole ring. Figure 2.3A displays four resonance



configurations that are potential representations of indole. In contrast, the imidazole ring structure present in histidine contains two nitrogen atoms. One of these nitrogen atoms (Nitrogen #1 in Figure 2.3B) is
involved in the creation of resonance structures and cannot receive a proton. On the other hand, the second nitrogen atom (Nitrogen #3) possesses a lone pair of electrons that can accept a proton.

Figure 2.3: Comparison of the structural availability of the lone pair of electrons on nitrogen to accept a proton in the indole and imidazole ring structures. (A) Four resonance structures of the indole ring demonstrate that the lone pair of electrons on the nitrogen is involved in the formation of pi-bonds. (B) The imidazole ring structure has one nitrogen (1) that is involved in resonance structures (not shown) and is not available to accept a proton, while the second nitrogen (3) has a lone pair of electrons available to accept a proton.


Exercise 3.1.1

Solve it independently:

Using a chemical diagram, explain why the amide nitrogen atoms present in asparagine and glutamine do not exhibit basic properties.

1.3 Amino Acid Stereochemistry

All amino acids except glycine are chiral; glycine is achiral because its side chain is a single hydrogen atom, which gives the alpha carbon two identical substituents. A chiral molecule is one that cannot be superimposed on its mirror image. Chiral molecules possess the same attachments in the same sequence, resembling the arrangement of a thumb and fingers on left and right hands, yet they are mirror reflections of each other and not identical. Such mirror-image pairs are called enantiomers, and the two compounds share the same name. Enantiomers exhibit almost indistinguishable physical properties, posing significant challenges in their differentiation and separation; they differ, however, in their ability to rotate plane-polarized light and in their interactions with other chiral molecules, including biological ones. Molecules that rotate plane-polarized light in a clockwise direction are referred to as dextrorotary and are denoted by a lowercase "d"; molecules that rotate it in the anticlockwise direction are referred to as levorotary and are denoted by a lowercase "l". Biochemists also employ the traditional nomenclature of uppercase "L" and "D" to describe the three-dimensional stereochemistry of the amino acids; the designation is assigned by structural analogy to L-glyceraldehyde, and all proteins found in living organisms are composed of L-amino acids.

The d- and l-designations are precise terminology used to describe the optical rotation of a molecule on plane-polarized light. It does not indicate the precise stereochemical arrangement of a molecule. An
absolute configuration pertains to the spatial organisation of the atoms of a chiral molecule or group, and is described using terms such as R or S, which stand for Rectus and Sinister, respectively. X-ray
crystallography is the most common method used to determine the absolute configurations of chiral molecules in their pure state. The alternative techniques include optical rotatory dispersion, vibrational
circular dichroism, the utilisation of chiral shift reagents in proton NMR, and Coulomb explosion imaging. The determination of whether a molecule has a R or S configuration is made using the Cahn–Ingold–
Prelog priority rules, after the absolute configuration is known. The absolute stereochemistry is associated with L-glyceraldehyde, as depicted in Figure 3.1.6 below.



All amino acids found in proteins occur naturally in the L form, which corresponds to the S configuration, except for cysteine (which is R). The absolute configuration of an amino acid can be depicted by positioning the H atom towards the rear, the COOH group to the left, the R group to the right, and the NH3+ group at the top, as shown in the bottom left of Figure 3.1.6. This arrangement is easily recalled using the mnemonic "CORN".

Figure 3.1.6 illustrates the stereochemistry of amino acids.

Why does Biochemistry continue to use the D and L nomenclature for sugars and amino acids? The following explanation, sourced from an unattributed website, is plausible.

Furthermore, chemists frequently need to establish a configuration without any reference compound, and in such cases the (R,S) system is particularly advantageous, as it employs priority criteria to define configurations precisely. These rules occasionally give nonsensical outcomes when applied to biological compounds. As observed, all of the commonly occurring amino acids are L-isomers because they share the same arrangement of groups around the alpha carbon. However, they do not all possess identical designations in the (R,S) system: L-cysteine is (R)-cysteine, an exception among the L-amino acids, all the others being (S). This distinction is merely a result of the priority given to a sulphur atom over a carbon atom and does not indicate an actual difference in configuration. Substitution reactions can occasionally lead to more severe issues: in some cases the configuration can be inverted without any alteration to the (R) or (S) prefix, while in other cases the configuration can be retained but with a change in the prefix.

Therefore, it is not solely conservatism or a lack of comprehension of the (R,S) system that leads biochemists to persist with D and L; rather, the D/L system better satisfies their requirements. Chemists likewise use the designations D and L when these suit their specific needs. The claim that the (R,S) system sees only limited use in biochemistry is, in fact, nearly the complete reverse of the truth: the (R,S) system is the sole feasible method for accurately depicting the stereochemistry of complex molecules containing several asymmetric centres, but it is inconvenient for regular series of molecules such as amino acids and simple sugars. If you are asked to accurately depict the stereochemistry of a molecule containing one chiral carbon (say, the S isomer) and are given the substituents, you can easily do so by following the R,S priority rules. How, though, would you accurately depict the L isomer of the amino acid alanine? Without prior knowledge of the absolute configuration of the reference molecule, L-glyceraldehyde, or without recalling the mnemonic CORN, you could not do it. Nevertheless, biochemists find the (R,S) nomenclature unattractive because L-amino acids sharing the same absolute stereochemistry can be labelled as either R or S.

1.4 Amino Acid Charges

Monomeric amino acids consist of an alpha-amino group and a carboxyl group, both of which can exist in a protonated or deprotonated state, together with an R group, some of which can also be protonated or deprotonated. When protonated, the amino group carries a +1 charge while the carboxyl group is uncharged; upon deprotonation, the amino group becomes neutral while the carboxyl group carries a -1 charge. The R groups that can undergo protonation/deprotonation include lysine, arginine, and histidine, which carry a +1 charge when protonated, and glutamic acid (glutamate) and aspartic acid (aspartate) (carboxylic acids), tyrosine and serine (alcohols), and cysteine (thiol), which carry no charge when protonated. When amino acids are connected by peptide bonds (amide links), the alpha N and the carboxyl C form an uncharged amide linkage.

Nevertheless, the amino group of the N-terminal amino acid and the carboxyl group of the C-terminal amino acid of a protein can still carry an electric charge. The Henderson-Hasselbalch equation provides a method for determining the charge state of any ionizable group from the group's known pKa value. Treat each functional group that can lose a proton as an acid, HA, and refer to the deprotonated form as A-. The charges of HA and A- are determined by the identity of the functional group and by the Henderson-Hasselbalch equation from Chapter 2.

pH = pKa + log([A-]/[HA])

The graph below illustrates the titration curve of a monoprotic acid with varying pKa values.

At the point of inflection on the curve, where pH equals pKa, the system exhibits maximum resistance to pH change upon the addition of either acid or base. At this pH, the concentration of HA is equal to the concentration of A-.

The characteristics of a protein are influenced by the presence or absence of charge in the side chain functional groups, as well as in the N-terminal and C-terminal regions. The Henderson-Hasselbalch
equation states that this will be contingent upon the pH and the pKa of the functional group.

If the pH is 2 units below the pKa, the Henderson-Hasselbalch equation gives -2 = log([A-]/[HA]), i.e. [A-]/[HA] = 0.01. Consequently, the functional group will be around 99% protonated, resulting in either a neutral charge or a +1 charge, depending on the specific functional group.

If the pH is 2 units above the pKa, the equation gives 2 = log([A-]/[HA]), i.e. [A-]/[HA] = 100. Consequently, the functional group will be about 99% deprotonated.

When the pH is equal to the pKa, the equation reduces to 0 = log([A-]/[HA]), i.e. [A-]/[HA] = 1. Consequently, the functional group will be 50% deprotonated.

This "±2 rule" follows from these simple cases. It allows one to rapidly determine the protonation, and therefore the charge state, of a group, and it is exceedingly useful to know (and simple to derive). The titration curves for glycine (with a non-ionizable side chain), glutamic acid (with a carboxylic acid side chain), and lysine (with an amine side chain) are shown in Figure 3.1.7. You should be able to correlate the segments of these curves with the titration of the specific ionizable groups in each amino acid.
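The three cases above can be checked numerically. The sketch below (Python; the pKa of 4.7 is simply an example value for a carboxylic acid group) evaluates the Henderson-Hasselbalch equation at pH = pKa - 2, pKa, and pKa + 2:

```python
def fraction_deprotonated(pH, pKa):
    """Henderson-Hasselbalch: fraction of the group in the A- form."""
    ratio = 10 ** (pH - pKa)      # [A-]/[HA]
    return ratio / (1 + ratio)

# Two pH units below the pKa: ~1% deprotonated (i.e. ~99% protonated);
# at pH = pKa: 50%; two pH units above the pKa: ~99% deprotonated.
pKa = 4.7  # example value, roughly a carboxylic acid side chain
for pH in (pKa - 2, pKa, pKa + 2):
    print(f"pH {pH:.1f}: {100 * fraction_deprotonated(pH, pKa):.0f}% deprotonated")
```

The printed fractions reproduce the 99%/50%/1% pattern of the rule.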



Figure 3.1.7: Titration curves for Gly, Glu, and Lys



Access the Excel spreadsheet containing titration curves for a triprotic acid. The spreadsheet features customisable scroll bars for modifying the pKa values.

1.1 Review of Buffers

The Henderson-Hasselbalch equation is additionally valuable for determining the composition of buffer solutions. Buffer solutions consist of a weak acid and its corresponding conjugate base. Examine the
equilibrium of a weak acid, such as acetic acid, and its corresponding conjugate base, acetate:

CH3CO2H + H2O ⇌ H3O+ + CH3CO2-

The pH of a buffer solution containing equal amounts of acetic acid and acetate can be calculated from pH = pKa + log([A-]/[HA]). With pKa = 4.7 and [A-]/[HA] = 1, the resulting pH is 4.7.

Examining the titration curve for the carboxyl group of Gly (shown above) reveals that at pH = pKa the curve has its minimum slope, indicating the least change in pH upon the addition of either base or acid. Buffer solutions can typically be prepared for a weak acid or base within a pH range of around +/- 1 unit of the pKa. The buffer exhibits its maximum buffering capacity when the pH equals the pKa, since it then most effectively resists the addition of both acid and base: the weak acid can react with added strong base to form the weak conjugate base, and the conjugate base can react with added strong acid to regenerate the weak acid. This minimises the pH fluctuations that occur on addition of strong acid or base.

Addition of a strong base forms the weak conjugate base: CH3CO2H + OH- ↔ CH3CO2- + H2O

Addition of a strong acid regenerates the weak acid: H3O+ + CH3CO2- → CH3CO2H + H2O

There are two straightforward methods to create a buffered solution. Let's examine a solution that consists of acetic acid and acetate, which is known as an acetic acid/acetate buffer solution.

• Prepare equimolar solutions of acetic acid and sodium acetate. Combine the solutions while monitoring with a pH meter until the pH reaches the desired value, within a range of plus or minus one unit of the pKa.

• Prepare a solution of acetic acid and gradually add NaOH in less than the stoichiometric amount until the desired pH is reached, within a range of +/- 1 unit of the pKa. With this approach, the added strong base converts part of the acetic acid into its conjugate base, acetate.

CH3CO2H + OH- → CH3CO2- + H2O

• pH control buffers: Recipes utilising pKas for acid strength, temperature, and ionic strength
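The second preparation method can be made quantitative: the Henderson-Hasselbalch equation fixes the [A-]/[HA] ratio required for the target pH, and each mole of added NaOH converts one mole of acetic acid into acetate. The helper below is a hypothetical sketch that ignores activity effects and assumes the target pH lies within about one unit of the pKa:

```python
def naoh_for_target_ph(acid_mol, target_pH, pKa=4.7):
    """Moles of NaOH to add to acid_mol moles of a weak acid so that the
    resulting buffer sits at target_pH (each mole of OH- converts one
    mole of HA to A-)."""
    ratio = 10 ** (target_pH - pKa)        # required [A-]/[HA]
    return acid_mol * ratio / (1 + ratio)  # moles ending up as A-

# 0.10 mol acetic acid titrated to pH 5.0 needs ~0.067 mol NaOH
print(f"{naoh_for_target_ph(0.10, 5.0):.3f} mol NaOH")
```

As a sanity check, titrating to pH = pKa requires exactly half the stoichiometric amount of base, since [A-]/[HA] must equal 1.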

1.2 The isoelectric point

If a molecule, such as a polypeptide or protein, contains several ionizable groups, what are the consequences? Examine a protein. At a pH of 2, all ionizable groups would be fully protonated, resulting in a
net positive charge for the protein. (Please note that when carboxylic acid side chains are protonated, their overall charge is neutral.) As the pH increases, the most acidic groups will undergo deprotonation,
resulting in a decrease in the net positive charge. Under high pH conditions, the ionizable groups within the protein will undergo deprotonation due to the presence of a strong base, resulting in an overall
negative charge of the protein. At a specific pH value, the overall charge will be neutral. The pH at which a substance has a net charge of zero is referred to as the isoelectric point (pI). The isoelectric point
(pI) can be calculated by taking the average of the pKa values of the two groups that are nearest to and include the pI. One of the online problems will provide a more detailed explanation of this matter. It will
provide a list of isoelectric points (pI) and molecular weights (MW) for proteins obtained from 2D gels.
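The definition of the pI as the pH of zero net charge lends itself to a small numerical sketch: sum the Henderson-Hasselbalch charge contribution of each ionizable group and locate the zero crossing by bisection. The function names are illustrative, and the glycine pKa values used (2.3 and 9.6) are approximate textbook values:

```python
def net_charge(pH, pKas_acidic, pKas_basic):
    """Approximate net charge at a given pH.
    pKas_acidic: groups neutral when protonated (COOH, Tyr-OH, Cys-SH).
    pKas_basic:  groups +1 when protonated (alpha-amino, Lys, Arg, His)."""
    neg = sum(-1.0 / (1.0 + 10 ** (pKa - pH)) for pKa in pKas_acidic)
    pos = sum(+1.0 / (1.0 + 10 ** (pH - pKa)) for pKa in pKas_basic)
    return pos + neg

def isoelectric_point(pKas_acidic, pKas_basic):
    """Find the pH of zero net charge by bisection over pH 0-14."""
    lo, hi = 0.0, 14.0
    for _ in range(60):
        mid = (lo + hi) / 2
        if net_charge(mid, pKas_acidic, pKas_basic) > 0:
            lo = mid   # still net positive: pI lies at higher pH
        else:
            hi = mid
    return (lo + hi) / 2

# Glycine: alpha-COOH pKa ~2.3, alpha-NH3+ pKa ~9.6
print(round(isoelectric_point([2.3], [9.6]), 2))  # 5.95
```

For glycine the result matches the simple rule of averaging the two bracketing pKas: (2.3 + 9.6)/2 = 5.95.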

It is important to note that pKa is essentially a measure of the equilibrium constant for the deprotonation reaction. Recall that the standard free energy change is ΔG° = -RT ln Keq, where R is the gas constant and T the temperature. Thus, the pKa value is unaffected by changes in concentration and is determined solely by the intrinsic stability of the reactants relative to the products. This holds true only under a given set of conditions, such as temperature, pressure, and solvent.
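The connection between pKa, Keq, and ΔG° can be made concrete with a quick calculation (a sketch; the pKa of 4.7 for acetic acid in water is the approximate value used in the text):

```python
import math

R = 8.314    # gas constant, J/(mol*K)
T = 298.15   # temperature, K (25 °C)

pKa = 4.7                    # acetic acid in water (approximate)
Ka = 10 ** (-pKa)            # equilibrium constant for HA <=> H+ + A-
dG = -R * T * math.log(Ka)   # ΔG° = -RT ln Keq, in J/mol

print(f"Ka = {Ka:.2e}, ΔG° = {dG / 1000:.1f} kJ/mol")
```

The positive ΔG° (about +27 kJ/mol) is consistent with the limited dissociation of a weak acid.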

Take, for instance, acetic acid, which has a pKa of approximately 4.7 in water. It is a weak acid that undergoes limited dissociation into H+ ions (in water these combine with water molecules to produce hydronium ions, H3O+) and acetate ions (Ac-). These ions have modest stability in water and readily reassociate to re-form the acid. The pKa of acetic acid in a solution of 80% ethanol is 6.87. This can be attributed to the reduced stability of the charged products, which are shielded from each other less effectively in the less polar solvent: water has a higher dielectric constant than ethanol. The pKa rises to 10.32 in pure ethanol and jumps to roughly 130 in the gas phase, where no solvent stabilises the ions at all.



Due to their zwitterionic nature and the presence of ionizable groups in their R-groups, amino acids can exhibit different charge states and reactivity in vivo. These variations depend on factors such as pH,
temperature, and solvation status of the surrounding microenvironment. The standard pKa values for the amino acids are presented in Table 3.1.1. This table can be used to predict the ionization/charge status of amino acids, as well as of the peptides/proteins they form.

Table 3.1.1: Summary of pKas of amino acids


It is important to acknowledge that the solvation status in the microenvironment of an amino acid can change the relative pKa values of various functional groups and give them distinct reactive
characteristics within the active sites of enzymes. Chapter 6 will provide a detailed analysis of the impacts of desolvation, specifically focusing on enzyme reaction mechanisms.

Downloadable Version of pKa Values

• Amino Acid Acid/Base Titration Curves: a superbly engaging and interactive website
• A tool to calculate the isoelectric point (pI) for a protein sequence
• A repository providing properties of amino acids

1.1 Overview of Amino Acid Reactivity

You should be able to identify the side chains that contain hydrogen bond donors and acceptors, as well as those that are acidic or basic. You should also know the approximate pKa values of the side chains and of the N- and C-terminal groups. The UV absorption of a protein at 280 nm is determined largely by three amino acid side chains: Trp, Tyr, and Phe. This section will primarily focus on the chemical reactivity of the side chains, which is crucial for understanding the characteristics of proteins. A significant number of the side chains exhibit
nucleophilic properties. Nucleophilicity quantifies the speed at which molecules with unshared electron pairs can engage in nucleophilic substitution processes. It is associated with basicity, which quantifies
the ability of a molecule with unshared electron pairs to undergo a reaction with an acid (either Bronsted or Lewis). The characteristics of the atom that possess the unshared electron pair play a crucial role
in determining both nucleophilicity and basicity. In all scenarios, the atom must be inclined to share its unattached electron pair. If the atoms involved in the nonbonded pair have a higher electronegativity,
they will have a reduced tendency to share electrons, resulting in a molecule that is less likely to act as a nucleophile (nu:) and a weaker base. Based on these concepts, RNH2 is a better nucleophile than ROH, OH- is more reactive than H2O, and RSH is more nucleophilic than ROH. In the last case, S is a larger atom whose electron cloud is more polarisable, making it more prone to react with other substances. The side chain nucleophiles ranked in descending order of nucleophilicity are Cys (RSH, pKa
8.5-9.5), His (pKa 6-7), Lys (pKa 10.5), and Ser (ROH, pKa 13). The reactivity of the side chain of serine is typically equivalent to that of ethanol. When deprotonated, it exhibits strong nucleophilic properties
within specific proteins, such as proteases. Lysine's amino group is strongly nucleophilic only in its deprotonated state.

Comprehending the chemical reactivity of the different R group side chains of amino acids in a protein is crucial because specific chemical reagents can be employed to: • detect the existence of amino acids
in unidentified proteins, or • ascertain whether a particular amino acid is essential for the protein's structure or function. For instance, if a reagent that forms covalent bonds only with Lysine residues is
discovered to hinder the protein's function, it suggests that Lysine may play a crucial role in the catalytic activity of the protein.

Figure 3.1.8 presents a concise overview of nucleophilic addition and substitution reactions occurring at carbonyl carbon atoms.

Figure 3.1.8 presents a concise overview of the chemical properties and characteristics of aldehydes, ketones, and carboxylic acid derivatives.



The following section provides a concise overview of the chemistry of the reactive amino acid side chains. Historically, the role of a particular amino acid in a protein has been investigated by treating the protein with chemical reagents that specifically modify that side chain. Furthermore, certain side chains undergo covalent modification after their synthesis within a living organism (referred to as post-translational modification, as explained below).

1.2 Lysine's reactions

Figure 3.1.9 illustrates the chemical reactions of lysine with anhydrides and with ethylacetimidate.

• Undergoes nucleophilic substitution reactions (acylation) with anhydrides.

• Undergoes a reversible reaction with methylmaleic anhydride (also known as citraconic anhydride) by a nucleophilic substitution process.

• Exhibits a strong and selective reaction with ethylacetimidate in a nucleophilic substitution process (ethylacetimidate resembles ethylacetate, but with an imido group replacing the carbonyl oxygen). Ethanol departs as the amidino group forms (an amidino group has two N's linked to one C, hence the "din").

Figure 3.1.9 illustrates the chemical reaction between lysine and anhydrides and ethylacetimidate.

Figure 3.1.10 depicts an additional series of typical reactions involving lysine, including those that link a chromophore or a fluorescent marker to the amino acid's side chain.

• Undergoes a nucleophilic substitution reaction with O-methylisourea, resulting in the removal of methanol and the formation of a guanidino group (consisting of three nitrogen atoms connected to a carbon
atom).

• Undergoes a nucleophilic aromatic substitution reaction with fluorodinitrobenzene (FDNB or Sanger's reagent) or trinitrobenzenesulfonate (TNBS, as observed in the reaction with
phosphatidylethanolamine), resulting in the formation of 2,4-DNP-lysine or TNB-lysine.

• Undergoes a nucleophilic substitution reaction with dimethylaminonaphthalenesulfonyl chloride (dansyl chloride).

Figure 3.1.10 illustrates the chemical reaction between lysine and O-methylisourea, and the reactions that form chromophores and fluorophores.

Figure 3.1.11 illustrates a common reaction: the formation of an imine (Schiff base) when lysine reacts with an aldehyde or ketone.

• Exhibits a strong and selective reaction with aldehydes, resulting in the formation of imines (Schiff bases). These imines can then be reduced using sodium borohydride or cyanoborohydride to produce
secondary amines.

Figure 3.1.11 illustrates the reaction of lysine with an aldehyde or ketone to produce a Schiff base.

1.3 Cysteine Reactions

Cysteine is highly reactive and frequently forms a covalent disulfide bond with another cysteine.

Figure 3.1.12 displays the typical reagents employed in the laboratory to label free Cys side chains. These compounds are used to modify Cys side chains in order to ascertain their functional importance in a protein, such as serving as an active nucleophile in an enzyme-catalysed process.

• Undergoes an SN2 reaction with iodoacetic acid, resulting in the addition of a carboxymethyl group to the S atom.

• Undergoes an SN2 reaction with iodoacetamide, resulting in the addition of a carboxyamidomethyl group to the sulphur atom.

• Undergoes an addition reaction across the double bond of N-ethylmaleimide

Figure 3.1.12 illustrates the typical labelling reactions involving cysteine.

Sulphur is positioned immediately below oxygen in the periodic table. Like oxygen, sulphur can adopt multiple oxidation states, so the sulfur-containing amino acids exist in many redox states, as depicted in Figure 3.1.13.

Figure 3.1.13 displays the oxidation states of sulphur.

1.4 Chemistry of Cystine

Cysteine side chains within a protein can bond covalently to one another, forming a disulfide (RS-SR) known as cystine. HOOH (hydrogen peroxide) is more oxidised than HOH (water): the oxygen atoms in H2O2 have an oxidation number of -1, whereas the oxygen atom in H2O has an oxidation number of -2. Similarly, RSSR is the oxidised form of a pair of thiols, with each sulphur atom having an oxidation number of -1, while RSH is the reduced form, with the sulphur atom having an oxidation number of -2. The oxidation numbers of O and S parallel each other because both elements belong to Group 6 of the periodic table and are more electronegative than C.

Cystine can undergo a disulfide exchange reaction with a free sulfhydryl (RSH) in a thermodynamically favourable manner. When this reaction is carried out with an excess of free sulfhydryls, it leads to the
reduction of cystine within the protein, as depicted in Figure 3.1.14.

Figure 3.1.14 illustrates the process of disulfide interchange and the reduction of protein disulfides.

In laboratory settings, this reaction is frequently employed to measure the quantity of free cysteine residues in a protein using Ellman's reagent, as depicted in Figure 3.1.15.

Figure 3.1.15 illustrates the chemical reaction between free cysteine and Ellman's reagent.



The anion of 2-nitro-5-thiobenzoic acid, acting as a leaving group, exhibits absorption at a wavelength of 412 nm, facilitating straightforward quantification. Only cysteines that are on the surface and not
hidden within the protein structure will be labelled, unless the protein is unfolded to expose all of its cysteines.
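The quantification can be sketched with the Beer-Lambert law, c = A/(ε·l). The extinction coefficient used below (ε412 ≈ 14,150 M⁻¹ cm⁻¹ for the TNB anion) is a commonly cited literature value that is not given in the text, so treat it as an assumption and verify it for your own buffer conditions:

```python
def free_thiol_conc(A412, epsilon=14150.0, path_cm=1.0, dilution=1.0):
    """Beer-Lambert law: c = A / (epsilon * l), scaled by any dilution factor.
    epsilon ~14,150 M^-1 cm^-1 is an assumed (commonly cited) value for the
    2-nitro-5-thiobenzoate anion at 412 nm; check it for your conditions."""
    return A412 / (epsilon * path_cm) * dilution

# An absorbance of 0.283 in a 1 cm cuvette corresponds to ~20 uM free Cys-SH
c = free_thiol_conc(0.283)
print(f"{c * 1e6:.0f} uM")  # 20 uM
```

Because one TNB anion is released per reacted thiol, this concentration equals the free-cysteine concentration in the assayed sample.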

During protein folding, two cysteine (Cys) side chains may come close enough to form a disulfide bond within the same chain; similarly, two cysteine side chains located on different proteins may come together to form an intermolecular disulfide bond. In protein structural analysis, disulfides are commonly cleaved and the chains subsequently separated for examination. Reducing agents such as beta-mercaptoethanol, dithiothreitol, and tris(2-carboxyethyl)phosphine (TCEP) can cleave the disulfides; alternatively, oxidising agents such as performic acid can oxidise the disulfide further, cleaving it into two separate cysteic acids. Figure 3.1.16 displays three frequently employed reagents for disulfide cleavage in laboratory settings.

Figure 3.1.16 illustrates three frequently employed disulfide cleaving (reducing) agents in laboratory settings.

The reactions of beta-mercaptoethanol (BME) and of performic acid with disulfides are depicted in Figure 3.1.17 below.

Figure 3.1.17 illustrates the cleavage of intrachain cystine disulfide bridges in proteins by beta-mercaptoethanol and by performic acid.

Figure 3.1.18 depicts the reaction involving dithiothreitol (DTT). Note that DTT forms a stable six-membered ring resembling cyclohexane, which makes this reaction much more thermodynamically favourable; reduction with BME therefore requires a larger excess of reagent than reduction with DTT.

Figure 3.1.18 illustrates the cleavage of disulfide bonds by dithiothreitol.

The reaction of tris(2-carboxyethyl)phosphine (TCEP) does not proceed through a disulfide interchange mechanism, as depicted in Figure 3.1.19.

Figure 3.1.19 illustrates the chemical reaction between TCEP and disulfides.

Cells maintain a reducing environment through the presence of several "reducing" agents, such as the tripeptide gamma-Glu-Cys-Gly (glutathione). Intracellular proteins therefore generally lack disulfides, which are common in extracellular proteins (such as those in blood) and in certain organelles, such as the endoplasmic reticulum and the mitochondrial intermembrane space, where disulfides can be present.

Sulphur redox chemistry plays a crucial role in biological processes. As previously explained, the sulphur in cysteine is capable of undergoing redox reactions, allowing it to exist in several states. These
states are determined by the specific redox conditions in the surrounding environment and the presence of substances that can either oxidise or reduce cysteine. Hydrogen peroxide, a powerful oxidising
agent, can be synthesised within cells and can cause significant and irreversible chemical alterations to the Cys side chains. If a reactive cysteine residue is crucial for protein functionality, the protein's
function can be altered (either reversibly or irreversibly) by employing different oxidising agents, as depicted in Figure 3.1.20.

Figure 3.1.20 illustrates the reaction between cysteine and H2O2.

Histidine's reactions

Histidine is significantly basic at physiological pH. There are two tautomers of His, as depicted in Figure 3.1.21. NMR studies indicate that in model peptides the proton is primarily located on the ε2 (N3, or tele) nitrogen of the imidazole ring, because the pKa of that position is about 0.6 units greater than that of the δ1 (N1, or pros) nitrogen.

Figure 3.1.21 displays the two tautomers of histidine.

The nitrogen atom of a secondary amine is expected to be more nucleophilic than that of a primary amine, because the two attached carbon atoms release electron density to the nitrogen. However, the steric hindrance from those two carbons adjacent to the nitrogen impedes attack on an electrophile. In His, this steric effect is minimised because the two carbons are constrained within the ring. With a pKa of approximately 6.5, histidine is among the strongest bases present at pH 7.0 in the body, and it therefore frequently cross-reacts with many reagents used to modify Lys side chains. His does, however, react with relatively high selectivity with diethylpyrocarbonate.

Figure 3.1.22 illustrates the chemical reaction between histidine and diethylpyrocarbonate.

In vivo post-translational modification of amino acids

Amino acids in naturally occurring proteins can be chemically modified within cells. These alterations change the properties of the modified amino acid, which in turn can change the structure and function of the protein. Most such modifications occur after the protein has been synthesised by translation, and are therefore referred to as post-translational modifications. Several examples are depicted in Figure 3.1.23. Note that simple acid/base reactions are included, although they are not regarded as post-translational modifications.

Figure 3.1.23 illustrates typical post-translational modifications of proteins.

There are many post-translational modifications (PTMs), and they operate within a complex cellular system that responds to both external cues (hormones, neurotransmitters, nutrients, metabolites) and internal chemical signals. PTMs such as phosphorylation and acetylation, together with their enzymatic removal, are integral components of a complex cellular signalling system that is examined extensively in Chapter 28. Not all PTMs are harmless, however. Examples of potentially damaging side-chain modifications include glycation, oxidation, citrullination, and carbonylation, whose levels frequently rise during acute or chronic inflammatory stress. The modified proteins are degraded into shorter peptides within the cell while the chemical modification is preserved. Unfortunately, the immune system may recognise these peptides as foreign, leading to an immune reaction against the body's own tissues and the development of autoimmune disease. One example of a potentially harmful PTM is the carboxyethylation of cysteine, facilitated by the enzyme cystathionine β-synthase, as depicted in Figure 3.1.24 below.

Figure 3.1.24 illustrates the process of carboxyethylation of cysteine.

The product closely resembles the carboxymethylation of cysteine depicted in Figure 12 above. 3-Hydroxypropionic acid, a metabolite produced by gut microorganisms, serves as the modifying reagent. This modification has been shown to elicit an autoimmune reaction in individuals with ankylosing spondylitis.

________________________________________

The page named 3.1: Amino Acids and Peptides is shared without a specified licence and was created, modified, and/or selected by Henry Jakubowski and Patricia Flatt.



https://bio.libretexts.org/Bookshelves/Biochemistry/Fundamentals_of_Biochemistry_(Jakubowski_and_Flatt)/01%3A_Unit_I-_Structure_and_Catalysis/03%3A_Amino_Acids_Peptides_and_Proteins/
3.01%3A_Amino_Acids_and_Peptides

Q 2: How does a sequence of chromosomal nucleotides code for amino acids?

There are four bases in a strand of DNA: adenine (A), thymine (T), cytosine (C), and guanine (G). Each base pairs with its complementary base: A pairs with T, and G pairs with C. These pairs form the rungs across the ladder of the two helical strands. Each three-base arrangement on one strand of the DNA codes for one of the 20 amino acids.

The nucleotide triplet that encodes an amino acid is called a codon. Each group of three nucleotides encodes one amino acid. Since there are 64 combinations of 4 nucleotides taken three at a time and only
20 amino acids, the code is degenerate (more than one codon per amino acid, in most cases).

How is a nucleotide sequence translated into an amino acid sequence?


The mRNA is then pulled through the ribosome; as its codons encounter the ribosome's active site, the mRNA nucleotide sequence is translated into an amino acid sequence using the tRNAs as adaptors to
add each amino acid in the correct sequence to the end of the growing polypeptide chain.
What is the process by which codons are converted into amino acids?

Transfer RNAs (tRNAs) are the molecular intermediaries that link mRNA codons to the amino acids they encode. Each tRNA carries a three-nucleotide anticodon, located in its anticodon loop, that base-pairs with specific mRNA codons. The opposite (acceptor) end of the tRNA carries the amino acid specified by those codons.
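The codon-anticodon pairing described above can be sketched in a few lines of Python. This is an illustrative helper, not from the source; the function name is our own.

```python
# Sketch: pairing an mRNA codon with its tRNA anticodon using standard
# Watson-Crick pairing between antiparallel strands.

PAIR = {"A": "U", "U": "A", "G": "C", "C": "G"}

def anticodon_for(codon: str) -> str:
    """Return the anticodon (written 5'->3') that pairs with an mRNA
    codon (also written 5'->3'). Because the strands pair antiparallel,
    the codon is complemented and reversed."""
    return "".join(PAIR[base] for base in reversed(codon))

print(anticodon_for("AUG"))  # CAU pairs with the Met codon AUG
```

Reading the anticodon 5'-CAU-3' against the codon 5'-AUG-3' shows the antiparallel alignment: C pairs with G, A with U, U with A.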

What is the mechanism by which codons encode proteins?

A codon, a sequence of three nucleotides, corresponds to a specific amino acid (or to a stop signal). Amino acids serve as the fundamental constituents of proteins. Transfer RNA (tRNA), a specialised form of RNA, is responsible for the sequential assembly of proteins, adding one amino acid at a time.

B M B 400, Part Three


Gene Expression and Protein Synthesis
Section IV = Chapter 13
GENETIC CODE
https://www.bx.psu.edu/~ross/workmg/GeneticCodeCh13.htm#:~:text=The%20nucleotide%20triplet%20that%20encodes,nucleotides%20encodes%20one%20amino%20acid.

Overview for Genetic Code and Translation:


After the completion of transcription and processing of rRNAs, tRNAs, and snRNAs, these RNA molecules are prepared for utilisation within the cell. They are assembled into ribosomes or
snRNPs and employed in processes such as splicing and protein synthesis. However, the mature mRNA is not yet capable of performing its role within the cell. The encoded protein must
undergo translation. The genetic code governs the process of transforming information encoded in nucleic acids into proteins. Experiments examining the impact of frameshift mutations
demonstrated that the removal or insertion of 1 or 2 nucleotides resulted in a loss of function, but the removal or insertion of 3 nucleotides allowed for a significant retention of function. This
experiment revealed that the coding unit consists of three nucleotides. A codon refers to the nucleotide triplet that encodes an amino acid. Every set of three nucleotides represents a single
amino acid. Because there are 64 combinations of 4 nucleotides taken three at a time but only 20 amino acids, the genetic code is degenerate, meaning that in most
circumstances, there is more than one codon per amino acid. The translation process relies on tRNA as the adapter molecule. A charged tRNA molecule possesses an amino acid at one
extremity and an anticodon at the other extremity, which is capable of pairing with a codon on the mRNA. In other words, it can communicate in the language of nucleic acids at one end and in
the language of proteins at the other end. The ribosome is the apparatus responsible for protein synthesis, which occurs under the guidance of template mRNA.

Figure 3.4.1. tRNAs serve as adaptors for translating from nucleic acid to protein

A. The codon consists of 3 nucleotides.

1. A minimum of three nucleotides per codon is required to encode 20 amino acids.

a. Twenty amino acids must be encoded using combinations of only 4 nucleotides.

b. If a codon consisted of two nucleotides, the number of possibilities would be limited to


4 × 4 = 16 combinations, too few for 20 amino acids.

c. With three nucleotides per codon, the number of possible combinations is

4 × 4 × 4 = 64, more than enough for 20 amino acids.

(i.e. There are 64 unique combinations of four nucleotides when taken three at a time).
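The counting argument above can be verified directly. This is a quick check we have added, not part of the source.

```python
# With a 4-letter alphabet, doublets give 16 combinations (too few for
# 20 amino acids) while triplets give 64.
from itertools import product

BASES = "ACGU"
doublets = ["".join(p) for p in product(BASES, repeat=2)]
triplets = ["".join(p) for p in product(BASES, repeat=3)]
print(len(doublets), len(triplets))  # 16 64
```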

2. The outcomes of combinations of frameshift mutations indicate that the genetic code is organised into groups of three nucleotides, known as triplets.

Mutations that add or delete one or two nucleotides produce a severely defective phenotype: they disrupt the reading frame, completely altering the amino acid sequence downstream of the mutation. Alterations that add or remove three nucleotides, however, have minimal or negligible impact; the reading frame is preserved, with only an insertion or deletion of a single amino acid at a specific location. Combinations of three distinct single-nucleotide deletions (or insertions), each with an independent loss-of-function phenotype, can substantially restore function to a gene, because the original reading frame is re-established after the third deletion or insertion.

B. Code decryption experiments

1. Multiple distinct cell-free systems have been developed for protein synthesis. The ability to perform translation in vitro was a crucial technological breakthrough that enabled researchers to decipher the genetic code.

a. Rabbit reticulocyte lysates (mammalian), whose ribosomes are actively synthesising large amounts of globin.

b. Wheat germ extracts

c. Extracts derived from bacteria

2. Another crucial advancement that enabled the code to be deciphered was the ability to synthesise random polynucleotides.

a. S. Ochoa discovered and purified the enzyme polynucleotide phosphorylase, which links nucleoside diphosphates (NDPs) into polymers of NMPs (RNA) in a reversible reaction:

n NDP ⇌ (NMP)n + n Pi

b. In the cell, polynucleotide phosphorylase normally catalyses the reverse reaction, i.e. RNA degradation. In a cell-free environment, however, the forward reaction is highly useful for producing random RNA polymers.



3. Programmed synthesis of homopolypeptides (Nirenberg and Matthaei, 1961).

a. When using solely UDP as a substrate for polynucleotide phosphorylase, the resulting product will be a homopolymer poly(U).

b. Adding poly(U) to an in vitro translation system, such as an E. coli lysate, leads to synthesis of a new polypeptide: polyphenylalanine.

c. Consequently, the codon UUU encodes the amino acid phenylalanine (Phe).

d. Similarly, the synthesis of poly Lys is directed by poly(A); the codon AAA corresponds to the amino acid Lys.

e. Poly(C) directs the synthesis of poly Pro; the codon CCC encodes Pro.

f. Poly(G) directs the synthesis of poly Gly; the codon GGG encodes Gly.

4. Utilisation of mixed copolymers

a. When two nucleoside diphosphates (NDPs) are combined in a predetermined ratio, polynucleotide phosphorylase will produce a mixed copolymer in which the
frequency of incorporation of each nucleotide is directly proportional to its initial concentration in the mixture.
b. For example, consider a 5:1 mixture of A:C. The enzyme will use ADP 5/6 of the time, and CDP 1/6 of the time. An example of a possible product is:

AACAAAAACAACAAAAAAAACAAAAAACAAAC...

Table 3.4.1. Frequency of triplets in a poly(AC) (5:1) random copolymer

Composition Number Probability Relative frequency


3A 1 0.578 1.0
2 A, 1 C 3 3 x 0.116 3 x 0.20
1 A, 2 C 3 3 x 0.023 3 x 0.04
3C 1 0.005 0.01

c. The probability of AAA occurring in the copolymer is calculated as (5/6)(5/6)(5/6), resulting in a value of 0.578.

This codon will occur most frequently and can be standardised to a value of 1.0 (0.578 divided by 0.578 equals 1.0).

The probability of a codon with 2 A's and 1 C occurring is calculated by multiplying the individual probabilities of each nucleotide: (5/6)(5/6)(1/6) = 0.116.

There are three permutations to obtain two A's and one C, namely AAC, ACA, and CAA.
The frequency of occurrence of all the A2C codons is 0.348.
By normalising to AAA with a relative frequency of 1.0, the frequency of A2C codons can be calculated as 3 multiplied by the ratio of 0.116 to 0.578, which equals 3
multiplied by 0.2.

e. By applying the same reasoning, it can be determined that the anticipated occurrence rate of AC2 codons is 3 multiplied by 0.04, and the projected occurrence rate of
CCC is 0.01.
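The Table 3.4.1 calculation can be reproduced with a short script. This is a sketch we have added, assuming the 5:1 A:C mixture described above; variable names are our own.

```python
# Expected triplet frequencies in a random 5:1 A:C copolymer, as in
# Table 3.4.1.
from itertools import product

p = {"A": 5 / 6, "C": 1 / 6}
freq = {}
for codon in ("".join(t) for t in product("AC", repeat=3)):
    # probability of a triplet = product of its base probabilities
    freq[codon] = p[codon[0]] * p[codon[1]] * p[codon[2]]

p_AAA = freq["AAA"]
print(round(p_AAA, 3))               # 0.579 (the table's 0.578 is truncated)
print(round(freq["AAC"] / p_AAA, 2)) # 0.2, each A2C codon relative to AAA
print(round(freq["ACC"] / p_AAA, 2)) # 0.04, each AC2 codon relative to AAA
```

Multiplying the per-codon relative frequencies by 3 (the number of permutations) reproduces the "3 x 0.20" and "3 x 0.04" entries in the table.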

Table 3.4.2. Amino acid incorporation with poly(AC) (5:1) as a template

Amino acid    cpm (- template)    cpm (+ template)    Observed incorporation    Theoretical incorporation
Lysine 60 4615 100.0 100
Threonine 44 1250 26.5 24
Asparagine 47 1146 24.2 20
Glutamine 39 1117 23.7 20
Proline 14 342 7.2 4.8
Histidine 282 576 6.5 4

The data are from Speyer et al. (1963), Cold Spring Harbor Symposia on Quantitative Biology 28:559. Theoretical incorporation is the value expected from the genetic code as it was eventually determined.

f. When this mixed copolymer is used to programme in vitro translation, lysine is incorporated most, with a relative value of 100. This confirms that AAA encodes lysine.



Relative to Lys incorporation set at 100, the incorporation of Thr, Asn, and Gln is around 24 to 26, quite close to the values expected for amino acids encoded by the A2C codons. This experiment alone, however, does not indicate which A2C codon encodes which amino acid. As eventually determined, ACA encodes threonine, AAC asparagine, and CAA glutamine.

Pro and His are incorporated at relative values of about 6 to 7, somewhat above the values of roughly 4 to 5 predicted for amino acids encoded by AC2 codons. As eventually determined, CCA encodes proline and CAC encodes histidine. ACC also encodes threonine, but its contribution is masked by the large incorporation directed by ACA; the observed value of 26.5 for Thr can be read as roughly 20 (ACA) plus 4 (ACC).

5. Trinucleotide codons precisely direct the binding of aminoacyl-tRNAs to ribosomes.

a. At high concentrations of Mg2+ ions, the normal initiation mechanism that depends on fMet-tRNAf can be bypassed; specific trinucleotides can instead direct the binding of labelled aminoacyl-tRNAs to ribosomes.

b. For instance, when ribosomes are mixed with UUU and radiolabelled Phe-tRNAPhe, a ternary complex forms; this complex binds to nitrocellulose filters (the "Millipore assay", named after the filter manufacturer).

c. All 64 possible trinucleotides can then be tested systematically.
Fig. 3.4.2.

Data from Nirenberg and Leder (1964) Science 145:1399.

6. Synthetic polynucleotides with repeating sequences (Khorana)

a. Alternating copolymers, such as (UC)n, direct the incorporation of Ser and Leu.

A (UC)n polymer is read as ...UCU CUC UCU..., so UCU and CUC together encode Ser and Leu, but this experiment alone does not show which codon encodes which amino acid. Combined with other data, such as the random mixed copolymers in section 4, definitive assignments can be made: further work showed that UCU encodes serine and CUC encodes leucine.

b. Poly(AUG) directs the incorporation of poly Met and poly Asp at high Mg2+ concentrations. Read in its three frames, (AUG)n contains the codons AUG, UGA, and GAU; AUG specifies methionine and UGA is a termination signal, so it can be inferred that GAU encodes aspartic acid.

C. The genetic code: the set of rules by which information encoded in DNA or RNA is translated into proteins.

1. The coding assignment of each group of 3 nucleotides was deduced from the experiments described in the previous section. Collectively these assignments are known as the genetic code, presented in Table 3.4.4. The code explains how the cell converts information in nucleic acids (chains of nucleotides) into the language of proteins (chains of amino acids).

Understanding the genetic code makes it possible to predict the amino acid sequence encoded by every gene that has been sequenced. Complete genome sequences of many organisms have revealed genes encoding numerous previously unknown proteins. An important ongoing goal is to assign activities and functions to these newly identified proteins.

Table 3.4.4. The Genetic Code

Rows give the 1st position of the codon; columns give the 2nd position (U, C, A, G); the rightmost column gives the 3rd position.
U UUU Phe UCU Ser UAU Tyr UGU Cys U
UUC Phe UCC Ser UAC Tyr UGC Cys C
UUA Leu UCA Ser UAA Term UGA Term A
UUG Leu UCG Ser UAG Term UGG Trp G

C CUU Leu CCU Pro CAU His CGU Arg U


CUC Leu CCC Pro CAC His CGC Arg C
CUA Leu CCA Pro CAA Gln CGA Arg A
CUG Leu CCG Pro CAG Gln CGG Arg G

A AUU Ile ACU Thr AAU Asn AGU Ser U



AUC Ile ACC Thr AAC Asn AGC Ser C
AUA Ile ACA Thr AAA Lys AGA Arg A
AUG* Met ACG Thr AAG Lys AGG Arg G

G GUU Val GCU Ala GAU Asp GGU Gly U


GUC Val GCC Ala GAC Asp GGC Gly C
GUA Val GCA Ala GAA Glu GGA Gly A
GUG* Val GCG Ala GAG Glu GGG Gly G

* Sometimes used as initiator codons.
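Reading Table 3.4.4 in triplets can be sketched in code. This is an illustrative fragment we have added; the dictionary covers only the codons needed for the example, not the full 64-entry table.

```python
# Minimal sketch of translation using a few entries from the standard
# genetic code (Table 3.4.4). "Term" codons stop translation.

CODE = {
    "AUG": "Met", "UUU": "Phe", "AAA": "Lys", "CCC": "Pro",
    "GGG": "Gly", "UAA": "Term", "UAG": "Term", "UGA": "Term",
}

def translate(mrna: str) -> list[str]:
    """Read an mRNA 5'->3' in successive triplets, stopping at a
    termination codon."""
    peptide = []
    for i in range(0, len(mrna) - 2, 3):
        aa = CODE[mrna[i:i + 3]]
        if aa == "Term":
            break
        peptide.append(aa)
    return peptide

print(translate("AUGUUUAAAUAA"))  # ['Met', 'Phe', 'Lys']
```

Starting at AUG and ending at UAA mirrors the initiation and termination rules discussed below.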

2. Of the 64 codons, 61 encode amino acids; the remaining 3 signal termination of translation.

3. Degeneracy of the code

a. The degeneracy of the genetic code is the phenomenon where most amino acids are encoded by several codons. Methionine (AUG) and tryptophan (UGG) are the only exceptions.

b. The degeneracy is largely located at the third position. Single nucleotide changes at the third position therefore often do not alter the encoded amino acid. Such changes are referred to as silent or synonymous; they do not modify the encoded protein. Further elaboration on this topic is provided below.

c. The pattern of degeneracy allows codons to be grouped into "families" and "pairs". In 9 sets of codons, the nucleotides at the first two positions suffice to specify a distinct amino acid, and any nucleotide (abbreviated N) at the third position gives the same amino acid; these are the 9 codon "families". One instance is the encoding of threonine by ACN.

There are 13 codon "pairs" in which the nucleotides at the first two positions specify two amino acids: a purine (R) at the third position specifies one amino acid, while a pyrimidine (Y) at the third position specifies the other.

These cases sum to more than 20, the total number of amino acids, because leucine, serine, and arginine are each encoded by several codons and belong to both a codon family and a codon pair. The UAR codons, which signal the end of translation, were counted as one codon pair.

The three codons that encode isoleucine (AUU, AUC, and AUA) can be considered as an intermediate stage between a codon family and a codon pair.

The codons for leucine and arginine exhibit degeneracy at the first position, which is a rare occurrence among codons. Both UUA and CUA encode the amino acid leucine. No degeneracy is
detected at the second position of codons that encode amino acids. The termination codons UAA and UGA are the only instances of second position degeneracy.

4. Chemically similar amino acids frequently have similar codons.

For instance, hydrophobic amino acids are frequently encoded by codons with U in the second position, and all codons with U in the second position encode hydrophobic amino acids.

5. The primary codon that indicates the start of translation is AUG.

Bacteria can also use GUG or UUG codons, and in rare cases AUU and possibly CUG. The frequency of initiation-codon usage was estimated from the 4288 genes identified in the complete genome sequence of E. coli:

AUG is used for 3542 genes.
GUG is used for 612 genes.
UUG is used for 130 genes.
AUU is used for a single gene.
CUG may be used for a single gene.

In bacteria, the first amino acid incorporated during translation is always fMet, regardless of the initiation codon used.

6. There are three codons, namely UAA, UAG, and UGA, that indicate the end of translation.

Of these three codons, UAA is the most commonly used in E. coli, followed by UGA; UAG is rarely used.

UAA is used for 2705 genes.
UGA is used for 1257 genes.
UAG is used for 326 genes.

7. The genetic code is nearly universal.

The exceptional cases that deviate from this pattern involve relatively minor variations in the code. One example is mRNA derived from mitochondrial DNA, in which both UGG and UGA encode Trp.

D. Codon usage: the variation in the frequency with which different codons are used within a genome or among different organisms.

1. Different species exhibit distinct codon usage patterns.

For example, in one species the codon 5' UUA may encode around 90% of the Leu residues, with CUR codons essentially unused and UUG plus CUY accounting for the remaining roughly 10% of Leu codons.

2. The abundance of each tRNA correlates with the usage of the corresponding codons in natural mRNAs.



The tRNALeu with the anticodon sequence 3' AAU will be the most prevalent in this particular case.

3. The codon usage pattern of a gene can indicate its expression level. Highly expressed genes tend to use the codons most frequent in other genes across the genome; this has been quantified as a "codon adaptation index". When examining entire genomes, a gene whose codon usage matches the organism's preferred usage scores high on the index and can be inferred to be highly expressed, whereas a gene with a low score likely encodes a protein present in low abundance.

If a gene exhibits a significant deviation in its codon usage pattern compared to the rest of the genome, it suggests that this gene might have been introduced into the genome through
horizontal transfer from another species.

4. Codon usage is also an important consideration in "reverse genetics". If you know a partial amino acid sequence of a protein and wish to isolate the corresponding gene, you can readily derive the set of mRNA sequences that could encode that amino acid sequence. Because of the degeneracy of the code, this set can be extremely large. Since these sequences are commonly used as hybridisation probes or PCR primers, a larger pool of potential sequences increases the probability of hybridising to an unintended target, so it is desirable to restrict the range of possible sequences. By consulting a codon preference table (provided one is available for the relevant organism), one can use only the favoured codons instead of all conceivable codons, reducing the number of sequences required for hybridisation probes or primers.
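The size of the degenerate probe pool described above is simply the product of the number of codons for each residue. The sketch below is our own illustration; the codon counts follow the standard code.

```python
# How many distinct mRNA sequences can encode a given peptide?
# Multiply the number of codons for each residue (standard code).

CODON_COUNT = {
    "Met": 1, "Trp": 1, "Cys": 2, "Phe": 2, "Tyr": 2, "His": 2,
    "Gln": 2, "Asn": 2, "Lys": 2, "Asp": 2, "Glu": 2, "Ile": 3,
    "Ala": 4, "Gly": 4, "Pro": 4, "Thr": 4, "Val": 4,
    "Leu": 6, "Ser": 6, "Arg": 6,
}

def probe_pool_size(peptide: list[str]) -> int:
    """Number of distinct coding sequences for a peptide."""
    n = 1
    for residue in peptide:
        n *= CODON_COUNT[residue]
    return n

# A Met-Trp-Cys stretch needs only 2 probes; Leu-Ser-Arg would need 216.
print(probe_pool_size(["Met", "Trp", "Cys"]))  # 2
print(probe_pool_size(["Leu", "Ser", "Arg"]))  # 216
```

This is why probe design favours peptide stretches rich in Met, Trp, and other low-degeneracy residues.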

E. Wobble in the anticodon

1. Explanation

The term "wobble" describes the permissibility of non-Watson-Crick base pairing between the third position of the codon and the first position of the anticodon. The first two positions of the codon, in contrast, form standard Watson-Crick base pairs with the last two positions of the anticodon.

The ability of certain tRNAs to bind with several codons in the "wobble" position results in a decreased need for a large number of tRNAs during translation.

The "wobble" criteria allow for the reading of the 61 codons (representing 20 amino acids) with only 31 anticodons (or 31 tRNAs).

2. Wobble rules
In addition to the usual base pairs, G-U pairs are allowed, and I (inosine) in the first position of the anticodon can pair with U, C, or A.

5' base of anticodon (1st position in tRNA)    3' base of codon (3rd position in mRNA)
C    G
A    U
U    A or G
G    C or U
I    U, C or A

Figure 3.4.2.
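The wobble rules in the table above can be expressed as a small lookup. This is an illustrative sketch we have added; the function name and the tRNA-Gly example are our own, using standard pairing rules.

```python
# Which codons can one anticodon read? Strict Watson-Crick pairing at
# codon positions 1-2, wobble rules at position 3.

WOBBLE = {
    "C": {"G"},
    "A": {"U"},
    "U": {"A", "G"},
    "G": {"C", "U"},
    "I": {"U", "C", "A"},  # inosine, a modified base
}

def codons_read(anticodon: str) -> set[str]:
    """Codons (5'->3') readable by an anticodon (5'->3'). Pairing is
    antiparallel, so the anticodon's 3' base pairs with the codon's
    first position and its 5' base wobbles against the third."""
    wc = {"A": "U", "U": "A", "G": "C", "C": "G"}
    first, second = wc[anticodon[2]], wc[anticodon[1]]
    return {first + second + third for third in WOBBLE[anticodon[0]]}

# A single tRNA-Gly with anticodon 5'-ICC-3' reads GGU, GGC and GGA.
print(sorted(codons_read("ICC")))  # ['GGA', 'GGC', 'GGU']
```

This one-tRNA-to-several-codons mapping is how 31 anticodons suffice to read all 61 sense codons.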



F. Classification of mutations

1. Nucleotide substitutions at a single position in the DNA sequence.

This topic was addressed in Part Two, in the section on DNA Repair. To reiterate, there are two categories of base substitutions.

(1) Transitions occur when a purine replaces another purine, or when a pyrimidine replaces another pyrimidine; the nucleotide class remains unchanged. Examples: substitution of G for A, or of T for C.

(2) Transversions occur when a purine is replaced by a pyrimidine, or a pyrimidine by a purine. A nucleotide of the other class is inserted into the DNA, distorting the helix, particularly when a purine-purine base pair results. Examples include the substitution of A for T or C, or of C for A or G.

Over evolutionary time, transitions accumulate faster than transversions.
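The transition/transversion distinction can be expressed as a small classifier. This is an illustrative helper we have added, following the definitions above.

```python
# Classify a single-base DNA substitution as a transition or a
# transversion.

PURINES = {"A", "G"}
PYRIMIDINES = {"C", "T"}

def classify(before: str, after: str) -> str:
    """Transition if both bases are in the same class, otherwise
    transversion."""
    same_class = ({before, after} <= PURINES) or ({before, after} <= PYRIMIDINES)
    return "transition" if same_class else "transversion"

print(classify("A", "G"))  # transition (purine to purine)
print(classify("C", "T"))  # transition (pyrimidine to pyrimidine)
print(classify("A", "T"))  # transversion (purine to pyrimidine)
```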

2. Impact of mutations on the mRNA

(1) Missense mutations substitute one amino acid for another. The phenotypic impact of a given substitution may or may not be detectable. Some substitutions, such as replacing a valine with a leucine, even at a site crucial for maintaining an α helix, may produce no observable change in the protein's structure or function. By contrast, substituting valine for glutamate at a position that causes haemoglobin to polymerise when deoxygenated has severe pathological effects, as in sickle cell anaemia.

(2) Nonsense mutations cause premature termination of translation. They arise when a substitution, insertion, or deletion creates a stop codon in the portion of the mRNA that encodes the polypeptide. They typically have significant phenotypic consequences.

(3) Frameshift mutations are insertions or deletions that alter the reading frame of the mRNA. They typically have significant phenotypic consequences.

3. Not all base substitutions change the encoded amino acid.

(1) A base substitution that changes the encoded polypeptide sequence is referred to as nonsynonymous (or nonsilent).

(2) A synonymous (or silent) substitution is a base substitution at a degenerate position in the codon, where the encoded amino acid remains unchanged.

For example, a change of the codon ACU to AAU is a nonsynonymous substitution: threonine is replaced by asparagine.



A change of ACU to ACC is a synonymous substitution: Thr remains Thr.

(3) Inspection of the degeneracy patterns in the genetic code shows that nonsynonymous substitutions occur primarily at the first and second codon positions, while synonymous substitutions occur predominantly at the third position, although there are various exceptions.
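The synonymous/nonsynonymous test amounts to comparing the amino acids the two codons encode. This is a sketch we have added; only the codons from the examples above are included.

```python
# Decide whether a codon change is synonymous (silent) using the
# standard genetic code assignments for these three codons.

CODE = {"ACU": "Thr", "ACC": "Thr", "AAU": "Asn"}

def is_synonymous(codon_a: str, codon_b: str) -> bool:
    """True when both codons encode the same amino acid."""
    return CODE[codon_a] == CODE[codon_b]

print(is_synonymous("ACU", "ACC"))  # True  (Thr to Thr, silent)
print(is_synonymous("ACU", "AAU"))  # False (Thr to Asn, missense)
```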

(4) Typically, synonymous substitutions become fixed in a population at a somewhat higher rate than nonsynonymous substitutions. This strongly supports the notion that neutral evolution, often called evolutionary drift, is a primary factor behind the substitutions observed in natural populations.

Questions on Chapter 13: Genetic Code

13.1 In what ways does the enzyme polynucleotide phosphorylase differ from DNA and RNA polymerases?

13.2 The following RNA sequence encodes a short oligopeptide:

The sequence is 5' GACUAUGCUCAUAUUGGUCCUUUGACAAG.

a) Where does its coding sequence start and end, and how many amino acids does it specify?
b) What is distinctive about the amino acids that are encoded?
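As a cross-check on 13.2, this short sketch (my own, not the booklet's method) translates the given RNA from its first AUG to the first stop codon, using the standard genetic code built compactly in NCBI base order:

```python
# Standard RNA codon table, built compactly ('*' marks a stop codon).
BASES = "UCAG"
AAS = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"
CODON_TABLE = {b1 + b2 + b3: AAS[16 * i + 4 * j + k]
               for i, b1 in enumerate(BASES)
               for j, b2 in enumerate(BASES)
               for k, b3 in enumerate(BASES)}

seq = "GACUAUGCUCAUAUUGGUCCUUUGACAAG"
start = seq.find("AUG")              # translation begins at the first AUG
peptide = []
for i in range(start, len(seq) - 2, 3):
    aa = CODON_TABLE[seq[i:i + 3]]
    if aa == "*":                    # stop codon: translation ends here
        break
    peptide.append(aa)
print("".join(peptide))  # MLILVL
```

It reports a six-residue peptide, Met-Leu-Ile-Leu-Val-Leu, followed by a UGA stop; note for part (b) that every one of these residues is hydrophobic.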

13.3 a) What is the definition of degeneracy in the genetic code?


b) Typically, which location of a codon exhibits degeneracy?
c) How does this facilitate the efficient utilisation of tRNAs in a cell?

13.4 Coding a Polypeptide with Duplex DNA.
The template strand of a sample of double-helical DNA has the sequence:

5' CTTAACACCCCTGACTTCGCGCCGTCG

a) What is the mRNA's base sequence that can be transcribed from this strand?
b) What is the possible amino acid sequence that could be encoded by the mRNA base sequence in (a), beginning from the 5' end?
c) Assuming the non-template strand of this DNA sample undergoes transcription and translation. Will the resultant amino acid sequence be identical to that in (b)? Elaborate on the biological
importance of your response.

13.5 The Fundamental Cause of the Sickle-Cell Mutation.


Sickle-cell haemoglobin has a valine residue at position 6 of the β-globin chain, whereas normal haemoglobin A has a glutamic acid residue at this position. What change in the DNA codon for glutamate could account for its replacement by valine?

13.6 A codon for lysine (Lys) can be converted by a single nucleotide substitution into a codon for isoleucine (Ile). What is the sequence of the original Lys codon?

13.7 This question describes how single nucleotide changes alter the amino acid encoded by a particular codon. Deduce the sequence of the wild-type codon in each case.

a) Glutamine (Gln) undergoes conversion to Arginine (Arg), which is subsequently transformed to Tryptophan (Trp). What is the specific codon that represents the amino acid glutamine (Gln)?
b) A single nucleotide substitution can convert Leu to either Ser, Val, or Met, with each amino acid replacement requiring a distinct nucleotide substitution. What is the specific codon that
represents the amino acid Leucine?
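Questions 13.5 to 13.7 all hinge on enumerating the codons that lie one base change away from a given codon. A sketch of that enumeration (the codon table is the standard one, built compactly as before):

```python
# Standard RNA codon table ('*' marks a stop codon).
BASES = "UCAG"
AAS = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"
CODON_TABLE = {b1 + b2 + b3: AAS[16 * i + 4 * j + k]
               for i, b1 in enumerate(BASES)
               for j, b2 in enumerate(BASES)
               for k, b3 in enumerate(BASES)}

def single_substitutions(codon):
    """Map every codon one base change away to the amino acid it encodes."""
    out = {}
    for pos in range(3):
        for b in BASES:
            if b != codon[pos]:
                mutant = codon[:pos] + b + codon[pos + 1:]
                out[mutant] = CODON_TABLE[mutant]
    return out

# Question 13.5: which Glu codons can become a Val codon in one step?
for glu in ("GAA", "GAG"):
    vals = [m for m, aa in single_substitutions(glu).items() if aa == "V"]
    print(glu, "->", vals)  # GAA -> ['GUA'], GAG -> ['GUG']
```

The same helper answers 13.6 and 13.7 by filtering the neighbours of each candidate codon for the target amino acid.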

13.8 Using the common genetic code and allowing for "wobble", what is the minimum number of tRNAs required to recognize the codons for

a) arginine?
b) valine?

13.9 Determine which amino acid should be attached to tRNAs with the following anticodons:

a) 5'-I-C-C-3'
b) 5'-G-A-U-3'
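For 13.9, a codon pairs antiparallel with the anticodon, so the codon is essentially the reverse complement of the anticodon, with wobble rules applied at the codon's third position. A sketch, assuming the classic wobble pairings (anticodon 5' base G pairs C or U; U pairs A or G; inosine I pairs U, C, or A):

```python
# Codons read by a tRNA anticodon (both written 5'->3'), allowing wobble
# at the anticodon's 5' position, which pairs with the codon's 3rd position.
RNA_COMP = {"A": "U", "U": "A", "G": "C", "C": "G"}
WOBBLE = {"G": "CU", "U": "AG", "I": "UCA", "A": "U", "C": "G"}

def codons_for_anticodon(anticodon_5to3):
    first, rest = anticodon_5to3[0], anticodon_5to3[1:]
    stem = "".join(RNA_COMP[b] for b in reversed(rest))  # codon positions 1-2
    return [stem + third for third in WOBBLE[first]]

print(codons_for_anticodon("ICC"))  # ['GGU', 'GGC', 'GGA'] - all glycine
print(codons_for_anticodon("GAU"))  # ['AUC', 'AUU'] - both isoleucine
```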

13.10 (POB) Determining the Gene Associated with a Protein by Recognising its Amino Acid Sequence.



Design a DNA probe that can specifically identify the gene encoding a protein with the amino-terminal amino acid sequence given below. The probe should be 18 to 20 nucleotides long, a length that gives adequate specificity provided the probe matches the gene closely enough.

The amino-terminal sequence is: H3N+-Ala-Pro-Met-Thr-Trp-Tyr-Cys-Met-As
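One way to approach 13.10 is to pick the stretch of residues whose codons are least degenerate, so the probe mixture contains as few variants as possible. The sketch below is my own illustration: it uses the one-letter residues shown above through the second Met (the last residue is truncated in the booklet, so it is omitted) and a 6-residue (18-nucleotide) window:

```python
from collections import Counter
from math import prod

# Standard RNA codon table ('*' marks a stop codon).
BASES = "UCAG"
AAS = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"
CODON_TABLE = {b1 + b2 + b3: AAS[16 * i + 4 * j + k]
               for i, b1 in enumerate(BASES)
               for j, b2 in enumerate(BASES)
               for k, b3 in enumerate(BASES)}
NCODONS = Counter(CODON_TABLE.values())   # how many codons encode each residue

residues = "APMTWYCM"                     # Ala-Pro-Met-Thr-Trp-Tyr-Cys-Met
window_len = 6                            # 6 residues -> an 18-nt probe
best = min(range(len(residues) - window_len + 1),
           key=lambda i: prod(NCODONS[a] for a in residues[i:i + window_len]))
window = residues[best:best + window_len]
print(window, prod(NCODONS[a] for a in window))  # MTWYCM 16
```

The least degenerate window is Met-Thr-Trp-Tyr-Cys-Met, needing a mixture of only 16 probe sequences, because Met and Trp each have a single codon.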

13.11 Imagine yourself in a laboratory aboard the Starship Enterprise. An expedition crew has recently explored Planet Claire and retrieved a fungus that will be the main focus of
this week's show. While the other members of the team are occupied with determining if the fungus is beneficial or harmful (and receiving the majority of the attention on camera), your task is
to ascertain its genetic code. Using advanced technologies from the future, it is quickly determined that the proteins consist of a total of eight amino acids. These amino acids are labelled as
amino acids 1, 2, 3, 4, 5, 6, 7, and 8. The genetic material of this organism consists of a nucleic acid that contains only three unique nucleotides, namely K, N, and D, which are not present in
nucleic acids found on Earth.

The outcomes of frameshift mutations confirm your hunch that the fungus uses the smallest feasible coding unit: inserting one or three nucleotides into a gene results in a total loss of function, but inserting or deleting two nucleotides has minimal impact on the encoded protein.

You synthesise artificial polymers composed of the nucleotides K, N, and D, which are then employed to direct protein synthesis. The amino acids that are integrated into protein under the
guidance of each polynucleotide template are displayed below. It is assumed that the templates are read in a sequential manner from the left side to the right side.
Template                        Amino acid(s) incorporated
Kn     = KKKKKKKKKK             1
Nn     = NNNNNNNNNN             2
Dn     = DDDDDDDDDD             3
(KN)n  = KNKNKNKNKN             4 and 5
(KD)n  = KDKDKDKDKD             6 and 7
(ND)n  = NDNDNDNDND             8
(KND)n = KNDKNDKNDKND           4, 6 and 8
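Under a doublet (two-nucleotide) codon hypothesis, the codons delivered by each repeating template can be enumerated mechanically. The sketch below is my own illustration and assumes, as in the classical copolymer experiments, that ribosomes may start reading at either position of the repeating template:

```python
def doublet_codons(template, repeats=6):
    """All two-letter codons obtained by reading a repeating template
    two nucleotides at a time, starting in either frame."""
    seq = template * repeats
    return {seq[i:i + 2]
            for start in (0, 1)
            for i in range(start, len(seq) - 1, 2)}

print(sorted(doublet_codons("KN")))   # ['KN', 'NK']
print(sorted(doublet_codons("KND")))  # ['DK', 'KN', 'ND']
```

(KN)n then yields the two codons KN and NK (amino acids 4 and 5), while (KND)n yields KN, DK, and ND (amino acids 4, 6, and 8), consistent with the table above.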

Lieutenant Data informs you that this information is sufficient for deciphering the code. However, to verify it, you investigate many variations of the fungus and ascertain that a solitary alteration in
a nucleotide within the codon responsible for amino acid 6 might transform it into a codon for amino acid 5. Furthermore, a substitution of a single nucleotide in a codon corresponding to
amino acid 8 can result in the conversion of that codon into a codon representing amino acid 7.

Kindly provide your findings regarding the genetic coding employed by the fungus originating from Planet Claire.

a) What is the magnitude of a codon?

b) Does the code exhibit degeneracy?

c) What is the codon or codons for each of the eight amino acids?

The codon(s) for each amino acid are as follows:

Amino acid:  1    2    3    4    5    6    7    8
Codon(s):    __   __   __   __   __   __   __   __

d) What is the criterion for terminating translation?


e) What is the specific mutation that will result in the substitution of the codon representing amino acid 6 with the codon representing amino acid 5? Display both the original codon and the altered
codon.
f) What is the specific mutation that will alter the codon representing amino acid 8 to the codon representing amino acid 7? Display both the original codon and the altered codon.

Translation: DNA to mRNA to Protein


By: Suzanne Clancy, Ph.D. & William Brown, Ph.D. (Write Science Right) © 2008 Nature Education

Citation: Clancy, S. & Brown, W. (2008) Translation: DNA to mRNA to Protein. Nature Education 1(1):101

Through a process called protein synthesis, the cell converts the genetic information stored in DNA into functional proteins. Translation, the step in which those instructions are decoded, involves the mRNA produced by transcription together with tRNA.


DNA contains genes that encode protein molecules, which serve as the primary agents responsible for performing all essential cellular tasks. Enzymes, such as those involved in food metabolism and
cellular synthesis, as well as DNA polymerases and other enzymes responsible for DNA replication during cell division, are all classified as proteins.

Gene expression is the production of the protein that a gene encodes, and this complex process consists of two main stages. In the first stage, the genetic information in DNA is copied into a molecule of messenger RNA (mRNA) by a process known as transcription. During transcription, the DNA of the gene serves as a template for complementary base-pairing, and an enzyme called RNA polymerase II catalyses the formation of a pre-mRNA molecule, which is then processed to form mature mRNA (Figure 1). The resulting mRNA is a single-stranded copy of the gene, which next must be translated into a protein molecule.



Figure 1: A gene is expressed through the processes of transcription and translation.

During transcription, the enzyme RNA polymerase (green) uses DNA as a template to produce a pre-mRNA transcript (pink). The pre-mRNA is processed to form a mature mRNA molecule that can be
translated to build the protein molecule (polypeptide) encoded by the original gene.

© 2013 Nature Education

During translation, which is the second major step in gene expression, the mRNA is "read" according to the genetic code, which relates the DNA sequence to the amino acid sequence in proteins (Figure 2).
Each group of three bases in mRNA constitutes a codon, and each codon specifies a particular amino acid (hence, it is a triplet code). The mRNA sequence is thus used as a template to assemble—in order
—the chain of amino acids that form a protein.



Figure 2: The amino acids specified by each mRNA codon. Multiple codons can code for the same amino acid.

The codons are written 5' to 3', as they appear in the mRNA. AUG is an initiation codon; UAA, UAG, and UGA are termination (stop) codons.

© 2014 Nature Education


However, where does the process of translation occur within a cell? What specific subtasks are included in this procedure? Does translation vary between prokaryotes and eukaryotes? The responses to
inquiries of this nature unveil significant insights into the fundamental resemblances shared among all species.

1.1 The Location of Translation

The ribosome, a specialised organelle, houses the translation machinery in all cells. Eukaryotes require mature mRNA molecules to exit the nucleus and migrate to the cytoplasm, the site where ribosomes
are situated. In contrast, ribosomes in prokaryotic species have the ability to bind to mRNA even during the process of transcription. Translation initiates at the 5' terminus of the mRNA while the 3' terminus
remains bound to DNA.

The ribosome in all cell types consists of two subunits: a large subunit and a small subunit (50S and 30S respectively in bacteria; 60S and 40S in eukaryotes). The svedberg unit (S), a measure of sedimentation velocity and hence of mass, is used
to describe these subunits. Individually, each subunit is present in the cytoplasm, but they combine on the mRNA molecule. The ribosomal subunits comprise proteins and specialised RNA molecules,
namely ribosomal RNA (rRNA) and transfer RNA (tRNA). The tRNA molecules function as adaptor molecules, possessing one end capable of recognising the triplet code in the mRNA by complementary
base-pairing, while the other end binds to a specific amino acid (Chapeville et al., 1962; Grunberger et al., 1969). Francis Crick, one of the scientists who discovered the structure of DNA, was the first to
offer the concept that tRNA functions as an adapter molecule. Crick made significant contributions to understanding the genetic code and presented this hypothesis in 1958 (Crick, 1958).

The ribosome tightly holds the mRNA and aminoacyl-tRNA complexes in close proximity, which promotes the process of base-pairing. The ribosomal RNA (rRNA) facilitates the bonding of each successive
amino acid to the elongating polypeptide chain.

1.2 The Beginning of the mRNA Is Not Translated

Curiously, specific amino acids do not align with every component of an mRNA molecule. Specifically, there exists a region adjacent to the 5' end of the molecule referred to as the untranslated region (UTR)
or leader sequence. The segment of mRNA in question is situated between the initial transcribed nucleotide and the start codon (AUG) of the coding region. Importantly, it does not have any impact on the
amino acid sequence of a protein (refer to Figure 3).

What is the objective of the UTR? The leader sequence is crucial as it encompasses a ribosome-binding site. The Shine-Dalgarno box (AGGAGG) is the term used in bacteria to refer to this location, which
was initially characterised by scientists John Shine and Lynn Dalgarno. Marilyn Kozak identified a comparable location in vertebrates, which is now referred to as the Kozak box. The 5' untranslated region
(UTR) of bacterial mRNA is typically brief, whereas in human mRNA, the average length of the 5' UTR is approximately 170 nucleotides. If the leader sequence is lengthy, it may encompass regulatory



elements, such as protein binding sites, which can impact the mRNA's stability or the effectiveness of its translation process.
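To illustrate how a leader sequence positions the ribosome, the sketch below scans a bacterial-style mRNA for a Shine-Dalgarno-like AGGAGG motif and then looks for an AUG a short distance downstream. Both the demo sequence and the 4-14 nucleotide spacing window are illustrative assumptions, not values from the text above:

```python
import re

def find_start(mrna, sd="AGGAGG", window=(4, 14)):
    """Return the index of an AUG found a short, assumed distance
    downstream of a Shine-Dalgarno-like site, or -1 if none is found."""
    for m in re.finditer(sd, mrna):
        lo, hi = m.end() + window[0], m.end() + window[1]
        i = mrna.find("AUG", lo, hi + 1)
        if i != -1:
            return i
    return -1

demo = "GCCAGGAGGUUUACCCAUGGCU"   # hypothetical leader + start codon
print(find_start(demo))  # 16
```

Here the AUG at position 16 sits seven nucleotides downstream of the AGGAGG motif, the kind of spacing a bacterial ribosome-binding site provides.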

Figure 3: A DNA transcription unit.

A DNA transcription unit is composed, from its 3' to 5' end, of an RNA-coding region (pink rectangle) flanked by a promoter region (green rectangle) and a terminator region (black rectangle). Regions to the left, or moving towards the 3' end, of the transcription start site are considered "upstream"; regions to the right, or moving towards the 5' end, of the transcription start site are considered "downstream."

© 2014 Nature Education Adapted from Pierce, Benjamin. Genetics: A Conceptual Approach, 2nd ed.

1.3 Translation Begins After the Assembly of a Complex Structure

The translation of mRNA begins with the formation of a complex on the mRNA (Figure 4). First, three initiation factor proteins (known as IF1, IF2, and IF3) bind to the small subunit of the ribosome. This
preinitiation complex and a methionine-carrying tRNA then bind to the mRNA, near the AUG start codon, forming the initiation complex.



Figure 4: The translation initiation complex.



During the initiation of translation, the ribosome's small subunit and an initiator tRNA molecule come together on the mRNA transcript. The ribosome's small subunit contains three distinct binding sites: an
amino acid site (A), a polypeptide site (P), and an exit site (E). The initiator tRNA molecule, which carries the amino acid methionine, attaches to the AUG start codon of the mRNA transcript at the P site of
the ribosome. This is where it will be the first amino acid to be added to the developing polypeptide chain. Here, the initiator tRNA molecule is depicted binding subsequent to the assembly of the small
ribosomal subunit on the mRNA. This sequence of events is specific to prokaryotic cells. In eukaryotes, the unattached initiator tRNA initially attaches to the small ribosomal subunit to create a complex. The
complex subsequently associates with the mRNA transcript, facilitating the concurrent binding of the tRNA and the small ribosomal subunit to the mRNA.

© 2013 Nature Education


While methionine (Met) is the initial amino acid integrated into all newly synthesised proteins, it is not consistently the first amino acid in fully formed proteins. In numerous proteins, methionine is eliminated
subsequent to translation. Indeed, when a substantial quantity of proteins is sequenced and juxtaposed with their established gene sequences, it is shown that methionine (or formylmethionine) consistently
appears at the N-terminus of each protein. Nevertheless, the occurrence of amino acids as the second element in the chain is not equally probable for all types, and the second amino acid has an impact on
whether the initial methionine is enzymatically eliminated. As an illustration, numerous proteins initiate with methionine and are subsequently followed by alanine. In both prokaryotes and eukaryotes, the
proteins undergo methionine removal, resulting in the substitution of alanine as the N-terminal amino acid (Table 1). Nevertheless, in instances when the second amino acid is lysine, which is commonly
observed, methionine is not eliminated (at least based on the existing data from analysed proteins). Consequently, these proteins initiate with methionine and subsequently lysine (Flinta et al., 1986).
Table 1 shows the N-terminal sequences of proteins in prokaryotes and eukaryotes, based on a sample of 170 prokaryotic and 120 eukaryotic proteins (Flinta et al., 1986). In the table, M represents
methionine, A represents alanine, K represents lysine, S represents serine, and T represents threonine.
Table 1: N-Terminal Sequences of Proteins

N-Terminal Sequence   Percent of Prokaryotic Proteins   Percent of Eukaryotic Proteins
                      with This Sequence                with This Sequence
MA*                   28.24%                            19.17%
MK**                  10.59%                             2.50%
MS*                    9.41%                            11.67%
MT*                    7.65%                             6.67%

* Methionine was removed in all of these proteins

** Methionine was not removed from any of these proteins
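The trend in Table 1 can be phrased as a rule of thumb. The sketch below encodes only the sampled observation reported above (Met removed when residue 2 is Ala, Ser, or Thr; retained before Lys) and is a heuristic, not a universal rule:

```python
# Rule-of-thumb from the Flinta et al. (1986) sample summarised in Table 1:
# the initiator Met tends to be removed before small residues (Ala, Ser, Thr)
# and retained before Lys. This is a sketch, not a complete model.
MET_REMOVED_AFTER = set("AST")

def mature_n_terminus(nascent):
    """Trim the initiator Met when residue 2 suggests removal."""
    if len(nascent) > 1 and nascent[0] == "M" and nascent[1] in MET_REMOVED_AFTER:
        return nascent[1:]
    return nascent

print(mature_n_terminus("MAQK"))  # AQK  (Met removed after Ala)
print(mature_n_terminus("MKLV"))  # MKLV (Met retained before Lys)
```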

After the formation of the initiation complex on the mRNA, the large ribosomal subunit attaches to this complex, leading to the release of initiation factors (IFs). The ribosome's large subunit possesses three
binding sites for tRNA molecules. The A site is the point where the aminoacyl-tRNA anticodon forms base pairs with the mRNA codon, ensuring accurate addition of the right amino acid to the developing
polypeptide chain. The P site is the point where the amino acid is transported from its tRNA to the developing polypeptide chain. The E (exit) site is where the "empty" tRNA resides before being released
into the cytoplasm to bind another amino acid and resume the cycle. Only the initiator methionine tRNA has the ability to bind specifically in the P site of the ribosome, while the A site is positioned in
alignment with the second codon of the mRNA. The ribosome is prepared to attach the second aminoacyl-tRNA at the A site, where it will be connected to the initiator methionine by the first peptide bond
(Figure 5).

Figure 5: The large ribosomal subunit binds to the small ribosomal subunit to complete the initiation complex.

The initiator tRNA molecule, carrying the methionine amino acid that will serve as the first amino acid of the polypeptide chain, is bound to the P site on the ribosome. The A site is aligned with the next
codon, which will be bound by the anticodon of the next incoming tRNA.

© 2013 Nature Education



1.4 The Elongation Phase

Figure 6


The subsequent stage in translation is referred to as the elongation phase (Figure 6). Initially, the ribosome progresses along the mRNA in the 5'-to-3' direction, facilitated by the elongation factor G. This
movement is referred to as translocation. The tRNA corresponding to the second codon can then attach to the A site. This process necessitates elongation factors (known as EF-Tu and EF-Ts in E. coli) and
guanosine triphosphate (GTP) as an energy source. When the tRNA-amino acid complex binds in the A site, GTP is hydrolyzed to produce guanosine diphosphate (GDP). This GDP is subsequently
released along with EF-Tu and can be reused by EF-Ts for the subsequent cycle.

Subsequently, a peptide bond is formed between the neighbouring first and second amino acids by peptidyl transferase activity. This step was long attributed to a protein enzyme, but more recent data indicate that the transferase activity is actually carried out by rRNA (Pierce, 2000). Following the formation of the peptide bond, the ribosome undergoes translocation, resulting in the
tRNA occupying the E site. Subsequently, the tRNA is liberated into the cytoplasm in order to acquire another amino acid. Furthermore, the A site is currently unoccupied and prepared to accept the transfer
RNA (tRNA) for the subsequent codon.

This process is iterated until all the codons in the mRNA have been deciphered by tRNA molecules, and the amino acids bound to the tRNAs have been connected together in the developing polypeptide
chain in the correct sequence. At this juncture, the process of translation must be halted, and the newly formed protein must be detached from the mRNA and ribosome.

1.5 Termination of Translation

Three termination codons, UAA, UAG, and UGA, are used to mark the end of a protein-coding sequence in mRNA. These codons are not recognised by any tRNAs. Therefore, instead of these transfer
RNAs (tRNAs), one of multiple proteins, known as release factors, attaches and aids in the release of the messenger RNA (mRNA) from the ribosome, leading to the separation of the ribosome.

1.6 Translation in Prokaryotes and Eukaryotes Compared

The translation process exhibits remarkable similarities between prokaryotes and eukaryotes. While various elongation, initiation, and termination factors may be employed, the genetic code is typically the
same. As mentioned before, in bacteria, transcription and translation occur concurrently, and mRNAs have a relatively limited lifespan. In eukaryotes, the half-lives of mRNAs vary greatly, they undergo
changes, and they need to leave the nucleus in order to be translated. These many processes provide extra chances to control protein synthesis levels and adjust gene expression accordingly.

References and Recommended Reading

Chapeville, F., et al. On the role of soluble ribonucleic acid in coding for amino acids. Proceedings of the National Academy of Sciences 48, 1086–1092 (1962)
Crick, F. On protein synthesis. Symposia of the Society for Experimental Biology 12, 138–163 (1958)
Flinta, C., et al. Sequence determinants of N-terminal protein processing. European Journal of Biochemistry 154, 193–196 (1986)
Grunberger, D., et al. Codon recognition by enzymatically mischarged valine transfer ribonucleic acid. Science 166, 1635–1637 (1969) doi:10.1126/science.166.3913.1635

Q 3: Describe how a protein folds as its amino-acid sequence emerges.



A molecule of transfer RNA picks up its particular amino acid. Once it recognises its specific base sequence on the mRNA, it docks there, and its amino acid is joined to its neighbour by a peptide bond. The tRNA is then released, and the peptide grows in a strictly defined sequence. As the peptide chain forms, it naturally folds in on itself into a 3D structure whose shape is highly dependent on the sequence of amino acids.

Folded proteins are held together by various molecular interactions. During translation, each protein is synthesized as a linear chain of amino acids or a random coil which does not have a stable 3D
structure. The amino acids in the chain eventually interact with each other to form a well-defined, folded protein.

Protein folding is the process by which a protein molecule takes on its three-dimensional structure.

Written by Susha Cheriyedath, M.Sc. Reviewed by Sally Robertson, B.Sc.

Protein folding refers to the transformation of a polypeptide chain into its functional, physiologically active form, adopting a certain 3D structure known as the native conformation. The function of a protein is
highly dependent on its structure. Protein folding is facilitated by a multitude of molecular interactions.

During translation, every protein is produced as a sequential arrangement of amino acids or a disordered configuration without a stable three-dimensional arrangement. The amino acids in the chain
ultimately interact with one another to create a clearly defined, folded protein. The 3D structure of a protein is determined by its amino acid sequence. The proper folding of proteins into their native
conformation is crucial for their functionality. Improper folding results in the production of non-functional or harmful proteins, leading to various illnesses.

The process of protein folding can be divided into four distinct steps.

Protein folding is an intricate process comprising four phases, which results in the distinct 3D protein structures that are crucial for a wide range of functions in the human body. Proteins have a hierarchical arrangement of structure, ranging from primary to quaternary. The diverse range of amino acid sequences is responsible for the distinct conformations observed in protein structure.

[Image: the phases of protein folding - LadyofHats, commons.wikimedia.org]

Primary structure pertains to the sequential arrangement of amino-acid residues in the polypeptide chain.

The creation of hydrogen bonds between atoms in the polypeptide backbone leads to the generation of secondary structure, specifically alpha helices or beta-sheets, which fold the chains.

Tertiary structure is established through the further folding of secondary structure elements, such as sheets and helices, onto one another. The tertiary structure of a protein is its specific three-dimensional conformation: typically a polypeptide chain backbone carrying one or more secondary structure elements. The tertiary structure is dictated by the interactions and bonding between the side chains of the protein's amino acids.

Quaternary structure arises when folded amino acid chains in tertiary structures interact with each other, forming a functioning protein like haemoglobin or DNA polymerase.

Variables influencing the process of protein folding

The process of protein folding is very susceptible to several external influences, such as electric and magnetic fields, temperature, pH, chemicals, spatial constraints, and molecular crowding. These factors
impact the capacity of proteins to adopt their accurate functional conformations.

Proteins unfold, or denature, under the destabilising influence of extreme temperatures. They can also be denatured by extremes of pH, mechanical forces, and chemical denaturants. Denaturation is the process in which proteins lose their tertiary and secondary structures, forming a random coil. While denaturation is often irreversible, some proteins can refold under specific circumstances.

Certain cells possess heat shock proteins or chaperones that safeguard cellular proteins from undergoing denaturation due to heat. Chaperones facilitate the process of protein folding and maintain their
folded structure even in conditions of high temperatures. Additionally, they aid in the process of unfolding and correctly re-folding misfolded proteins.

Pathologies associated with misfolded proteins

Misfolded proteins are prone to denaturation, resulting in the loss of their structural integrity and functional capabilities. Malformation of protein structures can result in numerous human ailments.


Alzheimer's disease is a neurodegenerative disorder that arises from the misfolding of proteins. The condition is characterised by compact plaques in the brain, formed through incorrect folding of the secondary β-sheets of the fibrillar β-amyloid proteins found in brain tissue. Huntington's disease and Parkinson's disease are further examples of neurodegenerative disorders linked to the
misfolding of proteins.

Cystic fibrosis (CF) is a lethal condition resulting from the improper folding of the cystic fibrosis transmembrane conductance regulator (CFTR) protein. The deletion of phenylalanine at position 508 of the
CFTR gene is the primary cause of misfolding of the regulatory protein in the majority of CF patients. Incorrect protein folding has been demonstrated to cause certain allergies.





Last Updated: February 26, 2019


Protein folding
Wikipedia, the free encyclopedia



[Figure: a protein shown before and after folding]

Protein folding is the physical process by which a polypeptide (a protein chain synthesized by a ribosome through translation of messenger RNA) passes from an unstable random coil of amino acids to the protein's three-dimensional structure. This is typically a 'folded' conformation, in which the protein becomes biologically functional.

Protein folding commences during translation of the polypeptide chain. Amino acids interact with one another to generate a precisely defined three-dimensional arrangement, the protein's native conformation. This arrangement is dictated by the sequence of amino acids, that is, by the primary structure. The proper conformation of proteins is crucial for their functionality, even though certain segments of functional proteins may remain unfolded, which suggests that protein dynamics also play a significant role. Proteins that fail to adopt their native shape are usually inactive, but in certain cases misfolded proteins acquire altered or harmful functions. The accumulation of amyloid fibrils formed by misfolded proteins is believed to underlie various neurological and other disorders; the infectious forms of these fibrils are known as prions.[4] Numerous allergies also arise from protein misfolding, because the immune system fails to generate antibodies for certain protein configurations. Denaturation of proteins is the transformation from a structured, folded state to a disordered, unfolded state. It occurs in cooking, as well as in burns, proteinopathies, and various other circumstances.

The length of the folding process varies greatly with the protein being studied. Proteins that fold slowly when examined outside of cells need minutes to hours to complete folding, mostly because of proline isomerization, and pass through multiple intermediate states, similar to checkpoints.[6] Conversely, small single-domain proteins of up to a hundred amino acids typically fold in a single step.[7] Typical time scales for protein folding are milliseconds, and the fastest folding reactions finish within a few microseconds. The time a protein takes to fold depends on its size, contact order, and circuit topology. Understanding and simulating the folding process has been a major challenge for computational biology since the late 1960s.

1.1 The mechanism by which proteins adopt their three-dimensional structure

Primary structure

The native shape of a protein is determined by its primary structure, that is, its linear sequence of amino acids.[10] The precise identity of the amino acid residues and their sequential placement in the polypeptide chain dictate which regions of the protein associate closely and shape its three-dimensional form. The sequence of the amino acids matters more than their composition.[11] The fundamental fact of folding is that the amino acid sequence of each protein contains the information that specifies both the native structure and the pathway to reach that state. That said, nearly identical amino acid sequences do not always fold similarly.[12] Conformations also vary with environmental conditions; proteins with comparable structures can fold differently depending on where they are found.



Secondary structure

The alpha helix spiral formation. An anti-parallel beta pleated sheet displaying hydrogen bonding within the backbone.

The initial stage in the protein folding process involves the development of a secondary structure, which is crucial for the protein to adopt its native conformation. The characteristic features of secondary
structure include alpha helices and beta sheets, which fold quickly due to the stabilisation provided by intramolecular hydrogen bonds. This phenomenon was initially described by Linus Pauling.
Intramolecular hydrogen bonding plays a significant role in protein stability.[13] α-Helices are formed through hydrogen bonding within the backbone, producing a spiral shape (see image on the right). The β pleated sheet is a structural arrangement in which the polypeptide backbone folds back on itself, creating hydrogen bonds between adjacent strands (as depicted in the accompanying picture). The hydrogen bonds form between the amide hydrogen and the carbonyl oxygen of the peptide bond. Two types of β pleated sheet are distinguished: anti-parallel and parallel. In the anti-parallel β sheet, the hydrogen bonds are more stable because they form at a near-ideal 180-degree angle, whereas in the parallel sheet they form at a slanted angle. The tertiary structure is the three-dimensional arrangement of a protein's atoms and the overall folding pattern of the protein.

The α-Helices and β-Sheets are typically amphipathic, exhibiting both a hydrophilic and a hydrophobic region. This capability facilitates the establishment of the tertiary structure of a protein, wherein the
folding process ensures that the hydrophilic surfaces are oriented towards the aqueous environment around the protein, while the hydrophobic surfaces face the hydrophobic interior of the protein. Tertiary
structure is formed through the hierarchical arrangement of secondary structure. After the protein's tertiary structure is established and made stable by hydrophobic interactions, there is a possibility of
covalent bonding through the formation of disulfide bridges between two cysteine residues. The non-covalent and covalent interactions in a protein's native structure are organised in a certain topological
order. The tertiary structure of a protein is determined by a single polypeptide chain, while the creation of quaternary structure occurs due to additional interactions between folded polypeptide chains.
Quaternary structure refers to the arrangement and interactions of several protein subunits in a complex.

The tertiary structure of certain proteins can lead to the development of quaternary structure. This process typically involves the assembly or coassembly of subunits that have already undergone folding. In
other words, numerous polypeptide chains can interact to create a fully functional quaternary protein.

Driving forces of protein folding

All forms of protein structure summarized

The process of folding is primarily driven by hydrophobic contacts, the creation of intramolecular hydrogen bonds, and van der Waals forces. However, it is counteracted by conformational entropy. The
folding process often initiates co-translationally, with the N-terminus of the protein starting to fold while the ribosome is still synthesising the C-terminal section. However, a protein molecule can also fold
spontaneously either during or after biosynthesis. The folding of macromolecules is influenced by various parameters such as the solvent (water or lipid bilayer), the concentration of salts, the pH, the
temperature, and the potential presence of cofactors and molecular chaperones.

Proteins are constrained in their ability to fold due to the limited range of angles or conformations they can adopt. The permissible angles of protein folding are represented by a two-dimensional graph called
the Ramachandran plot, which illustrates the permitted rotations of psi and phi angles.
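As a toy illustration of how the Ramachandran plot restricts conformations, the sketch below classifies a (phi, psi) angle pair against rough rectangular windows for the two most common allowed regions. The window boundaries are coarse assumptions for illustration only, not crystallographic statistics:

```python
# Toy sketch (not a real structure validator): classify a (phi, psi)
# backbone dihedral pair, in degrees, against hypothetical rectangular
# windows approximating the two main allowed Ramachandran regions.

def ramachandran_region(phi, psi):
    if -160 <= phi <= -50 and -70 <= psi <= -10:
        return "alpha-helix"
    if -180 <= phi <= -50 and 90 <= psi <= 180:
        return "beta-sheet"
    return "disallowed/other"

print(ramachandran_region(-60, -45))   # typical right-handed alpha helix
print(ramachandran_region(-120, 130))  # typical beta strand
print(ramachandran_region(60, 60))     # mostly disallowed region
```

Real Ramachandran regions are irregular, residue-dependent (glycine and proline differ markedly), and derived from observed structures; the rectangles above only convey the idea that most of the (phi, psi) plane is inaccessible.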



Hydrophobic effect

Hydrophobic collapse. In the compact fold (to the right), the hydrophobic amino acids (shown as black spheres) collapse toward the center to become
shielded from aqueous environment.

In order for protein folding to occur spontaneously within a cell, it must be thermodynamically favourable. Since folding is a spontaneous process, it is associated with a negative change in Gibbs free energy, which is determined by both the enthalpy and the entropy changes of folding.[11] For the change in Gibbs free energy to be negative, and folding thereby thermodynamically favourable, either enthalpy, entropy, or both must be favourable.
Entropy decreases when water molecules become more ordered in the vicinity of a hydrophobic solute.
The folding process is significantly influenced by the imperative to reduce the exposure of hydrophobic side-chains to water. The hydrophobic effect refers to the process by which the hydrophobic chains of
a protein undergo a collapse into the protein's core, distancing themselves from the hydrophilic surroundings.[11] In a water-based environment, the water molecules have a tendency to cluster around the
hydrophobic sections or side chains of the protein, forming organised water shells.[21] The arrangement of water molecules surrounding a hydrophobic area enhances the organisation within a system,
resulting in a decrease in entropy (reduced disorder in the system). The water molecules are immobilised within these water cages, leading to the hydrophobic collapse, which refers to the inward folding of
the hydrophobic groups. The hydrophobic collapse restores entropy to the system by disrupting the water cages and releasing the structured water molecules.[11] The presence of several hydrophobic
groups in the central region of the compactly folded protein greatly enhances its stability upon folding, mostly due to the substantial accumulation of van der Waals forces, notably London Dispersion forces.
[11] The hydrophobic effect is a thermodynamic driving force that occurs when there is an aqueous medium with an amphiphilic molecule that has a significant hydrophobic area. The strength of hydrogen
bonds is influenced by their surroundings. Therefore, hydrogen bonds that are surrounded by a hydrophobic core have a greater impact on the stability of the native state compared to hydrogen bonds that
are exposed to the aqueous environment.
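The thermodynamic argument above can be written compactly in standard notation (this formulation is standard textbook material, not taken from the source text):

```latex
% Folding is spontaneous when the change in Gibbs free energy is negative:
\Delta G_{\mathrm{fold}} \;=\; \Delta H \;-\; T\,\Delta S \;<\; 0
% Hydrophobic collapse releases the ordered "water cages", raising the
% entropy of the solvent (\Delta S > 0); the resulting -T\,\Delta S term
% becomes more negative and can outweigh the loss of conformational
% entropy of the polypeptide chain itself.
```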
Proteins with globular folds tend to have their hydrophobic amino acids interspersed along the primary sequence, rather than randomly distributed or clustered together.[24][25] Conversely, proteins that have recently emerged de novo, which tend to be intrinsically disordered,[26][27] show the opposite pattern, with hydrophobic amino acids clustered along the primary sequence.

Chaperones

Example of a small eukaryotic heat shock protein

Molecular chaperones are a group of proteins that assist in the proper folding of other proteins within living organisms. Chaperones are present in all cellular compartments and interact with the polypeptide
chain to facilitate the proper folding of proteins into their native three-dimensional conformation. However, chaperones themselves are not incorporated into the final structure of the protein they assist in.
Chaperones can aid in the folding process while the ribosome is still synthesising the nascent polypeptide. Molecular chaperones function by binding to and stabilising intermediates along a protein's folding pathway that would otherwise be unstable. However, chaperones lack the information required to determine the correct native structure of the protein they assist. Instead, chaperones prevent the protein from adopting incorrect
folding conformations. Chaperones do not directly accelerate the rate of individual steps in the folding process towards the native structure. Instead, they function by preventing undesired clumping of the
polypeptide chain, which could otherwise impede the search for the appropriate intermediate state. Additionally, chaperones facilitate a more efficient pathway for the polypeptide chain to adopt the correct
conformations.[29] Chaperones should not be mistaken for folding catalyst proteins, which accelerate chemical reactions that are responsible for the slower stages of folding pathways. Protein disulfide
isomerases and peptidyl-prolyl isomerases are types of folding catalysts. They play a role in the creation of disulfide bonds and the conversion between cis and trans stereoisomers of peptide groups.
Chaperones play a crucial role in the protein folding process in living organisms by assisting the protein in adopting the correct alignments and conformations in an efficient manner, thus enabling it to
become physiologically significant.[31] This implies that the polypeptide chain has the potential to fold into its natural structure without the assistance of chaperones, as evidenced by protein folding
experiments carried out in a controlled environment;[31] nevertheless, this mechanism is either too ineffective or too sluggish to occur in living organisms; hence, chaperones are indispensable for protein
folding in living cells. In addition to facilitating the creation of native structures, chaperones are known to participate in several functions including protein transport, degradation, and enabling denatured
proteins to refold into their correct native structures when exposed to specific denaturant stimuli.[32]

A protein that is entirely denatured does not have any tertiary or secondary structure, and instead exists in a disordered state known as a random coil. Under some circumstances, certain proteins have the
ability to undergo refolding. Nevertheless, in numerous instances denaturation is permanent and cannot be reversed.[33] Cells employ heat shock proteins, a type of chaperone,
to safeguard their proteins against the detrimental effects of heat-induced denaturation. These enzymes aid in the proper folding of proteins as well as maintaining their folded state. Heat shock proteins have
been identified in all species investigated, ranging from bacteria to humans, indicating their early evolution and significant role. Certain proteins in cells require the aid of chaperones to fold properly.
Chaperones either separate individual proteins to prevent them from being disrupted by interactions with other proteins, or assist in unfolding misfolded proteins, enabling them to refold into their normal
native structure. This function is essential for mitigating the potential for precipitation into insoluble amorphous aggregates. The conditions that can cause protein denaturation or disturbance of the natural
state include temperature, external fields (electric, magnetic), molecular crowding, and spatial confinement, all of which can significantly impact protein folding.[37]
Protein denaturation can occur due to high concentrations of solutes, severe pH levels, mechanical stresses, and the presence of chemical denaturants. These discrete factors are collectively classified as
stressors. Chaperones are seen to be present in higher amounts during periods of cellular stress and aid in the correct folding of newly formed proteins, as well as those that have become denatured or
misfolded.

Proteins may fail to adopt their biologically active conformations under certain circumstances. Deviation from the optimal temperature range for cellular activity leads to the unfolding or denaturation of
thermally unstable proteins, resulting in the opacity of an egg white when boiled. The thermal stability of proteins is not constant. For instance, many hyperthermophilic bacteria can thrive at temperatures as
high as 122 °C. This implies that all their essential proteins and protein assemblies must remain stable at or above this temperature.

The bacterium E. coli serves as the host for bacteriophage T4. The phage-encoded gp31 protein (P17313) has structural and functional similarity to the E. coli chaperone protein GroES. It is capable of
replacing GroES in the assembly of bacteriophage T4 virus particles during infection. Similar to GroES, gp31 establishes a durable complex with GroEL chaperonin, which is essential for the proper folding
and assembly of the bacteriophage T4 main capsid protein gp23 in vivo.



Switching between folds

Certain proteins exhibit polymorphism, adopting numerous distinct native conformations, and undergo conformational changes in response to external stimuli. As an illustration, the KaiB protein undergoes
conformational changes periodically during the day, functioning as a timekeeper for cyanobacteria. Approximately 0.5–4% of proteins in the Protein Data Bank (PDB) undergo fold switching, according to
estimates.

1.2 Protein misfolding and neurodegenerative disease

Main article: Proteopathy, a group of diseases characterised by the abnormal accumulation of misfolded proteins.

A protein is classified as misfolded when it is unable to attain its typical natural conformation. This can occur as a result of mutations in the amino acid sequence or interference with the regular folding
process caused by external causes. The misfolded protein commonly consists of β-sheets that are arranged in a supramolecular form called a cross-β structure. The assemblages composed of β-sheets are
highly stable, extremely insoluble, and generally resistant to proteolysis.[42] The robustness of these fibrous structures is due to the strong connections between the individual protein units, which are
generated by the bonding of their β-strands through hydrogen bonds in the backbone.[42] Protein misfolding can initiate the subsequent misfolding and aggregation of more proteins, forming aggregates or
oligomers. Elevated amounts of aggregated proteins within the cell result in the creation of amyloid-like formations, which can induce degenerative diseases and cellular demise. Amyloids are fibrous
structures composed of protein aggregates that are very insoluble and held together by intermolecular hydrogen interactions.[41] Hence, the proteasome pathway may lack sufficient efficiency in breaking
down the misfolded proteins before they form aggregates. Malformed proteins have the ability to engage with each other and assemble into organised clusters, resulting in increased toxicity due to
interactions between molecules.[41] Prion-related illnesses, such as Creutzfeldt–Jakob disease and bovine spongiform encephalopathy (mad cow disease), as well as amyloid-related illnesses like
Alzheimer's disease and familial amyloid cardiomyopathy or polyneuropathy, are characterised by the presence of aggregated proteins. Additionally, intracellular aggregation diseases such as Huntington's
and Parkinson's disease are also associated with the accumulation of these proteins. These degenerative disorders that occur with age are linked to the accumulation of misfolded proteins, which form
insoluble aggregates outside the cells or inside the cells as inclusions, including cross-β amyloid fibrils. The role of aggregates in relation to the loss of protein homeostasis, which encompasses synthesis,
folding, aggregation, and protein turnover, is not entirely definitive. The European Medicines Agency recently granted approval for the utilisation of Tafamidis or Vyndaqel, a pharmacological agent that acts
as a kinetic stabiliser of tetrameric transthyretin, in the management of transthyretin amyloid disorders. These findings indicate that the deterioration of post-mitotic tissue in human amyloid disorders is
caused by the process of amyloid fibril creation, rather than the fibrils themselves. Proteopathy diseases, such as antitrypsin-associated emphysema, cystic fibrosis, and lysosomal storage diseases, arise
from misfolding and excessive degradation of proteins, rather than their proper folding and functioning. In these disorders, the loss of function is the primary cause. Protein replacement therapy has
traditionally been employed to address the aforementioned illnesses. However, a new method involves utilising pharmaceutical chaperones to facilitate the folding of mutant proteins, hence restoring their
functionality.

1.3 Experimental methodologies for investigating protein folding

Protein folding can be inferred through mutation studies, but experimental procedures for examining protein folding usually include observing the slow unfolding or folding of proteins and detecting
conformational changes using ordinary non-crystallographic techniques.

X-ray crystallography

Steps of X-ray crystallography

X-ray crystallography is one of the most efficient and important techniques for determining the three-dimensional structure of a folded protein. To perform X-ray crystallography, the protein under investigation must be located within a crystal lattice. This requires a suitable solvent for crystallisation, a pure protein in highly concentrated solution, and the induction of crystal formation in that solution.[47] Once a protein has been crystallised, an X-ray beam can be directed through the crystal lattice, which scatters it in various directions. The emitted beams are directly related to the precise three-dimensional structure of the protein within. The X-rays interact selectively with the electron clouds surrounding the individual atoms of the protein crystal lattice, producing a discernible diffraction pattern. Because only the amplitudes, not the phases, of the diffracted X-rays can be measured, assumptions must be made about the phase angles in order to relate the electron density to the diffraction pattern, which complicates the analysis.[48] The diffraction pattern is related to the electron density by a Fourier transform, and the missing phase information gives rise to the "phase problem".[14] Techniques such as multiple isomorphous replacement introduce a heavy metal ion to perturb the X-ray diffraction in a predictable way, reducing the number of unknowns and helping to resolve the phase problem.

Fluorescence spectroscopy

Fluorescence spectroscopy is an exceptionally sensitive technique used to investigate the folding state of proteins. Phenylalanine (Phe), tyrosine (Tyr), and tryptophan (Trp) are three amino acids that
possess inherent fluorescence characteristics. However, only Tyr and Trp are utilised in experiments due to their high quantum yields, which produce strong fluorescence signals. Both tryptophan (Trp) and
tyrosine (Tyr) exhibit excitation when exposed to a wavelength of 280 nm. However, only tryptophan is responsive to a wavelength of 295 nm. Trp and Tyr residues, known for their aromatic properties, are
frequently located in the hydrophobic core of proteins, as well as at the interfaces between protein domains or subunits of oligomeric proteins. Within this nonpolar setting, they exhibit elevated quantum
yields, resulting in correspondingly heightened fluorescence intensities. When the tertiary or quaternary structure of the protein is disturbed, the side chains become more exposed to the hydrophilic solvent



environment, resulting in a decrease in their quantum yields and thus leading to low fluorescence intensities. The wavelength of peak fluorescence emission for Trp residues is similarly influenced by their
surrounding environment.

Fluorescence spectroscopy is a technique that can be employed to analyse the equilibrium unfolding of proteins. This is done by detecting changes in the intensity of fluorescence emission or the
wavelength of maximum emission as denaturant values are altered. Denaturants can encompass many chemical molecules such as urea and guanidinium hydrochloride, as well as factors like temperature,
pH, and pressure. The distribution of protein states, including the native state, intermediate states, and unfolded state, is influenced by the denaturant value. Consequently, the overall fluorescence signal of
the protein mixture is likewise contingent upon this value. Therefore, a profile is obtained that establishes a connection between the overall protein signal and the denaturant value. The equilibrium unfolding
profile can be used to detect and identify intermediates of unfolding. Hugues Bedouelle has constructed general equations to derive the thermodynamic parameters that describe the equilibrium unfolding of
proteins, including homomeric or heteromeric proteins up to trimers and potentially tetramers, based on these profiles. Fluorescence spectroscopy can be integrated with rapid mixing devices like stopped
flow to quantify the kinetics of protein folding. This technique allows for the generation of a chevron plot and the derivation of a Phi value analysis.
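A minimal sketch of how such an equilibrium unfolding profile is modelled is given below. It implements the standard two-state (N ⇌ U) linear-extrapolation model often fitted to fluorescence or CD denaturant melts; it is not Bedouelle's exact formulation, and the parameter values are invented for illustration:

```python
import math

# Minimal sketch of a two-state (N <-> U) equilibrium unfolding curve,
# the linear-extrapolation model commonly fitted to fluorescence or CD
# denaturant melts. Parameter values below are made up for illustration.

R = 8.314e-3  # gas constant, kJ/(mol*K)

def fraction_unfolded(denaturant, dG_water=20.0, m_value=5.0, temp=298.0):
    """denaturant in M; dG_water: unfolding free energy in water (kJ/mol);
    m_value: denaturant dependence of dG (kJ/mol/M)."""
    dG = dG_water - m_value * denaturant  # linear extrapolation
    K = math.exp(-dG / (R * temp))        # unfolding equilibrium constant
    return K / (1.0 + K)                  # fraction in the unfolded state

# At the melt midpoint, [D] = dG_water / m_value = 4.0 M: half unfolded
print(round(fraction_unfolded(4.0), 2))  # 0.5
print(fraction_unfolded(0.0) < 0.01)     # mostly folded in water: True
```

Fitting this curve to the measured signal as a function of denaturant concentration yields the free energy of unfolding in water and the m value mentioned in the circular dichroism section.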

Circular dichroism

Main article: Circular dichroism

Circular dichroism is one of the most general and versatile techniques for studying protein folding. Circular dichroism spectroscopy measures the difference in absorption of left- and right-circularly polarised light. Protein structures such as alpha helices and beta sheets are chiral, and their differential absorption reports on the degree of folding of the protein ensemble. This method has been used to measure the equilibrium unfolding of a protein by following the change in this absorption as denaturant concentration or temperature is varied. A denaturant melt yields both the free energy of unfolding and the protein's m value, its dependence on denaturant concentration. A temperature melt assay is used to determine the denaturation temperature (Tm)
of the protein. Like fluorescence spectroscopy, circular-dichroism spectroscopy can be combined with rapid mixing devices such as stopped flow to measure protein folding kinetics and to generate chevron plots.

Protein Vibrational Circular Dichroism

Vibrational circular dichroism (VCD) techniques have advanced in recent years, particularly with the use of Fourier transform (FT) equipment. These techniques offer a robust method for detecting protein
conformations in solution, even for large protein molecules. Protein VCD investigations can be integrated with X-ray diffraction data from protein crystals, FT-IR data from protein solutions in heavy water
(D2O), or quantum calculations.

Protein nuclear magnetic resonance spectroscopy

Main article: Protein nuclear magnetic resonance (NMR)

Protein nuclear magnetic resonance (NMR) gathers structural information about proteins by applying a magnetic field to concentrated protein samples. In NMR, different nuclei absorb specific radio frequencies depending on their chemical environment. Because protein structural changes span time scales from picoseconds to seconds, NMR is particularly well suited to investigating intermediate structures. Key methods for investigating protein structure and conformational changes in non-folded proteins include COSY, TOCSY, HSQC, time relaxation (T1 & T2), and NOE. NOE is particularly valuable because it detects magnetisation transfer between spatially close hydrogen atoms. Different NMR experiments are sensitive to different timescales and are therefore suited to different protein structural changes. NOE can detect bond vibrations and side-chain rotations, but it is not suitable for observing protein folding, which occurs on a longer timescale. For protein folding, CPMG relaxation dispersion (CPMG RD) and chemical exchange saturation transfer (CEST) collect data on the appropriate timescale.



Protein folding occurs at rates of approximately 50 to 3000 s−1. As a result, CPMG relaxation dispersion and chemical exchange saturation transfer have become key approaches for NMR investigation of folding. Both methods are also used to reveal excited intermediate states in the protein folding landscape. CPMG relaxation dispersion exploits the spin echo phenomenon: the nuclei of interest are subjected to a 90-degree pulse followed by one or more 180-degree pulses. As the nuclei refocus, a broad distribution indicates that the nuclei of interest are involved in an excited intermediate state. Relaxation dispersion plots yield data on the thermodynamics and kinetics of exchange between the excited and ground states. Saturation transfer measures changes in ground-state signal intensity caused by perturbation of the excited states: low-intensity radio-frequency irradiation saturates the excited state of a particular nucleus, and this saturation is transferred to the ground state.[55] The transfer is detected as a reduction in the magnetisation (and hence the signal) of the ground state. The main limitations of NMR are its reduced resolution for proteins larger than about 25 kDa and its less detailed structural picture compared with X-ray crystallography. Protein NMR analysis is also inherently challenging, and a single NMR spectrum may admit several plausible interpretations. In a study of the folding of SOD1, a protein associated with amyotrophic lateral sclerosis, excited intermediates were examined using relaxation dispersion and saturation transfer. SOD1 had previously been linked to numerous disease-causing mutations thought to be involved in protein aggregation, but the precise mechanism remained unidentified. Relaxation dispersion and saturation transfer experiments revealed numerous excited intermediate states associated with misfolding in the SOD1 mutants.

Dual-polarization interferometry

Main article: Dual-polarization interferometry

Dual-polarisation interferometry is a surface-based technique for measuring the optical properties of molecular layers. Applied to protein folding, it quantifies conformation by measuring the size and density of a protein monolayer in real time at sub-Angstrom resolution. However, it can only measure the kinetics of folding processes slower than approximately 10 Hz. As with circular dichroism, the folding process can be initiated by a denaturant or by changes in temperature.
Studies of folding with high time resolution

The study of protein folding has been greatly advanced in recent years by the development of fast, time-resolved techniques. Experimenters rapidly trigger the folding of a sample of unfolded protein and observe the resulting dynamics. Fast techniques in use include neutron scattering, ultrafast solution mixing, photochemical methods, and laser temperature jump spectroscopy. Among the many scientists who have contributed to the development of these techniques are Jeremy Cook, Heinrich Roder, Harry Gray, Martin Gruebele, Brian Dyer, William Eaton, Sheena Radford, Chris Dobson, Alan Fersht, Bengt Nölting, and Lars Konermann.
Proteolysis

Proteolysis is routinely used to probe the fraction of a protein that is unfolded under a wide range of solution conditions, for example by fast parallel proteolysis (FASTpp).[62][63]
Single-molecule force spectroscopy

Single-molecule techniques such as optical tweezers and AFM have been used to understand protein folding mechanisms of isolated proteins as well as proteins with chaperones.[64] Optical tweezers have been used to stretch single protein molecules from their C- and N-termini and unfold them, allowing the subsequent refolding to be studied. This technique allows folding rates to be measured at the single-molecule level. For example, optical tweezers have recently been applied to study the folding and unfolding of proteins involved in blood coagulation. Von Willebrand factor (vWF) is a protein with an essential role in blood clot formation. Single-molecule optical tweezers measurements showed that calcium-bound vWF acts as a shear-force sensor in the bloodstream: shear force causes the A2 domain of vWF to unfold, and its refolding rate is dramatically enhanced in the presence of calcium. It has also recently been shown that the simple src SH3 domain accesses multiple unfolding pathways under force.[67]
Biotin painting

Biotin labelling allows condition-specific cellular snapshots of proteins in their folded or unfolded states, depending on the conditions. Biotin 'painting' shows a preference for predicted intrinsically disordered proteins.[68]
1.4 Computational investigations of protein folding
Computational studies of protein folding address three main aspects: the prediction of protein stability, kinetics, and structure. A comprehensive 2013 review summarises the computational techniques available for protein folding.[69]

Levinthal's paradox
In 1969, Cyrus Levinthal observed that the unfolded polypeptide chain possesses an immense number of potential conformations due to its extensive degrees of freedom. One of his publications had an
approximation of either 3300 or 10143. Levinthal's paradox is a thought experiment that highlights the impracticality of folding a protein by systematically exploring all potential conformations. This process
would require an incredibly long period of time, even if the conformations were examined at a very fast pace (on the scale of nanoseconds or picoseconds). Levinthal's proposition is that proteins do not fold
through a random conformational search, as they fold considerably faster than that. Instead, he suggests that proteins fold by transitioning through a sequence of meta-stable intermediate states.
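The arithmetic behind the paradox is easy to reproduce. The sketch below uses commonly quoted illustrative figures (3 conformations per residue, a 100-residue chain, one conformation sampled per picosecond); these numbers are assumptions for illustration, not values from the text above.

```python
# Illustrative arithmetic behind Levinthal's paradox.
# Assumed figures: 3 conformations per residue, 100 residues,
# one conformation sampled per picosecond.
conformations_per_residue = 3
residues = 100
sample_time_s = 1e-12  # seconds per conformation tried

total_conformations = conformations_per_residue ** residues
seconds_per_year = 3600 * 24 * 365
search_time_years = total_conformations * sample_time_s / seconds_per_year

print(f"conformations to search: {total_conformations:.2e}")
print(f"exhaustive search time:  {search_time_years:.2e} years")
```

Even with these modest assumptions the exhaustive search takes on the order of 10^28 years, vastly longer than the age of the universe, which is exactly the impracticality Levinthal pointed out.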
Energy landscape of protein folding

The energy funnel by which an unfolded polypeptide chain assumes its native structure



The energy landscape provides a visual representation of the configuration space of a protein during the folding process. Joseph Bryngelson and Peter Wolynes propose that proteins adhere to the principle
of minimal frustration, which implies that naturally occurring proteins have efficiently optimised their energy landscapes for folding. Furthermore, they argue that nature has selected amino acid sequences in
such a way that the folded state of the protein is adequately stable and can be reached rapidly. Despite nature's efforts to decrease
frustration in proteins, a certain amount of it nevertheless persists, as evidenced by the existence of local minima in the energy landscape of proteins.
These evolutionarily chosen sequences result in proteins having energy landscapes that are universally "funnelled" towards the native state, a term coined by José Onuchic. The topography of this
"folding funnel" enables the protein to achieve its native state by following numerous paths and intermediates, rather than being limited to a single route. Both computational simulations of model proteins
and experimental research provide support for the idea, and the theory has also been utilised to enhance techniques for predicting and designing protein structures. [72] The
characterisation of protein folding through the concept of a levelling free-energy landscape is in accordance with the second law of thermodynamics. Conceptualising landscapes in terms of
visualizable potential or total energy surfaces, characterised by maxima, saddle points, minima, and funnels, may be somewhat deceptive when considering their physical nature. The description in question
pertains to a phase space with a high number of dimensions, where manifolds might assume various intricate topological structures.[75] The polypeptide chain, in its unfolded state, starts in the upper part of
the funnel, where it can adopt the greatest variety of unfolded configurations and is at its maximum energy level. These energy landscapes suggest that there are many potential starting points, but only one
final state is achievable. However, they do not provide information on the various ways in which the folding process can occur. Another molecule of the identical protein may potentially traverse slightly
distinct folding paths, in search of alternative lower energy intermediates, as long as it ultimately attains the same native structure. The utilisation frequencies of distinct pathways may vary based on the
thermodynamic favorability of each process. Therefore, if one pathway is determined to have a higher thermodynamic preference than another, it is expected to be more commonly utilised in the process of
achieving the natural structure. During the process of protein folding, the protein consistently strives to adopt a thermodynamically more favourable shape compared to its previous conformations, hence
progressing down the energy funnel. The presence of secondary structures in a protein is a reliable sign of enhanced stability. Among the various combinations of secondary structures that the polypeptide
backbone can adopt, only one will have the lowest energy and hence be found in the protein's natural form. Alpha helices and beta turns are some of the initial structures that emerge during the folding of a
polypeptide. Alpha helices can form within a remarkably short time frame of 100 nanoseconds, whereas beta turns take slightly longer, around 1 microsecond.[29] A saddle point can be detected in the
energy funnel landscape, which corresponds to the transition state of a specific protein. The transition state, seen in the energy funnel diagram, represents the specific shape that each molecule of the
protein must adopt in order to ultimately achieve the native structure. Proteins cannot adopt their native structure unless they first go through the transition state. The transition state can be described as an
altered or premature version of the native state, rather than simply being another intermediate phase. The rate-determining step is the folding of the transition state, which, despite being in a higher energy
state than the native fold, closely mimics the native structure. During the transition state, a nucleus is present that facilitates the folding of the protein. This nucleus is created through a process called
"nucleation condensation," which causes the structure to gradually collapse onto the nucleus.
Modeling of protein folding

Folding@home uses Markov state models, like the one diagrammed here, to model the possible
shapes and folding pathways a protein can take as it condenses from its initial randomly coiled state (left) into its native 3D structure (right).
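The idea behind a Markov state model can be illustrated with a toy example: a small set of coarse states and a matrix of transition probabilities between them, applied repeatedly to an ensemble that starts fully unfolded. The three states and all probabilities below are invented for illustration; the models actually built by Folding@home contain many thousands of states estimated from simulation data.

```python
import numpy as np

# Toy 3-state Markov state model: unfolded (U) -> intermediate (I) -> native (N).
# All transition probabilities are made up for illustration; each row sums to 1.
T = np.array([
    [0.90, 0.10, 0.00],   # from U
    [0.05, 0.85, 0.10],   # from I
    [0.00, 0.01, 0.99],   # from N (nearly absorbing)
])

p = np.array([1.0, 0.0, 0.0])  # the whole ensemble starts unfolded
for _ in range(500):            # propagate the ensemble 500 time steps
    p = p @ T

print("populations [U, I, N]:", p.round(3))
```

After enough steps the populations approach the model's stationary distribution, with most of the ensemble in the native state — the Markov-state analogue of funnelling into the native structure.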

De novo or ab initio methods in computational protein structure prediction can simulate several elements of protein folding. The technique of molecular dynamics (MD) was employed to simulate the process
of protein folding and study its dynamics in a computer-based environment.[78] Initial equilibrium folding simulations were conducted employing an implicit solvent model and umbrella sampling technique.
Due to the high computational expense, ab initio molecular dynamics (MD) folding simulations that incorporate explicit water molecules are constrained to peptides and very small proteins. Molecular
dynamics simulations of larger proteins are currently limited to studying the dynamics of either the experimentally determined structure or the unfolding process at high temperatures. Coarse-grained models can
be used to study folding processes that occur over long periods of time, such as the folding of small proteins (about 50 residues) or larger. Various extensive computational initiatives, such as
Rosetta@home, Folding@home, and Foldit, focus on the process of protein folding.

Anton, a massively parallel supercomputer developed by D. E. Shaw Research, has been utilised to conduct extensive simulations with uninterrupted trajectories. This supercomputer is specifically designed
and constructed using custom ASICs and interconnects. The most extensive outcome of a simulation conducted with Anton is a 2.936 millisecond simulation of NTL9 at a temperature of 355 K. [88]
Current simulations have the capability to unfold and refold proteins with a small number of amino acid residues (<150) and forecast the impact of mutations on the speed and
stability of folding. In 2020, a group of scientists utilised AlphaFold, an AI program created by DeepMind, and achieved the top position in CASP. [90] The team attained a level of precision
surpassing that of any other group. For approximately two-thirds of the proteins in CASP's global distance test (GDT), it achieved a score of over 90. The GDT test evaluates how closely a computational
program's predicted structure matches the structure determined by laboratory experiments, with a perfect match being scored as 100, within the specified distance cutoff used for GDT calculations. The
protein structure prediction results of AlphaFold at CASP were characterised as "revolutionary" and "remarkable", although certain researchers have observed that a fraction of its predictions lack sufficient accuracy.
Furthermore, the protein folding problem cannot be deemed resolved while the process or rules governing protein folding remain unknown. However, this is regarded as a noteworthy
accomplishment in the field of computational biology [92] and a substantial advancement towards a long-standing and ambitious goal in the field of biology.

See also

 Anfinsen's dogma
 Chevron plot
 Denaturation midpoint
 Downhill folding
 Folding (chemistry)
 Phi value analysis
 Potential energy of protein
 Protein dynamics
 Protein misfolding cyclic amplification
 Protein structure prediction software
 Proteopathy
 Time-resolved mass spectrometry



2.4 References

1. ^ Alberts B, Johnson A, Lewis J, Raff M, Roberts K, Walters P (2002). "The Shape and Structure of Proteins". Molecular Biology of the Cell; Fourth Edition. New York and London: Garland
Science. ISBN 978-0-8153-3218-3.
2. ^ Anfinsen CB (July 1972). "The formation and stabilization of protein structure". The Biochemical Journal. 128 (4): 737–49. doi:10.1042/bj1280737. PMC 1173893. PMID 4565129.
3. ^ Berg JM, Tymoczko JL, Stryer L (2002). "3. Protein Structure and Function". Biochemistry. San Francisco: W. H. Freeman. ISBN 978-0-7167-4684-3.
4. ^ Selkoe DJ (December 2003). "Folding proteins in fatal ways". Nature. 426 (6968): 900–4. Bibcode:2003Natur.426..900S. doi:10.1038/nature02264. PMID 14685251. S2CID 6451881.
5. ^ Alberts B, Bray D, Hopkin K, Johnson A, Lewis J, Raff M, Roberts K, Walter P (2010). "Protein Structure and Function". Essential cell biology (Third ed.). New York, NY: Garland Science.
pp. 120–70. ISBN 978-0-8153-4454-4.
6. ^ Kim PS, Baldwin RL (1990). "Intermediates in the folding reactions of small proteins". Annual Review of Biochemistry. 59: 631–60. doi:10.1146/annurev.bi.59.070190.003215. PMID 2197986.

5-3: Hazard Operability Analysis (HAZOP)

Step 1

Warm up - Before watching the video, answer the question to 'unlock' your prior knowledge

Q: When mixing ingredients and then baking a cake in a conventional domestic oven often the typical process-critical parameters are weights, volumes, time and temperature. What
are the consequences of too much or too little of one or all of those parameters on the quality of the overall cake when removed from the oven?

The cake's outcome can be affected by an excess or deficiency of specific factors, leading to potential issues such as undercooking, burning, failure to rise, sogginess, excessive hardness, or lack of taste.

The cake may turn out too small or too big, overcooked or undercooked; it may even grow out of proportion to the baking tray and end up all over the oven.

This study investigates the effects of baking temperature and time on the physical characteristics of moist cakes baked in either an air fryer or a convection oven. The cakes were baked under different
conditions: (1) baking temperature of 150 °C, 160 °C, and 170 °C for both air fryer and convection oven, and (2) baking time of 25, 30, 35 min for air fryer and 35, 40, 45 min for convection oven. Baking
temperature and time were found to have a significant (p < 0.05) effect on the relative height, moisture content, firmness, and color of the product but no significant effect on the springiness of the product.

Based on the numerical optimization method, the optimum condition in an air fryer was 150 °C for 25 minutes. These optimized conditions resulted in higher relative height (37.19%), higher moisture content
(28.80%), lower crumb firmness and chewiness (5.05 N and 1.42 N respectively) as well as a higher overall acceptance score (5.70) as compared to the optimum condition in a convection oven (150 °C for 35
min). Moreover, the rapid air flow present in an air fryer suggests that it is possible to produce high-quality moist cake at a minimum baking temperature and a shorter baking time.

Moist cake is one of the most-favored shortcakes, appealing to all ages. It must meet the high-quality standards demanded by consumers to maintain and expand bakery products up to the international
market. Different baking parameters give rise to distinct products, with excessive baking temperature resulting in high crust color, lack of volume, peaked tops, closely-packed or irregular crumbs, and the
downsides of under-baking.

Air flow also plays an important role in ensuring even temperature distribution, which has a significant effect on the quality of the finished product. Air flow enables convective heat transfer and reduces
baking time, hence leading to enhanced oven performance. In another study, various types of baking ovens—such as tunnel-type ovens, pilot plant ovens, microwave combination ovens, and electric ovens
—have different rates of air flow, which influence the distribution of temperature in the oven chamber.

Response surface methodology (RSM) is a mathematical and statistical practice employed to examine the interactions between factors and one or more response variables. Most researchers have used this
method in optimizing the formulations and effects of processing conditions on the quality of bakery products.

The study investigated the effects of air-frying temperature and time on the qualities of moist cakes (volume expansion, moisture content, texture, and color of the cakes). It was further extended to derive the
optimum baking conditions based on the sensory evaluation of air-fried moist cakes as compared to those baked in a convection oven.

This research used a commercial air fryer and an electric convection oven to bake moist chocolate cake. Three different baking temperatures (150 °C, 160 °C, 170 °C) were chosen, with baking times of 25,
30, and 35 minutes for the air fryer and 35, 40, and 45 minutes for the electric convection oven. The selected baking temperatures and times were based on preliminary experiments.

The optimum baking temperature and time were determined using Response Surface Methodology (RSM) in the Design-Expert software, version 10.0.3.1. A 2-variable, 3-coded-level central composite design (CCD) was
selected to accommodate 13 experimental conditions. The independent variables were baking temperature (X1) and time (X2), while the dependent variables included the relative central height of the cake,
moisture content, and texture characteristics.



The regression coefficients of the individual linear, quadratic, and interaction terms were determined using a second-order polynomial. The coefficients of the polynomial were represented by: Y, desired
value of response (relative central height, crumb moisture content, firmness, chewiness and springiness); b0, intercept; b1, coefficient for baking temperature (first order); b2, coefficient for baking time
(first order); b12, coefficient for the interaction between baking temperature and time; b3, coefficient for baking temperature (second order); b4, coefficient for baking time (second order).

The RSM was also used to perform the analysis of variance (ANOVA) with a significance value of 5%. The regression coefficient was used in the statistical calculation to generate a contour map based on
the regression models.

The cake volume expansion was measured using the cross-sectional area-tracing method according to the AACC method (AACC 2000). The cake was assumed as a spherical cap, and the height of the
cake was measured using Vernier calipers. The schematic diagram of a half-cake sample is shown in Fig. 1a.

In conclusion, this research aimed to determine the optimal baking conditions for moist chocolate cake baked using an air fryer and an electric convection oven. The results showed that the optimal baking
conditions were achieved with varying degrees of success.

The study aimed to analyze the moisture content, texture, and color of moist chocolate cakes baked using an air fryer and convection oven. The moisture content was measured using a moisture analyzer
under standard drying and medium accuracy modes, while texture was measured using a TA-XT plus Texture Analyzer in the Texture Exponent software version 2.0.7.0. The top surface color of the baked
cake was measured with HunterLab Ultrascan PRO color spectrophotometer and expressed as the International Commission on Illumination (CIE) L*, a*, and b* color scale.
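The L*, a*, b* readings are typically reduced to a single total colour change, ΔE. Assuming the common CIE76 definition — a plain Euclidean distance in L*a*b* space; the study does not state which ΔE formula it used — the calculation looks like this (the two colour readings are hypothetical):

```python
import math

# Total colour change (Delta E*ab, CIE76) between two CIE L*a*b* readings.
def delta_e(lab1, lab2):
    """Euclidean distance in CIE L*a*b* space."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(lab1, lab2)))

reference = (55.0, 8.0, 18.0)   # unbaked batter (hypothetical reading)
baked = (48.0, 10.0, 14.0)      # baked crust (hypothetical reading)
print(f"Delta E = {delta_e(reference, baked):.2f}")
```

A larger ΔE between the reference and the baked surface indicates more colour development during baking.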

Preference testing of the samples was conducted using 20 untrained panellists with random order numbers. The moistness, firmness, springiness, chocolate taste, aroma, and overall acceptance of the
cakes were evaluated following baking under optimum conditions in an air fryer or convection oven. Each panellist was served with 2 samples (1 cake baked in an air fryer and convection oven respectively)
in a random order. The samples were cut into 2.5 cm3 cubes and served in plastic containers with randomly-coded three-digit numbers. The 7-point Hedonic Scale was used to measure the mean scores for
each characteristic.

Model-fitting based on RSM was used to examine the goodness of fit of the independent and dependent variables. The results showed that the models for all the responding variables were highly compatible
when their coefficients of determination were more than 80%, although this was not so for (1) chewiness following baking in a convection oven and (2) springiness after baking using both air fryer and
convection oven. The closer the R2 value to unity, the better the empirical model fits the actual data. Besides, the results showed an insignificant lack of fit for all the response variables, which indicates
that the models adequately represent the data across the experimental domain, including at points not used in the regression.

The regression equations calculated by the RSM program for optimization of the baking parameters for moist chocolate cake baked using the air fryer and convection oven were significant, and the
insignificant lack of fit for all the response variables again indicates that the models adequately represent the data within the experimental domain.

In conclusion, the study found that the air fryer and convection oven were effective in achieving the desired texture and color of moist chocolate cakes, and the insignificant lack of fit for all the
response variables indicates that the models adequately represent the data in the experimental domain. Further research is needed to understand more fully the
effectiveness of these methods in improving the quality and flavor of moist chocolate cakes.

The study aimed to evaluate the effects of baking temperature and time on cake volume expansion, moisture content, firmness, chewiness, and total color change of cakes using air fryer and convection
oven. The relative heights of air fryer baked cakes and oven-baked cakes ranged from 38 to 66% and 21 to 50%, respectively. The baking temperature in both air fryer baked cakes and convection oven-
baked cakes showed a significant effect on the percentage of the relative heights of cakes. A high baking temperature (170 °C) in an air fryer led to a greater increase in the relative heights of the cakes than
those baked at 150 °C and 160 °C. Air-frying gave rise to a slightly higher increment in the percentage of cake height than baking in a convection oven, which could be due to the enhanced air flow in the
oven chamber and increased convective heat transfer, hence leading to greater volume expansion.

The present study found that air fryer baked cakes for 35 minutes at 150 °C produced 64.9% of the relative cake height, which was higher compared to those baked for 25 minutes. A longer heating time
increases the evaporation of water during baking, which causes the air bubbles in the cake to expand and form porous structures, thus yielding a larger cake. Meanwhile, using convection oven for a longer
time (45 min) at 150 °C showed a reduction in the relative heights of the cakes by 5.85%. According to the multiple regression analysis, the interaction between baking temperature and time was
acknowledged to have a significant effect (p < 0.05) on the relative heights of the air fryer baked cakes and oven-baked cakes. Changes in the baking temperature and time produced different percentages of
cake height increment at the end of the baking process, concluding that the increments in the air-frying temperature and time will enhance the volume expansions of cake.

The study investigates the effects of baking temperature and time on the moisture content of the crumb and on the texture of the cakes (firmness, chewiness, and springiness). A longer baking time led to
reductions in central moisture content, with a decrease from 27.75 to 25.95% when the baking temperature was constant (170 °C). An increase in the baking time from 35 to 45 minutes in the convection
oven also resulted in a decrease in moisture content.

The textures of the cakes were described in terms of firmness, chewiness, and springiness. Analysis of variance indicated that baking temperature in an air fryer and a convection oven had the most
significant effect on the firmness of the moist cakes. The hardness significantly decreased with the decrease in baking temperature for all baking times for both air fryer and convection oven.



Only baking temperature was found to significantly affect the chewiness of the air fryer baked cakes and convection oven-baked cakes (p < 0.05). When the baking temperature decreased, the chewiness of
the air fryer baked cakes also decreased. By decreasing the baking temperature, the chewiness of the convection oven-baked cakes increased due to the presence of a small volume of air cells inside the
cake, which gave a higher mechanical strength. Baking time did not have a significant effect on the chewiness of the air fryer baked cakes and convection oven-baked cakes.

The evolution of color is an important appearance quality of the cake product and an important physical property representing the influence of different baking modes. The total color
change (ΔE) values of the cakes baked in the air fryer and convection oven ranged from 6.2 to 10.6 and 4.0 to 8.2, respectively. Baking temperature had a more significant effect than baking time on the total color
change in both air fryer baked cakes and convection oven-baked cakes (p < 0.05).

In conclusion, the study highlights the importance of considering baking temperature and time when preparing crumb cakes to ensure their optimal texture, firmness, and color.

The study aimed to determine the optimal baking conditions for moist cakes in an air fryer and a convection oven using numerical optimization of baking temperature and time. The optimal baking conditions
were 150 °C for 25 minutes in the air fryer and 150 °C for 35 minutes in the convection oven. The experimental values of all responses were compared with those predicted by the model.

The cakes baked at a minimum baking temperature and time in the air fryer had a higher moisture content (28.80% ± 0.18) and relative height (37.19 ± 1.35), apart from lower firmness (5.05 N ± 0.05) and
chewiness (1.42 N ± 0.03) as compared to those baked in the convection oven under similar conditions. The experimental and predicted values were in close agreement, with the
percentage errors between them below 10%. Therefore, the model can be used to optimize the baking conditions for moist cakes.
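The error percentages quoted for the optimum conditions are consistent with taking the difference between experimental and predicted values relative to the predicted value. A quick check against the reported air-fryer figures:

```python
# Percentage error between experimental and predicted responses.
# The reported errors match |exp - pred| / pred * 100, i.e. the difference
# taken relative to the predicted value.
def pct_error(experimental, predicted):
    return abs(experimental - predicted) / predicted * 100

print(f"relative height: {pct_error(37.19, 35.00):.2f}%")  # reported as 6.26%
print(f"moisture:        {pct_error(28.80, 28.77):.2f}%")  # reported as 0.10%
print(f"chewiness:       {pct_error(1.42, 1.53):.2f}%")    # reported as 7.19%
```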

Sensory attributes of moist chocolate cakes were also evaluated. The results showed no significant differences between air fryer baked cake and oven-baked cake on overall acceptability score (just varied
from 5.35 to 5.70). In terms of moisture, air fryer baked cakes were higher with scores 6.55 ± 0.69 compared to oven-baked cakes which obtained 5.15 ± 0.81. This is in agreement with the experimental
data results that show air fryer baked cakes have 28.8% of moisture content and oven-baked cakes have 27.64%.

In terms of firmness and chewiness, the air fryer baked cakes (scores 5.40 ± 0.50 and 5.05 ± 0.76 respectively) and the oven-baked cakes (5.05 ± 0.83 and 5.25 ± 0.55 respectively) showed no significant
differences from each other, indicating that both baking modes produced cake textures that the panelists found acceptable. With regard to
both sensory and instrumental properties, the texture attributes were closely associated with the sensory characteristics of the cakes.

The use of rapid air technology in baking significantly influences the quality of cakes, with factors such as baking temperature and time having a significant impact on their quality. However, springiness is not
affected. An increase in air-frying temperature and time had the most significant effect on the qualities of moist cakes compared to baking in a convection oven.

Statistical analysis revealed no significant differences in springiness at different baking temperatures and times for both baking modes. The optimal air fryer conditions resulted in moist cake samples with a
relative height of 37.19% ± 1.35, moisture content of 28.80% ± 0.18, firmness of 5.05 N ± 0.05, chewiness of 1.42 N ± 0.03, and high overall acceptability (5.70 ± 0.66).

The study was financially supported by a grant from Universiti Putra Malaysia. The research also explores the optimization of ingredients and baking process for improved wholemeal oat bread quality. The
study also explores the effect of flour type and baking temperature on cake dynamic height profile measurements during baking. Overall, the use of rapid air technology in baking can improve the quality of
cakes and enhance their overall acceptability.

Experiment design and statistical analysis

The Design-Expert software version 10.0.3.1 (Stat-Ease Inc., Minneapolis, USA) was used to perform Response Surface Methodology (RSM) in order to determine the optimal baking temperature and
duration. The researchers chose a 2-variable, 3-coded level central composite design (CCD) to account for the 13 different experimental circumstances (Gan et al., 2007). The study examined two
independent variables: baking temperature (X1) with three levels (150 °C, 160 °C, 170 °C), and baking time (X2) with three levels per oven (25, 30, 35 min for the air fryer; 35, 40, 45 min for the convection oven).
Table 1 displays the experimental designs for the independent variables, both in their uncoded and coded values. The measured variables comprised the comparative vertical dimension of the cake's centre,
the level of moisture, and the attributes related to its texture. The regression coefficients for the individual linear, quadratic, and interaction variables were calculated using a second-order polynomial based
on the following equation:

Y = b0 + b1X1 + b2X2 + b12X1X2 + b3X1^2 + b4X2^2

The polynomial's coefficients were denoted as follows: Y, the desired value of the response (relative central height, crumb moisture content, firmness, chewiness, and springiness); b0, the intercept; b1, the
coefficient for the first-order baking temperature term; b2, the coefficient for the first-order baking time term; b12, the coefficient for the interaction between baking temperature and time; b3, the coefficient for the
second-order baking temperature term; b4, the coefficient for the second-order baking time term; X1, the baking temperature in degrees Celsius; X2, the baking time in minutes.
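The fitting itself is ordinary least squares on the six polynomial terms. The sketch below emulates what Design-Expert does internally, using synthetic data generated from made-up "true" coefficients (loosely echoing the magnitude of the relative-height model); it is an illustration, not the study's actual fit.

```python
import numpy as np

# Least-squares fit of Y = b0 + b1*X1 + b2*X2 + b12*X1*X2 + b3*X1^2 + b4*X2^2
# on synthetic data; the "true" coefficients below are invented for illustration.
rng = np.random.default_rng(0)
X1 = rng.uniform(-1, 1, 30)   # coded baking temperature
X2 = rng.uniform(-1, 1, 30)   # coded baking time
true_b = np.array([47.9, 5.7, 9.7, -4.3, 4.8, 4.8])

# Design matrix with one column per polynomial term
A = np.column_stack([np.ones_like(X1), X1, X2, X1 * X2, X1**2, X2**2])
Y = A @ true_b + rng.normal(0.0, 0.1, 30)  # responses with a little noise

b, *_ = np.linalg.lstsq(A, Y, rcond=None)
print("fitted coefficients:", b.round(2))
```

With 30 design points and little noise, the fitted coefficients recover the generating values closely, which is the situation the high R2 values in Table 2 describe.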

Table 1

Experimental data on the responses of optimization baking parameters for moist chocolate cake baked using air fryer and convection oven



                     Air fryer                                               Convection oven

X1 b       X2        Y1 c   Y2     Y3    Y4    Y5    Y6     X1         X2        Y6     Y7     Y8    Y9    Y10   Y11
160 (0)a   35 (1)    66.5   26.52  6.37  2.41  80.3  10.0   160 (0)    40 (0)    49.6   26.31  6.16  1.83  62.8  6.7
150 (−1)   30 (0)    48.0   28.01  5.50  2.55  83.9  7.6    150 (−1)   40 (0)    27.3   27.25  5.03  2.52  79.8  4.6
160 (0)    30 (0)    47.4   27.35  6.13  3.12  79.9  9.4    160 (0)    40 (0)    50.2   26.24  6.18  2.01  80.8  7.1
170 (1)    25 (−1)   56.3   27.71  6.39  2.49  81.4  9.7    150 (−1)   35 (−1)   21.3   28.25  5.02  2.52  87.6  4.0
160 (0)    30 (0)    43.2   27.06  6.26  3.41  84.4  9.6    160 (0)    40 (0)    44.5   26.15  6.05  1.66  64.9  7.1
160 (0)    30 (0)    46.7   27.05  6.28  3.56  79.5  9.2    160 (0)    35 (−1)   41.0   28.23  6.02  1.73  70.4  5.8
170 (1)    35 (1)    65.8   25.95  6.46  2.43  84.6  10.6   170 (1)    35 (−1)   49.6   28.02  6.25  1.54  56.7  7.6
170 (1)    30 (0)    63.0   26.87  6.41  3.03  84.4  10.3   160 (0)    40 (0)    43.9   26.24  6.12  1.49  64.7  7.1
160 (0)    25 (−1)   44.5   28.17  6.34  2.80  80.5  8.8    170 (1)    40 (0)    50.2   26.36  6.33  1.71  72.8  7.9
160 (0)    30 (0)    49.9   27.61  6.35  3.56  78.6  9.1    160 (0)    40 (0)    46.4   26.43  6.13  2.31  63.1  7.1
150 (−1)   35 (1)    64.9   27.21  5.66  2.44  84.6  8.0    150 (−1)   45 (1)    40.4   27.19  5.36  2.76  77.8  5.2
150 (−1)   25 (−1)   38.1   28.74  4.98  1.42  68.1  6.2    160 (0)    45 (1)    46.4   25.99  6.43  1.67  68.7  6.9
160 (0)    30 (0)    47.1   27.64  6.26  3.66  82.4  9.2    170 (1)    45 (1)    46.7   25.27  6.67  2.22  79.0  8.2

a Coded value in parentheses
b X1 = baking temperature (°C), X2 = baking time (min)
c Y1,6 = relative height (%); Y2,7 = moisture content (%); Y3,8 = firmness (N); Y4,9 = chewiness (N); Y5,10 = springiness (%); Y6,11 = color change (ΔE)
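The 13 runs in Table 1 follow the standard 2-factor central composite layout: 4 factorial points, 4 axial points and 5 centre replicates, with the axial points at the faces, since only three levels of each factor appear. A sketch of how such a design can be enumerated (air-fryer levels shown; the run order here is arbitrary, unlike the randomized order of the actual experiments):

```python
from itertools import product

# 2-factor face-centred central composite design:
# 4 factorial + 4 axial + 5 centre runs = 13 experiments.
factorial = [(a, b) for a, b in product((-1, 1), repeat=2)]
axial = [(-1, 0), (1, 0), (0, -1), (0, 1)]
center = [(0, 0)] * 5
design = factorial + axial + center

# Map coded levels to the air-fryer settings quoted in the text
temp_c = {-1: 150, 0: 160, 1: 170}    # X1 (degC)
time_min = {-1: 25, 0: 30, 1: 35}     # X2 (min)

for x1, x2 in design:
    print(f"X1 = {temp_c[x1]} degC, X2 = {time_min[x2]} min  (coded {x1:+d}, {x2:+d})")
print(len(design), "runs in total")
```

The centre replicates provide the pure-error estimate that makes the lack-of-fit tests in Table 2 possible.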

The RSM also was used to perform the analysis of variance (ANOVA). The value of significance was set at 5%. The regression coefficient was used in the statistical calculation to generate a contour map
based on the regression models.

Volume expansion measurement

The cake volume expansion was measured using the cross-sectional area-tracing method according to the AACC method (AACC 2000). The cake was assumed as a spherical cap and the height of the
cake was measured using Vernier calipers. The schematic diagram of a half-cake sample is shown in Fig. 1a.
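Treating the cake as a spherical cap means its volume can be estimated from just the base radius a and the measured central height h via the standard cap formula V = πh(3a² + h²)/6. A minimal sketch with hypothetical dimensions (not values from the study):

```python
import math

# Volume of a spherical cap of base radius a and height h:
#   V = pi * h * (3*a^2 + h^2) / 6
def spherical_cap_volume(a_cm, h_cm):
    return math.pi * h_cm * (3 * a_cm**2 + h_cm**2) / 6

# Hypothetical cake dimensions
volume = spherical_cap_volume(a_cm=7.5, h_cm=4.2)
print(f"estimated cake volume: {volume:.1f} cm^3")
```

For a = h the formula reduces to the hemisphere volume 2πa³/3, a handy sanity check.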



Fig. 1

Schematic diagram for measurement of a volume expansion, b moisture content and texture

Table 2

Regression equation calculated by RSM program for optimization of baking parameters for moist chocolate cake baked using air fryer and convection oven
Quality parameter     Equation                                                                              R2      Significance (p)  Lack of fit (p)

Relative height (%)
  Air fryer           Y1 = 47.94 + 5.68*X1 + 9.72*X2 − 4.33*X1X2 + 4.84*X1^2 + 4.84*X2^2                    0.9284  0.0007            0.1377 ns
  Convection oven     Y6 = 46.37 + 9.58*X1 + 3.60*X2 − 5.50*X1X2 − 6.25*X1^2 − 1.30ns X2^2                  0.9406  0.0004            0.4950 ns
Moisture content (%)
  Air fryer           Y2 = 27.36 − 0.57*X1 − 0.94*X2 − 0.13ns X1X2 − 0.29ns X1^2 + 0.17ns X2^2              0.9488  0.0002            0.9861 ns
  Convection oven     Y7 = 26.34 − 0.51*X1 − 1.01*X2 − 0.42*X1X2 + 0.31*X1^2 + 0.61*X2^2                    0.9776  0.0001            0.0586 ns
Firmness (N)
  Air fryer           Y3 = 6.28 + 0.52*X1 + 0.13*X2 − 0.15*X1X2 − 0.39*X1^2 + 0.01ns X2^2                   0.9579  0.0001            0.1128 ns
  Convection oven     Y8 = 6.13 + 0.66*X1 + 0.22*X2 − 0.01ns X1X2 − 0.44*X1^2 + 0.11*X2^2                   0.9962  0.0001            0.8279 ns
Chewiness (N)
  Air fryer           Y4 = 3.43 + 0.26*X1 + 0.10ns X2 − 0.27ns X1X2 − 0.55*X1^2 − 0.73*X2^2                 0.9021  0.0020            0.2248 ns
  Convection oven     Y9 = 1.82 − 0.39*X1 + 0.14ns X2 + 0.11ns X1X2 + 0.40*X1^2 − 0.01ns X2^2               0.7399  0.0497            0.6946 ns
Springiness (%)
  Air fryer           Y5 = 81.50 + 2.30ns X1 + 3.25ns X2 − 3.33ns X1X2 + 1.30ns X1^2 − 2.45ns X2^2          0.6627  0.1098            0.1375 ns
  Convection oven     Y10 = 67.72 − 6.12ns X1 + 1.80ns X2 + 8.03ns X1X2 + 7.44ns X1^2 + 0.69ns X2^2         0.7113  0.0686            0.8368 ns
Color change (ΔE)
  Air fryer           Y6 = 9.36 − 1.47*X1 + 0.65*X2 − 0.23ns X1X2 − 0.56*X1^2 − 0.11ns X2^2                 0.9806  0.0001            0.3558 ns
  Convection oven     Y11 = 6.93 + 1.65*X1 + 0.48*X2 − 0.15ns X1X2 − 0.45*X1^2 − 0.35ns X2^2                0.9771  0.0001            0.1359 ns

(* coefficient significant at p < 0.05; ns = not significant)
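Once fitted, the Table 2 equations predict a response directly from the coded settings. Taking the air-fryer relative-height model at face value (including its non-significant terms):

```python
# Air-fryer relative-height model from Table 2, evaluated at coded settings.
# Coefficients transcribed from the table; x1, x2 are coded levels (-1..+1).
def y1_relative_height(x1, x2):
    return 47.94 + 5.68*x1 + 9.72*x2 - 4.33*x1*x2 + 4.84*x1**2 + 4.84*x2**2

# 170 degC for 35 min is coded (+1, +1); 150 degC for 25 min is (-1, -1)
print(f"predicted at (+1, +1): {y1_relative_height(1, 1):.2f}%")
print(f"predicted at (-1, -1): {y1_relative_height(-1, -1):.2f}%")
```

The (−1, −1) prediction of roughly 37.9% sits close to the 38.1% measured for that run in Table 1, the kind of agreement the R2 of 0.9284 implies.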



Fig. 2

Response surface plot of the effect of baking temperature and time on A relative height, B moisture content, C firmness, D chewiness and E total color change of cakes using air fryer (a, c, e, g, i) and
convection oven (b, d, f, h, j)

Table 3

Predicted (Pred.) and experimental (Exp.) values of the response variables at the optimum condition

                          Air fryer (5.11 m/s)                Convection oven (0.08 m/s)
Response variable         Exp.a          Pred.   Error (%)    Exp.          Pred.   Error (%)
Relative height (%)       37.19 ± 1.35   35.00   6.26         32.55 ± 1.54  35.00   7.00
Moisture content (%)      28.80 ± 0.18   28.77   0.10         27.64 ± 0.36  28.35   2.50
Firmness (N)              5.05 ± 0.05    5.14    1.75         5.09 ± 0.06   5.00    1.80
Chewiness (N)             1.42 ± 0.03    1.53    7.19         2.25 ± 0.07   2.19    2.74
Total color change (ΔE)   6.00 ± 0.14    6.40    0.07         4.55 ± 0.07   5.3     0.16

a Mean ± standard deviation
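For most rows, the Error (%) column appears consistent with |Exp. − Pred.| / Pred. × 100 (the ΔE rows do not reproduce this way, possibly a transcription artifact in the source). A quick check:

```python
def error_pct(exp, pred):
    """Percent error of the experimental value relative to the model prediction."""
    return abs(exp - pred) / pred * 100

# Relative height, air fryer: |37.19 - 35.00| / 35.00 * 100
print(round(error_pct(37.19, 35.00), 2))  # 6.26

# Chewiness, air fryer: |1.42 - 1.53| / 1.53 * 100
print(round(error_pct(1.42, 1.53), 2))    # 7.19
```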



Fig. 3

Radar chart of the sensory attributes of the optimized moist chocolate cake baked using an air fryer and a convection oven
As is normal practice in baking, too high a baking temperature causes a dark crust colour, lack of volume with peaked tops, a close or irregular crumb, and probably all of the faults associated with under-baking.
Too low a baking temperature, however, causes a pale crust colour, large volume, and poor crumb texture.

This study aimed to evaluate the effects of baking temperature and airflow on the volume development of the cake and on final cake quality, such as volume development, firmness, springiness, and moisture
content. The cake was baked at three different temperatures (160°C, 170°C, and 180°C) under two different airflow conditions. Baking time, height changes of the batter, texture, and moisture content of the
cake were compared to identify differences or similarities in the final product as the process conditions varied.

Results showed that airflow had more significant effects on product quality than baking temperature, particularly on baking time (reduced by 25.58–45.16%) and on the rate of height change (0.7 mm/min).
However, baking temperature had more significant effects than airflow on volume expansion (2.86–8.37%) and on the springiness of the cake (3.44%).

Baking is a complicated process, and optimum conditions vary with the type of food being prepared and even with specific formulae within the food type. Processing conditions affect starch and protein
properties and the food's quality. In this study, by having some modifications on the convection oven, the airflow can be manipulated. The presence of airflow creates a forced convection process that
resembles the convection oven, while the absence of airflow creates a natural convection process that resembles the conventional oven or static oven.

A good quality cake should have large volume with a fine uniform moist crumb, good color and sheen, a good flavor, and a general appearance that is attractive and eye-appealing. During baking, volume
expansion, enzymatic activities, protein coagulation, and partial gelatinization of starch in batter are the most apparent interactions and affect the final product quality such as firmness, springiness, and
moisture content of crumb.

There have been numerous studies on the effects of process conditions such as baking temperature, types of oven used, and baking time to the final product quality such as volume expansion, texture, and
moisture content in cakes, bread, and biscuits. However, only a few studies have focused on comparing product qualities by manipulating baking temperature and airflow mode.

The study involved preparing a butter cake using a standard recipe, which included superfine flour, castor sugar, butter, fresh milk, eggs, baking powder, and vanilla essence. The batter was mixed using a
Panasonic mixer and a modified stainless steel baking pan. The cake was baked at three different baking temperatures (160°C, 170°C, 180°C) with two airflow conditions (with and without airflow). The
internal temperature of the batter was recorded using a thermocouple placed at the center of the pan.

Volume expansion measurements were conducted using five dowels attached to the pan, and the height of the batter was measured at 4-minute intervals during baking. Cake firmness and
springiness were measured following the AACC Approved Method, using a TA-XT plus Texture Analyser with Texture Exponent software version 2.0.7.0.

Moisture content was measured from the weight difference of the cake before and 1 hour after baking. The sample was dried overnight at 105 ± 3°C in a vacuum oven, weighed, and cut
in half. The crumb was then sliced at 3 cm height and the results were averaged.
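The wet-basis moisture calculation implied by this weighing procedure can be sketched as follows (the sample masses below are hypothetical):

```python
def moisture_wet_basis(m_wet, m_dry):
    """Wet-basis moisture content (%) from sample mass before and after oven drying."""
    return (m_wet - m_dry) / m_wet * 100

# Hypothetical crumb sample: 10.00 g before drying, 7.30 g after drying at 105 °C
print(round(moisture_wet_basis(10.00, 7.30), 1))  # 27.0
```

A result of 27.0% would sit inside the 15–30% accepted range quoted later in the text.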

Statistical analysis was performed using a two-way analysis of variance with baking temperature and airflow as the main parameters. The significant difference between baking temperature and airflow with
regards to quality was also analyzed. The interaction between baking temperature and airflow was also considered.
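The two-way analysis of variance described above can be sketched in plain Python for a balanced design. The baking-time values below are hypothetical, chosen only to illustrate the sum-of-squares decomposition; judging significance would additionally require comparing each F value against an F-distribution critical value:

```python
from itertools import product

def two_way_anova(data):
    """Balanced two-way ANOVA with replication.

    data[i][j] is the list of n replicate measurements for level i of
    factor A (e.g. baking temperature) and level j of factor B (e.g. airflow).
    Returns {source: (sum of squares, degrees of freedom, F)}.
    """
    a, b = len(data), len(data[0])
    n = len(data[0][0])
    grand = sum(y for i, j in product(range(a), range(b)) for y in data[i][j]) / (a*b*n)
    mean_a = [sum(y for j in range(b) for y in data[i][j]) / (b*n) for i in range(a)]
    mean_b = [sum(y for i in range(a) for y in data[i][j]) / (a*n) for j in range(b)]
    cell = [[sum(data[i][j]) / n for j in range(b)] for i in range(a)]

    ss_a = b*n * sum((m - grand)**2 for m in mean_a)
    ss_b = a*n * sum((m - grand)**2 for m in mean_b)
    ss_ab = n * sum((cell[i][j] - mean_a[i] - mean_b[j] + grand)**2
                    for i, j in product(range(a), range(b)))
    ss_e = sum((y - cell[i][j])**2
               for i, j in product(range(a), range(b)) for y in data[i][j])
    df_a, df_b, df_ab, df_e = a-1, b-1, (a-1)*(b-1), a*b*(n-1)
    mse = ss_e / df_e
    return {"A": (ss_a, df_a, ss_a/df_a/mse),
            "B": (ss_b, df_b, ss_b/df_b/mse),
            "AB": (ss_ab, df_ab, ss_ab/df_ab/mse)}

# Hypothetical baking times (min): 3 temperatures x 2 airflow conditions, 2 replicates
times = [[[40, 41], [22, 23]],   # 160 °C: without airflow, with airflow
         [[34, 35], [24, 25]],   # 170 °C
         [[31, 32], [23, 24]]]   # 180 °C
result = two_way_anova(times)
```

With this made-up data the airflow factor (B) dominates, mirroring the study's finding that airflow had the larger effect on baking time.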

The results showed that the use of a modified electrical convection oven and the use of a thermocouple for volume expansion measurements were effective in determining the cake's texture and firmness.

The study investigates the effects of baking temperature and airflow on baking time in bakery products. The internal cake temperature is monitored until it reaches 101–102°C; when this temperature is
reached, the baking process is stopped and the baking time is recorded. The results show that baking time is shorter in the presence of airflow (by 45.16%, 29.17%, and 25.58% at 160°C, 170°C, and 180°C,
respectively). This is due to better temperature distribution in the oven caused by the airflow.

The increase in baking temperatures resulted in smaller differences of baking time between cakes baked with and without airflow. However, with the same airflow condition, an increase in baking
temperature resulted in shorter baking time for both with and without airflow by 6.25% and 44.20%, respectively. Higher heat transfer resulted in the increase of internal cake temperature, resulting in shorter
baking time.

The lower baking temperature of 160°C with the presence of airflow is more efficient in terms of reducing cooking time and energy consumption. However, cake baked at 160°C with airflow produced smaller
volume expansion with high firmness, springiness, and moisture content as compared to cake baked at different temperatures with no airflow. A two-way analysis of variance showed that for all cakes baked
at different temperatures and airflows, the baking times have significant differences.

Volume development is an important characteristic in evaluating cake and cake quality. The height profile method can be used to judge volume during baking process. A typical increase was observed to
reach a maximum volume, then the cake volume decreased slightly at the end of baking.



At the beginning of the baking process, a uniform increase in height can be seen at the first 4 minutes of baking at 170°C for both with and without airflow. This is due to high heat absorption occurring at the
beginning when the product is at room temperature and exposed to high oven temperature. Cakes baked with airflow have a higher rate of height changes compared to cakes baked without airflow, possibly
due to evenly distributed hot air inside the oven chamber.

There are significant differences in height changes for cakes baked at different temperatures and airflows, but airflow has a higher significant effect towards height changes compared to baking temperature.

The second stage of baking involves the maximum volume expansion of a cake, which is achieved by increasing the crumb temperature. This is due to heat penetration inside the batter, resulting in
successive expansion of bubbles. The height of the cake at the edges (p1, p5) is higher than the center of the cake (p3), and this trend is observed at all different process conditions. Two-way analysis of
variance showed that both airflow and baking temperature have significant differences in height changes between the center and edges of the cake. However, airflow had higher significant effect on the
height changes between the center and edges of the cake.

At the final stage, further increase of temperature causes strengthening of the cake structure and the batter releases gas in the form of bubbles, resulting in slight cake shrinkage at 30-34 minutes for cakes
baked with airflow and 40-44 minutes for cakes baked without airflow at 170°C. This takes 25% of the total baking time to the end of baking.

Similar trends of volume development were reported by Therdthai et al. and Lostie et al., but HadiNezhad et al. and Whitaker et al. reported different results at the first and second stages of baking. During
the first stage, a little expansion occurred followed by the second stage, which was a period of rapid expansion to the maximum volume.

Figure 6 shows volume expansion of cakes baked with airflow increased with the increase of baking temperature and showed slightly larger volume than cakes baked without airflow. Two-way analysis of
variance showed there are significant differences between baking temperature and airflow towards the volume expansion (p < 0.05). However, baking temperature has higher significant effect on the volume
expansion compared to airflow. For cakes baked with airflow, increased baking temperature increased the volume expansion by 9.72% compared to 5.86% without airflow. However, at the highest
temperature, i.e., 180°C, volume expansion reduced by 1.63% than for 170°C. This might be due to longer baking time required, resulting in higher shrinkage and reducing the volume at the end of the
baking process.

The study investigates the effects of baking temperature and airflow on cake texture, firmness, and springiness. The results show that increasing the temperature resulted in an increase in the firmness of the
cake both with and without airflow, except for the cake baked at 160°C. At 160°C with airflow, the cake showed slightly higher firmness (518.84 g) compared to 170°C and 180°C (514.77 g and 518.66 g,
respectively). However, cake baked without airflow at 160°C showed lower firmness (435.48 g) compared to 170°C and 180°C without airflow (522.13 g and 523.68 g, respectively).

However, a two-way analysis of variance showed no significant differences between baking temperature and airflow with respect to the firmness of the cake (p > 0.05). For springiness, increasing the
temperature under both airflow conditions decreased the springiness of the cake. At 180°C the springiness of cake baked with airflow slightly increased, by 0.57%, while cake baked at 180°C
without airflow showed lower springiness (57.36%, a reduction of 3.44% from the springiness at 170°C).

The study also found that volume expansion has a negative relationship with firmness, with larger volume expansion resulting in lower firmness. Firmness would likewise be expected to have a negative
relationship with springiness. No significant differences were found between baking temperature and airflow with respect to the moisture content of the cake (p > 0.05). The moisture content of cakes baked
under all process conditions lay within the accepted range of 15–30%.

In conclusion, the study provides valuable insights into the effects of baking temperature and airflow on cake texture, firmness, and springiness. By understanding these factors, bakers can optimize their
baking methods and ensure the highest quality and texture of their cakes.

The study examines the effects of baking temperatures and airflow conditions on cake quality. Baking in the presence of airflow maintained the baking temperature close to the set-point temperature and
reduced baking time by 25.58–45.16%. Compared with baking temperature, airflow had more significant effects on baking time and on cake quality in terms of batter height changes and cake volume
expansion. Baking temperature had a significant effect on the springiness of the cake. However, neither baking temperature nor airflow affected the firmness and moisture content of the cake.

Interactions between baking temperature and airflow were observed for the height changes of the batter, the volume expansion, and the springiness of the cake. This research was funded by the MoHE
under ERGS with Vote no. 5527091. References include Xue & Walker (2003) on humidity change and its effects on baking in an electrically heated air jet impingement oven; Baik & Marcotte (2000) on cake
baking in tunnel-type multi-zone industrial ovens; Pyler & Gorton (1988) on baking science and technology; HadiNezhad & Butler (2010) on the effect of flour type and baking temperature on cake dynamic
height profile measurement during baking; Bruce (1992) on modelling the effect of heated-air drying on the bread baking quality of wheat; Neill & Al-Muhtaseb (2012) on optimizing time/temperature for
heat-treated soft wheat flour; Sanz & Salvador (2009) on four types of resistant starch in muffins; and AACC International's Approved Methods of Analysis.

Baking is a crucial process in the manufacturing of starchy products, such as breads, cookies, and cakes. During baking, biochemical constituents undergo microscopic changes, such as phase transitions
and structural properties modification. The dough/batter of starchy products increases during baking, and the macroscopic volume expands due to factors such as air incorporation, CO2 production,
fermentation, and water vaporization.

The study aimed to analyze the main factors responsible for the generation of expanded or porous structures during baking. The amount of a typical leavening agent and five levels of constant baking
temperature were studied for their effect on volume expansion, color, and texture (instrumental and sensory). Increasing the quantity of leavening agents did not necessarily have a significant effect on
volume expansion, but decreased the firmness of the resulting products. A slight increase in temperature increased volume expansion, but a substantial elevation of the temperature resulted in
a decrease in volume, accompanied by an intense surface color and an increase in hardness.

Cellular solids are "structures" comprised of a solid matrix and an associated fluid. Many foods are cellular solids, either because nature made them so (fruits and vegetables) or mainly because they are
processed into them (e.g., bread, cookies, and cakes) to obtain desired or appealing textural properties. The principles for studying the structure and properties of cellular solids were laid out by Gibson and
Ashby (1997) and Weaire and Hutzler (1999). A relationship was established between the Young's modulus (an indication of the hardness of the structure) and the density as:

E*/Es = C (ρ*/ρs)^1.5  [1]

where E* and ρ* are the Young's modulus and density of the cellular solid, and Es and ρs are those of the solid matrix.
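A minimal sketch of this scaling, assuming the Gibson–Ashby form E*/Es = C(ρ*/ρs)^1.5 (the 1.5 exponent is read from the equation in the source; C depends on cell geometry):

```python
def relative_modulus(rel_density, c=1.0, exponent=1.5):
    """Gibson-Ashby scaling for cellular solids: E*/Es = C * (rho*/rhos)**exponent.

    rel_density is rho*/rhos; C and the exponent depend on cell geometry
    (1.5 here, per the reconstructed equation).
    """
    return c * rel_density ** exponent

# Halving the relative density cuts the relative stiffness to ~35%:
print(round(relative_modulus(0.5), 3))  # 0.354
```

This is why a more porous (lower-density) crumb reads as softer in texture measurements.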

The relationship between formulation and final product properties is well-known, but most research on these relationships is specific to a particular operation and product, often using a trial and error
approach. In the bakery industry, processing conditions play a crucial role in determining the final product properties. Variations at any of these steps can affect the ability to manufacture a bakery product
with desired macroscopic physical characteristics. Industrial variability is experienced frequently, such as uncontrolled thickness of cookies, undesirable cake volcanoes, textural variations making the
resulting cookies more brittle and fragile, bread or cookie dough sticking to the equipment during the forming process, etc.

Controlling the product yield, which is the ratio of a given mass of product to its volume, remains an important difficulty due to the significant volume expansion (4-10 times) during processing.
Troubleshooting these recurring variability problems is time-consuming and expensive, resulting in considerable product waste. Understanding the effect of ingredient selection and quantity and processing
conditions on the resulting structure and properties during mixing, proofing, and baking is therefore a key issue.

This study aimed to quantify the volume expansion while varying baking temperatures and leavening agent concentration simultaneously to better understand the volume expansion phenomenon. A typical
AACC model formulation for white cake was used, consisting of water, sugar, flour, shortening, skimmed milk powder, white egg powder, leavening agents, and salt. The cake batter was prepared using a
Hobart Mixer in three steps, with pulses applied at every step.

Five baking temperatures and three amounts of leavening agents were varied simultaneously for 15 experimental conditions. Experiments were performed in duplicates, and weight losses were recorded
continuously during baking. A typical value of cake batter initial moisture content was 33%. Baking times for each oven temperature condition were determined to reach a final moisture content of 23-24%.
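Since dry solids are conserved during baking, the moisture endpoint above (from ~33% down to 23–24% wet basis) fixes the required mass loss. A small sketch with hypothetical masses:

```python
def mass_at_target_moisture(m0, mc0, mc_target):
    """Mass after baking, assuming dry solids are conserved and moisture
    contents are wet-basis fractions (e.g. 0.33 means 33%)."""
    dry_solids = m0 * (1.0 - mc0)
    return dry_solids / (1.0 - mc_target)

# 500 g of batter at 33% moisture, baked down to 23.5% moisture:
print(round(mass_at_target_moisture(500.0, 0.33, 0.235), 1))  # 437.9
```

The recorded weight-loss curve can therefore be used to stop baking at the target moisture without destructive sampling.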

Measurement of quality parameters, including yield, volume index, color, texture parameters, and sensory evaluation, were performed using the software FIZZ. Results showed that the final moisture content
was not significantly different for each baking time-temperature condition at a 5% level.

The volume expansion kinetics of the cake were studied at various temperatures, with a typical increase observed up to a maximum volume. The height of the cake and the rate of increase are greatly
affected by the baking temperature. Such profiles are typically obtained for other bakery products such as bread or cookies, and are even similar to those obtained for extruded products. Visual
observation of the baking process demonstrated that volume expansion ceased as soon as significant crust formation occurred at the surface. The viscous batter gradually becomes an elastic cellular
solid, and bubble expansion is reduced by the solidification of the structure.

A simplified model presented at the ASAE-CSAE meeting in Ottawa (Marcotte and Chen, 2004) was used to simulate macroscopic volume expansion. The shape of the cake was assumed to be a cylinder,
and the model included heat transfer, moisture transfer, gaseous production, and cake volume expansion. Heat and moisture transfer phenomena were well described by Fourier's law of conduction with
appropriate boundary conditions. Major assumptions included a uniform initial temperature and moisture for the oven and cake, a constant oven temperature and moisture during baking, individual bubbles
being lumped into a macro bubble, gases inside the bubble following the ideal gas law, and CO2 production following first-order reaction kinetics with an Arrhenius-type temperature dependence.

Volume changes were related to changes in temperature and pressure as:

P1V1/T1 = P2V2/T2  [2]

The pressure (P) inside the bubble was assumed dependent on CO2 pressure (Pc) and on vapor pressure (Pv). A Kelvin model, comprising both an elastic and a viscous element, was used to describe the
volume expansion using the pressure difference. Figure 3 shows that predicted volume expansion was in accordance with experimental data at both 200°C and 225°C, but it was not possible to describe the
plateau and decrease of the curve.
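Equation [2] is the combined gas law; solving it for the expanded bubble volume V2 gives a simple helper (temperatures in kelvin):

```python
def bubble_volume(v1, p1, t1, p2, t2):
    """Combined gas law [2]: solve P1*V1/T1 = P2*V2/T2 for V2 (T in kelvin)."""
    return v1 * (p1 / p2) * (t2 / t1)

# Heating a bubble from 25 °C to 95 °C at constant pressure expands it by ~23%:
print(round(bubble_volume(1.0, 101325, 298.15, 101325, 368.15), 3))  # 1.235
```

In the full model this thermal expansion is augmented by CO2 production and vapor pressure, and opposed by the batter's viscoelastic resistance (the Kelvin element).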

The study analyzed the volume index of various cakes as a function of baking temperatures and leavening agent concentrations (0X, 1X, and 2X). The volume index increased with the addition of leavening
agents, suggesting that water evaporation and air incorporation were sufficient to generate desired volume. However, doubling the concentration did not result in a significant increase in volume.

An increase in baking temperature did not necessarily result in an increase in volume expansion. There seems to be an optimal baking temperature, possibly linked to earlier crust formation at
higher baking temperatures. The best results were obtained for cakes (1X) baked at relatively low temperatures (200°C and 225°C), with a maximum volume expansion observed.



Using the five measurement points of the volume index, it was possible to evaluate the surface profile of the volume expansion and identify any major defects that could occur during baking. A high baking
temperature resulted in a reduced final height of cakes characterized by a flat shape, which was even more important for cakes containing twice the concentration of the leavening agent.

Color measurements showed that an elevation of the baking temperature resulted in a decrease of the average ΔE, with the presence or absence of the leavening agent not significantly affecting the final
surface color of cakes. Increasing the concentration of the leavening agent resulted in a decrease of ΔE, likely due to the degradation of sodium bicarbonate and an increase in pH, resulting in a darker cake.
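ΔE here is a total color difference in CIELAB space. Assuming the common CIE76 definition (the excerpt does not state which formula was used), it can be computed as:

```python
import math

def delta_e_76(lab1, lab2):
    """CIE76 total color difference between two CIELAB triplets (L*, a*, b*)."""
    return math.sqrt(sum((c1 - c2) ** 2 for c1, c2 in zip(lab1, lab2)))

# Hypothetical crust reading vs reference:
print(round(delta_e_76((55.0, 8.0, 20.0), (60.0, 5.0, 16.0)), 2))  # 7.07
```

A darker crust shows up mainly as a drop in L*, which drives ΔE upward.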

Hardness decreased as the concentration of the leavening agent increased due to the increase of porosity. Significant differences were obtained with respect to temperature for the AACC recommended
concentration. Higher baking temperatures resulted in harder cakes, while for a doubled concentration of leavening agent, baking temperatures did not have any significant effect on the hardness of cakes.

Sensory evaluation is typically performed using a limited number of samples, and this study focused on the performance of judges in differentiating cakes baked at different temperatures. The results showed
that only samples with the AACC recommended concentration of leavening agents were evaluated. Judges were able to differentiate cakes baked at 225°C, 250°C, and 275°C based on their hardness but
could not statistically distinguish cakes baked at 200°C and 225°C.

Fifteen and seven judges ranked cakes baked at 200°C and 225°C as less firm (rank 1), while 9 and 12 judges ranked cakes baked at 200°C and 225°C as second (rank 2) in terms of hardness. Cakes
baked at 275°C were found to be the hardest (rank 4) by 19 judges, and 16 judges were able to rank cakes baked at 250°C as the third (rank 3) in terms of hardness.

Results were well correlated with those obtained with the instrumental analysis (TA-XT2), as harder cakes were found if baked at higher temperatures. Preferred samples were those baked at 200°C and
225°C. Panellists were found to be quite sensitive for the cake texture evaluation.

In conclusion, a certain amount of leavening is necessary to obtain an appropriate volume expansion. Increasing the amount of leavening agent did not positively influence the volume expansion, but it
resulted in some shape defect and surface color darkening. A slight increase in baking temperature had a positive effect on the volume expansion, but an important increase resulted in a decrease of the
volume expansion. An elevation of the baking temperature resulted in a decrease of the average ΔE. A strong interaction was found between baking temperature and leavening concentration for their effect
on cake volume expansion and instrumental analysis of cake hardness.

Oven baking parameters are crucial factors in transforming dough or batter into a finished product. These parameters include temperature, air velocity, heat flux, process time, and humidity. The interaction
of these parameters is considered both an art and science, as they influence the quality of bread.

Temperature in oven zones determines the rate of heat transfer from and to the product, which in turn affects the timing of thermal events such as yeast kill, starch gelatinization, protein denaturation,
moisture extraction, and crust browning. Higher temperatures result in shorter baking times. Air velocity refers to the flow of hot air inside the baking chamber, which directly controls the amount of heat
delivered to the product and influences baking time, weight loss due to water extraction, and color of baked products.

Heat flux is the amount of energy transferred per unit area per unit time from or to a surface, with three components: radiation, convection, and conduction. It can be expressed in Btu/hr·ft² or W/m². Both
the total amount of heat flux and the ratios of the three components influence the baked product's quality.
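The two unit systems quoted are related by fixed factors (1 Btu = 1055.05585 J, 1 ft² = 0.09290304 m², 1 hr = 3600 s), so conversion is a one-liner:

```python
BTU = 1055.05585   # J per Btu (International Table)
FT2 = 0.09290304   # m^2 per ft^2
HOUR = 3600.0      # s per hour

def btu_hr_ft2_to_w_m2(q):
    """Convert heat flux from Btu/(hr*ft^2) to W/m^2."""
    return q * BTU / (HOUR * FT2)

print(round(btu_hr_ft2_to_w_m2(1.0), 3))  # 3.155
```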

Process time is governed by the timing of thermal events and baker's experience, but should only be established via thermal profiling. In continuous ovens, bake time is controlled by conveyor speed.

Humidity influences moisture migration from the product's interior to its surface and thus, evaporation. Drier oven conditions promote faster water extraction due to increased mass transfer moisture gradient.
Humidity inside the baking chamber can be expressed as % moisture by volume or as absolute humidity mass ratio (lb water/lb dry air or kg water/kg dry air).
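The two humidity expressions can be interconverted: % moisture by volume is (to a good approximation) the water-vapor mole fraction, and the mass ratio then follows from the molar-mass ratio of water to dry air (~0.622). A sketch:

```python
MW_WATER_OVER_AIR = 18.015 / 28.965   # ~0.622, molar-mass ratio of water to dry air

def humidity_ratio_from_vol_pct(vol_pct):
    """Convert % moisture by volume (water-vapor mole fraction x 100)
    to absolute humidity ratio (kg water / kg dry air)."""
    y = vol_pct / 100.0
    return MW_WATER_OVER_AIR * y / (1.0 - y)

# 10% moisture by volume in the baking chamber:
print(round(humidity_ratio_from_vol_pct(10.0), 4))  # 0.0691
```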

By monitoring baking parameters, bakers can tell if their oven is operating as expected. Oven auditing can be performed using thermal profiling equipment with sensors capable of measuring baking
parameters. In most high-speed bakeries, only temperature and time are monitored and controlled on a real-time basis. Human machine interface (HMI) screens are typically placed next to the oven for
operators and plant engineers to modify temperature profiles per zone, compare actual vs set point temperatures, check burners, and observe other process variables.



Why did my cake sink in the middle?
 Cakes by MK
 April 28, 2023
I think one of the worst things that can happen after baking a cake is when it sinks in the middle! But thankfully, understanding a bit more about the science behind baking can give you a little more insight
into where you may be going wrong.


1 – Your ratio of ingredients is incorrect

When it comes to baking, achieving a harmonious equilibrium of your ingredients is crucial. An excessive amount of liquid or fat in the cake mix can lead to a cake with a compromised structure, resulting in a
sinking middle.
To avoid this, it is advisable to weigh ingredients with a scale rather than relying on cup quantities. Cup measures can be imprecise, especially when you are trying to replicate a recipe written in a
different geographical region. It is possible to use cup measures in recipes and achieve satisfactory results, but if you consistently find that your cakes are not turning out as desired, switching to a scale
could be the solution to your issue.

2 – Your cake tin is too small

Typically, recipes provide instructions for the appropriate size of the cake tin and the required quantity. For instance, I often utilise a pair of eight-inch cake pans in the majority of my recipes, and I explicitly
include this on the recipe card. If you opt for smaller cake tins, such as six-inch ones, or if you choose to pour all the batter into a single cake tin instead of two, it is important to modify your recipe
accordingly.
This occurs when an excessive amount of batter is placed in a single cake tin, leading to the batter's weight exceeding the cake's capacity to support it. Consequently, the cake collapses and sinks in the
middle during the baking process. This is particularly applicable to cake recipes that possess a softer and more delicate structure, as is the case with many of my cake recipes.



To ensure that I don't have an excessive amount of cake batter in a single cake pan, I carefully observe the height of the batter in the original tin and make sure I do not go beyond that height, regardless of
the size of the cake tin I am using. In my vanilla cake recipe, the batter fills two 8-inch cake tins and is approximately 1.5 inches high. If I choose to use a single larger cake tin instead of two tins, I will ensure
that the batter does not exceed a height of 1.5 inches.
3 – You are not mixing your cake batter enough
Not mixing your cake batter enough can be a problem, especially when your recipe calls for folding in the dry ingredients by hand at the end. Under-mixing prevents the ingredients from fully combining,
leading to uneven baking. It can also mean inadequate gluten development, causing the cake's structure to collapse.

4 – You’re overmixing your cake batter

Excessive mixing of your cake batter might also result in your cake sinking and collapsing. Now I understand that this might be quite perplexing - am I excessively blending or insufficiently blending?
However, a crucial factor in distinguishing between the two is that excess mixing typically pertains to the creaming process.
Creaming refers to the process of vigorously combining butter and sugar to incorporate air bubbles. Typically, this step is performed at the initial stage of enhancing the cake batter. The issue with excessive
creaming is that it can lead to the formation of an excessive amount of air bubbles. This, in turn, results in a fragile structure that lacks the necessary strength to withstand the weight of the cake during the
baking process.

It is worth mentioning that excessively creaming butter and sugar at high speeds can generate numerous huge air bubbles, which then burst during the baking process and result in the sinking of the cake.
Typically, unless your recipe specifies differently, you should only mix your butter and sugar until they become light and fluffy, a process that should not exceed three minutes. Additionally, maintain a
consistent speed within the range of medium to medium high, without exceeding it.
5 – You are prematurely opening the oven door when inspecting your cake
During baking, a cake depends on a careful balance of temperature, time, and ingredients in order to leaven and set properly. If the oven door is opened prematurely, a sudden influx of cold air enters
the oven, causing a significant drop in temperature. This halts the cake's rise and sets it prematurely, before the structure is strong enough to bear the cake's weight.
Typically, I suggest checking your cake only after at least 75% of the specified baking time has elapsed, to prevent it from collapsing.

6 – Your oven is too cold

One final point I will mention is that the temperature of your oven may be insufficient. Regrettably, this can lead to several issues.
Firstly, it can impede the creation of air bubbles, resulting in a slower process. When a cake is cooked at a lower temperature, the batter will require more time to reach the desired heat, resulting in a slower
reaction of the leavening agents. Consequently, this will lead to the production of a reduced quantity of tiny air bubbles. The consequence of this is a compact and weighty cake that has not adequately
leavened and may collapse in the centre when it cools.
A second issue is that the cake will not bake within the time specified in the recipe; for instance, a cake meant to bake in 30 minutes will not be fully baked in that time. This can also lead to inadvertently
opening the oven door prematurely to inspect the cake.
In order to address this issue, I have two suggestions! Begin by verifying the accuracy of your oven's temperature with an oven thermometer. This will assist you in determining if your oven is operating at a
slightly lower or higher temperature than desired.



Secondly, it is important to ascertain whether your recipe requires the use of a convection or a standard oven. I have provided a more detailed explanation in my blog post on uncommon baking blunders, but
in essence, a convection oven will cook your cakes at a significantly faster rate compared to a normal oven. If the recipe specifies the use of a convection (or fan forced) oven, and you do not possess one, it
is necessary to raise the temperature by approximately 15 degrees Celsius. This adjustment ensures that your cake bakes at the same pace as indicated in the recipe.

1.1 In conclusion

Ultimately, baking can be challenging and there is a plethora of knowledge to acquire. Acquiring a comprehensive understanding of baking has been a lengthy process for me, involving numerous attempts and mistakes, and I am continuously expanding my knowledge in this area. If your cake collapses, I understand that it may be disappointing, but it is important not to lose hope. Hopefully the suggestions above will assist you in understanding the alterations you could make in future endeavours. 🙂

Step 4

Quality Risk Management Workshop 5 of 7 – HAZOP

Review and complete the following examples and workshops to self-assess your knowledge of the subject.

Download, from the Moodle site, and review the fully complete HAZOP worked example

Download, from the Moodle site, and have a go at finishing the half-complete HAZOP example yourself

HAZOP – Half Worked Example



Scenario

1. A patient is to undergo an 'angiography procedure' as described.
2. HAZOP is applied to assess operational hazards associated with the procedure from the patient's perspective.
3. The regular guidewords have all been applied to the system parameter 'flow'.
4. Apply the regular guidewords to the system parameter 'time' and brainstorm some potential hazards.

HAZOP Worksheet

Project Name Angiography Procedure Page _1___ of ___4_

Drawing #: 1 Drawing Revision No.: 1 Date: 13 JAN 2024

Team Leader: Adrian Casa Team Members: Maria Kaluzna, Beata Rudy Notes:

Component Examined: Balloon Catheter

Design Intent: Material: Balloon catheter Activity: Coronary artery inflation

Dilate coronary artery to unblock it, and improve blood flow


Source: Femoral artery Destination: Internal coronary artery wall

Study Node | Guideword | Process Parameter | Deviation | Possible Causes | Consequences | Safeguards in place | Actions Required | Priority | Action assigned to
Coronary artery | No | Flow | Balloon catheter insertion fails | Guidewire kinked or bent | Unable to dilate artery at site of blockage | None | | High | Medical device design team; Surgeon consultant
Coronary artery | More | Flow | Balloon catheter bypasses blockage | Guidewire terminates downstream of site of blockage | Artery dilates downstream of site of blockage | | | Low |
Coronary artery | Less | Flow | Balloon catheter doesn't reach blockage | Guidewire terminates upstream of site of blockage | Artery dilates upstream of site of blockage | | | Medium |

Workshop-5 – 50% Worked Example Module-1 Page 141 of 334


Study Node | Guideword | Process Parameter | Deviation | Possible Causes | Consequences | Safeguards in place | Actions Required | Priority | Action assigned to
Coronary artery | As well as | Flow | Balloon catheter punctures on insertion | Guidewire tears balloon catheter | Inflation gas released at site of blockage | | | |
Coronary artery | Reverse | Flow | Balloon catheter is ejected away from the site of blockage, backward towards the femoral artery | Balloon catheter slips | Guidewire not adequately secured in place | | | |
Coronary artery | Other | Flow | Wrong catheter type inserted along the guidewire | Dye injection catheter used instead of balloon catheter | Unable to dilate artery at site of blockage | | | |



HAZOP Worksheet

Project Name Angiography Procedure Page __3__ of __4__

Drawing #: 1 Drawing Revision No.: 1 Date: 13 January 2024

Team Leader: Adrian Casa Team Members: Maria Kaluzna, Beata Rudy Notes:

Component Examined: Balloon Catheter

Design Intent: Material: Balloon catheter Activity: Coronary artery inflation

Dilate coronary artery to unblock it, and improve blood flow Source: Femoral artery Destination: Internal coronary artery wall

Study Node | Guideword | Process Parameter | Deviation | Possible Causes | Consequences | Safeguards in place | Actions Required | Priority | Action assigned to
Coronary artery | No | Time | 1. The patient experienced an additional significant injury before the surgery. 2. No time for balloon catheter insertion/inflation. | 1. Patient injury or mortality resulting from complications. 2. Obstruction identified at a significantly delayed stage. | 1. Angiography procedure is not executed as intended and does not go ahead as planned. 2. Medical intervention necessitated. | | | |
Coronary artery | More | Time | 1. The effects of the local anaesthetic diminish (wear off). 2. The balloon catheter was inflated for a duration longer than necessary. | 1. Complications during operation. 2. Artery presents signs of narrowing at site of entry. | 1. Patient feels increased discomfort or pain at point of entrance. 2. Stents are placed in order to avoid restenosis. | | | |
Coronary artery | Less | Time | 1. Local anaesthesia partially ineffective. 2. The balloon catheter was rapidly inflated. | 1. Due to schedule constraints and time pressure, there was insufficient time for the local anaesthesia to become effective. 2. Inflation gas supply pressure too high. | 1. The patient has heightened pain or discomfort during insertion of the guidewire or catheter into the artery. 2. The artery has the potential to rupture and result in a haemorrhage; the balloon has the potential to rupture, releasing the inflation gas at the site of the blockage. | | | |
Coronary artery | As well as | Time | The patient had trauma prior to the operation. | The patient experienced a myocardial infarction as a result of an accident before the operation. | Operation is completed; the patient may or may not be secure afterwards. | | | |
Coronary artery | Reverse | Time | Myocardial ischaemia develops during surgery. | The catheter malfunctioned and resulted in an obstruction within the artery. | Myocardial infarction or death of the patient. | | | |
Coronary artery | Other | Time | 1. Arterial stenosis (re-narrowing of the artery) occurs. 2. The balloon catheter is removed prior to complete inflation. | 1. No stent was placed within the artery during the procedure. 2. The balloon is faulty and there is a leakage of gas. | 1. Potential myocardial ischaemia or infarction, which may vary in severity. 2. Inflation gas is discharged at the site of the blockage. | | | |

Extra info:
Hazard and operability study
Wikipedia, the free encyclopedia

A hazard and operability study (HAZOP) is a methodical and organised analysis of an intricate system, typically a process facility, with the purpose of identifying
risks to individuals, equipment, or the environment, as well as operational issues that may impact efficiency of operations. It is the primary instrument for identifying
hazards in the field of process safety. The purpose of conducting a HAZOP analysis is to scrutinise the design in order to identify any design and engineering
problems that could have otherwise gone unnoticed. The technique is founded on decomposing the intricate structure of the process into several simpler
segments, referred to as nodes, which are subsequently assessed separately. The task is performed by a competent team with diverse expertise through a
series of meetings. The HAZOP technique is a qualitative approach that seeks to inspire the creativity of participants in order to uncover possible dangers and
operational issues. The review process is guided and organised by applying standardised prompts, known as guidewords, to review each individual node. The IEC
standard[1] requires team members to demonstrate "intuition and good judgement" and mandates that meetings take place in an environment characterised by
"critical thinking in a candid and open manner [sic]."

The HAZOP technique was originally created for systems that deal with the processing of a fluid medium or other material flow in the process industries. It has
since become a crucial component of process safety management in these industries. Subsequently, it was extended to encompass the examination of batch
reactions and operating procedures in process plants. In recent times, it has found application in various fields that are not directly or closely associated with the
process industries. These include software applications for programmable electronic systems, software and code development, systems involving transportation of
people by road, rail, and air, evaluation of administrative procedures in different industries, and assessment of medical devices. This article
specifically examines the approach as it is applied in the process industries.

1.1 Historical Background

The technology is commonly believed to have originated at the Heavy Organic Chemicals Division of Imperial Chemical Industries (ICI), a prominent British and
international chemical business at the time.

The origins of the technique have been described by Trevor Kletz, the company's safety advisor from 1968 to 1982. In 1963, a team of three met
for three days each week over a span of four months to examine in detail the design of a new phenol manufacturing facility. Their
initial approach involved employing a method known as critical evaluation, which initially sought alternate options but then shifted focus towards identifying
deviations. The corporation further enhanced the process, known as operability studies, and incorporated it as the third phase of its hazard analysis procedure.
This phase took place after the conceptual and specification stages, and coincided with the creation of the first detailed design.

The Institution of Chemical Engineers (IChemE) at Teesside Polytechnic provided a one-week safety course in 1974 that included this procedure. Following the
Flixborough disaster, the training quickly reached maximum capacity, as did subsequent courses in the following years. Simultaneously, the inaugural scholarly
article was also published.[4] The Chemical Industries Association released a manual in 1977.[5] Until then, the term 'HAZOP' had not been employed in official
literature. Kletz was the first to do so, in 1983, using the course notes (which were subsequently revised and updated) from the IChemE courses.
[2] At this point, hazard and operability studies had become a customary component of chemical engineering degree programmes in the United Kingdom. In the
present day, regulators and the process industry, which includes operators and contractors, view HAZOP as an essential requirement in project development,
particularly during the detailed design phase.

1.2 Approach



This strategy is utilised for intricate processes that have ample design knowledge and are unlikely to undergo significant changes. The specified data range must
be clearly identified and used as the fundamental basis for the HAZOP study, referred to as the "design intent." For instance, a cautious designer will have
accounted for predictable deviations in the process, resulting in a broader design range beyond the fundamental requirements. The HAZOP analysis will examine
any inadequacies in this regard.

The HAZOP is commonly employed at the initial stages of the detailed design of a plant or process. Additionally, it can be utilised throughout later operational
phases of existing plants, serving as a valuable revalidation tool to verify that any improperly managed modifications have not occurred after the initial plant start-
up. In situations where complete design information is lacking, such as during the front-end loading phase, a preliminary Hazard and Operability Study (HAZOP)
can be carried out. However, if a HAZOP is mandated by legislation or regulations, a preliminary study is not considered adequate. In such cases, a more
comprehensive HAZOP must be conducted during the later stages of detailed design.

In process plants, specific parts (nodes) are selected in order to define a clear design goal for each one. They are typically denoted on piping and
instrumentation diagrams (P&IDs) and process flow diagrams (PFDs). Piping and instrumentation diagrams (P&IDs) are the primary reference documents used for
undertaking a hazard and operability study (HAZOP). The size of each node should be proportional to the intricacy of the system and the potential risks it could
present. Nevertheless, it must strike a delicate equilibrium between being excessively extensive and intricate (with fewer nodes, but the team members may
struggle to address issues encompassing the entire node simultaneously) and being excessively limited and uncomplicated (with several minor and repetitious
nodes, each requiring individual evaluation and documentation).

The HAZOP team systematically examines each node by utilising a set of standardised guidewords and process parameters to detect any possible deviations from
the intended design. The team analyses each deviation to determine potential causes and consequences. They then assess whether the existing safeguards are
adequate or if additional measures, such as installing more safeguards or implementing administrative controls, are required to mitigate the risks to an acceptable
level. Confirmation through risk analysis, such as using an agreed-upon risk matrix, is sought when necessary.
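The worksheet-driven recording described above maps naturally onto a small data structure. The sketch below is illustrative only: the field names mirror the worksheet columns used in this booklet, while the severity/likelihood bands and the risk matrix itself are invented for the example rather than taken from any standard.

```python
from dataclasses import dataclass, field

# Illustrative risk matrix: (severity, likelihood) -> priority.
# The bands are invented for this sketch; a real study uses an
# agreed-upon matrix specific to the organisation.
RISK_MATRIX = {
    ("high", "high"): "High", ("high", "medium"): "High",
    ("high", "low"): "Medium", ("medium", "high"): "High",
    ("medium", "medium"): "Medium", ("medium", "low"): "Low",
    ("low", "high"): "Medium", ("low", "medium"): "Low",
    ("low", "low"): "Low",
}

@dataclass
class HazopEntry:
    """One row of a HAZOP worksheet (columns as used in this booklet)."""
    study_node: str
    guideword: str
    parameter: str
    deviation: str
    possible_causes: list
    consequences: list
    safeguards: list = field(default_factory=list)
    actions_required: list = field(default_factory=list)
    severity: str = "medium"      # assessed by the team
    likelihood: str = "medium"    # assessed by the team

    @property
    def priority(self) -> str:
        return RISK_MATRIX[(self.severity, self.likelihood)]

entry = HazopEntry(
    study_node="Coronary artery",
    guideword="No",
    parameter="Flow",
    deviation="Balloon catheter insertion fails",
    possible_causes=["Guidewire kinked or bent"],
    consequences=["Unable to dilate artery at site of blockage"],
    safeguards=["None"],
    severity="high",
    likelihood="medium",
)
print(entry.priority)  # -> High
```

Keeping the priority as a derived property (rather than a typed-in value) means the worksheet stays consistent if the team revises a severity or likelihood estimate during the review.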

The level of preparation for the HAZOP is crucial to the overall effectiveness of the review. The team members should be given detailed information about the
"frozen" design and sufficient time to become acquainted with it. A suitable timeline should be allocated for the HAZOP analysis, and the most qualified team
members assigned to their respective roles. When scheduling a HAZOP, it is important to consider the breadth of the review, the quantity of nodes to be
examined, the availability of finalised design drawings and documentation, and the requirement to sustain team performance over a prolonged period. The team
members may also be required to carry out their regular duties during this period, and the HAZOP team members may experience a decline in concentration unless
sufficient time is allocated for them to rejuvenate their cognitive powers.

An impartial and skilled HAZOP facilitator, also known as a HAZOP leader or chairperson, should be in charge of overseeing the team sessions. Their main
responsibility is to ensure the overall excellence of the review. Additionally, a committed scribe should be present to record the minutes of the meetings. According
to the IEC standard:[1] The study's effectiveness relies heavily on the team members' vigilance and focus. Thus, it is crucial to ensure that the sessions are of
optimal duration and that there are suitable breaks between them. The study leader is ultimately responsible for achieving these standards.

In order to assess a medium-sized chemical plant, which consists of approximately 1200 pieces of equipment and pipelines, it would be necessary to conduct
approximately 40 meetings.[6] Numerous software applications are currently accessible to aid in the organisation and transcription of the workshop.
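As a rough illustration of where a figure like this comes from, the arithmetic can be sketched as below. The grouping factors (items per node, nodes per meeting) are assumptions invented for the example, not values from the cited study.

```python
# Back-of-envelope HAZOP sizing sketch. The two grouping factors
# are assumed for illustration; real studies derive them from the
# P&IDs and the team's actual review pace.
equipment_items = 1200   # pieces of equipment and pipelines
items_per_node = 10      # assumed node granularity
nodes_per_meeting = 3    # assumed nodes reviewed per meeting

nodes = equipment_items // items_per_node     # 120 nodes
meetings = -(-nodes // nodes_per_meeting)     # ceiling division
print(meetings)  # -> 40
```

With these assumed factors the estimate lands on the roughly 40 meetings quoted above; changing either factor scales the meeting count proportionally.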

1.2.1 Definitions and variables

To detect deviations, the team systematically applies a set of guidewords to each node in the process. In order to stimulate discussion or guarantee thoroughness,
the relevant process parameters, which are applicable to the design intent, are systematically examined. Common parameters include flow rate, temperature,
pressure, level, composition, and so on. The IEC standard emphasises the selection of guidewords that are suitable for the study, avoiding both excessive
specificity that restricts ideas and discussion, and excessive generality that leads to a loss of focus. An example of a commonly used set of guidewords is as
follows:

Guideword Meaning

No (not, none) None of the design intent is achieved

More (more of, higher) Quantitative increase in a parameter

Less (less of, lower) Quantitative decrease in a parameter

As well as (more than) An additional activity occurs

Part of Only some of the design intention is achieved

Reverse Logical opposite of the design intent occurs



Other than (other) Complete substitution (another activity takes place, or an unusual activity occurs, or an uncommon condition exists)

Where a guide word is meaningfully applicable to a parameter (e.g., "no flow", "more temperature"), their combination should be recorded as a credible
potential deviation from the design intent that requires review.
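Enumerating guideword-parameter pairs is mechanical enough to sketch in a few lines. The guidewords below are the common set from the table above; the parameter list and the excluded (not-meaningful) pairs are invented for the example, since judging which combinations are credible is the team's job, not the code's.

```python
from itertools import product

GUIDEWORDS = ["No", "More", "Less", "As well as", "Part of", "Reverse", "Other than"]
PARAMETERS = ["Flow", "Pressure", "Temperature", "Level", "Time"]

# Pairs the team judged not physically meaningful for this node
# (an invented example set).
NOT_MEANINGFUL = {("No", "Temperature"), ("Reverse", "Level")}

def candidate_deviations(guidewords, parameters, excluded):
    """Pair every guideword with every parameter, skipping excluded pairs."""
    return [f"{g} {p}" for g, p in product(guidewords, parameters)
            if (g, p) not in excluded]

devs = candidate_deviations(GUIDEWORDS, PARAMETERS, NOT_MEANINGFUL)
print(len(devs))   # -> 33 (7 guidewords x 5 parameters, minus 2 excluded)
print(devs[0])     # -> No Flow
```

Each surviving string ("No Flow", "More Pressure", and so on) is a candidate deviation for the team to take through the causes/consequences/safeguards discussion.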

The following table gives an overview of commonly used guideword-parameter pairs (deviations) and common interpretations of them.

Parameter / Guide Word | No | More | Less | As well as | Part of | Reverse | Other than
Flow | no flow | high flow | low flow | deviating concentration | | reverse flow |
Pressure | vacuum | high pressure | low pressure | | | |
Temperature | | high temperature | low temperature | | | |
Level | no level | high level | low level | | | |
Time | sequence step skipped | too long / too late | too short / too soon | extra actions | missing actions | backwards | wrong time
Agitation | no mixing | fast mixing | slow mixing | | | |
Reaction | no reaction | fast reaction / runaway | slow reaction | | | |
Start-up / Shut-down | | too fast | too slow | | actions missed | | wrong recipe
Draining / Venting | none | too long | too short | deviating pressure | | | wrong timing
Inerting | none | high pressure | low pressure | contamination | | | wrong material
Utility failure (e.g., instrument air, power) | failure | | | | | |
DCS failure[b] | failure | | | | | |
Maintenance | none | | | | | |

Once the causes and effects of any potential hazards have been established, the system being studied can then be modified to improve its safety. The revised
design should thereafter undergo a formal HAZOP close-out process to verify the absence of any newly introduced issues.

1.1 HAZOP team

A HAZOP study requires the collaborative work of a team. The team should be as small as possible while still possessing the necessary skills and experience. If a
contractor has created a system, it is important to have a HAZOP team that includes workers from both the contractor and the client organisation. It is advisable to
have a team consisting of at least five members[8]. During a complex procedure, multiple HAZOP meetings will take place, and the composition of the team may
vary as different experts and substitutes are needed to fulfil different responsibilities. Up to 20 individuals could be involved. Each team member should have a
clearly defined role as outlined below:

Study leader / Chairman / Facilitator: Someone experienced in leading HAZOPs, who is familiar with this type of process but is independent of the design team. Responsible for progressing through the series of nodes, moderating the team discussions, maintaining the accuracy of the record, ensuring the clarity of the recommended actions and identifying appropriate actionees.

Recorder / secretary / scribe: Documents the causes, consequences, safeguards and actions identified for each deviation, and records the conclusions and recommendations of the team discussions (accurately but comprehensibly).

Design engineer: Explains the design and its representation, and explains how a defined deviation can occur and the corresponding system or organisational response.

Operator / user: Explains the operational context within which the system will operate, the operational consequences of a deviation and the extent to which deviations might lead to unacceptable consequences.

Specialists: Provide expertise relevant to the system, the study, the hazards and their consequences. They could be called upon for limited participation.

Maintainer: Someone who will maintain the system going forward.



2.9 Prior articles proposed that the study leader could also act as the recorder; however, it is now widely advised to have different individuals for these
duties.

The utilisation of computers and projector screens improves the process of recording meeting minutes by allowing the team to visually verify the accuracy of the
information being documented. It also facilitates the display of P&IDs for the team's review, provides additional documented information to support the team's
work, and enables the logging of non-HAZOP issues that may arise during the review, such as corrections and clarifications to drawings and documents. Various
vendors now offer specialised software to facilitate the documentation of meeting minutes and monitor the progress of suggested tasks.

See also

 Hazard analysis
 Hazard analysis and critical control points
 HAZID
 Process safety management
 Risk assessment
 Safety engineering
 Workplace safety standards

^ If an individual team member spots a problem before the appropriate guideword is reached, it may not be possible to maintain rigid adherence to order;
if most of the team wants to take the discussion out of order, no great harm is done, provided the study leader ensures that the secretary is
not becoming too confused, and that all guidewords are (eventually) adequately considered

1. ^ This relates to the Distributed Control System (DCS) hardware only. Software (unless especially carefully written) must be assumed to
be capable of attempting incorrect or inopportune operation of anything under its control

2.10 References

1. IEC (2016). Hazard and Operability Studies (HAZOP studies) – Application Guide. International Standard IEC 61882 (2.0 ed.). Genève: International Electrotechnical Commission. ISBN 978-2-8322-3208-8.
2. Kletz, Trevor A. (1983). HAZOP & HAZAN. Notes on the Identification and Assessment of Hazards (2nd ed.). Rugby: IChemE.
3. Kletz, Trevor (2000). By Accident... A Life Preventing Them in Industry. PFV Publications. ISBN 0-9538440-0-5.
4. Lawley, H.G. (1974). "Operability Studies and Hazard Analysis". Chemical Engineering Progress. 70(4): 105-116.
5. Chemical Industry Safety and Health Council (1977). A Guide to Hazard and Operability Studies. London: Chemical Industries Association.
6. Swann, C. D.; Preston, M. L. (1995). "Twenty-five Years of HAZOPs". Journal of Loss Prevention in the Process Industries. 8(6): 349-353.
7. Crawley, Frank; Tyler, Brian (2015). HAZOP: Guide to Best Practice (3rd ed.). Amsterdam: Elsevier. ISBN 978-0-323-39460-4.
8. Nolan, Dennis P. (1994). Application of HAZOP and What-If Safety Reviews to the Petroleum, Petrochemical and Chemical Industries. Park Ridge, N.J.: Noyes Publications. ISBN 0-8155-1353-4.

2.11 Further reading

 Gould, John (2005). Review of Hazard Identification Techniques (PDF). HSL/2005/58. Buxton: Health and Safety Laboratory.
 Kletz, Trevor (1999). Hazop and Hazan. Identifying and Assessing Process Industry Hazards (4th ed.). Rugby: IChemE. ISBN 978-0-85295-506-2.
 Explanation by a software supplier:
o Lihou, Mike. "Hazard & Operability Studies (1 of 2)". LihouTech. Archived from the original on 2008-06-10.
o Lihou, Mike. "Hazard & Operability Studies (2 of 2)". LihouTech. Archived from the original on 2008-05-12.

NHS

Summary -Angiography

What occurs? Hazards.

Angiography is a radiographic technique employed to examine the condition of blood vessels.

In a regular X-ray, blood vessels are not easily visible, thus necessitating the injection of a specialised dye known as a contrast agent into your bloodstream.



This procedure enhances the visibility of your blood vessels, enabling your physician to detect any abnormalities.

Angiograms refer to the X-ray images produced during angiography.

The purpose of angiography is to diagnose and evaluate various medical conditions by visualising the blood vessels.

Angiography is a diagnostic procedure that assesses the condition of your blood vessels and examines the circulation of blood within them.

This procedure can aid in the diagnosis and investigation of several conditions that impact blood vessels, such as:

Atherosclerosis refers to the condition in which the arteries become narrowed, potentially increasing the likelihood of experiencing a stroke or heart attack.

Peripheral arterial disease: decreased blood flow to the muscles in the leg.

A brain aneurysm refers to the protrusion (bulge) of a blood vessel in the brain.

Angina is a condition characterised by chest pain resulting from a decrease in blood flow to the muscles of the heart.

Blood clots or a pulmonary embolism, which is a blockage in the artery that provides blood to your lungs.

renal ischemia

Angiography can also assist in strategizing treatment for some medical disorders.

Angiography is a medical procedure that involves the visualisation and examination of blood vessels. During angiography, a contrast dye is injected into the blood
vessels, which allows for clear imaging of the blood flow. This procedure helps in identifying any abnormalities or blockages in the blood vessels

Angiography is performed within the confines of a hospital's X-ray or radiology department.

Regarding the examination:

Typically, you will remain conscious, although you may be administered a sedative medication to induce relaxation.

During the procedure, you will be positioned on an X-ray table and a small surgical incision will be made over one of your arteries, typically at your groin or wrist.
Local anaesthetic will be used to numb the area where the incision is made.

An extremely slender and pliable tube, known as a catheter, is placed into the artery. The catheter is meticulously directed towards the specific region under
examination, such as the heart. A contrast agent, or dye, is then injected into the catheter. Subsequently, a sequence of X-rays is captured while the contrast agent
traverses through your blood vessels.

The duration of the test ranges from 30 minutes to 2 hours. Typically, you will be discharged within a few hours.

Explore further details regarding the events preceding, occurring during, and following angiography.
Dangers associated with angiography

Angiography is typically a benign and non-painful medical examination.

However, it is typical to have the following symptoms for a short period of time, ranging from a few days to a few weeks:

 bruising (contusion)
 soreness or discomfort
 a small lump or collection of blood close to the incision site

Additionally, there exists a minute probability of encountering more severe consequences, such as an adverse allergic response to the contrast agent, a
cerebrovascular accident, or a myocardial infarction.

Explore further the potential hazards associated with angiography.

Angiography can be classified into various types.

Angiography encompasses many modalities that are tailored to certain anatomical regions of interest.

Typical categories comprise:

Coronary angiography is performed to assess the condition of the heart and its adjacent blood vessels.

Cerebral angiography is a medical procedure used to examine the blood vessels in and around the brain.

Pulmonary angiography is performed to assess the vascular supply to the lungs.

Renal angiography is performed to assess the renal blood vessels.

Periodically, angiography may be performed utilising imaging techniques rather than X-rays. These techniques are referred to as either CT angiography or
MRI angiography.

Additionally, there exists a form of angiography known as fluorescein angiography, which is employed for the purpose of examining the eyes. This sort of
angiography is distinct from other types and is not discussed in this article.

Last reviewed: January 30, 2023


Is an angiogram a serious procedure?

Angiography is generally a safe procedure, but minor side effects are common and there's a small risk of serious complications. You'll
only have the procedure if the benefits outweigh any potential risk. Speak to your doctor about the risks of having angiography.



Most people who have angiography do not have complications, but there's a small chance of minor or more serious complications.
Possible minor complications include: an infection where the cut was made, causing the area to become red, hot, swollen and
painful – this may need to be treated with antibiotics.

Risks and Complications of Coronary Angiography: A Comprehensive Review

Morteza Tavakol, MD, Salman Ashraf, MD, and Sorin J. Brener, MD



Abstract

Coronary angiography and heart catheterization are invaluable tests for the detection and quantification of coronary artery disease, identification of valvular and
other structural abnormalities, and measurement of hemodynamic parameters. The risks and complications associated with these procedures relate to the patient’s
concomitant conditions and to the skill and judgment of the operator. In this review, we examine in detail the major complications associated with invasive cardiac
procedures and provide the reader with a comprehensive bibliography for advanced reading.

Keywords: Cardiac catheterization, Angiography, Contrast material, Acute kidney injury, Complications

1. Introduction

Coronary angiography is the most reliable and widely accepted diagnostic test for detecting and determining the severity of atherosclerotic coronary artery disease
(CAD). Like any invasive procedure, the test carries inherent risks that relate to the patient and to the technique itself. Complications can vary greatly,
ranging from small issues with temporary consequences to life-threatening conditions that can result in lasting harm if immediate medical attention is not given.
Thankfully, the hazards associated with coronary arteriography have considerably decreased over time. This is mostly due to advancements in equipment design,
better management during the procedure, and the growing expertise of diagnostic centres and operators.

While there are no definitive reasons to avoid undergoing coronary arteriography, the potential dangers are related to both cardiac and non-cardiac problems. The
presence of certain medical conditions, such as advanced age, impaired kidney function, poorly managed diabetes mellitus, and severe obesity, can elevate the
likelihood of experiencing problems. The patient's cardiovascular condition can increase the likelihood of experiencing negative events. Cardiovascular features
such as coronary artery disease (CAD), congestive heart failure (CHF) with reduced ejection fraction, recent stroke or heart attack (myocardial infarction), and a
tendency to bleed can all contribute to an increased risk of cardiac and vascular problems. Moreover, the risk is influenced by the specific operation being
conducted, whether it is diagnostic coronary angiography or an extra percutaneous coronary intervention.

Despite the aforementioned factors, significant problems are infrequent. Due to the low incidence of serious problems (less than 2%) and fatality rate (less than
0.08%) associated with cardiac catheterization, there are only a small number of individuals who cannot be safely investigated in a well-equipped laboratory.
Utilising iso-osmolar contrast media, employing smaller profile diagnostic catheters, implementing methods to decrease the occurrence of bleeding, and leveraging
substantial operator expertise can all contribute to further diminishing the already minimal occurrence of such problems. Thus, the procedure can be effectively
executed even in very ill patients, as long as it is clinically necessary, with a relatively low level of risk. Nevertheless, it is crucial to evaluate the risk-to-benefit ratio
of cardiac catheterization and have a thorough understanding of the potential advantages and disadvantages on a case-by-case basis to mitigate any potential
complications. The objective of this chapter is to recognise the hazards linked to coronary angiography and coronary interventions in the contemporary
catheterization laboratory. Additionally, we will outline the progress made in equipment design and management measures that have been implemented to
minimise potential difficulties.

2. Allergic and Adverse Reactions

2.1 Local Anaesthesia

Instances of allergic local and systemic responses to local anaesthesia are exceedingly uncommon. Finder and Moore (2002) reported cases of
methemoglobinemia, asthma-like symptoms, vasodepressor reaction, and anaesthetic toxicity. The majority of instances involve the older agents, while
occurrences with amide agents, such as lidocaine, have been rare. The reactions are primarily cutaneous or vagally mediated, and anaphylactic reactions
are infrequent. The reactions that do occur typically result from the preservatives used in pharmaceutical formulations. It is advisable to employ
preservative-free agents, such as bupivacaine, and to conduct skin testing in patients with a history of adverse reactions to local anaesthetics
(T. Feldman, Moss, Teplinsky, & Carroll, 1990).

2.2 General Anaesthesia

General anaesthesia is rarely necessary in the catheterization laboratory, and most procedures are performed without an anesthesiologist present.
Conscious sedation and analgesia are frequently employed during the procedure to enhance patient comfort and alleviate anxiety, achieved by administering
small doses of short-acting drugs such as midazolam or fentanyl. Caution is needed to avoid over-sedating the patient in such situations. Thorough
surveillance of blood pressure, heart rate, respiratory rate, and oxygenation is essential in all patients. Reversal agents, such as flumazenil for
benzodiazepines and naloxone for opiates, should be administered promptly in cases of hemodynamic compromise or oversedation. Anaphylactoid responses are
rare with conscious sedation agents, although they are more likely to develop after administration of contrast media. The management of any adverse
response depends on its severity and includes the potential administration of oxygen, bronchodilators, epinephrine, histamine blockers, corticosteroids,
and intravenous fluids (Dewachter, Mouton-Faivre, & Emala, 2009). Severe anaphylaxis that does not respond to conservative treatment requires endotracheal
intubation and urgent consultation with the anaesthesia team. Thoroughly reviewing a patient's medical history and performing a comprehensive assessment
of allergies helps prevent unnecessary exposure to local or systemic anaesthesia in those who have previously experienced allergic responses or adverse
effects. Special consideration should be given to patients with a seafood allergy, given the potential for cross-reactivity with iodine-containing
contrast media.

2.3 Contrast Media

Workshop-5 Module-1 Page 151 of 333


Adverse effects caused by contrast media can be categorised as either chemotoxic or anaphylactoid. Contrast media elicit an anaphylactoid response by
triggering the release of histamine. This differs from an anaphylactic reaction in that it is not immune-mediated and does not require prior sensitization
to the triggering substance. The chemotoxic effects are mainly associated with the hyperosmolarity, ionic composition, viscosity, and calcium-binding
properties of these substances (Goss, Chambers, & Heupler, 1995). All contrast agents contain iodine, typically bound to a benzoic acid ring, in a mixture
of the meglumine or sodium salt of diatrizoic acid with calcium EDTA. The concentrations of sodium and EDTA are maintained at approximately the same level
as that of blood, as deviations from this level have been linked to tachyarrhythmia and myocardial depression. To attain the iodine content necessary for
good visualisation during angiography, solutions of traditional contrast agents were highly hypertonic. The solutions formed by these agents, Hypaque
(Nycomed) and Angiovist (Berlex), have an osmolality approximately 5.8 times higher than that of plasma, measuring 1690 mOsm/kg (Barrett et al., 1992).
frequently observed with the ionic, high osmolality contrast agents, with a reported incidence of over 50% in certain investigations (Matthai et al., 1994). Less
severe constitutional symptoms such as warmth, discomfort, chest tightness, nausea, and vomiting are commonly seen and typically resolve on their own in most
cases. In a randomised trial conducted by Barrett et al. (1992), almost 30% of patients experienced adverse reactions requiring intervention, including
hypotension, bradyarrhythmias, and pulmonary congestion.

The utilisation of lower osmolar, ionic agents such as ioxaglate (Hexabrix), and water soluble low-osmolar, non-ionic agents like iohexol (Omnipaque) and ioxilan
(Oxilan), has considerably decreased the occurrence of hypersensitivity and adverse responses. In randomised clinical trials, high osmolar contrast
material was associated with a 3.1% increase in adverse reactions requiring treatment and a 3.6% increase in life-threatening responses compared with
lower osmolar non-ionic agents. The observed reactions were primarily limited to individuals with severe coronary artery disease or unstable angina
(Barrett et al., 1992). These findings have been replicated in two further randomised studies, which were able to better classify patients at the greatest
risk of experiencing adverse responses to contrast agents (Matthai et al., 1994; Steinberg et al., 1992). According to Matthai et al. (1994),
individuals who are older, have a higher New York Heart Association CHF class, a history of past contrast reaction, and raised left ventricular diastolic pressure are
up to six times more susceptible to experiencing unfavourable reactions when exposed to high osmolar ionic agents. The demand for risk stratification emerged
due to the exorbitant expense of the more recent low osmolar agents, which, at one stage, were 10-20 times more costly than traditional high osmolar agents
(Barrett et al., 1992). Utilising these compounds in specific groups has been proven to reduce the total cost by 66% while enhancing safety and cost-effectiveness
(Matthai et al., 1994). The price of these agents has substantially fallen in the last decade, enabling their more extensive utilisation for preventing unpleasant
effects. The difference in cost is minimal, considering the major benefits of using low osmolar agents.

More recently, iodixanol (Visipaque), a non-ionic, iso-osmolar agent with the same osmolality as blood (290 mOsm/kg), has been developed. In a large
randomised trial comparing iodixanol to the ionic, low osmolar agent ioxaglate, hypersensitivity reactions were observed in a mere 0.7% of the population
tested, with no significant disparity in major cardiovascular events between the two agents (Bertrand, Esplugas, Piessens, & Rasch, 2000). Initially,
there was considerable concern about the introduction of non-ionic agents because there was evidence suggesting that ionic
contrast material had a stronger effect on preventing blood clotting and platelet aggregation, particularly in laboratory tests. These qualities can be advantageous in
a technique that may harm the vascular endothelium and result in thrombosis. Large randomised multicenter trials comparing the two types of contrast agents in
angioplasty have shown no evidence of an increased risk of thrombotic complications or major cardiovascular events (Bertrand et al., 2000; Schrader et al., 1999).

2.4 Prevention and Therapy

Allergic reactions to contrast material can often be prevented. Two patient categories prone to anaphylaxis should be considered for pre-treatment.
Individuals who have experienced anaphylactic reactions in the past are the most susceptible to subsequent reactions. The second category includes those
with a history of atopy or asthma, or who take beta-adrenergic blockers; research has shown that these individuals have double the risk of experiencing
anaphylaxis (Lang, Alpern, Visintainer, & Smith, 1991). There is no evidence that those with allergies to iodine-containing foods (such as shellfish) are
at higher risk of contrast anaphylaxis (Goss et al., 1995; Hildreth, 1987). When dealing with patients who report a previous allergic reaction to
shellfish or seafood, it is important to ask about a history of atopy or asthma, as this will identify the individuals most likely to develop anaphylaxis.
Alongside the choice of contrast agent, pre-treatment with preventive drugs is crucial for preventing recurrent reactions in the highest-risk population.
Pretreatment relies fundamentally on the administration of corticosteroids and histamine blockers. Administering prednisone 50 mg
at 13, 7, and 1 hour prior to the procedure, along with an oral dose of diphenhydramine 50 mg 1 hour before the procedure, effectively reduces the occurrence of
recurrent reactions (Bush & Swanson, 1991; Goss et al., 1995; Greenberger, Halwig, Patterson, & Wallemark, 1986; Nayak, White, Cavendish, Barker, & Kandzari,
2009; Wittbrodt & Spinler, 1994). Prior to urgent procedures, it is recommended to administer intravenous hydrocortisone at a dose of 200 mg along with 50 mg of
diphenhydramine, as indicated in Table 1 (Greenberger et al., 1986).

Table 1

Specific recommendations for pre-medication regimens. Adapted from the American College of Radiology guidelines (American College of Radiology, 2010).
Note that use of H2 blockers is not supported by the current guidelines.

Elective Pre-Medication:
1. Prednisone 50 mg by mouth at 13 hours, 7 hours, and 1 hour before contrast medium injection
2. Diphenhydramine 50 mg intravenous, intramuscular, or by mouth 1 hour before contrast medium injection

Emergency Pre-Medication (in decreasing order of desirability):
1. Methylprednisolone 40 mg or hydrocortisone sodium succinate 200 mg intravenously every 4 hours until the contrast study, plus diphenhydramine 50 mg
intravenous 1 hour prior to contrast injection
2. Dexamethasone sodium sulfate 7.5 mg or betamethasone 6.0 mg every 4 hours until the contrast study, for patients with known allergy to
methylprednisolone, aspirin, or nonsteroidal anti-inflammatory drugs, especially if asthmatic; plus diphenhydramine 50 mg intravenous 1 hour prior to
contrast injection
3. Omit steroids entirely and give diphenhydramine 50 mg intravenous
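As an illustration of the elective schedule in Table 1, the three prednisone doses and the diphenhydramine dose can be turned into concrete clock times with simple date arithmetic. This is a minimal sketch for illustration only, not a clinical tool; the function name `premedication_times` is our own.

```python
from datetime import datetime, timedelta

# Illustrative helper (our own naming): prednisone 50 mg at 13, 7, and
# 1 hour before contrast injection, diphenhydramine 50 mg at 1 hour
# before, per the elective regimen in Table 1. Arithmetic sketch only.

def premedication_times(contrast_injection: datetime):
    """Return (prednisone_times, diphenhydramine_time) for an elective case."""
    prednisone = [contrast_injection - timedelta(hours=h) for h in (13, 7, 1)]
    diphenhydramine = contrast_injection - timedelta(hours=1)
    return prednisone, diphenhydramine

pred, diph = premedication_times(datetime(2024, 5, 1, 14, 0))
# A 14:00 contrast injection gives prednisone doses at 01:00, 07:00, and
# 13:00, and diphenhydramine at 13:00 on the same day.
```
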

It has been hypothesised that adding Histamine-2 blockers (such as cimetidine or ranitidine) to the regimen above may enhance the antihistamine action on
the vascular system, in addition to diphenhydramine, a standard Histamine-1 blocker. Owing to their low cost and excellent safety record, Histamine-2
blockers are frequently used in catheterization laboratories. The usefulness of this approach remains controversial, and prospective studies have shown
little consistent benefit (Goss et al., 1995; Greenberger et al., 1986; Myers & Bloom, 1981; Wittbrodt & Spinler, 1994). Montelukast has also been
suggested as an additional intervention. The American College of Radiology (American College of Radiology, 2010) does not endorse the use of either
Histamine-2 blockers or Montelukast.

Even when patients in this category have received adequate pre-medication, breakthrough reactions have been shown to occur (Freed, Leder, Alexander,
DeLong, & Kliewer, 2001), which highlights the importance of staying alert and closely monitoring these patients. For anaphylactic reactions accompanied
by laryngeal oedema and vascular compromise, it is crucial to promptly inject 0.3 ml of epinephrine at a concentration of 1:1000 subcutaneously, or 3 ml
at a concentration of 1:10,000 intravenously or subcutaneously. Administering corticosteroids, diphenhydramine, and a significant amount of intravenous
fluids is recommended to mitigate the intensity of the reaction. The use of Histamine-2 blockers remains debated, but should be considered in patients who
are refractory to treatment (Bush & Swanson, 1991; Goss et al., 1995).
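The two epinephrine regimens above deliver the same total dose, which follows from the meaning of ratio strengths: 1:1000 is 1 g per 1000 ml (1 mg/ml) and 1:10,000 is 0.1 mg/ml. A minimal arithmetic sketch, with function names of our own choosing; this is illustrative only, not clinical guidance.

```python
# Illustrative arithmetic only, not clinical guidance. A ratio strength
# of 1:N means 1 g of drug per N ml of solution.

def mg_per_ml(ratio_denominator: int) -> float:
    """Concentration in mg/ml for a 1:N ratio strength (1 g per N ml)."""
    return 1000.0 / ratio_denominator

def dose_mg(volume_ml: float, ratio_denominator: int) -> float:
    """Total drug mass (mg) delivered by a given volume of a 1:N solution."""
    return volume_ml * mg_per_ml(ratio_denominator)

# The two regimens in the text deliver the same 0.3 mg of epinephrine:
subcutaneous_mg = dose_mg(0.3, 1000)   # 0.3 ml of 1:1000 (1 mg/ml)
intravenous_mg = dose_mg(3.0, 10000)   # 3 ml of 1:10,000 (0.1 mg/ml)
```
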

2.5 Heparin-Induced Thrombocytopenia (HIT)

Heparin-induced thrombocytopenia (HIT) is a severe immune-mediated complication that can occur whenever heparin is administered, whether through
heparinized saline flushes or during percutaneous coronary procedures. While the danger may not be apparent during the procedure itself, patients with
previous heparin exposure can develop clinical manifestations in the days following treatment, with severe thromboembolic consequences. Approximately
1-3% of patients on unfractionated
heparin will experience a severe kind of immune mediated thrombocytopenia accompanied by venous and arterial blood clotting (HIT-2) (Brieger, Mak, Kottke-
Marchant, & Topol, 1998; Jang & Hursting, 2005). The occurrence of this reaction is initiated by the binding of antibodies to the heparin platelet factor-4 complex.
This binding triggers a series of reactions that result in the activation of platelets and the release of substances that promote blood clotting and inflammation. These
reactions deplete platelets and provoke the formation of blood clots. Patients who acquire HIT-2 frequently undergo a significant decrease in their platelet count,
usually by at least 50%. This drop in platelet count typically occurs 5-15 days after starting heparin treatment, however it may occur more rapidly if the patient has
previously been sensitised to heparin (Jang & Hursting, 2005). Patients who have pre-existing coronary artery disease or have undergone cardiac transplantation
are more likely to develop heparin-induced thrombocytopenia (HIT), with incidence rates of 2-8% and 11% respectively (Hourigan, Walters, Keck, & Dec, 2002;
Kappers-Klunne et al., 1997). There have been reported cases of acute coronary syndrome, characterised by acute thrombosis, occurring during coronary
angioplasty in patients who developed HIT (Gupta, Savage, & Brest, 1995). The diagnosis relies on the clinical presentation of decreased platelet count, with or
without concurrent thrombosis. Diagnostic confirmation can be reliably achieved with the use of HIT-antibody assays. However, it is crucial not to postpone
treatment in cases where there is a high level of clinical suspicion, given the severity of concurrent medical conditions. Within the group of patients diagnosed with
HIT and thrombosis, around 9-11% experience the need for limb amputation, and mortality rates range from 17-30% (Jang & Hursting, 2005). The treatment
involves promptly and completely stopping the use of heparin and starting treatment with direct thrombin inhibitors, such as argatroban, bivalirudin, or lepirudin.
Prospective trials of bivalirudin and argatroban have shown that these medications are safe and effective in patients with or at risk for heparin-induced
thrombocytopenia (HIT) who come to the catheterization laboratory (Campbell et al., 2000; Lewis et al., 2002; Mahaffey et al., 2003). Individuals with
severe renal impairment require dose adjustment for bivalirudin, while argatroban is not recommended for individuals with hepatic failure.

3. Infections

3.1 Occurrence

Invasive cardiovascular procedures have a low incidence of infections. The documented prevalence of catheter-related infections (excluding cut-down
procedures) is well below 1% according to retrospective investigations (Munoz et al., 2001; Ramsdale, Aziz, Newall, Palmer, & Jackson, 2004). This figure
may underestimate the actual occurrence of infections acquired during catheterization, since most signs and symptoms are unlikely to manifest promptly
after the procedure. In a prospective study of 147 consecutive blood cultures collected after complex cardiac catheterization procedures, positive blood
cultures were detected in 18% and 12% of the participants immediately after and 12 hours after the procedure, respectively. The prevailing organism was
coagulase-negative staphylococcus, and no patients exhibited clinical signs of infection (Ramsdale et al., 2004).

Fever should be taken into consideration when deciding whether to proceed with elective procedures. Prior to undergoing an elective cardiac
catheterization, patients with ongoing infections must receive adequate treatment (Chambers et al., 2006). Case reports have identified several
catheterization practices associated with elevated susceptibility to infectious complications. Local infections following angioplasty have been associated
with early re-puncture of the same-side femoral artery (Wiener & Ong, 1989), utilisation of arterial grafts for access (McCready et al., 1991), and
prolonged retention of catheters (Polanczyk et al., 2001). Localised hematomas can serve as a source of infection and require prompt treatment when they
occur. The occurrence of infection in the suture or collagen anchor of vascular closure devices is rare, with a rate of 0.5%. However, when these
infections do occur, they can result in arteritis that poses a threat to the limb (Baddour et al., 2004; Cooper & Miller, 1999). The placement of a Foley
catheter before the procedure should be recognised as a potential contributor to severe urinary tract infection; avoid them whenever possible and remove
them when urine output monitoring is no longer necessary.

3.2 Precautions for Infections

The American College of Cardiology does not consider full operating-room sterile technique necessary for the majority of catheterization procedures
(Bashore et al., 2001; Chambers et al., 2006).

Hair removal is necessary only when hair obstructs access to the site. When hair must be removed, electric clippers are recommended instead of razors
(Chambers et al., 2006; Ko, Lazenby, Zelano, Isom, & Krieger, 1992; O'Grady et al., 2002). Prior to administering local anaesthesia, the skin should be
cleansed with a preparation containing 2% chlorhexidine, such as Chloraprep. A recent investigation involving around 500 patients found no discernible
disparity in the incidence of infections when caps and masks were utilised (Laslett & Sabin, 1989). Nevertheless, because infections are so infrequent,
studies examining the efficacy of sterile techniques in the catheterization laboratory would require a very large number of patients to have adequate
statistical power. In addition, masks and eye shields offer further protection for the operator against blood splash during the procedure. To minimise the
risk of bacterial and fungal infections, it is recommended to refrain from using occlusive dressings and topical antimicrobials at the access site
(Chambers et al., 2006). Antibiotic prophylaxis is not routinely recommended during cardiac catheterization (O'Grady et al., 2002).

4. Contrast-Induced Nephropathy (CIN)

Contrast-induced nephropathy (CIN) is a potentially severe complication of coronary angiography, with major consequences in the short and long term. CIN
can be reduced by implementing appropriate risk stratification, selecting the appropriate contrast agent, staging procedures, and employing preventive
management techniques. CIN is defined as an increase in serum creatinine of at least 0.5 mg/dl or 25% above the baseline value. This definition is based
on data showing a connection between such increases and important consequences, such as irreversible kidney damage necessitating hemodialysis, and
mortality (Gami & Garovic, 2004). The use of different definitions of CIN in studies, together with variations in patient co-morbidity, has made it
challenging to determine the actual incidence of CIN accurately; reported rates have ranged from 3.3% to 16.5% (Murphy, Barrett, & Parfrey, 2000). An
extensive observational study involving 1,826 consecutive patients revealed a 14.4% occurrence rate in a community-based population (McCullough, Wolyn,
Rocher, Levin, & O'Neill, 1997). Smaller observational studies of individuals with fewer risk factors have indicated a significantly reduced risk of
approximately 3% (Rudnick, Berns, Cohen, & Goldfarb, 1997). Fortunately, most patients experience only a minor and temporary rise in serum creatinine,
which usually does not lead to reduced urine output; the increase peaks within two to four days and normally returns to baseline within seven days.
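The two-pronged CIN definition above (an absolute rise of at least 0.5 mg/dl, or a relative rise of at least 25% over baseline) can be sketched as a small predicate. The function name and the thresholds-as-parameters are our own; this is an illustration of the definition, not a diagnostic tool.

```python
# Minimal sketch of the CIN definition given in the text: serum creatinine
# rise of >= 0.5 mg/dl OR >= 25% above the baseline value. Naming and
# parameterisation are our own.

def meets_cin_definition(baseline_cr: float, peak_cr: float,
                         abs_rise_mg_dl: float = 0.5,
                         rel_rise: float = 0.25) -> bool:
    """Return True if the creatinine rise meets either CIN criterion."""
    delta = peak_cr - baseline_cr
    return delta >= abs_rise_mg_dl or delta >= rel_rise * baseline_cr

# Example: baseline 1.2 mg/dl rising to 1.6 mg/dl is an absolute rise of
# only 0.4 mg/dl, but a relative rise of ~33%, so the definition is met.
```

Note that for low baselines the relative criterion triggers first, while for high baselines the absolute 0.5 mg/dl criterion dominates.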

The development of CIN appears to be multifactorial. The effects of contrast media on various vasoactive substances (adenosine, nitric oxide, endothelin)
and the action of free radicals have been suggested as potential causes for the multidirectional changes in renal hemodynamics (Barrett & Carlisle, 1993; R.
Solomon, 2005). The most significant risk factors for developing contrast-induced nephropathy (CIN) are preexisting renal insufficiency, diabetes, age, as well as
the osmolality and amount of contrast used. When examining past studies of patients who underwent angiography, it was found that the occurrence of contrast-
induced nephropathy (CIN) was more frequent in diabetic patients compared to non-diabetic patients, specifically in those with a baseline creatinine level below 2.0
mg/dl. All individuals with a baseline creatinine level of 2.0 or above had a markedly increased chance of developing acute renal failure. Among the 7,856 patients
examined, the incidence of contrast-induced nephropathy (CIN) was merely 2.5% when the creatinine level was below 2 mg/dl. However, it escalated to 30.6%
when the creatinine level exceeded 3.0 mg/dl (Rihal et al., 2002). The incidence of persistent kidney damage necessitating hemodialysis in patients with acute
renal failure is approximately 7.1%, as reported by the two largest studies conducted by McCullough, Bertrand, Brinker, and Stacul (2006) and Rihal et al. (2002).
Furthermore, numerous studies have demonstrated a clear association between CIN and unfavourable long-term survival (Bartholomew et al., 2004; Freeman et
al., 2002; Rihal et al., 2002). The risk of renal injury necessitating dialysis, repeated hospitalisation, and mortality rises in direct proportion to the severity of acute
kidney injury (James et al.). Within extensive registries, the mortality rate during the initial hospitalisation is 22% for patients experiencing acute renal failure,
whereas it is just 1.4% for people who do not have acute renal failure. According to Rihal et al. (2002), the estimated mortality rates for hospital survivors with acute
renal failure were 12.1% and 44.6% at 1 and 5 years, respectively. These rates were significantly higher than the mortality rates of 3.7% and 14.5% observed in
patients without acute renal failure.

4.1 Preventive Measures and Prophylactic Strategies

Various discrete risk factors have been documented for the occurrence of CIN. Using multivariable regression models, researchers have created risk scores
that estimate the likelihood of developing CIN (Figure 1) (James et al.; Mehran et al., 2004). Among the modifiable variables, reducing the amount of
contrast medium given is the main strategy to prevent CIN. The administered volume of radiocontrast was identified as the strongest independent predictor
of nephropathy necessitating dialysis (Cigarroa, Lange, Williams, & Hillis, 1989; Marenzi et al., 2009; McCullough et al., 1997; Rudnick et al., 1997). In
patients with pre-existing chronic renal disease, the total amount of contrast delivered is a significant factor: when more than 125-140 ml of contrast is
given, these patients experience a 5-10 fold increase in contrast-induced nephropathy, regardless of any other preventive precautions taken (McCullough et
al., 1997; Taliercio et al., 1991). Consequently, most experts advise restricting the contrast volume to 3 ml per kilogramme.

Figure 1

Multivariable CIN risk score (Mehran et al., 2004)
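The weight-based ceiling above (3 ml/kg) and the 125 ml figure reported for chronic renal disease can be combined into a small worked example. Function names and the min-of-both-limits structure are our own; this is an illustrative sketch, not a dosing tool.

```python
# Sketch of the contrast-volume limits discussed in the text: a 3 ml/kg
# weight-based cap, and for chronic renal disease the 125 ml threshold
# above which a 5-10 fold rise in CIN has been reported. Illustrative only.

def max_contrast_ml(weight_kg: float, ml_per_kg: float = 3.0) -> float:
    """Weight-based contrast volume limit advised by most experts."""
    return weight_kg * ml_per_kg

def within_limit(planned_ml: float, weight_kg: float,
                 chronic_renal_disease: bool = False) -> bool:
    """Check a planned volume against the weight-based cap and, for
    chronic renal disease, the stricter 125 ml threshold."""
    limit = max_contrast_ml(weight_kg)
    if chronic_renal_disease:
        limit = min(limit, 125.0)
    return planned_ml <= limit

# For an 80 kg patient the weight-based cap is 240 ml; a planned 150 ml
# run is within that cap but exceeds the chronic-renal-disease threshold.
```
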

The osmolality and ionic content of the chosen contrast media have been found to be closely associated with various negative reactions, such as contrast-induced
nephropathy (CIN) (Barrett & Carlisle, 1993; Jo et al., 2006; Lautin et al., 1991; McCullough et al., 2006; Rudnick et al., 1995). Aspelin et al. demonstrated that the
iso-osmolar nonionic contrast agent Iodixanol (Visipaque) decreased the relative risk of contrast-induced nephropathy (CIN) by 23% compared to the low-osmolar
nonionic agent iohexol (Omnipaque) (Aspelin et al., 2003). A recent randomised, double-blind trial has challenged the notion that osmolality is the only
factor contributing to CIN: the CARE study found no disparity in CIN, as defined by several criteria, after the use of non-ionic, low-osmolar iopamidol
compared to iodixanol in high-risk patients with or without diabetes mellitus (R. J. Solomon et al., 2007). A meta-analysis by the same investigator
similarly demonstrated minimal variation among agents with an osmolality below 800 mOsm/kg. The evidence indicates that parameters such as viscosity and
ionic content, along with the osmolality of the chosen agent, play a role in the overall
likelihood of developing contrast-induced nephropathy (CIN) (R. Solomon, 2005).

Volume expansion is essential for preventing CIN. The efficacy of saline administration is supported by a collection of small observational and randomised
trials. The first controlled study of this relationship, conducted in 1994, found that administering 0.45% saline alone over a 24-hour period was more
effective than combining volume supplementation with diuresis using furosemide or mannitol (R. Solomon, Werner, Mann, D'Elia, & Silva, 1994). Mueller et
al. examined the tonicity of fluids in a sample of 1,383 individuals, comparing 0.45% saline with 0.9% saline. The incidence of CIN was higher in patients
who received 0.45% saline (2.0% vs 0.7%, p = 0.04), with no significant differences in dialysis requirement or length of hospital stay (Mueller et al.,
2002). Subsequent randomised trials, though statistically underpowered, have demonstrated a moderate yet consistent advantage of administering isotonic
saline at a rate of 1 ml/kg over a 24-hour period, starting 12 hours before the procedure (Bader et al., 2004; Krasuski, Beard, Geoghagan, Thompson, &
Guidera, 2003; Weisbord & Palevsky, 2008). In patients with chronic renal failure, peri-procedural hydration can be delivered via continuous veno-venous
hemofiltration, which enables the administration of substantial amounts of fluid while mitigating the hazard of fluid overload. A study by Marenzi et al.
in patients with moderate to severe renal insufficiency (baseline creatinine 3.0 mg/dl) compared hemofiltration with standard therapy; hemofiltration
reduced the requirement for hemodialysis by 18% compared to the control group, with fewer in-hospital events and lower one-year mortality (10% vs. 30% for
controls). Prophylactic hemodialysis, however, has not demonstrated equivalent advantages (Vogt et al., 2001).

Acetylcysteine, an antioxidant, has been studied for its potential to prevent CIN. The recommended dosage is 600-1200 mg orally before the procedure and
600 mg twice daily for 24-48 hours afterwards. However, its effectiveness in preventing CIN has been inconsistent across studies (Briguori et al., 2007;
Coyle et al., 2006; Diaz-Sandoval, Kosowsky, & Losordo, 2002; Fung et al., 2004; Marenzi et al., 2006; Tepel et al., 2000; Webb et al., 2004). A
meta-analysis of studies up to 2003 revealed that adding acetylcysteine to intravenous hydration produced a 56% relative decrease in CIN compared with
hydration alone (Birck et al., 2003). Its routine use has been endorsed by numerous specialists and institutions because of its low cost, ease of use, and
benign side-effect profile. However, a recent international randomised investigation in 2,303 patients from 46 hospitals across Brazil (the ACT trial)
demonstrated no advantageous effects: both groups had a 12.7% incidence of CIN, with comparable increases in blood creatinine and requirement for
dialysis. This study is the most extensive investigation carried out on the subject and may have provided a definitive answer about the potential benefit
of acetylcysteine ("Acetylcysteine for the prevention of kidney-related outcomes in patients undergoing coronary and peripheral vascular angiography:
primary findings from the randomised Acetylcysteine for Contrast-induced nephropathy Trial (ACT)", 2011).

The use of sodium bicarbonate infusion to alkalinize the urine has been investigated as a promising method of avoiding CIN by reducing the production of
free radicals (Brar et al., 2008). The outcomes, however, have been mixed and have not demonstrated a consistent advantage across trials (Maioli et al.,
2008; Merten et al., 2004; Recio-Mayoral et al., 2007). Most of the apparent benefit came from smaller studies that evaluated outcomes shortly after the
delivery of radiocontrast (Zoungas et al., 2009). Large amounts of sodium bicarbonate have been associated with notable volume overload and cardiac
failure in certain studies. Neither the use of ascorbic acid as an antioxidant nor the administration of the selective dopamine-1 agonist fenoldopam to
promote renal plasma flow has yielded consistent positive results (Spargias et al., 2004; Stone et al., 2003).

5. Cholesterol Emboli

Cholesterol emboli are released as cholesterol crystals from friable vascular plaques. Distal embolisation of cholesterol crystals during angiography, major vascular surgery, or thrombolysis can produce a systemic syndrome (Bashore & Gehrig, 2003; Kronzon & Saric). The diagnosis is suggested clinically by mottled purple discoloration of the extremities, known as livedo reticularis, by digital cyanosis or gangrene, or by involvement of the nervous system or kidneys. Renal involvement typically progresses gradually over two to four weeks after angiography. The diagnosis is confirmed by biopsy of the afflicted tissues, which reveals cholesterol crystals. Common laboratory findings include eosinophilia and elevated C-reactive protein. The prevalence documented in prospective studies typically falls below 2% (Fukumoto et al., 2003; Saklayen et al., 1997). Autopsy studies have revealed a far greater occurrence, in the range of 25-30%, suggesting that many of these events are clinically silent (Fukumoto et al., 2003; Ramirez et al., 1978). This is further corroborated by the identification of plaque debris in approximately 50% of guiding catheters in a prospective study of 1,000 patients (Keeley & Grines, 1998). There is no notable difference in the likelihood of atheroembolism between the brachial and femoral approaches, indicating that the primary origin is the ascending aorta. Significant risk factors include older age, repeat procedures, widespread atherosclerotic disease, and elevated pre-procedure C-reactive protein levels. Treatment is primarily supportive, although a retrospective study reported a reduced occurrence of cholesterol emboli when simvastatin was administered prior to the procedure (Woolfson & Lachmann, 1998). With the exception of statins, steroids and prostaglandins have not yielded substantial treatment benefit (Elinav et al., 2002; Graziani et al., 2001).

6. Local Vascular Injury

Vascular access site complications are prevalent and concerning issues that frequently arise during coronary angiography, and they contribute considerably to the morbidity and mortality associated with the procedure. During the early years of cardiac catheterization, the incidence of vascular complications ranged from 0.7% to 11.7% (Babu, Piccorelli, Shah, Stein, & Clauss, 1989; Omoigui et al., 1995; Oweida, Roubin, Smith, & Salam, 1990; Samal & White, 2002; Wyman et al., 1988). Over the last decade, notable advancements in anticoagulant and antiplatelet therapies have reduced the occurrence of severe cardiovascular events, but this progress comes with an increased risk of bleeding. Significant post-procedural haemorrhage and the need for blood transfusions are linked to longer hospital stays and reduced long-term survival (Doyle et al., 2008). Thankfully, as pharmacotherapy has improved, there has been a corresponding increase in experience and in efforts to reduce the likelihood of access site complications. The growing recognition of the importance of peri-procedural bleeding for overall morbidity and mortality has led to the creation and validation of scoring systems designed to identify the patients at greatest risk of bleeding (Applegate et al., 2006; Kinnaird et al., 2003; Mandak et al., 1998; Nikolsky et al., 2007). Analysis of the IMPACT II study identified several modifiable factors that decrease the risk of bleeding and complications: early removal of sheaths, avoiding the placement of venous sheaths, and close monitoring of heparin dosing (Mandak et al., 1998). Greater familiarity with these complications has brought increased awareness, as well as earlier detection and treatment. The use of fluoroscopy to identify anatomical landmarks and detect potential issues during peripheral angiography has become common practice (Turi, 2005). Recent advancements in equipment design enable the use of lower-profile catheters through smaller sheaths, resulting in less vascular damage and a lower incidence of complications (Applegate et al., 2008; Metz et al., 1997; Talley, Mauldin, & Becker, 1995). The development of vascular closure devices has enhanced patient comfort after the procedure and, with further refinement, has the potential to decrease the occurrence of bleeding complications. Consequently, these advancements led to a substantial reduction in vascular complications between 1998 and 2007: the complication rate fell from 1.7% to 0.2% for diagnostic catheterization and from 3.1% to 1.0% for percutaneous coronary intervention (Applegate et al., 2008).
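To make the size of that decline concrete, the relative reductions implied by the Applegate et al. (2008) rates can be computed directly. This is a minimal illustrative sketch; the function and variable names are ours, not from the source:

```python
def relative_reduction(before: float, after: float) -> float:
    """Relative reduction between two complication rates (as fractions)."""
    return (before - after) / before

# Complication rates reported for 1998 versus 2007 (Applegate et al., 2008)
diagnostic = relative_reduction(0.017, 0.002)  # diagnostic catheterization: 1.7% -> 0.2%
pci = relative_reduction(0.031, 0.010)         # PCI: 3.1% -> 1.0%

print(f"Diagnostic: {diagnostic:.0%} relative reduction")  # roughly 88%
print(f"PCI: {pci:.0%} relative reduction")                # roughly 68%
```

In other words, the absolute rates look small, but they correspond to a nearly ninefold reduction for diagnostic procedures and roughly a two-thirds reduction for PCI.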



Figure 2

Any vascular complications by procedure and closure method. CATH - diagnostic cardiac catheterization; MC - manual compression; PCI - percutaneous coronary intervention; VCD - vascular closure device (Applegate et al., 2008).
Optimal insertion of the sheath in the common femoral artery (Figure 3) can prevent most local vascular complications. In 92% of cases, the common femoral artery runs over the femoral head, and in 99% of cases the bifurcation of the common femoral artery is located below the middle of the femoral head (Garrett, Eckart, Bauch, Thompson, & Stajduhar, 2005; Jacobi, Schussler, & Johnson, 2009; Sherev, Shaw, & Brent, 2005; Kim et al., 1992).

Figure 3

(a) Fluoroscopy of the femoral head, using forceps to mark the location of the lower edge of the femoral head on the patient's skin. (b) Proper positioning of the sheath within the common femoral artery. (c) The sheath is correctly positioned with reference to the femoral head, but arterial access was mistakenly obtained in the superficial femoral artery due to the anatomical variation of a high bifurcation. (d) The sheath is correctly positioned relative to the femoral head, but arterial access lies in the external iliac artery because of a low hypogastric artery. (e) Low sheath placement in the profunda femoris artery. (f) High sheath placement in the external iliac artery (Jacobi et al., 2009).

6.1 Hematoma and Retroperitoneal Haemorrhage

Workshop-5 Module-1 Page 156 of 333


Inadequate management of haemostasis after removal of the femoral sheath can lead to a hematoma, a localised collection of blood in the anterior thigh that usually resolves on its own. Most hematomas are benign, tender masses that no longer communicate with the punctured vessel. Larger hematomas have been linked to deep vein thrombosis and to nerve compression causing sensory loss (Butler & Webster, 2002; Shammas, Reeves, & Mehta, 1993). In a comprehensive registry conducted from 2000 to 2005, a significant proportion of patients (2.8%) experienced substantial hematomas that necessitated blood transfusion (Table 2) (Doyle et al., 2008). Manual compression of the proximal femoral artery and the hematoma should be started immediately upon discovery and evaluation. In our experience, manual compression for 20-30 minutes leads to resolution of the hematoma, provided there is no ongoing bleeding or false aneurysm. Removing access sheaths promptly and enforcing 2-4 hours of bed rest thereafter effectively reduces the occurrence of femoral hematomas.

Table 2

Changing incidence of major femoral bleeding and blood transfusions after PCI (*p < 0.005 versus 2000-2005)

                        1994-1995 (n = 2,441)   1996-1999 (n = 6,207)   2000-2005 (n = 9,253)
Femoral Hematoma        172 (7.0%)*             236 (3.8%)*             257 (2.8%)
Femoral Bleed           60 (2.5%)*              76 (1.2%)*              54 (0.6%)
Retroperitoneal Bleed   20 (0.8%)*              19 (0.3%)               26 (0.3%)
Blood Transfusion       207 (8.5%)*             482 (7.8%)*             516 (5.6%)
  1 to 2 Units          98 (4.0%)               288 (4.6%)*             347 (3.8%)
  3+ Units              109 (4.5%)*             194 (3.1%)*             169 (1.8%)
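The percentages in Table 2 follow directly from the counts and cohort sizes. A small sketch (variable names are ours) reproduces the femoral hematoma row:

```python
# Cohort sizes per era, from the column headers of Table 2 (Doyle et al., 2008)
cohorts = {"1994-1995": 2441, "1996-1999": 6207, "2000-2005": 9253}

# Femoral hematoma counts per era, from the first data row of Table 2
hematomas = {"1994-1995": 172, "1996-1999": 236, "2000-2005": 257}

def incidence_pct(count: int, n: int) -> float:
    """Incidence expressed as a percentage of the cohort."""
    return round(100.0 * count / n, 1)

rates = {era: incidence_pct(hematomas[era], cohorts[era]) for era in cohorts}
print(rates)  # matches the 7.0% / 3.8% / 2.8% shown in the table
```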

Hematomas that are large or expanding rapidly can compromise the circulation and necessitate repeated blood transfusions. In this scenario, suspect free femoral bleeding caused by laceration of the femoral artery. To address such situations, a crossover sheath is introduced into the contralateral femoral artery and angiography is used to identify the precise location of the bleeding. Uncontrolled bleeding can then be stopped with a peripheral angioplasty balloon or a stent graft at the site of the injured vessel (Samal & White, 2002).

Retroperitoneal haemorrhage is a potentially life-threatening complication of arterial access that occurs more frequently when the artery is punctured above the inguinal ligament. Such haemorrhage is usually not visible externally, but should be considered when a patient develops abdominal or flank pain together with hypotension and a falling haemoglobin level. CT scanning is used to confirm clinical suspicion; nevertheless, prompt recognition is crucial to expedite manual compression and fluid administration (Figure 4). Advanced age, female sex, small body surface area, and high femoral artery puncture are notable risk factors for retroperitoneal hematomas (Farouque et al., 2005; Sherev et al., 2005). While there has been concern over heightened risk during PCI in the glycoprotein IIb/IIIa inhibition era, large retrospective investigations have not found such a link (Farouque et al., 2005). The majority of patients can be effectively treated by reversing anticoagulation, applying pressure to the access site, close monitoring, and administering fluids, with or without blood products. When conservative treatment is unsuccessful, the puncture site can be effectively sealed by balloon angioplasty from either the ipsilateral or the contralateral femoral artery (Samal & White, 2002).



Figure 4

Retroperitoneal bleeding following cardiac catheterization via right femoral access.

6.2 Pseudoaneurysm

A pseudoaneurysm forms when the arterial puncture site fails to seal and blood collects in the surrounding tissue while remaining in communication with the arterial lumen. Treatment depends on the size of the pseudoaneurysm and the pace of its expansion. False aneurysms with a diameter of less than 2-3 cm can be observed without intervention and re-examined regularly by ultrasound on an outpatient basis (Johns, Pupa, & Bailey, 1991; Kent et al., 1993; Kresowik et al., 1991). Nevertheless, aneurysm size does not reliably predict the likelihood of thrombosis (Kent et al., 1993); consequently, patients with false aneurysms of any size should be followed until thrombosis occurs. Most specialists recommend intervening in symptomatic individuals with a false aneurysm larger than 2.0 cm (Webber et al., 2007). Historically, larger aneurysms have been treated surgically or by ultrasound-guided compression of the aneurysm neck while ensuring that flow in the femoral artery is not compromised (Samal & White, 2002). A more recent development is percutaneous injection of thrombin (1,000 US U/mL), a highly effective means of inducing thrombosis with a primary success rate of up to 96% (Krueger et al., 2005; Webber et al., 2007). Ultrasound-guided thrombin injection can be performed quickly, without surgery or the discomfort of ultrasound-guided compression, and it is also effective in patients taking anticoagulant medication (Lennox et al., 1999; Pezzullo, Dupuy, & Cronan, 2000; Taylor et al., 1999) (Figure 5a-d). Surgical intervention for false aneurysms is reserved for cases of rapid expansion, infection, or unsuccessful closure with thrombin injection (Samal, White, Collins, Ramee, & Jenkins, 2001; Webber et al., 2007).
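The management thresholds quoted above can be summarised as a simple decision sketch. This is purely illustrative of the rules stated in the text, not clinical guidance; the function name and parameters are ours:

```python
def pseudoaneurysm_plan(diameter_cm: float, symptomatic: bool,
                        rapid_expansion: bool = False,
                        infected: bool = False,
                        thrombin_failed: bool = False) -> str:
    """Educational sketch of the thresholds cited in the text; real
    management weighs many factors beyond these inputs."""
    if rapid_expansion or infected or thrombin_failed:
        return "surgical repair"                       # (Samal et al., 2001; Webber et al., 2007)
    if symptomatic and diameter_cm > 2.0:
        return "ultrasound-guided thrombin injection"  # up to 96% primary success
    return "outpatient ultrasound surveillance until thrombosis"

print(pseudoaneurysm_plan(1.5, symptomatic=False))
print(pseudoaneurysm_plan(2.5, symptomatic=True))
```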

Figure 5

(a) Duplex ultrasound showing a pseudoaneurysm with arterial flow passing through a long, narrow neck originating from a defect in the femoral artery, with chaotic colour flow into the cavity. (b) With colour flow removed, the precise location of the needle tip can be tracked throughout the procedure, aided by the small amount of echogenic thrombus that forms at the needle tip when thrombin contacts blood. (c) The needle and the colour flow observed during thrombin injection confirm fresh clot formation within the sac. (d) Power Doppler showing patent native femoral vessels (CFA = common femoral artery; SFA = superficial femoral artery; PFA = profunda femoris artery) and the absence of flow after successful thrombin injection into the pseudoaneurysm cavity (Lennox et al., 1999).

6.3 Arteriovenous Fistula

Arteriovenous fistulas (AVF) occur when a needle tract crosses both the artery and the vein and is then dilated during sheath insertion (Figure 6). They can also form when ongoing bleeding from the puncture site decompresses into a nearby femoral vein. As such, they are often caused by low arterial access into the superficial femoral artery, because of the anterior-to-posterior proximity of that artery to the superficial femoral vein, as opposed to the side-by-side relationship of the common femoral artery and vein (Kim et al., 1992). The diagnosis is suggested by a palpable thrill or continuous bruit over the puncture site and confirmed by contrast CT or Doppler sonography. Investigations of more than 10,000 individuals who underwent transfemoral cardiac catheterization have revealed an occurrence rate of nearly 1%. Management is typically conservative and observational, since approximately one-third of AVFs resolve spontaneously within a year (Kelm et al., 2002). Surgical intervention is reserved for patients who are symptomatic, have high-output heart failure, or have fistulas that do not close within a year (Samal & White, 2002).



Figure 6

AVFs result when a needle tract crossing both artery and vein is dilated and catheterized. V = vein; A = artery (Kim et al., 1992).



6.4 Dissection

Dissection of the femoral and iliac arteries is rare, observed in only 0.42% of the most recent cohorts (Prasad et al., 2008). It is more frequent in iliac arteries with a higher burden of atherosclerosis, greater tortuosity, or after traumatic sheath insertion. Occlusive dissection is a serious condition that can threaten both limb and life, but it can be detected and treated safely once diagnosed. Obtaining cine images of the femoral access site before concluding the study can help identify possible dissections; this should be done in patients with difficult access or traumatic sheath insertion. Once a dissection is detected, removal of wires and catheters can allow spontaneous resolution (Samal & White, 2002). For significant flow-limiting dissections, angioplasty and stenting have proven a safe and successful therapeutic option, often eliminating the need for surgery (Scheinert et al., 2000).

6.5 Thrombosis and Embolism

Thrombosis occurs most commonly in female patients and in those with narrow vessels, peripheral artery disease, diabetes mellitus, insertion of a large-diameter catheter or sheath (such as an intraaortic balloon pump), or prolonged catheter dwell time (Noto et al., 1991; Popma et al., 1993). Patients commonly report leg discomfort with reduced sensation and motor function in the lower limb. Physical examination frequently reveals absent peripheral pulses and a pale, painful foot. To prevent thrombotic and embolic complications, arterial sheaths should be flushed regularly to prevent clot formation, and anticoagulation is recommended during extended procedures and while an intraaortic balloon pump is in use. Treatment consists of percutaneous thrombectomy or thrombolytic therapy, along with consultation with a vascular surgeon (Samal & White, 2002).

6.6 Vascular Closure Devices

Several techniques have been investigated over time to percutaneously seal the femoral artery, aiming to eliminate the need for manual compression and to reduce the duration of bed rest, both major causes of patient dissatisfaction. While these devices offer increased comfort and earlier mobility, their safety and cost-effectiveness compared with manual compression have not been definitively established. Their advantage has been minimal at best, and numerous studies have documented a rise in vascular complications after PCI (Koreny, Riedmuller, Nikfardjam, Siostrzonek, & Mullner, 2004; Nikolsky et al., 2004). However, these studies focused on trials conducted during the earliest period of vascular closure device use, and improving device technology and growing operator expertise are expected to enhance effectiveness and safety. In the ACUITY trial, analysed by Sanborn et al., use of a vascular closure device during percutaneous coronary intervention (PCI) was associated with a significantly lower occurrence of interventional or surgical correction and of hematoma ≥ 5 cm (0.4% versus 0.8%, P < 0.05, and 1.9% versus 2.7%, P < 0.03, respectively) (Sanborn et al.). The reductions in bleeding and vascular complications observed in this study are comparable to those reported in a recent study from Northern New England, in which the use of bivalirudin and vascular closure devices during PCI was associated with a 52% reduction in bleeding and a 25% reduction in vascular complications (Ahmed et al., 2009). Therefore, advancements in vascular closure device technology, increased operator experience, and the use of medications that reduce bleeding risk can lead to significant reductions in femoral access complications.

6.7 Transradial Approach

The transradial approach has gained increasing acceptance since its introduction more than 20 years ago, primarily because it reduces vascular complications and improves patient satisfaction. The method has multiple advantages: the radial artery is not in close proximity to adjacent nerves or veins and can be readily compressed, facilitating hemostasis. In addition, the hand is supplied with blood from both the ulnar and radial arteries through the palmar arch; hence, radial artery occlusion (reported in 5-19%) (Greenwood et al., 2005) is generally of little clinical concern for the majority of patients, as the hand receives ample blood supply through the collateral circulation connecting the two arteries. The Allen test, when executed correctly, is a straightforward and efficient technique for evaluating the adequacy of collateral circulation in the hand. Patients with an abnormal Allen test have a higher occurrence of complications, including impaired blood flow to the hand.

A meta-analysis by Agostoni et al. (2004) found that both access routes had equal rates of major adverse cardiovascular events, but that the radial access group had a markedly lower rate of access-site complications. Nevertheless, these benefits are counterbalanced by a higher rate of procedural failure: 7.2% compared with 2.4% for femoral access (Agostoni et al., 2004). Analysis of data from the National Cardiovascular Data Registry spanning 2004 to 2007 has revealed a consistent, gradual rise in the use of radial artery procedures; nevertheless, they accounted for a mere 1.3% of total procedures during that period. In recent data, radial percutaneous coronary intervention (PCI) is associated with a rate of procedural success comparable to the femoral approach, but with a lower risk of procedural bleeding (Rao et al., 2008). Interest in the approach is driven by increasing operator experience, advancements in low-profile catheters and stents, and improved patient satisfaction and comfort.

7. Conduction Disturbances

7.1 Bradyarrhythmia



Bradycardia, a temporary slowing of the heart rate, frequently occurs in the catheterization laboratory. It was far more common in the era of high-osmolar ionic contrast agents and has decreased with the widespread adoption of iso-osmolar, non-ionic contrast material. Extended periods of bradycardia can produce a vagal response accompanied by hypotension, nausea, perspiration, and yawning. In a study by Landau, Lange, Glamann, Willard, and Hillis (1994), around 3.5% of patients experienced this phenomenon; 80% developed it while vascular access was being obtained, and 16% after sheath removal. Treating anxiety and pain, in addition to ensuring adequate hydration, can effectively prevent prolonged vagal reactions. In addition, hypotension and bradycardia can be early indicators of perforation and tamponade, as stimulation of the pericardium triggers a vagal response. Forceful coughing can enhance coronary perfusion and restore regular rhythm. If coughing fails to resolve the issue, rapid intravenous fluids, treatment of underlying pain or anxiety, and 0.5-1 mg of intravenous atropine can effectively reverse the bradycardia. For complete heart block, temporary pacing with a transvenous pacemaker should be started promptly.
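The escalation sequence described above can be sketched as an ordered list. This is an educational summary of the steps stated in the text, not clinical guidance; the function name is ours:

```python
def bradycardia_escalation() -> list:
    """Ordered response steps for catheterization-laboratory bradycardia,
    as described in the text above. Educational sketch only."""
    return [
        "forceful coughing to enhance coronary perfusion",
        "rapid intravenous fluids; treat underlying pain or anxiety",
        "atropine 0.5-1 mg intravenously",
        "temporary transvenous pacing for complete heart block",
    ]

for i, step in enumerate(bradycardia_escalation(), start=1):
    print(f"{i}. {step}")
```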

Conduction disturbances also occur, although they are far less frequent than vagal episodes. Passing the catheter across the aortic valve commonly provokes ectopic beats. In a patient with pre-existing right bundle branch block, a new left bundle branch block caused by septal scraping can result in complete heart block and circulatory collapse; conversely, in a patient with pre-existing left bundle branch block, right heart catheterization that produces a right bundle branch block can lead to a comparable situation. It is therefore essential for the operator to examine each patient's EKG before the procedure. Minimising the duration of catheter-induced ectopy can help prevent these complications.

7.2 Tachyarrhythmia

Atrial arrhythmias can develop from irritation of the right atrium by the catheter during right heart catheterization. Immediate therapy is typically unnecessary unless the arrhythmia causes ischemia or hemodynamic instability. In contemporary practice, ventricular tachycardia and ventricular fibrillation are associated with catheter-induced irritation of the myocardium. Trained technicians and attentive operators can identify ventricular ectopy, which helps decrease the occurrence of these arrhythmias. Upon detection of a run of ventricular tachycardia, the offending catheter should be promptly withdrawn to allow return to a regular sinus rhythm. Ventricular arrhythmias were formerly more common with ionic contrast material injected into the right coronary artery, causing ventricular dysrhythmia in 1.3% of patients in studies from 1973 and 1987; more recent reports indicate that the rate of this complication has fallen to 0.1% (Chen, Gao, & Yao, 2008). Ventricular tachycardia was observed in 4.3% of patients with ST-elevation myocardial infarction during cardiac catheterization in the PAMI trial (Mehta et al., 2004). For high-risk patients, pre-treatment with beta-blockers or initiation of antiarrhythmic therapy with lidocaine or amiodarone during recurrent episodes may be considered. Direct current cardioversion is the recommended treatment for hemodynamically unstable atrial rhythms or any sustained ventricular tachyarrhythmia.

8. Mortality

Over the past few decades, there has been a gradual decrease in mortality following left heart catheterization. In the early 1960s, diagnostic catheterization carried a mortality rate of 1%, which had fallen to 0.08% by the 1990s (Braunwald & Gorlin, 1968; Chandrasekar et al., 2001; Johnson et al., 1989; Kennedy, 1982; Noto et al., 1991). Several baseline variables contribute to mortality during coronary angiography, the most significant being multivessel disease, left main coronary artery (LMCA) disease, congestive heart failure (CHF), renal insufficiency, and advanced age (Laskey, Boyle, & Johnson, 1993). Cardiac catheterization and percutaneous coronary intervention have advanced in recent years, including the introduction of stents and potent antiplatelet drugs, and these improvements have the potential to affect the overall complication rate (Chandrasekar et al., 2001).

Significant LMCA disease markedly increases the risk of dissection during catheter engagement and contrast injection. This risk is reported to be approximately 0.07%, and it is almost twice as high with percutaneous intervention (Cheng et al., 2008; Eshtehardi et al.). The mortality associated with iatrogenic LMCA dissection is around 3% (Eshtehardi et al.), especially if it goes undetected; immediate treatment with coronary artery bypass surgery or percutaneous coronary intervention with stents is necessary.

Patients with chronically depressed left ventricular function and those with acute myocardial infarction (MI) who are in a state of shock have the highest risk of
mortality, according to studies by Anderson et al. (2007), Johnson et al. (1989), and Shaw et al. (2002). In cases where there is evident or developing cardiogenic
shock during cardiac catheterization, the use of an intra-aortic balloon pump and inotropic support may be necessary.

The risk of mortality is higher when coronary angiography is combined with PCI (Dorros et al., 1983; Shaw et al., 2002). Data from the American College of Cardiology-National Cardiovascular Data Registry (ACC-NCDR) published in 2010 identified several factors associated with increased mortality during PCI: cardiogenic shock, advancing age, salvage procedures, urgent or emergency PCI, reduced left ventricular ejection fraction, acute myocardial infarction, diabetes, chronic renal failure, multivessel disease, prior coronary artery bypass grafts (CABG), and chronic occlusion. The overall in-hospital mortality rate for PCI was 1.27%, ranging from 0.65% for elective PCI to 4.81% for patients with ST-elevation myocardial infarction (Anderson et al., 2007; Peterson et al.; Shaw et al., 2002).

Individuals with aortic stenosis face a greater likelihood of death. The VA Cooperative Study on Valvular Heart Disease revealed a mortality rate of 0.2% among 1,559 preoperative catheterizations (Folland et al., 1989). Bartsch et al. demonstrated that patients with aortic valve stenosis who undergo left heart catheterization to assess the transvalvular gradient have a mortality rate of 1.1% (Bartsch, Haase, Voelker, Schobel, & Karsch, 1999).

Patients who have previously undergone coronary artery bypass grafting (CABG) and need diagnostic or therapeutic cardiac catheterization are usually elderly, have widespread atherosclerosis and impaired left ventricular function, and require longer, more intricate procedures. Varghese et al. demonstrated that there is no disparity in mortality between post-CABG patients who undergo percutaneous coronary intervention (PCI) of grafts and those who undergo PCI of native vessels (Varghese, Samuel, Banerjee, & Brilakis, 2009; Garcia-Tejada et al., 2009).



9. Acute Myocardial Infarction

Myocardial injury may occur in various clinical contexts, including spontaneous events, diagnostic cardiac catheterization, percutaneous interventions, and CABG surgery. Various criteria have been used in clinical studies to define an infarct: CK-MB levels over 2 times the upper limit of normal (ULN) for spontaneous myocardial infarction (MI), CK-MB exceeding 3 times the ULN for coronary procedures, and CK-MB exceeding 5-10 times the ULN for bypass surgery (Alpert, Thygesen, Antman, & Bassand, 2000). The Coronary Artery Surgery Study (CASS) in the late 1970s reported a rate of MI associated with coronary angiography of 0.25% (Davis et al., 1979). The risk of MI decreased gradually across the first, second, and third registries of the Society for Cardiac Angiography, from 0.07% to 0.06% to 0.05% (Johnson et al., 1989; Kennedy, 1982; Noto et al., 1991). The incidence of MI during diagnostic catheterization is significantly affected by patient-related factors, particularly the severity of coronary artery disease (CAD): the risk is estimated at 0.06% for patients with single-vessel disease, 0.08% for those with triple-vessel disease, and 0.17% for individuals with left main disease (Johnson et al., 1989). The incidence of MI during cardiac catheterization has decreased markedly owing to advancements in equipment, operator skill, more potent antithrombotic and antiplatelet medications, improved patient preparation with beta blockers and statins, and the adoption of low-osmolar contrast agents (Judkins & Gander, 1974; Pasceri et al., 2004).
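The CK-MB criteria above (Alpert et al., 2000) amount to context-dependent multiples of the upper limit of normal. A minimal sketch (names are ours; using the 5x lower bound of the 5-10x bypass range is a simplifying assumption):

```python
# Context-dependent CK-MB thresholds, as multiples of the upper limit of
# normal (ULN), per the criteria cited in the text (Alpert et al., 2000).
THRESHOLDS = {
    "spontaneous": 2.0,  # spontaneous MI: > 2x ULN
    "pci": 3.0,          # coronary procedures: > 3x ULN
    "cabg": 5.0,         # bypass surgery: > 5x ULN (lower bound of 5-10x)
}

def meets_mi_criterion(ck_mb: float, uln: float, context: str) -> bool:
    """True if the CK-MB value exceeds the study threshold for the context."""
    return ck_mb > THRESHOLDS[context] * uln

# Example: CK-MB of 16 ng/mL against a ULN of 5 ng/mL after PCI (16 > 15)
print(meets_mi_criterion(16.0, 5.0, "pci"))  # True
```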

Each year, almost 1.5 million patients in the United States undergo percutaneous coronary intervention (PCI) (Roger et al.). Depending on local practice and the diagnostic criteria employed, about 5 to 30% of these individuals exhibit evidence of peri-procedural myocardial infarction.

At the upper estimate, the occurrence of these occurrences is comparable to the yearly frequency of significant spontaneous myocardial infarction. The factors that
potentially predict peri-procedural myocardial infarction (MI) can be divided into three main categories: patient-related, lesion-related, and procedure-related risk
factors (Herrmann, 2005). The primary risk factors, in terms of both occurrence and severity, include intricate lesions (such as the presence of blood clots,
narrowing of a saphenous-vein graft, or a type C lesion), complicated procedures (such as treating multiple lesions or employing rotational atherectomy), and
procedural complications (such as sudden vessel closure, occlusion of side branches, distal embolisation, or impaired blood flow) (Herrmann, 2005; Mandadi et al.,
2004; van Gaal et al., 2009). Peri-procedural ischemic symptoms, such as chest pain at the end of the procedure, or electrocardiographic evidence of ischemia, are
used to identify the specific group of patients who are most likely to experience peri-procedural myocardial infarction (Cai et al., 2007).

Significant peri-procedural myocardial infarctions (MIs) are typically caused by problems that may be seen on angiography. However, this is generally not true for
most patients who have elevated biomarker levels following percutaneous coronary intervention (PCI). Cardiac magnetic resonance imaging has verified that peri-
procedural myonecrosis occurs in two separate areas: near the intervention site, where the damage is likely caused by the occlusion of a side-branch on the outer
surface of the heart, and downstream from the intervention site, where it is likely due to a compromise in the circulation of small blood vessels.

Studies assessing the correlation between post-procedural cardiac troponin levels and long-term mortality have typically included patients with acute coronary
syndromes (ACS), a significant number of whom had aberrant cardiac biomarker levels at the beginning of the study. The frequency of post-procedural elevations
in cardiac troponin has been highly variable, as reported by Cantor et al. (2002), Cavallini et al. (2005), D. N. Feldman et al. (2009), Kini et al. (2004), Kizer et al.
(2003), Nallamothu et al. (2003), Natarajan et al. (2004), Nienhuis et al. (2008), and Testa et al. (2009). While some studies have shown that the serum
concentration of cardiac troponin independently predicts survival, others have not.

The prognostic implications of equivalent amounts of damage in diverse circumstances are yet unknown. Mahaffey et al. conducted a study of the outcomes of peri-
procedural myocardial infarction (MI) compared to spontaneous MI in a substantial sample of 16,173 patients from the PURSUIT and PARAGON B trials involving
non-ST-elevation MI. The data clearly indicated that patients who experienced peri-procedural or spontaneous myocardial infarction (MI) had considerably higher
mortality rates at one and six months (Mahaffey et al., 2005). A recent study included 7,773 patients who had moderate to high risk, non-ST-elevation MI and had
PCI as part of the ACUITY trial (Prasad et al., 2009). Myocardial infarctions (MIs) occurring during the procedure and spontaneously during the follow-up period
were observed in 6.0% and 2.6% of the study population, respectively. After accounting for variations in baseline and procedural factors, spontaneous myocardial
infarction (MI) was found to be a strong and independent indicator of a heightened likelihood of mortality. In contrast, peri-procedural MI did not exhibit a
significant correlation with an elevated risk of death. An analogous finding was reported in individuals with diabetes and stable coronary artery disease (CAD) in
the Bypass Angioplasty Revascularization Investigation 2 Diabetes (BARI 2D) trial (Chaitman et al., 2009).

Collectively, current research suggests that spontaneous myocardial infarction (MI) is a strong predictor of death. Peri-procedural myocardial infarction, while
common, is a marker of atherosclerosis burden and procedural complexity. However, in stable coronary artery disease or non-ST-elevation myocardial
infarction, it generally does not have significant independent predictive value. While significant peri-procedural infarcts can affect the prognosis, they seldom occur
without procedural complications or in patients with normal baseline cardiac troponin levels.

10. Cerebrovascular complications

While the occurrence of stroke following left heart catheterization or percutaneous intervention is generally rare, it is the most severe complication and is linked to
significant morbidity and mortality (Table 3, Figure 7) (Akkerhuis et al., 2001; Fuchs et al., 2002; Lazar et al., 1995; Wong, Minutello, & Hong, 2005). Initial
findings indicated a prevalence of 0.23% in the 1973 study conducted by Adams and colleagues (Adams et al., 1973); however, the more recent diagnostic
catheterization data from the Society for Cardiac Angiography registries (Johnson et al., 1989; Kennedy, 1982) reported a prevalence of 0.07%.

Workshop-5 Module-1 Page 162 of 333


Figure 7

Pooled relative risk (random effects) of mortality after stroke in PCI or in patients with non-ST-elevation MI

Table 3

Incidence of peri-procedural stroke in PCI registries (Hamon, Baron, & Viader, 2008)


Reference No. Patients No. Percentage 95% CI

Lazar et al., 1995

Total 6465 27 0.42 0.27-0.60

Ischemic NA NA NA

Hemorrhagic NA NA NA

Uncertain NA NA NA

Akkerhuis et al., 2001

Total 8555 31 0.37 0.24-0.51

Ischemic 19 0.22 0.13-0.34

Hemorrhagic 12 0.14 0.07-0.24

Uncertain 1 0.01 0.00-0.06

Fuchs et al., 2002

Total 9662 43 0.44 0.32-0.6

Ischemic 21 0.22 0.13-0.33

Hemorrhagic 20 0.21 0.13-0.32

Uncertain 2 0.01 0.00-0.07

Dukkipati et al., 2004

Total 20679 92 0.44 0.36-0.54


Ischemic 43 0.21 0.15-0.28

Hemorrhagic 13 0.06 0.03-0.10

Uncertain 36 0.17 0.12-0.24

Wong et al., 2005

Total 76903 140 0.18 0.15-0.21

Ischemic NA NA NA

Hemorrhagic NA NA NA

Uncertain NA NA NA
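The confidence intervals reported in Table 3 are standard binomial intervals on small event counts. As a rough check, they can be approximated with a Wilson score interval; the registries may have used an exact (Clopper-Pearson) method, so small differences from the tabulated bounds are expected. A minimal Python sketch:

```python
from math import sqrt

def wilson_ci(events: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """95% Wilson score interval for a binomial proportion."""
    p = events / n
    centre = p + z * z / (2 * n)
    margin = z * sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    denom = 1 + z * z / n
    return (centre - margin) / denom, (centre + margin) / denom

# Lazar et al., 1995: 27 strokes among 6,465 patients
# (reported in Table 3 as 0.42%, 95% CI 0.27-0.60)
lo, hi = wilson_ci(27, 6465)
print(f"{27/6465:.2%}  (95% CI {lo:.2%}-{hi:.2%})")
```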

The risk of stroke is elevated with coronary intervention because of the use of guiding catheters, frequent equipment exchanges in the aortic root, intensive
anticoagulation, and prolonged procedure durations. In a study conducted by Dukkipati et al. (2004), stroke was observed in 0.44% of the 20,697 patients who
received percutaneous coronary intervention (PCI) at a high-volume medical facility. Multivariable analysis demonstrated that stroke was linked to diabetes,
hypertension, previous stroke, and renal failure; stroke was also found to be an independent predictor of in-hospital death (Hamon et al., 2008). Stroke patients
experienced prolonged cardiac catheterization procedures, greater contrast usage, a greater likelihood of undergoing the procedure for urgent reasons, and a
higher incidence of intra-aortic balloon counterpulsation (Stone et al., 1997). One possible explanation is the higher likelihood of hemodynamic compromise in
these patients, which can raise the chances of experiencing an ischemic stroke. Scraping of aortic plaque occurs in over 50% of percutaneous coronary
intervention (PCI) patients and is more common when larger catheters are used (Keeley & Grines, 1998).

The primary cause of peri-procedural ischemic stroke occurring with PCI is believed to be cerebral micro-embolism. This observation is corroborated by
transcranial Doppler investigations conducted during cardiac catheterization, which demonstrate the consistent presence of cerebral micro-emboli (Bladin et al.,
1998; Hamon et al., 2006; Leclercq et al., 2001). The primary causes of ischemic stroke during cardiac catheterization or PCI include air embolism, thrombus
formation in or on the catheter, or displacement of aortic atheroma during the manipulation and passage of catheters within the aorta, which results in the release
of embolic material. Patients with coronary artery disease (CAD) exhibit a higher prevalence of severe atheroma in the descending aorta and aortic arch compared
to those without CAD (Khoury, Gottlieb, Stern, & Keren, 1997).

Aside from the aortic root, embolic debris can also originate from the heart chambers, thrombotic coronary arteries, or the surface of cardiac valves. It is advisable
to refrain from inserting the pigtail catheter into the left ventricle in patients who have a suspected aneurysm or have recently experienced a myocardial infarction
(MI), as either condition may be linked to a possibly movable blood clot on the heart wall.

The majority (80%) of peri-procedural strokes related to invasive procedures are caused by embolic material that becomes trapped in the cerebral arteries.
Nevertheless, due to the increasingly aggressive antithrombotic regimens employed in percutaneous coronary intervention (PCI), particularly in acute coronary
syndrome (ACS), instances of cerebral haemorrhage are also observed. Prior to initiating any treatment, it is necessary to determine whether the process is
ischemic or hemorrhagic.

11. Dissection and perforation of great vessels

Fortunately, perforation of the heart chambers, coronary arteries, or intrathoracic major vessels during diagnostic catheterization is rare. The prevalence of
catheter-induced ascending aorta dissection is approximately 0.04% of cases, as reported by Gomez-Moreno et al. in 2006. The prevalence of coronary artery
dissection during balloon angioplasty is around 30% (Figures 8-10; Cowley, Dorros, Kelsey, Van Raden, & Detre, 1984; Huber, Mooney, Madison, & Mooney,
1991).

Figure 8

Angiogram of right coronary artery before (a) and after perforation (b)



Figure 9

Angiogram of right coronary artery prior to intervention (a), after balloon angioplasty (b) and dissection (c)

Figure 10

Angiogram of the left coronary system (a). Dissection of the left circumflex artery with the guidewire catheter (b), with subsequent extension into the left anterior
descending artery (c)
The occurrence of coronary artery perforation has been documented to range from 0.3% to 0.6% in recent registries of patients who undergo PCI (Cowley et al.,
1984; Dippel et al., 2001; Ellis et al., 1994; Gruberg et al., 2000). The use of hydrophilic guidewires, platelet IIb/IIIa receptor blockers, and more aggressive
atherectomy technologies may result in an increased occurrence of coronary perforation. Perforations that cause only deep injury to the artery wall and result in
localised perivascular contrast staining can be monitored without intervention. However, these individuals are susceptible to delayed tamponade within a few hours
after the procedure and therefore require close monitoring. On the other hand, a free perforation can quickly cause the patient to develop frank tamponade,
especially if they are fully anticoagulated. Urgent action should be taken to close the hole by inflating a balloon near the perforation. If the extravasation of contrast
continues after 10-15 minutes, or ischemia develops, it is necessary to employ graft stents to close the arterial rupture. Simultaneously, the option of
pericardiocentesis should be considered in order to provide the time required to close the perforation. The overall occurrence rate of emergency surgery after
diagnostic angiography is 0.05%, while it is 0.3% after therapeutic procedures (Chandrasekar et al., 2001; Loubeyre et al., 1999). Nevertheless, when coronary
artery perforation is identified, the documented frequency of urgent surgical interventions such as pericardial window, bypass surgery, or coronary artery ligation
can reach as high as 24-40% (Table 4).

Table 4

Incidence of coronary artery perforation with in-hospital complications (Nair & Roguin, 2006)
Reference Patients Incidence CABG MI Death

Bittl et al., 1993 764 3% 34.7 4.3 9

Ajluni et al., 1994 8932 0.40% 37 26 5.6


Holmes et al., 1994 2759 1.30% 36.1 16.7 4.8

Ellis., 1994 12900 0.50% 24 19 0

Cohen et al., 1996 2953 0.70% 41 45.5 9

Gruberg et al., 2000 30746 0.29% 39 34 10

Dippel., 2001 6214 0.58% 22 NA 11

Gunning et al., 2002 6245 0.80% 39 29 42

Fejka et al., 2002 25697 0.12% 39 29 42

Stankovic et al., 2004 5728 1.47% 13 27 8

Witzke et al., 2004 12658 0.30% 5 18 2.5

Ramana et al., 2005 4886 0.50% 0 20 8



The occurrence of perforation in the major blood vessels, such as the aorta or pulmonary artery, is exceptionally uncommon. Ascending aortic dissection may also occur
due to forceful use of a guiding catheter or extension from a coronary dissection in the proximal region.

Right heart catheterization may result in cardiac perforation, typically accompanied by bradycardia and hypotension due to vasovagal activation. When blood builds up in
the pericardium, the outline of the heart may increase in size and the usual movement of the heart's edges during fluoroscopy will become less pronounced. In cases
where the patient's hemodynamic status is impaired, it is imperative to promptly carry out pericardiocentesis via the subxiphoid technique. After pericardiocentesis has
successfully stabilised the situation, the operator must make a decision regarding the necessity of emergent surgery to suture the perforation location. The majority of
perforations will naturally close without intervention, rendering surgery unnecessary.

12. Other complications

12.1 Hypotension

Arterial hypotension is a prevalent issue observed during catheterization. It is the final common expression of various conditions, including the following:
hypovolemia, caused by inadequate pre-procedure hydration or by excessive diuresis from the contrast agent; decreased cardiac output; tamponade; arrhythmia;
valvular regurgitation; inappropriate systemic arteriolar vasodilation due to a vasodepressor response to the contrast agent; and bleeding, such as retroperitoneal
haemorrhage.

Insufficient filling pressures require quick delivery of fluids, while insufficient filling pressure along with inappropriate bradycardia suggests a vasovagal reaction, in which
case atropine should be administered along with fluid resuscitation. Elevated filling pressures, on the other hand, indicate underlying heart dysfunction and should lead to
evaluation of ischemia, tamponade, or rapid start of valvular regurgitation. Patients of this nature should be provided with empirical support by the use of inotropic drugs,
vasopressors, or circulatory support devices.

Patients exhibiting hypotension and normal or elevated cardiac output, as determined by saturation Swan-Ganz catheters, have a higher probability of experiencing an
allergic reaction to contrast. In such cases, these patients may necessitate vasopressor support, administration of steroids, and histamine blockers.

12.2 Hypoglycemia

Diabetic individuals fasting before a procedure may develop hypoglycemia. It is crucial to monitor these patients thoroughly and to measure their blood glucose
levels regularly with finger-stick tests before and during the procedure. If any indications of hypoglycemia, such as anxiety or lethargy, arise, intravenous glucose
should be delivered immediately.

12.3 Respiratory insufficiency

Respiratory insufficiency may arise from several factors, such as congestive heart failure with pulmonary oedema, pre-existing lung disease, an allergic reaction, or
excessive sedation. Urgent evaluation of the patient's condition is necessary, and appropriate treatment should be administered according to the presumed
cause.

13. Conclusion

Cardiac catheterization is a procedure that carries minimal risk and has a low incidence of complications. Despite improvements in medical treatment and
equipment design, the primary determinants of unfavourable outcomes remain the operator's awareness and proper response, even now that the incidence of
complications is low. Before performing a coronary angiography, it is important to weigh the potential advantages of the procedure against the known risk
factors and the well-defined rates of morbidity and mortality. The extensive utilisation and accessibility of angiography are expected to drive further progress in
percutaneous techniques, potentially enhancing patient comfort and further decreasing complications.

References

Acetylcysteine for prevention of renal outcomes in patients undergoing coronary and peripheral vascular angiography: main results from the randomized
Acetylcysteine for Contrast-induced nephropathy Trial (ACT). Circulation. 2011;124(11):1250–1259. http://dx.doi.org/10.1161/CIRCULATIONAHA.111.038943

Adams D. F, Fraser D. B, Abrams H. L. The complications of coronary arteriography. Circulation. 1973;48(3):609–618.

Agostoni P, Biondi-Zoccai G. G, de Benedictis M. L, et al. Radial versus femoral approach for percutaneous coronary diagnostic and interventional procedures:
systematic overview and meta-analysis of randomized trials. J Am Coll Cardiol. 2004;44(2):349–356.



Download Workshop-5-of-7 from the Moodle site and try it yourself. Email your completed work to your course coordinator.

Workshop-5

Use Hazard and Operability Analysis (HAZOP) to identify and record operational hazards associated with the use of a chromatography column in the context of
product quality.

Scenario

A scientist wishes to separate a mixture of two coloured compounds – one yellow, one blue (the mixture will look green) – using the column chromatography
arrangement as described overleaf.

Instructions

1. Download the HAZOP worksheet at the end of this document.
2. Review the column chromatography arrangement and process.
3. Fill in the worksheet’s header details as follows:

• Design Intent
• Material
• Activity



• Source
• Destination
4. Systematically apply the regular guidewords against any two (2) system parameters of your choosing and assess the process for operational hazards in the
context of product quality.
5. Electronically save your work and upload the completed file into LAMS.
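Step 4, systematically pairing guidewords with parameters, can be organised as a simple cross-product. A minimal Python sketch; the guideword list below is the conventional HAZOP set and the two parameters are examples, not a prescription from the worksheet:

```python
from itertools import product

# Standard HAZOP guidewords and two example parameters for the eluent feed line.
GUIDEWORDS = ["No", "More", "Less", "As well as", "Part of", "Reverse", "Other than"]
PARAMETERS = ["Flow", "Temperature"]

def deviations(guidewords, parameters):
    """Yield every guideword/parameter pairing as a candidate deviation to assess."""
    for gw, param in product(guidewords, parameters):
        yield {"guideword": gw, "parameter": param, "deviation": f"{gw} {param}"}

rows = list(deviations(GUIDEWORDS, PARAMETERS))
print(len(rows))             # 7 guidewords x 2 parameters = 14 candidate deviations
print(rows[0]["deviation"])  # "No Flow"
```

Each generated row would then be assessed for causes, consequences, safeguards, and actions, as in the worksheet.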

Chromatography Arrangement and Process

• The scientist has a concentrated solution of the green coloured mixture in the same solvent as used in the column.
• First the tap is opened to allow the solvent already in the column to drain so that it is level with the top of the packing material, and then the coloured mixture is
carefully loaded to the top of the column.
• The tap is again opened so that the coloured mixture is all absorbed into the top of the packing material, and visually appears as follows:



• Next fresh solvent is added to the top of the column, with little disturbance of the packing material.
• The tap is opened so that the solvent can flow down through the column, and is collected at the bottom.
• As the solvent runs through, fresh solvent is continually added to the top so that the column never dries out.
• The above set of diagrams shows what happens over time.



• The process of washing a compound through a column using a solvent is known as elution. The solvent is sometimes known as the eluent.
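The elution behaviour described above can be approximated with a classic Craig plate model: the weakly retained (yellow) compound spends a larger fraction of its time in the mobile phase, so it travels down the column faster and elutes first. A minimal Python sketch; the plate count and retention factors are invented for illustration:

```python
def craig_elution(k_values, n_plates=30, n_transfers=250):
    """Plate (Craig) model of column elution.

    For each retention factor k, the fraction of a compound in the mobile
    phase on any plate is 1 / (1 + k); that fraction moves down one plate
    per transfer. Returns the amount eluted at each transfer.
    """
    eluted = {k: [] for k in k_values}
    for k in k_values:
        plates = [0.0] * n_plates
        plates[0] = 1.0                      # sample loaded on the top plate
        p = 1.0 / (1.0 + k)                  # mobile-phase fraction per plate
        for _ in range(n_transfers):
            out = plates[-1] * p             # portion leaving the column
            for i in range(n_plates - 1, 0, -1):
                plates[i] = plates[i] * (1 - p) + plates[i - 1] * p
            plates[0] *= (1 - p)
            eluted[k].append(out)
    return eluted

# Yellow compound held weakly (k = 1), blue held strongly (k = 4):
eluted = craig_elution([1.0, 4.0])
peak = {k: max(range(len(v)), key=v.__getitem__) for k, v in eluted.items()}
# peak[1.0] < peak[4.0]: the yellow band exits well before the blue band.
```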

HAZOP Worksheet

Project Name: Downstream Processing – Column Chromatography Step    Page 1 of 3
Drawing #: 1 P&ID Column Chrom    Revision No.: 2: C – Issued for HAZOP    Date: 14 J 2024
Team Leader: Ida Lazewska IL    Team Members: Maria Kaluzna MK, Cíara McLoughlin CM, Elaine Delaney ED    Notes: Risk

Component Examined: Column Feed Inlet Line
Design Intent: Solvent for elution    Material: Solvent for elution post mixture loaded    Activity: Column/Affinity Chromatography
Source: Solvent tank    Destination: Collection beaker

Study Node: Eluent feed | Guideword: No | Parameter: Flow | Deviation: No flow – the column is not replenished with any new solvent
Possible Causes: The supplying tank is devoid of any solvent.
Consequences: If the column does not receive a continuous flow of fresh solvent, it can dry out.
Safeguards in place: None
Actions Required: Mount/install a flow transmitter onto the tank.
Priority: High

Study Node: Eluent feed | Guideword: More | Parameter: Flow | Deviation: An excessive amount of newly distilled solvent continuously flows through the column
system
Possible Causes: The pump is malfunctioning and not operating properly.
Consequences: The column is washed at a rapid rate, resulting in failure to separate the mixture.
Safeguards in place: None
Actions Required: Adjust the pump's set point to decrease the revolutions per minute (RPM).
Priority: High

Study Node: Eluent feed | Guideword: More | Parameter: Temperature | Deviation: Temperature too high
Possible Causes: The solvent supplying tank is overheating, perhaps due to a sensor problem.
Consequences: Loss of control over the reaction and the mixing of materials.
Safeguards in place: None
Actions Required: Install a temperature control system on the supplying tank.
Priority: High

Study Node: Eluent feed | Guideword: Less | Parameter: Temperature | Deviation: Cold solvent supplied
Possible Causes: Malfunction of the pressure valve results in a fall in temperature due to the loss of pressure.
Consequences: Loss of control over the reaction and slower transfer of solvent to the column.
Safeguards in place: None
Actions Required: Mount/install a temperature sensor onto the feed line and verify the set point on the pressure valve.
Priority: High

Extra info:

Modern trends in downstream processing of biotherapeutics through continuous chromatography: The potential of Multicolumn Countercurrent Solvent Gradient
Purification

Chiara De Luca,a Simona Felletti,a Giulio Lievore,a Tatiana Chenet,a Massimo Morbidelli,b Mattia Sponchioni,b Alberto Cavazzini,a,∗ and Martina Catania,∗∗


Abstract

Single-column (batch) preparative chromatography is the preferred method for purifying biotherapeutics. However, it is typically limited in terms of achieving high
yields while maintaining purity, particularly when separating mixtures with numerous contaminants linked to the product. To mitigate this limitation, one can utilise
multicolumn continuous chromatography. This paper will especially examine Multicolumn Countercurrent Solvent Gradient Purification (MCSGP), a method
developed for effectively separating target biomolecules from their contaminants in demanding scenarios. MCSGP is one of several continuous mode approaches
being studied. The enhancements arise from the automated internal recycling of the impure fractions within the chromatographic system, leading to a higher yield
while maintaining the purity of the pool. This article will outline the manufacturing process of biopharmaceuticals, specifically highlighting the advantages of
continuous chromatography versus batch procedures, with a particular emphasis on MCSGP.



Keywords: Continuous chromatography, Preparative chromatography, Purification, Multicolumn platforms, Biopharmaceuticals, Biotherapeutics

1. Background

Biopharmaceuticals have been a pioneering category of therapies since the 1980s, thanks to their exceptionally targeted efficacy, a characteristic that cannot be
replicated by conventional medications. These compounds exhibit a strong affinity for the target receptors, allowing them to be very efficacious even at low doses
[1,2]. Furthermore, the majority of these substances are also found within the human body, resulting in diminished adverse effects when compared to other
chemical medications. Recently, their promise has been further ignited because several treatments now being tested for the treatment or prevention of COVID-19
rely on biopharmaceuticals, particularly monoclonal antibodies or oligonucleotides [3, 4, 5].

Over the past few years, there has been significant enhancement in the production of biopharmaceuticals. The selection of the method used to acquire the
biomolecule of interest is the initial stage of the manufacturing process [6,7]. Recombinant technology is the primary approach used to acquire monoclonal
antibodies, hormones, and blood factors. Continuous bioreactors, such as perfusion bioreactors, are increasingly gaining popularity in this context, to the extent
that they are now replacing conventional batch methods. Biopharmaceuticals can be obtained either by extraction from their natural source or through chemical
synthesis. The latter approach, however, can only be utilised for the synthesis of brief biopolymeric sequences, such as polypeptides. The advancements in the
upstream phase of biopharmaceuticals have not been matched by equal improvements in the downstream process, to the extent that the latter is currently a major
obstacle in the overall production of biotherapeutics [[8], [9], [10]]. The phrase "downstream" refers to the process of extracting and purifying a product from a
complicated mixture [11]. The preferred purification procedures must effectively differentiate between molecules that frequently exhibit only minor differences in
size, hydrophobicity, or charge. Liquid chromatography is the optimal method to fulfil this need because of its versatility, selectivity, and flexibility. Typically, meeting
the market requirements necessitates multiple chromatographic steps [12,13]. Typically, these purification methods involving chromatography are carried out in
batches, frequently employing a solitary chromatographic column [14].

Typically, a minimum of two distinct purification procedures are required to separate and obtain the desired product with the acceptable level of purity. The initial
step in the purification process involves eliminating process-related impurities, which are species that do not share chemical similarities with the target molecule [6].
Their composition typically comprises nucleic acids, host cell proteins, lipids, components of the cell culture media, salts, and other substances that originate from
the production process. Frequently, affinity chromatography is used in batch circumstances, employing a bind-and-elute mode [15]. The method is referred to as
the capture step, which involves loading a substantial quantity of feed into the column until it reaches its breakthrough point. The product selectively adheres to the
stationary phase, whereas all other distinct species pass through the column and can be discarded. Staphylococcus Protein A-based stationary phase is commonly
used to purify monoclonal antibodies (mAbs) due to its ability to bind mAbs selectively and reversibly [16]. In this step, it is crucial to maximise the retrieval of the
target substance, but precise purity standards are not essential.

Following the capture stage, one or more subsequent polishing stages are necessary to meet the stringent purity standards for pharmaceuticals. To accomplish
this, it is necessary to isolate the product from impurities that are frequently similar to the desired molecule, such as shortened or deamidated species [17].
Frequently, this task is really demanding. Due to the resemblance between the desired product and the contaminants, affinity chromatography is not feasible at this
point. Thus, reversed-phase, ion-exchange, and hydrophobic interaction chromatography are commonly favoured as the preferred procedures [6]. To enhance the
clarity of the peaks, it is recommended to operate under gradient settings. This is because the retention of biomolecules is greatly influenced by the mobile phase's
composition, such as the salt concentration or the amount of organic modifier [9, [18], [19], [20]].

In preparative chromatography, the peaks of the target compound and its impurities often overlap due to their similarity. This occurs when the target product
has adsorption properties that lie between those of the weakly and strongly adsorbing impurities [21]. Hence, it is exceedingly challenging to collect a
substantial quantity of pure material. Expanding the collection window leads to a higher yield but lower purity, while narrowing the window has the opposite
effect. This phenomenon is known as the trade-off between yield and purity, which is a specific limitation of batch chromatography [22].
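This yield/purity trade-off can be illustrated numerically by modelling the product and one strongly adsorbing impurity as overlapping Gaussian elution peaks and varying the collection window. A minimal Python sketch; the peak positions, widths, and relative amounts are invented for illustration, not taken from the article:

```python
import numpy as np

t = np.linspace(0.0, 20.0, 4001)                          # elution time grid
dt = t[1] - t[0]
product  = 1.0 * np.exp(-0.5 * ((t - 10.0) / 1.0) ** 2)   # target peak
impurity = 0.4 * np.exp(-0.5 * ((t - 12.0) / 1.0) ** 2)   # product-related impurity

def pool_metrics(t_start, t_end):
    """Yield and purity of the pool collected between t_start and t_end."""
    w = (t >= t_start) & (t <= t_end)
    prod_in = product[w].sum() * dt
    imp_in = impurity[w].sum() * dt
    yield_frac = prod_in / (product.sum() * dt)
    purity_frac = prod_in / (prod_in + imp_in)
    return yield_frac, purity_frac

narrow_yield, narrow_purity = pool_metrics(8.5, 10.5)  # narrow collection window
wide_yield, wide_purity = pool_metrics(7.0, 13.0)      # wide collection window
```

Widening the window raises the yield but drags in more of the overlapping impurity, lowering the purity, which is precisely the batch limitation that MCSGP's internal recycling of the impure side fractions is designed to relax.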

Within this context, the utilisation of multicolumn continuous chromatographic methods has gained significant interest in the realm of high value biological products
[15], as it offers the potential to partially overcome this constraint. Overall, multicolumn continuous chromatography offers various benefits, such as enhanced
recovery and improved resin utilisation. However, these gains are accompanied by the drawback of higher hardware complexity [23].

This research primarily examines Multicolumn Countercurrent Solvent Gradient Purification (MCSGP), a freshly developed countercurrent multicolumn approach
suited for complex separations with several product-related contaminants. The operational principles of the technology will be examined and its advantages over
traditional single-column methods will be outlined. The method of transferring from batch to continuous will also be demonstrated, along with a comprehensive
discussion of the most intriguing uses of MCSGP. This endeavour aims to elucidate the process primarily from the perspective of analytical chemists, rather than
chemical engineers, with the intention of fostering a greater sense of familiarity with the technology among this particular group.

2. Key parameters for purifying procedures

Prior to elucidating the core principles of batch and continuous processes, it is important to define certain pertinent parameters. Their evaluation typically involves
the analysis of the eluted fractions using an appropriate analytical high-performance liquid chromatography (HPLC) method.

Purity is the first parameter that is essential for pharmaceutical purposes. It is defined as the ratio between the area of the product peak and the total area of the
HPLC chromatogram; in MCSGP, purity is calculated as the mean of the purities of the pools at steady state:

Purity % = (A_product / A_total) × 100    (1)

Workshop-5 Module-1 Page 173 of 333


Also, recovery (or yield) of the target at the end of the process needs to be carefully evaluated. This is particularly important when very expensive Active
Pharmaceutical Ingredients (APIs) are purified. It is defined as the mass fraction of the product recovered in the eluted stream with respect to the mass of the
product dissolved in the feed injected into the column.

Recovery % = (m_prod,collected / m_prod,injected) × 100    (2)

Moreover, productivity can also be defined: it is expressed as the mass of target product collected in the eluent stream per total volume of stationary phase and per
unit time. Thus, this parameter indicates how much product is produced per hour and per column volume (Vcol):

Productivity (mg/mL/h) = m_prod,collected / (Vcol × time)    (3)

where Vcol is calculated as the geometrical volume of the column (in case of multicolumn processes the geometrical volume of all the columns must be considered),
whereas the time considered is the duration of a run in batch conditions or a cycle in MCSGP (see later on).
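As a quick sanity check, Equations (1)–(3) can be computed directly from HPLC fraction data. The sketch below is illustrative only: the function names and all numerical values are invented, not taken from the text.

```python
# Illustrative calculation of the three performance metrics defined in
# Equations (1)-(3); all numbers below are invented example values.

def purity_pct(a_product, a_total):
    """Eq. (1): product peak area over total chromatogram area."""
    return a_product / a_total * 100

def recovery_pct(m_collected, m_injected):
    """Eq. (2): mass of product collected over mass injected."""
    return m_collected / m_injected * 100

def productivity(m_collected_mg, v_col_ml, run_time_h):
    """Eq. (3): mg of product per mL of stationary phase per hour.
    For multicolumn processes v_col_ml is the total volume of all columns."""
    return m_collected_mg / (v_col_ml * run_time_h)

# Hypothetical single-column batch run:
print(purity_pct(820.0, 1000.0))      # 82.0 %
print(recovery_pct(45.0, 60.0))       # 75.0 %
print(productivity(45.0, 10.0, 1.5))  # 3.0 mg/mL/h
```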


3. Limits of batch chromatography

The outcome of the separation (i.e. resolution of the main peak from the impurities) has a high impact on the performance of the whole process.

As previously stated, batch purifications often experience a trade-off between yield and purity, particularly when several product-related impurities are present. The
scenario is visually illustrated in Figure 1. By totally excluding the overlapping zones, the purity of the pool will increase; nevertheless, a significant quantity of
product remains beneath the intersecting sections of the peak. Expanding the collection window will increase the yield, but it will also reduce the purity due to the
inclusion of contaminants in the collected area of the peak. This trade-off is an inherent constraint of batch chromatography: the simultaneous achievement of high
purity and high yield is generally unfeasible in classical batch chromatography [24].

Fig. 1

Schematic representation of a batch chromatogram.

One can consider reducing the feed loading or the gradient slope, but this would result in longer processing times, increased solvent usage, and decreased
productivity. Alternatively, the usage of more efficient columns is possible; however, the utilisation of smaller particles would result in increased backpressures.
Consequently, none of these alternatives can adequately serve as a resolution to the issue [25,26].

4. The Multicolumn Countercurrent Solvent Gradient Purification (MCSGP) method

A potential solution to address the aforementioned limitation of batch chromatography is to substitute the single-column procedure with a continuous (or semi-
continuous) countercurrent chromatographic process, in which the chromatographic system is continuously supplied with the raw mixture. In order to achieve
continuous or semi-continuous operation, the instrument must have multiple identical columns interconnected through a series of valves. Countercurrent is a
term used to describe a category of chromatographic processes in which the stationary and mobile phases move in opposite directions. The movement of the
stationary phase is not genuine, but rather simulated by switching the inlet and outlet valves of the columns [6,16].



The utilisation of continuous chromatographic operations offers significant benefits, both in terms of product recovery (as demonstrated subsequently) and in terms
of automation of the purification process.

The Simulated Moving Bed (SMB) was the first chromatographic arrangement that allowed for continuous countercurrent separation of two distinct components
under isocratic conditions. It was introduced in 1950 [27,28]. Subsequently, other enhanced iterations of the method have been suggested; nonetheless, the scope
of SMB has primarily been confined to the segregation of binary mixtures. Twelve years ago, researchers linked two SMB units together in a series to cleanse
ternary mixtures [6,29]. During the initial SMB process, a single compound can be isolated from the two remaining species, which then go to the second unit for
additional separation. A benefit of this configuration, in contrast to MCSGP, is the ability to select chromatographic settings (such as column and mobile phase)
independently for each unit. For example, this can enhance the level of detail. In contrast, the experimental setup in SMB is not only more intricate, involving the
connection of tubings, valves, and other components, but also SMB separations are restricted solely to isocratic procedures.

Two attractive alternatives to SMB have recently emerged, which can be utilised for the capture and polishing steps, respectively. Indeed, in the first scenario, the
captureSMB method can be effectively employed to separate the target product from contaminants by taking advantage of affinity chromatography interactions. In
order to conserve space, this work will not describe this technique; interested readers are referred to other recent papers on the subject
[6,23,[30],[31],[32],[33],[34],[35]].

Our focus in this study will instead be specifically on describing MCSGP, a countercurrent approach that is suitable for the polishing step. It is essentially derived
from the same concepts as SMB, but it enables the management of ternary separations, specifically the separation of the target product from co-eluting
contaminants in both the front and rear sections of the target peak. Furthermore, it enables working under linear gradient conditions, which is highly beneficial when
handling biomolecules [36,37]. The initial configuration of MCSGP utilised six identical columns [19,38]. Subsequently, the equipment underwent a series of
simplifications, culminating in the final version consisting of only two columns [22,25]. This final version is distinguished by its streamlined design, with reduced
complexity in tubing, valves, and connections.

4.1. Initial reference: the chromatogram of the design batch

To comprehend the concepts and possibilities of MCSGP, let us revisit the batch chromatogram depicted schematically in Figure 1. This refers to a scenario in
which a center-cut (ternary) separation is performed under gradient elution conditions: the primary molecule is separated from weak and strong impurities, and their
peaks partially overlap [6,39,40]. As shown, the chromatogram is partitioned into distinct zones.

Zone 1: The column, previously equilibrated with the eluent, is loaded with fresh feed. Once the analyte has been adsorbed onto the stationary phase, the gradient
of the modifier can begin at time tA.

Zone 2: Impurities with poor adsorption properties, referred to as W, elute from the column before the target product because of their lower retention.

Zone 3: The product (P) begins to elute from the column while the weakly adsorbing impurities are still eluting. Due to inadequate resolution, the peaks of W and P
overlap. The product in this zone clearly fails to meet the purity standard due to contamination by species W; however, it must be recovered in order to achieve a
high process yield.

Zone 4: The target compound exhibits no coelution with any other species, hence meeting the purity requirements for pharmaceutical use.

Zone 5: An overlapping zone where the target compound coelutes with the strongly adsorbing contaminants, referred to as S.

Zone 6: The column is flushed with a large amount of organic modifier to eliminate the S contaminants, and is then returned to the initial eluent composition at the
start of the gradient.

Intermittently, fractions of the eluate are collected during the gradient and subsequently analysed using HPLC to determine the purity profile (zones 2–5).

According to the information presented in Figure 1, the recycling and collection windows in the batch process are determined by specific time intervals. These
intervals are crucial for transitioning a chromatographic method from batch to the MCSGP process, which will be further elaborated on.

Furthermore, it is important to emphasise that the letter W (or S) does not pertain to a solitary species with weak (or strong) adsorption properties, but rather
denotes a cluster of impurities that exhibit comparable chromatographic characteristics.

The chromatogram collected during the batch run is then used to design the MCSGP process, and is therefore referred to as the design batch chromatogram. It
should be acquired on one of the two columns that will be used for MCSGP.

4.2. The operational principles of MCSGP

In contrast to the preparative batch chromatography procedure, the MCSGP approach, due to its inherent characteristics, enables the simultaneous achievement of
high purity and high yield of the desired product. The primary feature that enhances the performance of MCSGP in comparison to the batch process is the
automated internal recycling of the partially purified side fractions. In batch chromatography, the peripheral sections of the primary peak, which contain both W (or
S) and a significant quantity of P, are excluded from the primary collection window. However, it is common for these sections to be manually reintroduced into the
system by the operator, which carries the potential for errors and time wastage [41]. In twin-column MCSGP, the recycling process occurs automatically between
the two columns, without requiring any interaction from the operator. The two identical columns can operate either in series (interconnected mode) or in parallel
(batch mode), depending on the configuration of the input and exit column valves [42,43]. When transitioning a procedure from batch to continuous
chromatography, the method is applied to both columns, but shifted by half a cycle [22].

Figures 2 and 3 depict a scenario in which column-1 is located upstream and column-2 is located downstream. This arrangement results in the recycling areas from
column-1 being introduced into column-2.



• Column-1 is initially loaded with fresh feed, as in the batch process. At the beginning of the gradient, the first analytes to elute are the W impurities. This portion
contains no P (zone 4) and is therefore discarded. At this stage, the columns are not connected.

• Subsequently, the valves change position, interconnecting the columns. Consequently, the overlapping region of W and P, denoted W/P, is transferred directly
from column-1 (zone 5) to column-2 (zone 1). Inline dilution is used to ensure the re-adsorption of W/P on column-2.

• The columns are returned to batch mode, and a window in which the product purity meets the required specification is collected from column-1 (zone 6).
Simultaneously, column-2 is loaded with fresh feed (zone 2).

• The columns are then reconnected to recycle the P/S region from column-1 (zone 7) into column-2 (zone 3). Inline dilution is used to ensure the re-adsorption of
P/S on column-2.

• Now that column-2 has been completely loaded, it is ready to undergo the solvent gradient: impurities begin to elute (zone 4) while column-1 is simultaneously
stripped of S and re-equilibrated for further use (zone 8).
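The alternating disconnected (batch) and interconnected (recycling) phases described above can be sketched as a simple switching schedule; the phase names follow the walk-through, while the data structures and the role-swap bookkeeping are illustrative assumptions.

```python
# Sketch of one MCSGP cycle for a twin-column unit. Phase names follow the
# walk-through above; the schedule representation itself is illustrative.
from dataclasses import dataclass

@dataclass
class Phase:
    name: str
    interconnected: bool  # columns in series (recycling) or in parallel (batch)

HALF_CYCLE = [
    Phase("elute W to waste / strip other column", interconnected=False),
    Phase("recycle W/P overlap downstream (inline dilution)", interconnected=True),
    Phase("collect pure P / load fresh feed downstream", interconnected=False),
    Phase("recycle P/S overlap downstream (inline dilution)", interconnected=True),
]

def cycle(upstream="column-1", downstream="column-2"):
    """One full cycle = two switches: the columns swap roles halfway through."""
    schedule = []
    for switch in range(2):
        for phase in HALF_CYCLE:
            schedule.append((switch + 1, upstream, downstream, phase.name))
        upstream, downstream = downstream, upstream  # the switch
    return schedule

for step in cycle():
    print(step)
```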

Fig. 2

Schematic representation of a single switch chromatogram in MCSGP. Reproduced with permission from Ref. [44].

Fig. 3

Schematic representation of the path of the eluent stream during the first switch of an MCSGP cycle. The flow direction depends on the position of the inlet and
outlet column valves.
At this point, the positions of the columns have been reversed. A cycle is completed when they switch positions again and return to their initial configuration;
therefore, a single cycle consists of two switches. Typically, after a few switches, the chromatographic system approaches a steady state, indicated by the
complete overlap of the UV profiles of successive cycles. It is important to note that the UV profiles are recorded at the column outlet, before the eluent stream is
directed to waste, to the fractionator, or to the other column. Under steady-state conditions, nearly identical values of purity and recovery are achieved for each
collected pool. Hence, once the system has reached a stable condition, the total number of purification cycles required for the complete process depends mostly
on the quantity of fresh feed to be purified. To illustrate the concept of steady state, an example is provided in Figure 4, which displays the elution profiles of the
first switch in five MCSGP cycles. These experiments were conducted by some of the authors of this review in a previous publication; the biopharmaceutical in that
instance was a crude synthetic mixture of a therapeutic peptide (glucagon) [44]. It is evident that only the first cycle has a distinct UV profile compared to the
subsequent cycles, indicating that cycles 2 to 5 have reached steady state.
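One possible numerical criterion for the "complete overlap of UV profiles" mentioned above is to declare steady state once consecutive cycle traces agree within a tolerance; the root-mean-square test, traces, and tolerance below are all assumptions for illustration.

```python
# Toy steady-state detector: compare UV traces of consecutive cycles and
# report the first cycle that overlaps the previous one. All data invented.
import math

def rms_diff(trace_a, trace_b):
    """Root-mean-square difference between two equally sampled traces."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(trace_a, trace_b)) / len(trace_a))

def first_steady_cycle(traces, tol=1.0):
    """Return the 1-based index of the first cycle whose UV profile matches
    the previous one within `tol`, or None if steady state is never reached."""
    for i in range(1, len(traces)):
        if rms_diff(traces[i - 1], traces[i]) < tol:
            return i + 1
    return None

cycles = [
    [0, 10, 40, 15, 5],  # cycle 1: start-up, distinct profile
    [0, 20, 60, 25, 5],  # cycles 2 onwards: essentially overlapping
    [0, 20, 61, 25, 5],
    [0, 20, 60, 24, 5],
]
print(first_steady_cycle(cycles))  # 3
```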



Fig. 4

Chromatograms (overlapped) of the first switch of five cycles of a MCSGP run for the purification of crude mixture of glucagon. Sharp peaks on the right correspond
to the stripping and equilibration of the column. Reproduced with permission from Ref. [41]. CV: column volume (mL).

The time intervals depicted in the design batch chromatogram of Figure 1 represent the moments when the inlet and outlet valves of the columns in MCSGP (as
shown in Figure 2) switch position, thereby controlling the path available for the eluent stream. Figure 3 provides a comprehensive depiction of the trajectory taken
by the mobile phase across both the disconnected and interconnected phases. tA denotes the start of the solvent gradient; tB marks the start of the elution of the
overlapping region between W and P; the product then elutes from tC to tD; and finally the overlapping region between P and S elutes until time tE.

It is crucial to note that the overlapping regions contain a greater proportion of modifier than the initial part of the gradient. Hence, during recycling, they must be
mixed with a dilution stream to reduce the modifier concentration and allow the product to re-adsorb onto the stationary phase. The fraction containing W/P is
diluted to reach the modifier concentration at tB: this allows the product to adsorb onto the stationary phase while the weak impurities begin to move through the
column. The modifier concentration in the window containing P/S is instead adjusted to match the initial gradient composition (tA), as both the desired product and
the strong impurities need to be retained.

The quantity of fresh feed injected sequentially (in zone 2 of Fig. 2) is determined to ensure the constant mass of the desired component within the system. Hence,
the mass of P to be loaded at each switch is calculated by subtracting the amount of target product recycled inside the overlapping zones (zone 1 and 3) from the
quantity of target product supplied in the batch run.
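The two design rules above (inline dilution of the recycled windows and the fresh-feed mass balance) can be sketched as simple mass balances. The linear blending formula and every number below are assumptions for illustration, not values from the text.

```python
# Illustrative mass balances for the two MCSGP design rules above.

def dilution_flow_ratio(c_recycle, c_target, c_diluent=0.0):
    """Q_dilution / Q_recycle needed so the blended stream has modifier
    concentration c_target (simple linear blending of two streams)."""
    if not c_diluent < c_target < c_recycle:
        raise ValueError("target must lie between diluent and recycle levels")
    return (c_recycle - c_target) / (c_target - c_diluent)

def fresh_feed_mass(m_batch_load, m_recycled_wp, m_recycled_ps):
    """Mass of P to inject at each switch = batch load minus the product
    already recycled inside the two overlapping windows."""
    return m_batch_load - (m_recycled_wp + m_recycled_ps)

# Hypothetical values: W/P elutes at 30 % modifier, must be brought to 20 %:
print(dilution_flow_ratio(30.0, 20.0))   # 0.5 -> dilute with half the flow
print(fresh_feed_mass(60.0, 8.0, 12.0))  # 40.0 mg of fresh P per switch
```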

4.3. Transferring a batch process to MCSGP

To transfer a batch method to MCSGP, the initial step is to compute a Pareto curve that presents the relationship between purity and yield for the batch method
(refer to Figure 5). This is achieved by analysing the fractions of eluate stream collected from the batch column using High Performance Liquid Chromatography
(HPLC). The purity and yield of the target in each fraction are then determined. The outcome is a purity profile over the gradient that helps determine which part of
the peak meets the purity criteria. This section of the chromatogram corresponds to the elution window of the product. To begin, it is necessary to envision
combining the most uncontaminated fraction with the adjacent fractions, gradually incorporating one fraction at a time in descending order of purity. Increasing the
size of the pooling window leads to a drop in purity but an improvement in recovery. The purity and recovery numbers obtained for each hypothetical pool are
subsequently graphed to establish the Pareto curve.
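The pooling procedure just described can be sketched as a short routine: starting from the purest fraction, neighbouring fractions are added one at a time (always taking the purer neighbour), recording cumulative purity and yield at each step. The fraction data and function name below are hypothetical.

```python
# Sketch of the Pareto-curve construction: pool fractions outward from the
# purest one and record cumulative purity / yield. Fraction data invented.

def pareto_curve(fractions):
    """fractions: list of (m_product, m_total) per collected fraction, in
    elution order. Returns one (purity %, yield %) pair per pooling window."""
    m_feed = sum(m for m, _ in fractions)
    # index of the purest fraction:
    best = max(range(len(fractions)), key=lambda i: fractions[i][0] / fractions[i][1])
    lo = hi = best
    curve = []
    while True:
        m_prod = sum(m for m, _ in fractions[lo:hi + 1])
        m_tot = sum(t for _, t in fractions[lo:hi + 1])
        curve.append((100 * m_prod / m_tot, 100 * m_prod / m_feed))
        if lo == 0 and hi == len(fractions) - 1:
            return curve
        # extend the window towards the purer neighbouring fraction:
        left = fractions[lo - 1][0] / fractions[lo - 1][1] if lo > 0 else -1.0
        right = fractions[hi + 1][0] / fractions[hi + 1][1] if hi < len(fractions) - 1 else -1.0
        if left >= right:
            lo -= 1
        else:
            hi += 1

# (m_product, m_total) per fraction across a hypothetical peak:
fracs = [(1, 5), (6, 8), (10, 10.5), (7, 9), (2, 6)]
for purity, yield_ in pareto_curve(fracs):
    print(f"purity {purity:.1f} %  yield {yield_:.1f} %")
```

As expected, widening the pool increases yield while purity falls, tracing out the batch Pareto front.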



Fig. 5

Blue triangles: Pareto curve of a hypothetical design batch chromatogram. Red and green points: performance of two hypothetical MCSGP processes (red:
successful; green: unsuccessful).

To ensure a fair comparison between batch and MCSGP, it is necessary to construct the Pareto curve both for the column used in MCSGP and for a longer column
whose Vcol is similar to the total Vcol of the MCSGP setup. This longer column serves as the reference batch. The reference batch is used to evaluate the
performance of the processes at comparable Vcol, whereas the design batch is required to determine the switching times for MCSGP.

When transferring a batch method to an MCSGP process, all operating parameters remain unchanged, including feed loading per column, gradient slope, and
duration of each method step. Therefore, the only factors that may be altered to tune the performance of the MCSGP process are the switching times.

In the initial trial, tC and tD values are selected based on a hypothetical pool that meets the required purity and sufficiently high recovery criteria. In order to
minimise the amount of product eluting in the waste windows, it is necessary to set tB and tE. Figure 2 illustrates an optimal scenario where there is no wastage of
products in zones 4 and 8. However, in other situations, it is more favourable to discard a small quantity of very impure product in these zones rather than risking
the build-up of impurities in the recycling system. Once the system reaches a steady state, the purity and recovery values remain constant for each cycle in
MCSGP. As a result, instead of obtaining a Pareto curve, a single point is obtained. If the MCSGP point is situated below the Pareto curve, it indicates that the
MCSGP process is unsatisfactory as it achieves a poorer recovery compared to the batch process at the same level of purity. Conversely, if the point is located
above the Pareto curve, it indicates that the recovery of MCSGP has surpassed that of the batch. From a pragmatic perspective, it must be emphasised that data
points located in the upper right corner of the graph serve as a clear indication of a successful MCSGP. A clearer understanding of this topic can be achieved by
referring to Figure 5. This picture illustrates a Pareto curve in a batch chromatogram, highlighting the relationship between purity and recovery. The highest level of
purity, at 99%, is achieved with a recovery of only 15%; conversely, if the yield were 100%, the purity would decline to 55%. If the MCSGP run fails, it is necessary
to adjust the switching times. Specifically, it has been demonstrated that the times tB and tE have a significant impact on recovery, whereas the times tC and tD
mainly affect purity, since they delimit the product elution window [26].

The final factor to consider when comparing purification procedures is productivity. In certain instances, MCSGP yields outcomes comparable to those of the batch
[25] or marginally lower [44]; however, this is only partially a matter of concern. When working with expensive biotherapeutics, it is more economically
advantageous to maximise the recovery of the product rather than the productivity of the process. As an illustration, the price of crude glucagon is reported to be
several thousand dollars per gram [45]; the economic benefit of an increase in recovery is therefore clearly apparent. In addition, the conventional definition of
productivity in Equation (3), commonly used for process comparisons, fails to account for the economic benefits of process automation, which is, in fact, a crucial
aspect.

4.4. Applications of MCSGP

MCSGP has mostly been utilised in the purification of biomolecules, including proteins, antibodies, and peptides. Extensive experimentation has been conducted
using several mobile and stationary phases.

The demand for antibodies, particularly monoclonal antibodies (mAbs), as medicines is growing. At the same time, mAbs are produced as a mixture of many
isomers, which must be separated to guarantee product quality and to comply with market requirements. The MCSGP procedure has been demonstrated to be a
successful approach for this specific purpose and for this category of biomolecules [16,[46],[47],[48]].

The MCSGP approach also enables a higher yield and improved productivity compared to the batch process for a mono-PEGylated protein such as α-Lactalbumin;
anion exchange chromatography was employed to separate the mixture of proteins with varying degrees of PEGylation [17].

In addition to proteins, mixtures of peptides have also been purified using the MCSGP technique. The first implementations of MCSGP on an industrial sample,
using 6-column or 3-column configurations, focused specifically on the isolation of calcitonin, a peptide hormone composed of 32 amino acids, from its
contaminants [19,21,49]. Recently, some of the authors of this review have successfully purified an industrial mixture of glucagon (a peptide consisting of 29 amino
acids) using a 2-column MCSGP apparatus. Under those conditions, the yield exhibited a 23% increase compared to the batch, with an approximate purity of 90%
[44].

MCSGP has also been demonstrated to be an effective purification technique for oligonucleotides, another category of biotherapeutics. By employing this method
on a mixture of oligonucleotides, it was possible to increase the recovered mass by 50% while maintaining a target purity of 92% [50].



The application of MCSGP can also be extended to the purification of cannabinoids. Cannabidiol (CBD) belongs to the cannabinoids, naturally occurring
substances derived from Cannabis Sativa L., and its medicinal effects are currently being investigated. However, regulations put stringent restrictions on the
concentration of tetrahydrocannabinol (THC) in CBD mixtures due to its psychoactive nature. The application of MCSGP has resulted in a product that is free of
THC, as documented in Ref. [51].

Table 1 presents a comparison of the performance achieved in batch and in MCSGP for the purification of several target compounds. The results demonstrate the
effectiveness of MCSGP in achieving challenging ternary separations of valuable biomolecules and biopharmaceuticals, particularly when there is a significant
trade-off between yield and purity in the batch purification. In such instances, MCSGP can deliver an increase in yield, leading to economic advantages in
production as well [41].
Table 1

Comparison between the performance of batch and MCSGP processes for different purification cases found in literature.
Compound | Batch Purity % | Batch Recovery % | Batch Productivity (g/L/h) | MCSGP Purity % | MCSGP Recovery % | MCSGP Productivity (g/L/h) | Ref.
Oligonucleotide | 91.6 | 55.7 | 11.9 | 91.9 | 91.2 | 5.89 | [50]
Cannabidiol | >99.5 (THC < 100 ppm) | 52 | 8 | THC < 100 ppm | 94 | 60 | [51]
Peptide (glucagon) | 89.3 | 71.2 | 9.9 | 89.2 | 88 | 6.1 | [44]
Peptide | 98.7 | 19.3 | 3 | 98.7 | 94.3 | 28 | [22]
Monoclonal antibody | 92 | 85 | 1.8 | 92 | 94 | 2.6 | [25]




5. Final Remarks and Future Perspectives

Due to ongoing technological advancements, preparative liquid chromatography using continuous or semi-continuous countercurrent methods has now become a well-
established and mature technology. These approaches are becoming increasingly important from an industrial perspective and are regarded as a prospective candidate
for revolutionising the purification of biomolecules at the production level. The growing attention towards continuous purification processes is mainly motivated by the
enhanced quality of the end products, resulting in improved medicine safety and efficacy. Additionally, there are economic benefits associated with the high level of
automation and increased yields. This is especially true when the objective is to maximise the amount of product obtained rather than the productivity of the process, as
is the case with high-value compounds; a significant portion of the therapeutics being developed today and for the future fall into this category. From a broader
standpoint, the technology has the capacity to serve as a catalyst for the transition to precision medicine [52].

Nevertheless, numerous obstacles remain to be surmounted. From a theoretical perspective, there is room for research that specifically examines the modelling of the
process [9,[53],[54],[55],[56],[57],[58],[59],[60],[61]]. Despite its reliance on the established theory of nonlinear chromatography, there is currently a lack of robust and
validated models capable of accurately simulating the entire process. Such models would enhance the optimisation of purification conditions and, at the same time,
bolster trust in the technology. Robust and dependable models will also promote the use of automation and digitalisation. By employing model-based algorithms,
possibly derived from machine learning techniques, it becomes feasible to regulate the operation of these units. This control involves two aspects: rejecting disturbances
to ensure that the product remains within specification, and maintaining optimal operating conditions with respect to metrics that affect production costs, such as
productivity and buffer consumption [62]. Model predictive control methods seem very appropriate for this objective, as demonstrated in the context of the chiral SMB
continuous process [63].

While MCSGP is currently used effectively downstream of batch or fed-batch bioreactors, we anticipate that it will also have a significant impact on the development of
continuous and integrated processes for producing therapeutic proteins [64]. The major pharmaceutical regulatory agencies view these advancements favourably and
are actively involved in establishing quality attributes (QA) and implementing regulatory measures for continuous manufacturing [65,66]. Thus, the present moment is
opportune for transformation.

Declaration of competing interest

The authors assert that they do not possess any identifiable conflicting financial interests or personal ties that could have potentially influenced the findings presented in
this paper.

Acknowledgements

The authors express their gratitude to ChromaCon AG- A YMC Company (Zurich, Switzerland), specifically Dr. Thomas Müller-Späth, for providing technical assistance.
Prof. Walter Cabri from the University of Bologna, Bologna, Italy, and Fresenius Kabi iPSUM, Villadose, Rovigo, Italy, as well as Dr. Antonio Ricci and Dr. Marco Macis
from Fresenius Kabi iPSUM, Villadose, Rovigo, Italy, are also recognised. The authors express their gratitude to the Italian University and Scientific Research Ministry for
providing financial assistance under grant PRIN2017Y2PAB8_003, titled "Cutting edge analytical chemistry methodologies and bio-tools to enhance precision medicine in
hormone-related diseases."

References

1. de Castro R.J.S., Sato H.H. Biologically active peptides: processes for their generation, purification and identification and applications as natural additives in the
food and pharmaceutical industries. Food Res. Int. 2015;74:185–198. [PubMed] [Google Scholar]

2. Uhlig T., Kyprianou T., Martinelli F.G., Oppici C.A., Heiligers D., Hills D. The emergence of peptides in the pharmaceutical business: from exploration to
exploitation. EuPA Open Proteomics. 2014;4:58–69. [Google Scholar]

3. Nicastri E., Petrosillo N., Bartoli T.A., Lepore L., Mondi A., Palmieri F., D'Offizi G., Marchioni L., Muratelli S., Ippolito G., Antinori A. National institute for the
infectious diseases “L. Spallanzani”, IRCCS. Recommendations for COVID-19 clinical management. Infect. Dis. Rep. 2020;12 [PMC free article] [PubMed] [Google
Scholar]

4. Morse J.S., Lalonde T., Xu S., Liu W.R. Learning from the past: possible urgent prevention and treatment options for severe acute respiratory infections caused
by 2019-ncov. Chembiochem. 2020;21:730–738. [PMC free article] [PubMed] [Google Scholar]

5. Li G., Clercq E.D. Therapeutic options for the 2019 novel coronavirus (2019-ncov) Nat. Rev. 2020;150:149–150. [PubMed] [Google Scholar]

Save your work as follows:

Your name followed by HAZOP workshop or Workshop 5 e.g. Joe Bloggs workshop 5

This forms part of your end of module assignment.



5-4: Downstream Processing – Column Chromatography

Step 1

Warm up - Before watching the video, answer the question to 'unlock' your prior knowledge

Q: What is your understanding of chromatography and why can it be considered a ‘separation’ technique?

Chromatography separates the molecules in a mixture by applying them onto a surface or into a solid and then moving them apart from one another using a mobile
phase, while a stationary phase retards them to different extents.

Chromatography is effective as a method of separation because the various constituents in a mixture exhibit distinct affinities for the stationary and mobile phases.
The various attractions arise from the distinct characteristics of the components present in the mixture.

What is chromatography and how does it work?


https://www.bioanalysis-zone.com/how-does-chromatography-work/

5 FEB 2020

CHROMATOGRAPHY EDUCATION ZONE

Chromatography is a scientific method employed in laboratories to effectively isolate and distinguish the many constituents present in mixtures, regardless of their
complexity. Chromatography comprises several techniques, such as paper chromatography, thin layer chromatography, and gas chromatography.

Irrespective of the presence of different chromatography techniques, they all operate on the same underlying principle. Every iteration of the technique comprises a
stationary phase, typically a solid substance, and a mobile phase that carries the intricate mixtures through the stationary phase. The mobile phase is commonly
composed of either a gaseous or liquid substance.

Chromatography can be categorised according to the characteristics of the mobile phase used. Liquid chromatography is the classification given to the process
when the mobile phase is in a liquid state. Similarly, if the mobile phase comprises gas, the procedure is categorised as gas chromatography.


The stationary phase usually comprises a porous solid substance, such as silica or alumina. The configuration of the stationary phase varies depending on the particular chromatographic technique used. For example, in thin layer chromatography, it is common to use aluminium sheets coated with silica gel as the stationary phase. In column systems, the stationary phase is typically packed into a glass tube.

The mobile phase functions as a conduit for conveying the mixture being separated over the stationary phase. Liquid chromatography employs a mobile phase
including a solvent or a blend of solvents that may dissolve the mixture under analysis. Two examples of such solvents are dichloromethane and ethyl acetate. Gas
chromatography employs an inert gas, such as helium or nitrogen, as the mobile phase.

What is the underlying mechanism of chromatography?

Every form of chromatography operates on the same core principle. The mobile phase, as its name suggests, is a fluid that flows through a stationary phase that
remains immobile. As the mobile phase carries the mixture through the stationary phase, the various components in the mixture are dispersed between the
stationary and mobile phases. In this scenario, separation is made easier by the unique interactions that various components in the mixture have with both the
stationary and mobile phases.

Essentially, this means that the components in the mixture that have a greater attraction to the stationary phase will stay on the stationary phase for a longer period
of time. This suggests that they are separated from other components in the mixture that have a lower attraction to the stationary phase. Chromatography is the
underlying basis behind separation procedures.
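The retention logic described above can be illustrated with a purely hypothetical numerical sketch in Python: each component is assigned a made-up "affinity" (the fraction of time it spends bound to the stationary phase), which slows its apparent travel and so sets its elution time. This is an illustration of the principle only, not a real chromatography model.

```python
# Toy model: a component's apparent velocity through the column is the
# mobile-phase velocity scaled by the fraction of time it spends moving
# (i.e. not bound to the stationary phase). Higher affinity -> slower travel.

def apparent_velocity(mobile_phase_velocity, affinity):
    """affinity = fraction of time spent bound to the stationary phase (0..1)."""
    return mobile_phase_velocity * (1.0 - affinity)

def elution_time(column_length, mobile_phase_velocity, affinity):
    """Time for a component to traverse the column at its apparent velocity."""
    return column_length / apparent_velocity(mobile_phase_velocity, affinity)

# Hypothetical three-component mixture; the affinity values are invented.
mixture = {"component A": 0.2, "component B": 0.5, "component C": 0.8}

for name, affinity in sorted(mixture.items(), key=lambda kv: kv[1]):
    t = elution_time(column_length=10.0, mobile_phase_velocity=2.0, affinity=affinity)
    print(f"{name} (affinity {affinity}) elutes at t = {t:.1f}")
```

Running the sketch lists component A (lowest affinity) first and component C (highest affinity) last, mirroring the elution order argued above.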




Chromatography: step by step



Figure 1. Simple liquid column chromatography shown in three stages; 1) assembly of the column, 2) continual addition of the mobile phase to begin separation
and 3) separation of the components with high vs low affinity.
Figure 1 illustrates a straightforward liquid column chromatography apparatus.

1. A glass sintered frit or cotton wool is positioned at the base of a glass column, which is then filled with silica. Various techniques can be employed to fill the column, such as dry packing or using a silica slurry. It is crucial to ensure that the silica is tightly packed, without any air bubbles or fissures, as these will hinder the flow of the mobile phase. After the silica is densely packed, the mobile phase, also known as the eluent, is introduced until it reaches the top of the silica. The mixture intended for separation is then cautiously loaded onto the top of the saturated silica, and a layer of sand is added on top of the mixture to prevent any disruption caused by the subsequent addition of more eluent.

2. The eluent, which may consist of a single solvent or a combination of solvents, is continuously introduced into the system until the mixture has been successfully
separated. Ensuring that the eluent level remains above the silica's upper surface is crucial to prevent the introduction of air bubbles into the stationary phase,
which can hinder the flow of the mobile phase.

3. As the eluent passes through the column, the components of the mixture that strongly adhere to the stationary phase (in this case, silica) have a longer retention time in the column. In contrast, constituents that do not strongly adhere to the stationary phase exit the column at a faster rate and can be gathered as separate fractions. This process continues until each individual component has been separated and released from the column independently, resolving the mixture into its individual components.

Chromatography operates as a separation technique due to the varying affinities of the different components in a mixture towards the stationary and mobile
phases. The various attractions arise from the distinct characteristics of the components inside the mixture. The surface of silica, which is a commonly used
stationary phase, is characterised by Si-O-H bonding. The presence of Si-O bonds and hydroxyl groups on the surface of the silica gel (the stationary phase)
makes it highly polar. As a result, it can form hydrogen bonds and participate in several types of interactions, such as van der Waals, dipole-dipole, and dipole-
induced dipole interactions.

The elution rate of the different components of the mixture from the column is determined by two characteristics as the complex mixture flows through the column.

1) The degree to which each compound is retained by the stationary phase. This depends on the interactions between the different components and the silica gel.

2) The solubility of the constituent components in the mobile phase (eluent). This depends on the interactions between the components and the solvent system.
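The net effect of these two factors is conventionally summarised by the retention (capacity) factor, k = (tR − t0) / t0, where tR is the analyte's retention time and t0 is the dead time of an unretained compound. A small worked example in Python, with invented times:

```python
# Retention factor k = (t_R - t_0) / t_0: how many column volumes longer
# an analyte is held up compared with an unretained compound (dead time t_0).

def retention_factor(t_r, t_0):
    return (t_r - t_0) / t_0

t_0 = 1.5  # dead time in minutes (hypothetical)

for name, t_r in [("weakly retained compound", 2.4),
                  ("strongly retained compound", 9.0)]:
    print(f"{name}: t_R = {t_r} min, k = {retention_factor(t_r, t_0):.1f}")
```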

While some components in the mixture may be similar and able to form hydrogen bonds with the silica of the stationary phase, it is exceedingly improbable that they will form hydrogen bonds with the silica and interact with the mobile phase in exactly the same manner. Co-elution, the simultaneous elution of several components, can occur in intricate chromatographic separations. By systematically manipulating the mobile phase, experimenting with various solvent systems, and adjusting the pH, even highly intricate mixtures can be effectively separated into their constituent parts using basic liquid column chromatography.

What is chromatography used for?

As noted above, chromatography is a method for separating substances, which gives it a wide range of applications. Scientists frequently use chromatography at various stages of their research. It can serve as a purification method, isolating the desired reaction product from a mixture of contaminants.

Chromatography is also employed as an analytical method. By separating a mixture, the many components in a sample can be extracted and isolated, allowing each individual substance to be analysed. Certain types of chromatography can detect chemicals at the attogram level (10⁻¹⁸ g), making chromatography a highly effective approach for analysing trace amounts of substances.

Chromatography is an unrivalled separation technology that is also used in the petroleum industry, where it is employed to scrutinise the intricate mixtures of hydrocarbons in petroleum. It is also used extensively in the bioanalytical domain to separate and identify chemical substances and pharmaceutical drugs.



Gas chromatography

Gas chromatography is a technique used to separate and analyse the components of a mixture based on their interactions with a stationary phase and a mobile phase, where the mobile phase is in the gaseous state.

Gas chromatography, like other chromatography methods, necessitates the use of a stationary phase and a mobile phase. In gas chromatography, the mobile phase consists of an inert gas, often helium or nitrogen. The stationary phase typically consists of a thin layer of liquid or polymer immobilised on an inert solid support material, held within the column, a coiled glass or metal tube. At the end of the column, a detector identifies the individual components as they elute.

What is the mechanism behind gas chromatography?

Gas chromatography is a widely used method that is applicable only when the mixture can be vaporised without breaking down, as it relies on a gaseous mobile phase. After the mixture has been converted into vapour, it is introduced into the column together with the inert-gas mobile phase.

Once the mixture transitions into a gaseous state and passes through the column, the distinct components within the mixture exhibit varying interactions with the
stationary phase. Similar to the aforementioned column system, the components that have a weaker interaction with the stationary phase will exit the column at a
faster rate and can be detected by the detection system. Compounds that have a higher affinity for the stationary phase require a greater amount of time to elute
from the column, resulting in the separation of the mixture.


How is chromatography a separation technique?


Column chromatography is one of the most common methods of protein purification. Chromatography is based on the principle that molecules in a mixture, applied onto a surface or into a solid (the stationary phase), are separated from each other as they move through it with the aid of a mobile phase.
Chromatography Overview
By Susha Cheriyedath, M.Sc. Reviewed by Afsaneh Khetrapal, BSc

The term "chromatography" originates from the Greek words "chroma" meaning "colour" and "graphein" meaning "to write." Mikhail Tswett, a Russian botanist,
invented a versatile separation technique in 1903. He isolated vibrant plant pigments by employing a calcium carbonate column. Since its invention,
chromatography has become an influential tool in the laboratory for separating and identifying various substances in a mixture.


The Principle of Chromatography


Chromatography utilises the disparity in polarity among various molecules within a mixture. This approach involves the utilisation of a liquid as a mobile phase,
which moves across a layer of particles known as the stationary phase. The mobile phase carries the sample solution, which needs to be separated, through the
stationary phase. The constituents in the mixture are segregated according to their respective affinity to the two phases. Molecules with higher affinity for the
stationary phase exhibit slower movement compared to those with lower affinity. The isolated molecules are subsequently compared to established benchmarks
and identified.

Video: Tosoh Basics - Chromatography

The Benefits of Chromatography

Chromatography enables accurate separation, analysis, and purification.

• The technique requires minimal sample volumes.
• It is applicable to a diverse range of samples, such as drugs, food particles, plastics, pesticides, air and water samples, and tissue extracts.
• Individual components of mixtures separated by chromatography can be collected separately.
• It is capable of separating highly intricate mixtures.

The Categories of Chromatography

The field of chromatography has undergone advancements over time in response to the changing demands for the separation of molecules. Currently, several
forms of chromatography are being employed for diverse applications in laboratories worldwide. Below, we will provide a concise overview of several significant
types:

Paper chromatography involves the use of paper soaked in a liquid as the stationary phase, while a liquid solvent serves as the mobile phase. Upon drying, the
paper exhibits distinct components in the form of visible dots.

Liquid chromatography is a method that employs silica and alumina as the stationary phase and organic solvents as the mobile phase.

Thin layer chromatography involves the application of a thin layer of adsorbent, such as alumina (Al2O3) or silica (SiO2), onto a plastic or glass sheet. Components
are segregated according to their attraction to the adsorbent and manifest as distinct spots on the sheet following chromatographic separation.



Column chromatography is a chromatographic technique that shares similarities with thin layer chromatography, as it employs the same stationary and mobile phases. In this case, both phases are enclosed within a vertical glass column, and the separation process is relatively time-consuming.


Gas chromatography (GC) utilises an inert gas (such as Helium, Nitrogen, or Argon) as the mobile phase, while a solid or liquid composed primarily of silicon
polymers serves as the stationary phase. The sample combination is put into the column coated with the stationary phase and is specifically adsorbed. The
molecules that have been separated are detected by a detector as they exit the column.

High performance liquid chromatography (HPLC) is a more sophisticated version of column chromatography. In HPLC, the sample mixture is introduced into the mobile phase, often a solvent, which is then forced through an analytical column at high pressure. This allows rapid separation of the sample molecules, based on their affinity for both the mobile phase and the particles that coat the column (the stationary phase). It is alternatively referred to as high pressure liquid chromatography.

Affinity chromatography is a technique that separates biological mixtures by using the particular affinity between various cognate components, such as enzyme and substrate, antigen and antibody, or receptor and ligand.

Ion-exchange chromatography is a method used to separate ions and polar compounds by using their affinity for an ion exchanger. It facilitates the segregation of
electrically charged molecules, such as proteins, amino acids, and nucleotides. In this context, the mobile phase typically consists of a conductive solution, which is
dictated by the concentration of salt. The adsorption of the sample molecules onto a solid support with an opposite charge is influenced by certain ionic features,
such as the quantity and positioning of charges on the molecule.

The separation process entails manipulating variables such as ionic strength and pH in order to release solute molecules from the column in a sequence based on
their respective binding strengths, with the least strongly bound compounds being eluted first.
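That elution order can be sketched as a simple threshold model: each bound species releases once the rising salt concentration reaches a level set by its binding strength, so the least strongly bound species elute first. The protein names and threshold values below are hypothetical, chosen only to illustrate the gradient logic.

```python
# Threshold sketch of ion-exchange gradient elution: a species elutes at
# the first gradient step whose salt concentration meets its threshold.

def elution_points(binding, gradient):
    """Map each species to the first salt concentration at which it elutes."""
    points = {}
    for salt in gradient:
        for species, threshold in binding.items():
            if species not in points and salt >= threshold:
                points[species] = salt
    return points

# Hypothetical proteins with invented elution thresholds (M NaCl).
binding = {"protein X": 0.15, "protein Y": 0.40, "protein Z": 0.75}
gradient = [round(0.05 * step, 2) for step in range(1, 21)]  # 0.05 .. 1.00 M

for species, salt in sorted(elution_points(binding, gradient).items(),
                            key=lambda kv: kv[1]):
    print(f"{species} elutes at about {salt:.2f} M salt")
```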

The Uses of Chromatography

The following are the significant applications of chromatography across many industries:

• Air quality monitoring and drinking water testing
• Detection of drugs in urine and other bodily fluids
• Chemical fingerprinting and species identification
• Pharmaceutical industry usage:
- Purification of materials and analysis of chemical compounds for trace contaminants
- Separation of chiral compounds
• In the food sector, quality control: separation and examination of additives, preservatives, vitamins, and proteins, and identification of toxins and impurities in food.


Chromatography is a laboratory technique used in chemical analysis to separate mixtures into their components. The mixture is dissolved in a mobile fluid that carries it through a system on which a stationary phase is fixed. The constituents of the mixture have different affinities for the stationary phase and are retained for different lengths of time depending on their interactions with its surface sites. This causes the constituents to travel at different apparent velocities in the mobile fluid, and so to separate. The separation is based on differential partitioning between the mobile and stationary phases.

Chromatography can be preparative or analytical. Preparative chromatography is used to separate the components of a mixture for later use, and is associated with higher costs due to its larger scale of operation. Analytical chromatography is normally done with smaller amounts of material and is used to establish the presence of, or measure the relative proportions of, analytes in a mixture. The two types are not mutually exclusive.

Chromatography was first devised at the University of Kazan by Mikhail Tsvet in 1900, primarily for the separation of plant pigments such as chlorophyll, carotenes,
and xanthophylls. New types of chromatography developed during the 1930s and 1940s made the technique useful for many separation processes. The work of
Archer John Porter Martin and Richard Laurence Millington Synge during the 1940s and 1950s, for which they won the 1952 Nobel Prize in Chemistry, established
the principles and basic techniques of partition chromatography, and their work encouraged the rapid development of several chromatographic methods: paper
chromatography, gas chromatography, and high-performance liquid chromatography.

The technology has advanced rapidly since then, with researchers finding that the main principles of Tsvet's chromatography could be applied in many different
ways, resulting in the different varieties of chromatography described below. Advances are continually improving the technical performance of chromatography,
allowing the separation of increasingly similar molecules.

Chromatography is a separation technique in which substances are distributed between two phases, one stationary (the stationary phase) and the other (the mobile phase) moving in a definite direction. The chromatogram is the visual output of the chromatograph, with different peaks or patterns corresponding to different components of the separated mixture. In an optimal system, the signal (peak area) is proportional to the concentration of the specific analyte separated.
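Quantification in analytical chromatography commonly relies on the detector response (peak area) scaling linearly with concentration. A minimal single-response-factor sketch in Python; the standards, areas, and units are all invented for illustration:

```python
# Quantify an unknown against calibration standards using a mean response
# factor (peak area per unit concentration), assuming a linear response
# through the origin.

standards = [(1.0, 220.0), (2.0, 445.0), (4.0, 880.0)]  # (conc mg/L, peak area)

response_factor = sum(area / conc for conc, area in standards) / len(standards)

unknown_area = 550.0
unknown_conc = unknown_area / response_factor
print(f"estimated concentration: {unknown_conc:.2f} mg/L")
```

In practice a full calibration curve with regression and blank correction would be used; this sketch only shows the proportionality idea.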

• Analyte: the substance to be separated during chromatography, usually the material of interest in the mixture.
• Analytical chromatography: the use of chromatography to determine the existence, and possibly also the concentration, of analyte(s) in a sample.
• Bonded phase: a stationary phase covalently bonded to the support particles or to the inside wall of the column tubing.
• Chromatogram: the visual output of the chromatograph, with different peaks or patterns corresponding to different components of the separated mixture.
• Eluent: the solvent or solvent mixture used in elution chromatography; synonymous with mobile phase.
• Eluate: the mixture of solute and solvent exiting the column.
• Effluent: the stream flowing out of a chromatographic column; eluite is a more precise term for solute or analyte.
• Immobilized phase: a stationary phase immobilized on the support particles or the inner wall of the column tubing.
• Mobile phase: the phase that moves in a definite direction; it may be a liquid (LC and capillary electrochromatography), a gas (GC), or a supercritical fluid (supercritical-fluid chromatography). The mobile phase consists of the sample being separated/analysed and the solvent that moves the sample through the column.
• Preparative chromatography: the use of chromatography to purify sufficient quantities of a substance for further use, rather than analysis.
• Retention time: the characteristic time it takes for a particular analyte to pass through the system under set conditions.
• Sample: the matter analysed in chromatography; it may consist of a single component or a mixture. The phase or phases containing the analytes of interest is referred to as the sample, while everything separated from the sample before or during the analysis is referred to as waste.

Column chromatography is a separation technique in which the stationary bed is contained within a tube. Differences in rates of movement through the medium lead to different retention times for the components of the sample. Modern flash column chromatography systems are sold as pre-packed plastic cartridges through which the solvent is pumped.

Planar chromatography is a separation technique where the stationary phase is present on a plane, such as paper or a layer of solid particles spread on a support
like a glass plate. Different compounds in the sample mixture travel different distances according to how strongly they interact with the stationary phase compared
to the mobile phase. The specific Retention factor (Rf) of each chemical can be used to aid in the identification of an unknown substance.
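The retention factor for planar chromatography is simply Rf = (distance travelled by the compound) / (distance travelled by the solvent front), a dimensionless number between 0 and 1. A quick calculation in Python, using hypothetical distances measured from a plate:

```python
# Rf = spot distance / solvent-front distance; identical compounds run
# under identical conditions give the same Rf, aiding identification.

def rf(spot_distance_cm, solvent_front_cm):
    return spot_distance_cm / solvent_front_cm

solvent_front = 8.0  # cm (hypothetical measurement)

for spot, distance in [("spot 1", 2.0), ("spot 2", 5.6)]:
    print(f"{spot}: Rf = {rf(distance, solvent_front):.2f}")
```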



Paper chromatography involves placing a small dot or line of sample solution onto a strip of chromatography paper. The paper is placed in a container with a
shallow layer of solvent and sealed. As the solvent rises through the paper, it meets the sample mixture, which starts to travel up the paper with the solvent. This
paper is made of cellulose, a polar substance, and the compounds within the mixture travel further if they are less polar. More polar substances bond with the
cellulose paper more quickly, and therefore do not travel as far.

Thin-layer chromatography (TLC) is a widely employed laboratory technique used to separate different biochemicals on the basis of their relative attractions to the
stationary and mobile phases. It is very versatile; multiple samples can be separated simultaneously on the same layer, making it very useful for screening
applications such as testing drug levels and water purity. Compared to paper, it has the advantage of faster runs, better separations, better quantitative analysis,
and the choice between different adsorbents. For even better resolution and faster separation that utilizes less solvent, high-performance TLC can be used.

Displacement chromatography is a separation technique in which a molecule with a high affinity for the chromatography matrix (the displacer) competes effectively for binding sites, thus displacing all molecules with lesser affinities. There are distinct differences between displacement and elution chromatography: in elution mode, components typically emerge from the column in narrow, Gaussian peaks, whereas displacement chromatography resolves components into consecutive zones of pure substances rather than "peaks".

Gas chromatography (GC), also sometimes known as gas-liquid chromatography (GLC), is a separation technique in which the mobile phase is a gas. Gas
chromatographic separation is always carried out in a column, which is typically "packed" or "capillary". Packed columns are the routine work horses of gas
chromatography, while capillary columns generally give far superior resolution and are becoming widely used, especially for complex mixtures. Both types of
columns are made from non-adsorbent and chemically inert materials.

Liquid chromatography (LC) is a separation technique where the mobile phase is a liquid, either in a column or plane. High-performance liquid chromatography
(HPLC) uses small packing particles and high pressure to force samples through a column packed with a stationary phase. HPLC is divided into two sub-classes
based on the polarity of the mobile and stationary phases: normal phase liquid chromatography (NPLC) and reversed phase liquid chromatography (RPLC).

Supercritical fluid chromatography is a separation technique where the mobile phase is a fluid above and relatively close to its critical temperature and pressure.
Affinity chromatography is based on selective non-covalent interaction between an analyte and specific molecules, often used in biochemistry for purifying proteins
bound to tags. It often utilizes a biomolecule's affinity for a metal (Zn, Cu, Fe), and columns are often manually prepared.

Immobilized metal affinity chromatography (IMAC) is useful for separating molecules based on their relative affinity for the metal. Ion exchange chromatography
uses an ion exchange mechanism to separate analytes based on their respective charges. It is usually performed in columns but can also be useful in planar mode.
There are two types of ion exchange chromatography: Cation-Exchange and Anion-Exchange. In Cation-Exchange Chromatography, the stationary phase has
negative charge and the exchangeable ion is a cation, while in Anion-Exchange Chromatography, the stationary phase has positive charge and the exchangeable
ion is an anion.

In summary, liquid chromatography encompasses a versatile family of separation techniques used in various applications, including protein purification by affinity chromatography and ion exchange chromatography.

Size-exclusion chromatography (SEC), also known as gel permeation chromatography (GPC) or gel filtration chromatography, separates molecules according to
their size or hydrodynamic diameter or volume. It is a low-resolution technique often reserved for the final "polishing" step of a purification and is useful for
determining the tertiary structure and quaternary structure of purified proteins.
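Because larger molecules sample less of the pore volume, SEC elution order is, to a first approximation, simply decreasing molecular size. A short sketch in Python with hypothetical species and sizes (kDa):

```python
# SEC sketch: sort species by decreasing size to predict elution order;
# the names and sizes below are invented for illustration.

proteins = {"aggregate": 300.0, "monomer": 150.0, "fragment": 50.0, "buffer salt": 0.1}

elution_order = sorted(proteins, key=proteins.get, reverse=True)
print(" -> ".join(elution_order))
```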

Expanded bed chromatographic adsorption (EBA) is a convenient and effective technique for the capture of proteins directly from unclarified crude samples. In EBA
chromatography, the settled bed is expanded by upward flow of equilibration buffer. The crude feed, a mixture of soluble proteins, contaminants, cells, and cell
debris, is then passed upward through the expanded bed. Target proteins are captured on the adsorbent, while particulates and contaminants pass through. A
change to elution buffer while maintaining upward flow results in desorption of the target protein in expanded-bed mode. Alternatively, if the flow is reversed, the
adsorbed particles will quickly settle and the proteins can be desorbed by an elution buffer.

Reversed-phase chromatography (RPC) is any liquid chromatography procedure in which the mobile phase is significantly more polar than the stationary phase.
Hydrophobic molecules in the mobile phase tend to adsorb to the relatively hydrophobic stationary phase. Hydrophilic molecules in the mobile phase will tend to
elute first. Separating columns typically comprise a C8 or C18 carbon-chain bonded to a silica particle substrate.

Hydrophobic interaction chromatography (HIC) is a purification and analytical technique that separates analytes, such as proteins, based on hydrophobic
interactions between that analyte and the chromatographic matrix. It can provide a non-denaturing orthogonal approach to reversed phase separation, preserving
native structures and potentially protein activity. In HIC, the matrix material is lightly substituted with hydrophobic groups, which can range from methyl, ethyl,
propyl, butyl, octyl, or phenyl groups.

Workshop-5 Module-1 Page 187 of 333


At high salt concentrations, non-polar side chains on the surface of proteins "interact" with the hydrophobic groups of the matrix, as both types of groups are excluded by the polar solvent. The sample is applied to the column in a buffer of high ionic strength, driving association of hydrophobic patches on the analyte with the stationary phase. The eluent is typically an aqueous buffer with decreasing salt concentration, increasing concentration of detergent (which disrupts hydrophobic interactions), or changes in pH.
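A decreasing-salt elution of this kind is typically programmed as a linear gradient. The minimal sketch below computes the buffer salt concentration at any point in such a gradient; the starting concentration and gradient length are illustrative round numbers, not values from any specific method.

```python
def hic_gradient(c_start, c_end, total_cv, cv):
    """Salt concentration (e.g. M ammonium sulfate) at a given
    column volume (cv) of a linear HIC elution gradient running
    from c_start down to c_end over total_cv column volumes."""
    if not 0 <= cv <= total_cv:
        raise ValueError("cv outside gradient")
    return c_start + (c_end - c_start) * cv / total_cv

# 1.5 M down to 0 M over 10 column volumes; halfway, the buffer is 0.75 M
print(hic_gradient(1.5, 0.0, 10, 5))  # 0.75
```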

In general, HIC is advantageous when the sample is sensitive to pH changes or to the harsh solvents typically used in other types of chromatography, but tolerates high salt concentrations. The salt concentration of the buffer is normally varied to control binding and elution; using temperature changes to drive elution instead allows laboratories to reduce the amount of salt consumed and so cut costs.

Where high salt concentrations and temperature fluctuations are to be avoided, a more hydrophobic competitor can be used to displace the sample and elute it. In one demonstration of this so-called salt-independent mode of HIC, human immunoglobulin G (IgG) was isolated directly from serum with satisfactory yield, using beta-cyclodextrin as a competitor to displace IgG from the matrix.

Hydrodynamic chromatography (HDC) is a method used to separate analytes by molar mass, size, shape, and structure when combined with light-scattering detectors, viscometers, and refractometers. It has two main types: open tube and packed column. Open-tube HDC offers rapid separation times for small particles, while packed-column HDC can increase resolution and is better suited for particles with an average molecular mass larger than 10^5 daltons.

Pyrolysis gas chromatography (Py-GC) is a method of chemical analysis in which a sample is heated until it decomposes into smaller molecules that are
separated by gas chromatography and detected using mass spectrometry. Pyrolysis is the thermal decomposition of materials in an inert atmosphere or vacuum,
with the sample being placed into direct contact with a platinum wire or placed in a quartz sample tube and rapidly heated to 600–1000 °C. Three different heating
techniques are used in actual pyrolyzers: isothermal furnace, inductive heating (Curie Point filament), and resistive heating using platinum filaments. Large
molecules cleave at their weakest points and produce smaller, more volatile fragments, which can be separated by gas chromatography.

Pyrolysis GC chromatograms are typically complex because a wide range of different decomposition products is formed. The data can either be used as
fingerprints to prove material identity or the GC/MS data is used to identify individual fragments to obtain structural information. To increase the volatility of polar
fragments, various methylating reagents can be added to a sample before pyrolysis.

Fast protein liquid chromatography (FPLC) is a form of liquid chromatography often used to analyze or purify mixtures of proteins. Separation is possible because
the different components of a mixture have different affinities for two materials, a moving fluid (the "mobile phase") and a porous solid (the stationary phase). In
FPLC, the mobile phase is an aqueous solution, or "buffer", and the stationary phase is a resin composed of beads, usually of cross-linked agarose, packed into a
cylindrical glass or plastic column.

Countercurrent chromatography (CCC) is a type of liquid-liquid chromatography, where both the stationary and mobile phases are liquids and the liquid stationary
phase is held stagnant by a strong centrifugal force. There are many types of CCC available today, including HSCCC (High Speed CCC) and HPCCC (High
Performance CCC).

Centrifugal partition chromatography (CPC) is a series of cells interconnected by ducts attached to a rotor, creating the centrifugal field necessary to hold the
stationary phase in place. The separation process in CPC is governed solely by the partitioning of solutes between the stationary and mobile phases, which
mechanism can be easily described using the partition coefficients (KD) of solutes.
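As a rough sketch of how the partition coefficient governs elution, the illustrative code below applies the classical relation V_R = V_m + KD·V_s, so that solutes leave the column in order of increasing KD. The KD values and phase volumes are invented for the example, not data for any real CPC instrument.

```python
def cpc_retention_volume(kd, v_mobile, v_stationary):
    """Retention volume from the partition coefficient KD
    (concentration in stationary phase / concentration in mobile
    phase): V_R = V_m + KD * V_s."""
    return v_mobile + kd * v_stationary

# Solutes elute in order of increasing KD (volumes in mL, illustrative)
kds = {"A": 0.5, "B": 1.0, "C": 2.0}
for name, kd in kds.items():
    print(name, cpc_retention_volume(kd, v_mobile=60.0, v_stationary=140.0))
# A 130.0, B 200.0, C 340.0
```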



Periodic counter-current chromatography (PCC) uses a solid stationary phase and only a liquid mobile phase, much more similar to conventional affinity
chromatography than to counter current chromatography. PCC uses multiple columns, which during the loading phase are connected in line. This mode allows for
overloading the first column in this series without losing product, which already breaks through the column before the resin is fully saturated. The breakthrough
product is captured on the subsequent column(s).

Chiral chromatography involves the separation of stereoisomers, which have no chemical or physical differences apart from being three-dimensional mirror images.
To enable chiral separations to take place, either the mobile phase or the stationary phase must themselves be made chiral, giving differing affinities between the
analytes. Chiral chromatography HPLC columns (with a chiral stationary phase) in both normal and reversed phase are commercially available.

Aqueous normal-phase (ANP) chromatography is characterized by the elution behaviour of classical normal-phase mode, with water being one of the mobile-phase solvent components. It is distinguished from hydrophilic interaction liquid chromatography (HILIC) in that the retention mechanism is due to adsorption rather than partitioning.

Chromatography is used in many fields including the pharmaceutical industry, food and beverage industry, chemical industry, forensic science, environment
analysis, and hospitals.



Thin-layer chromatography is used to separate components of a plant extract, illustrating the experiment with plant pigments which gave
chromatography its name

Plotted on the x-axis is the retention time and plotted on the y-axis a signal (for example obtained by a spectrophotometer, mass spectrometer or a variety
of other detectors) corresponding to the response created by the analytes exiting the system. In the case of an optimal system the signal is proportional to
the concentration of the specific analyte separated.
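Because the signal is (ideally) proportional to concentration, analyte amounts are usually quantified from chromatographic peak areas. A minimal sketch, assuming a digitised detector trace of time/signal pairs, is simple trapezoidal integration:

```python
def peak_area(times, signal):
    """Trapezoidal integration of a detector trace; in a
    well-behaved system the peak area is proportional to the
    amount of analyte that passed the detector."""
    area = 0.0
    for i in range(1, len(times)):
        area += 0.5 * (signal[i] + signal[i - 1]) * (times[i] - times[i - 1])
    return area

# Triangular test peak: base 2 min, height 10 detector units -> area 10
t = [0.0, 1.0, 2.0]
s = [0.0, 10.0, 0.0]
print(peak_area(t, s))  # 10.0
```

In practice a baseline would first be subtracted from the trace; this sketch assumes the baseline is already zero.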

4.2 Techniques by chromatographic bed shape

Column chromatography

Further information: Column chromatography



Column chromatography is a method of separating substances when the stationary bed is located inside a tube. In chromatography, the solid stationary phase
particles or the liquid stationary phase coating on the support can either completely fill the inner volume of the tube (packed column) or be concentrated on the
inner tube wall, allowing an unobstructed path for the mobile phase in the middle section of the tube (open tubular column). Differences in the rate at which components move through the medium determine how long each remains on the column. In 1978, W. Clark Still introduced a modified version
of column chromatography known as flash column chromatography (flash). The technique closely resembles standard column chromatography, with the exception
that the solvent is propelled through the column by applying positive pressure. As a result, the majority of separations could be completed in about 20 minutes,
exhibiting enhanced efficiency compared to the previous technique. Contemporary flash chromatography systems are available for purchase in the form of pre-filled
plastic cartridges, with the solvent being propelled through the cartridge via pumping. Automation can be achieved by connecting systems with detectors and
fraction collectors. The implementation of gradient pumps led to expedited separations and reduced solvent consumption.
Planar chromatography is a separation method in which the stationary phase is present on or in a plane. The plane can be a sheet of paper serving as the stationary bed (paper chromatography) or a layer of solid particles spread on a support such as a glass plate (thin-layer chromatography). The components of the sample mixture interact to different degrees with the stationary phase relative to the mobile phase and therefore travel different distances. The retention factor (Rf) of a compound can assist in the identification of an unknown substance.
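The retention factor is simply the ratio of the distance travelled by the spot to the distance travelled by the solvent front, both measured from the origin, so it always lies between 0 and 1. A minimal sketch (the distances in millimetres are illustrative):

```python
def retention_factor(spot_distance_mm, solvent_front_mm):
    """Rf = distance travelled by the compound / distance travelled
    by the solvent front, both measured from the application point."""
    if not 0 <= spot_distance_mm <= solvent_front_mm:
        raise ValueError("spot cannot travel farther than the solvent front")
    return spot_distance_mm / solvent_front_mm

# A spot at 21 mm with the solvent front at 60 mm gives Rf = 0.35
print(retention_factor(21.0, 60.0))  # 0.35
```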
Paper chromatography

Paper chromatography in progress

Further information: Paper chromatography



Paper chromatography is a method that entails depositing a small dot or line of sample solution onto a strip of chromatography paper. The paper is immersed in a
vessel containing a thin layer of solvent and then sealed. As the solvent ascends through the paper, it encounters the sample mixture, which commences its
upward journey on the paper along with the solvent. The paper is composed of cellulose, a polar substance. Less polar compounds travel farther up the paper, while polar compounds bind more strongly to the cellulose and travel shorter distances.
Thin-layer chromatography (TLC)
Further information: Thin-layer chromatography

Thin layer chromatography

Thin-layer chromatography (TLC) is a commonly used laboratory method that separates various biochemicals based on their differing affinities for the stationary
and mobile phases. It bears resemblance to paper chromatography. However, instead of utilising a stationary phase composed of paper, this method employs a
stationary phase consisting of a thin layer of adsorbent such as silica gel, alumina, or cellulose over a flat, inert substrate. Thin-layer chromatography (TLC) has
high versatility as it enables the simultaneous separation of numerous samples on a single layer. This characteristic renders it highly valuable for screening
purposes, including the assessment of drug concentrations and the evaluation of water quality.
Liquid chromatography

Preparative HPLC apparatus

Liquid chromatography (LC) is a method of separating substances in which the mobile phase is a liquid. It can be carried out either in a column or on a plane. High-performance liquid chromatography (HPLC) is a modern form of liquid chromatography that uses small packing particles and operates at high pressure.

HPLC involves the application of high pressure to push a sample through a column containing a stationary phase made up of either irregularly or spherically
shaped particles, a porous monolithic layer, or a porous membrane. Monoliths, as defined by their composition, are chromatographic media with a sponge-like
structure composed of continuous organic or inorganic components. HPLC is traditionally categorised into two distinct subclasses according to the polarity of the
mobile and stationary phases. Normal phase liquid chromatography (NPLC) refers to a method where the stationary phase is more polar than the mobile phase,
such as using silica as the stationary phase and toluene as the mobile phase. On the other hand, reversed phase liquid chromatography (RPLC) refers to the
opposite scenario, where the mobile phase is more polar than the stationary phase. This can be achieved by using a water-methanol mixture as the mobile phase
and C18 (octadecylsilyl) as the stationary phase.



Two-dimensional chromatograph GCxGC-TOFMS at Chemical Faculty of GUT Gdańsk, Poland, 2016

Chromatography

Chromatography is a separation procedure that involves two phases: a stationary phase and a mobile phase. The stationary phase is usually a permeable solid
material, such as glass, silica, or alumina, which is either filled into a glass or metal tube or forms the walls of an open-tube capillary. The mobile phase passes
through the packed bed or column. The sample to be separated is introduced at the head of the column and carried through the system by the mobile phase. During their passage through the column, the various chemicals separate based on their respective affinities for the two phases. The velocity of
movement is contingent upon the values of the distribution coefficients, with components that have a stronger interaction with the stationary phase necessitating
longer durations for elution (full removal from the column). Therefore, separation is determined by variations in distribution behaviour that are evident in distinct
migratory periods across the column. In the case of repeating extraction, a higher separation factor between two components will result in a shorter column
required to separate them. Chromatography is similar to multistage extraction, however it differs in that chromatography involves a continuous flow instead of
discontinuous steps. Currently, chromatography is the foremost technique for separating organic compounds and is commonly employed, alongside
electrophoresis, for biological molecules.
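The dependence of elution time on the distribution coefficient can be sketched with the standard relation t_R = t_M(1 + K·Vs/Vm), where t_M is the dead time and Vs/Vm the phase ratio: components with stronger interaction with the stationary phase (larger K) elute later. The numbers below are purely illustrative.

```python
def retention_time(t_dead, K, phase_ratio):
    """Elution time from the distribution coefficient K and the
    phase ratio Vs/Vm: t_R = t_M * (1 + K * Vs/Vm)."""
    return t_dead * (1.0 + K * phase_ratio)

# Two components on the same column (t_M = 1 min, Vs/Vm = 0.2):
# the component with the larger K takes longer to elute
print(retention_time(1.0, 5.0, 0.2))   # 2.0 min
print(retention_time(1.0, 20.0, 0.2))  # 5.0 min
```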

The different chromatographic procedures are distinguished by the nature of the mobile phase used: gas for gas chromatography (GC), liquid for liquid
chromatography (LC), and supercritical fluid for supercritical-fluid chromatography (SFC). The methods are further categorised based on the stationary phase. For
instance, if the stationary phase consists of a solid adsorbent, there are techniques like gas-solid chromatography (GSC) and liquid-solid chromatography (LSC).
Chromatography is performed using computer-controlled equipment to achieve a high level of accuracy and operate without human intervention. Furthermore, it is
common practice to position a detector online following the column, serving the purpose of either structural analysis, quantification, or both. An extremely effective
method of analysis now in use is the online integration of chromatography with mass spectrometry.

Gas chromatography is a significant technique due to its rapidity, ability to separate components, and high sensitivity of the detector. This approach is most suitable
for substances that can undergo vaporisation without undergoing breakdown. Several compounds that typically have low vaporisation rates can undergo chemical
derivatization to facilitate their separation by gas chromatography.

Furthermore, alongside chromatography, gas-solid distribution is extensively utilised for purification purposes, utilising specific adsorbents known as molecular
sieves. These materials possess pores that are roughly equivalent in size to tiny molecules. This feature can be utilised in the differentiation of molecules with
linear structures from those with bulky architectures. The former can easily infiltrate the pores, while the latter are incapable of permeating. This is an instance of a
separation process known as exclusion, which relies on distinctions in shape. Molecular sieves are crucial in the dehydration of gases. Water, being a polar
substance with uneven distribution of positive and negative charges inside its molecule, is easily adsorbed on the particles. However, gases with lower polarity are
not retained.

Sublimation is a process when a solid substance transforms directly into a gas without going through the liquid phase. Due to the fact that not all compounds have
the ability to sublime, the method's usefulness is restricted.

Liquid chromatography has emerged as the leading technique for separating organic compounds since the early 1970s. LC, or liquid chromatography, has an
advantage over GC, or gas chromatography, in that it does not require vaporisation of the mobile phase. This allows LC to separate a wider variety of chemicals
compared to GC. Species that have been effectively resolved encompass inorganic ions, amino acids, pharmaceuticals, sugars, oligonucleotides, and proteins.
Both analytical-scale liquid chromatography, which involves samples in the microgram-to-milligram range, and preparative-scale liquid chromatography, which
deals with samples in the tens-of-grams range, have been created. Preparative-scale liquid chromatography plays a crucial role in biotechnology, particularly in the
purification of proteins and peptide hormones produced through recombinant technology.

A significant technique is liquid-solid chromatography, where the porous adsorbent is polar and separation is determined by the characteristics of component
classes, such as amines (alkaline) from alcohols (neutral) and esters (neutral) from acids.



Liquid-solid chromatography is the oldest of the chromatographic techniques. Prior to the mid-20th century, the experimental process remained
mostly unchanged from its initial form. Following substantial enhancements, liquid-solid chromatography is currently performed using porous particles measuring as
small as 3–5 micrometres (0.00012–0.00020 inch) in diameter. Additionally, liquid pumps are employed to propel the liquid through the column filled with these
particles. High resolution and rapid separations are attained due to the utilisation of tiny particles, which provide excellent efficiency when combined with fast
mobile phase velocities (equal to or exceeding one centimetre per second). This approach is crucial in the process of purification, since it allows for the automatic
collection of separated compounds after passing down the column, using a fraction collector.

Reverse-phase chromatography is a notable technique in liquid-solid chromatography. It involves using a liquid mobile phase consisting of water mixed with an
organic solvent like methanol or acetonitrile. The stationary phase surface is nonpolar or hydrocarbon-like. Unlike normal-phase chromatography, where the
adsorbent surface is polar, reverse-phase chromatography involves the elution of compounds from the column in increasing order of polarity. Furthermore, the
separation process relies on the nonpolar characteristics of the components. Trypsin, an enzyme, is employed in the isolation of a sequence of peptides from
human growth hormone, a medication produced by genetic engineering. Its purpose is to cleave peptide bonds that contain the fundamental amino acids arginine
and lysine, resulting in a distinctive protein fingerprint. Peptide mapping is an essential technique used to assess the degree of purity of intricate compounds, such
as proteins.

Ion-exchange chromatography (IEC) is a specific type of liquid-solid chromatography that is very significant and hence merits special recognition. The process of
ion separation is based on the differential attraction of ions in a solution to oppositely charged sites on a finely divided, insoluble substance known as an ion
exchanger, typically a synthetic resin. A cation-exchange resin exclusively has negatively charged sites, allowing for the separation of solely positive ions.
Conversely, an anion-exchange resin possesses positively charged sites. Ion-exchange chromatography has emerged as a highly significant technique for the
separation of proteins and short oligonucleotides.

Ion exchange is commonly employed to eliminate dissolved iron, calcium, and magnesium ions from hard water, which serves as a significant application of this
process. The anionic sites of a cation exchanger are initially neutralised with sodium ions through exposure to a concentrated solution of sodium chloride. When the
hard water is then passed through the resin, the undesired ions in the water are substituted with sodium ions.
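The service life of such a softening bed can be estimated from the resin's exchange capacity and the hardness of the feed water. In the illustrative sketch below, the capacity and hardness figures are typical round numbers, not data for any particular resin or water supply.

```python
def bed_volumes_to_exhaustion(resin_capacity_eq_per_L, hardness_meq_per_L):
    """How many bed volumes of hard water a sodium-form cation-exchange
    bed can soften before its exchange sites are exhausted.

    Capacity is in equivalents per litre of resin; hardness is in
    milliequivalents per litre of water (1 eq = 1000 meq)."""
    return resin_capacity_eq_per_L * 1000.0 / hardness_meq_per_L

# Typical softener resin (~2 eq/L) treating water of 4 meq/L hardness
print(bed_volumes_to_exhaustion(2.0, 4.0))  # 500.0 bed volumes
```

After exhaustion the bed is regenerated with concentrated sodium chloride, as described above, and the cycle repeats.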

Thin-layer chromatography (TLC) is a technique that involves performing liquid-solid adsorption chromatography on thin, flat plates. TLC is a cost-effective and fast
method, albeit it is less sensitive and efficient compared to column chromatography. Practically, the adsorbent is evenly distributed over a glass plate and subjected
to the process of drying. The sample is applied as a small area close to one extremity of the plate, which is positioned (in a vertical orientation) into a shallow
container containing the mobile phase. By capillary action, the mobile phase moves up the plate, dissolving the sample as it goes. As a result, the components of the sample are carried to different points on the plate, at varying distances from the starting point.

Exclusion and clathration

Differences in molecular size can also serve as the basis for separations. One illustration is the use of molecular sieves in gas-solid
chromatography. Size-exclusion chromatography (SEC) has demonstrated efficacy in the separation and analysis of polymer mixtures. The chromatographic
column separates molecules based on their size, with the larger molecules being the first to emerge due to their inability to permeate the porous support matrix.
Subsequent emergence of smaller molecules is due to their ability to navigate the entire porous matrix. Calibrating a column with polymer samples of known
molecular weight allows us to determine the molecular weights and proportions of the components in an unknown mixture by measuring the time it takes for them
to emerge. These molecular weight distributions are crucial characteristics of polymers. Exclusion chromatography is also utilised for the separation of protein
mixtures, which are naturally occurring polymers.
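The calibration described above is commonly performed as a linear fit of log10(molecular weight) against elution volume, since larger molecules elute first. In the sketch below the standards are invented purely for illustration; real calibrants and the usable linear range of the column would come from the method itself.

```python
import math

def sec_calibrate(volumes, masses):
    """Least-squares fit of log10(molecular weight) against elution
    volume for standards of known mass; returns (slope, intercept)."""
    logs = [math.log10(m) for m in masses]
    n = len(volumes)
    vm = sum(volumes) / n
    lm = sum(logs) / n
    slope = sum((v - vm) * (l - lm) for v, l in zip(volumes, logs)) / \
            sum((v - vm) ** 2 for v in volumes)
    return slope, lm - slope * vm

def sec_estimate_mass(volume, slope, intercept):
    """Molecular weight of an unknown from its elution volume."""
    return 10 ** (slope * volume + intercept)

# Hypothetical standards (mL, daltons): larger molecules elute first
slope, intercept = sec_calibrate([10.0, 12.0, 14.0], [1e6, 1e5, 1e4])
print(round(sec_estimate_mass(11.0, slope, intercept)))  # about 3.2e5
```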

Clathration involves the separation of molecules by fitting them into spots with precise diameters. When certain chemicals solidify from a solution, they create
molecular-scale cages of specific dimensions. Should the liquid solution contain other compounds of sufficiently tiny size, they will become caught within the cage,
while larger components will be rejected. This technique has been employed in extensive industrial procedures to segregate petroleum-derived compounds.

Supercritical-fluid techniques

When gaseous substances surpass a particular temperature and pressure known as the critical point, they transform into a supercritical fluid. This state is
characterised by a higher density than a gas, yet lower density than a liquid. A supercritical fluid has the ability to dissolve substances more effectively than a gas,
while also having lower viscosity than a liquid. Supercritical-fluid chromatography is employed for the separation of compounds that exhibit low polarity and
volatility.

Supercritical-fluid extraction (SFE) is a significant technique used to purify complicated liquid or solid matrices, such as contaminated streams, on a massive scale.
An inherent benefit of this technique, in comparison to liquid-liquid extraction, is the ease with which the supercritical fluid can be eliminated post-extraction through
the manipulation of temperature, pressure, or both. The supercritical fluid undergoes a phase transition into a gaseous state, causing the extracted species to



undergo condensation into either a liquid or solid state; this avoids the solvent-removal problem of liquid-liquid extraction. A well-known instance of the SFE technique is the extraction of caffeine from coffee.

Crystallisation and precipitation

Crystallisation is a well-established method for purifying chemicals. Frequently, when a solid substance (consisting of a single chemical) is introduced into a liquid,
it undergoes dissolution. When more solid is added, there comes a point where no more solid dissolves, and the solution is considered saturated with the solid
compound. The concentration of a saturated solution is directly influenced by the temperature, with higher temperatures often leading to higher concentrations.

These phenomena can be utilised as a method of achieving separation and purification. Therefore, when a solution that is already saturated at a certain
temperature is cooled, the dissolved component starts to precipitate from the solution and will continue to do so until the solution reaches saturation again at the
lower temperature. Due to the varying solubilities of two solid compounds in a specific solvent, it is frequently feasible to identify conditions where the solution
becomes saturated with only one of the components in a mixture. Upon cooling, a fraction of the less soluble substance undergoes crystallisation independently,
while the more soluble constituents remain in a dissolved state.
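The recoverable mass on cooling follows directly from the difference in solubility at the two temperatures: anything dissolved above the cold-temperature solubility crystallises out. In the hedged sketch below, the solubility figures are invented round numbers, not data for any real compound.

```python
def crystallization_yield(sol_hot, sol_cold, solvent_kg, dissolved_kg):
    """Mass of solid recovered when a solution prepared at the hot
    temperature is cooled. Solubilities are in kg of solute per kg
    of solvent; anything above the cold-temperature solubility
    crystallises out of solution."""
    max_hot = sol_hot * solvent_kg
    if dissolved_kg > max_hot:
        raise ValueError("more solute than the hot solution can hold")
    still_dissolved = sol_cold * solvent_kg
    return max(0.0, dissolved_kg - still_dissolved)

# Illustrative solubilities: 0.50 kg/kg hot vs 0.15 kg/kg cold.
# Cooling 1 kg of solvent holding 0.50 kg of solute recovers 0.35 kg.
print(round(crystallization_yield(0.50, 0.15, 1.0, 0.50), 2))  # 0.35
```

Note that the more soluble impurities remain in the mother liquor, which is the basis of purification by recrystallisation.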

Crystallisation, the process by which a dissolved substance transforms from solution into a solid, is intricate. Seed particles, also known as nuclei, are generated
inside the solution, and subsequent molecules adhere to these solid surfaces by deposition. The particles ultimately reach a size that causes them to descend to
the bottom of the container. To attain a high level of purity in the crystallised solid, it is imperative that the precipitation occurs at a slow rate. Impurities can become
trapped in the solid matrix if the process of solidification occurs quickly. To minimise the entrapment of foreign material, it is advisable to maintain small individual
crystals. Occasionally, it becomes imperative to introduce a seed crystal into the solution to initiate the process of crystallisation. The seed crystal serves as a solid
substrate onto which subsequent crystallisation can occur.

Precipitation is often distinguished from crystallisation by limiting it to procedures where an insoluble substance is produced in a solution by a chemical reaction.
Frequently, many chemicals are precipitated as a result of a specific interaction. In order to achieve separation in such instances, it is important to regulate the
concentration of the precipitating agent in such a way that it surpasses the solubility of only one material. Alternatively, an additional agent can be introduced into
the solution to create stable and soluble compounds containing one or more components. This helps to prevent their involvement in the precipitation reaction.
Compounds utilised for the purpose of separating metal ions are commonly referred to as masking agents.

Precipitation has been employed for an extended period as a conventional technique for the separation and analysis of metals. Nowadays, selective and sensitive
instrumental methods are used to directly analyse various metals in aqueous solutions, replacing the previous method.

Zone melting

Zone melting is a separation technique that relies on liquid-solid equilibria: a small section of the material is melted and then resolidified, sweeping impurities along with the molten zone. It is mostly used for purifying metals, and purities as high as 99.999 percent can be achieved; prior to zone melting, samples typically exhibit only intermediate purity.

The visualisation of the zone-melting process is straightforward. Usually, the sample is shaped into a slender cylindrical object, ranging from 60 centimetres to over
3 metres (2 to 10 feet) in length. The rod is enclosed within a tube and can be suspended either horizontally or vertically. A small ring, capable of being heated, is
placed around the rod. The ring is maintained at a temperature slightly higher than the melting point of the solid material. Additionally, the ring is moved at a very
slow speed of a few centimetres per hour down the rod. Consequently, a molten region moves across the rod, with liquid material forming at the leading edge of
this region, while solid material crystallises at the trailing edge. Impurities in a substance lower its freezing point, causing the final part of a liquid sample to contain
a higher concentration of impurities. As the molten zone progresses, it gradually accumulates a higher concentration of contaminants. Upon completion of the
procedure, the impurities are observed to have crystallised at the extremity of the rod, and the contaminated portion can be easily eliminated by severing it.
Ultrahigh purities can be attained by employing a multistage process, either by repeatedly recycling the ring or by utilising many rings consecutively.
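The impurity profile left by a single pass is often described by Pfann's single-pass zone-refining equation, C(x) = C0[1 − (1 − k)·e^(−kx/L)], where k is the effective distribution coefficient and L the zone length; it holds everywhere except the final zone, where the rejected impurities accumulate. A minimal sketch with illustrative numbers:

```python
import math

def zone_pass_concentration(c0, k, x, zone_len):
    """Impurity concentration left in the solid after one pass of
    zone melting (Pfann's single-pass equation), valid before the
    final zone: C(x) = C0 * (1 - (1 - k) * exp(-k * x / L)).
    k < 1 means the impurity prefers the liquid and is swept
    toward the end of the rod."""
    return c0 * (1.0 - (1.0 - k) * math.exp(-k * x / zone_len))

# Illustrative: k = 0.1 leaves the leading end of the rod roughly
# 10x purer than the 100 ppm feed after a single pass
print(round(zone_pass_concentration(100.0, 0.1, 0.0, 1.0), 2))  # 10.0
```

Repeated passes compound this enrichment, which is why the multistage process described above reaches ultrahigh purities.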

Quantify the degree of separation

Field separations

Electrophoresis, previously discussed in this page, is a crucial technique for separating biopolymers, specifically DNA molecules and proteins. Electrophoresis is
traditionally performed on plates or slabs, similar to thin-layer chromatography. Slab-gel electrophoresis requires the use of an anticonvective medium or gel to
ensure the stability of the ionic buffer solution on the plate. The gel material generally employed is either polyacrylamide or agarose.

Electrophoresis separates substances based on their charge, as mentioned previously. Gels can also be used for size separation or sieving, if the pore dimensions
of the gel are similar to the dimensions of the biopolymers. The gel matrix acts as a barrier to the movement of substances in the electric field, causing separation
dependent on the size of the molecules, with the smallest molecules moving the fastest. The porous gel matrix is required for the separation of DNA molecules, as



these species cannot be separated by electrophoresis without it. A significant utilisation of this technique is DNA sequencing, wherein the precise arrangement of
the four bases (adenine, cytosine, guanine, and thymine) in an oligonucleotide molecule needs to be ascertained. This approach facilitated the sequencing of
the human genome.
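In practice, fragment sizes are read off a gel by comparing migration distances against a ladder of fragments of known size; over a limited range, migration distance is roughly linear in the logarithm of fragment length. The sketch below illustrates such a calibration; the ladder values and function names are illustrative, not from the source.

```python
import math

def calibrate(ladder):
    """Least-squares fit of distance = a + b*log10(bp) from ladder bands.
    `ladder` maps fragment size in base pairs -> migration distance in cm."""
    xs = [math.log10(bp) for bp in ladder]
    ys = list(ladder.values())
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    b = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
         / sum((x - xbar) ** 2 for x in xs))
    a = ybar - b * xbar
    return a, b

def estimate_size(distance_cm, a, b):
    """Invert the calibration; smaller fragments travel farther, so b < 0."""
    return 10 ** ((distance_cm - a) / b)

# Hypothetical ladder: larger fragments migrate a shorter distance.
ladder = {1000: 2.0, 500: 3.5, 100: 7.0}
a, b = calibrate(ladder)
size = estimate_size(5.0, a, b)   # size of an unknown band at 5.0 cm
```

This is the routine gel-reading calculation, not a model of the physics; real gels are only log-linear over a limited size range.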

Gel sieving is an electrophoretic method that can be used to separate proteins. This method involves the denaturation of the protein, which entails the destruction
of its higher structural characteristics. Subsequently, the denatured protein is mixed with an excessive amount of detergent, such as sodium dodecyl sulphate
(SDS). The SDS-protein complexes obtained possess the same charge density and structure, thereby allowing their separation based on size within a gel matrix. This
approach is valuable for characterising proteins and assessing their purity.

Proteins can be further categorised based on their unique charge residues, in addition to their size. An especially advantageous technique that relies on this idea is
isoelectric focusing (IEF). At a particular pH of a solution, a specific protein will exhibit an equal number of positive and negative charges, resulting in no movement
when subjected to an electric field. The term used to refer to this pH value is the isoelectric point. A slab gel (or column) can be filled with a complicated mixture of
buffers, called ampholytes, which, when subjected to an applied field, move to the location of their specific isoelectric points and subsequently become immobile. A
pH gradient is created, enabling the concentration of proteins at their specific isoelectric points.
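The focusing behaviour follows from a simple rule: below its isoelectric point a protein carries a net positive charge and migrates toward the cathode; above it, a net negative charge and migrates toward the anode; at pH equal to its pI it stops. A toy sketch (the pI value is illustrative, not from the source):

```python
def migration_direction(local_pH, pI, tol=0.05):
    """Direction a protein moves during isoelectric focusing.
    Below its pI a protein is net positive and moves toward the cathode;
    above its pI it is net negative and moves toward the anode;
    near pH = pI its net charge is ~zero and it stops (focuses)."""
    if abs(local_pH - pI) <= tol:
        return "focused"
    return "toward cathode" if local_pH < pI else "toward anode"

# A protein with an illustrative pI of 6.8 at three points in the pH gradient:
low = migration_direction(5.0, 6.8)    # net positive here
high = migration_direction(9.0, 6.8)   # net negative here
at_pI = migration_direction(6.8, 6.8)  # focuses at its isoelectric point
```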

A two-dimensional technique can combine charge (IEF) and size (SDS-protein complex) separations. Two-dimensional gel electrophoresis is now one of the most
effective techniques for resolving samples.

Electrophoresis can also be employed in a preparative manner. In the process of continuous-flow paper electrophoresis, the sample is consistently supplied
(together with a salt solution) to the central top of a vertically positioned sheet of paper. As the sample moves along the paper, it is exposed to an electrical
potential that is perpendicular to the direction of movement. The different species distribute themselves across the paper based on their charge and mobility, and
then fall from the unevenly notched lower edge of the paper into receivers.

Ultracentrifugation is a field-separation technique that utilises the centrifugal force generated by extremely high-speed rotation (50,000 revolutions per minute or
higher) to achieve separation. Various species, based on their masses, will precipitate at distinct velocities under these circumstances. The primary application of
ultracentrifugation is in the isolation of polymeric substances, specifically proteins and nucleic acids.
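The force at such rotation speeds is conventionally expressed as a multiple of gravity (relative centrifugal force, RCF = ω²r/g). A minimal sketch of that conversion; the 7 cm rotor radius is an assumed, illustrative geometry:

```python
import math

def rcf(rpm, radius_cm):
    """Relative centrifugal force (in multiples of g) at a given rotor radius.
    RCF = omega^2 * r / g, with omega in rad/s and r in metres."""
    omega = 2 * math.pi * rpm / 60.0   # angular velocity, rad/s
    g = 9.81                           # standard gravity, m/s^2
    return omega ** 2 * (radius_cm / 100.0) / g

# At 50,000 rpm and an illustrative 7 cm rotor radius, the field is
# on the order of 200,000 g:
force = rcf(50_000, 7.0)
```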

Field-flow fractionation encompasses a collection of techniques that rely on the application of a field perpendicular to a flowing stream within a small channel. Due
to the presence of friction along the channel walls, the liquid's velocity will be higher in the centre compared to the walls. In sedimentation field-flow fractionation,
the channel is rotated so that a centrifugal force acts perpendicular to the flow. Particles gradually settle towards the walls of the channel and
eventually attain a stable location. Due to the uneven flow velocity within the channel, substances will migrate at varying rates, leading to separation. The applied
force may manifest as centrifugal, electrical, or thermal. Field-flow fractionation is most appropriate for compounds that have particle or colloid sizes. One instance
is the segregation of latex particles employed in paints. Further techniques for particle separation are outlined below.

Electrolytic separations and purifications exploit the varying voltages needed to transform ions into neutral compounds. An exemplary instance of this approach is
the purification of copper. Copper ores generally contain trace levels of other metals that are not eliminated during the early operations used to convert the ores into
metal. In this process, a slab of copper having impurities and a sheet of copper with high purity are submerged in a solution of sulfuric acid dissolved in water. The
two copper pieces are then linked to a direct electric current source, with the pure copper acting as the cathode and the impure copper as the anode. During the
process, the anode undergoes dissolution, causing the metal atoms to transform into positively charged ions. These ions then move through the solution towards
the cathode. The voltage across the electrodes is controlled in such a way that only copper ions are converted into metal atoms and deposited on the cathode.
Certain impurities, such as zinc and nickel, persist in the solution as ions because their reversion to neutral metal atoms necessitates a higher voltage than that of
the system. On the other hand, impurities like silver and gold do not dissolve at all. Instead, as the surrounding atoms dissolve, they precipitate to the bottom of the
container as a slimy residue, which can be retrieved through alternative methods.
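The selectivity described above follows from standard reduction potentials: metals nobler than copper do not dissolve at the anode, while less noble metals dissolve but are not redeposited at the controlled cathode voltage. The sketch below uses approximate textbook potential values (in volts vs. the standard hydrogen electrode); the function name is ours, not from the source.

```python
# Approximate textbook standard reduction potentials, M^n+ + n e- -> M, in volts.
E_STANDARD = {"Cu": 0.34, "Zn": -0.76, "Ni": -0.26, "Ag": 0.80, "Au": 1.50}

def impurity_fate(metal, reference="Cu"):
    """Classify an impurity in copper electrorefining by comparing its
    standard reduction potential with copper's. Nobler metals (higher E)
    do not dissolve at the anode and fall as anode slime; less noble
    metals (lower E) dissolve but stay in solution, because reducing
    them would need a larger voltage than the cell applies."""
    e_metal, e_ref = E_STANDARD[metal], E_STANDARD[reference]
    if e_metal > e_ref:
        return "anode slime"
    if e_metal < e_ref:
        return "remains in solution"
    return "deposited at cathode"

fates = {m: impurity_fate(m) for m in ("Zn", "Ni", "Ag", "Au", "Cu")}
```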

Partitioning barriers

Various separation techniques rely on the permeation of molecules across semipermeable membranes. Membrane filtration is a process where substances move
over a membrane due to a difference in concentration between the two sides. Ultrafiltration expedites diffusion through the membrane by utilising a pressure
gradient. Electrodialysis involves the use of an electrical field to enhance the migration.

The unhindered movement of the separate constituents of a solution leads to the homogenisation of the concentration of each constituent throughout the solution.
Every component participates in this process: the solvent has the same tendency to diffuse from regions where it is in high concentration (the dilute solution)
to regions where it is in low concentration (the concentrated solution), just as the dissolved substance diffuses from regions of high to regions of low solute
concentration. In many separations the emphasis is placed on the tendency of the dissolved particles to migrate, whereas the matching tendency of the solvent
particles is often disregarded. Osmosis is a phenomenon in which only the solvent can move freely through a membrane that divides two regions of different
composition. The solvent, driven by its tendency to migrate from higher to lower solvent concentration, transfers from the less
concentrated solution to the more concentrated one. This process would persist indefinitely as long as the liquid levels on both sides of the membrane stay
constant. However, when the solvent traverses the membrane, the quantities of the two solutions become imbalanced, and the resultant disparity in pressure
ultimately halts the migration. The pressure disparity is referred to as the osmotic pressure of the solution.

Reverse osmosis is a separation process where a pressure greater than the osmotic pressure is applied to drive the solvent through a membrane against its
concentration gradient. This process is a highly efficient approach for concentrating impurities, reclaiming contaminated solvents, purifying dirty streams, and
removing salt from saltwater. Dialysis is a membrane-separation process commonly employed in biochemistry to eliminate dissolved salts from solutions containing
proteins or other macromolecules.
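The pressure that reverse osmosis must overcome can be estimated with the van 't Hoff relation, π = iMRT, valid for dilute, ideal solutions (a standard idealisation, not stated in the text above). A seawater-strength NaCl solution illustrates why desalination plants must apply pressures of tens of bar; the 0.6 mol/L figure is an approximate, illustrative value.

```python
def osmotic_pressure_bar(molarity, ions_per_formula=1, temp_K=298.15):
    """van 't Hoff estimate of osmotic pressure: pi = i * M * R * T.
    R is taken in L*bar/(mol*K), so the result is in bar.
    Only a rough estimate; real brines deviate from ideality."""
    R = 0.08314
    return ions_per_formula * molarity * R * temp_K

# Seawater-like NaCl, ~0.6 mol/L, dissociating into two ions (Na+ and Cl-):
pi = osmotic_pressure_bar(0.6, ions_per_formula=2)   # roughly 30 bar
```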

Particle distances

Sedimentation refers to the process by which particles settle out of a liquid or gas and accumulate at the bottom.

Particles such as viruses, colloids, bacteria, and microscopic particles of silica and alumina can be segregated into distinct fractions based on their sizes and
densities. Gravity causes the settling of suspensions containing large particles, and this difference in settling rates can be utilised to separate them. In order to
distinguish viruses and similar entities, it is important to utilise far stronger force fields, such as those generated in an ultracentrifuge.

Filtration and screening are processes used to separate particles or substances from a mixture based on their size or other physical properties.

Filtration is the utilisation of a permeable substance to segregate particles based on their varying sizes. If the pore diameters exhibit a high degree of uniformity, the
separation process can be quite sensitive to the size of the particles. However, this method is mostly employed for achieving large-scale separations, such as the
separation of liquids from suspended crystals or other solid substances. Pressure is typically employed to expedite filtration. Screening, by contrast, sorts dry
particles with a stack of sieves, the largest-aperture screen positioned at the top. The particulate mixture is placed on the uppermost sieve, and the apparatus is
agitated to move the particles through successive screens. When the operation concludes, the particles are distributed among the sieves according to their respective particle
diameters.

Elutriation refers to the process of separating particles based on their size and density by suspending them in a fluid and subjecting them to controlled flow

This technique involves the placement of particles within a vertical tube, through which a gradual upward flow of water (or another fluid) is maintained. The particles
descend through the water at velocities that fluctuate based on their dimensions and density. By gradually increasing the flow rate of the water, the particles that
sink at the slowest rate will be carried upwards by the fluid flow and eliminated from the tube. Intermediate particles will remain immobile, while the largest or most
compact particles will continue to descend. The flow can be further augmented to eliminate particles of the subsequent smaller dimensions. Therefore, with
meticulous regulation of the flow within the tube, particles can be sorted based on their size.
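The settling velocities that elutriation exploits can be estimated with Stokes' law for small spheres in laminar flow (a standard idealisation, not from the text above): a particle is carried out of the tube when the upward flow velocity exceeds its settling velocity. The particle densities and flow rate below are illustrative.

```python
def stokes_settling_velocity(d_m, rho_p, rho_f=1000.0, mu=1.0e-3, g=9.81):
    """Terminal settling velocity (m/s) of a small sphere in a fluid,
    from Stokes' law: v = g * d^2 * (rho_p - rho_f) / (18 * mu).
    d_m: diameter in metres; rho in kg/m^3; mu in Pa*s.
    Valid only for slow, laminar settling of fine particles."""
    return g * d_m ** 2 * (rho_p - rho_f) / (18.0 * mu)

def carried_upward(d_m, rho_p, upflow_m_s):
    """A particle leaves the top of the elutriation tube when the upward
    flow exceeds its settling velocity."""
    return upflow_m_s > stokes_settling_velocity(d_m, rho_p)

# Silica-like particles (~2650 kg/m^3) in water with a 1 mm/s upward flow:
fine = carried_upward(10e-6, 2650.0, 1e-3)     # 10 micron: carried out
coarse = carried_upward(100e-6, 2650.0, 1e-3)  # 100 micron: keeps settling
```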

Electrophoresis of particles and electrostatic precipitation

Particle electrophoresis is a technique that separates charged particles by applying an electric field. It is commonly used for separating viruses and bacteria.
Electrostatic precipitation is a technique used to remove fogs, which are suspensions of particles in the atmosphere or other gases. This approach involves
applying a high voltage to the gas phase, which causes the particles to acquire electrical charges and precipitate. The presence of charges induces an attractive
force between the particles and the walls of the separator, which have opposing charges. As a result, the particles relinquish their charges and descend into the
collectors.

Foam fractionation and flotation

Several techniques utilise foams to achieve separations. In many cases, the primary mechanism involved is the adsorption of substances onto gas bubbles or at
the interface between gas and liquid. Two techniques used for separation include foam fractionation, which separates molecular species, and flotation, which
separates particles. When a soap or detergent is dissolved in water, it produces foam when gas is passed through the solution. Collecting the foam is a method of
consolidating the soap. Flotation is the method by which particles are transported out from a liquid mixture by means of a froth. Here, a soap or another chemical
substance initially attaches to the surface of the particle, enhancing its capacity to stick to tiny air bubbles. The adhesive bubbles render the particle sufficiently
buoyant to ascend to the surface, facilitating its extraction. This procedure is highly significant for concentrating the precious components of minerals prior to
chemical processing in order to retrieve the metals that are present.

Barry L. Karger

Separation techniques: Chromatography

Ozlem Coskun







Abstract

Chromatography is a significant biophysical method that allows for the separation, identification, and purification of the constituents of a mixture for both
qualitative and quantitative study. Proteins can be purified by using their size and shape, overall charge, hydrophobic surface groups, and affinity for the
stationary phase. There are four separation approaches that rely on molecular properties and interaction type: ion exchange, surface adsorption, partition,
and size exclusion. Other chromatography methods utilise a fixed bed, such as column, thin-layer, and paper chromatography. Column chromatography is a
widely used technique for purifying proteins.

Topics: Chromatography, column chromatography, protein purification

Chromatography relies on the principle of separating molecules in a mixture by applying them onto a surface or into a solid, and then moving them apart from
each other using a mobile phase while a stationary phase remains fixed. The mechanisms influencing this separation process encompass molecular attributes
associated with adsorption (liquid-solid), partition (liquid-liquid), and affinity, or disparities in their molecular weights [1, 2]. Owing to these disparities,
certain constituents of the mixture are retained longer in the stationary phase and move slowly through the chromatographic system, while others pass swiftly
into the mobile phase and exit the system quickly [3].

Three components are fundamental to the chromatography procedure:

• The stationary phase consists of either a solid phase or a layer of liquid adsorbed onto a
solid support.

• The mobile phase consists of either a liquid or a gaseous component.

• The molecules to be separated

The fundamental factor that influences the separation of molecules from each other is the
interaction between the stationary phase, mobile phase, and the substances present in the
mixture. Partition-based chromatography technologies are highly efficient for separating and
identifying small molecules such as amino acids, carbohydrates, and fatty acids. Ion-exchange
and affinity chromatography are particularly efficient at separating large
molecules such as nucleic acids and proteins. Paper chromatography is employed for protein
separation and protein synthesis studies. Gas-liquid chromatography is utilised for separating
alcohol, ester, lipid, and amino groups, as well as observing enzymatic interactions.
Molecular-sieve chromatography is specifically used to determine the molecular weights of
proteins. Agarose-gel chromatography is employed for the purification of RNA, DNA particles,
and viruses [4].

In chromatography, the stationary phase refers to either a solid phase or a liquid phase that is
applied to the surface of a solid phase. The mobile phase that passes through the stationary
phase might be either a gas or a liquid. When the mobile phase consists of a liquid, it is
referred to as liquid chromatography (LC), whereas if it consists of a gas, it is known as gas
chromatography (GC). Gas chromatography is utilised for the analysis of gases, as well as
mixtures containing volatile liquids and solid substances. Liquid chromatography is
particularly employed for samples that are thermally unstable and non-volatile [5].

The primary objective of utilising chromatography, as a technique for quantitative analysis in addition to its separation capabilities, is to achieve an
acceptable separation within a suitable time frame. Many chromatographic techniques have been developed for this purpose, including column chromatography,
thin-layer chromatography (TLC), paper chromatography, gas chromatography, ion-exchange chromatography, gel-permeation chromatography, high-pressure
liquid chromatography, and affinity chromatography [6].

Chromatography encompasses various types, including column chromatography, ion-exchange chromatography, gel-permeation (molecular sieve) chromatography,
affinity chromatography, paper chromatography, thin-layer chromatography, gas chromatography, dye-ligand chromatography, hydrophobic interaction
chromatography, pseudoaffinity chromatography, and high-pressure liquid chromatography (HPLC).

Column chromatography is a technique used to separate and purify different components of a mixture based on their differing affinities for a stationary phase
and a mobile phase.



Proteins can be purified using chromatographic methods based on their distinct characteristics, such as size, shape, net charge, stationary phase utilised, and
binding capacity. Out of these procedures, column chromatography is the most commonly used. This method is employed for the purification of biomolecules. In
Figure 1, the sample to be separated is first applied to a column of stationary phase, followed by a wash buffer as the mobile phase. The sample then
passes through the column packing, which rests on a fibreglass support, and fractions are collected at the bottom of the device
as a function of both time and volume [7].

FIGURE 1

Column chromatography.
Ion-exchange chromatography

Ion-exchange chromatography relies on electrostatic interactions between charged protein groups and a solid support material, known as the matrix. The matrix
possesses an ion charge that is opposite to that of the protein to be separated, and the protein's affinity to the column is established by ionic interactions. Proteins
can be separated from the column by altering the pH, concentration of ion salts, or ionic strength of the buffer solution [8]. Anion-exchange matrices are matrices
with a positive charge that attract and bind negatively charged proteins. Matrices that are associated with negatively charged groups are referred to as cation-
exchange matrices. These matrices have the ability to adsorb proteins that have a positive charge, as seen in Figure 2 [9].
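The binding rule above can be summarised: a protein is net negative above its isoelectric point (and so binds an anion exchanger) and net positive below it (and so binds a cation exchanger). A hedged sketch of this rule of thumb; the pI and buffer pH values are illustrative:

```python
def binds_to_exchanger(pI, buffer_pH, exchanger):
    """Rule of thumb for ion-exchange binding: a protein is net negative
    above its pI and net positive below it. Anion exchangers (positively
    charged matrix) bind net-negative proteins; cation exchangers
    (negatively charged matrix) bind net-positive proteins."""
    net_negative = buffer_pH > pI
    if exchanger == "anion":
        return net_negative
    if exchanger == "cation":
        return not net_negative
    raise ValueError("exchanger must be 'anion' or 'cation'")

# A protein with an illustrative pI of 5.5 in a pH 7.4 buffer is net negative:
anion_binds = binds_to_exchanger(5.5, 7.4, "anion")    # binds
cation_binds = binds_to_exchanger(5.5, 7.4, "cation")  # flows through
```

Elution then works by reversing the rule: raising the salt concentration or shifting the pH toward the pI weakens the electrostatic attraction, as the text describes.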

FIGURE 2

Ion-exchange chromatography.


Gel-permeation (molecular sieve) chromatography

The fundamental idea of this technique involves the utilisation of dextran-based materials to segregate macromolecules according to their variations in molecular
sizes. This approach primarily serves to ascertain the molecular weights of proteins and reduce the salt concentrations of protein solutions [10]. The stationary
phase in a gel-permeation column is composed of inert molecules that have small pores. The solution, consisting of molecules of varying diameters, is constantly
fed through the column at a steady flow rate. Molecules that exceed the size of the pores are unable to pass through the gel particles and instead remain confined
inside a limited space between the particles. Macromolecules traverse the gaps between porous particles and exhibit swift movement within the column. Smaller
molecules diffuse into pores and, as their size decreases, they exit the column with higher retention durations (Figure 3) [11]. The column material most commonly
utilised is the Sephadex G type. In addition, dextran, agarose, and polyacrylamide are utilised as column materials [12].

FIGURE 3

Gel-permeation (molecular sieve) chromatography.


Affinity chromatography

This chromatography method is employed for the purification of enzymes, hormones, antibodies, nucleic acids, and particular proteins [13]. A ligand capable of
forming a complex with a certain protein (such as dextran, polyacrylamide, cellulose, etc.) binds to the filling material of the column. The ligand forms a compound
with a particular protein, which is then bound to a solid support (matrix) and kept in the column, while unbound proteins exit the column. Subsequently, the protein
that is attached to the column is released by modifying its ionic strength through pH adjustment or the introduction of a salt solution (Figure 4) [14].

FIGURE 4

Affinity chromatography.
Paper chromatography



The support material in paper chromatography is a water-saturated layer of cellulose. A thick filter paper serves as the support, with water droplets settling in its
pores to form the stationary "liquid phase." The mobile phase is a suitable solvent placed in a developing tank. Paper chromatography is thus a form of
liquid-liquid partition chromatography.

Thin-layer chromatography (TLC)

Thin-layer chromatography is a solid-liquid adsorption form of chromatography. This approach involves the use of a solid adsorbent substance that is applied as a
coating on glass plates, known as the stationary phase. Solid substances such as alumina, silica gel, and cellulose, as used in column chromatography, can be
employed as adsorbent materials. In this method the mobile phase moves upward through the stationary phase: the solvent ascends the thin plate by capillary
action. The components of the mixture, previously spotted with a pipette near the bottom of the plate, are carried upwards at varying rates, and the analytes are
thereby separated. The rate at which a substance travels
upwards is influenced by the polarity of both the solid phase and the solvent [16].

If the molecules in the sample lack colour, then fluorescence, radioactivity, or a specific chemical reagent can be employed to generate a visible coloured
product, hence facilitating the identification of their locations on the chromatogram. The resulting spots can be observed under ambient lighting or
ultraviolet (UV) lighting. The spatial coordinates of each molecule in the mixture can be determined by computing the ratio of the distances covered by the molecule
and the solvent. The term used to denote this measurement value is relative mobility, which is represented by the symbol Rf. The Rf value is employed to provide a
qualitative description of the molecules [17].
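The Rf calculation is simply the ratio of two measured distances, each taken from the origin spot. A minimal sketch with illustrative plate measurements:

```python
def rf_value(spot_distance_cm, solvent_front_cm):
    """Rf = distance travelled by the compound / distance travelled by the
    solvent front, both measured from the origin. Always between 0 and 1."""
    if not 0 <= spot_distance_cm <= solvent_front_cm:
        raise ValueError("spot cannot travel farther than the solvent front")
    return spot_distance_cm / solvent_front_cm

# Illustrative TLC plate: solvent front at 8.0 cm, compound spot at 3.2 cm.
rf = rf_value(3.2, 8.0)   # Rf = 0.4
```

Because Rf depends on the plate, solvent, and temperature, it is used for qualitative comparison against standards run under the same conditions, as the text notes.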

Gas chromatography is a technique used to separate and analyse the components of a mixture based on their different affinities for a stationary phase and a
mobile phase.

The method involves the use of a column containing a stationary phase, which consists of a liquid adsorbed onto an inert solid surface. Gas-liquid
chromatography thus separates compounds by their partition between a gaseous mobile phase and a liquid stationary phase. The carrier phase of the system
is composed of gases such as helium (He) or nitrogen (N2). An inert gas is introduced into a column at high pressure, serving as the mobile phase. The material to
be analysed undergoes vaporisation and transitions into a mobile phase in the form of a gas. The constituents present in the sample are distributed between the
mobile phase and the stationary phase on the solid support. Gas chromatography is a versatile and efficient technique that allows for the precise separation of
extremely small compounds with high sensitivity and speed. It is employed in the isolation of minute quantities of analytes [18].

Dye-ligand chromatography

This approach was developed by demonstrating the capacity of several enzymes to bind purine nucleotides using Cibacron Blue F3GA dye [19]. The planar ring
structure containing negatively charged groups bears resemblance to the molecular structure of NAD. This parallel has been confirmed by demonstrating the
attachment of Cibacron Blue F3GA dye to the adenine and ribose binding sites of NAD. The dye functions as a chemical compound that closely resembles ADP-
ribose. The adsorption capacity of these adsorbents is 10–20 times greater than that of other affinity adsorbents. The adsorbed proteins can be removed from
the column by elution with high-ionic-strength solutions at suitable pH, taking advantage of the ion-exchange property of the adsorbent [20, 21].

Hydrophobic interaction chromatography (HIC)

This method utilises adsorbents that have been manufactured as column material for the purpose of ligand binding in affinity chromatography. The HIC approach
relies on hydrophobic interactions between side chains attached to the chromatographic matrix [22, 23].

Pseudoaffinity chromatography

Anthraquinone dyes and azo-dyes can function as ligands due to their strong affinity, particularly towards dehydrogenases, kinases, transferases, and reductases.
One widely recognised kind of this sort of chromatography is immobilised metal affinity chromatography (IMAC) [24].

High-pressure liquid chromatography (HPLC)

This chromatography technology enables rapid structural and functional investigation, as well as purification, of many compounds. This technology produces
precise outcomes in the separation and characterization of amino acids, carbohydrates, lipids, nucleic acids, proteins, steroids, and other physiologically active
substances. In High Performance Liquid Chromatography (HPLC), the mobile phase flows through columns at pressures ranging from 10 to 400 atmospheres, with
a flow velocity of 0.1 to 5 centimetres per second. This technique utilises small particles and applies high pressure to enhance the rate of solvent flow, hence
increasing the separation efficiency of HPLC. Additionally, this method allows for rapid completion of the analysis.

The fundamental elements of a High Performance Liquid Chromatography (HPLC) instrument include a reservoir for the solvent, a high-pressure pump, a pre-made
column, a detector, and a recorder. The computerised system regulates the length of separation, and material is accumulated [25].

Chromatography's applications in medicine

The chromatography technique is a significant tool for biochemists and can be simply employed in clinical laboratory tests. For example, paper chromatography is
employed to ascertain certain varieties of sugar and amino acids in physiological fluids that are linked to inherited metabolic problems. Laboratories employ gas
chromatography to quantify steroids, barbiturates, and lipids. The chromatographic approach is employed for the separation of vitamins and proteins.

In conclusion

Initially, chromatographic techniques were employed to separate compounds based on their colour, as was the case with plant pigments. Over time, its
application field has expanded. Currently, chromatography is widely acknowledged as a very sensitive and efficient technique for separation. Column
chromatography is a valuable technique for separating and determining substances. Column chromatography is a protein purification technique that exploits one of
the distinctive properties of proteins. In addition, these techniques are employed to regulate the quality of a protein. The HPLC approach possesses numerous
advantageous characteristics, notably its heightened sensitivity, rapid turnover rate, and ability to serve as a quantitative method. Additionally, it is capable of
purifying a wide range of substances, such as amino acids, proteins, nucleic acids, hydrocarbons, carbohydrates, medicines, antibiotics, and steroids.

4.3 REFERENCES

1. Cuatrecasas P, Wilchek M, Anfinsen CB. Selective enzyme purification by affinity chromatography. Proc Natl Acad Sci U S A. 1968;61:636–43.

2. Porath J. From gel filtration to adsorptive size exclusion. J Protein Chem. 1997;16:463–8.



3. Harris DC. Exploring chemical analysis. 3rd ed. W.H. Freeman & Co; 2004.

4. Gerberding SJ, Byers CH. Preparative ion-exchange chromatography of proteins from dairy whey. J Chromatogr A. 1998;808:141–51.

5. Donald PL, Lampman GM, Kritz GS, Engel RG. Introduction to organic laboratory techniques. 4th ed. Thomson Brooks/Cole; 2006. pp. 797–817.

6. Harwood LM, Moody CJ. Experimental organic chemistry: Principles and practice. Oxford: Blackwell Science; 1989. pp. 180–5.



Step 4

Self Assessment - Answer the following questions to self-assess your knowledge of the subject.

Q 1: Briefly describe ‘Size Exclusion Chromatography’ (SEC).

Size exclusion chromatography (SEC) separates molecules by their size by filtering using a gel matrix. The gel is composed of spherical beads that include pores
with a precise distribution of sizes. Separation arises from the selective inclusion or exclusion of molecules of varying sizes within the pores of the matrix.

Size Exclusion Chromatography is a technique used to separate and analyse molecules based on their size.

Proteins exist in various sizes and can be separated on that basis. However, this method has relatively low resolution, so only coarse separations are feasible in
commercial settings.

Once again, beads are employed, but this time they are composed of a permeable matrix that allows proteins to diffuse through. The diffusion rate of proteins with
different sizes into the beads is determined by the pore size in the matrix, and certain proteins are entirely prevented from entering.

The column is subsequently washed with buffer alone, causing the proteins to elute in sequence, with the largest proteins coming out first. By selecting pore
sizes appropriate to your protein, you can effectively separate it from some troublesome impurities during the purification
process.
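The elution behaviour described above can be sketched as a simple model: proteins above the column's exclusion limit cannot enter the pores and elute first, in the void volume, while smaller proteins elute in order of decreasing size. The molecular weights and exclusion limit below are illustrative assumptions, not data from the source.

```python
def sec_elution_order(proteins, exclusion_limit_kda):
    """Predict elution order in size-exclusion chromatography.
    Proteins at or above the exclusion limit cannot enter the pores and
    elute first (together, in the void volume); smaller proteins then
    elute in order of decreasing size. `proteins` maps name -> kDa."""
    excluded = sorted((n for n, mw in proteins.items()
                       if mw >= exclusion_limit_kda),
                      key=lambda n: -proteins[n])
    included = sorted((n for n, mw in proteins.items()
                       if mw < exclusion_limit_kda),
                      key=lambda n: -proteins[n])
    return excluded + included

# Hypothetical mixture on a column with an assumed 600 kDa exclusion limit:
mix = {"IgM": 970, "IgG": 150, "albumin": 66, "lysozyme": 14}
order = sec_elution_order(mix, 600)
```

Real SEC resolution depends on the pore-size distribution and molecular shape, not molecular weight alone, which is why the text calls the method coarse.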

Chromatogram for Size Exclusion

Module-1 5-4 - Biopharmaceuticals - Downstream Processing

This module covers the process of biopharmaceuticals in their downstream processing stage, which involves the separation of cells from the fermentation broth,
purification and concentration of desired product, and waste disposal or recycle. The main stages of this process include cross flow/tangential flow filtration
(CFF/TFF), microfiltration & ultrafiltration, diafiltration (DF), column chromatography, and size exclusion chromatography.

CFF/TFF is a membrane filtration process in which the feed stream flows tangentially across the membrane surface, so that retained material is swept along with
the flow rather than accumulating on the filter. Microfiltration removes contaminants from a fluid by passage through a microporous membrane, while ultrafiltration
(UF) is a variety of membrane filtration in which hydrostatic pressure forces a liquid against a semipermeable membrane. This separation process is used in
industry and research for purifying and concentrating macromolecular solutions, especially protein solutions.

Diafiltration (DF) is a TFF process that can be performed in combination with other categories of separation to enhance product yield or purity. In processes where
the product is in the retentate, diafiltration washes components out of the product pool into the filtrate, exchanging buffers and reducing the concentration of
undesirable species. When the product is in the filtrate, diafiltration washes it through the membrane into a collection vessel.



Column chromatography involves several techniques based around using a vertical column filled with solid support, with the sample to be separated placed on top
of this support. The rest of the column is filled with a solvent that moves the sample through the column under positive pressure. Differences in rates of movement
through the solid medium translate to different exit times from the bottom of the column for the various elements of the original sample.

Affinity chromatography relies on the product protein binding specifically to an immobilized ligand while the remaining proteins pass through the column.
Monoclonal antibodies are the preferred ligands, but they are expensive and must themselves be purified. They typically bind the target with such tenacity that
harsh elution conditions may be required, which can inactivate the product protein or degrade the monoclonal antibody ligand.

Proteins can be separated using various methods, including size exclusion chromatography, ionic exchange chromatography, and hydrophobic interaction chromatography (HIC). Size exclusion chromatography uses beads made from a porous matrix; the size of the pores determines the rate at which proteins of various sizes diffuse into the beads, and some proteins are excluded entirely. The column is then eluted with buffer alone, with the largest proteins emerging first.

Ionic exchange chromatography is among the most widely used protein purification and concentration methods; it relies on the charges of proteins to isolate them. Hydrophobic interaction chromatography instead exploits the hydrophobic properties of some proteins: the more hydrophobic a protein is, the more strongly it binds to the column.

HIC separations are designed using the opposite conditions of those used in ion exchange chromatography. In this separation, a buffer with a high ionic strength,
usually ammonium sulfate, is initially applied to the column. The salt in the buffer reduces the solvation of sample solutes, and as solvation decreases, hydrophobic
regions that become exposed are adsorbed by the medium.

Specific guidance for APIs manufactured by cell culture or fermentation using natural or recombinant organisms is provided in the ICH Harmonised Tripartite
Guideline Good Manufacturing Practice Guide for Active Pharmaceutical Ingredients Q7. This guideline covers specific controls for APIs or intermediates
manufactured by cell culture or fermentation using natural or recombinant organisms and is not intended to be a stand-alone section. The principles of fermentation
for "classical" processes for production of small molecules and for processes using recombinant and non-recombinant organisms for production of proteins and/or
polypeptides are the same, although the degree of control will differ.

In summary, size exclusion chromatography, ionic exchange chromatography, and hydrophobic interaction chromatography are essential methods for protein purification and concentration. Selecting media, pore sizes, and conditions appropriate to each protein provides the degree of control required for biotechnological processes used to produce proteins and polypeptides.

The term "biotechnological process" refers to the use of cells or organisms generated or modified by recombinant DNA, hybridoma, or other technology to produce
APIs. These APIs typically consist of high molecular weight substances, such as proteins and polypeptides, for which specific guidance is given in this Section.
Certain APIs of low molecular weight, such as antibiotics, amino acids, vitamins, and carbohydrates, can also be produced by recombinant DNA technology. The
level of control for these types of APIs is similar to that employed for classical fermentation.

Classical fermentation refers to processes that use microorganisms existing in nature and/or modified by conventional methods (e.g., irradiation or chemical
mutagenesis) to produce APIs. APIs produced by "classical fermentation" are normally low molecular weight products such as antibiotics, amino acids, vitamins,
and carbohydrates. Control of bioburden, viral contamination, and/or endotoxins during manufacturing and monitoring of the process at appropriate stages may be
necessary depending on the source, method of preparation, and the intended use of the API or intermediate.

Appropriate controls should be established at all stages of manufacturing to assure intermediate and/or API quality. This Guide covers cell culture/fermentation
from the point at which a vial of the cell bank is retrieved for use in manufacturing. Appropriate equipment and environmental controls should be used to minimize
the risk of contamination. Process controls should take into account maintenance of the Working Cell Bank (where appropriate), proper inoculation and expansion
of the culture, control of critical operating parameters during fermentation/cell culture, monitoring of the process for cell growth, viability, and productivity, harvest
and purification procedures that remove cells, cellular debris, and media components while protecting the intermediate or API from contamination and loss of
quality, monitoring bioburden and endotoxin levels at appropriate stages of production, and viral safety concerns as described in ICH Guideline Q5A Quality of
Biotechnological Products: Viral Safety Evaluation of Biotechnology Products Derived from Cell Lines of Human or Animal Origin.

The text provides guidelines for the downstream processing of biopharmaceuticals, focusing on cell culture and fermentation processes. It emphasizes the
importance of closed or contained systems when necessary, and controls and procedures to minimize the risk of contamination. If manipulations using open
vessels are performed, they should be performed in a biosafety cabinet or similar controlled environment. Personnel should be appropriately gowned and take special precautions when handling the cultures.

Critical operating parameters, such as temperature, pH, agitation rates, addition of gases, and pressure, should be monitored to ensure consistency with the
established process. Cell growth, viability, and productivity should also be monitored. Cell culture equipment should be cleaned and sterilized after use, and



fermentation equipment should be cleaned and sanitized or sterilized. Culture media should be sterilized before use when appropriate to protect the quality of the
API.
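
A range check over such critical parameters can be sketched as follows. The parameter names and alert limits here are illustrative placeholders only; real limits come from the validated process:

```python
# Illustrative alert limits only -- real limits come from the validated process.
LIMITS = {
    "temperature_C": (36.5, 37.5),
    "pH":            (6.9, 7.3),
    "agitation_rpm": (80, 120),
    "pressure_bar":  (0.1, 0.5),
}

def out_of_range(reading: dict) -> list:
    """Return the names of parameters whose readings fall outside limits.

    reading -- dict mapping a parameter name in LIMITS to its current value.
    """
    alarms = []
    for name, value in reading.items():
        lo, hi = LIMITS[name]
        if not (lo <= value <= hi):
            alarms.append(name)
    return alarms

print(out_of_range({"temperature_C": 37.0, "pH": 7.6,
                    "agitation_rpm": 100, "pressure_bar": 0.3}))
```

In this example only the pH reading (7.6) breaches its limits, so it is the single parameter flagged for follow-up.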

Procedures should be in place to detect contamination and determine the course of action to be taken. This includes procedures to determine the impact of the
contamination on the product and those to decontaminate the equipment and return it to a condition for use in subsequent batches. Foreign organisms observed
during fermentation processes should be identified and their effect on product quality should be assessed, if necessary. Records of contamination events should be
maintained, and shared equipment may warrant additional testing after cleaning between product campaigns to minimize cross-contamination.

Handling, isolation, and purification steps should be performed in equipment and areas designed to minimize the risk of contamination. Filtration and Tangential
Flow Filtration (TFF) procedures should be adequate to ensure consistent quality recovery of intermediate or API. All equipment should be properly cleaned and
sanitized after use. If open systems are used, purification should be performed under environmental conditions appropriate for product preservation. Additional
controls, such as the use of dedicated chromatography resins or additional testing, may be appropriate if equipment is to be used for multiple products.

Viral removal and inactivation steps are critical processing steps for some processes and should be performed within their validated parameters. Open processing
should be performed in separate areas with separate air handling units and appropriate cleaning and sanitization before reuse.

Additional details: Size-exclusion chromatography

Size-exclusion chromatography (SEC), sometimes referred to as gel permeation chromatography (GPC) or gel filtration chromatography, separates molecules on the basis of their size, specifically their hydrodynamic diameter or hydrodynamic volume. Smaller molecules can penetrate the pores of the media and are therefore transiently trapped and removed from the mobile-phase flow; the average residence time in the pores depends on the effective dimensions of the analyte molecules. Molecules that exceed the typical pore size of the packing material cannot enter the pores, experience minimal retention, and are the first to elute. SEC is generally a low-resolution technique, which makes it well suited to the final "polishing" stage of a purification. It is also valuable for elucidating the tertiary and quaternary structures of purified proteins, particularly because it can be conducted under native solution conditions.

Size-exclusion chromatography at a glance:

- Acronym: SEC
- Classification: chromatography
- Typical analytes: macromolecules, synthetic polymers, biomolecules
- Manufacturers of columns and media include Cytiva, Bio-Rad, Bio-Works, emp Biotech, Knauer, and Phenomenex
- Related techniques: high-performance liquid chromatography, aqueous normal-phase chromatography, ion exchange chromatography, micellar liquid chromatography

Equipment for running size-exclusion chromatography: the buffer is pumped through the column by a computer-controlled device.

Size-exclusion chromatography, also known as molecular sieve chromatography,[1] is a chromatographic method in which molecules in solution are separated by
their size, and in some cases molecular weight.[2] It is usually applied to large molecules or macromolecular complexes such as proteins and industrial polymers.[3]
Gel-filtration chromatography is the term used when an aqueous solution carries the sample through the column, whereas gel permeation chromatography is the term used when an organic solvent serves as the mobile phase. The chromatography column is packed with small, porous beads, often made of dextran, agarose, or polyacrylamide polymers. The pore diameters of the beads are used to estimate the dimensions of macromolecules. SEC is a widely employed technique for polymer characterisation because of its capacity to yield accurate molar mass distribution (Mw) data for polymers.

Size exclusion chromatography (SEC) is distinguished from other chromatographic procedures in that separation relies solely on molecular size, rather than on any chemical or physical interaction between the analyte and the stationary phase.

Uses

Size-exclusion chromatography is mostly utilised to separate proteins and other water-soluble polymers by size, whereas gel permeation chromatography is employed to determine the molecular-weight distribution of organic-soluble polymers. Neither technique should be confused with gel electrophoresis, in which an electric field drives molecules through a gel according to their electrical charges. The duration of solute retention in a pore depends on the pore's size: larger solutes can access a smaller fraction of the pore volume, and vice versa, so a small solute resides in the pores longer than a large one.

Size exclusion chromatography can also be employed to analyse the stability and properties of natural organic materials present in water.[6] Margit B. Müller,
Daniel Schmitt, and Fritz H. Frimmel conducted a study where they analysed water samples from various locations worldwide to assess the long-term stability of
natural organic matter. Despite its widespread use in the analysis of natural organic material, size exclusion chromatography has certain limitations. One
disadvantage is the absence of a standardised molecular weight marker, resulting in the inability to compare the obtained data with a reference. If an exact
molecular weight is necessary, alternative procedures should be employed.

Benefits

The benefits of this technique include effective separation of large molecules from smaller molecules with a minimal volume of eluate,[7] and the ability to apply different solutions without disrupting the filtration process, while simultaneously maintaining the biological activity of the particles being separated. The technique is commonly employed in combination with methods that separate molecules by other properties, such as acidity, basicity, charge, and affinity for specific substances. Size exclusion chromatography offers precise and efficient separation in a short time with narrow peaks, resulting in high sensitivity. Furthermore, because the solutes do not interact with the stationary phase, sample loss is minimal.

Another benefit of this approach is that, in some instances, the approximate molecular weight of a compound can be determined. The compound's passage through the gel (stationary phase) is governed by its shape and size. To estimate molecular weight, the elution volumes of standard compounds are measured, converted to the partition coefficient Kav, and plotted against the logarithm of their molecular weights, log(Mw), where Mw is the molecular mass. The resulting plot serves as a calibration curve for estimating the molecular weight of the compound of interest. Here Ve is the elution volume of intermediate molecules, those with partial access to the pores of the beads; Vt is the total column volume, the sum of the volume between the beads and the volume contained within them; and Vo is the void volume, the elution volume of the largest molecules, which elute first. Drawbacks include the limited capacity to resolve bands on the short time scale of the chromatogram, and the requirement for roughly a 10% difference in molecular mass to achieve satisfactory resolution.
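
The calibration described above can be sketched numerically using the standard partition coefficient Kav = (Ve - Vo)/(Vt - Vo). Everything in this example is made up for illustration: the void volume Vo, total volume Vt, the standards, and their elution volumes are assumed values, not data from any real column:

```python
import math

def kav(ve: float, vo: float, vt: float) -> float:
    """Partition coefficient K_av = (Ve - Vo) / (Vt - Vo)."""
    return (ve - vo) / (vt - vo)

def fit_line(xs, ys):
    """Ordinary least-squares slope and intercept for y = m*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    m = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return m, my - m * mx

# Illustrative standards (Mw in Da, elution volume in mL); Vo and Vt assumed.
vo, vt = 40.0, 100.0
standards = [(670_000, 46.0), (158_000, 58.0), (44_000, 70.0), (17_000, 79.0)]

xs = [math.log10(mw) for mw, _ in standards]
ys = [kav(ve, vo, vt) for _, ve in standards]
m, b = fit_line(xs, ys)        # K_av falls roughly linearly with log10(Mw)

# Estimate the Mw of an unknown eluting at 65 mL:
k_unknown = kav(65.0, vo, vt)
mw_est = 10 ** ((k_unknown - b) / m)
print(f"estimated Mw ~ {mw_est:,.0f} Da")
```

The unknown elutes between the 44 kDa and 158 kDa standards, so the fitted curve places its mass between those two values.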

Discovery

The approach was pioneered in 1955 by Grant Henry Lathe and Colin R. Ruthven, working at Queen Charlotte's Hospital in London; they later received the John Scott Award for the invention.[12] Lathe and Ruthven used starch gels as the matrix, and Jerker Porath and Per Flodin subsequently introduced dextran gels;[13] other gels with size-fractionation properties include agarose and polyacrylamide. A short review of these developments has been published.[14] Efforts were made to separate synthetic high polymers into fractions, but it was not until 1964 that J. C. Moore of the Dow Chemical Company published his work on preparing gel permeation chromatography (GPC) columns from cross-linked polystyrene with controlled pore size, which sparked a surge of research in the area. It was quickly recognised that, with proper calibration, GPC could provide accurate molar mass and molar mass distribution information for synthetic polymers. Because few other approaches were then available, GPC quickly gained widespread usage.

Theory and method



Agarose-based SEC columns used for protein purification on an AKTA FPLC machine

Size exclusion chromatography (SEC) is predominantly utilised for the examination of macromolecules such as proteins or polymers. SEC works by transiently retaining smaller molecules within the pores of the adsorbent material, the "stationary phase". The procedure is commonly conducted in a column, typically a hollow tube densely packed with micron-scale polymer beads containing pores of varying diameters; these pores may be indentations on the surface or channels through the bead. As the solution travels down the column, some particles enter the pores. Larger particles can enter fewer pores, and the largest molecules cannot enter the pores at all and simply pass by them. Consequently, larger molecules move through the column faster than smaller ones; in other words, the retention time increases as the size of the molecule decreases.

A prerequisite for SEC is that the analyte does not interact with the surface of the stationary phase. Ideally, any differences in elution time between analytes are due solely to the solute volume the analytes can access, rather than to chemical or electrostatic interactions with the stationary phase. Thus, a very small molecule that can penetrate every region of the stationary phase's pore system can access a combined volume equal to the total pore volume plus the interparticle volume. This small molecule elutes late, after it has traversed the entire pore and interparticle volume, roughly 80% of the column volume. At the other extreme, a very large molecule that cannot pass through the smaller pores can access only the space between particles (~35% of the column volume), and elutes earlier, once this
portion of the mobile phase has flowed through the column. The fundamental concept of SEC is that particles of different sizes pass through the stationary phase at different velocities, which separates a solution of particles according to size. Provided all the particles are loaded simultaneously, or nearly so, particles of the same size elute together.

Nevertheless, the theory of size exclusion chromatography (SEC) has faced a significant challenge in determining an appropriate molecular size parameter to
effectively separate molecules of different types, given the existence of many metrics such as the radius of gyration and the hydrodynamic radius. Benoit and his
colleagues discovered a strong association between elution volume and hydrodynamic volume, which is a measure of molecule size based on dynamic properties.
This correlation held true for various chain architectures and chemical compositions. The correlation obtained using the hydrodynamic volume was widely accepted
as the foundation for universal SEC calibration.

However, the complete understanding of the interpretation of SEC data using the hydrodynamic volume, which is a size determined by dynamic parameters, is
lacking. This is because the SEC method is usually performed at moderate flow rates, which minimises the impact of hydrodynamic factors on the separation
process. Both theory and computer simulations rely on a thermodynamic separation principle. This principle states that the separation process is governed by the
equilibrium distribution of solute macromolecules between two phases. These phases include a dilute bulk solution phase in the interstitial space and confined
solution phases within the pores of the column packing material. According to this hypothesis, it has been demonstrated that the mean span dimension (the
average maximum projection onto a line) is the important size parameter for the separation of polymers in pores. While the matter remains unresolved, there is a
substantial likelihood of a significant correlation between the mean span dimension and the hydrodynamic volume.

A size exclusion column

Every size exclusion column possesses a specific range of molecular weights that can be effectively separated. The exclusion limit refers to the maximum
molecular weight that can be accommodated within the effective range of the column, beyond which molecules are too big to be retained by the stationary phase.
The lower limit of the range is set by the permeation threshold, the molecular weight of a molecule small enough to enter all the pores of the stationary phase. Molecules below this threshold are so small that they all elute together as a single band.[7]

The solution collected at the end of the column is referred to as the eluate. The void volume contains any particles too large to enter the medium, and the solvent volume is known as the column volume.

The following materials are typically used for porous gel beads in size exclusion chromatography:[20]

Sr. No   Material and trade name   Fractionation range (molecular mass, Da)
 1       Sephadex G-10             0 to 700
 2       Sephadex G-25             1,000 to 5,000
 3       Sephadex G-50             1,500 to 30,000
 4       Sephadex G-75             3,000 to 70,000
 5       Sephadex G-100            4,000 to 150,000
 6       Sephadex G-150            5,000 to 300,000
 7       Sephadex G-200            5,000 to 800,000
 8       Bio-gel P-2               100 to 1,800
 9       Bio-gel P-6               1,000 to 6,000
10       Bio-gel P-60              3,000 to 60,000
11       Bio-gel P-150             15,000 to 150,000
12       Bio-gel P-300             16,000 to 400,000
13       Sepharose 2B              2 x 10^6 to 25 x 10^6
14       Sepharose 4B              3 x 10^5 to 3 x 10^6
15       Sepharose 6B              10^4 to 20 x 10^6
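
The table above can drive a simple media lookup: given a target molecular mass, list the gels whose fractionation range covers it. The dictionary below transcribes a subset of the table (reading the Sepharose entries as powers of ten); the helper function is a hypothetical convenience for illustration, not a vendor tool:

```python
# Fractionation ranges (Da) transcribed from the table above.
MEDIA = {
    "Sephadex G-25":  (1_000, 5_000),
    "Sephadex G-75":  (3_000, 70_000),
    "Sephadex G-100": (4_000, 150_000),
    "Sephadex G-200": (5_000, 800_000),
    "Bio-gel P-60":   (3_000, 60_000),
    "Bio-gel P-150":  (15_000, 150_000),
    "Sepharose 4B":   (300_000, 3_000_000),
}

def suitable_media(mass_da: float) -> list:
    """Media whose fractionation range covers the given molecular mass."""
    return [name for name, (lo, hi) in MEDIA.items() if lo <= mass_da <= hi]

print(suitable_media(66_000))   # e.g. a ~66 kDa protein such as serum albumin
```

A 66 kDa protein falls inside the ranges of Sephadex G-75, G-100, G-200, and Bio-gel P-150, but just outside Bio-gel P-60 (upper limit 60,000 Da).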

Factors affecting filtration

A cartoon illustrating the theory behind size exclusion chromatography

In practice, particles in solution vary in size, so a particle that would normally be excluded by a pore may pass right by it. Moreover, the stationary-phase particles are not precisely uniform; both the particles and the pores they contain may vary in size. Elution curves therefore resemble Gaussian distributions. Interactions between the stationary phase and particles can also affect retention times, although column manufacturers are careful to select inert stationary phases to mitigate this problem.

As with other chromatographic techniques, resolution improves with column length, while column capacity increases with column diameter. Proper column packing is crucial for the highest possible resolution: over-packing the column can collapse the pores in the beads, reducing resolution, while an under-packed column reduces the relative surface area of stationary phase available to smaller species, shortening the time those species are retained in the pores. Unlike in affinity chromatography methods, a solvent head at the top of the column can significantly reduce resolution by letting the sample diffuse before loading, broadening the elution downstream.

Analysis



In basic manual chromatography, the liquid that carries the sample through the column is collected in fixed volumes known as fractions. The more similar two particles are in size, the more likely they are to end up in the same fraction rather than being detected separately. More advanced setups resolve this issue by continuously monitoring the eluent.

Standardization (calibration) of a size exclusion column

Size-exclusion chromatogram after bioanalytical continuous-elution gel chromatography of a plant sample

Spectroscopic techniques are commonly used to analyse the collected fractions and quantify the concentration of the eluted particles; refractive index (RI) and ultraviolet (UV) detection are the most widely used. When separating spectroscopically similar species, such as during biological purification, additional procedures may be required to determine the composition of each fraction. The eluent flow can also be analysed continuously using refractive index (RI), low-angle laser light scattering (LALLS), multi-angle laser light scattering (MALS), ultraviolet (UV), and viscosity measurements.

The elution volume (Ve) decreases approximately linearly as the logarithm of the molecular hydrodynamic volume increases. Columns are typically calibrated with 4-5 standard samples, such as folded proteins of known molecular weight, plus a very large molecule such as thyroglobulin to determine the void volume. (Blue dextran is not advisable for Vo determination because it is heterogeneous and may give variable results.) The elution volumes of the standards are divided by the elution volume of the thyroglobulin (Ve/Vo) and plotted against the logarithm of the standards' molecular weights.

Applications

Applications in biochemistry

SEC, or size-exclusion chromatography, is typically regarded as a low-resolution chromatographic technique; it does not distinguish similar species well, and so it is commonly used as the last stage of a purification process. Because it can be conducted under native solution conditions, preserving macromolecular interactions, the method can ascertain the quaternary structure of purified proteins that have slow exchange times. SEC also probes tertiary structure, since it measures hydrodynamic volume rather than molecular weight, allowing folded and unfolded forms of the same protein to be distinguished. As an illustration, the apparent hydrodynamic radius of a typical protein domain might be 14 Å in the compact state and 36 Å in the extended state. SEC can separate these two conformations, the compact structure eluting later because of its smaller effective size.

Polymer synthesis

SEC can serve as a measure of both the size and the polydispersity of a synthesised polymer, that is, its ability to determine the distribution of sizes among the polymer molecules. By first running standards of well-defined size, a calibration curve can be established and used to determine the sizes of the polymer molecules of interest in the chosen analysis solvent, often THF. Alternatively, light scattering and/or viscometry can be combined with SEC to obtain absolute molecular weights that do not rely on calibration with standards of known molecular weight. Because two polymers of the same molecular weight can differ in size, the absolute determination methods are generally preferred. A typical SEC system can give polymer scientists the size and polydispersity of a sample quickly, often within about thirty minutes. Preparative SEC can also be used to fractionate polymers on an analytical scale.
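
The polydispersity that SEC reports can be made concrete with the standard number-average (Mn) and weight-average (Mw) molar mass definitions. The distribution below is a toy example, not measured data:

```python
def averages(counts_and_masses):
    """Number-average (Mn) and weight-average (Mw) molar masses and the
    polydispersity index (PDI = Mw/Mn) for a discrete distribution.

    counts_and_masses -- iterable of (N_i, M_i) pairs: number of chains
    and molar mass (g/mol) of each species.
    """
    pairs = list(counts_and_masses)
    n_total = sum(n for n, _ in pairs)
    mass_total = sum(n * m for n, m in pairs)
    mn = mass_total / n_total
    mw = sum(n * m * m for n, m in pairs) / mass_total
    return mn, mw, mw / mn

# A toy distribution: 50 chains at 10 kDa, 30 at 20 kDa, 20 at 40 kDa.
mn, mw, pdi = averages([(50, 10_000), (30, 20_000), (20, 40_000)])
print(f"Mn = {mn:.0f}, Mw = {mw:.0f}, PDI = {pdi:.2f}")
```

Because Mw weights heavier chains more strongly, Mw >= Mn always holds, and a PDI of exactly 1 indicates a perfectly monodisperse sample.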

Limitations



In SEC, mass is not measured directly: separation depends on the hydrodynamic volume of the polymer molecules, i.e., the space a polymer molecule occupies in solution. Nevertheless, an estimated molecular weight can be determined from SEC data using the known correlation between molecular weight and hydrodynamic volume for polystyrene, which serves as a reference material. However, the correlation between hydrodynamic volume and molecular weight varies among polymers, so the result is only an approximation. A further disadvantage is the potential for interaction between the stationary phase and the analyte: any such interaction increases the elution time, mimicking a smaller analyte size.

During elution, the bands of the eluted molecules may broaden. This can arise from turbulence caused by the flow of the mobile-phase molecules through the stationary phase, and it is also influenced by molecular thermal diffusion and by friction between the eluent molecules and the glass walls of the column. Besides broadening, the bands also overlap one another, so the eluent typically becomes significantly diluted. Some precautions can minimise band broadening; for example, the sample can be applied in a narrow, highly concentrated band at the top of the column. A more concentrated eluent makes the operation more efficient, but it is not always feasible to concentrate the eluent, which can be regarded as an additional drawback.

Absolute size-exclusion chromatography

Absolute size-exclusion chromatography (ASEC) is a method that combines a light scattering instrument, typically multi-angle light scattering (MALS) or another type of static light scattering (SLS), but potentially a dynamic light scattering (DLS) instrument, with a size-exclusion chromatography system. This allows for precise determination of the absolute molar mass and/or size of proteins and macromolecules as they elute from the chromatography system.[22]

In this context, "absolute" means that the calibration of retention time on the column using a set of reference standards is unnecessary to determine the molar mass
or hydrodynamic size, also known as hydrodynamic diameter (DH, measured in nm). The final result is not affected by non-ideal column contacts, such as
electrostatic or hydrophobic surface interactions, which alter the retention period compared to standards. Similarly, variations in the structure of the substance
being analysed compared to the standard do not impact a precise measurement. For instance, in MALS analysis, the molecular weight of proteins that lack a fixed
structure can be accurately determined, even if they elute at earlier times than proteins with a compact structure but the same molecular weight. This also applies
to polymers with branches, which elute later than linear reference standards of the same molecular weight.[22][23][24] ASEC offers the advantage of determining the molar mass and/or size at every point within an eluting peak, hence indicating the homogeneity or
polydispersity within the peak. SEC-MALS study of a monodisperse protein will reveal that the entire peak comprises molecules with identical molar mass, a
characteristic not achievable with regular SEC analysis.

The determination of molar mass using SLS necessitates the combination of light scattering observations with concentration measurements. SEC-MALS often
consists of a light scattering detector along with either a differential refractometer or a UV/Vis absorbance detector. Furthermore, MALS is utilised to ascertain the
root mean square (rms) radius, denoted as Rg, of molecules that exceed a specific size threshold, commonly about 10 nm. SEC-MALS can analyse the
conformation of polymers by examining the relationship between molar mass and Rg. To analyse smaller molecules, either Dynamic Light Scattering (DLS) or,
more frequently, a differential viscometer is employed to measure the hydrodynamic radius and assess the molecular shape in a similar fashion.

SEC-DLS measures the sizes of macromolecules as they pass through the size exclusion column set and elute into the flow cell of the DLS instrument. The
measurement focuses on the hydrodynamic size of the molecules or particles rather than their molecular weights. A Mark-Houwink computation can be employed
to estimate the molecular weight of proteins based on their hydrodynamic size.
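
One way such a Mark-Houwink computation can be sketched is by combining the Mark-Houwink relation [eta] = K * M**a with the Einstein viscosity relation Rh**3 = 3 * [eta] * M / (10 * pi * N_A). The constants K and a below are purely illustrative placeholders, not values for any specific protein or polymer; real values must be measured for the actual polymer/solvent system:

```python
import math

N_A = 6.02214076e23          # Avogadro's number, 1/mol

def mw_from_rh(rh_nm: float, K: float, a: float) -> float:
    """Estimate molar mass (g/mol) from the hydrodynamic radius via the
    Mark-Houwink relation [eta] = K * M**a combined with the Einstein
    viscosity relation Rh**3 = 3 * [eta] * M / (10 * pi * N_A).

    rh_nm -- hydrodynamic radius in nm
    K     -- Mark-Houwink constant in mL/g (polymer/solvent specific)
    a     -- Mark-Houwink exponent (polymer/solvent specific)
    """
    rh_cm = rh_nm * 1e-7                  # nm -> cm (CGS, since K is in mL/g)
    m_power = 10 * math.pi * N_A * rh_cm ** 3 / (3 * K)   # equals M**(1 + a)
    return m_power ** (1 / (1 + a))

# Hypothetical constants for illustration only.
print(f"M ~ {mw_from_rh(5.0, K=0.01, a=0.7):.3g} g/mol")
```

The estimate scales steeply with radius (M grows roughly as Rh**(3/(1+a))), which is why small errors in the measured hydrodynamic size translate into large molar mass uncertainties.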

An important benefit of combining DLS with SEC is the capability to achieve improved resolution in DLS measurements.[25] Batch Dynamic
Light Scattering (DLS) is a rapid and uncomplicated method that offers a direct assessment of the average size. However, the baseline resolution of DLS is limited
to a diameter ratio of 3:1. SEC is employed to separate proteins and protein oligomers, hence achieving oligomeric resolution. ASEC can also be utilised for
conducting aggregation investigations. While light scattering cannot be used to calculate the aggregate concentration, it is possible to measure the size of the
aggregate using a concentration detector like SEC-MALS, which is also used to determine molar mass. The only limitation is that the maximum size of the
aggregate that can be measured is determined by the SEC columns.

The limitations of ASEC with DLS detection encompass factors such as flow velocity, concentration, and precision. Due to the time required to construct a
correlation function, which ranges from 3 to 7 seconds, only a restricted amount of data points may be gathered during the peak. ASEC with SLS detection is not
constrained by flow rate, and its measurement time is virtually immediate. Additionally, the concentration range it can handle is significantly greater than that of
DLS, spanning many orders of magnitude. Nevertheless, precise concentration measurements are necessary for molar mass analysis using SEC-MALS. MALS
(multi-angle light scattering) and DLS (dynamic light scattering) detectors are frequently integrated into a single instrument to enable a more thorough absolute
analysis after separation by SEC (size exclusion chromatography).
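The molar mass determination described above can be made concrete. At vanishing angle and low concentration the static light scattering working relation reduces to Kc/R(0) ≈ 1/M, so the molar mass follows from the excess Rayleigh ratio and the concentration-detector signal. This is a sketch only: every number below is an illustrative assumption, not a real detector reading.

```python
import math

def optical_constant(n0, dn_dc, wavelength_cm):
    """Optical constant K = 4*pi^2*n0^2*(dn/dc)^2 / (N_A*lambda^4), cm^2*mol/g^2."""
    N_A = 6.02214076e23
    return (4 * math.pi ** 2 * n0 ** 2 * dn_dc ** 2) / (N_A * wavelength_cm ** 4)

# Illustrative values (assumptions, not real instrument output):
n0 = 1.331        # refractive index of the aqueous buffer
dn_dc = 0.185     # specific refractive index increment, mL/g (typical for proteins)
lam = 660e-7      # laser wavelength in cm (660 nm)
c = 1.0e-3        # eluting concentration, g/mL, from the RI or UV detector
R0 = 1.38e-5      # excess Rayleigh ratio (1/cm) extrapolated to zero angle

M = R0 / (optical_constant(n0, dn_dc, lam) * c)   # Kc/R(0) ~ 1/M  =>  M ~ R(0)/(K*c)
print(f"molar mass ~ {M:,.0f} g/mol")
```

With these illustrative inputs the result lands near 66,000 g/mol, i.e. in the range of a serum-albumin-sized protein, which shows why the light scattering signal alone is not enough: the concentration detector supplies the c in M ≈ R(0)/(K·c).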

4.5 See also

 PEGylation
 Gel permeation chromatography
 Protein purification

4.6 References

1. Garrett RH, Grisham CM (2013). Biochemistry (5th ed.). Belmont, CA: Brooks/Cole, Cengage Learning. p. 108. ISBN 9781133106296. OCLC 1066452448.
2. Paul-Dauphin S, Karaca F, Morgan TJ, et al. (6 Oct 2007). "Probing Size Exclusion Mechanisms of Complex Hydrocarbon Mixtures: The Effect of Altering Eluent Compositions". Energy & Fuels. 21 (6): 3484–3489. doi:10.1021/ef700410e.
3. Kastenholz B (29 Apr 2008). "Phytochemical Approach and Bioanalytical Strategy to Develop Chaperone-Based Medications". The Open Biochemistry Journal. 2: 44–48. doi:10.2174/1874091X00802010044. PMC 2570550.
4. Meyer, Veronika R. (2010). Practical High-Performance Liquid Chromatography (5th ed.). Chichester: Wiley. ISBN 978-0-470-68218-0.

Workshop-5 Module-1 Page 214 of 333


Q 2: Briefly describe ‘Ionic Exchange Chromatography’ (IEX).

Ion exchange chromatography is commonly used to separate charged biological molecules such as proteins, peptides, amino acids, or nucleotides. The amino
acids that make up proteins are zwitterionic compounds that contain both positively and negatively charged chemical groups.

Ion exchange chromatography (IEX) is among the most useful of all protein purification and concentration methods.

A related technique, hydrophobic interaction chromatography (HIC), separates proteins by surface hydrophobicity, much as a droplet of water forms a
spherical shape to minimise contact with a hydrophobic leaf.

Solvation, sometimes called dissolution, also matters here: as ions dissolve in a solvent they spread out and become surrounded by solvent molecules. Because
different proteins have different compositions of amino acids, different protein molecules precipitate at different concentrations of salt solution.

Ion exchange chromatography

Ion exchange chromatography, commonly known as ion chromatography, employs an ion exchange process to separate analytes according to their individual
charges. It is typically carried out in columns, but it can also be useful in a planar format. Ion exchange chromatography employs a
stationary phase with an electric charge to segregate charged molecules, such as anions, cations, amino acids, peptides, and proteins. The typical approach
involves using an ion-exchange resin as the stationary phase, which contains charged functional groups that interact with oppositely charged groups of the
chemical in order to retain it. Ion exchange chromatography can be classified into two categories: cation-exchange and anion-exchange. In cation-exchange
chromatography, the stationary phase has a negative charge and the exchangeable ion is a cation; conversely, in anion-exchange chromatography, the
stationary phase carries a positive charge and the exchangeable ion is an anion.[28] Ion exchange chromatography is a frequently employed
method for purifying proteins through the use of fast protein liquid chromatography (FPLC).


Ion chromatography (or ion-exchange chromatography) is a form of chromatography that separates ions and ionizable polar molecules based on their affinity to
the ion exchanger.[1] It works on almost any kind of charged molecule—including small inorganic anions,[2] large proteins,[3] small nucleotides,[4] and amino acids.
However, ion chromatography must be done in conditions that are one pH unit away from the isoelectric point of a protein.[5]

There are two distinct forms of ion chromatography: anion-exchange and cation-exchange. Cation-exchange chromatography is employed when the target molecule carries a positive charge, which it does when the pH of the chromatography is lower
than its isoelectric point (pI).[6] This method uses a negatively charged stationary phase that attracts positively charged molecules. Anion-exchange chromatography involves a positively charged stationary phase that attracts negatively charged molecules (at a
pH greater than the isoelectric point). The technique is frequently employed in protein purification, water analysis, and quality control. Water-soluble and charged molecules, such as proteins, amino acids, and peptides, form ionic interactions with oppositely charged moieties on
the insoluble stationary phase. The equilibrated stationary phase carries an ionizable functional group that allows the targeted molecules of a mixture to bind while passing through the column. Thus a cationic stationary phase is employed to separate anions (anion exchange), while an anionic
stationary phase is used to separate cations (cation exchange).[11] The bound molecules can then be released and collected
by passing a solution containing a higher concentration of competing charged ions through the column, or by adjusting the pH of the column.
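The pH-versus-pI rule described above can be captured in a few lines. The helper below is hypothetical (not part of any instrument software) and simply encodes the logic: below its pI a protein is net positive and binds a cation exchanger; above its pI it is net negative and binds an anion exchanger; within about one pH unit of the pI, the net charge is too small for reliable binding.

```python
# Hypothetical helper (illustration only): choose an ion-exchange mode
# from the working pH relative to a protein's isoelectric point (pI).

def choose_exchanger(protein_pI, buffer_pH):
    """Below its pI a protein is net positive and binds a cation exchanger;
    above its pI it is net negative and binds an anion exchanger."""
    if abs(buffer_pH - protein_pI) < 1.0:
        # within ~1 pH unit of the pI the net charge is too small to bind reliably
        return "too close to pI - adjust pH by at least one unit"
    return "cation exchange" if buffer_pH < protein_pI else "anion exchange"

print(choose_exchanger(6.8, 5.0))   # protein is net positive at pH 5
print(choose_exchanger(6.8, 8.5))   # protein is net negative at pH 8.5
```

For a protein with pI 6.8, a pH 5 buffer calls for cation exchange and a pH 8.5 buffer for anion exchange, matching the one-pH-unit guidance given above.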

An important benefit of ion chromatography is that it involves just one interaction for separation, unlike many other techniques; as a result, it may tolerate matrix effects better. A further benefit of ion exchange is the ability to anticipate elution patterns
from the nature of the ionizable group: in cation exchange chromatography, certain cations will elute earlier than others, and local charge equilibrium is consistently maintained. Nevertheless, ion-exchange chromatography also has drawbacks, including the
continuous evolution of the method, resulting in inconsistencies between different columns. A significant constraint of this purification approach is its restriction to ionizable groups.

History


Ion chromatography has progressed through the accumulation of knowledge over a long
period. In 1947, Spedding and Powell employed displacement ion-exchange
chromatography to separate the rare earth elements; they also demonstrated the ion-
exchange separation of the 14N and 15N isotopes in ammonia. In the early 1950s, Kraus and
Nelson demonstrated several analytical techniques for identifying metal ions based on the
separation of their chloride, fluoride, nitrate, or sulphate complexes by anion
chromatography. From 1960 to 1980, automatic in-line detection was gradually introduced,
along with new chromatographic techniques for separating metal ions. Contemporary ion
chromatography grew out of a pioneering approach by Small, Stevens, and Bauman at Dow
Chemical Co., whose system of suppressed conductivity detection could efficiently
differentiate anions and cations. Gjerde et al. introduced a method for anion
chromatography with non-suppressed conductivity detection in 1979, and a comparable
technique for cation chromatography in 1980.

Consequently, a phase of intense rivalry ensued in the IC market, with advocates for both
suppressed and non-suppressed conductivity detection. This competition resulted in the
proliferation of novel configurations and the rapid advancement of IC.[15] An obstacle that
must be addressed in the future advancement of IC is the development of exceptionally
effective monolithic ion-exchange columns; overcoming it would significantly contribute to
the progress of IC.

The rise of ion exchange chromatography occurred mainly during 1935–1950, around World
War II, and it was during the Manhattan Project that its applications and utilisation were
greatly expanded. Ion chromatography was initially pioneered by two English researchers,
Sir Thompson, an agricultural scientist, and J T Way, a chemist. Thompson and Way
conducted research on the effects of water-soluble fertiliser compounds, specifically
ammonium sulphate and potassium chloride. The leaching of these salts from the ground
was hindered by precipitation; the researchers used ion-exchange techniques to treat clays
with the salts, which led to the extraction of ammonia and the liberation of calcium.[17]
Theoretical models for ion chromatography (IC) were developed throughout the 1950s and
1960s to enhance understanding, but it was not until the 1970s that continuous detectors
were employed, enabling the progression from low-pressure to high-performance
chromatography. The term "ion chromatography" was officially coined in 1975 to describe
the technique and was subsequently adopted for marketing purposes. Today, ion
chromatography plays a crucial role in the analysis of aqueous systems, particularly
drinking water. Anionic components or complexes are commonly analysed using this
method, which makes it effective in addressing environmentally significant issues, and it
also has significant applications in the semiconductor sector.

Chromatography has become the primary technique for ion analysis because of the wide
range of separating columns, elution systems, and detectors that are readily available.[19]

Initially, this technology was predominantly employed for water treatment. Since its
inception in 1935, ion exchange chromatography has become widely utilised across many
areas of chemistry, complementing techniques such as distillation, adsorption, and
filtration, and it is frequently employed because of its principles.

Principle

Ion chromatogram displaying anion separation


Ion-exchange chromatography separates molecules according to their charged groups: analyte molecules are selectively retained on the column
through coulombic (ionic) interactions. The ion exchange chromatography matrix comprises positively and negatively charged groups; in essence,
molecules engage in electrostatic interactions with opposite charges on the stationary phase matrix. The stationary phase consists of an immobile
matrix bearing ionizable functional groups or charged ligands.

The surface of the stationary phase carries ionic functional groups (R-X) that interact with analyte ions of opposite charge. To maintain
electroneutrality, these fixed charges associate with exchangeable counterions present in the solution. Ionizable molecules that are to be purified
compete with these exchangeable counterions for binding to the immobilised charges on the stationary phase, and are retained or released
depending on their charge. Molecules with low affinity or weak binding to the stationary phase are the first to be removed during the washing step.
Changed conditions are then required to separate the molecules that remain bound: the concentration of the exchangeable counterions, which
compete with the molecules for binding, can be increased, or the pH can be altered to change the ionic charge of the eluent or the solute. Changes
in pH affect the charge of particular molecules and thereby modify their binding; as the net charge of a solute's molecules decreases, they begin to
elute, so such adjustments can be used to release the desired proteins. The retention of ionised molecules can also be tuned by progressively
adjusting the counterion concentration, facilitating their separation; this elution method is referred to as gradient elution. Alternatively, step
elution can be employed, in which the counterion concentration is changed in discrete increments. This form of chromatography is further
categorised into cation-exchange and anion-exchange chromatography: cations are attracted to cation exchange resins, whereas anions are
attracted to anion exchange resins. The stationary phase can retain an ionic compound composed of a cationic species M+ and an anionic
species B−.

Cation exchange chromatography selectively retains cations with positive charges due to the presence of a negatively charged functional group on
the stationary phase.

Anion exchange chromatography selectively retains anions by utilising functional groups that are positively charged.

It is important to note that the ionic strength of either M+ or B− in the mobile phase can be adjusted to shift the equilibrium position and,
consequently, the retention time.

The ion chromatogram displays a characteristic chromatogram acquired using an anion exchange column.

1.1 Process

Prior to commencing ion-exchange chromatography, it is necessary to achieve equilibrium. The stationary phase must be conditioned to specific
criteria that vary depending on the experiment being conducted. After reaching equilibrium, the charged ions in the stationary phase will bind to
exchangeable ions of opposite charge, such as Cl- or Na+. Subsequently, a suitable buffer must be selected for the specific protein to adhere to.
Following the process of equilibration, it is necessary to perform a washing step on the column. The washing phase facilitates the removal of
contaminants that do not adhere to the matrix, while the protein of interest remains bound. The pH of this sample buffer must match the pH of the
equilibration buffer in order to facilitate the binding of the appropriate proteins. Proteins that have no charge will be released from the column at the
same rate as the buffer flowing through the column, without being retained. After loading the material onto the column and washing it with the buffer
to remove unwanted proteins, elution is performed under specified conditions to release the desired proteins that are attached to the matrix. Bound
proteins are released by employing a gradient of salt concentration that increases in a linear manner. As the ionic strength of the buffer increases,
salt ions will vie with the target proteins to attach to charged groups on the medium's surface. This will result in the desired proteins being separated
from the column. Proteins with a low net charge will be the first to be removed as the salt concentration rises, leading to an increase in ionic
strength. Proteins possessing a significant net charge require a greater ionic strength in order to be effectively removed from the column.

Ion exchange chromatography can be conducted either in bulk, where the appropriate stationary phase is applied on thin layers of glass or plastic
plates, or in chromatography columns. Thin layer chromatography and column chromatography are comparable in that they both operate based on
the same underlying principles. This includes the continuous and frequent movement of molecules as the mobile phase moves through the
stationary phase. Adding the sample in small volumes is not necessary because the exchange column's predefined parameters provide a significant
contact between the mobile and stationary phases. In addition, the elution process will result in the separation of molecules into different
compartments based on their specific chemical properties. As the salt concentration rises at or near the top of the column, molecules in that
region are displaced first, while molecules bound lower down are released later, when the higher salt concentration reaches them. The principles
underlying ion exchange chromatography make it an ideal choice
for the early chromatography steps in a complex purification method. This technique is capable of rapidly isolating small amounts of target
molecules, independent of the initial volume being processed.

Chamber (left) contains high salt concentration. Stirred chamber (right) contains
low salt concentration. Gradual stirring causes the formation of a salt gradient as salt travels from high to low concentrations.

Relatively uncomplicated devices are frequently employed to administer counterions with a progressively increasing gradient to a chromatography
column. Copper (II) counterions are frequently selected for their high efficacy in separating peptides and amino acids by forming complexes.

An uncomplicated apparatus can be employed to generate a gradient of salt. The elution buffer is consistently flowing from the chamber into the
mixing chamber, which results in a change in its buffer concentration. Typically, the buffer introduced into the chamber has a high initial
concentration, while the buffer introduced into the stirred chamber has a low concentration. As the highly concentrated buffer from the left chamber
is mixed and drawn into the column, the buffer concentration in the stirred chamber steadily rises. Modifying the configuration of the stirred
chamber, as well as of the limit buffer, enables the generation of concave, linear, or convex gradients of counterion.
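The shape of the gradient can be derived from a simple mass balance. Assuming a stirred chamber of constant volume V fed at flow rate F from a reservoir at concentration c_res (an idealisation of the two-chamber device described above), dc/dt = (F/V)(c_res − c), which integrates to an exponential rather than a linear rise; this is one way such devices produce convex gradients. All numbers are illustrative.

```python
import math

# Mass balance for a stirred mixing chamber of constant volume V fed at
# flow F from a reservoir at concentration c_res (illustrative idealisation):
#   dc/dt = (F/V) * (c_res - c)
#   =>     c(t) = c_res - (c_res - c0) * exp(-F*t/V)

def mixer_conc(t, c_res=1.0, c0=0.0, F=1.0, V=10.0):
    """Salt concentration (M) leaving the mixing chamber at time t."""
    return c_res - (c_res - c0) * math.exp(-F * t / V)

for t in (0, 5, 10, 20):
    # concentration rises toward c_res, giving a convex (exponential) gradient
    print(f"t={t:>2}: c = {mixer_conc(t):.3f} M")
```

A truly linear gradient requires both chambers to drain together (or a programmable pump); the constant-volume case shown here is the simplest limit of the apparatus in the figure.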

A variety of diverse mediums are employed for the stationary phase. The immobilised charged groups commonly employed include
trimethylaminoethyl (TAM), triethylaminoethyl (TEAE), diethyl-2-hydroxypropylaminoethyl (QAE), aminoethyl (AE), diethylaminoethyl (DEAE), sulpho
(S), sulphomethyl (SM), sulphopropyl (SP), carboxy (C), and carboxymethyl (CM).

Efficient column packing is a crucial element in ion chromatography. The stability and efficiency of a final column are contingent upon the packing
procedures, the solvent employed, and the factors that influence the mechanical properties of the column. Unlike the early and ineffective dry-
packing procedures, wet slurry packing, which involves delivering suspended particles in a suitable solvent into a column under pressure,
demonstrates notable enhancement. Wet slurry packing can be carried out using three distinct methods: the balanced density method, which
involves using a solvent with a density similar to that of porous silica particles; the high viscosity method, which uses a solvent with high viscosity;
and the low viscosity slurry method, which is performed using low viscosity solvents.

Polystyrene serves as a medium for ion exchange. The formation of this substance involves the polymerization of styrene using divinylbenzene and
benzoyl peroxide. These exchangers establish hydrophobic contacts with proteins that may be permanent. Polystyrene ion exchangers are
unsuitable for protein separation due to this characteristic. On the other hand, they are utilised for the purpose of separating tiny molecules in amino
acid separation and eliminating salt from water. Polystyrene ion exchangers that have wide holes are suitable for protein separation, however they
require a hydrophilic coating.

Cellulose-based media are suitable for separating large molecules because they contain large pores. Protein binding in these media is
high, with low hydrophobicity. DEAE is an anion exchange matrix derived from cellulose or Sephadex, carrying a positive
diethylaminoethyl side group. Agarose gel-based media also have large pores, although their substitution capacity is lower than that of dextrans.
The ability of the medium to swell in liquid is determined by the cross-linking of these materials, as well as the pH and ion concentrations of the
buffers employed.

The use of elevated temperature and pressure substantially increases the efficiency of ion chromatography while reducing run time. Temperature
affects selectivity through its impact on retention: the retention factor (k = (tR − tM)/tM, where tR is the analyte retention time and tM is the
hold-up time) increases with temperature for small ions, while the opposite trend is observed for larger ions. Research continues into carrying out
ion exchange chromatography over the temperature range of 40–175 °C, despite variations in ion selectivity across different media. A suitable
solvent can be chosen by observing the behaviour of column particles in different solvents: under an optical microscope, a desirable dispersed
slurry is readily distinguished from aggregated particles.[25]
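The retention factor used above is computed directly from the retention time of the analyte and the hold-up time of an unretained marker:

```python
def retention_factor(t_r, t_m):
    """k = (tR - tM) / tM, with tR the analyte retention time and tM the
    hold-up (void) time measured with an unretained marker."""
    return (t_r - t_m) / t_m

# e.g. an analyte at 6.0 min on a column with a 1.5 min hold-up time:
print(retention_factor(6.0, 1.5))   # -> 3.0
```

A larger k means the analyte spends proportionally more time interacting with the stationary phase; the temperature trends described above correspond to k rising or falling as the column is heated.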

1.1 Ion exchangers can be classified as weak or strong.

A "strong" ion exchanger retains its charge on the matrix after equilibration across a wide pH range, allowing the use of a diverse range of pH
buffers. "Weak" ion exchangers possess a certain pH range within which they retain their charge. If the pH of the buffer used with a weak ion
exchange column falls outside the working range of the matrix, the column will lose its charge distribution, potentially resulting in the loss of the
molecule of interest. Despite their narrower pH range, weak ion exchangers are frequently preferred over strong ion exchangers because of their
higher selectivity. In certain investigations, the retention times of weak ion exchangers are precisely calibrated to obtain the required data with a
high level of specificity.

The resins used in ion exchange columns, sometimes referred to as 'beads', can contain functional groups such as weak or strong acids and weak
or strong bases. Additionally, there are certain columns containing resins with amphoteric functional groups capable of exchanging both cations and
anions. Strong ion exchange resins possess functional groups such as quaternary ammonium cation (Q), which acts as an anion exchanger, and
sulfonic acid (S, -SO2OH), which acts as a cation exchanger. These exchangers can sustain their charge density across a pH range of 0–14. Weak
ion exchange resins possess functional groups such as diethylaminoethyl (DEAE, -C2H4N(C2H5)2), which acts as an anion exchanger, and
carboxymethyl (CM, -CH2-COOH), which functions as a cation exchanger. Both of these types of exchangers are capable of preserving the charge
density of their columns within a pH range of 5–9.

The binding and degree of binding of ions in ion chromatography is determined by the interaction between the solute ions and the stationary phase,
which is dependent on their respective charges. An anion exchanger refers to a stationary phase that contains positive groups that attract anions,
while a cation exchanger refers to a stationary phase that contains negative groups that attract cations. The interaction between ions and the
stationary phase is also influenced by the resin, which refers to the organic particles employed as ion exchangers.

Each resin exhibits varying degrees of selectivity, depending on the solute ions present, which compete for binding to the resin group on the
stationary phase. The selectivity coefficient, which is analogous to the equilibrium constant, is calculated by comparing the concentrations of the
resin and each ion. Generally, ion exchangers have a preference for binding to ions with higher charges, smaller hydrated radii, and greater
polarizability, which refers to the ability of an ion's electron cloud to be influenced by other charges.

However, if an excessive amount of an ion with lower selectivity is given to the column, the smaller ion would bind more to the stationary phase due
to the selectivity coefficient allowing variations in the binding response during ion exchange chromatography.
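The selectivity coefficient described above can be evaluated like an equilibrium constant. The sketch below uses arbitrary illustrative concentrations; a value of K greater than 1 means the resin prefers ion A over ion B.

```python
# Selectivity coefficient for the exchange
#   A+(solution) + B+(resin)  <->  A+(resin) + B+(solution),
# written like an equilibrium constant (concentrations in arbitrary
# consistent units). All numbers are illustrative assumptions.

def selectivity(a_resin, b_soln, a_soln, b_resin):
    """K = ([A]resin * [B]solution) / ([A]solution * [B]resin); K > 1 means the resin prefers A."""
    return (a_resin * b_soln) / (a_soln * b_resin)

K = selectivity(a_resin=0.8, b_soln=0.9, a_soln=0.2, b_resin=0.1)
print(f"K = {K:.1f}")   # well above 1: this resin strongly prefers ion A
```

As the text notes, even a preferred ion can be out-competed if a large excess of a less-preferred ion is loaded, because binding is governed by this equilibrium rather than by an all-or-nothing rule.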

The following table shows commonly used ion exchangers:[38]

Sr. No  Name                               Type             Functional group
1       DEAE Cellulose (anion exchanger)   Weakly basic     DEAE (Diethylaminoethyl)
2       QAE Sephadex (anion exchanger)     Strongly basic   QAE (Quaternary aminoethyl)
3       Q Sepharose (anion exchanger)      Strongly basic   Q (Quaternary ammonium)
4       CM-Cellulose (cation exchanger)    Weakly acidic    CM (Carboxymethyl)
5       SP Sepharose (cation exchanger)    Strongly acidic  SP (Sulfopropyl)
6       SOURCE S (cation exchanger)        Strongly acidic  S (Methyl sulfate)

4.7 Typical technique


A sample is introduced, either manually or with an autosampler, into a sample loop of a predetermined volume. A buffered aqueous solution called
the mobile phase carries the sample from the loop onto a column containing the stationary phase material. This material is
commonly a resin or gel matrix composed of agarose or cellulose beads that have charged functional groups covalently attached to them. The process of
equilibrating the stationary phase is necessary to achieve the desired charge of the column. Insufficient equilibration of the column can result in weak binding of the
desired molecule to the column. The desired substances (either anions or cations) are held on the stationary phase but can be removed by raising the
concentration of a species with the same charge, which displaces the analyte ions from the stationary phase. In cation exchange chromatography, the analyte with
a positive charge can be displaced by introducing positively charged sodium ions. The desired substances must thereafter be identified using a detection method,
commonly based on conductivity or absorption of UV/visible light.
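Conductivity detection, mentioned above, can be approximated in the dilute limit by summing each ion's limiting molar conductivity times its concentration (kappa ≈ Σ Λi·ci). The tabulated Λ values below are approximate literature figures at 25 °C; the sample composition is an illustrative assumption.

```python
# Dilute-limit sketch of conductivity detection: kappa ~ sum(Lambda_i * c_i).
# Limiting molar conductivities in S*cm^2/mol at 25 °C (approximate
# literature values):
LIMITING_MOLAR_COND = {"Na+": 50.1, "K+": 73.5, "Cl-": 76.3, "NO3-": 71.4}

def conductivity(conc_mol_per_L):
    """Return kappa in microsiemens/cm for a dict of ion -> mol/L."""
    # mol/L -> mol/cm^3 is x1e-3; S/cm -> uS/cm is x1e6
    return sum(LIMITING_MOLAR_COND[ion] * c * 1e-3 * 1e6
               for ion, c in conc_mol_per_L.items())

# 0.1 mM NaCl, roughly the level seen in trace anion analysis:
print(f"{conductivity({'Na+': 1e-4, 'Cl-': 1e-4}):.2f} uS/cm")
```

This also illustrates why suppressed conductivity detection helps: removing or exchanging the highly conducting background ions of the eluent lowers the baseline against which such small analyte signals are measured.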

Operating an ion chromatography (IC) system typically requires a chromatographic data system (CDS). Some CDSs can also control gas
chromatography (GC) and high-performance liquid chromatography (HPLC) systems in addition to IC systems.

1.1 Membrane exchange chromatography

Membrane exchange is a novel form of ion exchange chromatography that aims to address the difficulties associated with employing bead-packed columns.
Membrane chromatographic devices, unlike other chromatography devices, are cost-effective for large-scale production and can be easily disposed of, eliminating
the need for maintenance and time-consuming revalidation. Typically, three types of membrane absorbers are employed for the purpose of separating substances.
The three categories are flat sheet, hollow fibre, and radial flow. Stacks of multiple flat sheets are the preferred absorber for membrane
chromatography owing to their larger binding capacity. Membrane chromatography can be used to overcome limitations in mass transfer[43] and
pressure drop,[44] making it particularly beneficial for the separation and purification of viruses, plasmid DNA, and other large macromolecules.
The column is filled with microporous membranes whose internal pores contain adsorptive moieties capable of binding the target protein.
Adsorptive membranes come in various shapes and chemical compositions, making them suitable for purification as well as fractionation,
concentration, and clarification, and they offer an efficiency roughly ten times greater than bead-based media.[45] Membranes can be created by
isolating the membrane itself, which involves cutting the membranes into squares and immobilising them. A more contemporary approach uses
viable cells attached to a supporting membrane, employed to identify and purify signalling molecules.

4.8 Separating proteins

Preparative-scale ion exchange column used for protein purification.

Proteins can be separated using ion exchange chromatography due to the presence of charged functional groups. The ions of interest, specifically
charged proteins, are substituted with different ions, often H+, on a positively charged solid substrate. The solutes often exist in a liquid state,
predominantly composed of water. Consider proteins dissolved in water, which form a liquid phase that is then flowed through a column. The column
is referred to as the solid phase due to its composition of porous synthetic particles having a specific charge. These permeable particles, commonly
known as beads, can be either aminated (having amino groups) or include metal ions to acquire a charge. The column can be produced using
porous polymers. For macromolecules with a mass over 100,000 Da, the ideal size of the porous particle is approximately 1 μm2. The reason for
this is that the limited movement of solutes within the pores does not impede the effectiveness of the separation process. Anion exchange resins are
beads that possess positively charged groups, which have an affinity for negatively charged proteins. Glutamate and aspartate are the amino acids
with negatively charged side chains at pH 7, which is the pH of water. Cation exchange resins refer to beads that possess a negative charge, which
enables them to attract positively charged proteins. Lysine, histidine, and arginine are the amino acids with positively charged side chains at a pH of
7.

The isoelectric point is the pH at which a compound, in this case a protein, carries no net charge. A protein's isoelectric point (pI) can be
determined from the pKa values of its side chains: when the positive charge from the basic groups exactly offsets the negative charge from the
acidic groups, the protein is at its pI. Using buffers rather than pure water for proteins lacking charge at pH 7 is advantageous, since it allows
the pH to be adjusted to modify the ionic interactions between the proteins and the beads.[49] Weakly acidic or basic side
chains acquire a charge when the pH is sufficiently high or low, respectively. Protein separation can be
accomplished by exploiting the inherent isoelectric point of the protein. Alternatively, a peptide tag can be genetically incorporated into the protein
to give it an isoelectric point that differs from those of most natural proteins; for example, adding six arginines facilitates binding to a cation-
exchange resin, whereas adding six glutamates facilitates binding to an anion-exchange resin such as DEAE-Sepharose.
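The relationship between side-chain pKa values and the isoelectric point can be made concrete with a small calculation: sum the Henderson-Hasselbalch charge contributions of each ionizable group, then bisect for the pH of zero net charge. The pKa values and the toy composition below are approximate textbook numbers, not data for any specific protein.

```python
# Sketch: estimating pI from side-chain pKa values (approximate textbook
# numbers; the composition is a toy example, not a real protein).
PKA_POS = {"N-term": 9.0, "Lys": 10.5, "Arg": 12.5, "His": 6.0}
PKA_NEG = {"C-term": 2.0, "Asp": 3.9, "Glu": 4.1, "Cys": 8.3, "Tyr": 10.1}

def net_charge(ph, counts):
    """Henderson-Hasselbalch net charge of a composition {group: count} at a given pH."""
    q = 0.0
    for grp, n in counts.items():
        if grp in PKA_POS:
            q += n / (1 + 10 ** (ph - PKA_POS[grp]))   # protonated (positive) fraction
        else:
            q -= n / (1 + 10 ** (PKA_NEG[grp] - ph))   # deprotonated (negative) fraction
    return q

def isoelectric_point(counts, lo=0.0, hi=14.0, tol=1e-4):
    """Bisect for the pH of zero net charge (net charge falls as pH rises)."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if net_charge(mid, counts) > 0 else (lo, mid)
    return (lo + hi) / 2

toy = {"N-term": 1, "C-term": 1, "Asp": 2, "Lys": 3}
print(f"estimated pI ~ {isoelectric_point(toy):.2f}")
```

Because this toy composition has one more basic group than acidic groups, its estimated pI comes out above 9, which is exactly the effect an engineered arginine tag exploits to shift a protein onto a cation-exchange resin.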



The process of elution through a gradual increase in the ionic strength of the mobile phase is more nuanced. It works because ions in the mobile
phase compete with the bound protein for the immobilised charged groups on the stationary phase. This competition effectively
shields the stationary phase from the protein, allowing the protein to be released.

Elution from ion-exchange columns can be made sensitive to a change of a single charge, an approach known as chromatofocusing. Ion-exchange
chromatography is also a valuable technique for isolating specific multimeric protein assemblies: it enables the purification of specific complexes based
on the number and location of charged peptide tags.

The Gibbs-Donnan effect

The Gibbs–Donnan effect is evident in ion exchange chromatography when the pH near the ion exchanger differs, by up to one pH unit, from the pH
of the bulk buffer. In anion-exchange columns, for example, the exchanger's fixed positive charges repel protons, so the pH of the buffer near the column
material is higher than that of the remainder of the solvent. Consequently, the investigator must exercise caution to
ensure that the protein(s) of interest remain stable and possess the correct charge at the pH actually experienced on the column.

This phenomenon occurs when two ions of like charge, one from the resin and one from the solution, do not distribute evenly between
the two phases, leading to preferential retention of one ion over the other.
In the case of a sulphonated polystyrene resin, which is a type of cation exchange resin, the chloride ion of a hydrochloric acid buffer should
equilibrate into the resin. However, because the concentration of fixed sulphonate groups in the resin is high, the protons of the HCl have little
tendency to enter the resin, and since electroneutrality must be maintained, there is only a minimal influx of hydrogen and chloride into the resin.
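A small worked example may make the ion-exclusion arithmetic concrete. Assuming ideal behaviour, the Donnan condition for the HCl/sulphonated-resin case is [H+]i·[Cl−]i = [H+]o·[Cl−]o, with internal electroneutrality [H+]i = [Cl−]i + R, where R is the fixed sulphonate concentration. The concentrations used below are illustrative choices, not values from the text.

```python
import math

def donnan_chloride_inside(c_out, fixed_anion):
    """[Cl-] inside the resin at Donnan equilibrium (ideal behaviour).
    Conditions: [H+]i * [Cl-]i = [H+]o * [Cl-]o   (Donnan product)
                [H+]i = [Cl-]i + R                (electroneutrality)
    => positive root of x**2 + R*x - c_out**2 = 0."""
    r = fixed_anion
    return (-r + math.sqrt(r * r + 4 * c_out ** 2)) / 2

# Illustrative values: 0.01 M HCl outside vs 2 M fixed sulphonate inside.
inside = donnan_chloride_inside(0.01, 2.0)
print(f"{inside:.2e}")  # chloride is strongly excluded from the resin
```

With the fixed-charge concentration some two hundred times the external acid concentration, the internal chloride level works out to a tiny fraction of the external one, which is the "minimal influx" described above.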

4.8 Applications

Practical usefulness in a clinical setting

Ion chromatography is employed in argentation chromatography. Silver typically interacts only weakly with compounds
that include acetylenic and ethylenic linkages, a phenomenon that has been studied extensively with olefin
compounds. The ion complexes formed between the olefins and silver ions involve weak interactions arising from the overlap of
pi, sigma, and d orbitals, as well as the availability of electrons; consequently, these complexes do not induce any significant alterations to the
double bond. Silver ions were employed to exploit this behaviour in order to fractionate lipids, particularly fatty acids, from mixtures into fractions
with varying numbers of double bonds. The ion resins were saturated with silver ions and subsequently treated with different acids (such as silicic
acid) in order to elute fatty acids with distinct properties.

Alkali metal ions can be detected down to a limit of about 1 μM. Ion chromatography can be utilised for the quantification
of HbA1c and porphyrin, and in the process of water purification. Ion exchange resins (IERs) have been extensively utilised, particularly in
pharmaceuticals, owing to their high capacity and the straightforward nature of the separation procedure. One application of these synthetic materials
is in kidney dialysis, where an artificial kidney with a cellulose membrane is used to separate the
constituents of blood.

Ion chromatography is also utilised in the clinical field for rapid anion-exchange chromatography. This technique is employed to
separate creatine kinase (CK) isoenzymes from human serum and autopsy-derived tissue samples, with a preference for tissues abundant in CK,
such as cardiac muscle and brain. These isoenzymes, namely MM, MB, and BB, perform identical functions yet possess distinct amino acid
sequences. Their primary role is to catalyse the conversion of creatine, utilising ATP, into phosphocreatine while releasing ADP. Mini-columns
were packed with DEAE-Sephadex A-50 and then washed with tris-buffered sodium chloride at different concentrations (each concentration
carefully selected to control the elution process). Human tissue extract was introduced onto the columns for separation. Every fraction was
examined for total CK activity, revealing that each source of CK isoenzymes contained distinct isoenzymes: CK-MM eluted first, then CK-MB, and
finally CK-BB. Hence, the isoenzymes present in each sample could be employed for source identification, given their tissue-specific nature.
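The stepwise elution just described can be sketched as a toy model. The NaCl release thresholds below are hypothetical placeholders (the study's actual step concentrations are not given here); the sketch only illustrates how successive salt steps release the more tightly bound, more negatively charged isoenzymes in turn.

```python
# Hypothetical NaCl release thresholds (M) for each CK isoenzyme on a
# DEAE anion exchanger -- placeholders, not the study's actual values.
RELEASE_NACL_M = {"CK-MM": 0.05, "CK-MB": 0.15, "CK-BB": 0.30}

def stepwise_elution(step_concs_m):
    """Return (concentration, isoenzymes released) for each wash step."""
    eluted, fractions = set(), []
    for conc in step_concs_m:
        released = sorted(name for name, thr in RELEASE_NACL_M.items()
                          if thr <= conc and name not in eluted)
        eluted.update(released)
        fractions.append((conc, released))
    return fractions

for conc, names in stepwise_elution([0.05, 0.15, 0.30]):
    print(conc, names)
```

Each step releases exactly the isoenzymes whose binding the new salt level can disrupt, reproducing the MM, then MB, then BB elution order described above.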

The results allow patient diagnoses to be linked to the predominant CK isoenzyme activity. Of the 71
individuals tested, 35 had experienced a myocardial infarction (heart attack); these patients also exhibited a significant presence
of the CK-MM and CK-MB isoenzymes. The findings also indicate that several additional diagnoses, such as renal failure, cerebrovascular disease,
and pulmonary disease, exhibited only the CK-MM isoenzyme. These associations between different diseases and the CK isoenzymes
identified validate prior test results obtained through diverse methodologies. Research
on CK-MB, which is elevated in individuals who have suffered a heart attack, increased significantly following the use of ion
chromatography in this study.

Applications in the industrial sector

Ion chromatography has been extensively utilised in several industries since 1975. Its primary advantageous features are dependability,
exceptional accuracy and precision, elevated selectivity, rapidity, superior separation efficiency, and cost-effectiveness of consumables. Notable
advancements in ion chromatography include the introduction of novel sample preparation techniques, enhancements in the efficiency and
specificity of analyte separation, reduction in detection and quantification limits, broadening of the range of applications, establishment of new standard
methods, miniaturisation of equipment, and expansion of the analysis capabilities to encompass new classes of substances. Ion chromatography
enables the quantitative analysis of electrolytes and proprietary additives in electroplating solutions, representing a progression from qualitative
hull-cell testing or less precise UV testing; ions, catalysts, brighteners, and accelerators can all be quantified.

Ion exchange chromatography has gained popularity as a versatile and widely recognised method for detecting both negatively charged (anionic)
and positively charged (cationic) species. Applications have been created, or are currently being created, for several domains of interest, especially
the pharmaceutical business, where the utilisation of ion exchange chromatography has risen notably in recent years. As evidence of its
significance, in 2006 a dedicated chapter on ion exchange chromatography was officially incorporated into the United States Pharmacopeia–National
Formulary (USP–NF). In addition, the United States Pharmacopeia included many ion chromatography analyses in the 2009 release of the USP–NF.
These tests were performed using two techniques: conductivity detection and pulsed amperometric detection. The majority of these applications are
typically utilised for quantifying and examining residual thresholds in medicines, encompassing the detection of thresholds for oxalate, iodide,
sulphate, sulfamate, and phosphate, as well as diverse electrolytes such as potassium and sodium. The 2009 version of the USP–NF officially introduced
a total of twenty-eight detection methods for analysing active substances or their components. These procedures utilise either conductivity detection
or pulsed amperometric detection.



Drug development

An ion chromatography system can be used to detect and measure cations such as sodium, ammonium, and potassium in expectorant cough formulations.

The use of ion chromatography (IC) in the analysis of pharmaceutical medications has become more popular. IC is utilised at many stages of product
development and for conducting quality control testing. IC is employed to assess the stability and solubility characteristics
of active pharmaceutical drug compounds, and to identify systems that exhibit greater resistance to organic solvents. IC
has also been employed to determine analytes as a component of a dissolution test. For example, experiments involving the
dissolution of calcium have demonstrated that other ions in the solution may be effectively distinguished both from each other and from the calcium ion.
Consequently, IC has been utilised with pharmaceuticals such as tablets and capsules to quantify the rate at which medications
dissolve over time. In addition, IC is extensively utilised for the identification and measurement of excipients, or inert substances, employed in
pharmaceutical compositions. The identification of sugars and sugar alcohols in these formulations by IC is possible because
their polar groups are separated on the ion column. The IC approach is also extensively utilised in the analysis of contaminants in medicinal compounds
and products. The level of impurities or other non-drug components in the chemical entity of a drug is assessed to help determine the optimal range
of drug dosage that should be given to a patient on a daily basis.

4.9 See also

 Anion-exchange chromatography
 Chromatofocusing
 High performance liquid chromatography
 Isoelectric point




Q 3: Briefly describe ’Hydrophobic Interaction Chromatography’ (HIC).

Hydrophobic interaction chromatography (HIC) is a technique that separates molecules by exploiting their varying degrees of hydrophobicity. It is an
effective method for purifying proteins while preserving their biological activity, because it uses conditions and matrices that are only mildly
denaturing.

Hydrophobic interaction chromatography (HIC)

Ion exchange chromatography separates proteins based on their charges, while hydrophobic interaction chromatography separates proteins based on their
hydrophobic characteristics. The hydrophobic groups present on the protein interact with the hydrophobic groups present on the column.

The protein's affinity for the column increases in proportion to its hydrophobicity.

The proteins are incubated in the presence of a high concentration of ammonium sulphate. Ammonium sulphate is a kosmotropic (water-structuring)
salt: it increases the ordering of water around solutes, which strengthens hydrophobic interactions (the more the water is structured, the stronger the
hydrophobic effect). Ammonium sulphate has the additional function of protein stabilisation, so on an HIC column the protein will be in its most stable conformation.
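As a practical aside, laboratory protocols usually specify ammonium sulphate as percent saturation rather than molarity. A minimal helper, offered as an illustration rather than a prescribed procedure, converts a target percent saturation into grams of solid salt using the widely tabulated approximation for ~20 °C; the 45 % target in the example is an arbitrary illustration.

```python
def ammonium_sulfate_grams(volume_l, s1_pct, s2_pct):
    """Grams of solid (NH4)2SO4 needed to take volume_l litres of solution
    from s1_pct to s2_pct percent saturation at ~20 C, using the widely
    tabulated approximation g/L = 533 * (S2 - S1) / (100 - 0.3 * S2)."""
    return volume_l * 533.0 * (s2_pct - s1_pct) / (100.0 - 0.3 * s2_pct)

# e.g. bring 1 L of clarified sample from 0 % to 45 % saturation before
# loading an HIC column (the 45 % target is an arbitrary illustration).
print(round(ammonium_sulfate_grams(1.0, 0, 45), 1))
```

The denominator term corrects for the volume increase as salt dissolves, which is why the required mass is not simply proportional to the saturation change.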

The hydrophobic column is filled with a phenyl agarose matrix. Under conditions of elevated salt concentrations, the phenyl groups on this matrix adhere to
hydrophobic regions of proteins. Manipulating the salt content or introducing solvents allows for the regulation of protein elution from various column-bound
surfaces.



HIC separations are formulated by employing conditions that are diametrically opposed to those utilised in ion exchange chromatography. During this process, a
column is initially treated with a buffer solution containing a high concentration of ions, often ammonium sulphate.

The presence of salt in the buffer diminishes the process of solvation for the solutes in the sample. As a result, the hydrophobic areas that are exposed due to the
decrease in solvation are absorbed by the medium.

Chromatogram based on hydrophobic interactions


5-4: Biopharmaceuticals - Downstream Processing 2 - Specific Guidance for APIs Produced by Cell Culture / Fermentation

The ICH Harmonised Tripartite Guideline, titled "Good Manufacturing Practice Guide for Active Pharmaceutical Ingredients Q7," was recommended for
adoption at Step 4 of the ICH Process by the ICH Steering Committee on 10 November 2000.

ICH Q7 Section 18 provides specific guidance for the manufacturing of APIs using cell culture/fermentation methods.

1. General
2. Cell Bank Maintenance and Record Keeping
3. Cell Culture/Fermentation
4. Harvesting, Isolation and Purification
5. Viral Removal/Inactivation Steps

Hydrophobic interaction chromatography

Hydrophobic Interaction Chromatography (HIC) is a method used for purifying and analysing substances, particularly proteins, by exploiting the hydrophobic
interactions between the analyte and the chromatographic matrix. It offers a non-denaturing orthogonal alternative to reversed-phase separation, which maintains
the native structures and potentially the function of proteins. Hydrophobic interaction chromatography involves the incorporation of hydrophobic groups into the
matrix material at limited density. These groups can vary in size and composition, including methyl, ethyl, propyl, butyl, octyl, or phenyl groups.
Under conditions of elevated salt concentrations, the non-polar side chains located on the surface of proteins engage in interactions with these hydrophobic groups. In
other words, both types of groups are excluded by the polar solvent through hydrophobic effects, which are intensified by the higher ionic strength. Consequently,
the sample is introduced into the column in a highly polar buffer, causing hydrophobic regions on the analyte to interact with the stationary phase. The eluent
commonly consists of an aqueous buffer with a gradually decreasing salt concentration, an incrementally increasing detergent concentration (which
disrupts hydrophobic interactions), or a changing pH. The choice of salt is crucial, with kosmotropic salts from the Hofmeister series being particularly effective at
structuring water around the molecule and creating hydrophobic pressure. Ammonium sulphate is commonly employed for this purpose. Incorporating organic
solvents or other ingredients with lower polarity can enhance resolution.

Hydrophobic Interaction Chromatography (HIC) is particularly beneficial when the sample is susceptible to the pH fluctuations or aggressive solvents commonly
employed in other chromatography methods, but not to elevated salt levels. Typically, it is the concentration of salt in the buffer that is adjusted. In 2012, Müller and
Franzreb examined the impact of temperature on HIC by utilising bovine serum albumin (BSA) in conjunction with four distinct hydrophobic resins. The study
manipulated temperature to alter the binding affinity of BSA to the matrix, and determined that cycling the temperature from 50 to 10 degrees Celsius
would not be sufficient to completely remove all BSA from the matrix, though it could be highly effective if the column is only intended for a few uses.
Using temperature to drive elution enables laboratories to reduce expenses on salt procurement.

To avoid high salt concentrations and temperature variations, a more hydrophobic substance can be employed to compete with the sample and facilitate its
elution. In one such salt-independent approach, hydrophobic interaction chromatography (HIC) was employed to directly
isolate human immunoglobulin G (IgG) from serum, resulting in a good yield. This method utilised beta-cyclodextrin as a competitor to displace IgG from the
matrix. This greatly expands the potential for utilising HIC with samples that are sensitive to salt, as high amounts of salt
cause proteins to precipitate.

Hydrophobic Interaction Chromatography


Hydrophobic interaction chromatography (HIC) is a versatile method for purifying and separating biomolecules on the basis of their hydrophobicity. Proteins
containing both hydrophilic and hydrophobic regions are applied to an HIC column under specified salt-buffer conditions, which promote binding and stabilise the
molecule's structure. HIC steps are commonly used at all stages of the process, including capture, intermediate steps, and final polish purification.

POROS hydrophobic interaction chromatography (HIC) resins are based on the 50 μm POROS base bead and utilize a novel coating procedure to enable
functionalization with unique hydrophobic ligands. These resins are suitable for bind/elute and flow-through applications at lower salt concentrations and have
higher binding capacity and resolution than classical HIC resins, enabling more flexibility around process operating conditions. Key applications include monoclonal
antibodies, bispecific antibodies, antibody–drug conjugates (ADCs), removal of product-related impurities and aggregates, and plasmids, RNAi, and
oligonucleotides.

Thermo Fisher Scientific provides a suite of POROS HIC resins offering a differentiating range in hydrophobicity suitable for bind/elute and flow-through
applications. The resins can be used for the purification of a wide variety of biomolecules, including therapeutic proteins, antibody fragments, antibody drug
conjugates (ADCs), and other large molecules.

The unique resin design of the 50 µm resin backbone consists of cross-linked poly(styrene-divinylbenzene) with a unique pore structure that provides rapid mass
transport and enhances productivity. The particle surface is coated with a novel polymer coating, which is then further derivatized with a range of hydrophobic
ligands for flexible purification process design. POROS HIC resins enable outstanding performance independent of flow rate, improving yield and purity, leading to
reduced column size and smaller footprint.



Thermo Fisher Scientific provides a suite of POROS HIC resins, including POROS Benzyl Ultra HIC resins and columns, POROS Benzyl HIC resins and columns,
and POROS Ethyl HIC resins and columns. The selection tool allows users to determine the right chromatography resin for their purification process.

https://www.thermofisher.com/ie/en/home/life-science/bioproduction/poros-chromatography-resin/bioprocess-resins/hydrophobic-interaction-chromatography.html


Hydrophobic interaction chromatography (HIC) is a valuable tool used in protein purification applications, removing various impurities present in the solution,
including undesirable product-related impurities. HIC is often employed to remove product aggregate species, which possess different hydrophobic properties than
the target monomer species and can often be effectively removed using HIC.

HIC was first proposed by Tiselius in 1948 and introduced by Hjertén in 1973. It exploits a stationary phase with weakly hydrophobic ligands immobilized on a
hydrophilic matrix. Adsorption occurs due to the hydrophobic interaction between the hydrophobic surface patches on a solute and the ligands at moderately high
salt concentrations (ionic strength), usually 1–2 mol l−1 ammonium sulfate or 3 mol l−1 NaCl. Because kosmotropic salts such as (NH4)2SO4 and Na2SO4 promote
hydrophobic interactions, adsorption increases with salt concentration in the mobile phase, and vice versa. Therefore, elution is usually performed via a gradient or
stepwise reduction of salt concentration.
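The gradient-elution logic (a protein stays bound while the salt concentration exceeds a protein-specific critical value, and is released as the gradient descends) can be sketched in a toy calculation. Everything here is illustrative: the linear mapping from a hydrophobicity score to a critical salt concentration is a modelling assumption, not a published relationship.

```python
def elution_times(proteins, c_start=2.0, c_end=0.0, t_total=40.0):
    """Toy HIC gradient model. A protein stays bound while the salt
    concentration exceeds its critical value, taken here (an assumed,
    illustrative mapping) as c_start * (1 - hydrophobicity). More
    hydrophobic proteins need less salt to stay bound, so they elute
    later on the descending linear gradient
    c(t) = c_start + (c_end - c_start) * t / t_total."""
    times = {}
    for name, hydrophobicity in proteins.items():
        c_crit = c_start * (1.0 - hydrophobicity)   # 0 < hydrophobicity < 1
        t_elute = (c_crit - c_start) / (c_end - c_start) * t_total
        times[name] = round(t_elute, 1)
    return times

order = elution_times({"weak": 0.2, "moderate": 0.5, "strong": 0.8})
print(order)  # more hydrophobic -> later elution time (minutes)
```

However crude, the model reproduces the qualitative behaviour described in the text: the most hydrophobic species remain bound the longest on a descending salt gradient.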

Ligands are crucial for the bioseparations by HIC, as ligand chemistry can affect HIC selectivity for different proteins. The hydrophobic interaction is proportional to
ligand hydrophobicity and coupling density on the surface, so ligand density should be varied according to the ligand hydrophobicity. Generally, immobilized ligand
density in commercial HIC adsorbents is in the range of 10–40 µmol ml−1.

HIC can directly handle samples containing high salt concentrations, making it promising for processing samples obtained from salting-out precipitation or
IEC elution. Because the strength of the hydrophobic interaction can be readily adjusted by altering the salt concentration in the mobile phase, HIC is an important method in the
bioseparation of therapeutic proteins, DNA vaccines, and hydrophobically tagged proteins.

Hydrophobic interaction chromatography (HIC) is a widely used method for separating aggregated protein species from monomeric forms. It offers superior
selectivity for removing aggregate species and may also remove undesirable misfolded or variant forms of a protein. However, HIC often requires high salt
concentrations to ensure sufficient hydrophobic interaction between the protein and the adsorbent; such salt solutions can be costly to produce or dispose of
properly, which can make other forms of chromatography more desirable.

HIC can also be utilized for refolding various proteins. The method involves adsorbing the unfolded protein onto the HIC column and then exchanging the
denaturing agent for a renaturation buffer. The advantage of protein refolding using HIC is not only a high refolding yield but also a high degree of purity. The
refolding yield of recombinant human interferon by HIC was twofold higher than that obtained by the usual dilution method. Geng et al. reported refolding of
recombinant human interferon-γ (rhIFN-γ) with simultaneous purification using HIC; the refolding yield (based on the bioactivity of rhIFN-γ) obtained by HIC was
62-fold higher than that of the conventional dilution method.



HIC media promote refolding more effectively than other methods because the hydrophobic interaction between the media and the
hydrophobic clusters of unfolded or partially folded proteins prevents the intermolecular interactions that are the major cause of aggregate formation. The
hydrophobicity of the protein should therefore be considered during the selection of HIC media for refolding. Modifying the composition of the buffer can control the
hydrophobic interaction of the proteins with the matrix. The combination of glycerol and urea in the renaturation buffer alleviates the harshness of highly hydrophobic
media, facilitates protein refolding, and improves mass recovery by providing a gradual change of the refolding environment in the HIC column.

Wang et al. studied the influence of stationary phase, salt, pH, and gradient mode on recombinant prion protein and obtained 87% recovery with 96% purity in a
single 40-minute step by HIC. Wang et al. tried to refold consensus interferon using HIC and found that simple stepwise elution does not refold the protein. They
achieved successful refolding by gradient elution with a decreasing concentration of guanidine hydrochloride (guanidine HCl) and an increasing concentration of
PEG.

In conclusion, mobile-phase composition, elution mode, and flow rate are important factors that affect the efficiency of protein refolding using HIC.

Recombinant human interferon-γ (rhIFN-γ) was purified using high-performance hydrophobic interaction chromatography (HIC) by gradient elution using 3.0 M
(NH4)2SO4 with 0.05 M KH2PO4 (pH 7.0) as mobile phase A and 0.05 M KH2PO4 (pH 7.0) as mobile phase B.
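The two-buffer gradient just described can be expressed as simple mixing arithmetic. The sketch below assumes a linear 0 → 100 %B program (the profile and sampling points are assumptions; only the buffer compositions come from the text above).

```python
def ammonium_sulfate_at(percent_b):
    """(NH4)2SO4 concentration (M) when mixing mobile phase A
    (3.0 M (NH4)2SO4 in 0.05 M KH2PO4, pH 7.0) with mobile phase B
    (0.05 M KH2PO4, pH 7.0): only phase A contributes the salt."""
    return 3.0 * (1.0 - percent_b / 100.0)

# A hypothetical linear 0 -> 100 %B program, sampled at five points.
profile = [round(ammonium_sulfate_at(b), 2) for b in (0, 25, 50, 75, 100)]
print(profile)
```

Since the phosphate concentration is the same in both phases, only the ammonium sulphate concentration changes as %B rises, falling linearly from 3.0 M to zero.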

HIC separates biomolecules based on their hydrophobicity, which is suited for samples that have been precipitated by ammonium sulfate or after IEC, because the
sample contains elevated salt levels and can be applied directly to the HIC column. The most widely used supports are hydrophilic carbohydrates, such as cross-
linked agarose and synthetic copolymer materials. The hydrophobic groups (phenyl, butyl, octyl, ether, or isopropyl) are attached to the stationary column.

RPC and RP-HPLC involve the separation of molecules on the basis of hydrophobicity. The base matrix for the reversed phase media is generally composed of
silica or a synthetic organic polymer such as polystyrene with linear hydrocarbon chains (C18, C4, C8, phenyl, and cyanopropyl ligands). The porosity of the
reversed phase beads is a key factor of the available capacity for solute binding by the medium.

Hydrophobic interaction chromatography (HIC) separates proteins based on the hydrophobicity of the protein and has been used to follow oxidation and Asp
isomerization. Hydrophobic interactions tend to be strongest at high salt concentrations. Small hydrophobic residues such as phenyl or propyl groups are coupled
to the chromatographic matrix. After loading in high salt, the protein is eluted using a decreasing salt gradient. In this manner, the more hydrophobic proteins elute
later during the chromatography.

In theory, hydrophobic interaction chromatography (HIC) and RPLC are closely related, as in both techniques separation is based on hydrophobic interactions
between the surface of an analyte and the stationary phase. However, in application, the techniques are very different. The solid phase used in RPLC is
characteristically more hydrophobic than that used in HIC, resulting in stronger interactions between solute and solid phase compared to HIC. For elution from
RPLC, organic solvents must be used. In comparison, the weaker hydrophobic interactions present using HIC can be disrupted by decreasing the concentration of
salt in the mobile phase.

HIC offers an alternative system to exploit hydrophobic properties of molecules in a more polar and less denaturing environment. The stationary phase consists of
a nonionic group fused to an inert matrix, while the mobile phase consists of a phosphate buffer, pH 7, and a salt such as potassium chloride, ammonium sulfate, or
ammonium tartrate.

Hydrophobic interaction chromatography (HIC) is a method developed by Smyth et al. (1978) that separates molecules based on differences in their surface
hydrophobicity. It provides a reliable and rapid method for screening and purifying biosurfactant-producing organisms, such as Serratia marcescens, P. aeruginosa,
Bacillus pumilus, B. laterosporus, and Acinetobacter calcoaceticus.

However, HIC is generally not compatible with detergents. Ion exchange chromatography can be used instead, but it is not recommended because detergent
molecules may partially mask the proteins' native charges. Affinity chromatography, which relies on specific binding of the protein to the resin, is usually not
affected by detergents. Gel sizing columns are compatible with almost all detergents, but thorough column equilibration is essential.

The discovery of HIC resulted from an attempt to make affinity columns, which uncovered a unique mode of protein chromatography. On the surface, HIC
resembles reversed-phase chromatography in that the protein binds to the column through hydrophobic interactions in an aqueous solvent. Both resin types consist
of a stationary phase with a hydrophobic surface. HIC resins are typically constructed from polysaccharide or polymeric material, while reversed-phase resins are
typically bonded silicas.

Conformational changes are driven by high salt concentrations, such as of ammonium sulfate, which present an ionic environment favorable to hydrophilic
surfaces. Hydrophobic surfaces are driven together so that their exposure to the solvent is reduced; the proteins are partially "salted out" and adsorb to the resin.
The use of ammonium sulfate is



relatively gentle because most proteins are stabilized in the presence of high concentrations of ammonium sulfate. However, high salt may destabilize the particle,
possibly due to its high water content or because the capsid proteins are twisted into destabilized conformations.

The loading material may be adjusted to high salt prior to application to the column; alternatively, small amounts may be applied repeatedly with
equilibration-buffer washes, or the material may be diluted in equilibration buffer in-line with the load. The advantage of the two latter methods is that protein
precipitation proceeds slowly from the time of salt addition, so limiting the time of exposure to high salt may improve the chromatography and mitigate yield loss.
Elution is achieved by reducing the salt concentration with a reverse gradient.

HIC columns are optimized and operated along the same lines as ion exchange columns, although residence time on the column should be minimized due to the
possibility of denaturation. Yields of virus from this type of chromatography typically range between 20% and 60%.

Hydrophobic Interaction Chromatography (HIC) is a technique used in biochemistry to separate and purify molecules based on their hydrophobic properties.

Hydrophobic interaction chromatography (HIC) is a highly effective method employed to purify proteins in both analytical and preparation contexts.

HIC is a widely used method for separating and purifying protein molecules based on their hydrophobicity. It is preferred over some other
chromatography techniques because it provides a less damaging environment for protein separation, so the proteins retain their biological activity without
alteration.

HIC can be utilised with great efficacy to eliminate contaminants or aggregation species in aqueous solutions by capitalising on the disparity in hydrophobic
characteristics between the aggregates and the desired molecules. It is frequently employed in conjunction with methodologies such as ion exchange or gel
filtration chromatography.

The concept of Hydrophobic Interaction Chromatography

During hydrophobic interaction chromatography (HIC), the protein molecules under examination are loaded onto a column in a buffer solution containing a
high concentration of salt. The added salt decreases the solvation of the sample molecules, exposing their hydrophobic regions and thereby enhancing the
interaction between those regions and the hydrophobic surface of the medium.

The quantity of salt required to facilitate binding is inversely related to the hydrophobicity of the molecules. The sample molecules can therefore be eluted from
the column in order of increasing hydrophobicity by employing a salt gradient that decreases over time. Proteins that remain bound can be removed efficiently by
rinsing with a low-salt buffer or water.

The Process of Hydrophobic Interaction Chromatography

HIC media consist of alkyl or aryl ligands attached to an inert and porous matrix. These media are then packed onto a chromatography column in a packed bed
configuration.

A buffer with a moderately high salt concentration is employed to occupy the pores and interstitial spaces within the matrix. Commonly employed salts include
1–2 M ammonium sulphate or 3 M sodium chloride, chosen to enhance binding of the protein sample to the medium while reducing the interaction of less
hydrophobic proteins (impurities).

The column is washed to remove unbound proteins.

The salt content is gradually decreased in order to initiate the elution of proteins. The adjustment of salt gradients enables the selective separation of proteins
based on their hydrophobicity, with the least hydrophobic proteins being eluted first.

A final rinse with a salt-free buffer effectively removes strongly bound proteins. Additives in the buffer can assist the release of bound proteins; possible
additives include water-miscible alcohols, chaotropic salts, and detergents.

Occasionally, harsher conditions such as 0.5–1.0 M sodium hydroxide, 70% ethanol, or 30% isopropanol may be necessary to completely remove all bound
proteins.
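The workflow above (high-salt equilibration and load, wash, decreasing salt gradient, salt-free strip) can be sketched as a toy calculation. This is a minimal sketch with invented protein names, hydrophobicity scores, and a simple threshold rule for when each protein releases from the ligand; none of these numbers come from the text.

```python
# Toy model of HIC elution: each protein stays bound while the salt
# concentration is above its (hypothetical) release threshold. More
# hydrophobic proteins tolerate lower salt, so they elute later.

# Hypothetical hydrophobicity scores (0 = hydrophilic, 1 = very hydrophobic)
proteins = {"impurity_A": 0.2, "target": 0.6, "aggregate": 0.9}

LOAD_SALT = 1.5  # M ammonium sulphate in the equilibration/load buffer

def release_salt(hydrophobicity, load_salt=LOAD_SALT):
    """Salt concentration below which the protein lets go of the ligand.
    Modelled as inversely related to hydrophobicity (illustrative only)."""
    return load_salt * (1.0 - hydrophobicity)

def elution_order(sample):
    # A decreasing salt gradient releases the least hydrophobic protein first.
    return sorted(sample, key=lambda p: release_salt(sample[p]), reverse=True)

print(elution_order(proteins))  # ['impurity_A', 'target', 'aggregate']
```

As in the text, the least hydrophobic species comes off first as the gradient descends, and the most hydrophobic species needs the salt-free strip at the end.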

Key determinants impacting hydrophobic interactions

Workshop-5 Module-1 Page 227 of 333


The adsorption behaviour of proteins is determined by the type of ligand utilised. For example, straight chain alkyl ligands exhibit hydrophobic properties, while aryl
ligands have both aromatic and hydrophobic interactions.


Matrix - The most often employed supports are hydrophilic carbohydrates, such as cross-linked agarose, and synthetic copolymers. The selectivity of different
supports may differ even with the same ligand.

Degree of substitution refers to the amount of ligand immobilised on the matrix. The binding capacity increases as the degree of ligand substitution increases.
Nevertheless, very high levels of ligand substitution strengthen the interaction and make it difficult to elute the proteins.

Temperature - The strength of hydrophobic interactions increases with temperature, and elevated temperature also affects the conformation and solubility of
proteins. Nevertheless, temperature is rarely employed to control elution in hydrophobic interaction chromatography (HIC).

pH - The mobile phases employed in hydrophobic interaction chromatography (HIC) typically have a pH range of 5 – 7, which is considered neutral. The impact of
pH on protein-medium interactions differs among different proteins. In general, the hydrophobic contact between the medium and protein diminishes as the pH
increases, due to an increase in protein charge. Although pH can influence protein binding, its impact is not deemed substantial enough to employ pH gradients for
solute molecule elution.

Salt - The addition of salt to the buffer and sample enhances the ligand-protein interaction, although protein precipitation becomes a risk at elevated salt
concentrations. Sodium, ammonium, and potassium sulphates are highly efficient in promoting the ligand-protein interaction, but are also recognised for inducing
stronger precipitation effects.



Q 4: Briefly describe ‘Affinity Chromatography’.



Affinity chromatography is a technique for separating substances by exploiting the particular binding interaction between an immobilised ligand and its
corresponding binding partner. Illustrative instances comprise antibody/antigen, enzyme/substrate, and enzyme/inhibitor interactions.


1.5 Affinity Chromatography

Affinity chromatography is a technique used in biochemistry to separate and purify specific molecules based on their affinity for a particular ligand immobilised
on a solid support.

Affinity chromatography is based on the selective binding of a protein to a stationary ligand, while the remaining proteins flow through the column.

The preferred ligands are monoclonal antibodies. To obtain a monoclonal antibody tailored to your protein, it must be custom-manufactured.

Monoclonal antibodies, however, are costly and must themselves be purified. They typically bind strongly and are difficult to detach, so harsh conditions may be
required that can inactivate your protein or damage some of the monoclonal antibodies.



An affinity chromatogram is a graphical representation of the separation of molecules based on their specific interactions with an immobilised ligand.


Equipment

Source: Wikipedia, the free encyclopaedia

Affinity chromatography is a technique used to separate a biomolecule from a mixture by exploiting a highly specific binding affinity between the biomolecule and
another substance. The binding contact between biomolecules varies depending on the specific type; often utilised binding interactions include antigen and
antibody, enzyme and substrate, receptor and ligand, or protein and nucleic acid[1]. These interactions are often employed for the isolation of different
biomolecules. Affinity chromatography is advantageous due to its exceptional selectivity and separation resolution, in comparison to alternative chromatographic
techniques.

Principle

Affinity chromatography offers the benefit of precise binding interactions between the analyte of interest (often dissolved in the mobile phase) and a binding partner
or ligand (fixed on the stationary phase). In a standard affinity chromatography experiment, the ligand is bound to a solid matrix that cannot dissolve, typically a
polymer like agarose or polyacrylamide. The matrix is chemically altered to have reactive functional groups, which can react with the ligand to create strong
covalent bonds. The stationary phase is initially placed in a column, and then the mobile phase is introduced. Molecules that form a complex with the ligand will
stay attached to the stationary phase. Subsequently, a wash buffer is employed to eliminate non-target biomolecules by disrupting their less strong connections
with the stationary phase, while the desired biomolecules will remain attached. The target biomolecules can be eliminated by using an elution buffer, which breaks
the contacts between the bound target biomolecules and the ligand. The desired molecule is consequently retrieved in the eluting solution.[5][page needed]

Affinity chromatography can be performed without prior knowledge of the molecular weight, charge, hydrophobicity, or other physical properties of the analyte.
However, understanding the binding properties of the analyte is beneficial for designing an effective separation protocol. The table below provides an overview of
the binding interactions commonly utilised in affinity chromatography procedures.

Common biological interactions employed in affinity chromatography[6]

Sr. no Types of ligand Target molecule

1 Substrate analogue Enzymes

2 Antibody Antigen

3 Lectin Polysaccharide

4 Nucleic acid Complementary base sequence

5 Hormone Receptor

6 Avidin Biotin/Biotin-conjugated molecule

7 Calmodulin Calmodulin binding partner

8 Glutathione GST fusion protein

9 Protein A or Protein G Immunoglobulins

10 Nickel-NTA Polyhistidine fusion protein

Batch and column setups

Principle of affinity column chromatography

Batch chromatography

Binding to the solid phase may be achieved by column chromatography: the solid medium is packed into a column, the initial mixture is run through the column
and allowed to bind, a wash buffer is run through the column, and the elution buffer is then applied to the column and collected. These steps are usually carried
out at ambient pressure. Alternatively, binding can be achieved by a batch treatment: the initial mixture is added to a vessel containing the solid phase and
mixed; the solid phase is then separated (for example by centrifugation), the liquid phase removed, and the solid phase washed and re-centrifuged; finally the
elution buffer is added, the mixture is centrifuged again, and the eluate is collected.

Occasionally, a hybrid approach is utilised where the binding process is carried out using the batch method. However, the solid phase containing the desired
molecule is subsequently packed onto a column, and the column is used for washing and elution.
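The batch procedure described above is essentially a sequence of phase separations. The sketch below is a deliberately idealised model with made-up component names and all-or-nothing binding, not a description of real partitioning behaviour.

```python
# Toy batch-mode affinity step: the ligand binds only its partner, the
# supernatant is decanted after centrifugation, and the elution buffer
# recovers the bound target. Binding is treated as all-or-nothing.

def batch_affinity(mixture, ligand_partner):
    bound = {c: v for c, v in mixture.items() if c == ligand_partner}
    supernatant = {c: v for c, v in mixture.items() if c != ligand_partner}
    # Wash step: in this idealised model nothing weakly bound remains.
    eluate = bound  # elution buffer disrupts the ligand interaction
    return supernatant, eluate

feed = {"GST_fusion": 3.0, "host_proteins": 50.0, "nucleic_acids": 4.0}
waste, product = batch_affinity(feed, "GST_fusion")
print(product)  # {'GST_fusion': 3.0}
```

In the hybrid approach from the text, the same binding step would be done in a vessel, after which the solid phase is packed onto a column for washing and elution.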

The ligands employed in affinity chromatography are derived from both biological and inorganic sources. Biological examples include serum proteins, lectins,
and antibodies; inorganic examples include boronic acid, metal chelates, and triazine dyes.[7]

Another technique, known as expanded bed adsorption, has been developed to incorporate the benefits of the aforementioned approaches. The solid phase
particles are positioned within a column, through which the liquid phase is introduced from the lower end and discharged from the upper end. The gravitational force
exerted on the particles prevents the solid phase from leaving the column along with the liquid phase.

Affinity columns can be eluted by altering the salt concentrations, pH, pI, charge, and ionic strength either directly or by using a gradient. This allows for the
separation of the desired particles.

In recent times, configurations utilising multiple columns arranged in a series have been created. An advantage of using multiple column setups is that the resin
material can be completely loaded, as any non-binding product is directly transferred to the next column with fresh column material. The term used to describe
these chromatographic techniques is periodic counter-current chromatography (PCC). The cost of resin per unit of product can be significantly decreased. By
eluting and regenerating one column while the other is loaded, the advantages can be fully utilised with just two columns. However, using additional columns can
provide more flexibility in terms of elution and regeneration times, although this comes at the expense of additional equipment and resin costs.
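The resin-cost argument for PCC can be made concrete with a back-of-the-envelope calculation. The capacities, prices, lifetimes, and utilisation figures below are invented purely for illustration, not taken from the text or from any vendor data.

```python
# Hypothetical comparison of resin cost per gram of product: a single
# column must stop loading well before breakthrough, while a two-column
# PCC setup lets the first column load closer to saturation because the
# second column catches any breakthrough. All numbers are assumptions.

RESIN_CAPACITY_G = 30.0   # g of product per L of resin at full saturation
RESIN_PRICE = 10_000.0    # currency units per L of resin
RESIN_LIFETIME = 100      # cycles before the resin is replaced

def cost_per_gram(utilisation):
    """Resin cost per gram of captured product at a given fractional
    utilisation of the resin's saturation capacity."""
    grams_per_cycle = RESIN_CAPACITY_G * utilisation
    return RESIN_PRICE / (grams_per_cycle * RESIN_LIFETIME)

single_column = cost_per_gram(0.60)  # stop loading before breakthrough
pcc = cost_per_gram(0.95)            # second column catches breakthrough
print(f"single column: {single_column:.2f}/g, PCC: {pcc:.2f}/g")
```

Under these assumed numbers, fuller resin utilisation translates directly into a lower resin cost per unit of product, which is the advantage the text describes.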

Precise applications

Affinity chromatography finds utility in various applications, such as nucleic acid purification, protein purification from cell-free extracts,[9] and blood purification.

Affinity chromatography is a method that allows for the separation of proteins based on their ability to bind to a specific fragment. This technique relies on the
biological properties of the desired protein, making it a valuable tool for purification. It enables the purification of proteins to a high degree in a single step.

Diverse affinity media

There are numerous types of affinity media available for various purposes. These media are generally activated or functionalized, serving as a functional spacer
and support matrix. Additionally, they help avoid the need to handle harmful reagents.

Amino acid media are employed with a diverse range of serum proteins, peptides, and enzymes, as well as rRNA and dsDNA. Avidin-biotin media are employed
in the purification of biotin/avidin and their derivatives.
Carbohydrate-bonding media are commonly employed with glycoproteins or any other molecule that contains carbohydrates, and with lectins or any other
carbohydrate-binding protein. Dye-ligand media lack specificity, but effectively mimic biological substrates and proteins. Glutathione media are useful for the
isolation of recombinant proteins tagged with GST. Heparin is a versatile ligand with a broad affinity, primarily employed for the isolation of plasma coagulation
proteins, nucleic acid enzymes, and lipases.

Hydrophobic interaction media are primarily employed to selectively bind free carboxyl groups and proteins.

Immunoaffinity media exploits the high specificity of antigens and antibodies to achieve separation. Immobilised metal affinity chromatography, on the other hand,
relies on interactions between metal ions and proteins (typically with special tags) for separation. Another method involves the use of nucleotide/coenzyme to
separate dehydrogenases, kinases, and transaminases.

Nucleic acids serve the purpose of capturing mRNA, DNA, rRNA, and other nucleic acids/oligonucleotides. The Protein A/G technique is employed for the
purification of immunoglobulins.

Specialty media are specifically formulated for a certain class or kind of protein/coenzyme. These media are exclusively effective in isolating and differentiating a
specific protein or coenzyme.

Immunoaffinity

Another use of this process is the extraction of antibodies from blood serum using affinity purification. If the serum is confirmed to possess antibodies targeting a
specific antigen (e.g., if the serum is derived from an organism that has been immunised against the antigen in question), it can be utilised for the process of affinity
purification of said antigen. It is alternatively referred to as Immunoaffinity Chromatography. For instance, when an organism is vaccinated against a GST-fusion
protein, it will generate antibodies specifically targeting the fusion-protein, and maybe also antibodies targeting the GST tag. Subsequently, the protein can be
chemically bonded to a stable material like agarose and employed as a specific binding agent in the process of isolating antibodies from immunological serum.

To ensure comprehensiveness, the GST protein and the GST-fusion protein can be coupled separately. The serum is first allowed to bind to the GST affinity
matrix, which removes the antibodies directed against the GST part of the fusion protein. The serum is then separated from the solid support and allowed to bind
to the GST-fusion protein matrix, immobilising any antibodies that recognise the antigen. Elution of the desired antibodies is typically accomplished with a low-pH
buffer, such as glycine at pH 2.8. The eluate is collected into a neutral Tris or phosphate buffer to counterbalance the acidic elution buffer and prevent any loss
of antibody activity. This is an exemplary demonstration of affinity purification: the technique is used to purify the original GST-fusion protein, to remove
unwanted anti-GST antibodies from the serum, and to purify the desired target antibody.
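The two-stage depletion-then-capture scheme can be expressed as simple set operations. A minimal sketch, in which the antibody names and the column contents are hypothetical and binding is treated as exact:

```python
# Toy model of the two-step purification: pass serum over a GST column to
# deplete anti-GST antibodies, then over a GST-fusion column to capture
# antibodies that recognise the antigen itself. Names are illustrative.

serum = {"anti_GST", "anti_antigen", "anti_flu", "albumin_binder"}

def deplete(sample, binds_to_column):
    """Return the unbound fraction (what flows through the column)."""
    return sample - binds_to_column

def capture(sample, binds_to_column):
    """Return the bound fraction (what is later eluted at low pH)."""
    return sample & binds_to_column

step1_flowthrough = deplete(serum, {"anti_GST"})  # GST matrix
purified = capture(step1_flowthrough, {"anti_GST", "anti_antigen"})  # GST-fusion matrix
print(purified)  # {'anti_antigen'}
```

Running the depletion step first is what guarantees that only antigen-specific antibodies, and not anti-GST antibodies, remain bound on the second column.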

Monoclonal antibodies can be specifically chosen to bind proteins with high precision, allowing for the gentle release of the protein. This can be utilised for future
research purposes.[14]

Peptide antigens are frequently purified using a simpler method while generating antibodies. When synthesising peptide antigens, a cysteine residue is added at
either the N- or C-terminus of the peptide. This cysteine residue possesses a sulfhydryl functional group, enabling the peptide to readily bind to a carrier protein
such as Keyhole limpet hemocyanin (KLH). The peptide containing cysteine is immobilised onto an agarose resin by attaching it to the cysteine residue. This
immobilised peptide is subsequently employed for purifying the antibody.

The purification of the majority of monoclonal antibodies has been accomplished by affinity chromatography, utilising immunoglobulin-specific Protein A or Protein
G, which are generated from bacteria.[15]

The technique of immunoaffinity chromatography, using monoclonal antibodies that are fixed onto a monolithic column, has shown effective in isolating extracellular
vesicles (such as exosomes and exomeres) from human blood plasma. This is achieved by specifically targeting the tetraspanins and integrins present on the
surface of these vesicles. This method has been documented in scientific literature.[16][17]

Immunoaffinity chromatography serves as the foundation for immunochromatographic test (ICT) strips, which offer a swift method of diagnosing patients in
healthcare settings. By utilising ICT, a technician can assess a patient's condition directly at their bedside, eliminating the necessity for a laboratory. ICT detection
is exceptionally accurate in identifying the precise microorganism responsible for an infection.

IMAC (Immobilised Metal Ion Affinity Chromatography)



Immobilised metal ion affinity chromatography (IMAC) relies on the selective formation of coordinating covalent bonds between amino acids, specifically histidine,
and metal ions. This method operates by selectively retaining proteins that have a strong attraction to metal ions in a column that contains fixed metal ions, such as
cobalt, nickel, or copper. It is used to purify proteins or peptides that contain histidine by using immobilised metal ions, or to purify phosphorylated proteins or
peptides by using iron, zinc, or gallium. Many naturally occurring proteins lack a strong affinity for metal ions; consequently, recombinant DNA technology can be
employed to insert a protein tag of this nature into the appropriate gene. Techniques employed to separate the protein of interest involve altering the pH or
introducing a competing chemical, such as imidazole.[20][21]
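A polyhistidine tag can be recognised programmatically. The sketch below flags sequences carrying a run of six or more consecutive histidines (the common His6 tag length) as Ni-NTA binders; the sequences are made up and the all-or-nothing binding rule is a simplification of the real coordination chemistry.

```python
import re

# Simplified IMAC selection: proteins with a hexahistidine run bind the
# immobilised Ni2+ ions; everything else flows through. Elution is then
# achieved by competition with excess imidazole, as described in the text.

def binds_ni_nta(seq: str) -> bool:
    """True if the sequence carries a run of >= 6 consecutive histidines."""
    return re.search(r"H{6,}", seq) is not None

lysate = {
    "tagged_target": "MHHHHHHSSGLVPRGSHM",  # engineered His6 tag (invented)
    "host_protein":  "MKTAYIAKQRQISFVKSH",  # no tag (invented)
}

bound = {name for name, seq in lysate.items() if binds_ni_nta(seq)}
print(bound)  # {'tagged_target'}
# A low-imidazole wash removes weak binders; a high imidazole
# concentration then competes for the Ni2+ sites and elutes the target.
```

The same selection idea applies to cobalt or copper resins; only the metal ion and the stringency of the wash change.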

The purification of proteins with histidine tags is achieved by using a chromatography column that contains nickel-agarose beads.

See also: Polyhistidine-tag

Genetically engineered proteins

Affinity chromatography is primarily employed for purifying recombinant proteins, making it the most prevalent application of this technique. Proteins that have a
confirmed attraction are labelled with protein tags to facilitate their purification. The protein may have undergone genetic modification to enable it to be specifically
chosen for its ability to attach with high affinity. This modified protein is commonly referred to as a fusion protein. Protein tags encompass hexahistidine (His),
glutathione-S-transferase (GST), maltose binding protein (MBP), and the Colicin E7 variant CL7 tag. Histidine tags exhibit a strong attraction to nickel, cobalt, zinc,
copper, and iron ions that have been fixed in place by making coordinating covalent connections with a chelator integrated into the stationary phase. To achieve
elution, an abundant quantity of a chemical capable of functioning as a ligand for metal ions, such as imidazole, is employed. GST exhibits a strong attraction
towards glutathione, which can be obtained in a commercially available form as immobilised glutathione agarose. During elution, an excess amount of glutathione
is employed to displace the protein that has been labelled or tagged. CL7 exhibits a strong attraction and selectivity towards Immunity Protein 7 (Im7), which can be
obtained in a solid form as Im7 agarose resin that is available for purchase. To elute the tag-free protein, a potent and selective protease is used on the Im7 resin.
[22]

Lectins

Lectin affinity chromatography is a type of affinity chromatography that employs lectins to separate constituents within the sample. Lectins, like concanavalin A,
are proteins that have the ability to selectively bind certain carbohydrate structures, specifically alpha-D-mannose and alpha-D-glucose. Commonly used lectin
media include Con A-Sepharose and WGA-agarose.[23] Another instance of a lectin is wheat germ agglutinin, which specifically binds to
D-N-acetyl-glucosamine.[24] The primary purpose of this technique is to separate glycoproteins from non-glycosylated proteins or to distinguish between
different glycoforms.[25] While there are multiple methods to conduct lectin affinity chromatography, the ultimate objective is to isolate the protein via the sugar
ligand associated with it.[23]

Specialty

A further use of affinity chromatography is the purification of particular proteins using a gel matrix tailored to a specific protein. As an illustration, E. coli
β-galactosidase is purified by affinity chromatography using p-aminobenzyl-1-thio-β-D-galactopyranosyl agarose as the affinity matrix. This matrix is chosen
because its galactopyranosyl group acts as an effective substrate analogue for E. coli β-galactosidase, allowing the enzyme to attach to the stationary phase of
the affinity matrix. β-Galactosidase is then released by gradually increasing the salt concentration applied to the column.

Alkaline phosphatase

Alkaline phosphatase derived from E. coli can be purified using a DEAE-cellulose matrix. The phosphatase possesses a small negative charge, enabling it to
form weak associations with the positively charged amine groups present in the matrix. The enzyme can then be recovered by elution using a buffer solution
with an increased salt concentration.[27]

Boronate affinity chromatography

Boronate affinity chromatography involves the utilisation of boronic acid or boronates to separate and measure quantities of glycoproteins. This sort of
chromatography has been utilised in clinical settings to examine the long-term condition of diabetes patients by analysing their glycated hemoglobin.[24]

Purification of serum albumin

Utilising affinity purification techniques can effectively eliminate surplus albumin and α2-macroglobulin contamination during mass spectrometry procedures. The
Cibacron Blue-Sepharose stationary phase is employed in the affinity purification of serum albumin to selectively collect and attract serum proteins. Next, the serum
proteins can be extracted from the adsorbent using a buffer solution that contains thiocyanate (SCN−).[28]

Weak affinity chromatography

Weak affinity chromatography (WAC) is a liquid chromatographic technique used in drug discovery for affinity screening. It separates chemical compounds by
exploiting their varying weak affinities to an immobilised target. The greater the compound's affinity for the target, the more it will be retained in the separation unit,
resulting in a longer retention time. By processing the retention periods of analysed molecules, one can get the measurement of affinity and the ranking of affinity.
Affinity chromatography is a component of a broader range of methods employed in chemoproteomics for the purpose of identifying drug targets.
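Since retention on the immobilised target reflects affinity, ranking screened compounds reduces to sorting by retention time. A minimal sketch in which the fragment names, retention times, and void time are all invented values:

```python
# In weak affinity chromatography (WAC), longer retention on the
# immobilised target implies higher affinity. Ranking screened fragments
# therefore reduces to sorting by retention beyond the void time.

retention_minutes = {
    "fragment_07": 2.1,
    "fragment_12": 5.8,
    "fragment_03": 3.4,
    "dmso_blank":  1.0,   # unretained marker (void time)
}

def affinity_ranking(times, void_time=1.0):
    """Rank compounds by retention beyond the void time, strongest first."""
    retained = {c: t - void_time for c, t in times.items() if t > void_time}
    return sorted(retained, key=retained.get, reverse=True)

print(affinity_ranking(retention_minutes))
# ['fragment_12', 'fragment_03', 'fragment_07']
```

Real WAC screens also convert retention into a quantitative affinity estimate; the simple sort here captures only the ranking step described in the text.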



The efficacy of the WAC technology is highlighted by its application to various protein targets, including proteases, kinases, chaperones, and protein-protein
interaction (PPI) targets. Research has demonstrated that WAC is more efficient than traditional approaches for fragment-based screening.[31]

History

The concept and first development of affinity chromatography were pioneered by Pedro Cuatrecasas and Meir Wilchek.[32][33]

Citations

Aizpurua-Olaizola, Oier; Sastre Torano, Javier; Pukin, Aliaksei; Fu, Ou; Boons, Geert Jan; de Jong, Gerhardus J.; Pieters, Roland J. (January 2018). "Utilising
affinity capillary electrophoresis to evaluate the binding affinity of cholera toxin inhibitors that are based on carbohydrates". Electrophoresis. 39 (2): 344–347.
doi:10.1002/elps.201700207. PMID 28905402. S2CID 33657660.

Ninfa, Alexander J.; Ballou, David P.; Benore, Marilee (2009). Fundamental Laboratory Approaches for Biochemistry and Biotechnology (2nd ed.). Wiley. p. 133.
ISBN 9780470087664.

"Introduction to Affinity Chromatography". bio-rad.com. Bio-Rad. Retrieved September 14, 2020.

Zachariou, Michael, ed. (2008). Affinity Chromatography: Methods and Protocols (2nd ed.). Totowa, NJ: Humana Press. pp. 1–2. ISBN 9781588296597.

Bonner, Philip L.R. (2007). Protein Purification (2nd ed.). Totowa, NJ: Taylor & Francis Group. ISBN 9780415385114.

Kumar, Pranav (2018). Biophysics and Molecular Biology. New Delhi: Pathfinder. p. 11. ISBN 978-93-80473-15-4.

Fanali, Salvatore; Haddad, Paul R.; Poole, Colin F.; Schoenmakers, Peter; Lloyd, David, eds. (2013). Liquid Chromatography: Applications. Handbooks in
Separation Science. Saint Louis: Elsevier. p. 3. ISBN 9780124158061.

Baur, Daniel; Angarita, Monica; Müller-Späth, Thomas; Steinebach, Fabian; Morbidelli, Massimo (2016). "Optimal design for comparing batch and continuous
multi-column protein A capture processes". Biotechnology Journal. 11 (7): 920–931. doi:10.1002/biot.201500481. hdl:11311/1013726. PMID 26992151.
S2CID 205492204.



5-5: Biopharmaceuticals Manufacturing: Special Considerations

Step 1

Warm up - Before watching the video, answer the question to 'unlock' your prior knowledge

Q: What is meant by the term ‘immortal’? What happens during the ‘ageing’ process?

Someone or something that is immortal will live or last forever and never die or be destroyed. Biological immortality is an absence of aging: specifically, it is the
absence of a sustained increase in the rate of mortality as a function of chronological age. A cell or organism that does not experience aging, or ceases to age at
some point, is biologically immortal.



On the quest to immortality: how close can humans get?
31 OCT 2018

WRITTEN BY ABIGAIL SAWYER (SENIOR EDITOR)


From immortal cell lines to artificial intelligence, how can we define immortality and just how close can humans get?
What is immortality?
The Oxford dictionary defines immortality as "the ability to live forever; eternal life." But how should this be interpreted, and what does it mean to be immortal? If we define it in a physical sense, immortal cell lines already exist; this is not to say that humans as physical beings could become immortal. If we instead look at immortality as our thoughts, personalities and ideas living on, there are initiatives at work that aim to achieve this in some form by 2045.

Immortal cells are divided into two kinds: embryonic stem cells and cancer cells. Immortality in cancer cells is related to telomere shortening. Telomeres are the sections of non-coding DNA at the ends of chromosomes to which the primer for DNA replication attaches. Each time a new DNA strand is built up from this primer, the telomere becomes slightly shorter. As normal cells continue to divide, the telomeres shorten until, eventually, there is no telomere left. When this happens, replication results in missing parts of coding sequences, which the cell treats as DNA damage, leading to senescence and cell death. This is one of the factors that contributes to the aging process and leads to a finite life span. Cancer cells overcome this issue because they contain the enzyme telomerase, which synthesizes telomeres, so they do not suffer telomere shortening. Embryonic stem cells are also considered immortal, as they do not age, can proliferate indefinitely and can form any tissue of the organism.

The origin of HeLa cells
While not the first immortal cell line – there were immortal mouse cell lines long before human ones – the HeLa cell line is the oldest and most commonly used immortal human cell line. It is named after its cell donor, a tobacco farmer named Henrietta Lacks, who was diagnosed with cervical cancer in the 1950s. Scientists at Johns Hopkins Hospital (MD, USA) took a biopsy of her cancerous cervical tumor on February 8, 1951, and cell biologist George Otto Gey later discovered that he was able to keep the cells alive. He then isolated one specific cell, multiplied it and developed a cell line. Prior to this, cultured human cells would only last a couple of days. Since then, HeLa cells have been used in the development of a vaccine for polio in 1952, were the first human cells to be successfully cloned in 1953, and scientists have grown an estimated 50 tons of them. It is also thought that up to approximately 20% of other cell lines could be contaminated by HeLa cells due to their exceptional hardiness. Lacks and her family were completely unaware of this scientific breakthrough until years later. Though an important milestone in medical research, this also highlighted an important bioethics issue: the scientists did not ask for permission before extracting Lacks' cells, nor did they explain to her children why they would proceed to sample their blood on many occasions over a number of years.

Whole brain emulation
From immortal cells to immortal beings: whole brain emulation, or mind transfer, is something you may have heard horror stories about. With Russian billionaire Dmitry Itskov's 2045 Initiative to achieve immortality by uploading his brain to a computer, it may seem like something that will be possible over the coming decades. "Within the next 30 years, I am going to make sure that we can all live forever. I'm 100% confident it will happen. Otherwise I wouldn't have started it," commented Itskov. The end goal of the initiative, projected for 2045, is to create hologram-like avatars carrying the 'uploaded' minds of humans who have passed away. This may seem like science fiction, and thus far it is, as many of the scientific discoveries it relies upon are not close to becoming reality yet. One of these is a complete map of the human brain, including the positions and interactions of all ~100 billion neurons. Some neuroscientists believe that if the brain and all of its sensory inputs and outputs could be approached as if it were a computer, Itskov's goal of transferring an individual's personality into a new non-living entity may be possible. "All of the evidence seems to say in theory it's possible – it's extremely difficult, but it's possible," explained Randal Koene, formerly a professor at the Center for Memory and Brain of Boston University (MA, USA). "So then you could say someone like that is visionary, but not mad, because that implies you're thinking of something that's just impossible, and that's not the case." Even if individual immortality were possible in the future, there are many reasons why it might be negative for humankind, including decreased genetic variability, a risk of extinction due to the limited timeframe of fertile females and human reproduction, and the many social, cultural, historical and economic issues that would inevitably arise if people had the ability to live forever.


Immortality
Wikipedia, the free encyclopedia


The Fountain of Eternal Life in Cleveland, Ohio, United States, is described as symbolizing "Man rising
above death, reaching upward to God and toward Peace."[1]

Immortality is the concept of eternal life.[2] Some species possess biological immortality.[3][4]


Some scientists, futurists, and philosophers have speculated on the immortality of the human body, with some claiming that human immortality might be achieved in the first few decades of the twenty-first century with the use of technologies such as mind uploading (digital immortality).[5] Other proponents argue that life extension is a more realistic aim in the immediate future, with immortality awaiting further research advances. The absence of ageing would give humanity biological immortality, but not immunity from disease or harm. On the first view, whether immortality is achieved within the coming decades depends primarily on research progress (for example, research into immortalised cell lines); on the second, it remains a longer-term objective.[6] The concept of eternal human life and the existence of an immaterial soul has long been a source of dispute and speculation in religion. In religious contexts, immortality is frequently mentioned as one of the promises made by divinities to humans who practise virtue or follow divine rule.

Definitions

Scientific
Main article: Anti-aging movement
Life extension technologies claim to be working towards total rejuvenation. Cryonics holds up the possibility of resurrecting the dead in the future, assuming
appropriate medical advances. While it is possible for a species to be biologically eternal, as demonstrated by hydra and Planarian worms, these are animals that
are physiologically extremely different from humans, and it is unclear whether something similar would ever be viable for humans.
Religious
See also: Soul and Resurrection
Immortality in religion usually refers to either bodily immortality or a spiritual afterlife. In cultures such as ancient Egyptian, Mesopotamian, and Greek religions, the
immortal gods were thought to have corporeal bodies. In Mesopotamian and Greek religions, the gods made certain men and women physically immortal, whereas
many Christians believe that all sincere believers will be restored to bodily immortality. Rastafarians and Rebirthers hold similar ideas about the possibility of
physical immortality.
Physical immortality
Physical immortality is a state of life in which a person can prevent death while maintaining conscious thought. It can refer to a person's perpetual existence derived
from a physical source other than organic life, such as a computer.
Before the birth of modern science, alchemists attempted to construct the Philosopher's Stone,[7] and other cultures' legends such as the Fountain of Youth or the
Peaches of Immortality inspired attempts to discover an elixir of life.
To obtain true human physical immortality, modern scientific tendencies such as cryonics, digital immortality, rejuvenation discoveries, or forecasts of an eventual
technological singularity must overcome all causes of death.
Causes of death
Main Article: Death
The three main causes of death are natural ageing, sickness, and damage.[8] Each of these is being addressed by separate lines of research; the various approaches have yet to be unified.
Aging
Aubrey de Grey, a leading researcher in the field,[9] defines ageing as "a collection of cumulative changes to the molecular and cellular structure of an adult organism, which result from essential metabolic processes, but which also, once they progress far enough, increasingly disrupt metabolism, resulting in pathology and death." Current causes of ageing in humans include cell loss (without replacement), DNA damage, oncogenic nuclear mutations and epimutations, cell senescence, mitochondrial mutations, lysosomal aggregates, extracellular aggregates, random extracellular cross-linking, immune system decline, and endocrine changes. To eliminate ageing, each of these causes must be addressed, as part of a programme known as engineered negligible senescence. There is also a large amount of evidence showing that ageing is characterised by a loss of molecular fidelity.[10]
Disease
Disease is theoretically solvable via technology. In summary, it is an aberrant situation that affects an organism's body, which the body should not normally have to
deal with due to its inherent makeup.[11] Human understanding of genetics is leading to cures and therapies for a wide range of previously incurable diseases. The
methods by which other diseases cause damage are being better understood. Sophisticated approaches for early disease detection are being developed.
Preventative medicine is becoming increasingly understood. Neurodegenerative disorders such as Parkinson's and Alzheimer's may soon be cured through the use
of stem cells. Breakthroughs in cell biology and telomere studies are leading to cancer therapies. AIDS and TB vaccines are currently under development. Genes
linked to type 1 diabetes and certain cancers have been identified, paving the way for the development of novel medicines. Artificial devices connected directly to the nervous system may restore sight to the blind. Drugs are being developed to address a wide range of other diseases and conditions.
Trauma
Physical trauma would continue to pose a threat to eternal physical existence, as an otherwise immortal individual would still be vulnerable to unforeseeable
accidents or disasters. The speed and quality of paramedic intervention remain critical factors in surviving severe trauma.[12] This effect would be mitigated by a
body that could automatically heal itself after extreme harm, such as one of nanotechnology's potential applications. If a continued physical existence is to be
maintained, the brain must be protected from harm. This residual vulnerability of the brain to trauma would naturally cause major behavioural changes, and could make physical immortality undesirable for some people.
Environmental change

Organisms that are not affected by these causes of death will still face the challenge of obtaining sustenance (whether from currently available agricultural
processes or hypothetical future technological processes) in the face of changing availability of suitable resources as environmental conditions shift. Even an organism that avoids ageing, sickness, and trauma may die from a lack of resources, such as hypoxia or famine.
If there is no limit on the degree of incremental risk mitigation, it is possible for the cumulative probability of death over an infinite horizon to be less than certainty, even if the risk of fatal trauma in any finite interval exceeds zero. This is the mathematical basis of the notion of 'actuarial escape velocity'.
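The 'actuarial escape velocity' point above is, at bottom, a fact about infinite products in probability: if the per-interval risk of death falls fast enough, the product of the per-interval survival probabilities converges to a positive limit rather than zero. A minimal sketch in Python illustrates this; the geometric hazard model p_n = p0·r^n and the specific numbers are illustrative assumptions, not taken from the source:

```python
import math

def survival_probability(p0=0.01, r=0.9, periods=10_000):
    """Probability of surviving `periods` consecutive intervals when the
    per-interval risk of fatal trauma shrinks geometrically: p_n = p0 * r**n.
    Accumulates in log space for numerical stability."""
    log_survival = 0.0
    for n in range(periods):
        log_survival += math.log1p(-p0 * r ** n)  # log of (1 - p_n)
    return math.exp(log_survival)

# Decreasing risk: the survival probability approaches a positive limit
# (roughly 0.9 with these assumed parameters), not zero.
print(survival_probability())

# Constant risk (r = 1): survival decays toward zero, i.e. death is
# effectively certain over a long enough horizon.
print(survival_probability(r=1.0, periods=2000))
```

With a constant hazard the product 0.99^n vanishes, while the shrinking hazard leaves a finite chance of indefinite survival, which is exactly the distinction the paragraph draws.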
Biological immortality


Human chromosomes (grey) capped by telomeres (white)

Main article: Biological immortality

Biological immortality is an absence of aging. Specifically it is the absence of a sustained increase in rate of mortality as a function of chronological age. A cell or
organism that does not experience aging, or ceases to age at some point, is biologically immortal.[13]

Biologists have chosen the word "immortal" to designate cells that are not limited by the Hayflick limit, where cells no longer divide because of DNA damage or
shortened telomeres. The first and still most widely used immortal cell line is HeLa, developed from cells taken from the malignant cervical tumor of Henrietta
Lacks without her consent in 1951. Prior to the 1961 work of Leonard Hayflick, there was the erroneous belief fostered by Alexis Carrel that all normal somatic cells
are immortal. By preventing cells from reaching senescence one can achieve biological immortality; telomeres, a "cap" at the end of DNA, are thought to be the
cause of cell aging. Every time a cell divides the telomere becomes a bit shorter; when it is finally worn down, the cell is unable to split and dies. Telomerase is an
enzyme which rebuilds the telomeres in stem cells and cancer cells, allowing them to replicate an infinite number of times. [14] No definitive work has yet
demonstrated that telomerase can be used in human somatic cells to prevent healthy tissues from aging. On the other hand, scientists hope to be able to grow
organs with the help of stem cells, allowing organ transplants without the risk of rejection, another step in extending human life expectancy. These technologies are
the subject of ongoing research, and are not yet realized.[15]
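The telomere mechanism described above can be caricatured in a few lines of code. This is a deliberately toy sketch under simplified, assumed parameters (a telomere "length" in arbitrary units, one unit lost per division), not a biological model:

```python
def divisions_until_senescence(telomere_length=50, loss_per_division=1,
                               telomerase=False, max_divisions=1000):
    """Toy model of the Hayflick limit: each division trims the telomere,
    and when it is exhausted the cell becomes senescent and stops dividing.
    A telomerase-positive cell (stem or cancer cell) rebuilds the lost cap
    after every division, so it never hits the limit."""
    divisions = 0
    while telomere_length > 0 and divisions < max_divisions:
        telomere_length -= loss_per_division
        if telomerase:
            telomere_length += loss_per_division  # telomerase restores the cap
        divisions += 1
    return divisions

print(divisions_until_senescence())                 # finite replicative lifespan
print(divisions_until_senescence(telomerase=True))  # only stopped by the simulation cap
```

The normal cell divides a fixed number of times and then stops, mirroring the Hayflick limit; the telomerase-positive cell divides until the arbitrary simulation cap, mirroring the "unbounded" replication of stem and cancer cells.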

Biologically immortal species


See also: List of longest-living organisms

Life defined as biologically immortal is still susceptible to causes of death besides aging, including disease and trauma, as defined above. Notable immortal species
include:

 Bacteria – Bacteria reproduce through binary fission. A parent bacterium splits itself into two identical daughter cells which eventually then split themselves
in half. This process repeats, thus making the bacterium essentially immortal. A 2005 PLoS Biology paper[16] suggests that after each division the daughter
cells can be identified as the older and the younger, and the older is slightly smaller, weaker, and more likely to die than the younger. [17]
 Turritopsis dohrnii, a jellyfish (phylum Cnidaria, class Hydrozoa, order Anthoathecata), after becoming a sexually mature adult, can transform itself back into
a polyp using the cell conversion process of transdifferentiation.[18] Turritopsis dohrnii repeats this cycle, meaning that it may have an indefinite lifespan.
[18]
Its immortal adaptation has allowed it to spread from its original habitat in the Caribbean to "all over the world". [19][20]
 Hydra is a genus belonging to the phylum Cnidaria, the class Hydrozoa and the order Anthomedusae. They are simple fresh-water predatory animals
possessing radial symmetry.[21][22]
Evolution of aging
Main article: Evolution of aging

As the existence of biologically immortal species demonstrates, there is no thermodynamic necessity for senescence: a defining feature of life is that it takes in free
energy from the environment and unloads its entropy as waste. Living systems can even build themselves up from seed, and routinely repair themselves. Aging is
therefore presumed to be a byproduct of evolution, but why mortality should be selected for remains a subject of research and debate. Programmed cell death and
the telomere "end replication problem" are found even in the earliest and simplest of organisms. [23] This may be a tradeoff between selecting for cancer and
selecting for aging.[24]

Modern theories on the evolution of aging include the following:

 Mutation accumulation is a theory formulated by Peter Medawar in 1952 to explain how evolution would select for aging. Essentially, aging is never selected
against, as organisms have offspring before the mortal mutations surface in an individual.
 Antagonistic pleiotropy is a theory proposed as an alternative by George C. Williams, a critic of Medawar, in 1957. In antagonistic pleiotropy, genes carry
effects that are both beneficial and detrimental. In essence this refers to genes that offer benefits early in life, but exact a cost later on, i.e. decline and
death.[25]
 The disposable soma theory was proposed in 1977 by Thomas Kirkwood, which states that an individual body must allocate energy for metabolism,
reproduction, and maintenance, and must compromise when there is food scarcity. Compromise in allocating energy to the repair function is what causes
the body gradually to deteriorate with age, according to Kirkwood.[26]
Immortality of the germline
Individual organisms ordinarily age and die, while the germlines which connect successive generations are potentially immortal. The basis for this difference is a
fundamental problem in biology. The Russian biologist and historian Zhores A. Medvedev[27] considered that the accuracy of genome replicative and other synthetic
systems alone cannot explain the immortality of germlines. Rather Medvedev thought that known features of the biochemistry and genetics of sexual
reproduction indicate the presence of unique information maintenance and restoration processes at the different stages of gametogenesis. In particular, Medvedev
considered that the most important opportunities for information maintenance of germ cells are created by recombination during meiosis and DNA repair; he saw
these as processes within the germ cells that were capable of restoring the integrity of DNA and chromosomes from the types of damage that cause irreversible
aging in somatic cells.


Prospects for human biological immortality

Life-extending substances
Some scientists believe that boosting the amount or proportion of telomerase in the body, a naturally occurring enzyme that helps maintain the protective caps at
the ends of chromosomes, could prevent cells from dying and so may ultimately lead to extended, healthier lifespans. A team of researchers at the Spanish
National Cancer Centre (Madrid) tested the hypothesis on mice. It was found that mice which were "genetically engineered to produce 10 times the normal levels of telomerase lived 50% longer than normal mice".[28]

In normal circumstances, without the presence of telomerase, if a cell divides repeatedly, at some point all the progeny will reach their Hayflick limit. With the
presence of telomerase, each dividing cell can replace the lost bit of DNA, and any single cell can then divide unbounded. While this unbounded growth property
has excited many researchers, caution is warranted in exploiting this property, as exactly this same unbounded growth is a crucial step in enabling cancerous
growth. If an organism could safely renew its body cells indefinitely in this way, it would, in theory, stop aging.

Embryonic stem cells express telomerase, which allows them to divide repeatedly and form the individual. In adults, telomerase is highly expressed in cells that
need to divide regularly (e.g., in the immune system), whereas most somatic cells express it only at very low levels in a cell-cycle dependent manner.

Technological immortality, biological machines, and "swallowing the doctor"


Main article: Molecular machine

Technological immortality is the prospect for much longer life spans made possible by scientific advances in a variety of fields: nanotechnology, emergency room
procedures, genetics, biological engineering, regenerative medicine, microbiology, and others. Contemporary life spans in the advanced industrial societies are
already markedly longer than those of the past because of better nutrition, availability of health care, standard of living and bio-medical scientific advances. Technological immortality predicts further progress for the same reasons over the near term. An important aspect of current scientific thinking about
immortality is that some combination of human cloning, cryonics or nanotechnology will play an essential role in extreme life extension. Robert Freitas, a
nanorobotics theorist, suggests tiny medical nanorobots could be created to go through human bloodstreams, find dangerous things like cancer cells and bacteria,
and destroy them.[29] Freitas anticipates that gene-therapies and nanotechnology will eventually make the human body effectively self-sustainable and capable of
living indefinitely in empty space, short of severe brain trauma. This supports the theory that we will be able to continually create biological or synthetic replacement
parts to replace damaged or dying ones. Future advances in nanomedicine could give rise to life extension through the repair of many processes thought to be
responsible for aging. K. Eric Drexler, one of the founders of nanotechnology, postulated cell repair devices, including ones operating within cells and using as yet
hypothetical biological machines, in his 1986 book Engines of Creation. Raymond Kurzweil, a futurist and transhumanist, stated in his book The Singularity Is
Near that he believes that advanced medical nanorobotics could completely remedy the effects of aging by 2030. [30] According to Richard Feynman, it was his
former graduate student and collaborator Albert Hibbs who originally suggested to him (circa 1959) the idea of a medical use for Feynman's theoretical
micromachines (see biological machine). Hibbs suggested that certain repair machines might one day be reduced in size to the point that it would, in theory, be
possible to (as Feynman put it) "swallow the doctor". The idea was incorporated into Feynman's 1959 essay There's Plenty of Room at the Bottom.[31]


Cryonics
Main article: Cryonics
Cryonics, the practice of preserving organisms (either intact specimens or only their brains) for possible future revival by storing them at cryogenic temperatures where metabolism and decay are nearly completely stopped, offers a way to 'pause' for those who believe that life extension technologies will not advance sufficiently during their lifetime. Ideally, cryonics would allow clinically dead persons to be revived in the future, once solutions for their ailments have been discovered and ageing is reversible. Modern cryonics methods use a process known as vitrification, which produces a glass-like state rather than freezing when the body is exposed to low temperatures. This lowers the possibility of ice crystals damaging cell structures, which would be especially harmful in the brain, whose fine structure is thought to encode the individual's thought.
Mind-to-computer uploading
Main article: Mind uploading
One advanced proposal is to upload an individual's habits and memories using a direct mind-computer interface. The individual's memories may be transferred to a
computer or a new organic body. Extropian futurists such as Moravec and Kurzweil have predicted that, with exponentially increasing computing power, it will be
feasible to upload human consciousness onto a computer system and live perpetually in a virtual environment.
This could be performed via advanced cybernetics, in which computer hardware is initially implanted in the brain to help categorise memory or expedite mental
processes. Components would be added gradually until the individual's full brain functions were managed by artificial devices, avoiding rapid transitions that would
cause identification issues, putting the person at risk of being pronounced dead and so not being the lawful owner of his or her possessions. After this point, the
human body may be viewed as an optional accessory, with the programme that implements the person being transferred to any sufficiently powerful computer.
Another proposed method for mind upload is to do a comprehensive scan of an individual's original, organic brain and recreate the complete structure in a
computer. It is unclear what level of detail such scans and simulations would require to imitate awareness, as well as if the scanning process might harm the brain.
[a]
It is proposed that gaining immortality through this technique would necessitate a careful examination of the role of consciousness in mental operations. An
uploaded mind would be a replica of the original mind, not the conscious mind of the living entity involved in the transfer. Without a simultaneous upload of
consciousness, the original living entity remains mortal and does not achieve real immortality.[33] Research into neural correlates of consciousness is still
ambiguous on this topic. Whatever the path to mind uploading, people in this state could be deemed essentially immortal, barring the loss or traumatic destruction of the devices that run them.
Cybernetics
Main article: Cyborg
The process of transforming a human into a cyborg may include the placement of brain implants or the extraction of a human processing unit, which is subsequently placed into a robotic life-support system.[34] Replacement of organic organs with robotic counterparts, such as pacemakers, has the potential to increase life expectancy. Furthermore, depending on the definition, other technological augmentations of the human body, such as genetic alterations or the insertion of nanobots, may qualify a person as a cyborg. Some people believe that such changes would make a person impervious to the ravages of ageing and disease, potentially guaranteeing immortality unless they are deliberately killed or destroyed.
Main article: Digital immortality
Religious attitudes
Main articles: Afterlife and Soul
As late as 1952, the editors of the Syntopicon found, in their compilation of the Great Books of the Western World, that "the philosophical issue concerning immortality cannot be separated from issues concerning the existence and nature of man's soul."[35] Thus, up to the twenty-first century, the majority of speculation about immortality focused on the nature of the afterlife.
Abrahamic religions
The views of Christianity, Islam, and Judaism on the concept of immortality differ because each religious system encompasses distinct theological interpretations
and doctrines on the enduring human soul or spirit.
Christianity
Main articles: Eternal life (Christianity), Christian conditionalism, Christian mortalism, and Universal resurrection

Adam and Eve condemned to mortality. Hans Holbein the Younger, Danse Macabre, 16th century

According to Christian theology, Adam and Eve, along with all their descendants, lost the power to live forever in their physical bodies as a result of the Fall. However, this original state of bodily incorruptibility was considered an extraordinary condition.[36] Christians who affirm the Nicene Creed hold the belief in universal resurrection: every deceased individual, regardless of their faith in Christ, will experience a resurrection upon the Second Coming.[37] Paul the Apostle, after abandoning his previous life as a Pharisee (a Jewish social movement that believed in a future physical resurrection), presents a unified perspective on believers who are resurrected. According to Paul, both the
physical and spiritual aspects of believers will be reconstructed to resemble the glorified body of Christ after his resurrection. This transformation will elevate our
humble bodies to the same glorious state as his, as stated in the ESV translation.[39] This idea reflects Paul's portrayal of believers being "immersed, therefore,
with him [that is, Christ] through baptism into death" (ESV).

N.T. Wright, a theologian and former Bishop of Durham, has pointed out that many individuals tend to overlook the corporeal dimension of Jesus' pledge. In his
conversation with Time, he stated that the resurrection of Jesus signifies the commencement of a process of restoration that will be fully accomplished when Jesus
returns. One aspect of this will involve the revival of deceased individuals, who will regain consciousness, take on physical form, and engage in the process of
rejuvenation. According to Wright, John Polkinghorne, who is both a physicist and a priest, has expressed the idea in the following manner: 'God will transfer our
digital instructions onto his physical infrastructure until the moment he provides us with fresh infrastructure to execute the instructions independently.' This
statement effectively highlights two key points: firstly, the period following death, known as the Intermediate state, entails being in the presence of God without
actively inhabiting our physical bodies. Secondly, it emphasises that the most significant transformation will occur when we are once again embodied and engaged
in the administration of Christ's kingdom. "This kingdom will be comprised of the celestial realm and the terrestrial realm, harmoniously united in a novel
manifestation," he declared.

The Christian apocrypha includes immortal human figures, such as Cartaphilus, who were condemned to deathless earthly life as punishment for various offences against Christ during the Passion. The mediaeval Waldensians adhered to the belief in the eternal existence of the soul. Leaders of sects, including John Asgill and John Wroe, propagated among their adherents the belief that attaining physical immortality was feasible. Several Patristic authors have linked the eternal, intellectual soul to the depiction of God in Genesis 1:26. Among them are Athanasius of Alexandria and Clement of Alexandria, who assert that the immortal rational soul is itself the embodiment of God's image. The relationship between the immortal rational soul and the creation of humanity in the image of God is evident in early Christian liturgies.[46]

Islam

In Islamic doctrine, the idea of spiritual immortality is essential. After the death of an individual, their fate will be determined based on their beliefs and acts, and
they will enter a perpetual realm where they will find solace. A devout Muslim who adheres to the five pillars of Islam will get access to Jannah, the eternal paradise
where they will reside perpetually.

Al-Baqarah (2:25) states that individuals who have faith and engage in virtuous actions will be rewarded with gardens through which rivers flow. Whenever they are
provided with fruits from there, they exclaim, 'Indeed, this is what we were previously given,' as they are given similar things. They also have pure and holy
associates in that place, and they will dwell there eternally.

Conversely, the kafir (disbelievers) are held to remain in Jahannam indefinitely. In Islamic teachings, angels are considered immortal. However, there is a common belief among many people that angels will eventually die, including the Angel of Death. It is important to note that there is no definitive source in Islamic scripture that explicitly addresses this matter. Instead, there exist writings that may suggest this, along with the well-recognised hadeeth (narration) on the "trumpet," which is considered a munkar hadeeth (discredited narration).[51] In contrast, Jinn possess an extended lifespan ranging from 1000 to 1500 years.[52] Among many Muslim Sufi mystics, Khidr is believed to possess an extended lifespan, but not immortality. There is considerable disagreement on the exact circumstances of Khidr's eventual death, which remains a topic of debate. The claim that Khidr drank from the fountain of Life[54] is disputed and lacks strong supporting evidence. In Islam, Jesus was raised to the sky by Allah's permission in order to protect him from being crucified and to grant him a long life until the arrival of the Dajjal.[56] The Dajjal is likewise bestowed with an extended lifespan. Jesus defeats the Dajjal over a period of 40 days, with each day having a different length: one day is as long as a year, another as long as a month, another as long as a week, and the remaining days are of normal length. The Qur'an rejects the concept of rejuvenation and physical immortality, asserting that it is impossible for mankind to achieve a true elixir of life.

Every individual will experience death (Quran 3:185).

The verse conveys the ephemeral quality of existence and questions the notion of everlasting life in the material realm.

Judaism

Prior to the Babylonian exile, Judaism did not embrace the notion of a separate, non-physical, eternal soul; this belief emerged under the influence of Persian and Hellenistic philosophy. The Hebrew term nephesh, despite being rendered as "soul" in several older English Bibles, actually conveys a meaning closer to "living being".[59] In the Septuagint, nephesh was translated as ψυχή (psūchê), the Greek term for "soul".

The sole Hebrew term conventionally rendered as "soul" (nephesh) in English biblical texts refers to a living, sentient physical entity, rather than an eternal soul.[b] In the New Testament, the Greek term commonly rendered as "soul" (ψυχή) carries a similar significance to the Hebrew word, but does not imply the existence of an everlasting soul.[c] The term "soul" can also describe the entirety of an individual, the self: for example, Acts 2:41 mentions that "three thousand souls" were converted, and this usage is also seen in Acts 3:23.

The Hebrew Bible mentions Sheol (‫)שאול‬, initially understood as a term for the grave, where the deceased are laid to rest or where existence ceases, until the
eventual resurrection of the dead. The concept of resurrection is discussed explicitly only in Daniel 12:1–4, although it may be indirectly suggested in various other
passages. During the intertestamental period, new theories emerged regarding Sheol.

The perspective on immortality in Judaism is most effectively demonstrated by the numerous allusions to it during the Second Temple period. The notion of bodily
resurrection is seen in 2 Maccabees, wherein it is described as the reconstitution of the physical form. The detailed account of the resurrection of the dead can be
found in the extra-canonical writings of Enoch[62] and the Apocalypse of Baruch.[63] P.R. Davies, a British researcher specialising in ancient Judaism, asserts that
the passages found in the Dead Sea scrolls include scant or no explicit mention of either immortality or resurrection from the dead.[64] Both Josephus
and the New Testament document that the Sadducees held the view that there is no existence after death. However, there is inconsistency in the sources about
the beliefs of the Pharisees. The New Testament asserts that the Pharisees held the belief in resurrection, without explicitly clarifying whether this encompassed
the physical body or not. Josephus, a Pharisee himself, stated that the Pharisees believed in the immortality of the soul. They felt that the souls of virtuous
individuals would be reborn and inhabit new bodies, while the souls of the wicked would endure everlasting torment. The Book of
Jubilees appears to make allusions exclusively to the resurrection of the soul, or even to a broader concept of an imperishable soul. Rabbinic Judaism asserts that
in the Messianic Age, the virtuous deceased will experience resurrection with the arrival of the messiah. Subsequently, they shall be bestowed with eternal life in an
impeccable realm. Conversely, the malevolent deceased will not experience any form of resurrection. There are other Jewish beliefs on the afterlife. The Tanakh
does not provide explicit details regarding the afterlife, resulting in significant variations in perspectives and interpretations among adherents.

Workshop-5 Module-1 Page 249 of 333


Dharmic religions

The perspectives on immortality within Hinduism and Buddhism exhibit nuanced differences, with each spiritual tradition offering
distinctive theological interpretations and doctrines concerning the eternal essence of the human soul or consciousness.

Hinduism
See also: Chiranjivi and Naraka (Hinduism)

Representation of a soul undergoing punarjanma. Illustration from Hinduism Today, 2004

Hindus adhere to the belief in the existence of an eternal soul that undergoes reincarnation following death. In Hinduism, individuals undergo a cyclical process
known as samsara, which involves the repetition of life, death, and rebirth. By leading a virtuous life, individuals can enhance their karma, resulting in a greater
social status in their subsequent life. Conversely, if they lead an immoral life, their karma deteriorates, leading to a poorer social position in their next life. After
undergoing numerous cycles of refining its karma, the soul attains liberation and resides in eternal joy. In Hinduism, there is no concept of a permanent realm of
punishment. However, if a soul continuously engages in extremely wicked actions, it may descend to the lowest point of the cycle of existence.

The Upanishads include clear descriptions referring to a condition of physical immortality achieved by purification and sublimation of the five elements of the body.
In the Shvetashvatara Upanishad (Chapter 2, Verse 12), it is mentioned that when the five attributes of the elements - earth, water, fire, air, and sky - become
evident, the yogi's body undergoes purification through the practice of yoga. As a result, the yogi becomes free from illness, old age, and death.

Another perspective on immortality can be attributed to the Vedic tradition through the analysis of Maharishi Mahesh Yogi.

"The man whom these [contacts of the senses] do not disturb, who is even-minded in pleasure and pain, steadfast, he is fit for immortality, O best of men."[69]

In the perspective of Maharishi Mahesh Yogi, the verse signifies that when an individual comprehends the enduring essence of life, their mind transcends the
impact of both pleasure and pain. Such an unwavering individual surpasses the influence of death and enters a state of everlasting existence. Moreover, a person
who comprehends the boundless abundance of absolute existence is inherently liberated from the constraints of relative existence. This is what grants them the
status of immortal life.

Vallalar, an Indian Tamil saint, purportedly attained immortality and vanished permanently from a secured room in 1874

Buddhism


Anattā, or "non-self", is one of the three signs of being in Buddhism. This doctrine asserts that the physical body lacks an immortal soul and is instead comprised of
five skandhas or aggregates. Furthermore, another characteristic of existence is impermanence, also known as anicca, which directly contradicts notions of
immortality or permanence. According to a teaching in Tibetan Buddhism called Dzogchen, it is believed that individuals have the ability to convert their physical
body into an eternal body of light known as the rainbow body.

Ancient religions


Ancient Greek religion



The concept of immortality in ancient Greek religion encompassed the perpetual unity of both the physical body and the soul, as evidenced in the works of Homer,
Hesiod, and other ancient writings. In ancient beliefs, the soul was believed to possess an everlasting existence in the realm of Hades. However, when separated
from the physical body, the soul was regarded as deceased. While the majority of individuals anticipated an everlasting existence as a soul without a physical form,
a select group of individuals were believed to have achieved immortality and were granted eternal life in various locations such as Elysium, the Islands of the
Blessed, heaven, the ocean, or even beneath the earth's surface. Amphiaraus, Ganymede, Ino, Iphigenia, Menelaus, Peleus, and many more who participated in
the Trojan and Theban wars were granted immortality.

Certain individuals were regarded as having experienced death and subsequent resurrection prior to attaining bodily immortality. Zeus killed Asclepius, but later
restored him and elevated him to the status of a prominent divinity. According to several renditions of the Trojan War legend, Achilles, following his demise, was
seized from his burial pyre by his divine mother Thetis, revived, and transported to an everlasting existence in either Leuce, the Elysian meadows, or the Islands of
the Blessed. Memnon, who met his demise at the hands of Achilles, appears to have suffered a comparable destiny. Alcmene, Castor, Heracles, and Melicertes
are occasionally believed to have achieved physical immortality through resurrection.[73] According to Herodotus' Histories, the ancient sage Aristeas of
Proconnesus, who lived in the 7th century BCE, was initially discovered dead, but his body vanished from a securely locked room. Subsequently, it was discovered
that he had not only been brought back to life but had also acquired eternal life.

Early Christians, such as Justin Martyr, recognised the correlation between these conventional beliefs and the subsequent resurrection of Jesus.

"When we affirm the crucifixion, death, resurrection, and ascension into heaven of Jesus Christ, our instructor, we are not presenting anything that deviates from
your beliefs concerning individuals whom you regard as offspring of Zeus."

The concept of an immortal soul originated with either Pherecydes or the Orphics, and was notably championed by Plato and his disciples. Nevertheless, this did
not become the prevailing standard in Hellenistic philosophy. Throughout the Christian era, it is evident that many traditional Greeks held the belief that certain
individuals were resurrected from the dead and granted physical immortality, while others could only anticipate an existence as disembodied and deceased souls,
albeit eternal. This conviction was the subject of complaints by various philosophers.

Zoroastrianism

In Zoroastrianism, it is believed that four days after death, the human spirit departs from the physical body, leaving behind an empty vessel, and that souls are then destined for either heaven or hell. It is possible that these notions of the afterlife had an impact on the development of Abrahamic
religions. The Persian term denoting "immortal" is linked to the month "Amurdad" in the Iranian calendar, which signifies "deathless" in Persian and falls towards the
conclusion of July. In Persian culture, the month of Amurdad or Ameretat is commemorated due to the ancient Persians' belief that the "Angel of Immortality"
triumphed over the "Angel of Death" during this period.[76]

Norse Mythology

Examining the complex tapestry of ancient Norse religious beliefs, we discover a fascinating investigation of the notion of immortality that surpasses the domain of
human existence. The Norse universe, characterised by its varied worlds and divine beings, presents a complex web of concepts that enhance our comprehension
of existence beyond the terrestrial realm.[77]

Within the vastness of Norse mythology, three fundamental worlds serve as the foundations of existence: Asgard, the domain of the Aesir deities; Midgard, the land
of humanity; and Hel, the mysterious underworld ruled by the goddess Hel. Within the context of the universe, immortality presents itself in several complex forms,
providing warriors and mortals with a range of options in their quest for existence beyond death.[78]

The core of Norse faith revolves around the dynamic concept of the hereafter, which encompasses a diverse and expansive realm rather than a single destination.
Valhalla, the magnificent abode of Odin, awaits those valiant warriors who perish in a splendid conflict. At this location, the selected Einherjar, who are the
esteemed deceased, engage in perpetual feasting and make ready for the catastrophic occurrences of Ragnarök, which signifies the ultimate destruction of the
world. This warrior's paradise epitomises a distinctive sort of eternal life, where courageous acts reverberate throughout infinity.[79]

In contrast, the realm of Hel presents a distinct perspective on the afterlife. Hel, governed by the deity Hel, functions as the ultimate resting place for individuals
who did not meet a valiant demise. This world serves as a sanctuary for repose and contemplation, signifying a divergence from the exuberant ambiance of
Valhalla. The province of Hel in Norse mythology represents the recognition of a less intense sort of immortality, which is connected to the soul's continuation in a
separate afterlife.

Yggdrasil, the cosmic tree, plays a crucial role in the interconnection and interdependence of different realms in the cosmos. Yggdrasil represents the deep
interdependence of all things, encompassing the divine Aesir, the mortal realm of Midgard, and the underworld of Hel. The branches of the tree ascend to the
celestial realm of Asgard, while its roots delve into the profound depths of Hel, symbolising the perpetual cycle of existence, mortality, and rejuvenation.[80]

This thorough analysis of Norse cosmology reveals a profound and intricate comprehension of eternal life. The concept is not singular, but rather a dynamic
interaction of fates. The brave achieve everlasting glory in Valhalla, while the departed find peace in the halls of Hel. Additionally, the cosmic tree Yggdrasil
intricately weaves the threads of life into a profound tapestry that reverberates across time.

Philosophical religions


Taoism

Additionally, please refer to the topics of Chinese alchemy, the relationship between Taoism and death, and Xian in Taoism.

The Lüshi Chunqiu repeatedly emphasises the inevitability of death.[81] Henri Maspero observes that many scholarly works depict Taoism as a philosophical
system centred around the pursuit of immortality.[82] Isabelle Robinet argues that Taoism is better understood as a way of life rather than a religion, and that its
followers have a different perspective on Taoism compared to non-Taoist historians.[83] According to the Tractate of Actions and their Retributions, a traditional
teaching, individuals who perform a sufficient number of virtuous deeds and lead a simple, pure life can attain spiritual immortality. The worthiness of a mortal is
assessed by tallying a list of virtuous acts and transgressions. In this context, spiritual immortality refers to the ability of the soul to transcend the earthly realms of
the afterlife and ascend to the pure realms in Taoist cosmology.

1.1 Philosophical justifications for the eternal existence of the soul

Alcmaeon of Croton



Alcmaeon of Croton posited that the soul is in a perpetual state of motion without interruption. The precise structure of his argument is unclear; however, it
seems to have exerted an influence on Plato, Aristotle, and subsequent writers.[85]

Plato

Plato's Phaedo presents four reasons in support of the immortality of the soul.[86]

The Cyclical Argument, also known as the Opposites Argument, posits that Forms are eternal and immutable. Since the soul is perpetually associated with life, it
follows that it cannot experience death and is inherently "indestructible". Given the mortality and susceptibility to physical death of the body, it logically follows that
the soul must be its imperishable counterpart. Plato subsequently proposes the connection between fire and cold. If the cold form is indestructible, and fire, its
antithesis, is in close proximity, it would be compelled to retreat unharmed, just as the soul does after death. This can be compared to the concept of the opposing
polarities of magnets.

The Theory of Recollection posits that certain knowledge, such as the concept of Equality, is inherent in us from birth, suggesting the preexistence of the soul to
hold that knowledge. Plato's Meno also presents a different version of the notion, where Socrates suggests anamnesis, or the idea of having prior knowledge of
everything. However, in Phaedo, Socrates is not as confident in asserting this concept.

The Affinity Argument posits that there exists a distinction between unseen, immortal, and incorporeal entities and visible, mortal, and corporeal entities. The
essence of our being resides in our soul, which belongs to the realm of the eternal. In contrast, our physical bodies belong to the realm of the transient. Therefore,
even when our bodies perish and decompose, our soul will persist and endure.

The Argument from Form of Life, also known as The Final Argument, posits that the Forms, which are immaterial and unchanging entities, serve as the ultimate
cause of everything in the world, and that all things partake in these Forms. For instance, objects that are aesthetically pleasing are associated with the concept of
Beauty; the numeral four is associated with the concept of Evenness, and so on. The essence of the soul inherently engages with the Form of Life, so rendering the
soul immortal.

Plotinus

Plotinus presents a rendition of the argument that Kant refers to as "The Achilles of Rationalist Psychology". Plotinus initially asserts that the soul possesses a
singular nature, and subsequently observes that a singular entity is incapable of undergoing decomposition. Numerous following philosophers have contended that
the soul possesses simplicity and is inherently immortal. The tradition reaches its peak with Moses Mendelssohn's Phaedon.[87]

Metochites

According to Theodore Metochites, the soul has an inherent ability to move itself. However, a particular movement will only stop if the cause of the movement is
detached from the object being moved. This is impossible if the cause and the object are the same.

Avicenna

Avicenna advocated for the separate existence of the soul and the body, as well as the immortality of the soul.

Aquinas

The comprehensive justification for the eternal existence of the soul and Thomas Aquinas' expansion of Aristotelian doctrine can be located in Question 75 of the
First Part of the Summa Theologica.[94]

René Descartes

René Descartes argues that the soul is simple and therefore cannot decompose into parts. However, he does not consider the possibility of the soul suddenly ceasing to exist.[95]

Leibniz

Gottfried Wilhelm Leibniz, in his early work, supports a variant of the argument for the soul's immortality from its simplicity. However, like those before him, he fails to consider the possibility of the soul suddenly ceasing to exist. In his monadology, he presents a complex and innovative argument in favour of the immortality of monads.[96]

Moses Mendelssohn

Moses Mendelssohn's Phaedon presents a robust argument for the simplicity and immortality of the soul. The work consists of three
dialogues that revisit the Platonic discourse Phaedo. In these dialogues, Socrates presents arguments in favour of the immortality of the soul as he prepares for his
own death. Several philosophers, such as Plotinus, Descartes, and Leibniz, contend that the soul is simple, and because simple entities cannot decompose, they must be immortal. In Phaedon, Mendelssohn tackles the deficiencies seen in previous iterations of this argument, which Kant
refers to as the Achilles of Rationalist Psychology. The Phaedon presents a novel proposition regarding the inherent simplicity of the soul, as well as a unique
argument asserting that entities composed of basic elements cannot abruptly vanish. The text presents more original arguments supporting the idea that the soul
maintains its logical abilities for the duration of its existence.[97]

1.2 Ethics

Additionally, please refer to the section on Ethics and Politics in the Life Extension article.

The prospect of achieving clinical immortality gives rise to several medical, philosophical, and religious concerns, as well as ethical dilemmas. These encompass
enduring vegetative states, the evolution of personality over time, technology for emulating or replicating the mind or its functions, social and economic inequalities
resulting from longevity, and the ability to survive the eventual heat death of the universe.

Undesirability



Physical immortality has also been conceptualised as a state of perpetual suffering, as exemplified in the myth of Tithonus or in Mary Shelley's short work The
Mortal Immortal, where the main character endures the anguish of outliving all his loved ones. To find more examples in the realm of literature, please refer to the
section on Immortality in fiction.

Kagan (2012)[98] contends that any manifestation of human immortality would be undesirable. Kagan's argument is presented in the form of a dilemma. Our
characters either remain unchanged in an eternal afterlife, or they undergo transformation.

• If our characteristics stay fundamentally unchanged, meaning that we maintain our current desires, interests, and objectives, then throughout an endless expanse
of time, we will inevitably become bored and find eternal life painfully monotonous.

• Alternatively, if our characters undergo drastic transformations, such as having our memories periodically erased by God or being endowed with rat-like brains that get endless satisfaction from some basic enjoyments, we would become so dissimilar to our present selves that we would not be deeply concerned about their fate.

Regardless, Kagan asserts that immortality holds no appeal. Kagan says that the ideal scenario would involve humans living for as long as they like and then
embracing death as a welcome relief from the monotonous burden of immortality.[98]

1.3 Sociology

Should humans attain immortality, it is quite probable that the world's social systems would undergo a transformation. Sociologists contend that human behaviour is
influenced by individuals' cognizance of their own mortality.[100] Given the progress in medical technology that prolongs human lifespan, it becomes imperative to
carefully deliberate on prospective societal frameworks. The world is currently undergoing a worldwide demographic transition characterised by a growing
proportion of elderly individuals and declining birth rates. The adjustments made in society to manage this demographic shift may provide valuable insights on the
potential for achieving immortality.

The field of sociology includes an expanding collection of written works that focus on the study of immortality. This literature examines various endeavours to
achieve immortality, whether it be in a literal or symbolic sense, and its significance in the current day. These endeavours encompass various efforts such as
increased focus on the deceased in Western societies, the practice of memorialising individuals online, and biomedical interventions aimed at extending human
lifespan. The pursuit of immortality and its impact on societal structures have prompted some to argue that we are transitioning into a "Postmortal Society".
Anticipated changes in societies resulting from the quest for immortality would involve shifts in societal paradigms, worldviews, and Similarly, several methods of
achieving immortality could require a substantial restructuring of civilizations, ranging from a greater emphasis on technology to a greater alignment with nature.
The user's text is "[107]".

Immortality would result in accelerated population expansion, leading to many repercussions such as the environmental impact and exceeding planetary
boundaries.

1.4 Politics

While several scientists argue that achieving radical life extension and halting the ageing process is possible, there are currently no global or national initiatives
specifically dedicated to stopping ageing or achieving radical life extension. In 2012, political parties advocating for immortality were established in Russia, followed
by the United States, Israel, and the Netherlands. Their objective was to offer political backing to research and technologies focused on anti-aging and radical life
extension. Simultaneously, they aimed to progress towards radical life extension, a life free from ageing, and ultimately, immortality. Their goal was to ensure that
the majority of people currently alive would have access to these technologies.

Several experts criticise the growing endorsement of immortality endeavours. Panagiotis Pentaris suggests that if we were to overcome ageing as the cause of
death, it would result in a greater division among people in society and a larger gap between social classes. Some argue that other projects aiming for immortality,
such as transhumanist digital immortality, radical life extension, and cryonics, are part of a capitalist system that exploits and controls, with the intention of
prolonging the lives of the privileged economic elite. Consequently, immortality could become a political and economic battleground in the twenty-first century,
pitting the wealthy against the less privileged.

4.16 Symbols

The ankh

There exist a multitude of emblems that reflect the concept of immortality. The ankh is an Egyptian sign of life, which is associated with the concept of
immortality when held by the gods and pharaohs, who were believed to possess authority over the course of life. The Möbius strip, taking the form of a
trefoil knot, serves as an additional representation of eternal existence. Symbolic depictions of infinity or the life cycle frequently serve as symbols of
immortality, contingent upon the specific context in which they are employed. Additional instances encompass the Ouroboros, the Chinese fungus of
longevity, the ten kanji, the phoenix, the peacock in Christianity, and the colours amaranth (in Western culture) and peach (in Chinese culture).

4.17 See also

 Afterlife
 Akal (Sikh term)
 Ambrosia
 Amrita
 Bioethics



 Biogerontology
 Brooke Greenberg
 Crown of Immortality
 Dyson's eternal intelligence
 Elixir of life
 Eternal return
 Eternal youth
 Ghost
 Immortal DNA strand hypothesis
 Immortalist Society
 Immortality in fiction
 Lich
 List of people claimed to be immortal in myth and legend
 Methuselah Mouse Prize
 Molecular nanotechnology
 Negligible senescence
 Tipler's Omega Point
 Organlegging
 Neidan
 Posthuman
 Resurrection
 Queen Mother of the West
 Simulated reality
 Suspended animation
 Undead
 Regeneration (theology)

4.18 Footnotes
1. ^ The basic idea is to take a particular brain, scan its structure in detail, and construct a software model of it that is so faithful to the original
that, when run on appropriate hardware, it will behave in essentially the same way as the original brain.

— Sandberg & Boström (2008)[32]

2. ^ "Even as we are conscious of the broad and very common biblical usage of the term "soul", we must be clear that scripture does not
present even a rudimentarily developed theology of the soul. The creation narrative is clear that all life originates with God. Yet the Hebrew
scripture offers no specific understanding of the origin of individual souls, of when and how they become attached to specific bodies, or of
their potential existence, apart from the body, after death. The reason for this is that, as we noted at the beginning, the Hebrew Bible does
not present a theory of the soul developed much beyond the simple concept of a force associated with respiration, hence, a life-force." [60]

3. ^ In the New Testament, "soul" (orig. ψυχή ) retains its basic Hebrew sense of meaning. "Soul" refers to one's life: Herod sought
Jesus' soul (Matt. 2:20); one might save a soul or take it (Mark 3:4); death occurs when God "requires your soul" (Luke 12:20).
4. ^ For Avicenna's views, see: Moussa, Dunya, & Zayed (1960); [89] Arberry (1964);[90] Michot (1986);[91] Janssen (1987);[92] Marmura (2005)
(complete translation).[93]

4.19 Notes

1. ^ Marshall Fredericks (2003). "GCVM History and Mission". Greater Cleveland Veteran's Memorial, Inc. Archived from the original on 16
February 2009. Retrieved 14 January 2009.
2. ^ "immortality". Oxford English Dictionary (Online ed.). Oxford University Press. doi:10.1093/OED/6198259326. (Subscription or participating
institution membership required.)
3. ^ Berthold, Emma (10 September 2018). "The animals that can live forever". Curious. Retrieved 20 September 2023.
4. ^ "7 Immortal Animals That Can Basically Live Forever". Reader's Digest. Retrieved 20 September 2023.
5. ^ "We'll be uploading our entire minds to computers by 2045 and our bodies will be replaced by machines within 90 years, Google expert
claims "Kurzweil"". Retrieved 30 December 2021.
What is an immortal vampire?

Immortality is perhaps one of the most alluring aspects of vampires. They are beings who live forever, never aging or dying and remaining irresistibly beautiful.

Workshop-5 Module-1 Page 254 of 333


The allure of the immortal: Why do vampires remain a popular ...


In this article, Erikka Innes explores the concept of vampires, creatures of the night that appear in many versions across cultures. Vampires are often associated with class differences and forbidden sexual desire, but historically they were linked to the belief that the dead could still harm the living, and to a misunderstanding of disease and of how death affects the human body.

In early stories about vampires, they are described as awkward, sexual, and aggressively attacking people and livestock for their blood. They are creepy, unkempt,
and bestial. It wasn't until John Polidori's The Vampyre that they became wealthy, sexy predators, and Bram Stoker's Dracula that popular ideas about vampires
were solidified. Some people, like Spanish neurologist Dr. J. Gómez-Alonso, believe that folklore about early vampires may actually be about people suffering from
rabies.

The rabies virus causes partial paralysis, making it difficult for the sufferer to walk. Early vampires, before John Polidori's story, were described as having a shambling gait, consistent with this symptom. The lore of vampires has evolved over time, and some argue that the shambling gait and other traits attributed to early vampires are actually effects of the virus.

Vampires are immune to natural death, physical aging, and disease. They can be weakened or killed by werewolf venom, vervain, wooden bullets, having their hearts ripped out, burning, decapitation, or a wooden stake through the heart. Magic can also be used to kill vampires. Vampires sired by an original vampire will die if the original dies, usually within hours of the original vampire's death. Some vampires may develop immunity to some of these weaknesses over time.

The concept of vampires stopping aging at a specific age is a popular trope in fiction and folklore. In many vampire stories, the explanation for their eternal youth is
often linked to their transformation into vampires. This transformation typically involves being bitten by a vampire and then undergoing a process that changes their
physiology, making them immortal and impervious to the effects of aging. In some stories, the specific age at which a vampire stops aging is tied to the age at
which they were turned into a vampire. For example, if a person is turned into a vampire at the age of 25, they may remain physically 25 years old for eternity. This
idea adds a layer of tragedy to the vampire mythos, as they are forever frozen in time at the moment of their transformation. It's important to note that the concept
of vampires and their abilities varies widely across different cultures and fictional works, so there isn't a single definitive explanation for how vampires stop aging at
a specific age. This trope is a fascinating aspect of vampire lore that has captured the imagination of storytellers and audiences for centuries.

A vampire is a mythical creature that subsists by feeding on the vital essence of the living. In European folklore, vampires were undead creatures that visited loved
ones and caused mischief or deaths in their inhabited neighborhoods. They wore shrouds and were often described as bloated and of ruddy or dark countenance.
The term vampire was popularized in Western Europe after reports of an 18th-century mass hysteria of a pre-existing folk belief in Southeastern and Eastern
Europe. Local variants in Southeastern Europe were also known by different names, such as shtriga in Albania, vrykolakas in Greece, and strigoi in Romania,
cognate to Italian 'Strega', meaning Witch.

In modern times, the vampire is generally held to be a fictitious entity, although belief in similar vampiric creatures (such as the chupacabra) still persists in some
cultures. The charismatic and sophisticated vampire of modern fiction was born in 1819 with the publication of "The Vampyre" by John Polidori. Bram Stoker's 1897
novel Dracula is remembered as the quintessential vampire novel and provided the basis of the modern vampire legend. The vampire has since become a
dominant figure in the horror genre.

The concept of vampirism has existed for millennia, with tales of demons and spirits in ancient civilizations like Mesopotamians, Hebrews, Ancient Greeks,
Manipuri, and Romans. However, the folklore for the vampire originates mainly from early 18th-century southeastern Europe. Vampires are often revenants of evil



beings, suicide victims, or witches, but they can also be created by a malevolent spirit possessing a corpse or by being bitten by a vampire. Belief in these legends
became so pervasive that in some areas it caused mass hysteria and even public executions of people believed to be vampires.

Vampires were usually reported as bloated in appearance, and ruddy, purplish, or dark in color, often attributed to the recent drinking of blood. They would be clad in the linen shroud they were buried in, and their teeth, hair, and nails might have grown somewhat. Chewing sounds were reported emanating from graves.

Prevention of vampires was common in original folklore, with cultural practices such as burying a corpse upside-down, placing earthly objects near the grave, and
counting fallen grains. Identifying vampires involved rituals such as leading a virgin boy through a graveyard or church grounds on a virgin stallion, finding holes in
the earth over a grave, and describing corpses as having a healthier appearance than expected.

Vampire folklore often involves the use of apotropaics, such as garlic, Bibles, crucifixes, rosaries, holy water, and mirrors, to ward off or identify vampires. Some
traditions also state that vampires cannot walk on consecrated ground or cross running water. Mirrors have been used to ward off vampires when placed on a door,
as they may not have a reflection or shadow. Some traditions also hold that a vampire cannot enter a house unless invited by the owner.

Vampires were believed to be more active at night but not generally vulnerable to sunlight. Some stories suggest that eating bread baked with blood mixed into the
flour or drinking it would grant protection. Methods of destroying suspected vampires varied, with staking being the most commonly cited method, particularly in
South Slavic cultures. Decapitation was the preferred method in German and western Slavic areas, with the head buried between the feet, behind the buttocks, or
away from the body. Other measures included pouring boiling water over the grave or complete incineration of the body. In Southeastern Europe, vampires could
be killed by being shot or drowned, repeating the funeral service, sprinkling holy water on the body, or by exorcism.

Ancient beliefs about supernatural beings consuming the blood or flesh of the living have been found in nearly every culture around the world for many centuries.
The term vampire did not exist in ancient times, and blood-drinking activities were attributed to demons or spirits who would eat flesh and drink blood. Almost every
culture associates blood drinking with some kind of revenant or demon, or in some cases a deity. In India, tales of vetālas, ghoulish beings that inhabit corpses,
have been compiled in the Baitāl Pacīsī. Piśāca, the returned spirits of evil-doers or those who died insane, also bear vampiric attributes.

The Persians were one of the first civilizations to have tales of blood-drinking demons, while Ancient Babylonia and Assyria had tales of the mythical Lilitu,
synonymous with and giving rise to Lilith (Hebrew ‫ )לילית‬and her daughters the Lilu from Hebrew demonology. Greco-Roman mythology described the Empusae,
the Lamia, the Mormo, and the striges, which over time became general words to describe witches and demons respectively.

Many myths surrounding vampires originated during the medieval period, with accounts of revenants and the Old Norse draugr; the Greek librarian of the Vatican, Leo Allatius, produced the first methodological description of the Balkan beliefs in vampires. Vampires properly originating in folklore were widely reported from Eastern Europe in the late 17th and 18th centuries, forming the basis of the vampire legend that later entered Germany and England.

The 18th-century vampire controversy in Eastern Europe began with an outbreak of alleged vampire attacks in East Prussia and the Habsburg monarchy, which
spread to other localities. The panic continued for a generation, with rural epidemics of vampire attacks, undoubtedly caused by the higher amount of superstition
present in village communities.

The controversy in Austria ceased when Empress Maria Theresa sent her personal physician, Gerard van Swieten, to investigate the claims of vampiric entities. He
concluded that vampires did not exist and passed laws prohibiting the opening of graves and desecration of bodies, ending the vampire epidemics. Other European
countries followed suit, but the vampire lived on in artistic works and local folklore.

Non-European beliefs also included beings with many of the attributes of European vampires appearing in the folklore of Africa, Asia, North and South America,
and India. These beings share the thirst for blood and are classified as vampires. In Africa, various regions have folktales featuring beings with vampiric abilities,
such as the Ashanti people of West Africa, the Ewe people of the adze, the impundulu in the eastern Cape region, and the Betsileo people of Madagascar.

In the Americas, the Loogaroo is an example of how a vampire belief can result from a combination of French and African Vodu or voodoo. In the late 18th and
19th centuries, the belief in vampires was widespread in parts of New England, particularly in Rhode Island and eastern Connecticut.

Asian folklore features a variety of vampire-like creatures, such as the Nukekubi, the Tagalog Mandurugo, the Visayan Manananggal, the Malaysian Penanggalan, the Indonesian Leyak, the ma cà rồng of the Tai Dam ethnic minority of Vietnam, and the Jiangshi. The Nukekubi is a being with a detachable head and neck, while the Tagalog Mandurugo is a blood-sucker with wings and a long, hollow tongue. The Visayan Manananggal is an older, beautiful woman with batlike wings, preying on pregnant women. The Penanggalan is a woman who uses black magic to obtain beauty and detach her fanged head to fly at night. The Leyak is a woman who died during childbirth and became undead, terrorizing villages. In Vietnam, the term "ma cà rồng" refers to a demon haunting modern-day Phú Thọ Province. Jiangshi, or "Chinese vampires," are reanimated corpses that kill living creatures to absorb life essence from their victims. They are mindless creatures with greenish-white furry skin and have inspired a genre of films and literature in Hong Kong and East Asia.



Modern vampires are often depicted as charismatic villains, and vampire-hunting societies still exist, though largely for social reasons. Allegations of vampire attacks, based on the belief that the government was colluding with vampires, swept Malawi during 2002 and 2003 and recurred in late 2017. In Europe, the vampire is usually considered a fictitious being, though many communities may have embraced the revenant for economic purposes.

The origins of vampire beliefs and mass hysteria can be traced back to pre-industrial societies attempting to explain the natural process of death and
decomposition. Decomposition, which involves the accumulation of gases in the torso, can cause the body to look "plump", "well-fed", and "ruddy," making it appear
as if the corpse had recently been engaging in vampiric activity.

The belief may also have been fed by premature burial, since shortcomings in the medical knowledge of the time meant that some individuals were buried alive. In some cases, people reported sounds emanating from a specific coffin, which was later dug up and fingernail marks discovered on the inside. Another explanation for such noises is the bubbling of gases escaping from bodies decomposing naturally.

Disease has also been associated with clusters of deaths from unidentifiable or mysterious illnesses, usually within the same family or small community. In New England, for example, tuberculosis was associated with outbreaks of vampirism.

Vampire
Wikipedia, the free encyclopedia

The Vampire, by Philip Burne-Jones, 1897

A vampire is a legendary creature that survives by feeding on the vital essence (often blood) of the living. In European legend, vampires are undead entities who frequently visited loved ones and caused mischief or fatalities in the neighbourhoods they had inhabited when alive.
Description and common attributes

Vampire (1895) by Edvard Munch



It is difficult to give a single, precise definition of the folklore vampire, yet there are some traits that appear in numerous European traditions. Vampires were
typically described as bloated and ruddy, purplish, or dark in colour; these characteristics were frequently attributed to recent blood consumption, as blood could be
seen seeping from the mouth and nose when one was seen in its shroud or coffin, and the left eye was frequently open.[21] It would be dressed in the linen shroud
it was buried in, and its teeth, hair, and nails might have grown slightly, though fangs were not a feature.[22] Chewing sounds were heard coming from tombs.[23]
Creating vampires

Illustration of a vampire from Max Ernst's Une Semaine de Bonté (1934)

In the original legend, the causes of vampire generation were numerous and diverse. In Slavic and Chinese beliefs, any corpse leaped over by an animal,
especially a dog or a cat, was thought to become one of the undead.[24] A corpse with a wound that had not been treated with hot water was likewise at risk.
According to Russian tradition, vampires were previously witches or those who revolted against the Russian Orthodox Church while alive.[25]
Protection

Garlic, Bibles, crucifixes, rosaries, holy water, and mirrors have all been seen in various folkloric traditions as means of warding against or identifying vampires.[38][39]



In vampire mythology, apotropaics (items that may stave off revenants) are frequently mentioned. A famous example is garlic;[40] wild rose and hawthorn branches are frequently thought to kill vampires, and mustard seeds were traditionally sprinkled on house roofs in Europe to ward them off.[41] Other apotropaics
are holy objects like holy water, rosaries, and crucifixes. According to certain folklore, vampires are also unable to cross flowing water or tread on hallowed land like
churches or temples.[39]
Mirrors, facing outwards, have been used to ward off vampires, even though they are not typically considered apotropaic (in many traditions, vampires lack a reflection and occasionally a shadow, perhaps as a sign of their lack of a soul).[42] While not a universal quality (the Greek vrykolakas/tympanios was able to cast both reflection and shadow), Bram Stoker employed it in Dracula, and it has since become a staple among writers and filmmakers.[43]
Additionally, according to certain customs, a vampire can enter a home only by invitation from the owner; after that, they are free to come and leave as they want.
[42] Vampires in folklore were thought to be more active at night, but they were not thought to be susceptible to sunlight.[43]
Methods of destruction

A runestone with an inscription to keep the deceased in its grave.[46]

Staking was the most often mentioned technique of eliminating suspected vampires, especially in South Slavic civilizations.[47] Ash was the chosen wood in Russia and the Baltic states,[48] or hawthorn in Serbia,[49] with a record of oak in Silesia. Since it was thought that aspen was used to make Christ's cross, aspen was also used for stakes.[50][51] Aspen branches placed on the graves of alleged vampires were also thought to keep the vampires from rising at night.

800-year-old skeleton found in Bulgaria stabbed through the chest with an iron rod.[58]

Romani people buried their dead with pieces of steel in their mouths, over their eyes, ears, and between their fingers, and with steel or iron needles driven into their
hearts. They also drove a hawthorn stake through the legs of the body or stuffed hawthorn in its sock. The researchers who found a brick shoved into the throat of
a female body in a 16th-century burial near Venice in 2006 have deduced that the object was used in a ritualistic vampire-slaying ceremony.[59] More than a
hundred skeletons have been found in Bulgaria with metal objects, like plough parts, embedded in their torsos.[58]
Ancient beliefs

Lilith, 1887 by John Collier. Stories of Lilith depict her as a demon drinking blood.



For many years, myths of otherworldly entities feeding on human flesh or blood have existed in almost every civilization on the planet.[61] The term vampire, however, did not exist in the past. Blood drinking and related practices were associated with spirits or demons that consumed flesh and drank blood; the devil was even thought to be synonymous with the vampire.[62] Blood drinking is almost universally associated with some form of revenant, demon, or even god. Tales from India about vetālas, eerie creatures that live inside dead bodies, have been collected in the Baitāl Pacīsī. One of the most well-known stories from the Kathāsaritsāgara is about King Vikramāditya and his nocturnal hunts to apprehend the elusive vetāla.[63] The resurrected spirits of evildoers or insane people, known as piśāca, also have vampire characteristics.[64]
Medieval and later European folklore

Main article: Vampire folklore by region

Lithograph showing townsfolk burning the exhumed skeleton of an alleged vampire.

It was during the Middle Ages that many of the vampire beliefs first emerged. Though traces of vampire entities in English mythology are scarce after the 12th century, the historians and chroniclers William of Newburgh and Walter Map of Britain documented tales of revenants.[20][72][73] Another example of a supernatural being with vampire-like characteristics from the Middle Ages is the Old Norse draugr.[74] Jewish texts rarely discuss vampires; nevertheless, in the 16th century, rabbi David ben Solomon ibn Abi Zimra (Radbaz) recounted the story of a heartless old woman whose corpse went unattended for three days after her death, at which point it reanimated as a vampire and murdered hundreds. He made the connection between this occurrence and the absence of a shmirah (guard) after death, which could allow evil spirits to inhabit the body.

Title page of treatise on the chewing and smacking of the dead in graves (1734), a book on vampirology
by Michael Ranft.



Engraving of Dom Augustine Calmet from 1750

Both occurrences were well recorded. After analysing the remains, government investigators created case reports and publications that were distributed across Europe.[85] The "18th-Century Vampire Controversy", characterised by widespread panic, persisted for an entire generation. The widespread belief in vampires in rural areas led to an increase in staking and other gross forms of superstition, such as the practice of digging up corpses.

A stilt house typical of the Tai Dam ethnic minority of Vietnam, whose communities were
said to be terrorized by the blood-sucking ma cà rồng.

"Ma cà rồng" was initially used to describe a demon that haunts the communities of the Tai Dam ethnic minority in modern-day Phú Thọ Province; the term later came to be used for Western-style vampires in Vietnam.

Current views

The vampire in contemporary literature is often portrayed as a charming antagonist.[22] Societies that chase vampires still exist, but for more pragmatic reasons.[20] In late 2002 and early 2003 in Malawi, mobs stoned one person to death and attacked at least four others, including Governor Eric Chiwaya, because they believed the government was in league with vampires.[115] In late 2017, six individuals were murdered on suspicion of being vampires, reviving fears and violence.[116]

A vampire costume



Around the beginning of 1970, local newspapers began spreading tales that Highgate Cemetery in London was haunted by a vampire. A significant number of amateur vampire hunters visited the cemetery. Quite a few books have been written about the case, most notably by Sean Manchester, a local resident who was among the first to suggest the existence of the "Highgate Vampire" and who later claimed to have exorcised and destroyed an entire nest of vampires in the area.
Political interpretations

Political cartoon from 1885, depicting the Irish National League as the "Irish Vampire" preying on a sleeping
woman.

Vampire bats

Main article: Vampire bat

A vampire bat in Peru.

Literature

Main article: Vampire literature



Cover from one of the original serialized editions of Varney the Vampire

Carmilla by Sheridan Le Fanu, illustrated by D. H. Friston, 1872.

Notable examples of vampire literature include Anne Rice's highly popular Vampire Chronicles (1976–2003)[162] and Stephenie Meyer's Twilight series (2005–2008).[163]

Film and television

Main articles: Vampire film, List of vampire films, and List of vampire television series

A scene from F. W. Murnau's Nosferatu, 1922.

Count Dracula as portrayed by Béla Lugosi in 1931's Dracula.



1960s television's Dark Shadows, with Jonathan Frid's Barnabas Collins vampire character.

Vlad the Impaler (or Dracula), Prince of Wallachia.

Bram Stoker’s Dracula, the iconic 1897 tale of a vampire from Transylvania, is often thought to be inspired by a formidable 15th-century governor from present-day
Romania named Vlad the Impaler.

The Battle With Torches by Romanian artist Theodor Aman depicts the nighttime raid of Vlad III against Mehmed II as he sought to end the Ottoman invasion of
Wallachia. (Image credit: Public Domain/Muzeul Theodor Aman)

Vlad the Impaler, also known as Vlad III, Prince of Wallachia, was a 15th-century warlord, in what today is Romania, in south-eastern Europe. Stoker used
elements of Vlad's real story for the title character of his 1897 novel "Dracula." The book has since inspired countless horror movies, television shows and other
bloodcurdling tales.

Search inside image



5 The oldest person in the world has died - rp.pl

The oldest person in the world has died. Sister Lucile Randon was 118 years old
Sister Lucile Randon, who was the oldest living person in the world, has died at the age of 118, AFP reports.
Published: 18/01/2023 06:05

Sister André
Photo: AFP
Randon, known as Sister Andre, was born in southern France on February 11, 1904.
She died in her sleep in a nursing home in Toulon, AFP reports.
David Tavella, a spokesman for the nursing home, announced her death and said that she "wanted to join her beloved brother." - For her, death was a release - he
added.
Randon long remained the oldest European in the world, and after the death in 2022 of the Japanese woman Kane Tanaka, who was 119 years old, she became
the longest-living person on Earth.
117th birthday of a Japanese woman, the oldest person in the world
Kane Tanaka from Japan held the record as the oldest person in the world. She celebrated her 117th birthday at a nursing home in Fukuoka.

6 The oldest man in the world - Guinness record


Barbara Cykowska


People have been looking for a recipe for immortality since the beginning of time. It still remains unknown, but there are people who have achieved a foretaste of it,
crossing the extraordinary mark of 100 years. Learn the surprising stories of supercentenarians who shared their methods for longevity with the world!

6.1 The oldest woman in the world

Current Guinness record

María Branyas Morera

The current Guinness record holder, who can boast two titles, as the oldest living person in the world and therefore the oldest living woman, is María Branyas Morera. On the day of her official entry into the Guinness Book of Records, January 17, 2023, the Spanish woman was 115 years and 139 days old. Some time later, in March, she celebrated her 116th birthday!



Maria Branyas Morera was born before the sinking of the Titanic and the outbreak of World War I, on March 4, 1907, in San Francisco, California. Then, together
with her parents and four siblings, she moved to New Orleans, from where in 1915 they left for Olot in Catalonia. During a trip to Spain, as a result of a fall, Branyas
Morera suffered an eardrum injury, as a result of which she permanently lost hearing in one ear. At the end of the journey, Maria's father, Joseph Branyas Julia,
died of pulmonary tuberculosis, leaving Branyas Morera's mother to raise her family of five on her own.

In 1931, the record holder married doctor Joan Moret. The couple had three children. Years later, the woman can boast of 11 grandchildren and 11 great-
grandchildren.

Branyas Morera with her husband on their wedding day


in 1931

In 2000, at the age of 93, she moved to a nursing home in Olot, where she currently lives. She still reads the newspaper every day. The supercentenarian is deaf in one ear and partially deaf in the other. In April 2020, at the age of 113, Branyas Morera tested positive for coronavirus, but fortunately she successfully recovered. She received both doses of the vaccine in January 2021, making her one of the oldest supercentenarians to be vaccinated.

The director of the nursing home, Montse Valdayo, says that the woman teaches employees and other residents a new life lesson every day and willingly shares her memories. She describes her as a very intelligent and beautiful person. According to Maria's relatives, she has never had any health problems.

When asked about the secret of her longevity, the senior responds that she has never followed any diet regime. She ate everything, but in small quantities. The delicacy without which she cannot imagine her day is natural yogurt.

I think longevity is also happiness. Happiness and good genes.

The record holder has always avoided toxic relationships and was optimistic about what the future would bring. She also did not forget about physical activity - she
always walked a lot and spent every free moment outdoors.



Interestingly, the oldest living person in the world is active on social media with the help of her family. You can follow the senior on Twitter under the name
MariaBranyas112. A profile description that reads "I'm old, very old, but I'm not an idiot" brings a smile to my face.

Previous Guinness World Records

Kane Tanaka

The world's oldest woman was Mrs. Kane Tanaka from Fukuoka, Japan. On March 9, 2019, on the day the record was recognized, she was exactly 116 years and
66 days old.

Unfortunately, on April 19, 2022, Kane Tanaka died. She lived to be 119 years old.

Kane was born on January 2, 1903 as a premature baby. Interestingly, in the same year the Wright brothers were the first in the world to fly in an airplane.

The Japanese woman married Hideo Tanaka in 1922 at the age of 19. Even though it was a traditionally arranged marriage and Kane did not know her chosen one
before the wedding, it did not prevent them from having four children and adopting a fifth . Kane's husband ran the family business for many years - a restaurant
that sold the sweet dish "sticky rice" and Udon noodles, popular in Japan.



Despite undergoing several surgeries, including one for cataracts and another for colon cancer, Kane lived a peaceful and prosperous life in a home for the elderly in Fukuoka. The 116-year-old got up at 6 a.m. every day. In the morning, she read books and explored the world of science, which interested her most. In the afternoon she played her favorite game, Othello. Over the years, the record holder became a true expert in this classic board game, often beating the nursing home staff.

During the ceremony of presenting the official certificate confirming the record from Guinness World Records, Kane received a box of chocolates as a gift, which
she immediately opened and began to eat. When asked how many chocolates she wanted to eat on this special day, she replied 100!

Chiyo Miyako

The previous record holder was Chiyo Miyako, who was 117 years and 81 days old on the day she received the title of the oldest woman in the
world. Unfortunately, shortly after receiving the certificate, the woman died.



Chiyo was born on May 2, 1901 in Japan, in the Kansai region, in the town of Wakayama. The woman's family always considered her a very sociable and talkative
person. They said she was patient and kind, and brought joy to everyone who knew her.

Her greatest passion was calligraphy. The record holder learned it when she was young, during her school days. She created her last works until the day she
died. She loved eating - her favorite dishes were sushi and eels, which she ate almost every day. She traveled a lot thanks to her husband working on the
railway. They visited many beautiful places together.

Violet Brown

The title of the oldest woman in the world also belonged to the Jamaican supercentenarian Violet Brown. She died on September 15, 2017, at the age of 117 years and 189 days.

Violet was born on October 3, 1900 in Jamaica. She was one of the last living people born in the 19th century. Interestingly, in all these years the woman never
changed her place of residence! Her home was the quiet town of Duanvale in the Trelawny region of Jamaica.

Mrs. Mosse Brown could always count on the support of her relatives, devoted friend and caregiver - Delita Grant. Everyone appreciated the company of the
smiling old woman who shared her faith with them and recited her favorite poetry from memory.

Violet and her husband worked as farmers, growing sugar cane. After his retirement, Mr. Brown became caretaker of a nearby cemetery. Violet, on the other hand,
supported him by keeping a register of all the people buried there, which she considered a reason for pride and gratitude.

As for her diet, Violet was fond of fish and lamb, but she did not eat pork or chicken. She loved Irish sweet potatoes, breadfruit, and oranges and mangoes.

Emma Morano

Emma Morano was once the oldest woman in the world. The Italian was born on November 29, 1899, and died on April 15, 2017 in Verbania, northern Italy, at the
age of 116 years and 165 days.



Emma was one of eight children. She survived two world wars and over 90 Italian governments. The first radio signal was transmitted in the year she was born, and
four years later the Wright brothers took to the air for the first time.

Emma attributed her longevity to two factors: leaving her abusive husband and including two raw eggs and a small amount of raw minced meat in her diet, which
she ate every day since she was 20. The unusual diet was prescribed to her by a doctor to combat the anemia she had suffered from in her youth. Interestingly,
Emma rarely ate vegetables and fruit, but loved omelets and chicken.

Considering that Emma's mother, aunt and some of her siblings lived to the age of ninety, and her sister died at the age of 102, it can be concluded that good
genes were of great importance in her case!

Susannah Mushatt Jones

The certificate of the oldest woman in the world also belonged to Susannah Mushatt Jones. The American was born on July 6, 1899 in Alabama. She died on May 12, 2016 in New York, having lived for 116 years and 311 days.

According to Mrs. Mushatt, the recipe for a long life was to avoid cigarettes and alcohol, surround yourself with love and positive energy, and get enough sleep.

The American moved to New York in 1923. She earned her living as a babysitter for children. Her parents were Mary and Callie Mushatt, who had nine other
children. Mrs Mushatt's numerous siblings gave the record holder 100 nephews and nieces.

Despite losing her sight and having hearing problems, Susannah tried not to spend all day in bed. She enjoyed going out and meeting guests. Interestingly, she took only two medications a day for the rest of her life.

Susannah Mushatt Jones, whose life spanned three centuries, witnessed momentous social, industrial and technological change. She saw the first planes take off and the beginning of mass automobile production. During her lifetime, four monarchs sat on the British throne and the United States elected 20 presidents.



6.2 The oldest man in the world

Juan Vicente Pérez Mora

The current Guinness World Record holder as the world's oldest living man is Juan Vicente Pérez Mora. During the official presentation of the certificate on
February 4, 2022, the Venezuelan was 112 years and 253 days old.

The supercentenarian was born on May 27, 1909 in El Cobre, Tachira. In 1913, he came to the village of Los Paujiles, where, together with his eight brothers and
his father, he began working in agriculture. In 1938, at the age of 28, he married Ediofina del Rosario García Carrero. The couple had six sons and five daughters
and lived together for over 60 years until her death around 1999.

In the 1950s, he worked on the construction of the road from Queniquea to San José de Bolívar. Since there were no machines, he did all the work with a pickaxe
and shovel. He worked most of his life as a farmer, with a short break when he became sheriff of Caricuena in 1948. In 1962, he sold his farm in Caricuena and
bought land and a house in San José de Bolívar, where he remains today.

The super senior has 18 grandchildren, 41 great-grandchildren and 12 great-great-grandchildren. He declares that he owes his longevity to hard work, praying the
rosary twice a day and drinking a glass of aguardiente (a strong drink made from sugar cane) every day.

Previous Guinness World Records

Saturnino de la Fuente García

The title of Guinness World Record holder for the oldest living man was held by Saturnino de la Fuente García. The record holder was 112 years and 211 days old when the result was announced on September 10, 2021.



The man was born on February 11, 1909 in the Puente Castro district of León. He had seven daughters, 14 grandchildren and 22 great-grandchildren.

Thanks to his short stature (only 1.50 m), Saturnino avoided being drafted into the army during the Spanish Civil War in 1936. Instead, he started a thriving shoemaking business. His craftsmanship led him to make shoes for the military, and he became one of the most renowned craftsmen in the area. When he wasn't busy with his successful business, Saturnino cultivated another passion: playing soccer.

He died in his home at the age of 112 years and 341 days, just weeks before what would have been his 113th birthday.

Bob Weighton

The previous record holder was Bob Weighton. On March 30, 2020, the day of the meeting with the GWR commission, the Briton was 112 years and 1 day old. Sadly, on May 28, 2020, Bob died at his apartment in Alton, UK.

The man was born on March 29, 1908 in Kingston-Upon-Hull in Yorkshire, Great Britain, and was one of seven siblings. Bob's father paid £3 for each term of his
son's education until he was 16. Later, the record holder began training as a marine engineer. He then moved to Taiwan, where he spent two years learning
Mandarin and began teaching at a mission school.

In 1937 he married his wife Agnes, who was also a teacher. Their first child was born in Taiwan. In 1939, the couple decided to return to Great Britain, but the plan fell through: with the outbreak of World War II, the family was sent to Toronto, Canada. They remained there until the end of the war in 1945, and in the meantime the couple's two more children were born.



After returning to Great Britain, the man took up a position as a lecturer at City University in London. He had 10 grandchildren and 25 great-grandchildren. Despite
his old age, he moved independently using a walker until the end of his life.

Chitetsu Watanabe

The oldest man in the world was also Chitetsu Watanabe. The record was officially approved on February 12, 2020, when the Japanese man was 112 years and 344 days old. He died a few days later, on February 23, 2020.

Chitetsu was born on March 5, 1907 in Niigata, Japan, and was the oldest child of eight siblings. After graduating from agricultural school, he moved to Taiwan,
where he helped conclude contracts for a sugar cane plantation. There he met his wife Mitsue and became the father of four children. The fifth was born later, after
returning to Japan. He then took a job in an agricultural office, where he remained until his retirement.

Until his 104th birthday, Chitetsu Watanabe grew his own fruit and vegetables on a farm. He also tended more than 100 bonsai trees, which he cared for until he moved into a nursing home.



Many years of work in a sugar company had a significant impact on Chitetsu - he was always eager to reach for sweets. The record holder's favorite delicacy was
brown sugar, but since he lost his teeth, he chose sweets that did not require chewing, such as pudding or sweet cream in cakes. His hobbies included origami
folding, calligraphy and mathematical exercises.

When asked by journalists about the secret of longevity, he replied that the key to staying healthy for so many years was not to get angry and to always keep a
smile on your face.

Masazo Nonaka

On the day he received his Guinness World Records certificate, April 10, 2018, Masazo Nonaka was 112 years and 259 days old. He died on January 20, 2019 at the age of 113.

The Japanese man was born on July 25, 1905 in Ashoro on the island of Hokkaido. He loved watching TV (especially sumo matches and opera) and was a true connoisseur of cakes and other sweets.

Nonaka's particular attention to his health may be considered the secret of his longevity; among other things, he regularly bathed in hot springs. According to his daughter, the reason for her father's enduring well-being was his stress-free approach to life.

Francisco Núñez Olivera

One of the record holders was Francisco Núñez Olivera. The Spaniard was born on December 13, 1904, and died on January 29, 2018, at the age of 113 years
and 47 days.



Francisco was born in the small Spanish village of Bienvenida, in Extremadura. He was 10 years old when the First World War broke out. At the age of 19, he
joined the army and went to the front in Morocco. He had four children, nine grandchildren and fifteen great-grandchildren.

According to the elderly man, the secret to longevity is hard work and not wasting time sitting idle. His daughter adds that the peace of life in a small village, being
your own boss, not entering into conflicts with your family and enjoying life were also important.

Yisra'el Kristal

The oldest men in the world also included Yisra'el Kristal, born as Izrael Icek Kryształ. The man was born on September 15, 1903 in Żarnów near Łódź, died on
August 11, 2017. He was 113 years and 345 days old.

Yisra'el's life story both delights and moves. Although many people would envy him such a long life, knowing what he had to go through, there is probably no one who would want to share his fate.

Yisra'el was born into an Orthodox Jewish family in the early 20th century. In 1920, at the age of 17, he moved to Łódź, where he worked in his father's confectionery shop. Eight years later he married Haje Fajga Frucht and opened his own sweets factory, where he worked for the next 12 years. In 1940, a real nightmare began for Yisra'el and his loved ones: because of their Jewish origins, they were confined to the Łódź ghetto. The couple's two sons did not survive this time. The nearly forty-year-old confectioner and his wife were deported to the German concentration camp Auschwitz-Birkenau; only he survived, living to see the liberation in 1945.

After the war, Yisra'el returned to his homeland. In 1947, he remarried in Łódź and had a third son. Three years later, however, in 1950, he decided to leave Poland and settle in Israel, where he found peace. He once again opened his own sweets factory, continuing the family tradition, and settled in Haifa, where he lived until the end of his days.

Yisra'el Kristal died a month before his 114th birthday. Just a year earlier, he celebrated a bar mitzvah ceremony, which his family organized for him as a birthday
gift.



6.3 The world's oldest man in history

Jiroemon Kimura

Jiroemon Kimura (born Kinjirō Miyake) was born on April 19, 1897 in Japan and died there on June 12, 2013. He lived exactly 116 years and 54 days, which earned him a place in the Guinness Book of Records as the world's oldest man in history.

Jiroemon was the third of six siblings. From 1914 he worked at the post office. Although he officially retired at the age of 65, he worked as a farmer until the age of
90. The supercentenarian had seven children, 15 grandchildren, 25 great-grandchildren and 13 great-great-grandchildren, and his life spanned three centuries!

Until the age of 114, Jiroemon was able to walk with the aid of a walker. A year later, although confined to a wheelchair, he still maintained his cheerful spirit and good mental condition. At the end of his life, he was cared for by the 84-year-old widow of his eldest son and the 60-year-old widow of his grandson. Fortunately, Jiroemon hardly ever got sick. His family remembers him as an optimist with a positive attitude to life.

https://layah.org/aktualnosci/dlugowiecznosc-najstarsi-ludzie-swiata-zyja-ponad-120-lat


What happens to you as you become older?

Created on September 10, 2020; the next update is scheduled for 2023.

Questions naturally arise about the ageing process: Why do our bodies age, and how old can people get? The number of years you have lived, however, is not the only thing that determines how old you are.

The human body is a complex system with countless traits and functions. With the passage of time, our cells and tissues naturally sustain damage or make errors. In our earlier years these changes pose little concern, because the body can repair many of them or has sufficient reserves to compensate. As we get older, however, our capacity to deal with this damage declines, and the signs of ageing begin to accumulate.

When are we considered "old"?

In Germany, people between the ages of 60 and 75 are referred to as "of older age" or "elderly." Those between 75 and 90 are sometimes called "old," while those between 90 and 100 are called "very old." People who have reached the age of 100 or more are known as centenarians.

The number of years you have lived, known as your chronological age, is only one way of establishing how old you are. People of the same chronological age have usually not aged to the same degree. This is explained by your "biological age," which is determined by your general health and your physical and mental fitness.

How old can people get?

The maximum lifespan of a human being is estimated at a little more than 120 years, though reaching such an age is extremely rare. At present, the average life expectancy in Germany is approximately 78 years for newborn boys and approximately 83 years for newborn girls. Statistically, once you have already reached a certain age, your life expectancy is a little higher: a sixty-year-old man in Germany can expect to live to approximately 82, for example, while a sixty-year-old woman can expect to live to approximately 85. Exactly why women tend to live longer than men remains a mystery to researchers.



According to one school of thought, the genes you have inherited (more specifically, the DNA in your cells) play a role in determining the age you reach. Some people may not become frail until a later stage and, as a consequence, may live longer. Other factors also have a favourable influence: a healthy lifestyle with plenty of exercise and a balanced diet, emotional stability, and an intact social network.

What happens to your body as you age?

Your body contains many kinds of tissue. Some are made up of cells that do not live very long and therefore need to be replaced regularly. Because fewer skin cells are able to proliferate over time, for example, the rate at which skin cells are replaced gradually slows down. Other organs have cells that never divide; nerve cells in the brain are one example. Although these cells survive for a very long time, they may eventually die without being replaced.

If cells are not replaced, or if they die, the affected organs can no longer perform as effectively as they once did. Over time, many organs lose bulk, commonly known as "thinning out." This reduction takes a long time to become noticeable, because our organs hold large reserves that let them cope with increased strain when required. The typical signs of ageing do not appear until those reserves have been significantly depleted. These signs of ageing are not medical problems, however, and it is often possible to delay or even reverse them for a considerable time: if you notice your muscles getting weaker, for instance, you can do exercises to strengthen them. In any case, sports and exercise are considered good for you, for example to keep your cardiovascular system (your heart and blood vessels) and other organs fit.

Typical signs of ageing

There are a few outward signs of getting older: wrinkles and age spots appear on your skin, and your hair turns grey. As we get older, the body also becomes less able to retain fluid, which causes the spinal discs to lose elasticity and shrink; as a result, people tend to get shorter with age.

Changes of this kind in the organs and tissues inside the body typically take much longer to become apparent. Some do not show until the body is under great strain, or until very old age; in other cases they appear much earlier.

As we get older, impulses take longer to travel along our nerves, and our brains can no longer process information as effectively as before, making it harder to remember new information and to react quickly. Our sensory organs also gradually deteriorate: age-related farsightedness commonly begins in the mid-forties, and hearing problems become more frequent with age. The senses of smell and taste may decline over time as well.

What does being older mean?

Growing older means going through a vast variety of experiences and changes, both mental and physical. Over the course of our lives, our bodies and minds adjust to external circumstances and events, including the process of ageing itself. This adjustment can happen subtly and unconsciously over a long period, for instance during your working or family life, or more openly and deliberately, such as when training for a sporting goal or recovering from a serious illness.

People continue to change for as long as they live. Growing very old may be accompanied by feelings of loss and limitation, and by the challenge of having to adjust to new situations again and again. Usually, though, ageing happens so slowly that this adaptation is steady and gradual. You go through many of the changes together with family and friends, who are ageing at the same time as you. When things become more difficult, staying physically active and drawing on the knowledge and experience gained over a lifetime can help you overcome many of the obstacles you encounter.

Happiness and contentment matter just as much in later life as in younger years. Many older people enjoy their retirement, freed from earlier constraints and expectations. Some are content to have more time for themselves, their loved ones and their friends, while others look for new responsibilities to take on. The most important thing is to stay mentally and physically active for as long as possible.

Sources

"Lehrbuch Anatomie" by H. Lippert, published by Urban und Fischer in Munich in 2017.

Edited by Menche N. Munich: Urban und Fischer, 2016. "Biologie Anatomie Physiologie," a book published in 2016.

Pschyrembel, Klinisches Wortbuch, published by De Gruyter in Berlin in the year 2017.



Statistisches Bundesamt (Destatis). Average life expectancy. September 2019.

IQWiG health information is written to help people understand the advantages and disadvantages of the main treatment options and health care services. Because IQWiG is a German institute, some of the information provided here applies only to the German health care system; talking to a medical professional is the best way to decide whether any of the options described is appropriate for a particular situation.

This content is owned by the Institute for Quality and Efficiency in Health Care (IQWiG), Bookshelf ID NBK563107.

The process of ageing: what to anticipate

Do you ever wonder which aspects of the ageing process are typical? Here is what to anticipate as you get older, and what you can do about it.

By Mayo Clinic Staff

You know that wrinkles and grey hair are likely to appear as you get older. But are you aware of how your teeth, heart and sexuality may change with age? Discover the changes to anticipate as you continue to age, and ways to support good health at any age.

Your cardiovascular system

What is taking place...

The most common change in the cardiovascular system is increasing stiffness of the blood vessels and arteries, which makes your heart work harder to pump blood through them. The heart muscles change to accommodate the increased workload. Your resting heart rate stays about the same, but it no longer rises as much during physical activity as it used to. These changes considerably increase the likelihood of developing hypertension (high blood pressure) and other cardiovascular problems.

Things you can do to promote heart health:

Make physical activity a regular part of your routine. Walk, swim or do other activities you enjoy. Regular, moderate physical activity can help you maintain a healthy weight and reduce your risk of heart disease.

Eat a nutritious diet. Eat plenty of vegetables, fruits, whole grains, high-fibre foods and lean sources of protein, such as fish. Cut back on foods high in salt and saturated fat.

Don't smoke. Smoking raises your blood pressure and heart rate and contributes to the hardening of your arteries. If you smoke or use other tobacco products, ask your doctor to help you quit.

Manage your stress. Stress can take a toll on your heart. Take steps to reduce it, such as meditation, exercise or talk therapy.

Get enough sleep. Quality sleep plays an important role in the healing and repair of your heart and blood vessels. Aim for seven to nine hours a night.

Your bones, joints and muscles

What is taking place...

With age, bones tend to shrink in size and density, which weakens them and makes them more prone to fracture. You might even become a bit shorter. Muscles generally lose strength, endurance and flexibility over time, factors that can affect your coordination, stability and balance.



To promote bone, joint and muscle health, you can do the following:

Get adequate amounts of calcium. The National Academies of Sciences, Engineering, and Medicine recommends that adults consume at least 1,000 milligrams (mg) of calcium a day. The recommendation increases to 1,200 mg a day for women aged 51 and older and for men aged 71 and older. Dietary sources of calcium include dairy products, broccoli, kale, salmon and tofu. If you find it difficult to get enough calcium from your diet, talk to your doctor about calcium supplements.

Get adequate amounts of vitamin D. The recommended daily intake of vitamin D is 600 international units (IU) for adults up to age 70 and 800 IU for adults over 70. Many people get some vitamin D from sunlight; other sources include eggs, salmon, tuna, milk fortified with vitamin D, and vitamin D supplements.

Make physical activity a regular part of your routine. Weight-bearing exercises such as walking, running, tennis, climbing stairs and weight training can help you build strong bones and slow bone loss.

Avoid substance misuse. Stop smoking and limit alcoholic drinks. Ask your doctor how much alcohol might be safe for your age, sex and general health.

Your digestive system

What is taking place...

Age-related structural changes in the large intestine can put older adults at greater risk of constipation. Other contributing factors include a low-fibre diet, not drinking enough fluids and a lack of physical activity. Certain medications, such as diuretics and iron supplements, and certain medical conditions, such as diabetes, can also contribute to constipation.

To prevent constipation, you can do the following:

Eat a nutritious diet. Make sure your diet includes high-fibre foods such as fruits, vegetables and whole grains. Limit high-fat meats, dairy products and sweets, which can cause constipation. Drink plenty of water and other fluids.

Make physical activity a regular part of your routine. Regular physical activity can help prevent constipation.

Don't ignore the urge to have a bowel movement. Holding a bowel movement for too long can cause constipation.

Your bladder and urinary tract

What is taking place...

Your bladder may become less elastic as you age, causing you to need to urinate more often. Weakening of the bladder and pelvic floor muscles may make it difficult for you to empty your bladder completely, or cause you to lose bladder control (urinary incontinence). In men, an enlarged or inflamed prostate can also cause difficulty emptying the bladder and incontinence.

Other factors that contribute to incontinence include being overweight, nerve damage from diabetes, certain medications, and caffeine or alcohol consumption.

Things you can do to promote bladder and urinary tract health:

Go to the toilet on a regular schedule. Consider urinating on a schedule, for example every hour, and slowly extend the amount of time between your trips to the bathroom.

Maintain a healthy weight. If you are overweight, try to lose excess weight.

Don't smoke. If you smoke or use other tobacco products, ask your doctor to help you quit.

Do Kegel exercises. To exercise your pelvic floor muscles (Kegel exercises), squeeze the muscles you would use to stop passing gas. Try it for three seconds at a time, then relax for a count of three. Work up to doing the exercise 10 to 15 times in a row, at least three times a day.

Avoid bladder irritants. Caffeine, acidic foods, alcohol and carbonated beverages can make incontinence worse.

Avoid constipation. Eat more fibre and take other steps to avoid constipation, which can make incontinence worse.

Your memory and thinking skills



What is taking place...

Your brain undergoes changes as you age that may have minor effects on your memory or thinking skills. For example, healthy older adults might forget familiar names or words, or find it harder to multitask.

What you can do

You can promote cognitive health by taking the following steps:

Make physical activity a regular part of your routine. Physical activity increases blood flow to your whole body, including your brain. Studies suggest that regular exercise is associated with better brain function and reduces stress and depression, both of which affect memory.

Eat a nutritious diet. A heart-healthy diet may benefit your brain. Focus on fruits, vegetables and whole grains. Choose low-fat protein sources, such as fish, lean meat and skinless poultry. Too much alcohol can lead to confusion and memory loss.

Stay mentally active. Staying mentally active may help sustain your memory and thinking skills. You can read, play word games, take up a new hobby, take classes or learn to play an instrument.

Be social. Social interaction helps ward off depression and stress, which can contribute to memory loss. You might join social gatherings, spend time with family and friends, or volunteer at a local school or nonprofit organisation.

Treat cardiovascular disease. Follow your doctor's recommendations to manage cardiovascular risk factors, such as high blood pressure, high cholesterol and diabetes, which may increase the risk of cognitive decline.

Quit smoking. If you smoke, quitting may benefit your cognitive health.

If you're concerned about memory loss or other changes in your thinking skills, talk to your doctor.

The eyes and ears of yours

What is taking place...

With age, you might have difficulty focusing on objects that are close up. You may become more sensitive to glare and have trouble adapting to different levels of light. Ageing can also affect your eye's lens, which can lead to cataracts.

Your hearing might also diminish. You might have difficulty hearing high frequencies or following a conversation in a crowded room.

To promote eye and ear health:

Schedule regular checkups. Follow your doctor's advice about glasses, contact lenses, hearing aids, and other corrective devices.

Take precautions. Wear sunglasses or a wide-brimmed hat when you are outdoors, and use earplugs when you are around loud machinery or other loud noises.

Your teeth and gums

What's happening

Your gums might pull back (recede) from your teeth. Certain drugs, such as those that treat asthma, allergies, high blood pressure, and high cholesterol, can also cause dry mouth as a side effect. As a result, your teeth and gums may become more vulnerable to decay and infection.

To promote oral health:

Brush and floss. Brush your teeth twice a day, and clean between your teeth once a day with regular dental floss or an interdental cleaner.

Schedule regular checkups. Visit your dentist or dental hygienist for regular dental checkups.

Workshop-5 Module-1 Page 289 of 333


Your skin

What's happening

As you get older, your skin thins and becomes less elastic and more fragile, and the fatty tissue just below the skin decreases. You might notice that you bruise more easily. Decreased production of natural oils might make your skin drier. Wrinkles, age spots, and small growths called skin tags become more common.

What you can do

To promote healthy skin:

Be gentle. Bathe or shower in warm, not hot, water. Use mild soap and moisturizer.

Take precautions. When you're outdoors, use sunscreen and wear protective clothing. Check your skin regularly and report changes to your doctor.

Don't smoke. If you smoke or use other tobacco products, ask your doctor to help you quit. Smoking contributes to skin damage, such as wrinkling.

Your weight

What's happening

How your body burns calories (metabolism) slows down as you age. If you decrease activities as you age, but continue to eat the same as usual, you'll gain weight.
To maintain a healthy weight, stay active and eat healthy.

What you can do

To maintain a healthy weight:

Make physical activity a regular part of your routine. Regular moderate physical activity can help you maintain a healthy weight.

Eat a nutritious diet. Choose plenty of vegetables, fruits, whole grains, high-fibre foods, and lean sources of protein such as fish. Limit sugar and foods high in saturated fat.

Watch your portion sizes. To cut calories, keep an eye on your portion sizes.

Your sexuality

What's happening

With age, sexual needs and performance might change. Illness or medication might affect your ability to enjoy sex. For women, vaginal dryness can make sex
uncomfortable. For men, impotence might become a concern. It might take longer to get an erection, and erections might not be as firm as they used to be.

What you can do

To promote your sexual health:

Share your needs and concerns with your partner. You might find that physical intimacy without intercourse is right for you, or you may experiment with different sexual activities.

Get regular exercise. Exercise improves the release of sexual hormones, cardiovascular health, flexibility, mood and self-image — all factors that contribute to good
sexual health.

Talk to your doctor. Your doctor might offer specific treatment suggestions — such as estrogen cream for vaginal dryness or oral medication for erectile dysfunction
in men.

You can't stop the aging process, but you can make choices that improve your ability to maintain an active life, to do the things you enjoy, and to spend time with
loved ones.




As you age, you may notice changes in your teeth, heart, and sexuality. The most common change is the increasing stiffness of blood vessels and arteries, which
causes the heart to work harder to pump blood through these vessels and arteries. This increases the likelihood of developing hypertension and other
cardiovascular issues. To improve the health of your heart, it is important to make physical activity a regular part of your routine, such as walking, swimming, or
participating in activities that you enjoy.

Eating a nutritious diet rich in vegetables, fruits, whole grains, fiber-rich meals, and lean protein like salmon can help maintain a healthy weight and reduce the risk
of heart disease. Avoid smoking at all costs, as it increases blood pressure and heart rate, contributing to the hardening of your arteries. Control your stress by
engaging in activities such as meditation, physical activity, or talk therapy.

Getting sufficient sleep is crucial for healing and regeneration of your heart and blood vessels. Aim for seven to nine hours of sleep each night.

As people age, their bones tend to decrease in size and density, making them more prone to fractures and becoming shorter. The strength, endurance, and
flexibility of muscles decrease over time, impacting coordination, stability, and balance. To improve the health of your bones, joints, and muscles, you can consume
a proper amount of calcium per day: at least 1,000 milligrams (mg) for most adults, rising to 1,200 mg for women over 50 and men over 70.

Ensure you get enough vitamin D in your diet, with 600 international units for individuals up to 70 and 800 IU for adults over 70. Sunlight is the primary source of
vitamin D, but other sources include eggs, salmon, tuna, milk fortified with vitamin D, and vitamin D tablets.
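The calcium and vitamin D targets above can be tallied in a few lines of code. This is a minimal sketch that encodes the figures quoted in the text; the function name is illustrative, the calcium age thresholds (women over 50, men over 70) follow standard dietary guidance and are an assumption here, and none of this is medical advice:

```python
def recommended_intake(age: int, sex: str) -> tuple[int, int]:
    """Return (calcium mg/day, vitamin D IU/day) per the figures quoted above.

    Illustrative only; thresholds for the higher calcium target are assumed.
    """
    calcium_mg = 1000
    # Higher calcium target assumed for women over 50 and men over 70.
    if (sex == "female" and age > 50) or (sex == "male" and age > 70):
        calcium_mg = 1200
    # The text gives 600 IU up to age 70 and 800 IU for adults over 70.
    vitamin_d_iu = 600 if age <= 70 else 800
    return calcium_mg, vitamin_d_iu

print(recommended_intake(45, "female"))
print(recommended_intake(75, "male"))
```
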

Incorporate regular physical activity into your routine, such as walking, running, tennis, climbing stairs, and weight training, to create strong bones and slow the rate
at which bones are losing their density.

Stay away from substance misuse, such as smoking and alcohol, and consult with your physician about safe alcohol consumption for your age, gender, and overall
health.

As you age, the large intestine undergoes anatomical changes that increase the risk of constipation. Factors contributing to this condition include a poor diet, lack
of physical activity, and insufficient fluid intake. Additionally, certain medications and medical disorders like diabetes can also contribute to constipation. To avoid
constipation, it is essential to maintain a nutritious diet, limit high-fat meats, dairy products, and sweets, and engage in regular physical activity.

In addition, delaying bowel movements for an excessive amount of time can cause constipation.

As you age, your bladder becomes less elastic, causing you to urinate more frequently. This can be due to a decrease in the strength of the muscles controlling the
bladder and the pelvic floor, which can lead to urine incontinence. Incontinence can also be caused by factors such as being overweight, having nerve damage
from diabetes, taking certain drugs, and drinking alcohol or caffeine.

To improve the health of your bladder and urinary tract, it is recommended to visit the toilet regularly, maintain a healthy weight, avoid smoking, and do Kegel exercises: squeeze the pelvic floor muscles for three seconds, relax for three seconds, and repeat ten to fifteen times in succession.

Steer clear of substances that irritate the bladder, such as caffeine, acidic meals, alcohol, and carbonated beverages. Consume more fiber and take additional
measures to prevent constipation.

As you age, your brain undergoes changes that may slightly impact your ability to remember things or think critically. For example, healthy older individuals may
forget familiar names or words or find it difficult to combine multiple tasks at once.

To improve cognitive health, it is essential to make regular physical activity a part of your routine, as it increases blood flow to the entire body, including the brain. A
nutritious diet, including fruits, vegetables, whole grains, and low-fat protein sources like fish, lean meat, and chicken without skin, can be beneficial for brain
function. Consuming excessive amounts of alcohol may result in disorientation and memory loss.

Maintaining mental activity can help maintain cognitive ability by engaging in reading, playing word games, starting a new hobby, enrolling in lessons, or learning to
play an instrument. Engaging in social life can prevent depression and stress, which can lead to memory loss.



Unmanaged cardiovascular risk factors, such as high blood pressure, high cholesterol, and diabetes, can increase the risk of cognitive decline, so manage them with your physician's guidance. It is also recommended to give up smoking and to talk with your physician if you are concerned about changes in thinking abilities, such as memory loss or other problems.

Ageing can make it harder to focus on nearby objects, can alter the lens of the eye, leading to cataracts, and can impair hearing. To improve the health of your ears and eyes, schedule regular checkups, wear sunglasses or a wide-brimmed hat when outside, and use earplugs in noisy environments.

Tooth and gum health can be improved by brushing teeth twice a day, cleaning the spaces between teeth once a day, and scheduling regular dental checks with a dentist or dental hygienist of your choice.

As you age, your skin will become thinner, less elastic, and more fragile, with a decline in fatty tissue located directly below the skin. Wrinkles, age spots, and small
growths called skin tags are more common. To promote healthy skin, bathe or shower in warm water, use mild soap and moisturizer, exercise caution when
outdoors, wear protective clothing, and report changes to your doctor.

Avoid smoking at all costs, as it contributes to skin damage, such as wrinkling.

To maintain a healthy weight, make physical activity a regular part of your regimen, eat a nutritious diet, limit sugar and saturated fat, and watch portion sizes.

In summary, maintaining cognitive health is crucial for overall well-being. Regular physical activity, a nutritious diet, and maintaining a healthy lifestyle can help
improve cognitive function, reduce stress and depression, and prevent cognitive decline.

As you age, your sexual needs and performance may change, and illness or medication can affect your ability to enjoy sex. For women, vaginal dryness can make sex uncomfortable; for men, impotence may become a concern, with erections taking longer and being less firm. To promote sexual health, share your needs and
concerns with your partner, engage in regular exercise, and consult your doctor for specific treatment suggestions. Exercise improves the release of sexual
hormones, cardiovascular health, flexibility, mood, and self-image. While the aging process cannot be stopped, you can make choices that improve your ability to
maintain an active life, enjoy activities, and spend time with loved ones.

Ageing
Wikipedia, the free encyclopedia
This article is about ageing specifically in humans. For the ageing of whole organisms including animals, see Senescence.



Photos of supercentenarian Ann Pouder (8 April 1807 – 10 July 1917) on her 110th birthday. In old age, faces get deeply lined.

Ageing is the process of becoming older. Although bacteria, perennial plants, and some simple animals may be biologically immortal, ageing can also refer to single cells within an organism that have stopped dividing, or to the population of a species.

Ageing in humans includes physical, psychological, and social changes. Reaction time may reduce, but memories and general knowledge improve. About two-
thirds of the 150,000 individuals who die daily worldwide die from age-related causes.

The damage concept holds that accumulated damage (such as DNA oxidation) causes biological systems to fail, while the programmed ageing concept holds that internal processes (such as epigenetic maintenance like DNA methylation) drive ageing.[7][8] Programmed cell death is distinct from programmed ageing.

In non-primate animals, dietary calorie restriction delays ageing while maintaining health and physiological functioning, whereas obesity accelerates ageing.[9][10] Whether such life-extending effects occur in primates (including humans) is unknown.

Age vs. immortality

The potentially immortal Hydra, a relative of the jellyfish

Ageing and death affect humans and animals. Many species are potentially immortal, such as bacteria that fission to produce daughter cells, strawberry plants that
grow runners to produce clones, and Hydra animals that can regenerate.

Single-celled creatures first appeared on Earth at least 3.7 billion years ago.[12] Prokaryotes, protozoans, and algae multiply by fission into daughter cells, hence they do not age and may be potentially immortal under favourable conditions.

The development of the fungal and animal kingdoms roughly a billion years ago and the evolution of seed-producing plants 320 million years ago brought sexual reproduction and, with it, ageing: the sexual organism passes on some of its genetic material to make new individuals and becomes disposable for the survival of the species.[15] However, the finding that the bacterium E. coli splits into distinguishable daughter cells suggests that even bacteria may have "age classes".[16]

In artificial cloning, adult cells can be rejuvenated to embryonic status and then used to grow new tissue or a new animal without ageing.[19] Normal human cells, by contrast, die after about 50 cell divisions in laboratory culture (the Hayflick limit).

Symptoms
See also: Signs of Old Age

[Audio clip: a 10-second tone at 17.4 kHz illustrating age-related hearing loss; many people over 25 cannot hear it.]

Old people's enlarged ears and noses are frequently attributed to cartilage growth, but gravity is more likely.[22]

Dynamics of men's and women's body mass (1, 2) and height-normalized mass (3, 4) by age [23].

The brain of a person with Alzheimer's disease compared with a normally aged brain (left).

Most people encounter several ageing symptoms.

As teenagers, people lose the ability to hear the high-frequency sounds above about 20 kHz that young children can hear.

Photoageing causes wrinkles, especially on the face [24].

Female fertility drops after peaking in the late teens and 20s.

Human body mass peaks around age 30 and then decreases until about age 70, after which the decline levels off.[23]

Most people experience presbyopia by age 45–50 due to lens hardening by decreasing alpha-crystallin levels, which can be accelerated by higher temperatures.

Grey hair appears at age 50. Pattern hair loss affects 30–50% of men[31] and 25% of women[32].

Menopause usually occurs between 44 and 58.[33]

The incidence of osteoarthritis rises to 53% among 60-64-year-olds, though only 20% report disabling osteoarthritis at this age.[34]

Almost half of people over 75 have hearing loss (presbycusis), which hinders spoken communication. [35] Fish, birds, and amphibians can regenerate their
cochlear sensory cells, but mammals like humans cannot.

Over half of Americans have cataracts or have had cataract surgery by 80.

Frailty affects 25% of those over 85.[38][39] Muscles have a reduced capacity to respond to exercise or injury, and sarcopenia is common.[40] Maximum oxygen
use and heart rate decline.[41] Hand strength and mobility decrease.[42]

Cardiovascular disease (stroke and heart attack) is the leading cause of death worldwide.[45] Vessel ageing causes vascular remodelling and loss of arterial
elasticity, which stiffens the vasculature.

Recent research implies that age-related mortality plateaus after 105.[46] The maximum human lifetime is 115 years.[47][48] Jeanne Calment, who died aged 122
in 1997, was the oldest reliably recorded individual.

The spectrum of dementia ranges from mild cognitive impairment to neurodegenerative diseases like Alzheimer's, cerebrovascular, Parkinson's, and Lou Gehrig's;
it becomes more common with age. Although many types of memory deteriorate with age, semantic memory and general information like vocabulary definitions
usually develop or remain steady until late adulthood[51] (see Ageing brain). Intelligence declines with age, though the rate varies by type and may remain stable for most of the lifespan before dropping precipitously near death. Individual variations in length of life may explain differences in cognitive decline. After 20 years of age, the brain's myelinated axons shorten by 10% per decade.

Age-related vision loss can diminish nonverbal communication, leading to isolation and possible depression.[55] Macular degeneration causes vision loss and increases with age, affecting nearly 12% of those over 80.[57] Systemic changes in waste-product circulation and abnormal vessel growth around the retina cause this degeneration.[58] Other visual disorders that often appear with age include cataracts and glaucoma. Over time, cataracts cloud the lens of the eye, blurring vision and eventually causing blindness;[59] they are particularly common in older people and are treated with surgery. Glaucoma, another prevalent visual problem in older adults, causes vision loss through optic nerve injury.[60] It normally develops gradually, although some cases are abrupt. The damage caused by glaucoma cannot be reversed, though treatments exist; prevention is preferable.

There are two types of age-related effects: "proximal ageing" (effects caused by recent events) and "distal ageing" (effects caused by early-life events such as childhood poliomyelitis).



Age is a major risk factor for most diseases. About 2/3 of the 150,000 people who die daily worldwide—100,000—die from age-related causes.[61] In industrialised
nations, the proportion is 90%.[61][62][63]

Biological basis

Main article: Senescence

95-year-old woman cradling 5-month-old boy

In the 21st century, researchers are only beginning to study the biological basis of ageing, even in simple organisms such as yeast.[64] Mammalian ageing is less well understood, owing to the longer lifespans of even small mammals like mice (around 3 years). A key model organism for ageing research is the nematode C. elegans, which has a short lifespan of 2-3 weeks and allows genetic manipulation, including reduction of gene activity by RNA interference.[65] Most known lifespan-extending mutations and RNA interference targets were discovered in C. elegans.[66]

Programming factors follow a biological timetable that may be a continuation of inherent mechanisms that regulate childhood growth and development. This
regulation would depend on changes in gene expression that affect the systems responsible for maintenance, repair, and defence responses.

Molecular and cellular signs of ageing

Key signs of ageing

One 2013 review used the damage theory to propose nine biochemical "hallmarks" of ageing in many organisms, mainly mammals:[68]

genomic instability (mutations in nuclear, mtDNA, and nuclear lamina)

Telomere attrition (artificial telomerase gives mortal cells non-cancerous immortality)

epigenetic changes (DNA methylation, histone post-translational modification, chromatin remodelling). Ageing and disease are linked to gene expression
misregulation by hypomethylation and hypermethylation. [69]

disruption of protein folding and proteolysis

deregulated nutrient sensing (the most conserved ageing-controlling pathway in evolution, the Growth hormone/Insulin-like growth factor 1 signalling pathway
targets FOXO3/Sirtuin transcription factors and mTOR complexes, which may be responsive to caloric restriction)

mitochondrial malfunction (current research does not show a causal link between ageing and increasing mitochondrial reactive oxygen species generation).

Senescence (collection of non-dividing cells in tissues, induced by p16INK4a/Rb and p19ARF/p53 to limit malignant cell growth)

stem cell exhaustion (authors believe induced by damaging factors like those above)

altered intercellular communication (particularly inflammation but maybe other interactions)

Inflammageing, a persistent inflammatory phenotype in the elderly without viral infection, is caused by innate immune system overactivation and decreased
precision.

Biological age, not chronological age, is associated with gut microbiome dysbiosis (e.g., loss of microbial diversity, enteropathogen growth, and altered vitamin B12
production).

Age-related metabolic pathways

Below are three metabolic pathways that affect ageing:

The FOXO3/Sirtuin pathway may respond to calorie restriction.

growth hormone/IGF-1 signalling

electron transport chain activity in mitochondria[71] and chloroplasts (in plants).

Because targeting several of these pathways simultaneously leads to additive increases in lifespan, they most likely influence ageing independently.[72]

Programmed factors
Ageing rates vary widely across animals, and this is mostly hereditary. Common perennial plants like strawberries, potatoes, and willow trees create clones by
vegetative reproduction and are potentially everlasting, while annual plants like wheat and watermelons die each year and reproduce sexually. The oldest known
animals are 15,000-year-old Antarctic sponges, which can reproduce sexually and clonally. In 2008, inactivating two genes in the annual plant Arabidopsis thaliana
turned it into a possibly eternal perennial plant.

Clonal immortality aside, some species have exceptionally long lifespans, such as the bristlecone pine at 5062 years[75] or 5067 years, the hard clam (known as
quahog in New England) at 508 years, the Greenland shark at 400 years, various deep-sea tube worms at over 300 years, and fish like the sturgeon, rockfish, sea
anemone, and lobster.

Ageing evolution

Evolution selects for lifespan and other traits. Early survival and reproduction traits will be selected for even if they cause early mortality. The antagonistic pleiotropy
effect refers to a gene that allows reproduction at a young age but reduces life expectancy in old age, while the disposable soma effect refers to an entire genetic
programme that diverts limited resources from maintenance to reproduction.

Damage-related considerations

DNA damage theory of ageing: Genetic damage, mutations, and epimutations can cause abnormal gene expression. DNA damage stops cell division or triggers
apoptosis, disrupting stem cell pools and preventing regeneration. However, lifetime mouse studies imply that most mutations occur during embryonic and juvenile
development, when cells divide frequently and DNA replication errors are possible.

Genetic instability: Dogs lose about 3.3% of their heart-muscle DNA annually, while humans lose about 0.6%, a ratio close to the inverse ratio of the two species' maximum lifespans (20 years vs. 120 years, a 1:6 ratio). Dogs and humans lose similar amounts of brain and lymphocyte DNA annually. According to primary author Bernard L. Strehler, "... genetic damage (particularly gene loss) is almost certainly (or probably the) central cause of ageing."[87]
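The rough correspondence claimed above can be checked with a few lines of arithmetic, using the figures as quoted in the text (variable names are illustrative):

```python
# Annual heart-muscle DNA loss rates quoted in the text (percent per year).
dog_dna_loss = 3.3
human_dna_loss = 0.6

# Maximum lifespans quoted in the text (years).
human_lifespan = 120
dog_lifespan = 20

# Dogs lose DNA roughly 5.5 times faster; humans live roughly 6 times longer.
loss_ratio = dog_dna_loss / human_dna_loss
lifespan_ratio = human_lifespan / dog_lifespan

print(loss_ratio, lifespan_ratio)
```

The two ratios (about 5.5 and exactly 6) are close, which is the observation the paragraph is making.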

Waste accumulation:

Accumulation of cellular waste likely slows metabolism. Complex cellular reactions that bind fat to proteins produce lipofuscin, a waste product that older cells may store as tiny granules.[88]

Overproduction of particular proteins is associated with ageing yeast cells.

Autophagy induction improves lifespan in yeast, worms, flies, rats, and primates by clearing toxic intracellular waste linked with neurodegenerative disorders. The
discovery that autophagy upregulation occurs throughout ageing complicates matters.

Wear-and-tear theory: Chance damage accumulates over time, causing ageing.

Accumulation of errors: The theory that chance occurrences that escape proofreading processes harm the genetic code and cause ageing.

Heterochromatin loss, ageing model.

Cross-linkage: The theory that cross-linked chemicals interfere with cell function and cause ageing.[92]

It has been shown that somatic mtDNA mutations can directly produce a range of ageing symptoms in mice. The authors propose that mtDNA mutations cause
respiratory-chain-deficient cells, apoptosis, and cell loss. The conventional notion that mitochondrial mutations and dysfunction promote ROS production was
challenged experimentally [93].

Free-radical theory: Damage by free radicals, or reactive oxygen species or oxidative stress, may cause ageing symptoms.[94] Calorie restriction may increase
mitochondrial free radical formation, which increases antioxidant defence capacity.[95]

The mitochondrial theory of ageing states that mitochondrial free radicals harm cellular components and age them.

Caloric restriction reduces 8-OH-dG DNA damage in organs of ageing rats and mice, which slows ageing and increases lifespan. In a 2021 review article, Vijg
stated that "Based on an abundance of evidence, DNA damage is now considered as the single most important driver of the degenerative processes that
collectively cause ageing."

Research

Also see: Life extension

Diet

The Mediterranean diet reduces the risk of heart disease and early death by increasing consumption of vegetables, fish, fruits, nuts, and monounsaturated fatty
acids like olive oil.

As of 2021, there is no clinical evidence that calorie restriction or any particular diet slows ageing in humans.



Exercise

Exercise is associated with lower mortality than inactivity.[104] Most of its benefits are achieved with around 3,000-3,500 metabolic equivalent (MET) minutes per week.[105] For example, climbing stairs for 10 minutes, vacuuming for 15 minutes, gardening for 20 minutes, running for 20 minutes, and walking or bicycling for 25 minutes each day would together achieve about 3,000 MET minutes a week.
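As a rough illustration of how MET minutes accumulate, the weekly total for a daily schedule like the one above can be tallied in a few lines. The MET values below are illustrative approximations (in the spirit of the Compendium of Physical Activities), not figures from this text:

```python
# (activity, approximate MET value, minutes per day); MET values are assumed.
daily_schedule = [
    ("climbing stairs", 8.8, 10),
    ("vacuuming", 3.3, 15),
    ("gardening", 3.8, 20),
    ("running", 8.3, 20),
    ("walking briskly", 4.3, 25),
]

# MET minutes = MET value x minutes; sum over a day, then scale to a week.
daily_met_minutes = sum(met * minutes for _, met, minutes in daily_schedule)
weekly_met_minutes = daily_met_minutes * 7

print(f"{weekly_met_minutes:.0f} MET minutes per week")
```

With these assumed values the schedule lands in the 3,000-3,500 MET-minute range that the paragraph cites.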

Social factors

A meta-analysis found that loneliness carries a higher mortality risk than smoking.[106]

Culture and society

A grandmother and grandson

Society and ageing

See also: Gerontology

See quadragenarian, quinquagenarian, sexagenarian, septuagenarian, octogenarian, and nonagenarian in Wiktionary, the free dictionary.

An old man

Different cultures express age differently. A person's age is usually measured in years since birth. (East Asian age reckoning is becoming less common, especially in official contexts.) Arbitrary divisions of life may include juvenile (infancy through childhood, preadolescence, and adolescence), early adulthood, middle adulthood, and late adulthood. Informal terms include "tweens", "teenagers", "twentysomething", "thirtysomething", "denarian", "vicenarian", "tricenarian", "quadragenarian", etc.

Most legal systems set an age for when certain activities are authorised or required. Voting, drinking, consent, majority, criminal responsibility, marriageable,
candidature, and mandatory retirement ages are specified. A motion picture rating system may limit cinema admission by age. Young and old may get bus
discounts. Age is classified differently by each nation, government, and NGO. Chronological ageing differs from "social ageing" (culture expectations of how
individuals should act as they age) and "biological ageing" (an organism's physical state as it ages).

A Yale School of Public Health study found that ageism cost the US $63 billion in one year.[108] A UNFPA report on ageing in the 21st century called for developing "a new rights-based culture of ageing and a change of mindset and societal attitudes towards ageing and older persons, from welfare recipients to active, contributing members of society".[109] It noted, for example: "A study of Bolivian migrants who [had] moved to Spain found that 69% left their children at home, usually with grandparents. In rural China, grandparents care for 38% of children aged under five whose parents have gone to work in cities."[109]

Economics

Also: Ageing population

A map of 2017 median ages

Population ageing is the rise in the number and share of older persons in a population. It can be caused by migration, greater life expectancy, and lower birth rates, and it affects society greatly. Young people (those under 18) have fewer legal privileges, are more inclined to push for political and social change, to invent and adopt new technology, and to need education. Older individuals have different needs from society and government, and different values, such as property and pension rights.[111]

The United Nations Population Fund (UNFPA) estimates that by 2050, 22% of the world's population will be 60 or older. Development has improved nutrition,
sanitation, health care, education, and economic well-being, causing ageing. Life expectancy has increased as fertility rates have decreased. In 33 countries, birth
life expectancy exceeds 80. Ageing, a "global phenomenon" that is fastest in developing countries, including those with large youth populations, poses social and
economic challenges to work that can be overcome with "the right set of policies to equip individuals, families and societies to address these challenges and to reap
its benefits"[113].

As life expectancy rises and birth rates fall in industrialised countries, median age rises. The UN reports that this is happening in nearly every country.[114] As the
workforce ages and the number of old workers and retirees outnumbers young workers, a rising median age can have serious social and economic consequences.
In most developed countries an ageing workforce is unavoidable, bringing higher health-related, worker's compensation, and pension costs. By 2020, one in four US workers will be 55 or older, according to the Bureau of Labor Statistics.

Income security is a global concern for seniors. This confronts governments with ageing populations to ensure pension system investments offer economic
independence and decrease old age poverty. These issues differ for emerging and developed nations. UNFPA said, "Sustainability of these systems is of particular
concern, particularly in developed countries, while social protection and old-age pension coverage remain a challenge for developing countries, where a large
proportion of the labour force is found in the informal sector." [109]

Fiscal pressure to secure retirement security and health care has intensified due to the global economic crisis. To ease this burden "social protection floors must be
implemented in order to guarantee income security and access to essential health and social services for all older persons and provide a safety net that contributes
to the postponement of disability and prevention of impoverishment in old age" .[109]

Population ageing has been suggested to hurt economic growth[116] and, because elderly people value their pensions and savings, to lower inflation. Pensions
support elderly people and their families, especially in times of crisis when households may lose jobs. The Australian Government projected in 2003 that "women
between the ages of 65 and 74 years contribute A$16 billion per year in unpaid caregiving and voluntary work. Similarly, men in the same age group contributed
A$10 billion per year."[109]

In the coming decades, health care costs will expand relative to the economy as the population ages. To mitigate the detrimental effects of ageing, improving worker
productivity should be considered.[117]

Sociology

Christoffer Wilhelm Eckersberg: The Ages of Man

Ageing is seen in five ways in sociology and mental health: maturity, decline, life-cycle event, generation, and survival. Positive correlates with ageing include
economics, employment, marriage, children, education, and sense of control, among others. The social science of ageing comprises disengagement, activity,
selectivity, and continuity theories. As cyborgs are on the rise[120], some theorists argue that new definitions of ageing are needed, such as a bio-techno-social
definition.

Given limited health care resources, is longevity and senescence postponement cost-effective? Bioethicist Ezekiel Emanuel believes that the compression of
morbidity hypothesis is a "fantasy" and that human life is not worth living after 75; longevity should not be a goal of health care policy.[122] Neurosurgeon and
medical ethicist Miguel Faria disagrees, arguing that life can be worthwhile in healthy old age and that longevity should be pursued in association with quality of life.

Healthcare demand

Age causes biological changes that increase sickness and disability risk. UNFPA says:[113]

"A life-cycle approach to health care – one that starts early, continues through the reproductive years and lasts into old age – is essential for the physical and
emotional well-being of older persons, and, indeed, all people. Public policies and programmes should additionally address the needs of older impoverished people
who cannot afford health care."

Many Western European and Japanese societies are ageing. The impacts on society are complicated, but health care demand is a problem. The literature
proposes many interventions to meet the expected rise in long-term care demand in ageing societies: improve system performance, redesign service delivery,
support informal carers, and shift demographic parameters [125].

Several health problems, including mental and physical issues such as dementia, become more prevalent as people get older. However, rising incomes, costly new
medical technology, a shortage of health care workers, and informational asymmetries between providers and patients have been the main drivers of the annual
growth in national health spending.

Since 1970, medical spending has grown 4.3% annually, with population ageing accounting for only 0.2 percentage points of that growth. US Medicare reforms
reduced home health care spending for older people by 12.5% per year between 1996 and 2000.[127]
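The two growth rates quoted above compound very differently over decades. A minimal sketch (the 4.3% and 0.2% rates are the figures from the text; the 50-year horizon is chosen only for illustration):

```python
# Compound annual growth: spending(t) = spending(0) * (1 + r) ** years
years = 50                      # roughly 1970 onward, for illustration
total_growth = 1.043 ** years   # 4.3% annual growth in medical spending
ageing_share = 1.002 ** years   # the 0.2%/year attributed to ageing

print(f"total multiplier:  {total_growth:.1f}x")   # ~8.2x
print(f"ageing-only part:  {ageing_share:.2f}x")   # ~1.1x
```

At these rates, five decades of growth multiply spending roughly eightfold, while the component attributed to ageing alone compounds to only about 10%.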

Self-perception



As scientific research into cosmeceuticals, cosmetic products with medicinal benefits such as anti-ageing creams, has increased, the industry has expanded, and
such serums and creams have become part of many people's personal care routines.[128]

Due to rising demand for cosmeceuticals, scientists are finding components in unusual places. The secretion of Cryptomphalus aspersa (the brown garden snail) has
antioxidant properties, increases skin cell proliferation, and increases extracellular proteins such as collagen and fibronectin.[129] The botulinum toxin
onabotulinumtoxinA (Botox) is another substance used to counter the physical signs of ageing.

Some civilizations revere old age. Korea holds a special party called hwangap to celebrate and congratulate a 60-year-old.[131] In China, respect for the elderly
has been the foundation of community organisation and morality for thousands of years. Older individuals are revered for their knowledge and frequently consulted
on major decisions. This is true for most Asian nations, including the Philippines, Thailand, Vietnam, and Singapore.

Positive self-perceptions of ageing are associated with better mental and physical health and well-being.[132] Positive self-perception of health has been correlated
with higher well-being and reduced mortality among the elderly.[133][134] Various reasons have been proposed for this association; people who are objectively
healthy may naturally rate their health best.

In the "paradox of ageing", subjective health improves with age even when objective health is controlled for. Due to social comparison, elderly adults may
perceive themselves as healthier than their peers.[139] Elderly persons generally attribute their functional and physical decline to normal ageing.[140][141]

An ageing suit can help younger individuals experience ageing. Suits such as GERT, the R70i exoskeleton, and AGNES (Age Gain Now Empathy Suit) add weight and
pressure at the wrists, ankles, and other joints to simulate ageing. The suits also simulate vision and hearing impairments in various ways, and gloves simulate
the elderly's loss of tactile sensation in their hands.

These suits may improve empathy for the elderly and may benefit anyone learning about ageing or working with the elderly, such as nurses or care
centre staff.

Empathy from these suits may help designers understand what it's like to have old age impairments, which can help them design buildings, packaging, and tools for
simple daily tasks that are harder with less dexterity. Designing for the elderly may lessen the unpleasant sentiments connected with losing abilities.

Healthy ageing

The World Health Organization's healthy ageing framework[145] defines healthy ageing as the functional ability that results from intrinsic capacity and environment.

Intrinsic capacity

Intrinsic capacity comprises the physical and mental abilities that a person can draw on as they age. Its domains are cognition, locomotion, vitality/nutrition,
psychological capacity, and sensory capacity (vision and hearing).[147]

A recent study identified four "profiles" or "statuses" of intrinsic capacity (IC) in older adults: high IC (43% at baseline), low deterioration with reduced locomotion (17%),
high deterioration without cognitive impairment (22%), and high deterioration with cognitive impairment (18%). 61% of the sample remained in the same status
between baseline and follow-up; one quarter moved from high IC to low deterioration, and only 3% improved, with the high-deterioration statuses showing the
greatest potential for improvement. The latent statuses of low and high deterioration were associated with higher rates of frailty, disability, and dementia than high IC.

Successful ageing

The concept of successful ageing originated in the 1950s and was popularised in the 1980s. In 1987, Rowe and Kahn defined successful ageing by three components:
freedom from disease and disability, high cognitive and physical functioning, and social and productive engagement. As understanding developed, scientists also began
studying spirituality in relation to successful ageing. Cultures differ in which components are most essential; social interaction was the most highly regarded across
cultures, though definitions of successful ageing vary.[151]

Cultural allusions

According to Euripides (5th century BC), the multiple-headed mythological monster Hydra may regenerate and become immortal, hence the name of the biological
genus Hydra. The Book of Job (c. 6th century BC) compares human longevity to a felled tree's innate immortality during vegetative regeneration.



See also

Ageing brain

Age-related movement control

Ageing of Europe

Ageing studies

Anti-aging movement

Human longevity biodemography

Biogerontology

Biological immortality

Biological signs of ageing

Clinical geropsychology

Death

DNA damage ageing theory

Epigenetic clock

Ageing evolution

Ageing genetics

Gerontechnology

Gerontology

Gerascophobia

Topics on life extension

Longevity

The mitochondrial theory of ageing

Ageing neuroscience

Old age

Particulates

Pollutants

Ageing population

Progeria

Rejuvenation

The stem cell theory of ageing

Supercentenarian

Human thermoregulation

Transgenerational design

References

Samadent.com (2021). "Age Calculator". Smadent. 2 (1). Retrieved February 12, 2021.

Liochev SI (December 2015). "Which Is the Most Significant Cause of Ageing?". Antioxidants. 4 (4): 793–810. doi:10.3390/antiox4040793. PMC 4712935. PMID
26783959.

"Understanding the Dynamics of the Ageing Process". National Institute on Aging. Retrieved May 19, 2021.

Prakash IJ (October 1997). "Women & ageing". Indian Journal of Medical Research. 106: 396–408. PMID 9361474.




Supercentenarian Ann Pouder (8 April 1807 – 10 July 1917) photographed on her 110th
birthday. A heavily lined face is common in ageing.

6.4 Ageing versus immortality

Immortal Hydra, a relative of the jellyfish

Humans and other species age and die, while some organisms, such as bacteria and strawberry plants, are potentially immortal. Early life forms on Earth were single-
celled organisms that divide into daughter cells, making them potentially immortal. Sexual reproduction made the individual organism disposable for its species'
survival, allowing it to age. However, recent discoveries suggest that bacteria such as E. coli may split into distinguishable daughter cells, opening the possibility of
"age classes" among bacteria.



Enlarged ears and noses of old humans are sometimes blamed on continual cartilage growth, but the cause is more probably gravity.[22]

Age dynamics of the body mass (1, 2) and mass normalized to height (3, 4) of men (1, 3) and women (2, 4)[23]

Comparison of a normal aged brain (left) and a brain affected by Alzheimer's disease (right)

Many humans experience characteristic ageing symptoms throughout their lifetimes.

Main article: Senescence

95-year-old woman holding a five-month-old boy



6.5 Biological basis

Researchers are exploring the biological basis of ageing in simple organisms like yeast, but little is known about mammalian ageing because even small mammals
like mice have comparatively long lifespans. A model organism for studying ageing is the nematode C. elegans, which has a short lifespan of 2–3 weeks and allows
genetic manipulation and gene suppression through RNA interference. Most known mutations and RNA-interference targets that extend lifespan were first
discovered in C. elegans.

6.6 Society and culture

A grandmother and her grandchild

Main article: Aging and society

See also: Gerontology


An elderly man

Age is typically measured in whole years since birth, with East Asian age reckoning becoming less common in official contexts.

Economics

See also: Population ageing



A map showing median age figures for 2017

Population ageing refers to the rise in the number and proportion of older individuals, influenced by factors such as migration, longer life expectancy, and reduced
birth rates.

Step 4

Self Assessment - Answer the following questions to self-assess your knowledge of the subject.

Q 1: What is an immortal cell?

Immortalised cell lines are cells that have been altered to continuously multiply without limit, allowing them to be cultivated for extended durations.

Immortalised cell lines are obtained from sources, such as tumours, that exhibit chromosomal defects or mutations enabling them to undergo continuous division.

Immortal cells replicate indefinitely, ensuring a constant supply of quickly growing cells for experiments. Immortal cells were first discovered in the 1950s, with the
best-known example being the HeLa cell line.

Immortal cell lines are a popular choice for researchers studying the development of cancer, testing cancer treatments, or evaluating the toxicity of compounds or
drugs. Primary cells, by contrast, are taken directly from healthy or diseased tissue and kept in culture; they have limitations such as a limited number of replication
cycles, death after a certain lifespan, and the need for different growth media for different cell types.

Human primary cells are taken directly from healthy donors, organ donations, surgical specimens, fetal tissues, or post-mortem donors and kept in culture. They
have the same morphology and phenotype as their original source but also present difficulties. Primary cells in culture have a limited number of replication cycles
and die after a certain lifespan, and as they age they show morphological and functional changes. The supply of human primary cells is also limited: one might not
be able to obtain extra material from the same donor, so researchers cannot repeat experiments on identical cells or use the same cells for extended, long-term
studies.

Immortal cells overcome these issues by offering an easy, inexpensive, and stable platform. They are cultured in vessels such as Petri dishes, flasks, or
multiwell plates in a controlled environment for extended periods. Culture media containing nutrients and optional supplements provide the conditions for
optimised cell growth.



The first and best-known immortal cell line, HeLa, was established in the 1950s. Cell biologist George Otto Gey took a cancer cell from Henrietta Lacks,
allowed it to divide, and found that the culture survived indefinitely when given nutrients and a suitable environment. As the original cells continued to mutate,
many strains of HeLa are now commercially available, all derived from the same single tumour cell.

Immortal cell lines have revolutionised scientific research and are used in vaccine production, drug metabolism and cytotoxicity studies, antibody production, the
study of gene function, the generation of artificial tissues (e.g., artificial skin), and the synthesis of biological compounds.

Animal and human cell lines are commonly used in labs for various purposes, including studying disease mechanisms, assessing novel therapies, and bioindustrial
uses such as recombinant protein expression, virus production, pathogen detection, and toxicity screening. Animal cells can also provide insights into areas of
developmental biology, intracellular signaling, and genetic evolution.

Human primary cells and immortal cell lines have different origins and availability, with some isolated from healthy or cancer tissue and others derived from primary
cell culture. These lines have unique characteristics and applications, such as uniform genetics across cells, consistent and reproducible results for vaccine and
antibody production, and the investigation of gene functions. However, they require special media and adjusted culture conditions, which can be difficult to handle.

Some examples of immortal cell lines include HeLa (cervical cancer), COS-7 (African green monkey kidney), SH-SY5Y (human neuroblastoma), Vero (African green
monkey kidney epithelium), HEK 293 (human embryonic kidney), MDCK (Madin-Darby canine kidney), MCF-7 (breast cancer), Sf9 (insect epithelial cells), and H1
and H9 (human embryonic stem cells).

HeLa is the first and most famous immortal cell line, named after Henrietta Lacks, an African-American woman who died of cancer in 1951. The cell line was found
to be durable, proliferating, and dividing nearly endlessly. It was used to develop the famous polio vaccine and continues to be the most widely used cell line in
research labs worldwide.

HEK 293 is a human embryonic kidney epithelial cell line, originally isolated and cultured by Dutch biologist Alex van der Eb in the early 1970s. Frank Graham
transformed the cells with adenovirus 5 (Ad5) DNA, and the resulting line can produce very high levels of recombinant proteins. HEK 293 is a low-maintenance,
rapidly dividing, robust cell line with a good reputation for post-translational modification of heterologously expressed proteins. Its doubling time is about 36
hours, and the cells can be cultured in suspension or as a monolayer.

HEK 293 is the second most widely used cell line after HeLa owing to its advantages and versatility. It is used in transient and stable transformation experiments,
protein expression and production, electrophysiological experiments, transfection studies, and the production of therapeutic proteins and viruses for gene therapy.
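The roughly 36-hour doubling time quoted above translates directly into culture growth via N(t) = N0 · 2^(t/Td). A minimal sketch; the seeding density and time span are invented for illustration:

```python
# Exponential growth from a doubling time: N(t) = N0 * 2 ** (t / Td)
def cells_after(n0, hours, doubling_time_h=36):
    """Cell count after `hours` in culture, given a fixed doubling time."""
    return n0 * 2 ** (hours / doubling_time_h)

# e.g. seed 1e5 cells and grow for one week (168 h)
n = cells_after(1e5, 168)
print(f"{n:.2e}")  # roughly 2.5e6 cells
```

In reality growth slows as the culture approaches confluence, so the exponential model only holds while the cells have room and nutrients; that is precisely why routine passaging is needed.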

The history of immortal cell lines dates back to the Johns Hopkins Hospital in Baltimore, where Henrietta's gynaecologist Richard Wesley TeLinde developed a
theory of cervical cancer that could drastically reduce deaths from the disease. TeLinde proposed that carcinoma in situ should be treated as aggressively as
invasive cancer to prevent it from spreading. He gave a sample of Henrietta's cervical carcinoma to George Otto Gey, who ran a tissue culture research laboratory
with his wife, Margaret.

In conclusion, immortal cell lines offer valuable tools for disease modelling, drug development, and gene therapy. Although they may not reproduce in vivo
behaviour as faithfully as other cell types, their versatility and potential for advancing medicine make them valuable research tools.

SH-SY5Y is a human-derived cell line used in scientific research, with its original cell line, SK-N-SH, being isolated from a bone marrow biopsy taken from a four-
year-old female with neuroblastoma. These cells are often used as in vitro models of neuronal function and differentiation, and have been used to study Parkinson's
disease, neurogenesis, and other characteristics of brain cells.

CHO (Chinese hamster ovary) cells are an epithelial cell line derived from the ovary of the Chinese hamster (Cricetulus griseus). The original cell line was created in
the 1950s; the Chinese hamster had served as a model organism because of its small size, short gestation period, and low chromosome number. CHO cells can
grow in culture either adherently or in suspension, and the line has become an important mammalian host for the industrial production of glycosylated therapeutic
proteins and for cytogenetic toxicity assays.

COS-7 cells, a non-steroidogenic line, are derived from kidney cells of African green monkeys. Established by Professor Yakov Gluzman in 1981, they were derived
from the CV-1 line and carry SV40 genetic material; the name COS stands for "CV-1 in Origin, carrying SV40". Three COS lines were created (COS-1, COS-3, and
COS-7), of which two are commonly used (COS-1 and COS-7). In culture, COS-7 cells adhere to glass and plastic surfaces and are fibroblast-like. The combination
of fibroblast-like growth and virus susceptibility makes COS-7 a good choice for transfection experiments with DNA plasmids and SV40 mutants.



MDCK cells are a model mammalian cell line used in biomedical research, primarily employed as a model for viral infection of mammalian cells. They are one of
few cell culture models that is suited for 3D cell culture and multicellular rearrangements known as branching morphogenesis. MDCK cells are used for a wide
variety of cell biology studies, including cell polarity, cell-cell adhesions, collective cell motility, and responses to growth factors.

In summary, SH-SY5Y, CHO, COS-7, and MDCK are all cell lines used across neurobiology, neurochemistry, and cell biology research. Each has unique
characteristics and applications, making these lines valuable tools for understanding and studying various biological processes.

Source: ZEISS white paper, "Commonly used immortal cell lines" (EN_wp_Commonly-used-immortal-cell-lines.pdf).

Immortalised Cell Line


Immortalized cell lines are cells that have been manipulated to proliferate indefinitely and can thus be cultured for long periods of time.
From: Guide to Research Techniques in Neuroscience, 2010
6.7 Related terms:
 Cell Culture
 Epithelial Cells
 Fibroblast
 Astrocyte
 Eicosanoid Receptor
 In Vitro
 In Vivo
 Stem Cell
 Cell Line
 Blood Brain Barrier

6.8 Cell Culture Techniques


Matt Carter, ... Manasi Iyer, in Guide to Research Techniques in Neuroscience (Third Edition), 2022

Immortalized Cell Lines

Immortalised cell lines are either composed of tumour cells that exhibit continuous division or cells that have been artificially modified to proliferate indefinitely,
enabling them to be cultured over multiple generations (Table 13.1). Due to their continuous division, immortalised cells eventually occupy the entire container in
which they are grown. To create more space for further proliferation, scientists perform passaging (also known as splitting) by transferring a portion of the
multiplying cells into new containers. Examples of commonly used immortalised cell lines include human embryonic kidney 293T (HEK-293T) cells and HeLa cells.
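The passaging arithmetic implied above is simple: after a 1:k split, a culture needs log2(k) doublings to return to its pre-split density. A minimal sketch assuming plain exponential growth; the 24-hour doubling time and 1:10 split ratio are illustrative values, not figures from the text:

```python
import math

# Time for a culture to regrow to its pre-split density after a 1:k split,
# assuming simple exponential growth with a fixed doubling time.
def regrowth_time_h(split_ratio, doubling_time_h=24):
    """Hours until a 1:split_ratio passage returns to the original density."""
    return math.log2(split_ratio) * doubling_time_h

print(regrowth_time_h(10))  # a 1:10 split needs ~3.3 doublings, ~80 h
```

This is why a harsher split (larger k) lengthens the interval between passages: the waiting time grows with the logarithm of the split ratio, not linearly.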

Table 13.1. Commonly Used Immortalized Cell Lines.

Cell Line      | Origin and Cell Type                         | Comments
HEK-293T       | Human embryonic kidney cell                  | Easy to transfect and manipulate; commonly used as an expression system to study signaling and recombinant proteins
HeLa           | Human epithelial cell                        | From a cervical cancer in a human patient named Henrietta Lacks; able to grow in suspension (i.e., grow without adhering to the bottom of plates)
COS            | Monkey kidney cell                           | Efficiently transfected; commonly used as an expression system for high-level, short-term expression of proteins
3T3            | Mouse embryonic fibroblast                   | Robust and easy to handle; contact inhibited; stops growing at very high densities
MDCK           | Dog kidney epithelial cell                   | Polarized with distinct apical and basal sides; used in studying trafficking
CHO            | Chinese hamster ovary                        | Useful for stable gene expression and high protein yields for biochemical assays; commonly used as an expression system for studying cell signaling and recombinant proteins
S2             | Drosophila macrophage-like cells             | Well-characterized Drosophila cell line; highly susceptible to RNAi treatment
PC12           | Rat pheochromocytoma chromaffin cell         | Neuron-like, derived from a neuroendocrine adrenal tumor; can differentiate into a neuron-like cell in the presence of NGF
Neuro-2a (N2a) | Mouse neuroblastoma                          | Model system for studying pathways involved in neuronal differentiation; can be driven to differentiate by cannabinoid and serotonin receptor stimulation
SH-SY5Y        | Human neuroblastoma, cloned from bone marrow | Grows as clusters of neuroblast-type cells with short, fine neurites; can be dopaminergic, noradrenergic, acetylcholinergic, glutamatergic, adenosinergic

There are many advantages to using immortalized cell lines. Because these lines are used by many different labs in various experimental contexts, they are well
characterized. They are homogeneous, genetically identical populations of cells, allowing for consistent and reproducible results. Immortalized cells tend to be
easier to culture than primary cells: they grow more robustly and do not require extraction from a living animal. Because they grow quickly and continuously, it is
possible to extract large amounts of protein for biochemical assays (Chapter 14). It is also possible to create cell lines that continuously express a gene of interest,
such as a fluorescently tagged or mutant version of a protein.

The major disadvantage of immortalized cells is that they cannot be considered "normal": they divide indefinitely and sometimes express unique gene patterns not
found in any cell type in vivo, so they might not have the relevant attributes or functions of typical cells. After several passages, cell characteristics can change and
drift even further from those of a normal cell. It is therefore important to periodically validate the characteristics of cultured cells and not use cells that have been
passaged too many times.

Immortalized cell lines of neuronal origin can be used to study properties unique to neurons. Scientists have used neuronal cell lines to investigate processes that
occur during differentiation, such as axon selection, guidance, and growth. However, most neuronal immortalized cell lines are derived from tumors and are
sometimes genomically abnormal. One popular line, PC12, is a rat pheochromocytoma cell line derived from an adrenal gland tumor. The addition of nerve growth
factor causes PC12 cells to reversibly differentiate into a neuronal phenotype. These cells can synthesize dopamine, norepinephrine, and acetylcholine, and they
have been used to study molecular phenomena associated with neuronal differentiation and even to replace dopaminergic neurons in an animal model of
Parkinson's disease. Neuroblastoma cell lines, like mouse Neuro2A, also express neurotransmitters and have been used in electrophysiology and
neurodevelopment studies. As useful as neural immortalized cell lines can be for certain experiments, they show obviously abnormal traits, such as the unusual
combination of neurotransmitters they produce (no normal neuron produces dopamine, norepinephrine, and acetylcholine in the same cell!). It is therefore
advantageous, when possible, to use primary cultured cells, that is, cells extracted from living animals.
6.9 Cell Culture Techniques
Matt Carter, Jennifer Shieh, in Guide to Research Techniques in Neuroscience (Second Edition), 2015

Immortalized Cell Lines

Immortalized cell lines are either tumorous cells that do not stop dividing or cells that have been artificially manipulated to proliferate indefinitely and can, thus, be
cultured over several generations (Table 14.1). Because immortalized cells continuously divide, they eventually fill up the dish or flask in which they grow. By
passaging (also known as splitting), scientists transfer a fraction of the multiplying cells into new dishes to provide space for continuing proliferation.
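The bookkeeping behind passaging can be sketched in a few lines of Python. This is only an illustration; the function names and cell counts below are hypothetical, not part of the source text:

```python
import math

def split_ratio(cells_at_confluence, cells_seeded_per_vessel):
    """Split (passage) ratio, e.g. 8.0 means a 1:8 split into new dishes."""
    return cells_at_confluence / cells_seeded_per_vessel

def doublings_gained(cells_seeded, cells_harvested):
    """Population doublings accumulated during one passage."""
    return math.log2(cells_harvested / cells_seeded)

# Hypothetical passage: seed 1e5 cells, harvest 8e5 at confluence.
print(split_ratio(8e5, 1e5))       # 8.0 -> a 1:8 split
print(doublings_gained(1e5, 8e5))  # 3.0 doublings
```

Tracking cumulative doublings rather than raw passage counts is one common way labs decide when a line has been "passaged too many times."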

Table 14.1. Commonly Used Immortalized Cell Lines

Cell line | Origin and cell type | Comments
3T3 | Mouse embryonic fibroblast | Robust and easy to handle; contact inhibited; stops growing at very high densities
HeLa | Human epithelial cell | From cervical cancer in a human patient named Henrietta Lacks; may contaminate other cultured cell lines; able to grow in suspension (i.e., grow without adhering to bottom of plates)
COS (CV-1 in Origin, carrying SV40 genetic material) | Monkey kidney | Efficiently transfected; commonly used as an expression system for high-level, short-term expression of proteins
293/293T/HEK-293T | Human embryonic kidney | Easy to transfect and manipulate; commonly used as an expression system to study signaling and recombinant proteins
MDCK (Madin-Darby canine kidney) | Dog kidney epithelial cell | Polarized with distinct apical and basal sides; used in studying trafficking
CHO | Chinese hamster ovary | Useful for stable gene expression and high protein yields for biochemical assays; commonly used as an expression system for studying cell signaling and recombinant proteins
S2 | Drosophila macrophage-like cells | Well-characterized Drosophila cell line; highly susceptible to RNAi treatment
PC12 | Rat pheochromocytoma chromaffin cell | Neuron-like; derived from a neuroendocrine adrenal tumor; can differentiate into a neuron-like cell in the presence of nerve growth factor
Neuro-2a/N2a | Mouse neuroblastoma | Model system for studying pathways involved in neuronal differentiation; can be driven to differentiate by cannabinoid and serotonin receptor stimulation
SH-SY5Y | Human neuroblastoma, cloned from bone marrow | Dopamine beta hydroxylase activity; acetylcholinergic, glutamatergic, adenosinergic; grows as clusters of neuroblast-type cells with short, fine neurites

Workshop-5 Module-1 Page 309 of 333

There are several advantages to employing immortalized cell lines. Because these are standard lines used by many different labs, immortalized cells are quite well
described. They are, at least theoretically, homogeneous, genetically identical populations, which helps to generate consistent and reproducible results.
Immortalized cells tend to be easier to cultivate than cells utilised in primary cultures in that they grow more robustly and do not require extraction from a living
animal. Also, because they grow swiftly and constantly, it is easy to extract vast numbers of proteins for biochemical experiments (Chapter 15). It is also feasible to
develop cell lines that continuously express a gene of interest, such as a fluorescently tagged or mutant version of a protein.

The biggest downside to employing immortalized cells is that these cells cannot be considered normal, in that they divide endlessly and sometimes express unique
gene patterns not found in any cell type in vivo. Therefore, they might not have the relevant features or functions of normal cells. Also, after numerous passages, cell
properties might change and become even more dissimilar from those of a normal cell. Thus, it is vital to periodically validate the features of cultivated cells and not
employ cells that have been passaged too many times.

Immortalized cell lines of neuronal origin can be utilised to examine features unique to neurons. Scientists have employed neuronal cell lines to explore
mechanisms that occur during differentiation in neurons, such as axon selection, guidance, and growth. However, most neuronal immortalized cell culture models
originate from malignancies and are occasionally genomically aberrant. One prominent neuronal cell line, dubbed PC12, is a rat pheochromocytoma cell line
originating from an adrenal gland tumor. The addition of nerve growth factor (NGF) leads PC12 cells to reversibly develop into a neuronal phenotype (Figure 14.2).
These cells can synthesize dopamine, norepinephrine, and acetylcholine. PC12 cells have been employed to investigate molecular events linked to neuronal
differentiation and have also been utilised in experiments to substitute dopaminergic neurons in an animal model of Parkinson's disease. Neuroblastoma cell lines,
like mouse Neuro2A, also express neurotransmitters and have been used in electrophysiology and neurodevelopment studies.


Figure 14.2. Neuronal differentiation. Addition of growth factor to PC12 cells causes them to differentiate into neuron-like cells that grow neurites and synthesize neurotransmitters.

While immortalised cell lines of neural origin can be valuable for specific research, it is important to note that they exhibit distinct aberrant characteristics, such as
the atypical coexistence of neurotransmitters within a single cell (no typical neuron generates dopamine, norepinephrine, and acetylcholine simultaneously). Hence,
it is beneficial, if feasible, to utilise primary cultured cells—cells obtained from living organisms.

Regulation of respiratory diseases by non-coding RNAs

Ankur Kulshreshtha and Anurag Agrawal, in RNA-Based Regulation in Human Health and Disease, 2020

Respiratory research model systems

Respiratory research model systems are experimental systems designed to replicate different aspects of human disease biology. Commonly employed model
systems encompass traditional options including immortalised cell lines, patient-derived organs, and animal models, alongside more contemporary innovations
such as lung organoids and microfluidic organ-on-chip technologies.

Immortalised cell lines

Immortalised cell lines are cells that have been altered in a way that allows them to be cultured indefinitely [6]. This can be accomplished by either isolating and
cultivating tumour cells, or by introducing an immortalising gene. Various genes have been discovered that offer the potential for effective perpetuation of cultured
cells, such as Epstein-Barr virus, adenovirus E1A, simian virus 40 large T antigen, papillomavirus E6 and E7, herpesvirus saimiri, human T-cell leukaemia
virus, oncogenes, hTERT, and the mutant p53 gene.

Patient-derived human organs

Workshop-5 Module-1 Page 310 of 333


Human organs have proven to be a significant resource for understanding the pathological alterations that occur during the progression of diseases. Obtaining
blood is a straightforward process and it has been widely used to investigate the alterations in gene expression associated with different diseases such as asthma
[5], COPD [4], cystic fibrosis [15], and cancer [2, 13, 16]. These expression profiles not only provide information regarding illness causes and signs, but they have
also played a crucial role in identifying biomarkers for early detection and monitoring therapy regimens.

In addition to blood, lung slices and lung explants have been utilised for several gene expression and histological investigations.

Animal models

The majority of respiratory disorders are intricate, involving the interaction and mutual influence of multiple cell types, which leads to the development of disease
pathology. Although immortalised cell lines are crucial for comprehending the biology of diseases, they are unable to accurately replicate these intricate
relationships. Therefore, the utilisation of animal models becomes necessary. Murine models continue to be the preferred model systems for various respiratory
illnesses, including asthma, COPD [7], cystic fibrosis [11], and cancer [10]. Especially in the field of cancer research, they have demonstrated great utility by enabling
the cultivation of human tumours in mice lacking a functional immune system. This allows for the evaluation of different treatment protocols for personalised cancer
treatments.

In addition to mouse models, numerous other animals have been used, either to more accurately reproduce the illness phenotype or for convenience in handling.
One major benefit of using animal models instead of cell lines is the ability to investigate the cross-interaction between an organism and its microbiome in the
development of diseases.

Organoids of the lungs

Lung organoids are three-dimensional structures formed by lung epithelial progenitor cells, either with or without mesenchymal support cells, by a process of self-
assembly. Although these organoids lack the intricate structure and interactions found in the lung, particularly in the alveolar region, they have proven a valuable
tool for fundamental biology and translational research [3]. Lung organoids are primarily derived from three types of epithelial progenitor populations: basal cells,
Clara cells, and AEC2 cells. Ongoing research is being conducted to create organoids using embryonic stem cells and induced pluripotent stem cells (iPSCs).
These organoids have the potential to offer valuable insights into the mechanisms of lung regeneration, which can be utilised to treat damaged lungs in disease
states.

Microfluidic devices that mimic the structure and function of human organs

Organ-on-a-chip devices are microfluidic devices that, like lung organoids, comprise numerous cell types. However, they add a further level of complexity by
including the mechanical movements of the lungs. Cyclic vacuum suction, applied to side chambers flanking the cell-culture surface, induces repeated contraction
and relaxation that mimics the physiological breathing motion. The device comprises two distinct channels, an air channel and a blood channel, and provides an
air-liquid interface that closely resembles that of the lung. The device was fabricated using soft lithography, a form of microfabrication [9].

Hepatotoxicity screening using in vitro models and the significance of 'omics

Joost van Delft, ... Leo S. Price, in Toxicogenomics-Based Cellular Models, 2014

Immortalised cell lines

Immortalised cell lines are cells that exhibit perpetual growth and division in a laboratory setting, specifically in vitro, when provided with ideal culture conditions
[44]. The HepaRG and HepG2 cell lines are the predominant choices for toxicity investigations among the currently accessible human hepatic cell lines.
Nevertheless, these cell lines are typically obtained from tumours and have acclimated to cultivation conditions. Consequently, they lack the structural organisation
of liver tissue, as well as the intercellular communication and liver-specific activities, which tend to diminish with prolonged culture duration [45,46]. They frequently
develop a molecular phenotype that is significantly distinct from liver cells in their natural state. Their primary constraint is a comparatively reduced level of
drug-metabolizing enzymes, although HepaRG clearly surpasses HepG2 in this regard.

HepG2 cells are widely used in high-throughput platforms, particularly for high-content screening of cytotoxicity and other molecular endpoints [47–52]. They are
also crucial to the US Environmental Protection Agency's ToxCast™ and Tox21™ programmes [52–54]. HepG2 cells are extensively studied and commonly
employed in toxicology and pharmacology research [55–57]. A study conducted in 2012 on toxicogenomics has shown that HepG2 cells can provide precise
evaluation of genotoxicity. Additionally, several assays can be employed to detect substances that necessitate metabolic activation [58]. An early finding from NTC
involved a detailed study of proteins and revealed that the effects of the cholestatic substance cyclosporin A on HepG2 cells can be differentiated from the effects
of other hepatotoxic compounds such as amiodarone and acetaminophen. Thus, it is likely that the HepG2 in vitro cell system possesses unique attributes that
enable the early identification of cholestasis throughout the drug discovery process [59].

The HepaRG cell line, derived from a human liver tumour, was established in 2004. When these cells are grown to quiescence under appropriate
medium conditions, they exhibit characteristics consistent with highly specialised liver cells known as hepatocytes [60]. Specifically, they exhibit
production of several cytochrome P450 (CYP) enzyme activities at levels that are similar to primary human hepatocytes (PHH) [60–64]. HepaRG cultures comprise
a blend of hepatocyte-like and biliary-like epithelial cells. The proportion of hepatocyte-like cells in HepaRG cultures exhibits variability ranging from 45% to 90%
across different batches and passages [65].

Several studies have compared the full-genome basal expression profiles of HepG2 and HepaRG cells with those of PHH [25,26,66]. These findings indicate that
the basal gene expression patterns of HepaRG and HepG2 differ from the expression pattern observed in primary human hepatocytes (PHH). However, the gene
expression profile of HepaRG appears to be somewhat more similar to that of PHH.

Given the similarity and usefulness of these cell types in toxicogenomics research, compound-induced gene-expression patterns are equally significant to baseline
gene-expression profiles. A toxicogenomics study conducted in 2010 showed that HepaRG is a better in vitro liver model for understanding the biological effects of
chemical exposure. On the other hand, HepG2 is more suitable for classification studies utilising the toxicogenomics technique [26].

Hormones and Stem Cells

αT1-1 and αT3-1 (mouse): Naomi Even-Zohar and Shlomo Melmed, in Vitamins and Hormones, 2021

Researchers created immortalised cell lines that represent several stages of development in the anterior pituitary using targeted mutagenesis in transgenic mice
(Alarid, Windle, Whyte, & Mellon, 1996; Windle, Weiner, & Mellon, 1990). An exogenous DNA sequence comprising 1.8 kilobases of the 5'-regulatory region of the
human α-subunit gene was linked to the coding region of the SV40 T-antigen oncogene. This resulted in the immortalization of immature pituitary
cells at the initial phases of pituitary embryonic development. The study conducted by Alarid et al. in 1996 showed the presence of early-appearing pituitary
transcription factors, namely LHX3 and HESX1. The αT1-1 cell line, derived from E11.5, exhibits expression of the α-subunit in response to hypothalamic GnRH,

Workshop-5 Module-1 Page 311 of 333


while the β-subunits of FSH and LH are not expressed. Therefore, it serves as a versatile precursor to the thyrotroph and gonadotroph lineages. The αT3-1 cell line,
at E14.5, is a more developed and specialised immature gonadotroph that expresses the GnRH receptor and SF-1. TαT-1 and LβT2 represent fully developed
thyrotroph and gonadotroph cells at embryonic day 14.5 and embryonic day 17.5, respectively. The aforementioned cells have been utilised to ascertain the
expression of genes that potentially contribute to the development and differentiation of the anterior pituitary (Aikawa, Sato, Ono, Kato, & Kato, 2006; Barnhart &
Mellon, 1994; Holley, Hall, & Mellon, 2002; Horn, Windle, Barnhart, & Mellon, 1992; Laverrière et al., 2016; Xie et al., 2017).

Transplants for Chronic Pain

Cellular Transplantation: Jacqueline Sagen and Shyam Gajavelli, 2007

Conditionally immortalised cell lines

Conditionally immortalised cell lines offer a method that preserves certain benefits of tumour cell lines, such as the ability to cultivate vast quantities of uniform cell
populations and the ability to introduce genetic material, while perhaps mitigating the negative consequences of uncontrolled proliferation. The first documented
utilisation of this method for pain control involved the creation of a serotonergic neuronal cell line derived from the embryonic rat medullary raphe nucleus [146].
The cells were generated using a thermolabile, temperature-sensitive derivative of the SV40 large T antigen. At the "permissive" temperature (e.g.,
33 °C), continuous cell division occurs while the T-antigen is expressed. At "nonpermissive" temperatures (e.g., 38 °C), however, the cells halt
production of T-antigen, cease dividing, and undergo differentiation. The RN46A cell line [36] required the inclusion of brain-derived
neurotrophic factor (BDNF) to acquire the serotonergic phenotype [142]. Therefore, BDNF was introduced into these cells so that they could regulate serotonin production
on their own [37]. When introduced into the spinal subarachnoid space of rats with chronic constriction nerve injury, thermal hyperalgesia, cold allodynia, and
mechanical hyperalgesia were alleviated within one week. This was observed in rats that received 1 × 10^6 cells, whereas rats implanted with the non-serotonergic
parent line continued to experience pain symptoms [38]. Additional genes that have been examined for pain relief utilising this cellular framework include the GAD
gene, responsible for the production of GABA, the preproenkephalin gene, involved in the synthesis of met-enkephalin, and the preprogalanin gene, which
contributes to the production of the peptide galanin [17, 31, 35]. Transplantation of the RN33-GAD67, which produces and releases GABA after in vitro
differentiation, has been conducted in a spinal cord injury pain model. This resulted in the observation of decreased thermal hyperalgesia and tactile allodynia [28].

Telomere Maintenance in the Dynamic Nuclear Architecture

Alternative Lengthening of Telomeres: E. Micheli and S. Cacchione, in Chromatin Regulation and Dynamics, 2017
Alternative Mechanisms for Telomere Maintenance in the Absence of Telomerase

A number of immortalised cell lines, and approximately 10–15% of tumours, lack telomerase yet nonetheless maintain fully intact telomeres. This
observation prompted the hypothesis that there must be an alternative mechanism of telomere length maintenance, known as ALT (alternative lengthening of
telomeres); for a comprehensive review, refer to references [17,18]. Cells that depend on ALT exhibit many atypical characteristics in their telomeres. For instance, the
length of telomeres varies greatly, ranging from very small amounts of telomeric sequences to tens of thousands of bases of telomeric repeats. Furthermore, the
dimensions of individual telomeres have the ability to experience swift alterations. ALT can be identified by the presence of extrachromosomal telomeric DNA,
primarily in a circular form known as t-circles, as well as the development of ALT-associated promyelocytic leukaemia (PML) nuclear bodies (APBs). PML bodies
are nuclear macromolecular structures with diverse structures and functions, believed to play roles in the
DNA damage response (DDR) and other nuclear activities. APBs consist of telomeric DNA, both chromosomal and extrachromosomal, along with telomere-associated
proteins and proteins that participate in homologous recombination. The prevailing consensus is that telomere maintenance in ALT cells relies on
homologous recombination (HR), notwithstanding the unresolved molecular intricacies of the ALT processes [18]. Given that HR is suppressed in both normal cells
and telomerase-positive immortalised cells, it is probable that the activation of ALT involves the inactivation of one or more factors that suppress HR. For instance,
the protein α-thalassemia/mental retardation syndrome X-linked (ATRX) not only hinders HR, but also suppresses ALT activity when temporarily introduced into
ALT-positive/ATRX-negative cells [19]. ATRX is a member of the SWItch/sucrose nonfermentable complex (SWI/SNF) family, which is responsible for modifying the
structure of chromatin. ATRX specifically interacts with DNA tandem repeats located at both telomeres and euchromatin regions. ATRX has the ability to attach to
G-quadruplex structures in a laboratory setting, indicating that it may have a function in resolving G-quadruplex structures that form at telomeres during replication.
This function helps prevent replication fork stalling and HR. Alongside the histone chaperone DAXX, ATRX deposits the histone variant
H3.3 in pericentric heterochromatic regions and at telomeres [21]. Furthermore, mutations in ATRX/DAXX and/or H3.3 have been observed in various ALT-positive
tumours, providing additional evidence for their involvement in the inhibition of ALT [22]. An outcome of the unsuccessful deposition of H3.3 is the elimination of
heterochromatic markers at telomeres. However, it remains uncertain whether this characteristic plays a role in the development of alternative lengthening of
telomeres (ALT) [21].

TERRA (Section 13.4.4) is another potential participant in the establishment of ALT. The levels of TERRA are notably elevated in ALT cells, indicating that TERRA
may enhance the HR process [23].

6.10 Providing Pharmacological Access to the Brain

M. Giordano, ... W.J. Freed, in Methods in Neurosciences, 1994

General Guidelines

When creating a cell line that may live indefinitely, one of the initial stages involves choosing the specific area of the brain that is of interest and determining the
most suitable age during gestation. The initial cells' capacity to undergo division is crucial for achieving immortality, as the integration of viral genes into the host
genome necessitates the host cell to undergo at least one cycle of DNA synthesis (1). For mouse striatal tissue, the highest point of cell division is on either day 14
or 15. However, the process of neurogenesis takes place between day 12 of gestation and the initial days after birth (32). Another crucial stage is the careful
selection of the gene that will be employed for immortalization. The two genes that have undergone the most comprehensive research are v-myc
and the SV40 large T antigen. An important consideration is the potential for regulating gene activity: temperature-sensitive mutations render the gene active at a
permissive temperature and dormant at a nonpermissive one, thus facilitating differentiation (1).

The next section provides a detailed description of the methodology employed to create immortalised cell lines from foetal rat ventral mesencephalic and striatal
tissue (refer to Fig. 2).




FIG. 2. General Procedure for Immortalization of Primary Cells using a Temperature-Sensitive Allele of SV40 large T Antigen.

6.11 Model systems


Monty Montano, in Translational Biology in Medicine, 2014

2.2.1 Cellular senescence

Primary cells and immortalised cell lines undergo a finite number of divisions before they cease to divide, a phenomenon known as cellular senescence. This
cessation of cycling is distinct from differentiation, which also ends cellular division but initiates a post-mitotic commitment to a particular lineage. Cell lines have
proven valuable in the investigation of senescence. Laboratory tissue culture protocols frequently acknowledge a well-established threshold for the number of
passages a cell line can undergo before it becomes refractory to further division. This process has been associated with a practical biomarker, the expression of an
endogenous beta-galactosidase [7], which can be quantified colorimetrically using the X-gal substrate; when beta-galactosidase is present, this substrate undergoes
a colour change to blue. Senescence can be studied in a controlled manner by causing immortalised cells, such as HeLa cells, to stop growing. This is done by
introducing a gene that deactivates the factor responsible for immortalization; in HeLa cells, for example, introduction of the transcription factor E2 deactivates the
immortalising factor E7. The use of cell lines has facilitated the analysis of molecular processes associated with senescence. For instance, it was in cell lines that it
was first shown that cells undergo only a certain number of divisions, known as the Hayflick limit [9]. The WI-38 fibroblast cell line was used to identify the gene
linked to lysosomal expression of beta-galactosidase, GLB1, which is observed during the induced senescence of HeLa cells. The senescence phenotype has been
analysed in WI-38/HCA2 cells [10], and its attenuation has been achieved using reagents such as rapamycin [11] in HT1080 cells.
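The colorimetric readout described above is typically quantified by counting blue-stained cells under a microscope. A minimal sketch of that quantification follows; the function name and cell counts are hypothetical, used only to illustrate the arithmetic:

```python
def senescent_fraction(blue_cells, total_cells):
    """Fraction of cells staining blue with X-gal, i.e. expressing the
    senescence-associated beta-galactosidase."""
    if total_cells <= 0:
        raise ValueError("total_cells must be positive")
    return blue_cells / total_cells

# Hypothetical counts from two fields of view at different passage numbers.
print(senescent_fraction(12, 400))   # 0.03 at an early passage
print(senescent_fraction(220, 400))  # 0.55 at a late passage
```

A rising positive fraction across passages is the kind of trend used to flag a culture approaching its division limit.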

6.12 Recommended publications

Journal of Neuroscience Methods
Neuroscience
Cell Reports
7 Immortalized versus primary cells: considerations for optimal application in cell cultures
14 JUL 2022

WRITTEN BY COOK MYOSITE

CELL THERAPY

Primary cells and immortalized cell lines each have their own benefits in biotechnology and research.

Immortalised cells have an indefinite lifespan and are produced from a single common ancestor cell. While primary cultures have a limited lifespan, they frequently
have more physiological relevance than their immortalised equivalents. Choosing the right cell culture is critical for achieving the best results in biochemical and
cell-based experiments. View this whitepaper to learn about the benefits and drawbacks of primary cells and immortalised cell culture models, as well as the factors
that must be considered to increase the likelihood of effective results.

This content was provided by Cook MyoSite, Inc.

Q 2: Describe the process of cell line preservation.

The sole efficient method for preserving animal cells is by freezing, which can be achieved using either liquid nitrogen or cryogenic freezers. The freezing process
entails gradually lowering the temperature of prepared cells to a range of -30 to -60°C, and subsequently transferring them to temperatures below -130°C.
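The gradual temperature reduction described above can be sketched as a controlled-rate cooling schedule. The values below (starting at 4 °C, ramping at -1 °C/min to -40 °C before transfer to sub -130 °C storage) are illustrative assumptions; real protocols vary by cell type and equipment:

```python
def cooling_schedule(start_c, hold_c, rate_c_per_min):
    """Temperatures, one per minute, for a linear cooling ramp from
    start_c down to hold_c at rate_c_per_min degrees per minute."""
    temps = [start_c]
    t = start_c
    while t > hold_c:
        t = max(t - rate_c_per_min, hold_c)
        temps.append(t)
    return temps

# Hypothetical run: cool from 4 degC to -40 degC at -1 degC/min; the
# vials would then be transferred to <-130 degC storage.
ramp = cooling_schedule(4.0, -40.0, 1.0)
print(len(ramp) - 1)  # 44 minutes of ramping
print(ramp[-1])       # -40.0
```

Programmable cryogenic freezers implement essentially this kind of ramp, holding the rate constant through the range where ice formation does its damage.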

What is the process of cell line preservation?


The major steps in cryopreservation are (1): the mixing of CPAs with cells or tissues before cooling; (2) cooling of the cells or tissues to a low temperature and its
storage; (3) warming of the cells or tissues; and (4) removal of CPAs from the cells or tissues after thawing.

Cryopreservation and its clinical applications

Tae Hoon Jang, Sung Choel Park, Ji Hyun Yang, Jung Yoon Kim, Jae Hong Seok, Ui Seo Park, Chang Won Choi, Sung Ryul Lee, and Jin Han




7.1 Abstract

7.2 Cryopreservation is a method of preserving organelles, cells, tissues, and other biological constructs by cooling them to extremely low temperatures. The responses of living cells to ice formation are both theoretically interesting and practically relevant. Stem cells and other viable tissues, which have high potential for use in basic research and many medical applications, cannot be stored for extended periods by simple cooling or freezing, because ice crystal formation, osmotic shock, and membrane damage during freezing and thawing cause cell death. Cell and tissue cryopreservation has become increasingly successful in recent years, thanks to the introduction of cryoprotective agents and temperature control devices. Efficient cryopreservation of cells or tissues, and their clinical application, require a continuing understanding of the physical and chemical events that occur during the freezing and thawing cycles. In this study, we briefly discuss representative cryopreservation mechanisms, such as gradual freezing and vitrification, as well as various cryoprotective agents. In addition, certain negative consequences of cryopreservation are discussed.

7.3 Keywords: cryoinjury, cryopreservation, cryoprotective agent, gradual freezing, and vitrification

7.4 Introduction


At low temperatures, biological and chemical reactions in living cells are substantially slowed, which has the potential to result in the long-term preservation of cells
and tissues. However, freezing is lethal to most living things because it causes the formation of both intra- and extracellular ice crystals and affects the chemical
environment of cells, resulting in cellular mechanical restrictions and damage.1 The greatest challenge for cells at low temperatures is the transformation from
water to ice.2, 3 Cell damage at fast cooling rates is linked to intracellular ice production, whereas slow cooling induces osmotic alterations as a result of exposure
to highly concentrated intra- and extracellular fluids or mechanical interactions between cells and extracellular ice. Cryopreservation is a procedure that suspends
biological samples at freezing temperatures for an extended period of time in order to retain their fine structure.3, 4 The freezing behaviour of cells can be altered in
the presence of a cryoprotective agent (CPA; also known as a cryoprotectant), which influences the rates of water transport, nucleation, and ice crystal
development. Numerous cryopreservation research publications have investigated the physical and biological aspects that influence cell survival at low
temperatures during the cooling and warming processes.5 Unlike single cell suspensions, bulk tissues have variable heat and mass transfer effects during
cryopreservation, making it more challenging to achieve quick cooling and warming rates as well as an even distribution of CPAs.1, 6 Cryopreserved cells or
tissues have some advantages in basic research, as well as current and prospective clinical uses. With the constant availability of cryopreserved cells and tissues,
rigorous quality testing can be undertaken to assess whether the cells or tissue are suitable for transplantation without the need for fresh samples.7 Cell and tissue
cryopreservation has become more successful in recent years thanks to the use of CPAs and temperature control technology (Table 1). In this study, we will
provide a quick overview of cryopreservation principles and their clinical applications.



Fig. 1

Physical events and cryoinjury of cells during freezing and thawing. Cryoinjuries are caused, at least in part, by the solution effect (leading to osmotic shock) and
intracellular ice formation (leading to breakdown of intracellular structures).

CPA, cryoprotective agent.

Table 1

Comparison between the slow-freezing and vitrification methods

Characteristic | Slow freezing | Vitrification
Working time | More than 3 h | Fast, less than 10 min
Cost | Expensive, freezing machine needed | Inexpensive, no special machine needed
Sample volume (μL) | 100–250 | 1–2
Concentration of CPA | Low | High
Risk of freeze injury, including ice crystal formation | High | Low
Post-thaw viability | High | High
Risk of toxicity of CPA | Low | High
Status of system | Closed system only | Open or closed system
Potential contamination with pathogenic agents | Low | High
Manipulation skill | Easy | Difficult

CPA, cryoprotective agent.



7.5 2. Cryopreservation

2.1. Cryopreservation procedure

Cryopreservation is the use of extremely low temperatures to preserve structurally intact living cells and tissues for extended periods of time.2 The cryobiological response and cryosurvival throughout the freezing and thawing cycle vary greatly depending on the cell type and the mammalian species (Fig. 1 and Table 1).5 Cryopreservation processes are classified as follows: (1) slow freezing8, 9; (2) vitrification, the solidification of the cell or tissue's aqueous milieu into a noncrystalline glassy phase10; (3) subzero nonfreezing storage; and (4) dry preservation.11 Mammalian cells are often not suitable for dry storage because of the difficulty of delivering the disaccharide trehalose (a disaccharide of glucose, 342 Da)12 and amino acids (used as preservatives in plants) into the intracellular space.13 The primary steps in cryopreservation are (1) combining CPAs with cells or tissues prior to cooling; (2) cooling the cells or tissues to a low temperature and storing them; (3) rewarming the cells or tissues; and (4) removing CPAs from the cells or tissues after thawing.14 The proper application of CPAs is thus critical to improving the survival of the material to be cryopreserved.
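The four primary steps above can be sketched as a simple ordered checklist; the step identifiers and wording below are illustrative only, not a validated protocol.

```python
# Illustrative sketch of the four-step cryopreservation workflow described
# in the text; names and descriptions are assumptions, not a lab protocol.

CRYO_WORKFLOW = [
    ("add_cpa",    "Combine CPAs with cells or tissues prior to cooling"),
    ("cool_store", "Cool the cells or tissues to a low temperature and store them"),
    ("rewarm",     "Rewarm the cells or tissues (rapid thaw is typical)"),
    ("remove_cpa", "Remove CPAs from the cells or tissues after thawing"),
]

def describe_workflow(steps):
    """Return numbered one-line descriptions of each workflow step."""
    return [f"{i}. {desc}" for i, (_, desc) in enumerate(steps, start=1)]

for line in describe_workflow(CRYO_WORKFLOW):
    print(line)
```

A structure like this is mainly useful as a checklist for batch records, where each step must be signed off in order.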

2.2. Cryoinjury

The specific mechanism of cryoinjury, in which water phase changes at low temperatures damage cells, remains unclear.5 Cooling and thawing velocities can have a significant impact on physicochemical and biophysical processes, influencing survival rates. Two cryoinjury mechanisms have been widely proposed: osmotic rupture generated by extracellularly concentrated solutes, and intracellular ice formation, both of which depend on the cooling rate (Fig. 1).5, 15, 16, 17, 18 In addition, cell viability limits are essentially defined in terms of an intact plasma membrane with normal semipermeable characteristics. Indeed, conditions that allow the plasma membrane to persist may still preclude the survival of key organelles within cells.5

2.3. CPAs

The CPA, typically a fluid, decreases freezing injury throughout the cryopreservation process (Fig. 1). CPAs should be biologically acceptable, able to enter cells, and low in toxicity.2 Various CPAs have been developed (Table 2) and are utilised to reduce ice formation at any given temperature, depending on the cell type, cooling rate, and warming rate.2 To maximise cell and tissue survival, the sample volume, cooling rate, warming rate, and CPA concentration should be optimised according to the cell type and tissue context.18 Because of heat and mass transfer constraints in bulk systems, the macroscopic physical size of the tissue is an important factor to consider when developing a cryopreservation strategy.1, 18 CPAs fall into two groups: (1) cryoprotectants that permeate cell membranes, such as dimethyl sulfoxide (DMSO), glycerol,19 and 1,2-propanediol; and (2) cryoprotectants that do not permeate cell membranes, such as 2-methyl-2,4-pentanediol and polymers such as polyvinylpyrrolidone, hydroxyethyl starch, and sugars.1, 4 In addition to synthetic chemicals, biomaterials such as alginates, polyvinyl alcohol, and chitosan, as well as classic small molecules, can be utilised to inhibit ice crystal development.4 Antioxidants and other chemicals have also been employed, both to directly inhibit ice crystal formation and to prevent cell death caused by processes such as apoptosis during the freezing and thawing cycle.20, 21, 22, 23 Common CPAs are briefly discussed in the following subsections and Table 2.

Table 2

Commonly used cryoprotective agents and their uses (possible toxicity data from ref. 30)

Cell Banker series
  Membrane permeability: yes
  Possible toxicity: unknown, but less than that of DMSO
  Applied in cryopreservation of: adipose tissue-derived stem cells,29 amniotic fluid, bone marrow,36 synovium,36 and mammalian cells

Dimethyl sulfoxide (DMSO)
  Membrane permeability: yes
  Possible toxicity: reduction in heart rate; toxicity to the cell membrane
  Applied in cryopreservation of: adipocyte tissue,36 amniotic fluid and umbilical cord,36 bone marrow,36 dental pulp,36 embryos (combined with EG or propylene glycol),44 embryonic stem cells (alone or combined with EG),37 hepatocytes,11 microorganisms,26 oocytes (combined with EG),37, 45 platelets,27 teeth,36 and testicular cells/tissue

Ethylene glycol (EG)
  Membrane permeability: yes
  Possible toxicity: gastrointestinal irritation; pulmonary edema; lung inflammation
  Applied in cryopreservation of: amniotic fluid36 and dental pulp36

Glycerol
  Membrane permeability: yes
  Possible toxicity: renal failure
  Applied in cryopreservation of: amniotic fluid, microorganisms,26 red blood cells,37, 38, 39 spermatozoa, and teeth36

Trehalose
  Membrane permeability: no
  Possible toxicity: relatively less toxic
  Applied in cryopreservation of: adipose-derived stem cells (combined with vitrification),28 embryos (combined with vitrification), ovarian tissue (combined with vitrification), red blood cells,38 spermatozoa,12 and stem cells (combined with propylene glycol)37

Propylene glycol (1,2-propanediol)
  Membrane permeability: yes
  Possible toxicity: impairment of the developmental potential of mouse zygotes30, 31
  Applied in cryopreservation of: embryos30, 31 and hepatocytes11

2.3.1. Glycerol
Polge et al.24 identified glycerol's cryoprotective activity in 1949, and this polyol remained the most effective additive until Lovelock and Bishop demonstrated the protective effect of DMSO in 1959.25 Because glycerol is a nonelectrolyte, it reduces the electrolyte concentration in the remaining unfrozen solution in and around a cell at any given temperature. It is commonly used to store microorganisms and animal sperm.26

2.3.2. DMSO

DMSO, first synthesised by the Russian scientist Alexander Zaytsev in 1866, is widely employed for the cryopreservation of cultured mammalian cells owing to its low cost and comparatively low cytotoxicity.25, 27 Like glycerol, DMSO works by lowering the electrolyte concentration in the remaining unfrozen solution in and around a cell at any temperature. However, DMSO-associated DNA methylation and histone modifications have been shown to reduce survival rates and induce cell differentiation.28, 29 These unfavourable effects make DMSO difficult to use in typical clinical applications.

2.3.3. Polymers

Another method for controlling cell placement is to entrap CPAs within a capsule during cell resuspension in an encapsulating medium.4 Among encapsulating
materials, synthetic nonpenetrating polymers can provide cryoprotection of cells within the scaffold, overcoming the constraints of diffusion in higher-dimensional
cryopreservation.4 Vinyl-derived polymers, such as polyethylene glycol (C2nH4n+2On+1, molecular weight: 200-9500 Da), polyvinyl alcohol [(C2H4O)n, molecular
weight: 30-70 kDa], and hydroxyethyl starch (130-200 kDa), have the ability to reduce the size of produced ice crystals.4, 30, 31

2.3.4. Proteins

Sericin, a water-soluble sticky protein (∼30 kDa) obtained from the silkworm cocoon, has been developed as a CPA that can replace foetal bovine serum or DMSO for human adipose tissue-derived stem or progenitor cells and for hepatocytes.26, 27 Small antifreeze proteins produced by marine teleost fishes have also been identified as CPAs.32

2.3.5. Cell Banker Series

The Cell Banker series (Nippon Zenyaku Kogyo Co., Ltd., Fukushima, Japan) enables rapid cell cryopreservation at -80 °C, resulting in higher survival rates after freezing and thawing.29, 33 The Cell Banker cryopreservation media contain 10% DMSO, glucose, a high polymer concentration, and pH adjusters.33 The serum-containing Cell Banker 1 and 1+ can be employed to cryopreserve practically all mammalian cells. However, common cryopreservation media include foetal bovine serum, which contains a variety of growth hormones, cytokines, and undefined components such as bovine exosomes, precluding their use in a standardised cryopreservation procedure for clinical application in humans.34 In this regard, the serum-free Cell Banker 2 is suitable for cryopreservation of cells in serum-free cultures. Cell Banker 3 (Stem Cell Banker) consists of 10% DMSO and other inorganic compounds (US20130198876) and meets the requirement for chemically defined, xeno-free constituents, making it ideal for the preservation of somatic stem cells and induced pluripotent stem cells.

3. Freezing methods: traditional slow freezing and vitrification

Cryopreservation can be performed by slow freezing or by vitrification (Table 1). The main differences between the two are the CPA concentrations and the cooling rates. In theory, if cooling is gradual enough, cells can efflux internal water quickly enough to prevent supercooling and hence intracellular ice formation.5 Because cells differ in their capacity to move water across the plasma membrane, the ideal cooling rate varies by cell type. Slow freezing first replaces water in the cytoplasm with CPAs to reduce cell damage, and the cooling rate is adjusted to the permeability of the cell membrane. Slow-cooling techniques typically entail a cooling rate of roughly 1 °C/min in the presence of less than 1.0 M CPA, using a high-cost controlled-rate freezer or a benchtop portable freezing container.8, 9 The benefits of slow freezing include a low risk of contamination during the procedure and no need for advanced manipulation skills. However, slow freezing increases the risk of freeze injury through the formation of extracellular ice (Table 1). In contrast, vitrification is a procedure in which cell suspensions pass directly from the aqueous phase to a glassy state on exposure to liquid nitrogen.35 The procedure requires exposing the cells or tissues to high concentrations of CPA (40-60% weight/volume) and then cooling them rapidly to deep cryogenic temperatures (i.e., liquid nitrogen) to prevent ice formation.18 Vitrification is determined largely by three factors: (1) sample viscosity; (2) cooling and warming rates; and (3) sample volume.18 To guarantee proper vitrification, all of these components must be carefully balanced. There are two types of vitrification: equilibrium and non-equilibrium. Equilibrium vitrification necessitates the preparation of multimolar CPA combinations and their administration into cell suspensions. Non-equilibrium vitrification, which is further classified into carrier-based systems (including plastic straws, quartz microcapillaries, and cryoloops for obtaining a minimum drop volume18) and carrier-free systems, employs an extremely high freezing rate together with lower CPA concentrations. A significant advantage of vitrification is the reduced risk of freeze injury, which ensures a high cell survival rate. However, the procedure carries a considerable risk of contamination with pathogenic agents and demands good manipulation skills.
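As a quick sanity check on the timings in Table 1, the cooling-ramp arithmetic for slow freezing can be sketched as follows. This is a minimal illustration; the 4 °C start temperature and the constant-rate assumption are assumptions for the example, not a prescribed protocol.

```python
def cooling_time_minutes(start_c, end_c, rate_c_per_min):
    """Time to ramp from start_c down to end_c at a constant cooling rate.

    Assumes a constant rate throughout the ramp, which real controlled-rate
    freezers only approximate.
    """
    if rate_c_per_min <= 0:
        raise ValueError("cooling rate must be positive")
    return (start_c - end_c) / rate_c_per_min

# Slow freezing: roughly 1 C/min from 4 C (refrigerated sample) to -80 C
slow = cooling_time_minutes(4, -80, 1.0)   # 84.0 minutes for the ramp alone
print(f"Slow-freeze ramp: {slow:.0f} min")
```

At roughly 1 °C/min, the ramp from 4 °C to -80 °C alone takes 84 minutes, consistent with Table 1's "more than 3 h" working time once CPA equilibration and handling are added.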

4. Applications of Cryopreservation

The applications of cryopreservation (Table 2) can be classified into the following areas: (1) cryopreservation of cells or organs5; (2) cryosurgery; (3) biochemistry
and molecular biology; (4) food sciences; (5) ecology and plant physiology; and (6) many medical applications, such as blood transfusion, bone marrow
transplantation, artificial insemination, and in vitro fertilisation.3, 5, 8, 11, 36, 37, 38, 39 Some suggested benefits of cryopreservation include the possibility of
banking cells for human leukocyte antigen typing for organ transplantation, allowing enough time for cell and tissue transport between medical centres, and
providing research sources for identifying unknown transmissible diseases or pathogens.5 Furthermore, long-term stem cell preservation remains the first step
towards tissue engineering, which has the potential to regenerate soft tissue aesthetic function and treat established disorders for which there is presently no
treatment.40

4.1. Oocytes and embryos

The first case of embryo cryopreservation for fertility preservation occurred in 1996, when a natural IVF cycle was used before chemotherapy in a woman with
breast cancer. Cryopreservation of mature oocytes is an established method for retaining reproductive capability. A retrospective study of 11,768 cryopreserved
human embryos that underwent at least one thaw cycle between 1986 and 2007 found that the duration of storage had no significant effect on clinical pregnancy,
miscarriage, implantation, or live birth rate, whether from IVF or oocyte donation cycles.41 Because oocytes are very susceptible to chilling injury10,
cryopreservation of immature oocytes and ovarian tissue is a promising strategy (with reports of live births), but further research is needed.37, 42, 43, 44, 45

4.2. Sperm, Semen, and Testicular Tissue


Chemical or physical toxicity, sickness, or genetic susceptibility can all lead to germ cell depletion at any age.6 Fertility preservation is critical to ensuring the quality
of life for people undergoing chemotherapy and radiotherapy.46 Following adequate cryopreservation, sperm and semen can be utilised practically indefinitely. New
trials are being conducted to cryopreserve testicular tissues in the form of cell suspensions, tubular sections, and whole gonads6, 47, but this approach is still in its
early stages. Overall, cryopreservation can be utilised as a first-line method of preserving fertility in males undergoing vasectomy or other therapies that may impair
fertility, such as chemotherapy, radiotherapy, or surgery.

4.3. Stem cells

Adult stem cells can differentiate into numerous cell types and can be obtained from sources other than bone marrow, such as adipose tissue, periosteum, amniotic fluid, and umbilical cord blood.9 Stem cells are classified into embryonic stem cells, mesenchymal stromal cells,29, 36, 48 and hematopoietic stem cells, all of which are regarded as potential goldmines for regenerative medicine.28, 29, 49, 50 Tissue engineering, gene therapy, regenerative medicine, and cell transplantation all rely heavily on the ability to retain, store, and transport these stem cells without altering their genetic and/or biological properties.

4.4. Hepatocytes

Over the last 40 years, isolated hepatocytes have been used in a variety of scientific and medical applications, including physiological studies, investigations into
liver metabolism, organ preservation and drug detoxification, and experimental and clinical transplantation.7, 11 Furthermore, there is a growing interest in the
applications of liver progenitor cells in a variety of scientific fields, including regenerative medicine and biotechnology, highlighting the necessity for cryobanking.11

4.5. Others

Although primary neuronal cells and cardiomyocytes are commonly employed in neuroscience and cardiology research, no gold standard methodology for
preserving these cells has yet been established. With the discovery of glucocorticoid-free immunosuppressive regimens,51 pancreatic islet transplantation may be
investigated as a therapy option for type 1 diabetes. As a result, research into islet cryopreservation methods continues, but the outcomes remain poor, with a
survival rate of less than 50%.51

5. Limitations of cryopreservation

Although cryopreservation technology has a wide range of applications in both scientific and clinical research, significant restrictions remain. Even at temperatures as low as −196 °C (liquid nitrogen), where cells metabolise very little, genetic drift and changes in lipids and proteins can still occur, impairing cellular activity and structure. If there were no limit on the amount of CPA that could be utilised, cells could be preserved almost perfectly.1 In practice, however, CPAs can be harmful to cells, particularly when utilised at high doses. For example, DMSO may disrupt chromosome stability, posing a risk of tumour growth.52, 53 Aside from endogenous alterations in cells, possible infection or contamination, for example with tumour cells, must also be avoided.

6. Summary and Perspectives

Improvements in freezing and thawing rates, osmotic conditions, CPA selection and concentration, and equilibration durations may improve the survival and functionality of human tissue and cell samples, allowing effective future therapeutic use.6 This review briefly discusses the freezing methods used in cryopreservation (slow freezing vs. vitrification) as well as numerous CPAs. Because many recognised compounds are inherently hazardous, researchers are continuously looking for new CPAs.4, 30 A greater understanding of the chemistry and biology of freezing and thawing will be required for future process development, as will the identification of the safest and most successful cryopreservation approach. Successful cryopreservation of biological samples may play a critical role in clinically relevant research and human trials. The most important future aims of cryopreservation should be the development of processes that have minimal impact on the integrity of cryopreserved cells or tissues, followed by standardisation and optimisation of the technology for routine application.

Conflicts of Interest

The authors have declared no conflicts of interest.

Acknowledgments

We apologise to the authors of the many outstanding publications that could not be referenced due to space limits. This research was conducted as part of the Medicinal Scientist Development Programme at Inje University's College of Medicine. This research was supported by the National Research Foundation of Korea's Priority Research Centres Programme (2010-0020224) and the Basic Science Research Programme (2015R1A2A1A13001900 and 2015R1D1A3A01015596), funded by the Ministry of Education, Science, and Technology.

7.6 References

1. Karlsson J.O., Toner M. Long-term storage of tissues by cryopreservation: critical issues. Biomaterials. 1996;17:243–256. [PubMed] [Google Scholar]

2. Pegg D.E. Principles of cryopreservation. Methods Mol Biol. 2007;368:39–57. [PubMed] [Google Scholar]

3. Mazur P. Cryobiology: the freezing of biological systems. Science. 1970;168:939–949. [PubMed] [Google Scholar]

4. Sambu S. A Bayesian approach to optimizing cryopreservation protocols. PeerJ. 2015;3:e1039. [PMC free article] [PubMed] [Google Scholar]

5. Gao D., Critser J.K. Mechanisms of cryoinjury in living cells. ILAR J. 2000;41:187–196. [PubMed] [Google Scholar]

6. Onofre J., Baert Y., Faes K., Goossens E. Cryopreservation of testicular tissue or testicular cell suspensions: a pivotal step in fertility preservation. Hum Reprod
Update. 2016;22:744–761. [PMC free article] [PubMed] [Google Scholar]

7. Ibars E.P., Cortes M., Tolosa L., Gómez-Lechón M.J., López S., Castell J.V. Hepatocyte transplantation program: Lessons learned and future strategies. World J
Gastroenterol. 2016;22:874–886. [PMC free article] [PubMed] [Google Scholar]

8. Mandawala A.A., Harvey S.C., Roy T.K., Fowler K.E. Cryopreservation of animal oocytes and embryos: Current progress and future
prospects. Theriogenology. 2016;86:1637–1644. [PubMed] [Google Scholar]

9. Yong K.W., Wan Safwani W.K., Xu F., Wan Abas W.A., Choi J.R., Pingguan-Murphy B. Cryopreservation of human mesenchymal stem cells for clinical
applications: current methods and challenges. Biopreserv Biobank. 2015;13:231–239. [PubMed] [Google Scholar]

Workshop-5 Module-1 Page 319 of 333


10. Zeron Y., Pearl M., Borochov A., Arav A. Kinetic and temporal factors influence chilling injury to germinal vesicle and mature bovine
oocytes. Cryobiology. 1999;38:35–42. [PubMed] [Google Scholar]

11. Fuller B.J., Petrenko A.Y., Rodriguez J.V., Somov A.Y., Balaban C.L., Guibert E.E. Biopreservation of hepatocytes: current concepts on hypothermic
preservation, cryopreservation, and vitrification. Cryo Letters. 2013;34:432–452. [PubMed] [Google Scholar]

8 Cell Culture Fundamentals: Cryopreservation and Storage of Cell Lines

ECACC Laboratory Handbook 4th Edition

8.1 CRYOPRESERVATION OF CELL LINES


The purpose of cryopreservation, also known as cell freezing, is to facilitate the storage of cell stocks, eliminating the need to maintain all cell lines in continuous culture. It is extremely beneficial when working with cells that have a limited lifespan. Additional primary benefits of cryopreservation include:

• Decreased likelihood of microbial contamination
• Decreased likelihood of cross-contamination with other cell lines
• Decreased likelihood of genetic drift and alterations in physical characteristics
• Work performed using cells at a uniform passage number (refer to section 8, 'Good Cell Banking Practices')
• Decreased expenses (related to consumables and personnel time)

Extensive developmental work has gone into achieving effective freezing and revival of cell lines from diverse cell categories. The fundamental approach to good cryopreservation and resuscitation is to freeze slowly and thaw rapidly. While the specific requirements may differ between cell lines, it is generally recommended to cool the cells at a rate of 1-3 °C per minute and then thaw them rapidly by incubation in a 37 °C water bath for 3-5 minutes. By adhering to these instructions and the supplementary guidelines below, the majority of cell lines can be cryopreserved with a high rate of success.

1. Cultures should exhibit robust health with a viability exceeding 90% and no indications of microbial contamination.

2. Cultures should be in the exponential growth phase (this can be done by using cultures that are not yet at their maximal cell density and by changing the culture
medium 24 hours before freezing).

3. Utilise a serum/protein concentration greater than 20%. Typically, serum is utilised at a concentration of 90%.

4. Employ a cryoprotectant, such as dimethyl sulphoxide (DMSO) or glycerol, to safeguard the cells from damage caused by ice crystal formation. DMSO at a final concentration of 10% is the prevailing cryoprotectant. Nevertheless, this approach is unsuitable for certain cell lines, particularly where DMSO is employed to trigger differentiation. In such situations, it is advisable to utilise an alternative substance such as glycerol (please consult the ECACC data sheet for the appropriate cryoprotectant). We provide pre-made cell freezing solutions that include DMSO or glycerol, as well as serum-free formulations that also contain DMSO.

5. Gradually lowering the temperature by approximately 1 °C per minute through slow freezing, utilising either a Nalgene Mr. Frosty Freezing Container or a
Corning Cool Cell Freezing Container, can help ensure successful preservation of cells.

Figure 1 illustrates the mechanism of cell cryopreservation. Cryoprotectants, such as dimethyl sulfoxide (DMSO) or glycerol, can be used in cell culture media to
provide protection from freezing damage for cells. DMSO mitigates ice crystal formation, hence safeguarding cellular viability during freezing. A concentration of
around 10% volume/volume can be employed in conjunction with a gradual freezing technique (lowering the temperature by around 1°C per minute), allowing the
cells to be frozen at -80 °C (-112 °F) or preserved in liquid nitrogen for prolonged durations.
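The 10% v/v DMSO mix described above lends itself to a small batch calculation. The helper below is a hedged sketch: the 1 mL vial volume, the 90% serum balance, and the 10% preparation overage are assumptions for illustration, and `freezing_medium` is a hypothetical name, not an ECACC procedure.

```python
def freezing_medium(n_vials, vial_volume_ml=1.0, dmso_fraction=0.10, overage=0.10):
    """Volumes of DMSO and serum needed for n_vials of freezing medium.

    Illustrative only: assumes a 10% v/v DMSO / 90% serum mix, as in the
    text, plus a 10% preparation overage to cover pipetting losses.
    """
    total = n_vials * vial_volume_ml * (1 + overage)
    dmso = total * dmso_fraction
    serum = total - dmso
    return round(dmso, 2), round(serum, 2), round(total, 2)

dmso, serum, total = freezing_medium(10)
print(f"10 vials: {dmso} mL DMSO + {serum} mL serum = {total} mL")
```

For a batch of ten 1 mL vials this gives 1.1 mL DMSO and 9.9 mL serum, i.e. 11 mL of medium including the overage.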

8.2 ULTRA-LOW TEMPERATURE STORAGE OF CELL LINES


Following controlled rate freezing in the presence of cryoprotectants, cell lines can be cryopreserved in a suspended state for indefinite periods provided a
temperature of less than -135 °C is maintained. Such ultra-low temperatures can only be attained by specialized electric freezers or more usually by immersion in
liquid or vapor phase nitrogen. The advantages and disadvantages can be summarized as follows:

Electric freezer (-135 °C)
  Advantages: ease of maintenance; steady temperature; low running costs
  Disadvantages: requires liquid nitrogen back-up; mechanically complex; storage temperature high relative to liquid nitrogen

Liquid phase nitrogen
  Advantages: steady ultra-low (-196 °C) temperature; simplicity and mechanical reliability
  Disadvantages: requires regular supply of liquid nitrogen; high running costs; risk of cross-contamination via the liquid nitrogen

Vapour phase nitrogen
  Advantages: no risk of cross-contamination from liquid nitrogen; low temperatures achieved; simplicity and reliability
  Disadvantages: requires regular supply of liquid nitrogen; high running costs; temperature fluctuations between -135 °C and -190 °C

Storing in liquid phase nitrogen ensures a consistently low storage temperature but necessitates significant amounts of liquid nitrogen, which poses a potential hazard, and there are recorded instances of viral pathogens spreading by cross-contamination through the liquid nitrogen itself. For these reasons, vapour phase nitrogen is the most commonly used method for ultra-low temperature storage.

In the case of vapour phase nitrogen storage, the ampoules are placed above a shallow container of liquid nitrogen, and it is crucial to maintain the precise depth of
the liquid nitrogen. A temperature gradient will be present in the vapour phase, with the specific extremes determined by the liquid levels, vessel construction, and
frequency of opening. If routine maintenance is neglected, the upper parts of a vapour phase storage vessel might experience significant temperature fluctuations.
Contemporary liquid nitrogen storage tanks are progressively providing enhanced vapour phase storage technologies.

Inadequate storage maintenance often leads to the complete loss of cell stocks, which is unfortunately an all too common occurrence. All liquid nitrogen storage containers must be equipped with alarms that warn when liquid nitrogen levels are low, and should be continuously monitored and alarmed so that the temperature remains consistent. This is especially true of vapour phase storage vessels. The bulk liquid nitrogen storage vessel must maintain a minimum fill level of 50% and should be replenished before falling below this threshold; this guarantees that missing a single liquid nitrogen delivery will not have disastrous consequences. Storing valuable cell stocks at a second site is strongly advised, and ECACC provides a Safe Deposit Service specifically for this purpose.
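The 50% minimum fill rule can be expressed as a one-line check. This is a hedged sketch with assumed litre-based readings and a hypothetical function name, not a replacement for the fitted level alarms described above.

```python
MIN_FILL_FRACTION = 0.50  # replenish before the vessel falls below 50%

def needs_refill(current_litres, capacity_litres):
    """True when the bulk LN2 vessel should be replenished.

    Illustrative check only; real tanks should rely on their fitted
    low-level alarms and scheduled deliveries.
    """
    if capacity_litres <= 0:
        raise ValueError("capacity must be positive")
    return current_litres / capacity_litres <= MIN_FILL_FRACTION

print(needs_refill(90, 200))   # below half full: refill
print(needs_refill(150, 200))  # above half full: no action yet
```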

Figure 2 displays typical liquid nitrogen storage tanks utilised for cell cryopreservation.


8.3 MANAGEMENT OF CELL LINE INVENTORY

It is essential for all storage vessels with extremely low temperatures to have a racking/inventory system that is specifically designed to arrange the contents in a
way that makes it easy to find and retrieve frozen cryovials. Accurate record keeping and inventory control should be implemented to facilitate this, encompassing
the following:

• Each ampoule must be labelled individually with liquid nitrogen-resistant labels giving identity, lot number, passage number, and date of freezing.
• The location of each ampoule should be documented, preferably in an electronic database or spreadsheet as well as on a paper storage plan.
• A control system must be in place to ensure that no ampoule can be added or removed without updating the records.
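A minimal sketch of the record-keeping rules above, assuming hypothetical field names and a "Tank/Rack/Box/Position" location string; a real facility would use a validated inventory database rather than this toy structure.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class CryoVial:
    """Minimal record for one ampoule, mirroring the labelling fields listed
    above. Field names are illustrative, not an ECACC-mandated schema."""
    identity: str
    lot_number: str
    passage_number: int
    date_frozen: date
    location: str  # e.g. "Tank 1 / Rack 1 / Box 1 / Position A1"

class Inventory:
    """Deposits and withdrawals always update the records, matching the
    control-system requirement above."""
    def __init__(self):
        self._by_location = {}

    def deposit(self, vial):
        if vial.location in self._by_location:
            raise ValueError(f"location {vial.location} already occupied")
        self._by_location[vial.location] = vial

    def withdraw(self, location):
        # KeyError if no record exists: nothing leaves the tank unrecorded
        return self._by_location.pop(location)

    def count(self):
        return len(self._by_location)

inv = Inventory()
inv.deposit(CryoVial("HeLa", "L123", 12, date(2024, 1, 5), "T1/R1/B1/A1"))
```

Keying the dictionary on location means a second deposit into an occupied slot fails loudly, which is exactly the kind of mismatch a paper-only plan tends to miss.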

8.4 SAFETY PRECAUTIONS FOR HANDLING LIQUID NITROGEN

Overall Safety Concerns

Staff must undergo training in the proper utilisation of liquid nitrogen and its accompanying equipment, including the safe venting of storage tanks and the filling of
containers. When working with nitrogen in the laboratory, it is essential to always wear personal protection equipment. This includes a full-face visor, thermally
insulated gloves, a laboratory coat, and preferably a splash-proof plastic apron. Adhering to appropriate training protocols and utilising protective gear will
effectively mitigate the likelihood of experiencing frostbite, burns, and other unfavourable occurrences.

Potential for suffocation

The primary safety concern is the hazard of asphyxiation caused by escaping nitrogen, which vaporises and displaces ambient oxygen. This is critical because a rapid depletion of oxygen can lead to sudden loss of consciousness. Liquid nitrogen freezers should therefore be positioned in well-ventilated spaces, with regular scheduled maintenance, and large storage facilities must be equipped with low-oxygen alarm systems.

Guidelines for Ensuring Safety in Designated Liquid Nitrogen Storage Zones

• Utilise oxygen alarms programmed to detect oxygen levels of 18% (v/v).

• Staff training – Employees should get training on promptly evacuating the premises upon hearing the alert and refraining from re-entering until the oxygen levels
have returned to normal, which is around 20% v/v.

• It is recommended that staff work in pairs when handling liquid nitrogen.
• The use of nitrogen outside normal working hours should be strictly forbidden.
• Where feasible, mechanical ventilation systems should be installed.
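The 18% alarm set point and the roughly 20% re-entry level above can be combined into a simple status check. This is an illustrative sketch with hypothetical names, and no substitute for a certified fixed oxygen monitor.

```python
ALARM_THRESHOLD = 18.0  # % v/v O2 alarm set point, from the guideline above
NORMAL_LEVEL = 20.0     # approximate normal reading required for re-entry

def room_status(o2_percent):
    """Classify an oxygen reading for an LN2 storage area.

    Illustrative logic only: a certified monitoring system, not this
    sketch, must drive real evacuation decisions.
    """
    if o2_percent < ALARM_THRESHOLD:
        return "EVACUATE"         # alarm sounding: leave immediately
    if o2_percent < NORMAL_LEVEL:
        return "DO NOT RE-ENTER"  # wait for levels to recover to ~20% v/v
    return "OK"

print(room_status(20.9))
print(room_status(17.5))
```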

Q 3: Why are viruses generally a concern for finished biopharmaceutical products?

If these viruses find their way into finished drugs, they could infect patients. As a result, biopharma production, whether it is in batch mode or continuous mode,
includes virus inactivation and removal steps.

Examining the presence of viral contamination in the production of biologics and its potential impact on developing therapeutics.


Paul W. Barone, Michael E. Wiebe, James C. Leung, Islam T. M. Hussein, Flora J. Keumurian, James Bouressa, Audrey Brussel, Dayue Chen, Ming Chong, Houman Dehghani, Lionel Gerentes, James Gilbert, Dan Gold, Robert Kiss, Thomas R. Kreil, René Labatut, Yuling Li, Jürgen Müllberg, Laurent Mallet, Christian Menzel, Mark Moody, Serge Monpoeho, Marie Murphy, Mark Plavsic, Stacy L. Springs, and others. Nature Biotechnology 38, 563–572 (2020).

Summary

Recombinant protein treatments, vaccines, and plasma products have a well-established history of safety. Nevertheless, the utilisation of cell culture for the
production of recombinant proteins remains vulnerable to viral contamination. These contaminations incur significant financial costs for recovery, have the potential
to deprive patients of necessary treatments, and are characterised by their infrequency, hence posing challenges in drawing lessons from previous occurrences. A
coalition of biotechnology firms, in collaboration with the Massachusetts Institute of Technology, has assembled to gather data on these occurrences. This
comprehensive study offers valuable insights into the prevalent viral contaminants, their origins, the specific cell lines they affect, the necessary corrective
measures, and the consequences of such occurrences. These findings have consequences for the secure and efficient manufacturing of not only existing products,
but also upcoming cell and gene therapies that have demonstrated significant therapeutic potential.


Main

During the twentieth century, the manufacture of many vaccine preparations was inadvertently contaminated with unwanted viruses1,2,3. This included contamination of the poliovirus vaccine with simian virus 40 (SV40), the health effects of which remained uncertain for several decades. In the early 1980s, therapeutic proteins derived from human plasma that were unknowingly tainted led to the extensive transmission of viruses, including human immunodeficiency virus (HIV), to individuals with haemophilia who received these treatments5,6. Consequently, public confidence in the plasma industry's capacity to produce these medicines safely declined7,8. To guarantee the safety of existing plasma-derived, vaccine, and recombinant biotherapeutics, additional safety measures were devised and put into practice to minimise the possibility of viral contamination9,10,11.

Subsequently, the manufacture of therapeutic proteins has predominantly transitioned to recombinant DNA technology in both prokaryotic and eukaryotic cells12. Nevertheless, the process of cultivating these cells remains vulnerable to infection by adventitious agents, predominantly bacteria and viruses. Viruses pose a particular problem because they tend to be more elusive than other microbial contaminants1, and mammalian cell cultures can propagate viruses capable of causing human disease. Current best practice is built upon lessons learned from the past and rests on three main principles: the careful selection of starting and raw materials with a low likelihood of containing adventitious viruses; rigorous testing of cell banks and materials at various stages of production to ensure they are free from detectable viruses; and the inclusion of specific measures to remove and inactivate potential undetected adventitious and endogenous viral contaminants. Thanks to this methodology, these products have remained safe for more than 35 years, and, as far as we know, there has been no instance of a harmful virus being transmitted to a patient via a therapeutic protein produced using recombinant DNA technology.

Although the safety record is commendable, the risk of viral contamination in mammalian cell culture is real, and its consequences can be serious. Even if no tainted lots are released, patients in need of treatment may still be affected by medicine shortages, and public trust in the biotech industry might be significantly undermined. These incidents can cost tens of millions of dollars, covering investigation, cleanup, corrective measures, lost sales, and production-plant downtime. In addition, they distract company executives, invite competition, and can diminish company value. Ultimately, these events subject the corporation to rigorous regulatory examination and may delay the authorisation of new products or lead to the expedited authorisation of a rival's product16,17.

However, despite the negative repercussions mentioned above, properly learning from previous contamination episodes is difficult. These occurrences are infrequent; we know of just 26 instances of virus contamination in the last 36 years (Table 1)18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35, 18 of which were reported specifically as part of this study. Moreover, no thorough dataset and assessment of virus contaminations in biomanufacturing has previously been published. The Consortium on Adventitious Agent Contamination in Biomanufacturing (CAACB) is a biopharmaceutical industry consortium of over 20 biotechnology companies hosted at the Massachusetts Institute of Technology's Centre for Biomedical Innovation. The consortium has gathered extensive data on virus contaminations in cell culture operations from its member companies, and these data were combined with information from published reports of viral contamination incidents. As far as we know, this dataset is the first comprehensive collection of information on adventitious viral contaminations in mammalian cell cultures within the biotech industry. This study, unprecedented in the industry, offers valuable insights into the prevailing viral contaminants, their origins, the cell lines they affect, the remedial measures implemented, and the consequences of such events.

Workshop-5 Module-1 Page 322 of 333


Table 1 displays the occurrences of virus contamination in mammalian cell cultures used for protein and vaccine production, categorised by year and including both publicly reported incidents and those documented in the CAACB study.


This Perspective provides an overview of our progress to date and examines the potential impact of our findings on manufacturers of recombinant protein therapeutics. We then use these observations to outline the considerations regarding the spread of viral contaminants for developers of novel gene and cell therapies. The objective of this study is to help the industry achieve its goal of manufacturing biologic products that are both safe and effective. It should be noted that this study is ongoing, and we anticipate continuing data collection and analysis in the future.

The CAACB Virus Contamination in Biomanufacturing study

So far, the CAACB has gathered a thorough collection of data about instances of virus contamination and the measures implemented to prevent such contaminations. These data were obtained from 20 prominent biopharmaceutical producers via a survey of 166 questions administered to the CAACB members (see the Supplementary Note for details). To keep the dataset manageable for comparing procedures, the study was restricted to virus contaminations in mammalian cell culture manufacturing; bacterial and yeast fermentation, plasma fractionation, and egg-based vaccine production were excluded. It encompassed manufacturing from pilot to commercial scale, covering both current Good Manufacturing Practice (cGMP) and non-cGMP activities. Unless explicitly stated otherwise, all data and commentary presented here pertain exclusively to material reported directly to the CAACB and do not incorporate information from other published reports.

Nearly half of the industry respondents, namely nine out of the 20 companies that participated, disclosed experiencing adventitious virus contaminations in their
mammalian cell culture activities since 1985. In total, these companies reported a combined 18 instances of virus contamination to the CAACB. All the documented
instances of contamination took place in manufacturing facilities in North America and Europe. However, there is not enough evidence to establish if one region has
a significantly higher likelihood of contamination compared to the other.

Different host cells carry different risks.

Table 2 displays the cell type, the contaminating virus, and the suspected origin of contamination for the 18 incidents reported to the CAACB. Chinese hamster ovary (CHO) cells were the manufacturing platform in 67% of reported events, whereas human or primate cell lines were involved in the remaining 33%. This is unsurprising, given that CHO cells are the predominant host cells in the recombinant-biologic industry; published reports suggest that around 70% of authorised biotech products are produced using CHO cells. The documented viral contaminations took place at various points in the product life cycle: 3 incidents during preclinical non-cGMP manufacturing, 2 during clinical cGMP manufacturing, and the remaining 13 during commercial manufacturing. It may be surprising that most contaminations reported to the CAACB occurred during cGMP production, despite the rigorous controls used for clinical and commercial manufacture. A plausible reason is that the scale of cGMP production and the quantities of media employed were significantly larger than in non-cGMP manufacturing, providing a greater chance for the introduction of a contaminant at a low level. Furthermore, virus testing is not obligatory for non-cGMP manufacturing, and certain instances of contamination may have gone unnoticed. All of the cell culture techniques widely employed in the industry were implicated: roller bottle (one event), batch culture (three events), fed-batch culture (four events), and perfusion culture (four events); respondents did not identify the culture technique in six events. The data do not provide sufficient evidence to ascertain whether one operational mode carries a greater risk of contamination than another.

Table 2 lists the viruses found to have contaminated mammalian cell culture operations used for vaccine or recombinant protein manufacturing, together with the number of instances and the identified or suspected source of each contamination.


The 18 virus contamination events reported to the CAACB were attributed to nine viral contaminants (Table 2). There is no overlap between the four viruses detected in CHO cell culture and the five viruses detected in human or primate cells, underscoring that the contamination and safety hazards differ between CHO cells and human or primate cells. In 11 of the 12 reported contaminations in CHO cell culture, a raw material or medium component was identified or suspected as the source. For the human and primate cell lines, either the manufacturing operators or the cell line itself was suspected as the source. That operators are identified as a source of contamination only in human or primate cell culture, and not in CHO cell culture, is likely due to the 'species barrier' to viral infection between human or primate cells and rodent cells: human-infecting viruses have a higher propensity to replicate in human cells than in non-human mammalian cells.

Implications for safety

Among the five viruses that were discovered to infect human and primate cell lines, four of them (herpesvirus, human adenovirus type 1, parainfluenza virus type 3,
and reovirus type 3) are known to cause disease in humans. On the other hand, only one virus (Cache Valley virus) that was found to contaminate CHO cell culture
has been reported to cause disease in humans. These statistics emphasise that protein products manufactured in human or primate cell lines have a greater
potential to be contaminated with viruses, which poses a larger safety risk to patients and the manufacturing process. This is because human cell lines are more
susceptible to infection by viruses that are harmful to humans.

Animal-derived raw materials (ADRMs), particularly serum, pose a greater risk of viral contamination and are therefore being replaced wherever feasible across the industry1,9,13. Our data provide additional evidence for this: of the four viruses that contaminated CHO cell culture, three (blue tongue virus, Cache Valley virus, and vesivirus 2117) were suspected or definitively identified as originating from serum. Nevertheless, eliminating ADRMs does not eradicate the possibility of contamination: in one contamination involving the minute virus of mice (MVM), no ADRMs were used in the process. The minute virus of mice poses a significant challenge as a potential contaminant. The virus is shed by ubiquitous



populations of wild mice, it may not be kept out even with recognised rodent-control measures, and it can persist in the environment and in raw materials for a significant period of time after being shed.

Although raw materials were identified as the probable origin of the contamination in 11 instances, testing those raw materials did not necessarily reveal the contaminating virus. The viral contaminant was directly detected in the suspect raw material in only 3 instances (Fig. 1), and in each of these the viral load first had to be raised to a level detectable by PCR, either by amplifying the virus through replication in cell culture or by concentrating the raw material. In the remaining 8 instances, viral testing of raw materials yielded negative results, and the origin of the contamination was determined solely through indirect evidence.

Fig. 1: Virus testing of contaminated processes.

Virus tests on samples from different process steps of the affected runs during investigation of the contamination events reported to the CAACB. The data provided to the CAACB comprised samples from cGMP operations that either tested positive (dark orange) or were below the limit of detection of the assay and considered negative (dark blue), and samples from non-cGMP operations that either tested positive (light orange) or were below the limit of detection of the assay and considered negative (light blue). Note that not all materials were tested in every contamination event.

Methods for detecting viruses

Given that viruses are molecular parasites that rely on the cellular machinery of the host cell they invade, it is reasonable to anticipate that their presence in mammalian cell culture would produce noticeable alterations in culture performance measures, such as viable cell density. Of the 18 contamination events reported to the CAACB, 11 were identified by a change in cell culture parameters as the primary indicator of contamination. For the remaining 5 events, which happened quite some time ago, it is uncertain whether there was any alteration in cell culture parameters. Nevertheless, in two instances there was no observable alteration in cell growth performance, and the contaminating virus was identified solely by a virus-specific PCR assay. This implies that cell culture performance alone may not provide adequate indication of contamination. Furthermore, alterations in cell culture performance can arise from causes other than viral contamination.

The in vitro virus (IVV) assay is a cell-based method employed to examine cell culture harvest materials for potential viral contaminants39. The assay can detect a diverse array of viruses and was employed as a quality control (QC) lot release test in all 15 instances that occurred during cGMP manufacturing. However, in 4 instances the IVV assay yielded negative results, and the contamination was identified by another method (Table 3). These findings suggest that the safety of biologic products should not depend on testing alone, even with orthogonal approaches; rather, it should be ensured by numerous controls, spanning prevention, detection, and viral clearance, at every stage of the process.

Table 3 Methods used for the detection (both initial detection and confirmation of a contamination) and identification of the viral contaminant of a virus contamination in cell culture operations

From: Viral contamination in biologic manufacture and implications for emerging therapies

Methods                              | Detection | Virus identification | Used for QC lot release | QC test positive | QC test negative
PCR                                  | 8         | 11                   | 1a                      | 1                | 0
IVV test                             | 7         |                      | 14                      | 11b              | 4b
Electron microscopy                  | 4         | 3                    |                         |                  |
Viral genome sequencing              |           | 3                    |                         |                  |
Immunofluorescence                   | 2         | 2                    |                         |                  |
Mass spectrometry protein sequencing |           | 2                    |                         |                  |
RNA fingerprinting                   | 1         | 1                    |                         |                  |
Serology                             |           | 6                    |                         |                  |
Massively parallel sequencing        | 1         | 1                    |                         |                  |

The number of contamination events where each test was used is listed in each column; multiple tests may have been used in each event. Tests used for QC lot release, and whether they were positive or negative, are indicated.
a One company used both PCR and IVV tests for QC lot release; in one event, PCR was positive, but the IVV test was negative.
b One company found some bioreactors to be positive by PCR but negative by the IVV test, whereas other bioreactors were positive by both tests. These are counted in both columns.

The findings in Table 3 demonstrate the limitations of many commonly employed detection assays. Nevertheless, the implementation of rapid virus detection assays has effectively halted the spread of a viral contaminant within a production facility. Of the 18 contamination events reported to the CAACB, 7 were confined to cell culture bioreactors (Fig. 2). Notably, in 3 instances, virus-specific PCR testing conducted prior to bioreactor harvest detected and identified a viral contaminant in the bioreactor, preventing the virus from spreading to downstream purification operations and other areas of the manufacturing plant. This significantly decreased the time, effort, and expense associated with investigating the incident and restoring the manufacturing facility to operational status. In contrast, no rapid PCR assays were in place in the 6 instances in which contaminated cell culture fluid was processed downstream. PCR assays are designed to detect a particular target virus or group of viruses, so a viral contamination can be identified only if primers and probes for the contaminating virus are included in the assay. Nevertheless, these data emphasise the potential of rapid detection tests to mitigate business risk and enhance product safety, particularly in well-documented high-impact scenarios.
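The go/no-go logic of such a rapid pre-harvest PCR test can be sketched in a few lines of Python. This is an illustrative model only, not an actual QC procedure; the function name and the example virus panel (drawn from the contaminants listed in Table 2) are assumptions for the sketch:

```python
def forward_to_purification(pcr_results: dict) -> bool:
    """Process-forwarding criterion: the harvest proceeds to downstream
    purification only if every virus in the PCR panel tests negative.
    A positive result for any target holds the bioreactor instead."""
    return not any(pcr_results.values())

# Hypothetical pre-harvest panel; True means the PCR target was detected.
panel = {"MVM": False, "vesivirus 2117": False, "Cache Valley virus": False}
print(forward_to_purification(panel))          # True: harvest may proceed
print(forward_to_purification({"MVM": True}))  # False: harvest is held
```

The key limitation noted in the text is visible in the sketch: the decision can only be as good as the panel, since a contaminant absent from the dictionary is never checked.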

Fig. 2: Extent of contamination.

Schematic showing the extent of contamination in the manufacturing process and the use of virus detection as a process-forwarding criterion. In seven events the contamination was contained in the cell culture, in six events it spread to downstream purification operations, and in five events the extent of contamination was unknown. The ability of the downstream process to remove or inactivate the viral contaminant was evaluated in four of the six contamination events that spread downstream and was found in each case to remove contaminating virus to below the limit of detection of the assay. In none of the six contaminations that spread to downstream processes had virus testing been implemented as a process-forwarding criterion. LRV stands for log reduction value and is a measure of the ability of the process to remove or



inactivate virus. As an example, a process that is capable of reducing the viral load by a factor of 10^4, such as from a viral titer of 10^10 to a titer of 10^6, is said to have an LRV of 4.
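The LRV arithmetic in the caption can be checked with a short Python calculation (an illustrative sketch, not part of the study):

```python
import math

def log_reduction_value(titer_in: float, titer_out: float) -> float:
    """Log10 reduction in viral titer achieved by a process step."""
    return math.log10(titer_in / titer_out)

# The caption's example: reducing a titer of 10^10 to 10^6 gives an LRV of 4.
print(log_reduction_value(1e10, 1e6))  # 4.0
```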

Current viral safety approaches work


The data obtained from the CAACB study suggest that the existing production controls employed to mitigate the spread of a contaminant within manufacturing facilities are effective, as no instances of cross-contamination of other ongoing manufacturing activities were reported. The results for in-process materials tested for virus during post-contamination investigations are displayed in Fig. 1. In cGMP production, five of the eight pre-reactor cell cultures tested were found to be contaminated, as was one of the six concurrent seed trains; no concurrent cell cultures for distinct products were found to be contaminated. In all instances, the contamination of concurrent cell culture processes originated from a common raw material rather than from cross-contamination within the production plant, corroborating the efficacy of the existing cross-contamination controls. Although not directly addressed in this work, a thorough examination of biomanufacturing procedures for preventing cross-contamination may be found in the ISPE Baseline Guide Volume 6: Biopharmaceutical Manufacturing Facilities40. It is worth noting that, in a single contamination event, high-efficiency particulate air (HEPA) filters tested positive for the contaminating virus. While it may be unlikely for a virus to become aerosolized in a manufacturing setting, it is not impossible, and the 0.2-µm vent filters on bioreactors are not designed to capture viruses. Therefore, when designing manufacturing facilities and planning decontamination activities in case of contamination, it is crucial to consider this possibility, for instance by using a decontamination method that has been proven to kill viruses and can reach areas that may have been exposed to aerosols.

The data provided to the CAACB also corroborate the efficacy of downstream purification processes in removing and inactivating virus, as demonstrated previously41,42. To ensure safety, the unit operations of the downstream purification process, such as chromatography, are assessed at small scale for their effectiveness in separating potential viral contaminants from the final product. Downstream purification processes also incorporate specific measures to inactivate viruses, such as low-pH holds or solvent/detergent treatment, and nanofiltration is employed to remove viruses. The efficacy of these procedures is assessed by their capacity to clear model adventitious viruses with various biochemical and biophysical characteristics; because these studies do not aim to assess a particular safety hazard, the guideline does not recommend a minimum clearance. The effectiveness of the large-scale purification processes in clearing the contaminating virus was examined in four of the six downstream contaminations (Fig. 2). While certain purification process intermediates tested positive for virus (Fig. 1), in every instance the whole purification process successfully decreased the virus to levels undetectable by the assay, and upon testing, both the drug substance and drug product were found to be free of any traces of virus. Furthermore, in none of the six contaminations that spread to downstream operations had virus testing (such as PCR) been implemented with a negative result required before the cell culture harvest could be processed further. It is crucial to emphasise that in every case reported to the CAACB, no drug substance derived from a contaminated cell culture was ever authorised for human use, and all such drug substances were subsequently destroyed.
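The way such clearance capability is conventionally summarised, by summing the log reduction values measured for each unit operation in small-scale spiking studies, can be illustrated as follows. The step names and LRV figures are hypothetical placeholders, not data from this study:

```python
# Hypothetical per-step log reduction values (LRVs) from small-scale
# spiking studies with a model virus; real values vary by process and virus.
step_lrv = {
    "chromatography": 3.0,
    "low-pH hold": 4.0,
    "nanofiltration": 5.0,
}

# Steps acting by independent mechanisms are conventionally summed to give
# the overall claimed clearance for the downstream process.
total_lrv = sum(step_lrv.values())
print(f"{total_lrv:.1f} log10 overall clearance")  # 12.0 log10 overall clearance
```

A total of 12 log10 in this sketch would mean the train can reduce a viral load by a factor of 10^12, which is why downstream clearance can drive an undetected contaminant below assay detection limits.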

Undoubtedly, an adventitious viral contamination during the production of a biologic in cell culture is highly disruptive. Investigating a viral contamination incident demands significant time and resources; depending on the seriousness of the incident, the investigation may span several months for the individuals involved. The reported cost of such an investigation, as communicated to the CAACB, ranged from $1 million to $10 million, and in the most severe instances the expenses associated with investigating a contamination, adopting corrective measures, decontaminating the facility, and other related costs might reach hundreds of millions of dollars. Several incidents reported to the CAACB have placed the affected products at a competitive disadvantage. Moreover, viral contaminations have been documented to result in disruptions to manufacturing operations (with a median plant shutdown of 1 to 2 months; Fig. 3), financial losses, disposal of batches, legal actions against the company, a decline in company reputation as reflected in stock value, and substantial setbacks in product development. Ultimately, these occurrences can affect patients: in one incident, the adverse effect on drug availability led to a revised treatment plan, leaving certain patients unable to access sufficient quantities of medication until manufacturing operations resumed.

Fig. 3: Manufacturing shut down.

Months that manufacturing plants were shut down due to virus contaminations.

Insights from the CAACB study


The CAACB study results have various implications for the strategic approach of biologic manufacturers to viral contamination in producer cell lines. Our results indicate that viral contaminations in cell-culture-based biopharmaceutical manufacturing are rare events when compared to the total volume of the biotechnology industry over the previous 35 years. Nevertheless, our results also suggest that, for each individual organisation (among those that participated in our survey), the occurrence is not uncommon. Of the 20 companies that participated in the CAACB virus contamination survey, 45% reported that they had encountered at least one virus contamination incident between 1985 and 2018, a percentage greater than our initial expectations. By some estimates, the enterprises involved in the CAACB study account for more than 75% of the global capacity for mammalian cell culture manufacturing, so these companies are likely to face a larger risk of virus contamination given the overall amount of material they process. Nevertheless, there is no direct relationship between the number of contaminations reported to the CAACB per firm and overall manufacturing volume, suggesting that a variety of factors, such as specific circumstances, existing manufacturing controls, and past non-disclosure of viral contaminations, may have influenced this rate. Furthermore, these findings emphasise that any manufacturer is susceptible to a contamination incident.



Furthermore, the CHO cell cultures were contaminated by viruses distinct from those infecting human or primate cell lines (Table 2), and the origins of the viruses infecting CHO cell culture and human or primate cell culture were distinct as well. This implies that different host cells may require the assessment and control of distinct viral contamination hazards, particularly human and primate cell lines, which are more vulnerable to contamination caused by operators.

Our data clearly illustrate the current limitations of virus testing in guaranteeing viral safety. When bioreactor-harvest samples contaminated with virus were examined using the IVV assay, virus was not detected in 4 of 14 cases, or 28.6% of the cases reported to the CAACB (Table 3). These false negatives can be attributed to three main factors: the virus not replicating in the indicator cell lines selected for the test, viral replication not producing a measurable cytopathic effect in the chosen indicator cells, or the viral isolate replicating too slowly to be detected by the end of the test. The IVV assay takes 14 to 28 days, which is too long for evaluating bioreactor contamination before proceeding with downstream purification. Consequently, a few participants utilised PCR assays as a rapid virus test prior to bioreactor harvest; where the viral contaminant matched a PCR target, this proved successful in preventing contamination of the entire production plant. Finally, the CAACB found that testing raw materials had minimal efficacy in the reported instances. In the 11 contaminations in which the viral contaminant originated from raw materials, initial testing of those raw materials failed to reveal the virus; the virus was discovered in the raw material only after the viral load was increased, through concentration or biological amplification, and this occurred in just three instances (Fig. 1).
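The false-negative rate quoted above is a simple proportion of the Table 3 counts, which can be verified directly (Python, for illustration):

```python
# IVV assay performance on contaminated cGMP harvests (Table 3):
# the assay was run in 14 contamination events and missed the virus in 4.
false_negatives, events_tested = 4, 14
rate = false_negatives / events_tested
print(f"{rate:.1%}")  # 28.6%
```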

Given the evident limitations of testing, numerous companies have prioritised prevention by adopting or investigating techniques to remove or inactivate viruses in media or media components. The industry has primarily investigated several prevalent technologies, including flash pasteurisation (often referred to as high-temperature, short-time heat treatment, or HTST), UV-C irradiation, and nanofiltration (Fig. 4). Despite the limited sample size, none of the four manufacturers that have employed HTST heat treatment to inactivate possible viruses in media have encountered any contamination incidents following its deployment.

Fig. 4: Implementing treatment of media.

The percentage of respondents who have implemented (blue) or are evaluating (orange) technologies such as high-temperature, short-time (HTST) treatment, UV-
C irradiation or nanofiltration to remove or inactivate potential viral contaminants from cell culture media and medium components.

Our data demonstrate that viral clearance in downstream protein purification is effective at manufacturing scale (Fig. 2). In every case examined, the contaminating virus was effectively removed or inactivated by downstream purification steps such as chromatography, together with dedicated virus inactivation or removal measures such as a low-pH hold, solvent/detergent treatment, or nanofiltration; these reduced the virus to levels undetectable by the assay. This emphasises the importance, for viral safety, of downstream virus-clearance operations in eliminating potentially undetected contamination.

Implications for producers of cell and gene therapy

By the end of the third quarter of 2019, more than 1,052 advanced therapy medicinal products (ATMPs) were in phase 1-3 clinical trials (source: https://alliancerm.org/publication/q3-2019-data-report/), and a number of cell-based and gene-based medicines have obtained regulatory approval. Autologous cell-based products are customised for an individual patient with a significant medical condition and are typically used as a treatment of last resort. A delay in treatment resulting from contamination of the production process may therefore have adverse consequences for patients, potentially resulting in fatality. However, implementing effective measures to minimise the risk of virus contamination is difficult, particularly for organisations that lack established protocols for viral safety and have limited resources. Here, we present an overview of the main problems related to viral safety and explain how the insights gained from the CAACB Virus Contamination in Biomanufacturing Study can be used to guarantee the safety of these new products.

Risks of viral contamination

The primary hazards associated with virus contamination in cell culture for therapeutic production are the origin of the cells, the materials used in cell culture, and exposure of the cell-culture process stream to operators or the surrounding environment. We examine each risk in turn below.

Regarding cell sources, both recombinant biopharmaceutical products and viral-vector gene therapy products carry little risk from contaminated starting cell sources, because both production processes begin with meticulously characterised master cell banks. Regulatory guidance requires that the donor cells used in allogeneic therapies (in which cells from one donor are used to develop therapies for several patients) be characterised to ensure that they are free from viruses. Autologous cell therapy products, by contrast, are derived from cells collected from human blood or tissue each time a production run is initiated. Testing to assure that the collected cells are free of adventitious virus is usually not possible before cell therapy manufacture begins, so this approach carries some inherent risk. As previously observed, human cells support the replication of considerably more human viruses than CHO cells do (Table 2).

Cell culture procedures used in the production of biopharmaceuticals and ATMPs rely on diverse basal medium formulations containing over 50 essential nutrients (such as amino acids, vitamins, and trace elements) and other substances. Before use, these media are filter-sterilised through sterilising-grade filters rated at 0.1 µm, which most viruses can pass through. If any media component becomes contaminated with a virus during its production or handling, it can therefore cause an infection during the cell culture process. Animal-derived components (Table 2) and human-derived components such as serum and growth factors, which are more likely to harbour viruses than other components, are frequently included in media used for ATMP production. With the exception of certain legacy products, these components are generally absent from media used for protein and vaccine production.

Workshop-5 Module-1 Page 327 of 333


The final risk of viral contamination arises from exposure of the cell-culture process stream to the environment and to personnel, particularly during open production stages such as vessel transfers. In both traditional biopharmaceutical and ATMP manufacturing, these open steps are carried out in a tightly controlled environment by skilled operators who wear gowns and masks and adhere to aseptic standards, resulting in what is known as an operationally closed process. Nevertheless, because of the scale at which they are produced, ATMPs may depend on open cell culture transfers to a greater extent than recombinant proteins and vaccines, and these products consequently face a higher likelihood of viral contamination from open operations.

Strategies for mitigating virus risk

Virus-risk mitigation in biopharmaceutical manufacturing rests on three complementary approaches: (i) employing manufacturing controls and selecting low-risk starting and raw materials to prevent virus entry; (ii) testing in-process materials to confirm their virus-free status and allow lot rejection if necessary; and (iii) eliminating virus from the product through inactivation and/or removal methods9,14. Of the three, viral clearance has proved especially significant in minimising the likelihood of virus contamination in the final product (Fig. 2)52. A crucial question therefore arises: can the risk-reduction strategies employed in conventional biopharmaceutical production be adapted for gene therapy and cell therapy manufacturing?

When evaluating these three risk-reduction methods for ATMPs, it is evident that virus clearance is the weakest link in ATMP virus safety. Several of the virus-clearance unit operations used in therapeutic protein purification are not compatible with, or have not been extensively applied to, ATMPs. If the product is itself a virus or a living cell, how can potential viral contaminants be removed or inactivated? For the viral vectors commonly used in gene therapy, several potential contaminating viruses can be selectively removed by differential clearance53. Table 4 illustrates two instances in which conventional virus-clearance methods can be applied to distinct viral vectors. By employing differential clearance tactics and understanding the probable viral hazards, prospective virus origins, and the susceptibility of host cell lines to these viruses, a virus removal strategy can be devised.

Table 4 Common viral clearance methods, their suitability for use with adeno-associated virus (AAV) and lentiviral vectors, and potential approaches to viral
clearance


From: Viral contamination in biologic manufacture and implications for emerging therapies

Properties: AAV is 20 nm in size and non-enveloped; lentivirus is 80–100 nm and enveloped.

• Heat: AAV is heat stable; application of heat will inactivate heat-sensitive viruses with minimal impact on AAV. Lentivirus is heat sensitive; heat may not be a suitable clearance method.
• Low pH: AAV is low-pH stable; a hold at low pH will inactivate pH-sensitive viruses with minimal impact on AAV. Lentivirus is pH sensitive; low pH may not be a suitable clearance method.
• Solvent/detergent: AAV is non-enveloped; detergent can be used to inactivate enveloped viruses with minimal impact on AAV. Lentivirus is an enveloped virus sensitive to detergent; this is not a suitable clearance method.
• Chromatography: differences in surface charge can allow AAV, or lentivirus, to be separated from other viruses.
• Nanofiltration: AAV will pass through, for example, 35-nm filters, allowing separation of AAV from larger viral contaminants. Lentivirus will be retained by nanofilters (pore size, for example, 35 or 50 nm), potentially allowing separation from smaller viral contaminants.
Nature Biotechnology (Nat Biotechnol) ISSN 1546-1696 (online) ISSN 1087-0156 (print)
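The logic of Table 4 can be summarised in a short decision sketch. The helper below is illustrative only: the type name, fields, and pore-size threshold are assumptions for demonstration, and real clearance-method selection depends on far more than particle size and envelope status.

```python
# Illustrative sketch only: names, fields, and thresholds are assumptions
# for demonstration, not values taken from the source.
from dataclasses import dataclass

@dataclass
class Vector:
    name: str
    diameter_nm: float   # approximate particle diameter
    enveloped: bool      # enveloped vectors are destroyed by detergent

def compatible_methods(vector: Vector, nanofilter_pore_nm: float = 35.0) -> list:
    """Return generic clearance methods that should spare the vector itself."""
    methods = ["chromatography"]          # charge-based separation applies to both
    if not vector.enveloped:
        # Solvent/detergent inactivates enveloped contaminants but leaves a
        # non-enveloped vector such as AAV largely intact.
        methods.append("solvent/detergent")
    if vector.diameter_nm < nanofilter_pore_nm:
        # The vector passes the nanofilter while larger viruses are retained.
        methods.append("nanofiltration")
    return methods

aav = Vector("AAV", diameter_nm=20, enveloped=False)
lenti = Vector("lentivirus", diameter_nm=90, enveloped=True)
print(compatible_methods(aav))    # chromatography, solvent/detergent, nanofiltration
print(compatible_methods(lenti))  # chromatography only
```

As in the table, the enveloped, larger lentivirus is left with only chromatographic separation, which is why clearance is so much harder to design for some vectors than others.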

For living cell-based therapies, viral clearance would require not only eliminating or inactivating virus in the cell culture supernatant but also separating infected cells, which harbour the virus, from uninfected cells. To our knowledge, no technology can currently achieve this. Furthermore, none of the virus-inactivation techniques used in traditional biopharmaceutical production is compatible with the viability of living cells. Consequently, the viral safety of cell therapies currently depends entirely on measures to prevent contamination, to detect problems in-process, and to reject compromised batches.

Prevention of contamination

Production of viral-vectored gene therapy products is initiated with plasmids or recombinant viruses54. Plasmids are produced in prokaryotic cells and must be free of viruses capable of replicating in mammalian cell culture. For recombinant viruses, master virus banks are created and meticulously assessed for the presence of adventitious viruses55. Testing for adventitious viruses in the presence of recombinant virus stocks is difficult, but effective methods for developing and implementing such virus assays have been devised and put into use. In addition, advanced detection technologies such as high-throughput sequencing (HTS) have identified unintended viral contamination even in the presence of the virus product57, and these technologies are currently being investigated for industrial use58,59.

Our investigation showed that direct testing of raw materials has limited efficacy as a control (Fig. 1), because the virus concentration in the raw material is below the detection limit of the assay or the contaminating virus is not uniformly distributed in the raw material. Hence, as noted above, several biopharmaceutical producers treat media to inactivate or remove potential virus contamination in raw materials before use (Fig. 4). Manufacturers of ATMPs should seriously consider adopting this approach where possible.
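The sampling limitation described above can be made concrete with a simple Poisson model: if virions are randomly and uniformly dispersed at a concentration c per millilitre and a test consumes a volume v, the aliquot contains no virion at all with probability exp(-c*v). The sketch below is illustrative only; the concentration and sample volumes are hypothetical, not figures from the study.

```python
import math

def prob_detect(conc_per_ml: float, sample_ml: float) -> float:
    """Probability that a test aliquot contains at least one virion,
    assuming virions are Poisson-distributed (uniformly dispersed)."""
    return 1.0 - math.exp(-conc_per_ml * sample_ml)

# Hypothetical low-level contamination: 0.1 infectious units per mL.
for vol in (1.0, 10.0, 100.0):
    print(f"{vol:6.1f} mL sample -> P(at least one virion) = {prob_detect(0.1, vol):.3f}")
```

Even before assay sensitivity is considered, a 1 mL aliquot of such a material would contain no virus at all more than 90% of the time, which is one reason direct raw-material testing performs poorly.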

The use of raw materials of animal and human origin in ATMP production heightens the risk of virus contamination. When such materials must be used, one way to reduce the risk is to amplify or concentrate the viral titre of a potential contaminant in a high-risk raw material until it reaches a detectable level. The CAACB found that viral detection in raw material samples was possible in only 27% of cases (3 out of 11) (Fig. 1).

An alternative approach is to treat a high-risk material to minimise the potential for contamination. For instance, gamma irradiation of serum has been shown to be effective against several viruses60. Although not yet common practice for human serum, such treatment should be seriously considered to mitigate the hazards associated with these raw materials, particularly if safer alternatives are not viable. Although treating raw materials is generally effective, certain materials of animal or human origin may be damaged by heat, radiation, or UV exposure, which can in turn affect cell growth and performance.

A further, more effective approach, also used in recombinant protein manufacturing, is to develop media that eliminate the need for raw materials of animal or human origin. This is difficult for some ATMPs, however, particularly those with poorly defined nutrient requirements, such as primary cell cultures, or with variable starting cell populations, such as autologous cell therapies.

Furthermore, the virus risk of ancillary materials used in cell therapy production, such as monoclonal antibodies and retroviral vectors, must be evaluated individually to ensure that these materials are free of adventitious viruses before they are used in the manufacturing process.

It is crucial to prevent the introduction of viruses from the environment throughout ATMP cell culture operations. This can be achieved through functionally closed systems, which commonly employ single-use, disposable equipment. Where closed transfer systems are not available, cell culture transfers must be performed in hoods supplied with HEPA-filtered air, by adequately gowned operators using aseptic technique. ATMP manufacturing poses a greater challenge because of its many open operations, which increase the risk of environmental contamination, and because multiple small lots are often produced simultaneously.

In-process testing and rejection of non-conforming batches

For many years in recombinant protein manufacture, testing for adventitious virus contamination has been carried out at key stages of the cell culture process, typically just before harvest of the production cell culture. The in vitro virus (IVV) assay, a cell-based assay, is currently considered the most reliable method for lot-release testing of recombinant protein products, offering broad detection of possible viral contamination. Nevertheless, our investigation revealed that in virus-contaminated runs, the IVV assay on bioreactor pre-harvest samples failed to detect the virus in around 25% of cases (Table 3). A further complication is that the IVV assay requires 14 to 28 days to complete39,56,62,63 and is therefore unsuited to the rapid release needed for certain ATMP products. Nucleic acid-based tests such as PCR are much faster, completing in under 24 hours, but PCR requires prior knowledge of the potential contaminants and detects only viral nucleic acids. High-throughput sequencing (HTS) offers broader detection than PCR and is gaining significant attention from the vaccine and recombinant protein industry59. However, existing HTS sample-preparation and bioinformatic pipelines are slower than PCR, typically requiring 7-10 days58. In addition, a separate method may be needed to determine whether a contaminant found by a nucleic acid-based assay is biologically active, although HTS of viral RNA has been used to confirm the biological activity of a virus64.

Notwithstanding these difficulties, it is imperative for ATMP production to test samples obtained before virus harvest (for viral-vectored gene therapy products) and at the end of the manufacturing process (for cell therapy products). This enables identification of any adventitious virus contamination and supports informed decisions on the rejection of product batches.

Given the constraints of viral clearance for ATMPs and the short shelf life of autologous cell therapy products, companies developing ATMP manufacturing processes and analytical methods are advised to prioritise viral detection techniques, such as HTS, that are faster, broader in coverage, and more sensitive than conventional methods.

Based on the insights gained from the CAACB virus contamination project and the preceding discussion, it can be inferred that, at the present level of technology, the viral safety of certain ATMPs, particularly autologous cell therapies, will depend primarily on preventing contamination through stringent process barriers. These barriers may include treating media, minimising the use of high-risk materials, testing high-risk materials that cannot be eliminated or treated, and employing closed manufacturing systems. In-process virus testing, especially for autologous cell therapies, presents its own difficulties: existing methods cannot deliver both broad detection and rapid results. Nevertheless, suitable virus detection tests must be incorporated into the safety testing performed on every batch. Final test results may only become available after autologous cell therapy treatment has begun, but they would still inform decisions on patient management, particularly if a viral contaminant is detected or suspected. In summary, organisations developing and operating ATMP manufacturing processes should prioritise preventing virus contamination in the first place, while recognising that best practices may evolve as new technologies address current testing and viral clearance challenges for ATMPs.

1.1 Summary

The biotechnology industry has a long track record of delivering safe and effective treatments to patients, thanks to the rigorous measures in place to assure product safety. Despite these precautions, viral contamination of cell culture is a genuine threat with potentially serious repercussions. Historically, it has been difficult to learn from these events; the work presented here offers a thorough compilation and analysis of previously undisclosed viral contamination data from across the industry. The CAACB study identified five viruses that have contaminated CHO cell culture and four that have contaminated cultures of human or primate cells. Crucially, the viruses shown to contaminate human or primate cell lines are also capable of infecting humans. Selecting the appropriate cell line for the manufacture of recombinant proteins or vaccines is a complex decision in which the potential risk of viral contamination is only one factor. Nevertheless, manufacturers using human or primate cells must recognise the greater potential danger to patients posed by a viral contamination of products derived from these cells compared with CHO cells.

The reported data suggest that testing alone is insufficient to guarantee the absence of viral contamination in biotechnology products; a comprehensive, multi-layered approach is necessary to ensure viral safety. This is particularly true for a newly discovered virus, such as SARS-CoV-2, whose ability to infect production cell lines, or to be detected by existing tests, is initially uncertain. Rapid PCR tests supporting prompt processing decisions have been shown to improve containment and limit the spread of a contaminating virus to other areas of the production facility. We are confident that this collective effort and shared expertise can ensure the continued success of the life-preserving treatments of both the present and the future.

From the CAACB investigation, we infer that the viral safety of certain ATMPs depends mostly on preventing contamination through strict process controls. Moreover, the short interval between the manufacture of many ATMPs and their use challenges existing viral testing methods and presents a distinct opportunity for technological progress.

8.5 References
1. Merten, O. W. Virus contaminations of cell cultures — a biotechnological view. Cytotechnology 39, 91–116 (2002).

2. Sawyer, W. A. et al. Jaundice in army personnel in the western region of the United States and its relation to vaccination against yellow fever: Part
I. Am. J. Epidemiol. 39, 337–430 (1944).


3. Shah, K. & Nathanson, N. Human exposure to SV40: review and comment. Am. J. Epidemiol. 103, 1–12 (1976).

Detection and Clearance of Viruses in the Biopharmaceutical Industry


by Shada Warreth, Tuesday, December 17, 2019

Figure 1: Structure of enveloped viruses (represented by herpesvirus and retrovirus) and nonenveloped viruses (represented by adenovirus and parvovirus). Image: www.istockphoto.com

Viral contamination is a prevalent risk for biopharmaceuticals derived from both animal and human sources. Biomanufacturers must conduct viral testing studies and integrate viral clearance steps into their processes to mitigate the potential impact of this form of contamination at any stage of a bioproduction process.

Viral contamination may originate from the cell lines themselves, as with endogenous retroviruses, or be introduced adventitiously during drug manufacture. The International Council for Harmonisation of Technical Requirements for Pharmaceuticals for Human Use (ICH) requires virus testing of master cell banks (MCBs), working cell banks (WCBs), end-of-production cell banks, and bulk unprocessed harvest material, as stated in its guidance document Q5A (1). Regulators also advise producers to ensure proper sourcing of raw materials, to employ efficient procedures to demonstrate virus clearance, and to adopt a risk-based approach throughout.

Viral detection and clearance studies are therefore necessary to establish the elimination of viruses known to be associated with a particular process. By demonstrating the process's capability to eliminate both known and unknown viruses, these studies also evaluate its effectiveness in removing any adventitious viruses that may enter the production process by various routes.

This article examines the test methodologies used for virus identification, as well as the viral clearance mechanisms employed. Such processes must also be studied to ascertain their capabilities and limitations with respect to viral clearance. Because it is impractical to cover all categories of biopharmaceutical products and cell lines, I use Chinese hamster ovary (CHO) cells as an illustrative example when discussing prevalent viruses.

Context

Certain viruses alter the structure of the cells they infect, making them relatively simple to identify; others do not affect the morphology of their host cells, making their detection significantly more challenging. Furthermore, a product can be contaminated by both adventitious and endogenous viruses. Adventitious viruses, as defined by the World Health Organisation (WHO), are viruses unintentionally introduced into the manufacturing process of a biological product (3). They can originate from raw materials, cell culture media, cell lines, equipment, staff, and the environment. Endogenous viruses, in contrast, are those whose genetic material is integrated into the cell substrate: the genome of the animal from which the cells were derived contains endogenous viral sequences, and it is often uncertain whether these encode a complete or infectious virus.

A virus is a highly infectious pathogen that invades a host cell and replicates itself. Viruses consist of nucleic acid, either DNA or RNA, enclosed in a protein coat or capsid. Viruses with an outer lipid membrane are known as enveloped viruses; those lacking this covering are nonenveloped or naked viruses. Enveloped viruses include the DNA-based herpesviruses and RNA-based retroviruses, while nonenveloped viruses include the DNA-based adenoviruses and RNA-based parvoviruses (Figure 1).

Ensuring product safety: No single technique can guarantee the safety of all biological products, and analytical limitations make it impossible to demonstrate the complete absence of viruses. Assuring that a finished product is free of viral contamination therefore relies not only on virus detection testing but also on viral clearance techniques that inactivate and/or remove viruses (4).

As previously stated, viral contamination can arise from the cell line itself or through adventitious introduction during bioprocessing. Both types of viral contaminant can be addressed by several complementary techniques: testing the product at specific stages of manufacture and processing to identify viral contaminants, examining cell lines and raw materials for endogenous viruses, and rigorously challenging the process to assess its effectiveness in eliminating them.

Occasionally, detection methods lack the sensitivity to detect viral contamination, allowing both adventitious and endogenous viruses to go undetected (5). Furthermore, the continual emergence of novel viral contaminants is unavoidable.

Biopharmaceutical producers and suppliers have implemented several measures to prevent virus contamination. These include the use of extensively studied and well-defined cell lines, the substitution of chemically derived materials for those of animal origin, measures to minimise and control risks in the sourcing of raw materials and excipients, and the adoption of bioprocess techniques that effectively prevent viral contamination (6).

Identification of Viruses

Various techniques can be employed to identify both endogenous and adventitious viruses. Biopharmaceutical manufacturers must carefully select and develop the most suitable virus detection test methods for use throughout the whole manufacturing process, from the original cell lines through raw materials and the bulk drug-substance harvest to the final product. Regulators mandate comprehensive testing at each stage of the biomanufacturing process (7).

When formulating and selecting a viral detection/testing approach, it is crucial to consider potential contamination by both endogenous and adventitiously introduced viruses, in the starting materials as well as the end product. Because no single assay can identify all viruses, virus identification requires multiple orthogonal methods, encompassing both general and specific approaches (7). Test methods fall into three main groups: species-specific assays for identified possible contaminants, nonspecific general methods, and retrovirus assays.

These methods employ distinct techniques to identify viruses. Several assays are available for detecting viral contamination in test cells: some target viral proteins or particles, while others detect viral genomes or other viral indicators (8). Examples of each include:

• testing cells for viral contamination using in vitro and in vivo assays;
• analysing viral proteins by enzyme assays that detect reverse transcriptase (RT) in retroviruses;
• examining viral particles using electron microscopy (EM); and
• detecting viral genomes using polymerase chain reaction (PCR) tests.

Specific and general techniques: The industry-standard methods for detecting adventitious viruses include transmission electron microscopy (TEM) and infectivity testing in animals (in vivo) and cell cultures (in vitro). Quantitative PCR (qPCR) is the most commonly employed species-specific (target-specific) assay. Although nonspecific tests can identify a wide variety of viruses, many may remain undetected owing to the diverse physiological characteristics of viruses and limitations in assay sensitivity. In contrast, species-specific approaches are typically very sensitive but can identify only predetermined targets; moreover, PCR-based assays cannot differentiate between virus that is capable of causing infection and virus that is not. Targeted techniques are therefore frequently employed to aid viral contamination investigations after positive results from broad screening tests (9).
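As a brief illustration of how qPCR yields a quantitative result, the sketch below back-calculates template copy number from a cycle-threshold (Ct) reading using a standard curve of the form Ct = slope x log10(copies) + intercept. The slope and intercept are hypothetical values of the kind obtained from a dilution series, not figures from the cited work.

```python
def copies_from_ct(ct: float, slope: float = -3.32, intercept: float = 37.0) -> float:
    """Back-calculate template copies from a qPCR Ct value.

    Standard curve: Ct = slope * log10(copies) + intercept.
    A slope of about -3.32 corresponds to ~100% amplification efficiency;
    both parameters here are hypothetical calibration values.
    """
    return 10 ** ((ct - intercept) / slope)

# Hypothetical readings from an in-process sample: lower Ct means more template.
for ct in (20.0, 25.0, 30.0):
    print(f"Ct {ct:4.1f} -> ~{copies_from_ct(ct):.2e} copies")
```

Note that the quantity returned is genome copies, not infectious units, which is precisely why a positive qPCR result cannot by itself establish infectivity.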

Retrovirus assays: Regulatory agencies require companies to use electron microscopy (EM) to detect retroviruses in cells, and they require product sponsors to quantify retrovirus-like particles in large-scale cell harvests, such as those derived from rodent cells (e.g., CHO cells) (7). The traditional approaches are constrained by long experiment durations, restricted detection of certain target viruses, and an inability to recognise unfamiliar viruses. Furthermore, they cannot identify viruses that do not cause cytopathic effects (CPEs), the structural alterations of the host cells they have infected. Hence, to overcome these constraints in detecting adventitious viruses, next-generation sequencing (NGS) technology, also referred to as massive parallel sequencing (MPS) or deep sequencing, has recently been applied. NGS is primarily a DNA sequencing technique that can efficiently sequence millions of nucleic acid fragments in a given sample. This powerful tool can sequence a wide variety of nucleic acids and detect a broad spectrum of both novel and unfamiliar viruses. NGS can also sequence over 40 human genomes in a single day, compared with the 13 years it took to sequence the first human genome (10, 11).

Despite being relatively recent, NGS/MPS is gaining recognition from regulatory organisations for its advantages. For example, the ninth edition of the European Pharmacopoeia includes guidance recommending deep sequencing as a substitute for in vivo tests, or as a supplementary or alternative method to in vitro tests, for the detection of adventitious viruses (12).

Elimination of viral particles

No single test can identify all viruses, and every method requires a certain minimum amount of virus to be present before it can be detected. Several techniques measure indicators of infection, so their results lag the time of infection. Guaranteeing the absence of viral contamination in a product therefore relies on verifying that the process can effectively inactivate or eliminate viruses (by conducting viral clearance validation studies) as well as on testing for their presence. This combined approach is the only means of assuring that biopharmaceuticals will be free of viral contamination and safe for human use (13).
When conducting viral-clearance validation studies, it is crucial to identify the viruses that could contaminate a product and enter its production process. Almost every cellular genome contains retrovirus sequences, and certain cell lines frequently employed for biopharmaceutical synthesis, such as CHO and murine cells, have been observed to release retrovirus-like particles. While the retroviruses found in CHO cells have been confirmed to be non-infectious (14), those detected in murine cells have the potential to be infectious (15). Other viruses that replicate in CHO cells include vesivirus, reovirus, mouse minute virus (MMV), and Cache Valley virus (CVV) (16, 17).

Testing the efficacy of clearance methods in eliminating viruses from bioprocessing is crucial, and viruses spanning a diverse range of physicochemical properties should therefore be targeted. The US Food and Drug Administration (FDA) classifies viruses used in clearance studies into three categories: specific model viruses, nonspecific model viruses, and product-/process-relevant viruses (1). Employing nonspecific model viruses with diverse physicochemical characteristics enables a company to assess, in a broad sense, the efficacy of its manufacturing process in inactivating and/or eliminating viruses.

Relevant viruses are those with the potential to contaminate cell substrates or other materials used in a biomanufacturing process; these must be eliminated by a viral inactivation/removal procedure. A specific model virus is one closely related to a relevant virus, belonging to the same family or genus and sharing similar chemical and physical features. Model viruses are employed when a relevant virus is not accessible, for example when it cannot be cultivated in the laboratory at the quantities necessary for experimentation (1).

Typical examples of viruses used to represent a broad spectrum of physicochemical properties include small nonenveloped viruses such as poliovirus or animal parvovirus; large DNA viruses such as herpesvirus; and large enveloped RNA viruses such as murine retrovirus or parainfluenza virus. During viral clearance tests, companies intentionally spike samples of these viruses at various points in a scaled-down manufacturing process, then assess how effectively the chosen steps inactivate or remove them. This identifies the effective methods and allows their efficacy to be quantified. Choosing methods and processes that closely mimic the intended production-scale process is crucial.

Not every stage of a biomanufacturing process requires testing. Validation studies should include only the steps shown to inactivate or remove viruses effectively. A sufficient amount of virus must be added to the sample materials to test the effectiveness of the process, but not so much that the properties of the sample material are changed (18); the spike volume typically used is no more than 10% of the volume of the sample being analysed. Furthermore, it is customary to test a procedure under both standard and atypical operating conditions (such as elevated temperatures, pH levels, and agitation) to confirm its robustness.

To avoid introducing viruses into a production facility, viral clearance studies are typically conducted in specialised virological laboratories using scaled-down versions of the biomanufacturing process. It is important to establish whether the reduction in viral infectivity is achieved through inactivation or removal. Parameters of the scaled-down model (such as pH, protein concentration, and temperature) are calibrated against and compared with those of the full-scale operation. Test results must be quantifiable, with appropriate sensitivity and reproducibility. Conventional techniques used for these studies include quantal endpoint titrations, such as tissue-culture infectious dose (TCID50) assays, and quantitative nucleic-acid amplification methods, such as PCR.
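The TCID50 endpoint mentioned above is commonly estimated with the Spearman-Kärber formula from the fraction of infected wells at each dilution. The following is a minimal sketch; the dilution series and well counts are illustrative numbers, not data from the text:

```python
def tcid50_log10(log10_dilutions, fraction_positive):
    """Spearman-Karber estimate of the log10 TCID50 endpoint dilution.

    log10_dilutions: log10 of each dilution tested (evenly spaced, e.g. -1, -2, ...)
    fraction_positive: fraction of infected wells at each dilution.
    Assumes the series brackets the endpoint (starts at 1.0, ends at 0.0).
    """
    d = log10_dilutions[0] - log10_dilutions[1]   # log10 dilution step (positive)
    # index of the last (most dilute) level at which every well was positive
    k = max(i for i, p in enumerate(fraction_positive) if p == 1.0)
    x0 = log10_dilutions[k]
    s = sum(fraction_positive[k + 1:])            # partial responses beyond x0
    return x0 - d / 2 - d * s                     # log10 of the 50% endpoint dilution

# Example: 10-fold series, all wells positive down to 10^-4, half positive at 10^-5
dils = [-1, -2, -3, -4, -5, -6, -7, -8]
pos = [1.0, 1.0, 1.0, 1.0, 0.5, 0.0, 0.0, 0.0]
endpoint = tcid50_log10(dils, pos)   # -5.0, i.e. 10^5 TCID50 per inoculum volume
```

The titre per inoculum volume is the reciprocal of the endpoint dilution, 10 to the power of minus the returned value.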

Every procedure is prone to error. PCR is widely acknowledged as a highly sensitive technique for detecting viral genomes. However, it also detects "inactivated" viruses, which can produce false-positive results and thereby misrepresent the effectiveness of viral inactivation. As a result, PCR techniques are commonly used to evaluate viral removal rather than inactivation.
When evaluating the suitability of a specific clearance method, it is crucial to take into account various factors, such as the types of test viruses employed, the
extent of log reduction values (LRVs) attained, the kinetics of viral inactivation, the techniques employed for inactivation or removal, and the limit of detection (LoD)
of the assay.
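The LRV of a single step follows from the total infectious virus in the load and in the product pool. A minimal sketch with hypothetical spiking-study numbers (volumes and titres are assumptions for illustration):

```python
import math

def log_reduction_value(load_vol_ml, load_titre, product_vol_ml, product_titre):
    """LRV = log10(total virus in the load / total virus in the product pool).

    Titres in infectious units (e.g. TCID50) per mL; volumes in mL.
    Totals are used so that dilution alone does not inflate the claimed clearance.
    """
    total_in = load_vol_ml * load_titre
    total_out = product_vol_ml * product_titre
    return math.log10(total_in / total_out)

# Hypothetical: 100 mL load spiked to 1e6 TCID50/mL,
# 200 mL product pool measured at 5 TCID50/mL
lrv = log_reduction_value(100, 1e6, 200, 5.0)   # log10(1e8 / 1e3) = 5.0
```

Note that the assay's limit of detection caps the LRV that can be claimed: a product pool with no detectable virus only supports an LRV computed from the detection limit, not an unbounded one.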

Figure 2: Amgen’s (top) and Biogen’s (bottom) monoclonal antibody downstream processing approaches include their chosen viral clearance methods (18); UF/DF
= ultrafiltration/diafiltration, AEX = anion-exchange chromatography, HIC = hydrophobic-interaction chromatography

Viral clearance procedures are typically conducted as a component of downstream processing (Figure 2). Nevertheless, due to the possibility of viral contamination
in the initial stages of production, especially in bioreactors, many biomanufacturers opt to use viral clearance screening and mitigation methods in the early stages,
specifically for cell culture medium and components (17).

Several viral clearance techniques are now employed in biopharmaceutical manufacturing to achieve viral inactivation or elimination. In upstream processing, for instance, high-temperature–short-time (HTST) treatment and UV-C (ultraviolet-C radiation, wavelengths of 200–280 nm) are used. In the former approach, a protein-containing solution is passed continuously through the unit and exposed to elevated temperatures for brief durations, usually around 70–75 °C for 30 seconds. The UV-C technique is highly efficient at inactivating nonenveloped viruses (19). Viral inactivation procedures commonly employed in downstream processing include solvent/detergent treatment and low-pH (acidic) viral inactivation, both of which target enveloped viruses. Additional, less prevalent techniques for inactivating enveloped viruses include microwave heating, irradiation (UV and gamma), pasteurisation, and HTST.



Generally, individual viral clearance methods are evaluated independently, and the total clearance achieved by a complete process is determined by summing the results of all methods (20). Regulators require product sponsors to provide documented evidence of viral clearance, measured as a log reduction value (LRV) of 19.
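Because LRVs are logarithms, aggregating independent steps is simple addition, and the overall figure translates directly into an expected residual virus burden per dose. A sketch with illustrative values (the step LRVs and virus load are assumptions, not from any cited study):

```python
def total_lrv(step_lrvs):
    """Overall clearance claimed for a process: the sum of independent step LRVs."""
    return sum(step_lrvs)

def residual_virus_per_dose(virus_load_per_dose, overall_lrv):
    """Expected infectious virus particles remaining per dose after clearance."""
    return virus_load_per_dose / 10 ** overall_lrv

# Hypothetical three-step process: low-pH hold, nanofiltration, polishing step
steps = [4.0, 5.0, 6.0]
overall = total_lrv(steps)                         # 15.0 logs overall
residual = residual_virus_per_dose(1e10, overall)  # 1e10 particles in -> 1e-5 per dose
```

Under these assumed numbers, the expected residual burden is on the order of one particle per 100,000 doses, which is how overall LRVs connect to the contamination-probability targets discussed later in the text.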

In pH-based viral inactivation, an acid such as phosphoric acid is added to a protein solution, which is then held at a low pH for a specified duration to ensure effectiveness. A pH of 3.9 or lower is considered robust (20). This method can inactivate enveloped viruses. The pH of the protein solution is subsequently readjusted to a physiological level, often pH 7 or above. It is crucial to ensure that the acidic conditions used to attack viruses do not compromise the stability of the protein product itself. Low-pH viral inactivation is typically carried out immediately after protein A affinity capture chromatography, where low-pH buffers are used to elute the column (21).
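The hold duration for such a step is commonly derived from the inactivation kinetics mentioned earlier. If inactivation is approximately first-order (log-linear), a D-value, the time for one log10 reduction at the hold condition, fixes the hold time for a target LRV. A sketch under that assumption; the D-value and target are hypothetical:

```python
def hold_time_for_lrv(target_lrv, d_value_min):
    """Minutes at the hold condition needed for the target log10 reduction,
    assuming first-order (log-linear) inactivation kinetics."""
    return target_lrv * d_value_min

def surviving_fraction(t_min, d_value_min):
    """Fraction of infectious virus remaining after t_min at the hold condition."""
    return 10 ** (-t_min / d_value_min)

# Hypothetical: D-value of 10 min at the low-pH hold for a model retrovirus
t = hold_time_for_lrv(5.0, 10.0)       # 50 min hold for a 5-log claim
frac = surviving_fraction(50.0, 10.0)  # 1e-5 of the input infectivity remains
```

Real inactivation curves often tail off rather than staying log-linear, which is one reason the kinetics themselves, not just the endpoint, are assessed during validation.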

In solvent/detergent viral inactivation, a protein solution is incubated with a detergent and an organic solvent, such as tri(n-butyl) phosphate, for a specified duration. Once viral inactivation is complete, the solvent and detergent must be removed, usually with a sorbent such as a polymer (19).

Techniques for Eliminating Viruses

Size-exclusion techniques, such as chromatography and viral filtration (nanofiltration), are effective at removing nonenveloped viruses, which are typically more difficult to remove than enveloped viruses because of their chemical resistance (6). Although column chromatography is not specifically designed for virus elimination, it can remove both enveloped and nonenveloped viruses (22). The procedure is governed by many operating parameters (such as temperature, flow rates, buffers, and wash volumes) that can affect the degree of viral reduction attained, so filtration is generally preferred over chromatography (17).

Nanofiltration is typically the preferred method for removing both small and large enveloped viruses, as well as nonenveloped viruses. Membrane chromatography is gaining popularity through the use of virus-binding ligands combined with ion-exchange adsorbers; it can also operate at much higher flow rates than conventional column chromatography, which helps accelerate bioprocessing (19).
Regulatory organisations recommend integrating multiple orthogonal techniques to ensure viral elimination: independent approaches with distinct clearance mechanisms (13). An effective viral clearance strategy can reduce the likelihood of a biopharmaceutical being contaminated by viruses to less than one in a million (17).
References
1 ICH Q5A: Viral Safety Evaluation of Biotechnology Products Derived from Cell Lines of Human or Animal Origin. US Fed. Reg. 63(185) 1998: 51074;
https://database.ich.org/sites/default/files/Q5A_R1_Guideline.pdf.
2 Merten OW. Virus Contaminations of Cell Cultures: A Biotechnological View. Cytotechnol. 39 (2) 2002: 91–116; doi:10.1023/A:1022969101804.
3 WHO Technical Report Series #978, Annex 3. Recommendations for the Evaluation of Animal Cell Cultures as Substrates for the Manufacture of Biological
Medicinal Products and for the Characterization of Cell Banks. World Health Organization: Geneva, Switzerland, 2010;
www.who.int/biologicals/vaccines/TRS_978_Annex_3.pdf?ua=1.
4 Klug B, Robertson JS, Condit RC. Adventitious Agents and Live Viral Vectored Vaccines: Considerations for Archiving Samples of Biological Materials for
Retrospective Analysis. Vaccine 34 (51) 2016: 6617–6625; doi:10.1016/j.vaccine.2016.02.015.
5 Aranha H. Virus Safety of Biopharmaceuticals: Absence of Evidence Is Not Evidence of Absence. Contract Pharma 14 November 2011;
www.contractpharma.com/issues/2011-11/view_features/virus-safety-of-biopharmaceuticals.
6 Challener CA. Viral Clearance Challenges in Bioprocessing. BioPharm Int. 27(11) 2014: www.biopharminternational.com/viral-clearance-challenges-
bioprocessing.
7 Adair R. Control Viral Contaminants with Effective Testing. BioPharm Int. 30(10): 18–27; www.biopharminternational.com/control-viral-contaminants-effective-
testing.

